
Review Meta Analysis


Review Meta

Review Meta is one of the leading product analysis websites for Amazon products.

To analyze the reviews, Review Meta combines multiple word-level features of the review text with features of the reviewer themselves to create an adjusted score for the product.


This is a list of all the unique features that Review Meta analyzes. It includes the review text itself, but a big portion concerns the reviewer: what the user has done on a specific day, their review count, and so on. The user can also adjust the weighting of each of these categories, which gives them more control over what they consider more or less important.
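
To make the weighting idea concrete, here is a minimal Python sketch of how per-category scores could be combined into one adjusted score. RM does not publish its actual formula, so the category names, scores, and weights below are hypothetical.

```python
# Minimal sketch of a weighted adjusted rating. The categories, scores and
# weights are hypothetical; ReviewMeta does not publish its formula.

def adjusted_rating(category_scores, weights):
    """Combine per-category ratings (1-5) into one weighted score."""
    total_weight = sum(weights[c] for c in category_scores)
    return sum(category_scores[c] * weights[c] for c in category_scores) / total_weight

scores = {"phrase_repetition": 4.1, "reviewer_history": 3.6, "rating_trend": 4.4}
weights = {"phrase_repetition": 1.0, "reviewer_history": 2.0, "rating_trend": 0.5}

print(round(adjusted_rating(scores, weights), 2))  # 3.86
```

Doubling the weight of reviewer_history, as above, pulls the adjusted score toward that category's value, which is what the user-facing sliders would amount to.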

Other notable items on the Review Meta review page are a Report Card feature that lists every check and shows whether the product passed or failed it, and the Most Trusted and Least Trusted reviews, with the option to show up to 10 of each. Each review is shown with the reviewer's information and rating, not just the review text, along with the main test that explains why the review was ranked among the most or least trusted.

Analysis of Review Features

NOTE: I omitted some features that are Amazon-specific.

In this part I will analyze each feature RM uses to develop its score and see which of these we can implement with our current access to APIs and which we could develop later on. I will give each feature a rating from 1 to 10, where 1 is "won't implement" and 10 is "should implement", with comments on the difficulty and problems we may encounter.

Each feature also has a link to the relevant blog post explaining it in depth.

For this feature RM has an excellent blog post explaining in general how it works. In a nutshell, they aggregate all the reviews that share common phrases of three words or more, keeping only phrases that are in some way substantial, to avoid meaningless ones like "it was the".

After collecting these reviews, they check the phrase itself for unnatural language, then compare the ratings of the reviews sharing the phrase against reviews that do not, looking for rating discrepancies. They also check that not too many reviews repeat the same phrase.

This style of comparison is most useful for a business with a lot of reviews; the more reviews there are, the more signal it provides. On a business with fewer than 10 reviews it will not prove useful at all.

7/10 - This feature would be very useful to implement, but we have to make sure to clearly define how to develop it and how we will use it.
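
As a starting point, here is a minimal sketch of the phrase-aggregation step in Python. It assumes reviews are plain dicts with text and rating fields, and the stopword filter is a crude stand-in for whatever "substantial phrase" test RM actually uses.

```python
from collections import defaultdict
from statistics import mean

# Crude stand-in for RM's "substantial phrase" filter.
STOPWORDS = {"it", "was", "the", "a", "an", "and", "is", "this", "of", "to"}

def trigrams(text):
    words = text.lower().split()
    return zip(words, words[1:], words[2:])

def substantial(phrase):
    # Keep a phrase only if it contains at least one non-stopword token.
    return any(w not in STOPWORDS for w in phrase)

def phrase_rating_gaps(reviews):
    """Group reviews by shared 3-word phrases and compare their average
    rating against the reviews that do not share that phrase."""
    by_phrase = defaultdict(set)
    for i, r in enumerate(reviews):
        for p in set(trigrams(r["text"])):
            if substantial(p):
                by_phrase[p].add(i)
    gaps = {}
    for phrase, idxs in by_phrase.items():
        if len(idxs) < 2:
            continue  # phrase must be shared by at least two reviews
        shared = mean(reviews[i]["rating"] for i in idxs)
        rest = [r["rating"] for i, r in enumerate(reviews) if i not in idxs]
        if rest:
            gaps[" ".join(phrase)] = shared - mean(rest)
    return gaps

reviews = [
    {"text": "best coffee maker ever bought", "rating": 5},
    {"text": "the best coffee maker ever", "rating": 5},
    {"text": "broke after two weeks", "rating": 1},
]
print(phrase_rating_gaps(reviews))  # shared phrases rate 4 stars above the rest
```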

This check looks at each individual reviewer and counts how many products they have reviewed, and how many of that user's previous reviews have been deleted.

5/10 - This would be a useful feature to implement. It is not possible with the Google Places API, but it is possible with the Yelp Dataset, since this information is provided there.
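
The Yelp dataset ships user profiles and reviews as one JSON object per line, and each user record carries a review_count field. Below is a minimal sketch of using that; treating the gap between review_count and the reviews actually present as a proxy for removed reviews is my assumption, not something the dataset documents.

```python
import json
from collections import Counter

def reviewer_activity(review_path, user_path):
    """For each user, compare the profile's review_count with the number of
    their reviews actually present in the dataset. A large gap is a rough
    proxy for filtered/removed reviews (an assumption, not a deletion log)."""
    present = Counter()
    with open(review_path, encoding="utf-8") as f:
        for line in f:  # one JSON object per line
            present[json.loads(line)["user_id"]] += 1
    gaps = {}
    with open(user_path, encoding="utf-8") as f:
        for line in f:
            user = json.loads(line)
            gaps[user["user_id"]] = user["review_count"] - present[user["user_id"]]
    return gaps

# Usage (file names as in the Yelp Open Dataset):
# gaps = reviewer_activity("yelp_academic_dataset_review.json",
#                          "yelp_academic_dataset_user.json")
```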

This feature essentially groups all the reviewers based on the number of reviews they have posted, and then compares these group shares against other products to see if there is a discrepancy for this product.

1/10 - I do not see this feature being easy to implement, as it relies heavily on user data plus information about other, comparable products. It could be an option further down the line, but in my opinion it can be left out.
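
Should we ever revisit it, the grouping step itself is cheap; the hard part is getting a fair baseline from comparable products. A minimal sketch, where the bucket edges and the L1 distance as a discrepancy measure are my own choices:

```python
def participation_shares(review_counts, edges=(1, 5, 20, 100)):
    """Share of reviewers falling in each review-count bucket
    (1-4, 5-19, 20-99, 100+ with the default edges)."""
    buckets = [0] * len(edges)
    for count in review_counts:
        buckets[max(i for i, e in enumerate(edges) if count >= e)] += 1
    total = sum(buckets) or 1
    return [b / total for b in buckets]

this_business = participation_shares([1, 1, 2, 3, 150, 200])
category_baseline = participation_shares([1, 4, 7, 12, 30, 90, 250])
# L1 distance between the two distributions as a simple discrepancy score.
print(sum(abs(a - b) for a, b in zip(this_business, category_baseline)))
```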

This feature uses historical data to track the average rating of the product against the number of reviews over time, and checks whether there are any significant spikes within a specific time frame.

4/10 - This could be a useful feature to implement, but we would first need to do some preprocessing of our data to make it work.
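
A minimal sketch of the time-frame check, assuming reviews come as (date, rating) pairs with dates formatted YYYY-MM-DD as in the Yelp dataset; the monthly grouping and the 0.8-star threshold are arbitrary choices of mine.

```python
from collections import defaultdict
from statistics import mean

def monthly_spikes(reviews, threshold=0.8):
    """Average ratings per month and flag months that deviate from the
    overall mean by more than `threshold` stars."""
    by_month = defaultdict(list)
    for date, rating in reviews:
        by_month[date[:7]].append(rating)  # "YYYY-MM"
    overall = mean(rating for _, rating in reviews)
    return {month: mean(ratings) for month, ratings in by_month.items()
            if abs(mean(ratings) - overall) > threshold}

reviews = [("2018-01-03", 4), ("2018-01-10", 4), ("2018-01-21", 4),
           ("2018-02-05", 4), ("2018-02-14", 4), ("2018-02-22", 4),
           ("2018-03-02", 1), ("2018-03-04", 1)]
print(monthly_spikes(reviews))  # only 2018-03 stands out
```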

This feature again groups reviews, this time based on the word count of each review, to generate a line chart, and compares the percentage in each group against other products in this product's category to find any discrepancies.

6/10 - This is a very useful feature that we can implement using the Yelp Dataset, but not with the Google API.
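
The binning itself is straightforward; a minimal sketch, with 50-word bins as an arbitrary choice:

```python
from collections import Counter

def word_count_distribution(texts, bin_size=50):
    """Share of reviews per word-count bin (0-49, 50-99, ...); one such
    distribution per product can be plotted as a line and compared
    against the rest of the category."""
    bins = Counter(len(text.split()) // bin_size * bin_size for text in texts)
    total = sum(bins.values())
    return {low: n / total for low, n in sorted(bins.items())}

print(word_count_distribution(["short review", "a " * 80 + "long one"]))
# {0: 0.5, 50: 0.5}
```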

This analyzes how many users overlap across product reviews; a statistical anomaly here can be used to flag unnatural manipulation.

7/10 - Interesting analysis that can be implemented on the Yelp Dataset but not the Google API.
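
One simple way to measure the overlap is the Jaccard index between two businesses' reviewer sets; a sketch (the anomaly threshold would still have to be calibrated against the category):

```python
def reviewer_overlap(reviewers_a, reviewers_b):
    """Jaccard overlap between the reviewer sets of two businesses;
    an unusually high value across many pairs can flag coordination."""
    a, b = set(reviewers_a), set(reviewers_b)
    return len(a & b) / len(a | b) if a | b else 0.0

print(reviewer_overlap(["u1", "u2", "u3"], ["u2", "u3", "u4"]))  # 0.5
```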

This analysis checks how easily a reviewer is pleased with a product: if a reviewer's average rating is 3.3 and they give a product a 5, it weighs more than a 5 from a reviewer whose average is 4.5.

8/10 - This is a very interesting feature that relies on a more rigorous statistical analysis of the data.
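
At its simplest this is just the deviation of a rating from the reviewer's own average (the Yelp user records include an average_stars field we could use for this); a sketch:

```python
def deviation_weight(rating, reviewer_average):
    """Weight a rating by how far it sits above or below the reviewer's
    own average: praise from a harsh reviewer carries more signal."""
    return rating - reviewer_average

print(round(deviation_weight(5, 3.3), 2))  # 1.7 -> strong positive signal
print(round(deviation_weight(5, 4.5), 2))  # 0.5 -> weaker signal
```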

Relevant Charts Used

Review Meta utilizes a number of charts to visualize their data.

The charts include:

- Pie charts - to show percentages of certain reviews
- Bar charts - to show the adjusted rating
- Histogram with line chart - to show the rating trend against the number of reviews
- Stacked line chart - for word count comparison and reviewer participation
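
As an example of the histogram-with-line-chart combination, here is a minimal matplotlib sketch with made-up data (we are not tied to matplotlib; it is just a convenient stand-in):

```python
import matplotlib.pyplot as plt

months = ["2018-01", "2018-02", "2018-03", "2018-04"]
counts = [12, 30, 9, 15]         # reviews per month (made up)
averages = [4.1, 4.8, 3.2, 4.0]  # average rating per month (made up)

fig, ax = plt.subplots()
ax.bar(months, counts, color="lightgray", label="review count")
ax2 = ax.twinx()  # second y-axis so the rating line has its own scale
ax2.plot(months, averages, marker="o", color="tab:blue", label="average rating")
ax.set_ylabel("reviews")
ax2.set_ylabel("average rating")
ax.set_title("Rating trend vs. number of reviews")
plt.show()
```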

Conclusion

Overall, the Review Meta analysis showed a lot of good uses of user data, and it does not rely heavily on the review text itself; therein lie the best features we can extract for our further analysis of Yelp and Google businesses.
