The Problem With Online Ratings

Studies show that online ratings are one of the most trusted sources of consumer confidence in e-commerce decisions. But recent research suggests that they are systematically biased and easily manipulated.

A few months ago, I stopped in for a quick bite to eat at Dojo, a restaurant in New York City’s Greenwich Village. I had an idea of what I thought of the place. Of course I did — I ate there and experienced it for myself. The food was okay. The service was okay. On average, it was average.

So I went to rate the restaurant on Yelp with a strong idea of the star rating I would give it. I logged in, navigated to the page and clicked the button to write the review. I saw that, immediately to the right of where I would “click to rate,” a Yelp user named Shar H. was waxing poetic about Dojo’s “fresh and amazing, sweet and tart ginger dressing” — right under her bright red five-star rating.

I couldn’t help but be moved. I had thought the place deserved a three, but Shar had a point: As she put it, “the prices here are amazing!”

Her review moved me. And I gave the place a four.

As it turns out, my behavior is not uncommon. In fact, this type of social influence is dramatically biasing online ratings — one of the most trusted sources of consumer confidence in e-commerce decisions.

An Example of Social Influence

I had planned to give Dojo Restaurant a three-star review until I saw other raters’ glowing reviews, such as this one from Shar H. I ended up giving the restaurant four stars instead of three in my review.

The Problem: Our Herd Instincts

In the digital age, we are inundated by other people’s opinions. We browse books on Amazon with awareness of how other customers liked (or disliked) a particular tome. On Expedia, we compare hotels based on user ratings. On YouTube, we can check out a video’s thumbs-up/thumbs-down score to help determine if it’s worth our time. We may even make serious decisions about medical professionals based in part on the feedback of prior patients.

For the most part, we have faith in these ratings and view them as trustworthy. A 2012 Nielsen report surveying more than 28,000 Internet users in 56 countries found that online consumer reviews are the second most-trusted source of brand information (after recommendations from friends and family).
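The positive-herding effect described above can be illustrated with a toy Monte Carlo sketch. This is not the design of the randomized experiment cited in the references; every number here (the herding probability, the score model, the "bump up one star after seeing a 4+ review" rule) is an invented assumption chosen only to make the mechanism visible.

```python
import random

def simulate_ratings(n_raters, herd_prob, true_mean=3.0, seed=0):
    """Toy model of positive herding in star ratings (illustrative only).

    Each rater forms a private 1-5 star score drawn around `true_mean`.
    With probability `herd_prob`, a rater who sees that the previous
    review was 4 stars or better bumps their own score up by one star,
    the way a glowing prior review can nudge a lukewarm rater upward.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_raters):
        # Private opinion: a rounded Gaussian, clipped to the 1-5 scale.
        private = min(5, max(1, round(rng.gauss(true_mean, 1.0))))
        # Social influence: a visible positive review pulls some raters up.
        if scores and scores[-1] >= 4 and rng.random() < herd_prob:
            private = min(5, private + 1)
        scores.append(private)
    return sum(scores) / len(scores)

independent = simulate_ratings(5000, herd_prob=0.0)  # raters ignore prior reviews
herded = simulate_ratings(5000, herd_prob=0.5)       # half of raters herd upward
```

With no herding, the average rating hovers near the "true" quality of 3.0; with herding, each inflated review makes the next inflated review more likely, so the average drifts upward even though the underlying quality never changed — a qualitative analogue of the bias, not a reproduction of any measured effect size.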

References

1. “Nielsen: Global Consumers’ Trust in ‘Earned’ Advertising Grows in Importance,” April 10, 2012, www.nielsen.com.

2. S. Bikhchandani, I. Welch and D.A. Hirshleifer, “A Theory of Fads, Fashion, Custom and Cultural Change as Informational Cascades,” Journal of Political Economy 100, no. 5 (October 1992): 992-1026; and M.J. Salganik, P.S. Dodds and D.J. Watts, “Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market,” Science 311, no. 5762 (February 10, 2006): 854-856.

3. S. Gordon, “Call in the Nerds — Finance Is No Place for Extroverts,” Financial Times, April 24, 2013.

4. S. Aral and D. Walker, “Identifying Influential and Susceptible Members of Social Networks,” Science 337, no. 6092 (July 20, 2012): 337-341; and S. Aral and D. Walker, “Creating Social Contagion Through Viral Product Design: A Randomized Trial of Peer Influence in Networks,” Management Science 57, no. 9 (September 2011): 1623-1639.

5. L. Muchnik, S. Aral and S.J. Taylor, “Social Influence Bias: A Randomized Experiment,” Science 341, no. 6146 (August 9, 2013): 647-651.

6. See M. Luca and G. Zervas, “Fake It Till You Make It: Reputation, Competition and Yelp Review Fraud,” Harvard Business School NOM Unit working paper no. 14-006, Boston, Massachusetts, November 8, 2013; Y. Liu, “Word-of-Mouth for Movies: Its Dynamics and Impact on Box Office Revenue,” Journal of Marketing 70, no. 3 (2006): 74-89; and J.A. Chevalier and D. Mayzlin, “The Effect of Word of Mouth on Sales: Online Book Reviews,” Journal of Marketing Research 43, no. 3 (August 2006): 345-354.

7. N. Hu, J. Zhang and P.A. Pavlou, “Overcoming the J-Shaped Distribution of Product Reviews,” Communications of the ACM 52, no. 10 (October 2009): 144-147.

8. Special thanks to Georgios Zervas of Boston University for thoughtful discussions about this particular insight.

9. D. Mayzlin, Y. Dover and J. Chevalier, “Promotional Reviews: An Empirical Investigation of Online Review Manipulation,” American Economic Review, in press.

10. Splattypus, “Why Are Comment Scores Hidden?,” June 2013, www.reddit.com; see also Deimorz, “Moderators: New Subreddit Feature — Comment Scores May Be Hidden for a Defined Time Period After Posting,” May 2013, www.reddit.com.

i. One important exception is the seminal work of Salganik, Dodds and Watts, who conducted a large-scale lab experiment in an “artificial cultural market.” See Salganik et al., “Experimental Study of Inequality and Unpredictability.” Our work takes this research one step further by examining herding effects on a live website “in the wild” and by examining both negative and positive herding.

ii. Muchnik et al., “Social Influence Bias.”

3 Comments On: The Problem With Online Ratings

  • rob schmidt | December 21, 2013

    Great information! The one part of the article I find interesting, but different from my experience, is the comment that owners should take positive reviews with a grain of salt because of the herd mentality. While the explosion of smartphone penetration is changing the dynamics of when reviews are left, I expect more negative reviews than positive ones, because angry feelings are more motivating than feeling good about an experience, and not all reviews happen immediately after the business interaction. Am I wrong?

  • jklondon | December 23, 2013

    Nice article. I can see the J curve in the reviews of my app on Google Play.

  • Hurriyet ilan Servisi | January 11, 2014

    I think this subject is very important.

    Thanks.
