And for most readers, the nuance between a two-star and a three-star rating isn’t even what matters; only that it’s not a perfect score. It’s difficult to pinpoint when and where the trend of review inflation started, though it’s most noticeable now in crowd-sourced business reviews like those on Yelp, Uber, Amazon, or Airbnb. In one study of 1,000 online shops cited by The Wall Street Journal in 2017, for example, 4.3 stars was the “average” internet product rating, not the 2.5 stars you might expect. “Yelp says 46 percent of the reviews we give local businesses are five stars,” the Journal added, noting also that “Uber drivers can get the boot for relatively minor ratings dips … [s]o it feels socially awkward to give less than five stars, even if your driver’s car kinda smells.” Scaled rating systems have functionally become a way to warn others away from a purchase rather than to honestly evaluate it.
Art and entertainment criticism has likewise become a blunt instrument for steering people away from things. Part of that is because even the best-intentioned professional reviewers now have their writing filtered through aggregators like Rotten Tomatoes and Metacritic, which spit out scores to be displayed on rental websites alongside the runtime and the MPAA rating. Even book reviewers, who historically haven’t used star ratings to the same extent as film and TV critics, now get aggregated as “rave,” “positive,” “mixed,” and “pan” on LitHub’s Bookmarks website. The result is a haphazard system for gauging whether something is worth your time. “I feel like [Rotten Tomatoes and Metacritic] have created a sense that there’s an answer to whether a movie is good or bad when really that’s a very personal question,” Vox’s Emily VanDerWerff told FiveThirtyEight, adding: “Because it looks like math, we have it in our head that it’s somehow objectively true, but in reality, it’s all based on subjective experience.”