When shopping online, a product's ratings and reviews help us choose. But according to a new study by Stanford scientists, we don't examine those figures closely enough to grasp their real significance.
The study suggests that most online consumers fail at a simple statistical task when viewing online ratings and reviews, and as a result tend to choose inferior products.
Most of the time, online consumers engage in social learning, basing their decisions on the choices of others. For example, you are more likely to choose a product that sits at the top of a best-seller list, or an app that has been downloaded millions of times.
In social learning, others' feedback comes through mechanisms like online star ratings. But how people interpret, or fail to interpret, those ratings can affect their decision-making for the worse.
The scientists asked 138 adults in the study to choose a phone case to purchase. Each case was accompanied by its average star rating and the number of reviews. The star ratings varied minimally, but one of the cases always had 125 more reviews than the other.
Across two experiments, the scientists found that participants preferred the case with more reviews. But because of the way the trial was set up, that case was the inferior product.
Derek Powell, a postdoctoral research fellow, said, “Think about it this way. Twenty-five people review a product and award an average 2.9 rating. While the rating is below average, there’s a possibility that with so few reviews the product may not be as poor as indicated.”
“Now imagine 150 consumers give that same product a 2.9 rating. That’s six times as many people rating the product below average. That should be a stronger signal of the product’s poor quality.”
Participants took the high number of reviews as a signal of quality, rather than as an indicator of how accurately the review score reflects the true quality of the product. Instead of performing a rather simple statistical analysis, consumers take the number of reviews at face value.
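Powell's point can be sketched numerically. As a rough illustration (assuming individual star ratings scatter with a standard deviation of about 1.5, a made-up figure not taken from the study), a simple confidence interval shows why the same 2.9 average is a stronger signal when it rests on 150 reviews rather than 25:

```python
import math

def rating_ci(mean_rating, n_reviews, sd=1.5, z=1.96):
    """Approximate 95% confidence interval for a product's true
    average rating, given its observed mean and review count.
    sd=1.5 is an assumed spread of individual star ratings."""
    sem = sd / math.sqrt(n_reviews)          # standard error of the mean
    return (mean_rating - z * sem, mean_rating + z * sem)

# 25 reviews: a wide interval; the true quality is quite uncertain
print(rating_ci(2.9, 25))
# 150 reviews: a much narrower interval; 2.9 is a firmer estimate
print(rating_ci(2.9, 150))
```

The interval shrinks in proportion to the square root of the review count, which is the "rather simple statistical analysis" participants skipped.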
Powell said, “What they’re doing is simply weighing cues. People seem to have this belief that popularity is good and are willing to use that as an important cue when making decisions.”
The scientists also examined 15 million reviews of more than 350,000 actual products on Amazon.com. They found no relationship between a product’s number of reviews and its rating.
He claimed, “It doesn’t necessarily mean that better things don’t become more popular. But as a consumer, when you’re looking at this data point (number of reviews), it’s not telling you anything.”
“Overcoming this bias is difficult, because consumers find comfort in popularity.”
“There are lots of contexts where following the herd is the rational thing to do. If there isn’t enough information available, that can be a smart thing to do. But what we’re arguing is that you have more information than just what people did; you also have what happened – did they like it, were they happy or unhappy with their purchase.”
According to Powell, online consumers should first check whether a rating is above or below average, and then weigh that rating against the number of reviews. Doing so will give them better-founded confidence about product quality.
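One way to apply that advice mechanically is a simple shrinkage adjustment: pull each observed rating toward a neutral midpoint, with the pull weakening as reviews accumulate. This is a hypothetical heuristic in the spirit of the advice, not a method from the study, and the prior values below are illustrative assumptions:

```python
def adjusted_rating(mean_rating, n_reviews, prior_mean=3.0, prior_weight=25):
    """Shrink an observed average rating toward a neutral prior mean.
    With few reviews the adjusted score stays near prior_mean; with
    many reviews it converges to the observed average. prior_mean and
    prior_weight are illustrative choices, not values from the study."""
    total = mean_rating * n_reviews + prior_mean * prior_weight
    return total / (n_reviews + prior_weight)

# A 2.9 average from 150 reviews is stronger evidence of a
# below-average product than the same 2.9 from 25 reviews:
print(adjusted_rating(2.9, 25))   # pulled noticeably toward 3.0
print(adjusted_rating(2.9, 150))  # stays close to 2.9
```

The adjustment encodes exactly the two steps Powell describes: compare the rating to the average, then let the number of reviews decide how seriously to take the difference.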