The Reputation Economy

Navigating by the Stars: Current Challenges for Ensuring Trust in the Sharing Economy

Mareike Möhlmann and Timm Teubner


Trust, Ratings, Reputation, Fake Reviews


New rules for the trust game
Our parents used to urge us not to get into strangers’ cars, not to enter strangers’ homes, and not to meet people from the Internet. Nevertheless, today many people share their cars with strangers (BlaBlaCar), grant access to their homes (Airbnb, Helpling), and use platforms to connect with others in on- and offline environments (TaskRabbit, Facebook). Online and mobile technology has fueled the rise of what is often referred to as the sharing or platform economy – a landscape of digital businesses that enable resource exchange among multiple actors. To operate successfully, platforms build on network effects, a critical mass of participants, balanced value exchange, and – last but not least – trust among the key players. In terms of trust, the triumph of sharing economy platforms has challenged conventional wisdom. Why is that? One reason is that platforms have come up with new tools, mechanisms, and design patterns to build and curate trust. Prominent among these are star ratings and text reviews – which forge a bridge between the exchange principles of our early ancestors and today’s Internet users (see Box 1).



Navigation by the stars is tricky
Reputation systems were pioneered by online platforms such as eBay, which implemented star-rating schemes to establish trust between anonymous buyers and sellers. Today, virtually all e-commerce and sharing-economy platforms use similar systems. However, despite their promise, reputation systems are by no means a silver bullet, and “navigating by the stars” can be fairly tricky. Platforms and users face several challenges in keeping reputation systems credible, and different approaches to meeting those challenges have emerged.

  • Skewed ratings with little variance
    One common observation is that average ratings tend to be very positive. In fact, being awarded the best possible rating score – typically 5 out of 5 stars – is the norm rather than the exception. Skewed ratings and low rating variance, however, make it difficult for users to differentiate good products and services from bad. Skewness results from several effects, including social desirability, fear of retaliatory feedback and publicity, and survivorship bias – meaning that businesses with low ratings are more likely to disappear from the market. To tackle the issue of retaliation, most platforms use so-called simultaneous review schemes, publishing ratings only once both parties have submitted theirs. Furthermore, platforms may offer individuals the opportunity to leave text reviews as a complement to numeric ratings. While numeric ratings represent a “summary” of others’ experiences, text reviews allow users to discuss specifics of a product, service, or provider in more detail. For example, Airbnb listings tend to show high variance concerning aspects such as the location of an accommodation, the level of cleanliness, or the quality of amenities. Users may use text reviews to share information very specific to a listing, such as loud neighbors or street noise.


  • Fake reviews, detection, and prevention
    In most open review ecosystems, such as Amazon, Google, Tripadvisor, or Jameda, anyone can leave a review for products, places, hotels, medical doctors, or apps – even if the product or service has never actually been purchased or used. Given the economic power of reviews in making and breaking businesses, this represents a razor-sharp, double-edged, and largely unrestricted sword. Buying reviews for one’s own company can have tremendous benefits; it is thus not surprising that a vibrant secondary industry around commissioned and fake reviews has emerged. Especially in early phases, when users or sellers are still dark horses, boosting their reputation by buying favorable ratings and reviews appears tempting. Negative reviews for one’s competitors can also be ordered. While this may merely be a nuisance for some businesses, it may financially ruin others. Although many platforms screen comments before releasing them publicly, the lack of control is widely recognized as a major drawback in the sharing economy. Naturally, the issue of fake reviews is much less of a concern for platforms like Airbnb or Uber, where the possibility to review is tied to actual transactions. Today, a growing number of platforms implement algorithms to automatically identify, flag, and delete suspicious reviews. Third-party services, such as ReviewMeta.com, also attempt to de-bias inflated and polluted product reviews.
  • Cold start and reputation transfer across platforms
    Another challenge is the absence of reviews when starting anew on a platform. Even after the first few ratings have been collected, average rating scores carry little credibility without a sufficiently large number. For example, a user who has received just one 5-star rating will typically be trusted less than a user who has received, say, twenty 5-star ratings and three 4-star ratings. Very different approaches exist to tackle this. Apart from the problematic practice of buying reviews, platforms also consider incentivizing users to provide reviews, for instance by offering coupons or discounts, or by less tangible means such as gamification or repeated (and annoying) email notifications. Another potential way to address the cold-start challenge or “newbie dilemma” is reputation portability – the display of ratings that originated in another context. For example, new hosts on Airbnb may refer to their history as reputable and trustworthy persons on BlaBlaCar or other platforms. Early studies show that imported star ratings do in fact have trust-promoting effects across platform boundaries. One particular finding is that the transfer of ratings works very well for “matching” platforms and that, somewhat surprisingly, ratings from quite different contexts can also be effective and beneficial. Despite this obvious potential, practical applications have not yet reached widespread adoption.
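The intuition that a single 5-star rating deserves less trust than twenty of them can be made concrete with a Bayesian-style shrunk average, a common remedy for sparse ratings. The function name, prior mean, and prior weight below are illustrative assumptions, not values used by any particular platform – a minimal sketch, not a platform’s actual scoring method.

```python
def credibility_adjusted_rating(ratings, prior_mean=4.0, prior_weight=5):
    """Shrink the observed average toward a prior: with few ratings the
    prior dominates; with many ratings, the observed average takes over.
    prior_mean and prior_weight are illustrative assumptions."""
    n = len(ratings)
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

newbie = credibility_adjusted_rating([5])                  # one 5-star rating
veteran = credibility_adjusted_rating([5] * 20 + [4] * 3)  # twenty 5s, three 4s

# The newbie's raw average (5.0) beats the veteran's (~4.87), yet the
# adjusted score ranks the veteran higher, matching the intuition above.
```

The design choice is simply to treat every user as if they started with a handful of “virtual” average ratings; real platforms may use more elaborate models, but the shrinkage principle is the same.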
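The simultaneous review scheme mentioned under the first challenge can be sketched in a few lines: reviews are held in escrow and published only once both parties of a transaction have submitted theirs, which removes the incentive to retaliate. The class and method names are hypothetical, and real systems typically also release reviews after a deadline, which is omitted here for brevity.

```python
class SimultaneousReviewScheme:
    """Hold each transaction's reviews back until both parties submit.
    Illustrative sketch only; not any platform's actual API."""

    def __init__(self):
        self._pending = {}  # transaction_id -> {party: review text}

    def submit(self, transaction_id, party, review):
        reviews = self._pending.setdefault(transaction_id, {})
        reviews[party] = review
        if len(reviews) == 2:                          # both sides committed
            return self._pending.pop(transaction_id)   # publish both at once
        return None                                    # still hidden

scheme = SimultaneousReviewScheme()
first = scheme.submit("tx-1", "guest", "Great host!")  # hidden for now
both = scheme.submit("tx-1", "host", "Tidy guest.")    # now both published
```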

Navigation by the stars – don’t go on autopilot (yet)
It is staggering to see how much power star ratings and text reviews have gained in many domains of (electronic) commerce and on sharing economy platforms. Despite their promises to facilitate trusted transactions between strangers, many challenges remain. The responsibility of addressing these challenges resides with four main groups of actors: platform operators, service providers, consumers, and regulators.


To maintain legitimacy, platform operators need to design reputation systems with minimal negative side effects and make crucial decisions about the level of control they seek to enact. By taking a lax approach to algorithmically supported fraud detection, for example, platforms may not sufficiently mitigate the risk of fake reviews. Yet by implementing all-too-rigorous policies, they may end up blocking non-fraudulent information – thereby suppressing the articulation of relevant experience.

Service providers and consumers can contribute to the thriving of online reputation systems by remaining honest and active. Truthful reviews, even if negative, help other platform participants make informed decisions when engaging in sharing economy transactions.

Regulators have the responsibility to set a legal frame that allows for dynamic and trusted market environments. The EU paves the way by urging research to explore potential benefits and underlying mechanisms, for instance with regard to reputation portability. The General Data Protection Regulation, particularly its article on data portability, can be seen as a first step in this direction by obliging platform operators to allow for free-floating data.

The actions and interplay of these four groups will ultimately determine how well platforms, as a broader economic modus operandi, will succeed in addressing the current challenges for ensuring trust in the sharing economy. For the time being, the sharing economy will be safer with cautious drivers using several orientation points rather than “navigating by stars” on autopilot.


Mareike Möhlmann, Incoming Assistant Professor, Bentley University, US, mareike.moehlmann@gmail.com

Timm Teubner, Assistant Professor, TU Berlin, Germany, teubner@tu-berlin.de

Further Reading

Möhlmann, M.; Teubner, T.; & Graul, A. (2019): “Trust and reputation in sharing economies,” in Handbook of the Sharing Economy, R. Belk; G. M. Eckhardt; and F. Bardhi (eds.), Edward Elgar Publishing, 290–302.

Hawlitschek, F.; Teubner, T.; Adam, M. T. P.; Borchers, N.; Möhlmann, M.; & Weinhardt, C. (2016): “Trust in the sharing economy: An experimental framework,” in ICIS 2016 Proceedings, 1–14.

Mazzella, F.; Sundararajan, A.; D’Espous, V.; & Möhlmann, M. (2016): “How digital trust powers the sharing economy,” IESE Insight, Third Quarter (30), 24–31.

Teubner, T.; & Hawlitschek, F. (2018): “The economics of peer-to-peer online sharing,” in The Sharing Economy: Possibilities, Challenges, and the way forward, P. Albinsson and Y. Perera (eds.), Praeger Publishing, 129–156.