
Crowd Innovation: Hype or Help?

How to Manage Crowdsourcing Platforms Effectively

Ivo Blohm, Shkodran Zogaj, Ulrich Bretschneider and Jan Marco Leimeister

Keywords

Crowdsourcing, Platform Management, Governance


Not all crowdsourcing challenges are created equal
New information technologies have allowed companies to tap into the creative potential, distributed work patterns, and expansive knowledge of huge online crowds. In various business fields, crowds can solve certain problems faster, better, and cheaper than companies can in-house. Today, according to a trend report published by the platform provider eYeka in 2015, 84% of the world’s top companies – including SAP, Dell, Google, General Electric, Fiat, LEGO, and Procter & Gamble – have started to build their own crowdsourcing platforms. The crowdsourced tasks, however, are highly diverse, as are the platforms themselves. For instance, the Fiat Mio platform, where contributors collaborated to develop a new concept car, is completely different from the GE Ecomagination Challenge, where contributors compete against each other. In the case of Fiat Mio, contributions were small and took the form of sharing, commenting on, editing, or integrating ideas to develop the car collaboratively. The GE Ecomagination Challenge, in contrast, requires no substantial collaboration among contributors: it is an innovation contest in which each contribution is an independent and exhaustive solution to a specific crowdsourced task. Naturally, tasks of such different natures demand different governance mechanisms. While collaboration is an important issue for Fiat Mio, mechanisms for controlling and evaluating a high number of alternative contributions are the key challenge for GE.

Different types of crowdsourcing platforms
Crowdsourcing platforms fall into four categories, distinguished by the diversity and aggregation of their contributions (see Figure 1). The main goal of microtasking crowdsourcing platforms is the scalable and time-efficient batch processing of highly repetitive tasks, e.g., categorizing data or writing and translating small chunks of text. Crowdsourcing platforms for information pooling aggregate contributions such as votes, opinions, assessments, and forecasts through approaches such as averaging, summation, or visualization. Broadcast search platforms openly broadcast a task in order to gain alternative insights and solutions from people outside the organization. They are particularly suited to solving challenging technical, analytical, scientific, or creative problems, and are frequently used to run different kinds of innovation, design, or data science contests. Finally, open collaboration platforms invite contributors to team up to jointly solve a complex problem whose solution requires the integration of distributed knowledge and the skills of many contributors. The individual contributions are aggregated such that one or more solutions to the underlying problem can emerge. In practice, however, pure forms of these archetypes are rare; crowdsourcing platforms frequently combine several traits.

[Figure 1: Four types of crowdsourcing platforms, distinguished by the diversity and aggregation of their contributions]
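To make the aggregation idea behind information pooling concrete, the following Python sketch pools independent crowd contributions by averaging numeric forecasts and tallying votes. The data and function names are hypothetical and only illustrate the aggregation logic described above, not the mechanics of any specific platform.

```python
from collections import Counter
from statistics import mean, median

def pool_forecasts(forecasts):
    """Aggregate independent numeric estimates (e.g., demand forecasts)
    by averaging; the median is a robust alternative to the mean."""
    return {"mean": mean(forecasts), "median": median(forecasts)}

def pool_votes(votes):
    """Aggregate categorical contributions (e.g., votes on options)
    by summation into a tally per option."""
    return Counter(votes)

# Hypothetical contributions from five independent contributors
print(pool_forecasts([120, 95, 110, 130, 105]))  # {'mean': 112, 'median': 110}
print(pool_votes(["A", "B", "A", "A", "C"]))     # Counter({'A': 3, 'B': 1, 'C': 1})
```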

How to manage the different types successfully
The management of these different types needs to reflect their varied goals and nature along several dimensions. Governance involves structuring roles and responsibilities, formal and informal rules, standards and regulations, outcome control measures, communication processes, and details of task allocation to achieve a crowdsourcer’s goal. In a research project, we identified six governance domains that encompass 21 distinct governance mechanisms for crowdsourcing. We investigated a total of 19 platforms, studying at least four typical platforms for each platform type. The purpose of our study was to identify effective governance mechanisms for each type of platform. Figure 2 summarizes which governance mechanisms are effective for which platform type.

[Figure 2: Effective governance mechanisms for each type of crowdsourcing platform]

Effective governance of microtasking platforms
Organizations that host a microtasking platform should consider governance mechanisms that are primarily geared towards assuring an adequate quality of contributions. To ensure the repeated and parallelized execution of tasks, modularization is key; to receive high-quality contributions, crowdsourcers should communicate clear contribution requirements. Such definitions provide contributors with a clear set of instructions that helps them better understand the tasks and document the results of their work. For example, Clickworker provides templates for defining the characteristics of desired results.
The large quantity of contributions also makes quality monitoring a challenge. Whenever possible, management should opt for automated quality control. Our results demonstrate that relatively simple measures that are easily adopted by many crowdsourcers prove effective. Financial payments are the primary incentive for contributors: typically, each contributor who completes a valid solution that satisfies the task requirements is paid. We recommend that hosts of microtasking platforms also install a reputation system with which contributors can signal their skills, expertise, or participation level. Such systems effectively complement financial incentives and increase the effort exerted by contributors. Further, authentication for verifying the identity of contributors is recommended, as it helps prevent misconduct: it removes contributors’ anonymity and incentivizes more accurate task performance.
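One common way to automate quality control on microtasking platforms is to assign each task redundantly and accept the majority answer. This is a widely used technique rather than necessarily the exact mechanism of the platforms we studied; the sketch below, with hypothetical data, shows the idea.

```python
from collections import Counter

def majority_answer(answers, min_agreement=0.6):
    """Return the majority answer for a redundantly assigned microtask,
    or None if agreement falls below the required threshold (in which
    case the task would be reassigned or reviewed manually)."""
    if not answers:
        return None
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count / len(answers) >= min_agreement else None

# Three contributors label the same item; two of three agree.
labels = ["cat", "cat", "dog"]
print(majority_answer(labels))  # "cat" (2/3 ≈ 0.67 >= 0.6)
```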

Effective governance of information pooling platforms
Organizations intending to establish an information pooling platform should implement a governance structure that focuses on helping contributors submit high-quality information. They should define contribution requirements and offer tutorials. For instance, BahnScout has clear guidelines: contributors are expected to include a picture of the issue, a textual description, and the precise location, to select a predefined category, and to mention potential hazards. Participation in this type of crowdsourcing is typically voluntary, so most contributors are personally interested in the task or project. To get a realistic picture and avoid bias, organizations should focus on integrating diverse and independent contributions, e.g., through demographic-based task allocation. For this type of crowd work, non-financial incentive mechanisms such as reputation systems are most effective. Rankings or experience levels are good tools for motivating contributors because they enable contributors to signal their standing within a platform’s community. Similarly, socialization mechanisms that let contributors communicate and interact with peers are often appreciated.
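Demographic-based task allocation could be sketched as drawing contributors evenly across demographic strata so that no single group dominates the pooled result. The strata, field names, and sampling rule below are illustrative assumptions, not a description of any particular platform.

```python
import random
from collections import defaultdict

def allocate_across_strata(contributors, per_stratum, key="age_group"):
    """Sample an equal number of contributors from each demographic
    stratum to keep the pooled contributions diverse and independent."""
    strata = defaultdict(list)
    for contributor in contributors:
        strata[contributor[key]].append(contributor)
    selected = []
    for members in strata.values():
        selected.extend(random.sample(members, min(per_stratum, len(members))))
    return selected

# Hypothetical crowd with an overrepresented age group
crowd = [{"id": 1, "age_group": "18-29"}, {"id": 2, "age_group": "30-49"},
         {"id": 3, "age_group": "18-29"}, {"id": 4, "age_group": "50+"}]
print(allocate_across_strata(crowd, per_stratum=1))  # one contributor per group
```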

Effective governance of broadcast search platforms
Completely open approaches to broadcast search tend to create a lot of “noise”, resulting in many low-quality contributions. To receive a manageable number of contributions without substantially reducing the chances of getting high-quality ones, organizations should consider focusing their broadcast search on groups of contributors with proven abilities. For this type of platform, contribution requirements again play a crucial role and should be defined carefully; they should ensure that results can be implemented in practice. For broadcast search, financial incentives are particularly important. Usually, the best contribution receives a significant prize while unsuccessful participants come away empty-handed. The jovoto platform, for example, recognized that competing for such prizes is perceived as risky by many contributors. To ensure broad participation, jovoto usually offers multiple prizes, such as rewards for runner-up contributions or progress prizes for the best contribution at the halfway point of the contest. In some cases, payments for mere participation can also be considered. This is common when a group of contributors with specific skills, e.g., design professionals, is included within the broadcast search, or for invitation-only projects with a limited number of participants.
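A multi-prize payout schedule of the kind described above could be represented as a simple configuration. The tiers and amounts below are hypothetical and do not reflect jovoto’s actual prize structure.

```python
# Hypothetical prize schedule for an innovation contest: a main prize,
# runner-up rewards, and a mid-contest progress prize reduce the
# perceived risk of competing and broaden participation.
PRIZE_SCHEDULE = {
    "winner": 5000,             # best contribution at final judging
    "runners_up": [1500, 750],  # second and third place
    "progress_prize": 500,      # best contribution at the halfway point
}

def total_payout(schedule):
    """Sum all prize money committed to the contest."""
    return (schedule["winner"]
            + sum(schedule["runners_up"])
            + schedule["progress_prize"])

print(total_payout(PRIZE_SCHEDULE))  # 7750
```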


Effective governance of open collaboration platforms
For open collaboration platforms, modularization of tasks that structures the collective effort of contributors, alongside incentives that appeal to intrinsic motivation, can be highly effective. The overarching goals of the task are often broad and complex and should be broken down into sub-goals, which can be framed in a project-like fashion. Frequently, contributors perceive the topic of an open collaboration platform as personally important and are willing to expend substantial effort to achieve its goals. Organizations should therefore define precise and inclusive objectives that appeal to many contributors and ensure that these objectives are clearly communicated on the platform. Due to the collective nature of open collaboration, peer assessment is an effective mechanism for quality assurance: quality control can be achieved by letting participants validate the contributions of other contributors. Apart from peer assessment, open collaboration platforms should provide a variety of socialization mechanisms that enable contributors to immerse themselves in the community. Contributors need to be able to communicate, to exchange and discuss their ideas with their peers, and to resolve disputes during collaboration. For this purpose, all the open collaboration platforms we investigated maintain communication forums that are used extensively. While these forums provide a general communication infrastructure, open collaboration platforms should also offer sophisticated structures with which contributors can directly collaborate on their emerging contributions. Further, providing contributors with feedback is key to long-term success and to the development of the platform. Contributors consider feedback on the collective effort of the community a genuine sign of appreciation.
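Peer assessment could be implemented as a simple approval threshold: a contribution counts as validated once enough peers, excluding the author, have reviewed and endorsed it. The data structures and thresholds in this sketch are hypothetical.

```python
def is_validated(contribution, min_reviews=3, min_approval=0.75):
    """A contribution passes peer assessment once it has enough reviews
    from peers other than the author and a sufficiently high approval rate."""
    reviews = [r for r in contribution["reviews"]
               if r["reviewer"] != contribution["author"]]
    if len(reviews) < min_reviews:
        return False
    approvals = sum(1 for r in reviews if r["approved"])
    return approvals / len(reviews) >= min_approval

# Hypothetical contribution with four peer reviews
contribution = {
    "author": "alice",
    "reviews": [
        {"reviewer": "bob", "approved": True},
        {"reviewer": "carol", "approved": True},
        {"reviewer": "dave", "approved": True},
        {"reviewer": "erin", "approved": False},
    ],
}
print(is_validated(contribution))  # True (3/4 = 0.75 >= 0.75)
```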

Don't expect too much too quickly
Crowdsourcing can achieve astonishing results, but getting a platform right is an ongoing project. The analysis in this article can help define the goal and the key design of a crowdsourcing platform’s governance system. Nevertheless, we recommend starting small. Effective governance is an experiential learning process, and appropriate mechanisms may not spring into being all at once. Organizations should consider pilot-testing their governance mechanisms with a series of smaller crowdsourcing projects in a noncritical environment. They should also consider restricting the crowd to create room for experimentation and to learn how to improve governance without fear of negative consequences. Managers responsible for crowdsourcing platforms should recognize that they are the “middlemen” between the organization and the crowd. To avoid redundant, time-consuming interactions, managers should invest in making their governance mechanisms scalable once an effective proof of concept has been accomplished. Finally, companies establishing crowdsourcing platforms should continuously monitor and adjust their governance mechanisms. The quality and quantity of contributions, project runtime, and the effort required to conduct the crowdsourcing project are good starting points for such monitoring.
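Such monitoring could start from a handful of simple indicators. The metrics and sample data below are illustrative, not prescriptive; which indicators matter will depend on the platform type and its goals.

```python
from datetime import date

def project_kpis(contributions, start, end):
    """Compute simple monitoring indicators for a crowdsourcing project:
    quantity and quality of contributions plus project runtime."""
    accepted = [c for c in contributions if c["accepted"]]
    return {
        "contributions": len(contributions),
        "acceptance_rate": len(accepted) / len(contributions) if contributions else 0.0,
        "runtime_days": (end - start).days,
    }

# Hypothetical project with three contributions over six weeks
sample = [{"accepted": True}, {"accepted": False}, {"accepted": True}]
print(project_kpis(sample, date(2024, 3, 1), date(2024, 4, 15)))
```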

Authors

Ivo Blohm, Assistant Professor for Data Science & Management, University of St. Gallen, Switzerland, ivo.blohm@unisg.ch

Shkodran Zogaj, Research Assistant, University of Kassel, Germany

Ulrich Bretschneider, Interim Professor for Corporate Entrepreneurship & Digital Transformation, University of Witten/Herdecke, Germany

Jan Marco Leimeister, Full Professor for Information Systems Research, University of St. Gallen & University of Kassel, Germany, janmarco.leimeister@unisg.ch

Further Reading

Blohm, I.; Zogaj, S.; Bretschneider, U.; Leimeister, J.M. (2018): “How to Manage Crowdsourcing Platforms Effectively?”, in: California Management Review, Vol. 60 (2), 122-149. doi: 10.1177/0008125617738255