Crowd collaboration projects, by contrast, do not seek the best individual solution to a problem, but try to tap into collective wisdom, aggregating knowledge and ideas into a coherent and value-creating whole. Wikipedia is probably the best-known example. Another is OpenIDEO, launched by the design and consulting firm IDEO as an “open innovation platform where people from all corners of the world collaboratively tackle some of the toughest global issues through launching challenges, programs, and other tailored experiences”. Based on “design thinking”, the IDEO community shares ideas, collaboratively refines them, and tries to solve problems like “How might mobile technology help improve access to healthcare?”. Some companies have begun to involve large internal and external crowds in strategy-making. IBM, for instance, invited its 150,000 employees plus external participants such as business partners, customers, and university researchers into its strategy process, attracting more than 46,000 ideas. The US Navy used a crowdsourcing platform in the form of a massive online war game to update its strategic plan.
Crowd complementors are a third common form of crowdsourcing. With this approach, a product or platform owner invites the crowd to develop innovative solutions that create value through complementary innovations. In contrast to the other two forms, it does not seek the solution to a defined and specific problem, but new applications for many different problems. Amazon, for example, allows the crowd to develop and publish skills for its virtual assistant Alexa. Using the Alexa Skills Kit, the crowd had developed almost 60,000 skills by the end of 2018. In 2019, Amazon went further, allowing every user to create skills from templates and publish them.
While these are the most relevant forms, crowdsourcing has been extended to many other types of tasks. Micro jobs are offered to crowd workers on platforms like TaskRabbit or Amazon’s Mechanical Turk. Crowdfunding for startups can be performed via Kickstarter, for instance, and user-generated content can be offered on iStock’s photo collection, YouTube, and many other platforms.
Why crowdsourcing works
What makes crowds attractive as innovation partners? And why are strangers and anonymous experts often the ones who come up with the most original or simplest solutions? Research has identified four basic explanations.
Marginality refers to the distance between the solver’s field of technical expertise and the focal field of the problem. Karim Lakhani, professor at the Harvard Business School and one of the foremost experts in crowdsourcing, has spent years conducting and studying hundreds of crowdsourcing projects. In the case of the crowdsourcing platform InnoCentive, he found that topical distance was positively related to the rate of winning solutions. Technical and social marginality can be a source of different perspectives and heuristics, and can play an important role in explaining individual success in problem-solving. Experts, industry specialists, and professionals tend to generate many good ideas, but with little variation. Due to specific education, formal training, work experience, and regular practical application, experts accumulate knowledge in their specific domain. They develop routines to solve frequently encountered problems and converge on conventional cognitive frameworks. Crowdsourcing, on the other hand, attracts a diverse audience and a variety of nontraditional problem solvers.
The Bell Curve. Karim Lakhani’s second observation regarding the Bell Curve of ideas is simple but compelling. Innovative ideas tend to be normally distributed. There will be a few “low quality” ideas, many average ideas, a few good ones, and with luck, one or two that are exceptional. To develop groundbreaking innovations, companies seek those exceptional ideas or, statistically speaking, outliers. Outliers are extremely rare in small samples, however. When it comes to innovation, whether strategic, technological, or new products, we care about “extreme values”, and to get those we need large samples. The Austrian crystal producer Swarovski, for instance, invited more than 1,700 participants to submit over 3,000 pieces of jewelry during a jewelry design competition. Among the participants were both professional designers and amateurs or hobbyists. Submitted designs were evaluated by all users, with the top designs generating more than 4,400 evaluations. Statistical analysis revealed the bell-curve pattern depicted in Figure 2: designs by professionals, on average, received the highest ratings, and their variance in quality was the lowest. Non-professionals submitted designs of lower average quality, but with higher variance. And the designs rated exceptionally highly – representing the “extreme values” – came from the non-professionals!
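The statistical logic behind this pattern can be sketched with a small simulation, assuming quality ratings are normally distributed within each group. The group sizes, means, and variances below are illustrative assumptions, not the Swarovski figures; the point is that a large, high-variance pool of amateurs tends to produce the single most extreme entry even though its average quality is lower.

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

# Hypothetical parameters: professionals have a higher mean quality
# but low variance; amateurs a lower mean but high variance.
professionals = [random.gauss(7.0, 0.8) for _ in range(300)]
amateurs = [random.gauss(5.5, 2.0) for _ in range(1500)]

print(f"professionals: mean {statistics.mean(professionals):.2f}, "
      f"best {max(professionals):.2f}")
print(f"amateurs:      mean {statistics.mean(amateurs):.2f}, "
      f"best {max(amateurs):.2f}")
```

Under these assumptions, the professionals win on average quality, but the single best design comes from the amateur pool. Increasing the amateur sample size or variance pushes the expected extreme value further out, which is why large crowds matter when the goal is an outlier rather than a good average.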