At a conceptual level, I personally identify, up to now, three moments marked by different authors. There are many fundamental authors in crowdsourcing research whom I do not mention or consider here, authors like Ipeirotis, Vuković, etc. The reason is that they have focused their research on specific characteristics of crowdsourcing.
- The coining of the term would be the first moment, in which Jeff Howe and Daren Brabham are the most important authors. Jeff Howe, besides coining the term, was the first to give a definition and propose a concrete typology. Daren Brabham did the same a little later, completing, from my point of view, the work begun by Howe. We are talking about 2006-2008.
- A second moment, in 2012, is delimited by the definition that Fernando González and I proposed and by the typology described by Geiger et al. (2011).
- The third moment would be around 2015. Here I would especially highlight the chapter "Human-Computer Interaction and Collective Intelligence" of the book "Handbook of Collective Intelligence", written by Bigham, Bernstein & Adar. In this chapter, the authors propose the existence of three types of crowdsourcing: direct crowdsourcing, collaborative crowdsourcing and passive crowdsourcing (and it is in this last type where the evolution issue lies).
Direct crowdsourcing refers to classic crowdsourcing, where a crowdsourcer proposes a task to the crowd through platforms like Amazon Mechanical Turk.
Collaborative crowdsourcing refers to situations in which the crowd itself determines the way of working and the task to be done. Members of the crowd tend to share the same interests, and this type of initiative arises, to some extent, spontaneously.
Passive crowdsourcing refers to initiatives in which the user is not asked to participate; instead, the crowdsourcer takes advantage of content generated by users and made public. This is the case analyzed by Sambuli et al. (2013), who studied messages on Twitter from a large group of users to predict political results. It is also, strictly speaking, the case of the research work I carried out when analyzing the use of tags by users in social bookmarking systems (Estellés-Arolas and González-Ladrón-de-Guevara, 2012).
From my point of view, in the case of collaborative crowdsourcing, one could accept an evolution that admits situations in which the crowd itself is the promoter of the initiative. There is still a crowdsourcer, but it is not a single person; rather, a group of people launch the initiative and configure it in a coordinated way. Other people then join that initial group, but the way of working has already been delimited, and the newcomers simply follow the flow.
In the case of passive crowdsourcing, I have my doubts. I always thought that this type of initiative was typical of collective intelligence, but not of crowdsourcing (not all collective intelligence initiatives are crowdsourcing initiatives). The fundamental reason is that I have always understood crowdsourcing as an interaction between people: a promoter launches a task, and each member of a group of people individually assesses whether it is worth carrying out that task and getting involved in it. There is an interaction, a proposal and an assessment of it. In the case these authors propose, there is no such interaction at all.
From my point of view, this type of crowdsourcing is rather a generalization of the term "crowdsourcing" that equates it with "collective intelligence" as a whole.
Geiger, D., Seedorf, S., Schulze, T., Nickerson, R. C., & Schader, M. (2011, August). Managing the crowd: Towards a taxonomy of crowdsourcing processes. In AMCIS 2011 Proceedings.
Bigham, J. P., Bernstein, M. S., & Adar, E. (2015). Human-computer interaction and collective intelligence. In T. W. Malone & M. S. Bernstein (Eds.), Handbook of Collective Intelligence. MIT Press.
Sambuli, N., Crandall, A., Costello, P., & Orwa, C. (2013). Viability, verification, validity: 3Vs of crowdsourcing. iHub Research.
Estellés-Arolas, E., & González-Ladrón-de-Guevara, F. (2012). Uses of explicit and implicit tags in social bookmarking. Journal of the American Society for Information Science and Technology, 63(2), 313-322.