How to improve the quality of the results of a crowdsourcing initiative

When launching a collective-intelligence initiative, crowdsourcing or otherwise, a doubt arises: how good will the result be? Can a crowd of strangers actually solve a problem, perform a task, and handle similar challenges well? Can their solution match the one a specialist would give, or even surpass it?

Experience shows that it can, as long as the right circumstances are in place. As James Surowiecki explained in his book “The Wisdom of Crowds”:

“Under the right circumstances, groups are remarkably smart – smarter even sometimes than the smartest people in them”.

In any case, crowdsourcing in particular is not a magical process; it requires preparation, and these “right circumstances” must be facilitated. Regarding the quality of the result, several actions can be taken.

First of all, you have to look for the crowd in the right place. It is true that one of the defining characteristics of crowdsourcing is the open call: anyone can join in. Even so, the initiative can be promoted beyond the usual general channels, for example in groups with particular interests, among Twitter users with a relevant profile, in forums related to the subject, and so on. This is clearly seen in well-run crowdfunding campaigns.

Secondly, it is important to prepare the initiative and ensure it works properly before, during and after its execution.

What to do in advance

The actions carried out before the initiative obviously cannot focus on the generated content (it does not exist yet); instead, they prepare the ground and help screen out malicious users and low-quality work.

There are two generic actions that can be used in any case:

  • Provide a clear and succinct description of the task to be performed by the crowd.
  • Make sure that crowdworkers have the necessary information and tools (those that depend on the crowdsourcer) to carry out the task without problems.

On the other hand, and depending on the type of initiative proposed, other actions are:

  • Run a mandatory pre-training task so that users know what kind of task they will face and can prepare for the real one.
  • Run a pre-assessment task that allows the crowdsourcer to accept or discard crowdworkers.
  • Use previous experience, if it is recorded (the task history), to accept or discard users.
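As a sketch of the pre-assessment idea, the crowdsourcer could grade a short qualification task against an answer key and only admit workers above a threshold. The function names, data shapes and the 0.8 cutoff below are illustrative assumptions, not prescribed by the text or any platform:

```python
# Hypothetical pre-assessment filter: admit a crowdworker only if
# they pass a short qualification task. The 0.8 threshold is an
# assumed example value.

def grade_qualification(answers, answer_key):
    """Fraction of qualification questions answered correctly."""
    correct = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    return correct / len(answer_key)

def is_admitted(answers, answer_key, threshold=0.8):
    """Accept the worker only above the chosen accuracy threshold."""
    return grade_qualification(answers, answer_key) >= threshold

key = {"q1": "cat", "q2": "dog", "q3": "bird"}
print(is_admitted({"q1": "cat", "q2": "dog", "q3": "fish"}, key))  # False (2/3 < 0.8)
```

The same grading function could also feed the task history mentioned in the last bullet, so that past accuracy informs future acceptance.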

What to do meanwhile

During the initiative, other actions can be:

  • Analyze users' results as they come in, discarding those who do not meet certain criteria.
  • Use the “gold standard” mechanism during the execution of the task: basic control questions with known answers are posed and must be answered correctly.
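The gold-standard mechanism in the second bullet can be sketched as follows: a few control items with known answers are mixed into a worker's batch, and workers who miss too many are flagged. The data structures and the zero-miss tolerance here are assumptions for illustration:

```python
# Illustrative gold-standard check. GOLD maps control item ids to
# their known answers; a worker missing more than max_misses of
# them fails the check. All names and thresholds are assumed.

GOLD = {"g1": "yes", "g2": "no"}

def passes_gold_standard(worker_answers, gold=GOLD, max_misses=0):
    """True if the worker missed at most max_misses control items."""
    misses = sum(1 for item, answer in gold.items()
                 if worker_answers.get(item) != answer)
    return misses <= max_misses

print(passes_gold_standard({"g1": "yes", "g2": "no", "t7": "maybe"}))  # True
print(passes_gold_standard({"g1": "yes", "g2": "yes"}))                # False
```

In practice the control items would be visually indistinguishable from real tasks, so workers cannot single them out for extra care.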

Regarding the first point, an example is the one proposed by Zhai et al. (2012). They analyzed the average time users invested in completing the tasks, and used that average to verify whether a given user was devoting enough time or attention.
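In the same spirit as that time-based check, a loose sketch (not Zhai et al.'s actual implementation) could flag workers whose mean time per task falls well below the crowd's overall mean; the 0.5 ratio is an assumed cutoff:

```python
# Loose sketch of a time-on-task filter inspired by the idea above.
# The 0.5 ratio is an assumed cutoff, not from the cited paper.
from statistics import mean

def flag_fast_workers(times_by_worker, min_ratio=0.5):
    """Return workers whose average task time is below
    min_ratio times the overall average task time."""
    all_times = [t for times in times_by_worker.values() for t in times]
    overall = mean(all_times)
    return [w for w, times in times_by_worker.items()
            if mean(times) < min_ratio * overall]

times = {"alice": [30, 28, 32], "bob": [5, 4, 6], "carol": [25, 27]}
print(flag_fast_workers(times))  # ['bob']
```

Flagged workers need not be discarded outright; their results could instead be weighted lower or routed to manual review.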

What to do afterwards

After the initiative, the focus shifts to the results themselves. The methods below make it possible to unify the results of multiple tasks, generate reliable outcomes and discard undesired results (because of their low quality).

Some of these actions are:

  • The decision of the majority (the same task is repeated and, if the successive results coincide, that result is accepted)
  • Manual review of the results (an expensive method)
  • The use of automated post-processing techniques
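The majority-decision method in the first bullet can be sketched as a simple vote over the repeated results for one task; requiring more than half agreement and escalating ties to manual review are my assumed policies:

```python
# Sketch of majority aggregation: the same task is answered several
# times and the most frequent result wins. Returning None when no
# clear majority exists (for manual review) is an assumed policy.
from collections import Counter

def majority_result(results, min_agreement=0.5):
    """Return the winning answer, or None if agreement is too low."""
    counts = Counter(results)
    answer, votes = counts.most_common(1)[0]
    if votes / len(results) > min_agreement:
        return answer
    return None  # no clear majority: escalate to manual review

print(majority_result(["cat", "cat", "dog"]))  # 'cat'
print(majority_result(["cat", "dog"]))         # None
```

Raising min_agreement trades throughput for reliability: more tasks fall through to the (expensive) manual review of the second bullet.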


References

  • Zhai, Z., Kijewski-Correa, T., Hachen, D., & Madey, G. (2012). Haiti earthquake photo tagging: Lessons on crowdsourcing in-depth image classification. In Seventh International Conference on Digital Information Management (ICDIM 2012) (pp. 357–364). IEEE. doi:10.1109/ICDIM.2012.6360130.

