Crowdsourcing is widely proposed as a method for solving a large variety of judgement tasks, such as classifying website content, peer grading in online courses, or collecting real-world data. As the data reported by workers cannot be verified, workers have an incentive to report random data without actually solving the task. This can be countered by making the reward for an answer depend on its consistency with the answers given by other workers, an approach called {\em peer consistency}. However, such schemes admit an undesirable equilibrium in which all workers report the same answer without solving the task. Dasgupta and Ghosh (WWW 2013) show that in some cases exerting high effort can be encouraged in the highest-paying equilibrium. In this paper we present a general mechanism that implements this idea and is applicable to most crowdsourcing settings. Furthermore, we experimentally test the mechanism and validate its theoretical properties.
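As intuition for how peer consistency can reward effort rather than blind agreement, consider a Dasgupta--Ghosh-style payment for binary tasks (a minimal sketch in our own illustrative notation; the symbols $x_i^k$, $\bar{x}_i$ and this specific form are assumptions, not taken from this paper):
\[
\tau_i^k \;=\; \mathbb{1}\!\left[x_i^k = x_j^k\right] \;-\; \Bigl(\bar{x}_i\,\bar{x}_j + (1-\bar{x}_i)(1-\bar{x}_j)\Bigr),
\]
where $x_i^k \in \{0,1\}$ is worker $i$'s report on task $k$, $j$ is a randomly chosen peer assigned the same task, and $\bar{x}_i$ is the fraction of $1$-reports by worker $i$ on a disjoint set of tasks. The subtracted term equals the agreement probability attainable by reporting without looking at the task, so task-independent strategies (including collusion on a fixed answer) earn zero in expectation, while answers correlated through the true task content earn a strictly positive reward.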