ABSTRACT
A growing number of people work as part of online crowd work, which is often assumed to be low-wage labor. However, we know little about the wage distribution in practice or about what drives low and high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only ~$2/h, and only 4% earned more than $7.25/h. While the average requester pays more than $11/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for, e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
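The abstract's point about unpaid work can be made concrete with a small sketch: the effective hourly wage divides earnings by total working time, where total time includes unpaid activities (searching for tasks, rejected work, unsubmitted tasks). The function name and all figures below are illustrative assumptions, not values or code from the paper.

```python
def effective_hourly_wage(earnings_usd: float,
                          paid_task_seconds: float,
                          unpaid_seconds: float) -> float:
    """Earnings per hour once unpaid time is counted in the denominator.

    Hypothetical sketch: unpaid_seconds stands in for time spent
    searching for tasks, doing rejected work, and abandoning tasks.
    """
    total_hours = (paid_task_seconds + unpaid_seconds) / 3600
    return earnings_usd / total_hours


# Illustrative worker: $5.00 earned over 1.5 h of paid task time,
# plus 1 h of unpaid time (search, rejections, abandoned tasks).
naive = effective_hourly_wage(5.00, 5400, 0)        # ignores unpaid time
adjusted = effective_hourly_wage(5.00, 5400, 3600)  # counts unpaid time
```

In this toy example the naive figure is about $3.33/h while the adjusted figure drops to $2.00/h, which is why the paper's wage estimates depend on how unpaid work is accounted for.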