My research is on human-in-the-loop systems and the design of new algorithms, interaction techniques, and incentive mechanisms for leveraging "the wisdom of the crowd" to solve problems in information retrieval, science, and medicine. I am driven both by the desire to answer fundamental questions about the evolving nature of the human-machine partnership and by the desire to address real-world problems while engaging real users over the Web. As part of my research, we are building two platforms for scientific and medical crowdsourcing: CrowdCurio and CrowdEEG.
  • Typewriter Girl

    Typewriter Girl is a game with a purpose (GWAP) for transcribing handwritten letters housed in libraries and museums. Within the context of this game, we will develop interfaces based on psychological theories of curiosity and measure how they affect players' performance and engagement. At the same time, the platform serves as a general framework for the transcription of ancient manuscripts in libraries and museums. This project is in collaboration with Prof. Karen Bourrier (University of Calgary) and Mary Borgo (Indiana University).


    CrowdEEG

    We are developing a framework that combines machine and human intelligence to classify human clinical EEG recordings, aiming to increase the accuracy, cost efficiency, and capacity of analysis. We study how EEG specialists can be supported in the collaborative analysis of ambiguous cases, how to combine active learning algorithms with crowdsourcing to reduce the need for expert involvement, and how to enable non-experts to perform such complex medical annotation tasks. This project is in collaboration with Prof. Joelle Pineau (McGill University) and Dr. Andrew Lim (Sunnybrook Hospital). (IJCAI 2016)
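One common way to reduce expert involvement in such a pipeline is uncertainty-based routing: segments the classifier is confident about go to the non-expert crowd, while low-confidence (ambiguous) segments are escalated to experts. The sketch below is purely illustrative — the threshold, function names, and routing rule are my assumptions, not the system described above.

```python
def route_segments(confidences, threshold=0.8):
    """Split EEG segment indices by classifier confidence.

    confidences: list of floats in [0, 1], one per segment.
    Returns (expert_queue, crowd_queue) as two lists of indices:
    low-confidence segments are escalated to expert reviewers,
    the rest are sent to the non-expert crowd.
    """
    expert_queue = [i for i, c in enumerate(confidences) if c < threshold]
    crowd_queue = [i for i, c in enumerate(confidences) if c >= threshold]
    return expert_queue, crowd_queue

# Example: segments 1 and 3 are ambiguous and go to experts.
experts, crowd = route_segments([0.95, 0.55, 0.99, 0.70])
```

In an active-learning loop, the expert labels collected this way would then be fed back to retrain the classifier, shrinking the expert queue over time.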

    Crowdsourcing Music Transcription

    We are investigating how to turn music into scores by combining algorithms and crowdsourcing, by decomposing the music transcription tasks into simpler tasks that people with no formal musical training can do, and combining automated melody and chord extraction techniques with human corrections. This project is in collaboration with Justin Salamon at NYU. (ISMIR 2016).
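A minimal way to combine automated extraction with human corrections is to overlay crowd votes on the machine's note sequence and accept a correction only when a strict majority of workers agree. This is an illustrative sketch under my own assumptions (data shapes, names, and the majority rule are hypothetical), not the system from the paper.

```python
from collections import Counter

def merge_transcription(machine_notes, crowd_corrections):
    """Merge a machine-extracted melody with crowd corrections.

    machine_notes: list of note names, one per time step.
    crowd_corrections: list of {time_step: note} dicts, one per worker.
    A correction replaces the machine's note only if a strict majority
    of the workers who voted on that time step agree on it.
    """
    merged = list(machine_notes)
    votes = {}
    for worker in crowd_corrections:
        for t, note in worker.items():
            votes.setdefault(t, Counter())[note] += 1
    for t, counter in votes.items():
        note, count = counter.most_common(1)[0]
        if count > sum(counter.values()) / 2:
            merged[t] = note
    return merged

# Two of three workers correct time step 1 to F4, so it is accepted.
result = merge_transcription(["C4", "E4", "G4"],
                             [{1: "F4"}, {1: "F4"}, {1: "E4"}])
```

The key design point is that the machine output serves as the default, so sparse or conflicting human input degrades gracefully to the automated transcription.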

    Citizen Science

    Volunteer-based crowdsourcing platforms, which rely on everyday citizens to perform tasks towards a serious purpose without monetary payment, face a unique set of design challenges. CrowdCurio is a research infrastructure for studying these challenges, and a platform that enables researchers to more easily create and manage crowdsourcing projects, coordinating both expert communities and non-expert crowds to collectively collect, annotate, and analyze research data. (HCOMP 2013, Harvard Magazine)

  • X-with-a-Purpose Systems

    X-with-a-Purpose systems engage participants to perform tasks as a by-product of another activity they are intrinsically motivated to do. Games with a Purpose are a prime example: game players generate useful labeled data for information retrieval or machine learning as a by-product of playing a casual game. Research goes into making these games both fun and spammer-proof, two important but often conflicting objectives. As another example, SimplyPut is a crowdsourced, extractive summarization system that asks readers to perform small tasks towards producing structured, purpose-driven summaries. (CHI09, ISMIR09, ECML10, FGVC11)

  • Large-Scale Collaborative Planning Systems

    An important class of under-explored problems is those whose solutions must satisfy a set of global requirements. These problems are difficult to solve using explicit workflows, because global constraints are hard to capture and maintain explicitly, and because decomposing the task and crowdsourcing partial contributions independently provides no guarantee of how well the composed solution satisfies the global constraints. Instead of a workflow, we demonstrate that a social computing environment, in which the current solution and problem-solving context are exposed and shared among participants, can enable efficient contributions towards building a solution that meets global constraints. (AAAI11, CHI12)


Publications

P. Jaini, Z. Chen, P. Carbajal, E. Law, L. Middleton, K. Regan, M. Schaekermann, G. Trimponias, J. Tung and P. Poupart. "Online Bayesian Transfer Learning for Sequential Data Modeling." In 5th International Conference on Learning Representations (ICLR), 2017.

E. Law, K. Z. Gajos, A. Wiggins, M. Gray and A. Williams. "Crowdsourcing as a Tool for Research: Implications of Uncertainty." In CSCW 2017.

T. Tse, J. Salamon, A. Williams, H. Jiang and E. Law. "Ensemble: A Hybrid Human-Machine System for Generating Melody Scores from Audio." In ISMIR 2016.

S. Pan, K. Larson, J. Bradshaw and E. Law. "Dynamic Task Allocation Algorithm for Hiring Workers that Learn." In IJCAI 2016.

E. Law, M. Yin, J. Goh, K. Chen, M. Terry and K. Gajos. "Curiosity Killed the Cat, but Makes Crowdwork Better." In CHI 2016.
* Best Paper Honorable Mention

E. Law, B. Grosz, L. M. Sanders and S. H. Fischer. "SimplyPut: Leveraging a Mixed-Expertise Crowd to Improve Health Literacy." In AAMAS Workshop on Human-Agent Interaction Design and Models 2013.

E. Law, C. Dalton, N. Merrill, A. Young, K. Z. Gajos. "Curio: A Platform for Supporting Mixed-Expertise Crowdsourcing." In HCOMP 2013.

O. Amir, B. Grosz, E. Law and R. Stern. "Collaborative Health Plan Support." In AAMAS 2013.
* Challenges and Visions Track, Best Paper Second Prize

H. Zhang, E. Law, K. Gajos, E. Horvitz, R. C. Miller and D. Parkes. "Human Computation Tasks with Global Constraints: A Case Study." In CHI 2012.
* Best Paper Honorable Mention

E. Law and L. von Ahn. Human Computation. Morgan & Claypool Synthesis Lectures on Artificial Intelligence and Machine Learning, edited by Ron Brachman, Tom Dietterich and William Cohen, June 2011.

E. Law. "Human Computation for Music Classification." In Music Data Mining, edited by T. Li, M. Ogihara and G. Tzanetakis. CRC Press/Chapman Hall, 2011.

E. Law, B. Settles, A. Snook, H. Surana, L. von Ahn and T. Mitchell. "Human Computation for Attribute and Attribute Value Acquisition." In CVPR Workshop on Fine-Grained Visual Categorization 2011.

E. Law, P. Bennett, and E. Horvitz. "The Effects of Choice in Routing Relevance Judgments." In SIGIR 2011.

E. Law and H. Zhang. "Towards Large-Scale Collaborative Planning: Answering High-Level Search Queries Using Human Computation." In AAAI 2011.

E. Law, B. Settles and T. Mitchell. "Learning to Tag using Noisy Labels." In ECML 2010.

E. Law, K. West, M. Mandel, M. Bay and S. Downie. "Evaluation of Algorithms Using Games: The Case of Music Tagging." In ISMIR 2009.

E. Law and L. von Ahn. "Input-agreement: A New Mechanism for Data Collection using Human Computation Games." In CHI 2009.
* Best Paper Honorable Mention

J. Betteridge, A. Carlson, S. Hong, E. Hruschka Jr., E. Law, T. Mitchell and S. Wang. "Towards Never Ending Language Learning." In AAAI Spring Symposium on Learning by Reading and Learning to Read 2009.

E. Law, L. von Ahn and T. Mitchell. "Search Wars: A Game for Improving Web Search." In HCOMP 2009.

E. Law, A. Mityagin and M. Chickering. "Intentions: A Game for Classifying Search Query Intent." In CHI 2009 Work-in-Progress.

E. Law. "The Problem of Accuracy as an Evaluation Criterion." In ICML Workshop on Evaluation Methods for Machine Learning 2008.

E. Law, L. von Ahn, R. Dannenberg and M. Crawford. "TagATune: a Game for Sound and Music Annotation." In ISMIR 2007.