Crowdsensus

Abdullah X. Ali, University of Washington
Meredith Ringel Morris, Microsoft Research
Jacob O. Wobbrock, University of Washington

About

Crowdsensus is a web-based tool that aids in the agreement analysis of symbols collected in end-user elicitation studies. Crowdsensus generates custom web interfaces that facilitate the collection of similarity votes from online crowd workers, and clusters symbols based on those votes using unsupervised machine learning. The resulting clusters can then be exported in a format suitable for agreement analysis.
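To illustrate the general idea, the sketch below shows one simple way pairwise similarity votes could be turned into symbol clusters: tally the votes for each pair, then transitively merge pairs whose fraction of "similar" votes meets a threshold. This is only an illustrative sketch, not Crowdsensus's actual clustering algorithm; the function name, vote format, and threshold-based merging are all assumptions made for the example.

```python
from collections import defaultdict

def cluster_symbols(votes, threshold=0.5):
    """Group symbols by crowd-voted similarity (illustrative sketch only).

    votes: iterable of (symbol_a, symbol_b, is_similar) tuples, one per
    crowd-worker judgment, where is_similar is True if the worker judged
    the two symbols to be the same.
    Returns a list of clusters, each a set of symbol identifiers.
    """
    # Tally votes for each unordered pair of symbols.
    tallies = defaultdict(lambda: [0, 0])  # pair -> [similar_votes, total_votes]
    symbols = set()
    for a, b, is_similar in votes:
        symbols.update((a, b))
        pair = tuple(sorted((a, b)))
        tallies[pair][1] += 1
        if is_similar:
            tallies[pair][0] += 1

    # Union-find structure for transitively merging similar pairs.
    parent = {s: s for s in symbols}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path compression
            s = parent[s]
        return s

    # Merge any pair whose similarity fraction meets the threshold.
    for (a, b), (similar, total) in tallies.items():
        if total and similar / total >= threshold:
            parent[find(a)] = find(b)

    # Collect symbols sharing a root into clusters.
    clusters = defaultdict(set)
    for s in symbols:
        clusters[find(s)].add(s)
    return list(clusters.values())
```

For example, if two workers both vote that "wave" and "wave-two-hands" are similar, while "tap" is voted dissimilar to both, the function returns two clusters: one containing the two wave symbols and one containing "tap".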


Crowdsensus has been folded into the Crowd Design Engine at http://crowdesignengine.com/.
 

Citation

Ali, A.X., Morris, M.R. and Wobbrock, J.O. (2018). Crowdsourcing similarity judgments for agreement analysis in end-user elicitation studies. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '18). Berlin, Germany (October 14-17, 2018). New York: ACM Press, pp. 177-188.

Related Project

Crowdlicit is a crowd-powered tool for designing and deploying distributed end-user elicitation studies.

Acknowledgements

This work was supported in part by funding from Microsoft Research, the Mani Charitable Foundation, and the National Science Foundation under grant IIS-1702751. Any opinions, findings, conclusions or recommendations expressed in our work are those of the authors and do not necessarily reflect those of any supporter.

Our End-User Elicitation Publications

  1. Ali, A.X., Morris, M.R. and Wobbrock, J.O. (2019). Crowdlicit: A system for conducting distributed end-user elicitation and identification studies. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '19). Glasgow, Scotland (May 4-9, 2019). New York: ACM Press. Paper No. 255.
  2. Ali, A.X., Morris, M.R. and Wobbrock, J.O. (2018). Crowdsourcing similarity judgments for agreement analysis in end-user elicitation studies. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '18). Berlin, Germany (October 14-17, 2018). New York: ACM Press, pp. 177-188.
  3. Vatavu, R.-D. and Wobbrock, J.O. (2016). Between-subjects elicitation studies: Formalization and tool support. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '16). San Jose, California (May 7-12, 2016). New York: ACM Press, pp. 3390-3402.
  4. Vatavu, R.-D. and Wobbrock, J.O. (2015). Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '15). Seoul, Korea (April 18-23, 2015). New York: ACM Press, pp. 1325-1334. Honorable Mention Paper.
  5. Morris, M.R., Danielescu, A., Drucker, S., Fisher, D., Lee, B., schraefel, m.c. and Wobbrock, J.O. (2014). Reducing legacy bias in gesture elicitation studies. ACM Interactions 21 (3), May + June 2014, pp. 40-45.
  6. Morris, M.R. (2012). Web on the wall: insights from a multimodal interaction elicitation study. Proceedings of the ACM Conference on Interactive Tabletops and Surfaces (ITS '12). Cambridge, Massachusetts (November 11-14, 2012). New York: ACM Press, pp. 95-104.
  7. Morris, M.R., Wobbrock, J.O. and Wilson, A.D. (2010). Understanding users' preferences for surface gestures. Proceedings of Graphics Interface (GI '10). Ottawa, Ontario, Canada (May 31-June 2, 2010). Toronto, Ontario: Canadian Information Processing Society, pp. 261-268.
  8. Wobbrock, J.O., Morris, M.R. and Wilson, A.D. (2009). User-defined gestures for surface computing. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '09). Boston, Massachusetts (April 4-9, 2009). New York: ACM Press, pp. 1083-1092. Best Paper Nominee.
  9. Wobbrock, J.O., Aung, H.H., Rothrock, B. and Myers, B.A. (2005). Maximizing the guessability of symbolic input. Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '05). Portland, Oregon (April 2-7, 2005). New York: ACM Press, pp. 1869-1872.

Copyright © 2018-2019 Jacob O. Wobbrock. All rights reserved.
Last updated June 25, 2019.