
$P Point-Cloud Recognizer

Radu-Daniel Vatavu, University Ștefan cel Mare of Suceava
Lisa Anthony, University of Maryland, Baltimore County *
Jacob O. Wobbrock, University of Washington [contact]

* Currently at the University of Florida

Download

$P source code: JavaScript, C#
Pseudocode: $P, Smart Touch $P
Multistroke gesture logs: XML
Paper: PDF

This software is distributed under the New BSD License agreement.

About

The $P Point-Cloud Recognizer is a 2-D stroke-gesture recognizer designed for rapid prototyping of gesture-based user interfaces. $P is the first point-cloud matcher within the $-family of recognizers, which includes $1 for unistrokes and $N for multistrokes. Although about half of $P's code comes from $1, unlike both $1 and $N, $P does not represent gestures as ordered sequences of points (i.e., strokes), but as unordered point-clouds. By representing gestures as point-clouds, $P handles unistrokes and multistrokes equivalently, and without the combinatoric overhead of $N, because stroke number, order, and direction are all ignored. When comparing two point-clouds, $P solves the classic assignment problem between the two clouds, viewed as a bipartite graph, using a greedy approximation of the Hungarian algorithm.

Superseding $P are $P+, which is more accurate and optimized for people with low vision, and $Q, a super-quick recognizer optimized for low-power mobiles and wearables that achieves a whopping 142× speedup over $P with slightly improved accuracy. $P was also customized for the Smart Touch project to enable matching of point-clouds with different numbers of points.
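To make the matching step concrete, here is a short JavaScript sketch of that greedy assignment, adapted from the pseudocode in the ICMI 2012 paper. The function names (greedyCloudMatch, cloudDistance) are illustrative rather than the identifiers used in the downloadable source, and both clouds are assumed to have already been resampled to the same number of points, scaled, and translated to a common origin.

    // Greedy point-cloud matching, sketched after the published $P
    // pseudocode. Both gestures must already be resampled to the same
    // number of points n (e.g., 32), scaled, and translated to the origin.
    function greedyCloudMatch(points, template) {
      const n = points.length;
      const step = Math.floor(Math.sqrt(n)); // try ~sqrt(n) starting points
      let best = Infinity;
      for (let i = 0; i < n; i += step) {
        // Match in both directions, since greedy assignment is asymmetric.
        const d1 = cloudDistance(points, template, i);
        const d2 = cloudDistance(template, points, i);
        best = Math.min(best, d1, d2);
      }
      return best; // smaller is a better match
    }

    function cloudDistance(pts1, pts2, start) {
      const n = pts1.length;
      const matched = new Array(n).fill(false);
      let sum = 0;
      let i = start;
      do {
        // Greedily pair pts1[i] with its nearest still-unmatched point.
        let index = -1, minDist = Infinity;
        for (let j = 0; j < n; j++) {
          if (matched[j]) continue;
          const dx = pts1[i].x - pts2[j].x;
          const dy = pts1[i].y - pts2[j].y;
          const d = Math.sqrt(dx * dx + dy * dy);
          if (d < minDist) { minDist = d; index = j; }
        }
        matched[index] = true;
        // Earlier pairings are more trustworthy, so they weigh more.
        const weight = 1 - ((i - start + n) % n) / n;
        sum += weight * minDist;
        i = (i + 1) % n;
      } while (i !== start);
      return sum;
    }

Because the clouds are unordered, the first pairings are made freely while later ones are forced among whatever points remain, which is why the weight decays with match order.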

Upon its publication at ICMI 2012, $P was given an Outstanding Paper Award. A decade later, the same paper was recognized with the ICMI 2022 Ten-Year Technical Impact Award.

$P and other $-family recognizers have been built into numerous projects and even industry prototypes, and have had many follow-ons by others. Read about the $-family's impact.


Demo

In the demo below, only one point-cloud template is loaded for each of the 16 gesture types. You can add additional templates as you wish, and even define your own custom gesture templates.


Make strokes on this canvas. Right-click the canvas to recognize them. If a misrecognition occurs, add the misrecognized gesture as an example of the intended gesture.

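To use $P programmatically rather than through the interactive demo, the sketch below shows the general shape of the JavaScript API, including how added templates work. The identifiers (PDollarRecognizer, Point with a per-stroke ID, AddGesture, Recognize, and the Name/Score fields of the result) are assumed here; consult the downloadable source above for the exact API.

    // Minimal usage sketch; identifiers assume the distributed
    // JavaScript source. Check the download for the exact API.
    var recognizer = new PDollarRecognizer();

    // Each point carries a stroke ID, so a multistroke is one flat array.
    // Here, a two-stroke "plus": vertical bar (ID 1), horizontal bar (ID 2).
    recognizer.AddGesture("plus", [
      new Point(50, 10, 1), new Point(50, 90, 1), // stroke 1: vertical
      new Point(10, 50, 2), new Point(90, 50, 2)  // stroke 2: horizontal
    ]);

    // A candidate drawn with the opposite stroke order and direction;
    // $P ignores both, so this should still match "plus".
    var result = recognizer.Recognize([
      new Point(88, 52, 1), new Point(12, 48, 1), // horizontal first
      new Point(49, 88, 2), new Point(51, 12, 2)  // then vertical, bottom-up
    ]);
    console.log(result.Name + " (" + result.Score + ")");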


Our Gesture Publications

  1. Vatavu, R.-D. and Wobbrock, J.O. (2022). Clarifying agreement calculations and analysis for end-user elicitation studies. ACM Transactions on Computer-Human Interaction 29 (1). Article No. 5.
  2. Vatavu, R.-D., Anthony, L. and Wobbrock, J.O. (2018). $Q: A super-quick, articulation-invariant stroke-gesture recognizer for low-resource devices. Proceedings of the ACM Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '18). Barcelona, Spain (September 3-6, 2018). New York: ACM Press. Article No. 23.
  3. Vatavu, R.-D. (2017). Improving gesture recognition accuracy on touch screens for users with low vision. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '17). Denver, Colorado (May 6-11, 2017). New York: ACM Press, pp. 4667-4679.
  4. Vatavu, R.-D. and Wobbrock, J.O. (2016). Between-subjects elicitation studies: Formalization and tool support. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '16). San Jose, California (May 7-12, 2016). New York: ACM Press, pp. 3390-3402.
  5. Vatavu, R.-D. and Wobbrock, J.O. (2015). Formalizing agreement analysis for elicitation studies: New measures, significance test, and toolkit. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '15). Seoul, Korea (April 18-23, 2015). New York: ACM Press, pp. 1325-1334.
  6. Vatavu, R.-D., Anthony, L. and Wobbrock, J.O. (2014). Gesture heatmaps: Understanding gesture performance with colorful visualizations. Proceedings of the ACM International Conference on Multimodal Interfaces (ICMI '14). Istanbul, Turkey (November 12-16, 2014). New York: ACM Press, pp. 172-179.
  7. Vatavu, R.-D., Anthony, L. and Wobbrock, J.O. (2013). Relative accuracy measures for stroke gestures. Proceedings of the ACM International Conference on Multimodal Interfaces (ICMI '13). Sydney, Australia (December 9-13, 2013). New York: ACM Press, pp. 279-286.
  8. Anthony, L., Vatavu, R.-D. and Wobbrock, J.O. (2013). Understanding the consistency of users' pen and finger stroke gesture articulation. Proceedings of Graphics Interface (GI '13). Regina, Saskatchewan (May 29-31, 2013). Toronto, Ontario: Canadian Information Processing Society, pp. 87-94.
  9. Vatavu, R.-D., Anthony, L. and Wobbrock, J.O. (2012). Gestures as point clouds: A $P recognizer for user interface prototypes. Proceedings of the ACM International Conference on Multimodal Interfaces (ICMI '12). Santa Monica, California (October 22-26, 2012). New York: ACM Press, pp. 273-280.
  10. Anthony, L. and Wobbrock, J.O. (2012). $N-Protractor: A fast and accurate multistroke recognizer. Proceedings of Graphics Interface (GI '12). Toronto, Ontario (May 28-30, 2012). Toronto, Ontario: Canadian Information Processing Society, pp. 117-120.
  11. Anthony, L. and Wobbrock, J.O. (2010). A lightweight multistroke recognizer for user interface prototypes. Proceedings of Graphics Interface (GI '10). Ottawa, Ontario (May 31-June 2, 2010). Toronto, Ontario: Canadian Information Processing Society, pp. 245-252.
  12. Wobbrock, J.O., Wilson, A.D. and Li, Y. (2007). Gestures without libraries, toolkits or training: A $1 recognizer for user interface prototypes. Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '07). Newport, Rhode Island (October 7-10, 2007). New York: ACM Press, pp. 159-168.

Copyright © 2012-2022 Jacob O. Wobbrock. All rights reserved.
Last updated August 8, 2022.