Cydney Beckwith
Darrin Howell
Momona Yamagami

Human interaction with the physical world is increasingly mediated by intelligent machines, e.g., surgical robots and active prostheses and orthoses. Our goal is to amplify this interaction by designing machines that adapt to and learn from their human partners.

Toward this end, we sought to elucidate the feedback and feedforward control strategies that human operators use to supervise these machines. We investigated these strategies by simulating robotic teleoperation with a path-following video game, collecting thousands of trajectory-tracking trials from eight human subjects.

Our preliminary results show that human operators learn and invert system dynamics while performing these teleoperation simulations. These data suggest that operators engage in model inversion: defining a desired output and internally planning the input that will produce it.
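The model-inversion strategy described above can be illustrated with a minimal sketch. Assuming a hypothetical first-order linear plant (the coefficients and reference trajectory below are illustrative choices, not taken from the study), an operator with a perfect internal model can compute the feedforward input that makes the plant's next output match the desired trajectory:

```python
import numpy as np

# Hypothetical first-order plant: y[k+1] = a*y[k] + b*u[k]
# (illustrative parameter values, not from the study)
a, b = 0.9, 0.5

# Desired output: a sinusoidal reference trajectory, as in path following
k = np.arange(100)
r = np.sin(2 * np.pi * k / 50)

y = np.zeros(len(k))  # plant output
u = np.zeros(len(k))  # planned input

for i in range(len(k) - 1):
    # Model inversion: solve the plant model for the input that
    # drives the predicted next output to the reference
    u[i] = (r[i + 1] - a * y[i]) / b
    # Plant responds to the planned input
    y[i + 1] = a * y[i] + b * u[i]

# With a perfect internal model, the tracking error after the
# first step is near machine precision
print(np.max(np.abs(y[1:] - r[1:])))
```

In practice the operator's internal model is learned and imperfect, so feedback control corrects the residual error; the sketch isolates only the feedforward (inversion) component.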

These results may inform the design of control interfaces that adapt to their human operators with high fidelity and enable reliably safe teleoperation.


Publications:

E. Roth, D. Howell, C. Beckwith, and S. A. Burden. Toward experimental validation of a model for human sensorimotor learning and control in teleoperation. Proceedings of the SPIE Conference on Micro-Nanotechnology Sensors, Systems, and Applications, 2017.

Mentored by:

Sam Burden
Eatai Roth