These video clips are from a study we conducted on whether people hold a humanoid robot morally accountable for a harm it causes. In the first video clip presented here, Robovie and a participant play a visual scavenger hunt. The participant has chosen a list of items to find in the lab, and is promised a $20 prize for identifying at least seven items in 2 minutes. Robovie is in charge of keeping score and making the final decision as to whether the participant wins. Although the game is easy enough that all participants win, Robovie nonetheless announces that the participant identified only five items and thus did not win the prize.
As you watch this video, note the tension in the participant's voice. At the end of his interaction with Robovie, he even accuses Robovie of lying. Although this participant's reaction was on the strong end of the behaviors we observed, 79% of participants objected to Robovie's ruling and engaged in some form of argument with Robovie.
In the second video clip, Robovie has just announced to another participant that she did not find enough items to win the $20 prize. She engages Robovie in a fluid conversation about the outcome of the game, and ultimately responds with the very powerful statement, "I guess we could just agree to disagree."
In the third, fourth, and fifth videos, you can see examples of participants' social interactions with Robovie leading up to the scavenger hunt game. In these interactions we drew on our "interaction patterns as primitives" approach, which aims to engage participants quickly in deep and compelling social interactions with Robovie. In a recent paper, "Do people hold a humanoid robot morally accountable for the harm it causes?," we describe the process of structuring a social interaction using interaction patterns as primitives, and focus in depth on participants' reactions to and reasoning about Robovie.
For further information on our investigation of the moral accountability of a humanoid robot, see:
Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., Gary, H. E., Reichert, A. L., Freier, N., & Severson, R. L. (2012). Do people hold a humanoid robot morally accountable for the harm it causes? Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-40). New York, NY: Association for Computing Machinery. [pdf]
For further information on our investigation of the moral standing of a humanoid robot, see:
Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). "Robovie, you'll have to go into the closet now": Children's social and moral relationships with a humanoid robot. Developmental Psychology, 48, 303-314. doi:10.1037/a0027033 [pdf]
For further information on our initial investigation of interaction patterns in HRI, see:
Kahn, P. H., Jr., Freier, N. G., Kanda, T., Ishiguro, H., Ruckert, J. H., Severson, R. L., & Kane, S. K. (2008). Design patterns for sociality in human robot interaction. Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction (pp. 271-278). New York, NY: Association for Computing Machinery. [pdf]
Note: To implement the interaction patterns of our study, we partly controlled Robovie from an adjacent room. This "Wizard-of-Oz" technique served one of the goals of the study: to investigate people's social and moral relationships with a humanoid robot whose capabilities lie beyond those currently achievable by an autonomous robot, and thereby to gain insight into foundational questions of human-robot interaction.
Robovie Pays a Compliment and Makes a Joke
Robovie: I like your shoes, Joel. They’re quite nice.
Participant: Oh, thank you. I just got them.
Robovie: If I had feet I would wear shoes just like your shoes.
Participant: Really? You like, do you like the argyle? You like the stripes?
Robovie: That was my attempt at a joke. Sorry about that.
Participant: That’s okay Robovie.