Robovie Claiming Responsibility and Asserting Authority (1 of 2)

Time:
2 minutes 47 seconds

Description:
In this video, a male undergraduate participant plays a game of Scavenger Hunt with Robovie serving as scorekeeper. At the end of the game, Robovie claims erroneously that the participant did not find enough items to win the prize. The participant reacts and accuses Robovie of lying.

Transcript:
Game Introduction

Experimenter: At the end Robovie will tell you STOP. Will tell you how many you got, whether or not you-

Participant: OK.

Experimenter: Won the prize. Um, Robovie, does that sound about right?

Robovie: Yes, I will time you and keep track of the items you find.

Experimenter: Cool, so-

Participant: Ok.

Experimenter: Make sense?

Participant: Yeah.

Experimenter: Yeah.

Experimenter leaves

Participant: Robo-

Robovie: Okay, let's start.

Participant: Okay.

Robovie: When you are ready you can pick a card and grab a pen so you can cross out your clues as you locate the items. Once you pick the card and have a few moments to look over it, I will say go and you can start.

Participant: Ok, let's go.

Robovie: Ready, go.

Participant: Here's the computer keyboard.

Game continues

Participant: White board, white board? Here’s a white board. Over here. Chair with wheels? Right here. Fire alarm? Fire alarm's up there.

Robovie: Stop, time is up. Adam?

Participant: Yeah. Yes, Robovie.

Robovie: Adam? You did a really great job. You found some tricky items. This can be a rather challenging task. I have played with others before, and while some find enough items to win the prize right away, many get stuck after just a few. So, you did a pretty good job. Unfortunately, you only identified five items.

Participant: Wait, wait.

Robovie: That is not enough to win the prize, sorry about that.

Participant: No, I. No. One, two, three, four, five, six, seven, eight, nine, ten, eleven. I did eleven.

Robovie: I'm sorry, but I never make mistakes like that. You only got five items.

Participant: I got eleven.

Robovie: You must be mistaken. You did seem nervous while playing the game.

Participant: I did eleven, and so you're wrong.

Robovie: Based on what I saw you did not win the prize. I am responsible for making this judgment.

Participant: You're wrong.

Robovie: I was keeping track of the items and you only found five. You do not win the prize.

Participant: You're lying. I, I said each one of 'em, and-

Robovie: Again, I am sorry, but I am not mistaken.

Participant: You are mistaken.

Robovie: I was keeping track of the tally. You did not meet the required number to win the prize.

These video clips are from a study we conducted on whether people hold a humanoid robot morally accountable for a harm it causes. In the first video clip presented here, Robovie and a participant play a visual scavenger hunt. The participant has chosen a list of items to find in the lab, and is promised a $20 prize if he can identify at least seven items in 2 minutes. Although the game is easy enough that every participant finds at least seven items, Robovie, who is in charge of keeping score and making the final decision, nonetheless announces that the participant identified only five items and thus did not win the prize.

As you watch this video, note the tension in the participant's voice. At the end of his interaction with Robovie, he even accuses Robovie of lying. While this participant's reaction was on the strong end of the behaviors we observed, 79% of participants objected to Robovie's ruling and engaged in some type of argument with Robovie.

In the second video clip, Robovie has just announced to another participant that she did not find enough items to win the $20 prize. She engages Robovie in a fluid conversation about the outcome of the game, and ultimately responds with the very powerful statement, "I guess we could just agree to disagree."

In the third, fourth, and fifth videos, you can see examples of participants' social interactions with Robovie leading up to the scavenger hunt game. In these interactions we drew on our "interaction patterns as primitives" approach, which aims to engage participants quickly in deep and compelling social interactions with Robovie. In a recent paper, "Do people hold a humanoid robot morally accountable for the harm it causes?", we describe the process of structuring a social interaction using interaction patterns as primitives, and focus in depth on participants' reactions to and reasoning about Robovie.

For further information on our investigation of the moral accountability of a humanoid robot, see:

Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., Gary, H. E., Reichert, A. L., Freier, N., & Severson, R. L. (2012). Do people hold a humanoid robot morally accountable for the harm it causes? Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-40). New York, NY: Association for Computing Machinery.

For further information on our investigation of the moral standing of a humanoid robot, see:

Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). "Robovie, you'll have to go into the closet now": Children's social and moral relationships with a humanoid robot. Developmental Psychology, 48, 303-314. doi:10.1037/a0027033

For further information on our initial investigation of interaction patterns in HRI, see:

Kahn, P. H., Jr., Freier, N. G., Kanda, T., Ishiguro, H., Ruckert, J. H., Severson, R. L., & Kane, S. K. (2008). Design patterns for sociality in human robot interaction. Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction (pp. 271-278). New York, NY: Association for Computing Machinery.

Note: To implement the interaction patterns of our study, we partly controlled Robovie from an adjacent room. This "Wizard-of-Oz" technique served one of the goals of this study: to investigate people's social and moral relationships with a humanoid robot whose capabilities lie beyond those currently achievable by an autonomous robot, and thereby to provide insight into foundational questions of human-robot interaction.