


Fischer, K. (Winter 2011). How people talk with robots: Designing dialogue to reduce user uncertainty. AI Magazine, 32(4), 31--38.


@Article{Fischer2011,
  author  = {Fischer, Kerstin},
  title   = {How people talk with robots: designing dialogue to reduce user uncertainty},
  journal = {AI Magazine},
  year    = {2011},
  volume  = {32},
  number  = {4},
  pages   = {31--38}
}


Author of the summary: Laura Inostroza, 2012, laura.inostroza@gmail.com

Cite this paper for:


This paper examines people's expectations of robots and whether hearing a robot give verbal feedback changes the way people give it instructions. One of the most important points is that humans respond to the look and sound of the robot: both its verbal output and its visibly functional parts make a difference in how people communicate with it.


User expectations may play a crucial role in how people rate robots' conversational capacities. (p.32)

Human-robot relationships are highly individual and similar to Rorschach test projections of the self. (p.32)


People have different expectations about how difficult a task will be for a robot to execute, and each person uses the instructions they think will be easiest for the robot. (p.32)


Whether a person treats a robot as a "social actor" can be determined by observing their reaction to a robot's spontaneous greeting. (p.33)

Example: if, on entering a room and hearing a robot say "hello", a person is tempted to reply, they likely view the robot as a being capable of social action.


A robot's appearance seems to influence the interaction: how anthropomorphic it is affects how well a user interacts with it, and there are also effects of how similar the artificial agent is to the user. (p.33)


The paper outlines two main experiments:


In the first, Fischer had participants verbally instruct a robot to navigate to different places in the room. She used three different robots, and the interactions did not differ significantly between them, probably because none of the robots' appearances gave cues to their specific functionality. (p. 33)

The only difference found was participants' interest in the location of the robot's sensors, which suggests that reducing users' uncertainty about a robot's functional abilities, by making them apparent at first glance, facilitates communication. (p.33)


"Robots' appearance can be used to influence users' preconceptions if it provides consistent clues to particular robot functionalities." (p.34)


A second experiment used a robotic wheelchair: participants had to teach the wheelchair how to get to different areas of a room through verbal instruction, either with spoken responses from the chair or without them. (p. 34)

The task was easier for participants who received feedback from the robotic wheelchair about the directions they were giving; these participants tended to use more technical terms and fewer out-of-domain words. (p. 34)

It seems that users receiving vocal feedback from the robotic wheelchair focused more on what the chair could plausibly understand and were therefore more efficient at completing the tasks, whereas participants in the no-feedback condition did not know which words to use. (p.35)

Robot utterances therefore have a considerable impact on users' understanding of a task and on the words they use for it. (p. 36)


The wheelchair experiment was later replicated with speakers of German, yielding different results. (p.37)


The goal of robot dialogue design should therefore be to reduce the user's uncertainty, that is, to make it easier for them to guess how to interact with the robot. (p.37)

It would therefore be best for robots to be given carefully crafted linguistic output so that users can easily interpret them. (p.37)
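
The paper itself does not give an implementation, but as a rough illustration of what "carefully crafted linguistic output" might mean for a command-driven robot, here is a minimal Python sketch (all command words and phrasings are hypothetical, not taken from Fischer's study): the robot confirms instructions in its own in-domain vocabulary and, when it does not understand, names the words it can handle, which is one way to reduce the user's uncertainty.

# Illustrative sketch only; command vocabulary and phrasing are hypothetical.
KNOWN_COMMANDS = {"forward", "back", "left", "right", "stop"}

def respond(user_utterance: str) -> str:
    """Return verbal feedback that keeps the user inside the robot's vocabulary."""
    words = user_utterance.lower().split()
    recognized = [w for w in words if w in KNOWN_COMMANDS]
    if recognized:
        # Confirm using the robot's own terms, reinforcing the in-domain vocabulary.
        return "Okay, I will go " + " then ".join(recognized) + "."
    # Instead of staying silent, state what the robot can understand,
    # reducing the user's uncertainty about how to phrase instructions.
    return ("Sorry, I did not understand. You can say: "
            + ", ".join(sorted(KNOWN_COMMANDS)) + ".")

if __name__ == "__main__":
    print(respond("please move forward and then left"))   # confirms in-domain terms
    print(respond("head over to the desk by the window"))  # lists usable vocabulary

The design choice mirrors the wheelchair finding above: explicit verbal feedback steers users toward vocabulary the robot can actually process.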


Summary author's notes:


