
Parker, L. E. (1998). ALLIANCE: An architecture for fault tolerant multirobot cooperation. IEEE Transactions on Robotics and Automation, 14(2), April 1998.

@Article{parker98,
  author = 	 {Lynne E. Parker},
  title = 	 {ALLIANCE: An architecture for fault tolerant
multirobot cooperation},
  journal = 	 {IEEE Transactions on Robotics and Automation},
  year = 	 {1998},
  OPTvolume = 	 {14},
  OPTnumber = 	 {2},
  OPTpages = 	 {220--240},
  OPTmonth = 	 {April},
}

Author of the summary: Jim R. Davies, 2000, jim@jimdavies.org

Cite this paper for:

Advantages of robot teams; challenges: what researchers really need to focus on is fault tolerance and adaptivity. [p221] In this article, fault tolerance is the reallocation of tasks; adaptivity is changing behavior in a dynamic environment.

Cooperative robotics has two sides: swarm type cooperation and "intentional" cooperation. This paper deals with the second.

Swarm: many homogeneous, limited-ability robots. Influenced by biology and sociology. Good for non-time-critical applications. Depends on emergent properties.

Some swarm systems:

"Intentional" systems: efficiency constraints require directed cooperation. Approaches using the sense-model-plan-act architecture fail to achieve real-time performance in a dynamic world.


Assumptions [p223]:
  1. detection of action effect
  2. detection of activities that the robot can do
  3. robots don't lie
  4. communication not guaranteed available
  5. imperfect sensors and operations.
  6. any subsystem can fail
  7. failures are not always communicated
  8. no centralized store of world knowledge, no central executive

Competences: lower-level behaviors. [p224]

In a behavior-based framework, competences (corresponding to primitive survival behaviors like obstacle avoidance) are activated more or less directly by sensory input.

Behavior sets: ALLIANCE uses them. Behaviors are grouped and can be activated or inhibited together. A set corresponds to a high-level behavior. Only one behavior set is active at a time; low-level competences may be constantly activated.

Goal selection happens through motivational behaviors. Each corresponds to a behavior set. Tasks are performed only so long as they can positively affect the world. [p225] The architecture uses this to deal with failures.

The highest-activated behavior set is the one used, as long as it is over threshold. What goes into that activation: sensory feedback, inter-robot communication, inhibitory feedback (from other active behaviors), and internal motivations.
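The selection rule above can be sketched as threshold-plus-argmax (a toy sketch; the threshold value and all names are illustrative, not from the paper):

```python
# Hypothetical sketch of ALLIANCE-style behavior-set selection: the
# behavior set with the highest motivation wins, but only if that
# motivation exceeds a fixed threshold.
THRESHOLD = 1.0

def select_behavior_set(motivations):
    """motivations: dict mapping behavior-set name -> activation level.
    Returns the name of the winning set, or None if nothing crosses
    the threshold (no task is undertaken)."""
    best = max(motivations, key=motivations.get, default=None)
    if best is not None and motivations[best] >= THRESHOLD:
        return best
    return None
```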

Two types of internal motivations: impatience and acquiescence.

Impatience handles the situation where another robot fails. Motivation for a behavior increases as others fail to do the job. A robot *trying* to do the job will satisfy impatience for a time. In ALLIANCE, robots tell each other what they are doing.

Acquiescence handles the situation where the robot itself fails. Acquiescence increases as the robot works on a task while nothing is happening, making it more willing to give up the task.
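The interplay of the two motivations can be sketched as a single gated update (a heavily simplified version of Parker's formal model; the function name, the additive growth term, and the boolean gates are assumptions for illustration):

```python
def update_motivation(m_prev, impatience_rate, sensory_ok,
                      suppressed, acquiesced):
    """One step of a simplified ALLIANCE-style motivation update.
    Motivation grows at the impatience rate, but is zeroed by any
    gating condition: sensory feedback says the task is irrelevant
    (not sensory_ok), another behavior set on this robot is active
    (suppressed), or the robot has given up the task (acquiesced).
    This mirrors the multiplicative gating in the formal model."""
    gates = sensory_ok and not suppressed and not acquiesced
    return (m_prev + impatience_rate) if gates else 0.0
```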

The system learns as well, finding different levels of impatience and acquiescence for different contexts. [p226]

Comparison to DAI negotiation schemes:

Negotiation scheme: no central executive. Robots broadcast tasks; other robots respond with bids. The broadcaster selects one to work on the task; that selected agent can recruit others if needed.

Negotiation schemes have not been shown to work for situated agents in a dynamic environment. They do not take into account failures of communication or of task execution; they assume an assigned task will be accomplished.
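For contrast, the bidding protocol described above can be sketched in a few lines (a toy contract-net-style allocation; all names are illustrative, and note this is the approach ALLIANCE avoids):

```python
def award_task(task, robots):
    """Toy contract-net-style allocation: broadcast a task, collect
    bids (lower cost = better), and award the task to the cheapest
    bidder. `robots` maps robot name -> bid function that returns a
    cost, or None to decline. Illustrative only; assumes the awarded
    task always gets done, which is exactly the critique above."""
    bids = {name: bid(task) for name, bid in robots.items()}
    bids = {name: cost for name, cost in bids.items() if cost is not None}
    if not bids:
        return None
    return min(bids, key=bids.get)
```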

There is a discussion of ALLIANCE's formal model.

[p228] To update the parameters, the system takes into account observations, evaluations, and the execution time of a team member performing a task. Changes in the environment while a robot is performing a task are assumed to be caused by that robot.
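One plausible reading of this update, as a toy sketch: the longer teammates have historically taken on a task, the more slowly this robot's impatience should grow while waiting for them. The function name and the inverse-mean rule are illustrative assumptions, not Parker's actual update equations:

```python
def learned_impatience_rate(task_times, default_rate=0.1):
    """Toy sketch of adapting an impatience rate from experience.
    `task_times` is a list of observed execution times for a task by
    a teammate; the rate is the inverse of the mean observed time,
    so slow teammates are given proportionally more time before this
    robot's impatience crosses threshold."""
    if not task_times:
        return default_rate
    return 1.0 / (sum(task_times) / len(task_times))
```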

[p230] Experiments were run in a laboratory hazardous waste cleanup scenario with three IS Robotics R-2 robots. The robots are told qualitatively where the spills are (the upper right part of the room) and where to move them to. They had the behavior sets find-locations-methodical, find-locations-wander, move-spill (loc), and report-progress. As an example of learning, the blue robot wouldn't do methodical finding because it had learned from previous trials that this wouldn't work (its side sensor doesn't work).

Summary author's notes:

Back to the Cognitive Science Summaries homepage
Cognitive Science Summaries Webmaster:
JimDavies (jim@jimdavies.org)
Last modified: Mon Mar 27 18:47:59 EST 2000