
Allen, C., Varner, G. & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12, 251–261.

@Article{AllenVarnerZinser2000,
author = {Colin Allen and Gary Varner and Jason Zinser},
title = {Prolegomena to any future artificial moral agent},
journal = {Journal of Experimental \& Theoretical Artificial Intelligence},
year = {2000},
volume = {12},
pages = {251--261},
}

Author of the summary: Corrie Bouskill, 2012, corriebouskill@gmail.com

The actual paper can be found at Prolegomena to any future artificial moral agent

Cite this paper for:


________________________________________

As we get closer to creating fully autonomous agents, the need for these agents to have a good moral compass becomes more urgent. How can we ensure that the agents we create to be useful to humans will not also harm them?

There is currently no uniform definition of what makes an agent moral, and disagreement in ethical theory over which standard of morality to apply thwarts attempts to build an artificial moral agent (AMA).

Two main approaches attempt to answer this question: utilitarianism and Kant’s ‘categorical imperative’.

Utilitarianism is the view that the best actions are those which produce the greatest happiness for the greatest number. On this view, an agent could be considered moral as long as it produces the greatest happiness by following the principle of utility, regardless of how the behavioural result is achieved. [pg.252]
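To make the structure of this view concrete, here is a minimal sketch (in Python; not from the paper) of a purely utilitarian action selector. The utility-scoring function and the toy numbers are illustrative assumptions, since the paper does not say how happiness would actually be measured.

def total_happiness(action, utility_changes):
    # Sum the change in happiness the action causes across everyone affected.
    # `utility_changes` is a hypothetical scoring function mapping an action
    # to {person: utility change}.
    return sum(utility_changes(action).values())

def choose_action_utilitarian(candidate_actions, utility_changes):
    # The principle of utility: pick whichever action produces the greatest
    # total happiness, regardless of how that result is achieved.
    return max(candidate_actions, key=lambda a: total_happiness(a, utility_changes))

# Toy usage with made-up numbers:
outcomes = {
    "tell_truth": {"alice": +1, "bob": -2},
    "tell_white_lie": {"alice": +1, "bob": +1},
}
best = choose_action_utilitarian(outcomes, lambda a: outcomes[a])
# best == "tell_white_lie" (total +2 beats total -1)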

In Kant’s view, a morally good action must be consistent with the ‘categorical imperative’: act only on an explicit principle of practical reasoning that you could will to become a universal law. This view requires that the agent be able to decide whether an act is consistent with the categorical imperative and whether its underlying principle could be willed as a universal law. On this view, specific cognitive processes that play a significant part in decision-making would need to be built into an agent for it to be morally good. [pg.252]
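The structural contrast with the utilitarian selector above can be sketched the same way: here the decision hinges on the principle behind an action rather than on its outcome. Both helper functions are hypothetical placeholders, since the paper does not say how a universalizability test would be computed.

def choose_action_kantian(candidate_actions, maxim_of, universalizable):
    # `maxim_of` extracts the principle of practical reasoning behind an action;
    # `universalizable` tests whether that principle could be willed as a
    # universal law. Only actions whose principles pass the test are permitted;
    # the outcome matters less than the principle acted on.
    permitted = [a for a in candidate_actions if universalizable(maxim_of(a))]
    return permitted[0] if permitted else None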

It is suggested that a Moral Turing Test (MTT) would help identify the criteria an AMA would need to meet to be considered moral. A machine passes the standard Turing Test if a human interrogator cannot identify, at better than chance levels, that it is communicating with a machine. If the interrogator could not tell, by asking only moral questions, that the AMA it was interrogating was a machine, then the AMA would be considered successful: a moral agent. Passing the MTT would then be the criterion for having created a good AMA. [pg.254]

However, an AMA might respond too morally on an MTT, in which case a human interrogator could tell it was communicating with a machine rather than with a human, who is bound to respond immorally at some point. In that case, the interrogator might instead be asked to assess whether one agent is less moral than the other. If the machine is not reported as responding less morally than the human, it has passed the test. This version is called the ‘comparative MTT’ (cMTT). [pg.255]
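A rough sketch of how such a comparative test might be run is given below (Python; the function names and the pass threshold are assumptions for illustration, since the paper describes the procedure only informally).

import random

def comparative_mtt(dilemmas, machine_answer, human_answer, judge_less_moral, seed=0):
    # For each dilemma, show a judge two anonymized answers (one from the
    # machine, one from a human) and ask which answer is LESS moral.
    # Pass criterion (an assumption): the machine is not judged the less
    # moral respondent on a majority of dilemmas.
    rng = random.Random(seed)
    machine_judged_worse = 0
    for d in dilemmas:
        pair = [("machine", machine_answer(d)), ("human", human_answer(d))]
        rng.shuffle(pair)  # hide which answer came from the machine
        worse = judge_less_moral(d, pair[0][1], pair[1][1])  # returns 0 or 1
        if pair[worse][0] == "machine":
            machine_judged_worse += 1
    return machine_judged_worse <= len(dilemmas) / 2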

The problem with the cMTT is that it still allows an agent to act in a less than moral way, so long as it is rated no worse than a human. When designing these agents, we should expect more moral behaviour from them than we do from humans. Since we do not have a framework for the higher moral standards we expect an AMA to meet, two approaches to the task are considered: theoretical approaches, which implement an explicit ethical theory as the framework for the AMA’s decisions, and modelling approaches, which implement a theory of moral character or use learning to construct systems that act morally. [pg.255]

Theoretical approaches:

One way to create successful AMAs would be a hybrid system: either a consequentialist evaluation with a limit beyond which a deontological approach takes over, or a deontological system that can be overridden by consequentialist reasoning whenever the good consequences outweigh the bad. [pg.256]
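The second variant might be sketched as follows (Python; every name and the override threshold are illustrative assumptions, not specifications from the paper): deontological rules act as constraints, and consequentialist reasoning is only allowed to override them when the expected good is large enough.

def choose_action_hybrid(candidate_actions, utility_of, violates_duty, override_gain):
    # Actions that violate a duty are normally excluded.
    permissible = [a for a in candidate_actions if not violates_duty(a)]
    if permissible:
        # Within the deontological constraints, rank options by their consequences.
        return max(permissible, key=utility_of)
    # Every option violates some duty: allow a consequentialist override only
    # when the best option's expected good clearly outweighs the bad.
    best = max(candidate_actions, key=utility_of)
    return best if utility_of(best) >= override_gain else None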

Models of morality:

Emotion is part of human morality, as it drives us to behave in certain ways. It is easy to imagine an agent that has full knowledge of moral rules and yet is not motivated to conform to them. Emotions may play a fundamental part in human morality: negative emotions resulting from a bad action drive us to act well in order to achieve positive emotions. AI is very far from being able to create emotion in agents, but although emotion seems to play a central part in intelligence and morality, it may not be necessary for autonomous behaviour. Systems like Deep Blue, the chess-playing computer, perform exceptionally well without emotion. [pg.260]

The ultimate objective of building an AMA is to build a morally praiseworthy agent. [pg.261] Although we are quite a distance from achieving a good moral agent, we know that to create autonomous agents that do not harm humans, we need to create agents that can process the effects that their actions have on the environment and people around them. This will be the most important task faced by developers of artificially intelligent automata. [pg.261]

