http://www.jimdavies.org/summaries/

Anderson, M. & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15-26.

@Article{AndersonAnderson2007,
author = {Anderson, Michael and Anderson, Susan Leigh},
title = {Machine ethics: Creating an ethical intelligent agent},
journal = {AI Magazine},
year = {2007},
volume = {28},
number = {4},
pages = {15-26}
}

Author of the summary: RhiAnne Brown, 2012, rhianbro@gmail.com

Cite this paper for:

Definitions

Machine ethics: ensuring that machines act ethically towards human users and other machines [p15]

Goal: to create a machine that uses an ideal set of ethical principles to guide decision making and actions [p15]

Implicit ethical agents are programmed by a designer to behave ethically; they act in accordance with duty [p17]

Explicit ethical agents choose the best course of action in any situation or domain based on their own set of ethics; they act from a sense of duty [p17]

Many are concerned that rather than behaving ethically, an explicit ethical agent will ‘go rogue’ and make unethical decisions in order to get ahead, leading to the manipulation and subjugation of human users. This fear has been portrayed in popular culture countless times (e.g. HAL 9000 in 2001: A Space Odyssey, The Matrix); however, the concern is not well founded. Worries about poor decision making in machines stem from the poor decision-making behaviour we see in humans. We evolved in competition with others, and so may have developed survival mechanisms that predispose us towards acting unethically. Since machines lack this predisposition, they might actually have an advantage over us in making morally acceptable decisions. [p17]

Importance

  1. Research in machine ethics may lead to advances in ethical theory [p16]:
     a. AI necessitates that ethics be made computable in order to test theories on machines
  2. Rapid developments in AI necessitate ethical guidelines [p16]:
     a. Autonomous agents are capable of causing harm to humans; safeguards need to be put in place to prevent this from happening
        E.g. A vehicle that drives itself
        E.g. Armed robotic vehicles to support ground troops

Philosophical Concerns

Can ethics be computed? Yes; act utilitarianism, for example, can be made computable. [p18]

Act utilitarianism defines the right act as the one which, of all actions available, results in the greatest net pleasure. An algorithm would compute the total net pleasure for each possible action, and then choose the action with the highest total net pleasure to be the right action:

“Total net pleasure = ∑ intensity x duration x probability for each affected individual” [p18]
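
A minimal sketch of this act-utilitarian calculation in Python (the Effect fields and all numbers below are illustrative assumptions, not values from the paper): for each available action, sum intensity x duration x probability over every affected individual, then pick the action with the greatest total net pleasure.

from dataclasses import dataclass

@dataclass
class Effect:
    """Projected effect of one action on one affected individual.
    Intensity is signed: positive for pleasure, negative for displeasure."""
    intensity: float    # e.g. -2.0 .. 2.0 (illustrative scale)
    duration: float     # e.g. in hours
    probability: float  # 0.0 .. 1.0

def total_net_pleasure(effects):
    """Hedonistic act-utilitarian total: sum of intensity * duration * probability."""
    return sum(e.intensity * e.duration * e.probability for e in effects)

def best_action(actions):
    """Return the action whose projected effects yield the greatest total net pleasure."""
    return max(actions, key=lambda name: total_net_pleasure(actions[name]))

# Illustrative example with two alternative actions (made-up numbers):
actions = {
    "treat":      [Effect(1.5, 10, 0.8), Effect(-1.0, 2, 0.5)],
    "do_nothing": [Effect(-0.5, 10, 0.9)],
}
print(best_action(actions))  # -> "treat" in this made-up example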

This is already a step up from ethical decision making in humans because:

  1. Humans tend to make estimations rather than compute specific mathematical outcomes,
  2. Humans are not impartial, rather they favour themselves,
  3. Humans tend not to consider all the alternative actions

However, some problems still exist [p18]:

  1. The ‘right’ decision may sacrifice one person for the greater net good,
  2. The ‘right’ decision fails to account for past behaviour; rather, it focuses solely on future consequences of actions. This might conflict with the notion of justice.

Therefore, an ethical theory that combines the consequences of actions (teleological theory) with concerns about justice and rights (deontological theory) would be a better approach. [p18]

Is there a single correct action in ethical decision making? It isn’t possible to have a machine resolve all ethical dilemmas, especially since ethical theory hasn’t resolved all ethical dilemmas yet. Rather, the machine should implement a framework that allows for updates and makes consistent decisions when faced with the same situations.

AI Concerns

How do we proceed in an interdisciplinary endeavour? This requires communication between AI researchers and philosophers to create a methodology that maximizes the probability that a machine will behave ethically:
E.g. GENTA (General Belief Retrieving Agent), a web-based knowledge system that searches “the web for opinions, usual behaviours, common consequences, and exceptions, by counting ethically relevant neighbouring words and phrases” [p20]
E.g. Marcello Guarini’s neural network attempts to answer the philosophical debate concerning principle-based approaches to moral reasoning versus case-based approaches to moral reasoning [p20]
E.g. Bruce McLaren’s Truth-Teller, in which the system reasons about when, and when not, to tell the truth in ethical dilemmas [p21]

Creating a Machine That Is an Explicit Ethical Agent

Step 1: Adopt a prima facie duty approach to ethical theory. This will incorporate teleological and deontological theory. [p21]

Step 2: Select a domain consistent with your choice of prima facie duties. Then, create a dilemma that encompasses a finite number of specific cases, with a few possible actions in each case. [p22]
E.g. A health-care worker has recommended a particular treatment for a patient, and the patient has rejected the treatment option. Should the healthcare worker try to change the patient’s mind, or accept the decision as final? [p22]

Step 3: Implement a decision-making procedure that will work when the duties give conflicting advice. The procedure should be able to add new duties and accommodate changes to existing duties (i.e. intensities and durations) [p23]
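
The summary does not give the decision procedure itself; the sketch below rests on two assumptions that go beyond the text: (1) each action in a case is profiled by how strongly it satisfies (+) or violates (-) each prima facie duty on a small integer scale, and (2) conflicts between duties are resolved by weighting the duties. The duty names, weights, and profile values are placeholders, not the authors' values; in their own work the resolution principle is learned (see Step 4) rather than hand-weighted.

# Sketch: choosing between actions whose prima facie duty profiles conflict.
# Duties, weights, and profile values are placeholders, not from the paper.

DUTIES = ("nonmaleficence", "beneficence", "respect_for_autonomy")

# Hypothetical weights that a learning procedure might have produced.
WEIGHTS = {"nonmaleficence": 3.0, "beneficence": 1.0, "respect_for_autonomy": 2.0}

def score(profile):
    """Weighted sum of duty satisfaction (+) / violation (-) values for one action."""
    return sum(WEIGHTS[d] * profile[d] for d in DUTIES)

def choose(case):
    """Return the preferred action for a case (dict: action -> duty profile)."""
    return max(case, key=lambda action: score(case[action]))

# The 'try again vs. accept' dilemma from Step 2, with made-up profiles:
case = {
    "try_to_change_mind": {"nonmaleficence": 1, "beneficence": 2, "respect_for_autonomy": -1},
    "accept_decision":    {"nonmaleficence": -1, "beneficence": -2, "respect_for_autonomy": 1},
}
print(choose(case))  # -> "try_to_change_mind" under these made-up weights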

Step 4: Implement a method of learning in the system. Create positive training examples, from which negative training examples can be generated.
E.g. Inductive logic programming (ILP), which learns the relationships among duties.
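
One way to read "negative training examples can be generated from positive ones" (an assumption; the summary does not spell out the mechanism) is that if a case labelled "action A is ethically preferable to action B" is a positive example, then the same case with the pair reversed is a negative example. A small sketch, with a made-up case feature:

def make_negative_examples(positive_examples):
    """Given positive examples (preferred_action, other_action, case_data),
    produce negative examples by swapping the action pair.
    This reading of the negative-example generation is an assumption."""
    return [(other, preferred, case) for (preferred, other, case) in positive_examples]

positives = [("try_to_change_mind", "accept_decision", {"patient_competent": False})]
print(make_negative_examples(positives))
# -> [('accept_decision', 'try_to_change_mind', {'patient_competent': False})]

The ILP step itself (learning which duty relationships make one action preferable to another) is not shown here.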

Step 5: Develop a user interface that “asks ethically relevant questions of the user...transforms the answers to these questions into appropriate profiles...sends these profiles to the decision procedure...presents the answer provided by the decision procedure, and ... provides a justification for the answer” [p24]
E.g. EthEl, a system that uses an ethical principle to determine “when a patient should be reminded to take medication” [p24] and when a refusal to do so warrants contact with an overseer.
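
A hedged sketch of the kind of decision EthEl faces when a dose is missed: remind the patient, accept a refusal, or notify an overseer. The function, fields, and threshold below are illustrative assumptions, not EthEl's actual principle, which weighs duties using a learned ethical principle as in the earlier steps.

# Illustrative only: not EthEl's actual decision principle.
def medication_action(hours_overdue, harm_if_missed, patient_refused):
    """Return 'wait', 'remind', 'accept_refusal', or 'notify_overseer' for one dose.
    harm_if_missed: made-up 0..10 estimate of expected harm from skipping the dose."""
    if not patient_refused:
        # Beneficence/nonmaleficence favour a reminder once the dose is overdue.
        return "remind" if hours_overdue > 0 else "wait"
    # After a refusal, respect for autonomy competes with nonmaleficence:
    # escalate only when the expected harm is serious (placeholder threshold).
    return "notify_overseer" if harm_if_missed >= 7 else "accept_refusal"

print(medication_action(hours_overdue=2, harm_if_missed=8, patient_refused=True))
# -> 'notify_overseer' in this made-up case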

Step 6: Assess the morality of a system’s behaviour using the comparative moral Turing test (cMTT). A cMTT evaluator assesses the comparative morality of human-made ethical decisions and machine-made ethical decisions. “If the machine is not identified as the less moral member of the pair significantly more often than the human, then it has passed the test” [p24]
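
The cMTT pass criterion can be phrased as a simple statistical check. The sketch below uses a one-sided exact binomial test against chance, which is one reading of "significantly more often" (the paper does not prescribe a particular test): count how often evaluators single out the machine's decision as the less moral of a pair, and pass the machine unless that count is significantly higher than the human's.

from math import comb

def cmtt_pass(machine_less_moral, human_less_moral, alpha=0.05):
    """Illustrative cMTT check: among pairs where an evaluator picked a 'less moral'
    member, test whether the machine is picked significantly more often than the
    human (one-sided exact binomial test against p = 0.5). The machine passes
    if the difference is NOT significant."""
    n = machine_less_moral + human_less_moral
    if n == 0:
        return True  # the machine was never singled out as less moral
    # P(X >= machine_less_moral) under Binomial(n, 0.5)
    p_value = sum(comb(n, k) for k in range(machine_less_moral, n + 1)) / 2 ** n
    return p_value >= alpha

# Illustrative tallies from 30 evaluated pairs (made-up numbers):
print(cmtt_pass(machine_less_moral=17, human_less_moral=13))  # -> True (not significantly worse)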

