[ CogSci Summaries home | UP | email ]

Minsky, M., Singh, P., & Sloman, A. (2004). The St. Thomas common sense symposium: designing architectures for human-level intelligence. AI Magazine, 25(2), 113-124.

@article{minsky2004stthomas,
  author = 	 {Minsky, Marvin and Singh, Push and Sloman, Aaron},
  title = 	 {The St. Thomas common sense symposium: designing architectures for human-level intelligence},
  journal = 	 {AI Magazine},
  year = 	 {2004},
  volume = 	 {25},
  number = 	 {2},
  pages = 	 {113--124},
}

Author of the summary: James Howell, 2012, jamesamhowell @gmail.com

Cite this paper for:

The actual paper can be found at http://web.media.mit.edu/~push/StThomas-AIMag.pdf


A two-day symposium was held in St. Thomas to discuss AI methodology. This paper analyzes that discussion and adds further thoughts from the authors. AI systems are typically highly specialized: they do well on some particular class of problems but perform poorly on others.[p113]

An AI system that emulates human common sense will need to incorporate many ways to represent knowledge, learn new information, and make inferences.[p113]

It is often difficult to determine which methods and representations best suit an application. For instance, statistical modelling is well suited to some problems, while formal logic or case-based reasoning suits others. A high-level conceptual analysis is needed to decide when to apply the various AI techniques.[p114]
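The idea of matching techniques to problems can be sketched as a simple dispatcher. This is a hypothetical illustration only: the feature names and the mapping are assumptions, not a procedure given by the authors.

```python
# Hypothetical sketch: pick a reasoning technique from coarse problem
# features, in the spirit of the paper's "conceptual analysis" of when
# each AI method applies. Criteria here are illustrative assumptions.

def choose_technique(problem):
    """Return the name of a reasoning method suited to the problem."""
    if problem.get("noisy_data"):
        return "statistical modelling"
    if problem.get("formal_rules"):
        return "formal logic"
    if problem.get("similar_past_cases"):
        return "case-based reasoning"
    return "no clear match; fall back to search"

print(choose_technique({"noisy_data": True}))          # statistical modelling
print(choose_technique({"similar_past_cases": True}))  # case-based reasoning
```

In a real architecture this decision would itself require the kind of high-level analysis the paper calls for, rather than a fixed lookup.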

An example shows the difficulty of designing a comprehensive AI. Even a simple task such as playing with blocks involves many mental realms: physical (what happens when I remove this block?), psychological (trying to remember where a certain block is), visual (trying to see where a certain block is), spatial (what structures could I make?), and self-reflective (what else can I do now?).[p115]

A human-level AI will need to operate in all of these realms to some extent. It was suggested that AI researchers might try to emulate how human children learn; however, much about the learning process and mental development remains unknown.[p116] Instead, it was suggested that a system be presented with a series of graded scenarios, all sharing the end goal of retrieving a box from a high place by using a ladder. Various obstacles and complexities would be introduced that require learning new skills, so that the system could learn in much the same way that humans do. The authors debate whether training worlds should be complex or simple in order to maximize the potential for learning.[p117]
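The graded-scenario idea can be sketched as a fixed goal with obstacles introduced one at a time. The scenario contents and the "learning" step below are invented for illustration and are not from the paper.

```python
# Illustrative sketch of graded scenarios: the goal (fetch a box from
# a high place using a ladder) stays fixed while each scenario adds an
# obstacle that demands a new skill. Obstacles and skills are
# assumptions made for this example.

SCENARIOS = [
    {"obstacle": None,               "skill_needed": "climb ladder"},
    {"obstacle": "ladder too short", "skill_needed": "stack a support"},
    {"obstacle": "box is heavy",     "skill_needed": "ask for help"},
]

def train(agent_skills):
    """Work through the scenarios in order, acquiring each missing skill."""
    for scenario in SCENARIOS:
        skill = scenario["skill_needed"]
        if skill not in agent_skills:
            agent_skills.add(skill)  # stand-in for an actual learning step
    return agent_skills

skills = train(set())
print(sorted(skills))
```

The ordering is the point: each scenario builds on skills learned in earlier, simpler ones, which is where the complex-versus-simple training-world debate arises.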

There is a similar debate over whether theories of knowledge representation should be complex or simple in order to adequately explain human behaviour. The authors agree that, to maximize versatility, machines will need to get better at recognizing and fixing the problems and deficiencies that their own actions create.[p118]

Parallel ways of thinking may enhance the reliability and adaptability of a system, since it is less likely to "get stuck."[p120]
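A minimal sketch of this idea is to run several solvers against the same problem and take the first that succeeds, so no single method's failure stalls the system. The solver names and their behaviour are assumptions for illustration, not the architecture the authors describe.

```python
# Minimal sketch of "parallel ways of thinking": try several ways of
# solving a problem and accept the first that produces an answer, so
# the system does not get stuck when one way fails. Solvers here are
# illustrative stand-ins.

def solve_by_logic(problem):
    return problem.get("logical_form")   # None when logic does not apply

def solve_by_analogy(problem):
    return problem.get("similar_case")   # None when no past case matches

def solve_by_search(problem):
    return "brute-force plan"            # always produces something

SOLVERS = [solve_by_logic, solve_by_analogy, solve_by_search]

def solve(problem):
    """Return the first non-None answer from the available solvers."""
    for solver in SOLVERS:
        answer = solver(problem)
        if answer is not None:
            return answer

print(solve({}))  # no logic or analogy applies, so search answers
```

A genuinely parallel system would run the solvers concurrently and could also switch methods mid-problem; sequential fallback is the simplest form of the same principle.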

The authors note that the lack of clear ideas about how to represent and organize knowledge impairs the creation of a machine that could learn automatically.[p121]

The participants concluded that single techniques such as logic or neural networks are insufficient to cope with the range of problems that a human-level intelligence would encounter. An architecture that combines "different ways to represent, acquire, and apply many kinds of commonsense knowledge" must be created.[p122]

Summary author's notes:

Back to the Cognitive Science Summaries homepage
Cognitive Science Summaries Webmaster:
JimDavies (jim@jimdavies.org)