P.H. Winston (1980). Learning and Reasoning by Analogy. Communications of the ACM, 23(12), December 1980
Author of the summary: David Furcy, 1999, firstname.lastname@example.org
Analogy seems to be a natural and widely used way of accomplishing many tasks requiring intelligence; it serves several classes of cognitive tasks, including both learning and reasoning.
Analogy is interesting because explaining it sheds light on several important components and processes of our cognitive system.
The paper has two goals:
- To propose sufficient theoretical principles that enable analogical reasoning.
- To describe an implemented system that embodies these principles and indeed behaves as an analogical reasoner and learner.
- How can we account for the fact that we seem to be able to use the same cognitive (namely analogical) competence in the service of many kinds of reasoning and learning tasks, e.g.:
  - understanding a story plot,
  - learning new knowledge about a domain using available knowledge about another domain?
- Is there a uniform mechanism that allows us to find the best match between two situations and to carry over information from one to the other?
- Analogy works by mapping the parts (entities) of one situation onto the parts of another, using relations and acts as evidence for the mapping.
- Therefore the matching task is at the heart of the analogical method.
- (sufficient) knowledge conditions for matching are:
- reified relations and properties
- classification knowledge
- constraints and especially causal constraints
- Causal relations are central to matching and therefore to analogy.
- Acts and relations can cause other acts and relations.
The representation must allow for extensible relations, which basically means that it must enable describing relations and linking them with one another. In other words, the relations are reified.
This is done by using frames as a unified representation language and more specifically the FRL language.
For example, the sentence "John killed Sam with a knife because he hated him" is represented as follows:
+------------+
| John frame |       +--------------+        +-----------+
| kill slot -------->| kill-1 frame |------->| Sam frame |
|            |       |              |        +-----------+
|            |       | instrument ------+
| hate slot ----+    +--------------+  |     +-------------+
+------------+  |           ^          +---->| knife frame |
                |           | CAUSE          +-------------+
                |    +--------------+
                +--->| hate-1 frame |
                     +--------------+
Note that both the "kill" and "hate" relations are reified (they are concrete entities with their own frame) and therefore can be further described and linked together by meta-relations such as CAUSE.
The representation should allow only a limited vocabulary such as case names (e.g., instrument, multiplier) and constraints (e.g., cause).
Demons are used for automatic deductions.
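As a rough illustration (this is a Python sketch of my own, not Winston's FRL code; the class and slot names are hypothetical), frames with reified relations and a slot-attached demon might look like this:

```python
# Minimal frame-system sketch: a frame is a named bundle of slots, and a
# relation such as "kill" is reified as a frame of its own, so meta-relations
# like CAUSE can point at it. Demons are callbacks that fire when a slot is
# filled, performing automatic deductions.

class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}    # slot name -> value (often another Frame)
        self.demons = {}   # slot name -> callback run when the slot is filled

    def set(self, slot, value):
        self.slots[slot] = value
        if slot in self.demons:      # run any attached demon
            self.demons[slot](self, value)

# "John killed Sam with a knife because he hated him."
john, sam, knife = Frame("John"), Frame("Sam"), Frame("knife")

kill1 = Frame("kill-1")              # the kill relation, reified
kill1.set("agent", john)
kill1.set("object", sam)
kill1.set("instrument", knife)

hate1 = Frame("hate-1")              # the hate relation, reified
hate1.set("agent", john)
hate1.set("object", sam)

# Because relations are themselves frames, a meta-relation can link them:
hate1.set("cause", kill1)            # hate-1 CAUSEs kill-1

# A demon performing an automatic deduction: filling the "instrument" slot
# of an act records, on the instrument itself, the act it was used in.
def note_use(frame, value):
    value.set("used-in", frame)

weapon = Frame("gun")
kill2 = Frame("kill-2")
kill2.demons["instrument"] = note_use
kill2.set("instrument", weapon)      # demon fires: weapon now knows its use
```

The point of the sketch is only the reification: because `kill-1` and `hate-1` are first-class frames, the CAUSE link between them is an ordinary slot, which is what the matcher later exploits.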
Given two situations, the algorithm considers all possible mappings between frames of the respective situations. Every mapping is scored and the one with the highest score wins.
Scoring is done as follows. A mapping is a list of paired frames (including comment frames). A point is added for each pair of same-name slots with:
- the same value, or
- values that are linked in this mapping, or
- values that are members of a common class.

Conditions for successful matching:
- The vocabulary must describe classes, properties, and relations.
- Important relations must be explicit in the input description, either supplied by a teacher or given by a rule such as "Cause is important."
- Historical continuity: the traditional assumption in science that the world is regular or uniform across both space and time.

I'll go over the Romeo-Juliet / Charming-Cinderella example in class.
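The exhaustive mapping-and-scoring procedure might be sketched as follows (my own hypothetical Python, not Winston's implementation; the data layout is an assumption):

```python
from itertools import permutations

def score(mapping, slots_a, slots_b, classes=None):
    """Score a mapping, i.e. a list of (frame_a, frame_b) pairs.
    slots_a / slots_b map each frame name to its {slot: value} dict;
    classes optionally maps a value to its class name."""
    classes = classes or {}
    paired = dict(mapping)               # frame_a value -> frame_b value
    points = 0
    for a, b in mapping:
        for slot, va in slots_a.get(a, {}).items():
            if slot not in slots_b.get(b, {}):
                continue                 # only same-name slots count
            vb = slots_b[b][slot]
            if (va == vb                             # same value
                    or paired.get(va) == vb          # values linked in mapping
                    or (classes.get(va) is not None
                        and classes.get(va) == classes.get(vb))):  # common class
                points += 1
    return points

def best_mapping(frames_a, frames_b, slots_a, slots_b, classes=None):
    """Try every pairing of frames across the two situations; keep the best.
    Assumes len(frames_b) >= len(frames_a). Exponential, as in the paper's
    exhaustive formulation."""
    best, best_score = None, -1
    for perm in permutations(frames_b, len(frames_a)):
        mapping = list(zip(frames_a, perm))
        s = score(mapping, slots_a, slots_b, classes)
        if s > best_score:
            best, best_score = mapping, s
    return best, best_score

# Toy version of the Romeo-Juliet / Charming-Cinderella example:
mapping, pts = best_mapping(
    ["romeo", "juliet"], ["charming", "cinderella"],
    {"romeo": {"loves": "juliet"}},
    {"charming": {"loves": "cinderella"}})
# pairing romeo with charming scores a point, because the "loves" values
# (juliet, cinderella) are themselves linked in this mapping
```

Note how the "values linked in this mapping" clause does the real work: it rewards mappings in which whole relational structures, not just isolated entities, line up.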
- One computational method for performing different cognitive tasks.
- Representation: reified relations.
- AI methodology: a valuable suggestion, namely
a theory/set of principles defines knowledge conditions and processes
--> the theory is instantiated in a computational model
--> the model is implemented in a program
--> the program is run, which produces some results
--> the results are analyzed in terms of performance on the particular task (e.g., story understanding)
--> the performance results shed light on the competence the theory deals with (here, analogical reasoning)
Scope and limitations
A partial task model of story understanding:
understanding
  +-- ...
  +-- use
  |     +-- retrieve        (memory retrieval)
  |     |     +-- ...
  |     |     +-- perform
  |     |     +-- ...
  |     +-- perform
  |     |     +-- matching
  |     |     +-- ...
  |     +-- ...
  +-- ...
This paper focuses mostly on the matching task and proposes a method to perform it. It has very little or nothing to say about memory retrieval, adaptation of the analogical process using feedback from actual use, etc.
If a discussion has not started by now, here is our last chance...
- Matching is extremely simplified but the focus on relations is a first step in the direction of Gentner's systematicity principle.
- The relevance of a particular mapping seems to depend on the input description. In other words, his method does not really know how to determine what is important/relevant in the precedent.
- It seems that the use of class information is domain-specific and possibly analogy-specific. In any case, his presentation of it makes it seem more ad hoc than principled.
- The proposed AI methodology makes sense but the research presented in the paper does not always seem to be as principled as claimed.
- Results (not described in this summary) seem sometimes to be interpreted in a wishful or ad hoc manner (e.g., the "background noise").
- I don't understand how the classification information is used for memory retrieval.
- What do you all think?
Back to the Cognitive Science Summaries homepage
Last modified: Mon Apr 5 09:02:17 EDT 1999