[ CogSci Summaries home | UP | email ]
http://www.cc.gatech.edu/~jimmyd/summaries/

Vogt, Paul. (2005). Meaning development versus predefined meanings in language evolution models. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-05), 1154-1161.

@InProceedings{Vogt2005,
 author =     {Vogt, Paul},
 title =      {Meaning development versus predefined meanings in language evolution models},
 booktitle =    {International Joint Conference on Artificial Intelligence},
 year =       {2005},
 volume =  {IJCAI-05},
 pages =   {1154-1161}
}

Author of the summary: Gaelan Pope, 2007, gpope@carleton.connect.ca

Cite this paper for:

What are the effects of predefining semantics when modelling the evolution of compositional languages, compared with allowing the model's agents to develop the semantics while they develop a language?

“One of the most prominent aspects of human language that is researched concerns the origins and evolution of grammatical structures, such as compositionality.” [p1]

“Compositionality refers to representations (typically utterances in languages) in which the meaning of the whole is a function of the meaning of its parts.” [p1]

“Studies into the origins and evolution of compositionality have yielded models that can successfully explain how compositionality may emerge. Most models have semantic structures built in, so the agents only have to acquire a mapping from signals to these meanings, together with their syntactic structures [Brighton, 2002; Kirby, 2002; Smith et al., 2003].” [p1]

“Only few models have considered how compositional structures can arise through a co-evolution between syntax and semantics, where the semantics are grounded through interactions with the world and develop from scratch [Steels, 2004; Vogt, 2005].” [p1]

“Whereas in [Brighton, 2002; Kirby, 2002; Smith et al., 2003] it takes a number of generations until compositionality arises, in studies where the syntax co-develops with the semantics compositionality arises from the first generation [Steels, 2004; Vogt, 2005].” [p1]

Thus compositionality arises more quickly when a system develops its own semantics than when the semantics are given to it.

“Compositionality is defined as a representation of which the meaning of the whole can be described as a function of the meaning of its parts. For instance, the meaning of the expression “red apple” is a function of the meaning of “red” and the meaning of “apple”. As a consequence, it is possible to substitute one part with another to form a new meaning as in the expression “green apple”. In contrast, there are holistic representations in which the meaning of the whole cannot be described as a function of the meaning of its parts. For instance, the expression “kick the bucket” in the meaning of dying is a holistic phrase.” [p1]
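The distinction can be sketched in a few lines of Python. The lexicons below are hypothetical illustrations of the "red apple" / "kick the bucket" examples, not part of the paper's model:

```python
# Compositional lexicon: each word contributes a part-meaning, so the
# meaning of a phrase is a function of the meanings of its parts.
compositional = {"red": "RED", "green": "GREEN", "apple": "APPLE"}

def compose(colour_word, noun):
    """Meaning of the whole = function of the meanings of the parts."""
    return (compositional[colour_word], compositional[noun])

# Holistic lexicon: whole expressions map to meanings directly; no part
# of "kick the bucket" contributes the meaning DIE.
holistic = {"kick the bucket": "DIE"}

print(compose("red", "apple"))    # ('RED', 'APPLE')
print(compose("green", "apple"))  # substituting one part yields a new meaning
print(holistic["kick the bucket"])
```

Substituting "green" for "red" produces a new meaning for free; the holistic phrase offers no such productivity.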

“It has been repeatedly shown that compositional structures can arise from initially holistic structures (i.e. structures with no compositionality) using the iterated learning model (ILM) [Brighton, 2002; Kirby, 2002; Smith et al., 2003].” [p1]

“Kirby and others have shown that, given an induction mechanism that can induce compositional structures, an initially holistic language can change into a compositional one after a number of iterations, provided the language is transmitted through a bottleneck.” [p1]

“The transmission bottleneck entails that children only observe a part of the expressible meanings of the language. Assuming the children are equipped with a learning mechanism to discover and store compositional structures whenever possible, these structures tend to remain in the language because they allow an agent to communicate about previously unseen meanings.” [p1]

“If an agent has acquired the language through a bottleneck, it may have to produce expressions about previously unseen meanings when it has become an adult. If there is no bottleneck, the children are expected to have learnt the entire language, so no compositionality is required and typically does not evolve.” [p2]
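Why a bottleneck favours compositionality can be illustrated with a toy Python sketch (the word forms, meaning set, and observed subset are invented for illustration): a child that hears only some of the expressible meanings can still induce the part-words, and can then express meanings it never observed.

```python
import itertools

colours = ["red", "green", "blue"]
shapes = ["circle", "square", "triangle"]
meanings = list(itertools.product(colours, shapes))  # 9 expressible meanings

# Compositional adult language (hypothetical): one word per part.
def adult_utterance(colour, shape):
    return f"{colour}-{shape}"

# Transmission bottleneck: the child hears only 4 of the 9 meanings,
# but those 4 happen to cover every part at least once.
observed = [("red", "circle"), ("green", "square"),
            ("blue", "triangle"), ("red", "square")]

child = {}  # part-meaning -> word, induced from the observed utterances
for colour, shape in observed:
    w1, w2 = adult_utterance(colour, shape).split("-")
    child[colour], child[shape] = w1, w2

# A compositional learner can now express all 5 unseen meanings.
unseen = [m for m in meanings if m not in observed]
expressible = [f"{child[c]}-{child[s]}" for c, s in unseen]
print(len(unseen), expressible)
```

A holistic learner in the same situation would simply lack words for the five unseen meanings, which is why holistic languages fail to pass through the bottleneck.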

Compositionality is needed to fill in the "gaps" of a language that a speaker has not yet learned.

“In Kirby’s model, all agents (adults and children alike) are equipped with the same predefined predicate argument semantics. Naturally this is unrealistic, since it is widely acknowledged that human children are born without innate semantics.” [p2]

Kirby’s model presupposes innate knowledge.

“The question is therefore: to what extent does predefining the agents’ semantics influence the results of such simulations? To investigate this question, Vogt [2005] implemented a simulation based on Kirby’s model, but without predefining the semantics. In Vogt’s model, the semantics of individual agents develop in parallel with the language learning. This way, the semantics of adult agents differ from the children’s.” [p2]

This is a more plausible model, since adults have a greater understanding of language than children do. It does not, however, resolve the innate-knowledge aspect of Kirby's model.

“Vogt [2005] showed that, even without a bottleneck, relatively high levels of compositionality developed very early in the simulation. Furthermore, it was shown that under certain conditions, e.g., when a bottleneck on transmission was absent, this compositionality was unstable. In those cases, the languages gradually (though sometimes suddenly) transformed into holistic languages. One possible explanation for such a transition was based on the ontogenetical development of meaning.” [p2]

	Action                     Result

1	Sensing the environment    Context
2	Select topic               Topic
3	Discrimination game        Meaning
4	Decoding                   Expression
5	Encoding                   Topic
6	Evaluate success           Feedback
7	Induction                  Grammar

Table 1

“The model is implemented in Vogt’s Talking Heads simulator THSim [Vogt, 2003]. In this simulation, a population of agents can develop a language that allows them to communicate about a set of geometrical coloured objects that form the agents’ world. Language development is controlled by agents playing a series of guessing games, cf. [Steels, 1997].” [p2]

“The guessing game (GG) is briefly outlined in Table 1. The game is played by two agents: a speaker and a hearer. Both agents sense the situation (or context) of the game. The context consists of a given number of geometrical coloured objects.” [p2]
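The flow of one guessing game (Table 1) can be sketched as a small Python simulation. The `Agent` below is a deliberately minimal stand-in, not THSim's agent: discrimination is trivial, and induction is purely holistic, so only the structure of the game loop should be read off this sketch.

```python
import random

class Agent:
    """Minimal stand-in agent: a lexicon mapping meanings to words."""
    def __init__(self):
        self.lexicon = {}  # meaning -> expression
    def discriminate(self, topic, context):
        # Trivial discrimination game: the topic itself is the category.
        return topic
    def decode(self, meaning):
        # Invent a new holistic word if none exists yet.
        if meaning not in self.lexicon:
            self.lexicon[meaning] = f"w{len(self.lexicon)}"
        return self.lexicon[meaning]
    def encode(self, expression, context):
        # Guess the context object whose stored word matches, else guess randomly.
        for m, w in self.lexicon.items():
            if w == expression and m in context:
                return m
        return random.choice(context)
    def induce(self, expression, topic):
        # Incorporate the expression-meaning pair holistically.
        self.lexicon[topic] = expression

def guessing_game(speaker, hearer, world, n_context=4):
    context = random.sample(world, n_context)       # 1. sense the environment
    topic = random.choice(context)                  # 2. speaker selects a topic
    meaning = speaker.discriminate(topic, context)  # 3. discrimination game
    expression = speaker.decode(meaning)            # 4. decode meaning -> expression
    guess = hearer.encode(expression, context)      # 5. hearer encodes -> topic guess
    success = guess == topic                        # 6. evaluate success, feed back
    if not success:
        hearer.induce(expression, topic)            # 7. induction on failure
    return success

world = [f"obj{i}" for i in range(8)]
speaker, hearer = Agent(), Agent()
successes = sum(guessing_game(speaker, hearer, world) for _ in range(500))
```

Even with this crude induction, the hearer's lexicon aligns with the speaker's within a few dozen games and communication becomes reliable.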

There are two constraints in the current (Vogt) model:

“(1) All quality dimensions should be used exactly once.

(2) Only combinations of 2 semantic categories are allowed. During their development, agents learn (or discover) which semantic categories are useful to form a compositional language.” [p3]

“The speaker selects one object from the context as the topic of the game (step 2, Table 1) and plays a discrimination game (DG)[Steels, 1997] to find a category that distinguishes the topic from the other objects in the context (step 3). To this aim, each individual agent constructs an ontology containing categorical features (CF), which are points in some quality dimension.” [p3]

Each agent effectively builds a vector of categorical features for an object, where each feature is a point in some quality dimension.

“A composition may be of only one holistic or of several rules which combine a colour with shape.” [p3]

“The speaker searches its grammar for compositions of rules that match the distinctive category (or meaning).” [p3]

“When the speaker obtains a distinctive category from the DG, it tries to decode an expression (step 4, Table 1).” [p3]

“If there is no composition that decodes the meaning, a new rule is constructed that either decodes a part of the meaning, or that decodes the meaning holistically. The decoded expression is uttered to the hearer, which tries to encode this expression (step 5). The hearer does not know which object is the topic, and its aim is to guess this using the verbal hint received.” [p3]

“To do this, the hearer first plays a DG for each individual object, thus resulting in a set of possible meanings for the expression. Then the hearer searches its grammar for compositions that can parse the expression and of which the meaning matches one of the possible meanings. If there is more than one, the hearer selects the one with highest score and guesses that the object of the corresponding meaning is the topic. This information is passed back to the speaker, who verifies if the guess is correct and provides the hearer with feedback regarding to the outcome (step 6).” [p3]

The hearer considers multiple candidate meanings and selects the highest-scoring one before reporting its guess back to the original speaker.

“If the game succeeds, the weights of used rules are increased, while competing ones are inhibited. If the game fails, the speaker provides the hearer with the correct topic and the hearer will then try to induce new knowledge (step 7).” [p3]

Whether the game succeeds or fails, the hearer learns from the outcome: rule weights are adjusted on success, and new knowledge is induced on failure.

“Induction proceeds in up to three of the following steps (if one step succeeds, induction stops):

1. Exploitation. If there is a composition that partially decodes the expression, the remaining gaps are filled.

2. Chunking. If exploitation fails, the hearer will investigate whether it can chunk the expression-meaning pair based on stored (holistic) expression-meaning pairs received in the past. (These pairs are stored in an instance base.) This is done by looking for alignments in the received expression with stored expressions and alignments in the distinctive category with stored meanings. If more chunks are possible, the chunk that has occurred most frequently or that has the largest common string is actually pursued.

3. Incorporation. If chunking fails too, the hearer will incorporate the expression-meaning pair holistically.” [p3]
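The chunking step (step 2 above) is the one that creates compositional rules. A simplified sketch, assuming single-token part-meanings and alignment only on a common prefix (the real model also aligns suffixes and weighs chunk frequency):

```python
def chunk(stored, new):
    """Sketch of chunking: align a stored holistic expression-meaning pair
    with a new one, splitting the shared substring and shared meaning part
    off into a compositional rule."""
    (expr_a, mean_a), (expr_b, mean_b) = stored, new
    # Find the longest common prefix of the two expressions.
    i = 0
    while i < min(len(expr_a), len(expr_b)) and expr_a[i] == expr_b[i]:
        i += 1
    shared_mean = set(mean_a) & set(mean_b)
    if i == 0 or len(shared_mean) != 1:
        return None  # no usable alignment: fall back to incorporation
    part = shared_mean.pop()
    return [
        (expr_a[:i], part),                          # shared chunk
        (expr_a[i:], (set(mean_a) - {part}).pop()),  # residue of stored pair
        (expr_b[i:], (set(mean_b) - {part}).pop()),  # residue of new pair
    ]

stored = ("redsquare", ("RED", "SQUARE"))
new = ("redcircle", ("RED", "CIRCLE"))
print(chunk(stored, new))
# [('red', 'RED'), ('square', 'SQUARE'), ('circle', 'CIRCLE')]
```

From two holistic pairs, the agent extracts a reusable rule for "red" plus two shape rules, which is exactly how compositional structure bootstraps out of a holistic instance base.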

“Compositionality is calculated as the ratio between the number of compositional rules used (both encoded and decoded) and the total number of utterances produced and interpreted.” [p4]
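Under the assumption that every utterance is produced or interpreted by either a compositional rule or a holistic rule, the measure reduces to a simple ratio:

```python
def compositionality(n_compositional, n_holistic):
    """Ratio of compositional rule uses (encoded and decoded) to all
    utterances produced and interpreted. Assumes every utterance used
    exactly one of the two rule types."""
    total = n_compositional + n_holistic
    return n_compositional / total if total else 0.0

print(compositionality(30, 10))  # 0.75
```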

“Experiment 1 Here all meanings developed from scratch in parallel to the development of the grammar. Experiment 2 All agents were given a repertoire of categorical features. With these CFs, the agents could form categories right from the start, but they did not have any idea how to combine the different dimensions to form conceptual spaces as the basis of the semantic structures. Experiment 3 The semantic structure was predefined. All agents were given CFs as in experiment 2 and an additional constraint that semantic categories (i.e. conceptual spaces) were either colours, shapes or the whole (holistic) meaning.” [p4]

“All experiments reveal that when a bottleneck is imposed, compositionality develops rapidly to a high and stable degree, thus confirming the results achieved earlier by Kirby and others [Brighton, 2002; Kirby, 2002; Smith et al., 2003]. When no bottleneck is imposed, the behaviour of the different experiments is more different from each other.” [p5]

Thus, a transmission bottleneck is necessary for high and stable levels of compositionality.

“First, when no semantic structure is imposed (experiments 1 and 2), high levels of compositionality are achieved within two iterations. Second, when no categorical features are predefined (experiment 1), compositionality is unstable, whereas in other cases, compositionality is either stable (experiment 2) or may emerge at a later stage (experiment 3).” [p5]

“When the language is compositional, the movement in one semantic category affects a larger part of the language than it would when the language is holistic. Hence, holistic languages are more stable and thus easier to learn.” [p6]


Back to the Cognitive Science Summaries homepage
Cognitive Science Summaries Webmaster:
JimDavies (jim@jimdavies.org)
Last modified: Sat Dec 1 4:39:41 EDT 2007