This old woman can also be seen as an attractive young woman. Ambiguity is part of perceptual life.
One of the major distinctions between natural and logical languages concerns ambiguity. Mathematical and computer languages are not ambiguous, while natural languages are grossly so. Investigators typically see ambiguity as a failing, a problem to be overcome. Gary Marcus cites ambiguity as evidence that our brain's language ability is a makeshift mess rather than an optimized system. (See: Just How Sane Are We?) Efforts to simulate natural speech on a computer require a system for resolving ambiguities. Noam Chomsky and his followers delight in finding ambiguous statements and looking for the underlying syntax that enables people to determine which possible interpretation is correct. So I was intrigued to find evidence that language can become more ambiguous as it becomes easier to learn.
Last March I reported that the Evolang conference in Barcelona had provided extensive evidence that language gains structure through use rather than from language processors built into the brain. Simon Kirby's presentation was especially memorable. (See: Language Structure is Cultural, Not Genetic) He had performed a series of experiments in which a computer generated a language that was then taught to a first speaker. A second speaker learned what the first speaker actually said (rather than what the computer had generated), a third speaker learned the second's usage, and so on. After ten generations of such transmission, the hard-to-learn, unstructured language produced by the computer had become an easy-to-learn, structured language. It had also become more ambiguous.
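The transmission chain described above can be sketched as a toy simulation. This is my own illustration, not the authors' procedure: I assume a learner who memorizes the pairs it saw and, for meanings it never saw, reuses the string of the most similar seen meaning. Because no new strings are ever invented, labels get reused and the lexicon grows more ambiguous generation by generation.

```python
import itertools
import random

random.seed(0)  # make the toy run repeatable

SHAPES = ["circle", "square", "triangle"]
COLORS = ["black", "blue", "red"]
MOTIONS = ["horizontal", "bouncing", "spiral"]
MEANINGS = list(itertools.product(SHAPES, COLORS, MOTIONS))  # 27 meanings

def similarity(a, b):
    """Number of features (shape, color, motion) two meanings share."""
    return sum(x == y for x, y in zip(a, b))

def learn(parent_lexicon, n_seen=13):
    """A learner sees roughly half of the parent's (meaning, string) pairs
    and generalizes: each unseen meaning reuses the string of the most
    similar seen meaning. (The generalization rule is my assumption.)"""
    seen = dict(random.sample(list(parent_lexicon.items()), n_seen))
    child = {}
    for meaning in MEANINGS:
        if meaning in seen:
            child[meaning] = seen[meaning]
        else:
            nearest = max(seen, key=lambda m: similarity(m, meaning))
            child[meaning] = seen[nearest]
    return child

# Generation 0: an unstructured "computer-generated" language with a
# unique arbitrary string per meaning.
lexicon = {m: f"w{i:02d}" for i, m in enumerate(MEANINGS)}

for generation in range(10):
    lexicon = learn(lexicon)

distinct = len(set(lexicon.values()))
print(f"distinct strings after 10 generations: {distinct} (started with 27)")
```

Since learners can only reuse strings they were shown, the number of distinct strings can never grow, so ambiguity (one string, many meanings) accumulates for free.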
Kirby, Hannah Cornish, and Kenny Smith now have a paper about those experiments in the Aug. 5 issue of the Proceedings of the National Academy of Sciences ("Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language;" abstract here), and although most of the basics were reported from Barcelona, I did not catch the part about language becoming more ambiguous.
The experiment begins with 27 computer-generated names for 27 distinct objects, differentiated by shape (circle, square, triangle), color (black, blue, red), and manner of movement (horizontal, bouncing, spiral). The process of passing names through generations of speakers "cumulatively introduces ambiguity as single strings are re-used to express more and more meanings. In other words the languages gradually introduce underspecification of meanings." [p. 10683] In real life, of course, people do not hear all the words of a language, nor do they hear all possible grammatical forms. The experiment simulated this problem by teaching each speaker only half of the names. For example, a speaker might learn the name of a black spiraling triangle displayed on a computer monitor but not the name of a red bouncing square. Even so, the speakers must find a way to talk about all 27 objects. One solution was to generalize a word's meaning so that more than one object could be identified by the same word. The process of becoming ambiguous can be seen:
- Generation 4: the word tunge is used only for pictures of an object moving horizontally. Other words are distributed more idiosyncratically and unpredictably.
- Generation 6: the word poi refers to most spiraling pictures, but blue spiraling triangles or squares are called tupin while red spiraling triangles or squares are called tupim. Tunge has remained stable.
- Generation 7: poi now includes blue spiraling squares and red spiraling triangles. Tupin still serves for a blue spiraling triangle; tupim still serves for a red spiraling square.
- Generation 8, 9, 10: poi has become consistent, referring to any spiraling object. Tunge still refers to any object moving horizontally.
With these generalized meanings, a speaker can refer correctly (i.e., intelligibly) to, say, a spiraling object never seen before.
It is precisely because the language can be described by using [a] simple set of generalizations that participants are able to label correctly pictures that they have never previously seen. This generalization directly ensures the stable cultural transmission of the language from generation to generation, even though each learner of the language is exposed to incomplete training data. [10683]
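The generation-10 endpoint described above can be captured with a lexicon keyed on motion alone, which is exactly what lets a speaker name an object they have never been shown. This is a sketch of that final state using only the two words quoted in the post (poi, tunge); the word for bouncing objects is not given, so I leave it out.

```python
# Generation-10 state from the paper's example: words depend only on
# motion, so shape and color are underspecified (i.e., ambiguous).
LEXICON_BY_MOTION = {
    "spiral": "poi",        # any spiraling object
    "horizontal": "tunge",  # any horizontally moving object
}

def name(shape, color, motion):
    """Label an object by its motion alone; shape and color are ignored."""
    return LEXICON_BY_MOTION[motion]

# A red spiraling circle the speaker has never seen before still gets a
# correct, intelligible label.
print(name("circle", "red", "spiral"))        # -> poi
print(name("square", "black", "horizontal"))  # -> tunge
```

The same simple rule that makes the language learnable from half the data is the rule that makes it ambiguous: the lookup throws away shape and color.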
Now there is a reversal for you. Chomsky continues to argue that because children speak correctly despite the "poverty of the stimulus" (i.e., the incomplete training data), they must have the full story built into their brains to begin with. Kirby's team shows that this logic overlooks people's ability to generalize from their experience and talk about things previously unseen and unheard of. Some critics will be tempted to counter that the ability to generalize is innate, but Chomsky won't be among them, because that ability is semantic rather than syntactic.
A more important objection to this kind of ambiguity is the loss of precision. The language can no longer express the difference between two objects moving horizontally even though their shapes and colors differ. To see whether the languages could retain expressive precision, Kirby's team ran a second set of experiments in which, whenever a word acquired multiple meanings, only one of those meanings was taught to the next generation.
The authors concede that this solution is artificial. It introduces a deus ex machina into the evolutionary process, but they justify the method as "an analogue of a pressure to be expressive that would come from communicative need in the case of real language transmission." [10684] Of course, that pressure may not always be there, in which case distinctions that once seemed important to speakers will fade into ambiguity.
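The filtering step can be sketched as follows. This is my reconstruction of the described procedure, not the authors' code: before the next generation is trained, any string that labels several meanings is kept for only one of them (here chosen at random), so a learner never sees the same word paired with two meanings.

```python
import random

random.seed(1)  # make the random choice repeatable

def filter_ambiguity(lexicon):
    """Build training data with at most one meaning per string: when a
    string labels several meanings, keep one and drop the rest."""
    by_string = {}
    for meaning, string in lexicon.items():
        by_string.setdefault(string, []).append(meaning)
    training = {}
    for string, meanings in by_string.items():
        training[random.choice(meanings)] = string
    return training

# A small ambiguous lexicon: "poi" labels two different meanings.
ambiguous = {
    ("triangle", "blue", "spiral"): "poi",
    ("square", "red", "spiral"): "poi",
    ("triangle", "black", "horizontal"): "tunge",
}
training = filter_ambiguity(ambiguous)
print(training)  # "poi" now appears exactly once
```

The filtered training set is smaller than the full lexicon, which is the experimenters' stand-in for a communicative pressure the laboratory setup otherwise lacks.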
The intervention process, by the way, did lead to more expressive languages. The results reminded me a bit of Bantu constructions in which the nature of the noun is handled by a series of morphemes. The first morpheme identifies the object's color, the second its shape, and the third its motion. Thus, n-ere-ki is a black square moving horizontally, n-ere-plo is a black square that bounces, and l-ere-ki is a blue square moving horizontally. An "irregular" survivor, renana for a red square moving horizontally, serves as evidence of the earlier, computer-generated language in which the morpheme system did not exist.
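The morpheme system above can be written out as three lookup tables. I use only the morphemes quoted in the post; the full paradigm in the paper would have three entries per table, and renana sits outside the system as an irregular holdover.

```python
# Morphemes quoted in the post; other cells of the paradigm are not
# given, so only these combinations can be built.
COLOR_MORPHEMES = {"black": "n", "blue": "l"}
SHAPE_MORPHEMES = {"square": "ere"}
MOTION_MORPHEMES = {"horizontal": "ki", "bouncing": "plo"}

# Pre-systematic survivor from the original computer-generated language.
IRREGULAR = {("red", "square", "horizontal"): "renana"}

def name(color, shape, motion):
    """Compose a name as color-shape-motion, falling back to an
    irregular form where one survives."""
    if (color, shape, motion) in IRREGULAR:
        return IRREGULAR[(color, shape, motion)]
    return "-".join([COLOR_MORPHEMES[color],
                     SHAPE_MORPHEMES[shape],
                     MOTION_MORPHEMES[motion]])

print(name("black", "square", "horizontal"))  # -> n-ere-ki
print(name("black", "square", "bouncing"))    # -> n-ere-plo
print(name("red", "square", "horizontal"))    # -> renana
```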
One detail that jumped out at me is the role of perception in the generalizing process. It is not at all surprising, but the languages evolve in ways that make good perceptual sense. In the first set of experiments, as the number of words declines, the most salient perceptual feature (the object's motion) becomes the distinguishing feature preserved in the words. In the second set, when growing ambiguity was filtered out between generations, orderly ways of identifying objects by perceptual traits evolved. The work suggests to me that instead of evolving a complex set of language generators and parsers in the brain, we can do very well with the perception system passed down from our ape ancestors. And instead of having to define a complete system that ultimately resolves all ambiguities, we can take language to be an incomplete system that resolves many ambiguities by looking, or maybe tasting. (For a discussion of the qualities that permit the cultural evolution of language, see: Language Adapted to Us)
I am not sure that language is as ambiguous as it appears. (1) If the experiment with the arbitrary made-up language continued for long enough, it might become less ambiguous. Or it might if the transmission rules were a little more relaxed and more natural. (2) Natural languages are transmitted in an environment where there is context. Sentences that are ambiguous when written in isolation are not ambiguous when delivered orally, along with non-verbal communication, as part of a conversation between two people. (3) I sometimes want to say something ambiguous, and I would not want to lose that ability. If we had to be perfectly clear then we would lose a lot of humour and poetry. In other words, it is sometimes not the language that forces the ambiguity but our use of it.
------------------------------
BLOGGER: I certainly agree with point 3, and in general I not only accept but promote the proposition that language embraces ambiguity. Literary critics have long loved it. However, once the computational model of knowledge and behavior became widely accepted, ambiguity became a real problem for theorists. Computers don't know what to do with ambiguous messages.
A further point: in the experiment, ambiguity comes from a loss of specificity, and that is a true loss, just as in real life the loss of specificity in a word like "disinterest" (making it indistinguishable from "uninterest") carries a loss in expressive power.
Posted by: JanetK | August 19, 2008 at 01:38 AM