Last week’s post summarized the outcome of the just-completed Evolang conference in Barcelona by reporting the collapse of the long-dominant paradigm of generative grammar, founded by Noam Chomsky and expanded by many others. The old paradigm contributed nothing to the new results reported in Barcelona; reports often contradicted the generative paradigm; and some speakers directly and energetically criticized the old paradigm and got away scot-free in the discussions that followed. So what did the Barcelona conference leave its participants to build on?
In Thomas Kuhn’s classic account of paradigm shifts, a group of working scientists shares a common set of assumptions, practices, and achievements (the paradigm) that inspires many questions and arguments. This paradigm runs into trouble when some problem it cannot handle refuses to be swept under the table, and finally it is replaced by a new paradigm. A famous example is the replacement of Aristotelian physics by that of Galileo and Newton. Does that story fit what happened in Barcelona?
To some degree, yes. A question has arisen that the generative paradigm cannot answer: how did humans evolve a brain that supports language?
Long forbidden, the question of language origins finally became legitimate when two workers in the generative tradition, Steven Pinker and Paul Bloom, published a paper (here) that specifically called for an evolutionary account of the rise of generative grammar. Pinker then wrote a popular book, The Language Instinct, that assumed generative grammar had evolved. That book’s success probably marked the peak of the generative paradigm’s influence.
Pinker and Bloom’s paper directly inspired a series of biennial conferences about the evolution of language, the 7th of which has just ended in Barcelona. Although the Barcelona conference established that the old paradigm does not work, the second part of Kuhn’s story—the rise of a replacement paradigm—did not happen. There was no stunning paper that resolved all the problems and established a new basis for further research. But there were clues from different presentations, and many of them pointed to bits and pieces of a theory proposed by Terrence Deacon in his book, The Symbolic Species. Deacon’s work was not picked up in its entirety, but parts of it have moved center stage.
Piece number one says language and the brain co-evolved. The generative paradigm assumed that some part of the brain evolved specifically to generate sentences. This idea held great prominence in Pinker’s book, which argued that specific “modules” had evolved to process sentences. The evidence for this assumption was that children learn to speak with astonishing rapidity; therefore, the brain must have special equipment to perform this task. Deacon’s response was that evolution can work the other way around: language can adapt to the abilities of its speakers by changing so that it is easier to learn. The two sides long argued more from theory and logic than from data, but not anymore. Deacon’s approach can now claim a variety of empirical evidence.
The most direct evidence came from Simon Kirby’s presentation. (See: Language Structure is Cultural, Not Genetic) He described an experiment in which a person is taught a simple “language” whose sound system has been randomly generated by a computer, whose meanings are based on simple associations, and that has no syntax. As you might expect, the person learning such a language has trouble, but eventually is able to say things in it. A second person learns the language by observing how the first user speaks. A third person learns from observing the second. A fourth observes the third, and so on through ten “generations” of learners. By the end, learning the language had become a much easier task, the sound system had developed regularities, and syntax had appeared. The speakers had found and regularized ways to express relationships. This experiment shows exactly what Deacon expected and generative grammarians did not. There will have to be more experiments to determine whether the outcome emerges from syntactical modules or something else, but the generativists have been thrown on the defensive.
A second piece of evidence came from David Gil’s presentation (See: Complex Grammar has a Simple Solution) in which he described one of Indonesia’s languages, Riau. It turns out to be much simpler than many of the languages generative grammarians have enjoyed studying. Riau speakers are perfectly capable of learning a more complex language, and many of them do, so there is no simple correlation between language complexity and brain structure, or language complexity and cultural complexity. Again, this kind of finding is exactly what a reading of Deacon would predict, and a generativist would not. Generativists have a fallback position, one mentioned in passing in Derek Bickerton’s presentation: the fact that we don’t use a feature of language does not mean that the capacity for using that feature is absent from the brain. So more work will follow, but, as before, it is the generativists who are on the defensive.
Third were Friedemann Pulvermüller’s several presentations (See: Brain Circuitry Challenges Linguistic Models) arguing that the brain does not have the expected modules. Instead of modules, the brain works in a straightforward way, connecting many parts so that the whole brain works together to produce and comprehend sentences.
In a luncheon conversation between Pulvermüller and a linguist, I heard him specifically reject the idea that the brain uses the syntactic “trees” that generativists are so fond of. Defenders will argue, as one did at Pulvermüller’s presentation, that language doesn’t work as he suggests, but it is again the generativists who are on the defensive, looking for a counter-explanation.
Eavesdropping as I was, I jumped into that luncheon conversation to ask what they thought of the partial trees proposed in the presentation by Gary Marcus (See: The Practicality of Studying Language Origins). Pulvermüller and his friend agreed that Marcus did an excellent job in disproving the generativist assumption that the brain is splendidly efficient at producing and parsing sentences. They did not think, however, that he had made clear his alternative. So again, we come to a point where Deacon’s ideas look promising, but more work will be needed to resolve the matter.
This success of Deacon’s caught me quite by surprise; nobody predicted it before the conference. Deacon himself was not there to press his case. By chance Marcus and I were on the same plane to Barcelona, so we took a cab together. Deacon’s name came up, and Marcus said he thought parts of Deacon’s book were excellent, but much of it quite obscure. I said that when I started my blog I had expected to cite Deacon often, but had found that I seldom had a reason to. We rode on in agreement that Deacon had not shown the way. Yet there were Deacon’s ideas a few days later, providing a beacon while generative theory had run out of things to say about the evolutionary question.
Another of Deacon’s ideas also gained new prominence by the end of the conference, but this post is already long enough, so I will save that news for next week.