Kitty Hawk, North Carolina saw one of the great moments in the history of engineering, the first controlled flight of a machine. It was not one of the great moments in the history of biology.
Let's face it: there will never be a good evolutionary account of generative linguistics, nor should there be. Noam Chomsky and his school have allowed themselves to be distracted by an issue they preferred to ignore but that grew too noisy to keep off the table.
Over fifty years ago, Chomsky neatly explained his interests. He wanted to define a set of rules that would generate "all and only" the sentences of a particular language. If he had managed to pull off that feat, the biological tangle that pricks his skin today would be easy enough to ignore.
Machines don't have to imitate biology to duplicate biological functions. The Wright brothers, for example, observed birds to better understand the issues involved in controlled flight, but they decided from the beginning that they would not attempt to make a machine that could actually fly like, say, a wren. Wren operations were too delicate and precise for any contraption the brothers might build. Instead, they made a flying machine that is unlike anything found in nature. We are content with that machinery and do not object to the airplane's inability to tell us about controlled flight's evolutionary origins.
Machine-generated behavior also need not duplicate human-generated behavior. When Chomsky first began looking for a sentence generator, some computer scientists were beginning to look into a machine that could play chess. Some investigators tried to build a chess-playing machine that thought like a grandmaster. It would look at a board, see the position's strengths and weaknesses, determine a suitable strategy, and make the appropriate move. That approach never came close to producing a decent rival for a grandmaster. Instead, the solution lay in concentrating on what makes computers so powerful, the ability to make calculations at a rapid pace. At first these "brute force" chess-playing machines were pretty clumsy. Chess offers such a large variety of moves that machines could only calculate a few lines in any depth. Chess masters laughed at them. But as their calculation power improved they were able to follow many lines very far, sometimes 20 moves or more, turning a tactical calculation into a strategic one.
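The brute-force approach described above can be sketched in a few lines. This is a hedged illustration, not any engine's actual code: depth-limited negamax search over a toy game (Nim, where players remove 1-3 stones and taking the last stone wins) stands in for chess, since the principle is the same: enumerate lines of play rather than imitate human judgment.

```python
# A minimal sketch of "brute force" game-tree search, the idea behind early
# chess engines: calculate many lines of play to some depth, pick the best.
# Nim stands in for chess here; real engines add move ordering, pruning, etc.

def legal_moves(stones):
    """All legal removals from a pile of stones (take 1, 2, or 3)."""
    return [m for m in (1, 2, 3) if m <= stones]

def negamax(stones, depth):
    """Best achievable score for the player to move: +1 win, -1 loss, 0 unknown."""
    if stones == 0:
        return -1          # the previous player took the last stone; we lost
    if depth == 0:
        return 0           # calculation horizon reached: score it as unclear
    # Each move hands the position to the opponent, so negate their best score.
    return max(-negamax(stones - m, depth - 1) for m in legal_moves(stones))

def best_move(stones, depth=10):
    """The move whose resulting position is worst for the opponent."""
    return max(legal_moves(stones), key=lambda m: -negamax(stones - m, depth - 1))

print(best_move(5))  # leaving a pile of 4 is the known winning reply
```

Deepening the `depth` parameter is exactly the "follow many lines very far" improvement the paragraph describes; the machine never forms anything like a human strategy.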
In 1997, when the IBM machine Deep Blue defeated world champion Garry Kasparov, it won by determining its moves in a wholly different manner from Kasparov's. So what? The issue only becomes a problem if we try to deduce the way people play chess by studying how modern chess machines work.
Language-generating machines of the all-and-only variety are only going to start moving when their makers forget about trying to model the machines on humans and go for the fast calculating power of a machine.
I've come to that conclusion for two reasons:
- It is obvious, based on the work of this blog, that we use language in a manner that is in no way understood precisely enough to be duplicated on a machine.
- I have read a paper in the latest issue of Biolinguistics that shows once again that generative linguists are merely embarrassing themselves when they try to discuss their work in terms of biology.
The first point is simple enough. Language works by piloting a listener's attention to particular perceptions, real or imaginary. We group things together like "three brown chickens" because we perceive them as units distinct from the "speckled eggs" that are amongst them. That sorting process gives us phrases automatically. To produce a real sentence with a subject and a predicate we have to perceive at least two points of attention and we must recognize some action that links them. Perhaps, in this case, it is the act of laying. Thus, we can generate a sentence, "My three brown chickens lay speckled eggs." Easy as the task is for humans, we are very far from building a machine that can recognize all these relationships, and even if we built such a machine we would need a second machine that could translate these images into sentences. Thus, the idea of building a mechanical sentence generator goes off track when it tries to work the way humans do.
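To make that description concrete, here is a hedged toy sketch in Python. The genuinely hard step, perceiving the groupings and the linking action, is exactly the part nobody knows how to automate, so it is stubbed out as hand-supplied data; only the final assembly of subject, verb, and object is mechanical.

```python
# Toy illustration of the post's account of sentence formation: two points of
# attention (perceived groupings) plus a linking action yield a sentence.
# The perception itself is NOT modeled; the dictionaries below are hand-built.

def noun_phrase(percept):
    """Flatten one perceived grouping into a phrase, e.g. 'my three brown chickens'."""
    return " ".join(percept["modifiers"] + [percept["head"]])

def generate_sentence(subject, action, obj):
    """Link two points of attention with an action: subject + predicate."""
    return f"{noun_phrase(subject).capitalize()} {action} {noun_phrase(obj)}."

chickens = {"modifiers": ["my", "three", "brown"], "head": "chickens"}
eggs = {"modifiers": ["speckled"], "head": "eggs"}
print(generate_sentence(chickens, "lay", eggs))
# -> My three brown chickens lay speckled eggs.
```

The trivial ease of the last step, compared with the stubbed-out perception step, is the point: the bottleneck is recognizing the relationships, not wording them.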
As for the ridiculousness of insisting that generative linguistics offers biological insights, you have only to turn to the recent paper, "A Naturalist Reconstruction of Minimalist and Evolutionary Biolinguistics." The authors, Hiroki Narita and Koji Fujita, go adrift in the introductory section. They begin by demonstrating that human language is "a biological object that somehow managed to come into existence in the evolution of the human species." [p. 356]
Okay.
Then the authors mention a previous paper that said "'evolvability' should be a central constraint on linguistic theorizing." I read that paper when it came out and didn't report on it because it didn't much interest me, but now I'll take the trouble of objecting to its thesis.
Consider this parallel argument: flight is a biological object that came into existence in the evolution of birds, bats, and insects; therefore, evolvability should be a central constraint on aeronautical theorizing. If the Wright brothers had taken that line seriously, we'd still be stuck with those ridiculous machines that tried to flap their wings.
Narita and Fujita could have been equally dismissive. They might have waved aside the evolvability constraint and gone on their way. Instead, they say they "totally agree… that our [minimalist] theory of language must achieve evolutionary plausibility or meet the evolvability condition." [357]
And then they argue with the theory of evolution, or at least the natural selection side of it.
Not everything that evolves is the product of natural selection; genetic drift is also common. But nothing increases organized biological complexity without natural selection: the change has to defy entropy, and at the biological level only selection can do that. They cannot embrace evolution without holding on to natural selection, but what really interests them is the mechanical solution of their problem.
They tip their hand when they say, "minimalism [in linguistics] is essentially a research program that seeks to identify the (optimizing) effect of physical laws of nature in the domain of human language." [361] They want to identify the mechanical laws—referred to repeatedly in this paper as "the third factor"—that permit the generation of sentences. In this spirit they advocate "replacing [the categories] 'adaptation' and 'natural selection' … with 'optimality/simplicity' and 'the third factor'" [362]. I don't mind their being interested in linguistic engineering rather than the biology of language, but they start sounding silly when they try to replace biological terms with engineering concepts while insisting they are staying true to the biological order.
A counterargument might be that birds and bees still have to satisfy the laws of aerodynamics, and that rebuttal is true enough. What is the linguistic equivalent of such a demand? First, it is not even clear that any laws of linguistics are physical laws. In 2009 I reported on a paper by Chater and Christiansen that distinguished between the selection that has to adapt to natural laws and selection that simply has to coordinate with another member of the species. (See: Natural vs Coordinated Challenges) But even if there are shared physical laws, let's keep the cart coming after the horse. First we can engineer sentence generators, and then see which laws apply to human language as well.
Again, we can get an example of how this learning process works by looking at aviation. It has been known for thousands of years that carnivorous birds circle in the air. Why do they do that? People guessed they were checking the ground, circling to study the situation. I have heard tourists point to vultures circling over the Rift Valley and tell one another, "They've found something dead." But once aviation reached the point of including glider pilots, hobbyists found that they could gain altitude by riding columns of rising heated air (thermals). What's more, the way to get the best lift is to turn in a tight circle, letting the thermal raise the glider. So suddenly people knew why big birds circle. They are using thermals to counteract gravity. That's the way the knowledge runs—from experience to analogy. Once we've engineered sentence generators and have experience with them we will be able to spot analogies with human usage and enjoy an Ah-Ha! moment or two.
It is not as though we are at a loss for things for sentence generators to do: translate texts, check a document's stylistic precision, take dictation, scan emails for terrorist plans, and so on. You don't have to base these functions on the way humans perform the task. Rely on what computers do best: look up data very fast, make quick calculations, and produce output quickly.
Meanwhile, I'm going to keep thinking about language origins… a different problem altogether.
I like your aviation metaphor!!
There is an old saying (I don't know whose) that if you don't understand something then it must be simple. So there are linguists, physicists, computer scientists, philosophers, etc. lining up to misunderstand biology. And vice versa, probably.
Posted by: JanetK | January 17, 2011 at 03:48 AM
1) I kinda see all evolutionary accounts of language as "just so" stories. I don't think I have seen one that is robust and allows a mechanistic implementation. Most depend on some sort of "functional explanation". However, this is NOT an explanation in the true evolutionary sense, because ultimately evolution through natural selection is a non-functional explanation of how things are the way they are. This is why it is important to ask "why?" not just "how?" when theorising about evolutionary origins of anything, including language. The Minimalist programme sets before itself exactly this agenda. A caricature of the programme does no one any good, let alone lambasting it based on a severe misunderstanding of it.
2) This is false analogy:
"Consider this parallel argument: flight ... evolvability should be a central constraint on aeronautical theorizing"
The right analogy would be "flight in biological species is a biological object that came into existence in the evolution of birds, bats, and insects; therefore, evolvability should be a central constraint on theorizing about biological flight "
And you will see that evolvability is a central concern for the scientist concerned with the workings of the mechanism.
3) The following statement amounts to claiming serendipity is the only route to knowledge:
"That's the way the knowledge runs—from experience to analogy."
If this were the case we would have NO knowledge about abstract fields like Mathematics, because in most cases the math preceded the experiential use/recognition of the concept - Fourier Transforms, to mention one amongst the many.
Even in the sciences, non-experiential predictions are routinely made. I don't quite understand your claim. If you had said experience is "a way to understanding", I might agree with you, but it cannot be the only way (although a stricter rationalist would disagree even with that).
Posted by: Karthik Durvasula | February 16, 2011 at 04:32 AM
a note to add to (3) from the previous post:
Experience allows us to entertain at best previously suppressed hypotheses - perhaps suppressed because of a disbelief in their plausibility, for whatever reason. It doesn't generate the hypothesis itself.
For example, experience can equally well mislead as it leads - classic inductive inference mistakes. It is the rational, not the experiential, side which ultimately has to decide between the hypotheses. Hence, experience is both a good thing and a bad thing, but knowledge is garnered only by reflecting on the plausible hypotheses, NOT through experience to analogy.
There is a strong flavour of "empiricism" to your statements. They amount to saying, if given enough data/facts, we can figure out the truth, and this has been shown repeatedly to be a misunderstanding of how to proceed with knowledge accumulation.
A classic lesson is available from neurobiological investigations: we have an incredible amount of facts about the brain, yet we have NO understanding of it, by which I mean, why it does what it does/how it does it - exactly cos more facts don't equal more knowledge.
Posted by: Karthik Durvasula | February 16, 2011 at 04:43 AM
So Karthik Durvasula, are you saying that empirical science is failing to begin to understand the brain during recent history while non-empirical methods used for the previous 1000 years or more have made progress? If so I would strongly disagree. Science is the tool of choice wherever possible.
---------------------------
BLOGGER: I was a bit startled to be called an empiricist, like that was a bad thing. However, she did have a point when she said facts alone are not enough. Science works by getting facts, then proposing an idea that explains them, then getting more facts, then, if the idea survives, getting more facts. Eventually the idea that explains them fails and you look for another idea and more facts to test it. I'm also sympathetic to the notion that facts have a dull Joe Friday quality, but the amassing of facts has proven to be an effective path to finding workable theories.
Posted by: JanetK | February 18, 2011 at 05:08 AM
Hi Janet,
No, that's not what I said. Simply put, empiricism is not the same thing as "empirical science".
Rationalist, empiricist, mixed approaches ... are all approaches to science and are all "empirical science". "Empirical" in "empirical science" means "evidence-based" or "dependent on experimentation".
While, "empiricism" refers to (inductive) inferential theorising.
The terminological overlap is unfortunate, but it is what it is.
As far as the brain is concerned, I maintain my earlier claim. There is a tonne of information about the brain, but we have no clue as to why the brain behaves the way it does.
If you think "information = knowledge", then I guess you won't agree with me; but if you think "understanding = knowledge", then you will see the problem I am raising.
Posted by: Karthik Durvasula | February 20, 2011 at 04:00 AM