Let's face it, there will never be a good evolutionary account of generative linguistics, nor should there be. Noam Chomsky and his school have allowed themselves to be distracted by an issue they would have preferred to ignore, but which grew too noisy to keep off the table.
Over fifty years ago, Chomsky neatly explained his interests. He wanted to define a set of rules that would generate "all and only" the sentences of a particular language. If he had managed to pull off that feat, the biological tangle that pricks his skin today would be easy enough to ignore.
Machines don't have to imitate biology to duplicate biological functions. The Wright brothers, for example, observed birds to better understand the issues involved in controlled flight, but they decided from the beginning that they would not attempt to make a machine that could actually fly like, say, a wren. Wren operations were too delicate and precise for any contraption the brothers might build. Instead, they made a flying machine that is unlike anything found in nature. We are content with that machinery and do not object to the airplane's inability to tell us about controlled flight's evolutionary origins.
Machine-generated behavior also need not duplicate human-generated behavior. When Chomsky first began looking for a sentence generator, some computer scientists were beginning to look into a machine that could play chess. Some investigators tried to build a chess-playing machine that thought like a grandmaster. It would look at a board, see the position's strengths and weaknesses, determine a suitable strategy, and make the appropriate move. That approach never came close to producing a decent rival for a grandmaster. Instead, the solution lay in concentrating on what makes computers so powerful, the ability to make calculations at a rapid pace. At first these "brute force" chess-playing machines were pretty clumsy. Chess offers such a large variety of moves that machines could only calculate a few lines in any depth. Chess masters laughed at them. But as their calculation power improved they were able to follow many lines very far, sometimes 20 moves or more, turning a tactical calculation into a strategic one.
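The brute-force approach described above can be sketched in a few lines. This is a minimal illustration, not anything resembling a real chess engine: the toy game tree, scores, and function names are invented for the example. The core idea is minimax search with alpha-beta pruning, which lets the machine skip lines of play the opponent would never allow, so deeper calculation becomes affordable.

```python
# Minimal sketch of brute-force game search with alpha-beta pruning.
# A hand-built toy tree stands in for chess: internal nodes map moves to
# subtrees, leaves are scores from the first player's point of view.

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the best score reachable from this node with optimal play."""
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node.values():
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:
                break   # opponent would avoid this line anyway: prune it
        return best
    else:
        best = float("inf")
        for child in node.values():
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

def best_move(tree):
    """Pick the move whose subtree scores highest for the player to move."""
    return max(tree, key=lambda m: minimax(tree[m], False))

# Three candidate moves, each answered by two replies (leaf scores).
tree = {
    "a": {"a1": 3, "a2": 5},   # opponent answers with the minimum: 3
    "b": {"b1": 6, "b2": 1},   # opponent answers with the minimum: 1
    "c": {"c1": 4, "c2": 7},   # opponent answers with the minimum: 4
}

print(best_move(tree))  # → "c"
```

Nothing here models how a grandmaster thinks; the machine simply calculates every line to a fixed depth, which is exactly the point of the passage above.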
In 1997, when the IBM machine Deep Blue defeated world champion Garry Kasparov, it won by determining its moves in a wholly different manner from Kasparov's. So what? The issue only becomes a problem if we try to deduce the way people play chess by studying how modern chess machines work.
Language-generating machines of the all-and-only variety are only going to start moving when their makers forget about trying to model the machines on humans and go for the fast calculating power of a machine.
I've come to that conclusion for two reasons:
- It is obvious, based on the work of this blog, that we use language in a manner that is in no way understood precisely enough to be duplicated on a machine.
- I have read a paper in the latest issue of Biolinguistics that shows once again that generative linguists are merely embarrassing themselves when they try to discuss their work in terms of biology.
The first point is simple enough. Language works by piloting a listener's attention to particular perceptions, real or imaginary. We group things together like "three brown chickens" because we perceive them as units distinct from the "speckled eggs" that are amongst them. That sorting process gives us phrases automatically. To produce a real sentence with a subject and a predicate we have to perceive at least two points of attention, and we must recognize some action that links them. Perhaps, in this case, it is the act of laying. Thus, we can generate a sentence, "My three brown chickens lay speckled eggs." Easy as the task is for humans, we are very far from building a machine that can recognize all these relationships, and even if we built such a machine we would need a second machine that could translate these images into sentences. Thus, the idea of building a mechanical sentence generator goes off track when it tries to work the way humans do.
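To see how much the paragraph above takes for granted, here is a toy sketch of only the second, easy stage: turning already-perceived units into a sentence. Every data structure and name is invented for the illustration; the genuinely hard part, perceiving the units and the action in the first place, is simply handed to the program as input.

```python
# Toy sketch of the final step of sentence generation: rendering perceived
# units and a linking action as subject-verb-object. The perception itself,
# the hard part, is assumed and supplied by hand.

def noun_phrase(unit):
    """Flatten a perceived unit (modifiers plus head noun) into a phrase."""
    return " ".join(unit["modifiers"] + [unit["noun"]])

def sentence(subject, action, obj):
    """Link two points of attention with the action that relates them."""
    return f"{noun_phrase(subject)} {action} {noun_phrase(obj)}."

# Hand-perceived units, standing in for what no machine can yet deliver.
chickens = {"modifiers": ["My", "three", "brown"], "noun": "chickens"}
eggs = {"modifiers": ["speckled"], "noun": "eggs"}

print(sentence(chickens, "lay", eggs))
# → "My three brown chickens lay speckled eggs."
```

The trivial part fits in a dozen lines; the machine that would fill in `chickens` and `eggs` by perceiving a farmyard is the part nobody knows how to build.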
As for the ridiculousness of insisting that generative linguistics offers biological insights, you have only to turn to the recent paper, "A Naturalist Reconstruction of Minimalist and Evolutionary Biolinguistics." The authors, Hiroki Narita and Koji Fujita, go adrift in the introductory section. They begin by asserting that human language is "a biological object that somehow managed to come into existence in the evolution of the human species." [p. 356]
Then the authors mention a previous paper that said "'evolvability' should be a central constraint on linguistic theorizing." I read that paper when it came out and didn't report on it because it didn't much interest me, but now I'll take the trouble of objecting to its thesis.
Consider this parallel argument: flight is a biological object that came into existence in the evolution of birds, bats, and insects; therefore, evolvability should be a central constraint on aeronautical theorizing. If the Wright brothers had taken that line seriously, we'd still be stuck with those ridiculous machines that tried to flap their wings.
Narita and Fujita could have been equally dismissive. They might have waved aside the evolvability constraint and gone on their way. Instead, they say they "totally agree… that our [minimalist] theory of language must achieve evolutionary plausibility or meet the evolvability condition." 
And then they argue with the theory of evolution, or at least the natural selection side of it.
Not everything that evolves is the product of natural selection; genetic drift is also common. But nothing increases organized biological complexity without natural selection: such change must defy entropy, and at the biological level only selection can do that. The authors cannot embrace evolution without holding on to natural selection, but what really interests them is the mechanical solution of their problem.
They tip their hand when they say, "minimalism [in linguistics] is essentially a research program that seeks to identify the (optimizing) effect of physical laws of nature in the domain of human language." They want to identify the mechanical laws—referred to repeatedly in this paper as "the third factor"—that permit the generation of sentences. In this spirit they advocate "replacing [the categories] 'adaptation' and 'natural selection' … with 'optimality/simplicity' and 'the third factor'". I don't mind their being interested in linguistic engineering rather than the biology of language, but they start sounding silly when they try to replace biological terms with engineering concepts while insisting they are staying true to the biological order.
A counterargument might be that birds and bees still have to satisfy the laws of aerodynamics, and that rebuttal is true enough. What is the linguistic equivalent of such a demand? First, it is not even clear that any laws of linguistics are physical laws. In 2009 I reported on a paper by Chater and Christiansen that distinguished between selection that has to adapt to natural laws and selection that simply has to coordinate with other members of the species. (See: Natural vs Coordinated Challenges) But even if there are shared physical laws, let's keep the cart coming after the horse. First we can engineer sentence generators, and then see which laws apply to human language as well.
Again, we can get an example of how this learning process works by looking at aviation. It has been known for thousands of years that carnivorous birds circle in the air. Why do they do that? People guessed they were checking the ground, circling to study the situation. I have heard tourists point to vultures circling over the Rift Valley and tell one another, "They've found something dead." But once aviation reached the point of including glider pilots, hobbyists found that they could gain altitude by riding rising columns of heated air (thermals). What's more, the way to get the best lift is to turn in a tight circle, letting the thermal raise the glider. So suddenly people knew why big birds circle. They are using thermals to counteract gravity. That's the way the knowledge runs—from experience to analogy. Once we've engineered sentence generators and have experience with them we will be able to spot analogies with human usage and enjoy an Ah-Ha! moment or two.
It is not as though we are at a loss for things for sentence generators to do: translate texts, check a document's stylistic precision, take dictation, scan emails for terrorist plans, and so on. You don't have to base these functions on the way humans perform the task. Rely on what computers do best: rapid lookup, quick calculation, and fast output.
Meanwhile, I'm going to keep thinking about language origins… a different problem altogether.