The German edition of The Ice Finders.
Some years ago I published a book telling the story of the discovery of the ice age. The idea of an ice age met a lot of resistance at first because it seemed profoundly unscientific. Glaciers the size of continents were unknown and sounded like the sort of fantasy that dreamers always propose. Scientific geologists of the day believed that the same slow processes visible in 1840 were enough to explain all the geological markings on the earth. Furthermore, glaciers were believed to be unable to flow uphill. Rivers can only flow downhill, and what were glaciers if not frozen rivers? There was plenty of physical evidence of a recent ice age, leftover moraines and large boulders scattered about, and every so often a geologist would look at this evidence and be converted. But that process was slow, and it took decades for geologists as a group to come around.
I recall that story because I see something similar happening in language studies right now. The idea that language works by piloting attention is so novel that it is resisted by sheer inertia. It too seems unscientific, in the sense that a computer cannot work by using perception instead of symbolic concepts. Yet there is a steady process of individual conversions that has gathered enough steam to support a collection of essays (see Attention and Meaning), and I feel confident that eventually the resistance will collapse. Already the arguments that the classical solutions "must be" on the right track can be answered.
A few weeks ago I wrote on this blog that I was no longer intimidated by the classical "must be" claims like hierarchical structure, displacement/movement, recursion, and minimality. A reader, Gary Briscoe, has asked me to enlarge on my offhand rejection, and I felt he had a point, even though technical complaints have always been secondary to my main objection to generative linguistics, which is that it does not connect language to anything about humanity, or culture, or history. Still, Briscoe is correct that since I mentioned four technical issues raised many years ago on a now-quiet blog called The Lame and the Blind, I should clarify my reference.
It is obvious that language can be analyzed hierarchically: words combine into phrases, which combine into larger phrases and clauses, which can be sorted into subject and predicate, which combine into a full sentence. The same can be said of mental processes, in which sensory data becomes a percept and percepts become multi-sensory images. The question is whether this hierarchical analysis plays a role in the construction and interpretation of sentences. There is room for doubt. For example, the production of speech is so fast that there is barely time for serial processing, let alone hierarchical analysis and interpretation.
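For readers who have never seen one, here is a minimal sketch of what a hierarchical analysis amounts to, written as a nested data structure. The labels (S, NP, VP) are textbook conventions, and the whole thing is my illustration, not L&B's machinery; note that flattening the tree back into serial order is all a listener ever actually receives.

```python
from typing import Union

# A constituent is either a bare word or a (label, children) pair.
Constituent = Union[str, tuple]

# "John read the book" analyzed hierarchically:
# words -> phrases -> subject and predicate -> full sentence.
tree: Constituent = (
    "S",
    [
        ("NP", ["John"]),                          # subject
        ("VP", [("V", ["read"]),                   # predicate
                ("NP", [("Det", ["the"]), ("N", ["book"])])]),
    ],
)

def leaves(node: Constituent) -> list:
    """Flatten the hierarchy back into the serial order a listener hears."""
    if isinstance(node, str):
        return [node]
    _label, children = node
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))  # John read the book
```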
The L&B blog compares two sentences. English syntax forbids *Himself likes John, but allows I saw the picture of himself that John likes. L&B explains that a differently organized sentence was originally generated in the mind and then the phrase of himself was moved through the hierarchy to its "overt phonological position."
Let's notice immediately that this sentence is intelligible only because the I-himself difference is unambiguous. If the sentence were He saw the picture of himself that John likes, we would assume it was a picture of whoever he is. If the picture were of John, the sentence would have to be something else, perhaps He saw the picture of John that John himself likes. So what happens to all that hierarchical movement and analysis? Can it really be that the substitution of one pronoun for another results in a completely different linguistic hierarchy?
Let's also note that the sentence is still awkward. If I were working on a manuscript, I would probably change the sentence to something like: I saw that picture John likes of himself.
It may be that L&B has chosen a poor example, but an alternative parsing of the sentence, based on attention and working memory, is possible. (See my paper, Attention-Based Syntax, and an online paper by Stefan Frank, Rens Bod, and Morten Christiansen published by the Royal Society in 2012.)1
L&B states that in some sentences words are interpreted as being in a different position from where they overtly appear. For example, English usually puts the subject before the object, but in What did John read? the subject John appears after the object what. But is what really the object of the verb read? Perhaps it is the subject of the verb did.
I am not being completely serious with that flip answer, but I have a serious point. Why assume that the syntactical rules for interrogatives are the same as those for normal sentences? If we say that language works by piloting attention, we notice promptly enough that interrogatives like what and who do not pilot attention. They are attentional blank spots. Interrogatives have two parts: the section that directs attention, as in a normal, informative statement—John read—and the part that pilots attention nowhere—What did. There is no reason for asserting a priori that the interrogative structures are syntactically like the informative ones.
All languages can "combine phrases grammatically … to an infinite degree." Memory puts a stop to intelligibility long before then, but generative grammarians pay no attention to the psychology of the matter. That exclusion strikes me as arbitrary, but L&B's critical sentence says, "Not only is [recursion] universal to human language, but it also seems unique to human language. No other species has been convincingly demonstrated to [have the ability] to detect or produce recursive patterns." So what? Language is useful because it allows people to learn things from one another. Apart from eusocial insects, other animals don't share news, so why expect linguistic features in their howls? I can list a great many aspects of language that are not found in animal signals. Why single out recursion for special notice?
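Recursion itself is trivially easy to produce; the interesting question is psychological. A minimal sketch (my own toy example, not L&B's) shows how quickly grammatically impeccable recursion outruns working memory:

```python
# Center-embedded relative clauses: each level is fully grammatical,
# yet intelligibility collapses after a level or two. The vocabulary
# is invented for illustration.

NOUNS = ["rat", "cat", "dog", "man"]
VERBS = ["chased", "bit", "owned"]   # VERBS[i] is what NOUNS[i+1] did to NOUNS[i]

def center_embedded(depth: int) -> str:
    """Embed `depth` relative clauses inside 'the rat ... died'."""
    noun_phrases = [f"the {NOUNS[i]}" for i in range(depth + 1)]
    # The innermost subject's verb surfaces first, so the verbs come out reversed.
    relative_verbs = [VERBS[i] for i in range(depth)][::-1]
    return " ".join(noun_phrases + relative_verbs + ["died"])

for depth in range(4):
    print(center_embedded(depth))
# the rat died
# the rat the cat chased died
# the rat the cat the dog bit chased died
# the rat the cat the dog the man owned bit chased died
```

The grammar happily keeps going forever; a hearer gives out at the second or third line.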
A tennis referee cries, "Out." That's not recursive, but it is informative in a way unknown to other animals. The generative focus on such an abstract, secondary feature of language is symptomatic of the generative insistence on ignoring the function of language.
L&B writes, "It is a surprising and universal fact about language that dependencies between two elements in a sentence cannot be interrupted by a third element of the same type." The blog uses another interrogative example, but the statement would be clearer with a normal, informative sentence (as indeed the blog provides in a Swahili example). Consider Peter gave his girlfriend Jane a book. In this sentence "Jane" is a dependency of "his girlfriend." What L&B finds "surprising and universal" is that we cannot interrupt these two; for example, we cannot say *Peter gave his girlfriend from London Jane a book. I don't deny that dependencies cannot be interrupted like that; the part I deny is that it is surprising. If you say that language works by directing attention, you would expect interruptions that disrupt attention to be forbidden.
My correspondent, Mr. Briscoe, has asked me to "refute" these universals, and I doubt that I have done that. Instead, I have offered alternate discussions of the phenomena. It is up to each person to decide whether they prefer arbitrary, a priori assumptions that seem plausible because they depend on computational processes available to a machine. I suspect that an explanation linking language syntax to function will eventually win the competition for explanation of the observed facts.
1 According to attention-based syntax, the example sentence can be parsed:
[|I| /saw/ ||(the) picture| <of> |(himself)||] [<that> |John| /likes/]
The critical feature of this parse is that there are two complete topics (what I call bounded perceptions), which I have marked between square brackets: [I saw the picture of himself] [that John likes]. Sentences are always easier when they take one topic at a time. Adding to the complication, both topics have the same object, "the picture of himself": I saw |object| and John likes |object|.
Overlapping sentences like this work best when the object of the first topic is the subject of the second one, e.g., [The dog bit the man] [who stole the television].
Complicating it still further, the object contains a noun and a pronoun (what I call static phenomena), and the pronoun is reflexive.
A solution is to separate the noun from the reflexive pronoun and put one part of the object in each topic: [I saw that picture] [John likes of himself]. Working memory keeps the break in attention from becoming a problem.
Note that the solution is not a computation. Instead, it requires editorial skill. Yet it is a skill that can be taught because it is based on an understanding of how sentences work.
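To make the footnote's analysis explicit, here is a minimal sketch of the two-topic parse as a data structure. The field names are my own labels, chosen to mirror the notation above; they are not part of attention-based syntax itself.

```python
# Each topic (bounded perception) has a subject, an act, and an object slot.
# In the awkward original, both topics compete for the same object.
original = [
    {"subject": "I",    "act": "saw",   "object": "the picture of himself"},
    {"subject": "John", "act": "likes", "object": "the picture of himself"},
]

# The edited version splits the compound object, one part per topic,
# and lets working memory bridge the break in attention.
edited = [
    {"subject": "I",    "act": "saw",   "object": "that picture"},
    {"subject": "John", "act": "likes", "object": "of himself"},
]

for version in (original, edited):
    print(" ".join(f'[{t["subject"]} {t["act"]} {t["object"]}]' for t in version))
# [I saw the picture of himself] [John likes the picture of himself]
# [I saw that picture] [John likes of himself]
```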