Yesterday I posted [here] a description of Maggie Tallerman's retort [abstract here] to the thought-firsters' idea that language evolved as a means of improving thought by allowing concepts to combine; only later, on this view, did we develop a way to externalize thought as speech or signing. In that post I presented Tallerman's argument that words and concepts are not interchangeable and that words alone have the properties that allow meaningful combination. They get those properties through common usage.
Today I want to look at her treatment of syntax. Basically, she makes the same point: the rules of syntax are formed via general usage, or to use the jargon: syntax comes through externalization.
A thought-firster syntactical argument goes like this:
- In English the usual way of making a statement is to organize the sentence (S)ubject + (V)erb + (O)bject, e.g. She loves Joe.
- If we used the same form for asking a question, we would say She loves who? However, in English the question is usually asked Who does she love?
- Note that who has now become the subject of the verb does and the object is now the clause she love. But wait, there is something wrong with that clause: it lacks an object. It should be she love who. So the full S-V-O form should be [Who] [does] [[she] [love] [who]].
- Getting rid of the second who, say the thought-firsters, eases the computational burden but makes communication less efficient.
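The copy-and-delete derivation sketched in these bullets can be written out mechanically. The following is a toy illustration under my own assumptions (the function name, the string representation, and the deliberately crude bare-verb rule are all invented for this sketch; it is not anyone's actual model of grammar):

```python
# Toy sketch of wh-question formation as copy-plus-deletion, following the
# thought-firsters' example above. Purely illustrative.

def form_wh_question(subject, verb, wh_object):
    # Underlying S-V-O statement: "she loves who"
    # Step 1: copy the wh-word to the front; "does" takes over the tense,
    # so the verb appears in its bare form (crude rule: strip a final "s").
    bare_verb = verb[:-1] if verb.endswith("s") else verb
    full_copy = [wh_object, "does", subject, bare_verb, wh_object]
    # full_copy is the "full S-V-O form": *who does she love who

    # Step 2: delete the lower copy -- the step said to ease the
    # computational burden at some cost to communicative explicitness.
    surface = full_copy[:-1]
    return " ".join(surface)

print(form_wh_question("she", "loves", "who"))  # -> who does she love
```

The point of the sketch is only that the deleted copy is recoverable by the producer but not directly visible to the hearer, which is what makes the deletion "computationally" cheap and communicatively costly.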
Tallerman quotes two prominent thought-firsters as saying that when "there is a conflict between computational efficiency and interpretive-communicative efficiency" the universal solution is for languages to "resolve the conflict in favor of computational efficiency." [page 211] The point is that if languages are universally structured to favor efficient thought over efficient communication, it must be because language evolved for thinking, and communication is merely a side benefit.
Tallerman first notes that thoughts must also be interpreted. If I ask you, "Who does she love?," the interpretive burden is no greater than if you ask yourself, "Who does she love?" Either way, you need to understand the question. So thinking does not absolve the sentence producer from the need for clarity. (If you reply, "Well, for Pete's sake, you know what you are thinking about," you can wander into an infinite regress: no need to think X because you already know you are thinking about X, and you know that because either you thought X then or you already knew you were thinking about X, and you knew that because … )
She then challenges the proposition that computational efficiency is favored. She points out that in fact we can ask the question either as "She loves who?," which stresses the fact of loving, or as "Who does she love?," which stresses the identity of the beloved. The two sentence forms, says Tallerman, are "alternatives designed to maximize interpretive-communicative efficiency," depending on which aspect of the question the speaker wishes to stress. Computational efficiency does prevent us from stressing both aspects at once: "*Who does she love who?" We can't ask that one. (Tallerman does not say this, but in my view of how language works, it is a limitation of our attentional powers that prevents us from stressing two things in one sentence.)
Thirdly, Tallerman considers the rhetorical benefits that come from not always sticking to the S-V-O structure. (By merry happenstance I discussed this point just a few weeks ago [here].) Shifting from normal word order is accomplished by putting a word or phrase closer to the front of the sentence, thus increasing its prominence and making the speaker's focus more apparent to the listener. There are many rhetorical benefits of breaking normal structure: e.g.,
- Alert the listener that a question is coming: Who do you trust? Where are you from? Why did he do that?
- Emphasize the direct object: Susan was beaten by a mob.
- Obscure the subject: Mistakes were made.
- Stress the speaker's state of knowledge: Probably John is back at school (vs: John is probably back at school); Surely you know that theft is a crime.
Examples like these, Tallerman says, do "not support a view that displacement [movement of words from regular order] is somehow an undesirable imperfection in language." On the contrary, they increase language's communicative power.
In her discussion of words, Tallerman says that they acquire their combinatorial rules through usage, not through internal thought processes. The same thing happens in syntax. "It is well known that in full language, functional categories are created by the process of grammaticalization" and therefore external processes must "precede or at the very least co-occur with the syntactic displacement." You cannot get movement rules without public sharing of a language.
In short, the need to understand language cannot be a secondary problem, even when use is confined to thought. Changes to language structure carry many communicative benefits and cannot be called design errors. The grammatical rules of a language change as a result of common usage and show no sign of having arisen in the private sphere of a single brain. So let us hear no more of the hypothesis that linguistic thought predates verbal communication.