Whale anatomy shows that flippers evolved from hands. Likewise, modern abstract sentences have the anatomy of perceptual statements.
My last post discussed the main thesis of my paper, “The Evolution of a Hierarchy of Attention,” included in the book Attention and Meaning. A secondary theme in the paper concerns the problem of whether language was originally based on perception or on mentalese, Steven Pinker’s term for innate concepts built into the brain. Presumably, these concepts take the form of brain circuits. Defenders of the mentalese-origin view like to point out that we can speak in purely non-perceptual terms, e.g., “Justice is justice only when it is merciful.” There are no concrete nouns in that sentence and no metaphorical verbs or prepositions, yet it seems perfectly intelligible. I can imagine a teacher throwing that sentence out for a class to discuss in an essay. So it appears undeniable that modern language does not have to say things that are visualizable or expressible by evoking any of the other senses. Some smart people insist that since we do not need to speak in perceptible terms, language cannot be based on paying attention to perceptible things. Indeed, that attitude dominates linguistics and cognitive psychology in general (with some important exceptions).
On the other hand, we can also say things like, “That man is waving a gun,” which demand that we look about us and spot “that man.” Furthermore, we seem unlikely to have been born with the concept of a gun already built into the brain. We must have gotten it from somewhere; why not from perceiving guns? Pinker does not insist otherwise. His position is that we have some small number of fundamental concepts, e.g., cause and square, and that they can be assembled into more elaborate concepts. Instead of getting their meanings by directing attention, Pinker says, words get their meanings by referring to mental concepts.
So which is it: is human thought based on perception or on symbol processing? (I have discussed Pinker’s theory of mind before (see: here, here, here and here) and will not repeat all that.)
My paper’s epigraph says, “A defining characteristic of the human species is our capacity to rapidly establish topics of mutual contemplation” (Leavens et al., 2005), and I ask where our topics come from. Symbols or sights? I assume in my paper that knowledge of the world and its topics emerges from the attention hierarchy. Several of the hierarchy’s elements are much older than the Homo genus, so I find myself also committed to the idea that mammalian knowledge in general comes from attention. Surely we can gain at least some of our topics from our senses. When children start to talk, they say things like “juice,” “[ba]nana,” and “[fall] down,” naming topics they have discovered through perception. Chimpanzees and bonobos (our closest living relatives) have a very detailed knowledge of the location and uses of things in their habitats, and when taught sign language they refer to concrete things.
So it would seem that the first topics of language were the sort of things one pays attention to, not the abstract topics of metaphysics and geometry. Is there any evidence to the contrary? I have tried to keep my eyes open for that kind of thing, but I do not know of any empirical evidence to consider.
Evolutionary beginnings leave their traces in a couple of ways. One is the persistence of a trait, and the other is in the work-arounds found to solve problems. An example of trait persistence is found in cells. All cells with nuclei (eukaryotes) have little bacteria-like organelles called mitochondria. Apparently eukaryotes evolved when one cell engulfed a bacterium and the two merged into a single organism, and the trait has persisted ever since.
An example of a work-around is the whale’s flipper. Flippers are better than hands with fingers for swimming, but the land mammals that evolved into whales were not built to evolve a fish’s fin. Their hands eventually became flippers, but the bony structure with five fingers reveals that the flipper is a work-around form of the hand.
Language has a couple of these clues as well. First, the universal case relationships are perceivable ones. “That man is waving a gun” expresses the two most basic perceptual relations: actor (that man) and acted upon (a gun). More can be added: “That man is waving a gun in Jane’s face.” There are two further relationships here: one (prepositional) locates the place in space where the action occurs, and the other (case) identifies a relationship between Jane and a part of her. It seems that language’s universal relationships draw attention to perceptual events with ease.
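A rough way to picture this (my own sketch; the role labels are just convenient names, not a claim about any particular grammatical theory) is as a little frame of perceivable relations:

```python
# Hypothetical case frame for "That man is waving a gun in Jane's face."
# Role labels (agent, patient, location, possessor) are illustrative only.
case_frame = {
    "action": "wave",
    "agent": "that man",       # the actor we must look around and spot
    "patient": "a gun",        # the thing acted upon
    "location": "Jane's face", # the prepositional relation locating the event
    "possessor": "Jane",       # the case relation between Jane and her face
}

# Every filler names something perceivable; none of the relations is abstract.
for role, filler in case_frame.items():
    print(f"{role}: {filler}")
```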
Meanwhile, at first glance my purely abstract sentence, “Justice is justice only when it is merciful,” seems to express no relationships at all. The verb “is” simply asserts equivalence. Logically, it might be written: justice = justice iff justice contains mercy. Of course, “contains” is a spatial relationship, in this case a metaphor that considers the relation between two abstractions by presenting them as occupying space. So there is a secret relationship in the abstract sentence after all, and it is a spatial one drawn from perception.
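Spelled out in notation (a sketch of my own; the predicate name Contains is just a label for the spatial metaphor), the gloss looks something like this:

```latex
% Justice counts as justice only on condition that it "contains" mercy.
\text{justice} = \text{justice} \iff \mathit{Contains}(\text{justice}, \text{mercy})
```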
One could make the abstraction much more vivid: Mercy shields sinners from an unfeeling justice. Here we have an active mercy. Humans think in metaphors like this all the time.
Can we do it the other way and translate “That man is waving a gun” into some logical abstraction? We might replace “that man” with “humanity” and “a gun” with “weapon,” but the verb “is waving” presents what I suspect is an insurmountable challenge. “Humanity is waving a weapon” sounds okay, but the relationship is still based on perception. We could get rid of the verb, as in “humanity is a weapon,” but whatever that sentence means, it has lost all relationship to the original.
As far as I have been able to determine, metaphors work one way: they can turn abstractions into concrete perceptions, but they cannot be used to turn concrete perceptions into abstractions.
This fact suggests to me that metaphors are a kind of work-around. Like the flippers of a whale, they let us function successfully in a new (conceptual) environment, but when you examine them closely they reflect the old (perceptible) world where they originated. (That last sentence, by the way, was an analogy, not a metaphor.)
There is also a persistent trait in language that suggests perceptual origins. Language and perception both come with a point of view, what the academicians call intensionality. I can say (a) The boy feared the lion, or (b) The lion frightened the boy. These two sentences refer to the same event, but the relationship each participant bears to the verb is reversed. We can find the same thing with abstract remarks: (a) Mercy tempers justice, or (b) Justice absorbs mercy. This insistence on organizing an action in terms of point of view can be a nuisance when conducting diplomacy or trying to speak with certainty. Mathematicians have invented a full notational system that allows them to compute without expressing a point of view.
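As a quick illustration (the relation name and notation are mine, purely for the example): in logical notation the event can be written down once, and both English sentences become alternative verbalizations of that single, viewpoint-free expression.

```latex
% One relation, written once, takes no side between the participants.
\mathit{Fear}(\text{boy}, \text{lion})
% "The boy feared the lion" and "The lion frightened the boy" both
% verbalize this same fact, each from a different point of view.
```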
Defenders of the idea that language is grounded in a mentalese and consists of symbol processing without input from subjective experience like to say that symbol processing computes a sentence, just as symbol processing computes the solution to an equation. This conclusion seems inevitable if you assume the brain is nothing but a computer and that sentences are nothing but the result of computer processing. But the results do not carry the traits of a computation. Computations have one solution, but we saw with the boy and the lion that the point of view makes a difference. So the computation must include a step for inserting viewpoint, but why have this step? Why not just have a computation without a point of view?
Languages also express scope, determining how much to include in a scene. It can be concrete: The president is speaking … The president is speaking to a cheering crowd … The president, with the vice-president right behind him, is speaking to a cheering crowd. The news is the same in each sentence, but the focus of attention keeps getting wider. You can also change the scope of abstract metaphors: Justice determines the outcome … Justice determines the outcome in this court … Justice, with mercy close to hand, determines the outcome in this court. Computations have nothing equivalent to this scope. Sure, as above, you could add a step for inserting scope, but again we have to ask why bother, if all you are doing is contemplating a concept.
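To make the point concrete, here is a deliberately crude sketch (entirely my own, with made-up names and structure) of what such a computation would have to look like: the same fixed event cannot be rendered as a sentence at all until viewpoint and scope are supplied as extra inputs, and nothing in the event itself tells the computation what those inputs should be.

```python
# Hypothetical sentence generator over one fixed event. The event alone does
# not determine a sentence; viewpoint and scope must be chosen separately.

EVENT = {"experiencer": "the boy", "stimulus": "the lion", "relation": "fear"}

def verbalize(event, viewpoint, scope=()):
    """Render the event as a sentence from a chosen viewpoint, at a chosen scope."""
    if viewpoint == "experiencer":
        core = f"{event['experiencer']} feared {event['stimulus']}"
    else:
        core = f"{event['stimulus']} frightened {event['experiencer']}"
    # Widening the scope adds material without changing the news.
    extras = "".join(f", {detail}" for detail in scope)
    return core.capitalize() + extras + "."

print(verbalize(EVENT, "experiencer"))
print(verbalize(EVENT, "stimulus", scope=("with the zookeeper close at hand",)))
```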
In summary, the evidence suggests that we use judgment to utter apt sentences, not a computer to generate a correct one. Language arose as a way to direct attention to perceptible topics, and then evolved to include conceptual topics.
Interesting post. Toward the end you said: “Defenders of the idea that language is grounded in a mentalese and consists of symbol processing without input from subjective experience like to say that symbol processing computes a sentence, just as symbol processing computes the solution to an equation.”
I’d like to break this into two pieces: “Language is grounded in mentalese” and “language consists of symbol processing.” For the first piece, you might enjoy digging into the Natural Semantic Metalanguage (*), which seems to have finally realized Leibniz’s goal of finding a universal language of thought. It’s not Pinker’s mentalese. Their list of primes is the result of a decades-long program of ruthless reduction of concepts in a number of different languages to irreducible concepts; accepted primes have to exist in all languages with the exact same meaning and grammar (combinatorial properties). Concepts like Pinker’s square and red are not only not basic; there are languages that don’t even have the underlying concept of color on which “red” depends! On the other hand, the concepts because, move, I, you and someone are basic; they can’t be defined in simpler terms.
For the second piece, I’ve been following a group that’s working on “embedded cognition.” You might enjoy this post, which talks to your contention that computation (in terms of symbol processing) is not an appropriate way of thinking about behavior: http://psychsciencenotes.blogspot.com/2015/07/brains-dont-have-to-be-computers-purple.html .
“This” is a prime. It directs attention.
In chapter 13 of “Imprisoned in English” by Anna Wierzbicka (“Chimpanzees and the evolution of human cognition”), she describes recent work on finding a similar basis in semantic primes for chimpanzees. “This,” that is, the ability to direct attention, is one of them.
In NSM, “word” is a prime. It’s one of the last, if not the last, prime to enter the human line. (“Say” may have entered at the same time.) There are studies of people who grew up without language and who eventually discovered the power of words as adults. Their testimony is that the entire world changed; they can’t go back. Helen Keller is a good example; there are others. For example, Ian Tattersall summarizes Ildefonso’s experience as described in “A Man Without Words” (Susan Schaller) as follows (any transcription errors are mine):
“Schaller initially tried to teach Ildefonso the rudiments of American Sign Language (ASL), but soon perceived that he did not grasp even the concept of signs. Modifying her approach, she eventually achieved a breakthrough. Ildefonso, in a flash of insight, understood that everything had a name. ‘Suddenly he sat up, straight and rigid….The whites of his eyes expanded as if in terror….He broke through…. He had entered the universe of humanity, discovered the communion of minds.’” (Tattersall, 2012, p. 217, Masters of the Planet: The Search for Our Human Origins)
(*) https://www.griffith.edu.au/humanities-languages/school-languages-linguistics/research/natural-semantic-metalanguage-homepage
-----------
BLOGGER'S REPLY: Thanks for the fulsome comment and links.
Posted by: JohnRoth3 | August 03, 2015 at 09:04 PM