The philosopher Daniel Dennett sure resembles Charles Darwin.
The Atlantic magazine's website has a brief piece by Daniel Dennett on the relation between Alan Turing's notion of a computer and Darwin's theory of natural selection. The basic connection is that Darwin defined a mindless process that produces complex life forms, and Turing defined a mindless process whereby machines can solve any problem that has a computable solution.
Dennett is probably the most important philosopher arguing that the mind-body distinction is false and that mind can indeed be fully explained in terms of material mechanisms. It is an interesting issue for this blog because of the relation between mind and language. Can a mindless machine compute any and all sentences in a language?
We know as a matter of fact that the other animals of the earth cannot generate sentences, so, following Dennett, humans must have evolved new computational abilities to support language. Did we do that?
In my account of speech origins I propose that while our ape brains were adequate to get us speaking words and phrases, they were not enough to get us speaking true sentences or speaking about subjective processes. A true sentence consists of two focal points of attention united by a verb. An example is John Wilkes Booth shot Abraham Lincoln. To understand this sentence you have to focus on both Booth and Lincoln at the same time, normally an impossibility, but the two men are held together by the verb shot. We can imagine the scene with Booth, Lincoln, and the gun together and pay attention to all of them at once because we understand it as a unitary event. Apes in their use of sign language have shown no hint of being able to create true sentences like this.
In my account, speech was originally used to direct attention to the concrete world, but eventually we developed the ability to speak about subjective things too. For example, Jack wrestled with Jill's idea. This is a true sentence with two focal points—Jack and Jill's idea. But the unifying verb—wrestled—is a metaphor. Some other verb might be possible—e.g., struggled, grappled… but they are all metaphors. No concrete verb gets at whatever it is Jack is doing. We can use computer verbs like tried to process but that's a metaphor too. I believe the ability to use metaphorical verbs—and perhaps metaphors in general—had to evolve to produce modern language.
The implication of Dennett's essay is that we must have evolved Turing machines that could compute true sentences and metaphors. Yet I confess I doubt that was the case.
Most people assume that to speak you have to understand what you are saying, but Dennett's point is that understanding is not necessary if you have a sufficiently well programmed computer. Take, for example, machines that play chess. I am old enough to have seen that whole story develop. In the 1950s the mastery of chess was often cited as a task for humans alone. Efforts to program machines to play chess were so limited that "toy" versions of the game with a few squares and pieces were the best most programmers could manage.
Back in the 1960s there was a big argument over whether the best approach was to mimic human thought or to rely on the "brute force" of a computer's great speed and data storage. I seem to recall a story in Scientific American from those days saying that mimicking human thought was the more promising approach.
By the early 1980s chess playing machines were available and they used brute force. I bought one. It quickly became apparent that the only way a player of my feeble skills could beat the machine was to have a clear strategy in mind. A strategy was some long-range, general goal whose details I could not specify but which I could imagine sharply enough to guide my judgment about positions. With a strategy I could choose moves and eventually see my way to victory.
The other way to play chess is to use tactics, basically finding a set of specific moves that result in a stronger position. Chess machines use tactics. They follow moves and give the resulting outcome a score. They pick the move that leads to the highest score. I can only see a couple of moves ahead and if I relied on tactics alone, I would fail. The machine could evaluate more moves than I could, but with a strategy, if I stayed alert, I could win. I would make a move to support my plan and the computer would respond with an irrelevant move. Eventually, my strategy would overwhelm the machine. But by the early 1990s the story was different.
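The score-and-search tactics these machines used can be sketched in a few lines. Below is a minimal minimax search over a hand-made toy game tree, not real chess; the tree, its scores, and the function names are my own illustrative assumptions, not anything from an actual chess program:

```python
# Minimal minimax: score every line of play and pick the move with the
# best guaranteed outcome. Internal nodes are dicts mapping a move name
# to the resulting position; leaves are numeric scores for the machine.
def minimax(position, machine_to_move):
    """Return (best_score, best_move) for the side to move."""
    if not isinstance(position, dict):          # leaf: a scored outcome
        return position, None
    best_score, best_move = None, None
    for move, child in position.items():
        score, _ = minimax(child, not machine_to_move)
        better = (best_score is None
                  or (machine_to_move and score > best_score)
                  or (not machine_to_move and score < best_score))
        if better:
            best_score, best_move = score, move
    return best_score, best_move

# A tiny two-ply tree: the machine moves, then the opponent replies.
tree = {
    "a": {"x": 3, "y": 5},   # opponent answers with x, leaving 3
    "b": {"x": 6, "y": 4},   # opponent answers with y, leaving 4
}
print(minimax(tree, True))   # -> (4, 'b')
```

The machine never knows what a plan is; it only compares scores. Deepening this kind of tree is what eventually made human strategies look like mere tactics.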
The best chess applications by that time were able to look deeply enough to give even some grandmasters a rough go. For me it was hopeless. The better players would hang on until they reached the end game. Chess end games—when each side is reduced to a couple of pieces—are especially strategic. Typically, players try to reach a situation, then aim for another situation, and then perhaps another. Machines still had a hard time with that kind of purposeful behavior.
In May 1997, however, an IBM machine called Deep Blue defeated Garry Kasparov in a six-game match, winning two games, losing one, and drawing three. Kasparov was the greatest player of the time and possibly of all time. Deep Blue was able to compute so many steps ahead that it turned Kasparov's strategies into mere tactics, and it had stored vast numbers of endgame positions along with the move to make in each. Without even knowing that it was playing chess or what a pawn is, Deep Blue was able to outmaneuver a man who understood the positions more profoundly than any other person alive.
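The endgame part of that story amounts to retrieval rather than search: a precomputed table mapping positions to best moves. A toy sketch, with invented placeholder entries rather than real tablebase data:

```python
# Toy "tablebase": a precomputed dictionary from endgame position to
# best move. Real tablebases enumerate every position with a given set
# of pieces; these entries are invented placeholders for illustration.
TABLEBASE = {
    "K+R vs K, white to move, pattern 17": "Rd8",
    "K+Q vs K, white to move, pattern 3": "Qc7",
}

def best_endgame_move(position):
    # No evaluation, no search: pure retrieval.
    return TABLEBASE.get(position, "not in table")

print(best_endgame_move("K+Q vs K, white to move, pattern 3"))  # -> Qc7
```

Purposeful behavior at the board, reduced to a dictionary lookup.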
Can something similar be accomplished with language? Can a mindless machine produce literature? The chess story reminds us that what many once declared impossible can be done, but to do the impossible a machine must find a way to simulate purposeful behavior by generating a long string of computational steps. Can the production of meaningful sentences be reduced to blind steps?
Language evolves in two ways—one is like natural selection, lacking in purpose. Phonetic changes, for example, don't matter per se. A coin might be called a penny or a benny or a venny. The important thing is that sounds distinguish words enough that listeners can catch which coin is meant. To the extent that language can change without changing meaning, language can evolve mindlessly.
But some changes do result from purposeful changes in meaning. I have noticed that a new verb has appeared in the past month. Americans can now say things like Governor Romney has etch-a-sketched his position on immigration. (*For non-Americans I explain the etymology of etch-a-sketch beneath this post.) Until quite recently Etch-A-Sketch was a proper noun, but now I have heard Democratic commentators use it as a verb, a metaphorical synonym for opportunistic change.
Suppose we had a computer that was fully up to date on the American form of English as of May 1, 2012, and was then confronted in June with the etch-a-sketch verb. The computer's dictionary would identify Etch-A-Sketch as a proper noun carrying an -ed suffix and conclude that this is a noun being used as a verb. But what does it mean? Is there a step-by-step process going from the definition of the word as a noun to its use as a verb?
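The mechanical part of that analysis, stripping the -ed suffix and reclassifying the noun, is easy to imagine as steps. A minimal sketch with a made-up two-entry lexicon (the lexicon, its entries, and the function are my own assumptions, just to show the morphological step):

```python
# Sketch of the morphological step: strip an -ed suffix, look the stem
# up in a small lexicon, and flag a noun being used as a verb.
# The lexicon and its entries are invented for illustration.
LEXICON = {
    "etch-a-sketch": "proper noun",
    "photoshop": "proper noun",
}

def analyze(word):
    w = word.lower()
    if w.endswith("ed") and w[:-2] in LEXICON:
        return f"{LEXICON[w[:-2]]} '{w[:-2]}' used as a past-tense verb"
    return "unknown"

print(analyze("etch-a-sketched"))
# -> proper noun 'etch-a-sketch' used as a past-tense verb
```

Even this mechanical step is brittle: Photoshopped doubles its final consonant, so the naive suffix-strip above would miss it. And the step that matters, getting from the noun's definition to the metaphor's meaning, appears nowhere in the code.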
Sometimes there might be, when the verb simply means to use the noun, as in He Photoshopped his picture. But this new use of etch-a-sketch is metaphorical. I understand it by imagining the Etch-A-Sketch shaking and covering up an old image, ready to display something else. I also know the context of where the word came from and catch the implication of insincere opportunism as well.
Understanding a sentence is like watching a chess game and grasping the player's strategy, something chess-playing machines still cannot do. The observer sees a move and makes a leap, grasping the purpose supporting the move. In understanding a metaphorical sentence the listener must leap to the relevant references and see the purpose that justifies them.
Can a machine do this in some step-by-step manner? I don't like to say never, but I don't know what those steps would be.
Is the brain a computer?
Dennett takes it for granted that the brain is a computer, and although many agree with him the assumption is not proven. If human chess playing involves strategic thinking, as it seems to, then we think in a manner unavailable to chess-playing machines. The fact that machines can beat us at the game is no more evidence that our brains are inferior computers than the fact that automobiles can outrace us proves that our bodies are powered by inferior internal-combustion engines.
The assumption that the brain is a computer relies on our understanding of matter. Back when Galileo was laying down the rules of scientific thinking, he said it must stick to primary qualities, i.e., measurable qualities. Sensations, judgments, and purposes are secondary qualities and not to be included in scientific explanations of phenomena. That's why Lamarck's account of evolution was dismissed by people like Lyell and Darwin as unscientific. It appealed to a secondary quality, purpose. Darwin found the way to explain evolution without appealing to purpose.
But science's success does not mean that secondary qualities do not exist. It would be hard to persuade people that they don't have sensations, don't make judgments, and don't have purposes. The mind-body distinction is widely debated by philosophers and psychologists, with opponents of the distinction confident that they are attacking some form of spiritualism. Furthermore, the details of the distinction are vague. If you want to replicate the mind in a mechanical body, you are unsure how to prove you've done it beyond Turing's suggestion of seeing if a person can be fooled by a machine.
The primary-secondary quality distinction, however, has a more scientifically acceptable provenance and makes for a clearer challenge. To cross the primary-secondary quality divide, a mindless machine would have to follow a series of steps to get from primary-based knowledge (scientific, measurable knowledge) to secondary-based knowledge (humanistic, metaphorical knowledge).
One challenge of this test is to prove the presence of secondary knowledge. I happen to believe that elephants have sensations, make judgments, and behave purposefully, but I cannot prove it. Elephants may be mindless, as Descartes said they were. But Descartes also said that the presence of language proves the existence of the human mind. So let's look at language. Can a machine that detects only primary qualities compute sentences that express secondary qualities?
Language works at the conscious level, forcing things into our attention and appealing routinely to secondary qualities: Wow, she's a looker; Try this, it tastes great; I'm going to write a blog so I can understand the details of the subject; I think 'Casablanca' is better than 'Citizen Kane.' Are sentences like these computable? Come to think of it, the etch-a-sketch metaphor is also based on a secondary quality. The opportunism it implies is subjective, like beauty or grandeur. So computing a sentence about Romney etch-a-sketching his way to a position requires crossing the primary-secondary divide.
Even I am enough of a programmer that I could have a computer look into a database and pull out one of these sentences. But we know that is not how we create our own secondary-quality sentences. The etch-a-sketch sentence, for example, reflects a novel use of a noun as a verb. It cannot have been pre-stored in our brains. Either we have some way of computing our way across the divide, or our brains are not Turing machines.
Chess machines cross the strategy-tactics divide by extending their analysis so many steps ahead that the difference between strategy and tactics disappears. The difference between primary and secondary qualities seems more categorical, although in some cases the divide is easy to cross: a simple table can convert wavelength (primary quality) into color names (secondary quality). Other secondary qualities, like favorite color, contrasting colors, and an illustrator's palette, have no counterpart in the world of primary qualities.
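The easy case really can be a literal table. A sketch mapping wavelength in nanometers to a color name; the band boundaries here are approximate conventions of my own choosing, not precise physical facts:

```python
# Crossing the divide by lookup: map a wavelength (primary quality,
# in nanometers) to a color name (secondary quality).
# Band boundaries are approximate conventions, not precise physics.
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def color_of(wavelength_nm):
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible range"

print(color_of(532))    # -> green
print(color_of(1000))   # -> outside the visible range
```

There is no comparable table for a favorite color or a pleasing palette; this lookup only works because people who already had the secondary experience wrote down the correspondence.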
Another difference between chess and language is that chess is a complete system. The rules refer to moves made by defined pieces on a prescribed board. Language is not complete. At any moment a speaker might say something like His face turned the color of Grandma's cherry pie. Analogies can come from anywhere. Yet they cannot come from everywhere. His face turned the color of Grandma's first novel will not do. Computing all the sentences of a language, and only the sentences of a language, may be impossible.
*The Etch-A-Sketch is a popular drawing toy. It has a silver screen on which a user draws images by turning two knobs. To produce a different drawing you shake the toy and the silver powder inside erases the old image.
Earlier this year, when Governor Romney was campaigning against other Republicans to get the party nomination for president, he advertised himself as a "severe" conservative. At some point a reporter asked one of Romney's aides how they planned to appeal to less conservative voters when the whole nation was voting. The aide replied that presenting a less conservative Romney would be as easy as changing a picture on an Etch-A-Sketch. There was a hullabaloo about the apparent hypocrisy and cynicism of the campaign that this image implied.
A bit later spokespeople for the Obama campaign began using a new verb: to etch-a-sketch.