


When someone says that the brain is a computer, they have to say what kind of computer. The brain is definitely not a 'general computer': it does not use sequential, step-wise algorithms as its processing mechanism; it is highly parallel; it is not digital; it is not in the family of Turing machines; and so on. Some say, 'Ah, but it can be simulated by a Turing machine.' I doubt that it can, but even if it were possible, a simulation is not the same thing as being a Turing-type computer. No magic, of course; it is a physical system, but not one reducible to a Turing machine either.
Blogger: I think if you check out Dennett's essay you will see that he argues the Turing machine can do whatever we can do.
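For what it's worth, the simulation claim at issue here can be made concrete with a toy sketch (everything below is invented for illustration, not taken from Dennett): a synchronous, "parallel" update of many interconnected units, carried out by an ordinary sequential loop. The next state of every unit depends on the current states of the others, so the loop computes the whole next state before overwriting the old one, which is exactly what a step-wise algorithm can do.

```python
def parallel_step(states, weights, threshold=0.5):
    """One synchronous update of all units, emulated sequentially.

    states: list of 0/1 unit activations
    weights: weights[i][j] is the influence of unit j on unit i
    """
    next_states = []
    for i in range(len(states)):
        # Each unit sums its weighted inputs from the *current* state,
        # so the update behaves as if all units fired at once.
        total = sum(w * s for w, s in zip(weights[i], states))
        next_states.append(1 if total > threshold else 0)
    return next_states

states = [1, 0, 1]
weights = [[0.0, 0.6, 0.6],
           [0.6, 0.0, 0.6],
           [0.6, 0.6, 0.0]]
print(parallel_step(states, weights))  # prints [1, 1, 1]
```

Whether such an emulation captures everything the brain does is, of course, the very point under dispute; the sketch only shows that parallelism by itself is no obstacle to sequential simulation.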


Interesting article! However, I do think you are missing one important point when it comes to understanding metaphors.

I think it is a mistake to think of language use as something separate from living in and experiencing the world. How we understand metaphors is a great example of that: to understand the Etch A Sketch metaphor, you have to know what an Etch A Sketch is and how it is used. You also need to know a lot of other facts about the world (what has been said earlier about Romney, how politics works, etc.).

A language simulator in a vacuum will not be able to decipher that kind of sentence, but that does not show that understanding language is not computational. You just have to include computation of a lot of material that is not directly related to language.
Blogger: I don't disagree with that last point, but I think sorting out which details are relevant is quite difficult.


I think in this case Dennett is (unusually for him) wrong. I give a quote below that is typical of the sort of analysis offered by those who have one foot in AI and the other in neuroscience. They believe they have to go to super-Turing machines, analog computing, and simulations because a Turing-type computer is not equivalent to the brain's processing. Siegelmann thinks that even Turing did not believe the brain was a Turing-type computer.
“The Turing machine was suggested in 1935–36 as a model of a mathematician who solves problems by following a specifiable fixed algorithm and using unlimited time, energy, pencils, and paper. Turing’s 1938 search for models of hypercomputers (that outperform this mathematician), together with his later emphasis on learning and adaptation, probably reflects his understanding that there are other kinds of computation beyond the static, fully specifiable algorithm (Copeland, 2000; Copeland et al., 1999). The brain, for example, could be perceived as a powerful computer with its excellent ability for speech recognition, image recognition, and the human ability to develop new theories. The nervous system, constituting an intricate web of neurons with 10^14 synaptic connections that adapt with experience, cannot be perceived as a static algorithm; the chemical and physical processes affecting the neuronal states, like other natural processes, are based on exact real values and are thus not specifiable by finite means.” (Hava T. Siegelmann, “Neural and Super-Turing Computing,” Minds and Machines 13: 103–114, 2003.)

I'm not an expert in this field, so I apologize if I'm off base here...

The main issue I have with the author's contention is the "primary"/"secondary" quality dichotomy. It seems like ever-increasing computing power is expanding our view of which qualities are "primary". Look at the chess example: chess strategy used to be viewed as judgement-based, something nebulous that a computer could not succeed at. As time went on, increasing data-processing power enabled computers to replicate this judgement, making chess skill into a primary quality. Who says the same is not true of language? Perhaps a computer with access to enough semantic data, information on current events, etc., would be able to understand neologisms.
Blogger: Of course the expectation is that more data will solve the problem and prove the difference was illusory, or at least irrelevant. But remember that this expectation is based on faith, not evidence. Is there a difference between the knowable and the computable?

Chris Crawford

The computer scientists have long since figured out that computers cannot understand language, and they have even developed an explanation of why that cannot happen. They talk in terms of knowledge domains as regions of thought that include all the data, relationships, and ideas associated with a particular subject. Thus, it should be possible to write a program that could do a pretty good job of conversing with a human about a highly circumscribed topic -- so long as the human didn't use metaphors beyond the ken of the topic.
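The "highly circumscribed topic" idea can be sketched in a few lines (a deliberately crude illustration of my own; the keywords and canned answers are invented): within its little domain the program converses plausibly, and the moment an utterance reaches outside that domain, it has nothing to say.

```python
# A toy "knowledge domain" for chess: keywords mapped to canned answers.
DOMAIN = {
    "opening": "A common opening develops the center pawns.",
    "castling": "Castling moves the king two squares toward a rook.",
    "checkmate": "Checkmate ends the game; the king cannot escape attack.",
}

def respond(utterance):
    """Answer from the domain if any keyword matches; otherwise fail."""
    for keyword, answer in DOMAIN.items():
        if keyword in utterance.lower():
            return answer
    return "I don't understand."  # anything outside the domain is opaque

print(respond("Tell me about castling."))
print(respond("Romney is like an Etch A Sketch."))  # metaphor beyond its ken
```

The failure mode is precisely the one described above: the metaphor draws on knowledge the domain was never built to contain.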

It is widely acknowledged that such circumscription is highly artificial, hence getting a computer to understand language is a task beyond our abilities as yet. It's not a matter of having enough storage or speed. The problem is that language mirrors reality, and so to put language into a computer, you need to put reality into the computer. Can you imagine what a class called "Reality" would entail?
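To take the question half-seriously, here is a deliberately naive sketch (all names and facts invented for illustration) of what a class called "Reality" might begin to look like: entities plus relations among them, stored as triples. The point is the scale, since every everyday fact about every everyday object becomes one more entry.

```python
class Reality:
    """A toy world model: a bag of (subject, relation, object) facts."""

    def __init__(self):
        self.facts = set()

    def assert_fact(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def knows(self, subject, relation, obj):
        return (subject, relation, obj) in self.facts

world = Reality()
# Just two of the countless facts needed to ground one campaign metaphor.
world.assert_fact("EtchASketch", "is_a", "drawing toy")
world.assert_fact("EtchASketch", "erased_by", "shaking")
print(world.knows("EtchASketch", "erased_by", "shaking"))  # prints True
```

Even this cartoon makes the difficulty vivid: understanding one offhand metaphor presupposes thousands of such entries, plus some way of deciding which of them are relevant.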

The question then becomes, could we ever specify enough of reality to get language working? There has been an ongoing project for the last twenty years to do precisely that. It is, as you can imagine, a humongous task, and the developers are not confident that they can complete it in the foreseeable future. Nevertheless, it is unquestionably something that CAN be done.

After all, if it's part of reality, then we can express it in language, and if we can express it in language, then we can write down a formal version of that expression, a formal version that CAN be understood by a computer.

I have no doubt that we will eventually get a workable subset of reality coded up. To call the result a database would be rather like calling human physiology a set of chemical reactions. It will be humongous, stupendous... very large. And processing something that big will take teracycles of processing. As I wrote, I have no doubt that it will be done someday -- but I also have no doubt that it will NOT be accomplished anytime in the next few decades.
