
Comments

Andi Chapple

hi Blair -

when you write that '“More than 100 algorithms to analyze the question” is hardly the way people do it', are you sure? could that be some of what is happening in the brain when we hear speech and come to understand it, even given that the predictive side of the brain has already whittled down the possibilities based on what it thinks is about to happen?

there are theorists of consciousness (I have Gerald Edelman in mind, I hope I'm right) who suggest that we end up with a perception or an understanding through mental processes of competing interpretations ...

best wishes

Andi
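A toy sketch of the "competing interpretations" idea Andi raises, in Python: several candidate hearings of an ambiguous phrase start with prior weights (the predictive whittling-down), incoming evidence rescores them, and the strongest wins. The phrases and numbers are invented for illustration; this is a cartoon of the general shape, not Edelman's actual model.

    # Toy sketch only: candidate interpretations of an ambiguous
    # sound compete; prediction supplies the priors, the acoustic
    # signal supplies the evidence. All values here are invented.
    priors = {"the sun's rays meet": 0.6,   # context predicted this
              "the sons raise meat": 0.3,
              "the suns raze mead": 0.1}
    evidence = {"the sun's rays meet": 0.8,  # fit to the sound heard
                "the sons raise meat": 0.8,
                "the suns raze mead": 0.2}
    # Each interpretation's score is prior * evidence; the percept
    # is whichever candidate ends up strongest in the competition.
    scores = {s: priors[s] * evidence[s] for s in priors}
    percept = max(scores, key=scores.get)
    print(percept)   # -> "the sun's rays meet"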

JanetK

Andi, I think that the question rests on how you define algorithms. I think of algorithms as step-by-step sequential procedures, one half of a separation between 'hardware' and 'software'. I think procedures in the brain (especially fast ones) are done with parallel processes, not sequential ones, and that there is no clear separation of 'hardware' and 'software' in the brain. Although it is not clear how the brain processes language, it is fairly clear that it is not by algorithms if we define the word as above.
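A minimal sketch of JanetK's distinction, with an invented task standing in for whatever the brain actually does: the same work written first as a step-by-step recipe and then as independent processes that could, in principle, all run at once.

    # Hypothetical example: the task details are invented.
    from concurrent.futures import ThreadPoolExecutor

    def interpret(fragment):
        # stand-in for one self-contained piece of processing
        return fragment.upper()

    fragments = ["who", "is", "bram", "stoker"]

    # Sequential: an ordered recipe - the classic sense of "algorithm".
    results_seq = []
    for f in fragments:
        results_seq.append(interpret(f))

    # Parallel: every fragment handled concurrently, no fixed order of steps.
    with ThreadPoolExecutor() as pool:
        results_par = list(pool.map(interpret, fragments))

    assert results_seq == results_par   # same outcome, different organisation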

www.google.com/accounts/o8/id?id=AItOawkyQS5sJI490gt15Sj58MbwLOMW2LgSwoY

Most Artificial Intelligence research is what it says - artificial - and Watson is a good example of a system which has little in common with human intelligence.

Computers started as glorified calculating machines, designed to process mathematical algorithms which, due to the nature of the task, are too difficult for humans to do quickly and accurately. Such tasks would have been meaningless to early man, and it is surprising that anyone ever takes them to be a good model of any aspect of natural human intelligence.

When it was discovered that there were money and careers to be made out of computers, there was a mad rush to get on the bandwagon and no time for anyone to do genuine "blue sky" research. No-one asked the question "Is it possible to design an electronic system which would interact with people as easily as a stored program computer interacts with mathematical algorithms?" It is now taken for granted that the stored program computer is the best way forward and that its inner workings are inevitably incomprehensible to anyone but very highly paid mathematical experts. Most artificial intelligence programs take it for granted that the way forward is to find the "right algorithms" - irrespective of how many years of algorithm writing by clever people are invested to make comparatively slow progress. The belief in the validity of the stored program computer approach has now reached the point where most children are taught about programming at school.

Over 40 years ago, after working on an extremely complex commercial data processing system, I decided that the whole philosophy of the stored program computer was wrong. I concluded that the real world is sufficiently unpredictable that, for many tasks, the idea of predefining them using the algorithmic approach is inappropriate. Not realising I was doing something new, I started to try and answer the "blue sky" question I posed above. I am currently describing what I did on my blog www.trapped-by-the-box.blogspot.com so won't go into detail here - but the key point was to start with the design of a user-friendly communication language in which the electronic processor (computer would be an inappropriate term) could tell the user what it was doing in the same language the user had used to describe the problem. For many very different applications, including artificial intelligence, the system could automatically deduce the answer without any obvious predefined algorithm being necessary. The approach could well have been a far more realistic model of human intelligence, and how it evolved, than the stored program computer, but I would not take the analogy too far without significant qualification. While my proposal was conceived as a serial processor approach, it should be easy to re-interpret it to work in a parallel network.
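As a rough picture of "deducing the answer without any obvious predefined algorithm", here is a toy in Python: facts and the question are stated in one uniform notation, and a single generic matcher does all the work. Every name in it is invented; it is my own illustration, not the commenter's actual system.

    # Toy only: facts and queries share one notation; the generic
    # matcher below is the entire "processor" - no task-specific
    # algorithm was written for the grandparent question.
    facts = [("parent", "ann", "bob"),
             ("parent", "bob", "cal")]

    def query(relation):
        # return every pair linked by two hops of the given relation
        return [(a, c)
                for (r1, a, b1) in facts if r1 == relation
                for (r2, b2, c) in facts if r2 == relation and b1 == b2]

    print(query("parent"))   # -> [('ann', 'cal')]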

The problem was that it was philosophically counter-intuitive to anyone who had been taught to program a computer (which now means almost everyone who was taught programming at school), and this made it very difficult to get research funding or to get past peer review - which was invariably carried out by stored program computer experts. After many years of banging my head against the wall I abandoned the research, effectively on health grounds.

After another 20 years, with the work largely forgotten, I decided to have a good look on the web to see what had happened in the meantime. So far I have come to the conclusion that many A.I. researchers, such as the writers of Watson, are still flogging the algorithmic approach. As a result I have decided to use my blog as a basis for exploring my old ideas further - as no-one else seems to be moving in the same direction.


--------------