
Selected Books by Edmund Blair Bolles

  • Galileo's Commandment: 2500 Years of Great Science Writing
  • The Ice Finders: How a Poet, a Professor, and a Politician Discovered the Ice Age
  • Einstein Defiant: Genius vs Genius in the Quantum Revolution



David Rose

Another pithy interpretation! I would only add that the speakers’ attention is not only on the topic, i.e. their proposed food gathering activity, but also on each other, as they propose, agree, negate and concede, and thirdly on directions in the context of speaking, with ‘there’, ‘over here’, ‘this place’ etc. These three foci of attention illustrate the three broad functions of language in Halliday’s model – construing experience as activities, enacting our social relations, and presenting meanings so they make sense in the context.

Joe Ardent

Can you clarify (that is, buttress and elaborate on) the statement, "The computational model of the mind forces attention into a passive role..."? Specifically:

- what is the "computational model of the mind"? In this context, what is "computation"? Are there other models of the mind that embrace computation as a model for the actions of the brain, and if so, do they also force attention into a passive role? Why?

- how exactly is attention-as-volitional-activity precluded by the stated "computational model of the mind"? Can this axiom be interpreted to mean that attention-regulation will never be possible for agents based on a computer (either virtual or embodied)? Or, attention itself is beyond the reach of such agents, since attention requires some sort of self-agency, which is axiomatically only possible with non-computational agents?

As I've said before, I realize that these lines of inquiry are not directly related to the origin of speech/language in humanity as a means of externalizing the mind so that it can be shared among more than one brain. But statements such as the one in this post about the limits of the computational model are a distraction at best.

Joe Ardent

Sorry, I meant to say that such statements are "non sequitur at best".

Giorgio Marchetti

I think that when dealing with attention and consciousness, their workings and effects, the traditional computational models of the mind (such as, for example, Johnson-Laird’s and Shallice’s) based on the concept of “information flow” or “information process” are inadequate.

Roughly speaking, these models conceive of our mental activity as a flow that has an input (our senses), some kind of elaboration (attention, perception, memory, central processing, etc.), and a final output (our conscious experience).

They treat attention simply as a mechanism that lets information come in and be processed by some other device, for example an operating system, or a central processor, whose working yields conscious experience. Attention is forced then into a passive role (gating, filtering, etc.).

These models delegate the core problem of consciousness (how can we explain the fact that we have subjective, direct experiences of objects?) to a “final” organ (i.e., a central processor).

In my opinion, this way of treating consciousness is inadequate, because there cannot be a “final” device towards which information flows, unless we are willing to consider this final device as a conscious agent itself, or a homunculus, thus entering a vicious circle.

These models can certainly explain how information is processed, the changes it undergoes, the time needed to process it, and so on. However, they do not and cannot explain what a subject feels as it processes information, that is, how its conscious states start forming, develop, and change as a consequence of what it does.

This is because information is made up of ready-made symbols representing the external world, whose meaning derives not so much from the importance they have for the subject’s formation and development as from the importance they have for the researcher’s investigations. The information-processing approach, in fact, is based on the assumption that the mind processes representations that already have their own meaning, independently of the history of the subject; it does not investigate how those representations acquire a meaning for the subject, or how the subject builds meaning (a critique of the information-processing model that has also been made by Searle).

The information-processing level of analysis examines how some parts of a subject’s organism - sense organs, attention, memory, central processor, and so on - transform information, but does not examine how a sentient subject transforms itself as it processes information.

In order to explain conscious experience, a different approach is needed: what I call “a first-person perspective” must be taken. In my model, attention plays an active, fundamental role: it is the basis of consciousness, and it allows the subject to evolve and form (for more details about this model, see my article “A theory of consciousness” on my website).

In my model, attention is not the only organ: an essential role is also played by what I call “the schema of self”, which embodies all the kinds of competence and abilities - linguistic, social, physical, and so on - that the organism innately possesses or has acquired during its life. It regulates the activities of the organism according to the hierarchy of principles and goals it incorporates, at the top of which there is one fundamental principle: the principle of survival. Operationally, the principle of survival can be expressed as follows: “operate in order to continue to operate”.
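As a loose illustration only - the class and method names below are hypothetical inventions, not part of Marchetti's formal model - the regulatory role such a "schema of self" is described as playing could be caricatured as a prioritized goal hierarchy whose root is the survival principle:

```python
# Hypothetical sketch of a "schema of self" as a goal hierarchy.
# All names here are illustrative assumptions, not Marchetti's model.

class Goal:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # lower value = higher in the hierarchy


class SchemaOfSelf:
    def __init__(self):
        # The principle of survival sits at the top of the hierarchy:
        # "operate in order to continue to operate".
        self.goals = [Goal("survive", 0)]

    def add_goal(self, name, priority):
        # Acquired competences and goals are subordinate to survival,
        # so their priority is forced below the root (priority >= 1).
        self.goals.append(Goal(name, max(1, priority)))

    def next_goal(self):
        # Regulate activity according to the hierarchy: always pursue
        # the highest-priority (lowest-numbered) goal first.
        return min(self.goals, key=lambda g: g.priority)


schema = SchemaOfSelf()
schema.add_goal("gather food", 2)
schema.add_goal("avoid predator", 1)
print(schema.next_goal().name)  # -> survive
```

The point of the caricature is only that every subordinate goal is evaluated under the root principle; it says nothing about how attention or consciousness would arise from such a structure.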

I believe that a conscious machine can be built on the principles stated by my model (at least, I work with this aim: I develop my model and my analyses on the principle that they must work, operate, and so be implementable somewhere).

I hope to have (at least partly) clarified Ardent’s doubts.

Giorgio Marchetti

Just a comment on David Rose’s observation. Correct: speakers’ attention is not only on the topic, but also on each other, and thirdly on directions in the context of speaking.

It must be added that, most probably, while the first and third forms of attention are quite primitive, the second is not (at least insofar as the speaker’s ability to focus on himself or herself is concerned).

According to some authors (see Felice Cimatti), what differentiates animals from human beings is precisely the fact that the latter use language not only to communicate their own intentions, or the events happening in the environment, to other organisms, as the former do, but also to communicate with themselves, directing their own attention to themselves and to their attentional system.

Inner speech functions as an artificial behavior that gives the organism the possibility of consciously perceiving itself, and therefore of being self-conscious: a characteristic that humans surely have, but that most animals lack.


Joe Ardent


Thank you for the very lucid and informative response. I agree that the models you described are inadequate for explaining or modelling consciousness.

However, those are "certain computational models of the mind", not, "THE computational model of the mind." I do not wish to speak for Blair, but I believe he meant exactly what he said: the mind is not computable, and the brain is not a computer; the phenomenon of consciousness or general intelligence is fundamentally inaccessible to machines.

Needless to say, he and I disagree about this :)

David Rose

Re Giorgio's evolutionary timeline for language functions, a comment and question...
1. Animals and human infants attend to things in the world (including their own bodies) or to each other, but it seems they cannot attend to both simultaneously. Humans develop the capacity to attend to both at around 9 months - so-called joint attention. At the same time they begin to develop an idiosyncratic protolanguage of vocalisations and gestures, with functions like...
'give me that'
'where are you?'
'that's interesting'
So the capacity to jointly attend to things and places in the context, and to consciously direct others’ attention, seems to be uniquely human and associated with the beginnings of language, indeed some kind of language is required to do it. So I’d suggest it is the most recent development (rather than primitive).
2. Vygotsky tells us that inner speech is the internalisation of 'outer speech', developing well after the infant has learnt to speak. Does this suggest that language is required for 'the organism to consciously perceive itself'? What are the implications? Is one that self-perception is modelled on interactions with others, which in humans are always linguistic interactions?

David Rose

Sorry, I should have said 'evolutionary timeline for three foci of attention', rather than 'language functions'.

Giorgio Marchetti

As to Rose’s question about self-perception: some scholars think that language is a necessary precondition for self-consciousness to take place. I think that there can be (at least a rudimentary form of) self-perception even without language: images, sounds, and smells all represent alternative means an organism has to artificially perceive itself. Animals seem able to perceive (up to a certain level) themselves, their own bodies, and parts of their bodies (see, for example, the experiments that use mirrors).

In my opinion, self-portraits in art are further evidence that self-perception can occur without language (of course, in a more elaborate and advanced way than it occurs in animals).

Undoubtedly, however, language is and remains the most effective, articulate, powerful and economical way a human being has of perceiving and representing himself, of thinking about himself, of setting goals for himself, and so on. As such, it allows human beings to do things that animals cannot do.

Language is essentially a social tool: it is used by speakers to communicate with each other, to share opinions, ideas, activities, etc. Mothers use language to teach their babies what they can and cannot do. A baby or young child understands what he/she is, and what he/she will become in the future (or what society wants him/her to become), thanks to and through (also and prevalently) the words that others (parents, relatives, teachers, etc.) use, and the speech they produce. One is (also) what one is told to be (see “The Social Construction of Reality” by Berger and Luckmann on this topic). One models oneself on interactions (prevalently verbal) with others.

When one uses language to perceive oneself, think about oneself, set goals for oneself, etc., one unavoidably finds oneself immersed in a social context. Language has personal pronouns - I, you, he, etc.: language throws you into a social arena. Consequently, the use of language to perceive oneself makes one a social being, a social self: a being modelled on interactions with other beings.


