Symbols can be "pure," referring to things within the symbolic system, or they can be "applied," referring to the real world. How did speakers and listeners come to understand that linguistic symbols are not pure, but refer to a reality beyond themselves?
One of the big mysteries of language is also a mystery of perception—how does language (or perception) reach outside itself to have a meaning? I can say, "My pet dog loves to scratch itself," and you can know what I'm referring to even though the meaning reaches out beyond the sentence to describe the world. Similarly, I can see my pet dog scratching itself and know what I'm seeing even though the meaning reaches out beyond the image and its neurology to show me the world. The similarity is one more bit of evidence that language is perception by other means. It is social perception.
Psychologists refer to "naive realists," people who think what they see is simply out there and do not realize that some neurological process has created the sensations that support perception. But there are also naive symbolists, who do not realize that there needs to be some way to make the symbol user look beyond the symbol to reality.
I've been reading a short paper from the artificial-intelligence world—"Machine Symbol Grounding and Optimization," by Oliver Kramer—that considers the problem of "grounding," referring to something beyond the symbol. It is not Kramer's point, but his paper bolsters my belief that we are still a very long way from making machines that can perceive or use language as we do.
The paper begins by summarizing the author's view of perception, which he defines as "the transduction of subsymbolic data into symbols" [p. 1]. Subsymbolic data are the sensorimotor inputs from the sense organs. Perception converts the sensory input into "symbols," but don't think of symbols in terms of things like words, hieroglyphs, pictures, or musical notes. Symbols are "thought to be representations of entities in the world…. they are grounded by their internal semantics" [p. 1]. How does that work?
How, for example, does the system convert sensory impressions of a smartphone into knowledge that the world contains smartphones? Kramer says his perceiving machine has no "semantic resources" built into it, nor are they provided from the outside. The machine has to figure everything out by itself. This is a tall order, one that both nativists and behaviorists would say is too tall. No innate ideas and no teaching either!
In Kramer's attempt to build a machine clever enough to learn what sensory input means, he starts by randomly generating "clouds" of input over time. I'm not sure that meaningless input is the way to get a machine to appreciate that the input refers to something, but I suppose we have to begin somewhere.
The data is "clustered," organized into groups. Clustering depends on temporal information—e.g., the beginning and end of an input, the ordering of the inputs—and each cluster is given a symbol. For example, imagine input orogorom is symbolized by #.
What does # mean? orogorom.
And what does orogorom mean? #.
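To make the circle concrete, here is a toy version of the pipeline in Python. This is my own illustration, not Kramer's code; the two-dimensional inputs, the k-means clustering step, and the symbol names are all invented for the example.

```python
# Toy sketch of "subsymbolic data -> clusters -> symbols" (my illustration, not Kramer's code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Randomly generated "clouds" of subsymbolic input: three Gaussian blobs
# in a two-dimensional feature space, stacked into one stream.
clouds = np.vstack([
    rng.normal(loc=center, scale=0.3, size=(50, 2))
    for center in [(0, 0), (3, 3), (0, 4)]
])

# Cluster the stream and hand each cluster an arbitrary symbol.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(clouds)
cluster_to_symbol = dict(zip(range(3), "#@%"))
symbol_to_cluster = {s: c for c, s in cluster_to_symbol.items()}

# The "meaning" of a symbol is just the cluster it names, and vice versa.
print(cluster_to_symbol[labels[0]])   # e.g. '#'
print(symbol_to_cluster["#"])         # e.g. 0, and the circle closes
```

Notice that nothing in that little table points anywhere except back into the table.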
Clearly we need to break this circle and embrace some larger world. Kramer includes a couple of logical rules that help determine when two symbols are parts of another symbol. Also, if one symbol follows another within a certain time span, the first is said to cause the second.
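As I read those two rules, they amount to something like the following sketch. Again, this is a paraphrase for illustration; the event format and the time thresholds are my assumptions, not Kramer's.

```python
# Simplified reading of the two relational rules (my paraphrase; thresholds are invented).
from dataclasses import dataclass

@dataclass
class Occurrence:
    symbol: str
    start: float   # when the input began
    end: float     # when the input ended

CAUSAL_WINDOW = 1.0   # assumed maximum gap for "the first causes the second"
PART_GAP = 0.5        # assumed maximum gap for grouping parts into a compound

def causes(a: Occurrence, b: Occurrence) -> bool:
    """a is said to cause b if b begins within the window after a ends."""
    return 0.0 <= b.start - a.end <= CAUSAL_WINDOW

def parts_of_compound(a: Occurrence, b: Occurrence):
    """If a and b overlap or nearly touch in time, treat them as parts of a
    new compound symbol spanning both (my simplified reading of the rule)."""
    if b.start - a.end <= PART_GAP and a.start - b.end <= PART_GAP:
        return Occurrence(a.symbol + b.symbol,
                          min(a.start, b.start), max(a.end, b.end))
    return None

events = [Occurrence("#", 0.0, 2.0), Occurrence("@", 1.8, 2.5), Occurrence("%", 3.0, 3.5)]
print(parts_of_compound(events[0], events[1]))  # compound '#@' spanning 0.0 to 2.5
print(causes(events[1], events[2]))             # True: '%' starts 0.5 after '@' ends
```

Helpful as they are for building structure, both rules still relate symbols only to other symbols.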
At first these rules might look like the way to break the circle and get from the symbols to the reality. Build up enough symbols that are parts of other symbols and perhaps you eventually come to the idea that there is a world full of symbols, but why come to that? We have lots of symbol systems—music and mathematics—that are rich and complex but don't need to refer to anything beyond themselves.
If a machine can process a document without knowing that it is processing a document, or that the text in the document refers to anything, or that there is anything out there for the text to refer to, how can we expect a machine to make a leap of faith to conclude that beyond the "subsymbolic data" lies a source of this data and that the data tells us something about that source?
In all fairness to Kramer, I have to stress that this is my issue, not his. He considers his symbol "grounded" when a machine can reliably convert subsymbolic data into symbols. But his work has made me think about the "brain in a vat" issue from the reverse side.
The brain-in-a-vat question asks how we know that we aren't a free-floating brain receiving sensory inputs without there being an actual world beyond the vat producing those inputs. This is exactly the situation Kramer has produced. He has a computer sitting on a shelf somewhere and is filling it with randomly generated data that the computer converts into symbols. What Kramer's computer does not do, however, is think the symbols refer to some reality beyond itself. Where would it get such an idea? We, however, do have exactly that idea. We can ask the brain-in-a-vat question, but it's a game question. Most of us don't believe we are brains in a vat. Where did we get the belief that our sensations refer to a world beyond ourselves? I'll call that the brain-in-a-world question.
I have no solution to the question, but I want to point out that on this blog the problem only had to be solved once, and well before the evolution of language. Meaning on this blog comes from piloting attention. If I say, "Your smartphone is vibrating," I point the listener's attention to a smartphone. If I say, "Julius Caesar was butchered on the floor of the Roman senate," I direct the listener's imagination to visualize the scene. I don't have to do anything else. There is no need to say that the smartphone is real and out there in reality. We get that part because we get perception.
There are rival theories that make meaning something more ghostly and divorce it from perception. (You can look them up.) The key point here is that if you accept one of those theories, then your account of language origins is going to have to include some part where people got the fact that language isn't just a game of symbols; it reaches out to a larger world.
It's amazing enough that perception is grounded in reality. It seems to be asking a lot for a second separate system to have found a way to ground itself as well.
Very interesting post... I think you hit the nail on the head when you say that words refer to world events. For instance, the words "my dog" refer to what "my dog" typically looks like as she appears in my visual field. As these visual experiences occur day by day, they form and shape my invariant "my dog" memory, composed of not one but a group of past visual experiences.
I have an idea on how words might "call up" word meaning memories and vice-versa. What if both a word symbol and its corresponding experience (in this case a visual experience) were stored in the cortex as a particular pattern of neural oscillation? In addition, both memories (the word symbols "my dog" + the visual memory of my dog) form a larger pattern of neural oscillation. Thus, the activation of the word symbol memory "my dog" would activate the larger memory (NO pattern). This larger memory would in turn trigger the remainder of that memory, including the memory of what my dog looks like. Or conversely, the sight of one's dog would trigger both the invariant memory of what my dog looks like as well as the "my dog" word memory -- again via this associative neural oscillation mechanism.
Underlying this semantic retrieval "model" is the idea that subjective experience, from a third person neural perspective, takes the form of neural oscillation patterns. Each and every stored experience (word symbol experience or word meaning experience) takes the form of a particular "NO" pattern.
NO patterns to me are the perfect mechanism for storing and (quickly) retrieving associated memories from the neural substrate. Since both memories and NO patterns are associative in nature, perhaps memories ARE neural oscillation patterns. If so, then this would explain how a word could "call up" a past perceptual experience, and vice versa.
Posted by: Johnh | April 12, 2011 at 07:18 PM
Blair,
when you have time, look at this information on perceptual compensation and consequent semiomorphism (?).
http://www.nature.com/neuro/journal/vaop/ncurrent/abs/nn.2795.html
Posted by: Jerry Moore | April 12, 2011 at 10:55 PM