Commentary on
"The Symbol Grounding Problem Has Been Solved: Or Maybe Not?"
by Angelo Cangelosi
In AMD Newsletter (PDF), Vol 7, No. 1, 2010.
Ed. Pierre-Yves Oudeyer, pp. 2-3.
This commentary is on pp. 7-8.
Misquoting Santayana: Those who are ignorant of philosophy are doomed to reinvent it -- badly.
The "symbol grounding" problem is a reincarnation of the philosophical thesis of concept empiricism: "every concept must either have been abstracted from experience of instances, or explicitly defined in terms of such concepts". Kant refuted this in 1781, roughly by arguing that experience without prior concepts (e.g. of space, time, ordering, causation) is impossible. 20th century philosophers of science added nails to the coffin, because deep concepts of explanatory sciences (e.g. "electron", "gene") cannot be grounded in the claimed way. Symbol grounding is impossible.
Let's start again! All organisms and many machines use information to control behaviour. In simple cases they merely relate motor signals, sensor signals, their derivatives, their statistics, etc., with or without adaptation. For a subset of animals and machines that's insufficient, because they need information about things that are not, or even cannot be, sensed: e.g. things and events in the remote past or too far away in space, contents of other minds, future possible events and processes, and situations that can ensue.
Coping with variety and novelty in the environment in a fruitful way requires the use of recombinable "elements of meaning" -- concepts, whose vehicles may or may not be usefully thought of as symbols. So a relatively small number of concepts can be recombined to form different percepts, beliefs, goals, plans, questions, conjectures, explanatory theories, etc. These need not all be encoded as sentences, logical formulae, or other discrete structures. E.g. maps, trees, graphs, topological diagrams, pictures of various sorts, 2-D and 3-D models (e.g. of the double helix) are all usable for encoding information structures, for purposes of predicting, recording events, explaining observations, forming questions, expressing purposes, controlling actions, communicating, etc. Sometimes new forms of representation need to be invented -- as happens in science, engineering, and, I suspect, in biological evolution and animal development.
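To illustrate the recombinability point with a toy sketch (the concept names, the "attitude" labels, and the tree encoding below are all invented for this example, not a claim about how brains or robots actually encode anything), the same few concept tokens can be slotted into structurally different information states:

    # Toy illustration: a small repertoire of reusable concept tokens
    # recombined into structurally different information states.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InfoState:
        attitude: str    # "belief", "goal", "question", ...
        content: tuple   # a small tree built from concept tokens

    CONCEPTS = ("on", "cup", "table", "red")   # the shared repertoire

    belief   = InfoState("belief",   ("on", "cup", "table"))
    goal     = InfoState("goal",     ("on", ("red", "cup"), "table"))
    question = InfoState("question", ("on", "cup", "?where"))

    for s in (belief, goal, question):
        print(s.attitude, s.content)

The only point illustrated is the combinatorics: a small repertoire of reusable elements yields an open-ended space of structured contents, whether or not those elements are encoded as discrete symbols.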
How can conceptual units refer? Through their role in useful, powerful theories. In simple cases, a theory is a set of "axioms" with undefined symbols. There will be some portions of reality that are and some that are not models of the axioms. In each model there will be referents for all the undefined symbols. To reduce ambiguity by eliminating some of the models, scientists add "bridging rules" to their theories, linking the theories to experiments and measurements, which support predictions. This partially "tethers" the theory to some portion of reality, though never with final precision, so theories continue being refined internally and re-tethered via new instruments and experimental methods. An infant or toddler learning about its environment, which contains different kinds of stuff, things, events, processes, minds, etc., needs to do something similar, though nobody knows how, and robots are nowhere near this.
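A toy computational sketch of the tethering idea (the relation symbol, the tiny domain, and the "measurement" below are all invented for illustration): the bare axioms leave many finite structures as models, and a bridging rule tied to a measurement eliminates most of them.

    # Toy sketch: a "theory" = axioms over an undefined relation R;
    # candidate interpretations are subsets of domain x domain;
    # a bridging rule ties R to a (simulated) measurement.
    from itertools import product

    def transitive(domain, R):
        return all((a, c) in R for a, b1 in R for b2, c in R if b1 == b2)

    def irreflexive(domain, R):
        return all((a, a) not in R for a in domain)

    AXIOMS = (transitive, irreflexive)   # the theory leaves R undefined

    domain = ("x", "y", "z")
    pairs = [(a, b) for a in domain for b in domain]

    # Keep the candidate interpretations of R that are models of the axioms.
    models = []
    for bits in product((False, True), repeat=len(pairs)):
        R = {p for p, keep in zip(pairs, bits) if keep}
        if all(axiom(domain, R) for axiom in AXIOMS):
            models.append(R)
    print("models of the bare axioms:", len(models))

    # Bridging rule: R must agree with the ordering of a measured quantity
    # (the numbers stand in for the outcome of an experiment).
    measured = {"x": 1.0, "y": 2.0, "z": 3.0}
    def agrees_with_measurement(R):
        return all(((a, b) in R) == (measured[a] < measured[b])
                   for a in domain for b in domain if a != b)

    tethered = [R for R in models if agrees_with_measurement(R)]
    print("models surviving the bridging rule:", len(tethered))

In this tiny example the bridging rule happens to leave a single model; in real science the tethering is never so tidy, which is one reason theories keep being refined and re-tethered.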
Progress requires much clearer philosophical understanding of the requirements, and then new self-extending, testable designs for meeting them. Most of what has been written about symbol grounding can then be discarded.
More details are available online:
I know from previous experience that this is far too condensed to be intelligible to many of your readers, but if they follow the links at the end they may be able to make sense of it. Brevity has produced a very arrogant tone, alas.
This work is licensed under a Creative Commons Attribution 2.5 License.
If you use or comment on my ideas, please include a URL if possible, so that readers can see the original (or the latest version thereof).
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham