Extract From: AMD Newsletter Vol. 8, No. 1, 2011 (pages: 6--7)
http://www.cse.msu.edu/amdtc/amdnl/AMDNL-V8-N1.pdf
Response to: Dialog Column "Are Natural Languages Symbolic in the Brain?" by Juyang Weng

Meaning-bearers in Computers, Brains, and Natural or Artificial Minds

Aaron Sloman
http://www.cs.bham.ac.uk/~axs
School of Computer Science, The University of Birmingham
Birmingham, B15 2TT
England, UK

Cognition includes symbol-use, both externally and internally, to
express meaning or information (not in Shannon's sense). How that is
possible is an old problem with several old unsatisfactory answers,
such as: meaning is based on experience of things referred to,
meaning depends on causal connections between symbol and referent,
meanings are possible only because of social/cultural conventions,
and expressing meaning requires a human language.

Reference cannot require causal connection, since we can refer to
non-existent objects, e.g. "The elephant on the moon". There was a
largest mammal in Africa 500 years ago, but our ability to refer to
it does not require us to be causally linked to it: meaning, or
reference, can use, but does not require, causal links. That's why
we can ask questions without knowing the answers, select goals that
we may not achieve, and have false beliefs. If meaning depended on
causal links, most human thinking would be impossible. Reference to
genes, quarks and transfinite ordinals cannot depend on experience
of referents. Percepts, intentions, learnt associations, and
thoughts in pre-verbal children and non-human animals cannot depend
on human conventions or language.

Symbols, i.e. discrete meaningful tokens, such as words, are not the
only meaning-bearers. Spoken languages also use continuous
variation, e.g. of pitch, or intensity; and human sign languages use
continuous gestures. Maps, chemical formulae, equations, and
semaphore signals are among the external meaning-bearers we use.
Internal meaning-bearers also exist but cannot be found in brains
using physical sensing devices, any more than physical sensors can
detect spelling correction, or a threatening chess move, in a
multi-processing computer. Such events occur in virtual, not
physical, machinery. But they exist, since they have causal
consequences.

Most interesting contents of computers exist in running virtual
machines (running VMs), which are implemented in physical machinery,
using complex technology based on a tangled causal web of hardware,
firmware and software, that alters virtual-physical mappings
dynamically. Many tasks, including checking spelling, formatting
documents, fetching web pages, and eliminating malware, require VMs.
When a computer manipulates numbers the physical machine uses groups
of switches implementing bit-patterns that, depending on context,
represent numbers, instructions, pointers to complex structures, or
other things. Whether a bit-pattern represents a number or something
else depends on what procedures are active and what they are doing.
When a chess program creates a threat, that is not a physical state.
Likewise when I doubt whether someone is talking sense, the doubt is
not a physical brain state. "Threat" and "doubt" cannot be defined
in the language of physics, nor can their instances be detected by
physical sensors -- though in simple cases physical footprints may be
detectable under special conditions.
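
A minimal sketch may make the bit-pattern point concrete. The Python
fragment below (an editorial illustration, not part of the original
article) reads one and the same group of four bytes under three
different interpretations; nothing in the bytes themselves settles
which reading is "correct", only the procedure currently operating on
them does.

    import struct

    # One fixed "group of switches": four bytes, i.e. a single 32-bit pattern.
    bits = b'\x42\x28\x00\x00'

    # The same physical pattern under three different interpretations:
    as_int   = struct.unpack('>I', bits)[0]   # as a 32-bit unsigned integer
    as_float = struct.unpack('>f', bits)[0]   # as an IEEE-754 single-precision float
    as_text  = bits.decode('latin-1')         # as four one-byte characters

    print(as_int)    # 1109917696
    print(as_float)  # 42.0
    print(as_text)   # 'B', '(' and two NUL characters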

Over decades, human engineers found that complex control mechanisms
need to operate on entities in virtual machinery that, unlike
physical machinery, allows rapid construction and modification of
complex structures, and rapid garbage collection after use.
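
A rough Python sketch (again an editorial illustration, with invented
names such as choose_action) of what that buys: a program running in a
language VM can build an elaborate temporary structure for a single
control decision, modify it on the fly, and let the garbage collector
reclaim it as soon as it has served its purpose, with no rewiring of
physical hardware.

    import gc

    class Node:
        # A transient structure living in the virtual machine, not in fixed hardware.
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    def choose_action(percept):
        # Rapidly construct a small temporary tree of candidate actions...
        root = Node(percept, [Node(f"{percept}/option-{i}") for i in range(3)])
        # ...modify it while it is in use...
        root.children.append(Node(f"{percept}/fallback"))
        # ...and let it become unreachable the moment the function returns.
        return root.children[-1].label

    for percept in ("obstacle", "gap", "kerb"):
        print(choose_action(percept))

    gc.collect()  # structures built for earlier decisions are reclaimed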

Conjecture: long before that, evolution "discovered" the need for
representation and control functions distinct from, but implemented
in, physical mechanisms: so mental meaning-bearers exist in those
biological VMs running on brains.

Since perceptual and other contents must change faster than physical
parts of brains can be rearranged (e.g. walking with eyes open in a
busy city), biological minds need VMs. Their contents can include
symbols, for example if you solve equations in your head, rehearse a
Shakespearean sonnet, or wonder how brains work. Brain-based VMs can also
construct and manipulate diagrams, e.g. visualising the Chinese
proof of Pythagoras' theorem, or designing a new
information-processing architecture, or imagining the operation of a
threaded bolt rotating as it goes into a nut. Virtual machinery
includes, but is not restricted to, discrete, discontinuous
structures and processes. Interacting VMs on computers and attached
devices run concurrently, their state preserved in memory while CPUs
switch tasks. This relies on decades of complex design by hardware
and software engineers solving many different problems, including
self-monitoring and control. Very few people grasp the big picture
combining their efforts.
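
The state-preservation point can be illustrated with a small Python
sketch (an editorial illustration using ordinary threads, not a claim
about how any particular operating system works): each "virtual
machine" keeps its own state in memory, and that state survives every
occasion on which the scheduler hands the processor to one of the
others.

    import threading
    import time

    def virtual_machine(name, results):
        # This VM's state lives in memory and persists across context switches.
        state = 0
        for _ in range(5):
            state += 1
            time.sleep(0.01)   # the CPU is switched away; the state is preserved
        results[name] = state

    results = {}
    vms = [threading.Thread(target=virtual_machine, args=(f"vm-{i}", results))
           for i in range(3)]
    for vm in vms:
        vm.start()
    for vm in vms:
        vm.join()
    print(results)   # e.g. {'vm-0': 5, 'vm-1': 5, 'vm-2': 5}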

Biological evolution did something similar, though far more complex
and difficult to understand. Support for VMs used in human language,
in construction of percepts, in formation of motives, in specifying
actions, in generating, evaluating and executing plans, and in
learning probably took thousands of intermediate design steps, not
yet known to us. Clues exist in the competences of other animals and
in pre-verbal children (Karmiloff-Smith, 1992). Exactly what the VMs
are, how they evolved, how they are implemented in brains and what
their functions are, are still unanswered questions. We cannot find
answers simply by studying a narrow subset of products of evolution
(e.g. humans) nor a narrow class of robots that mimic some tiny
(often arbitrary) subset of animal competence.

Much thinking about language, mind and philosophy of science by
roboticists ignores most of what has already been written over
hundreds of years, including work on semantics in the last century by
philosophers of science. Previously, scholars could be familiar with
all the important prior published work when investigating a problem,
such as the problem of how meaning is possible. But that is no
longer possible. I call this the Singularity of Cognitive Catchup
(SOCC); see (Sloman, 2010a).

Does SOCC mean that on many important topics we are now doomed to
arguing in circles, producing only minor variations on previous
failed theories? Perhaps not, if we can find a new high-level
synthesis to reorganise our thinking. That may be possible if we
replace pointless debates (e.g. about embodiment) with deep
investigations of the evolutionary discontinuities in information
processing requirements and mechanisms, not just in humans but in a
wide range of organisms, including microbes, insects and other
animals. That will help us focus on the real design issues and help
us understand some of the solutions, as suggested in (Sloman,
2010b).

References:

Karmiloff-Smith, A. (1992) Beyond Modularity: A Developmental
Perspective on Cognitive Science, MIT Press, Cambridge, MA

Sloman, A. (2010a) Yet Another Singularity of Intelligence,
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/another-singularity.html

Sloman, A. (2010b) Genomes for self-constructing, self-modifying
information-processing architectures, SGAI 2010,
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#sgai10

