Disembodied Motionless Intelligence
Why offline intelligence is as important as online intelligence for many animals and future robots.
Early AI researchers grossly underestimated the difficulties of their tasks, and their failed predictions provoked a stream of rival fashions claiming to have the answer, e.g. based on neural nets, evolutionary computation, "behaviour-based" design, subsumption architectures, embodiment, enactivism, situated cognition, morphological computation, dynamical systems, autopoiesis, Bayesian learning, deep learning, use of "Big Data" (not all in that order), and probably others. Some of the ideas had already been thought of long before suitable computers were available, and were discussed by Marvin Minsky in his 1961 survey "Steps Toward Artificial Intelligence" -- more than half a century ago.
All those fashions ignore complexities in what needs to be explained, so that each tries to explain too little -- focusing only on problems for which the fashion is suited. My presentation will identify a class of sub-problems involving "offline intelligence": uses of intelligence without performing any visible actions, though the long term consequences for future actions may be profound. Examples include making mathematical discoveries, such as the discoveries leading to Euclid's Elements, perhaps the most important book ever written. Others include wondering whether, wondering why, trying to remember, composing a poem, enjoying a poem, designing a building, debugging a design, trying to understand why someone is late, and many more.
There are also unconscious examples, such as deriving grammar-based language understanding from pattern-based language understanding, then extending the grammar-based version to handle exceptions: a difficult software engineering feat performed unconsciously by children.
I think there are many more, including toddler theorems discovered and used, without anyone noticing (though Piaget noticed some).
A key methodological step is adopting virtual-machine functionalism as central to the research framework. This is part of the very long term Turing-inspired Meta-Morphogenesis project, which needs much help.
Since the earliest days of AI, attempts have been made to design (a) machines that move and/or manipulate things (e.g. W. Grey Walter's tortoise, the Stanford University robot, Shakey, Edinburgh University's Freddy I and II robots) and (b) machines that think, with or without abilities to move, e.g. programs that play games like chess or draughts (checkers), construct plans to achieve goals, solve mathematical problems, debug computer programs or have conversations with humans. Some used only textual computer interfaces and some (like the Stanford and Edinburgh robots) were capable of sensing and acting in the environment. However, in the 1960s and 1970s, robots were drastically limited by available computing power (e.g. memories measured in kilobytes and speeds in kilohertz), with very primitive (and big and heavy) sensors and motors compared with what is now available. For various reasons, not only connected with computer power, it had been easier to put men on the moon than to get a robot to make tea.
There were two closely related strands to this work on intelligent machines, applied (i.e. engineering) and explanatory (i.e. scientific) research, though they have recently diverged, partly because of funding pressures.
The applied (engineering) strand aims at production of new useful machines including, for example, car assembly robots (relatively easy) and generic helpful robot servants or carers for the elderly (far beyond the current (2014) state of the art).
The explanatory (scientific and philosophical) strand of AI aimed to produce powerful new forms of explanation for many aspects of natural intelligence, in humans and other animals, though McCarthy's label "Artificial Intelligence" (which he later regretted) seemed to point only at the first strand. The two strands have important overlaps, though it is possible to make impressive, but limited, progress in the practical strand while ignoring most of the problems of the explanatory strand. My focus has always been on the latter.
Unfortunately many extremely clever researchers in the early days of AI grossly underestimated the difficulties of the tasks, and their failed predictions about timescales were interpreted as failures of the AI project. That provoked a stream of rival fashions claiming to have the answer, e.g. based on neural nets, evolutionary computation, autopoiesis, "behaviour-based" design, subsumption architectures, embodiment, enactivism, situated cognition, morphological computation, dynamical systems, Bayesian learning, deep learning, use of "Big Data", and probably others. (Some of the ideas had already been thought of by 1962, long before suitable computers were available. See the survey in Minsky(1963).)
I'll argue that all those fashions ignore complexities in what needs to be explained, so that each tries to explain too little -- focusing only on problems for which the fashion is suited (an extreme case being the use of a passive walker robot -- illustrated here -- to demonstrate that cognition, or software control, is not required for effective behaviour).
In the enactive/embodied/situated/... tradition, combined with various technologies (and mathematical theories), a vast amount of research is now concerned with trying to understand how intelligent robot behaviour can be produced by agents embedded in a dynamic environment with which they interact continuously, or intermittently. Some of the environments involve only "passive" physical objects that are perceived, manipulated (picked up, caught, hit, balanced, thrown, stepped over, etc.), used as means to some end, avoided, and learnt from. In others the environment includes interaction with humans, with or without use of language (textual, spoken, or sign language), with other animals, or with other robots.
Often the proposed solution is based on a learning system that can be trained by interacting with the environment, using statistical relations in sensory and motor signals, and possibly also reward signals. That includes relations at different levels of abstraction. Usually the basic signals are assumed to be scalar values capable of being represented by numbers, e.g. strength or frequency of neural impulses, numerical sensor measures, or motor signals.
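To make that concrete, here is a minimal sketch (illustrative only: the environment, the hidden gain and the hill-climbing rule are all hypothetical) of such a learner: scalar sensor values in, scalar motor signals out, with a scalar reward driving adjustment of a numerical policy:

```python
import random

# Toy "statistical" sensorimotor learner: a single scalar weight
# maps a sensor reading to a motor signal, and a scalar reward
# (negative error) drives stochastic hill-climbing. Everything
# here is a hypothetical illustration, not any particular system.

HIDDEN_GAIN = 1.6      # the sensorimotor relation to be discovered

def average_reward(weight, trials=20):
    """Mean reward of a candidate policy over noisy trials."""
    total = 0.0
    for _ in range(trials):
        sensor = random.uniform(-1, 1)          # scalar sensory signal
        motor = weight * sensor                 # scalar motor signal
        desired = HIDDEN_GAIN * sensor          # what the task requires
        total += -abs(motor - desired)          # scalar reward signal
    return total / trials

weight = 0.0
for step in range(500):
    candidate = weight + random.gauss(0, 0.1)   # perturb the policy
    if average_reward(candidate) > average_reward(weight):
        weight = candidate                      # keep improvements only

print(f"learned gain: {weight:.2f} (true gain {HIDDEN_GAIN})")
```

Whatever the particular learning rule, the point is that everything such a system knows is encoded as statistical relations among numerical signals -- the kind of assumption questioned later in this document.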
Online intelligence
There are now many demonstrations that online learning can produce "online
intelligence" involving sophisticated real-time control of motor signals to
produce required percepts, in a (relatively) narrow class of behaviours, e.g.
balancing a pole, catching a ball, moving through doorways, or picking up
objects on a table. However, the breadth of environments and behaviours varies.
For an impressive sample see the videos of the Boston Dynamics robots,
especially BigDog here:
http://www.bostondynamics.com/
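The core of such online intelligence is a tight sensing-acting loop. As a minimal illustration (the physics, gains and constants below are hypothetical simplifications), here is a closed-loop controller balancing a linearised inverted pendulum, emitting a corrective motor signal on every cycle:

```python
# Minimal "online" control loop: a PD controller balancing a
# linearised inverted pendulum. Physics and gains are hypothetical
# simplifications chosen only to illustrate closed-loop control.

DT = 0.01            # control-cycle time (seconds)
GRAVITY_GAIN = 9.8   # linearised destabilising term
KP, KD = 40.0, 8.0   # hand-picked proportional and derivative gains

angle, velocity = 0.2, 0.0       # initial tilt (radians) and angular velocity
for step in range(500):
    torque = -KP * angle - KD * velocity    # act immediately on the percept
    accel = GRAVITY_GAIN * angle + torque   # simplified pendulum dynamics
    velocity += accel * DT                  # crude Euler integration
    angle += velocity * DT

print(f"final angle: {angle:.6f} rad")      # near zero: the pole is balanced
```

Note that nothing in the loop represents why the strategy works, or what else might have been possible -- the contrast drawn below.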
Key ideas about complex control systems preceded work in AI and robotics, including theories of "cyberneticists" such as Norbert Wiener, Ludwig von Bertalanffy, Heinz von Foerster, William T. Powers, and others.
Many of those key ideas are innocently re-invented (despite the availability of powerful search engines). Theories in this tradition assume that intelligence is the management of increasingly sophisticated sensory-motor control loops in real time (i.e. dealing with continuous motion in the environment). The more advanced versions postulate (or demonstrate) complex networks of control loops. When "cognition" or "intelligence" is prefixed with labels like "embodied", "enactive", or "situated", the implication, in many cases (though not necessarily all), is that cognition or intelligence is concerned with such real-time control.
I think the early AI researchers (who knew of the work of Wiener, for example) must always have realised that this is part of intelligence, but their interests were mostly focused on other aspects of intelligence -- aspects which some proponents of the newer trends claim don't exist.
In the talk I'll present examples showing that a great deal of animal intelligence, especially in humans, does not involve that sort of immediate control of interaction with the environment. The same will need to be true of future human-like, or squirrel-like, robots.
Offline intelligence
The alternatives are forms of "offline intelligence" that require no immediate
interaction with the physical environment. In many cases that can even involve
shutting off perceptual information in order to focus on a task or problem, e.g.
lying supine, with eyes shut, while testing or proving a mathematical
conjecture, composing a poem or tune, trying to understand what went wrong in an
unpleasant argument with a friend, trying to work out why a machine did not
behave as expected, remembering a holiday a year ago, trying to understand how a
Beethoven quartet is able to take control of one's mind, thinking about how to
improve on a widely used programming language, thinking about a new idea for
designing buildings, and many more.
The chief designer of a new building or airliner is not concerned with what physical movements he or she (or anyone else) will need to perform in order to produce the hoped-for result.
Every mathematician knows that there are some problems for which human short term memory is inadequate and some of the thinking has to be delegated to part of the environment, e.g. diagrams in sand, formulae on paper, and nowadays useful computer tools.[*] But that manipulation of the environment is usually part of a thought process about something else, and in principle could be done without manipulating the environment.
Even our ancestors making shelters, clothing or tools did not have to be continually manipulating things while thinking about what to do. They could avoid getting bogged down with details of their own movements when considering what needs to be cut, moved, stretched, folded, assembled, etc., no matter by whom, and no matter exactly how.
Offline intelligence manifests itself quite early in children, to those who look for it. It's easier to detect after they have worked out how to talk, like the child of a colleague who, after learning about volcanoes shooting material into the air, asked whether some could suck stuff back through the hole. (Perhaps a future earth scientist who will study subduction zones.)
Basic kinds of offline intelligence
A crucial part of many types of offline intelligence, to be illustrated in the
talk, is discovering which types of structures and
processes are possible, or impossible[*]. Such thinking
about possible and impossible structures and processes is one of the likely
routes to early discoveries in Euclidean geometry and topology. Such discoveries
concerning what is and is not possible in the environment include what could be
called "toddler theorems" illustrated below.
Sometimes possibility and impossibility (necessity) are relative to generative features of a type of construction kit, a notion discussed here.
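As an illustration of that idea (the kit below is entirely hypothetical and deliberately trivial), consider a construction kit whose only pieces are rods of lengths 3 and 5, joined end to end. Which total lengths are possible is fixed by the kit's generative rules, and impossibilities can be derived from those rules rather than discovered by repeated failed attempts:

```python
# Toy "construction kit": rods of length 3 and 5 joined end to end.
# Which total lengths are possible? Impossibility here is derived
# from the kit's generative rules, not from statistics over trials.
# (A hypothetical illustration of possibility relative to a kit.)

PIECES = (3, 5)
LIMIT = 20

reachable = {0}                       # the empty construction
frontier = [0]
while frontier:
    length = frontier.pop()
    for piece in PIECES:
        new = length + piece
        if new <= LIMIT and new not in reachable:
            reachable.add(new)
            frontier.append(new)

possible = sorted(reachable - {0})
impossible = [n for n in range(1, LIMIT + 1) if n not in reachable]
print("possible lengths:  ", possible)
print("impossible lengths:", impossible)   # 1, 2, 4, 7: never buildable,
                                           # since adding pieces only adds length
```

Exhaustive reasoning over the generated space shows that lengths 1, 2, 4 and 7 can never be built, however many attempts are made -- an impossibility established by examining the kit, not by collecting statistics.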
Are current models of computation (information-processing) adequate?
Many of the activities involving offline intelligence are very hard to replicate
in computers using current ideas about computation and current theories about
how minds, or brains, work. Examples include some cases of geometrical and
topological reasoning about possible and impossible spatial structures and
processes -- some discovered thousands of years ago and presented in
Euclid's Elements. Non-human examples include the
unexplained
visual capabilities shown by many animals when hunting, feeding or building
nests -- e.g. weaver birds[*]. (Most vision researchers
grossly underestimate the variety of functions of vision in humans and other
animals, summarised
here,
and therefore underestimate the complexity of the mechanisms required. A new
paper on functions of vision, including unnoticed functions, is in preparation.)
Some mathematical discoveries seem to be made by young children before they have the meta-cognitive mechanisms required to notice their discoveries, think about them, or talk about them (Toddler Theorems). Unfortunately the development of that mathematical potential is often killed by mathematics teaching in school, though there is no lack of opportunity for it to develop in everyday environments, if suitably encouraged[*].
For example, why is it a bad idea to start putting on a shirt or sweater by inserting a hand in a cuff and pulling the sleeve up over the arm -- but not a bad idea to start by pulling the waist opening over your head (though that can go wrong)? This example is discussed further here. The point is that being able to put on a sweater in a particular situation resembling past training situations uses online intelligence. But thinking about why some untried strategies will not work, or realising that some strategies that have not been tried will work, and understanding why, uses offline intelligence -- usually "multi-step offline intelligence", as in the shirt example, or thinking about how to see over a wall that is too high for you to walk up to and look over.
Those examples require the ability to think about an under-specified space of possibilities by abstracting from some of the details, and at that abstract level discovering impossibilities or necessary consequences of realising certain types of possibility -- e.g. putting your hand into the cuff of a shirt sleeve and then pulling the sleeve up the arm, hoping that will be the first step towards wearing the shirt. You don't need to have tried and failed in order to understand why that won't work. I don't think there is any current theorem prover that could generate such a proof on the basis of what would normally be regarded as a complete set of axioms for geometry and topology.
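One crude way to mechanise a fragment of this is to represent the situation declaratively and search every possible action sequence: if none reaches the goal, the strategy's failure has been established without any physical trial. The toy domain below is entirely hypothetical and deliberately oversimplified (it stipulates, rather than derives, the fact that a hand through the cuff blocks the neck opening); it illustrates only the shape of such reasoning, not the real constraints of clothing:

```python
from collections import deque

# Hypothetical, oversimplified "sweater" domain: we stipulate that
# a hand already through the cuff bunches the sleeve and blocks
# pulling the waist opening over the head. Exhaustive search then
# proves the cuff-first strategy cannot succeed -- no trial needed.

ACTIONS = [
    # (preconditions, facts that make the action impossible, effects)
    ({"holding sweater"}, {"hand through cuff"}, {"head through neck"}),
    ({"head through neck"}, set(), {"arm in sleeve"}),
    ({"holding sweater"}, set(), {"hand through cuff"}),
]

def reachable(start, goal):
    """Breadth-first search over every possible action sequence."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if goal <= state:
            return True
        for pre, forbidden, add in ACTIONS:
            if pre <= state and not (forbidden & state):
                new = frozenset(state | add)
                if new not in seen:
                    seen.add(new)
                    queue.append(new)
    return False          # whole space searched: the goal is unreachable

goal = {"head through neck", "arm in sleeve"}
print(reachable(frozenset({"holding sweater"}), goal))                       # True
print(reachable(frozenset({"holding sweater", "hand through cuff"}), goal))  # False
```

Note what is missing: a human does not need the blocking fact supplied as an axiom. Seeing why the bunched sleeve must block the neck opening is exactly the unexplained geometric and topological capability at issue here.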
There are forms of offline animal intelligence (in humans, corvids, squirrels, elephants, cetaceans, octopuses....) that we don't yet understand (though I think some key ideas can be found in Karmiloff-Smith (1992), especially the ideas about domains and 'Representational Redescription').
If you pull two ends of a string apart, why do you sometimes, but not always, get a knot, and is it possible to tell before you start pulling which it will be? (Imagine what it's like to be a weaver bird!) If a shoelace goes through a hole in a shoe and you want to remove it, pulling either end should work. So why doesn't holding and pulling both ends work even better? What if it is a very stretchable shoelace, so that you can pull both ends away at the same time? This is a topological theorem that should be obvious to many who have never studied or heard of topology. What is it about brains that makes that "obviousness" possible? (Though, as most mathematicians know, and Lakatos documented in detail, mathematicians can make mistakes.)
All researchers on vision that I know of focus only on a subset of the functions of biological vision, usually omitting the roles of vision in mathematical reasoning, which, as my examples illustrate, have deep connections with practical, everyday, problems.
Many mathematical discoveries have practical applications, even if that's not the main reason why they are found to be interesting. And some of them involve our ability to look at structures and notice possibilities for change and constraints on change, which we don't have to establish by repeated testing and collection of statistics. Possibility, necessity, and impossibility should not be confused with varieties of probability: they are totally different. (There's a lot more to be said about the relationships between mathematics and biology, some of it said here, and about relationships with Kantian causation (discussed here).)
One of my aims here is to demolish the widely held belief that mathematics is all about numbers, equations, logical formulae, and unfamiliar abstractions -- an assumption often unwittingly expressed in reports on investigation of mathematical abilities of young children or other animals, by researchers who seem not to have heard of geometry or topology. I conjecture that many of the features of biological intelligence that are assumed to be products of statistical learning capabilities will turn out to be products of topological and semi-metrical geometric reasoning capabilities, of types that have not yet been identified, so that nobody has looked for brain functions that could support them. (Studying them in laboratories may require a new generation of mathematically sophisticated highly creative researchers.)
Unexplained visual competences in offline intelligence
Many of these examples depend on visual competences: the ability to see what is
and is not possible in a situation, which is totally different from the ability
to predict what will happen next. Seeing what is and is not possible involves
what would often be described as use of imagination. This has long been
recognized as an important human ability, and the etymology of the word suggests
an implicit theory about use of images.
But since visual and other capabilities (haptic, tactile, kinaesthetic, auditory, proprioceptive, and vestibular) need to work together it is possible that evolution produced a powerful a-modal form of representation in which various sorts of information can be combined and the results used in various ways (rather than large collections of statistical relationships being maintained between different sets of modal sensory values). Often researchers assume that spatial information has to be recorded using something like a system of Cartesian coordinates, but Descartes was a very late product of evolution, and it is possible that long before he discovered the mapping from geometry to arithmetic, something deeper and more powerful had already been in use, and is still used by squirrels, for example.
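Purely as an illustration of the contrast (every name and relation below is hypothetical), here is a sketch in which observations arriving through different modalities update one shared store of qualitative spatial relations, rather than each modality maintaining its own coordinate array; queries are then answered from the shared, a-modal structure:

```python
# Illustrative sketch of an a-modal relational store: different
# modalities (names hypothetical) contribute qualitative spatial
# relations to one shared structure, instead of each keeping its
# own array of coordinates. Queries use the shared relations.

relations = set()

def observe(modality, subject, relation, obj):
    # The modality tag is discarded on storage: the store is a-modal.
    relations.add((subject, relation, obj))

def left_of(a, b, seen=frozenset()):
    """Is a left of b, directly or via intermediate objects?"""
    if (a, "left_of", b) in relations:
        return True
    mids = {c for (x, r, c) in relations if x == a and r == "left_of"}
    return any(left_of(c, b, seen | {c}) for c in mids - seen)

observe("vision", "cup", "left_of", "plate")
observe("touch", "plate", "left_of", "jug")   # felt behind a screen, not seen
print(left_of("cup", "jug"))                  # True: the modalities combine
```

Nothing here is proposed as what brains actually do; the point is only that spatial information from several modalities can be combined and queried without anything resembling Cartesian coordinates.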
Finding out what such an a-modal form of representation does and how it does it may be a difficult research problem, partly because of the prejudices about its functionality that will need to be discarded (often a result of the form in which video cameras now present their video data -- i.e. in a regular rectangular array -- which is nothing like what an eye does).
If so, it may be that even people born blind have the brain mechanisms that originally evolved to enable visual and other sorts of information to be combined and used in powerful ways. So their imagination may not be as unlike the imagination of a sighted individual as might be thought. Integrating new sensor data with a complex multi-functional virtual machine generally requires sophisticated software engineering (whether done externally or via internal adjustments). So restoring sight late in life to someone who has been blind from birth or soon after could severely disrupt a finely honed, highly functional system with masses of new noisy input that it cannot deal with.
I've given only a tiny subset of examples of the offline uses of vision, and some indications of requirements for appropriate mechanisms. A proper study of the varieties of functions of vision in humans and other animals would be a long term interdisciplinary project. Two documents expanding on this are here and here. (More will come.)
The Meta-Morphogenesis project
The Meta-Morphogenesis project[*] was triggered by reading
Turing's 1952 paper on "The Chemical Basis of Morphogenesis" and wondering what
he would have done had he lived longer. The project offers, not solutions to
these problems, but a multi-disciplinary approach to a long term investigation
of the forms of information-processing that biological evolution produced over
billions of years, and how some of the intermediate forms may provide clues that
would otherwise escape our attention.
[*](This is Jane Austen's concept of information, not
Claude Shannon's.)
People who don't agree about solutions should at least be able to discuss what needs to be explained, and what current design ideas, and current neural theories, cannot yet explain. The tasks will require collaborators from many disciplines including the obvious ones and also philosophy of mathematics, theoretical linguistics, ethology, nursery education, developmental biology, and chemical computation.
In this sort of endeavour, it is important to avoid two common errors:
(1) assuming that there are no significant discontinuities, i.e. that all changes in biological information-processing are smooth and gradual;
(2) assuming that there are huge discontinuities, e.g. a clear dichotomy between things with and things without some feature or capability for which we have a name (e.g. "cognition", on whose meaning researchers in an EU consultation dramatically failed to agree![*])
In the limited time available, the talk will not get beyond introducing the ideas, but anyone interested can follow up by email or exploring or contributing to the Meta-Morphogenesis web site (constantly under development). Unfortunately very few individuals experience the breadth of education required for a project like this: the fault lies not in them, but in educational policies.
The chemical basis of biological information-processing
(Added 1 Nov 2014)
This is a very large topic. What must the physical world be like in order to
provide the materials that natural selection could choose between, with
consequences that eventually assemble a huge variety of living things, with a
huge variety of physical behaviours and information processing capabilities? A
tentative partial answer (extending Tibor Ganti's ideas about minimal conditions
for life) can be found in this discussion note on the role of chemistry in
evolution and how it could provide the answer to some questions about the second
law of thermodynamics, and entropy:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/entropy-evolution.html
That document overflowed into a discussion of sets of possibilities and
necessities generated by various sorts of construction kits, discussed in more
detail in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/explaining-possibility.html
A semi-spoof document proposing a new 'Chewing test for intelligence', inspired by encountering presentations on embodied cognition at several recent events, was later added here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/chewing-test.html
NOTES AND REFERENCES
A partial index of related discussion notes is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
____________________________________________________________________________
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham