Presentation at PACS symposium
Seoul, 27-28 October 2016

Aaron Sloman
http://www.cs.bham.ac.uk/~axs

_______________________________________________________________________________

Robot Intelligence vs. Biological Intelligence?
A discussion based on Physics, Chemistry, Biology,
Mathematics, Mind-Science and Philosophy

This was an invited talk at the International Symposium on Perception, Action, and Cognitive Systems (PACS) held in Seoul, Korea, Oct. 27-28, 2016. PACS aims to be a common venue for integrated research in cognitive science, brain science, artificial intelligence, robotics, and human-computer interaction and their practical applications.

The full symposium programme is available at
http://www.kiise.or.kr/pacs/2016/lecture_material.htm
including the lecture material for all the speakers.

This document is available in two formats:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-pacs-2016.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-pacs-2016.pdf
(These will be updated from time to time. Last updated (slightly): 15 Apr 2018)

This is part of:
The Meta-Morphogenesis (M-M) Project
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html

A video of a closely related, shorter presentation given in June 2016 is linked from here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#information-2016
That presentation was based on an earlier version of these notes, and includes some videos that were shown during the PACS talk but are not included here.
A later presentation overlapping with and extending these ideas was given (remotely) at an IJCAI 2017 workshop in Melbourne, based on the video included here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/ijcai-17/

ABSTRACT:

Many people are worried that AI systems will soon match or overtake human intelligence and they spend much time discussing what to do about that.

I claim there's no way AI systems can soon match or overtake adult human intelligence, human toddler intelligence, squirrel intelligence, crow intelligence, elephant intelligence ...
(... except in a relatively small subset of domains, including some in which average humans perform poorly, e.g. playing chess, or GO.)

People making the opposite claim (e.g. arguing that the "AI-singularity" will occur soon) generally use excessively narrow criteria for intelligence, ignoring many of the most important phenomena in natural intelligence, some of which are discussed below.

I'll try to present an approach to addressing this. It could take a very long time to achieve some of the goals. The research is also likely to reveal deep research goals that have not yet been noticed, as happens frequently in science.


UNANSWERED QUESTIONS

There are many unanswered questions about natural intelligence: its diversity of forms (making the very idea of a single test for intelligence silly, as Turing understood), its biological origins and mechanisms, its evolutionary history, the features of the physical universe that make it possible, and requirements for replication in future robots.

Alan Turing died in 1954. The Meta-Morphogenesis project is a conjectured answer to the question: what might he have worked on if he had continued working for several decades after publication of his 1952 paper "The Chemical Basis of Morphogenesis", instead of dying two years later?

The project has many strands, including identifying what needs to be explained -- e.g. how could evolution have produced the brains, or minds, of mathematicians like Pythagoras, Archimedes and Euclid? Or the brains of human toddlers who seem to make and use topological discoveries before they can talk? Or the brains of intelligent non-humans, like squirrels, weaver birds, elephants and dolphins?

How did those ancient human brains make their amazing, deep mathematical discoveries over 2.5 thousand years ago -- long before the development of modern logic or proof-theory?

What information processing mechanisms did they need?

How did their environment influence their use of those competences?

What features of the "fundamental construction kit" (FCK) provided by physics and chemistry made that possible?

What sorts of "derived construction kits" (DCKs) were required at various stages of evolution of increasingly complex and varied types of biological information processing?

Are there currently unrecognized forms of information processing that will be needed by future Archimedes-like robots?

What specific design features will be required to enable robots to replicate discoveries made by ancient human mathematicians?

e.g. what features of animal minds would be required in order to be able to discover that extending Euclidean geometry with the neusis construction allows arbitrary angles to be trisected (impossible in standard Euclidean geometry)?
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html

(Why is this an important example?)

A major task of the project is collection and analysis of examples of natural intelligence, human and non-human, that current AI cannot match, and current neuroscience cannot explain, to help steer research towards new subgoals.

One of my goals is to explain why Immanuel Kant was right about the nature of mathematical discovery in 1781 even if he missed some important details.


THE UNIVERSE CONTAINS MATTER, ENERGY AND INFORMATION

Many important scientific concepts cannot be defined explicitly, but are implicitly defined by their roles in theories.

The words "Matter", "Energy" and "Information", in the sense used here, cannot be explicitly defined but are implicitly defined by the theories that use the words, and the ways in which the theories are applied and tested.

This is not Shannon information: a syntactic notion with various associated numerical measures. Life needs semantic information, as do intelligent machines. Some semantic contents are easy to represent in machines, including references to memory locations, to instructions, and to physical interfaces. Enabling a robot to refer to contents of the environment (exosomatic semantic content, including contents of the minds of other individuals) is easy in some simple cases, very hard in others.

Don't assume that the concepts of "information", "referring", "learning", "using information" that we have now will prove adequate in the long run. Compare what happened to concepts of "force", "weight", and "mass" between Newton and Einstein.

As regards information I think our concepts and theories are still too primitive -- and too much influenced by Shannon's theory, which is concerned primarily with a limited class of modes of representation of information. It does not specify or explain the primary function of information, namely control. Information-based control is important in all life forms, even the most primitive forms. Shannon's theory also ignores the semantic functions of information, such as formation of intentions, questions, hypotheses, plans, theories, proofs, etc., found in the most sophisticated life forms, based on the control functions of information.

Moreover, research communities are dreadfully fragmented, using different systems of concepts that overlap only superficially at a verbal level, and that does not help research on natural and artificial intelligence.

A deep general theory of intelligence should explain at least the kinds of competence and phenomena discussed below.

There's a vast amount of conceptual confusion and discussion/debate at cross-purposes, e.g. about the importance of "embodiment" or whether "Good old fashioned AI has been shown to be useless".

--- and far too much mutual ignorance.
(That's a social, educational problem.)


CRITERIA FOR ADEQUACY OF A THEORY OF INFORMATION

Many AI systems that are focused on a narrow set of competences easily out-perform most or all humans in those competences -- a recent addition being the AlphaGo program.

However, there are many intelligent animals, including squirrels, elephants, nest-building birds, hunting mammals and apes, and pre-verbal humans, each with a range of abilities that current robots are not even close to matching.

Many of these abilities include awareness and use of mathematical structures and relationships, their possibilities for change and their constraints on possible changes.

E.g. stereoscopic vision based on binocular fusion uses properties of imaginary triangular structures to infer distances. Another example, pointed out by James Gibson, is that texture gradients (spatial rates of change) and optical flow (temporal rates of change) can play important roles in visual control of movement.
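
The triangular relationship can be made explicit numerically. The following sketch is my own illustration (not part of the original talk, and the parameter names are invented): it shows the standard triangulation formula used in engineering implementations of binocular stereo, in which distance is proportional to the focal length times the baseline between the eyes or cameras, divided by the disparity between the two image positions of the same feature.

    # Illustrative sketch (not from the talk): the triangulation relation
    # exploited by binocular stereo. For two parallel cameras with focal
    # length f (in pixels) and baseline b (in metres), a feature seen at
    # horizontal image positions x_left and x_right has disparity
    # d = x_left - x_right, and its distance is approximately Z = f * b / d.
    def depth_from_disparity(f_pixels, baseline_m, x_left, x_right):
        disparity = x_left - x_right                # pixels
        if disparity <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return f_pixels * baseline_m / disparity    # metres

    # e.g. f = 700 pixels, baseline = 0.065 m (roughly the human inter-ocular
    # distance), disparity = 10 pixels  ->  a depth of about 4.55 metres
    print(depth_from_disparity(700, 0.065, 320, 310))

The point of the surrounding discussion, of course, is that biological vision exploits such structural relationships without necessarily computing explicit numbers in this way.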

Those examples normally use numerical relationships. But there are many non-numerical mathematical relationships, for example spatial ordering relationships:

One reason why the ubiquitous use of mathematical reasoning goes unnoticed is that the kinds of mathematical structure used in normal perception are not the sorts most people learn about in a mathematics class. In particular visual perception provides information about topology, partial orderings, and what could be called "semi-metrical geometry", geometry in which lengths, areas, etc. are partially, not totally, ordered.

For example, which of two overlapping straight rods is longer can be seen easily if they are parallel and close together, with both "left" ends adjacent and a clear gap between the other two ends, as in this example:

____________________________
_____________________________
One will then be perceived as definitely longer than the other though there may be cases of uncertainty if the lengths are approximately equal and they are viewed from a distance.

Lengths may become perceptually incomparable to normal vision if the two rods are some distance apart on their common line, like this:

____________________________                _____________________________
Their lengths may also be hard to compare if they are oriented in different directions in 3D space, one more foreshortened than the other. One solution is to bring the two objects together with a pair of ends aligned.

If that is impossible, a creative solution is to use a third object aligned first with one of the two, and then with the other, possibly using marks on the object. That solution depends on the recognition that length comparisons are transitive, and also the assumption that some objects have lengths that are intrinsic to them, and do not depend on where they are in space or on their orientation.
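
The role of transitivity can be spelt out in a toy sketch. This is my own illustration (the function names are invented, and numbers are used only to simulate the physical acts of aligning and marking): a mark made against one rod is carried to the other, and the conclusion follows from the transitivity of the length relations together with the assumption that the stick's length does not change in transit.

    # Illustrative sketch (my addition, not from the original notes): comparing
    # two rods that cannot be brought together, by carrying a third object (a
    # stick with a movable mark) from one to the other. The inference relies
    # only on transitivity: the mark equals rod A in length, so whatever
    # relation the mark has to rod B, rod A has to rod B.
    def mark_stick_against(rod_length):
        # Lay the stick alongside the rod and mark where the rod ends.
        # (A number is returned here only to simulate the physical mark.)
        return rod_length

    def compare_rods(rod_a, rod_b):
        mark = mark_stick_against(rod_a)       # mark is equal in length to A
        if mark > rod_b:                       # mark longer than B, and A = mark,
            return "A is longer than B"        # so A is longer than B
        if mark < rod_b:
            return "B is longer than A"
        return "A and B are equal in length"

    print(compare_rods(1.30, 1.25))            # -> "A is longer than B"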

These are deep assumptions, implicit in much human and animal perception and thinking. How did evolution produce the required mechanisms and forms of representation?

The ability to make these inferences involves essential use of information about mathematical structure in the environment. It is possible that early humans somehow learnt to make such inferences in the course of solving practical problems, e.g. building shelters with horizontal roofs, or making tables with three or four legs.

The brain mechanisms required for animals to understand and use these relationships, e.g. transitivity of equality of length, transitivity of the relation of being longer, and so on, are not clear. From the work of Piaget it seems that there are many such mathematical competences that would normally be regarded as trivially obvious, but which children do not have until relatively late stages of brain development. What has to change in their brains?

There are many more unanswered questions, e.g: How did the mechanisms required evolve? What changes were required in the DNA to allow these capabilities to develop? What were the earliest precursors?

All of this is possible without having a notion that lengths have numerical measures, although the use of numerical measures can be built on these insights by counting repeated equal lengths that "add up" to some other length.
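
The step from such ordinal comparisons to numerical measurement can also be caricatured in a short sketch (my addition; representing lengths as numbers is, again, only a simulation of the physical act of laying a unit rod end to end repeatedly and counting).

    # Illustrative sketch (my addition): building a numerical measure on top of
    # purely ordinal comparisons, by counting how many copies of a chosen unit
    # length fit end to end along an object.
    def measure_in_units(object_length, unit_length):
        count = 0
        remaining = object_length
        while remaining >= unit_length:
            remaining -= unit_length    # lay down another copy of the unit
            count += 1
        return count, remaining         # whole units, plus any left-over piece

    print(measure_in_units(7.5, 2.5))   # -> (3, 0.0): the unit fits three times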

Similar comments can be made about perceptual comparisons of areas, though there are many more cases that defeat visual comparability of areas. If one region includes the other, and there is a perceivable gap between their boundaries all the way round, it will be obvious which area is larger. However, if the two regions have very different shapes, or if one is further away than the other, or if one is in a plane that is tilted relative to the other's plane, which area is larger may not be visible to someone who perceives both of them.

The same goes for comparisons of 3-D volume: in some cases it is very obvious which of two volumes is larger, e.g. if one completely contains the other, like a bucket with a ball in it. But the variety of cases that are hard, or impossible, to decide using normal vision is very much larger.

In some cases, humans can quickly and easily see which of two spaces is larger or whether one object could fit in the space bounded by another, even if they cannot estimate the actual size or volume of either. Current artificial visual systems that I know of cannot do this. Some examples are here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changing-affordances.html

Typically, an AI vision system would use the same general algorithm for all cases: work out numerical values for the lengths, areas, orientations, or volumes of the spaces to be compared, then compare the numbers. Is that a good way to compare the volumes of a (normal) banana and a house?

Comparisons of length, area or volume involve mathematical competences. Current computer vision systems (e.g. for robots) would either use a general ability to compute the length, area, or volume of something in the environment, or might be trained to make heuristic comparisons on the basis of many special cases, and would then fail on a new case where the shapes to be compared are novel. Biological visual systems, in contrast, can use different strategies in different cases and can creatively adapt strategies for hard cases.
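
To make the contrast concrete, here is a toy sketch (my addition: the box representation and the function names are invented for illustration, and are not a proposal about how either artificial or biological vision actually works). The first strategy computes a number for every case; the second exploits an easily perceived structural relation, containment, when it is available, and has to fall back on something else when it is not.

    # Illustrative sketch (my addition): one general numerical strategy versus
    # dispatching on qualitative structure. Objects are crudely modelled as
    # axis-aligned boxes ((xmin, ymin, zmin), (xmax, ymax, zmax)) purely for
    # the demonstration.
    def volume(box):
        (x0, y0, z0), (x1, y1, z1) = box
        return (x1 - x0) * (y1 - y0) * (z1 - z0)

    def contains(outer, inner):
        (ox0, oy0, oz0), (ox1, oy1, oz1) = outer
        (ix0, iy0, iz0), (ix1, iy1, iz1) = inner
        return (ox0 <= ix0 and oy0 <= iy0 and oz0 <= iz0 and
                ix1 <= ox1 and iy1 <= oy1 and iz1 <= oz1)

    def compare_numeric(a, b):
        # One algorithm for every case: estimate both volumes, compare numbers.
        return "A larger" if volume(a) > volume(b) else "B larger or equal"

    def compare_structural(a, b):
        # Use the easy case when it is available; otherwise some other
        # strategy is needed.
        if contains(a, b):
            return "A larger: B fits inside A, no measurement needed"
        if contains(b, a):
            return "B larger: A fits inside B, no measurement needed"
        return "no easy structural answer; a different strategy is needed"

    bucket = ((0, 0, 0), (3, 3, 3))
    ball = ((1, 1, 1), (2, 2, 2))
    print(compare_structural(bucket, ball))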

Even mathematical competences, which are normally thought of as specially well suited to computers (as numerical competences are), include fields where current AI systems and robots don't even come close to humans, e.g. the ability to make the kinds of mathematical discoveries in geometry, topology and arithmetic made by ancient mathematicians, many of them assembled in Euclid's Elements.

Those competences include types of mathematical sophistication that have gone unnoticed by most researchers. However, Jean Piaget studied some of them in children, with very interesting results, though he was not able to produce good explanations.

The great ancient mathematicians are advanced instances in an extended process of evolution of increasingly sophisticated information processing capabilities of increasingly complex organisms interacting with structured animate and inanimate objects in their environments, in increasingly complex and varied ways.

The opportunities and constraints (positive and negative affordances) provided by objects and situations of different types in the environments of organisms are based on mathematical facts about what is and is not possible in the spatial world.

Evolution produced increasingly physically complex and increasingly capable organisms in environments whose occupants, including living occupants, were also becoming more complex, more varied and more capable of using the environment to meet their own needs.

This process required parallel production of increasingly complex evolved "construction kits" of many sorts, as explained in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html

The study of evolved construction kits for building information processing mechanisms and assembling them into complex, multi-functional architectures using virtual machinery can give us new insights into both natural and artificial intelligence.

This may give us a new view of what's missing in Robotics/AI, partly because

(a) researchers studying and modelling various competences of humans and other species often ignore ways in which mathematical discoveries are related to everyday physical competences, and

(b) researchers attempting to build machines that can emulate or replicate those mathematical discoveries can gain a better understanding of the competences they are trying to emulate.

The implications are mostly ignored by researchers in AI, biology, neuroscience, psychology, mathematics, philosophy of mind, philosophy of mathematics.

Filling in the details may help us understand what's missing from current AI/Robotics and mind-science.

It may also help to make philosophy deeper and richer.

This work presents some (possibly new?) requirements on fundamental physics (including currently unknown aspects of physics) needed to supplement evolution by natural selection in explaining how something like human intelligence could come into existence in an initially lifeless Universe.

Natural selection alone could not suffice, without a sufficiently powerful construction kit to generate options to be selected.

On this view the universe has deep mathematical features that support an extremely powerful "Fundamental Construction Kit" (FCK), from which many Derived Construction Kits can be produced by physical/chemical processes and natural selection, working together.

The combination forms a highly creative generator of new mathematical domains, whose instances form increasingly complex life forms -- including human mathematicians.

From this point of view biological evolution, based on the FCK provided by this physical universe, is the most creative known mechanism.

LIFE AND INFORMATION

One important role of information for living things, though not the only one, is concerned with reproduction: information in the genome controls or makes possible many of the details of development of individual organisms.

There is much to be said about that -- some of it written by Schrödinger in What is life? (1944).
Annotated extracts from the book are available in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/schrodinger-life.html

Evolution acquires information about opportunities, constraints, and possible designs for all sorts of organisms.

How it does that keeps changing: one of the results of evolution.


One of the interesting questions is how many different information-processing mechanisms are provided by evolution for different sorts of organisms, or for a particular sort of organism at different stages of its life (from formation of the egg or seed onwards), and also how many different sorts of information processing are provided not by evolution but by various aspects of the environment interacting with evolved mechanisms. Learning theories proposed so far are not general enough, e.g. to explain how children create, rather than learn, languages.

I am interested, among other things, in ways in which evolution changes what individual organisms (or subsystems of organisms) can do with information of many kinds.

Let's focus on acquisition and use of information by individual organisms during their lives, rather than, e.g., information in the genome and how that is acquired and used.

In particular, organisms are able to acquire "modal" information: information about what is and is not possible, and information about necessary consequences of realising some possibilities, i.e. mathematical information, e.g. the sorts of discoveries reported by Euclid, some of which individuals can easily make for themselves, e.g.

     If a triangle has three equal sides then it must have three equal angles
     If a vertex of a triangle moves closer to the opposite side,
          the area of the triangle must decrease.

     http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
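
For the second of these statements, here is a purely numerical check, added as my own illustration (it is not part of the original notes): since the area of a triangle is half of the base times the height, moving the vertex closer to the line containing the opposite side reduces the height and therefore the area. A program can confirm as many instances as we like, but, as argued throughout these notes, that is not the same as seeing that the result must hold in all cases.

    # Illustrative sketch (my addition): checking instances of the claim that
    # moving a vertex closer to the opposite side decreases the triangle's
    # area, using area = base * height / 2. As the height (the vertex's
    # distance from the line of the base) shrinks, the area shrinks too.
    def triangle_area(base_length, vertex_height):
        return 0.5 * base_length * vertex_height

    base = 4.0
    for height in (3.0, 2.0, 1.0, 0.5):   # the vertex moves closer to the base
        print(height, triangle_area(base, height))
    # Printed areas decrease: 6.0, 4.0, 2.0, 1.0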

Information about geometry and topology recorded by Euclid (and his predecessors and successors) has NOTHING to do with probabilities, which happen to be the main focus of most fashionable research on intelligent systems.

Going back to earlier organisms: evolution produced organisms able to acquire and use more and more varieties of information:
- immediately usable control information
- information that can be used after it is acquired
- information about a need to seek new information
- information about extended terrain, not just the immediate environment
- information about what is where even when it is not being perceived, and how to get to some items even when they are not perceived (e.g. learnt routes to sources of food, liquid, shelter, danger, etc.).

We know how to make machines that can acquire and use some types of information, but not others: e.g. information about what is possible or impossible, e.g. theorems in Euclidean geometry and topology.

Some examples of uses of information by animals and machines, e.g. the amazing BigDog robot:
BigDog 2010, BigDog 2011, built by Boston Dynamics.

As far as I know, BigDog uses the information it acquires only for immediate "online control" purposes, and does not have any "offline intelligence", e.g. the ability to think about what might happen, what the consequences will be if X happens, what could have happened but did not, what the consequences would have been if X had happened, or if Y had not happened, etc. I.e. it lacks all the precursors of mathematical abilities, some of which are discussed here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

The early AI robots (e.g. Shakey, developed at SRI, the Stanford Research Institute, from the mid 1960s) had very simple versions of that sort of "offline" intelligence, but unfortunately bad fashions began to dominate AI in the 1980s, and such research was abandoned in favour of an emphasis on forms of embodiment and online, embodied control -- instead of keeping both as important research areas that needed to be integrated.


WHAT IS INFORMATION?

I am not using "information" in Shannon's sense: he realised too late that by using that word he had succeeded in confusing many people.

The older concept of information, semantic information, was used by the novelist Jane Austen in "Pride and Prejudice" in 1813, as explained in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html

This kind of information content is important because of how it can be USED, whether online or offline. There are many different sorts of use of information, and there need not be any sender or receiver involved if information is derived within an organism and used by that organism.

How information is stored or transmitted and the mechanisms required for storage and transmission are important questions, but not as important as:

WHAT CAN INFORMATION BE USED FOR?
HOW CAN INFORMATION BE USED FOR THOSE PURPOSES?
WHAT MECHANISMS MAKE PARTICULAR USES OF INFORMATION POSSIBLE -- IN VARIOUS SPECIES?

While walking along a forest path, I may see that a tree has fallen across the path.

I then have information that my path is blocked, which I can use.

If I turn back and go home I need not make use of the information, but if I want to continue on my way I may also acquire information that I could climb over the tree, or that I could go round the tree.

A bird, an insect, an elephant could also acquire information relating to the tree, but they would all acquire and use different information about contents of closely related portions of space.

If I see a beetle on the bark of the tree I can use the information acquired to control an action, e.g. delicately picking up the beetle, or moving down to peer at it.

The information acquired can trigger the formation of an intention.

The information content of the intention may be
-- to get my finger and thumb on either side of the beetle,
-- to gently move them together so that I can pick up the beetle without
   damaging it,
-- to bring it to a position where I can inspect it visually,
-- in order to get more information about its shape, colour, etc.
-- and if I were an entomologist I might also be able to identify the
   species, whether it is male or female, etc.

The information in the intention can control an action, in collaboration with additional visual information acquired at different times.

Many uses of information do not involve ANY use of probabilities, though as I move my hand I can get information about how to adjust the motion in order to bring finger and thumb on opposite sides of the beetle.
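
The kind of online use of information involved in adjusting the hand can be caricatured as a simple feedback loop. The sketch below is my own illustration under invented assumptions (a one-dimensional gap between finger and thumb, and a known beetle width); it involves no probabilities, only the currently perceived discrepancy between where the fingers are and where they need to be.

    # Illustrative sketch (my addition): visually guided closing of finger and
    # thumb around an object, treated as a proportional feedback loop in one
    # dimension. Each adjustment uses only the currently perceived error; no
    # probabilities are involved.
    def close_grip(finger_gap, beetle_width, gain=0.4, tolerance=0.05):
        steps = 0
        while finger_gap - beetle_width > tolerance:
            error = finger_gap - beetle_width     # perceived discrepancy
            finger_gap -= gain * error            # gentle adjustment
            steps += 1
        return finger_gap, steps

    # Converges gently on a gap just above the beetle's width.
    print(close_grip(finger_gap=5.0, beetle_width=1.0))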

If I put the beetle in a box, that may remind me of Wittgenstein, though not because of any probability relation, or regular correlation.

Information acquired visually by Chess and Go experts from a chess or Go board will be very different: using different ONTOLOGIES, including different RELATIONS, different POSSIBILITIES, different CONSTRAINTS.

VIEWS ABOUT VISUAL INFORMATION

There have been different views about visual information. Examples:
MARR (and other AI vision researchers in the 1960s and 1970s): vision provides information about visible surfaces, distances, curvature, orientation, colour, illumination, etc.

BARROW AND TENENBAUM: Recovering intrinsic scene characteristics from images

GIBSON: discovering what the perceiver can or cannot do, given its capabilities, needs, current knowledge etc.

GENERALISE GIBSON: information about what is possible or impossible in the environment, whether relevant to the perceiver's needs or abilities, or not. This can lead to discoveries in topology and geometry, mentioned above.

NB: There are enormous complications if things are moving: there is a huge explosion of possibilities -- sets of possible trajectories for different things in the environment. Things that can change include

features
structures
types of motion
types of causal interaction (pushing, pulling, twisting, ...)
types of possibility,
types of constraint.

I claim: what Shannon did is very important for engineering applications involving storage or transmission of data, or minimising loss due to equipment failures, noise, etc., but has nothing to do with the information required for intelligent action: several research communities have been misled -- including many researchers in robotics and artificial intelligence.

Shannon himself was not misled by his terminology. (Compare Jane Austen, writing about information in her novels, over a century earlier.)


SOME THOUGHTS ON ALAN TURING AND THE M-M PROJECT

In 'Computing machinery and intelligence',
Mind, 59, 1950, Turing wrote:
"In the nervous system chemical phenomena
are at least as important as electrical"

Two years later he published:
The Chemical Basis Of Morphogenesis, in
Phil. Trans. R. Soc. London B 237, pp. 37--72, 1952.

Two years later Turing was dead.

What would he have done if he had lived several more decades?

Perhaps he would have worked on the Meta-Morphogenesis (M-M) project: an attempt to understand how evolution can repeatedly produce new forms of information-processing that alter mechanisms of evolution -- an extraordinarily powerful form of positive feedback over billions of years.
A Protoplanetary Dust Cloud?

    [NASA artist's impression of a protoplanetary disk, from WikiMedia]

How can a cloud of dust give birth to a planet
full of living things as diverse as life on Earth?

Part of the answer:

By starting from a very powerful construction kit: physics+chemistry, and using natural selection to produce many branching layers of information-processing machinery, required for new forms of reproduction, new forms of development, new forms of intelligence, new forms of social/cultural evolution, via new types of construction kit.

As Turing seems to have realised: the forms of information-processing used were richer and more varied than those developed by computer scientists and engineers so far, and made essential use of chemistry.

These notes may be expanded later. There's a great deal more here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
http://www.cs.bham.ac.uk/research/projects/cogaff/
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/


Generalising Waddington's epigenetic landscape

Based on
Jackie Chappell and Aaron Sloman, 2007, Natural and artificial meta-configured altricial information-processing systems, in International Journal of Unconventional Computing, 3, 3, pp. 211--239,
http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#717

CHEMICAL MACHINERY IN LIVING CELLS
https://www.youtube.com/watch?v=Id2rZS59xSE
David Bolinsky: Visualizing the wonder of a living cell


Additional relevant notes, images, ideas & videos

Items needing structured internal information contents:

Internal intentions, plans, hypotheses, questions, control information, etc.

Introduction of theoretical terms
E.g. mass, length, time, electric current, force, energy, magnetism, ...

Example videos:

http://www.cs.bham.ac.uk/research/projects/cogaff/movies/vid

One of the videos is by Warneken and Tomasello showing a very young child spontaneously opening a cupboard door for an adult carrying a pile of books.
Their focus is on demonstrating that very young (e.g. pre-verbal) children can be spontaneously altruistic -- i.e. want to help others.

My focus is on how pre-verbal children can *represent* information about the contents of the minds of others, including their beliefs, their lack of information, and their intentions and can work out what physical processes and states of affairs will satisfy the inferred needs/goals of others.


DRAFT INCOMPLETE LIST OF REFERENCES
(To be pruned and extended)


UPDATES
Last Updated: 3 Nov 2016; 9 Nov 2016
     More information may be added later.

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham