CoSy Discussion paper
(Work in progress)

Beyond SensoriMotor Contingencies:
Mirror neurons vs Abstraction neurons

Aaron Sloman
Working in the context of the CoSy project
with help from many others.

This is http://www.cs.bham.ac.uk/research/projects/cogaff/misc/beyond-sensorimotor.html
This note expands on a point made in the middle of another web page discussing 'Orthogonal Recombinable Competences' acquired by some altricial species.

A key point that many scientists discussing so-called 'mirror neurons' appear not to have noticed is that there is an important abstraction function which many animals do not need, but which is required by a tiny subset of altricial species, including humans, nest-building birds, primates, and other animals that can manipulate 3-D objects using hands, claws, or paws. This abstraction function is more basic than anything to do with representing what another individual is doing: it arises out of the intrinsic complexity and high structural variability of 3-D manipulation tasks.

Example: For animals whose manipulation of objects (e.g. biting, eating) uses only a mouth or beak, all movements of the manipulator are closely correlated with movements of the eyes, because both are fairly rigidly located in the head. This implies (roughly) that such an animal can learn how to act in the world by learning sensorimotor correlations (or 'sensorimotor contingencies', in the currently fashionable phrase). The information required for predicting the consequences of actions can probably be encoded in a neural net

with inputs: current sensory inputs + motor signals
and outputs: predictions of new sensory signals.
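
To make the contrast concrete, here is a minimal sketch of the kind of predictor just described: a small network mapping current sensory input plus motor signal to a prediction of the next sensory input. All sizes, names and the toy 'world' below are my own illustrative assumptions, not anything from the CoSy project.

    # A minimal sketch of a sensorimotor forward model: predict the next
    # sensory input from the current sensory input plus the motor signal.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sensor, n_motor, n_hidden = 8, 3, 16

    # Randomly initialised weights for a one-hidden-layer predictor.
    W1 = rng.normal(0, 0.1, (n_hidden, n_sensor + n_motor))
    W2 = rng.normal(0, 0.1, (n_sensor, n_hidden))

    def predict(sensor, motor):
        """Predict the next sensory input from current sensor + motor signals."""
        x = np.concatenate([sensor, motor])
        h = np.tanh(W1 @ x)
        return W2 @ h, h, x

    def train_step(sensor, motor, next_sensor, lr=0.05):
        """One gradient step on squared prediction error (one learned contingency)."""
        global W1, W2
        pred, h, x = predict(sensor, motor)
        err = pred - next_sensor              # prediction error
        W2 -= lr * np.outer(err, h)           # output-layer gradient
        dh = (W2.T @ err) * (1 - h ** 2)      # backprop through tanh
        W1 -= lr * np.outer(dh, x)

    # Toy example: an arbitrary stand-in for the world's response to an action.
    sensor = rng.normal(size=n_sensor)
    motor = rng.normal(size=n_motor)
    next_sensor = np.roll(sensor, 1)
    for _ in range(200):
        train_step(sensor, motor, next_sensor)

Such a predictor works when the mapping from (sensor, motor) pairs to outcomes is relatively stable, as it is when the manipulator is rigidly attached to the head.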
However, if the manipulator can move independently of the eyes, as a hand or paw can, the space of possible combinations of sensor inputs and motor signals is very much larger and more complex. This could have caused strong evolutionary pressure for the development of a new way of representing information about the environment and actions in it, which required the use of explicit 3-D representations of structures, relationships and processes, rather than merely relations between the sensory and motor signals producing, or produced by, those structures and processes.

That allowed enormous economy and generality. (You can get a feel for the power and generality of moving to an 'objective' representation by contrasting the ease with which you can represent translations and rotations of a 3-D wire-frame cube with the problem of representing all the patterns of motion observed if all you can see are the shadows of the moving, rotating cube on a flat screen: in one case there is a single rigid object with very few degrees of freedom; in the other there are 12 lines, each with different 2-D translations and rotations, linked by obscure constraints, especially when the rotation is about an axis passing through the cube.)
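
A rough illustration of that contrast, under simplifying assumptions of my own (orthographic projection onto a screen, and rotation about a single axis plus a translation, i.e. four numbers, rather than the full six degrees of freedom of a rigid body): the 'objective' description of the cube needs only a pose, while the description of its shadow needs the endpoints of all 12 projected edges.

    # One rigid pose versus the 48 endpoint coordinates of the 12 projected edges.
    import numpy as np

    # The 8 corners of a unit cube centred at the origin.
    corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                                  for y in (-0.5, 0.5)
                                  for z in (-0.5, 0.5)])

    # The 12 edges: pairs of corners differing in exactly one coordinate.
    edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
             if np.sum(corners[i] != corners[j]) == 1]

    def pose_to_shadow(angle, translation):
        """'Objective' description: a rotation angle about z plus a 3-D translation.
        'Sensory' description: the 2-D shadow of all 12 edges under orthographic
        projection onto the x-y plane."""
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        moved = corners @ R.T + translation
        return [(moved[i][:2], moved[j][:2]) for i, j in edges]

    shadow = pose_to_shadow(angle=0.3, translation=np.array([1.0, 2.0, 0.0]))
    print(len(shadow), "projected edges,",
          sum(2 * 2 for _ in shadow), "numbers in the image description")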

Instead of sensorimotor contingencies, the mechanisms I am referring to would represent condition-consequence contingencies, where the conditions and consequences are 3-D states and processes in the environment.
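
As a purely illustrative sketch of what such a contingency might look like when stated over relational 3-D states rather than sensor and motor signals (the predicates and the single rule below are my own toy assumptions, not a proposed mechanism):

    # A condition-consequence contingency over states of the environment.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Fact:
        pred: str
        args: tuple

    def holds(state, pred, *args):
        return Fact(pred, args) in state

    def apply_grasp_and_lift(state, hand, obj):
        """Condition: hand is empty and obj rests on some support.
        Consequence: obj is held by hand and no longer rests on the support."""
        if not holds(state, "empty", hand):
            return state
        supports = [f for f in state if f.pred == "on" and f.args[0] == obj]
        if not supports:
            return state
        new_state = set(state) - {Fact("empty", (hand,))} - set(supports)
        new_state.add(Fact("held_by", (obj, hand)))
        return frozenset(new_state)

    state = frozenset({Fact("empty", ("left_hand",)), Fact("on", ("cup", "table"))})
    print(apply_grasp_and_lift(state, "left_hand", "cup"))

Nothing in the rule mentions retinal patterns or motor signals: the same condition-consequence description covers the event however it is produced or perceived.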

It's not just animals with hands

There is a related argument which does not depend on non-rigid links between hands and eyes, but on kinds of variability in materials, in structures, and in the kinds of movements required (e.g. think of fur, skin, bones, tendons and muscle in an animal being eaten by a predator with teeth).

To cut a long story short, the deep cognitive demands of these 3-D manipulation actions require not 'mirror neurons', specifically concerned with similarities between the perceiver's own actions and the actions of another individual, but 'abstraction neurons', i.e. mechanisms concerned with representing processes in a more 'objective' and more generally re-usable manner than sets of associations between sensor and motor signals.

Very complex, but potentially very powerful, mechanisms for mapping between the different representations would still be needed for the control of actions; but the more abstract representations could then be used to capture what is common between different sorts of actions, such as grasping done by the mouth, by the left hand, by the right hand, or by both hands holding something between them, and of course also grasping done by another individual. They would also be relevant to representing past and hypothetical future actions (e.g. in planning). So the cognitive gains for such a system are enormous, though, as usual, there would be a cost in the complexity of the brain mechanisms.
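
A toy sketch of that idea, under my own assumptions (the schema, the controller stubs and the matching test below are illustrative only, not a proposed brain mechanism): one effector-neutral 'grasp' description is dispatched to different low-level controllers for execution, and the same description can be matched against an observed action by another individual.

    # One abstract action schema shared across effectors and across self/other.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Grasp:            # effector-neutral, agent-neutral description
        agent: str
        obj: str

    # Effector-specific controllers: stubs standing in for the complex mappings
    # between the abstract representation and sensor/motor signals.
    def mouth_controller(obj):      print(f"closing jaw around {obj}")
    def left_hand_controller(obj):  print(f"closing left hand around {obj}")
    def right_hand_controller(obj): print(f"closing right hand around {obj}")

    CONTROLLERS = {
        "mouth": mouth_controller,
        "left_hand": left_hand_controller,
        "right_hand": right_hand_controller,
    }

    def execute(action: Grasp, effector: str):
        """Map one abstract action onto whichever effector is chosen."""
        CONTROLLERS[effector](action.obj)

    def same_action_type(a: Grasp, b: Grasp) -> bool:
        """Recognising another agent's grasp uses the same abstract description."""
        return a.obj == b.obj

    my_plan  = Grasp(agent="self",  obj="cup")
    observed = Grasp(agent="other", obj="cup")
    execute(my_plan, "left_hand")
    execute(my_plan, "mouth")
    print("observed action matches my planned action:",
          same_action_type(my_plan, observed))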

I think part of the cost was that for these animals (unlike precocial species, which are born or hatched with most of the competences they will ever need) evolution could not pre-package in the genome all the information required for controlling actions (for reasons I have been exploring with Jackie Chappell, who works on animal cognition, especially in crows and parrots), and instead had to produce powerful mechanisms for learning about the processes that can occur in the environment, through creative play and exploration.

This had other costs, such as infants being helpless and dependent for longer. But if parents had this new abstract cognitive competence, it could also be used in perceiving, reasoning about, and predicting the actions of their young (and of their predators), and in some cases in helping them (or hindering the predators). Another benefit was more rapid adaptation to environmental changes.

There are many complications still to be worked out: e.g. there is not just one unique objective 3-D representation. The more an individual plays or experiments with a collection of objects and processes, the deeper and richer its understanding of those 3-D processes becomes. This is linked to some of Daniel Glaser's observations of dancers watching dancers: the more expertise the watcher has in common with the observed performer, the more intense the activation in certain parts of the brain that are active during performance of the motion. On the view expressed here, that would be related to the growing depth of understanding and representational competence that comes from frequent and extended exploration of certain environmental structures and processes.

Compare musicians listening to music, or experienced orienteers looking at terrain or at maps.

There are also links with motivation and other affective states and processes. If the associations use the more objective representations, then they can be triggered as easily by perceptions of passively observed events as by actively produced ones, and even by imagined future or past examples, and so on.

There are also very strong links with a generalisation of JJ Gibson's ideas about affordances. I've started talking about 'vicarious affordances' in this context.

These ideas (which came out of work on an EU-funded robot project concerned with 3-D manipulation) are being developed in a collection of online papers, presentations and discussion notes. Anyone interested can get a snapshot in the following web page, especially the section in the middle on grasping and abstraction: COSY-DP-0601 (HTML)
Orthogonal Recombinable Competences Acquired by Altricial Species (Blankets, string, and plywood)

Naturally I would welcome criticisms and suggestions for improvement.

This is part of a collection of ideas that I think will upset a lot of researchers, including people studying learning mechanisms, e.g. people studying development of understanding of causation in children.

All this raises important (and difficult) new questions for neuroscience, since I do not know of any neural mechanisms capable of performing the functions I am referring to. Merely showing that some parts of the brain are more active in certain circumstances explains nothing about how the mechanisms work, nor about what their actual functions are.


Maintained by Aaron Sloman
http://www.cs.bham.ac.uk/~axs/
Last updated: 13 Mar 2006