Poster Abstract for Presentation at ASSC10
The Tenth Annual Meeting of the Association for
the Scientific Study of Consciousness
Oxford, June 23rd to June 26th, 2006.


Illness stopped me attending the conference and presenting the poster.
So I've made it available as a collection of PDF slides:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/assc10-poster.pdf


How an animal or robot with 3-D manipulation skills
experiences the world
Aaron Sloman

Abstract:

The best way for scientists to explain consciousness is to drop all use of the noun and explain everything else. That presupposes that we can identify what needs to be explained -- and some things are far from obvious. Only when I started working in detail on requirements for a human-like robot able to manipulate 3-D objects using vision and an arm with a gripper did I notice what should have been obvious long before, namely that structured objects have 'multi-strand' relationships not expressible simply as R(x, y), because the relation between x and y involves many relations between parts of x and parts of y.
For a more detailed presentation of the resulting theory see COSY-PR-0505: 'A (Possibly) New Theory of Vision' (PDF).
Hence, motion of such structured objects involves 'multi-strand' (concurrent) processes. That is, many relationships change in parallel -- e.g. faces, edges, and corners of one block may all be changing their relationships to faces, edges, and corners of another (and things get more complex when objects are flexible, e.g. your hand peeling a banana or a sweater being put on a child).
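The point can be made concrete with a small sketch (all names and the toy 1-D coordinates are hypothetical, chosen only for illustration): a 'multi-strand' relation between two structured objects is not a single R(x, y) but a bundle of relations between their parts, and moving one object changes every strand in parallel.

```python
# Illustrative sketch: each block is described by named parts
# (faces, edges, corners); positions are toy 1-D coordinates.
from itertools import product

block_a = {"face_top": 2.0, "edge_left": 0.0, "corner_nw": 0.5}
block_b = {"face_top": 5.0, "edge_left": 3.0, "corner_nw": 3.5}

def multi_strand_relation(x, y):
    """Return every part-to-part relation as one 'strand':
    here, the signed offset between each part of x and each part of y."""
    return {(px, py): y[py] - x[px] for px, py in product(x, y)}

before = multi_strand_relation(block_a, block_b)

# Translating block_b as a whole changes *all* strands concurrently.
moved_b = {part: pos + 1.0 for part, pos in block_b.items()}
after = multi_strand_relation(block_a, moved_b)

changed = [k for k in before if before[k] != after[k]]
print(len(before), len(changed))
```

Even this rigid-body toy case yields nine strands that all change at once; flexible objects would add changing relations among the parts of a single object as well.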

Thus seeing what you are doing in such cases can have a kind of complexity that appears not to have been noticed previously because of too much focus on simpler visual tasks like recognition and tracking.

I'll show why we need to postulate mechanisms in which concurrent processes are represented at different levels of abstraction, in partial registration with the optic array (NOT the retina, since saccades, etc., occur frequently).

Nothing in AI comes close to modelling this, and it seems likely that it will be hard to explain in terms of known neural mechanisms. If the opportunity arises I'll try to explain some of the implications for human development, understanding of causation, and computational modelling, and spell out requirements to be addressed in future interdisciplinary research, explaining deep connections with Gibson's notion of affordance, and its generalisation to 'vicarious affordance'.

The evolution of grasping devices that move independently of the eyes (i.e. hands instead of mouth or beak) had profound implications -- undermining claims about sensory-motor contingencies, and suggesting that mirror neurons should have been called 'abstraction neurons'.

Some of the ideas are sketched here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/orthogonal-competences.html
'Orthogonal Competences Acquired by Altricial Species'

A critique of common assumptions about 'sensorimotor contingencies' can be found in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sensorimotor.html

Requirements for 'fully deliberative' systems are analysed in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/fully-deliberative.html



Updated: 28 Jun 2006
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham