CONSCIOUSNESS ABSTRACT
ABSTRACT FOR POSTER AT PAC'07, BRISTOL, JULY 1-3, 2007.
Title: Consciousness in a multi-layered multi-functional mind

Aaron Sloman
Last updated: 4 Jul 2007
The poster is available in PDF format.


Human researchers have only very recently begun to understand the variety of possible information-processing systems. In contrast, for much longer than we have been thinking about the problem, evolution has been exploring myriad designs that vary enormously both in their functionality and in the mechanisms used to achieve that functionality, including their ability to monitor and influence their own information processing.
Humans themselves are, of course, among the many products of that exploration. They incorporate solutions to more design problems than we have so far even identified as problems. Without understanding what those problems were, and what the many solutions are that have been combined to produce humans, we are bound to produce inadequate theories about what humans are and how they work. Theories of consciousness in general, and theories of visual consciousness in particular, whether produced by philosophers or by scientists, illustrate that inadequacy.
This suggests several tasks in relation to any kind of human competence, i.e. the ability to have or to do X. First we need to identify what X is, including: how many sub-cases of X there are; how what X involves varies from context to context and from one organism to another; how it changes from infancy to adulthood; how it can be affected by brain damage or by temporary disruption, e.g. by chemicals; and what sorts of consequences having or exercising that competence can have, including how it interacts with other competences.
Answering those questions requires a combination of: study of empirical phenomena, including phenomena that easily go unnoticed; philosophical conceptual analysis, exploring what Ryle called the 'logical geography' of the concepts we use; development of explanatory theories (a very difficult process of abduction); and a survey of the logical topography uncovered by those theories, which may reveal flaws in our current system of concepts and suggest a revised logical geography. This combination of tasks is very hard to carry out if theorising does not use something like artificial intelligence techniques to check that the proposed theories actually work, for armchair intuitions about what can work are highly fallible. Moreover, within the framework of ideas from software engineering and computer science, AI can work with virtual machines whose relationship to their physical implementation is subtle and complex, allowing mechanisms and architectures to be considered that would be inconceivable at the level of neural or digital circuits.
I have been doing this for over 30 years, in an attempt to combine philosophy with psychology, biology, neuroscience, and artificial intelligence. This includes showing that the phenomena of human vision (and of perception more generally) are far more complex and varied than has generally been appreciated. That led me (in the 1980s) to contrast modular theories of vision with a 'labyrinthine' theory, in relation to which the dual-route theory is just a poor toy, because it fails to address the multiplicity of functions of vision. These range from unconscious control of posture and saccades; conscious or unconscious servo-control of manipulation and other actions; maintenance of a model of the immediate environment; guidance of multi-step plans for future actions; answering questions; revealing unexplained phenomena; providing causal understanding; revealing mental states and processes in others; and combining with other sensory modalities, helping their operation or being helped by them; to the provision of amodal, exosomatic information about the environment that depends on many kinds of expertise, including the ability to read various visual formalisms: maps, signs, verbal communications, mathematical formulae, musical notations and computer programs.
Those capabilities depend on mechanisms, most of which evolved in our pre-human ancestors and are also found in other organisms. Some are sensorimotor control circuits of varying sophistication, all reactive in the sense that they do not explore and evaluate multi-stage alternatives before selecting goals, plans, explanations, predictions, or answers to questions; these are found in various forms in all animals, including invertebrates. A small subset of organisms have additional mechanisms that are sometimes called deliberative, involving varying combinations of the symbolic competences required for goal formation, plan construction, plan selection, plan execution, predicting, explaining, generalising, imagining, and forming beliefs and hypotheses of varying complexity, using compositional semantics.
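The contrast can be made concrete with a deliberately minimal sketch, not part of the poster: in the toy Python fragment below, every name and the toy state space are invented for illustration. The reactive mechanism maps percepts directly to actions; the deliberative mechanism explores and evaluates multi-step alternatives (here via breadth-first search) before committing to a plan.

    from collections import deque

    def reactive_step(percept):
        # Reactive control: a fixed condition-action mapping.
        # No alternatives are generated or compared before acting.
        return "turn_left" if percept == "obstacle_ahead" else "move_forward"

    def deliberative_plan(start, goal, neighbours):
        # Deliberative control: explore and evaluate multi-step
        # alternatives (breadth-first search) before selecting a plan.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path                    # a complete multi-step plan
            for nxt in neighbours(path[-1]):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None                            # no plan exists

    # Toy state space: four rooms connected by doors.
    doors = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    print(reactive_step("obstacle_ahead"))                  # turn_left
    print(deliberative_plan("A", "D", lambda s: doors[s]))  # ['A', 'B', 'C', 'D']

The point of the contrast is architectural: the reactive mechanism never represents alternative futures, whereas the deliberative one constructs and compares them before acting.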
Many of the control systems in organisms require some form of self-monitoring and self-control. The simplest are those studied in control theory, where streams of continuously changing measures, combined with thresholds and delay mechanisms, operate in various kinds of hierarchical feedback control system. A small subset of organisms, including humans, also appear to have an architecture that allows deliberative competences to be turned inwards, with self-monitoring that produces not just vectors of values but descriptive structures. An even smaller subset has meta-semantic competences, allowing representation of states and processes in systems that themselves have semantic competences: e.g. beliefs, preferences, goals, percepts and mental processes in oneself and in others.
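The difference between the two kinds of self-monitoring can also be sketched in code. In the illustrative Python fragment below (again invented for this abstract, not drawn from the poster), the controller is of the threshold-and-feedback kind studied in control theory, while the meta-level monitor is turned inwards and produces a descriptive structure about the controller's own behaviour, rather than just another stream of values.

    def thermostat_step(temperature, setpoint, band=1.0):
        # Simplest self-control: a continuously varying measure compared
        # against thresholds; the output is only a control signal,
        # not a description of anything.
        if temperature < setpoint - band:
            return +1.0     # heat
        if temperature > setpoint + band:
            return -1.0     # cool
        return 0.0

    def meta_monitor(signals):
        # Deliberative competence turned inwards: the result is a
        # descriptive structure about the controller's behaviour,
        # not just a vector of values.
        nonzero = [s for s in signals if s]
        sign_changes = sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)
        return {"subject": "temperature controller",
                "verdict": "oscillating" if sign_changes > 2 else "stable",
                "evidence": {"sign_changes": sign_changes, "steps": len(signals)}}

    temps = [15, 17, 19, 22, 20, 18, 22, 18, 22, 18]
    signals = [thermostat_step(t, setpoint=20) for t in temps]
    print(meta_monitor(signals))    # reports that the controller is oscillating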


[Sketch] H-CogAff: Requirements for a Human-like Information-processing Architecture
See the Birmingham Cognition and Affect website
The sketch indicates a collection of diverse requirements for a virtual-machine architecture in which human reactive, deliberative and meta-management (self-monitoring and self-control) competences can be combined, perhaps also in future robots. Parts of it have already been implemented, and it has formed the basis both of theories of emotion (e.g. of the processes involved in grief, which contradict most published theories of emotion) and of a self-monitoring computing system resistant to external attacks. An important aspect of the theory is that the architecture grows itself from infancy to adulthood, so that not all the components are present in infants and young children, implying that their consciousness is different from that of older humans. The poster will indicate some of the implications for visual consciousness, especially in the light of the observation that vision has to provide information about an environment that can contain multiple changing structures and relationships at different levels of abstraction; how such structures are represented is so far unexplained by neuroscience, and its implications are mostly ignored by philosophers studying consciousness.
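As a rough indication of how the three layers relate, here is another deliberately minimal Python sketch; H-CogAff itself is far richer, and every class and method name below is invented for illustration. The reactive layer runs fast condition-action responses, the deliberative layer constructs and compares multi-step plans, and the meta-management layer monitors and describes (and could in principle redirect) the deliberative layer's own processing.

    class ReactiveLayer:
        # Fast, always-running condition-action responses.
        def react(self, percept):
            return "dodge" if percept == "looming_object" else None

    class DeliberativeLayer:
        # Constructs and compares candidate multi-step plans.
        def deliberate(self, goal):
            candidates = [["detour", goal], [goal]]
            self.last_considered = candidates    # exposed for monitoring
            return min(candidates, key=len)

    class MetaManagementLayer:
        # Meta-semantic self-monitoring: describes the deliberative
        # layer's processing, not the external environment.
        def monitor(self, deliberative):
            n = len(getattr(deliberative, "last_considered", []))
            return {"process": "planning", "alternatives_considered": n}

    reactive = ReactiveLayer()
    deliberative = DeliberativeLayer()
    meta = MetaManagementLayer()
    print(reactive.react("looming_object"))       # dodge
    print(deliberative.deliberate("reach_food"))  # ['reach_food']
    print(meta.monitor(deliberative))             # {'process': 'planning', ...}

On the theory sketched above, the layers are not all present from birth: the architecture grows itself, so a static sketch like this should be read as depicting a late stage of development.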
An implication (explained in the papers) is that being a human depends more on having had embodied ancestors than on having a particular form of embodiment.


Related papers and presentations


