ABSTRACT SUBMITTED TO TUCSON3 (Slightly modified: 26 Oct 1997)

Title: Architectures and types of consciousness
Author: Aaron Sloman
Affiliation: School of Computer Science, and Cognitive Science Research Centre, The University of Birmingham, UK

Address/phone/fax/email:
Aaron Sloman, School of Computer Science, The University of Birmingham, Birmingham B15 2TT, UK
Phone: Office +44-121-414-4775
Phone: Sec +44-121-414-3711
Fax: +44-121-414-4281
EMAIL: A.Sloman@cs.bham.ac.uk
=======================================================================

Title: Architectures and types of consciousness

Abstract:

The paper explores three related conjectures:

(C1) An inadequate grasp of the design issues involved in producing an organism or machine with human capabilities leads to deep confusions in both philosophical and empirical research on consciousness. Design issues include the requirements for functioning organisms or agents, and the range of possible design solutions.

(C2) The concepts we employ in most of our ordinary thinking about mental states and processes in ourselves and others have hidden depths connected with design issues, but when we reflect on our concepts we notice only superficial aspects of their phenomenology. Deeper analysis requires identifying the design problems solved by phylogenetic and ontogenetic adaptation (e.g. evolution and learning) and relating those problems to classes of architectures for competent animals of various kinds.

(C3) By identifying ways in which those architectures can be abnormal or damaged, we can extend and refine ordinary concepts so as to provide a powerful new set of concepts for use in empirical research, scientific theorising, and philosophical analysis.
Design considerations provide a framework for talking about all forms of consciousness found in nature (including other animals, human infants, and people with brain damage or disease), instead of focusing only on the tiny subset of phenomena noticed by a typical adult scientist or philosopher discussing consciousness. Different sorts of consciousness are associated with architectural layers that evolved at different times and which operate concurrently in humans in more or less integrated fashion. E.g.:

(L1) A reactive layer supports primitive (e.g. insect-like) types of sentience; emotions focussed only on the present, e.g. immediate ``alarm'' reactions causing freezing, fleeing, aggression, etc.; and simple learning (e.g. adjustment of weights within existing structures), but no ``what if'' reasoning.

(L2) A deliberative (management) layer supports experiences (and qualia) using higher order concepts; emotions linked to what might happen (e.g. apprehension) or what might have happened (e.g. relief or regret); constructive problem-solving; and richer forms of learning and memory. It will inevitably be partly digital and to some extent serial and resource-limited.

(L3) A reflective (meta-management) architectural layer, using mechanisms of self-monitoring, self-evaluation and self-control, can support experiences of self, including sensory and other qualia; emotions based on self-evaluation and partial loss of self-control; and learning that extends self-categorization and forms of thinking and attentional control. Being conscious of being conscious requires L3.

Forms of sentience in all three layers include a ``first-person'' aspect (how things are sensed, experienced or perceived, as opposed to how they are). Only when L3 is present can the first-person aspect be explicitly attended to by the organism. [Conjecture: perceptual and motor systems developed a similar layered structure in parallel with more central systems.]
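The three-layer decomposition described above can be caricatured as interacting software components. The sketch below is purely illustrative: all class and method names (ReactiveLayer, DeliberativeLayer, Agent, etc.) and the toy decision rules are invented for this example and are not part of the abstract or any actual CogAff implementation.

```python
# Hypothetical sketch of the three-layer architecture described above.
# All names and rules are invented illustrations, not Sloman's own code.

class ReactiveLayer:
    """L1: fast, pattern-driven responses; no 'what if' reasoning."""
    def react(self, percept):
        if percept == "looming shadow":
            return "freeze"          # immediate ``alarm'' reaction
        return None                  # no reactive rule fired

class DeliberativeLayer:
    """L2: serial, resource-limited 'what if' reasoning about options."""
    def deliberate(self, percept):
        options = ["flee", "investigate"]
        # Consider hypothetical outcomes before acting.
        return max(options, key=lambda o: self.expected_value(o, percept))

    def expected_value(self, option, percept):
        # Toy evaluation: prefer caution for threatening percepts.
        if percept == "looming shadow":
            return 1.0 if option == "flee" else 0.2
        return 1.0 if option == "investigate" else 0.5

class MetaManagementLayer:
    """L3: monitors, records and evaluates the agent's own processing."""
    def __init__(self):
        self.self_model = []         # basis for self-categorization

    def monitor(self, percept, decision):
        self.self_model.append((percept, decision))
        return f"noticed that I chose '{decision}'"

class Agent:
    """The layers operate 'concurrently'; here interleaved per cycle."""
    def __init__(self):
        self.l1 = ReactiveLayer()
        self.l2 = DeliberativeLayer()
        self.l3 = MetaManagementLayer()

    def cycle(self, percept):
        # Reactive response pre-empts deliberation when a rule fires.
        action = self.l1.react(percept) or self.l2.deliberate(percept)
        reflection = self.l3.monitor(percept, action)
        return action, reflection

agent = Agent()
print(agent.cycle("looming shadow"))   # L1 fires: freeze
print(agent.cycle("rustling grass"))   # L2 decides: investigate
```

Note that only the Agent's L3 component keeps a record of its own decisions, mirroring the claim that being conscious of being conscious requires the meta-management layer.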
Empirical evidence and engineering design considerations suggest that the layers are not related as a simple control hierarchy, since processes in each can drive or modify processes in the others (an example of "circular causation"). E.g. control of attention and thought processes is always limited. This also undermines the common assumption that consciousness is inherently bound up with rationality.

Each layer can go wrong in many different ways. Since their processing is concurrent, malfunctions in one can leave another more or less intact. Thus we can expect many combinations of ``disorders of consciousness'' (including blindsight, multiple personality disorder, autism, etc.). Different layers need not be mapped onto different physical mechanisms. E.g. if they use virtual machines distributed over physical mechanisms, this can undermine simple mental-physical correlations. (Compare hardware-software correlations.) In both cases ``downward'' causation is compatible with causal completeness at the physical level. (Cf. Haken on circular causation.)

This work is partly inspired by evolutionary considerations, partly by empirical research in psychology and brain science, partly by philosophical analysis of many familiar concepts, and partly by lessons learnt from work in AI on the design of various kinds of fragments of intelligent agents. It is not yet clear precisely what sorts of functional capabilities are required in each layer to support a typically human repertoire of mental states and processes. Neither is it clear whether computer-based mechanisms would suffice for conscious human-like robots, nor whether mechanisms of classical physics would suffice. It might turn out that physical constraints of weight, size, energy consumption, speed, information storage capacity, and reliable information persistence require quantum mechanical mechanisms. Other reasons for bringing in quantum mechanics are usually not based on proper design considerations.
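The "circular causation" point, that the layers do not form a simple control hierarchy, can also be caricatured in code: a reactive alarm pre-empts deliberation, and meta-management can notice but not fully prevent the loss of control. Again, every name here (deliberate, meta_manage, the interrupt strings) is a hypothetical illustration, assuming a simple interleaved model rather than genuine concurrency.

```python
# Caricature of circular causation between layers: a reactive ``alarm''
# interrupts the deliberative layer, and meta-management registers the
# partial loss of control. All names are invented illustrations.

def deliberate(goal, interrupts):
    """Serial deliberation that can be pre-empted by reactive alarms."""
    for step in range(5):
        if interrupts:               # the reactive layer drives L2
            return f"abandoned '{goal}' at step {step}: {interrupts.pop()}"
    return f"completed plan for '{goal}'"

def meta_manage(outcome):
    """L3 evaluates what happened; control of attention is limited."""
    if "abandoned" in outcome:
        return "noticed attention was captured; partial self-control only"
    return "plan proceeded under deliberative control"

interrupts = ["alarm: sudden noise"]
outcome = deliberate("find food", interrupts)
print(meta_manage(outcome))
```

The point of the sketch is that causation runs both "upward" (reactive events redirect deliberation) and "downward" (meta-management evaluates and can retrain lower layers), with no layer simply commanding the others.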
(They are often based more on confused, wishful thinking about "freedom", "self", etc.)

Consequences of inadequate understanding of the "design issues" include (a) proposing mechanisms of mind which fail to address the requirements of real (animal and robot) minds, (b) arguments against classes of designs based on ignorance of the variety and depth covered by such classes, and (c) superficial theories about both the phenomenology and the mechanisms of consciousness.

A baby zombie designed with the right architecture would eventually develop the ability to wonder about the link between its qualia and its body, just like baby humans.

For further elaboration follow the links from http://www.cs.bham.ac.uk/~axs

The Birmingham Cognition and Affect directory is at http://www.cs.bham.ac.uk/research/projects/cogaff/ Recent papers by myself and others elaborate on the above themes.

Less formal discussion notes, contributions to email lists, contributions to usenet news groups, etc. can be found in this directory: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/

[end]