Please ignore this version. The final version of the paper is available at:
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-aaai-consciousness.pdf

Key ideas: As I think Turing understood, what the advance of science (and philosophy) needs is not a test for the intelligence of a particular machine or individual, but a test for a *theory* of intelligence: one that explains the huge variety of forms of intelligence, human and non-human, starting with the many kinds found on this planet, including squirrel intelligence, crow intelligence, elephant intelligence, etc. One form such a theory could take is a specification for the design of a type of baby robot with the potential to 'grow up' to exhibit many different forms of human intelligence, depending on the environments (including languages, cultures, and educational opportunities) encountered at various stages of its life.
The ideas were followed up in several papers on the COGAFF web site, including papers on "Virtual Machine Functionalism":
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
an incomplete paper on "intelligence" as a concept exhibiting parametric polymorphism (extending ideas of Gilbert Ryle); a minimal code sketch of the idea follows the link:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/family-resemblance-vs-polymorphism.html#conscious
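Parametric polymorphism here is the programming-language notion: one schematic definition, instantiated with different type parameters. The following minimal Python sketch is purely illustrative (nothing in it comes from the paper; all names are invented): 'intelligent' is treated as a generic predicate whose content is fixed only once a domain and a success criterion are supplied.

    from dataclasses import dataclass
    from typing import Callable, Generic, TypeVar

    # The type parameter: what counts as 'behaviour' differs per domain
    # (chess moves, nest-building actions, nut-caching choices, ...).
    Behaviour = TypeVar("Behaviour")

    @dataclass
    class Intelligent(Generic[Behaviour]):
        """One schematic concept, instantiated differently per domain."""
        domain: str
        succeeds: Callable[[Behaviour], bool]  # domain-specific criterion

        def applies_to(self, behaviour: Behaviour) -> bool:
            return self.succeeds(behaviour)

    # Two instantiations of the *same* schema with different parameters:
    squirrel_iq = Intelligent[str](
        domain="nut caching",
        succeeds=lambda action: action == "cache in scattered locations",
    )
    crow_iq = Intelligent[list](
        domain="tool use",
        succeeds=lambda steps: "bend wire into hook" in steps,
    )

    print(squirrel_iq.applies_to("cache in scattered locations"))        # True
    print(crow_iq.applies_to(["bend wire into hook", "retrieve food"]))  # True

On this reading, asking whether something 'is intelligent' simpliciter is like asking whether a value satisfies a generic predicate without fixing its type parameter.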
And the Turing-inspired Meta-Morphogenesis project.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
including the developing theory of fundamental and derived construction kits
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html
(The above also have PDF versions.)

Perhaps the biggest red herring in the philosophy of mind has been the phrase "what it is like to ...". Compare
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/rock
OUT OF DATE ABSTRACT
Many debates about consciousness appear to be endless, in part
because of conceptual confusions preventing clarity as to what the
issues are and what does or does not count as evidence. This makes
it hard to decide what should go into a machine if it is to be
described as 'conscious'. Thus, triumphant demonstrations by some AI
developers may be regarded by others as proving nothing of interest
because the system does not satisfy *their* definitions or
requirements specifications.
Moreover, some disputants deny that the phenomena allegedly explained exist at all, and therefore claim that *no* working model can be relevant, whereas others claim that the phenomena are definitionally tied to being products of evolution, and therefore that no *artificial* working model can be relevant.
I suggest that we can shift the debate in a fruitful way by focusing on phenomena that everyone must agree do exist. For example, all disputants must agree that there are people from many cultures who, possibly for multiple and diverse reasons, are convinced that there is something to be discussed and explained, variously labelled 'phenomenal consciousness', 'qualia', 'raw feels', 'what it is like to be something', etc. Such people may disagree on details: whether these are epiphenomenal (i.e. incapable of being causes), whether their nature can be described in a shared language, whether they can exist in non-biological machines, whether they have biological functions, whether other animals have them, how they evolved, whether it is possible to know whether anyone other than yourself has them, and so on. Likewise, everyone must agree that there are others who strongly disagree with those opinions.
These disputes, often involving highly intelligent people, clearly exist, and the participants acknowledge their existence by taking part in them. So that is something that needs to be explained. Any working model of a typical (adult) human mind should explain the possibility of views being held on both (or all) sides. Even people who dispute the need for a scientific explanation of qualia (e.g. because they claim the concept is incoherent) must agree on the need to explain the existence of disputes about qualia.
Ideally this should not be done by adding some otherwise unnecessary feature to the design: it should arise out of design features that have biological or engineering advantages (at least for some species of animal or machine) independently of modelling or explaining these philosophical tendencies.
To do this we need to start by considering only functionally useful architectural requirements for the design of an animal or machine with a wide range of information-processing capabilities, such as humans have, all of them capable of producing useful effects, which might help to explain how they evolved. We also need to demonstrate that under certain circumstances the operation of these mechanisms can lead an individual to notice facts about itself that are naturally described in the ways philosophers adopt when they talk about qualia, phenomenal consciousness, raw feels, etc.
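As one concrete (and purely illustrative) reading of 'functionally useful architectural requirements', the following Python sketch assumes the kind of layering discussed in the CogAff work: a reactive layer, a deliberative layer, and a meta-management layer, each included because it earns its keep functionally, not in order to model philosophers. All class and method names are my inventions, not part of any published specification.

    from typing import List

    class ReactiveLayer:
        """Fast condition-action responses; useful to any organism or robot."""
        def react(self, percept: str) -> str:
            rules = {"threat": "flee", "food": "approach"}
            return rules.get(percept, "idle")

    class DeliberativeLayer:
        """Constructs and compares alternative plans before acting."""
        def plan(self, goal: str) -> List[str]:
            return [f"consider options for {goal}", f"select a plan for {goal}"]

    class MetaManagementLayer:
        """Monitors the other layers and records self-descriptions in an
        internally grown vocabulary."""
        def __init__(self) -> None:
            self.self_model: List[str] = []
        def observe(self, internal_event: str) -> None:
            self.self_model.append(f"noticed: {internal_event}")

    class Agent:
        def __init__(self) -> None:
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
            self.meta = MetaManagementLayer()
        def step(self, percept: str, goal: str) -> str:
            action = self.reactive.react(percept)
            plan = self.deliberative.plan(goal)
            # Self-monitoring is functionally useful (for learning, debugging,
            # redirecting attention) independently of any philosophical aims:
            self.meta.observe(f"reacted with '{action}' while planning {plan}")
            return action

    agent = Agent()
    agent.step("threat", "find food")
    print(agent.meta.self_model)

The philosophical payoff is meant to come for free: once the meta-management layer exists for practical reasons, it can notice internal states for which it has no public vocabulary.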
We also need to explain why other individuals with the same information-processing architecture dispute the claims made by the first group. For example, this could be because individuals starting with the same sort of genetic makeup can develop in different ways as regards their standards of meaningfulness, or their standards of evidence for theories. Or, more subtly, they may develop different ontologies for describing the same portion of reality.
In such a situation we may be able to explain what is correct and what is incorrect about the assertions made on both sides, if the contradictions in their descriptions of the same phenomena arise out of incomplete understanding of what is going on.
Ideally we should be able to provide a deep new theory that incorporates what is correct on each side and exposes the errors made by both.
Perhaps a theory of this sort could deal in the same way not merely with disputes about consciousness, but also disputes about free-will, about the nature of affective states and processes, and about the existence of 'a self'. It will have to solve many other problems about how normal, adult, human minds work. It is likely that any such theory will also provide a basis for modelling other kinds of minds by modifying some of the requirements and showing which designs would then suffice, or by showing how various kinds of damage or genetic malfunction could produce known kinds of abnormality, and perhaps predict the possibility of types of minds and types of abnormality in human minds that are not yet known.
This work has already begun. In (Sloman and Chrisley 2003) a partial specification was given for a machine whose normal functioning could lead it to discover within itself something like what philosophers have called 'qualia'. The discovery depends on the existence, in the machine's architecture, of a 'meta-management' sub-system which has self-monitoring and 'meta-semantic' capabilities and is capable of developing an ontology to describe some of its own internal states and processes, using an internal formalism whose semantics depends crucially on causal indexicality. This idea, which I believe has been completely ignored in subsequent publications on consciousness, including publications in the same journal, will be expanded, and a programme will be outlined for further development of the theory, in the hope of resolving the remaining questions. The design and implementation of such machines, and analyses of their tradeoffs, could help to unify philosophy, psychology, psychiatry, neuroscience, studies of animal cognition, and of course AI and robotics.
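As a hypothetical gloss on that idea (my sketch, not Sloman and Chrisley's implementation), a meta-management subsystem might coin private labels for recurring internal states, where each label's meaning is causally indexical: it denotes whatever state of *this* machine typically causes the label to be applied, so it cannot be fully translated into another agent's vocabulary.

    import itertools
    from typing import Dict, Tuple

    class MetaManager:
        """Coins private labels for recurring internal states.

        The labels' semantics are causally indexical: 'q0' means roughly
        'whatever state of THIS system causes this label to be applied'.
        Another agent's 'q0' need not have the same content.
        """
        def __init__(self) -> None:
            self._ids = itertools.count()
            self._ontology: Dict[Tuple, str] = {}  # state signature -> label

        def classify(self, internal_state: Tuple) -> str:
            # First encounter: coin a new private concept for this state.
            if internal_state not in self._ontology:
                self._ontology[internal_state] = f"q{next(self._ids)}"
            return self._ontology[internal_state]

        def report(self, internal_state: Tuple) -> str:
            # The agent can truthfully report being in state 'q0' without
            # being able to define 'q0' in any shared, public vocabulary.
            return f"I am in state {self.classify(internal_state)}"

    # Two agents with identical architectures grow different ontologies,
    # depending on the order in which they encounter internal states:
    a, b = MetaManager(), MetaManager()
    red_ish = ("vision", "long-wavelength-dominant")
    pain_ish = ("proprioception", "damage-signal")
    print(a.report(red_ish), "/", a.report(pain_ish))   # ... q0 / ... q1
    print(b.report(pain_ish), "/", b.report(red_ish))   # ... q0 / ... q1

The toy example also suggests why the disputes discussed above arise naturally: each agent's 'q...' concepts are grounded in its own causal history, so apparently conflicting first-person reports can all be correct.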
A. Sloman and R. L. Chrisley (2003), 'Virtual machines and consciousness', Journal of Consciousness Studies (special issue edited by Owen Holland), 10(4-5), pp. 113-172. Available as PDF.

Also relevant:
The Chewing Test for intelligence (2014)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/chewing-test.html

Why can't (current) machines reason like Euclid or even human toddlers? (2017)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html

Consciousness in a Multi-layered Multi-functional Labyrinthine Mind
Poster presentation at
Conference on Perception, Action and Consciousness: Sensorimotor Dynamics and Dual Vision,
Bristol, UK, July 2007.
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham