This file is http://www.cs.bham.ac.uk/~axs/misc/to-drew-on-consciousness.txt

From Aaron Sloman Thu Jul 17 00:10:42 BST 2003
Subject: Re: AAAI Fellows Discussion List someone has to start
From: Aaron Sloman

Hi Drew,

Thanks for starting this off. For some people it could be useful to have an
easily accessible archive if the discussions get anywhere. Other mail list
servers seem to have good software for threaded archives, e.g. psyche-b
    http://listserv.uh.edu/archives/psyche-b.html
I don't know about majordomo.

> For all I know, you and I are the only ones on fellows-discuss so far.

Actually, before posting that first message I ran my secret three-level
AAAI-fellow-simulator with various settings of its system parameters. That
produced the prediction that all the fellows interested in this topic who are
not on vacation in Antarctica, or at a conference on a submarine, or for some
other reason away from email, would by now have signed on and be waiting for
something to happen. The prediction seems to have been accurate this time.

(The simulation with some parameter settings also predicted that there would
be a fair number of lurkers waiting to decide whether to unsubscribe or not.)

> By the way, did you actually pick one of the alternative answers to my
> query, or "none of the above"?

I was deliberately being coy till after the test of the prediction. Chris ran
his Aaron-simulator and got almost the right result.

Look carefully at the questions:

1 The problem is just too uninteresting compared to other challenges
2 The problem is too ill defined to be interesting; or, the problem is only
  apparent, and requires no solution
3 It's an interesting problem, but AI has nothing to say about it
4 AI researchers may eventually solve it, but will require new ideas
5 AI researchers will probably solve it, using existing ideas
6 AI's current ideas provide at least the outline of a solution
7 My answer is not in the list above. Here it is: ...

Comment 1:

Is there really one problem, or a whole family of problems? If the latter,
then we have systematic failure of reference because of the indeterminacy of
these expressions: 'The problem', 'The problem', 'It's', 'it', etc. etc.

The way the problem is posed, using phrases like "phenomenal consciousness",
"qualitative experience", "seem to have a definite but indescribable quality",
etc., fails to identify a unique problem, since those phrases have different
interpretations.

If we ignore AI and computers for a second and just ask about

(a) all the varieties of animals produced by biological evolution on earth
    (and maybe elsewhere)
(b) all the stages of development of a typical human from fertilization till
    dust
(c) all the various ways in which brain damage or disease can produce atypical
    phenomena
(d) all the kinds of states even a normal human can be in (e.g. when asleep,
    dreaming, sleep-walking, hypnotised, anaesthetised (in various different
    ways), driving a car skilfully while talking, including stopping at
    lights, turning corners, etc., and not remembering any of it at the end of
    the journey; or what goes on when you understand a complex sentence which
    has multiple parses, most of which you think you never experienced, but
    maybe a part of you did (as various psycholinguistic experiments suggest);
    etc.)

then it is not clear which of the states and processes that can occur in such
animals/people/stages/states do, and which do not, fit the specification,
i.e. involve "qualitative experience" etc.
It's not that we don't know which individuals have them: we don't even know
what counts as evidence one way or the other. Or rather, some people think
they know, but different people disagree as to what is and what is not
evidence. (Compare discussions as to the stage at which a foetus experiences
pain. Some will say that certain behaviours such as wincing settle the
question, whilst others deny that that is enough. Likewise with neural
evidence.)

My conclusion is that there is something going on here, but we need a deep
theory, and a new, enriched conceptual framework, to describe all the various
phenomena with sufficient precision to make the questions answerable.

In particular, we need to understand the variety of possible designs
(combinations of mechanisms, media, forms of representation, algorithms,
knowledge, and architectures combining them) that evolution has produced,
including virtual machine architectures that most neuroscientists have yet to
learn to think about.

For various types of designs we can precisely specify types of states and
processes that they are capable of supporting (though not always easily, and
sometimes not even decidably, if the architecture includes general
computational mechanisms, including the ability to use the environment as an
indefinitely large memory extension).

Many of those precisely specifiable states and processes will turn out to map
quite closely, though not exactly, onto our pre-theoretic concepts, e.g. being
conscious, having a visual experience, finding something funny, having an
intention, etc., just as the development of a new theory of the architecture
of matter showed how that architecture could support kinds of stuff and kinds
of processes that were previously only roughly identified, e.g. as salt, iron,
carbon, liquid, boiling, dissolving, etc. But the new architectural theory
allowed far more variety and precision, e.g. carbon-12 and carbon-14, water
and heavy water, etc.

But there will not be unique matching candidates: e.g. significantly
different, more precise notions of 'pain' or 'visual experience' may be
equally close to a variety of ordinary uses of those expressions. (Compare:
different isotopes of an element may map equally closely onto the
pre-theoretic notion of that kind of substance.)

The matches between the new precise concepts for characterising states of
different architectures will vary from one design to another. E.g. a
housefly's architecture probably supports a subset of what we refer to as
"experiences" (e.g. of something moving rapidly towards it, or some variant of
that not necessarily expressible in English), but it probably does not support
the knowledge that it is having that experience, or even the ability to pose
the question, and perhaps not the memory that it had the experience a second
ago. New-born humans may be more like houseflies than we'd like to admit. (My
prejudices about houseflies and infants are showing.)

So, when we have a much, much deeper and more general understanding than we
have now of the variety of possible designs, the various kinds of states and
processes that they can support, and what their functional implications are
for instances of the designs, we'll have a new, rich, vastly expanded set of
concepts with which to formulate many subtly different versions of Drew's
questions.
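(A minimal, hypothetical sketch of the idea that a design determines which
precisely specified state types it can support, and that such types map only
roughly onto pre-theoretic labels. All the names below, E23, E24 and the two
toy designs, are invented for illustration; this is not an actual
architecture, only the shape of the claim.)

    # Illustrative only: a "design" modelled as the set of state types it can
    # support, each state type tagged with the ordinary-language labels it
    # approximately matches.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StateType:
        name: str            # e.g. "E23": a transient looming-motion percept
        rough_labels: tuple  # pre-theoretic labels it roughly matches

    E23 = StateType("E23", ("visual experience",))
    E24 = StateType("E24", ("visual experience", "knowing that one has it"))

    @dataclass(frozen=True)
    class Design:
        name: str
        supported: frozenset  # state types instances of this design can be in

    HOUSEFLY_LIKE = Design("housefly-like", frozenset({E23}))
    ADULT_HUMAN_LIKE = Design("adult-human-like", frozenset({E23, E24}))

    def supports(design, state):
        return state in design.supported

    if __name__ == "__main__":
        for d in (HOUSEFLY_LIKE, ADULT_HUMAN_LIKE):
            print(d.name, {s.name: supports(d, s) for s in (E23, E24)})
        # The housefly-like design supports E23 but not E24; the
        # adult-human-like design supports both.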
And then we'll be able to demonstrate (usually analytically) that a machine of
type X can get into a state in which it has experiences of type E23, but not
of type E24, whereas machines of type Y can do both, and machines of type Z
can have neither but can develop (e.g. by extending their architectures) into
a state in which they have both, though with slight variations from the other
machines.

Telling which state a machine is in is at least as hard as telling which
virtual machines are running on it, if you were not the designer. But that
doesn't make the question meaningless, only hard to answer.

Comment 2:

That investigation is part of the task of AI. Nobody else is doing it, or has
the conceptual resources to do it. (AI may need expanding to do it: e.g.
perhaps moving away from a limited class of information-processing mechanisms.
I don't know -- e.g. it depends on whether every physical process can be
simulated with full precision on a sufficiently large computer.)

Comment 3:

When AI is in a far more advanced state we'll be able to replace the
expression 'The problem' with a variety of far more precise expressions
referring to different sorts of states on different types of machines, which
have more or less similarity to the various things different people think
they are referring to when they describe the problem.

Comment 4:

When that state has been achieved, we can say:

Q: 1 The problem is just too uninteresting compared to other challenges

Answer: Yes for some versions of 'the problem', no for others.

Q: 2 The problem is too ill defined to be interesting; or, the problem is only
apparent, and requires no solution

Answer: There are many more or less loosely related problems that cannot
easily be distinguished without conceptual advances. They are interesting and
require solutions, but not all the solutions are the same, because the
problems differ -- more than we now realise.

Q: 3 It's an interesting problem, but AI has nothing to say about it

Answer: Without AI the clarification and separation of the various problems is
impossible. (Some philosophers who are ignorant of AI try hard, but just lack
the conceptual tools to express some of the important ideas, e.g. to describe
some of the kinds of architectures involving multiple concurrent processes of
different sorts interacting in certain ways. But AI people without
philosophical training can make other mistakes.)

Q: 4 AI researchers may eventually solve it, but will require new ideas

Answer: AI researchers will eventually solve *them*, using new ideas.

Q: 5 AI researchers will probably solve it, using existing ideas

Answer: Existing ideas in AI and related disciplines are a launching pad: we
can begin to think about varieties of architectures and the processes they can
support. But we are likely to find we need new ideas as well (e.g. in the way
that people using programming languages find they need new programming
constructs, e.g. procedure calls with local variables, multi-threading,
inheritance mechanisms, unification, garbage collection, pre-emptive and other
kinds of scheduling, etc.). (I think something like this was said in Minsky's
Turing Award lecture 30 years ago:
    http://web.media.mit.edu/~minsky/papers/TuringLecture/TuringLecture.html )

Q: 6 AI's current ideas provide at least the outline of a solution

Answer: Not so much the outline of a solution as the seeds of a research
programme that will distinguish the different problems and their solutions.

Q: 7 My answer is not in the list above. Here it is: ...
Answer: In particular, when we know how to design a human-like machine capable
of understanding philosophical discussions, which requires (as Pat almost
agrees) an architecture with internal self-monitoring and self-categorising
capabilities, then we may be able to explain how such a machine can discover
things about itself that it finds puzzling in exactly the way human
philosophers do.

Part of the requirement for the existence of such a machine is, as Harry
suggested, an ability to develop its own concepts for describing its own
internal states (whether using Kohonen nets or some other self-organising
concept-formation system).

Then those concepts will have what I think Campbell means by 'causal
indexicality' --- i.e. their use will implicitly refer to the whole set of
such concepts used by *that* individual and also to the system deploying them;
in something like the way 'now' refers to the time of utterance, 'this' refers
to a relation between the utterer and something, and 'I' refers to the
utterer.

That will make those internal-state-describing concepts non-transferable
between individuals (explaining the 'indescribability' Drew mentioned, and
various other philosophical conundra). E.g. the explanation of 'You can't have
experiences like mine' will turn out to be partly analogous to 'You can't use
"I" to mean what I do when I use it'. Or 'You can't have my height'.

But this, if true at all, is a very abbreviated version of a much longer
story, yet to be told.

We have work to do.

Aaron
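(Appended note: a minimal, hypothetical sketch of the kind of self-organising
concept formation mentioned above, a tiny 1-D Kohonen-style map over invented
"internal state" vectors. Nothing here implements the architecture being
discussed; the only point illustrated is that each individual's learned
categories are indexed to its own history, so the labels are not directly
transferable between individuals.)

    # Illustrative only: each individual trains its own tiny Kohonen-style map
    # on its own internal-state history; the resulting category labels (unit
    # indices) carry meaning only relative to that individual's map.
    import numpy as np

    class TinySOM:
        """A 1-D self-organising map over internal-state vectors."""
        def __init__(self, n_units=5, dim=4, seed=0):
            self.weights = np.random.default_rng(seed).random((n_units, dim))

        def train(self, states, epochs=20, lr=0.3, radius=1):
            for _ in range(epochs):
                for s in states:
                    winner = int(np.argmin(np.linalg.norm(self.weights - s, axis=1)))
                    for u in range(len(self.weights)):
                        if abs(u - winner) <= radius:  # neighbourhood update
                            self.weights[u] += lr * (s - self.weights[u])

        def categorise(self, state):
            """This individual's own label for the state: a bare unit index."""
            return int(np.argmin(np.linalg.norm(self.weights - state, axis=1)))

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        a, b = TinySOM(seed=1), TinySOM(seed=2)
        a.train(rng.normal(0.2, 0.05, (200, 4)))  # individual A's history
        b.train(rng.normal(0.8, 0.05, (200, 4)))  # individual B's different history
        probe = rng.normal(0.2, 0.05, 4)
        # The two labels are not comparable: each is defined only by its own map.
        print("A's label:", a.categorise(probe), "B's label:", b.categorise(probe))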