From Aaron 19 Jun 1996 / 20 Jun 1996
Subject: The noun "consciousness" is pernicious
Newsgroups: comp.ai.philosophy,sci.cognitive
Distribution: world

I recently posted the following to the moderated group
sci.psychology.consciousness (gatewayed to the psyche-D email list).
Since it is relevant to other newsgroups, here it is. Comments welcome,
as usual.

Aaron

From Aaron Sloman Wed Jun 19 22:34:04 BST 1996
To: klein@adage.Berkeley.EDU
Subject: posted message

Dear Stan,

I hope it's clear that the message I posted this morning is not
personal! I just think a huge amount of time, effort, and intellectual
energy is going into discussions on which no progress is being made,
because the participants, even distinguished scientists, are talking
past one another: they all think they are talking about the same thing
when there is no one thing for them to talk about, and they are
actually talking about different things (or, in some cases, nothing).

Best wishes.
Aaron

Newsgroups: sci.psychology.consciousness
References: <9606151708.AA02612@adage.Berkeley.EDU>
Subject: Re: Binding during dreaming

Stan Klein writes:
> article: in sci.psychology.consciousness
> Date: Sat, 15 Jun 1996 10:08:05 PDT
> ......
> My apologies for using NCC without clarification. It stands for the
> "neural correlates of consciousness". I used to use the letters NCCQ
> ("neural causes and correlates of qualia" but David Chalmers convinced
> me that NCCQ was too controversial). I think that
> just about everyone
  ^^^^^^^^^^^^^^^^^^^
> believes that the search for NCC is important

Dear Stan,

"just about everyone" may be a considerable exaggeration!! I, for one,
do not think it is important at all, and I believe that it diverts
attention from important and difficult problems.

The whole idea is based on a fundamental misconception: that just
because there is a noun "consciousness" there is some *thing*, like
magnetism or electricity or pressure or temperature, and that it's
worth looking for correlates of that thing. (Or the misconception that
it is worth trying to prove that certain mechanisms can or cannot
produce "it", or trying to find out how "it" evolved, or trying to find
out which animals have "it", or trying to decide at which moment "it"
starts when a foetus develops, or at which moment "it" stops when brain
damage or death occurs.)

I regard all this as being as silly as looking for neural correlates
for any of:
    "knowing a fact"
    "having a skill"
    "having a PhD"
and just as silly as looking for electronic correlates of
    "a compiler detecting a syntax error during program compilation"
    "a computer program solving a mathematical problem"
    "arrival of a message being recorded in a computer"
(even on the same computer, using the same operating system, users can
choose very different mail mechanisms).

It's silly because there is no clearly identifiable thing called
"consciousness" for any neural states to be correlated with: references
to the alleged thing (consciousness) are so general, covering extremely
diverse phenomena which need not have anything neural (or physical) in
common, for the same sort of reason as the above computer-based events
need not have any common electronic implementation. They are
essentially events defined by functional relations within a functional
architecture.

I am not saying that humans, animals, or machines of the future cannot
be conscious (note the adjective) of anything -- just that use of the
noun "consciousness" causes deep confusion, suggesting that there is
one referent.
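To make the mail example slightly more concrete, here is a minimal
sketch in present-day Python. All the names in it are made up for
illustration (they come from no real mail system and not from Stan's
post); it only shows how "arrival of a message being recorded" is an
event defined by its functional role rather than by any shared
underlying implementation:

# A toy illustration (hypothetical names, no connection to any real
# mail software): the *same* functionally defined event, "arrival of a
# message being recorded", realised by two quite different mechanisms.

class FlatFileMailStore:
    """One realisation: append a line to a spool file."""
    def __init__(self, path):
        self.path = path

    def record_arrival(self, sender, text):
        with open(self.path, "a") as spool:
            spool.write("From %s: %s\n" % (sender, text))


class InMemoryMailStore:
    """A very different realisation: keep per-sender lists in memory."""
    def __init__(self):
        self.boxes = {}

    def record_arrival(self, sender, text):
        self.boxes.setdefault(sender, []).append(text)


def deliver(store, sender, text):
    # The rest of the system cares only that the functional event
    # "arrival recorded" has occurred, not how it was realised.
    store.record_arrival(sender, text)


if __name__ == "__main__":
    for store in (FlatFileMailStore("/tmp/spool.txt"), InMemoryMailStore()):
        deliver(store, "stan", "NCC = neural correlates of consciousness")

The two stores share nothing at the level of data structures or I/O,
yet at the level of the functional architecture the very same event --
an arrival being recorded -- occurs in both. That is the sense in which
such events need have no common electronic implementation to look for
correlates of.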
Compare: what are the physical correlates of property changing
ownership? Of course property changes ownership: houses, cars,
factories, companies, film rights, rights to use software are bought
and sold, so property changes ownership. Would it make sense to search
for the physical correlates of these transactions? In the USA? In a
primitive hunting tribe? Now? A hundred years ago? In three hundred
years' time?

Rather, insofar as there is any sense in using the noun "consciousness"
instead of the adjective "conscious" (which often goes with the
preposition "of" followed by a wide variety of possible types of noun
phrase pinning down more precisely what is referred to), there will not
be ONE thing to be correlated but a very large collection of very
different things.

We'll only understand what sort of collection we are talking about when
we have a good understanding of "design space", i.e. the space of
possible designs for various kinds of powerful control mechanisms
(which is what brains are). This requires a study of actual and
possible information processing architectures and their powers, so that
we can see how human and other brains fit into that space, how various
kinds of non-biological systems might too, and what the effects of
various kinds of damage and abnormality in brains might be. Right now
our understanding is very, very primitive, despite the ever more rapid
collection of potentially relevant details in brain science,
psychophysics, etc. (Compare this with the collection of botanical and
zoological detail prior to Darwin.)

I really think the current fashionability of discussions of
consciousness is a real disaster for science because:

(a) it is distracting people (including many bright students) from the
    truly hard scientific problems;

(b) it is encouraging sloppy use of language and wholly uncritical
    acceptance of questions which should be rejected as ill-formed
    (the amount of junk included in the online material for the Tucson
    conference is appalling);

(c) it encourages otherwise very bright physicists to confuse the
    public by attaching a common word ("consciousness") to aspects of
    their equations which really have NOTHING at all to do with
    ordinary usage of the noun "consciousness" or the adjective
    "conscious".

Note that I am not saying that searching for neural correlates of
*different* classes of mental events is silly. Of course it isn't.
There are all sorts of mental events and processes that are far more
specific than the alleged occurrence of consciousness: e.g. the flip of
a Necker cube, the fusion of a particular sort of random dot
stereogram, the recognition of a woodpecker's call, the decision to
start running away from an approaching bull, the sudden realization
that a proof has an invalid step, the recollection of a humiliating
event, and so on. No doubt there are many other more or less similar
events in bonobos, monkeys, rats, pigeons, lizards, fleas, bacteria,
etc.

I see nothing wrong with searching for mechanisms in brains and other
biological and non-biological control systems that underlie and explain
the possibility of such specific, temporally locatable, phenomena.
(Note that "underlie and explain" is a far stronger requirement than
merely being *correlated* with such phenomena: the number of outer
shell electrons in atoms isn't simply *correlated* with the observable
properties of chemical elements. The connections go much deeper, via
mathematically describable relationships, which is why we can talk
about *explanation* here.)
Although the search for neural mechanisms underlying mental events and
processes is not silly, it IS silly to refer to the resulting
discoveries in neuroscience as discoveries of the neural correlates of
*consciousness*, because they are neural correlates of lots of very
DIFFERENT things, not of just one well-defined type of thing.

Moreover, in themselves (as opposed to in their functional relations to
other parts of the system) such neural correlates of things of which we
are conscious will not necessarily be any different in kind from neural
correlates of things of which we are NOT conscious, e.g.

 o using optical flow and other visual information to control posture,
 o low-level phonological analyses in speech understanding,
 o formation of generalisations abstracted from practice examples,
 o development of grammatical knowledge when we were toddlers,
 o selection of grammatical structures for sentence production,
 o control of saccades,
 o blindsight,
 o recollection of a name you were trying to remember five minutes
   earlier,

and many more.

It may be that the ONLY difference between the things of which we are
conscious and those of which we are not is concerned with whether
certain brain mechanisms (which may or may not have a fixed location --
e.g. they may be virtual machines rather than physical machines) are
capable of monitoring them. I.e. it's a relational feature. This does
not mean that processes of which we are not conscious are not
themselves monitored -- just that they are monitored, if at all, by
different sub-mechanisms, which, for example, cannot be engaged by
social interactions (such as asking "How are you feeling?"), or that
they are monitored by mechanisms which do not feed their records into
certain more generally accessible memory structures.

Compare the following (a toy software sketch of this comparison appears
below, after the URLs):

1. When I run a program P on a computer, some parts of P may monitor
   and record the behaviour of other parts, e.g. which problems were
   attempted, which ones were solved, etc.

2. When P runs, the operating system O monitors and records parts of
   the behaviour of P (e.g. memory sizes, numbers of page faults, CPU
   time required both overall and in the most recent time slot, etc.).

3. If the operating system O is part of a large computer network, some
   central administrative machine A might monitor and record some parts
   of the behaviour of O, e.g. how long since it last crashed, numbers
   of users, amount of network traffic it generates, etc.

But A may have no way of getting at the events in P which P can
monitor. Just because there are some types of events and processes to
which A has access, and others to which it has not, it does not follow
that there is some metaphysically deep difference in the nature of
those events and processes.

Similarly, my inability to "introspect" the processes in my cerebellum
may be of no more significance, in relation to the nature of those
processes, than my inability to "introspect" the processes in your
cortex. There may be lots of different kinds of self-monitoring
sub-processes.

I have elaborated on some of this in the slides for a lecture I gave in
February at the RSA in London, which are available in PostScript form
at

    http://www.cs.bham.ac.uk/~axs/misc/consciousness.lecture.ps

(previously announced), and in a new plain text summary, to appear in
the RSA journal, at

    http://www.cs.bham.ac.uk/~axs/misc/consciousness.rsa.text

Longer term progress on the non-pseudo questions will require much
clearer thinking and multi-disciplinary collaboration between different
sorts of empirical science, AI and philosophy.
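Here is the toy sketch promised above: a minimal illustration, in
present-day Python with entirely made-up names (it models no real
operating system or network), of the three-level monitoring comparison,
and of the fact that A's records need not reach down to the events that
P records about itself:

# Toy three-level monitoring sketch: P monitors parts of itself,
# O monitors P at its own level, A monitors O and has no route to
# the events inside P.

class ProgramP:
    """P monitors and records some of its own behaviour."""
    def __init__(self):
        self.attempted = 0
        self.solved = 0

    def try_problem(self, n):
        self.attempted += 1
        if n % 2 == 0:          # pretend even-numbered problems get solved
            self.solved += 1

    def self_report(self):
        return {"attempted": self.attempted, "solved": self.solved}


class OperatingSystemO:
    """O monitors P, but only in its own terms: resources, not problems."""
    def __init__(self, program):
        self.program = program
        self.cpu_slices = 0

    def run(self, problems):
        for n in problems:
            self.program.try_problem(n)
            self.cpu_slices += 1    # O records resource use, not results

    def os_report(self):
        return {"cpu_slices": self.cpu_slices}


class AdminMachineA:
    """A monitors O; it has no access at all to the events inside P."""
    def __init__(self, os_instance):
        self.os_instance = os_instance

    def admin_report(self):
        return {"os_stats": self.os_instance.os_report()}


if __name__ == "__main__":
    p = ProgramP()
    o = OperatingSystemO(p)
    a = AdminMachineA(o)
    o.run([1, 2, 3, 4, 5])
    print("P's own records:", p.self_report())   # attempted and solved counts
    print("A's records:    ", a.admin_report())  # no trace of P's problems

The differences between what P, O and A can record are purely
relational -- differences in which monitor is connected to which events
-- not differences in the intrinsic nature of the events themselves.
That is all the analogy is meant to suggest about self-monitoring in
brains.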
In particular, some of the conceptual confusions that lead people to
assume that information processing systems are incapable of supporting
mental states, or that functional analyses will always leave out
"something important", can only be disposed of via a detour into the
analysis of notions of causation and mechanism, including deep
conceptual analysis of the ways in which these notions are involved in
our thinking about mental phenomena.

We have to uncover problems and confusions about the nature of mental
phenomena that will prove as pervasive and significant for the science
of mind and brain as the problems and confusions uncovered by
Einstein's analysis of notions of simultaneity were for physics, but
far more complex and subtle, and more likely to generate powerful
emotional reactions in scientists and laymen. (Though even Einstein's
work generated emotional reactions for a while!) If you think you know
what consciousness is because you have direct access to it, remember
that people used to think they knew what simultaneity was because they
had direct access to it. And then came Einstein.

I've begun to address some of this in a first draft paper on how the
possibilities and powers of complex systems are related to the
possibilities and powers of their components:

    http://www.cs.bham.ac.uk/~axs/misc/real.possibility.html

which still has many gaps and weaknesses. Offers of help welcome.
(Preferably by direct email, as I don't subscribe to psyche-D, to save
my disk quota and avoid clutter in my mail file, and instead wait for
articles to come via the sci.psychology.consciousness newsgroup, which
can cause significant delay.)

Maybe it is time to form a society for those who want to study the hard
problems of mind and brain and information-based control systems
without ever using the noun "consciousness", at least not for the next
five (ten?) years? Stan, I am sure your scientific work would be
accommodated.

Cheers
Aaron

From article: 32687 in comp.ai.philosophy
Newsgroups: comp.ai.philosophy,sci.cognitive
Followup-To: comp.ai.philosophy,sci.cognitive
Message-ID: <4qbv9a$dfn@globe.indirect.com>
References: <4qb11k$7no@percy.cs.bham.ac.uk>
NNTP-Posting-Host: bud.indirect.com
Date: 20 Jun 1996 16:46:34 GMT
Organization:
Subject: Re: The noun "consciousness" is pernicious
From: marty@indirect.com (Marty Stoneman)

Aaron Sloman (A.Sloman@cs.bham.ac.uk) wrote:

: I recently posted the following to the moderated group
: sci.psychology.consciousness (gatewayed to the psyche-D email list).
: Since it is relevant to other newsgroups, here it is.
: Comments welcome, as usual.
: Aaron

[REST OF POSTING SNIPPED]

I avoid posting "agreement" posts. But, as a developer of mechanisms
which "underlie and explain" (as described in Aaron's post), I must say
that it's refreshing to read a post that makes sense and identifies
nonsense. I liked and agreed with it. Good job!

Cheers
Marty Stoneman
marty@indirect.com