From Aaron Thu Jan 30 02:31:13 GMT 1997
Newsgroups: sci.psychology.consciousness
Subject: Re: Qualia & Form Perception

Joe Jeffrey writes:
> article: 985 in sci.psychology.consciousness
> Date: Mon, 20 Jan 1997 10:41:04 -0600
>
> Aaron Sloman, in objecting to the use of the term "qualia",

Actually I was not objecting. I specifically wrote

> Just for the record, unlike Dan Dennett, I don't think the word "qualia"
> is used without any coherent meaning at all, or that there's nothing
> important that's ever referred to by the word.

Maybe there were too many negatives to be processed by a hasty
reader. I can paraphrase: I think the word "qualia" is sometimes used
with a coherent meaning, referring to something important.

I then tried to describe (in outline) one sort of information
processing architecture that could produce such phenomena, and
possibly does in humans.

> proposes that the following captures the notion of consciousness:

I didn't say that either. At best it captures only a subset of the
complex, messy and ambiguous notion of consciousness, namely the
subset of cases where we can talk about self-consciousness.

> >    the ability of some sophisticated information processing systems
> >    (e.g. humans, and maybe lots of other animals too, though I don't
> >    know which) to attend to, detect, categorise, evaluate and sometimes
> >    modify some of their own internal states, including some of the
> >    intermediate states involved in visual and other forms of
> >    perception.
>
> This amounts to saying that consciousness is the ability to monitor
> (some of) one's internal states.

I didn't say that or intend to make any such sweeping claim. However
I think it does account for a subset of the phenomena referred to by
the word "consciousness".

> I see a serious problem
> with this: It assumes that there is such a thing as an
> "internal state", something that is not simply a physiological state,

Right. I believe the world is full of abstract states that exist and
have causal powers.

    poverty can cause an increase in the crime rate

    change of ownership of some shares can cause an increase in your
    tax liability

    detection of repeated substructures in a parse tree can cause a
    compiler to optimise your program

    when a prolog program asserts a new clause, that can cause its
    future behaviour to be different from its previous behaviour

    thinking about this topic has just caused me to remember the
    lecture I have to give at 9am tomorrow

Some philosophers dispute the existence of such things, or their
causal powers. They claim that only physical things exist or have
effects. But whose physics should we use: Newton's? Einstein's?
Schroedinger's? That of the physicists 500 years from now?

> and buys into the whole internal-cognitive-process model, implicitly.

I have no idea what this "whole internal cognitive process model" is.
I don't buy into things. I consider evidence and arguments and
counter-arguments and either come to a conclusion or say the issue is
still undecided.

For now it is perfectly clear to me from what I know about software
engineering (and other things) that it is possible for complex
information processing systems to have rich virtual machine
architectures in which processes of creation, selection, construction
and comparison occur, which in turn may cause other such processes to
occur. I conjecture that something like that is going on in our
brains, though in far richer form than anything we have ever been
able to design so far. But it's still just a conjecture.
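For concreteness, here is a minimal sketch in Python (everything in
it, including the name RuleSystem, is invented purely for
illustration) of an abstract state with causal powers, in the spirit
of the prolog example above: asserting a new rule changes a state of
a virtual machine, and thereby all future behaviour, without any
physical component being rewired.

    # Python sketch: an abstract state change with causal powers,
    # loosely analogous to a Prolog program asserting a new clause.
    class RuleSystem:
        def __init__(self):
            self.rules = []    # abstract state: the rule base

        def assert_rule(self, condition, action):
            # Changing this virtual-machine state changes all
            # subsequent behaviour of respond().
            self.rules.append((condition, action))

        def respond(self, fact):
            return [action(fact) for condition, action in self.rules
                    if condition(fact)]

    rs = RuleSystem()
    print(rs.respond("it is raining"))     # [] -- no rules yet
    rs.assert_rule(lambda f: "raining" in f,
                   lambda f: "take an umbrella")
    print(rs.respond("it is raining"))     # ['take an umbrella']

Nothing hangs on the details: the point is only that the rule base is
not identifiable with any one physical (hardware) state, yet changes
to it have effects.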
> While the cognitive process model is a sometimes-useful body of
> process-talk, cognitive "processes" are a set of abstractions, not
> observable processes,

Why should I restrict my ontology to things I can observe, or things
that are not abstractions? If you try, you will almost certainly not
be able to lead a normal life. (E.g. discussing the effects of
poverty, or taking account of the things that cause ownership or
obligations or tax liability to change.)

Of course all these abstractions are ultimately implemented in some
sort of physical reality. But from what the physicists tell me that's
mostly even more abstract and unobservable than what I am talking
about. Moreover, we don't know whether the current physics will turn
out to be yet another abstraction implemented in some lower level
physics.

> and are not the only, or even necessarily the
> best, way to talk about what people do,

I don't claim that my way of talking about the world is going to be
the best for everyone in all contexts. However, if you are trying to
explain a set of phenomena then you should consider alternative
explanatory theories and evaluate them according to their depth,
explanatory power, precision, direct and indirect support,
consistency with other known things, etc. etc. I see no reason to
prejudge the outcome by ruling out explanations that refer to
internal states of an information processing system.

> ... which is recognize things in
> the world (sometimes immediately, sometimes after "thinking about it").

Certainly people do that. But that's not what I was talking about. I
was talking about your ability to pay attention to some of your own
states, e.g. noticing changes in the shape of your sensory percepts
even though the shape of the object you are looking at does not
change, and you know it. This is one of the sorts of things painters
have to be able to do. Another case, in a doctor's surgery, is paying
attention to the patterns of discomfort you have when you move your
damaged arm in a certain way. Another is noticing that the way you
have been thinking about that algebraic problem seems to involve a
redundant step that you should be able to eliminate, etc.

I have no reason to believe that these are any different in principle
(metaphysically different) from the sorts of things that go on when a
compiler notices the possibility of optimising the code it is
creating, or an operating system detects that it is spending too much
time paging and swapping, or a word processor detects that the
insertion of a new character makes it necessary to break the line and
reformat the rest of the page, etc. etc.
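The operating system case can be put in a minimal sketch (Python
again; the class PagingMonitor and its numbers are invented for
illustration):

    # Python sketch: a system attending to and evaluating a pattern
    # in its OWN recent states, not in the external world.
    import collections

    class PagingMonitor:
        def __init__(self, window=10, threshold=0.5):
            self.recent = collections.deque(maxlen=window)
            self.threshold = threshold

        def record(self, page_fault):
            # First-order activity: ordinary bookkeeping.
            self.recent.append(page_fault)

        def overloaded(self):
            # Second-order activity: evaluating the system's own
            # recent internal history.
            if not self.recent:
                return False
            return sum(self.recent) / len(self.recent) > self.threshold

    monitor = PagingMonitor()
    for fault in (True, True, False, True, True, True):
        monitor.record(fault)
    if monitor.overloaded():
        print("too much paging: swap out a process")

The monitor's subject matter is the system's own recent history,
which is all the analogy with attending to one's percepts requires.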
> Methodologically, it seems a very bad move to tie an attempt to define
> consciousness to a body of assumptions from another field.

I am not trying to define consciousness. I am trying to produce a
theory that can explain a whole range of phenomena, including those
that philosophers noticed that led them to talk about qualia. If that
conflicts with some people's ideas about methodology, so be it.

> Among
> other things, doing so would make it essentially impossible to study
> anything that did not fit within those assumptions.

I am always willing in principle to study anything for which good
evidence or good arguments can be produced. Often I can't study such
things because I am too busy and have to select, or because they are
too complex for my simple mind. If instead of implicitly accusing me
of having a closed mind you put forward a rival theory then I could
examine it.

> .....
> This is a bad case of cart before the horse. We do not yet have
> an agreed-on articulation of the phenomenon itself; no definition
> in terms of internal processes and states could be such an articulation.
> All we've really got is a whole lot of instances of people saying,
> "Well, consciousness includes X", and we don't even have a great
> deal of agreement about what all the X's are or ought to be.

In earlier postings I strongly criticised the use of the noun
"consciousness" as if it referred to one clearly identifiable thing,
when the evidence is that it is used to refer to lots of different
things at different times. If that's what you are saying I agree with
you.

Incidentally the original context was Stan Klein's remark, which I
quoted:

    "To be conscious one needs a much fancier architecture that we
    don't understand yet."

In agreeing with his implicit claim that the phenomena he was
referring to presuppose some sort of architecture I was not
presupposing anything about what that architecture was. So if you
think no architecture could be relevant please explain why. If you
start from some philosophical position such as metaphysical dualism
then it may be you who wish to close off some options by putting the
cart before the horse.

> There is
> a long, painful, and expensive history, in the computing field, of
> attempts to define software architecture before the requirements are
> written, or understood, and this is much like that. Without a clear
> statement of what the phenomenon is, one not couched in terms of
> "internal processes", it is not possible to know whether a given
> hypothesized architecture is good, bad, complete, accurate, or anything
> else.

This is a totally different point from whether such information
processing architectures with internal processes exist and are worth
talking about.

Anyhow IF you are saying that the specification of requirements for a
computing system cannot mention internal processes in virtual
machines you are just wrong. E.g. I can specify requirements for the
time slices of a scheduler, or the access protections between
processes in a multi-user operating system, or the kinds of
optimisations to be performed by an interpreter or compiler, or the
criteria by which a run time system should decide whether to use a
copying or a shuffling garbage collector, or whether run-time
type-checking should be turned on or off, or what the run-time
interrupt handler should do, etc.
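For instance, the garbage collector criterion might be stated as a
checkable rule over internal virtual-machine states. Here is a
minimal Python sketch, with the function and its threshold invented
purely for illustration:

    # Python sketch: choose a collector from the fragmentation of the
    # heap's free list -- an internal virtual-machine state.
    def choose_collector(free_block_sizes, fragmentation_limit=0.3):
        free = sum(free_block_sizes)
        largest = max(free_block_sizes, default=0)
        # High fragmentation: free space split into many small blocks.
        fragmentation = 1.0 - (largest / free) if free else 0.0
        return "copying" if fragmentation > fragmentation_limit else "shuffling"

    print(choose_collector([64, 64, 64, 64]))   # 'copying'
    print(choose_collector([32, 32, 960]))      # 'shuffling'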
> [JJ] > By "buy into" this model, I meant that it appears that you have > accepted the basic approach to talking about a person, and > person's consciousness, in terms of internal processes and states. > This seems to be the only accepted way of talking scientifically ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > about a person. I don't think this has anything specific to do with what's *scientific*. In our ordinary (non-scientific) interactions we naturally think of people as having beliefs, desires, attitudes, intentions, idle wishes, moods, preferences, personality traits, and a host of dispositions that may or may not reveal themselves. These are all internal *states*. We also talk about people learning things, taking decisions, becoming more (or less) unhappy or angry or envious or relieved, making plans, considering options, comparing things, making inferences, rehearsing arguments, coming to notice things, forgetting things, reminiscing, switching attention from one thing to another, forming attachments, acquiring new tastes, becoming aware of something, reacting to things, having an impulse. These are all internal *processes*. If you don't agree that we naturally talk about internal states and processes, look at what most novels and plays are about or most gossip is about. (Not all - I agree. Some of it is about who slept with whom, etc.) I also accept that not all cultures talk about the same kinds of things I've heard some things from anthropologists that are very bizarre from my point of view. And even in our culture there are some people who think mental states and processes can survive total destruction of the body, so for them "internal" would probably be the wrong word. If you think there's any useful practical way of talking or thinking about human beings, or for that matter living with and interacting with them, but which is an alternative to thinking and talking about internal states or processes, I would certainly be interested to learn about it. Whether one can make a science out of all this is another question. My conjecture is that, like the science of kinds of stuff, i.e. physics, the development of a powerful theory of the underlying virtual machine architecture in which these states and processes occur will show that some of our ordinary concepts of internal states and processes will have to be replaced, modified, subdivided, merged, or extended. But there will be considerable overlap between the prescientific and the scientific concepts, just as happened with concepts of kinds of physical stuff. [JJ] > As it is not an empirical proposition, but a ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > pre-empirical position or principle, I had assumed you, like most > people, just accepted that one just has to use such models of ^^^^^^^^^^^^^^^ > people and behavior. I am still not sure exactly which models you are talking about, but I would NOT say we just *have* to talk in any particular way either in our everyday life or when doing science. After all, different cultures use different models of humanity and our place in the universe. I.e. there are many ways of talking about people (and other animals, and machines of various kinds). Which ways turn out illuminating or useful for various purposes is an empirical matter, not a pre-empirical question. E.g. certain sorts of caricatures of AI systems based on the notion of a single fixed algorithm are ruled out empirically, e.g. 
because we are more like a collection of different adaptive processes
taking in different inputs and processing them concurrently and
asynchronously, in changing ways.

Moreover, it's an empirical question whether all the important
processing is discrete or whether there are some major features of
the system that depend on continuous (analog) mechanisms, e.g.
mechanisms better expressed by differential equations than by
programs. I conjecture that there are many subsystems whose
processing is essentially discrete (at the relevant level of
abstraction), though I suspect that other important brain mechanisms
are not.

It's an empirical issue which ways of modelling minds will help us to
devise good educational systems and good therapy systems, and help us
explain the effects of brain damage, or explain much of the observed
dynamics of human interaction. It's also an empirical question
whether this or that model will turn out to be consistent with
underlying physiological mechanisms, or capable of being produced by
an evolutionary process. (Not all trajectories in design space may be
physically possible.)

..... stuff on which we agree omitted .....

[JJ]
> >> Methodologically, it seems a very bad move to tie an attempt to define
> >> consciousness to a body of assumptions from another field.
[AS]
> >I am not trying to define consciousness. I am trying to produce a
> >theory that can explain a whole range of phenomena, including those
> >that philosophers noticed that led them to talk about qualia.
[JJ]
> Well, OK, but when someone says "Consciousness is the ability to...",
> I usually figure they're offering a definition.

I didn't say that, though apparently you read me as saying it. What I
was defining in the extract you quoted was not consciousness but a
meta-management layer in a multi-layer information processing
architecture.

I was saying that in a system whose architecture included a
meta-management layer, things of the same sorts would occur as those
that lead philosophers to start talking about qualia. I then said I
was happy to use the word "qualia" to refer to the internal objects
to which certain meta-management capabilities were directed.
(Defining consciousness is another matter.)

I've tried to make it clear all along that these are all empirical
conjectures. E.g. they could be refuted by the discovery that in
human beings what I've called the deliberative and meta-management
layers do not exist (any more than they exist in ants, as far as I
know) and all the appearances of such things are produced by a huge
collection of pre-compiled reactive mechanisms determined by our
genes, or by a huge lookup table.
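For concreteness, here is a minimal Python sketch of such a
multi-layer system (the classes and their methods are invented for
illustration, not a specification of the conjectured architecture):

    # Python sketch: three layers; the meta-management layer attends
    # to internal states of the deliberative layer, not to the world.
    class ReactiveLayer:
        def act(self, stimulus):
            return "reflex response to " + stimulus

    class DeliberativeLayer:
        def __init__(self):
            self.trace = []    # internal state: planning steps taken
        def plan(self, goal):
            steps = ["step %d towards %s" % (i, goal) for i in (1, 2, 3)]
            self.trace.extend(steps)
            return steps

    class MetaManagementLayer:
        def review(self, deliberative):
            # The object attended to is another internal state of the
            # same system -- the self-monitoring at issue here.
            if len(deliberative.trace) > 2:
                return "planning trace is long: try a simpler strategy"
            return "planning looks fine"

    reactive = ReactiveLayer()
    deliberative = DeliberativeLayer()
    meta = MetaManagementLayer()
    print(reactive.act("loud noise"))
    deliberative.plan("make tea")
    print(meta.review(deliberative))

The only point of the sketch is that what the meta-management layer
attends to is an internal state of the same system, not an object in
the environment.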
[JJ]
> I was not accusing you of having a closed mind. I was simply pointing out
> that any definition of anything makes it hard to study any aspects of
> the phenomena that do not fit within the definition, and so definitions
> are to be approached with fear and trembling, and lots of focus on the
> phenomena to be included and excluded.

That's one of the reasons why I am opposed to starting with
definitions. Good science doesn't start from definitions. Rather it
eventually comes up with definitions based on good theories. The
theories are gradually created by a bootstrapping process. Only when
we have good explanatory theories do we really know what we were
trying to explain.

I don't know why you take me to be offering definitions, when I try
so hard to resist them!

[JJ]
> ...If one defines consciousness
> as a set of abilities, couched in terms of an information-processing
> model, it will then be very hard to talk about, let alone study,
> things that don't fit within that model.

Certainly. What did you have in mind? Give us some examples, and then
let's see what we should think about them, and how they challenge
this or that theory.

[AS]
> >Anyhow IF you are saying that the specification of requirements for
> >a computing system cannot mention internal processes in virtual
> >machines you are just wrong.
[JJ]
> No, certainly I was not saying that specifications cannot mention
> internal processes. However, a specification of an internal process
> is not a requirement -- it's design.

Notice the shift here: I talked about requirements mentioning
internal processes. You talked about "a specification of an internal
process". A detailed specification could indeed be a design.

A *requirement* that the scheduler ensure that no runnable process
ever waits more than 0.2 secs before it gets a chance to run is not a
design but a requirement, as is the requirement that the system
support up to 256 concurrent processes, and the requirement that no
process owned by one user should be allowed to inspect a process
owned by another user, unless the second user has explicitly given
permission. Specifying HOW all that is achieved would require a
design, involving a mixture of specifications of hardware and
software features. (A sketch below makes the contrast concrete.)

I admit that what's a design feature from one point of view can be a
requirement from another. If that's all that's going on here then we
are talking at cross purposes.
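Here is that contrast in a minimal Python sketch (the names and trace
values are invented for illustration): the requirement mentions an
internal virtual-machine process, a runnable process waiting, yet
says nothing about how any scheduler is designed to achieve it.

    # Python sketch: a requirement over scheduler behaviour,
    # checkable against any design whatsoever.
    MAX_WAIT = 0.2   # seconds

    def meets_requirement(trace):
        """trace: (time_became_runnable, time_started_running) pairs."""
        return all(start - ready <= MAX_WAIT for ready, start in trace)

    print(meets_requirement([(0.0, 0.05), (0.1, 0.25), (0.3, 0.31)]))  # True
    print(meets_requirement([(0.0, 0.05), (0.1, 0.40)]))               # False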
[JJ]
> However, I am not a behaviorist, in part because I do not "restrict
> my ontology" to the merely physical -- the things you earlier referred
> to as abstractions are absolutely real. When I want to talk precisely
> about persons and what they do, I think in terms of abilities to recognize
> and act on a wide range of things, including patterns of various kinds.

So we are agreed.

[JJ]
> I find the information-processing model, and traditional cognitive
> process model, an awkward fit for the kinds and range of such facts
> that seems necessary for talking about people and what they do
> without doing violence to the subject matter.

Maybe that's because you are restricting your thinking to some narrow
and inadequate range of internal states and processes. A narrow type
of information-processing model?

I've often found that philosophers and psychologists who have read a
teeny bit of AI or attended a course on expert systems think that AI
(or GOFAI) is constrained to processes in which rules expressible in
English transform databases containing information expressed in
something like English (or some other similar language). They have
not talked to people in an AI robotics lab, for instance!

But you are presumably not falling into that trap. So perhaps we can
avoid all this meta-level discussion if you'll simply say precisely
what sorts of internal processes you are referring to and exactly why
they are inadequate, and for what purpose. Then I may agree with you
and talk about which sorts of processes are more suitable. These are
empirical issues. Let's not turn them into methodological ones.

Incidentally in one of your messages you talked about viewpoints, and
I agreed with what you said. What I've said about qualia has to do
with information that's available from a viewpoint that is unique to
the individual concerned: but only if that individual has an
appropriate architecture. (My previously announced paper "What is it
like to be a rock?" elaborates on this in whimsical fashion.)

Incidentally I don't believe that any of the existing information
processing models that I know of is capable of accounting for human
visual capabilities (nor probably the visual abilities of a squirrel
or magpie). [I spelled out some reasons in a paper in the Journal of
Experimental and Theoretical AI in 1989.] But that doesn't mean that
NO information processing system can do it. We may have to invent new
kinds of information structures and processes. But before embarking
on that design process we need to get clear about the requirements,
internal and external.

Cheers
Aaron
====
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sloman@cs.bham.ac.uk
Phone: +44-121-414-4775 (Sec 3711)      Fax: +44-121-414-4281