Newsgroups: sci.psychology.consciousness
References:
Subject: Re: Tucson II [Function and Experience]

Jeff Dalton writes:
> Date: Thu, 8 Feb 1996 04:31:55 GMT
> Organization: Centre for Cognitive Science, Edinburgh, UK
> ....
> Aaron Sloman suggested that we "unblinker" ourselves and pay more
> attention to
>
>> [...] the possibility that some of the processes in ourselves of
>> which we are not aware are exactly like the processes of which we
>> are aware, except that WE cannot access them directly. They may be
>> accessed and monitored by other processes within us that we don't
>> know about, just as states in your mind are accessible to mechanisms
>> that I don't know about but not to me.
>
> But that still leaves the question of why accessing or monitoring
> (by any process) ever results in, or involves, or _is_ conscious
> experience.

I think there's a communication problem here. I did not intend to
suggest that there's some sort of causal connection between being
accessed or monitored and being an experience, or that being monitored
necessarily results in something becoming conscious experience. (Do TV
cameras monitoring intruders to a bank have experiences?)

I suspect that when the monitoring is part of a rich enough functional
architecture then it's an instance of what we mean by something being
an experience. But what counts as rich enough is not yet clear: there
is still work to be done.

I also don't think there's a necessary connection between being
experienced and being monitored. For example I have every reason to
think that a fly experiences something moving rapidly towards it
(which makes it move sideways quickly) but no reason to believe that
it monitors that state.

My point was only to offer the conjecture that when we have a good
grasp of the specifications for certain kinds of (deep) functional
states we'll not be able to think of any difference between that and
our conception of experience. I.e.
we'll come to see that what appear to be two different concepts are
not two different concepts.

(An analogy: someone who has the intuitive concept of a continuous set
might, when first confronted with a mathematical definition of
continuity, think that his intuitive concept is different from the
mathematical concept. However, deeper thought, and the failed attempts
to identify any significant difference, might convince him that the
two concepts were identical after all. Correct analyses of our own
concepts can surprise us.)

I then offer the thought that whereas most people seem naturally to
think we immediately know about (have access to, can monitor) ALL our
own experiences and none of those of other people, a slight conceptual
relaxation will allow us to conceive of experiences in *ourselves*
that we cannot monitor. Apart from our inability to monitor them, they
could be just like the ones we do monitor.

I suggest that the concept of experiences in you of which you are
unaware is no more contradictory than the concept of experiences in
another person of which you are not aware.

Whether these experiences of which you are unconscious MUST be
monitored by something else (within you) is a separate question. I
suspect that some of them MAY be, but don't claim they all must be.
The kind of functional role that *constitutes* something being an
experience in our intuitive sense could turn out to involve some
monitoring or accessing of internal states: I don't know. But I have
no reason to believe there are any necessary and sufficient
conditions: experiences are too diverse.

[ I suspect I still have not said clearly enough what I am getting at. ]

[JD]
> .....Let us suppose that qualia are identical with
> certain QM processes. Then qualia occur whenever those processes
> take place (and vice versa), because those processes _are_ qualia.
> That's what qualia turned out to be.

I think that Henry Stapp may be saying something like that, at least
as regards some experiences.
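(For concreteness, and as an aside not in the original exchange: the
classic instance of the continuity analogy is the Weierstrass
epsilon-delta definition, the standard precise replacement for the
intuitive notion of a function f being continuous at a point a:

```latex
f \text{ is continuous at } a
\;\iff\;
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x:\;
|x - a| < \delta \;\Rightarrow\; |f(x) - f(a)| < \varepsilon
```

Someone meeting this for the first time often feels it must differ
from the intuitive concept, yet repeated attempts to exhibit a
difference fail. That is just the kind of convergence meant above.)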
I wasn't.

My conjecture is that if you have a machine in which certain sorts of
functional roles exist then we may not be able to conceive of any
difference between things filling those roles and experiences. (We may
think there's a difference, at first, but fail to find any, after much
struggling. That's a form of learning.)

I don't see how QM processes could be like that. Machines with the
required functional roles might or might not be implementable using QM
processes: I don't know. I think some QM processes may be necessary,
for reasons to do with the inadequacy of storage systems based on
classical mechanics, but I don't have a strong argument.

[JD]
> But in such a case there can still be an explanatory gap.
> The process/quale has various properties, and we may still
> want them explained.

If you are talking about a gap between something described as an
experience and something described as a QM process or a chemical
process or an electronic process, then I agree with you. But I don't
see any explanatory gap between being an experience and having a
functional role.

For example, consider two descriptions of my state S:

(a) being in state S is unpleasant.
(b) being in state S involves wanting to change state S.

I am not saying that (b) amounts to a complete and correct analysis
(we would need something a lot more complex for that). But a concept
of type (b) might, when suitably filled out, be seen as just what we
had always understood by the notion of an unpleasant experience. It
would not be an empirical correlation, or a case of one thing being
implemented in another. It would be a case of conceptual identity.
That's my conjecture.

[JD]
> ...If we think of the process and the quale
> as separate, it may seem that identity provides the explanation:
> the process has the properties of a quale because it _is_
> a quale (and vice versa).

This looks like circular reasoning to me.
[JD]
> ...But of a single thing that has one
> property, P, we might still need an explanation of why it
> has some other property, Q.

Yes. I can't speak for Clark, but what I was suggesting did not
involve discovering a relation between two properties. Rather it was
discovering that two apparently distinct concepts were not distinct.

[JD]
> A different example may make it easier to see this. The Morning
> Star turned out to be identical to the Evening Star (they're
> both the planet Venus).

That's an empirical discovery. Not what I was discussing.

> ..But someone who is told the two are
> identical may still not understand why the object appears
> where it does in the evening, morning, and so on.

Yes. If the same thing is visible from two viewpoints, or at two
different times, or with two different appearances, then there's
something to explain. But there's no need to explain why squares have
four sides.

[JD]
>....
> Perhaps it's still felt that no explanation can be needed in the case
> of qualia and whatever (if anything) they turn out to be identical to.
> (Perhaps my examples are thought to have the wrong kind of "gap".)

YES

> But it's hard to see how anyone can know that, or even that there's
> an identity, _now_.

You discover the conceptual identity when the differences you
previously thought you understood no longer make any sense. I would
not claim to know _now_ that there's an identity. It's still a
conjecture.

[JD quoting AS]
>> [...] when people claim to be able to see a conceptual gap between
>> consciousness and explanatory mechanisms, all that's happening is
>> that they are unable to grasp conceptual relationships between
>> something ill-defined, and something they barely understand because
>> it has not been specified yet.

[JD]
> So no wonder they think more explanation (or whatever) is needed!

Yes - when you have not seen an identity you will think there's a
conceptual gap.
[JD]
> The "conceptual gap" is not the "explanatory gap" I was addressing
> above. Instead, it refers to the idea that qualia cannot be
> identical to, say, functional states, because it's conceivable
> (hence, if there's no contradiction not yet discovered in the
> details, logically possible) that the functional state could
> occur without the qualia.

I was saying that I thought you could have experiences you were
unaware of. Thus this argument against functionalism falls flat.

Whether qualia can exist without your being aware of them is a bit
tricky. I think all the actual phenomena that lead philosophers to
think about and talk about qualia are cases where there is not only
something happening (e.g. perception, thinking, desiring, etc.) but
also that internal state is being monitored, or attended to.

Thus I can see two distinct conceptions of qualia:

1. Qualia necessarily involve self monitoring.

2. Qualia can exist without self monitoring, as long as they have all
   the other functional roles.

Qualia of type 2 could exist without your being aware of them and
without anything else being aware of them, though their existence
might involve awareness of something else, e.g. perceptual qualia
produced by sensory processes.

Qualia of type 1 would be possible only where the functional roles
permit these things to be monitored. But they need not be accessible
to the person - i.e. they would still be qualia of whose existence
the person was unaware.

(I don't know whether blind-sight includes qualia of type 1 or type 2.
I suspect only type 2, based on older parts of the brain that do not
include self-monitoring.)

Either way, the functional states, i.e. the qualia, could occur in
some subsystem within you of which you are unaware, and which you
cannot report.
[JD]
> But in any case, if Sloman's right about how ill-defined and barely
> understood the two sides of the suggested identity are, it's hard to
> see how anyone can know _now_ that any gap is "illusory" or that,
> indeed, an identity obtains.

Yes. If I suggested anyone knew NOW, I was expressing things too
strongly. It's all conjecture. By taking the conjecture seriously, I
suspect (another conjecture) that we'll have a conceptual revolution
as important as the conceptual change that allowed the possibility of
non-euclidean space (even though it was previously "obvious" that
Euclid's axiom of parallels was true).

[JD]
>....
> I agree with him when he writes:
>
>> When we have a good understanding of what can and what cannot be
>> done by various sorts of architectures we shall have a much clearer
>> understanding of what functionalism is actually claiming than any
>> functionalist can possibly have now.
>
> and
>
>> Until we have our theory of deep functionality we may not have a
>> good basis for saying why some of the things it refers to are
>> identical with what we previously called experiences, or
>> consciousness.
>
> (And I agree with much else besides, BTW.)
>
> But later he takes a harder line.

Yes. I may have expressed several points as confident assertions which
are still only conjectures.

[JD]
> For instance:
> [AS]
>> [...] the kind of identity in question is not a simple "obvious"
>> identity but a deep and complex one: there's one thing that can be
>> looked at in two very different ways. (Like the geometrical and the
>> set-theoretic views of the real number continuum: not everyone who
>> understands some geometry and some arithmetic can grasp the
>> underlying identity.)
>
>> It seems to me, alas, that our evolutionary and educational
>> processes seem to produce a subset of people who lack the capability
>> to grasp some of the complex relationships that other people can
>> grasp.
>> But it doesn't stop them asking the questions: they just
>> can't understand the answers.
> ...
>> Maybe when we have a good theory of the mechanisms underlying deep
>> mathematical and scientific understanding we shall be in a position
>> to help the people for whom that is currently impossible, for
>> reasons that we now don't understand. I.e. maybe we'll find good
>> ways to extend people's mental capabilities? Maybe not.
>
> And, in his concluding paragraphs:
>
>> Perhaps the combination of fully worked out theory and working
>> demonstrations (robots built on the basis of the theory and
>> intricate non-invasive ways of observing and manipulating their
>> and our mental states) will help to convince the remaining
>> doubters, in a way that "in principle" arguments cannot.
>
>> I doubt it will convince everyone.
>
>> There will always be doubters. Even some of the robots will be
>> doubters, because of the way their ability to have experiences has
>> been implemented, giving them only a very shallow view of what's
>> going on inside them, just like most contemporary contributors to
>> discussions of consciousness.

A lot of that is couched in the future tense: there's an element of
prediction, but the form of expression is over-confident. That's
partly because I was trying to be provocative.

[JD]
> Earlier, the view was "Until we have our theory of deep functionality
> we may not have a good basis [for saying why there's identity]". But
> now the idea seems to be that we should already be convinced, perhaps
> by "in principle" arguments, or maybe by a deeper view of what's going
> on inside ourselves -- unless perhaps we "lack the capability".

That's not what the quoted comments say, and not what I intended. I
was talking about what would (might) happen in the future if the
functionalist programme had been successfully carried out. I don't
think anyone should be convinced yet. There's too much work still to
be done.
On the other hand, I think that (a) some people will be too
conceptually blinkered to consider the possibility that the detailed
design work could teach us something about our concepts, and (b) when
the work has been done it may prove too complex for some people to
understand -- just as certain mathematical proofs are too complex for
some people to understand, and some designs are too complex for some
people to understand.

(I am reminded of a comment by Dijkstra (I think) that whereas normal
people have short term memories capable only of holding 7 plus or
minus 2 items at once, a really good programmer working with complex
recursive functions has to be able to keep far more things
simultaneously in short term memory. I suspect the same is true of
great composers, great mathematicians, etc.)

[JD]
> I find this shift difficult to understand.

This shift isn't there. I guess I did not express things well. In
particular, I did not repeatedly include the qualification that what I
was talking about was a conjecture, and I did not say clearly that I
was trying to anticipate problems in having the conjecture accepted
even after all the work to establish it had been done.

[JD]
> ..And when I consider the
> example of the geometrical and set-theoretic views of the real number
> continuum, it seems to me that in that case I, at least, received an
> explanation, or something rather like one. I wasn't asked to see an
> identity between something "ill-defined" and something that "has not
> been specified yet".

I think that in that case you started with two ill-defined concepts
(arithmetical and geometric continuity) and gradually had them
replaced with one concept. On the way you may have been introduced to
the notion of a dense set (arithmetical or geometric) and then been
shown why that is not sufficient to capture your intuitions about
continuity (e.g. because the dense set of rationals does not include
something where the square root of 2 ought to be.)
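(A computational aside, not part of the original exchange: the point
about density can be made vivid with exact rational arithmetic. The
sketch below -- my own illustration, with an invented function name --
bisects an interval of rationals ever more tightly around the place
where the square root of 2 ought to be, yet no rational endpoint ever
lands on it.)

```python
from fractions import Fraction

def squeeze_sqrt2(steps=50):
    """Bisect towards sqrt(2) using exact rationals only.

    The rationals are dense, so the bracketing interval can be made
    as small as we like; but since sqrt(2) is irrational, no rational
    midpoint ever satisfies mid * mid == 2.
    """
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(steps):
        mid = (lo + hi) / 2
        assert mid * mid != 2  # never exact: the point is missing
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = squeeze_sqrt2()
print(hi - lo == Fraction(1, 2**50))  # interval shrunk to 2**-50: True
print(lo * lo < 2 < hi * hi)          # sqrt(2) still strictly between: True
```

(The interval halves at every step, so its width is exactly 2**-50
after 50 steps, while the invariant lo^2 < 2 < hi^2 never breaks.)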
I don't see how there could be a demonstration or explanation of the
*correctness* of a mathematical definition of continuity as a
replacement for an intuitive one. At best some persuasive examples can
be given. (You can be given a mathematical proof of equivalence of two
distinct mathematical concepts, but that's a different matter.)

[JD]
> Right now, there is a genuine problem of consciousness, so far as I
> can tell. But the difference between saying that "gaps" are illusory
> and saying they need to be filled, and the difference between looking
> for the answers in identity and looking for them in bridge laws -- in
> short, the difference between, say, Sloman and Hayes on the one hand
> and Chalmers on the other -- does not seem to be all that great,
> provided we accept that in both cases some hard work still must be
> done.

Bridge laws are totally different from what I was referring to. You
can find bridge laws when you already have two (or more) well defined
concepts. E.g. the concept of temperature can be associated with a
range of measuring devices. The concept of kinetic energy of a
molecule is also related to something that can be measured (with
difficulty). Thus discovering the link between temperature and average
kinetic energy was the discovery of a bridge law, linking two distinct
things.

The kind of discovery I was talking about does not build a bridge over
the gap between two things: it eliminates the gap, so that talk about
a bridge no longer makes any sense.

Chalmers wants to talk about a "hard problem" remaining after a
complete functional analysis has been given. I am totally unconvinced
by everything I have read on that. I tried to indicate why in other
parts of my original message (e.g. in my comments on flipping qualia
and my comments on the differences between zombie robots and
non-zombie robots).
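(For concreteness -- a standard kinetic-theory result, not part of the
original exchange -- the bridge law in question, for a monatomic ideal
gas, is

```latex
\left\langle \tfrac{1}{2} m v^{2} \right\rangle
\;=\;
\tfrac{3}{2}\, k_{B}\, T
```

where the left side is defined via molecular mechanics and the right
via thermometry. The law is informative precisely because the two
concepts stay distinct and independently measurable -- unlike a
conceptual identity, which leaves nothing for a bridge to span.)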
In a later message Pat Hayes writes:
Date: Thu, 8 Feb 1996 13:47:23 -0600

> Science cannot assert logical necessities, since they are
> untestable. If we are groping towards a *science* of
> consciousness, we will have to be content with simply
> discovering the facts.

Discovering that two apparently distinct concepts are not really
distinct is not a scientific discovery, and it is not a discovery of
some new facts.

Aaron
--