From Article: 14992 in comp.ai.philosophy
Newsgroups: comp.ai.philosophy
Message-ID: <3bttql$e0s@sun4.bham.ac.uk>
References: <3aj4a9$9ct@mp.cs.niu.edu> <3ak08o$mvv@mp.cs.niu.edu> <1994Nov22.121521.27633@oxvaxd> <3atltt$428@mp.cs.niu.edu> <3auja3$7hf@news1.shell> <3aukr2$t3h@mp.cs.niu.edu> <3bir5e$g11@vixen.cso.uiuc.edu>
Summary: could be implemented digitally
Keywords: continuous/analog consciousness
Date: 5 Dec 1994 02:27:33 GMT
Organization: The University of Birmingham, UK.
Subject: Re: Strong AI and (continuous) consciousness
From: axs@cs.bham.ac.uk (Aaron Sloman)

I hope my editing the subject line causes no confusion.

smithjj@cat.com (Jeff Smith) writes:
> Date: 30 Nov 1994 21:34:38 GMT
> Organization: University of Illinois at Urbana
> .......
> ....lots of stuff deleted...
> .......
> I have some problems with classifying computers as conscious because
> there have not been any good definitions of consciousness, so I don't
> know if they measure up. BTW, my definition of consciousness is:
> "conscious, as human beings are conscious, according to my own experience"

I'll let all that circularity pass, as I want to comment on the following:

> By this definition, I don't believe computers are conscious. As far as
> I know, human consciousness is continuous and analog. I can't imagine
> consciousness being discrete and digital.

People who work on computer vision usually have to start from a digitised image array. However, they often write code that interprets the array as if it were derived by *sampling* a continuous image, or optic array. By making certain plausible assumptions about bounds on the discontinuities in the original array one can, as necessary, interpolate between the pixels in the array and, for example, answer questions about the colour or intensity at a point two thirds of the way along a pixel. Similarly, intersection points between intensity edges can be located with sub-pixel accuracy.

It is possible for higher level processes to be given information from which it is impossible to tell that the visual input was discrete and quantized at a certain scale.

A robot with visual capabilities built on such mechanisms would "experience" the world visually as continuous. If it started studying philosophy and AI, it would probably soon jump to the conclusion that no robot based on digital computers could ever have visual experiences, because visual experiences are *obviously* continuous. It might buy Penrose's books and read them with relish, as supporting its intuitions, and perhaps even start publishing papers on how intelligent robots also have mechanisms that cannot be simulated on digital computers.

Maybe one of them is already posting articles to comp.ai.philosophy?

Hi Jeff. Cheers.
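A minimal sketch (in Python, with an invented toy image; nothing here is from the original post) of the kind of sub-pixel interpolation described above: callers ask for intensities at real-valued coordinates and cannot tell that the underlying data were sampled on an integer grid.

    # Sketch: querying a discrete image at non-integer coordinates via
    # bilinear interpolation from the four surrounding pixels.

    def bilinear(image, x, y):
        """Return an interpolated intensity at real-valued (x, y).

        `image` is a list of rows of numbers; x and y may fall between
        pixels, e.g. two thirds of the way along a pixel.
        """
        x0, y0 = int(x), int(y)                    # pixel to the lower-left
        x1 = min(x0 + 1, len(image[0]) - 1)        # clamp at the border
        y1 = min(y0 + 1, len(image) - 1)
        fx, fy = x - x0, y - y0                    # fractional offsets in [0, 1)
        top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
        bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
        return top * (1 - fy) + bottom * fy

    if __name__ == "__main__":
        img = [[0, 10, 20],
               [0, 10, 20],
               [0, 10, 20]]
        # Intensity two thirds of the way between the first two columns:
        print(bilinear(img, 2 / 3, 1.0))   # 6.66..., not any stored pixel value

The same idea, applied to intensity edges rather than raw pixels, is what allows edge intersections to be located with sub-pixel accuracy.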
Aaron
--
Aaron Sloman, (WWW page: http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sloman@cs.bham.ac.uk OR A.Sloman@bham.ac.uk
Phone: +44-(0)121-414-4775   Fax: +44-(0)121-414-4281


From Article: 15038 in comp.ai.philosophy
Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
Message-ID:
References: <1994Nov30.165636.20074@rosevax.rosemount.com>
Date: Tue, 6 Dec 1994 02:52:06 GMT
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
Subject: Re: Penrose and Searle (was Re: Roger Penrose's fixed ideas)
From: jqb@netcom.com (Jim Balter)

In article , Jeff Dalton wrote:
>In article jqb@netcom.com (Jim Balter) writes:
>>[...] but I am again not interested in explaining how it could
>>be that it isn't obvious to others
>>(and Aaron Sloman already posted a nice response to that question).
>
>I must have missed it. Or maybe I've forgotten it. Could someone
>please send me a copy?

I'll quote a bit from an article I saved; it was a response from Aaron to you.

From what follows I infer (but not with certainty) that you think the red herring is my claim that instead of there being one well defined notion of consciousness there are lots of different collections of capabilities referred to by the words "consciousness", "conscious", "aware", etc., and no one explanation can account for all of them.
[...]
I offer the observation not as an argument to show that various people are wrong, but as a (partial) diagnosis of how intelligent people fall into deep muddles: e.g. by assuming that there's a unified clearly understood concept associated with a word, when there isn't. (It's not my idea: you'll find similar claims about sources of philosophical muddle in the writings of Wittgenstein, among others, though that doesn't make them correct either.)
[...]
When someone comes up with a clearly understandable specification of what exactly is referred to then I shall be happy to discuss what sorts of mechanisms might or might not lie behind it, or how it might have evolved etc. But I have not met any such specification. Most of the definitions people offer (e.g. of "consciousness") use words that are as riddled with ambiguity or unclarity as the one they are trying to define. One problem is the sad tendency for people, even very intelligent people, to think they have given a definition when they haven't. At least Penrose (in TENM) knew that he wasn't giving a definition. But he claimed that he didn't need to because we all knew what he was referring to. Well I for one don't.
[...]

> Now, perhaps you can convince me that I'm wrong here, but I've
> never seen anything approaching convincing arguments on this point.

When people have a deep belief that they know what they are talking about when they don't, it is rarely possible to dislodge this by producing convincing arguments. It requires extended individual philosophical discussion, with a strong element of diagnosis. (I.e. it's a form of philosophical therapy.) And it does not always work.

It took a while before people realised they did not know what they meant by "the aether". It took an Einstein to show us that we did not know what we meant by two spatially distinct events occurring at the same time. (Probably there are still some people who think they do know.)
Showing people that they are actually muddled about consciousness, when in fact they think their introspective understanding of it is brilliantly clear, is a much harder job. And there's no guarantee of success (i.e. cure!).

>>What I find interesting is that folks like Dalton want to challenge the
>>consciousness of programs by examining their listings, looking for
>>"internal dialog" or scrounging around looking for signs of "consciousness",
>
>Actually, I don't want to do any such thing. I am merely suggesting
>possibilities for criteria other than the TT. If you present me with
>a TT-passing program, then you'll see how or if I want to challenge its
>consciousness.

It seems to me that you are quibbling, but I'll resist quibbling back. Ok, consider every "poster" to c.a.p. Any of these is conceivably driven by a program. What criteria do you use to judge their consciousness? If you say "I know they are really human.", how will you know when I *do* present you with a TT-passing program, considering that you say you aren't concerned with looks?

>>Unless someone can explain what consciousness is and how we can detect it
>>other than as a judgement about behavior,
>
>But what aspects of behavior should we consider? Those revealed
>by a teletype-based TT or what?

Linguistic behavior seems like a pretty good candidate. Of course, if the machine is mute, we might want to allow it other outlets. But listings and "internal dialog" aren't behavior. What other aspects of behavior would *you* consider?

>> then if they make any claim that one
>>entity is conscious but another is not based on something other than a
>>judgement about behavior, they are taking an essentialist position toward
>>"consciousness". Such essentialism is not testable, it is not refutable,
>>and the argument will never end.
>
>FWIW, realism about mental properties does not require _claiming_
>that an entity is conscious or not based on anything other than
>behavior. However, in this view something might _be_ conscious
>even if we couldn't tell.
>
>How much scope this leaves for us to agree, I don't know.

See Aaron Sloman's comments above.
--


From Sun Dec 11 14:37:05 GMT 1994
Newsgroups: comp.ai.philosophy
References: <3aj4a9$9ct@mp.cs.niu.edu> <3ak08o$mvv@mp.cs.niu.edu> <1994Nov22.121521.27633@oxvaxd> <3atltt$428@mp.cs.niu.edu> <3auja3$7hf@news1.shell> <3aukr2$t3h@mp.cs.niu.edu> <3bir5e$g11@vixen.cso.uiuc.edu> <3bttql$e0s@sun4.bham.ac.uk> <3bu2p7$t1i@agate.berkeley.edu>
Subject: Re: Strong AI and (continuous) consciousness

jerrybro@uclink2.berkeley.edu (Gerardo Browne) writes:
> Date: 5 Dec 1994 03:52:07 GMT
> Organization: University of California, Berkeley
>
> Aaron Sloman (axs@cs.bham.ac.uk) wrote:
>
> : > By this definition, I don't believe computers are conscious. As far as
> : > I know, human consciousness is continuous and analog. I can't imagine
> : > consciousness being discrete and digital.
>
> : People who work on computer vision usually have to start from a
> : digitised image array. However, they often write code that
> : interprets the array as if it were derived by *sampling* a
> : continuous image, or optic array. By making certain plausible
> : assumptions about bounds on the discontinuities in the original
> : array one can, as necessary, interpolate between the pixels in the
> : array, and, for example, answer questions about the colour or
> : intensity at a point two thirds of the way along a pixel. Similarly,
> : intersection points between intensity edges can be located with
> : sub-pixel accuracy.
>
> : It is possible for higher level processes to be given information
> : from which it is impossible to tell that the visual input was
> : discrete and quantized at a certain scale.
>
> I fail even to see why one would have the impression that one's
> visual field was continuous. We are not even able to "interpolate
> between" the "pixels" in our retina.

Are you saying you are aware of your visual field as being pixel-based, with limited resolution? That's most unusual. Presumably you also don't interpolate across the blind spot, the way most people do?

> We don't even have a *mistaken
> opinion* about what is occurring at, for example, the position
> pi/4 x pi/4, derived from some weighing of inputs from pixels at
> nearby locations. There is no such interpolation going on, not
> for a question of *that* level of precision.

Maybe you took me to be saying that human vision is based on interpolating rectangular arrays of pixel measures. I did not intend to say that. The human retina is not rectangular, nor does it have uniform resolution.

My point was that most people are just not aware of the structure of the visual processing going on below the level of consciousness, and neither need a robot be. Both can be misled into thinking there's something continuous. There's the impression of continuity, but nothing actually continuous (as far as I know).

Cheers.
Aaron


From Sun Dec 11 16:41:16 GMT 1994
Newsgroups: sci.skeptic,sci.psychology,sci.physics,sci.philosophy.meta,sci.bio,rec.arts.books,comp.ai.philosophy,alt.consciousness
Summary: science is mostly not about what we sense
References: <3bd8s0$1q2@pobox.csc.fi>
Subject: Re: Why scientists popularize premature speculations?

pindor@gpu.utcc.utoronto.ca (Andrzej Pindor) writes:
> Date: Mon, 5 Dec 1994 18:30:51 GMT
>
> In article ,
> McCarthy John wrote:
> >I would have regarded Crick's hypothesis - that consciousness is to be
> >investigated by the same scientific methods as are applied to every
> >other question - as not astonishing.
> .....
>
> The problem here is that consciousness is unlike other problems to which we
> apply scientific methods.

If you look back far enough you will find parallels. E.g. once upon a time people tried to investigate the aether. Then gradually it dawned on them (thanks to the philosophical analysis done by Einstein of concepts like "simultaneity") that they did not know what they were talking about, and interest switched to more precisely defined topics.

There are probably lots of examples that a historian of science could produce, e.g. "vital force" in biology, and maybe even the concept of "life", which, as far as I can tell, is no longer of any scientific interest: it has been replaced by a collection of different, more precise, concepts which identify matters worthy of investigation, and which overlap with the older concepts in subtle ways.

> ...Scientific methods are applied to the world
> reaching us through our senses

Scientific methods are applied mainly to a world full of things that do NOT reach us through our senses, though some of their indirect effects do. If science were only the investigation of what we can sense it would be far more boring than it is: most of what's interesting concerns the hidden mechanisms and processes that we cannot sense.

> ...whereas consciousnes is a phenomenon about
             ^^^^^^^^^^^^
> which we have knowledge without senses - we _know_ that we are conscious,
> without involving sight, hearing, etc.
There is no one phenomenon that the ordinary word "consciousness" refers to, any more than there's any one thing that the word "life" refers to. The word is riddled with ambiguity, vagueness, and hard-to-shake-off theoretical baggage of diverse metaphysical and religious theories on which users of the word do not agree.

Of course, if you are talking about "self-consciousness", as you appear to be in that last sentence, then that is itself only one among many cases (a fly, or a very young child, can probably be conscious of something moving towards it without being self-conscious.)

> ...Hence I doubt if scientific method is
> suitable to studying consciousness understood this way.

Scientific method is not suitable to studying anything referred to by ill-defined, vague, highly ambiguous words and phrases, except as part of a process of replacing those concepts with a family of far more precise and general theory-based concepts.

A philosopher called Otto Neurath (I think) once likened science to a ship whose occupants are continually redesigning and re-building it while they sail in it. That's what happened to many of our concepts of kinds of stuff, when we learnt about the architecture of physical matter. E.g. we now think about water, air, fire, salt, sand, etc. very differently (though there's some overlap with older ways). Learning about the architecture of biological matter is leading to revisions of earlier concepts relating to biological systems (complex systems that evolved).

Similarly, when we learn more about the architecture of intelligent agents we'll develop a family of much improved concepts for talking about their states and their capabilities, and then we'll discover that the kind of non-sensory self-knowledge that you are talking about is just one of many kinds of phenomena supported by SOME of the architectures ... except that we'll then be able to say more precisely and clearly what it is that we are talking about.

Meanwhile people who think they know precisely what they are talking about and then either say that it is, or that it isn't, amenable to scientific study or explanation are just fooling themselves (on both sides of the debate). Often we learn that we haven't even understood ourselves. That's deep progress.

> Andrzej Pindor                        The foolish reject what they see and
> University of Toronto                 not what they think; the wise reject
> Instructional and Research Computing  what they think and not what they see.
> pindor@gpu.utcc.utoronto.ca                                        Huang Po

Aaron


From Article: 15067 in comp.ai.philosophy
Newsgroups: comp.ai.philosophy
Message-ID:
Sender: news@spss.com
References: <1994Nov24.135351.25743@unix.brighton.ac.uk> <3bu0gs$fff@sun4.bham.ac.uk>
Date: Mon, 5 Dec 1994 19:39:39 GMT
Organization: SPSS Inc
Subject: Re: Bag the Turing test (was: Penrose and Searle)
From: markrose@spss.com (Mark Rosenfelder)

In article <3bu0gs$fff@sun4.bham.ac.uk>, Aaron Sloman wrote:
>markrose@spss.com (Mark Rosenfelder) writes:
>> -- It's easy to fool. Turing seemed to think that people will not on
>> the whole accept "intelligence" in machines. On the contrary, many
>> people accept it all too readily, or even figure it's already been done.
>
>Bob French wrote an article arguing the opposite: he claimed that
>the Turing test is unreasonably difficult, as one can ask questions
>that only someone with a similar lifestyle and physiology could be
>expected to answer.
>He concludes that the full TT is really only a
>test of the ability to "think exactly like human beings", and
>therefore of no interest as a general test.

Turing himself expressed similar qualms. The intuition here, I think, is that details of human biology aren't relevant to intelligence. That's reasonable as far as it goes, but the implications are not IMHO fully grasped.

These intuitions show that we do have some specific notions about what is or is not part of intelligence. Why not make these notions explicit, instead of maintaining that the question of what intelligence is cannot be answered?

Making these intuitions explicit would also allow them to be analyzed and criticized. For instance, it evidently seemed obvious to Turing that the ability to enjoy strawberries and cream was irrelevant to the problem of intelligence. But this is open to question; Lakoff for instance maintains that meaning is based on direct sensory experience, which raises questions about the intelligence of a system that doesn't have any.

>> -- Focussing on external behavior as it does, the TT encourages the notion
>> that only algorithmic structure, rather than any physical fact about
>> human brains, produces intelligence. That may be, but it should be a
>> matter for investigation, not an initial assumption.
>
>I don't see how it even focuses on algorithmic structure. For any
>collection of external behaviour there will generally be infinitely
>many different algorithms capable of producing the same behaviour.
>Behaviour is just behaviour. You have to make very strong
>assumptions to infer anything from it.

I didn't say the TT *focusses* on algorithmic structure, only that it does not encourage attention to any physical fact about human brains. This is not to say that an AI needs to be built like a brain is, any more than an airplane needs to flap its wings. On the other hand one can learn a lot about how to build a flying machine from closely investigating birds.

Thanks for the Shannon/McCarthy quote, whose point I appreciate. Now how would you (or they) respond to Daryl McCullough's contentions about a scale of optimized AIs ending in the HLT?


From Article: 15096 in comp.ai.philosophy
Newsgroups: comp.ai.philosophy
Message-ID: <3c2on7$geg@cantaloupe.srv.cs.cmu.edu>
References: <3c0vo1$si4@news1.shell>
Reply-To: hpm@cs.cmu.edu
Date: 6 Dec 1994 22:31:03 GMT
Organization: Field Robotics Center, CMU
Subject: Re: Bag the Turing test (was: Penrose and
From: hpm@cs.cmu.edu (Hans Moravec)

>Hal Finney:
>I still see a difference between the HLT and the wall. Nobody ran an AI
>simulation to create the wall; its existence does not constrain the
>universe to contain a mind which has had certain thoughts. All these
>examples except the Turing test passers are like the wall. A video game
>character is not conscious. Nobody feels pain when a Mortal Kombat
>character gets his head ripped off. Only the TT cases show us a mind.

I really don't believe that. I can sympathize with the pain of a character in a novel. If there isn't something feeling pain, what am I sympathizing with? Am I not sympathizing with a platonic entity feeling real pain? The novel's characterization fuzzily defines this entity, and additional details about the character's life would sharpen the definition by chopping away at the fuzz of alternatives.
The real difference between the Turing test passer and the rock is that the TT passer defines a platonic mind very precisely to us, in our language, while the rock provides no help: to us it's all an undistinguished fuzz of possible alternative platonic entities, including a vast majority that are mindless.

In between, the characterization in the novel provides enough definition to eliminate the mindless alternatives. Maybe enough, even, to compile the novel's character into an AI program that could pass a Turing test. We do something very close to the latter, anyway, when we imagine or dream about interacting with a character we've read about.
--
Hans Moravec
CMU Robotics


From Article: 15456 in comp.ai.philosophy
Newsgroups: comp.ai.philosophy
Message-ID: <3clela$7fa@percy.cs.bham.ac.uk>
References: <3aj4a9$9ct@mp.cs.niu.edu> <3ak08o$mvv@mp.cs.niu.edu> <1994Nov22.121521.27633@oxvaxd> <3atltt$428@mp.cs.niu.edu> <3auja3$7hf@news1.shell> <3aukr2$t3h@mp.cs.niu.edu> <3bir5e$g11@vixen.cso.uiuc.edu> <3bttql$e0s@sun4.bham.ac.uk> <3c4skn$e1s@tadpole.fc.hp.com>
Date: 14 Dec 1994 00:35:54 GMT
Organization: School of Computer Science, University of Birmingham, UK
Subject: Re: Strong AI and (continuous) consciousness
From: A.Sloman@cs.bham.ac.uk (Aaron Sloman)

allsop@fc.hp.com (Brent Allsop) writes:
> Date: 7 Dec 1994 17:50:15 GMT
> Organization: Hewlett-Packard Fort Collins Site

(I (AS) wrote)
> > A robot with visual capabilities built on such mechanisms would
> > "experience" the world visually as continuous.

(Brent commented)
> I fail to see this in your (and other's) arguments. In fact I
> think we should draw the opposite conclusion from these arguments.
>
> When I look at a piece of paper it "seems" very continuously
> flat and smooth. By my definition "seems" means some kind of
> representation that may or may not be veridical. But it is a
> representation none the less and if the representation "seems" that
> way it must BE that way even if what it represents or is caused by (at
> any of the many translation levels of perception) isn't that way as
> proved when we look at paper with a microscope or examine the
> discreteness of the rods and cones in our retina.
> [copy of email response follows]

I don't have time for a full response just now, but I think the fundamental disagreement that we have is on this point:

> If a computer is representing an image with a discrete
> mechanism what it represents will "seem" discrete regardless of how
> much it is able to scale that discreteness.

Why do you say that? I think human vision is probably a superb counter-example: the nervous system is full of discrete elements (neurones), and yet we get the illusion of continuity in our experience.

> ...In order for it to "seem"
> continuous the representation must be continuous.

You need to produce an argument for such a statement. I see no reason at all to believe it.

Essentially I regard actual continuity in information structures in the brain as impossible. A truly continuous system (in the mathematical sense) would have to have infinite complexity (e.g. no matter how much you zoom in there's always more structure there, even if in practice it is often simple structure, like a straight edge.) I do not believe a brain or any other implementing engine can process systems with that kind of infinite complexity.
However you can give the illusion of continuity if the system is represented as having a certain structure with the *potential* for zooming in further. In practice, that potential will never be realized beyond a certain limited precision. (I think all of this was, in effect, pointed out by Immanuel Kant in his Critique of Pure Reason in the eighteenth century, though not in these words.)

The blind spot is a good example: there's definitely a gap in the experienced visual field, but we only become aware of that gap in highly contrived experimental situations.

> The representations are what we are directly aware of ....

I don't think we are DIRECTLY aware of anything at all. It looks as if you accept the philosophical position that introspection can give us some infallible information about our minds, whereas I don't! What you are currently aware of at any time is the product of interaction between current representations and access methods, which may make you tell yourself things about yourself that are false!

Cheers
Aaron
--
Aaron Sloman, (WWW page: http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sloman@cs.bham.ac.uk OR A.Sloman@bham.ac.uk
Phone: +44-(0)121-414-4775   Fax: +44-(0)121-414-4281


From Wed Dec 14 16:08:22 GMT 1994
Newsgroups: comp.ai.philosophy
References: <3bir5e$g11@vixen.cso.uiuc.edu> <3bttql$e0s@sun4.bham.ac.uk> <3bu2p7$t1i@agate.berkeley.edu> <1994Dec6.174947.27872@egreen.wednet.edu>
Subject: Re: Strong AI and (continuous) consciousness

ascott@egreen.iclnet.org (Alan Scott - CIR) writes:
> Date: Tue, 6 Dec 1994 17:49:47 GMT
> Organization: Evergreen School District, Vancouver Washington USA.
>
> In article <3bu2p7$t1i@agate.berkeley.edu>,
> Gerardo Browne first quoted:
> >Aaron Sloman (axs@cs.bham.ac.uk), who wrote:
>
> >: People who work on computer vision usually have to start from a
> >: digitised image array. ....
...... stuff deleted ....
> >: It is possible for higher level processes to be given information
> >: from which it is impossible to tell that the visual input was
> >: discrete and quantized at a certain scale.
>
> Then Mr. Browne wrote:
>
> >I fail even to see why one would have the impression that one's
> >visual field was continuous. We are not even able to "interpolate
> >between" the "pixels" in our retina. We don't even have a *mistaken
> >opinion* about what is occurring at, for example, the position
> >pi/4 x pi/4, derived from some weighing of inputs from pixels at
> >nearby locations. There is no such interpolation going on, not
> >for a question of *that* level of precision.
>
> Then Alan Scott wrote:
> This sounds to me like support for Sloman's original point (quoted above
> for clarity). His higher-level digital robot wouldn't have any
> *knowledge* of such interpolation, either! The "pixels" of our visual
> perception (rods and cones) are not individually perceptible to us, as
> Browne says, and the robot's "rods and cones" would not be individually
> perceptible to it, at least not ordinarily. Presumably, an intelligent
> robot could "look at" its internal functioning in such reductionist detail
> (just as we can "look at" our retinas using special equipment) but under
> 'normal' circumstances it wouldn't bother. [Nor would it be able to.]

This is just to confirm that Alan's interpretation of what I wrote accords with mine. I was equally puzzled as to how Gerardo thought he was contradicting me.
The only hypothesis I can come up with is that he thought I was saying we could see things with sub-pixel accuracy. As it happens I suspect we can in some parts of the visual field and not in others: ask psychologists who do research on visual acuity. But that's not the main point.

My main point, in responding to Brent Allsop (I think), was that the higher level systems are presented only with information structures that depict or describe what is seen as continuous, even though they are derived from discrete lower level interpretations. The lower level origins are not accessible to, for example, high level decision making processes. I claimed that similar representations of continuity could occur in a robot that was actually dealing with digitised input.

It's perhaps also worth remarking that the same applies to output. A robot may plan a continuous trajectory for the motion of its hand, and then via processes of translation to low level control signals send digital information patterns to motor control systems, and these in turn can produce the desired motion -- which could be continuous if physics allows continuity.

I don't see any reason why digital mechanisms should not provide a basis for both perceptions of continuity and continuous actions, if physics allows continuity in the world. If physics does not allow real continuity, then perception might still produce illusions of continuity, e.g. table tops. I.e. neither the discontinuity of what's out there nor the discreteness of the internal representations would be detectable.

(As someone pointed out to me in a private posting, Dan Dennett's book, Consciousness Explained, has quite a bit to say about all this. People may find it useful to look at the book. I don't agree with his attack on qualia though: a machine with a well designed visual system and human-like architecture for control of attention could have visual qualia - even continuous ones!)

Cheers
Aaron
---


From Aaron Thu Dec 15 00:25:03 GMT 1994
Newsgroups: comp.ai.philosophy
References: <3aj4a9$9ct@mp.cs.niu.edu> <3ak08o$mvv@mp.cs.niu.edu> <1994Nov22.121521.27633@oxvaxd> <3atltt$428@mp.cs.niu.edu> <3auja3$7hf@news1.shell> <3aukr2$t3h@mp.cs.niu.edu> <3bir5e$g11@vixen.cso.uiuc.edu> <3bttql$e0s@sun4.bham.ac.uk> <3bu2p7$t1i@agate.berkeley.edu> <3cf2pv$4b2@percy.cs.bham.ac.uk> <3cftlo$63l@agate.berkeley.edu>
Subject: Re: Strong AI and (continuous) consciousness

jerrybro@uclink2.berkeley.edu (Gerardo Browne) writes:
> Date: 11 Dec 1994 22:15:19 GMT
> Organization: University of California, Berkeley

(AS)
> : Presumably you also don't interpolate across the blind spot, the way
> : most people do?

(JB)
> I'm not sure that I do. Why bother filling it in when I can just
> turn my head to see what's *actually* there? Of course I don't
> *see* it, but that doesn't mean that I *do* see something else in
> its place, such as a black circle.

All I meant was that people do not normally have the experience of there being a blind spot. I.e. they have an experience of a visual field that is full, with no gaps, and, in a sense, continuous. But that doesn't mean there is anything that is actually full or continuous.

... stuff deleted

(GB)
> I'm not challenging the phenomenon, I'm challenging how we talk
> about the phenomenon.

(AS)
> : My point was that most people are just not aware of the structure
> : of the visual processing going on below the level of consciousness
> : and neither need a robot be. Both can be misled into thinking
> : there's something continuous.
> : There's the impression of continuity,
> : but nothing actually continuous (as far as I know).

(GB)
> My point is that the source of the impression of continuity is not
> in our experiences themselves,

This puzzles me. If there is an impression of X then the impression is in the experience: that's how I understand having an impression. But of course, it does not follow that there is any X. If that's what you are saying I agree.

> ...but in the way we talk about them.

I don't think the impression of continuity comes from language (though the ability to talk about it obviously does, and maybe also the ability to think about it). I would expect that evolution has designed many animals to have visual systems that give the impression of continuity, e.g. the impression of the possibility of seeing more detail by coming closer. There is no impression of any limit to this process. (Which does not imply that there is no limit: the internal state may not actually be continuous.)

[For mathematicians: I am not here concerned with the difference between mathematically continuous and merely dense sets, like the set of rationals.]

> In the case of the blind spot, instead of saying, as we might, that
> an actual surface seems continuous and uninterrupted, we have the
> habit of talking about a "visual field" and then saying that *this*
> seems continuous and uninterrupted. The words we choose change
> significantly the way we understand our experiences.

In this case, the words may actually reflect the structure of the system we are attending to. The notion of space as continuous, which mathematicians struggled for centuries to understand before a precise definition finally emerged, is not something that comes from language, but, I think, from the way our sensory systems work.

However, this is all pure conjecture. I don't really know: I don't think anyone has a working model of a visual system that has the kind of richness I am referring to (but have not spelled out very precisely). Maybe further research will show that no such model is possible.

Aaron


From Thu Dec 15 01:18:19 GMT 1994
Newsgroups: comp.ai.philosophy
References: <3bir5e$g11@vixen.cso.uiuc.edu> <3bttql$e0s@sun4.bham.ac.uk> <3bu2p7$t1i@agate.berkeley.edu> <1994Dec6.174947.27872@egreen.wednet.edu> <3cn590$pu1@percy.cs.bham.ac.uk> <3cnpll$nkf@agate.berkeley.edu>
Subject: Re: Strong AI and (continuous) consciousness

> Date: 14 Dec 1994 21:56:05 GMT
> Organization: University of California, Berkeley
>
> Aaron Sloman (A.Sloman@cs.bham.ac.uk) wrote:
> ...snip....
>
> I was not contradicting this either. I picked pi/4xpi/4 to suggest the
> plane RxR, or actually the unit square. Now *that* is an example of
> a continuous thing. Between any two points there's a third point,
> and the limit of a sequence of points is (at least) a point
> (it's compact). Now that's one very common example of the idea of
> continuity. And the visual field is nothing like that.

OK. I wasn't claiming that it was like that. However, people have the impression that it's like that: i.e. that you can go on zooming in indefinitely. My reference to Kant was based on his notion that we have many concepts that involve something potentially infinite. His examples were space and time in the large, and processes like counting, which have a structure that permits going on indefinitely even if nobody ever can go on indefinitely.
My example (and I don't recall whether Kant ever discussed this) was space in the small: we have the impression that what we experience is dense because of the potential for zooming in indefinitely. But the potential cannot be realised. There need not be anything that is actually dense (or continuous).

I suspect we are in complete agreement, though I did not express myself clearly enough to rule out misinterpretation.

> ....I
> would say that "the visual field" conceived as a plane (and thus
> a candidate for continuity) is a bad idea all around. In particular
> it gives in to the "Cartesian Theatre" outlook.

Here we may disagree. I think one of the very interesting things about human visual mechanisms and the way they are integrated into the complete cognitive system (which may not be true of other animals) is that we can sometimes use them *up to a point* to attend not to what is out there (the table) but to some aspect of the structure of internal information stores. It is not something we normally do and it is not always easy: e.g. learning to paint or draw involves learning to do it a lot better than average, and young children find it particularly hard -- e.g. seeing that the rectangular table top out there is seen via something in here that involves two obtuse and two acute angles. (I say "involves" not "has" because I doubt that it's actually a geometrical shape.)

Learning to sight a gun by superimposing two (or three) portions of the visual field is much easier.

In other words, I think that Gilbert Ryle, in The Concept of Mind (1949), overdid his attack on the Cartesian theatre. But that's because up till that time the theatre was associated with notions of infallible forms of introspection, souls, irreducible mental processes, and the like. Now we can begin to reconstruct the theatre in the framework of a working visual architecture.

>.......
> What you had argued for before was that, given imprecise
> viewing equipment, we could interpolate from that and arrive at
> precise judgements, even if they're false. Fine, as I said, I
> don't disagree with that. But do we really do that enough to
> warrant even the *false* belief that we can make infinitely
                                       ^^^^^^^^^^^^^^^^^^^^^^
> precise judgements (which is what would be required of true
  ^^^^^^^^^^^^^^^^^^
> continuity of the RxR variety, i.e. the most common variety)?

I did not explain clearly enough that I was not making that sort of claim. Rather what we have is the belief that there is no limit to the precision of the judgements. That's not the same as saying we can make infinitely precise judgements. It's Kant's distinction again between potential infinities and actual, or completed, infinities.

>....
> : If physics does not allow real continuity, then perception might still
> : produce illusions of continuity, e.g. table tops. I.e. neither the
> : discontinuity of what's out there nor the discreteness of the internal
> : representations would be detectable.
>
> I for one can't say either way, whether my internal representations
> are discrete or continuous. Why should I have an opinion one way?

Fine. Brent Allsop, to whom I originally replied, was sure his internal representations were continuous, and used that as an argument that they could not be implemented on a digital computer. I was trying to make a distinction between having the impression of continuity and having something actually continuous.
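For readers who like code, one way to picture the distinction between potential and actual (completed) infinity in a discrete system is the following sketch (the numbers and names are invented, not anything from the thread): a finite store that answers queries at any requested precision, giving the impression of indefinite zoomability while holding nothing continuous underneath.

    # Sketch: a finite, discrete store that nevertheless answers queries at
    # any requested precision, so the *potential* for further "zooming" is
    # unlimited even though the actual stored detail is not.

    FINITE_SAMPLES = [0.0, 1.0, 4.0, 9.0, 16.0]   # all the system actually holds

    def value_at(x):
        """Return a value for any real x in [0, 4], however finely specified.

        The caller can keep asking at ever smaller scales (x, x + 1e-9, ...)
        and always gets an answer, but every answer is manufactured by linear
        interpolation from the same five stored samples.
        """
        i = min(int(x), len(FINITE_SAMPLES) - 2)   # sample index at or below x
        frac = x - i
        return FINITE_SAMPLES[i] * (1 - frac) + FINITE_SAMPLES[i + 1] * frac

    if __name__ == "__main__":
        for x in (2.0, 2.5, 2.50000001):           # ever more "precise" queries
            print(x, value_at(x))                  # the answers never run out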
It's not uncommon on usenet for B's response to A to be interpreted by C in a way that B did not intend, as a result of which C's response to B is misinterpreted by B. We've just had another example, I think.

Cheers
Aaron


From Article: 15495 in comp.ai.philosophy
Date: 15 Dec 1994 08:09:22 GMT
Organization: University of California, Berkeley
Subject: Re: Strong AI and (continuous) consciousness
From: jerrybro@uclink2.berkeley.edu (Gerardo Browne)

Aaron Sloman (A.Sloman@cs.bham.ac.uk) wrote:
: OK. I wasn't claiming that it was like that. However, people have
: the impression that it's like that: i.e. that you can go on zooming
: in indefinitely. My reference to Kant was based on his notion that
: we have many concepts that involve something potentially infinite.
: His examples were space and time in the large, and processes like
: counting, which have a structure that permits going on indefinitely
: even if nobody ever can go on indefinitely.

I think we're in agreement on almost everything of substance, so don't take my remarks as implying a blanket rejection. In any case, first remark: I think that what Kant did in part was to take the unshakable assumptions of his age, and project them onto the human animal as such. One glaring example is Kant's idea that we all share a universal moral conscience, an idea which has been widely and effectively criticized, IMO.

: My example (and I don't recall whether Kant ever discussed this) was
: space in the small: we have the impression that what we experience
: is dense because of the potential for zooming in indefinitely.
: But the potential cannot be realised. There need not be anything
: that is actually dense (or continuous).

I would be more inclined to trace this to technology, e.g., a microscope with several objective lenses of different strengths, which could give rise to the idea of indefinitely increasing the strength. This in turn depends on counting, which is another technological innovation developed in response, probably, to practical needs and not directly in response to the structure of the mind. I'm sure there is a connection between our native structure and our present ideas, but I suspect it's distant.

: Here we may disagree. I think one of the very interesting things
: about human visual mechanisms and the way they are integrated into
: the complete cognitive system (which may not be true of other
: animals) is that we can sometimes use them *up to a point* to attend
: not to what is out there (the table) but to some aspect of the
: structure of internal information stores. It is not something we
: normally do and it is not always easy: e.g. learning to paint or
: draw involves learning to do it a lot better than average, and young
: children find it particularly hard -- e.g. seeing that the
: rectangular table top out there is seen via something in here that
: involves two obtuse and two acute angles. (I say "involves" not
: "has" because I doubt that it's actually a geometrical shape).

Okay, this is an interesting area and I think I have something to add. I think the elements involved in perspective are not (to quote your wording) the "table top out there" and the "something in here", but rather, the table top out there, the picture plane, and the point of view. The obtuse and acute angles occur on the "picture plane", which is by the way not the surface painted on but an abstract plane imagined between the observer and the scene.
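A small sketch of the picture-plane construction just described (the coordinates are invented for illustration): projecting the corners of a rectangular table top through the eye, taken as a point, onto a plane one unit in front of it yields an outline with two acute and two obtuse corners, even though every corner of the table itself is a right angle.

    import math

    # Sketch: pinhole projection of a rectangular table top onto a picture
    # plane one unit in front of the eye (the eye is at the origin).

    def project(p):
        """Project a 3-D point through the eye onto the plane z = 1."""
        x, y, z = p
        return (x / z, y / z)

    def interior_angle(prev_pt, corner, next_pt):
        """Angle in degrees at `corner` between its two polygon edges."""
        ax, ay = prev_pt[0] - corner[0], prev_pt[1] - corner[1]
        bx, by = next_pt[0] - corner[0], next_pt[1] - corner[1]
        cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        return math.degrees(math.acos(cos_t))

    # Corners of a rectangular table top one metre below eye level,
    # stretching from two to four metres away:
    # near-left, near-right, far-right, far-left.
    table = [(-1, -1, 2), (1, -1, 2), (1, -1, 4), (-1, -1, 4)]
    outline = [project(p) for p in table]

    for i, corner in enumerate(outline):
        angle = interior_angle(outline[i - 1], corner, outline[(i + 1) % 4])
        print(f"corner {i}: {angle:.1f} degrees")   # 45, 45, 135, 135

Straight edges of the table also remain straight in the projected outline, which is the rule about straight lines mapping to straight lines (or points) mentioned below.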
The technology of perspective (which I think is what you're talking about), as I learned it in art class, is the geometric study of straight rays of light passing through a plane considered as a window and hitting the eye considered as a point. At no time is it necessary to consider what happens once the rays hit the eye. Now, once the idea for this approach is understood, it is mainly a question of deriving certain rules geometrically, such as that straight lines in space always map to straight lines or points, and then using these rules to generate pictures on the basis of specifications (such as is done in architectural rendering). In particular, the revelation about acute and obtuse angles is a product of this approach.

Now, kids can learn this point without learning perspective, but I'd say they do it usually by imitating other drawings and (nowadays) photographs, and not by introspecting. Nowadays I can automatically estimate quite accurately the angle I would need to represent something I see, but I don't think this is an example of superior introspective powers. It is only mastery of a certain technique of drawing.

More generally, I think the process initiated by the Renaissance artists which resulted in gradually more realistic representation was a matter of stumbling upon ways to amaze the viewer with the realism of the rendering. This would involve drawing something a certain way, and then asking, "does it persuade?" Novelists have developed literary techniques in much the same way, I'm sure. Now asking ourselves "does it persuade?" is introspective, but I'm not sure we can easily use the resulting novelistic techniques to illuminate the structure of the mind.

: Learning to sight a gun by superimposing two (or three) portions of
: the visual field is much easier.

I'm not sure that the instructions "center the target in the crosshairs" refer to the visual field, though it does involve "seeing" the target in a "false" way we're not used to seeing it, as if it were flat against the crosshairs.

: I did not explain clearly enough that I was not making that sort
: of claim. Rather what we have is the belief that there is no
: limit to the precision of the judgements. That's not the same as
: saying we can make infinitely precise judgements. It's Kant's
: distinction again between potential infinities and actual, or
: completed, infinities.

Allow me to repeat my suspicion that this notion of the potential infinity is a cultural artifact, and that Kant was timely, not timeless.

: Fine. Brent Allsop, to whom I originally replied, was sure his
: internal representations were continuous, and used that as an
: argument that they could not be implemented on a digital computer.
: I was trying to make a distinction between having the impression of
: continuity and having something actually continuous.

And I think it's a valid point. I did understand this point, but I thought it was too forgiving, because it excused the impression of continuity as a natural illusion which might result when there was nothing actually continuous. And I thought, well, this could happen, but is our own illusion a natural one? I thought that our illusion probably has cultural origins, and part of my reason was that I myself did not feel it (but that just might be a sign of approaching insanity :-).

: It's not uncommon on usenet for B's response to A to be interpreted
: by C in a way that B did not intend, as a result of which C's
: response to B is misinterpreted by B. We've just had another
: example, I think.
Well, I'm not convinced I misunderstood you -- maybe I did. I think we really do have a minor disagreement, minor because I think neither of us is sure he's right. It seems to me that the impressions in question are more a product of our technologies than it seems to you, but that's a historical and biological question, and the best I can do is to offer arguments for the plausibility of my position.


From Aaron Thu Dec 15 11:37:55 GMT 1994
Newsgroups: comp.ai.philosophy,sci.cognitive
References: <3bir5e$g11@vixen.cso.uiuc.edu> <3bttql$e0s@sun4.bham.ac.uk> <3bu2p7$t1i@agate.berkeley.edu> <1994Dec6.174947.27872@egreen.wednet.edu> <3cn590$pu1@percy.cs.bham.ac.uk> <3cnpll$nkf@agate.berkeley.edu> <3co5g9$6n2@percy.cs.bham.ac.uk> <3cotji$bdl@agate.berkeley.edu>
Subject: Re: Strong AI and (continuous) consciousness

[I have added sci.cognitive to the set of newsgroups. sci.cognitive readers who wish to look at earlier articles can look in comp.ai.philosophy for items with the same subject line]

jerrybro@uclink2.berkeley.edu (Gerardo Browne) writes:
> Date: 15 Dec 1994 08:09:22 GMT
> Organization: University of California, Berkeley
>
> Aaron Sloman (A.Sloman@cs.bham.ac.uk) wrote:
> ........
> : My example (and I don't recall whether Kant ever discussed this) was
> : space in the small: we have the impression that what we experience
> : is dense because of the potential for zooming in indefinitely.
> : But the potential cannot be realised. There need not be anything
> : that is actually dense (or continuous).
>
> I would be more inclined to trace this to technology, e.g., a
> microscope with several objective lenses of different strengths,
> which could give rise to the idea of indefinitely increasing the
> strength.

I don't think that can be right. The intuitions about continuity of space and matter go back at least to the ancient Greeks (and probably before that to the even more ancient Chinese!) and led to debates about whether matter was infinitely divisible or based on indivisible "atoms", long before there were microscopes. Zeno's paradoxes also arise out of the intuitions of experienced space as *potentially* infinitely divisible, and were precursors to the development of mathematically precise concepts of continuity hundreds of years later.

My guess (and it is only a guess) is that the invention of the possibility of a microscope depended on the intuition of our experience as being of an indefinitely "zoomable" space, rather than the intuition coming from the technology. (Similarly the development of telescopes for astronomy was driven by a prior concept of there being more and more stuff out there that might be made visible. No doubt before that they were also used for more mundane purposes, on ships, in battles, etc.)

> I think we're in agreement on almost everything of substance,..

Yes.

(I wrote)
> : Here we may disagree. I think one of the very interesting things
> : about human visual mechanisms and the way they are integrated into
> : the complete cognitive system (which may not be true of other
> : animals) is that we can sometimes use them *up to a point* to attend
> : not to what is out there (the table) but to some aspect of the
> : structure of internal information stores. It is not something we
> : normally do and it is not always easy: e.g. learning to paint or
> : draw involves learning to do it a lot better than average, and young
> : children find it particularly hard -- e.g.
> : seeing that the rectangular table top out there is seen via something
> : in here that involves two obtuse and two acute angles. (I say "involves"
> : not "has" because I doubt that it's actually a geometrical shape).
>
> Okay, this is an interesting area and I think I have something to
> add. I think the elements involved in perspective are not
> (to quote your wording) the "table top out there" and the "something
> in here", but rather, the table top out there, the picture plane,
> and the point of view.

I appreciate what you are saying, and it is very close to J.J. Gibson's concept of the "optic array" as something "out there" which the visual system samples (which I think is better than the idea of a picture plane).

However, as someone who has worked (some time ago) on implementing (primitive) visual systems, and talks to people who still do so, I think the visual engine needs a whole variety of data-structures representing all sorts of things, including intermediate information such as some of the properties of the (essentially 2-D) optic array, as well as data-structures representing the observed 3-D surfaces (and also unobserved things, like the far sides of objects).

In some visual systems it may be possible for higher level mechanisms to access information in the intermediate data structures, in others not. I think it is possible in human brains. I don't know about other animals. Chimps enjoy painting, but I don't know if they ever produce representational pictures as even some of the very ancient humans did, which, I claim, depended on their being able to attend to part of the structure of their experience -- e.g. being aware of contours which are not actually parts of the surfaces depicted, though they are related to discontinuities in the optic array.

In fact, as far as the original argument is concerned, namely whether there is continuity in experience, I don't think it matters much whether we say that there is an impression of continuity in something inside ourselves, or that there is an impression of continuity in the optic array out there. My main point (with which you seem to agree) is that the impression of continuity does not imply that there is anything that's actually continuous. I may have confused things by saying that the fact that we have such impressions depends on our having internal data-structures that we can access, which are interpreted as representing something continuous even though they are not themselves continuous structures. Thus experiences of continuity could occur in totally digital systems.

You may think, like Gibson, that talk about data-structures, representations, information processing, etc. is totally irrelevant to how perception works (he believes in some kind of "resonance to invariants", which I find totally incomprehensible, and I suspect he does too, but doesn't realise it!). If you share Gibson's view, then we need a very long discussion before there's any hope of agreement.

I've expounded on some of this at some length in a paper:

    A. Sloman, `On designing a visual system: Towards a Gibsonian
    computational model of vision', Journal of Experimental and
    Theoretical AI, 1(4), 289-337, 1989.

Also available in compressed postscript by ftp in this directory:
    ftp://ftp.cs.bham.ac.uk/pub/dist/papers/cog_affect
The paper, lacking the few (inessential because familiar) diagrams, is in the file Aaron.Sloman_vision.design.ps.Z
Alternatively I can send troff source for online reading.

The paper is partly an attack on Marr's "modular" theory of vision.
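A toy sketch of the layered data-structures described above (all names invented here, not taken from the JETAI paper): a pipeline holding a discrete image array, an intermediate 2-D description, and a 3-D surface description, where the architecture determines which layers higher-level processes may consult.

    # Sketch: a layered visual pipeline with several distinct data-structures.
    # What the rest of the system can "see" depends on which layers the
    # architecture chooses to expose; the raw pixel array never is.

    def sense(scene):
        """Lowest level: a discrete 'image array' sampled from the scene."""
        return {"pixels": [[scene(x, y) for x in range(4)] for y in range(4)]}

    def intermediate(image):
        """2-D optic-array-style description: contours, still viewer-centred."""
        return {"contours": ["left edge", "right edge", "top edge", "bottom edge"]}

    def surfaces(features):
        """High-level description of 3-D surfaces, including unseen far sides."""
        return {"objects": [{"kind": "table top", "shape": "rectangular",
                             "far_side_visible": False}]}

    def run_pipeline(scene, expose_intermediate=False):
        image = sense(scene)
        features = intermediate(image)
        scene_model = surfaces(features)
        accessible = {"surfaces": scene_model}
        if expose_intermediate:       # e.g. an artist attending to contours
            accessible["intermediate"] = features
        return accessible             # the pixel array stays inaccessible

    if __name__ == "__main__":
        flat_grey = lambda x, y: 128
        print(run_pipeline(flat_grey))                            # surfaces only
        print(run_pipeline(flat_grey, expose_intermediate=True))  # plus contours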
I have to say that I think there are still many deep unsolved problems about vision. I suspect we understand only very little about what it does, never mind how it works. On many theories of vision, seeing happiness in a face would not be a visual process. I think it is. The arguments are in the paper.

Cheers
Aaron


From Aaron Wed Dec 14 23:41:56 GMT 1994
Newsgroups: sci.skeptic,alt.consciousness,comp.ai.philosophy,sci.philosophy.meta,rec.arts.books
References:
Subject: "Consciousness" 0, 1 or many concepts? (was Penrose's fixed ideas)

It seems that intelligent and thoughtful people trying to understand hard problems sometimes get very exasperated with one another. I have not followed everything in this thread, but Jeff and Jim, two people I respect, really seem to wind each other up unnecessarily. Anyhow, here's a point at which I've ended up in the middle, so I'll try to clarify what I meant by the bits they've quoted. I've changed the Subject line to make it easier to identify this thread.

jqb@netcom.com (Jim Balter) writes:
Date: Fri, 9 Dec 1994
Commenting on article , by Jeff Dalton wrote:

... stuff by both and by me deleted ....

(jim)
> Aaron also spoke of "philosophical therapy" being needed in some cases. There
> is a whole range of reasons for such irrational acts as putting forth and
> accepting obviously bad arguments. I don't see any need to explain it in this
> forum.

(aaron)
> >> When someone comes up with a clearly understandable specification of
> >> what exactly is referred to then I shall be happy to discuss what
> >> sorts of mechanisms might or might not lie behind it, or how it
> >> might have evolved etc. But I have not met any such specification.
> >> Most of the definitions people offer (e.g. of "consciousness") use
> >> words that are as riddled with ambiguity or unclarity as the one
> >> they are trying to define.

(jeff)
> >It's a difficult word to define, if you demand complete clarity
> >and lack of ambiguity. But why should that be required?

(jim)
> What is required is sufficient clarity to support whatever claims are made.

I agree. See below.

> ... more stuff deleted.... e.g. about attacking straw men.

(jeff)
> >Now, it seems that Aaron Sloman has decided to wait until someone
> >comes up with a "clearly understandable specification of what exactly
> >is referred to" rather than, for instance, helping them to produce
> >a clearly understandable specification. It's up to him how he spends
> >his time, but that's not the only approach one can take.

(jim)
> What do you know, Jeff, of how Dr. Sloman spends his time?

Well, as an authority on the matter I can say that he does not organise his time very well. But I can also respond to Jeff's interpretation of my position.

It may look as if I have decided to *wait* for some specification to be produced, but actually I haven't. I think there are at least two things to do instead of waiting, which involve making real progress (perhaps more progress than most of the debates on comp.ai.philosophy).
They are:

(a) trying to understand different sorts of architectures that may underlie behaving systems, and trying to see precisely which sorts of concepts those architectures can and cannot support (in the way in which the currently accepted architecture of physical matter cannot support the concept of phlogiston, but does support concepts like chemical compound, sodium bicarbonate, chemical element, covalence, combustion, and much more, unlike previous architectures that assumed everything was composed of air, earth, fire and water, or whatever). Much work in AI is implicitly doing this, though often with rather simple architectures designed to support rather simple concepts, e.g. planning, and restricted varieties of learning or perception.

(b) unpicking the various muddles behind the belief that there is ONE well understood notion of consciousness, by trying to analyse the DIVERSE phenomena that people mix up when they talk about consciousness. E.g. we can talk about a fly being conscious of a hand moving rapidly towards it, a squirrel being aware of where the branch is, a chimp being aware that the face it sees in the mirror is its own, a person being aware that he is not expressing himself clearly, another being aware that he is lonely, another being unconscious of his own seething anger, another regaining consciousness after a deep sleep, another sleep-walking yet seeing the door and opening it, another gradually becoming conscious of his own unpopularity, another appreciating the beauty in a poem, sonata or sunset, .... and plenty more.

Where people think there's only ONE thing that consciousness is, and that every object in the universe either has it or does not have it, they don't appreciate that it's not one thing but a large collection of different capabilities which can exist in different combinations in different agents, depending on their architectures.

If someone says "That misses the point: there is ONE thing that I am talking about" then I want to know what it is, and usually what comes out is an answer that's incredibly vague, or ambiguous, or circular, or dependent on the assumption (e.g. made by Penrose) that I really do know what's referred to but pretend not to. Then I make the statement quoted above, i.e.

    When someone comes up with a clearly understandable specification of
    what exactly is referred to then I shall be happy to discuss ...

But I don't just sit and wait in the meanwhile, as Jeff suggests. There's important work to be done. But it's hard, and very complicated, and there are many different ideas to be evaluated and explored (e.g. Minsky's society of mind, SOAR, various neural architectures, hybrid architectures, the kind of architecture my group is trying to develop, and lots more).

It's not easy to come up with a convincing architecture (especially when the architectures that begin to look rich enough are very difficult to implement and test, especially in impoverished academic laboratories!). Furthermore it's not easy, given a complex architecture, to work out what sorts of states it can support at various levels of abstraction, and how, and which it cannot. (How many people who knew about Von Neumann machine architectures 30 years ago could have anticipated some of the kinds of systems that are currently implemented on such architectures; or will be in 30 years time?)

But people are too impatient. They want a three-line definition of consciousness and a five-line proof that computational systems can or cannot have consciousness. And they want it today.
They don't want to do the hard work of unravelling complex and muddled concepts that we already have, and exploring new variants that could emerge from precisely specified architectures for behaving systems.

... more stuff deleted ...

(jim)
> ...I suspect that, at some time
> in the future, there will be refined models of human consciousness with
> testable components, and there will be people, probably including yourself,
> that hold that *real* consciousness must pass some of those tests. There will
> be other people, perhaps including myself, who will hold that those particular
> tests are for certain artifacts of human consciousness that are not essential to
> a broader concept of consciousness, and that those tests are over-specified,
> and that the TT is still the best test extant. Even further in the future,
> it may come to be that there are tests above and beyond the TT that virtually
> everyone will hold are necessary to be passed in order to qualify as conscious.

Whereas I suspect that there will never be a useful well-defined unitary concept of consciousness that we can all agree is what we all meant all along by "consciousness". Rather we may end up with an interesting collection of different concepts, maybe 7 of them, or perhaps 23 or 51, all loosely related to some of our uses of "consciousness" and related words and phrases, but all defined as precisely as we can now define concepts like sodium chloride, H2O, isotope, solution, alloy, etc. These concepts grew out of old ones, e.g. salt, water, but went far beyond them in richness, precision and theoretical underpinning.

..... I expect a similar evolution of mental concepts, only more so, for there is but one physical world, with a fixed architecture, whereas behaving systems can have a huge (infinite?) variety of architectures: many of which already exist, and some of which are waiting to be invented.

(jim)
> If, on the other hand, you are talking about intelligence, I think my
> understanding of the concept of intelligence is fundamentally, inherently,
> *results*-based, and that no future development, other than perhaps senility,
> will change that.

Some of the new concepts I am talking about will be defined in terms of different internal states and processes that produce indistinguishable external behaviour, and therefore CANNOT have behavioural tests. Others will have behavioural tests.

I can define two algorithms that are behaviourally indistinguishable, i.e. produce exactly the same input-output behaviour, but which involve different sequences of internal machine states -- such differences may be irrelevant to users of a system though they are very relevant to designers and maintainers. -- And there lies the heart of my response not only to Jim, but also to what Daryl and Hans have to say about huge lookup tables and the Turing test. We are not merely users of one another: we are also designers and maintainers!

So, now I have disagreed with both Jeff and Jim. Or, from another viewpoint, I've agreed with both.

>... more stuff deleted ....

(jim)
> But, after all, we're only human.

Me too, I think.

Aaron
----
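A minimal sketch of the kind of pair of algorithms mentioned above (invented for illustration, not taken from the thread): two procedures with identical input-output behaviour whose internal state sequences differ, so no behavioural test distinguishes them, while a designer or maintainer reading the code can.

    # Sketch: behaviourally indistinguishable procedures with different
    # internal state histories. The optional `trace` list is an inspection
    # hook for the designer, not part of the ordinary external behaviour.

    def sum_by_counting(n, trace=None):
        """Sum 1..n by visiting every partial sum."""
        total = 0
        for i in range(1, n + 1):
            total += i
            if trace is not None:
                trace.append(total)        # internal states: 1, 3, 6, 10, ...
        return total

    def sum_by_formula(n, trace=None):
        """Sum 1..n in one arithmetic step."""
        total = n * (n + 1) // 2
        if trace is not None:
            trace.append(total)            # a single internal state
        return total

    if __name__ == "__main__":
        # Behaviourally indistinguishable: identical outputs for every input.
        assert all(sum_by_counting(n) == sum_by_formula(n) for n in range(100))
        # Yet their internal histories differ, visible only to a designer.
        t1, t2 = [], []
        sum_by_counting(5, t1)
        sum_by_formula(5, t2)
        print(t1)   # [1, 3, 6, 10, 15]
        print(t2)   # [15]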