Sat Jun 7 17:54:25 BST 1997
Newsgroups: comp.ai,comp.ai.philosophy
References: <5l792s$6to@news.ox.ac.uk> <5md82c$al12@dionysus.netmatters.co.uk>
From: AaronSloman@cs.bham.ac.nospam (Aaron Sloman See text for reply address)
Subject: Re: AI and Deep Blue (Historical correction)

[Correct email address is at the end]

a.croxton@netmatters.co.uk (al c) makes a historical mistake, which I
guess I should correct in case others believe it:

> Date: 26 May 1997 23:58:04 GMT
> Organization: ABCDevelopment
>
> Aaron Sloman (founding father of British AI) wrote:
                ^^^^^^^^^^^^^^^

Correction: by the time I started learning about AI, it was already
well established in Britain.

I started learning about AI in 1969, when I met Max Clowes at Sussex
University. He was the person who persuaded me that the best way to
address most philosophical questions was to explore issues concerned
with designing working minds, human-like and others. Max was one of
the leading UK AI vision researchers before I had even heard about
AI. (He died around 1980, unfortunately.)

I also learnt a huge amount when I spent a year (1972-3) at Edinburgh
University, where there were already a lot of well established AI
researchers, including:

    Donald Michie (probably the person with the best claim to be the
        UK's founding father of AI)
    Rod Burstall
    Christopher Longuet-Higgins (led the epistemics group)
    Steve Isard
    Julian Davies
    Bernard Meltzer (led the computational logic group)
    Pat Hayes (moved to a lectureship at Essex just as I arrived)
    Steve Salter (designer of Freddy the robot's mechanics)
    Robin Popplestone
    Pat Ambler
    Harry Barrow
    Bob Kowalski

PhD students included Geoff Hinton, Alan Bundy, David Warren, and
several others, including several people from the USA who thought the
Edinburgh AI group was well worth visiting: e.g. Americans there
included Danny Bobrow, J Moore, Bob Boyer, Chris Brown, Frank Brown,
and others.

All of those listed above have a better claim than I have to be
called founders of AI in the UK.

A lot of very good work had been done by then (1973) in Edinburgh,
including some interesting robotics work which is now totally ignored
by some roboticists of the 1990s, who tend to think they invented it
all, and who have no idea how difficult it was to do AI with the
computers available then, which took several minutes to find the
outline of a teacup in a digitised image, ruling out any possibility
of "online" control of action. (The idea that there can be important
trade-offs between software complexity and physical design was well
understood, at least in some contexts: of course the label "situated"
had not become fashionable yet.)

[AS]
> > 1. most people working on natural language know that the vast
> > majority of our linguistic knowledge and processing is inaccessible
> > to introspection.

[AC]
> At any given time, yes.

At all times. That's one reason why linguistics is such a hard
subject and so full of controversy.

[AS]
> > 2. people working on vision know that the vast majority of our
> > visual processing is inaccessible to introspection. (I have worked
> > closely with several AI vision researchers.)

[AC]
> not if you're caused to look closely, for instance by hallucinations.

Hallucinations tell you VERY little about how your visual system gets
from photons hitting the retina to seeing that your friend is happy,
or that the poplars are waving in the breeze.

[AS]
> > 3. people working on robotics know that the vast majority of our
> > motor control processes are inaccessible to introspection.

[AC]
> what about during sex?

What about it? Another hallucination??

NB: even if you do find some cases where motor control processes can
be introspected, that says nothing to refute what I've said about
"the vast majority". That includes, for instance, not only manual
manipulation of blocks on a table and writing your name, but also
chewing your food, speech production, tying shoelaces, putting on a
floppy sweater, playing a flute or violin, etc. etc.

[AS]
> > 4. ... the vast majority of our learning processes are not
> > accessible to introspection

[AC]
> Are the details really that important? Surely there is only one basic
> form of learning and the rest is variations on a theme.

Yet another wonderful new idea which, if only it were followed up,
would solve the major problems of AI??? (I've seen many come and go.)

A human mind has a very complex self-modifying architecture, with
many interacting components of different sorts. The more complex and
functionally differentiated the architecture, the more varied the
forms of self-modification can be, including, for instance: fine
tuning in feedback control loops, discovery of new clusters in neural
nets, development of new forms of integration of information in long
term knowledge stores, acquisition of new systems of concepts,
learning new notations, or new forms of representation based on old
notations, discovering new consequences of old assumptions,
discovering powerful new assumptions, development of new thinking
skills, acquisition of new aesthetic preferences, absorption of new
social values, development of new motivators, developing new ways of
keeping track of your mental processes, and many more.
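To make the point concrete, here is a deliberately crude sketch (in
Python; every name and number in it is invented for illustration, and
it models nothing in particular) of just two of those forms side by
side: fine tuning in a feedback loop, which only ever adjusts a
parameter inside a fixed structure, and cluster discovery, which
changes the structure itself.

    import random

    class GainTuner:
        # Fine tuning in a feedback control loop:
        # all it can ever do is adjust one gain parameter.
        def __init__(self, gain=1.0, rate=0.1):
            self.gain = gain
            self.rate = rate

        def learn(self, error):
            # Nudge the gain in the direction that reduces the error.
            self.gain -= self.rate * error

    class ClusterFinder:
        # Discovery of new clusters: grows its own set of prototypes,
        # i.e. it modifies its structure, not just a number.
        def __init__(self, radius=1.0):
            self.prototypes = []
            self.radius = radius

        def learn(self, x):
            # Pull the nearest prototype towards x if one is close
            # enough; otherwise record a genuinely new cluster.
            for i, p in enumerate(self.prototypes):
                if abs(x - p) < self.radius:
                    self.prototypes[i] = p + 0.5 * (x - p)
                    return
            self.prototypes.append(x)

    tuner = GainTuner()
    finder = ClusterFinder()
    for _ in range(100):
        tuner.learn(error=random.gauss(0.0, 0.1))
        finder.learn(x=random.choice([0.0, 5.0]) + random.gauss(0.0, 0.2))

    print(tuner.gain)        # a slightly different number
    print(finder.prototypes) # roughly [0.0, 5.0]: two discovered clusters

If even these two toy cases need different machinery, different
representations and different notions of "improvement", the claim
that there is "only one basic form of learning" has some explaining
to do.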
One of the recurring temptations that mislead AI researchers
(especially young ones) is the belief that there's some UNIQUE and
SIMPLE new idea that will make everything easy. There are more likely
to be thousands of important new ideas to be discovered before we
have good models of human mental functioning, or, what comes to the
same thing, good designs for robots with a wide range of human-like
abilities.

[AS]
> > 5. ... the vast majority of skilled problem solving is inaccessible
> > to introspection.

[AC]
> Sorry that only the simple points seemed worthy of processing, let the
> computers do the rest!

I don't know what point is being made here.

Cheers.
Aaron
==