Date: 16 Feb 1999 22:10:52 GMT
Newsgroups: comp.ai.philosophy,comp.ai
Message-ID: <7acqdc$cn6$1@soapbox.cs.bham.ac.uk>
References: <7a30ev$stc@ux.cs.niu.edu> <36C5198D.42220848@sandpiper.net> <7a4enb$o3@ux.cs.niu.edu> <36C62879.11434042@jhuapl.edu> <7a5j2p$39$3@its.hooked.net> <36C8B120.1F552624@sandpiper.net>
From: Aaron.nospam.Sloman@cs.bham.ac.uk (Aaron Sloman See text for reply address)
Subject: Rickert, Balter, lookup tables and intelligence

In the comp.ai.philosophy newsgroup it seems that Jim Balter and Neil
Rickert are arguing over whether a machine which had a huge lookup
table (HLUT) determining every facet of its behaviour could be
distinguished from one which had human-like internal information
processing capabilities, and could learn and have thoughts, decisions,
preferences, etc.

Jim Balter wrote:

> Just what is it that we would never observe in behavior produced
> from a HLUT, that would assure us that it wasn't human?

Neil Rickert responded:

> The HLUT would be incapable of learning, in the ways that humans
> learn.

[Balter]
> Capacities and ways are not behavior, so that doesn't answer the
> question.
>
> ....
> ....A HLUT maps states to states. At one point
> a HLUT is in a state such that it responds to questions about algebra
> as if it knew nothing about algebra. It then receives inputs,
> side by side with an algebra student. It transitions into a state such
> that it acts like it knows a little about algebra. That it doesn't
> learn in the ways that humans learn is irrelevant to your original
> claim about behavior; it *behaves* in the same way.

I suspect that both Balter and Rickert are right (at least partly) but
are talking past each other. (Not uncommon in newsgroups about AI.)

Jim is correct that IF a machine has an appropriate HLUT built in
advance by someone who has worked out what sorts of behaviours are to
be produced in all possible sequences of external environments, then no
amount of *external* observing and testing can distinguish the machine
from one which learns, thinks, takes decisions, etc. for itself. If
they can, then that just proves the lookup table was not designed
correctly.

But Neil is right in saying that there is a difference between a
machine which works out what to do or say and one which merely produces
responses that someone or something else had worked out for it.

However, the difference would not be visible "from outside". It's a
different kind of difference! It's a matter of what's going on in the
virtual machine that produces the behaviour. You cannot infer what's
going on from the behaviour. (Encrypted output for which you don't have
the key is one special case where this is obviously true. But the
problem is far more general.)

If you are trying to understand a complex information processing
machine which you have not designed yourself, the task of finding out
how it works may be wholly impractical. Even if you could examine the
internals of such a machine, checking and measuring the physical
mechanisms in great detail and precision, "decompiling" its circuits or
brain processes may be possible in principle but extremely difficult in
practice (as brain scientists and software engineers dealing with old
compiled code for which program sources are lost know).

This version of the "other minds" problem is solved for us by
evolution: we are simply programmed to treat other people and animals
as having minds. We don't believe they have minds as a result of
*reasoning* from *evidence* of any kind. We can't help ourselves.
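(For readers who find the HLUT idea hard to pin down, here is a trivial
sketch, in Python, of the kind of machine being discussed: a table,
fixed in advance by its designer, mapping pairs of internal state and
discretised input to an output and a next state. The particular states,
inputs and responses are of course invented purely for illustration; a
real HLUT would need astronomically many entries, as discussed below.)

    # A toy "lookup table" machine: every (state, input) pair is mapped,
    # in advance, to an (output, next_state) pair.  Nothing is worked
    # out at run time; the designer anticipated everything.
    HLUT = {
        ("knows_no_algebra",   "what is x if 2x = 6?"):
            ("I have no idea.", "knows_no_algebra"),
        ("knows_no_algebra",   "lesson: divide both sides by 2"):
            ("I see.",          "knows_some_algebra"),
        ("knows_some_algebra", "what is x if 2x = 6?"):
            ("x = 3.",          "knows_some_algebra"),
    }

    def run(machine, inputs, state="knows_no_algebra"):
        """Drive the table-driven machine through a sequence of inputs."""
        for percept in inputs:
            output, state = machine[(state, percept)]
            print(f"{percept!r:40} -> {output!r}")

    # Before the "lesson" it answers one way, afterwards another: it
    # *behaves* as if it had learned, though nothing was worked out.
    run(HLUT, ["what is x if 2x = 6?",
               "lesson: divide both sides by 2",
               "what is x if 2x = 6?"])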
A robot driven by a suitably designed HLUT would have the same effect
on us (if such a thing were physically possible).

The notion of a machine controlled entirely by a lookup table
presupposes that all inputs and outputs are discrete. Perhaps this is
supported by quantum physics (e.g. individual photons hit the retina).
It may be that some people object to the idea of a HLUT because they
assume some (or all) interactions with the environment must be analog,
whereas a lookup table maps discrete states plus a discrete array of
sensory inputs into a discrete array of output signals.

However, even if biological systems are not discrete, it is possible to
make a discrete/digital system approximate, to a very high degree of
accuracy, a purely analog system, e.g. by using digital to analog and
analog to digital converters at its periphery, with a very high
sampling frequency. But there are deeper objections to the HLUT, some
mentioned below.

[Balter]
> The hypothetical HLUT is such that all learning that could possibly be
> done within a finite lifetime in all the environments that anyone could
> describe within a finite lifetime with one finite alphabet is already
> represented as state transitions within the HLUT.

Rickert probably assumes that no finite alphabet could suffice to
characterise human experience and learning. But sufficiently close
approximations may be achievable. Compare the use of fixed precision
floating point numbers to represent real numbers in computers. If you
take enough care you can simulate continuous processes using them, and
this is done every day in scientific and engineering laboratories.

Of course, as Balter acknowledges later, even using a small finite
alphabet, no machine with a large enough HLUT to pass the kinds of
tests envisaged could possibly be built, because a machine with a table
that anticipated *all* the possible external (discretised) contexts
that could occur in a human lifetime, along with *all* the appropriate
responses, would require a larger memory than could fit into the
physical universe. (Even storing the game tree for chess would require
an impossibly big brain: I think it has been calculated that there are
more nodes in the tree than electrons in the universe. Life is bigger
than chess.)

Another, more interesting, reason why the HLUT is impossible is that no
person or machine could work out *in advance* everything that needs to
be in such a table to accommodate a human lifetime. E.g. it would
involve anticipating all possible cultural and technological
developments in a lifetime, all possible future scientific theories
that might be developed in a lifetime (including false theories that
become fashionable!), all possible artistic creations to which the
individual might be exposed, and so on.

Being able to anticipate all that would require being able to run some
sort of simulation of a large portion of the universe much faster than
real time. However, if the future after time T included someone running
such a simulation, then the person running the simulation just before
time T would have to go even faster. An infinite regress of faster and
faster simulations might be required. But quite apart from that
difficulty it seems to be impossible to gain all the fine-grained
knowledge required to anticipate everything years ahead. (Maybe that's
why Turing restricted his test to 5 minutes!)
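(To put rough numbers on the scale claims above, here is a
back-of-envelope Python sketch. It compares a crude estimate of the
chess game tree with the commonly quoted figure of about 10^80
electrons in the observable universe, and then does the same for a
lifetime-sized HLUT under an absurdly coarse discretisation. The
branching factor of 35, the 80-half-move games, the 10^80 figure and
the 100-states-per-second discretisation are all rough illustrative
assumptions of mine, not figures anyone in this thread has defended.)

    # Back-of-envelope arithmetic for the scale claims above, worked in
    # logarithms so that nothing astronomically large is ever built.
    # All the input figures are rough illustrative assumptions.
    import math

    ELECTRONS_EXP = 80                    # ~10^80 electrons in the observable universe

    # Chess: roughly 35 legal moves per position, games of roughly 80 half-moves.
    chess_tree_exp = 80 * math.log10(35)  # log10 of a crude game-tree size
    print(f"chess game tree ~ 10^{chess_tree_exp:.0f} nodes")

    # A HLUT for a human lifetime, discretised absurdly coarsely:
    # 100 distinguishable sensory states, sampled once per second, for 70 years.
    seconds_in_lifetime = 70 * 365 * 24 * 3600
    hlut_exp = seconds_in_lifetime * math.log10(100)  # log10 of the number of input histories
    print(f"lifetime HLUT   ~ 10^{hlut_exp:,.0f} entries")

    print(f"electrons       ~ 10^{ELECTRONS_EXP}")

Even the chess figure dwarfs the electron count, and the lifetime table
is not even in the same universe of discourse.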
Just as a designer could not anticipate all relevant future states,
neither could evolution. I.e., although it may be able to use "trial
and error" learning to work out a set of behaviours adequate to a range
of conditions likely to be met by bacteria or ants, through a period of
exploratory evolution, it could not do the same for the environments
likely to be encountered by humans, and perhaps not for chimps and
various other more or less intelligent species.

One reason for this is that after we develop new capabilities we then
change our environments in such a way as to produce quite novel
situations that were not available in advance for the development of
appropriate behaviours in those situations! (E.g. building grand
pianos, sky-scrapers and computers.)

Instead evolution somehow found a better trick: it developed mechanisms
capable of doing planning and problem solving at "run time", i.e.
allowing the individuals to work things out for themselves, by
constructing new types of behaviours. This requires a specialised
architecture supporting "what if" reasoning capabilities. This is
denied by some people who argue for systems built *entirely* out of
large collections of pre-stored reactive behaviours. (That may work for
an ant, as far as I know. But ants don't seem to develop new scientific
theories, design and build new kinds of machines, develop new languages
for controlling machines, prove and use new theorems, etc. etc.)

Anyhow,

[Balter]
> One might say that the HLUT
> doesn't learn at all, it only *behaves* as if it does. But the difference,
> to the degree that one can talk about a difference between a class of
> real entities and a class of strictly theoretical, unrealizable, entities,
> is strictly metaphysical; the HLUT extended in space and the
                                     ^^^^^^^^^^^^^^^^^
> real learner extended in time are isomorphic.
               ^^^^^^^^^^^^^^^^

I am not sure that the word "isomorphic" is useful here. Actually, it
is not even correct: the real learner has the potential to follow many
branches of action which never occur. But those branches are not
explicitly realised in its brain or in its actions: instead he/she/it
has generative mechanisms for *working out*, if the need arises, that
they should be performed. (I presume that's part of Rickert's point.)
By contrast the HLUT must have all those possibilities explicitly
represented in advance, e.g. all the sentences that might possibly have
to be uttered in all situations that could occur. (Unless the designer
of the HLUT could work out a *unique* time-line for everything in the
environment, which I suspect is impossible for many different reasons.)

So it would be more accurate to say that the HLUT corresponds to the
real learner extended in *all possible futures* from the day of its
birth. But it's not clear what follows from that correspondence in this
sort of debate. Calling it an isomorphism doesn't deny the important
differences.

[Balter]
> Real learners undergo state
> changes as they receive inputs; only one state is physically represented at a
> time.

That's just one view of the architecture. It would be an accurate
characterisation of the HLUT, and of some monolithic AI programs and
neural nets. However, I think it is more illuminating to think of human
minds and brains as a collection of concurrent interacting subsystems
whose states change concurrently and not necessarily in any
synchronised way. This is more illuminating because it leads to better
explanatory theories, and is more likely to lead to the design of
artificial systems that are like humans. (I accept that this comment
probably doesn't contradict what Balter *intended* to say.)
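(Again, just to make the contrast vivid, here is a second trivial
Python sketch: instead of retrieving pre-stored responses, this toy
agent constructs a behaviour at run time by "what if" search,
simulating actions in a situation its designer never explicitly
anticipated. The little grid world, the actions and the breadth-first
search are my own illustrative assumptions; nothing here is offered as
a model of how humans, ants, or the architectures mentioned below
actually work things out.)

    # A toy contrast with the lookup-table machine sketched earlier:
    # instead of retrieving a pre-stored response, this agent *works
    # out* a plan at run time by "what if" reasoning -- simulating
    # actions in a situation it has never met before.
    from collections import deque

    ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

    def plan(start, goal, walls, width=5, height=5):
        """Breadth-first 'what if' search: consider possible futures
        until one reaches the goal, then return the action sequence."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            (x, y), path = frontier.popleft()
            if (x, y) == goal:
                return path
            for name, (dx, dy) in ACTIONS.items():   # "what if I moved this way?"
                nxt = (x + dx, y + dy)
                if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                        and nxt not in walls and nxt not in visited):
                    visited.add(nxt)
                    frontier.append((nxt, path + [name]))
        return None                                  # no plan exists

    # A situation never seen before: the plan is constructed, not looked up.
    print(plan(start=(0, 0), goal=(4, 4), walls={(1, 1), (2, 2), (3, 3), (1, 3)}))

The point of the contrast is not the search method, which is as simple
as possible, but where the behaviour comes from: the first sketch
stores every response in advance, whereas this one generates a new one
on demand from a compact description of the situation.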
My own work, developing the idea of a complex "nearly modular" virtual
machine architecture with concurrent interacting components, can be
found in papers in the Birmingham Cognition and Affect FTP directory

    ftp://ftp.cs.bham.ac.uk/pub/groups/cog_affect/0-INDEX.html

Papers near the top of that list try to show, for example, how three
different kinds of emotions in humans, i.e. primary, secondary and
tertiary emotions, depend on different concurrently active
sub-architectures which evolved at different times, and are shared with
different subsets of animals.

[Balter]
> HLUTs switch from state to state as they receive inputs; all states
> are physically represented at once. The only way to be able to build a HLUT
> is to produce a 3-dimensional projection of our 4-dimensional world.
> Since we can't do that in practice, they are only relevant to fantasyland.

Not just a projection of our 4-D world, or any particular 4-D world,
but an indefinitely large branching set of *possible* 4-D worlds.
Understanding all the differences between a table incorporating such a
projection and a mechanism that is capable of *generating* that sort of
projection is central to understanding the nature of intelligence.

Even if we agree with Rickert that HLUTs won't do because they don't
work things out for themselves, or learn, it's not at all obvious what
sorts of architectures *are* capable of producing the kinds of "working
out" and learning that go on in human minds (our virtual machines) or
brains (our implementation engines), nor what goes on in other animals.

Brain science, psychology and AI are still a long way from clear,
accurate and deep explanatory theories: there are only fragments. We
don't know how to put them together, nor whether putting them together
would suffice. We could be missing something very important that nobody
can imagine just now. (Penrose thinks it's quantum-gravity effects, but
I think that if there are such things they operate at the wrong
architectural level to answer our questions: discovering them would be
analogous to discovering transistors.)

But that doesn't mean we won't continually deepen our understanding if
we work at it (though some people find that thought distasteful, in the
same sort of way that many people found distasteful the thought that we
are not at the centre of the universe, or that we are products of a
biological evolutionary process).

Aaron
===