NOTE ON DYNAMICAL SYSTEMS
Revised: 27 Oct 2010; 10 Aug 2012
See also: http://tinyurl.com/BhamCog/misc/kinds-of-dynamical-system.html

Most of this note was originally posted to the Psyche-B discussion list in
August 2000, in response to a message posted by John McCrone. Quite a lot has
changed since then, and there is now much more discussion of consciousness by
AI researchers (whether that's a good thing or not, time will tell).
Examples include:

Embodiment and the inner life: Cognition and Consciousness in the Space of
Possible Minds
Murray Shanahan
http://www.amazon.co.uk/Embodiment-inner-life-Cognition-Consciousness/dp/0199226555/ref=sr_1_1?s=books&ie=UTF8&qid=1288136533&sr=1-1

And the International Journal of Machine Consciousness, for which I (rashly)
agreed to write a discussion paper, with replies to commentaries, published
in 2010:

An Alternative to Working on Machine Consciousness
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#910

Phenomenal and Access Consciousness and the ``Hard'' Problem: A View from
the Designer Stance
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#906

A discussion of dynamical systems is being added to this web site:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/dynamical.html

Date: Thu, 10 Aug 2000 04:01:01 +0100
From: Aaron Sloman
Subject: What isn't a dynamical system? (Was Re: How to Build a Mind)
Approved-by: patrickw@CSSE.MONASH.EDU.AU
To: PSYCHE-B@LISTSERV.UH.EDU

John McCrone wrote:

> Quite rightly, the discussion has had a heated tone because the strong
> AI/cognitive science camp has been granted a pretty easy run in the eyes of
> anyone on the dynamicist/complexity side. For decades, the computationalists
> have had the limelight and the DARPA dollars (or Alvey/Eureka if you were in
> the UK). Very big claims were made and bugger all - in terms of conscious
> machines or even useful models of conscious brains - has resulted.
Just for the record: very little, if any, of the funding allocated by DARPA,
or in this country by research councils, had anything to do with producing
conscious machines. There were far more specific *engineering* and technical
goals, e.g. speech recognition, face recognition, design of robots of various
kinds, many kinds of expert systems, tutoring systems, fault diagnosis
systems, rule-induction systems, data-mining systems, automatic programming
systems, verification systems, mathematical tools, etc.

And there are now many products in use that arose out of this research: some
using logic, some using alternative methods of symbolic reasoning, some using
neural nets, some using evolutionary computation, and many of them using
mixtures of techniques, e.g. combining statistical methods and AI techniques.
The good AI research departments I know about include people exploring a wide
range of techniques. Religious commitment to doing things only ONE way is
rarely productive.

Incidentally, if you know of any project funded by DARPA or a UK research
council that was about building conscious machines, I would be interested to
learn which it was. They do fund theoretical work, but as far as I know, not
philosophical theoretical work, such as attempts to analyse and replicate
consciousness.

[Digression:
It's interesting how myths are generated about funding, or about what people
in a discipline actually do. Do NOT believe what philosophers say they do!
E.g. philosophers will tell you that Turing machines are important for AI,
whereas they are barely mentioned in the vast majority of books or articles
on AI by AI researchers. I've explained why Turing machines are irrelevant to
AI here:
http://www.cs.bham.ac.uk/~axs/misc/turing-relevant.html

Actually, as the history of physics shows, you can't rely on practitioners in
any field to say what they do, any more than speakers of English can tell you
how they speak English. You have to go and study what they actually do.
Try scanning articles in the main AI journal, Artificial Intelligence
(published by Elsevier), looking for papers on consciousness:
http://www.elsevier.nl:80/inca/publications/store/5/0/5/6/0/1/
Look at the list of topics on which papers are invited. You can do the same
for the main national and international AI conferences.

Anyone who thinks there's a huge amount of money being spent on consciousness
research in AI has been badly misinformed by someone. I know only two AI
researchers who claim to have implemented conscious systems, and I regard
both as over-interpreting what they have done. There may be some others, but
they make up a tiny minority. There are some philosophers (like me) trying to
analyse requirements for mentality in machines and brains, but that's
different. And in my experience the science and engineering funding councils
are NOT enthusiastic about supporting this!
end digression]

RE: dynamical systems.

I have nothing against dynamical systems. Everything that changes is a
dynamical system. Computers are dynamical systems, and so is the planetary
system. (Strictly, they are instances of dynamical systems.)

The important questions about dynamical systems include:

(a) What sorts of dynamical systems are there? (E.g. continuous vs discrete,
stochastic vs deterministic, with or without persistent memory structures,
numbers and types of attractors, what sort of topology the "phase space" has,
whether they are closed systems or open systems, and in the latter case what
sorts of interfaces they have to their environments, whether they have a
fixed structure or can grow themselves, like embryos, how the components
interact, etc. etc.)

(b) What are the useful ways to describe and analyse them?

(c) What are the trade-offs between different sorts of dynamical systems in
relation to particular practical and scientific goals? (E.g.
producing useful machines, or explaining existing biological machines, like
brains, or evolutionary systems, or socio-economic systems, or tornadoes,
etc.)

Some dynamical systems are well represented as collections of partial
differential equations. Would that be a good way to understand how a computer
works? It may be accurate for a certain level of description, e.g. it may be
important for computer hardware engineers concerned with timing, stability,
reliability, power consumption, etc. But it will probably not be very useful
if you are trying to design a new improved word processing package, or trying
to understand a typical modern computer running most current software on most
current operating systems. Those are different kinds of dynamical systems.

Of course a brain is a dynamical system. But what sort of dynamical system it
is is another question. Probably a human brain is not ONE sort of dynamical
system but many different sorts cooperating in all sorts of subtle and
complex ways: some of them virtual machines, some neural machines, some
chemical machines, and some combinations of several types.

In a paper published in 1993, entitled ``The mind as a control system'', I
suggested that it was important to distinguish "atomic" from "molecular"
dynamical systems. An *atomic* dynamical system has a single state (e.g.
represented by a high-dimensional state vector) which moves through a
trajectory in some phase space. A *molecular* dynamical system has a
collection of separate, enduring, mutually interacting, yet independently
changing components. I.e. it has an *architecture*, with different components
performing different functions, and possibly operating on different time
scales (e.g. long-term, relatively static memory stores vs tight feedback
loops controlling current actions).
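The atomic/molecular distinction above can be sketched in code. This is
purely illustrative (none of it is from the original note, and all the names
are hypothetical): an "atomic" system is one update rule over a single state
vector, while a "molecular" system has enduring components with their own
states, their own time scales, and an architecture through which they
interact.

```python
# Hypothetical sketch of the atomic vs molecular distinction.

# Atomic: a single state vector moving through a trajectory in phase space.
def atomic_step(state, dt=0.1):
    """One update of a 2-d state (a lightly damped oscillator)."""
    x, v = state
    return (x + v * dt, v - (x + 0.1 * v) * dt)

# Molecular: separate, enduring, mutually interacting components.
class SlowMemory:
    """A long-term store that changes only occasionally."""
    def __init__(self):
        self.store = []

    def record(self, item):
        self.store.append(item)

class FastController:
    """A tight feedback loop that runs every tick, consulting the memory."""
    def __init__(self, memory):
        self.memory = memory
        self.setpoint = 0.0

    def step(self, reading):
        error = self.setpoint - reading
        if abs(error) > 1.0:
            self.memory.record(error)  # components interact via messages
        return 0.5 * error             # corrective output

# The molecular system's components persist and change independently,
# while the atomic system is fully described by one trajectory.
mem = SlowMemory()
ctrl = FastController(mem)
state = (1.0, 0.0)
for _ in range(50):
    state = atomic_step(state)
    ctrl.step(state[0])
```

The point of the sketch is structural: the atomic system could be redescribed
as one big vector, but the molecular one is more naturally described by its
architecture, i.e. which components exist and how they interact.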
The architecture of a molecular dynamical system need not be fixed:
biological dynamical systems often bootstrap themselves, growing more complex
architectures over time.

What sort of mathematics is relevant will depend on details of the system.
E.g. if the main interactions between components are via forces, voltages,
fields, etc., best represented by continuous variables, then one sort of
mathematics will be needed, familiar to physicists and engineers. However, if
some components interact by sending structured messages (e.g. logical
expressions, sentences in English or in Urdu), or if they send programs to be
executed in each other's environments, or if they communicate via some sort
of image structure with a changing topology, then very different mathematics
will be required.

But do not expect that kind of interaction to be detected by opening a system
up and doing visual inspection or physical measurements. Structured messages
are more likely to be implemented at an abstract level in a virtual machine.
(Virtual machines are real and have real causal powers: they just happen to
be more abstract than physical machines, just like poverty, crime and
economic inflation.)

The space of types of systems is VAST, and anyone who claims to have
understood more than a tiny corner of it is almost certainly deceiving
himself or herself. Likewise anyone who claims to have identified a corner
into which brains and minds fit.

(I sometimes wonder whether a hundred years from now we'll look back and
laugh at all the people who failed to realise that the important information
processing in brains is done by molecular (i.e. chemical) interactions, with
neurones mainly serving as communication devices. Are sets of differential
equations good for representing processes of changing molecular structures,
e.g. H2O + H2O -> H2 + H2 + O2?)
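The closing question can be made concrete. A structural change like
H2O + H2O -> H2 + H2 + O2 is naturally captured not by differential
equations over continuous variables but by a discrete rewrite rule over a
multiset of molecules, i.e. the mathematics of rewriting systems. The sketch
below is a hypothetical illustration, not anything from the original note:

```python
# Hypothetical sketch: a chemical reaction as a multiset rewrite rule,
# a kind of discrete mathematics quite unlike sets of differential
# equations over continuous state variables.
from collections import Counter

def apply_rule(pool, lhs, rhs):
    """Apply one rewrite rule if the pool contains the rule's reactants."""
    if all(pool[mol] >= n for mol, n in lhs.items()):
        pool = pool.copy()
        pool.subtract(lhs)   # consume reactants
        pool.update(rhs)     # produce products
        pool += Counter()    # drop zero counts
    return pool

# The rule 2 H2O -> 2 H2 + O2 as a (reactants, products) pair.
electrolysis = (Counter({"H2O": 2}), Counter({"H2": 2, "O2": 1}))

pool = Counter({"H2O": 4})
pool = apply_rule(pool, *electrolysis)
# pool is now {"H2O": 2, "H2": 2, "O2": 1}
```

A rule either fires or it doesn't, and what changes is the *structure* of the
collection, not the value of a continuous variable; that is the sense in
which structured, discrete interactions call for different mathematics.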
People who are wondering what the fuss is about might like to read the issue
of Behavioral and Brain Sciences, vol. 21, no. 5, October 1998, in which Tim
van Gelder's article "The dynamical hypothesis in cognitive science" tried to
explain why computational approaches to cognition were different from and
inferior to dynamical systems approaches, along with the 30 or so
commentaries. I regard much of the debate as vacuous religious warfare.

One issue of substance, raised by Doug Watt and others in recent psyche-b
correspondence, is whether the architecture of human and other animal brains
is best understood in terms of very large numbers of dynamical systems
operating in parallel on different spatial and temporal scales, with fairly
close coupling between the diverse small-scale processes and the more
central, high-level controlling processes. My guess is that that will be
needed to explain some aspects of human mentality, but not all.

For example: that view of a human being may be required to explain (some of)
what's going on when I am carrying a heavy load up a steep hill, and am
exhausted, sweating profusely, out of breath, with aching limbs, hungry,
thirsty, needing to pee, itching all over from mosquito bites, afraid of
losing my footing and falling to my death, and worried about missing the bus
at the top of the hill, all at the same time. The experience of such a person
is the outcome of a very large number of very different processes on
different scales, all competing for attention.

However, all that is probably not relevant to explaining what's going on when
I am sitting down and thinking hard about designing a computer program, or
playing a game of draughts (checkers), or thinking about transfinite set
theory, or writing email messages about philosophical problems, or composing
poetry. For the latter mental processes the fine-grained implementation
details are far less relevant, and perhaps totally irrelevant.
Even the former multi-level dynamical architecture may or may not be capable
of being replicated on different physical components. Arguments saying
alternative implementations are impossible would need to spell out very
precise reasons, perhaps with mathematical proofs. Intuitions about what
computers cannot do are mostly worthless. It's equally obvious that
assemblages of atoms of carbon, hydrogen, oxygen, iron, ... cannot fall in
love. And who is to say a machine cannot be conscious if it never feels
aching muscles, nor a need to urinate?

Maybe that's consistent with some of what John McCrone wrote:

> The question for mind science then becomes which mathematical tool or
> conceptual framework is the best.

Maybe none: different ones are needed for different sub-tasks.

> Again I've said that I feel you need to
> use both to bracket the real life phenomena of evolved life and conscious
> brains. This is because evolved life and conscious brains appear to be
> examples of complex adaptive systems (CAS) - systems which are fundamentally
> dynamic but which show emergent computational properties.

But not all aspects of mind have the same character.

Regarding this:

> If this is true, then does it rule out consciousness in a computer?

The answer will surely depend on what sort of consciousness and what sort of
computer are in question. The computers of 50 years from now may be even more
different from today's computers than today's are from those of 1950,
especially as we are increasingly using computers to help us design and build
new ones.

Cheers.
Aaron
===
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs/ )
School of Computer Science, The University of Birmingham, B15 2TT, UK
EMAIL A.Sloman@cs.bham.ac.uk
PAPERS: http://www.cs.bham.ac.uk/research/cogaff/
TOOLS: http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
Phone: +44-121-414-4775 Fax: +44-121-414-4281/2799