"Complete" Architectures for Biological and Artificial Intelligence
To bring together outstanding researchers from various disciplines concerned with natural and artificial intelligence (e.g. Neuroscience, Psychiatry, Psychology, Philosophy, Linguistics, Neural Nets, Artificial Intelligence, Artificial Life, Robotics, Ethology, Archaeology, Biology (including evolution), Evolutionary Computation, Complex Systems, Synergetics, etc.) in order to develop new theories about the design of *complete* intelligent systems.
A *complete* system is one which includes a wide range of fully integrated information processing mechanisms, including: perception (usually several modalities), motivation, learning (new facts, new concepts, new hypotheses, new strategies, new skills, new standards of judgement, etc.), reactive behaviours, deliberation, planning, problem solving, plan execution, action, motor control, self awareness, self assessment, self control (e.g. of attention and thought processes).
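To make this concrete, here is a deliberately over-simplified sketch in Python (all class names, method names and details are invented for illustration; this is not a proposal for any particular design) of how a few such mechanisms might be composed in one agent:

    # Purely illustrative sketch: a handful of the mechanisms listed
    # above composed into one agent loop. A real "complete" architecture
    # would be far richer; all names here are invented.

    class CompleteAgentSketch:
        def __init__(self):
            self.beliefs = {}      # learned facts, concepts, hypotheses
            self.motives = []      # active motivations (goals with urgencies)
            self.skills = {}       # learned strategies and skills
            self.attention = None  # current focus of self-controlled attention

        def sense(self, environment):
            # Perception: several modalities feeding a shared belief store.
            for modality, percept in environment.items():
                self.beliefs[modality] = percept

        def react(self):
            # Fast reactive behaviours that bypass deliberation.
            if self.beliefs.get("touch") == "hot":
                return "withdraw"
            return None

        def deliberate(self):
            # Slower planning/problem solving over motives and beliefs.
            if self.motives:
                goal = max(self.motives, key=lambda m: m["urgency"])
                return self.skills.get(goal["name"], "explore")
            return "idle"

        def meta_manage(self, action):
            # Self-awareness and self-control of attention and thought.
            self.attention = action
            return action

        def step(self, environment):
            self.sense(environment)
            action = self.react() or self.deliberate()
            return self.meta_manage(action)

    agent = CompleteAgentSketch()
    agent.motives.append({"name": "feed", "urgency": 0.8})
    print(agent.step({"vision": "food ahead", "touch": "neutral"}))  # -> "explore"

Even a toy like this makes one point vivid: the hard problems lie not in the individual components, each of which is well studied in isolation, but in how they are integrated, scheduled and allowed to interrupt one another.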
There is a need for the study of complete systems to balance the steadily increasing fragmentation in which various groups of people study only low-level vision, only high-level vision, only planning, only learning, only simple reactive robots, only neural nets, only natural language, only theorem proving, etc.
There is also a need to bring closer together people trying to design new forms of computer-based intelligence with those who study various forms of natural intelligence from various viewpoints (experimental psychology, psychiatry, neuroscience, anthropology, etc.).
A complete system would not necessarily start off fully fledged. Like a newborn kitten or human infant it may be immature in many ways and develop dramatically during its lifetime. The trade-off between more or less "finished" designs and designs for self-adaptive bootstrapping needs to be better understood (compare the differences between altricial and precocial species of birds).
An adequate theory of the architecture for a complete human-like agent should in principle be implementable in the form of computing systems and should be capable of yielding predictions, for instance about types of development and learning that can occur in various circumstances, about ways things can go wrong through brain damage or various kinds of maladjustment, about the effects of various forms of education and training, about the effects of social interactions in groups of such agents, and many more.
It might turn out that computing systems as we know them are inadequate for this purpose, as claimed by Penrose, Hameroff and others, but only by trying as hard as possible will we really find out whether they are inadequate and why, since the reasons given so far are all spurious and easily refuted.
Besides being implementable, an adequate theory of the architecture would also need to be consistent with what is known about the physiology of animal brains, the ways in which various kinds of brain damage can affect different kinds of competence, and the types of functional control provided by hormones, neurotransmitters and drugs. This could include accounting for various kinds of motivational and emotional processes such as those produced by the brain stem and limbic system, some of which would be relevant to advanced robots or sophisticated control systems managing a complex plant or helping to deal with crisis situations, e.g. in fire-fighting, terrorist threats, epidemics, etc.
The architecture should also reflect what is known (or surmised) about the evolution of animal brains, and should be based on a theory which explains the kinds of evolutionary pressures that might have led to various additions to more primitive control architectures found in simpler organisms. (This would include explaining the differences between biological niches which favour colonies made up of large numbers of relatively simple agents such as ants or termites and those which favour the development of groups with far fewer members where each has a far richer repertoire of competences.)
The main outcome of such a project in the short term would be theoretical papers reporting the problems of specifying and designing such complete architectures and the strengths and weaknesses of various attempts. It could include a variety of "pilot" design and simulation projects exploring various ways of putting multiple information processing capabilities together in the context of a series of increasingly demanding task domains. It would also include the identification of important aspects of animal intelligence or animal brain functioning which have so far not been modelled or replicated but which look as if they may be ripe for modelling.
There would be no room for posturing of the form which often bedevils new fields, where some participants claim that only THEIR approach can succeed. E.g. arguments about whether simulated agents are worth building as opposed to physical robots are worthless: we can learn from both activities if they are done well. Only open minds are welcome in such a network.
The time seems ripe for a major attempt to bring together a number of strong currents that have emerged in a range of disciplines over the last 10 (or more) years. These are listed in no particular order.
1. Increasingly, consciousness is being regarded as a biological topic ripe for scientific study, instead of being only a topic for philosophers, cranks, theologians, and students of literature. There are international scientific conferences, journals, email lists, electronic seminars, and many books addressing the topic from a scientific or analytic standpoint. Although the signal-to-noise ratio is not always optimal, it seems to be getting steadily better.
See for example the web page of the Association for the Scientific Study of Consciousness, which has several pointers to relevant items, including the recent conferences at Tucson: http://www.phil.vt.edu/assc/info.html I attended the most recent one (May 1998), which contained some excellent stuff (as well as junk). (I'll be taking part in a symposium on this topic organized by Stan Franklin at the IEEE Conference on Systems, Man, and Cybernetics, San Diego, in October.)
2. There have been enormous advances in the understanding of brain mechanisms in recent years, including the unravelling of both neural and chemical sub-systems and processing pathways. Some of the best work is clearly concerned with attempts to understand the information processing architecture implemented in the brain (e.g. Antonio Damasio's book Descartes' Error). Studies of which competences are and are not diminished by various forms of brain damage help to reveal some of the sub-components in the architecture.
3. Various strands of research, some old and some new, seem to be moving towards an increasing understanding of multi-level phenomena in which different classes of mechanisms co-exist and interact even though some are implemented entirely in others (sometimes referred to as "circular causation"). Haken's work (in Stuttgart) on Synergetics is one example. Another is the work by Kauffman and Goodwin on "order for free" in evolutionary systems (whether natural or computational). Another is the philosophical analysis of varieties of supervenience. Another is the development of the notion of a virtual machine in computer science. There must be something like this in the relation between mind and brain, though we don't yet understand all the possible forms of such relationships.
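To make the virtual machine point concrete, here is a toy illustration in Python (all details invented): a tiny stack machine whose operations are fully implemented in, yet not identical to, the operations of the machine that runs it:

    def run(program):
        # Events in this virtual machine (pushing, adding) are real and
        # causally efficacious at the virtual-machine level, though they
        # exist only as patterns of activity in the machine implementing
        # them (here, the Python interpreter) -- an analogue of the
        # relation that may hold between mental and brain processes.
        stack = []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack[-1])
        return stack

    run([("push", 2), ("push", 3), ("add",), ("print",)])  # prints 5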
4. Evolutionary theorists are beginning to understand more of the complexity of natural evolution (e.g. the evolution of evolvability), and work on artificial evolution is helping us to identify problems and possibilities that previously were not thought of (e.g. the use of structures other than linear sequences as genetic material). It may be that a better understanding of possible forms of evolution will give us new insights into the evolution of intelligence and that may influence theories about forms of architectures that could be produced by such a process.
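As a small illustration of that second point, here is a sketch in Python (details invented for illustration) in which the genetic material is a tree of operations rather than a linear string, and mutation acts on subtrees:

    import random

    def random_tree(depth=2):
        # A genome here is an expression tree, not a flat string of genes.
        if depth <= 0:
            return random.choice(["x", random.randint(0, 9)])
        return (random.choice(["+", "*"]),
                random_tree(depth - 1), random_tree(depth - 1))

    def mutate(tree, depth=2):
        # Mutation replaces a randomly chosen subtree with a fresh one,
        # an operation with no direct analogue for linear genomes.
        if not isinstance(tree, tuple) or random.random() < 0.3:
            return random_tree(depth)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, mutate(left, depth - 1), right)
        return (op, left, mutate(right, depth - 1))

    genome = random_tree()
    print("parent:", genome)
    print("child: ", mutate(genome))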
5. Stephen Mithen's recent book "The Prehistory of the Mind" shows that even archaeologists can think about information processing architectures and how they might evolve. (I think he got a lot of details wrong, but his approach and knowledge could be useful as part of a larger interdisciplinary effort.)
6. In the last 50 years there has been a lot of work in Artificial Intelligence and Computer Science which has extended our grasp of a wide range of mechanisms that can acquire, manipulate, transform and use information, including such diverse mechanisms as self-modifying neural nets and logical theorem provers. We don't yet know good ways to put all these things together into complete agents, but the recent rise of interest in developing more or less autonomous software agents (alongside ongoing work on robotics) is helping to focus attention on complete (though often simple) architectures, as opposed to studying only specific sub-mechanisms, such as edge detection or planning. Increasingly, developments in the entertainment industry will also push this research forward (e.g. as the demand for intelligent, self-motivated characters in games and virtual reality systems grows).
It is time to combine all this computational know-how with information from other disciplines about how at least one impressive and complete type of agent is designed. Even if the whole task is too big to be completed in the foreseeable future, a new multidisciplinary push could yield significant advances.
7. Until recently, computing resources have been a major bottleneck. Some of the work in AI done in the early 70s looks mightily impressive when account is taken of the memory sizes and processing power available then. But this also explains some of the limitations (it took the Edinburgh robot Freddy about 20 minutes to analyse an image of a cup and saucer in 1973, which made real-time visual-motor coordination totally out of the question except for very trivial systems). The rapidly increasing power and memory capacity, falling size and falling cost of computing equipment mean that experiments are now feasible which would have been impossible a few years ago. There are also many new toolkits to help with the design and implementation of quite complex systems, including our SIM_AGENT toolkit here at Birmingham.
Summary: There are many strands of recent research which seem ready to be pulled together and the technology may be able to support far more ambitious experiments than ever before.
I know many people from a variety of disciplines (in the UK and elsewhere) whose interests are closely related to the themes listed above, and I have reason to believe that it will be relatively easy to create a network of internationally distinguished UK researchers who would be interested in various forms of collaboration to achieve the objectives sketched above. I already discuss these topics with some of them from time to time.
However, some important potential contributors are not in the UK, and it would be a pity if such a network were restricted to people in this country. Ongoing email collaboration will not be expensive, but it would be good to have funds to enable overseas participants to take part in one or two workshops a year.
[end]