Many AI theorists have proposed different architectures for different purposes, ranging from relatively simple architectures for agents in very large multi-agent systems to very complex architectures inspired by attempts to produce individual human-like systems (e.g. Minsky's architecture in 'The Emotion Machine' and my closely related H-Cogaff).
Perhaps we need an understanding of what varieties of purposes AI architectures can have and which sorts of architectures are suitable for which purposes (i.e. which niches). For this we need a language and ontology for describing how niches can vary and, if possible, an agreed ontology and terminology for talking about varieties of architectures, e.g. by specifying types of components, types of representations, types of functions that components can perform, ways in which different components can be assembled for different purposes, etc. (Compare the use of electronic circuit diagrams: nobody supposes there is one right circuit, but there are agreed ways of talking about circuits, representing them, and analysing their behaviours, tradeoffs, etc.)
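To make the idea more concrete, here is a minimal sketch (in Python) of what such a shared descriptive vocabulary might look like. Every class name, field name and example entry below is hypothetical, chosen only to illustrate that components, their representations and functions, and ways of assembling them could be described in a common notation, just as circuit diagrams describe circuits without prescribing one right circuit.

    # Illustrative sketch of a shared descriptive schema for architectures.
    # All names are hypothetical examples, not an agreed standard.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:
        name: str                   # e.g. "perceptual subsystem", "motive generator"
        functions: List[str]        # e.g. ["sense", "plan", "evaluate options"]
        representations: List[str]  # e.g. ["condition-action rules", "explicit plans"]

    @dataclass
    class Connection:
        source: str                 # component producing the information
        target: str                 # component consuming it
        kind: str                   # e.g. "control signal", "alarm", "data"

    @dataclass
    class ArchitectureDescription:
        name: str
        components: List[Component] = field(default_factory=list)
        connections: List[Connection] = field(default_factory=list)

    # Example: a crude three-layer architecture described in this vocabulary.
    example = ArchitectureDescription(
        name="three-layer example",
        components=[
            Component("reactive layer", ["sense", "act"], ["condition-action rules"]),
            Component("deliberative layer", ["plan", "evaluate options"], ["explicit plans"]),
            Component("meta-management layer", ["monitor", "redirect attention"], ["self-descriptions"]),
        ],
        connections=[
            Connection("reactive layer", "deliberative layer", "alarm"),
            Connection("deliberative layer", "reactive layer", "control signal"),
        ],
    )

The point of such a schema would not be to fix one right set of categories, but to give designers a common way of stating which components, representations and linkages a proposed architecture contains, so that alternatives can be compared.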
Superficially there seems to be some common ontology in the AI community, insofar as many people use labels like 'reactive', 'deliberative', 'reflective', 'symbolic', 'subsymbolic', 'layered architecture', 'BDI architecture', 'subsumption architecture', etc. Yet when you look closely it turns out that some of these labels are used in strikingly different ways by different people. E.g. some assume that 'reactive' rules out internal state changes, whereas others don't. Some use 'deliberative' to refer to anything that considers options and makes a selection, whereas others require something richer (e.g. a planning or problem-solving capability). Some assume that an architecture must remain fixed, whereas others (like me) assume that if you want to understand human intelligence you will need to consider an infant-like architecture that grows and bootstraps itself into something very different over an extended period.
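The following toy sketches (in Python) are meant only to make the contested distinctions vivid; they are not the definitions used by any particular author, and the examples (predators, grazing, a trivial planner) are purely illustrative.

    from collections import deque

    # 'Reactive' in the narrow sense: a fixed mapping from current input to
    # action, with no internal state changes at all.
    def reactive_stateless(percept):
        return "flee" if percept == "predator" else "graze"

    # 'Reactive' in the broader sense: still no explicit consideration of
    # options, but internal state (here a memory of the last percept) changes.
    class ReactiveWithState:
        def __init__(self):
            self.last_percept = None
        def act(self, percept):
            action = "flee" if "predator" in (percept, self.last_percept) else "graze"
            self.last_percept = percept
            return action

    # 'Deliberative' in the weak sense: generate options, evaluate them,
    # select one.
    def deliberative_weak(options, evaluate):
        return max(options, key=evaluate)

    # 'Deliberative' in the richer sense: construct and compare multi-step
    # plans before acting (here a trivial breadth-first planner over a graph).
    def deliberative_planner(start, goal, successors):
        frontier = deque([[start]])
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in successors(path[-1]):
                if nxt not in path:
                    frontier.append(path + [nxt])
        return None

Even these toy cases show how much turns on where the line is drawn: a system satisfying the weak reading of 'deliberative' need have nothing like the plan-construction abilities required by the richer reading.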
There are also differences in the amounts and types of competences required ab initio, as clearly demonstrated in natural systems by the differences between precocial species, like deer, which need to run with the herd very soon after birth without having time to learn much, and altricial species, born or hatched helpless and (superficially) incompetent but somehow able to develop much richer and more varied cognitive competences by the time they are adults, e.g. the competences of a hunting mammal. A similar spread of designs may be required for artificial systems, e.g. depending on how much detail about the application domain and task requirements can be predicted in advance by the system designers, and how much has to be figured out by the system itself after delivery, or after the environment changes as a result of unforeseen events.
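The contrast can be caricatured in code. The sketch below is a hypothetical illustration of the two ends of that design spectrum, not a proposal: one agent has its competences fixed by the designer, the other starts with only a crude mechanism for acquiring competences from experience.

    from collections import Counter

    class PrecocialAgent:
        """Most competences are fixed at design time; little is learned later."""
        def __init__(self):
            self.competences = {"locomotion": lambda situation: "run"}
        def act(self, name, situation):
            return self.competences[name](situation)

    class AltricialAgent:
        """Starts without the competences themselves, only a (here trivial)
        mechanism for bootstrapping them from experience after deployment."""
        def __init__(self):
            self.competences = {}
        def learn(self, name, episodes):
            # Stand-in for a bootstrapping process: remember the action that
            # succeeded most often in past episodes of this kind of task.
            best = Counter(a for a, success in episodes if success).most_common(1)
            if best:
                self.competences[name] = lambda situation, a=best[0][0]: a
        def act(self, name, situation):
            if name in self.competences:
                return self.competences[name](situation)
            return "explore"   # exploratory fallback while still learning

Real altricial development is of course nothing like this table-lookup caricature; the sketch only marks the design question of how much must be built in and how much can be left to be acquired.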
There may also be very different architectural requirements depending on how the agent interacts with its environment. E.g. an individual with an articulated 3-D body, with multiple sensors and effectors of different sorts, interacting continuously with physical structures and processes in a dynamic and potentially dangerous environment, requires very different mechanisms from an intelligent system interacting with and controlling a large chemical plant, or a software system interacting with other internet agents concerned only with commercial transactions. Are there some requirements common to all of them?
Is the diversity of niches and architectures for intelligent systems so great that there is no point trying to develop a common framework? Or might we gain new conceptual clarity and improved communication and collaboration by developing such a framework? I suggest that some of the interesting transitions in evolutionary history provide useful clues. E.g. why and how did the ability to refer to and reason about unperceived or future objects and events, including multi-step futures, arise? Why and how did meta-semantic competence arise: the ability to refer to things that refer, including coping with referential opacity, etc.? How were those related to the evolution of linguistic communicative competence? What other interesting discontinuities are there?
(There's more here: http://www.cs.bham.ac.uk/research/cogaff/talks/#nokia)
Maintained by Aaron Sloman, School of Computer Science, The University of Birmingham