EACE QUARTERLY (ISSN 1022-7458) is published by the European Association for Cognitive Ergonomics
Questions posed by
Dr Patrice Terrier
Laboratoire Travail et Cognition
Université Toulouse II
5 all. A. Machado
F-31058 Toulouse Cedex
Tel.: +33 5 61 50 35 44
Fax: +33 5 61 50 35 33
http://www.univ-tlse2.fr/ltc/
and answered by Aaron Sloman
NOTE: the printed version, based on this file, contained minor editorial revisions.
Questions by Patrice Terrier are indicated by "[PT]"
[PT] Professor Sloman, your current appointment is Professor of Cognitive Science and Artificial Intelligence at the University of Birmingham. You have been working on cognitive science and artificial intelligence since about 1970, and on philosophy of mind and knowledge since about 1960. Recent work includes, among other topics: philosophy of mind, philosophical implications of artificial intelligence, computational analysis of attention, design of friendly programming environments and languages, emotions and related affective states.
My first set of questions is about the relationships between the last two topics as reflected in your current work on designing human-like agents.
[PT] To what extent is Poplog, the programming language you have developed since 1980, linked to the development of human-like agents?
[AS] Poplog is a very flexible and powerful toolkit for use in research and applications in Artificial Intelligence. It was developed initially at the University of Sussex and extended and marketed by Integral Solutions Ltd. It supports interactive, incremental development of software using multiple programming paradigms (e.g. list processing, pattern matching, rule-based programming, functional programming, conventional procedural programming, logic programming and object-oriented programming). It is supplied with incremental compilers for four languages (Pop-11, Lisp, Prolog and ML), all of them implemented via Pop-11.
It is inherently extendable, and with colleagues in Birmingham I have extended it with the Sim_agent toolkit, which conveniently combines a number of paradigms (including rule-based programming, object-oriented programming, and conventional AI programming) to support the exploration of designs for interacting objects and agents. Each agent is able to sense and communicate with others, while running within itself a number of "concurrent" mechanisms (e.g. perception, motive generation, planning, plan execution, reasoning, emergency detection, etc.). The concurrency is, of course, simulated in a "discrete event simulation" mechanism. Unlike most agent toolkits, Sim_agent does not prescribe a particular type of information processing architecture for agents: rather it allows us to explore a variety of different architectures and to investigate the effects of speeding up or slowing down some mechanisms relative to others, in order to study processing resource limits, for instance, and mechanisms for coping with them.
The fact that it is so general makes Sim_agent harder to use than a toolkit committed to a particular class of architectures, but we feel that that is compensated for by its flexibility and extendability.
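To make the scheduling idea concrete, here is a minimal sketch in Python. (Sim_agent itself is written in Pop-11 and its real interface is much richer; all the names below are illustrative, not the toolkit's actual API.) Each agent owns several internal mechanisms, and the scheduler gives each mechanism a number of steps per time slice, so relative speeds can be varied to model resource limits:

    from typing import Callable, List

    class Mechanism:
        """One internal 'concurrent' process, e.g. perception or planning."""
        def __init__(self, name: str, step: Callable, steps_per_slice: int = 1):
            self.name = name
            self.step = step                      # runs once per allotted step
            self.steps_per_slice = steps_per_slice

    class Agent:
        def __init__(self, name: str, mechanisms: List[Mechanism]):
            self.name = name
            self.mechanisms = mechanisms
            self.state = {}                       # data shared between mechanisms

        def run_slice(self, world) -> None:
            # Concurrency is simulated: each mechanism gets its turn within
            # the slice; changing steps_per_slice speeds a mechanism up or
            # slows it down relative to the others.
            for m in self.mechanisms:
                for _ in range(m.steps_per_slice):
                    m.step(self, world)

    def simulate(agents: List[Agent], world, slices: int) -> None:
        for _ in range(slices):                   # one pass = one simulated instant
            for agent in agents:
                agent.run_slice(world)

Slowing an agent's planning down relative to its perception then amounts to changing one integer, which is the kind of experiment on processing resource limits mentioned above.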
Sim_agent arose from needs discerned in the Cognition and Affect project. This is an ongoing activity at Birmingham, involving a succession of research students, research fellows and various formal and informal collaborators. We are attempting to understand the kinds of information processing architecture which could account for a wide range of human capabilities, and to see how such an architecture might have evolved and how it relates to architectures found in other organisms, and perhaps also robots and software agents. In particular we wish to account for both the ability of human minds to do various kinds of processing concurrently and also the effects of limitations on such concurrency, which require sophisticated mechanisms for control of attention. Resource limits in those mechanisms and the need for speed can lead to both desirable and undesirable consequences, some of which are manifested in typically human emotional states, such as uncontrollable grief or envy.
It would be possible to implement something like Sim_agent in a Lisp environment, but I believe that doing it in one of the more popular languages, such as C, C++, or Java would be far more difficult.
More information about Poplog is in:
http://www.cs.bham.ac.uk/research/poplog/poplog.info.html
and
http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
An overview of the Sim_agent toolkit is in: http://www.cs.bham.ac.uk/~axs/cog_affect/sim_agent.html
The Cognition and Affect project is summarised, with pointers to our online papers, in: http://www.cs.bham.ac.uk/~axs/cogaff.html
[PT] What are the main motivations for designing human-like agents in many research laboratories? And are industrial or military applications developed at the present time?
[AS] Different researchers have different motivations. My own primary motivation is wanting to understand what we are, how we evolved, how we are like and unlike other animals, how humans themselves differ (e.g. according to stage of development, gender, cultural influences, kinds of brain damage, genetic endowments, etc.), and how artificial agents can be like us in various ways. This is inherently multi-disciplinary long-term research combining concepts, theories and methods from philosophy, psychology, brain science, biology, AI and computer science.
This "pure" long-range research interest is coupled with the belief that such research can have several important practical implications.
For example:
1. If we have a better understanding of the information processing architecture of a normal adult human being we may be in a far better position to understand how things can go wrong in the architecture and what the options are for providing help. This should be of use to counsellors, therapists and neuroscientists helping people with all sorts of problems, including learning difficulties, emotional disorders, addictions, or brain damage.
2. If we understand the architecture and how it develops during the life of an individual we may be in a far better position to understand the requirements for effective educational systems and strategies, instead of having to use prejudice, guess-work, shallow and misleading empirical generalisations, or just current educational fashions, as often happens now. This could produce healthier, better informed, better adjusted, more competent citizens, with all the implications of such changes.
3. If we can build working implementations of the architectures we are studying, and develop good tools to enable students to explore them, play with them, and investigate the consequences of altering them in various ways, then we may be able to produce a new generation of psychologists, brain scientists, therapists and counsellors who have a far deeper understanding of the kinds of systems they are studying or interacting with.
4. On a more modest scale: developers designing computing systems, programming languages and interfaces intend these to be used by people. They will surely do a better job if they have a deep understanding of how people work.
Normally if you try to design a new system to interface with an existing system you try to understand how the existing system works so that your new system can interact with it successfully. This may include knowing what sorts of information it can process, what it can do with the information, what information it already has, what sorts of actions it can generate, how it takes decisions, and so on.
If we had a better understanding of how people work -- e.g.
o how they perceive things,
o how they learn,
o how their motivations and preferences work and change over time,
o how they adopt goals,
o what sorts of ontologies they use,
o what forms of representation and inference they use,
o how emotional states develop and what effects they have,
o how various types of long term and short term memory work,
etc.
then we should be in a far better position to design effective interfaces and languages for people to use.
However, most interface designers and language designers understand very little about human minds (and often care little too), so they design very poor systems, or systems which work well only in limited contexts or for limited groups of people.
I think the commonly used mouse-and-menu interfaces are an example: they are easy for beginners, but for many people can lead to a very limited understanding and very limited range of skills at using computers. I believe we are crippling the minds of our children, as a result. The emphasis on ease of use may be like tying feet together. In some ways it is easier to walk with small steps, but that can stop you climbing ladders, clambering over obstacles, leaping across crevasses, or simply moving quickly.
[PT] You mentioned an agent toolkit, called Sim_agent. I have a couple of questions about this toolkit.
In a recent paper on Sim_agent (ref. 1) you contrast research on interactions between agents and research on processes within agents. Could you explain to our readers to what extent research on interactions between agents is the right approach towards the design of cognitively rich agents?
[AS] I would not argue that anything is the "right" approach. There are many different sorts of problems to be solved and I strongly recommend multiple approaches, provided that the people adopting different approaches pay attention to one another and learn from one another.
For instance, in the paper you cite, Brian Logan and I claim that research on interacting agents might benefit from work (such as ours) on the information processing architectures of individual agents. Likewise people like us who work on the architecture of individual agents can benefit from studies of the requirements for interacting agents.
For instance, if two animals A and B coexist, then it is possible for each of them to regard the other simply as a physical object with physical properties. Then perhaps they will need no more information processing mechanisms than they need for perceiving and interacting with rocks, trees, whirlwinds, rivers, etc.
However if A regards B as an intelligent agent with goals, beliefs, attitudes, and mechanisms for sensing and acting on the world, then A needs richer information processing mechanisms which can handle representations or models of other systems which contain representations and models. Different degrees of sophistication in A would correspond to different abilities to treat B as intelligent. For instance, it makes a difference whether or not A can think of B thinking of A thinking of B, etc.
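A toy illustration (my own sketch in Python, not a claim about how animals actually encode such things) of what this requires: the representation must be recursive, so that a model of B can itself contain a model of A's model of B, and so on.

    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Believes:
        agent: str
        content: Union["Believes", str]    # a plain proposition or another belief

    # "A believes that B believes that A sees the food":
    b = Believes("A", Believes("B", "A sees the food"))

    def nesting_depth(x) -> int:
        """How many levels of mentalistic embedding a representation has."""
        return 1 + nesting_depth(x.content) if isinstance(x, Believes) else 0

    print(nesting_depth(b))                # -> 2

An agent limited to structures of depth zero treats others as mere physical objects; deeper nesting is what makes it possible to treat them as intelligent.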
The ability to represent (at some level) the internal information processing of other agents is relevant to modelling social animals of various sorts (monkeys, chimps, animals which hunt in packs, etc.). The precise ontologies used by animals of various degrees of sophistication (or, for that matter, human children as they progress through varying degrees of sophistication) are a topic of ongoing research.
It is often thought that language is primarily a means of communication BETWEEN individuals. From the standpoint described here, the primary use of language (in a very general sense where "language" refers to a medium for expressing information, questions, preferences, plans, strategies, etc.) is as a tool for thinking, forming goals, planning, deliberating, and so on WITHIN individuals.
In other words, the primary use of language is for an agent to communicate with itself, in internal information processing. It is primary in both the sense that these internal language-using mechanisms are a prerequisite for the existence of external linguistic communication, and in the sense that the internal mechanisms must have evolved first, and exist in some animals which have only limited external communication.
Of course, as the need for more sophisticated forms of communication (and cooperation, competition, persuasion, deceit, threats, etc.) develops, the requirement for an external language grows. That in turn extends the requirements for internal processing in both generating and understanding such communications. It also extends the requirements for an intelligent agent to think about what another agent can or might do: A may need to think about how B will understand a communication from A, or how B will communicate with C, etc. (There is work on this in Birmingham by my colleagues John Barnden and Mark Lee.)
These more sophisticated mechanisms within individuals also make possible new kinds of individual learning and development involving the absorption of concepts, facts, techniques, standards, preferences, attitudes, goals and so on, from a culture.
And as the information processing capabilities of individuals develop, so too will the variety of forms of communication. In other words there's a circle of influences between cognitively rich internal architectures and processes, and rich social interactions and forms of communication. The currently fashionable study of memes presupposes all these mechanisms, but does not really analyse their nature.
What I have said here is extremely sketchy: it merely indicates why I think that the study of cognitively rich architectures and the study of sophisticated interactions cannot be separated from one another.
One implication of all this is that work aimed at giving computers rich abilities to interact with human beings had better include a study of the information processing architectures of humans, both in order to specify the types of communications that are possible and what they can achieve, how they can go wrong, etc., and also because we may make faster progress if we can copy aspects of an existing design than if we simply try to design a system ourselves from scratch.
[PT] Other agent toolkits exist, such as SOAR (ref. 2). What are the advantages, if any, of Sim_agent compared with other agent toolkits? Are there important problems which are usually ignored in many multi-agent systems but not in Sim_agent?
[AS] For particular purposes other toolkits may be preferable. For example, SOAR is based on a particular type of cognitive architecture designed by Allen Newell and his collaborators, and SOAR provides sophisticated tools for building systems with that sort of architecture. If that is what you want to do then SOAR will be far better than Sim_agent. There are other tools geared to specific architectures, e.g. the ACT-R system of John Anderson and colleagues at Pittsburgh, and many tools which are designed to support development of architectures based on certain sorts of neural nets.
However, if none of those architectures is precisely what you want, and you need to be able to explore simulated agents with a variety of architectures, and you wish to develop new types of architectures, then Sim_agent may provide more flexibility than the alternatives available (as far as I know: there may be equally or more flexible alternatives that I am not aware of). One reason for this flexibility, not found in "batch-compiled" languages, is that in Sim_agent, as in many Lisp and Prolog systems, an incremental compiler is part of the run-time system. That can support very fast prototyping, testing, debugging and exploratory extensions.
Of course, anything that is done in such a system can also be done in a system using C or C++, for instance. However, it is not normally possible in such a language to match the ability provided by an AI language to edit some module or procedure which is part of a multi-megabyte running system and then, within a fraction of a second, have the new module compiled and linked into the system, so that everything which previously used the old module thereafter uses the new module. (The old one will be garbage-collected automatically.)
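The effect is roughly the following, shown here in Python (which, being interpreted, can only gesture at what Poplog's incremental compiler does with genuinely compiled code; the names are mine, not Poplog's):

    # Procedures are looked up by name at call time, so rebinding a name in a
    # running system redirects every subsequent call to the new version.
    procedures = {}

    def install(name, fn):
        procedures[name] = fn              # the old version becomes unreachable
                                           # and is eventually garbage-collected

    def call(name, *args):
        return procedures[name](*args)     # always dispatches to the latest version

    install("classify", lambda x: "big" if x > 10 else "small")
    print(call("classify", 42))            # -> big

    # "Edit and recompile" the procedure without stopping the running system:
    install("classify", lambda x: "huge" if x > 100 else "modest")
    print(call("classify", 42))            # -> modest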
It might be possible to port the Sim_agent toolkit to another language. If anyone wishes to try to do that I'll be happy to cooperate.
[PT] Can our readers access online resources on the toolkit?
[AS] All the Pop-11 code and documentation is accessible via the Birmingham Poplog ftp site mentioned above. An introductory tutorial file showing how to code some very simple reactive agents is in ftp://ftp.cs.bham.ac.uk/pub/dist/poplog/sim/teach/sim_feelings
A more general overview of the toolkit is in ftp://ftp.cs.bham.ac.uk/pub/dist/poplog/sim/help/sim_agent
The rule-based language, Poprulebase, used to express the internal processing of each agent is described in some detail in ftp://ftp.cs.bham.ac.uk/pub/dist/poplog/prb/help/poprulebase
Some of the generality and flexibility of the toolkit comes from the fact that Poprulebase allows procedures written in Pop-11 to be invoked in conditions or actions of rules. Simple cases are illustrated in the sim_feelings file. More sophisticated simulations could use Pop-11 to invoke C programs, communicate with other machines via sockets, etc.
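A minimal forward-chaining interpreter in Python conveys the flavour (Poprulebase's actual syntax and semantics are considerably richer; everything here is illustrative): because conditions and actions are ordinary procedures, a rule can perform arbitrary computation, not just pattern matching.

    class Rule:
        def __init__(self, name, condition, action):
            self.name = name
            self.condition = condition     # database -> bool
            self.action = action           # database -> None (may change it)

    def run_rules(rules, database, max_cycles=100):
        for _ in range(max_cycles):
            fired = [r for r in rules if r.condition(database)]
            if not fired:                  # quiescence: no rule is applicable
                return
            for rule in fired:
                rule.action(database)

    def hungry(db):
        return db["hunger"] > 5

    def seek_food(db):
        db["actions"].append("seek food")
        db["hunger"] -= 3

    db = {"hunger": 8, "actions": []}
    run_rules([Rule("eat_if_hungry", hungry, seek_food)], db)
    print(db)                              # -> {'hunger': 5, 'actions': ['seek food']}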
In order to run the toolkit it is necessary to obtain the Poplog system, which can be run under Linux on a PC, or on several types of Unix workstation, e.g. Sun, Digital (Compaq) Alpha Unix, Hewlett Packard, SGI. Poplog is developed by Sussex University and distributed by Integral Solutions Ltd, but they are changing their arrangements for distribution in the near future, and it is very likely that Poplog will become freely available.
Note added 11 Jul 2002
Poplog became freely available late in 1999, with full system sources, and may be downloaded from here: http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
[PT] Our readers who are aware of your work published in artificial intelligence journals are not necessarily aware of your work on philosophy of mind. Typically, while cognitive ergonomists assume some commonalities between brains and computers, they might also doubt the importance of being well educated in philosophy of mind for a researcher interested in design and usability issues.
[AS] The short answer is this:
Those who are ignorant of philosophy are doomed to reinvent it badly.
A longer answer was provided in the papers by John McCarthy and myself written for a "Philosophical Encounter" at the 14th International Joint Conference on AI in Montreal in 1995. My paper is available online at http://www.cs.bham.ac.uk/research/cogaff/0-INDEX81-95.html and John McCarthy's contribution can be found at http://www-formal.stanford.edu/jmc/
Marvin Minsky also took part, but did not produce a written paper.
One of the benefits of philosophical expertise is having the ability to produce good analyses of concepts that are used in specifying human mental capabilities (motivation, intention, attitudes, emotions, values, etc.). Most people who attempt to use these concepts assume over-simple definitions and then build over-simple theories and models. (There is a similar problem in psychology, as exemplified by the number of different and inconsistent definitions of "emotion" in the psychological literature.)
[PT] Where do you see inconsistent definitions of "emotion" in the literature?
[AS] I first became aware of the diversity of definitions of emotions among psychologists when I read the collection edited by Magda Arnold (ref arnold 68). A possibly more accessible recent book by Oatley and Jenkins (ref oatley 96) surveys a number of approaches to the study of emotions.
In my reading I have found some authors who define emotions in terms of certain sorts of brain mechanisms and processes, some (such as William James) who define them in terms of sensed physiological processes, some who define them in terms of observable behaviours and external changes (weeping, smiling, posture, etc.), some who define them in terms of how perceptions or beliefs relate to values or goals, some who define them in terms of how they are experienced, and some who define them as inherently irrational, while others allow that some or all emotions can be rational. Out on a limb was J.P. Sartre's definition of emotion as a state in which we perceive the world as magical.
I expect that an exhaustive survey of all literature on emotions would probably find between 50 and 100 significantly different definitions, although they may form a smaller number of clusters of related definitions.
Part of the problem is that many people assume that because they have experienced emotions they know exactly what they are and can define them. A similar assumption bedevils studies of consciousness, where people think they know what it is because they experience it. This is as misleading as the assumption that because we have experienced simultaneity we all know exactly what simultaneity is: an assumption which Einstein demolished in his work leading to the special theory of relativity. What we experience is a collection of special cases, and in each case what we are conscious of is but the tip of an iceberg.
[PT] What is your definition of emotion? Is your definition of emotion consistent with the cognitive theories you favour in dealing with perception or other cognitive processes, or do you use distinct theories for dealing with cognition and for dealing with affect in the C&A project?
[AS] A full answer would require a long discussion, including an analysis of the notion of "definition" and its role in science.
The short answer is that I believe that we have a collection of intuitive concepts of types of mental phenomena which differ from individual to individual, from culture to culture, and in some cases between research disciplines also, and that we can refine and clarify these concepts by studying and classifying the phenomena made possible by appropriate information processing architectures.
Examples of such concepts referring to aspects of both cognition and affect include: "belief", "perception", "learning", "understanding", "desire", "attitude", "emotion", "intention", "taste", "personality", and many more. These concepts (and their many variants) are useful first approximations in dealing with a complex collection of phenomena whose nature we understand only in a shallow fashion. Some researchers believe that greater depth of understanding comes from empirical studies. Some believe that pure conceptual analysis as practised by their favourite philosophers is the best approach. Some believe that all these concepts are purely culturally determined constructs from which we cannot escape. Some try to formalise these concepts by developing logical and mathematical models (e.g. using modal logics to formalise notions of belief and desire).
Such approaches may have some value but they are inherently shallower than one which starts from an attempt to understand naturally occurring minds as complex information processing systems which evolved under a variety of biological pressures, subject to various constraints. Any specific type of information processing architecture will support particular types of states and processes (which types will in general depend also on the environment). By finding a categorisation of the states and processes which an architecture can generate we obtain a set of concepts which are generated in a PRINCIPLED way, much as the concepts of kinds of stuff summarised empirically in the periodic table of the elements were generated in a more principled way from a theory of the architecture of matter.
In that way we can discover not only what is correct in our pre-theoretic concepts, but also how they need to be enriched, refined, subdivided into different sorts of cases. On this basis, my colleagues and I have been developing an analysis of the phenomena which are more or less loosely grouped together under the pre-theoretic concept of "emotion". For example, we have conjectured that normal adult humans have a three-level information processing architecture involving an evolutionarily old "reactive" layer similar in many ways to what can be found in most other animals, a newer and rarer "deliberative layer" capable of supporting "what if" reasoning, and a still newer "meta-management layer" capable of monitoring, categorising and evaluating internal processes, and to some extent modifying them, e.g. by redirecting attention. These mechanisms have both normal modes of functioning and abnormal modes when they are interrupted or redirected by one or more fast-acting pattern-directed global alarm mechanisms.
We then find that the states and processes normally called "emotional" can be subdivided into primary emotions largely initiated by the reactive mechanisms, secondary emotions arising out of processes in the deliberative mechanisms and tertiary emotions arising out of problems of loss of control in the meta-management mechanisms. Further sub-divisions can be made on the basis of how these states are caused, how they develop and decay, what effects they have, what sorts of semantic content they have, and how they are related to other internal states and processes. This amounts to a very rich theory in which emotions and other affective phenomena are deeply connected with cognitive processes.
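The following fragment (my own much-simplified sketch; the layer names come from the theory, but nothing here is the CogAff project's actual code) shows how a fast global alarm system can pre-empt the normal layered processing, which is what the classification of emotions above turns on:

    class ThreeLayerAgent:
        def alarm(self, percepts):
            # Fast, pattern-directed, able to interrupt everything at once.
            if "looming object" in percepts:
                return "duck"              # a primary-emotion-like reaction

        def reactive(self, percepts):
            # Evolutionarily old: immediate responses, no lookahead.
            return "track " + percepts[0] if percepts else "idle"

        def deliberative(self, plan):
            # "What if" reasoning: noticing a disastrous outcome in advance
            # can trigger apprehension (a secondary emotion) before acting.
            if "cliff edge" in plan:
                return "replan"

        def meta_manage(self, recent):
            # Monitors and redirects internal processing; losing control at
            # this level corresponds to tertiary emotions such as
            # uncontrollable grief.
            if recent.count("replan") > 3:
                return "attend to planning failures"

        def step(self, percepts, plan, recent):
            override = self.alarm(percepts)
            if override:                   # the alarm pre-empts normal control
                return override
            return (self.meta_manage(recent)
                    or self.deliberative(plan)
                    or self.reactive(percepts))

    a = ThreeLayerAgent()
    print(a.step(["looming object"], ["walk"], []))       # -> duck
    print(a.step(["path"], ["walk", "cliff edge"], []))   # -> replan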
This theory provides a rational reconstruction of much previous work including Damasio's distinction (ref Damasio 94) between "primary" and "secondary" emotions, which we can now see ignored some important distinctions.
Our own developing ideas are presented in an incomplete form in various papers, including (ref sloman and croucher 81), (ref sloman 92), (ref beaudoin 93), (ref sloman 98) and (ref sloman forthcoming), all available in our web directory.
[PT] Our readers would be interested in identifying the cognitive theories you favour. For example, when I read your answers (concurrency, limited capacity, control of attention) I suspect your work is influenced by some cognitive theories of consciousness (Baars?)...but I am not sure I am right.
[AS] My work, and the work of my collaborators and students, has been influenced by that of many thinkers, including several past philosophers such as Kant, Frege, Wittgenstein and Ryle (whose book The Concept of Mind was largely misinterpreted as a behaviourist manifesto). Chomsky's work, like Kant's (and implicitly Frege's), drew attention to the importance of the vast amounts of unconscious information processing that must underlie much of our conscious experience. Much of my work can be seen as a development of ideas in a very important paper by Herbert Simon (ref simon 67), originally written in the early 1960s in response to a criticism of information processing models by the psychologist Ulric Neisser. I also learnt much from my former colleague Margaret Boden at the University of Sussex, e.g. her little-known 1972 book (ref boden 72), and various books and papers by Marvin Minsky and John McCarthy.
I have also been influenced by protagonists of Darwin's ideas, e.g. the books of Dawkins and Dennett (ref dennett 96), since I think it is important to understand the differences between minds which have been or could have been produced by an evolutionary process and other sorts of minds.
As far as Baars is concerned, I came across his work (e.g. ref baars 88) after most of the ideas explained here were already developed under diverse influences. I think he assembles many useful facts, and his theories point in the right general direction, but they are not expounded in terms which could be used as the basis of a detailed analysis of the architecture and its implications, presumably because, like most psychologists, he has had no training in designing, building or debugging, models that actually work. So he is forced to rely on inadequate metaphors, such as the metaphor of an internal theatre as the vehicle of consciousness. It might have been better to use a metaphor of an internal editor buffer within which many processes can both find data and store results of operations on data stored elsewhere. But even that is over simple.
In fact I don't think we have the right set of conceptual tools for designing human-like architectures yet and we also lack a deep understanding of precisely what it is that has to be explained when we talk about consciousness. We won't have a truly deep understanding of what needs to be explained, until we are in a position to formulate deeper explanatory theories!
[PT] At the LSE in London, on 19 November 1998, your presentation was entitled "Are brains computers?" I supposed you answered this question positively but, surprisingly, one of your slides contained the following sentence: "Brains are computers, though at present we don't yet know what kind of computers." Could you explain what you mean by this sentence?
[AS] What I mean is that we are only in the early stages of understanding what sorts of computations are possible, what sorts of information processing mechanisms and architectures are possible. It is clear that a brain is very different from a computer of the sort we currently know how to build. It may turn out that we shall invent and implement all sorts of additional types of mechanisms in future (just as people have recently begun to explore connectionist forms of computation, computation using DNA, and quantum computation). As our ideas of computation (or information processing) expand, we shall be in a better position to ask what sorts of computations animal brains perform.
There is a more subtle aspect to our ignorance regarding what kind of computer a brain is. That has to do with ignorance of what the task of a brain is. E.g. people often think they know what perception is: it is taking in physical signals, converting them (using transducers) to some kind of internal information structure, and then analysing and interpreting those in order to infer the existence of objects in the environment, their properties, relationships, etc. That, for example, is how many people see the function of vision, as expounded by David Marr in his 1982 book. However there is an alternative view derived from J.J. Gibson, which is that the function of perception in general, and vision in particular, is to provide information about affordances. These are not "objective" properties and relations of things in the environment but can only be defined in terms of the goals, needs, possible actions and information processing capabilities of the perceiver. The kind of computation done by a Marrian perceiver is very different from that done by a Gibsonian perceiver. I suspect there is a lot that we simply do not understand about the function of perception, and until we do we shall not know what sorts of computers can perform those functions. For example, in the spirit of Gibson, I have argued that a major function of perception is to inform the perceiver about POSSIBILITIES in the environment, and relationships between possibilities. But it is far from clear what this entails.
(I don't yet know whether it links up with the many-worlds interpretation of Quantum mechanics expounded brilliantly by David Deutsch in his 1997 Penguin book, The Fabric of Reality.)
[PT] In your opinion, Gilbert Ryle's famous 1949 book, The Concept of Mind, contains many important ideas relevant to internal information processing. Please cite three of them.
[AS] 1. Perception cannot consist in the creation of an internal image perceived by a homunculus, since, in order to perceive, the homunculus would then have to create internal images perceived by another homunculus, ad infinitum. This argument may not have originated with Ryle, but he used it, and it effectively demolishes many naive views about the nature of minds and consciousness.
2. Many mental states and processes are inherently concerned with dispositions, propensities, capabilities, inclinations, as opposed to actual episodes. I.e. what is important about my current mental state, and what identifies it as this state (e.g. experiencing letters on the screen as I type) rather than something else, is a large collection of truths about "what would happen if". E.g. my understanding of what I read changes my ability to answer certain questions, think of new questions, make new inferences, etc. More dramatically, my experience of a particular visual configuration inherently involves my ability to experience it as changing in many ways, even when it doesn't change. Seeing a patch as red involves the ability to see the colour change, throughout or in part. Seeing it as circular inherently involves the ability to see the boundaries moving in many different ways, or cracks appearing in it, or a differently coloured patch appearing to obscure part of it, etc.
3. Many of the realisations of those dispositions, and many of the triggering conditions are themselves internal mental states and processes. (That's why Ryle was not a behaviourist, though his talk of dispositions led many of his contemporaries to regard him as one. He was, I believe, groping towards an information processing model of mind, but lacked the conceptual tools we now have to express the ideas.)
Many of these points are very important in understanding what sorts of mental processes occur when someone uses a computer interface. Often they are ignored or over-simplified, as if seeing a red circular blob were an event whose structure could be completely captured by four parameters, the colour, the x/y coordinates of the circle's centre and its radius. (These are all you need to draw the blob. But seeing is not the inverse of drawing.)
[PT] To conclude, let's talk about the results of a recent meeting you attended. In April 1999, the British HCI Group and University College London organised a one-day meeting entitled "Affective computing: The role of emotion in HCI". Do you really think that emotion may enhance human-computer interaction in the future?
[AS] My answer is indirect. Part of the answer is that, whether we want it or not, we may find that certain sorts of emotions emerge naturally in mechanisms designed for other purposes. So certain sorts of robots and real-time control systems will have emotions whether or not they enhance interaction. Another part is that human beings will have emotions, and for an intelligent system to take account of that possibility may be important for effective communication and cooperation. Finally, in some contexts, for instance interactive entertainment systems, it may be important for synthetic characters, robots, etc. to behave as if they had emotions, in order to be convincing. That is why entertainment companies are investing in research in this area.
I have been working on related issues for many years. This is partly because I think that any complete theory of the human mind must include an account not only of perception, learning, problem solving, decision making, planning, and plan execution, but also personality, motivation, preferences, attitudes, tastes, values, moods and emotions.
In 1981 Monica Croucher and I argued (ref sloman and croucher 81) that intelligent systems subject to various limitations, such as incomplete knowledge, limited processing speed and other resource limits, would require mechanisms which, as a side effect, were capable of generating emotions.
I still think that argument is basically correct, though nowadays I distinguish three kinds of emotions (primary, secondary and tertiary) related to three different layers in the human information processing architecture (reactive, deliberative, and reflective). For instance, primary emotions arise out of the need to have a global "alarm" system which receives signals from all over the system and is capable of rapidly detecting global patterns indicating a need for a rapid reorganisation of all activities into a new pattern, e.g. fleeing, freezing, fighting, etc. Secondary emotions are more subtle and complex and depend on deliberative mechanisms supporting "what if" reasoning capabilities. These can be used to discover that a possible consequence of a plan currently being considered would be disastrous, thereby triggering in advance (through a modified global alarm system) a reaction of apprehension which can modify many aspects of both external behaviour and internal processing until the danger is past. Tertiary emotions involve even more complex mechanisms capable of monitoring, evaluating and to some extent controlling a host of internal deliberative processes, and then under certain conditions losing control. These ideas are developed in more detail in papers on the Cognition and Affect web site: http://www.cs.bham.ac.uk/research/cogaff/
The point about interactive systems often needing to take account of emotions and other affective states in humans with whom they interact was made in (ref sloman 92), for instance, in connection with intelligent tutoring systems. A good teacher has to be able to tell whether pupils are happy or upset and to make reasonable guesses as to why, and may need to think about how to tailor criticism not only to the quality of a student's work, but also to the likely emotional reactions of different sorts of students.
Many of these ideas derive from a pioneering paper by Herbert Simon (ref simon 67).
Although I think that computers showing simulated emotions may be useful in various kinds of entertainments (just as characters in Disney cartoons are often effective because they suffer, are vengeful, are happy, gloat, are disappointed, etc.), I suspect that there are dangers in simulating such feelings where the system lacks the architecture to support real preferences, beliefs, desires, hopes, fears, pleasures, pains, etc. Feigned emotions (sympathy, concern, indignation, pleasure at a user's success, etc.) could, at worst, seriously mislead naive users, and at best might seriously irritate more informed users (like me).
[PT] Thank you very much Professor Sloman.
(ref.1) Sloman, A. and Logan, B. (1999). Building cognitively rich agents using the Sim_agent toolkit. Communications of the ACM, 42(3), March.
(ref.2) Laird, J.E., Newell, A. and Rosenbloom, P.S. (1987). SOAR: An architecture for general intelligence. Artificial Intelligence, 33, 1-64.
(ref arnold 68) Arnold, M.B. (Ed.) (1968). The Nature of Emotion. Penguin Books, Harmondsworth, England.
(ref baars 88) Baars, B.J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press, Cambridge, UK.
(ref beaudoin 93) Beaudoin, L.P. and Sloman, A. (1993). A study of motive processing and attention. In A. Sloman, D. Hogg, G. Humphreys, D. Partridge and A. Ramsay (Eds.), Prospects for Artificial Intelligence, pp. 229-238. IOS Press, Amsterdam.
(ref boden 72) Boden, M.A. (1972). Purposive Explanation in Psychology. Harvard University Press.
(ref dennett 96) Dennett, D.C. (1996). Kinds of Minds: Towards an Understanding of Consciousness. Weidenfeld and Nicolson, London.
(ref oatley 96) Oatley, K. and Jenkins, J.M. (1996). Understanding Emotions. Blackwell, Oxford.
(ref Damasio 94) Damasio, A.R. (1994). Descartes' Error: Emotion, Reason and the Human Brain. Grosset/Putnam, New York.
(ref simon 67) Simon, H.A. (1967). Motivational and emotional controls of cognition. Reprinted in Models of Thought, pp. 29-38. Yale University Press, 1979.
(ref sloman 92) Sloman, A. (1992). Prolegomena to a theory of communication and affect. In A. Ortony, J. Slack and O. Stock (Eds.), Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues, pp. 229-260. Springer, Heidelberg, Germany.
(ref sloman 98) Sloman, A. (1998). Damasio, Descartes, alarms and meta-management. In Proceedings of the International Conference on Systems, Man, and Cybernetics (SMC98), pp. 2652-2657. IEEE.
(ref sloman forthcoming) Sloman, A. (forthcoming). Architectural requirements for human-like agents both natural and artificial (What sorts of machines can love?). In K. Dautenhahn (Ed.), Human Cognition and Social Agent Technology. John Benjamins.
(ref sloman and croucher 81) Sloman, A. and Croucher, M. (1981). Why robots will have emotions. In Proceedings of the 7th International Joint Conference on AI, pp. 197-202. Vancouver.
(ref wright et al 96) Wright, I.P., Sloman, A. and Beaudoin, L.P. (1996). Towards a design-based analysis of emotional episodes. Philosophy, Psychiatry and Psychology, 3(2), 101-126.