PAPERS INSTALLED IN THE YEAR 2004 (APPROXIMATELY)
See also
PAPERS 2004 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE
Closely related publications are available at the web site of Matthias Scheutz
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/04.html
Maintained by Aaron Sloman.
It contains an index to files in the Cognition and Affect
Project's FTP/Web directory produced or published in the year
2004. Some of the papers published in this period were produced
earlier and are included in one of the lists for an earlier period
http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents
A list of PhD and MPhil theses was added in June 2003
Last updated: 10 Oct 2009; 13 Nov 2010; 7 Jul 2012
In some cases other versions of the files can be provided on request. Email A.Sloman@cs.bham.ac.uk requesting conversion.
JUMP TO DETAILED LIST (After Contents)
Filename: phil-mag-emotions-sloman.pdf
Title: Damasio's Error
Date Published: 4th Quarter 2004
Where published:
In The Philosophers' Magazine 2004, pp 61-64
Abstract (Opening Paragraphs):
In 1994 Antonio Damasio, a well known neuroscientist, published his book Descartes'
Error. He argued that emotions are needed for intelligence, and accused Descartes and
many others of not grasping that. In 1996 Daniel Goleman published Emotional
Intelligence: Why It Can Matter More than IQ, quoting Damasio with approval, as did
Rosalind Picard a year later in her book Affective Computing.

Since then there has been a flood of publications and projects echoing Damasio's
claim. Many researchers in artificial intelligence have become convinced that
emotions are essential for intelligence, and they are now producing many computer
models containing a module called `emotion'.

(This article criticises the current fashion.)
Filename:
http://www.cs.bham.ac.uk/research/cogaff/talks/#cafe04
Title: Do machines, natural or artificial, really need emotions?
Talk to the Birmingham Cafe Scientifique & Culturel, 7th May 2004.
Revised version presented on 24th June 2005 in Utrecht at the 3rd
multi-disciplinary symposium organised by the NWO Cognition Programme:
How rational are we?
Author: Aaron Sloman
Date installed: 11 Aug 2004 (Updated several times)
Abstract:
For full abstract follow link above. (Includes critique
of Damasio's fashionable view that emotions are required for
intelligence).
Title: More things than are dreamt of in your biology: Information-processing in biologically inspired robots
Authors: Aaron Sloman and Ron Chrisley
Where published:
In Cognitive Systems Research, Volume 6, Issue 2, June 2005, Pages 145-174
Online since Sept 2004 at ScienceDirect.
NOTE: By August 2005 this paper was third in the journal's list of the top 25 most downloaded articles.
Much revised version of a paper originally presented at: International Workshop Biologically-Inspired Robotics: The Legacy of W. Grey Walter, 14-16 August 2002, Bristol, UK http://www.ecs.soton.ac.uk/~rid/wgw02/home.html
Abstract:
Animals and robots perceiving and acting in a world require an ontology
that accommodates entities, processes, states of affairs, etc., in their
environment. If the perceived environment includes
information-processing systems, the ontology should reflect that.
Scientists studying such systems need an ontology that includes the
first-order ontology characterising physical phenomena, the second-order
ontology characterising perceivers of physical phenomena, and a
(recursive) third order ontology characterising perceivers of
perceivers, including introspectors. We argue that second- and
third-order ontologies refer to contents of virtual machines and
examine requirements for scientific investigation of combined virtual
and physical machines, such as animals and robots. We show how the
CogAff architecture schema, combining reactive, deliberative, and
meta-management categories, provides a first draft schematic third-order
ontology for describing a wide range of natural and artificial agents.
Many previously proposed architectures use only a subset of CogAff,
including subsumption architectures, contention-scheduling systems,
architectures with `executive functions' and a variety of types of
`Omega' architectures. Adding a multiply-connected, fast-acting `alarm'
mechanism within the CogAff framework accounts for several varieties of
emotions. H-CogAff, a special case of CogAff, is postulated as a minimal
architecture specification for a human-like system. We illustrate use of
the CogAff schema in comparing H-CogAff with Clarion, a well known
architecture. One implication is that reliance on concepts
tied to observation and experiment can harmfully restrict explanatory
theorising, since what an information processor is doing cannot, in
general, be determined by using the standard observational techniques of
the physical sciences or laboratory experiments.
Like theoretical physics, cognitive science needs to be highly
speculative to make progress.
Keywords:
Architecture, biology, evolution,
information-processing, ontology, ontological blindness, robotics,
virtual machines
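
The following is a minimal, purely illustrative Python sketch of the layered
schema described in the abstract above: three concurrently active layers
(reactive, deliberative, meta-management) plus a fast-acting `alarm'
mechanism connected to all of them. All class names, rules and thresholds
are invented here for illustration; the paper specifies a schema for
comparing architectures, not an implementation.

    # Illustrative sketch of the CogAff schema; all names are invented.
    class ReactiveLayer:
        """Fast, pattern-driven condition-action rules."""
        def step(self, percepts):
            return "swerve" if "obstacle" in percepts else "cruise"

    class DeliberativeLayer:
        """Slower 'what-if' reasoning over explicit representations."""
        def step(self, percepts, goal):
            return ["approach", goal]   # placeholder plan

    class MetaManagementLayer:
        """Monitors and redirects the agent's own internal processing."""
        def step(self, history):
            # e.g. notice that reactive evasion keeps firing, change strategy
            return "switch-strategy" if history.count("swerve") > 3 else None

    class Alarm:
        """Fast global mechanism with inputs from, and outputs to, all
        layers; its interrupts correspond to some varieties of emotion."""
        def check(self, percepts):
            return "freeze" if "predator" in percepts else None

    class CogAffAgent:
        def __init__(self):
            self.reactive = ReactiveLayer()
            self.deliberative = DeliberativeLayer()
            self.meta = MetaManagementLayer()
            self.alarm = Alarm()
            self.history = []

        def step(self, percepts, goal):
            alarm_signal = self.alarm.check(percepts)
            if alarm_signal:                 # the alarm overrides all layers
                return alarm_signal
            self.meta.step(self.history)     # self-monitoring (unused here)
            plan = self.deliberative.step(percepts, goal)
            action = self.reactive.step(percepts)
            self.history.append(action)
            return action if action == "swerve" else plan[0]

    agent = CogAffAgent()
    print(agent.step({"obstacle"}, "pen"))   # -> swerve
    print(agent.step({"predator"}, "pen"))   # -> freeze

In the sketch the alarm can interrupt every layer at once, which is one way
of reading the paper's claim that such a mechanism accounts for several
varieties of emotions.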
Title: Interactions between Philosophy and Artificial Intelligence:
The role of intuition and non-logical reasoning in intelligence
Author: Aaron Sloman
NOW TRANSFERRED TO:
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1971-02
Filename: sloman-information-nature.pdf
Title: Information-Processing Systems in Nature
Draft: liable to change.
Author: Aaron Sloman
Date added: 16 Apr 2004 (Revised 22 Apr 2004)
Abstract:
This paper is a sequel to
my invited contribution to PPSN2000.
It attempts to identify and analyse a collection of issues implicitly
taken for granted in the earlier paper, and in a great deal of
literature which assumes that biological organisms do information
processing. Normally it is assumed that we all understand intuitively
what it means for something to be an information-processor, whether
natural or artificial. I offer the beginning of an analysis intended to
justify many of the ordinary ways of talking about information in
organisms -- some of which attract critical comments from those who are
sceptical about attempts to talk about computation and representations
in organisms. In the long run I hope to show that such scepticism is
misguided.
Filename: petters-aaai-ss-04.pdf
Title: Simulating Infant-Carer Relationship Dynamics
Presented at cross-disciplinary workshop on Architectures for
Modeling Emotion at the AAAI Spring Symposium at Stanford University in
March 2004.
http://homepages.feis.herts.ac.uk/~comqlc/ame04/
Author: Dean Petters
Date added: 15 Feb 2004
Abstract:
Advances in autonomous agent technology have resulted in the potential for
implementations of multiple agents to act as psychological theories of complex
social and affective phenomena. Simulating attachment behaviours in infancy
provides a relatively simple starting point for this type of theory
development. The presence of neurophysiological, psychological and other types
of data facilitates the validation of architectural theories by constraining
these architectures at multiple levels. A seven-part design process is
described which details how requirements are specified and how design,
implementation and evaluation processes are carried out. Two competing
theories are proposed, one that involves some deliberation and one that is
reactive only.
For movies see
http://www.cs.bham.ac.uk/research/poplog/figs/simagent
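
A minimal Python sketch, invented here for illustration (it is not taken
from the paper), of the contrast between the two competing architectures
mentioned in the abstract: a purely reactive infant agent, and one that
adds a little deliberation by consulting past outcomes. All function
names, thresholds and the history format are assumptions.

    def reactive_infant(distress, carer_visible):
        """Purely reactive: fixed stimulus-response rules."""
        if distress > 0.5:
            return "approach-carer" if carer_visible else "cry"
        return "explore"

    def deliberative_infant(distress, carer_visible, history):
        """Adds minimal deliberation: prefers whichever action has the
        best recorded outcome in past episodes."""
        if distress <= 0.5:
            return "explore"
        candidates = ["cry", "approach-carer"] if carer_visible else ["cry", "search"]
        def past_success(action):
            outcomes = [o for a, o in history if a == action]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        return max(candidates, key=past_success)

    history = [("cry", 0.2), ("approach-carer", 0.9)]
    print(reactive_infant(0.8, True))               # -> approach-carer
    print(deliberative_infant(0.8, True, history))  # -> approach-carer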
Filename: sloman-aaai04-emotions.pdf
Title: What are emotion theories about?
Invited talk at cross-disciplinary workshop on Architectures for
Modeling Emotion at the AAAI Spring Symposium at Stanford University in
March 2004.
http://homepages.feis.herts.ac.uk/~comqlc/ame04/
Author: Aaron Sloman
Slides:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/sloman-aaai04-slides.pdf
Date added: 29 Jan 2004
Abstract:
This is a set of notes relating to an invited talk at the
cross-disciplinary workshop on Architectures for Modeling Emotion at the
AAAI Spring Symposium at Stanford University in March 2004. The
organisers of the workshop note that work on emotions "is often carried
out in an ad hoc manner", and hope to remedy this by focusing on two
themes: (a) validation of emotion models and architectures, and (b)
relevance of recent findings from affective neuroscience research. I
shall focus mainly on (a), but in a manner which, I hope, is relevant to
(b), by addressing the need for conceptual clarification to remove, or
at least reduce, the ad-hocery, both in modelling and in empirical
research. In particular I try to show how a design-based approach can
provide an improved conceptual framework and sharpen empirical questions
relating to the study of mind and brain. From this standpoint it turns
out that what are normally called emotions are a somewhat fuzzy subset
of a larger class of states and processes that can arise out of
interactions between different mechanisms in an architecture. What
exactly the architecture is will determine both the larger class and the
subset, since different architectures support different classes of
states and processes. In order to develop the design-based
approach we need a good ontology for characterising varieties of
architectures and the states and processes that can occur in them. At
present this too is often a matter of much ad-hocery. We propose steps
toward a remedy.
Filename: http://www.cs.bham.ac.uk/research/cogaff/AIMag/StThomas-AIMag.pdf (PREPRINT)
Published version (PDF) also at
http://web.media.mit.edu/~push/StThomas-AIMag.pdf
Title: The St. Thomas common sense symposium: designing architectures for human-level intelligence.
Authors: Marvin Minsky, Push Singh and Aaron Sloman
To appear in The AI Magazine in 2004.
Abstract:
To build a machine that has common sense was once a principal goal in
the field of Artificial Intelligence. But most researchers in recent
years have retreated from that ambitious aim. Instead, each has developed
some special technique that deals with some class of problems well but
does poorly at almost everything else. An outsider might regard our
field as a chaotic array of attempts to exploit the advantages of (for
example) Neural Networks, Formal Logic, Genetic Programming, or
Statistical Inference, with the proponents of each method maintaining
that their chosen technique will someday replace most of the other
competitors.

We do not mean to dismiss any particular technique. However, we are
convinced that no one such method will ever turn out to be best, and
that instead the powerful AI systems of the future will use a diverse
array of resources that, together, will deal with a great range of
problems. In other words, we should not seek a single unified theory!
To build a machine that is resourceful enough to have human-like common
sense, we must develop ways to combine the advantages of multiple
methods to represent knowledge, multiple ways to make inferences, and
multiple ways to learn.

We held a two-day symposium in St. Thomas, U.S. Virgin Islands, to
discuss such a project to develop new architectural schemes that can
bridge between different strategies and representations. This article
reports on the events and ideas developed at this meeting, and
subsequent thoughts by the authors on how to make progress.
Filename:
marek-kopicki-miniproject1.pdf
Title: The ways to improve intelligence of interacting agents
Author: Marek Kopicki
Semester 1 Miniproject submitted as part of work (Oct-Dec 2003) for the
MSc in Advanced Computer Science, University of Birmingham.

A video demonstration of the program can be found here (look for the
hybrid reactive/deliberative sheepdog). The whole program can be
downloaded and run within the Free Poplog environment on a PC running
Linux, a Sun, or a Windows PC using VMware. The SimAgent Toolkit with
this program included is part of the Linux PC Poplog package (21 MB).
Abstract (expanded 11 Aug 2004):
Path planning is a non-trivial problem in artificial intelligence.
An agent has to find a path from one state (or position) to another
whilst avoiding contact with obstacles. The configuration space used to
represent all agent states is usually continuous, which makes the
problem even more complex. Skeletonisation is one approach: it
discretises the continuous space and reduces the task to a graph search
problem.
The sheepdog demo is a computer simulation, written in Pop-11, of an
artificial world consisting of a dog, sheep, trees (obstacles) and a pen.
The program uses the SimAgent toolkit to implement all the agents, the
objects in the scene, and the various concurrently active components of
the dog's 'mind'. The task of the dog is to drive all the sheep to the
pen while avoiding collisions with trees and other agents. An earlier
version of the sheepdog, produced by previous MSc students, was purely
reactive, so it could not cope with complex barriers and mazes, which
require planning if they are to be traversed in a sensible way. This
version of the program adds a sophisticated planning capability: a
probabilistic roadmap and the A* graph search algorithm play a major role
in the current refined version of the simulation, replacing the original
stimulus-response paradigm. The program combines deliberative planning
with reactive plan execution, including reactive local plan optimisation
during execution. I will present the advantages of agent planning, and I
will attempt to contrast this traditional AI conceptual approach with the
concept-free, perception-action architecture proposed by Rodney Brooks.

Even though the program still needs many improvements, the overall result
of the simulation is promising: the dog is able to complete the task,
avoiding obstacles dynamically and changing the plan if necessary. A
future version of the program might involve a smarter skeletonisation
procedure, more extensive use of the SimAgent toolkit, and possibly an
approximate search algorithm to tackle more complex environments.
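
For readers unfamiliar with the two algorithms named above, here is a
minimal Python sketch of a probabilistic roadmap (random free-space
samples joined into a graph) searched with A*. The geometry, world size,
clearance and sampling parameters are invented for illustration, and edge
collision checks are omitted; the actual demo is written in Pop-11 using
the SimAgent toolkit.

    import heapq, math, random

    def free(p, obstacles, clearance=1.0):
        """A point is free if it keeps clearance from every (circular) obstacle."""
        return all(math.dist(p, o) > clearance for o in obstacles)

    def roadmap(start, goal, obstacles, n=200, link_dist=3.0):
        """Sample n collision-free points and link nearby pairs into a graph."""
        nodes = [start, goal]
        while len(nodes) < n:
            p = (random.uniform(0, 20), random.uniform(0, 20))
            if free(p, obstacles):
                nodes.append(p)
        edges = {p: [] for p in nodes}
        for a in nodes:
            for b in nodes:
                if a != b and math.dist(a, b) < link_dist:
                    edges[a].append(b)   # edge collision checks omitted
        return edges

    def astar(edges, start, goal):
        """A* over the roadmap, with straight-line distance as the heuristic."""
        frontier = [(math.dist(start, goal), 0.0, start, [start])]
        seen = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            for nxt in edges[node]:
                if nxt not in seen:
                    g = cost + math.dist(node, nxt)
                    heapq.heappush(frontier,
                                   (g + math.dist(nxt, goal), g, nxt, path + [nxt]))
        return None   # graph disconnected: no path found

    obstacles = [(10.0, 10.0)]   # one "tree" in the middle of the field
    path = astar(roadmap((1.0, 1.0), (19.0, 19.0), obstacles),
                 (1.0, 1.0), (19.0, 19.0))
    print(path is not None)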
See also the School of Computer Science Web page.
This file is maintained by Aaron Sloman, and designed to be
lynx-friendly and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk