PAPERS ADDED IN THE YEAR 2003 (APPROXIMATELY)
See also
PAPERS 2003 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE
Closely related publications are available at the web site of Matthias Scheutz
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/03.html
Maintained by Aaron Sloman
It contains an index to files in the Cognition and Affect
Project's FTP/Web directory produced or published in the year
2003. Some of the papers published in this period were produced
earlier and are included in one of the lists for an earlier period:
http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents
A list of PhD and MPhil theses was added in June 2003
Last updated: 31 Aug 2008; 13 Nov 2010; 7 Jul 2012; 30 Jul 2013
In some cases other versions of the files (but not Microsoft Word versions) can be provided on request. Email A.Sloman@cs.bham.ac.uk to request conversion.
JUMP TO DETAILED LIST (After Contents)
How Velmans' conscious experiences affected our brains
Authors: Ron Chrisley and Aaron Sloman
Some Foundational Issues Concerning Anticipatory Systems
Author: Ron Chrisley
Artificial Intelligence (Entry in Oxford Companion to the Mind 2nd Ed).
Author: Ron Chrisley
Embodied Artificial Intelligence
Author: Ron Chrisley
Anytime Deliberation for Computer Game Agents
Author: Nick Hawes (PhD Thesis)
Tarski, Frege and the Liar Paradox (1971)
Relocated to another file.
Author: Aaron Sloman
Virtual Machines and Consciousness
Authors: Aaron Sloman and Ron Chrisley
Progress report on the Cognition and Affect project: Architectures, Architecture-Schemas, And The New Science of Mind
Author: Aaron Sloman
The Architectural Basis of Affective States and Processes
Authors: Aaron Sloman, Ron Chrisley and Matthias Scheutz
Distributed Reflective Architectures for Anomaly Detection and Autonomous Recovery
Author: Catriona M. Kennedy (PhD Thesis)
Autonomous Recovery from Hostile Code Insertion using Distributed Reflection
Authors: Catriona M. Kennedy and Aaron Sloman
Filename: chrisley-sloman-velmans.pdf (Uncorrected proofs)
Title:
How Velmans' conscious experiences affected our brains
Authors: Ron Chrisley and Aaron Sloman
Journal of Consciousness Studies, 9(11), 2002, pp. 58-63. Invited commentary on Max Velmans' paper "How could conscious experiences affect brains?" in the same issue of JCS.
Date installed: 17 Jan 2004
Abstract:
Velmans' paper raises three problems concerning mental causation: (1)
How can consciousness affect the physical, given that the physical world
appears causally closed? (2) How can one be in conscious control of
processes of which one is not consciously aware? (3) Conscious
experiences appear to come too late to causally affect the processes to
which they most obviously relate.
We agree with Velmans that there are philosophical problems concerning
the causal efficacy of the experiential which need to be addressed by
any proper theory of consciousness. We also agree that some sort of
monist metaphysics, such as is required to explain the relation between
virtual machines (in computers, say) and the physical machines in which
they are implemented, is required. Despite Velmans' efforts, however,
these needs remain unsatisfied. We believe that the clinical,
psychological and philosophical methodologies Velmans musters should be
supplemented with and informed by experimental, synthetic AI work, in
order to facilitate the acquisition of new concepts and refinement of
old concepts that are required for advances in our understanding of the
place experience occupies in the natural world.
See also
our JCS 2003 paper, below.
Filename: chrisley-anticipation.pdf
Title: Some Foundational Issues Concerning Anticipatory Systems
Author: Ron Chrisley
International Journal of Computing Anticipatory Systems, Volume 11, 2002, pp 3-18. Partial Proceedings of the Fifth International Conference CASYS'01 on Computing Anticipatory Systems, Liege, Belgium, August 13-18, 2001, D. M. Dubois (Ed.), Liege: CHAOS. ISSN 1373-5411; ISBN 2-9600262-5-X
Date installed: 17 Jan 2004
Abstract:
Some foundational conceptual issues concerning anticipatory systems are
identified and discussed: 1) The doubly temporal nature of anticipation
is noted: anticipations are directed toward one time, and exist at
another; 2) Anticipatory systems can be open: they can perturb and be
perturbed by states external to the system; 3) Anticipation may be
facilitated by a system modeling the relation between its own output,
its environment, and its future input; 4) Anticipations must be a part
of the system whose anticipations they are. Each of these points is
made more precise by considering what changes it requires to
the basic equation characterising anticipatory systems. In addition,
some philosophical questions concerning the content of anticipatory
representations are considered.
Keywords:
computing anticipatory systems, weak anticipatory systems, modeling,
temporal representation, representational content.
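For readers unfamiliar with the formalism, here is a rough LaTeX sketch of the kind of "basic equation characterising anticipatory systems" the abstract refers to, in the style of Dubois's incursive formulation. The notation (A for the update map, M for the internal predictive model, x* for the predicted state) is chosen for this sketch only; the exact form used in the paper may differ.

% Strong (incursive) anticipation: the next state depends on past,
% present and actual future states.
x(t+1) \;=\; A\bigl(\ldots,\, x(t-1),\, x(t),\, x(t+1)\bigr)

% Weak anticipation: the actual future state is replaced by a prediction
% x^{*}(t+1) computed by an internal model M of the system and its
% environment.
x^{*}(t+1) \;=\; M\bigl(x(t)\bigr), \qquad
x(t+1) \;=\; A\bigl(x(t),\, x^{*}(t+1)\bigr)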
Filename: chrisley-oxford-companion.txt
Title: Artificial Intelligence (For Oxford Companion to the Mind)
Author: Ron Chrisley
In Gregory, R. (ed.) The Oxford Companion to the Mind (second edition). (In press).
Date installed: 17 Jan 2004
Filename: chrisley-embodied-ai.pdf
Title: Embodied Artificial Intelligence (Uncorrected proofs)
Author: Ron Chrisley
In Artificial Intelligence, 149(1), 2003
Date installed: 17 Jan 2004
Abstract:
Commentary on Michael L. Anderson's 'Embodied Cognition: A field guide',
in the same issue.
Filename: sloman-cogaff-03.pdf
Title: Progress report on the Cognition and Affect project:
Architectures, Architecture-Schemas,
And The New Science of Mind
(Original 2003. Revised October 2004 and 2008.)
Author: Aaron Sloman
Date installed: 7 Dec 2003 (Revised Oct 2004. Liable to be further
revised.)
Abstract:
The work is a mixture of philosophy, science and engineering, concerned especially with the role of explanatory architectures. In this it overlaps with Marvin Minsky's work on The Emotion Machine.
This report was triggered partly by a consultation for DARPA regarding cognitive systems and partly by the need to write a final report for the Leverhulme-funded project on Evolvable virtual information processing architectures for human-like minds (1999--2003) on which there were three research fellows in sequence, Brian Logan, Matthias Scheutz and Ron Chrisley. Several PhD students at the University of Birmingham also contributed.
The Leverhulme project has ended but work arising out of it continues, as will the Cognition and Affect project, with or without funding. Ongoing activities include a grand challenge proposal and European Community research initiatives, including this initiative on models of consciousness.
A major new robotic project funded by the EC, CoSy: Cognitive systems for cognitive assistants, started in September 2004.
It is not clear when this report will be completed, if ever. A decision was therefore taken to make it available for anyone interested, after I learnt that it might even be useful to sociologists interested in these topics.
When the report is updated, the Date above will change.
These ideas are constantly under development. Recent changes:
http://www.cs.bham.ac.uk/research/projects/cogaff/#overview
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
Filename: nick-hawes-phd-thesis.pdf
Filename: nick-hawes-phd-thesis.ps
Title: Anytime Deliberation for Computer Game Agents
PhD thesis, School of Computer Science, The University of Birmingham, 2003.
Author: Nick Hawes
Abstract:
This thesis presents an approach to generating intelligent behaviour for agents
in computer game-like worlds. Designing and implementing such agents is a
difficult task because they are required to act in real-time and respond
immediately to unpredictable changes in their environment. Such requirements
have traditionally caused problems for AI techniques.
To enable agents to generate intelligent behaviour in real-time, complex worlds, research has been carried out into two areas of agent construction. The first of these areas is the method used by the agent to plan future behaviour. To allow an agent to make efficient use of its processing time, a planner is presented that behaves as an anytime algorithm. This anytime planner is a hierarchical task network planner which allows a planning agent to interrupt its planning process at any time and trade-off planning time against plan quality.
The second area of agent construction that has been researched is the design of agent architectures. This has resulted in an agent architecture with the functionality to support an anytime planner in a dynamic, complex world. A proof-of-concept implementation of this design is presented which plays Unreal Tournament and displays behaviour that varies intelligently as it is placed under pressure.
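To make the "anytime" idea concrete, here is a minimal Python sketch of an interruptible HTN-style refinement loop: planning can be cut off at any deadline, and the planner returns the best (most refined) plan found so far, trading planning time against plan quality. The task names, the decomposition table and the crude quality measure are invented for this illustration and are not taken from the thesis.

# A minimal, illustrative sketch of an anytime hierarchical task network
# (HTN) planner, not Hawes's implementation.

import time

# Hypothetical HTN domain: each abstract task maps to a list of
# alternative decompositions (here only the first is ever used).
METHODS = {
    "attack_enemy": [["move_to_enemy", "shoot"],
                     ["take_cover", "move_to_enemy", "aim", "shoot"]],
    "move_to_enemy": [["run_towards_target"]],
    "take_cover":    [["find_cover", "crouch"]],
    "aim":           [["track_target"]],
    "shoot":         [["fire_weapon"]],
}

def is_primitive(task):
    """A task with no decomposition methods is directly executable."""
    return task not in METHODS

def anytime_htn_plan(goal_task, deadline):
    """Refine goal_task until it is fully primitive or time runs out.

    Returns (plan, quality): the partially or fully refined task list and
    the fraction of tasks that are primitive (a crude quality measure)."""
    plan = [goal_task]
    while time.monotonic() < deadline:
        # Find the first still-abstract task in the current plan.
        try:
            i = next(i for i, t in enumerate(plan) if not is_primitive(t))
        except StopIteration:
            break  # fully refined: nothing abstract remains
        # Replace it with its preferred decomposition.
        plan[i:i + 1] = METHODS[plan[i]][0]
    quality = sum(is_primitive(t) for t in plan) / len(plan)
    return plan, quality

if __name__ == "__main__":
    # Give the planner 5 milliseconds; interrupting earlier simply yields
    # a coarser, lower-quality plan.
    plan, quality = anytime_htn_plan("attack_enemy", time.monotonic() + 0.005)
    print(quality, plan)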
Filename: sloman-chrisley-scheutz-emotions.pdf
(Longer version, 2 Dec 2003. Shortened later at request of publisher and editors.)
Title: The Architectural Basis of Affective States and Processes
In Who Needs Emotions?: The Brain Meets the Robot, Ed. M. Arbib and J-M. Fellous, Oxford University Press, Oxford, New York, 2005
Authors: Aaron Sloman, Ron Chrisley and Matthias Scheutz
Abstract:
Much discussion of emotions and related topics is riddled with confusion
because different authors use the key expressions with different
meanings. Some confuse the concept of "emotion" with the more general
concept of "affect", which covers other things besides emotions,
including moods, attitudes, desires, preferences, intentions, dislikes,
etc. Moreover researchers have different goals: some are concerned with
understanding natural phenomena, while others are more concerned with
producing useful artifacts, e.g. synthetic entertainment agents,
sympathetic machine interfaces, and the like. We address this confusion
by showing how "architecture-based" concepts can extend and refine our
pre-theoretical concepts in ways that make them more useful both for
expressing scientific questions and theories, and for specifying
engineering objectives. An implication is that different
information-processing architectures support different classes of
emotions, different classes of consciousness, different varieties of
perception, and so on. We start with high level concepts applicable to a
wide variety of types of natural and artificial systems, including very
simple organisms, namely concepts such as "need", "function",
"information-user", "affect", "information-processing
architecture". For more complex architectures, we offer the CogAff
schema as a generic framework which distinguishes types of components
that may be in an architecture, operating concurrently with different
functional roles. We also sketch H-CogAff, a richly-featured special
case of CogAff, conjectured as a type of architecture that can explain
or replicate human mental phenomena. We show how the concepts that are
definable in terms of such architectures can clarify and enrich research
on human emotions. If successful for the purposes of science and
philosophy, the architecture is also likely to be useful for engineering
purposes, though many engineering goals can be achieved using shallow
concepts and shallow theories, e.g., producing "believable" agents for
computer entertainments. The more human-like robot emotions will
emerge, as they do in humans, from the interactions of many mechanisms
serving different purposes, not from a particular, dedicated "emotion
mechanism".
There is a summary of a review by Zack Lynch here: "Rather than building on the hype surrounding thinking machines the book provides a superb scientific analysis of the current state of emotions research in animals, humans and man-made systems." ... "While technical in parts, this book is an important contribution to the emerging field of emotional neurotechnology. It is a stimulating book that is well edited and researched. I highly recommend Who Needs Emotions? for researchers and graduate students across neuroscience and computer science."
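As an informal illustration of what "architecture-based" concepts look like, the following Python sketch represents the CogAff schema as a grid of component types (three layers crossed with perception, central processing and action) and derives, very crudely, which broad classes of affective states a given population of that grid can support. The class names, the grid representation and the illustrative emotion labels are a simplified reading of the abstract, not the authors' code.

# A rough illustrative sketch of the CogAff schema as a grid of
# component types; names and representation are assumptions.

from dataclasses import dataclass, field

LAYERS = ("reactive", "deliberative", "meta-management")
COLUMNS = ("perception", "central", "action")

@dataclass
class ArchitectureSchema:
    """Which (layer, column) cells an architecture populates, and with what."""
    components: dict = field(default_factory=dict)

    def add(self, layer, column, component_name):
        assert layer in LAYERS and column in COLUMNS
        self.components.setdefault((layer, column), []).append(component_name)

    def emotion_classes_supported(self):
        """Crude illustration of 'architecture-based' concepts: which broad
        classes of affective states the populated layers can support."""
        layers = {layer for layer, _ in self.components}
        classes = []
        if "reactive" in layers:
            classes.append("primary emotions (e.g. being startled)")
        if "deliberative" in layers:
            classes.append("secondary emotions (e.g. anxiety about a plan)")
        if "meta-management" in layers:
            classes.append("tertiary emotions (e.g. losing control of attention)")
        return classes

# An insect-like architecture populates only the reactive layer, so only
# primary-emotion-like states are available to it.
insect = ArchitectureSchema()
insect.add("reactive", "perception", "feature detectors")
insect.add("reactive", "central", "condition-action rules")
insect.add("reactive", "action", "motor reflexes")
print(insect.emotion_classes_supported())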
Filename: kennedy-phd-thesis.pdf
Filename: kennedy-phd-thesis.ps
Title: Distributed Reflective Architectures for Anomaly Detection and Autonomous Recovery
PhD thesis, University of Birmingham, 2003.
Author: Catriona M. Kennedy
Abstract:
In a hostile environment, an autonomous system requires a reflective
capability to detect problems in its own operation and recover from them
without external intervention. We approach this problem from the point
of view of cognitive systems research.
The simplest way to make such an autonomous system reflective is to
include a layer in its architecture to monitor its components' behaviour
patterns and detect anomalies. There are situations, however, where the
reflective layer will not detect anomalies in itself. For example, it
cannot detect that it has just been deleted, or completely replaced with
hostile code.
Our solution to this problem is to distribute the reflection so that components mutually observe and protect each other. Multiple versions of the anomaly-detection system acquire models of each other's "normal" behaviour patterns by mutual observation in a protected environment; they can then compare these models against actual patterns in an environment allowing damage or intrusions, in order to detect anomalies in each other's behaviour. Diagnosis and recovery actions can then follow.
In this thesis we present some proof-of-concept implementations of distributed reflection based on multi-agent systems and show that such systems can survive in a hostile environment while their self-monitoring and self-repair components are repeatedly being attacked.
The thesis also compares the cognitive systems paradigm used in the implementations with the paradigm of distributed fault-tolerance and considers the contributions that one field can make to the other.
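The mutual-observation idea can be illustrated with a small Python sketch, under assumptions: two copies of an anomaly detector each learn a model of the other's "normal" events during a protected training phase, then flag deviations during an unprotected phase. The agent names, event vocabulary and set-based model are invented for this illustration and are much simpler than the thesis implementation.

# A minimal sketch of distributed reflection by mutual observation;
# not Kennedy's actual system.

class ReflectiveAgent:
    def __init__(self, name):
        self.name = name
        self.normal_model = set()   # learned set of normal (peer, event) pairs

    def observe_training(self, peer, events):
        """Protected environment: record the peer's normal behaviour."""
        self.normal_model.update((peer.name, e) for e in events)

    def monitor(self, peer, events):
        """Hostile environment: report events not seen during training."""
        anomalies = [e for e in events if (peer.name, e) not in self.normal_model]
        if anomalies:
            print(f"{self.name}: anomaly in {peer.name}: {anomalies}")
        return anomalies

# Two detector copies protect each other, so an attack that corrupts
# one of them can still be noticed by the other.
a, b = ReflectiveAgent("A"), ReflectiveAgent("B")
a.observe_training(b, ["sense", "decide", "act"])
b.observe_training(a, ["sense", "decide", "act"])

# B starts emitting an event never seen in training (e.g. hostile code).
a.monitor(b, ["sense", "exfiltrate", "act"])   # A detects the anomaly
b.monitor(a, ["sense", "decide", "act"])       # A still looks normal to B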
Filename: sloman-chrisley-jcs03.pdf
Title: Virtual Machines and Consciousness
Authors: Aaron Sloman and Ron Chrisley
Date installed: 12 May 2003; Updated 23 Oct 2015; 7 Aug 2018
NOTE ADDED 23 Oct 2015
Reviews and comments on the paper
-- The collection of papers containing our paper is reviewed here by Catherine Legg, who writes regarding this paper:
Another (brief) review of the whole book, by Stefaan Van Ryssen, is at
http://www.leonardo.info/reviews/apr2005/machine_ryssen.html
There is additional brief discussion of the implications of this paper in
Some of the ideas in our paper are developed further in later papers on this web site, e.g.
Abstract:
Replication or even modelling of consciousness in machines requires some
clarifications and refinements of our concept of consciousness.
Design of, construction of, and interaction with artificial
systems can itself assist in this conceptual development.
We start with the tentative hypothesis that although the
word "consciousness" has no well-defined meaning, it is used to refer
to aspects of human and animal information-processing.
We then argue that we can enhance our understanding of what these
aspects might be by designing and building virtual-machine architectures
capturing various features of consciousness.
This activity may in turn nurture the development of
our concepts of consciousness, showing how an analysis based on
information-processing virtual machines answers old philosophical
puzzles as well as enriching empirical theories. This process of developing
and testing ideas by developing and testing designs leads to gradual
refinement of many of our pre-theoretical concepts of mind, showing how
they can be construed as implicitly "architecture-based" concepts.
Understanding how human-like robots with appropriate architectures are
likely to feel puzzled about qualia may help us resolve those puzzles.
The concept of "qualia" turns out to be an "architecture-based"
concept, while individual qualia concepts are causally indexical
"architecture-driven" concepts.
NOTE
This is an expanded version of talk 9 at
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk9
Filename: kennedy-sloman-jcsr03.ps
Filename: kennedy-sloman-jcsr03.pdf
Title: Autonomous Recovery from Hostile Code Insertion using Distributed
Reflection
Authors: Catriona M. Kennedy and Aaron Sloman
In Journal of Cognitive Systems Research, 4(2), pp. 89-117, 2003
Abstract: In a hostile environment, an autonomous cognitive system requires a reflective capability to detect problems in its own operation and recover from them without external intervention. We present an architecture in which reflection is distributed so that components mutually observe and protect each other, and where the system has a distributed model of all its components, including those concerned with the reflection itself. Some reflective (or "meta-level") components enable the system to monitor its execution traces and detect anomalies by comparing them with a model of normal activity. Other components monitor "quality" of performance in the application domain. Implementation in a simple virtual world shows that the system can recover from certain kinds of hostile code attacks that cause it to make wrong decisions in its application domain, even if some of its self-monitoring components are also disabled.
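As a toy illustration of the execution-trace comparison described above (not the implementation reported in the paper), the following Python sketch learns which events normally follow which during a protected run, then flags transitions in a later trace that the model of normal activity has never seen. The event names and the bigram-style transition model are assumptions made for the example.

# Illustrative sketch of anomaly detection over execution traces.

from collections import defaultdict

class TraceMonitor:
    def __init__(self):
        self.normal_transitions = defaultdict(set)

    def train(self, trace):
        """Learn normal activity from a trace gathered in a protected run."""
        for prev, curr in zip(trace, trace[1:]):
            self.normal_transitions[prev].add(curr)

    def detect(self, trace):
        """Return (position, prev, curr) for each unfamiliar transition."""
        return [(i + 1, p, c)
                for i, (p, c) in enumerate(zip(trace, trace[1:]))
                if c not in self.normal_transitions[p]]

monitor = TraceMonitor()
monitor.train(["sense", "evaluate", "select_action", "act", "sense"])

# A hostile modification makes the agent skip evaluation before acting.
print(monitor.detect(["sense", "select_action", "act"]))
# -> [(1, 'sense', 'select_action')]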
See also the School of Computer Science Web page.
This file is maintained by Aaron Sloman, and designed to be lynx-friendly and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk