THE UNIVERSITY OF BIRMINGHAM
School of Computer Science

THE COGNITION AND AFFECT PROJECT

PAPERS ADDED IN THE YEAR 2003 (APPROXIMATELY)

See also

PAPERS 2003 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE

Closely related publications are available at the web site of Matthias Scheutz

NOTE

This file is http://www.cs.bham.ac.uk/research/projects/cogaff/03.html
Maintained by Aaron Sloman
It contains an index to files in the Cognition and Affect Project's FTP/Web directory that were produced or published in the year 2003. Some of the papers published in this period were produced earlier and are therefore included in one of the lists for an earlier period: http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents

A list of PhD and MPhil theses was added in June 2003

Last updated: 31 Aug 2008; 13 Nov 2010; 7 Jul 2012; 30 Jul 2013


PAPERS IN THE COGNITION AND AFFECT DIRECTORY
Produced or published in 2003 (Approximately)
(Latest first)

Most of the papers listed here are in postscript and PDF format. More recent papers are in PDF only.
For information on free browsers for these formats see http://www.cs.bham.ac.uk/~axs/browsers.html

In some cases other versions of the files (but not Microsoft Word versions) can be provided on request. Email A.Sloman@cs.bham.ac.uk to request a conversion.


The following Contents list (in reverse chronological order) contains links to locations in this file giving further details, including abstracts, and links to the papers themselves.

JUMP TO DETAILED LIST (After Contents)

CONTENTS -- FILES 2003 (Latest added first)

What follows is a list of links to more detailed information about each paper. From there you can select the actual papers, in various formats, e.g. PDF, postscript and, in some cases, HTML.

How Velmans' conscious experiences affected our brains
Authors: Ron Chrisley and Aaron Sloman

Some Foundational Issues Concerning Anticipatory Systems
Author: Ron Chrisley

Artificial Intelligence (Entry in Oxford Companion to the Mind 2nd Ed).
Author: Ron Chrisley

Embodied Artificial Intelligence
Author: Ron Chrisley

Progress report on the Cognition and Affect project
Architectures, Architecture-Schemas, And The New Science of Mind
(2003. Revised: October 2004 and 2008)
Author: Aaron Sloman

Anytime Deliberation for Computer Game Agents
Author: Nick Hawes (PhD Thesis)

The Architectural Basis of Affective States and Processes
Authors: Aaron Sloman, Ron Chrisley and Matthias Scheutz

Tarski, Frege and The Liar Paradox (1971)
Relocated to another file.
Author: Aaron Sloman

Distributed Reflective Architectures for Anomaly Detection and Autonomous Recovery
Author: Catriona M. Kennedy

Virtual Machines and Consciousness
Authors: Aaron Sloman and Ron Chrisley


Title: Autonomous Recovery from Hostile Code Insertion using Distributed Reflection
Authors: Catriona M. Kennedy and Aaron Sloman


DETAILS OF FILES AVAILABLE


BACK TO CONTENTS LIST

Filename: chrisley-sloman-velmans.pdf (Uncorrected proofs)

Title: How Velmans' conscious experiences affected our brains
Authors: Ron Chrisley and Aaron Sloman

Journal of Consciousness Studies, 2002, 9(11), pp. 58-63. Invited commentary on Max Velmans' paper "How could conscious experiences affect brains?", in the same issue of JCS.

Date installed: 17 Jan 2004

Abstract:
Velmans' paper raises three problems concerning mental causation: (1) How can consciousness affect the physical, given that the physical world appears causally closed? (2) How can one be in conscious control of processes of which one is not consciously aware? (3) Conscious experiences appear to come too late to causally affect the processes to which they most obviously relate.

We agree with Velmans that there are philosophical problems concerning the causal efficacy of the experiential which need to be addressed by any proper theory of consciousness. We also agree that some sort of monist metaphysics is required, of the kind needed to explain the relation between virtual machines (in computers, say) and the physical machines in which they are implemented. Despite Velmans' efforts, however, these needs remain unsatisfied. We believe that the clinical, psychological and philosophical methodologies Velmans musters should be supplemented with and informed by experimental, synthetic AI work, in order to facilitate the acquisition of new concepts and the refinement of old concepts that are required for advances in our understanding of the place experience occupies in the natural world.
See also our JCS 2003 paper, below.


Filename: chrisley-anticipation.pdf

Title: Some Foundational Issues Concerning Anticipatory Systems
Author: Ron Chrisley

International Journal of Computing Anticipatory Systems, Volume 11, 2002, pp 3-18. Partial Proceedings of the Fifth International Conference CASYS'01 on Computing Anticipatory Systems, Liege, Belgium, August 13-18, 2001, D. M. Dubois (Ed.), Liege: CHAOS. ISSN 1373-5411; ISBN 2-9600262-5-X


Date installed: 17 Jan 2004

Abstract:
Some foundational conceptual issues concerning anticipatory systems are identified and discussed: 1) The doubly temporal nature of anticipation is noted: anticipations are directed toward one time, and exist at another; 2) Anticipatory systems can be open: they can perturb and be perturbed by states external to the system; 3) Anticipation may be facilitated by a system modeling the relation between its own output, its environment, and its future input; 4) Anticipations must be a part of the system whose anticipations they are. Each of these points is made more precise by considering what changes they require to be made to the basic equation characterising anticipatory systems. In addition, some philosophical questions concerning the content of anticipatory representations are considered.
Keywords: computing anticipatory systems, weak anticipatory systems, modeling, temporal representation, representational content.
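
For readers unfamiliar with "the basic equation characterising anticipatory systems" mentioned in the abstract, the following is a rough sketch of the discrete-time formulations usually associated with Dubois' computing anticipatory systems; it is a reconstruction for orientation only, not an excerpt from the paper.

    % Recursive system: the next state depends only on the current state.
    % Weak anticipation: the next state also depends on a model's prediction
    %   \hat{x}(t+1) of the future state.
    % Strong anticipation (incursion): the actual future state itself appears
    %   in the defining equation.
    \begin{align*}
      \text{recursive:}            \quad & x(t+1) = f\bigl(x(t),\, p\bigr) \\
      \text{weakly anticipatory:}  \quad & x(t+1) = f\bigl(x(t),\, \hat{x}(t+1),\, p\bigr) \\
      \text{strongly anticipatory:}\quad & x(t+1) = f\bigl(x(t),\, x(t+1),\, p\bigr)
    \end{align*}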


Filename: chrisley-oxford-companion.txt

Title: Artificial Intelligence (For Oxford Companion to the Mind)
Author: Ron Chrisley

In Gregory, R. (ed.) The Oxford Companion to the Mind (second edition). (In press).


Date installed: 17 Jan 2004


Filename: chrisley-embodied-ai.pdf

Title: Embodied Artificial Intelligence (Uncorrected proofs)
Author: Ron Chrisley

In Artificial Intelligence, 149(1), 2003


Date installed: 17 Jan 2004

Abstract:
Commentary on Michael L. Anderson, 'Embodied Cognition: A field guide', in the same issue.


Filename: sloman-cogaff-03.pdf

Title: Progress report on the Cognition and Affect project:
Architectures, Architecture-Schemas, And The New Science of Mind
(Original 2003. Revised October 2004 and 2008.)

Author: Aaron Sloman
Date installed: 7 Dec 2003 (Revised Oct 2004. Liable to be further revised.)

Abstract:

The 'Cognition and Affect' project, which was called 'The attention and affect' project for a few years (circa 1991-1993), is a continuation of research on the nature of mind in natural and artificial systems by A. Sloman. That research began around 1970 while he was at Sussex University, was accelerated by a one-year visiting fellowship at the University of Edinburgh in 1972-3, continued during the build-up at Sussex of COGS (The School of Cognitive and Computing Sciences), and accelerated further after he moved to the University of Birmingham in 1991.

The work is a mixture of philosophy, science and engineering, concerned especially with the role of explanatory architectures. In this it overlaps with Marvin Minsky's work on The Emotion Machine.

This report was triggered partly by a consultation for DARPA regarding cognitive systems and partly by the need to write a final report for the Leverhulme-funded project on 'Evolvable virtual information processing architectures for human-like minds' (1999-2003), on which there were three research fellows in sequence: Brian Logan, Matthias Scheutz and Ron Chrisley. Several PhD students at the University of Birmingham also contributed.

The Leverhulme project has ended but work arising out of it continues, as will the Cognition and Affect project, with or without funding. Ongoing activities include a grand challenge proposal and European Community research initiatives, including this initiative on models of consciousness.

A major new robotic project funded by the EC started in September 2004: CoSy (Cognitive systems for cognitive assistants)

It is not clear when this report will be completed, if ever. A decision was therefore taken to make it available for anyone interested, after I learnt that it might even be useful to sociologists interested in these topics.

When the report is updated, the Date above will change.
These ideas are constantly under development. Recent changes:
http://www.cs.bham.ac.uk/research/projects/cogaff/#overview
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html


Filename: nick-hawes-phd-thesis.pdf
Filename: nick-hawes-phd-thesis.ps

Title: Anytime Deliberation for Computer Game Agents

PhD thesis, School of Computer Science, The University of Birmingham, 2003.
Author: Nick Hawes
Date installed: (original: 30 Nov 2003, reformatted with improved font: 9 Mar 2004)

Abstract:
This thesis presents an approach to generating intelligent behaviour for agents in computer game-like worlds. Designing and implementing such agents is a difficult task because they are required to act in real-time and respond immediately to unpredictable changes in their environment. Such requirements have traditionally caused problems for AI techniques.

To enable agents to generate intelligent behaviour in real-time, complex worlds, research has been carried out into two areas of agent construction. The first of these areas is the method used by the agent to plan future behaviour. To allow an agent to make efficient use of its processing time, a planner is presented that behaves as an anytime algorithm. This anytime planner is a hierarchical task network planner which allows a planning agent to interrupt its planning process at any time and trade-off planning time against plan quality.

The second area of agent construction that has been researched is the design of agent architectures. This has resulted in an agent architecture with the functionality to support an anytime planner in a dynamic, complex world. A proof-of-concept implementation of this design is presented which plays Unreal Tournament and displays behaviour that varies intelligently as it is placed under pressure.
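
As a rough illustration of the anytime idea described in the abstract above (a sketch only, not Hawes' actual planner), a planner of this kind keeps the best plan found so far and can be stopped at any point, trading planning time against plan quality:

    import time

    def anytime_plan(initial_plan, refine, quality, budget_s):
        """Return the best plan found within budget_s seconds.

        refine(plan) returns an improved plan, or None when no further
        refinement is possible; quality(plan) scores a plan. Stopping
        early (a small budget) still yields a usable plan.
        """
        best, best_q = initial_plan, quality(initial_plan)
        current = initial_plan
        deadline = time.monotonic() + budget_s
        while time.monotonic() < deadline:
            candidate = refine(current)
            if candidate is None:      # fully refined: nothing left to improve
                break
            current = candidate
            q = quality(candidate)
            if q > best_q:             # keep the best plan seen so far
                best, best_q = candidate, q
        return best                    # always returns *some* plan

The names refine, quality and budget_s are illustrative assumptions; the thesis itself applies the anytime idea within a hierarchical task network planner embedded in an agent architecture.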


Filename: sloman-chrisley-scheutz-emotions.pdf

(Longer version, 2 Dec 2003. Shortened later at request of publisher and editors.)

Title: The Architectural Basis of Affective States and Processes

In Who Needs Emotions?: The Brain Meets the Robot, Ed. M. Arbib and J-M. Fellous, Oxford University Press, Oxford, New York, 2005
Authors: Aaron Sloman, Ron Chrisley and Matthias Scheutz
The version published by OUP was gratuitously changed and mangled in many ways by the copy-editor, who mostly did not understand what was being said, despite strong protestations by the authors. OUP (at least in the USA) do not understand the requirements for publishing scientific texts: they use out-of-date style guides designed for literary texts. See http://www.cs.bham.ac.uk/~axs/publishing.html
Date installed: 14 Dec 2003
(There was a longer earlier version 2 Dec 2003)

Abstract:
Much discussion of emotions and related topics is riddled with confusion because different authors use the key expressions with different meanings. Some confuse the concept of "emotion" with the more general concept of "affect", which covers other things besides emotions, including moods, attitudes, desires, preferences, intentions, dislikes, etc. Moreover, researchers have different goals: some are concerned with understanding natural phenomena, while others are more concerned with producing useful artifacts, e.g. synthetic entertainment agents, sympathetic machine interfaces, and the like. We address this confusion by showing how "architecture-based" concepts can extend and refine our pre-theoretical concepts in ways that make them more useful both for expressing scientific questions and theories, and for specifying engineering objectives. An implication is that different information-processing architectures support different classes of emotions, different classes of consciousness, different varieties of perception, and so on. We start with high level concepts applicable to a wide variety of types of natural and artificial systems, including very simple organisms, namely concepts such as "need", "function", "information-user", "affect", "information-processing architecture". For more complex architectures, we offer the CogAff schema as a generic framework which distinguishes types of components that may be in an architecture, operating concurrently with different functional roles. We also sketch H-CogAff, a richly-featured special case of CogAff, conjectured as a type of architecture that can explain or replicate human mental phenomena. We show how the concepts that are definable in terms of such architectures can clarify and enrich research on human emotions. If successful for the purposes of science and philosophy, the architecture is also likely to be useful for engineering purposes, though many engineering goals can be achieved using shallow concepts and shallow theories, e.g., producing "believable" agents for computer entertainments. The more human-like robot emotions will emerge, as they do in humans, from the interactions of many mechanisms serving different purposes, not from a particular, dedicated "emotion mechanism".
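
A toy sketch of the kind of structure the CogAff schema describes (a grid of component types that may operate concurrently in different functional roles) is given below. The layer and column names follow the published schema, but everything else is an illustrative assumption rather than the authors' code:

    from dataclasses import dataclass, field

    LAYERS = ("reactive", "deliberative", "meta-management")
    COLUMNS = ("perception", "central processing", "action")

    @dataclass
    class Component:
        name: str
        def step(self, inputs):
            # A real component would transform percepts, goals, plans, alarms, etc.
            return {}

    @dataclass
    class CogAffGrid:
        # (layer, column) -> list of components occupying that slot
        slots: dict = field(default_factory=dict)

        def add(self, layer, column, component):
            assert layer in LAYERS and column in COLUMNS
            self.slots.setdefault((layer, column), []).append(component)

        def step(self, inputs):
            # Every occupied slot is stepped on each cycle, crudely modelling
            # components with different functional roles running concurrently.
            return {slot: [c.step(inputs) for c in comps]
                    for slot, comps in self.slots.items()}

    # Example: a minimal agent with a reactive perception component and a
    # deliberative central component.
    grid = CogAffGrid()
    grid.add("reactive", "perception", Component("edge-detector"))
    grid.add("deliberative", "central processing", Component("planner"))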


There is a summary of a review by Zack Lynch here.

"Rather than building on the hype surrounding thinking machines the book provides a superb scientific analysis of the current state of emotions research in animals, humans and man-made systems." .... "While technical in parts, this book is an important contribution to the emerging field of emotional neurotechnology. It is a stimulating book that is well edited and researched. I highly recommend Who Needs Emotions? for researchers and graduate students across neuroscience and computer science."


Tarski, Frege and The Liar Paradox (1971)
Relocated to another file. (7 Feb 2016)

Filename: kennedy-phd-thesis.pdf
Filename: kennedy-phd-thesis.ps

Title: Distributed Reflective Architectures for Anomaly Detection and Autonomous Recovery

PhD thesis, University of Birmingham, 2003.

Author: Catriona M. Kennedy
Date installed: 16 Jun 2003

Abstract:
In a hostile environment, an autonomous system requires a reflective capability to detect problems in its own operation and recover from them without external intervention. We approach this problem from the point of view of cognitive systems research. The simplest way to make such an autonomous system reflective is to include a layer in its architecture to monitor its components' behaviour patterns and detect anomalies. There are situations, however, where the reflective layer will not detect anomalies in itself. For example, it cannot detect that it has just been deleted, or completely replaced with hostile code.

Our solution to this problem is to distribute the reflection so that components mutually observe and protect each other. Multiple versions of the anomaly-detection system acquire models of each other's "normal" behaviour patterns by mutual observation in a protected environment; they can then compare these models against actual patterns in an environment allowing damage or intrusions, in order to detect anomalies in each other's behaviour. Diagnosis and recovery actions can then follow.

In this thesis we present some proof-of-concept implementations of distributed reflection based on multi-agent systems and show that such systems can survive in a hostile environment while their self-monitoring and self-repair components are repeatedly being attacked.

The thesis also compares the cognitive systems paradigm used in the implementations with the paradigm of distributed fault-tolerance and considers the contributions that one field can make to the other.
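
As a very small illustration of the mutual-observation idea in the abstract above (a hypothetical sketch, not code from the thesis), each agent learns a model of its partner's normal actions in a protected training phase and then flags departures from that model:

    class ReflectiveAgent:
        """Each agent monitors a partner, not itself."""
        def __init__(self, name):
            self.name = name
            self.partner = None
            self.model_of_partner = set()   # "normal" actions seen in training

        def observe_training(self, partner_action):
            self.model_of_partner.add(partner_action)

        def observe_live(self, partner_action):
            if partner_action not in self.model_of_partner:
                return "%s: anomaly in %s: %r" % (self.name, self.partner.name, partner_action)
            return None

    a, b = ReflectiveAgent("A"), ReflectiveAgent("B")
    a.partner, b.partner = b, a

    # Protected phase: mutual observation builds models of normal behaviour.
    for action in ("sense", "decide", "act"):
        a.observe_training(action)
        b.observe_training(action)

    # Hostile phase: A detects an inserted action in B, even though B cannot
    # detect tampering with itself (and vice versa).
    print(a.observe_live("overwrite_rulebase"))   # reports an anomaly
    print(b.observe_live("decide"))               # None: normal behaviour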


Filename: sloman-chrisley-jcs03.pdf

Title: Virtual Machines and Consciousness
Authors: Aaron Sloman and Ron Chrisley
Date installed: 12 May 2003; Updated 23 Oct 2015; 7 Aug 2018

In Journal of Consciousness Studies, 10, No. 4-5, 2003.
This is a special issue on Machine Consciousness, edited by Owen Holland, also published as a book.
(The published version of the paper has a different format from the version here.)

NOTE ADDED 23 Oct 2015

NOTE 1 in Section 1 on cluster concepts was extended with this comment:
Added 23 Oct 2015: It is now clear that the notion of a "cluster concept" has less explanatory power than the notion of "polymorphism" of concepts, especially "parametric polymorphism", as explained in (Sloman 2010). So instead of the noun "consciousness" we should analyse similarities and differences between uses of sentences of the forms "X is conscious of Y" and "X is conscious that P" for different values of X, Y and P. See also
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/family-resemblance-vs-polymorphism.html

Reviews and comments on the paper

-- A detailed commentary (and tutorial) on our paper by Marcel Kvassay (who was previously unknown to me), comparing and contrasting our ideas with the anti-reductionism of David Chalmers, was posted on August 16, 2012 at
http://marcelkvassay.net/article.php?id=machine
and later converted to PDF: http://marcelkvassay.net/pdf/machines.pdf

-- The collection of papers containing our paper is reviewed here by Catherine Legg, who writes regarding this paper:

"This is an original and very interesting paper, which, if taken seriously, has the potential to change the methodology of much philosophy of mind, from seeking to find 'correct' conceptual analyses of inherently indeterminate folk mental concepts, to exploring and experimentally testing spaces of more determinate concepts discovered a posteriori."

Another (brief) review of the whole book, by Stefaan Van Ryssen, is at
http://www.leonardo.info/reviews/apr2005/machine_ryssen.html

There is additional brief discussion of the implications of this paper in

Susan Blackmore's contribution to theguardian.com, Monday 12 July 2010
"Science explains, not describes"
http://www.theguardian.com/commentisfree/belief/2010/jul/12/science-religion-philosophy
(Also in the second edition of her textbook on consciousness.)
and in
Elizabeth Irvine, Consciousness as a Scientific Concept: A Philosophy of Science Perspective, Springer, 2013; Draft here.
See http://www.springer.com/philosophy/book/978-94-007-5172-9

Some of the ideas in our paper are developed further in later papers on this web site, e.g.

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html and
in presentations related to virtual machinery in
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
especially
  • Talk 86: Supervenience and Causation in Virtual Machinery
  • Talk 85: Daniel Dennett on Virtual Machines
  • Talk 84: Using virtual machinery to bridge the "explanatory gap"
    Or: Helping Darwin: How to Think About Evolution of Consciousness

Abstract:
Replication or even modelling of consciousness in machines requires some clarifications and refinements of our concept of consciousness. Design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word "consciousness" has no well-defined meaning, it is used to refer to aspects of human and animal information-processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly "architecture-based" concepts. Understanding how human-like robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of "qualia" turns out to be an "architecture-based" concept, while individual qualia concepts are causally indexical "architecture-driven" concepts.

NOTE
This is an expanded version of talk 9 at http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk9


Filename: kennedy-sloman-jcsr03.ps
Filename: kennedy-sloman-jcsr03.pdf
Title: Autonomous Recovery from Hostile Code Insertion using Distributed Reflection
Authors: Catriona M. Kennedy and Aaron Sloman

In Journal of Cognitive Systems Research, 4(2), pp. 89-117, 2003

Date: February 2003

Abstract: In a hostile environment, an autonomous cognitive system requires a reflective capability to detect problems in its own operation and recover from them without external intervention. We present an architecture in which reflection is distributed so that components mutually observe and protect each other, and where the system has a distributed model of all its components, including those concerned with the reflection itself. Some reflective (or "meta-level") components enable the system to monitor its execution traces and detect anomalies by comparing them with a model of normal activity. Other components monitor "quality" of performance in the application domain. Implementation in a simple virtual world shows that the system can recover from certain kinds of hostile code attacks that cause it to make wrong decisions in its application domain, even if some of its self-monitoring components are also disabled.
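
To make the execution-trace comparison in the abstract concrete, here is a toy illustration (an assumption-laden sketch, not the paper's implementation) in which fixed-length windows of events from an observed trace are compared against a model built from normal activity:

    def windows(trace, n=2):
        """All length-n windows of an event trace."""
        return {tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)}

    # Model of "normal" activity, built while the system is known to be healthy.
    normal_trace = ["sense", "plan", "act", "sense", "plan", "act"]
    normal_model = windows(normal_trace)

    # Live trace with hostile code inserted into the decision cycle.
    observed = ["sense", "plan", "inject_code", "act"]
    anomalies = windows(observed) - normal_model
    print(anomalies)   # windows involving 'inject_code' are not in the model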


BACK TO CONTENTS LIST


NOTE


Older files in this directory (pre-2003) are accessible via the main index.


RETURN TO MAIN COGAFF INDEX FILE

See also the School of Computer Science Web page.

This file is maintained by Aaron Sloman, and designed to be lynx-friendly and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk