PAPERS ADDED BETWEEN 1981 AND 1995 (APPROXIMATELY)
(Some of them published in 1996 or later)
Plus a few earlier papers added to this list later.
PAPERS 1981 -- 1995 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html
Maintained by Aaron Sloman
It contains an index to files in the Cognition and Affect project
FTP/Web directory, covering papers written before 1996.
Some of the papers by Aaron Sloman listed here were written while he was at the University of Sussex. He moved to the University of Birmingham in July 1991.
Last updated:
3 Jan 2010; 13 Nov 2010; 7 Jul 2012; .... 11 Apr 2014;
28 May 2015; 29 Sep 2015; 18 Sep 2017; 7 Jan 2018; 11 Mar 2018
Most of the papers listed here are in compressed or uncompressed
PostScript format. Some are LaTeX or plain ASCII text. Most are also
available in PDF.
For information on free browsers for these formats see
http://www.cs.bham.ac.uk/~axs/browsers.html
PDF versions of PostScript files can be provided on request. Please email A.Sloman@cs.bham.ac.uk requesting conversion.
Papers are listed below roughly in reverse chronological order.
What Sorts Of Machines Can Understand The Symbols They Use?
Published 1986. Installed here Aug 2011. Moved here 29 Nov 2018
Author:
Aaron Sloman
Title: A new continuous propositional logic (1995)
Author: Riccardo Poli, Mark Ryan, Aaron Sloman
Date installed: 7 Jan 2018
POPLOG's Two-level Virtual Machine Support for Interactive Languages
Authors: Robert Smith, Aaron Sloman and John Gibson
Moved here from another file: 5 Jan 2018
Title: Commentary on Boden on "Artificial Intelligence and Animal Psychology"
Author: Aaron Sloman
Published 1983, installed here 2008;
Transferred to this file 18 Sep 2017
Title: Real Time Multiple-Motive Expert Systems
Moved here 2 Feb 2017
Title: Experiencing Computation: A tribute to Max Clowes
With biography and bibliography
Author: Aaron Sloman
Moved here 26 Feb 2016
Title: Bread today, jam tomorrow: The impact of AI on education
Authors: Benedict du Boulay and Aaron Sloman
Date installed: 23 Feb 2016 (Published 1988)
What are the purposes of vision?
Author: Aaron Sloman
Based on Presentation at Fyssen Foundation Workshop on Vision,
Versailles France, March 1986, Organiser: M. Imbert
Title: Deep and shallow simulations (BBS commentary on Colby)
Author: Aaron Sloman
Title: Did Searle attack strong strong or weak strong AI? (1985)
Author: Aaron Sloman
Computational Epistemology (1982)
From a workshop on Genetic Epistemology and Artificial Intelligence
Geneva 1980
Author:
Aaron Sloman (Installed here: 25 Jan 2014)
Developing concepts of consciousness
(Commentary on Velmans, BBS, 1991)
Author:
Aaron Sloman (Installed here: 4 Jun 2013)
Title: A Suggestion About Popper's Three Worlds
In the Light of Artificial Intelligence
(Previously: Artificial Intelligence and Popper's Three Worlds)
Author:
Aaron Sloman (Installed here: 9 Oct 2012)
Title: A Personal View Of Artificial Intelligence
Preface to Computers and Thought 1989 (by Sharples et al).
Author: Aaron Sloman (Installed here: 4 Sep 2012)
The structure of the space of possible minds
Aaron Sloman
Towards a Computational Theory of Mind
Aaron Sloman
Title: Skills, Learning and Parallelism
In Proceedings 3rd Cognitive Science Conference, Berkeley, 1981, pp
284-5.
Author: Aaron Sloman
Title: Simulating agents and their environments
Authors: Darryl Davis, Aaron Sloman and Riccardo Poli
Title: Towards a Grammar of Emotions
Author: Aaron Sloman
Title: Beginners Need Powerful Systems
Author: Aaron Sloman
Title: The Evolution of Poplog and Pop-11 at Sussex University
Author: Aaron Sloman
Title: The primacy of non-communicative language (1979)
Author: Aaron Sloman
Now moved to another file
Title: A Philosophical Encounter
Authors: Aaron Sloman
Title: Exploring design space and niche space
Authors: Aaron Sloman
Title: A Hybrid Trainable Rule-based System
Authors: Riccardo Poli and Mike Brayshaw
Title: Information about the SIM_AGENT toolkit
Authors: Aaron Sloman and Riccardo Poli
Title: Goal processing in autonomous agents
Author: Luc P. Beaudoin
Title: Why robots will have emotions
Authors: Aaron Sloman and Monica Croucher
Title: An Emotional Agent -- The Detection and Control of Emergent
States in an Autonomous Resource-Bounded Agent
Author: Ian Wright
Title: Computational Constraints on Associative Learning,
Author: Edmund Shing
Title: Geneva Emotion Week 1995
Title: Towards a general theory of representations
Author: Aaron Sloman
Title: Applying Systemic Design to the study of `emotion'
Author: Tim Read
Title: Computational Constraints for Associative Learning
Author: Edmund Shing
Title: Explorations in Design Space
Author: Aaron Sloman
Title: Representations as control substates (DRAFT)
Author: Aaron Sloman
Title: Semantics in an intelligent control system
Author: Aaron Sloman
Title: A Summary of the Attention and Affect Project
Author: Ian Wright
Title: Varieties of Formalisms for Knowledge Representation
Author: Aaron Sloman
Title: Systemic Design: A Methodology For Investigating Emotional
Author: Tim Read
Title: The Terminological Pitfalls of Studying Emotion
Authors: Tim Read and Aaron Sloman
Title: Cassandra: Planning with contingencies
Authors: Louise Pryor and Gregg Collins
Title: The Mind as a Control System,
Author: Aaron Sloman
Title: Prospects for AI as the General Science of Intelligence
Author: Aaron Sloman
Title: A study of motive processing and attention,
Authors: Luc P. Beaudoin and Aaron Sloman
Title: What are the phenomena to be explained?
Author: Aaron Sloman
Title: Towards an information processing theory of emotions
Author: Aaron Sloman
Title: Silicon Souls, How to design a functioning mind
Author: Aaron Sloman
Title: The Emperor's Real Mind (Review of Penrose)
Author: Aaron Sloman
Title: Prolegomena to a Theory of Communication and Affect
Author: Aaron Sloman
Title: A Proposal for a Study of Motive Processing
Authors: Luc Beaudoin and Aaron Sloman
PhD Thesis proposal for Luc Beaudoin.
Title: Notes on consciousness
Author: Aaron Sloman
Title: How to dispose of the free will issue
Author: Aaron Sloman
Title: Motives Mechanisms and Emotions
Author: Aaron Sloman
Title: Reference without causal links,
Author: Aaron Sloman
Title: What enables a machine to understand?
Author: Aaron Sloman
Title: Why we need many knowledge representation formalisms,
Author: A.Sloman
Filename: Poli-EPIA1995.pdf (PDF)
Title: A new continuous propositional logic
Author: Riccardo Poli, Mark Ryan, Aaron Sloman
Date Installed: 7 Jan 2018
Where published:
Portuguese Conference on Artificial Intelligence: Progress in Artificial Intelligence
EPIA 1995: pp 17-28
Abstract:
In this paper we present Minimal Polynomial Logic (MPL), a generalisation of classical propositional logic which allows truth values in the continuous interval [0, 1] and in which propositions are represented by multi-variate polynomials with integer coefficients.
The truth values in MPL are suited to represent the probability of an assertion being true, as in Nilsson's Probabilistic Logic, but can also be interpreted as the degree of truth of that assertion, as in Fuzzy Logic. However, unlike fuzzy logic MPL respects all logical equivalences, and unlike probabilistic logic it does not require explicit manipulation of possible worlds.
In the paper we describe the derivation and the properties of this new form of logic and we apply it to solve and better understand several practical problems in classical logic, such as satisfiability.
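As a rough illustration of the polynomial encoding: the sketch below uses the standard continuous connectives (negation as 1-p, conjunction as product, disjunction by inclusion-exclusion). This is only an approximation to MPL's own construction, which uses minimal polynomials with integer coefficients.

    # Sketch: propositions as polynomials over truth values in [0, 1].
    # These are the standard continuous connectives, NOT the paper's exact
    # construction: MPL's minimal polynomials also reduce powers (x^2 -> x),
    # which is what makes idempotent laws such as p AND p = p hold.

    def NOT(p):    return 1 - p
    def AND(p, q): return p * q
    def OR(p, q):  return p + q - p * q   # inclusion-exclusion

    # On 0/1 inputs the polynomials agree with the classical truth tables:
    assert NOT(0) == 1 and AND(1, 0) == 0 and OR(1, 0) == 1

    # Many classical equivalences hold as polynomial identities, e.g. De Morgan:
    p, q = 0.3, 0.8
    assert abs(NOT(AND(p, q)) - OR(NOT(p), NOT(q))) < 1e-12

    # Intermediate values can be read as probabilities or as degrees of truth:
    print(AND(0.9, 0.5))   # 0.45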
[Relocated from another file 3 Jan 2018]
Filename:
smith-gibson-sloman-1992.pdf (10MB OCR-PDF)
Title:
POPLOG's Two-level Virtual Machine Support for Interactive Languages
Authors: Robert Smith, Aaron Sloman and John Gibson
Date Installed: 2 Jul 2010
Date published: 1992
Where published:
In Research Directions in Cognitive Science Volume 5: Artificial Intelligence,
Eds. D. Sleeman and N. Bernsen, Lawrence Erlbaum Associates, pp. 203--231, 1992,
Abstract:
Poplog is a portable interactive AI development environment available on a range of operating systems and machines. It includes incremental compilers for Common Lisp, Pop-11, Prolog and Standard ML, along with tools for adding new incremental compilers. All the languages share a common development environment, and data structures can be shared between programs written in the different languages. The power and portability of Poplog depend on its two virtual machines: a high level virtual machine (PVM -- the Poplog Virtual Machine) serving as a target for compilers for interactive languages, and a low level virtual machine (PIM -- the Poplog Implementation Machine) as a base for translation to machine code. A machine-independent and language-independent code generator translates from the PVM to the PIM, enormously simplifying both the task of producing a new compiler and porting to new machines.
See also Poplog and Pop-11 on Wikipedia:
https://en.wikipedia.org/wiki/Poplog
https://en.wikipedia.org/wiki/POP-11
Poplog information and downloads:
http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog.html
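The division of labour the abstract describes can be pictured with a toy pipeline (hypothetical names and instruction sets, nothing like the real PVM/PIM): language front ends emit high-level stack-machine ops, and one shared lowering pass translates them for execution.

    # Toy two-level pipeline. Source for (1 + 2) * 3 has been compiled
    # by a front end into high-level stack-machine ops:
    pvm_code = [("push", 1), ("push", 2), ("call", "+"),
                ("push", 3), ("call", "*")]

    def lower(code):
        """Shared, language-independent translation from high-level ops
        to a smaller low-level op set (the analogue of PVM -> PIM)."""
        ops = {"+": ("ADD",), "*": ("MUL",)}
        out = []
        for op, *args in code:
            out.append(("PUSH", args[0]) if op == "push" else ops[args[0]])
        return out

    def run(pim_code):
        """Stand-in for the final translation to machine code: just
        interpret the low-level ops on a stack."""
        stack = []
        for op, *args in pim_code:
            if op == "PUSH": stack.append(args[0])
            elif op == "ADD": b, a = stack.pop(), stack.pop(); stack.append(a + b)
            elif op == "MUL": b, a = stack.pop(), stack.pop(); stack.append(a * b)
        return stack.pop()

    print(run(lower(pvm_code)))   # 9

Adding a language then means writing only a front end that emits the high-level ops, and porting to a new machine means reimplementing only the low-level layer -- the economy the abstract describes.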
[Relocated from
another file 18 Sep 2017]
Filename: sloman-on-boden-1983.pdf
Title: Commentary on Boden on "Artificial Intelligence and
Animal Psychology"
Authors: Aaron Sloman
Date Published: 1983
Date Installed: 16 Dec 2008
Where published:
New Ideas in Psychology
vol. 1, no. 1, pp. 41--50.
Abstract: (Introduction to article)
Having discussed these issues with the author over many years, I was not surprised to find myself agreeing with nearly everything in the paper, and admiring the clarity and elegance of its presentation. All I can offer by way of commentary, therefore, is a collection of minor quibbles, some reformulations to help readers for whom the computational approach is very new, and a few extensions of the discussion.
Extracts:
WHAT IS ARTIFICIAL INTELLIGENCE?
I'll start with a few explanatory comments on the nature of A.I., to supplement the section of the paper "A.I. as the Study of Representation". Cognitive Science has three main classes of goals: (a) theoretical (the study of possible minds, possible forms of representation and computation), (b) empirical (the study of actual minds and mental abilities of humans and other animals), (c) practical (the attempt to help individuals and society by alleviating problems (e.g. learning problems, mental disorders) and designing new useful intelligent machines).
Activities pursuing these three goals are most fruitful when the goals are interlinked, providing opportunities for feedback between theoretical, empirical and applied work. Artificial Intelligence is a subdiscipline of Cognitive Science which straddles the theoretical approach (studying general properties of possible computational systems) and applications (designing new systems to help in education, industry, commerce, medicine, entertainment). Its empirical content is mostly based not on specialised research, but on common knowledge of many of the things people can do - such as using and understanding language, seeing things, making plans, solving problems, playing games. This knowledge of what people can do sets design goals for both the theoretical and the applied work. In particular, an important aspect of A.I. research is task analysis: given that people can perform a certain task, what are the computational resources required, and what are the trade-offs between different representations and processing strategies? This sort of analysis is relevant to the study of other animals insofar as many human abilities are shared with other animals.
(Moved here 2 Feb 2017)
Filename: sloman-realtime-bcs86.pdf
Title: Real Time Multiple-Motive Expert Systems
Date added: 8 May 2004 (Originally Published 1985).
Abstract:
Sooner or later attempts will be made to design systems capable of dealing with a steady flow of sensor data and messages, where actions have to be selected on the basis of multiple, not necessarily consistent, motives, and where new information may require substantial re-evaluation of plans and strategies, including suspension of current actions. Where the world is not always friendly, and events move quickly, decisions will often have to be made which are time-critical. The requirements for this sort of system are not clear, but it is clear that they will require global architectures very different from present expert systems or even most AI programs. This paper attempts to analyse some of the requirements, especially the role of macroscopic parallelism and the implications of interrupts. It is assumed that the problems of designing various components of such a system will be solved, e.g. visual perception, memory, inference, planning, language understanding, plan execution, etc. This paper is about some of the problems of putting them together, especially perception, decision-making, planning and plan-execution systems.
Filename: sloman-clowestribute.html
Filename: sloman-clowestribute.pdf
Title: Experiencing Computation: A tribute to Max Clowes
With biography and bibliography added 2014
(Originally appeared in Computing in Schools 1981)
Author: Aaron Sloman
Date installed: 11 Feb 2001 (Originally published 1981) (Updated 13 Apr 2014)
Abstract:
Max Clowes (pronounced as if spelt Clues, or Klews) was one of the pioneers of AI vision research in the UK. He inspired and helped to develop Artificial Intelligence and computational Cognitive Science at the University of Sussex. In 1981 he tragically died, shortly after leaving the University in order to work on computing in schools. This paper was originally published in 1981 in Computing in Schools, and was later re-published in New Horizons in Educational Computing, Ed. Masoud Yazdani, 1984, pp. 207--219 (Ellis Horwood Series In Artificial Intelligence).
The version installed here in 2001 had some footnotes added, referring to subsequent developments influenced by the work or ideas of Max Clowes.
In March 2014, a personal recollection and tribute from Wendy Manktellow (née Taylor) was added as an appendix:
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-clowestribute.html#wendy
In April 2014 I added a draft annotated biography and list of publications of Max Clowes, also as an appendix:
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-clowestribute.html#bio
making use of information found on the internet, on my bookshelves, or supplied by former colleagues and students.
There are several gaps, so contributions to fill those gaps will be much appreciated -- also electronic copies of papers by Max Clowes not already indicated as being available online.
(Is anyone willing to create a Wikipedia entry using this material as a base?)
Filename: jam-tomorrow-duboulay-sloman.html (HTML)
Filename: jam-tomorrow-duboulay-sloman.pdf (PDF)
Title: Bread today, jam tomorrow: The impact of AI on education
Authors: Benedict du Boulay and Aaron Sloman
Date Installed here: 23 Feb 2016
Where published:
Fifth International Conference on Technology and Education
Education In The 90s: Challenges Of The New Information Technologies
Edinburgh, Scotland 28 - 31 March 1988
Also here (but no longer available):
Cognitive Science Research Papers
Serial No. CSRP 098
School of Cognitive Sciences
University of Sussex
Brighton, BN1 9QN, England
Abstract:
Several factors make it very difficult to automate skilled teacher-student interactions, e.g. integrating new material in a way that links effectively to the student's existing knowledge, taking account of the student's goals and beliefs, and adjusting the form of presentation as appropriate. These difficulties are illustrated with examples from teaching programming. There are domain-specific and domain-neutral problems in designing ITS (intelligent tutoring systems). The domain-neutral problems include: encyclopaedic knowledge, combining different kinds of knowledge, knowing how to devise a teaching strategy, knowing how to monitor and modify the strategy, knowing how to motivate intellectual curiosity, understanding the cognitive states and processes involved in needing (wanting) an explanation, knowing how to cope with social and affective processes, various communicative skills (this includes some of the others), knowing how to use various representational and communicative media, and knowing when to use them (an example of strategy).
Filename: sloman-understand-symbols.pdf
Title:
What Sorts Of Machines Can Understand The Symbols They Use?
Author: Aaron Sloman
Date Installed: 29 Aug 2011 (Published July 1986)
Where published:
Invited contribution:
Joint Session of Mind Association and Aristotelian Society July 1986
Reply was presented by L.Jonathan Cohen, Oxford.
Published in Proceedings of the Aristotelian Society,
Supplementary Volume LX, 1986 pages 61--80,
Stable URL, including reply by Cohen: http://www.jstor.org/stable/4106898
Abstract: (Partial extract from text)
My topic is a specialised variant of the old philosophical question `could a machine think?'. Some say it is only a matter of time before computer-based artefacts will behave as if they had thoughts and perhaps even feelings, pains or any other occupants of the human mind, conscious or unconscious. I shall not pre-judge this issue. The space of possible computing systems is so vast, and we have explored such a tiny corner, that it would be as rash to pronounce on what we may or may not discover in our future explorations as to predict what might or might not be expressible in print shortly after its invention. Instead I'll merely try to clarify what we might look for.
Like Searle ([11,12]) I'll focus on a specific type of thought, namely understanding symbols. Clearly, artefacts like card-sorters, optical character readers, voice-controlled machines, and automatic translators, manipulate symbols. Do they understand the symbols? Some machines behave as if they do, at least in a primitive way. They respond to commands by performing tasks; they print out answers to questions; they paraphrase stories or answer questions about them. We understand the symbols, but do THEY?A `design stance' helps to clarify the question whether machines themselves can understand symbols in a non-derivative way. It is not enough that machines appear from the outside to mimic human understanding: there must be a reliable basis for assuming that they can display understanding in an open-ended range of situations, not all anticipated by the programmer. I have briefly described structural and functional design requirements for this, and argued that even the simplest computers use symbols in such a manner that the machines themselves associate meanings of a primitive sort with them.
I have shown that a computer may use symbols to refer to its own internal states and to abstract objects; and indicated how it might refer to a world to which it has only limited access, relying on the use of axiom-systems or perception-action loops to constrain possible interpretations. These constraints leave meanings partly indeterminate and indefinitely extendable. Causal links reduce but do not remove indeterminacy.
The full range of meaningful uses of symbols by human beings requires a type of architectural complexity not yet achieved in AI systems.
There is a complex set of prototypical conditions for understanding, different subsets of which may be exemplified in different animals or machines, yielding a large space of possible systems which we are only just beginning to explore. Our ordinary labels are not suited to drawing a definite global boundary within such a space. At best we can analyse the implications of many different boundaries, all very important. This requires a long term multi-disciplinary exploration.
Filename: sloman-aslib83.pdf
Title:
An Overview Of Some Unsolved Problems In Artificial Intelligence
Author: Aaron Sloman
Date: 1983 (installed here 19 Mar 2012)
Where published:
Intelligent Information Retrieval: Informatics 7, 1983 (pp.3--14)
Ed. Kevin P. Jones
Proceedings Cambridge Aslib Informatics 7 Conference, Cambridge 22-23 March 1983.
Abstract (Extract from Introduction):
These long-term problems are concerned with the aim of designing really intelligent systems. Of course, it is possible to quibble endlessly about the definition of 'intelligent', and to argue about whether machines will ever really be intelligent, conscious, creative, etc. I want to by-pass such semantic debates by indicating what I understand by the aim of designing intelligent machines. I shall present a list of criteria which I believe are implicitly assumed by many workers in Artificial Intelligence to define their long term aims. Whether these criteria correspond exactly to what the word 'intelligent' means in ordinary language is an interesting empirical question, but is not my present concern.
Moreover, it is debatable whether we should attempt to make machines which meet these criteria, but for present purposes I shall take it for granted that this is a worthwhile enterprise, and address some issues about the nature of the enterprise.
Finally, it is not obvious that it is possible to make artefacts meeting these criteria. For now I shall ignore all attempts to prove that the goal is unattainable. Whether it is attainable or not, the process of attempting to design machines with these capabilities will teach us a great deal, even if we achieve only partial successes.
Filename:
vision-purposes-sloman.pdf
(PDF)
More details: What are the purposes of vision?
Title: What are the purposes of vision?
Based on invited presentation at Fyssen Foundation Workshop on
Vision,
Versailles France, March 1986, Organiser: M. Imbert
(The proceedings were never published.)
Author:
Aaron Sloman
Date Installed: 8 Oct 2012 (Written circa 1986)
Abstract (Extract from Introduction):
A good theory of human vision should describe the interface between visual processes and other kinds of processes, sensory, cognitive, affective, motor, or whatever. This requires some knowledge of the tasks performed by the visual subsystem. Does it feed information only to a central database, where other sub-systems can access it, or does it feed information direct to a variety of sub-systems? What sorts of information does it feed - is it mostly a set of descriptions of spatial properties of the environment, or are there other sorts of descriptions, and other outputs besides descriptions? Is there a sharp boundary between vision and cognition? What sorts of input does the visual subsystem use?
I shall attempt to survey the uses of human vision, with the hope of deriving some design constraints and requirements both for theories about biological visual systems and for machine vision. I shall propose a very broad view of the functions of vision in human beings, and suggest some design principles for mechanisms able to fulfil this role, though many details remain unspecified.
(Added Mar 2014) This is also relevant:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vision
A presentation of some hard, apparently unsolved, problems about natural vision and
how to replicate the functions and the designs in AI/Robotic vision systems.
Filename: imageinterpretation.pdf
(PDF Reformatted:
280 KB PDF)
Filename: imageinterpretation.html (HTML)
Filename: image-interp-way-ahead.pdf (Original pages: 3.9MB PDF)
Title: Image interpretation: The way ahead?
Invited talk,
originally published in
Physical and Biological Processing of Images
Author: Aaron Sloman
(Proceedings of an international symposium organised by The Rank Prize Funds, London, Sept 1982.)
Editors: O.J.Braddick and A.C. Sleigh.
Pages 380--401, Springer-Verlag, 1983.
Abstract:
Some unsolved problems about vision are discussed in relation to the goal of understanding the space of possible mechanisms with the power of human vision. The following issues are addressed: What are the functions of vision? What needs to be represented? How should it be represented? What is a good global architecture for a human-like visual system? How should the visual sub-system relate to the rest of an intelligent system? It is argued that there is much we do not understand about the representation of visible structures, the functions of a visual system and its relation to the rest of the human mind. Some tentative positive suggestions are made, but more questions are offered than answers.
Note 1:
This paper is available in two formats as explained above. The OCR version probably has some errors that I have not corrected. But it is much smaller and easier to read than the scanned in images. I had forgotten about this paper for many years, until I stumbled across a reference to it. It is a precursor to
On designing a visual system: Towards a Gibsonian computational model of vision.
(Published in 1989.)
The 1982 paper presents several of the ideas I later developed in the context of a more embracing theory of the architecture of human-like minds, in which there are concurrently active 'layers' of different kinds performing different tasks, some evolutionarily very old, some newer, all sharing the same sensors and effectors (see also 'The mind as a control system' (1993)).
I believe this is potentially a far more powerful and general theory than the much discussed 'dual-stream' or 'dual-pathway' theories of vision based on differences between dorsal and ventral visual pathways. But evaluating the ideas requires a much broader multi-disciplinary perspective, which is not easy for researchers to achieve.
Note2
This paper pointed out, among other things, the need for natural and artificial vision systems to be able to perceive both static and continuously moving structures, and structures with parts that change their shapes and relationships continuously. It also emphasised differences between seeing what is the case and seeing how to do something, especially in a changing situation involving continuous control of movement (e.g. painting a chair).
It later turned out that this distinction, which is familiar to engineers as a distinction between using vision to acquire and record information that might be used for a variety of purposes and using vision for 'servo-control', was loosely related to distinct functions of the ventral and dorsal visual pathways in primate brains. These were misleadingly labelled "what" and "where" pathways by some researchers, who later attempted to correct the confusion by renaming them "perception" and "action" pathways, which unfortunately does not allow visual control of actions to be termed "perception" or "seeing". These confusions are still widespread.
Filename: sloman-ijcai83-meaning.html
Filename: sloman-ijcai83-meaning.pdf
Title: Introduction to Panel Discussion:
Under What Conditions Can A Machine Attribute Meanings To Symbols?
Authors: Aaron Sloman, et al.,
Date Installed: 23 Mar 2011 (Published 1983)
Where published:
Aaron Sloman, Drew V. McDermott, William A. Woods, Brian Cantwell Smith and Patrick J. Hayes,
"Panel discussion: Under What Conditions Can a Machine Attribute Meanings to Symbols?", chaired by Aaron Sloman,
In Proceedings IJCAI 1983, pp 44-48,
http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-1/CONTENT/content.htm
Filename: sloman-deep-and-shallow-1981.html (HTML)
Filename: sloman-deep-and-shallow-1981.pdf (PDF)
Title: Deep and shallow simulations
Commentary on: Modeling a paranoid mind, by Kenneth Mark Colby
The Behavioral and Brain Sciences (1981) 4(04) pp 515-534
http://dx.doi.org/10.1017/S0140525X00000030
Filename: sloman-croucher-warm-heart.html
Filename: sloman-croucher-warm-heart.pdf
Title: You don't need a soft skin to have a warm heart: Towards a
computational analysis of motives and emotions.
Authors: Aaron Sloman and
Monica Croucher
Originally a Cognitive Science Research Paper at Sussex University:
Sloman, Aaron and Monica Croucher, "You don't need a soft skin to have a warm heart: towards a computational analysis of motives and emotions," CSRP 004, 1981.
Date Installed: 17 Jun 2005
Abstract:
The space of possible architectures for intelligent systems is very large. This essay takes steps towards a survey of the space, by examining some environmental and functional constraints, and discussing mechanisms capable of fulfilling them. In particular, we examine a subspace close to the human mind, by illustrating the variety of motives to be expected in a human-like system, and types of processes they can produce in meeting some of the constraints.
This provides a framework for analysing emotions as computational states and processes, and helps to undermine the view that emotions require a special mechanism distinct from cognitive mechanisms. The occurrence of emotions is to be expected in any intelligent robot or organism able to cope with multiple motives in a complex and unpredictable environment.
Analysis of familiar emotion concepts (e.g. anger, embarrassment, elation, disgust, pity, etc.) shows that they involve interactions between motives (e.g. wants, dislikes, ambitions, preferences, ideals, etc.) and beliefs (e.g. beliefs about the fulfilment or violation of a motive), which cause processes produced by other motives (e.g. reasoning, planning, execution) to be disturbed, disrupted or modified in various ways (some of them fruitful). This tendency to disturb or modify other activities seems to be characteristic of all emotions. In order fully to understand the nature of emotions, therefore, we need to understand motives and the types of processes they can produce.
This in turn requires us to understand the global computational architecture of a mind. There are several levels of discussion: description of methodology, the beginning of a survey of possible mental architectures, speculations about the architecture of the human mind, analysis of some emotions as products of the architecture, and some implications for philosophy, education and psychotherapy.
Filename: sloman-searle-85.html
Filename: sloman-searle-85.pdf
Filename: sloman-searle-85.txt
Title: Did Searle attack strong strong or weak strong AI?
Originally published in
A.G. Cohn and J.R. Thomas (eds),
Artificial Intelligence and Its Applications, John Wiley and Sons, 1986
(Proceedings AISB Conference, Warwick University, 1985)
Author: Aaron Sloman
Date installed: 13 Jan 2001 (Originally presented 1985, published 1986)
(Added HTML version and moved here from 00-02.html 22 May 2015)
(Added Postscript and PDF versions 23 Oct 2005)
10 May 2017: File names altered, replacing '.' with '-'
Keywords: Searle, strong AI, minds and machines, intentionality, meaning, reference, computation.
Filename: comp-epistemology-sloman.pdf
Title: Computational Epistemology
in
Genetic epistemology and cognitive science
Structures and cognitive processes:
Proceedings of the 2nd and 3rd Advanced Courses in Genetic
Epistemology,
organised by the
Fondation Archives Jean Piaget in 1980 and 1981. - Geneva:
Fondation Archives Jean Piaget, 1982. - P. 49-93.
http://ael.archivespiaget.ch/dyn/portal/index.seam?page=alo&aloId=16338&fonds=&menu=&cid=28
Author: Aaron Sloman
Date: (Originally Published in 1982)
Abstract:
To appear in proceedings of the Seminar on Genetic Epistemology and Cognitive Science, Fondation Archives Jean Piaget, University of Geneva, 1980.
This is an edited transcript of an unscripted lecture presented at the seminar on Genetic Epistemology and Artificial Intelligence, Geneva July 1980. I am grateful to staff at the Piaget Archive and to Judith Dennison for help with production of this version. I apologize to readers for the remnants of oral presentation. Some parts of the lecture made heavy use of overlaid transparencies. Since this was not possible in a manuscript, the discussions of learning about numbers and vision have been truncated. For further details see chapters 8 and 9 of
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
The Computer Revolution in Philosophy: Philosophy, science and models of mind.
I believe that recent developments in Computing and Artificial Intelligence constitute the biggest breakthrough there has ever been in Psychology. This is because computing concepts and formalisms at last make it possible to formulate testable theories about internal processes which have real explanatory power. That is to say, they are not mere re-descriptions of phenomena, and they are precise, clear, and rich in generative power. These features make it much easier than ever before to expose the inadequacies of poor theories. Moreover, the attempt to make working programs do things previously done only by humans and other animals gives us a deeper insight into the nature of what has to be explained. In particular, abilities which previously seemed simple are found to be extremely complex and hard to explain - like the ability to improve with practice.
The aim of this "tutorial" lecture is to define some very general features of computation and indicate its relevance to the study of the human mind. The lecture is necessarily sketchy and superficial, given the time available. For people who are new to the field, Boden [1977] and Winston [1977] are recommended. The two books complement each other very usefully. Boden is more sophisticated philosophically. Winston gives more technical detail.
I speak primarily as a philosopher, with a long-standing interest in accounting for the relation between mind and body. Philosophical analysis and a study of work in AI have together led me to adopt the following neo-dualist slogan:
Inside every intelligent ghost there has to be a machine.
Filename: sloman-on-velmans-bbs.pdf (PDF)
Title: Developing concepts of consciousness
Commentary on 'Is Human Information Processing Conscious?' by Max Velmans,
in Behavioral and Brain Sciences, C.U.P., 1991
Author: Aaron Sloman
Where published:
Behavioral and Brain Sciences, Vol 14, Issue 04, Dec, 1991, pp. 694--695,
http://dx.doi.org/10.1017/S0140525X00072071
Filename: sloman-popper-3-worlds.pdf
Title: A Suggestion About Popper's Three Worlds
In the Light of Artificial Intelligence
(Previously: Artificial Intelligence and Popper's Three Worlds)
Author: Aaron Sloman
Date: 1985
Date Installed: 9 Oct 2012
Where published:
In Problems, Conjectures, and Criticisms: New Essays in Popperian Philosophy,
Eds. Paul Levinson and Fred Eidlin, Special issue of ETC: A Review of General Semantics, (42:3) Fall 1985.
http://www.generalsemantics.org/store/etc-a-review-of-general-semantics/309-etc-a-review-of-general-semantics-42-3-fall-1985.html
Abstract:
Having always admired Popper and been deeply influenced by some of his ideas (even though I do not agree with all of them) I feel privileged at being invited to contribute to a volume of commentaries on his work. My brief is to indicate the relevance of work in Artificial Intelligence (henceforth AI) to Popper's philosophy of mind. Materialist philosophers of mind tend to claim that world2 is reducible to world1. I shall try to show how AI suggests that world2 is reducible to world3, and that one of the main explanatory roles Popper attributes to world2, namely causal mediation between worlds 1 and 3, is a redundant role. The central claim of this paper can be summed up by the slogan: "Any intelligent ghost must contain a computational machine".
Filename: personal-ai-sloman-1988.html (HTML)
Filename: personal-ai-sloman-1988.pdf (PDF)
Title: A Personal View Of Artificial Intelligence
Author: Aaron Sloman
Date Installed: 4 Sep 2012 (First published 1989)
Where published:
Preface to Computers and Thought 1989
By Mike Sharples, David Hogg, Chris Hutchinson, Steve Torrance, and David Young
MIT Press, 20 Oct 1989, 433 pages.
This preface has also been available since about 1988 as a 'TEACH' file in the Poplog system: TEACH AITHEMES
Filename: sloman-space-of-minds-84.pdf
Filename:
sloman-space-of-minds-84.html (HTML)
Title: The structure of the space of possible minds
Author: Aaron Sloman
Originally published in The Mind and the Machine: philosophical aspects of Artificial Intelligence,
ed. Stephen Torrance, Ellis Horwood, 1984, pp 35-42.
Date Installed: 13 Jan 2007. Moved here 9 Aug 2016.
Abstract: (Extract from text)
Describing this structure is an interdisciplinary task I commend to philosophers. My aim for now is not to do it -- that's a long term project -- but to describe the task. This requires combined efforts from several disciplines including, besides philosophy: psychology, linguistics, artificial intelligence, ethology and social anthropology.
Clearly there is not just one sort of mind. Besides obvious individual differences between adults there are differences between adults, children of various ages and infants. There are cross-cultural differences. There are also differences between humans, chimpanzees, dogs, mice and other animals. And there are differences between all those and machines. Machines too are not all alike, even when made on the same production line, for identical computers can have very different characteristics if fed different programs. Besides all these existing animals and artefacts, we can also talk about theoretically possible systems.
NOTE
This theme was taken up by (among others)
Roman V. Yampolskiy, University of Louisville, in
The Universe of Minds (2014)
https://arxiv.org/pdf/1410.0369
https://www.semanticscholar.org/paper/The-Universe-of-Minds-Yampolskiy/8c28056af2b97de5625aaed41791d9c14ea5cfda
[Relocated from another file 3 Jan 2018]
Filename: sloman-computational-mind.pdf (PDF)
Title: Towards a Computational Theory of Mind,
Originally in Artificial Intelligence - Human Effects, (Eds) M. Yazdani and A. Narayanan,
Ellis Horwood, Chichester, 1984, pp 173--182
Author: Aaron Sloman
Abstract:
(From the introduction to the chapter.)
Cognitive Science has three interrelated aspects: theoretical, applied and empirical. Work in all three areas depends on and feeds back into the other two. Theoretical work explores possible computational systems, possible mental processes and structures, attempting to understand what sorts of mechanisms and representational systems are possible, how they differ, what their strengths and weaknesses are, etc. Empirical work studies existing intelligent systems, e.g. humans and other animals. Applied work is both concerned with problems relating to existing minds (e.g. learning difficulties, psychopathology) and also the design of new useful computational systems. This paper sketches some of the assumptions underlying much of the theoretical work, and hints at some of the practical applications. In particular, education and psychotherapy are both activities in which the computational processes in the mind of the pupil or patient are altered. In order to understand what they are doing, educationalists and psychotherapists require a computational theory of mind. This is not the dehumanising notion it may at first appear to be.
Filename: skills-cogsci-81.html
(HTML)
Filename: skills-cogsci-81.pdf
(PDF)
Filename: skills-cogsci-81.txt
(Plain Text)
Title: Skills, Learning and Parallelism
In Proceedings 3rd Cognitive
Science Conference, Berkeley, 1981. pp 284-5.
Slightly expanded as Cognitive Science Research paper No 13, Sussex University,
1981.
Author: Aaron Sloman
Date installed here: 15 Jan 2008 (Written April 1981)
HTML version added 23 Feb 2019
Note: The conference schedule is available here:
cogsci-1981-Berkeley-programme.pdf
Abstract:
People who learn about the compiled/interpreted distinction frequently re-invent the idea that the development of skills in human beings may be a process in which programs are first synthesised in an interpreted language, then later translated into a compiled form. The latter is thought to explain many features of skilled performance, for instance, the speed, the difficulty of monitoring individual steps, the difficulty of interrupting, starting or resuming execution at arbitrary desired locations, the difficulty of modifying a skill, the fact that performance is often unconscious after the skill has been developed, and so on. On this model, the old jokes about centipedes being unable to walk, or birds to fly, if they think about how they do it, might be related to the impossibility of using the original interpreter after a program has been compiled into a lower level language.
Despite the attractions of this theory I suspect that a different model is required in some cases.
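The analogy can be made concrete with a small sketch (purely illustrative, not from the paper): the same 'skill' represented as separately inspectable interpreted steps, then fused into a single opaque procedure.

    # Purely illustrative: interpreted vs. 'compiled' forms of one skill.
    from functools import reduce

    steps = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

    def interpreted(x, trace=False):
        # Each step remains visible: easy to monitor, interrupt, resume, edit.
        for i, step in enumerate(steps):
            x = step(x)
            if trace:
                print(f"after step {i}: {x}")
        return x

    def compiled(fs):
        # Fuse the steps into one closure: no per-step dispatch, but the
        # steps can no longer be individually traced or modified.
        return reduce(lambda f, g: lambda x: g(f(x)), fs)

    skill = compiled(steps)
    assert interpreted(5) == skill(5) == 9

The fused form runs without per-step dispatch, but the steps can no longer be individually traced, interrupted or edited -- the properties of skilled performance listed above.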
Filename: sloman.beginners.pdf (PDF)
Filename: sloman.beginners.html (HTML)
Title: Beginners need powerful systems
Originally in
New Horizons in Educational Computing
(Ed) M. Yazdani,
Ellis
Horwood, 1984. pp 220-235
Author: Aaron Sloman
Date: Originally published 1984. Added here 27 Nov 2001
Abstract:
The paper argues that instead of choosing very simple and restricted
programming languages and environments for beginners, we can offer them
many advantages if we use powerful, sophisticated languages, libraries,
and development environments. Several reasons are given. The Pop-11
subset of the Poplog system is offered as an example.
Filename: sloman.pop11.pdf
Filename: Sloman.pop11.html (HTML, added 17 Jan 2009)
Filename: Sloman.pop11.txt (Plain text)
Title: The Evolution of Poplog and Pop-11 at Sussex University
Originally in
POP-11 Comes of Age: The Advancement of an AI Programming Language,
(Ed) J. A.D.W. Anderson, Ellis Horwood, pp 30-54, 1989.
Author: Aaron Sloman
Date: Originally published 1989. Added here 1 Feb 2001
Abstract:
This paper gives an overview of the origins and development of the
programming language Pop-11, one of the Pop family of languages
including Pop1, Pop2, Pop10, Wpop, Alphapop. Pop-11 is the most
sophisticated version, comparable in scope and power to Common Lisp,
though different in many significant details, including its syntax. For
more on Pop-11 and Poplog, the system of which it is the core language,
see
http://www.cs.bham.ac.uk/research/poplog/poplog.info.html
This paper first appeared in a collection published in 1989 to celebrate the 21st birthday of the Pop family of languages.
Title: The primacy of non-communicative language
Author: Aaron Sloman
Now moved to another file (Papers 1962-80)
Filename: Sloman.ijcai95.txt (Plain text)
Filename: Sloman.ijcai95.pdf
Authors: Aaron Sloman
Title: A Philosophical Encounter
This is a four page paper, introducing a panel at
IJCAI95 in Montreal August 1995:
"A philosophical encounter: An interactive presentation of some
of the key philosophical problems in AI and AI problems in
philosophy."
Many thanks to Takashi Gomi, at Applied AI Systems Inc, who took the picture.
John McCarthy also contributed a short paper on interactions
between Philosophy and AI, available here:
https://www.ijcai.org/Proceedings/95-2/Papers/131.pdf
http://www-formal.stanford.edu/jmc/
Date: 24 April 95
Abstract:
This paper, along with the following paper by John McCarthy, introduces
some of the topics to be discussed at the IJCAI95 event `A
philosophical encounter: An interactive presentation of some of the
key philosophical problems in AI and AI problems in philosophy.'
Philosophy needs AI in order to make progress with many difficult
questions about the nature of mind, and AI needs philosophy in order
to help clarify goals, methods, and concepts and to help with
several specific technical problems. Whilst philosophical attacks on
AI continue to be welcomed by a significant subset of the general
public, AI defenders need to learn how to avoid philosophically
naive rebuttals.
Filename: Sloman.scai95.pdf
Authors: Aaron Sloman
Title: Exploring design space and niche space
Invited talk for 5th Scandinavian Conference on AI, Trondheim,
May 1995. in Proceedings SCAI95 published by IOS Press,
Amsterdam.
Date: 16 April 1995
Abstract:
Most people who give definitions of AI offer narrow views based
either on their own work area or the pronouncement of an AI guru
about the scope of AI. Looking at the range of research activities
to be found in AI conferences, books, journals and laboratories
suggests something very broad and deep, going beyond engineering
objectives and the study or replication of human capabilities. This
is exploration of the space of possible designs for behaving systems
(design space) and the relationships between designs and various
collections of requirements and constraints (niche space). This
exploration is inherently multi-disciplinary, and includes not only
exploration of various architectures, mechanisms, formalisms,
inference systems, and the like (aspects of natural and artificial
designs), but also the attempt to characterise various kinds of
behavioural capabilities and the environments in which they are
required, or possible. The implications of such a study are
profound: e.g. for engineering, for biology, for psychology, for
philosophy, and for our view of how we fit into the scheme of
things.
Filename: Riccardo.Poli_Mike.Brayshaw.hybrid.system.pdf
Filename: Riccardo.Poli_Mike.Brayshaw.hybrid.system.ps
Title: A Hybrid Trainable Rule-based System
School of Computer Science, the University of Birmingham
Cognitive Science technical report: CSRP-95-4
Date: 31 March 1995
Authors: Riccardo Poli and Mike Brayshaw
Abstract:
In this paper we introduce a new formalism for rule specification that
extends the behaviour of a traditional rule based system and allows the
natural development of hybrid trainable systems.
The formalism in itself allows a simple and concise specification of
rules and lends itself to the introduction of symbolic rule induction
mechanisms (example-based knowledge acquisition) as well as artificial
neural networks.
In the paper we describe such a formalism and four increasingly powerful
mechanisms for rule induction. The first one is based on a truth-table
representation; the second is based on a form of example based learning;
the third on feed-forward artificial neural nets; the fourth on genetic
algorithms.
Examples of systems based on these hybrid paradigms are presented and
their advantages with respect to traditional approaches are discussed.
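The first and simplest of the four mechanisms might be sketched as follows (an assumption-laden toy, not the paper's formalism): a rule is induced by tabulating observed input/output examples.

    # Toy truth-table rule induction (illustrative only).

    def induce(examples):
        """examples: iterable of (input_tuple, output) training pairs."""
        table = {}
        for inputs, output in examples:
            table[inputs] = output            # later examples overwrite earlier
        return lambda *inputs: table.get(inputs)   # None for unseen inputs

    # Learn an AND-like rule from four examples:
    rule = induce([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
    assert rule(1, 1) == 1 and rule(0, 1) == 0
    assert rule(0, 2) is None    # a pure truth table cannot generalise

The paper's three further mechanisms (example-based learning, feed-forward nets, genetic algorithms) can be seen as replacing the bare table with representations that do generalise to unseen inputs.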
Filename: sim_agent.pdf
November 1994 Seminar Slides. (PDF)
Postscript/PDF version of some seminar slides
presenting the package. Partly out of date.
Filename: simagent.html
http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
Link to the main SIM_AGENT overview page.
Includes a pointer to some movies demonstrating simple uses of the
toolkit.
Author: Aaron Sloman and Riccardo Poli
Date: November 1994 to March 1995
Abstract:
These files give partial descriptions of the sim_agent toolkit
implemented in Poplog Pop-11 for exploring architectures for individual
or interacting agents.
See also the ATAL95 paper summarised above,
Aaron.Sloman_Riccardo.Poli_sim_agent_toolkit.pdf
NOTE
A more up to date overview of the toolkit can be found in
http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
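The scheduling idea at the heart of such a toolkit -- on each simulated time-slice every agent senses a snapshot of the world, then decides, and only then acts -- can be sketched in a few lines (hypothetical classes; the real toolkit's Pop-11, rule-based interface is entirely different and far more general):

    # Toy agent scheduler in the spirit of SIM_AGENT (illustrative only).

    class Agent:
        def __init__(self, name, pos):
            self.name, self.pos = name, pos
        def sense(self, world):
            self.others = [a.pos for a in world if a is not self]
        def decide(self):
            nearest = min(self.others, key=lambda p: abs(p - self.pos))
            self.move = 1 if self.pos >= nearest else -1   # step away
        def act(self):
            self.pos += self.move

    world = [Agent("a", 0), Agent("b", 3)]
    for tick in range(5):
        for agent in world: agent.sense(world)   # all sense the same snapshot
        for agent in world: agent.decide()
        for agent in world: agent.act()          # only then does anyone act
    print([(a.name, a.pos) for a in world])      # the agents drift apart

Running sense and decide for all agents before any of them acts mirrors discrete time-slicing: no agent reacts to actions taken 'later' within the same tick.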
Filename: Luc.Beaudoin_thesis.pdf
(PDF Corrected 24 Dec 2014)
Filename: Luc.Beaudoin_thesis.ps
(postscript. [Imperfect copy])
Filename: Luc.Beaudoin_thesis.rtf.gz
(Original rtf format, gzipped. [Figures not visible])
Filename: Luc.Beaudoin_thesis.txt.gz
(Plain text version gzipped)
Title: Goal processing in autonomous agents
Date: 31 Aug 1994 (Updated March 13th 1995)
(PDF version added 18 May 2003. Corrected version installed 24 Dec 2014)
Author: Luc P. Beaudoin
Abstract:
A thesis submitted to the Faculty of Science of the University of
Birmingham for the degree of PhD in Cognitive Science.
(Supervisor: Aaron Sloman).
Synopsis
The objective of this thesis is to elucidate goal processing in
autonomous agents from a design-stance. A. Sloman's theory of autonomous
agents is taken as a starting point (Sloman, 1987; Sloman, 1992b). An
autonomous agent is one that is capable of using its limited resources
to generate and manage its own sources of motivation. A wide array of
relevant psychological and AI theories are reviewed, including theories
of motivation, emotion, attention, and planning. A technical yet rich
concept of goals as control states is expounded. Processes operating on
goals are presented, including vigilational processes and management
processes. Reasons for limitations on management parallelism are
discussed. A broad design of an autonomous agent that is based on M.
Georgeff's (1986) Procedural Reasoning System is presented. The agent is
meant to operate in a microworld scenario. The strengths and weaknesses
of both the design and the theory behind it are discussed. The thesis
concludes with suggestions for studying both emotion ("perturbance") and
pathologies of attention as consequences of autonomous goal processing.
Filename: Aaron.Sloman_why_robot_emotions.pdf
Title: Why robots will have emotions
Authors: Aaron Sloman and Monica Croucher
Date: August 1981 (Installed in this directory 10 Nov 1994)
Originally appeared in Proceedings IJCAI 1981, Vancouver
Also available from Sussex University as Cognitive Science
Research paper No 176
Abstract:
Emotions involve complex processes produced by interactions between
motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or
violation of a motive, or triggering of a 'motive-generator', can
disturb processes produced by other motives. To understand emotions,
therefore, we need to understand motives and the types of processes they
can produce. This leads to a study of the global architecture of a mind.
Some constraints on the evolution of minds are discussed. Types of
motives and the processes they generate are sketched.
(Note we now use slightly different terminology from that used in this paper. In particular, what the paper labelled as "intensity" we now call "insistence", i.e. the capacity to divert attention from other things.)
NB
This paper is often misquoted as arguing that robots (or at least intelligent robots) should have emotions. On the contrary, the paper argues that certain sorts of high level disturbances (i.e. emotional states) will be capable of arising out of interactions between mechanisms that exist for other reasons. Similarly, 'thrashing' is capable of occurring in multi-processing operating systems that support swapping and paging, but that does not mean that operating systems should produce thrashing.
A more recent analysis of the confused but fashionable arguments (e.g. based on Damasio's writings) claiming that emotions are needed for intelligence can be found in this semi-popular presentation.
One of the arguments is analogous to arguing that a car requires a functioning horn for its starter motor to work, because damaging the battery can disable both the horn and the starter motor.
Filename: Ian.Wright_emotional_agent.pdf
Filename: Ian.Wright_emotional_agent.ps.gz
Filename: Ian.Wright_emotional_agent.ps
Title: An Emotional Agent -- The Detection and Control of Emergent
States in an Autonomous Resource-Bounded Agent
(PhD Thesis Proposal)
Date: October 31 1994
Author: Ian Wright
Abstract:
In dynamic and unpredictable domains, such as the real world, agents
are continually faced with new requirements and constraints on the
quality and types of solutions they produce. Any agent design will
always be limited in some way. Such considerations highlight the need
for self-referential mechanisms, i.e. agents with the ability to examine
and reason about their internal processes in order to improve and control
their own functioning.
This work aims to implement a prototype agent architecture that meets
the requirements for self-referential systems, and is able to exhibit
perturbant (`emotional') states, detect such states and attempt to
do something about them. Results from this research will contribute
to autonomous agent design, emotionality, internal perception and
meta-level control; in particular, it is hoped that we will
i. provide a (partial) implementation of Sloman's theory of
perturbances (Sloman, 81) within the NML1 design (Beaudoin, 94),
ii. investigate the requirements for the self-detection and control
of processing states, and
iii. demonstrate the adaptiveness of, the need for, and consequences
of, self-control mechanisms that meet the requirements for
self-referential systems.
Filename: Aaron.Sloman_musings.pdf
Filename: Aaron.Sloman_musings.ps
Title: Musings on the roles of logical and non-logical representations in intelligence.
in: Janice Glasgow, Hari Narayanan, Chandrasekaran, (eds),
Diagrammatic Reasoning: Computational and Cognitive Perspectives,
AAAI Press 1995
Author: Aaron Sloman
Date: 17 October 1994
Abstract:
This paper offers a short and biased overview of the history of discussion and controversy about the role of different forms of representation in intelligent agents. It repeats and extends some of the criticisms of the `logicist' approach to AI that I first made in 1971, while also defending logic for its power and generality. It identifies some common confusions regarding the role of visual or diagrammatic reasoning, including confusions based on the fact that different forms of representation may be used at different levels in an implementation hierarchy. This is contrasted with the way in which the use of one form of representation (e.g. pictures) can be controlled using another (e.g. logic, or programs). Finally some questions are asked about the role of metrical information in biological visual systems.
This is one of several sequels to the paper presented at IJCAI in 1971.
Filename: emotions_workshop95
Title: Geneva Emotion Week 1995
Date: October 1994
Call for Applications
GENEVA EMOTION WEEK '95
April 8 to April 13, 1995
University of Geneva, Switzerland
The Emotion Research Group at the University of Geneva announces the third GENEVA EMOTION WEEK (GEW '95), consisting of a colloquium focusing on a major topic in the psychology of emotion, and of a series of workshops designed to introduce participants to advanced research methods in the field of emotion. Held in combination with WAUME95.
Filename: Aaron.Sloman_towards.th.rep.pdf
Filename: Aaron.Sloman_towards.th.rep.ps
Title: Towards a general theory of representations
Author: Aaron Sloman
In Donald Peterson (ed)
Forms of representation, Intellect Books, 1996
Date: 31 July 1994
Abstract:
This position paper presents the beginnings of a general theory of representations starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of types of variation the relevant mechanisms can cope with. Similarly concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent).
This is one of several sequels to the paper presented at IJCAI in 1971.
Filename: Aaron.Sloman_isre.pdf
Filename: Aaron.Sloman_isre.ps.gz
Title: Computational Modelling Of Motive-Management Processes
"Poster" prepared for the Conference of the International
Society for Research on Emotions, Cambridge, July 1994
(Final version installed here July 30th 1994)
Authors: Aaron Sloman, Luc Beaudoin and Ian Wright
Revised version in Proceedings ISRE94, edited by Nico Frijda,
ISRE Publications. Email: frijda@uvapsy.psy.uva.nl
Date: 29 July 1994 (PDF version added 25 Dec 2005)
Abstract:
This is a 5 page summary with three diagrams of the main objectives and
some work in progress at the University of Birmingham Cognition and
Affect project, involving Professor Glyn Humphreys (School of
Psychology), and Luc Beaudoin, Chris Paterson, Tim Read, Edmund Shing,
Ian Wright, Ahmed El-Shafei, and (from October 1994) Chris Complin
(research students). The project is concerned with "global" design
requirements for coping simultaneously with coexisting but possibly
unrelated goals, desires, preferences, intentions, and other kinds of
motivators, all at different stages of processing. Our work builds on
and extends seminal ideas of H.A.Simon (1967). We are exploring "broad
and shallow" architectures combining varied capabilities most of which
are not implemented in great depth. The poster summarises some ideas
about management and meta-management processes, attention filtering, and
the relevance to emotional states involving "perturbances", where there
is partial loss of control of attention.
Filename: Tim.Read_Applying_S.D.pdf (PDF)
Filename: Tim.Read_Applying_S.D.ps.gz
Title: Applying Systemic Design to the study of `emotion'
Presented at AICS94, Dublin, Ireland
Author: Tim Read
Date: 20th July 1994
Abstract:
Emotion has proved a difficult concept for researchers to explain. This is
principally due to both terminological and methodological problems. Systemic
Design is a methodology which has been developed and used for studying emotion
in an attempt to resolve these difficulties, providing a step toward a
complete understanding of `emotional phenomena'. This paper discusses the
application of this methodology to study the three mammalian behavioural
control systems proposed by Gray (1990). The computer simulation
presented here models a rat in the Kamin (1957) avoidance experiment
for two reasons: firstly, to demonstrate how Gray's systems can form a large
part of the explanation of what is happening in this experiment (which has
proved difficult for researchers to do so far), and secondly, as avoidance
behaviour and its associated architectural concomitants are related to many
so-called `emotional states'.
Filename: Ed.Shing_Constraining.Learning.ps.gz
Title: Computational Constraints for Associative Learning
Date: 15 May 1994
Author: Edmund Shing
Abstract:
Due to the dynamic nature of the real world, learning in intelligent
agents requires various processes of selection ("attention to") of
input features in order to facilitate computational tractability.
There are many different forms of learning observed in people and
animals; this research looks at reinforcement learning and analyses
the selection processes necessary for this to work effectively.
Machine learning work has traditionally concentrated on small
predictable domains (the "deep and narrow" approach to cognitive
simulation) and so has avoided the combinatorial explosion problem
faced by an adaptive agent situated in a complex and dynamic world.
A preliminary analysis of several forms of learning suggests that (a)
adaptive agent architectures require selection processes in order to
perform any "useful" learning; and (b) reinforcement learning coupled
with certain simple selection, monitoring and evaluation mechanisms
can achieve several seemingly more complex forms of learning.
An agent design is constructed following a "broad and shallow"
approach to meet both general (e.g. related to fundamental properties of
the real world) and specific (e.g. related to the specific theory
proposed) requirements, concentrating on learning and selection
mechanisms in the implementation of reinforcement learning. This
agent architecture should exhibit both expected reinforcement
learning behaviours and seemingly more complex learning
behaviours. Implications of this work are discussed.
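A minimal sketch of the general idea (an illustration under assumed details, not Shing's implementation): a crude attentional selection step keeps only the most salient percept features, so a simple reinforcement learner's table of values grows with the number of selected features rather than with the full combinatorial state space. The percept encoding, salience ranking and reward function are all invented for the example:

    # Reinforcement learning with a selection ('attention') front end.
    import random
    from collections import defaultdict

    ALPHA, EPSILON = 0.1, 0.1
    q = defaultdict(float)                    # (selected_state, action) -> value

    def select_features(percept, k=2):
        """Crude attention: keep the k most salient feature-value pairs."""
        ranked = sorted(percept.items(), key=lambda kv: abs(kv[1]), reverse=True)
        return tuple(sorted(ranked[:k]))

    def choose_action(state, actions):
        if random.random() < EPSILON:         # occasional exploration
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    def learn(percept, actions, reward_fn):
        state = select_features(percept)
        action = choose_action(state, actions)
        reward = reward_fn(action)
        q[(state, action)] += ALPHA * (reward - q[(state, action)])
        return action

    # Hypothetical usage: two features survive selection, one is ignored.
    percept = {"light": 0.9, "noise": 0.1, "colour": 0.4}
    learn(percept, ["approach", "avoid"], lambda a: 1.0 if a == "approach" else 0.0)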
Filename: Aaron.Sloman_explorations.pdf
Filename: Aaron.Sloman_explorations.ps
Title: Explorations in Design Space
Author: Aaron Sloman
Date: 20 April 1994
in Proc
ECAI94, 11th European Conference on Artificial Intelligence
Edited by A.G.Cohn, John Wiley, pp 578-582, 1994
Abstract:
This paper sketches a vision of AI as a unifying discipline that
explores designs for a variety of behaving systems, for both scientific
and engineering purposes. This unpacks the idea that AI is the general
study of intelligence, whether natural or artificial. Some aspects of
the methodology of such a discipline are outlined, and a project
attempting to fill gaps in current work is introduced. This is one of a
series of papers outlining the "design-based" approach to the study of
mind, based on the notion that a mind is essentially a sophisticated
self-monitoring, self-modifying control system.
The "design-based" study of architectures for intelligent agents is
important not only for engineering purposes but also for bringing
together hitherto fragmentary studies of mind in various disciplines,
for providing a basis for an adequate set of descriptive concepts, and
for making it possible to understand what goes wrong in various human
activities and how to remedy the situation. But there are many
difficulties to be overcome.
Filename: Aaron.Sloman_representations.control.pdf
Filename: Aaron.Sloman_representations.control.ps
Filename: Aaron.Sloman_representations.control.ps.gz
Title: Representations as control substates (DRAFT)
Author: Aaron Sloman
Date: March 6th 1994
Abstract:
(This is a longer, earlier version of "Towards a general theory of
representations", and includes some additional material.)
Since first presenting a paper
criticising excessive reliance on logical
representations in AI at the second IJCAI at Imperial College London in
1971, I have been trying to understand what representations are and why
human beings seem to need so many different kinds, tailored to different
purposes. This position paper presents the beginnings of a general
answer starting from the notion that an intelligent agent is essentially
a control system with multiple control states, many of which contain
information (both factual and non-factual), albeit not necessarily in a
propositional form. The paper attempts to give a general
characterisation of the notion of the syntax of an information store, in
terms of types of variation the relevant mechanisms can cope with.
Different kinds of syntax can support different kinds of semantics, and
serve different kinds of purposes. Similarly concepts of semantics,
pragmatics and inference are generalised to apply to information-bearing
sub-states in control systems. A number of common but incorrect notions
about representation are criticised (such as that pictures are in some
way isomorphic with what they represent), and a first attempt is made to
characterise dimensions in which forms of representations can differ,
including the explicit/implicit dimension.
This is one of several sequels to the paper presented at IJCAI in 1971.
Filename: aaron-sloman-semantics.pdf (PDF)
Filename: aaron-sloman-semantics.html (HTML)
Title: Semantics in an intelligent control system
Invited paper for conference at the Royal Society in April 1994 on Artificial Intelligence and the Mind: New Breakthroughs or Dead Ends?
Author: Aaron Sloman
in Philosophical Transactions of the Royal Society: Physical Sciences and Engineering Vol 349, 1689, pp 43-58, 1994
With comments by A. Prescott, N. Shadbolt and M. Steedman (not included here).
http://www.jstor.org/stable/54375
This was followed by a paper by Fred Dretske, disagreeing with the claim that AI systems can make use of semantic content:
Fred Dretske, The Explanatory Role of Information, pp 59-70
(with comments by A. Clark, Y. Wilks, D. Dennett, R. Chrisley, and L.J. Cohen).
http://www.jstor.org/stable/54376
Abstract:
Much research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. We need to understand how to put all the pieces together in an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level "virtual machine" architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of "semantic information".
This is one of several sequels to the paper on representations presented at IJCAI in 1971.
Filename: Ian.Wright_Project_Summary.pdf (PDF)
Filename: Ian.Wright_Project_Summary.ps.gz
Title: A Summary of the Attention and Affect Project
Date: March 2nd 1994
Author: Ian Wright
Abstract:
The Attention and Affect project is summarized. The original aims
of the project are reviewed and the work to date described, followed
by a critique of the project in terms of the original aims.
Some ideas for future work are outlined.
Filename: Aaron.Sloman_variety.formalisms.pdf
Filename: Aaron.Sloman_variety.formalisms.ps
Title: Varieties of Formalisms for Knowledge Representation
Commentary on: "The Imagery Debate Revisited: A Computational
perspective," by Janice I. Glasgow, in: Computational
Intelligence. Special issue on Computational Imagery, Vol. 9,
No. 4, November 1993
Author: Aaron Sloman
Date: Nov 1993
Abstract:
Whilst I agree largely with Janice Glasgow's position paper, there are a
number of relevant subtle and important issues that she does not
address, concerning the variety of forms and techniques of
representation available to intelligent agents, and issues concerned
with different levels of description of the same agent, where that agent
includes different virtual machines at different levels of abstraction.
I shall also suggest ways of improving on her array-based representation
by using a general network representation, though I do not know whether
efficient implementations are possible.
This is one of several sequels to the paper presented at IJCAI in 1971.
Filename: Tim.Read_Systemic.Design.pdf (PDF)
Filename: Tim.Read_Systemic.Design.ps.gz
Title: Systemic Design: A Methodology For Investigating Emotional
Phenomena
Presented at WAUME93
Author: Tim Read
Date: August 1993
Abstract:
In this paper I introduce Systemic Design as a methodology for
studying complex phenomena like those commonly referred to as being
emotional. This methodology is an extension of the design-based
approach to include: organismic phylogenetic considerations, a
holistic design strategy, and a consideration of resource limitations.
It provides a powerful technique for generating theoretical models of
the mechanisms underpinning emotional phenomena, an area whose current
terminology is often muddled and inconsistent.
This approach enables concepts and mechanisms to be clearly specified
and communicated to other researchers in related fields.
Filename: Tim.Read-et.al_TerminlogyPit.pdf
Filename: Tim.Read,et.al_Terminology.Pit.ps.gz
Title: The Terminological Pitfalls of Studying Emotion
Authors: Tim Read and Aaron Sloman
(This paper is written by the first author with ideas developed
from conversations with the second).
Date: Aug 1993
Abstract:
The research community is full of papers with titles that include
terms like `emotion', `motivation', `cognition', and `attention'.
However when these terms are used they are either considered to be so
obvious as not to warrant a definition, or are defined in overly
simplistic and arbitrary ways. The reasons behind our use of
existing terminology are easy to see, but the problems inherent in it
are not.
problems, chief among them are confusion and pointless semantic
disagreement.
These problems occur because the current terminology is too vague, and
burdened with acquired meaning. We need to replace it with terminology
that emerges from a putatively complete theory of the conceptual space
of mechanisms and behaviours, spanning several functional levels
(e.g.: neural, behavioural and computational). Research that attempts
to use the current terminology to build larger and more complex
theories just adds to the existing confusion.
In this paper I examine the reasons behind the use of current
terminology, explore the problems inherent in it, and offer a way to
resolve these problems. The days when one small research team could
hope to produce a theory to explain the complete range of phenomena
currently referred to as being `emotional' have passed. It is time for
concerted and coordinated activity to understand the relation of
mechanisms to behaviour. This will give rise to clear and unambiguous
terminology that is defined at different functional levels. Until the
current terminological problems are solved, our rate of progress will
be slow.
Filename: Louise.Pryor,et.al_Cassandra.ps.Z
Title: Cassandra: Planning with contingencies
Authors: Louise Pryor and Gregg Collins
Date: Sept 1993
Abstract:
A fundamental assumption made by classical planners is that there is no
uncertainty in the world: the planner has full knowledge of the initial
conditions in which the plan will be executed, and all actions have
fully predictable outcomes. These planners cannot therefore construct
contingency plans, that is, plans that specify different actions to be
performed in different circumstances. In this paper we discuss the
issues that arise in the representation and construction of contingency
plans and describe Cassandra, a complete and sound partial-order
contingent planner that uses a single simple mechanism to represent
unknown initial conditions and the uncertain effects of actions.
Cassandra uses explicit decision steps that enable the agent executing
the plan to decide which plan branch to follow. The decision steps in a
plan result in subgoals to acquire knowledge, which are planned for in
the same way as any other subgoals. Unlike previous systems, Cassandra
thus distinguishes the process of gathering information from the
process of making decisions, and can use information-gathering actions
with a full range of preconditions. The simple representation of
uncertainty and the explicit representation of decisions in Cassandra
allow a coherent approach to the problems of contingent planning, and
provide a solid base for extensions such as the use of different
decision making procedures.
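A toy sketch of the central mechanism (names and structure are mine; Cassandra itself is a partial-order planner, which this deliberately is not): an explicit decision step branches on a fact established by an earlier information-gathering step, so acquiring the knowledge needed for the decision is a subgoal like any other:

    # Contingency plan with an explicit decision step.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Step:
        name: str
        execute: Callable[[dict], None]       # acts on the agent's beliefs

    @dataclass
    class DecisionStep:
        name: str
        condition: str                        # fact that must be known by now
        branches: Dict[bool, List]            # condition value -> sub-plan

    def run(plan, beliefs):
        for step in plan:
            if isinstance(step, DecisionStep):
                run(step.branches[beliefs[step.condition]], beliefs)
            else:
                step.execute(beliefs)

    beliefs = {}
    plan = [
        # Information-gathering step: satisfies the knowledge subgoal.
        Step("check-door", lambda b: b.__setitem__("door-open", True)),
        DecisionStep("choose-route", "door-open", {
            True:  [Step("walk-through", lambda b: print("walk through door"))],
            False: [Step("open-door", lambda b: print("open the door first"))],
        }),
    ]
    run(plan, beliefs)                        # -> "walk through door"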
Filename: Louise.Pryor,et.al_R.Features.ps.Z
Title: Reference features as guides to reasoning about opportunities
Authors: Louise Pryor and Gregg Collins
Date: Feb 1993
Abstract:
An intelligent agent acting in a complex and unpredictable world must
be able both to plan ahead and to react quickly to changes in its
surroundings. In particular, such an agent must be able to react
quickly when faced with unexpected opportunities to fulfill its goals.
We consider the issue of how an agent should respond to perceived
opportunities, and we describe a method for determining quickly
whether it is rational to seize an opportunity or whether a more
detailed analysis is required. Our system uses a set of heuristics
based on reference features to identify situations and objects that
characteristically involve problematic patterns of interaction. We
discuss the recognition of reference features, and their use in
focusing the system's reasoning on potentially adverse interactions
between its ongoing plans and the current opportunity.
New Searchable HTML version 11 Apr 2014
Filename:
Aaron.Sloman_Mind.as.controlsystem/ (HTML)
New PDF derived from new HTML:
Filename:
Aaron.Sloman_Mind.as.controlsystem.pdf (PDF in subdirectory)
Older version originally produced using FrameMaker:
Filename:
Aaron.Sloman_Mind.as.controlsystem.pdf
Title: The Mind as a Control System,
Author: Aaron Sloman
In Philosophy and the Cognitive Sciences,
(eds) C. Hookway and D. Peterson,
Cambridge University Press, pp 69--110
Date: 1993 (installed Feb 15 1994)
Originally Presented at Royal Institute of Philosophy conference
on Philosophy and the Cognitive Sciences,
in Birmingham in 1992, with proceedings published later.
Abstract:
Many people who favour the design-based approach to the study of mind,
including the author previously, have thought of the mind as a
computational system, though they don't all agree regarding the forms of
computation required for mentality. Because of ambiguities in the notion
of 'computation' and also because it tends to be too closely linked to
the concept of an algorithm, it is suggested in this paper that we
should rather construe the mind (or an agent with a mind) as a control
system involving many interacting control loops of various kinds, most
of them implemented in high level virtual machines, and many of them
hierarchically organised. (Some of the sub-processes are clearly
computational in character, though not necessarily all.) A feature
of the system is that the same sensors and motors are shared between
many different functions, and sometimes they are shared concurrently,
sometimes sequentially.
A number of
implications are drawn out, including the implication that there are
many informational substates, some incorporating factual information,
some control information, using diverse forms of representation. The
notion of architecture, i.e. functional differentiation into interacting
components, is explained, and the conjecture put forward that in order
to account for the main characteristics of the human mind it is more
important to get the architecture right than to get the mechanisms right
(e.g. symbolic vs neural mechanisms). Architecture dominates mechanism.
Filename:
Aaron.Sloman_prospects.pdf
Filename: Aaron.Sloman_prospects.ps
Title: Prospects for AI as the General Science of Intelligence
Author: Aaron Sloman
in Proceedings AISB93, published by IOS Press as a book:
Prospects for Artificial Intelligence
Date: April 1993
Abstract:
Three approaches to the study of mind are distinguished:
semantics-based, phenomena-based and design-based. Requirements for the
design-based approach are outlined. It is argued that AI as the
design-based approach to the study of mind has a long future, and
pronouncements regarding its failure are premature, to say the least.
Filename: Luc.Beaudoin.and.Sloman_Motive_proc.pdf
Filename: Luc.Beaudoin.and.Sloman_Motive_proc.ps
Title: A study of motive processing and attention,
in A.Sloman, D.Hogg, G.Humphreys, D. Partridge, A. Ramsay (eds)
Prospects for Artificial Intelligence, IOS Press, Amsterdam,
pp 229-238, 1993.
Authors: Luc P. Beaudoin and Aaron Sloman
Date: April 1993
Abstract:
We outline a design-based theory of motive processing and attention,
including multiple motivators operating asynchronously, with limited
knowledge, processing abilities and time to respond. Attentional
mechanisms address these limits using processes differing in complexity
and resource requirements, in order to select which motivators to attend
to, how to attend to them, how to achieve those adopted for action and
when to do so. A prototype model is under development. Mechanisms
include: motivator generators, attention filters, a dispatcher that
allocates attention, and a manager. Mechanisms like these might explain
the partial loss of control of attention characteristic of many
emotional states.
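For illustration, a minimal Python sketch of an insistence-based attention filter of the kind the abstract describes (threshold value, motivator fields and examples are invented; this is not the project's prototype): motivators whose heuristically assessed insistence falls below a variable threshold never reach the resource-limited management processes:

    # Insistence-based attention filter for asynchronously generated motivators.
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Motivator:
        insistence: float                     # cheap heuristic urgency estimate
        description: str = field(compare=False)

    class AttentionFilter:
        def __init__(self, threshold=0.5):
            self.threshold = threshold        # raised to protect ongoing work
            self.pending = []                 # max-heap via negated insistence

        def submit(self, m):
            if m.insistence >= self.threshold:
                heapq.heappush(self.pending, (-m.insistence, m))
            # below-threshold motivators are simply never noticed

        def next_for_management(self):
            return heapq.heappop(self.pending)[1] if self.pending else None

    f = AttentionFilter()
    for insistence, desc in [(0.9, "baby near cliff"), (0.2, "tidy desk"),
                             (0.6, "phone ringing")]:
        f.submit(Motivator(insistence, desc))
    print(f.next_for_management().description)   # -> "baby near cliff"

A filter like this is also one candidate mechanism for the "perturbances" mentioned above: a highly insistent motivator can keep penetrating the filter even after the management processes have decided not to act on it.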
Filename: Aaron.Sloman_Phenomena.Explain.pdf (PDF)
Filename: Aaron.Sloman_Phenomena.Explain.ps.gz
Title: What are the phenomena to be explained?
Author: Aaron Sloman
Date: Dec 1992
Seminar notes for the Attention and Affect Project, summarising its long-term objectives
Filename: Aaron.Sloman_IP.Emotion.Theory.pdf (PDF)
Filename: Aaron.Sloman_IP.Emotion.Theory.ps.gz
Title: Towards an information processing theory of emotions
Author: Aaron Sloman
Date: Dec 1992
Seminar notes for the Attention and Affect Project
Filename: Aaron.Sloman_Silicon.Souls.pdf (PDF)
Filename: Aaron.Sloman_Silicon.Souls.ps.gz
Title: Silicon Souls, How to design a functioning mind
Author: Aaron Sloman
Date: May 1992
Professorial Inaugural Lecture, Birmingham, May 1992
In the form of lecture slides for an excessively long lecture.
Much of this is replicated in other papers published since.
Filename: sloman-penrose-aij-review.pdf
Filename: sloman-penrose-aij-review.html
Title: The Emperor's Real Mind
Author: Aaron Sloman
Lengthy review/discussion of R.Penrose (The Emperor's New
Mind) in the journal Artificial Intelligence
Vol 56 Nos 2-3 August 1992, pages 355-396
HTML version added 23 May 2015
NOTE ADDED 21 Nov 2009:
A much shorter review by Aaron Sloman was published in The Bulletin of the London Mathematical Society 24 (1992) 87-96
Available as PDF and HTML:
sloman-penrose-review-lms.pdf
sloman-penrose-review-lms.html
Filename: sloman-humphreys-jci-proposal.pdf
(Previously Aaron.Sloman.et.al_JCI.Grant.pdf)
Filename: sloman-humphreys-jci-proposal.ps
(Previously Aaron.Sloman.et.al_JCI.Grant.ps)
Title: Appendix to JCI proposal, The Attention and Affect Project
Authors: Aaron Sloman and Glyn Humphreys
Appendix to research grant proposal for the Attention and Affect
project. (Paid for computer and computer officer support, and some
workshops, for three years, funded by UK Joint Research Council
initiative in Cognitive Science and HCI, 1992-1995.)
Date: January 1992
Filename: sloman-prolegomena-communication-affect.pdf (PDF)
Filename: sloman-prolegomena-communication-affect.html (HTML)
Author: Aaron Sloman
Title: Prolegomena to a Theory of Communication and Affect
In Ortony, A., Slack, J., and Stock, O. (Eds.)
Communication from an Artificial Intelligence Perspective:
Theoretical and Applied Issues.
Heidelberg, Germany: Springer, 1992, pp 229-260.
(HTML version added 23 May 2015)
Paper presented, Nov 1990, to NATO Advanced Research Workshop on
"Computational theories of communication and
their applications: Problems and Prospects".
Originally available as Cognitive Science Research Paper, CSRP-91-05, The
University of Birmingham.
Abstract:
As a step towards comprehensive computer models of communication, and
effective human machine dialogue, some of the relationships between
communication and affect are explored. An outline theory is presented
of the architecture that makes various kinds of affective states
possible, or even inevitable, in intelligent agents, along with some
of the implications of this theory for various communicative
processes. The model implies that human beings typically have many
different, hierarchically organised, dispositions capable of
interacting with new information to produce affective states, distract
attention, interrupt ongoing actions, and so on. High "insistence" of
motives is defined in relation to a tendency to penetrate an attention
filter mechanism, which seems to account for the partial loss of
control involved in emotions. One conclusion is that emulating human
communicative abilities will not be achieved easily. Another is that
it will be even more difficult to design and build computing systems
that reliably achieve interesting communicative goals.
Filename: BeaudoinSloman-1991-proposalForStudyOfMotiveProcessing.pdf (PDF)
Title: A Proposal for a Study of Motive Processing
Authors: Luc Beaudoin and Aaron Sloman
Date Installed: 30 Jan 2016
Where published: PhD Thesis proposal Luc Beaudoin, University of Birmingham
Abstract:
This paper was mostly written by the first author, although it is based on and develops ideas of the second author. The nursemaid scenario was first described by the second author (Sloman, 1986). The first author is in the process of implementing the model described in the paper.
In this paper we discuss some of the essential features and context of human motive processing, and we characterize some of the state transitions of motives. We then describe in detail a domain for designing an agent exhibiting some of these features. Recent related work is briefly reviewed to demonstrate the need for extending theories to account for the complexities of motive processing described here.
The nursemaid scenario is available at
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/nursemaid-scenario.html
Filename: Aaron.Sloman_consciousness.html (HTML)
Filename: Aaron.Sloman_consciousness.pdf (PDF)
(Installed 27 Dec 2007; updated 31 Oct 2015, 6 Nov 2017)
Title: Notes on consciousness
Author: Aaron Sloman
Abstract:
A discussion on why talking about consciousness is premature
appeared in AISB Quarterly No 72, pp 8-14, 1990
This paper Aaron.Sloman_consciousness.html
was modified on 31 Oct 2015 to refer to the discussion of polymorphous
concepts, suggesting that "conscious" exhibits parametric polymorphism
here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/family-resemblance-vs-polymorphism.html
6 Nov 2017:
Added reference to W. Ross Ashby (1956),
An Introduction to Cybernetics
as source of the Principle of Requisite Variety.
Title: How to dispose of the free will issue
NOTE (2 May 2014):
A revised slightly extended and reformatted version of
the paper is now available (HTML and PDF) here:
Filename: sloman-freewill-1988.html (HTML)
Filename: sloman-freewill-1988.pdf (PDF)
Filename: Aaron.Sloman_freewill.pdf (Old version)
Author: Aaron Sloman
Date: 1988 (or earlier)
HISTORY
Originally posted to comp.ai.philosophy circa 1988.
A similar version appeared in AISB Quarterly, Winter 1992/3, Issue 82, pp. 31-2.
An improved, elaborated, version of this paper with different sub-headings
by Stan Franklin
was published as
Chapter 2 of his book
Artificial Minds (MIT Press, 1995).
(Paperback version available.)
Franklin's Chapter is also available on this web site, with his permission:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/FranklinSlomanFreewill.html
Abstract:
Much philosophical discussion concerning freedom of the will is based on an assumption that there is a well-defined distinction between systems whose choices are free and those whose choices are not. This assumption is refuted by showing that when requirements for behaving systems are considered there are very many design options which correspond to a wide variety of distinctions more or less closely associated with our naive ideas of individual freedom. Thus, instead of one major distinction there are many different distinctions; different combinations of design choices will produce different sorts of agents, and the naive distinction is not capable of classifying them. In this framework, the pre-theoretical concept of freedom of the will needs to be abandoned and replaced with a host of different technical concepts corresponding to the capabilities enabled by different designs.
It is argued that biological evolution "discovered" many of the design options and produced more and more complex combinations of increasingly sophisticated designs giving animals more and more freedom (though all the interesting varieties depend on the operation of deterministic mechanisms).
See also section 10.13 of Chapter 10 of The Computer Revolution in Philosophy: Philosophy, science and models of mind (1978) .
Added (2006): Four Concepts of Freewill: Two of them incoherent
This argues that people who discuss problems of free will often talk past each other because they do not clearly perceive that there is not one universally accepted notion of "free will". Rather there are at least four, only two of which are of real value.
Filename: Aaron.Sloman_vision.design.pdf (PDF)
(Out of date Postscript version removed. Please use PDF
version instead.)
Filename: Aaron.Sloman_vision.design.html (HTML slightly messy)
Title: On designing a visual system: Towards a Gibsonian computational
model of vision.
In Journal of Experimental and Theoretical AI
1,4, 289-337 1989
Author: Aaron Sloman
Date: Original 1989, installed here April 18th 1994
Reformatted, with images included 22 Oct 2006
Footnote at the beginning extended 8 Aug 2012
Abstract:
This paper contrasts the standard (in AI) "modular" theory of the nature
of vision with a more general theory of vision as involving multiple
functions and multiple relationships with other sub-systems of an
intelligent system. The modular theory (e.g. as expounded by Marr)
treats vision as entirely, and permanently, concerned with the
production of a limited range of descriptions of visible surfaces, for a
central database; while the "labyrinthine" design allows any output that
a visual system can be trained to associate reliably with features of an
optic array and allows forms of learning that set up new communication
channels. The labyrinthine theory turns out to have much in common with
J.J.Gibson's theory of affordances, while not eschewing information
processing as he did. It also seems to fit better than the modular
theory with neurophysiological evidence of rich interconnectivity within
and between sub-systems in the brain. Some of the trade-offs between
different designs are discussed in order to provide a unifying framework
for future empirical investigations and engineering design studies.
However, the paper is more about requirements than detailed designs.
NOTE:
A precursor to this paper was published in 1982:
Image interpretation: The way ahead?
Some of the author's later work on vision is also on this web site, including
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#gibson
What's vision for, and how does it work?
From Marr (and earlier) to Gibson and Beyond
(Moved here 7 Oct 2018)
Filename: sloman-dennett-bbs-1987.pdf
Filename: sloman-dennett-bbs-1987.ps
Title: WHY PHILOSOPHERS SHOULD BE DESIGNERS
(BBS Commentary on Dennett's Intentional Stance)
Author: Aaron Sloman
Date Installed: 9 Sep 2009
Date Published: 1988
Where published:
BBS 1988 11 (3): pp 529-530. Commentary on Dennett, D.C., Precis of The Intentional Stance,
BBS 1988 11 (3): 495-505.
Abstract:
This is a short commentary on some aspects of D.C.Dennett's book 'The Intentional Stance'. The paper criticises the "intentional stance" as not providing real insight into the nature of intelligence because it ignores the question HOW behaviour is produced. The paper argues that only by taking the "design stance" can we understand the difference between intelligent and unintelligent ways of doing the same thing.
Filename: Aaron.Sloman_Motives.Mechanisms.pdf
(PDF added 3 Jan 2010)
Filename: Aaron.Sloman_Motives.Mechanisms.txt
Title: Motives Mechanisms and Emotions
Author: Aaron Sloman
In Cognition and Emotion 1,3, pp.217-234 1987,
reprinted in M.A. Boden (ed)
The Philosophy of Artificial Intelligence,
"Oxford Readings in Philosophy" Series
Oxford University Press, pp 231-247 1990.
(Also available as Cognitive Science Research Paper No 62,
Sussex University.)
Filename: Sloman.ecai86.pdf
Filename: Sloman.ecai86.ps.gz
Filename: Sloman.ecai86.ps
Title: Reference without causal links,
in
Proceedings 7th European Conference on Artificial
Intelligence,
Brighton, July 1986. Re-printed in
J.B.H. du Boulay, D.Hogg, L.Steels (eds)
Advances in Artificial Intelligence - II
North Holland, 369-381, 1987.
Date: 1986
Author: Aaron Sloman
Abstract:
This enlarges on earlier work attempting to show in a general way how
it might be possible for a machine to use symbols with `non-derivative'
semantics. It elaborates on the author's earlier
suggestion that computers understand symbols referring to their own
internal `virtual' worlds. A machine that grasps predicate calculus
notation can use a set of axioms to give a partial, implicitly
defined, semantics to non-logical symbols. Links to other symbols
defined by direct causal connections within the machine reduce
ambiguity. Axiom systems for which the machine's internal states do
not form a model give a basis for reference to an external world
without using external sensors and motors.
Filename: Sloman.ijcai85.pdf
Filename: Sloman.ijcai85.ps.gz
Filename: Sloman.ijcai85.ps
Filename: Sloman.ijcai85.txt
(Plain text original)
Title: What enables a machine to understand?
in
Proceedings 9th International Joint Conference on AI,
pp 995-1001, Los Angeles, August 1985.
Date: 1985
Author: Aaron Sloman
Abstract:
The 'Strong AI' claim that suitably programmed computers can manipulate
symbols that THEY understand is defended, and conditions for
understanding discussed. Even computers without AI programs exhibit a
significant subset of characteristics of human understanding. To argue
about whether machines can REALLY understand is to argue about mere
definitional matters. But there is a residual ethical question.
Filename: Aaron.Sloman_Rep.Formalisms.pdf
Filename: Aaron.Sloman_Rep.Formalisms.ps.gz
Filename: Aaron.Sloman_Rep.Formalisms.ps
Author: A.Sloman
Title: Why we need many knowledge representation formalisms,
in
Research and Development in Expert Systems,
ed. M Bramer, pp 163-183, Cambridge University Press 1985.
(Proceedings Expert Systems 85 conference.
Also Cognitive Science Research paper No 52, Sussex University.)
Date: 1985 (Reformatted December 2005)
Abstract:
Against advocates of particular formalisms for representing ALL kinds of knowledge, this paper argues that different formalisms are useful for different purposes. Different formalisms imply different inference methods. The history of human science and culture illustrates the point that very often progress in some field depends on the creation of a specific new formalism, with the right epistemological and heuristic power. The same has to be said about formalisms for use in artificial intelligent systems. We need criteria for evaluating formalisms in the light of the uses to which they are to be put. The same subject matter may be best represented using different formalisms for different purposes, e.g. simulation vs explanation. If different notations and inference methods are good for different purposes, this has implications for the design of expert systems.
This is one of several sequels to the paper presented at IJCAI in 1971.
See also the School of Computer Science Web page.
This file, designed to be lynx-friendly, is maintained by
Aaron Sloman.
Email
A.Sloman@cs.bham.ac.uk