Others, written only by my PhD students, research fellows, etc., will not be included below, even if I had some role in their production, e.g. as supervisor.
Some of those items are catalogued, with paper titles, here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
In: M Wooldridge and A Rao (Eds), Foundations of Rational Agency, Kluwer Academic Publishers, 1999.
(Expanded version of a 1996 paper.)
This paper is about how to give human-like powers to complete agents. For this the most important design choice concerns the overall architecture. Questions regarding detailed mechanisms, forms of representation, inference capabilities, knowledge etc. are best addressed in the context of a global architecture in which different design decisions need to be linked. Such a design would assemble various kinds of functionality into a complete coherent working system, in which there are many concurrent, partly independent, partly mutually supportive, partly potentially incompatible processes, addressing a multitude of issues on different time scales, including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on earth shows.
[[This version includes diagrams not in the original version.]]
(In The AI Magazine, April 1999, with reply by Rosalind Picard.)
Author: Aaron Sloman
Abstract:
This review summarises the main themes of Picard's book, some of which are related to Damasio's ideas in Descartes' Error. In particular, I try to show that not all secondary emotions need manifest themselves via the primary emotion system, and therefore they will not all be detectable by measurements of physiological changes. I agree with much of the spirit of the book, but disagree on detail.
NOTE: Rosalind Picard's reply to this review is available online at http://www.findarticles.com/cf_dls/m2483/1_20/54367782/p1/
Abstract:
This paper, an expanded version of a talk on love given to a literary
society, attempts to analyse some of the architectural requirements for
an agent which is capable of having primary, secondary and tertiary
emotions, including being infatuated or in love. It elaborates on work
done previously in the Birmingham Cognition and Affect group, describing
our proposed three level architecture (with reactive, deliberative and
meta-management layers), showing how different sorts of emotions relate
to those layers.
Some of the relationships between emotional states involving partial loss of control of attention (e.g. emotional states involved in being in love) and other states which involve dispositions (e.g. attitudes such as loving) are discussed and related to the architecture.
The work of poets and playwrights can be shown to involve an implicit commitment to the hypothesis that minds are (at least) information processing engines. Besides loving, many other familiar states and processes such as seeing, deciding, wondering whether, hoping, regretting, enjoying, disliking, learning, planning and acting all involve various sorts of information processing.
By analysing the requirements for such processes to occur, and relating them to our evolutionary history and what is known about animal brains, and comparing this with what is being learnt from work on artificial minds in artificial intelligence, we can begin to formulate new and deeper theories about how minds work, including how we come to think about qualia, many forms of learning and development, and results of brain damage or abnormality.
But there is much prejudice that gets in the way of such theorising, and also much misunderstanding because people construe notions of "information processing" too narrowly.
Abstract:
Patrice Terrier asks and Aaron Sloman attempts to answer questions about
AI, about emotions, about the relevance of philosophy
to AI, about Poplog, Sim_agent and other tools.
(EACE = European Association for Cognitive Ergonomics.)
NOTE: The link now points to the final, published version of the paper. http://www.cs.bham.ac.uk/research/projects/cogaff/00-02.html#lmpsfinal
Author: Aaron Sloman
Date: 8 Jun 1999
Abstract: (This was a short abstract. See later version)
Because we apparently have direct access to the phenomena, it is
tempting to think we know exactly what we are talking about when we
refer to consciousness, experience, the "first-person" viewpoint, etc.
But this is as mistaken as thinking we fully understand what
simultaneity is just because we have direct access to the phenomena, for
instance when we see a flash and hear a bang simultaneously.
Einstein taught us otherwise. From the fact that we can recognise some instances of a concept it does not follow that we know what is meant in general by saying that something is or is not an instance. Endless debates about which animals and which types of machines have consciousness are among the many symptoms that our concepts of mentality are more confused than we realise.
Too often people thinking about mind and consciousness consider only adult human minds in an academic culture, ignoring people from other cultures, infants, people with brain damage or disease, insects, birds, chimpanzees and other animals, as well as robots and software agents in synthetic environments. By broadening our view, we find evidence for diverse information processing architectures, each supporting and explaining a specific combination of mental capabilities.
When concepts connote complex clusters of capabilities, then different subsets may be present at different stages of development of a species or an individual. Very different subsets may be found in different species. Different subsets may be impaired by different sorts of brain damage or degeneration. When we know what sorts of components are implicitly referred to by our pre-theoretic "cluster concepts" we can then define new more precise concepts in terms of different subsets. It helps if we can specify the architectures which generate different subsets of information processing capabilities. That also enables us to ask new, deeper, questions not only about the development of individuals but about the evolution of mentality in different species.
Architecture-based concepts generated in the framework of virtual machine functionalism subvert familiar philosophical thought experiments about zombies, since attempts to specify a zombie with the right kind of virtual machine functionality but lacking our mental states degenerate into incoherence when spelled out in great detail. When you have fully described the internal states, processes, dispositions and causal interactions within a zombie whose information processing functions are alleged to be exactly like ours, the claim that something might still be missing becomes incomprehensible.
This paper has been superseded by a longer revised version with the same name in Cognitive Processing, Vol 1 (Summer 2001), pp 1-22, available in this directory via the 2000- Contents file.
Author: Aaron Sloman
(Originally presented at I3 Spring Days Workshop on Behavior planning for life-like characters and avatars, Sitges, Spain, March 1999)
Abstract:
There is much shallow thinking about emotions, and a huge diversity of
definitions of "emotion" arises out of this shallowness. Too often the
definitions and theories are inspired either by a mixture of
introspection and selective common sense, or by a misdirected
neo-behaviourist methodology, attempting to define emotions and other
mental states in terms of observables. One way to avoid such
shallowness, and perhaps achieve convergence, is to base concepts and
theories on an information processing architecture, which is subject to
various constraints, including evolvability, implementability, coping
with resource-limited physical mechanisms, and achieving required
functionality. Within such an architecture-based theory we can
distinguish primary emotions, secondary emotions, and tertiary emotions,
and produce a coherent theory which not only explains a wide range of
phenomena but also partly explains the diversity of theories: most of
them focus on only a subset of types of emotions.
(Intended as a partial antidote to widespread shallow views about emotions, and over-simplified ontologies too easily accepted by AI and HCI researchers now becoming interested in intelligence and affect.)
Our everyday attributions of emotions, moods, attitudes, desires, and other affective states implicitly presuppose that people are information processors. To long for something you need to know of its existence, its remoteness, and the possibility of being together again. Besides these semantic information states, longing also involves a control state. One who has deep longing for X does not merely occasionally think it would be wonderful to be with X. In deep longing thoughts are often uncontrollably drawn to X.
We need to understand the architectural underpinnings of control of attention, so that we can see how control can be lost. Having control requires being able to some extent to monitor one's thought processes, to evaluate them, and to redirect them. Only "to some extent" because both access and control are partial. We need to explain why. (In addition, self-evaluation can be misguided, e.g. after religious indoctrination!)
"Tertiary emotions" like deep longing are different from "primary" emotions (e.g. being startled or sexually aroused) and "secondary emotions" (e.g. being apprehensive or relieved) which, to some extent, we share with other animals. Can chimps, bonobos or human toddlers have tertiary emotions? To clarify the empirical questions and explain the phenomena we need a good model of the information processing architecture.
Conjecture: various modules in the human mind (perceptual, motor, and more central modules) all have architectural layers that evolved at different times and support different kinds of functionality, including reactive, deliberative and self-monitoring processes.
Different types of affect are related to the functioning of these different layers: e.g. primary emotions require only reactive layers, secondary emotions require deliberative layers (including "what if" reasoning mechanisms) and tertiary emotions (e.g. deep longing, humiliation, infatuation) involve additional self evaluation and self control mechanisms which evolved late and may be rare among animals.
An architecture-based framework can bring some order into the morass of studies of affect (e.g. myriad definitions of "emotion"). This will help us understand which kinds of emotions can arise in software agents that lack the reactive mechanisms required for controlling a physical body.
HCI Designers need to understand these issues (a) if they want to model human affective processes, (b) if they wish to design systems which engage fruitfully with human affective processes, (c) if they wish to produce teaching/training packages for would-be counsellors, psychotherapists, psychologists.
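The layered scheme sketched in this abstract can be made concrete with a toy program. The following Python fragment is purely illustrative (it is not from any of the papers listed here; all class names, signals and thresholds are invented): it shows one way primary, secondary and tertiary emotion-like states might be associated with reactive, deliberative and meta-management layers that all run on every cycle.

```python
# Illustrative sketch (not from the paper): a three-layer agent loop in which
# primary, secondary and tertiary emotion-like states are associated with the
# reactive, deliberative and meta-management layers respectively.
# All class and signal names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Agent:
    plans: list = field(default_factory=list)
    attention_log: list = field(default_factory=list)
    emotions: list = field(default_factory=list)

    def reactive_layer(self, percept):
        # Fast, dedicated responses; a sudden loud noise triggers a
        # primary emotion (e.g. being startled) with no deliberation.
        if percept == "loud_noise":
            self.emotions.append(("primary", "startle"))

    def deliberative_layer(self, goal):
        # "What if" reasoning about possible futures can generate
        # secondary emotions such as apprehension or relief.
        predicted_outcome = "failure" if goal == "risky_goal" else "success"
        if predicted_outcome == "failure":
            self.emotions.append(("secondary", "apprehension"))
        self.plans.append((goal, predicted_outcome))

    def meta_management_layer(self):
        # Self-monitoring of one's own thought processes; repeated
        # involuntary redirection of attention to one topic is treated
        # as a tertiary emotion (e.g. deep longing, infatuation).
        recent = self.attention_log[-5:]
        if recent.count("absent_loved_one") >= 3:
            self.emotions.append(("tertiary", "longing"))

    def tick(self, percept, goal, attended_topic):
        # One discrete time-step: all three layers run on every cycle.
        self.attention_log.append(attended_topic)
        self.reactive_layer(percept)
        self.deliberative_layer(goal)
        self.meta_management_layer()


if __name__ == "__main__":
    agent = Agent()
    for topic in ["work", "absent_loved_one", "absent_loved_one",
                  "absent_loved_one", "work"]:
        agent.tick(percept="quiet", goal="routine_goal", attended_topic=topic)
    agent.tick(percept="loud_noise", goal="risky_goal", attended_topic="work")
    print(agent.emotions)
```

The point of the sketch is only the division of labour: the reactive layer responds to the current percept, the deliberative layer reasons about hypothetical outcomes, and the meta-management layer monitors the agent's own attention history.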
Abstract:
An overview of some of the motivation for our research and the design criteria for the SIM_AGENT toolkit, written for a special issue of CACM on multi-agent systems, edited by Anupam Joshi and Munindar Singh.
For more information about the toolkit (now referred to as SimAgent), including movies of demos, see http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
Work on the Cognition and Affect project using the toolkit is reported here (PDF).
Invited talk for 6th Iberoamerican Conference on AI (IBERAMIA-98), Lisbon, October 1998.
Date: 16 Jun 1998
In Progress in Artificial Intelligence, Springer, Lecture Notes in Artificial Intelligence, pp. 27--38, ed. Helder Coelho.
Abstract:
This paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high-level cross-disciplinary ideas about natural information processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.
Invited contribution to symposium on Cognitive Agents: Modeling Human Cognition,
Date: 16 Jun 1998
at IEEE International Conference on Systems, Man, and Cybernetics
San Diego, Oct 1998, pp 2652--7.
Abstract:
This paper discusses some of the requirements for the control architecture of an intelligent human-like agent with multiple independent dynamically changing motives in a dynamically changing, only partly predictable world. The architecture proposed includes a combination of reactive, deliberative and meta-management mechanisms along with one or more global "alarm" systems. The engineering design requirements are discussed in relation to our evolutionary history, evidence of brain function and recent theories of Damasio and others about the relationships between intelligence and emotions. (The paper was completed in haste for a deadline and I forgot to explain why Descartes was in the title. See Damasio 1994.)
In proceedings: AAAI-98 Workshop on Software Tools for Developing Agents
Date: 20 May 1998 (PDF added 21 Nov 2007)
(eds Brian Logan and Jeremy Baxter). July 1998, pp 1-10.
Abstract:
This paper identifies a collection of high level questions which need to be posed by designers of toolkits for developing intelligent agents (e.g. What kinds of scenarios are to be developed? What sorts of agent architectures are required? What are the scenarios to be used for? Are speed and ease of development more or less important than speed and robustness of the final system?). It then considers some of the toolkit design options relevant to these issues, including some concerned with multi-agent systems and some concerned with individual intelligent agents of high internal complexity, including human-like agents. A conflict is identified between requirements for exploring new types of agent designs and requirements for formal specification, verifiability and efficiency. The paper ends with some challenges for computer science theorists posed by complex systems of interacting agents.
Filename: Sloman.toolworkshop.slides.pdf
Title: Slides for presentation on: What's an AI toolkit for?
This file contains the slides (two slides per A4 page) prepared for the presentation.
In Proceedings 2nd European Conference on Cognitive Modelling,
Authors: Aaron Sloman and Brian Logan
Nottingham, April 1-4, 1998. Eds Frank Ritter and Richard M. Young, Nottingham University Press, pp 58--65.
Date: 11 Mar 1998
Abstract:
This paper discusses agent architectures which are describable in terms of the "higher level" mental concepts applicable to human beings, e.g. "believes", "desires", "intends" and "feels". We conjecture that such concepts are grounded in a type of information processing architecture, and not simply in observable behaviour nor in Newell's knowledge-level concepts, nor Dennett's "intentional stance." A strategy for conceptual exploration of architectures in design-space and niche-space is outlined, including an analysis of design trade-offs. The SIM_AGENT (SimAgent) toolkit, developed to support such exploration, including hybrid architectures, is described briefly.
Abstract:
A discussion of some of the commonalities between brains and computers as physical systems within which information processing machines can be implemented. Includes a distinction between machines which manipulate energy and forces, machines which manipulate matter and machines which process information. Concludes that we still have much to learn about computers and brains, and although it seems likely that brains are computers we don't yet know what sorts of computers they are.
Abstract:
A key assumption of all problem-solving approaches based on utility theory is that we can assign a utility or cost to each state. This in turn requires that all criteria of interest can be reduced to a common ratio scale. However, many real-world problems are difficult or impossible to formulate in terms of minimising a single criterion, and it is often more natural to express problem requirements in terms of a set of constraints which a solution should satisfy. In this paper, we present a decision support system for route planning in complex terrains based on a novel constraint-based search procedure, A* with bounded costs (ABC), which searches for a solution which best satisfies a set of prioritised soft constraints, and illustrate the operation of the system in a simple route planning problem. Our approach provides a means of more clearly specifying problem-solving tasks and more precisely evaluating the resulting solutions as a basis for action.
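As a rough illustration of the "best satisfies a set of prioritised soft constraints" idea, here is a hedged Python sketch (this is not the ABC algorithm itself; the grid, constraints and priorities are invented): candidate routes are compared lexicographically by a tuple of violation scores, most important constraint first, instead of by a single composite cost.

```python
# Hedged sketch (not the authors' ABC implementation): best-first search over a
# small grid in which candidate routes are compared lexicographically by how
# badly they violate a list of soft constraints given in priority order,
# rather than by a single composite cost. Grid values and constraints are invented.

import heapq

GRID_ALTITUDE = [
    [1, 4, 9, 2],
    [2, 8, 3, 2],
    [1, 2, 2, 1],
]
ROWS, COLS = len(GRID_ALTITUDE), len(GRID_ALTITUDE[0])

def violations(path):
    """Return a tuple of violation scores, most important constraint first."""
    altitudes = [GRID_ALTITUDE[r][c] for r, c in path]
    too_high = sum(max(0, a - 7) for a in altitudes)    # priority 1: stay below altitude 7
    climbing = sum(max(0, b - a) for a, b in zip(altitudes, altitudes[1:]))  # priority 2: avoid climbs
    length = len(path)                                   # priority 3: prefer short routes
    return (too_high, climbing, length)

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            yield (nr, nc)

def plan_route(start, goal):
    """Best-first search ordered by the lexicographic violation tuple."""
    frontier = [(violations([start]), [start])]
    best_seen = {}
    while frontier:
        score, path = heapq.heappop(frontier)
        cell = path[-1]
        if cell == goal:
            return path, score
        if best_seen.get(cell, (float("inf"),)) <= score:
            continue
        best_seen[cell] = score
        for nxt in neighbours(cell):
            if nxt not in path:
                new_path = path + [nxt]
                heapq.heappush(frontier, (violations(new_path), new_path))
    return None, None

if __name__ == "__main__":
    route, score = plan_route((0, 0), (2, 3))
    print("route:", route)
    print("violations (altitude, climbing, length):", score)
```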
Abstract:
At ATAL'95 a paper was presented reporting on the SIM_AGENT toolkit [8]. SIM_AGENT was developed to provide a flexible framework for the exploration of architectures for autonomous agents consisting of a variety of concurrent interacting modules operating in discrete time. The previous paper outlined two early experiments with the toolkit. In this paper, we describe the experiences of two groups actively using the toolkit and report some of what we have learnt about its strengths and weaknesses. We briefly describe how the toolkit has developed since 1995 and sketch some of the ways in which it might be improved.
Filename: Sloman.biota98.html
Filename: Sloman.biota.slides.ps
Filename: Sloman.biota.slides.pdf
Title: What sorts of brains can support what sorts of minds?
Author: Aaron Sloman
Date: 19 Oct 1998
Abstract:
The HTML file is the abstract for an invited talk at the DIGITAL BIOTA 2 Conference.
The .ps and .pdf files are postscript and PDF files containing slightly extended versions of the slides I presented at the conference.
NB: A revised version of this paper appeared in a book published by Springer. The revised version is listed in a later index file in this directory.
Author: Aaron Sloman
Abstract:
Clearly we can solve problems by thinking about them. Sometimes we have the impression that in doing so we use words, at other times diagrams or images. Often we use both. What is going on when we use mental diagrams or images? This question is addressed in relation to the more general multi-pronged question: what are representations, what are they for, how many different types are they, in how many different ways can they be used, and what difference does it make whether they are in the mind or on paper? The question is related to deep problems about how vision and spatial manipulation work. It is suggested that we are far from understanding what's going on. In particular we need to explain how people understand spatial structure and motion, and I'll try to suggest that this is a problem with hidden depths, since our grasp of spatial structure is inherently a grasp of a complex range of possibilities and their implications. Two classes of examples discussed at length illustrate requirements for human visualisation capabilities. One is the problem of removing undergarments without removing outer garments. The other is thinking about infinite discrete mathematical structures.
Abstract:
There is now a huge amount of interest in consciousness among scientists as well as philosophers, yet there is so much confusion and ambiguity in the claims and counter-claims that it is hard to tell whether any progress is being made. This "position paper" suggests that we can make progress by temporarily putting to one side questions about what consciousness is or which animals or machines have it or how it evolved. Instead we should focus on questions about the sorts of architectures that are possible for behaving systems and ask what sorts of capabilities, states and processes might be supported by different sorts of architectures. We can then ask which organisms and machines have which sorts of architectures. This combines the standpoint of philosopher, biologist and engineer.
[NB: This paper is partly superseded by this 2009 paper.]
If we can find a general theory of the variety of possible architectures (a characterisation of "design space") and the variety of environments, tasks and roles to which such architectures are well suited (a characterisation of "niche space") we may be able to use such a theory as a basis for formulating new more precisely defined concepts with which to articulate less ambiguous questions about the space of possible minds.
For instance our initially ill-defined concept ("consciousness") might split into a collection of more precisely defined concepts which can be used to ask unambiguous questions with definite answers.
As a first step this paper explores a collection of conjectures regarding architectures and their evolution. In particular we explore architectures involving a combination of coexisting architectural levels including: (a) reactive mechanisms which evolved very early, (b) deliberative mechanisms which evolved later in response to pressures on information processing resources and (c) meta-management mechanisms that can explicitly inspect, evaluate and modify some of the contents of various internal information structures.
It is conjectured that in response to the needs of these layers, perceptual and action subsystems also developed layers, and also that an "alarm" system which initially existed only within the reactive layer may have become increasingly sophisticated and extensive as its inputs and outputs were linked to the newer layers.
Processes involving the meta-management layer in the architecture could explain the origin of the notion of "qualia". Processes involving the "alarm" mechanism and mechanisms concerned with resource limits in the second and third layers give us an explanation of three main forms of emotion, helping to account for some of the ambiguities which have bedevilled the study of emotion. Further theoretical and practical benefits may come from further work based on this design-based approach to consciousness.
A deeper longer term implication is the possibility of a new science investigating laws governing possible trajectories in design space and niche space, as these form parts of high order feedback loops in the biosphere.
Summary of poster presentation. In Proceedings of the Second International Conference on Autonomous Agents (Agents '98), ACM Press, 1998, pp 471--472.
Date: Feb 1998
Abstract:
Which agent architectures are capable of justifying descriptions in terms of the 'higher level' mental concepts applicable to human beings? We propose a new kind of architecture-based semantics for mentalistic descriptions in which mental concepts (e.g. 'believes', 'desires', 'intends', 'mood', 'emotion', etc.) are grounded in assumptions about information processing architectures, and not merely in concepts based solely on Dennett's 'intentional stance'. These ideas have led to the design of the SIM_AGENT toolkit which has been used to explore a variety of such architectures.
Abstract:
How can a virtual machine X be implemented in a physical machine Y? We know the answer as far as compilers, editors, theorem-provers, operating systems are concerned, at least insofar as we know how to produce these implemented virtual machines, and no mysteries are involved. This paper is about extrapolating from that knowledge to the implementation of minds in brains. By linking the philosopher's concept of supervenience to the engineer's concept of implementation, we can illuminate both. In particular, by showing how virtual machines can be implemented in causally complete physical machines, and still have causal powers, we remove some philosophical problems about how mental processes can be real and can have real effects in the world even if the underlying physical implementation has no causal gaps. This requires a theory of ontological levels.
This is an extract from a much longer, evolving, paper, in part about the relation between mind and brain, and in part about the more general question of how high level abstract kinds of structures, processes and mechanisms can depend for their existence on lower level, more concrete kinds.
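A trivial programming example may help make the implementation point vivid. The sketch below is my illustration, not drawn from the paper: a tiny "virtual machine" whose operations are wholly realised in operations on a lower-level substrate (here just a Python list), so that higher-level descriptions of what the machine did remain true and useful even though there are no causal gaps at the lower level.

```python
# Minimal sketch (my illustration, not from the paper): a tiny stack-based
# virtual machine implemented on top of a lower-level "machine" (a Python list
# of integers). Every virtual-machine event is fully realised in operations on
# the lower level, yet descriptions like "the VM added two numbers" remain
# true and useful at the higher level.

class TinyVM:
    def __init__(self):
        self.memory = []          # the lower-level substrate: just a list of ints

    # Higher-level (virtual machine) operations, each implemented entirely
    # by lower-level list manipulations.
    def push(self, n):
        self.memory.append(n)

    def add(self):
        b, a = self.memory.pop(), self.memory.pop()
        self.memory.append(a + b)

    def run(self, program):
        for op, *args in program:
            getattr(self, op)(*args)
        return self.memory[-1]

if __name__ == "__main__":
    vm = TinyVM()
    # "The VM computed 2 + 3" is a true higher-level description of
    # what is, at the lower level, nothing but list appends and pops.
    print(vm.run([("push", 2), ("push", 3), ("add",)]))   # prints 5
```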
Abstract:
This is an attempt to characterise a new unifying generalisation of the practice of software engineers, AI designers, developers of evolutionary forms of computation, etc. This topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory (yet to be developed!). Much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them. Likewise there are important emerging high-level cross-disciplinary ideas about processes and architectures found in nature that can be unified and formalised, extending work done in Alife and evolutionary computation. This paper attempts to provide a conceptual framework for thinking about the tasks.
Within this framework we can also find a new approach to the so-called hard problem of consciousness, based on virtual machine functionalism, and find a new defence for a version of the "Strong AI" thesis.
The slides begin to apply the ideas developed in the Cognition and
Affect project to the analysis of architectural requirements for love
and various other emotional and affective states.
[THE SLIDES ARE PARTLY OUT OF DATE. See
Filename: Sloman.kd.pdf (above)
]
This abstract was included in the 'Philosophy' section of the proceedings of this conference: Toward a Science of Consciousness 1998 ("Tucson III"), April 27 - May 2, 1998, Tucson, Arizona. All the abstracts are online here.
In Robert Trappl and Paolo Petta (eds), Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents, Springer (Lecture Notes in AI), 1997, pp 166--208
Author: Aaron Sloman
(Originally presented at Workshop on Designing personalities for synthetic actors, Vienna, June 1995. Includes some edited transcripts of discussion following presentation.)
Date: Installed 24 Jan 1996. Published 1997.
Abstract:
This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework some architectural requirements for human-like minds are discussed, and some preliminary suggestions made regarding mechanisms underlying motivation, emotions, and personality. A brief description is given of the 'Nursemaid' or 'Minder' scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments.
Technical report CSRP-97-30, University of Birmingham School of Computer Science, 1997.
Authors: Brian Logan and Aaron Sloman
Abstract:
For many autonomous agents, such as mobile robots, autonomous vehicles and Computer Generated Forces, route planning in complex terrain is a critical task, as many of the agent's higher-level goals can only be accomplished if the agent is in the right place at the right time. The route planning problem is often formulated as one of finding a minimum-cost route between two locations in a digitised map which represents a complex terrain of variable altitude, where the cost of a route is an indication of its quality. However route planners which attempt to optimise a single measure of plan quality are difficult to integrate into the architecture of an agent, and the composite cost functions on which they are based are difficult to devise or justify. In this paper, we present a new approach to route planning in complex terrains based on a novel constraint-based search procedure, A* with bounded costs (ABC), which generalises the single criterion optimisation problem solved by conventional route planners, and describe how a planner based on this approach has been integrated into the architecture of a simple agent. This approach provides a means of more clearly specifying agent tasks and more precisely evaluating the resulting plans as a basis for action.
Unsuccessful submission to ECAL97
Author: Aaron Sloman
Under what conditions are "higher level" mental concepts which are applicable to human beings also applicable to artificial agents? Our conjecture is that our mental concepts (e.g. "belief", "desire", "intention", "experience", "mood", "emotion", etc.) are grounded in implicit assumptions about an underlying information processing architecture. At this level mechanisms operate on information structures with semantic content, but there is no presumption of rationality. Thus we don't need to assume Newell's knowledge-level, nor Dennett's "intentional stance." The actual architecture will clearly be richer than that naively presupposed by common sense. We outline a three tiered architecture: with reactive, deliberative and reflective layers, and corresponding layers in perceptual and action subsystems, and discuss some implications.
(Slides for a talk at DFKI Saarbruecken, 6th Feb 1997)
Author: Aaron Sloman
Date: 6 Feb 1997
Abstract:
Everybody seems to be talking about agents, though it's not clear when the word "agent" adds anything beyond "system", "program", "tool", etc. My concern is to understand some of the main features of human agency: what they are, how they evolved, how they differ between individuals, how they are implemented, and how far they can be implemented in artificial systems. This is part of the general multi-disciplinary study of "design space", "niche space", their interrelations, and the trajectories possible within these spaces.
I outline a conjecture that many aspects of human mental functioning, including emotional states, can be explained in terms of an architecture approximately decomposable into three layers, with different evolutionary origins, shared with different animals. The oldest and most widespread is a *reactive* layer. A more recent development, probably shared with fewer animals, is a *deliberative* layer. The newest layer is concerned with *meta-management* and may be found only in a few species. The reactive layer involves highly parallel, dedicated and fast mechanisms, capable of fine-tuning but no major structural changes. The deliberative layer involves the ability to create, compare, evaluate, select and act on new complex structures (e.g. plans, solutions to problems, linguistic constructs), a process that requires much stored knowledge and is inherently serial and resource limited, for several different reasons.
Perceptual and action subsystems had to evolve corresponding layered architectures in order to engage with all these to greatest effect. The third layer is linked to phenomena involving self consciousness and self control (and explains the existence of qualia, as the contents of attentive processes).
Different sorts of emotional states and processes correspond to different architectural layers, and some of them are likely to arise in sophisticated artificial agents of the future.
A short introduction is given to the SIM_AGENT toolkit developed in Birmingham for research and teaching activities involving the design of agents each of which has complex interacting internal mechanisms running concurrently, including symbolic and "sub-symbolic" mechanisms. Some of the material overlaps with the Synthetic Minds poster, below.
In Luigia Carlucci Aiello and Stuart C. Shapiro (eds), Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR '96), Morgan Kaufmann Publishers, 1996, pp 627-638.
Abstract:
This is a philosophical 'position paper', starting from the observation that we have an intuitive grasp of a family of related concepts of "possibility", "causation" and "constraint" which we often use in thinking about complex mechanisms, and perhaps also in perceptual processes, which according to Gibson are primarily concerned with detecting positive and negative affordances, such as support, obstruction, graspability, etc. We are able to talk about, think about, and perceive possibilities, such as possible shapes, possible pressures, possible motions, and also risks, opportunities and dangers. We can also think about constraints linking such possibilities. If such abilities are useful to us (and perhaps other animals) they may be equally useful to intelligent artefacts. All this bears on a collection of different more technical topics, including modal logic, constraint analysis, qualitative reasoning, naive physics, the analysis of functionality, and the modelling of design processes. The paper suggests that our ability to use knowledge about "de-re" modality is more primitive than the ability to use "de-dicto" modalities, in which modal operators are applied to sentences. The paper explores these ideas, links them to notions of "causation" and "machine", suggests that they are applicable to virtual or abstract machines as well as physical machines. The concept of "possibility-transducer" is introduced. Some conclusions are drawn regarding the nature of mind and consciousness.
Title: Towards a Design-Based Analysis of Emotional Episodes
Authors: Ian Wright, Aaron Sloman, Luc Beaudoin
Date: Oct 1995 (published 1996)
Appeared (with commentaries) in Philosophy, Psychiatry and Psychology, vol 3, no 2, 1996, pp 101--126.
The commentaries, by
- Dan Lloyd,
- Cristiano Castelfranchi and Maria Miceli
- Margaret Boden
are available, followed by a reply by the authors, here: http://muse.jhu.edu/journals/philosophy_psychiatry_and_psychology/toc/ppp3.2.html
(This is a revised version of the paper presented to the Geneva Emotions Workshop, April 1995, entitled The Architectural Basis for Grief.)
Abstract:
The design-based approach is a methodology for investigating mechanisms capable of generating mental phenomena, whether introspectively or externally observed, and whether they occur in humans, other animals or robots. The study of designs satisfying requirements for autonomous agency can provide new deep theoretical insights at the information processing level of description of mental mechanisms. Designs for working systems (whether on paper or implemented on computers) can systematically explicate old explanatory concepts and generate new concepts that allow new and richer interpretations of human phenomena. To illustrate this, some aspects of human grief are analysed in terms of a particular information processing architecture being explored in our research group.
We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. However even the current early design provides an interpretative ground for some familiar phenomena, including characteristic features of certain emotional episodes, particularly the phenomenon of perturbance (a partial or total loss of control of attention).
The paper attempts to expound and illustrate the design-based approach to cognitive science and philosophy, to demonstrate the potential effectiveness of the approach in generating interpretative possibilities, and to provide first steps towards an information processing account of 'perturbant', emotional episodes.
Many of the architectural ideas have been developed further in later papers and presentations, all available online, e.g.
- Online presentations (mainly pdf)
- The Architectural Basis of Affective States and Processes
Aaron Sloman, Ron Chrisley and Matthias Scheutz
In Who Needs Emotions?: The Brain Meets the Robot, Ed. M. Arbib and J-M. Fellous, Oxford University Press, Oxford, New York, 2005
In Donald Peterson (ed), Forms of representation, Intellect Books, 1996
Author: Aaron Sloman
Date: Installed 31 July 1994; Published 1996
Abstract:
This position paper presents the beginnings of a general theory of representations starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of types of variation the relevant mechanisms can cope with. Similarly concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent).
This is one of several sequels to the paper presented at IJCAI in 1971.
In: Machines and Thought: The Legacy of Alan Turing (vol I), eds P.J.R. Millican and A. Clark, 1996, OUP (The Clarendon Press), pp 179--219
Author: Aaron Sloman
Revised version of paper presented to Turing Colloquium, University of Sussex, 1990.
Date: Mon May 8 1995 (Published 1996)
Abstract:
What is the relation between intelligence and computation? Although the difficulty of defining 'intelligence' is widely recognized, many are unaware that it is hard to give a satisfactory definition of 'computational' if computation is supposed to provide a non-circular explanation for intelligent abilities. The only well-defined notion of 'computation' is what can be generated by a Turing machine or a formally equivalent mechanism. This is not adequate for the key role in explaining the nature of mental processes, because it is too general, as many computations involve nothing mental, nor even processes: they are simply abstract structures. We need to combine the notion of 'computation' with that of 'machine'. This may still be too restrictive, if some non-computational mechanisms prove to be useful for intelligence. We need a theory-based taxonomy of architectures and mechanisms and corresponding process types. Computational machines may turn out to be a sub-class of the machines available for implementing intelligent agents. The more general analysis starts with the notion of a system with independently variable, causally interacting sub-states that have different causal roles, including both 'belief-like' and 'desire-like' sub-states, and many others. There are many significantly different such architectures. For certain architectures (including simple computers), some sub-states have a semantic interpretation for the system. The relevant concept of semantics is defined partly in terms of a kind of Tarski-like structural correspondence (not to be confused with isomorphism). This always leaves some semantic indeterminacy, which can be reduced by causal loops involving the environment. But the causal links are complex, can share causal pathways, and always leave mental states to some extent semantically indeterminate.
This (semi-serious) paper aims to replace deep sounding unanswerable, time-wasting pseudo-questions which are often posed in the context of attacking some version of the strong AI thesis, with deep, discovery-driving, real questions about the nature and content of internal states of intelligent agents of various kinds. In particular the question 'What is it like to be an X?' is often thought to identify a type of phenomenon for which no physical conditions can be sufficient, and which cannot be replicated in computer-based agents. This paper tries to separate out (a) aspects of the question that are important and provide part of the objective characterisation of the states, or capabilities of an agent, and which help to define the ontology that is to be implemented in modelling such an agent, from (b) aspects that are incoherent. The paper supports a philosophical position that is anti-reductionist without being dualist or mystical.
(Slides for a talk at MIT Media Lab, Nov 1996. Now out of date.)
Author: Aaron Sloman
Date: Nov 1996
Abstract:
Although much research on emotions is done on other animals (e.g. rats) there seem to be certain characteristically human emotional states which interest poets, novelists, and gossips, such as excited anticipation of an election victory, humiliation at being dismissed. Similar states are inevitable in intelligent robots. Obviously these states involve conceptual abilities not shared by most other mammals. Less obviously, they involve "perturbant" states in which there is partial loss of control of thought processes: you want to prepare that lecture but your mind is drawn back to the source of joy or pain. This presupposes the ability to be in control: you cannot lose what you've never had. The talk contrasts the design-based approach to the study of mind with other approaches. The former involves explorations of "design space", "niche space", and their interconnections. A design-based theory is presented which shows how emotional (perturbant) states are possible.
Invited talk at Cognitive Modeling Workshop, AAAI96, Portland, Oregon, Aug 1996.
Author: Aaron Sloman
Date: August 1996
In Intelligent Agents Vol II (ATAL-95), Eds. Mike Wooldridge, Joerg Mueller, Milind Tambe, Springer-Verlag, 1996, pp 392--407.
Authors: Aaron Sloman and Riccardo Poli
Updated version of Cognitive Science technical report CSRP-95-3, School of Computer Science, The University of Birmingham.
Presented at ATAL-95, Workshop on Agent Theories, Architectures, and Languages, at IJCAI-95 Workshop, Montreal, August 1995
SIM_AGENT is a toolkit that arose out of a project concerned with designing an architecture for an autonomous agent with human-like capabilities. Analysis of requirements showed a need to combine a wide variety of richly interacting mechanisms, including independent asynchronous sources of motivation and the ability to reflect on which motives to adopt, when to achieve them, how to achieve them, and so on. These internal 'management' (and meta-management) processes involve a certain amount of parallelism, but resource limits imply the need for explicit control of attention. Such control problems can lead to emotional and other characteristically human affective states. In order to explore these ideas, we needed a toolkit to facilitate experiments with various architectures in various environments, including other agents. The paper outlines requirements and summarises the main design features of a Pop-11 toolkit supporting both rule-based and 'sub-symbolic' mechanisms. Some experiments including hybrid architectures and genetic algorithms are summarised.
The toolkit is intended to support exploration of alternative agent architectures rather than to implement a particular agent architecture. It was used in the CogAff project and other projects.
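To indicate the kind of discrete-time scheduling regime described above, here is an illustrative Python sketch (the real toolkit is written in Pop-11; the class and function names below are invented): in each time slice every agent runs all of its internal modules against the same snapshot of the world, and the queued external actions are then applied together at the end of the slice.

```python
# Illustrative Python sketch of the scheduling idea described above (the real
# SIM_AGENT toolkit is written in Pop-11; names here are invented). In each
# discrete time slice every agent runs all of its internal modules (rule-based
# or "sub-symbolic") against a snapshot of the world and queues external
# actions; the queued actions are then applied together at the end of the slice.

class Agent:
    def __init__(self, name, modules):
        self.name = name
        self.modules = modules        # callables: (agent, world) -> list of actions
        self.pending_actions = []

    def run_one_cycle(self, world):
        self.pending_actions = []
        for module in self.modules:   # internal quasi-parallelism within one slice
            self.pending_actions.extend(module(self, world))


def scheduler(agents, world, timeslices):
    for t in range(timeslices):
        # Phase 1: every agent does its internal processing against the
        # same snapshot of the world.
        for agent in agents:
            agent.run_one_cycle(dict(world))
        # Phase 2: all queued external actions are applied together.
        for agent in agents:
            for action in agent.pending_actions:
                action(world)
        print(f"t={t}", world)


if __name__ == "__main__":
    def seek_food(agent, world):
        if world["food"] > 0:
            return [lambda w: w.update(food=w["food"] - 1)]
        return []

    world = {"food": 4}
    agents = [Agent("a1", [seek_food]), Agent("a2", [seek_food])]
    scheduler(agents, world, timeslices=3)
```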
This is a four-page paper, introducing a panel (John McCarthy, Marvin Minsky, and Aaron Sloman) at IJCAI95 in Montreal, August 1995:
Author: Aaron Sloman
"A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy."
John McCarthy also contributed a short paper on interactions between Philosophy and AI, available via his WEB page:
http://www-formal.stanford.edu/jmc/
This paper, along with the following paper by John McCarthy, introduces some of the topics to be discussed at the IJCAI95 event `A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy.' Philosophy needs AI in order to make progress with many difficult questions about the nature of mind, and AI needs philosophy in order to help clarify goals, methods, and concepts and to help with several specific technical problems. Whilst philosophical attacks on AI continue to be welcomed by a significant subset of the general public, AI defenders need to learn how to avoid philosophically naive rebuttals.
Many thanks to Takashi Gomi, at Applied AI Systems Inc, who took the picture.
Invited talk for 5th Scandinavian Conference on AI, Trondheim, May 1995. In Proceedings SCAI95, published by IOS Press, Amsterdam.
Author: Aaron Sloman
Most people who give definitions of AI offer narrow views based either on their own work area or the pronouncement of an AI guru about the scope of AI. Looking at the range of research activities to be found in AI conferences, books, journals and laboratories suggests something very broad and deep, going beyond engineering objectives and the study or replication of human capabilities. This is exploration of the space of possible designs for behaving systems (design space) and the relationships between designs and various collections of requirements and constraints (niche space). This exploration is inherently multi-disciplinary, and includes not only exploration of various architectures, mechanisms, formalisms, inference systems, and the like (aspects of natural and artificial designs), but also the attempt to characterise various kinds of behavioural capabilities and the environments in which they are required, or possible. The implications of such a study are profound: e.g. for engineering, for biology, for psychology, for philosophy, and for our view of how we fit into the scheme of things.
In: Janice Glasgow, Hari Narayanan, Chandrasekaran (eds),
Diagrammatic Reasoning: Computational and Cognitive Perspectives, AAAI Press, 1995, pp. 7--32
Author: Aaron Sloman
Date: Installed 17 October 1994; Published 1995
Abstract:
This paper offers a short and biased overview of the history of discussion and controversy about the role of different forms of representation in intelligent agents. It repeats and extends some of the criticisms of the `logicist' approach to AI that I first made in 1971, while also defending logic for its power and generality. It identifies some common confusions regarding the role of visual or diagrammatic reasoning including confusions based on the fact that different forms of representation may be used at different levels in an implementation hierarchy. This is contrasted with the way in which the use of one form of representation (e.g. pictures) can be controlled using another (e.g. logic, or programs). Finally some questions are asked about the role of metrical information in biological visual systems.
This is one of several sequels to the paper presented at IJCAI in 1971.
In AISB Quarterly, Autumn 1995
Authors: Darryl Davis, Aaron Sloman and Riccardo Poli
Abstract:
This paper describes a toolkit that arose out of a project concerned with designing an architecture for an autonomous agent with human-like capabilities. Analysis of requirements showed a need to combine a wide variety of richly interacting mechanisms, including independent asynchronous sources of motivation and the ability to reflect on which motives to adopt, when to achieve them, how to achieve them, and so on. These internal `management' (and metamanagement) processes involve a certain amount of parallelism, but resource limits imply the need for explicit control of attention. Such control problems can lead to emotional and other characteristically human affective states. We needed a toolkit to facilitate exploration of alternative architectures in varied environments, including other agents. The paper outlines requirements and summarises the main design features of a toolkit written in Pop-11. Some preliminary work on developing a multi-agent scenario, using agents of differing sophistication, is presented.
NOTE: See also the current description of the toolkit, here: http://www.cs.bham.ac.uk/research/poplog/packages/simagent.html
"Poster" prepared for the Conference of the International Society for Research in Emotions, Cambridge July 1994 (Final version installed here July 30th 1994)Author: Aaron Sloman, Luc Beaudoin and Ian WrightRevised version in Proceedings ISRE94, edited by Nico Frijda, ISRE Publications.
Date: 29 July 1994 (PDF version added 25 Dec 2005)
Abstract:
This is a 5-page summary, with three diagrams, of the main objectives and some work in progress at the University of Birmingham Cognition and Affect project, involving: Professor Glyn Humphreys (School of Psychology), and Luc Beaudoin, Chris Paterson, Tim Read, Edmund Shing, Ian Wright, Ahmed El-Shafei, and (from October 1994) Chris Complin (research students). The project is concerned with "global" design requirements for coping simultaneously with coexisting but possibly unrelated goals, desires, preferences, intentions, and other kinds of motivators, all at different stages of processing. Our work builds on and extends seminal ideas of H.A.Simon (1967). We are exploring "broad and shallow" architectures combining varied capabilities most of which are not implemented in great depth. The poster summarises some ideas about management and meta-management processes, attention filtering, and the relevance to emotional states involving "perturbances", where there is partial loss of control of attention.
In Proc ECAI94, 11th European Conference on Artificial Intelligence, edited by A.G. Cohn, John Wiley, pp 578-582, 1994
Author: Aaron Sloman
Date: 20 April 1994
Abstract:
This paper sketches a vision of AI as a unifying discipline that explores designs for a variety of behaving systems, for both scientific and engineering purposes. This unpacks the idea that AI is the general study of intelligence, whether natural or artificial. Some aspects of the methodology of such a discipline are outlined, and a project attempting to fill gaps in current work is introduced. This is one of a series of papers outlining the "design-based" approach to the study of mind, based on the notion that a mind is essentially a sophisticated self-monitoring, self-modifying control system.
The "design-based" study of architectures for intelligent agents is important not only for engineering purposes but also for bringing together hitherto fragmentary studies of mind in various disciplines, for providing a basis for an adequate set of descriptive concepts, and for making it possible to understand what goes wrong in various human activities and how to remedy the situation. But there are many difficulties to be overcome.
Author: Aaron Sloman
Date: March 6th 1994
Abstract:
(This is a longer, earlier version of "Towards a general theory of representations", and includes some additional material.)
Since first presenting a paper criticising excessive reliance on logical representations in AI at the second IJCAI at Imperial College London in 1971, I have been trying to understand what representations are and why human beings seem to need so many different kinds, tailored to different purposes. This position paper presents the beginnings of a general answer starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of types of variation the relevant mechanisms can cope with. Different kinds of syntax can support different kinds of semantics, and serve different kinds of purposes. Similarly concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent), and a first attempt is made to characterise dimensions in which forms of representations can differ, including the explicit/implicit dimension.
This is one of several sequels to the paper presented at IJCAI in 1971.
This was followed by a paper by Fred Dretske, disagreeing with the claim that AI systems can make use of semantic content.
Abstract:
Much research on intelligent systems has concentrated on low level mechanisms or sub-systems of restricted functionality. We need to understand how to put all the pieces together in an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control, and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only within the framework of a theory of actual and possible architectures can we solve old problems about the concept of mind and causal roles of desires, beliefs, intentions, etc. The high level "virtual machine" architecture is more useful for this than detailed mechanisms. E.g. the difference between connectionist and symbolic implementations is of relatively minor importance. A good theory provides both explanations and a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain or explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper sketches some requirements for such architectures, and analyses an idea shared between engineers and philosophers: the concept of "semantic information".
This is one of several sequels to the paper on representations presented at IJCAI in 1971.
This is a text file which is part of the online documentation for the SIM_AGENT toolkit. Often referred to subsequently as: SimAgent.
See also http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
(Link to the main SIM_AGENT overview page. Includes pointers to some movies demonstrating simple uses of the toolkit, and also later publications on the toolkit.)
Also available: November 1994 Seminar Slides. (PDF)
(Partly out of date.)
These slides give an early partial description of the sim_agent toolkit implemented in Poplog Pop-11 for exploring architectures for individual or interacting agents. See also the Atal95 paper.
Title: The Mind as a Control System
In Philosophy and the Cognitive Sciences, (eds) C. Hookway and D. Peterson, Cambridge University Press, pp 69-110, 1993
Author: Aaron Sloman
Date: Published 1993; installed Feb 15 1994
Originally Presented at Royal Institute of Philosophy conference
on Philosophy and the Cognitive Sciences,
in Birmingham in 1992, with proceedings published later.
Abstract:
Many people who favour the design-based approach to the study of mind, including the author previously, have thought of the mind as a computational system, though they don't all agree regarding the forms of computation required for mentality. Because of ambiguities in the notion of 'computation' and also because it tends to be too closely linked to the concept of an algorithm, it is suggested in this paper that we should rather construe the mind (or an agent with a mind) as a control system involving many interacting control loops of various kinds, most of them implemented in high level virtual machines, and many of them hierarchically organised. (Some of the sub-processes are clearly computational in character, though not necessarily all.) A feature of the system is that the same sensors and motors are shared between many different functions, and sometimes they are shared concurrently, sometimes sequentially. A number of implications are drawn out, including the implication that there are many informational substates, some incorporating factual information, some control information, using diverse forms of representation. The notion of architecture, i.e. functional differentiation into interacting components, is explained, and the conjecture put forward that in order to account for the main characteristics of the human mind it is more important to get the architecture right than to get the mechanisms right (e.g. symbolic vs neural mechanisms). Architecture dominates mechanism.
Commentary on: "The Imagery Debate Revisited: A Computational perspective," by Janice I. Glasgow, in: Computational Intelligence. Special issue on Computational Imagery, Vol. 9, No. 4, November 1993Author: Aaron Sloman
Date: Nov 1993
Abstract:
Whilst I agree largely with Janice Glasgow's position paper, there are a number of relevant subtle and important issues that she does not address, concerning the variety of forms and techniques of representation available to intelligent agents, and issues concerned with different levels of description of the same agent, where that agent includes different virtual machines at different levels of abstraction. I shall also suggest ways of improving on her array-based representation by using a general network representation, though I do not know whether efficient implementations are possible.
This is one of several sequels to the paper presented at IJCAI in 1971.
Author: Aaron Sloman
in Proceedings AISB93, published by IOS Press as a book:
Prospects for Artificial Intelligence
Eds: A.Sloman, D.Hogg, G.Humphreys, D. Partridge, A. Ramsay, Pp: 1--10
Abstract:
Three approaches to the study of mind are distinguished: semantics-based, phenomena-based and design-based. Requirements for the design-based approach are outlined. It is argued that AI as the design-based approach to the study of mind has a long future, and pronouncements regarding its failure are premature, to say the least.
in A.Sloman, D.Hogg, G.Humphreys, D. Partridge, A. Ramsay (eds) Prospects for Artificial Intelligence, IOS Press, Amsterdam, pp 229-238, 1993.
Authors: Luc P. Beaudoin and Aaron Sloman
Presented at AISB 1993, University of Birmingham.
Abstract:
We outline a design based theory of motive processing and attention, including: multiple motivators operating asynchronously, with limited knowledge, processing abilities and time to respond. Attentional mechanisms address these limits using processes differing in complexity and resource requirements, in order to select which motivators to attend to, how to attend to them, how to achieve those adopted for action and when to do so. A prototype model is under development. Mechanisms include: motivator generators, attention filters, a dispatcher that allocates attention, and a manager. Mechanisms like these might explain the partial loss of control of attention characteristic of many emotional states.
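NOTE (illustrative sketch, not from the paper):
The small Python fragment below is only a toy rendering of the kind of insistence-based filtering and dispatching of motivators described in the abstract above. It is not the authors' implementation; the class names, the numeric insistence scale and the fixed threshold are all assumptions introduced purely for illustration.

    # Toy sketch of an insistence-based attention filter and dispatcher.
    # Hypothetical names and numeric scales; NOT the authors' implementation.
    import heapq
    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass(order=True)
    class Motivator:
        insistence: float                       # heuristic urgency/importance in [0, 1]
        description: str = field(compare=False)

    class AttentionFilter:
        """Admits only motivators whose insistence exceeds a variable threshold."""
        def __init__(self, threshold: float = 0.5):
            self.threshold = threshold

        def admit(self, m: Motivator) -> bool:
            return m.insistence >= self.threshold

    class Dispatcher:
        """Allocates attention to the most insistent admitted motivator."""
        def __init__(self, filt: AttentionFilter):
            self.filt = filt
            self.pending: List[Tuple[float, Motivator]] = []

        def receive(self, m: Motivator) -> None:
            if self.filt.admit(m):
                # negate insistence so the most insistent motivator is popped first
                heapq.heappush(self.pending, (-m.insistence, m))

        def next_to_attend(self) -> Optional[Motivator]:
            return heapq.heappop(self.pending)[1] if self.pending else None

    if __name__ == "__main__":
        dispatcher = Dispatcher(AttentionFilter(threshold=0.5))
        for m in (Motivator(0.9, "attend to crying child"),
                  Motivator(0.2, "tidy the room"),
                  Motivator(0.7, "recharge batteries")):
            dispatcher.receive(m)
        print(dispatcher.next_to_attend().description)   # prints "attend to crying child"

Raising or lowering the threshold models a variable attention filter: with a high threshold only very insistent motivators can interrupt current processing, one way of reading the partial loss of control of attention mentioned above.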
Author: Aaron Sloman
In Ortony, A., Slack, J., and Stock, O. (Eds.),
Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues.
Heidelberg, Germany: Springer, 1992, pp 229-260.
(HTML version added 23 May 2015)
Paper presented, Nov 1990, to NATO Advanced Research Workshop on "Computational theories of communication and their applications: Problems and Prospects".
Originally available as Cognitive Science Research Paper, CSRP-91-05, The University of Birmingham.
Abstract:
As a step towards comprehensive computer models of communication, and effective human machine dialogue, some of the relationships between communication and affect are explored. An outline theory is presented of the architecture that makes various kinds of affective states possible, or even inevitable, in intelligent agents, along with some of the implications of this theory for various communicative processes. The model implies that human beings typically have many different, hierarchically organised, dispositions capable of interacting with new information to produce affective states, distract attention, interrupt ongoing actions, and so on. High "insistence" of motives is defined in relation to a tendency to penetrate an attention filter mechanism, which seems to account for the partial loss of control involved in emotions. One conclusion is that emulating human communicative abilities will not be achieved easily. Another is that it will be even more difficult to design and build computing systems that reliably achieve interesting communicative goals.
Lengthy review/discussion of Roger Penrose (The Emperor's New Mind) in the journal Artificial Intelligence Vol 56, Nos 2-3, August 1992, pages 355-396
Author: Aaron Sloman
NOTE ADDED 21 Nov 2009:
A much shorter review by Aaron Sloman was published in The Bulletin of the London Mathematical Society 24 (1992) 87-96
Available as PDF and HTML:
sloman-penrose-review-lms.pdf
sloman-penrose-review-lms.html
Abstract:
"The Emperor's New Mind" by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the "strong" AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards the notion of an *algorithm* as central to AI, whereas it is argued here that for the purpose of explaining mental capabilities the *architecture* of an intelligent system is more important than the concept of an algorithm, using the premise that what makes something intelligent is not *what* it does but *how it does it.* What needs to be explained is also unclear: Penrose thinks we all know what consciousness is and claims that the ability to judge Goedel's formula to be true depends on it. He also suggests that quantum phenomena underly consciousness. This is rebutted by arguing that our existing concept of "consciousness" is too vague and muddled to be of use in science. This and related concepts will gradually be replaced by a more powerful theory-based taxonomy of types of mental states and processes. The central argument offered by Penrose against the strong AI thesis depends on a tempting but unjustified interpretation of Goedel's incompleteness theorem. Some critics are shown to have missed the point of his argument. A stronger criticism is mounted, and the relevance of mathematical Platonism analysed. Architectural requirements for intelligence are discussed and differences between serial and parallel implementations analysed.
Authors: Aaron Sloman and Glyn Humphreys
Appendix to research grant proposal for the Attention and Affect project. (Paid for computer and computer officer support, and some workshops, for three years, funded by UK Joint Research Council initiative in Cognitive Science and HCI, 1992-1995.)
Date: January 1992
Author: Aaron Sloman
Date: Dec 1992
Seminar notes for the Attention and Affect Project, summarising its long term objectives
Author: Aaron Sloman
Date: Dec 1992
Seminar notes for the Attention and Affect Project
Author: Aaron Sloman
Date: May 1992
Professorial Inaugural Lecture, Birmingham, May 1992. In the form of lecture slides for an excessively long lecture. Much of this is replicated in other papers published since.
Authors: Luc Beaudoin and Aaron Sloman
Date Installed: 30 Jan 2016
Where published: PhD Thesis proposal Luc Beaudoin, University of Birmingham
Abstract:
This paper was mostly written by the first author, although it is based on and develops ideas of the second author. The nursemaid scenario was first described by the second author (Sloman, 1986). The first author is in the process of implementing the model described in the paper.
In this paper we discuss some of the essential features and context of human motive processing, and we characterize some of the state transitions of motives. We then describe in detail a domain for designing an agent exhibiting some of these features. Recent related work is briefly reviewed to demonstrate the need for extending theories to account for the complexities of motive processing described here.
The nursemaid scenario is available at
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/nursemaid-scenario.html
A discussion on why talking about consciousness is premature appeared in AISB Quarterly No 72, pp 8-14, 1990
Date: Installed circa 1994, Published 1990
Abstract:
This paper, Aaron.Sloman_consciousness.html, was modified on 31 Oct 2015 to refer to the discussion of polymorphous concepts, suggesting that "conscious" exhibits parametric polymorphism here.
Opening paragraphs:
{1} The noun "consciousness" as used by most academics (philosophers, psychologists, biologists...) does not refer to anything in particular.So you can't sensibly ask how it evolved, or which organisms do and which don't have it.
Some people imagine they can identify consciousness as "What I've got now". Thinking you can identify what you are talking about by focusing your attention on it is as futile as a Newtonian attempting to identify an enduring portion of space by focusing his attention on it.
You can identify a portion of space by its relationships to other things, but whether this is or isn't the same bit of space as one identified earlier will depend on WHICH other things you choose: the relationships change over time, but don't all change in unison. Similarly, you can identify a mental state or process by its relationship to other things (e.g. the environment, other mental states or processes, behavioural capabilities, etc), but then whether the same state can or cannot occur in other organisms or machines will depend on WHICH relationships you have chosen -- and there is no uniquely "correct" set of relationships.
NOTE:
A more recent tutorial presentation on this topic is available here.
In Journal of Experimental and Theoretical AI, 1,4, 289-337 1989
Author: Aaron Sloman
Date: 1989, installed here April 18th 1994
Reformatted, with images included 22 Oct 2006
Abstract:
This paper contrasts the standard (in AI) "modular" theory of the nature of vision with a more general theory of vision as involving multiple functions and multiple relationships with other sub-systems of an intelligent system. The modular theory (e.g. as expounded by Marr) treats vision as entirely, and permanently, concerned with the production of a limited range of descriptions of visible surfaces, for a central database; while the "labyrinthine" design allows any output that a visual system can be trained to associate reliably with features of an optic array and allows forms of learning that set up new communication channels. The labyrinthine theory turns out to have much in common with J.J.Gibson's theory of affordances, while not eschewing information processing as he did. It also seems to fit better than the modular theory with neurophysiological evidence of rich interconnectivity within and between sub-systems in the brain. Some of the trade-offs between different designs are discussed in order to provide a unifying framework for future empirical investigations and engineering design studies. However, the paper is more about requirements than detailed designs.
NOTE:
A precursor to this paper was published in 1982 Image interpretation: The way ahead?
Another was written for a conference in 1986, but has never been formally published: What are the purposes of vision?
Title: Must Intelligent Systems Be Scruffy?
Presented at Evolving Knowledge Conference. Reading University Sept 1989
Published in Evolving Knowledge in Natural Science and Artificial Intelligence, Eds J.E.Tiles, G.T.McKee, G.C.Dean, London: Pitman, 1990
Author: Aaron Sloman
Date: Presented 1989, Published 1990, Installed here 22 Feb 2002.
Plain text (troff) version here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/scruffy.ai.text
Abstract:
o Introduction: Neats vs Scruffies
o The scope of AI
o Bow to the inevitable: why scruffiness is unavoidable
o Non-explosive domains
o The physical (biological, social) world is even harder to deal with
o Limits of consistency in intelligent systems
o Scruffy semantics
o So various kinds of scruffiness are inevitable
o What should AI do about this?
o Conclusion
Originally in POP-11 Comes of Age: The Advancement of an AI Programming Language, (Ed) J.A.D.W. Anderson, Ellis Horwood, pp 30-54, 1989.
Author: Aaron Sloman
Abstract:
This paper gives an overview of the origins and development of the
programming language Pop-11, one of the Pop family of languages
including Pop1, Pop2, Pop10, Wpop, Alphapop. Pop-11 is the most
sophisticated version, comparable in scope and power to Common Lisp,
though different in many significant details, including its syntax. For
more on Pop-11 and Poplog, the system of which it is the core language,
see:
http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog.html
http://www.cs.bham.ac.uk/research/poplog/poplog.info.html
This paper first appeared in a collection published in 1989 to celebrate the 21st birthday of the Pop family of languages.
Where published:
Behavioral and Brain Sciences (BBS) 1988, 11 (3): pp 529-530.
Commentary on:
Dennett, D.C. Precis of The Intentional Stance.
BBS 1988 11 (3): 495-505.
Abstract:
This is a short commentary on some aspects of D.C.Dennett's book 'The Intentional Stance'. The paper criticises the "intentional stance" as not providing real insight into the nature of intelligence because it ignores the question HOW behaviour is produced. The paper argues that only by taking the "design stance" can we understand the difference between intelligent and unintelligent ways of doing the same thing.
Filename: Aaron.Sloman_freewill.pdf (Old version)
Author:
Aaron Sloman
Date: 1988 (or earlier)
HISTORY
Originally posted to comp.ai.philosophy circa 1988.
A similar version appeared in AISB Quarterly, Winter 1992/3, Issue 82,
pp. 31-2.
An improved, elaborated version of this paper, with different sub-headings, by Stan Franklin
was published as Chapter 2 of his book Artificial Minds (MIT Press, 1995).
(Paperback version available.)
Franklin's Chapter is also available on this web site, with his permission:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/FranklinSlomanFreewill.html
Abstract:
Much philosophical discussion concerning freedom of the will is based on an assumption that there is a well-defined distinction between systems whose choices are free and those whose choices are not. This assumption is refuted by showing that when requirements for behaving systems are considered there are very many design options which correspond to a wide variety of distinctions more or less closely associated with our naive ideas of individual freedom. Thus, instead of one major distinction there are many different distinctions; different combinations of design choices will produce different sorts of agents, and the naive distinction is not capable of classifying them. In this framework, the pre-theoretical concept of freedom of the will needs to be abandoned and replaced with a host of different technical concepts corresponding to the capabilities enabled by different designs.
It is argued that biological evolution "discovered" many of the design options and produced more and more complex combinations of increasingly sophisticated designs giving animals more and more freedom (though all the interesting varieties depend on the operation of deterministic mechanisms).
See also section 10.13 of Chapter 10 of The Computer Revolution in Philosophy: Philosophy, science and models of mind (1978) .
Added (2006): Four Concepts of Freewill: Two of them incoherent
This argues that people who discuss problems of free will often talk past each other because they do not clearly perceive that there is not one universally accepted notion of "free will". Rather there are at least four, only two of which are of real value.
Title: Bread today, jam tomorrow: The impact of AI on education
Authors: Benedict du Boulay and Aaron Sloman
Date Installed here: 23 Feb 2016
Where published:
Fifth International Conference on Technology and Education
Education In The 90s: Challenges Of The New Information Technologies
Edinburgh, Scotland 28 - 31 March 1988
Also here (but no longer available):
Cognitive Science Research Papers
Serial No. CSRP 098
School of Cognitive Sciences
University of Sussex
Brighton, BN1 9QN, England
Abstract:
Several factors make it very difficult to automate skilled teacher student interactions, e.g. integrating new material in a way that links effectively to the student's existing knowledge, taking account of the student's goals and beliefs and adjusting the form of presentation as appropriate. These difficulties are illustrated with examples from teaching programming. There are domain-specific and domain-neutral problems in designing intelligent tutoring systems (ITS). The domain-neutral problems include: encyclopaedic knowledge, combining different kinds of knowledge, knowing how to devise a teaching strategy, knowing how to monitor and modify the strategy, knowing how to motivate intellectual curiosity, understanding the cognitive states and processes involved in needing (or wanting) an explanation, knowing how to cope with social and affective processes, various communicative skills (this includes some of the others), knowing how to use various representational and communicative media, and knowing when to use them (an example of strategy).
In Cognition and Emotion 1,3, pp.217-234 1987,
reprinted in M.A. Boden (ed) The Philosophy of Artificial Intelligence, "Oxford Readings in Philosophy" Series Oxford University Press, pp 231-247 1990.
(Also available as Cognitive Science Research Paper No 62, Sussex University.)
Abstract: (From the introduction)
Ordinary language makes rich and subtle distinctions between different sorts of mental states and processes such as mood, emotion, attitude, motive, character, personality, and so on. Our words and concepts have been honed for centuries against the intricacies of real life under pressure of real needs and therefore give deep hints about the human mind.
Yet actual usage is inconsistent, and our ability to articulate the distinctions we grasp and use intuitively is as limited as our ability to recite rules of English syntax. Words like "motive" and "emotion" are used in ambiguous and inconsistent ways. The same person will tell you that love is an emotion, that she loves her children deeply, and that she is not in an emotional state. Many inconsistencies can be explained away if we rephrase the claims using carefully defined terms. As scientists we need to extend colloquial language with theoretically grounded terminology that can be used to mark distinctions and describe possibilities not normally discerned by the populace. For instance, we'll see that love is an attitude, not an emotion, though deep love can easily trigger emotional states. In the jargon of philosophers (Ryle 1949), attitudes are dispositions, emotions are episodes, though with dispositional elements.
For a full account of these episodes and dispositions we require a theory about how mental states are generated and controlled and how they lead to action -- a theory about the mechanisms of mind. The theory should explain how internal representations are built up, stored, compared, and used to make inferences, formulate plans or control actions. Outlines of a theory are given below. Design constraints for intelligent animals or machines are sketched, then design solutions are related to the structure of human motivation and to computational mechanisms underlying familiar emotional states.
Emotions are analysed as states in which powerful motives respond to relevant beliefs by triggering mechanisms required by resource-limited intelligent systems. New thoughts and motives get through various filters and tend to disturb other ongoing activities. The effects may interfere with or modify the operation of other mental and physical processes, sometimes fruitfully sometimes not. These are states of being "moved". Physiological changes need not be involved. Emotions contrast subtly with related states and processes such as feeling, impulse, mood, attitude, temperament; but there is no space for a full discussion here.
Title: Did Searle attack strong strong or weak strong AI?
Author: Aaron Sloman
Originally in
A.G. Cohn and J.R. Thomas (eds) Artificial Intelligence and Its Applications, John Wiley and Sons 1986.
(Proceedings AISB Conference, Warwick University, 1985)
Date: 1986
(Installed here 13 Jan 2001. Originally presented 1985.)
(Added HTML version 22 May 2015)
(Added Postscript and PDF versions 23 Oct 2005)
Abstract:
John Searle's attack on the Strong AI thesis, and the published replies, are all based on a failure to distinguish two interpretations of that thesis, a strong one, which claims that the mere occurrence of certain process patterns will suffice for the occurrence of mental states, and a weak one which requires that the processes be produced in the right sort of way. Searle attacks strong strong AI, while most of his opponents defend weak strong AI. This paper explores some of Searle's concepts and shows that there are interestingly different versions of the 'Strong AI' thesis, connected with different kinds of reliability of mechanisms and programs.
Keywords: Searle, strong AI, minds and machines, intentionality, meaning, reference, computation.
Author: Aaron Sloman
In Proceedings 7th European Conference on Artificial Intelligence, Brighton, July 1986. Re-printed in
J.B.H. du Boulay, D.Hogg, L.Steels (eds) Advances in Artificial Intelligence - II North Holland, 369-381, 1987.
Date: 1986
Abstract:
This enlarges on earlier work attempting to show in a general way how it might be possible for a machine to use symbols with `non-derivative' semantics. It elaborates on the author's earlier suggestion that computers understand symbols referring to their own internal `virtual' worlds. A machine that grasps predicate calculus notation can use a set of axioms to give a partial, implicitly defined, semantics to non-logical symbols. Links to other symbols defined by direct causal connections within the machine reduce ambiguity. Axiom systems for which the machine's internal states do not form a model give a basis for reference to an external world without using external sensors and motors.
Author: Aaron Sloman
Based on invited presentation at Fyssen Foundation Workshop on Vision,
Versailles France, March 1986, Organiser: M. Imbert
(The proceedings were never published.)
Abstract:
Contents:
1 Introduction
2 The `modular' theory
3 Previous false starts
4 What is, what should be, and what could be
5 Problems with the modular model
6 Higher level principles
7 Is this a trivial verbal question?
8 Interpretation involves "conceptual creativity"
9 The biological need for conceptual creativity
10 The uses of a visual system
11 Sub-tasks for vision in executing plans
12 Perceiving functions and potential for change
13 Figure and ground
14 Seeing why
15 Seeing spaces
16 Seeing mental states
17 Practical uses of 2-D image information
18 Varieties of descriptive databases
19 Kinds of visual learning
20 What changes during visual learning?
21 Triggering mental processes
22 The enhanced model
23 Conclusion: a three-pronged objective
24 Acknowledgment
25 References
At the Fyssen workshop I tried to initiate a discussion of the functions (or purposes, or uses) of vision in humans and other animals. However, the others present all seemed to assume that it was clear what the functions were, and they wished to discuss mechanisms that could explain those functions. However, that usually requires adopting a restricted view of animal vision, or even future robot vision. This paper outlines some of the diversity of functions of vision in animals, and future robots, and begins a discussion of the variety of architectures, forms of representation and mechanisms that could be useful in visual systems in various contexts. There is still a huge amount to be done. This paper extends ideas in Chapters 6 to 10 of The Computer Revolution in Philosophy and in Image interpretation: The way ahead? Some of the ideas were also included in the 1989 paper on vision "On designing a visual system: Towards a Gibsonian computational model of vision."
In Proceedings 9th International Joint Conference on AI, pp 995-1001, Los Angeles, August 1985.Author: Aaron Sloman
Date: 1985
Abstract:
The 'Strong AI' claim that suitably programmed computers can manipulate symbols that THEY understand is defended, and conditions for understanding discussed. Even computers without AI programs exhibit a significant subset of characteristics of human understanding. To argue about whether machines can REALLY understand is to argue about mere definitional matters. But there is a residual ethical question.
In Research and Development in Expert Systems, ed. M Bramer, pp 163-183, Cambridge University Press 1985.
Author: A.Sloman
(Proceedings Expert Systems 85 conference. Also Cognitive Science Research paper No 52, Sussex University.)
Date: 1985 (Reformatted December 2005)
Abstract:
Against advocates of particular formalisms for representing ALL kinds of knowledge, this paper argues that different formalisms are useful for different purposes. Different formalisms imply different inference methods. The history of human science and culture illustrates the point that very often progress in some field depends on the creation of a specific new formalism, with the right epistemological and heuristic power. The same has to be said about formalisms for use in artificial intelligent systems. We need criteria for evaluating formalisms in the light of the uses to which they are to be put. The same subject matter may be best represented using different formalisms for different purposes, e.g. simulation vs explanation. If different notations and inference methods are good for different purposes, this has implications for the design of expert systems.
This is one of several sequels to the paper presented at IJCAI in 1971.
Author: Aaron Sloman
In Real time multiple-motive expert systems, Proceedings Expert Systems 1985,
Ed. M. Merry, Cambridge University Press, 1985, pp. 213--224.
A sequel to Sloman and Croucher 1981 (Why robots will have emotions)
Date: 1985 (Installed here May 2004).
Abstract:
Sooner or later attempts will be made to design systems capable of dealing with a steady flow of sensor data and messages, where actions have to be selected on the basis of multiple, not necessarily consistent, motives, and where new information may require substantial re-evaluation of plans and strategies, including suspension of current actions. Where the world is not always friendly, and events move quickly, decisions will often have to be made which are time-critical. The requirements for this sort of system are not clear, but it is clear that they will require global architectures very different from present expert systems or even most AI programs. This paper attempts to analyse some of the requirements, especially the role of macroscopic parallelism and the implications of interrupts. It is assumed that the problems of designing various components of such a system will be solved, e.g. visual perception, memory, inference, planning, language understanding, plan execution, etc. This paper is about some of the problems of putting them together, especially perception, decision-making, planning and plan-execution systems.
Title: A Suggestion About Popper's Three Worlds
In the Light of Artificial Intelligence
(Previously: Artificial Intelligence and Popper's Three Worlds)
Author: Aaron Sloman
Date: 1985
Date Installed: 9 Oct 2012
Where published:
In Problems, Conjectures, and Criticisms: New Essays in Popperian Philosophy,
Eds. Paul Levinson and Fred Eidlin, Special issue of ETC: A Review of General Semantics, (42:3) Fall 1985.
http://www.generalsemantics.org/store/etc-a-review-of-general-semantics/309-etc-a-review-of-general-semantics-42-3-fall-1985.html
Abstract:
Materialists claim that world2 is reducible to world1. Work in Artificial Intelligence suggests that world2 is reducible to world3, and that one of the main explanatory roles Popper attributes to world2, namely causal mediation between worlds 1 and 3, is a redundant role. The central claim can be summed up as: "Any intelligent ghost must contain a computational machine." Computation is a world3 process. Moreover, much of AI (like linguistics) is clearly both science and not empirically refutable, so Popper's demarcation criterion needs to be replaced by a criterion which requires scientific theories to have clear and definite consequences concerning what is possible, rather than about what will happen.
Date Installed: 13 Jan 2007 (Originally published 1984)
Originally published in The Mind and the Machine: philosophical aspects of Artificial Intelligence,
ed. Stephen Torrance, Ellis Horwood, 1984, pp 35-42.
Abstract: (Extract from text)
Describing this structure is an interdisciplinary task I commend to philosophers. My aim for now is not to do it -- that's a long term project -- but to describe the task. This requires combined efforts from several disciplines including, besides philosophy: psychology, linguistics, artificial intelligence, ethology and social anthropology.
Clearly there is not just one sort of mind. Besides obvious individual differences between adults there are differences between adults, children of various ages and infants. There are cross-cultural differences. There are also differences between humans, chimpanzees, dogs, mice and other animals. And there are differences between all those and machines. Machines too are not all alike, even when made on the same production line, for identical computers can have very different characteristics if fed different programs. Besides all these existing animals and artefacts, we can also talk about theoretically possible systems.
NOTE
This theme was taken up by (among others)
Roman V. Yampolskiy, The Universe of Minds (2014)
https://arxiv.org/pdf/1410.0369
https://www.semanticscholar.org/paper/The-Universe-of-Minds-Yampolskiy/8c28056af2b97de5625aaed41791d9c14ea5cfda
Author: Aaron Sloman
Originally in New Horizons in Educational Computing (Ed) M. Yazdani,
Ellis Horwood, 1984. pp 220-235
Abstract:
The paper argues that instead of choosing very simple and restricted programming languages and environments for beginners, we can offer them many advantages if we use powerful, sophisticated languages, libraries, and development environments. Several reasons are given. The Pop-11 subset of the Poplog system is offered as an example.
NOTE:
The ideas are developed further in the description of teaching resources in Poplog
And in my presentation at the award of an Honorary DSc at the University of Sussex in 2006
Author: Aaron Sloman
Originally in Artificial Intelligence - Human Effects, (Eds) M. Yazdani and A. Narayanan,
Ellis Horwood, Chichester, 1984. pp 173--182
Abstract:
(From the introduction to the chapter.)
Cognitive Science has three interrelated aspects: theoretical, applied and empirical. Work in all three areas depends on and feeds back into the other two. Theoretical work explores possible computational systems, possible mental processes and structures, attempting to understand what sorts of mechanisms and representational systems are possible, how they differ, what their strengths and weaknesses are, etc. Empirical work studies existing intelligent systems, e.g. humans and other animals. Applied work is both concerned with problems relating to existing minds (e.g. learning difficulties, psychopathology) and also the design of new useful computational systems. This paper sketches some of the assumptions underlying much of the theoretical work, and hints at some of the practical applications. In particular, education and psychotherapy are both activities in which the computational processes in the mind of the pupil or patient are altered. In order to understand what they are doing, educationalists and psychotherapists require a computational theory of mind. This is not the dehumanising notion it may at first appear to be.
Author: Aaron Sloman
Physical and Biological Processing of Images
(Proceedings of an international symposium organised by The Rank Prize Funds, London, Sept 1982.)
Editors: O.J.Braddick and A.C. Sleigh.
Pages 380--401, Springer-Verlag, 1983.
Some unsolved problems about vision are discussed in relation to the goal of understanding the space of possible mechanisms with the power of human vision. The following issues are addressed: What are the functions of vision? What needs to be represented? How should it be represented? What is a good global architecture for a human like visual system? How should the visual sub-system relate to the rest of an intelligent system? It is argued that there is much we do not understand about the representation of visible structures, the functions of a visual system and its relation to the rest of the human mind. Some tentative positive suggestions are made, but more questions are offered than answers.
Note1
This paper is available in two formats as explained above. The OCR version probably has some errors that I have not corrected. But it is much smaller and easier to read than the scanned in images. I had forgotten about this paper for many years, until I stumbled across a reference to it. It is a precursor to
On designing a visual system: Towards a Gibsonian computational model of vision.
(Published in 1989.)
The 1982 paper presents several of the ideas I later developed in the context of a more embracing theory of the architecture of human-like minds, in which there are concurrently active 'layers' of different kinds performing different tasks, some evolutionarily very old, some newer, all sharing the same sensors and effectors (see also 'The mind as a control system' (1993)).
I believe this is potentially a far more powerful and general theory than the much discussed 'dual-stream' or 'dual-pathway' theories of vision based on differences between dorsal and ventral visual pathways. But evaluating the ideas requires a much broader multi-disciplinary perspective, which is not easy for researchers to achieve.
Note2
This paper pointed out, among other things, the need for natural and artificial vision systems to be able to perceive both static and continuously moving structures, and structures with parts that change their shapes and relationships continuously. It also emphasised differences between seeing what is the case and seeing how to do something, especially in a changing situation, involving continuous control of movement (e.g. painting a chair).
It later turned out that this distinction, which is familiar to engineers as a distinction between use of vision to acquire and record information that might be used for a variety of purposes and use of vision for 'servo-control', was loosely related to distinct functions of ventral and dorsal visual pathways in primate brains, which were misleadingly labelled "what" and "where" pathways by some researchers. Attempts were later made to correct the confusion by renaming these the "perception" and "action" pathways, which unfortunately does not allow visual control of actions to be termed "perception" or "seeing". These confusions are still widespread.
Where published:
New Ideas in Psychology, vol. 1, no. 1, pp. 41--50. Online here
Abstract: (Introduction to article)
Having discussed these issues with the author over many years, I was not surprised to find myself agreeing with nearly everything in the paper, and admiring the clarity and elegance of its presentation. All I can offer by way of commentary, therefore, is a collection of minor quibbles, some reformulations to help readers for whom the computational approach is very new, and a few extensions of the discussion.
Extracts
WHAT IS ARTIFICIAL INTELLIGENCE?
I'll start with a few explanatory comments on the nature of A.I., to supplement the section of the paper "A.I. as the Study of Representation". Cognitive Science has three main classes of goals: (a) theoretical (the study of possible minds, possible forms of representation and computation), (b) empirical (the study of actual minds and mental abilities of humans and other animals), (c) practical (the attempt to help individuals and society by alleviating problems (i.e. learning problems, mental disorders) and designing new useful intelligent machines).
Activities pursuing these three goals are most fruitful when the goals are interlinked, providing opportunities for feedback between theoretical, empirical and applied work. Artificial Intelligence is a subdiscipline of Cognitive Science which straddles the theoretical approach (studying general properties of possible computational systems) and applications (designing new systems to help in education, industry, commerce, medicine, entertainment). Its empirical content is mostly based not on specialised research, but on common knowledge of many of the things people can do - such as using and understanding language, seeing things, making plans, solving problems, playing games. This knowledge of what people can do sets design goals for both the theoretical and the applied work. In particular, an important aspect of A.I. research is task analysis: given that people can perform a certain task, what are the computational resources required, and what are the trade-offs between different representations and processing strategies? This sort of analysis is relevant to the study of other animals insofar as many human abilities are shared with other animals.
Where published:
Aaron Sloman, Drew V. McDermott, William A. Woods, Brian Cantwell Smith and Patrick J. Hayes,
"Panel discussion: Under What Conditions Can a Machine Attribute Meanings to Symbols?", chaired by Aaron Sloman,
In Proceedings IJCAI 1983, pp44-48,
http://ijcai.org/Past%20Proceedings/IJCAI-83-VOL-1/CONTENT/content.htm
Where published:
Intelligent Information Retrieval: Informatics 7, 1983 (pp.3--14)
Ed. Kevin P. Jones
Proceedings Cambridge Aslib Informatics 7 Conference, Cambridge 22-23 March 1983.
Abstract (Extract from Introduction):
It is rash for the first speaker at a conference to offer to talk about unsolved problems: the risk is that subsequent papers will present solutions. To minimise this risk, I resolved to discuss only some of the really hard long term problems. Consequently, I'll have little to say about solutions!
These long-term problems are concerned with the aim of designing really intelligent systems. Of course, it is possible to quibble endlessly about the definition of 'intelligent', and to argue about whether machines will ever really be intelligent, conscious, creative, etc. I want to by-pass such semantic debates by indicating what I understand by the aim of designing intelligent machines. I shall present a list of criteria which I believe are implicitly assumed by many workers in Artificial Intelligence to define their long term aims. Whether these criteria correspond exactly to what the word 'intelligent' means in ordinary language is an interesting empirical question, but is not my present concern. Moreover, it is debatable whether we should attempt to make machines which meet these criteria, but for present purposes I shall take it for granted that this is a worthwhile enterprise, and address some issues about the nature of the enterprise.
Finally, it is not obvious that it is possible to make artefacts meeting these criteria. For now I shall ignore all attempts to prove that the goal is unattainable. Whether it is attainable or not, the process of attempting to design machines with these capabilities will teach us a great deal, even if we achieve only partial successes.
Abstract:
By analysing what we mean by 'A longs for B', and similar descriptions of emotional states we see that they involve rich cognitive structures and processes, i.e. computations. Anything which could long for its mother, would have to have some sort of representation of its mother, would have to believe that she is not in the vicinity, would have to be able to represent the possibility of being close to her, would have to desire that possibility, and would have to be to some extent pre-occupied or obsessed with that desire. The paper includes a fairly detailed discussion of what it means to say 'X is angry with Y', and relationships between anger, exasperation, annoyance, dismay, etc., including exploring some of the dynamics of emotions such as anger. Emotions are contrasted with attitudes and moods.
NOTE:
Author: Aaron Sloman
Date: (Originally Published in 1982)
Abstract:
To be added.
Abstract: (Extract from the text)
The distinction between compiled and interpreted programs plays an important role in computer science and may be essential for understanding intelligent systems. For instance programs in a high-level language tend to have a much clearer structure than the machine code compiled equivalent, and are therefore more easily synthesised, debugged and modified. Interpreted languages make it unnecessary to have both representations. Further, if the interpreter is itself an interpreted program it can be modified during the course of execution, for instance to enhance the semantics of the language it is interpreting, and different interpreters may be used with the same program, for different purposes: e.g. an interpreter running the program in 'careful mode' would make use of comments ignored by an interpreter running the program at maximum speed (Sussman 1975). (The possibility of changing interpreters vitiates many of the arguments in Fodor (1975) which assume that all programs are compiled into a low level machine code, whose interpreter never changes.)
People who learn about the compiled/interpreted distinction frequently re-invent the idea that the development of skills in human beings may be a process in which programs are first synthesised in an interpreted language, then later translated into a compiled form. The latter is thought to explain many features of skilled performance, for instance, the speed, the difficulty of monitoring individual steps, the difficulty of interrupting, starting or resuming execution at arbitrary desired locations, the difficulty of modifying a skill, the fact that performance is often unconscious after the skill has been developed, and so on. On this model, the old jokes about centipedes being unable to walk, or birds to fly, if they think about how they do it, might be related to the impossibility of using the original interpreter after a program has been compiled into a lower level language.
Despite the attractions of this theory I suspect that a different model is required in some cases.
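NOTE (illustrative sketch, not from the paper):
The toy Python fragment below merely illustrates the compiled/interpreted contrast drawn in the extract above: the same program, held as data, can be run by a 'careful' interpreter that uses the annotation attached to each step, or by a fast interpreter that ignores the annotations, or it can be 'compiled' into a closure, after which step-by-step monitoring is no longer available. All names and the example program are invented purely for illustration.

    # Toy illustration of running one program with different interpreters,
    # then "compiling" it. Invented example; not from the paper.
    from typing import Callable, List, Tuple

    Step = Tuple[Callable[[float], float], str]   # (operation, annotation/comment)

    toy_program: List[Step] = [
        (lambda x: x + 1, "increment the running total"),
        (lambda x: x * 2, "double it"),
    ]

    def careful_interpreter(program: List[Step], x: float) -> float:
        """Runs each step, reporting the annotation attached to it."""
        for op, note in program:
            x = op(x)
            print(f"step done: {note}; value now {x}")
        return x

    def fast_interpreter(program: List[Step], x: float) -> float:
        """Runs the same program, ignoring the annotations entirely."""
        for op, _ in program:
            x = op(x)
        return x

    def compile_program(program: List[Step]) -> Callable[[float], float]:
        """Freezes the program into a closure: afterwards it can no longer be
        run step by step in 'careful mode' or easily modified -- loosely
        analogous to a practised, 'compiled' skill."""
        def compiled(x: float) -> float:
            for op, _ in program:
                x = op(x)
            return x
        return compiled

    if __name__ == "__main__":
        careful_interpreter(toy_program, 3)   # prints each annotated step
        skilled = compile_program(toy_program)
        print(skilled(3))                     # prints 8, with no step-by-step access

Switching from the careful interpreter to the fast interpreter, or to the compiled closure, is the kind of change the standard "compile the skill" model appeals to; the point of the extract is that, attractive as that model is, it may not fit all cases.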
Date Installed: 17 Jun 2005. Re-formatted: 11 Mar 2018
Originally a Cognitive Science Research Paper at Sussex University:
Sloman, Aaron and Monica Croucher, "You don't need a soft skin to have a warm heart: towards a computational analysis of motives and emotions," CSRP 004, 1981.
Abstract:
The paper introduces an interdisciplinary methodology for the study of minds of animals, humans and machines, and, by examining some of the pre-requisites for intelligent decision-making, attempts to provide a framework for integrating some of the fragmentary studies to be found in Artificial Intelligence.
The space of possible architectures for intelligent systems is very large. This essay takes steps towards a survey of the space, by examining some environmental and functional constraints, and discussing mechanisms capable of fulfilling them. In particular, we examine a subspace close to the human mind, by illustrating the variety of motives to be expected in a human-like system, and types of processes they can produce in meeting some of the constraints.
This provides a framework for analysing emotions as computational states and processes, and helps to undermine the view that emotions require a special mechanism distinct from cognitive mechanisms. The occurrence of emotions is to be expected in any intelligent robot or organism able to cope with multiple motives in a complex and unpredictable environment.
Analysis of familiar emotion concepts (e.g. anger, embarrassment, elation, disgust, pity, etc.) shows that they involve interactions between motives (e.g. wants, dislikes, ambitions, preferences, ideals, etc.) and beliefs (e.g. beliefs about the fulfilment or violation of a motive), which cause processes produced by other motives (e.g. reasoning, planning, execution) to be disturbed, disrupted or modified in various ways (some of them fruitful). This tendency to disturb or modify other activities seems to be characteristic of all emotions. In order fully to understand the nature of emotions, therefore, we need to understand motives and the types of processes they can produce.
This in turn requires us to understand the global computational architecture of a mind. There are several levels of discussion: description of methodology, the beginning of a survey of possible mental architectures, speculations about the architecture of the human mind, analysis of some emotions as products of the architecture, and some implications for philosophy, education and psychotherapy.
Originally appeared in Proceedings IJCAI 1981, Vancouver
Also available from Sussex University as Cognitive Science Research paper No 176
Abstract:
Emotions involve complex processes produced by interactions between motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or violation of a motive, or triggering of a 'motive-generator', can disturb processes produced by other motives. To understand emotions, therefore, we need to understand motives and the types of processes they can produce. This leads to a study of the global architecture of a mind. Some constraints on the evolution of minds are discussed. Types of motives and the processes they generate are sketched.
(Note we now use slightly different terminology from that used in this paper. In particular, what the paper labelled as "intensity" we now call "insistence", i.e. the capacity to divert attention from other things.)
NB
This paper is often misquoted as arguing that robots (or at least intelligent robots) should have emotions. On the contrary, the paper argues that certain sorts of high level disturbances (i.e. emotional states) will be capable of arising out of interactions between mechanisms that exist for other reasons. Similarly 'thrashing' is capable of occurring in multi-processing operating systems that support swapping and paging, but that does not mean that operating systems should produce thrashing.
A more recent analysis of the confused but fashionable arguments (e.g. based on Damasio's writings) claiming that emotions are needed for intelligence can be found in this semi-popular presentation.
One of the arguments is analogous to arguing that a car requires a functioning horn for its starter motor to work, because damaging the battery can disable both the horn and the starter motor.
Title: Experiencing Computation: A tribute to Max Clowes
(Originally appeared in Computing in Schools 1981)
Author: Aaron Sloman
Date installed:
11 Feb 2001 (Originally published 1981)
Abstract:
Max Clowes (pronounced as if spelt Clues, or Klews) was one of the pioneers of AI vision research in the UK. He inspired and helped to develop Artificial Intelligence and computational Cognitive Science at the University of Sussex. In 1981 he tragically died, shortly after leaving the University in order to work on computing in schools. This paper was originally published in 1981. The version here has had some footnotes added referring to subsequent developments.
Title: Deep and shallow simulations
Commentary on: Modeling a paranoid mind, by Kenneth Mark Colby
The Behavioral and Brain Sciences (1981) 4(04) pp 515-534
http://dx.doi.org/10.1017/S0140525X00000030
Abstract:
A deep simulation attempts to model mental processes, whereas a shallow simulation attempts only to replicate behaviour. The question raised by Colby's paper is, What can we learn from a shallow simulation?
Where published:
In: Open Peer Commentary on Shimon Ullman: `Against Direct Perception'
Behavioral and Brain Sciences Journal (BBS) (1980) 3, pp. 401-404
The whole publication, including commentaries, is:
S. Ullman, Against direct perception
The Behavioral And Brain Sciences (1980) 3, 373-415
http://dx.doi.org/10.1017/S0140525X0000546X
Abstract:
No abstract in paper. Will add a summary here later.
Compare my more recent discussion of Gibson:
http://tinyurl.com/BhamCog/talks/#talk93
Aaron Sloman, What's vision for, and how does it work? From Marr (and earlier) to Gibson and Beyond,
Online tutorial presentation, Sep, 2011. Also at
http://www.slideshare.net/asloman/
Where published:
Commentary on 'Minds, brains, and programs' by John R. Searle
in The Behavioral and Brain Sciences Journal (BBS) (1980) 3, 417-457
http://dx.doi.org/10.1017/S0140525X00005756
Also http://www.cnbc.cmu.edu/~plaut/MindBrainComputer/papers/Searle80BBS.mindsBrainsPrograms.pdf
This commentary: pages 447-448
Abstract:
Searle's delightfully clear and provocative essay contains a subtle mistake, which is also often made by AI researchers who use familiar mentalistic language to describe their programs. The mistake is a failure to distinguish form from function.
That some mechanism or process has properties that would, in a suitable context, enable it to perform some function, does not imply that it already performs that function. For a process to be understanding, or thinking, or whatever, it is not enough that it replicate some of the structure of the processes of understanding, thinking, and so on. It must also fulfil the functions of those processes. This requires it to be causally linked to a larger system in which other states and processes exist. Searle is therefore right to stress causal powers. However, it is not the causal powers of brain cells that we need to consider, but the causal powers of computational processes. The reason the processes he describes do not amount to understanding is not that they are not produced by things with the right causal powers, but that they do not have the right causal powers, since they are not integrated with the right sort of total system.
Title: The primacy of non-communicative language
Author: Aaron Sloman
Date: Originally published 1979. Added here 2 Dec 2000
In The Analysis of Meaning, Proceedings 5,
(Invited talk for ASLIB Informatics Conference, Oxford, March 1979)
ASLIB and British Computer Society, London, 1979.
Eds M. MacCafferty and K. Gray, pages 1--15.
Abstract:
How is it possible for symbols to be used to refer to or describe things? I shall approach this question indirectly by criticising a collection of widely held views of which the central one is that meaning is essentially concerned with communication. A consequence of this view is that anything which could be reasonably described as a language is essentially concerned with communication. I shall try to show that widely known facts, for instance facts about the behaviour of animals, and facts about human language learning and use, suggest that this belief, and closely related assumptions (see A1 to A3, in the paper) are false. Support for an alternative framework of assumptions is beginning to emerge from work in Artificial Intelligence, work concerned not only with language but also with perception, learning, problem-solving and other mental processes. The subject has not yet matured sufficiently for the new paradigm to be clearly articulated. The aim of this paper is to help to formulate a new framework of assumptions, synthesising ideas from Artificial Intelligence and Philosophy of Science and Mathematics.
Where published:
In Donald Michie (Editor) Expert Systems in the Microelectronic Age (Edinburgh University Press, 1979)
Abstract:
A brief introduction to the main problems of epistemology as understood by philosophers and an explanation of (a) why they are relevant to AI, and (b) how they are transformed in the context of AI as the science of natural and artificial intelligent systems.
Author: Aaron Sloman
(University of Sussex. At the University of Birmingham since 1991.)
http://www.cs.bham.ac.uk/~axs
Date installed: 29 Sep 2001
Last updated: 19 Aug 2016
Abstract: See the book contents list
Published 1978: Revised Version, August 2016
HTML Also available as goo.gl/AJLDih
The PDF version is more suitable for printing, and shows page structure better,
but loses some of the detail, e.g. some text indentation.
The PDF version should have contents in a side-panel, e.g. if viewed in XPDF or
Acrobat Reader, but not if viewed "embedded" in
a web browser, e.g. Firefox or Chrome.
The page numbers of the PDF version are likely to change after further edits.
For citations use section numbers/headings rather than page numbers.
(Published free, with a Creative Commons Licence: details below.)
NEW VERSION
The original was photocopied by Manuela Viezzer in 2000, then scanned in by Sammy Snow. A lot of work remained to be done, correcting OCR errors and re-drawing the diagrams (for which I used the 'tgif' package on Linux). Since then most chapters have had additional notes and comments added, all clearly marked as new additions. In July 2015 the separate parts (except for the index) were combined to one integrated document with internal cross-references and made available in html and pdf formats listed above.
Some reviews of the 1978 version are listed below and in the online edition of the book.
OUT OF DATE VERSIONS
Product description added by Sergei Kaunov:
(The review rightly criticises some of the unnecessarily aggressive tone and
throw-away remarks, but also gives the most thorough assessment of the main
ideas of the book that I have seen.
Like many reviewers and AI researchers, Hofstadter, like Stich (see below) regards the philosophy
of science in the first part of the book, e.g. Chapter 2, as relatively uninteresting,
whereas I think
understanding those issues is central to understanding how human
minds work as they learn
about the world and about themselves, and also central
to any good philosophy of science.)
Added 23 Jul 2015: Stich Review
A review of this book was published by Steven P. Stich, in 1981
That review has now been made available, with the author's permission, here:
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/stich-review-crp.html
The review (like Hofstadter's review) criticised the notion of 'Explaining
possibilities' as one
of the aims of science and my use of
Artificial Intelligence as an example, in Chapter 2.
Response to reviews
A partial response to the reviews by Stich and Hofstadter is
available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/explaining-possibility.html
Construction kits as explanations of possibilities
(generators of possibilities)
(Work in progress.)
Abstract:
1. Premack, D., Woodruff, G. Does the chimpanzee have a theory of mind? BBS 1978 1 (4): 515.
2. Griffin, D.R. Prospects for a cognitive ethology. BBS 1978 1 (4): 527.
3. Savage-Rumbaugh, E.S., Rumbaugh, D.R., Boysen, S. Linguistically-mediated tool use and exchange by chimpanzees (Pan Troglodytes). BBS 1978 1 (4): 539.
Despite the virtues of the target articles, I find something sadly lacking: an awareness of deep problems and a search for deep explanations.
Are the authors of these papers merely concerned to collect facts? Clearly not: they are also deeply concerned to learn the extent of man's uniqueness in the animal world, to refute behaviourism, and to replace anecdote with experimental rigour. But what do they have to say to someone who doesn't care whether humans are unique, who believes that behaviourism is either an irrefutable collection of tautologies or a dead horse, and who already is deeply impressed by the abilities of cats, dogs, chimps, and other animals, but who constantly wonders: HOW DO THEY DO IT?
My answer is that the papers do not have much to say about that: for that, investigation of designs for working systems is required, rather than endless collection of empirical facts, interesting as those may be.
See also The primacy of non-communicative language (Above)
Where published:
in Proceedings AISB/GI Conference, 18-20th July 1978,
Hamburg, Germany
Programme Chair: Derek Sleeman
Program Committee: Alan Bundy (Edinburgh), Steve Hardy (Sussex), H.-H. Nagel (Hamburg), Jacques Pitrat (Paris), Derek Sleeman (Leeds), Yorick Wilks (Essex)
General chair: K.-H. Nagel
Published by: SSAISB and GI
Abstract:
(Extract from text)
Vision work in AI has made progress with relatively small problems. We are not aware of any system in which many different kinds of knowledge co-operate. Often there is essentially one kind of structure, e.g. a network of lines or regions, and the problem is simply to segment it, and/or to label parts of it. Sometimes models of known objects are used to guide the analysis and interpretation of an image, as in the work of Roberts (1965), but usually there are few such models, and there isn't a very deep hierarchy of objects composed of objects composed of objects....
By contrast, recent speech understanding systems, like HEARSAY (Lesser 1977, Hayes-Roth 1977), deal with more complex kinds of interactions between different sorts of knowledge. They are still not very impressive compared with people, but there are some solid achievements. Is the lack of similar success in vision due to inherently more difficult problems?
Some vision work has explored interactions between different kinds of knowledge, including the Essex coding-sheet project (Brady, Bornat 1976) based on the assumption that provision for multiple co-existing processes would make the tasks much easier. However, more concrete and specific ideas are required for sensible control of a complex system, and a great deal of domain-specific descriptive know-how has to be explicitly provided for many different sub-domains.
The POPEYE project is an attempt to study ways of putting different kinds of visual knowledge together in one system.
NOTE:
Chapter 9 of The Computer Revolution in Philosophy provides further information about the Popeye system.
Commentary on Z. Pylyshyn:
Computational models and empirical constraints
Behavioral and Brain Sciences, Vol 1, Issue 1, March 1978, pp 91-99.
This commentary: pp 115-116.
Installed here: 28 Jul 2014
Abstract:
If we are to understand the nature of science, we must see it as an
activity and achievement of the human mind alongside others, such as the
achievements of children in learning to talk and to cope with people and other
objects in their environment, and the achievements of non-scientists living in a
rich and complex world which constantly poses problems to be solved. Looking at
scientific knowledge as one form of human knowledge, scientific understanding as
one form of human understanding, scientific investigation as one form of human
problem-solving activity, we can begin to see more clearly what science is, and
also what kind of mechanism the human mind is.

By undermining the slogan that science is the search for laws, and subsidiary
slogans such as that quantification is essential, that scientific theories must
be empirically refutable, and that the methods of philosophers cannot serve the
aims of scientists, I shall try, in what follows, to liberate some scientists
from the dogmas indoctrinated in universities and colleges. I shall also try to
show philosophers how they can contribute to the scientific study of man,
thereby escaping from the barrenness and triviality complained of so often by
non-philosophers and philosophy students.

A side-effect, which will be reported elsewhere, is to undermine some old
philosophical distinctions and pour cold water on battles which rage around them
-- like the distinction between subjectivity and objectivity, and the battles
between empiricists and rationalists.

Key idea: A major aim of science is not to discover and explain laws, but to
discover what is possible, and how it is possible.

This view of science, developed further in Sloman (1978), helps to explain the
contributions of Theoretical Linguistics, Chemistry, Artificial Intelligence,
and Computer Science, insofar as they all enrich our understanding of what is
possible and how it is possible.
Title: Physicalism and the Bogey of Determinism
Abstract:
Some of the ideas that were in the paper and in my responses to
commentators were also presented in
The
Computer Revolution in Philosophy, including a version of
this diagram (originally pages
344-345, in the discussion section below),
discussed in more detail in
Chapter 6 of the book, and later elaborated as an architectural
theory assuming concurrent reactive, deliberative and metamanagement
processes, e.g. as explained in this 1999 paper
Architecture-Based Conceptions of Mind, and later papers.
A slightly revised version (with clearer diagrams) was published as
Chapter 8 of the 1978 book:
The Computer Revolution in Philosophy
Date:
Published/Presented 1974, installed here 3 Jan 2010.
Abstract:
(Extracts from paper)
In order to close a loophole in Shorter's argument I
describe a possible situation in which both physical continuity
and bodily identity are clearly separated from personal identity.
Moreover, the example does not, as Shorter's apparently does, assume the
falsity of current physical theory.
It will be a long time before engineers make a machine which will
not merely copy a tape recording of a symphony, but also correct
poor intonation, wrong notes, or unmusical phrasing. An entirely new
dimension of understanding of what is being copied is required for
this. Similarly, it may take a further thousand years, or more,
before the transcriptor is modified so that when a human body is
copied the cancerous or other diseased cells are left out and
replaced with normal healthy cells. If, by then, the survival rate
for bodies made by this modified machine were much greater than for
bodies from which tumours had been removed surgically, or treated
with drugs, then I should have little hesitation, after being
diagnosed as having incurable cancer, in agreeing to have my old
body replaced by a new healthy one, and the old one destroyed before
recovering from the anaesthetic. This would be no suicide, nor
murder.
Title: Interactions between Philosophy and Artificial Intelligence:
The role of intuition and non-logical reasoning in intelligence,
Originally published in:
This was later revised as
Chapter 7
of
The Computer Revolution in Philosophy
(1978)
Abstract:
There were several sequels to this paper including
the Afterthoughts paper
written in 1975, some further developments regarding ontologies and
criteria for adequacy
in a 1984-5 paper and several other papers
mentioned in
the section on
diagrammatic/visual reasoning here.
Response by Pat Hayes
Reprinted in:
Readings in knowledge representation,
Title: Tarski, Frege and the Liar Paradox
Abstract:
The paper suggests that this view of paradoxes, including the paradox of
the Liar, is superior to Tarski's analysis which required postulating a
hierarchy of meta-languages. We do not need such a hierarchy to explain
what is going on or to deal with the fact that such paradoxes exist.
Moreover, the hierarchy would not necessarily be useful for an
intelligent agent, compared with languages that contain their own
meta-language, like the one I am now using.
Abstract:
This is a sequel to the 1969 paper on
"How to derive 'Better' from 'Is'"
also online at this web site. It presupposes the analysis of 'better' in
the earlier paper, and argues that statements using the word 'ought' say
something about which of a collection of alternatives is better than the
others, in contrast with statements using 'must' or referring to
'obligations', or what is 'obligatory'. The underlying commonality
between superficially different statements like 'You should take an
umbrella with you' and 'The sun should come out soon' is explained,
along with some other philosophical puzzles, e.g. concerning why
'ought' does not imply 'can', contrary to what some philosophers have
claimed.
Curiously, the 'Ought' and 'Better' paper is mentioned at
http://semantics-online.org/blog/2005/08/
in the section on David Lodge's novel "Thinks...", which includes a
reference to
this paper
'What to Do If You Want to Go to Harlem:
Anankastic Conditionals and Related Matters' by
Kai von Fintel and Sabine Iatridou (MIT), which includes a discussion of
the paper on 'Ought' and 'Better'.
Abstract:
(extracts from paper)
In his book Speech Acts (Cambridge University Press, 1969),
Searle discusses what he calls 'the speech act fallacy' (pp.
136 ff.), namely the fallacy of inferring from the fact that
(1) in simple indicative sentences, the word W is used to perform
some speech-act A (e.g. 'good' is used to commend, 'true' is used to
endorse or concede, etc.)
the conclusion that
(2) a complete philosophical explication of the concept W is given
when we say 'W is used to perform A'.
The paper argues
that even if conclusion (2) is false, Searle's argument against it is
inadequate because he does not consider all the possible ways in which a
speech-act might account for non-indicative occurrences. In
particular, there are other things we can do with speech acts besides
performing them and predicating their performance, e.g. besides
promising and expressing the proposition that one is promising. E.g. you
can indicate that you are considering performing act F but are not yet
prepared to perform it, as in 'I don't promise to come'.
So the analysis proposed can be summarised thus:
If F and G are speech acts, and p and q propositional contents or other
suitable objects, then: (see the schemata listed under the 1969 entry
for this paper, below).
Abstract:
There are well-known objections to both approaches, and the aim of this
paper is to suggest an alternative which has apparently never previously
been considered, for the very good reason that at first sight it looks
so unpromising, namely the alternative of defining the problematic words
as logical constants.
This should not be confused with the programme of treating them as
undefined symbols in a formal system, which is not new. In this essay an
attempt will be made to define a logical constant "Better"
which has surprisingly many of the features of the ordinary word
"better" in a large number of contexts. It can then be shown
that other important uses of "better" may be thought of as
derived from this use of the word as a logical constant.
The new symbol is a logical constant in that its definition (i.e., the
specification of formation rules and truth-conditions for statements
using it) makes use only of such concepts as "entailment,"
"satisfying a condition," "relation," "set of
properties," which would generally be regarded as purely logical
concepts. In particular, the definition makes no reference to wants,
desires, purposes, interests, prescriptions, choice, non-descriptive
uses
of language, and the other paraphernalia of non-naturalistic (and some
naturalistic) analyses of evaluative words.
(However, some of those 'paraphernalia' can be included in
arguments/subjects to which the complex relational predicate
'better' is applied.)
NOTE Added 7 Nov 2013
I: An adequate theory of meaning and truth must account for the following
facts, whose explanation is the topic, though not the aim, of the paper.
(i) Different signs (e.g., in different languages) may express the same
proposition.
(ii) The syntactic and semantic rules in virtue of which sentences are
able to express contingent propositions also permit the expression of
necessary propositions and generate necessary
relations between contingent propositions.
E.g. although 'It snows in Sydney or it does not snow in Sydney' can be
verified empirically (since showing one disjunct to be true would be an
empirical
verification, just as a proposition of the form 'p and not-p'
can be falsified empirically), the empirical enquiry can be
short-circuited by showing what the result must be.
(iii) At least some such restrictions on truth-values, or combinations
of truth-values (e.g., when two or more contingent propositions are
logically equivalent, or inconsistent, or
when one follows from others),
result from purely formal, or logical, or topic-neutral features of the
construction of the relevant propositions, features which have nothing
to do with precisely which concepts occur, or which objects are referred
to. Hence we call some propositions logically true, or logically
false,
and say some inferences are
valid in virtue of their logical form, which
prevents simultaneous truth of premisses and falsity of conclusion.
(iv) The truth-value-restricting logical forms are systematically
inter-related so that the whole infinite class of such forms can be
recursively generated from a relatively small subset, as illustrated in
axiomatisations of logic.
Subsequent discussion will show these statements to be over-simple.
Nevertheless, they will serve to draw attention to the range of facts
whose need of explanation is the starting point of this paper. They have
deliberately been formulated to allow that there may be cases of
non-logical necessity.
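As a purely illustrative gloss on point (iii) (my notation, not the paper's):
the schema on the left below is true whichever contingent proposition replaces
p, and the inference schema on the right cannot lead from true premisses to a
false conclusion, whatever p and q are; in both cases the restriction on
truth-values comes from the topic-neutral form alone, not from the particular
concepts involved.
    \[ p \lor \lnot p \qquad\qquad \frac{p \to q \qquad p}{q} \]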
Available in three formats:
Date Installed: 23 Dec 2007; Updated 5 Apr 2016
A summary of the meeting
by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley
with abstracts of papers
presented, including this one, was published in
The Journal of Symbolic Logic,
Vol. 28, No. 3. (Sep., 1963), pp. 262-272.
accessible
online here.
This paper extends Frege's concept of a function to "rogators",
which are like functions in that they take arguments and produce
results, but are unlike functions in that their results can depend
on the state of the world, in addition to which arguments they are
applied to.
It was scanned in and digitised in December 2007. The html version was
re-formatted on 5 Apr 2016 and a corresponding "lightweight" PDF version derived
from it. The original 15MB
scanned PDF file is now sloman-rogators-orig.pdf
The key ideas were originally presented in the author's
Oxford DPhil Thesis (Aaron Sloman, 1962): Knowing and Understanding
NOTE
Date Installed:
9 Jan 2007 (Published 1965)
Date Installed: 6 Jan 2010; Published 1964
Where published:
The original bulky scanned PDF chapters and also the new PDF and TXT versions
are available here, along with more detailed
information about the contents, the background to the thesis, and some
references to later developments. The contents list of files is in here.
The scanned PDF (image only) files are also at the Oxford University Bodleian
library web site, via this 'permanent ID':
Abstract:
Some of the ideas developed here were expanded in
(Via LaTeX: derived from a scanned version)
Filename: sloman-tinlap-1975.pdf
(original formatting: also
here)
Title: Afterthoughts on Analogical Representations (1975)
Originally Published in
Theoretical Issues in Natural Language Processing (TINLAP-1),
Eds. R. Schank & B. Nash-Webber,
pp. 431--439,
MIT,
Author: Aaron Sloman
Now available online
http://acl.ldc.upenn.edu/T/T75/
Reprinted in
Readings in knowledge representation,
Eds. R.J. Brachman & H.J. Levesque,
Morgan Kaufmann,
1985.
Date installed: 28 Mar 2005
In 1971 I wrote
a paper
attempting to relate some old philosophical
issues about representation and reasoning to problems in Artificial
Intelligence. A major theme of the paper was the importance of
distinguishing "analogical" from "Fregean" representations. I still
think the distinction is important, though perhaps not as important for
current problems in A.I. as I used to think. In this paper I'll try to
explain why.
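A minimal sketch (my illustration, not from the papers) of the distinction in
programming terms: the same spatial information can be held in an analogical
form, where the structure of the representation mirrors the structure of the
scene, or in a Fregean form, where relation symbols are applied to names of
objects. The grid, the relation names and the helper function below are
invented for the example.

    # Analogical: a 2-D grid whose layout mirrors the layout of the scene.
    grid = [
        ['.', 'A', '.'],
        ['.', '.', '.'],
        ['B', '.', 'C'],
    ]

    # Fregean: atomic facts formed by applying relation symbols to names.
    facts = {
        ('above', 'A', 'C'),
        ('left_of', 'B', 'C'),
    }

    def left_of(grid, x, y):
        # In the analogical form, relations like 'left of' are read off the
        # structure of the representation rather than stored explicitly.
        positions = {cell: (r, c)
                     for r, row in enumerate(grid)
                     for c, cell in enumerate(row) if cell != '.'}
        (rx, cx), (ry, cy) = positions[x], positions[y]
        return rx == ry and cx < cy

    print(left_of(grid, 'B', 'C'))          # True: derived from the layout
    print(('left_of', 'B', 'C') in facts)   # True: stored as an explicit fact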
1974
Filename: sloman-bogey.pdf
(incomplete PDF from OCR)
Filename: sloman-bogey-print.pdf
(A more complete, PDF version, derived from the html version.)
Author: Aaron Sloman
Date: Published 1974, installed here 29 Dec 2005
Presented at an interdisciplinary conference on Philosophy of
Psychology at the University of Kent in 1971. Published in
the proceedings, as
A. Sloman,
'Physicalism and the Bogey of Determinism'
This paper rehearses some relatively old arguments about how
any coherent notion of free will is not only compatible with
but depends on determinism.
(along with Reply by G. Mandler and W. Kessen, and additional comments
by Alan R. White, Philippa Foot and others, and
replies to criticisms)
in
Philosophy of Psychology,
Ed S.C.Brown,
London: Macmillan, 1974, pages 293--304.
(Published by Barnes & Noble in USA.)
Commentary and discussion followed on
pages 305--348.
However, the mind-brain identity theory is attacked on the grounds
that what makes a physical event an intended action A is that the
agent interprets the physical phenomena as doing A. The paper
should have referred to the monograph
Intention (1957) by Elizabeth Anscombe
(summarised
here by Jeff Speaks),
which discusses in detail the fact that the same physical event
can have multiple (true) descriptions, using different ontologies.
My point is partly analogous to
Dennett's
appeal to the 'intentional stance', though that
involves an external observer attributing rationality along
with beliefs and desires to the agent. I am adopting the
design stance not the intentional stance, for I do not assume
rationality in agents with semantic competence (e.g. insects), and
I attempt to explain
how an agent has to be designed in order to perform intentional
actions; the design must allow the agent to interpret physical
events (including events in its brain) in a way that is not just
perceiving their physical properties. That presupposes semantic
competence which is to be explained in terms of how the machine
or organism works, i.e. using the design stance, not
by simply postulating rationality and assuming beliefs and desires
on the basis of external evidence.
The html paper preserves original page divisions.
(I may later add further notes
and comments to this HTML version.)
Note added 3 May 2006
An online review of the whole book, by Marius Schneider, O. F. M.,
The Catholic University of America, Washington, D. C., apparently
written in 1975, is available here.
Title: On learning about numbers: Some problems and speculations
In
Proceedings AISB Conference 1974, University of Sussex,
pp. 173--185,
Author: Aaron Sloman
The aim of this paper is methodological and tutorial.
It uses elementary number competence to show how reflection on the
fine structure of familiar human abilities generates requirements
exposing the inadequacy of initially plausible explanations.
We have to learn how to organise our common sense knowledge and
make it explicit, and we don't need experimental data as much as
we need to extend our model-building know-how.
1973
1972
1971
Filename: sloman-new-bodies.html (HTML)
Title: New Bodies for Sick Persons: Personal Identity Without Physical Continuity
Author: Aaron Sloman
First published in
Analysis, vol 32, No 2, December 1971, pages 52--55
Date Installed:
9 Jan 2007 (Originally Published 1971)
In his recent Aristotelian Society paper ('Personal identity, personal
relationships, and criteria' in Proceedings of the Aristotelian Society,
1970-71, pp. 165--186), J. M. Shorter argues that the connexion
1970-71, pp. 165--186), J. M. Shorter argues that the connexion
between physical identity and personal identity is much less tight than
some philosophers have supposed, and, in order to drive a wedge between
the two sorts of identity, he discusses logically possible situations
in which there would be strong moral and practical reasons for treating
physically discontinuous individuals as the same person. I am sure his
main points are correct: the concept of a person serves a certain sort
of purpose and in changed circumstances it might be able to serve
that purpose only if very different, or partially different, criteria
for identity were employed. Moreover, in really bizarre, but "logically"
possible, situations there may be no way of altering the
identity-criteria, nor any other feature of the concept of
person, so as to enable the concept to have the same moral, legal,
political and other functions as before: the concept may simply
disintegrate, so that the question 'Is X really the same person as Y or
not?' has no answer at all. For instance, this might be the case if
bodily discontinuities and reduplications occurred very frequently.
To suppose that the "essence" of the
concept of a person, or some set of
general logical principles, ensures that questions of identity always
have answers in all possible circumstances, is quite unjustified.
(with full list of references -- added June 2006)
Author:
Aaron Sloman
Proceedings IJCAI 1971
Date added: 12 May 2004
(Proceedings also available here)
Reprinted in Artificial Intelligence, vol 2, 1971,
http://dx.doi.org/10.1016/0004-3702(71)90011-7
and then in J.M. Nicholas, ed.,
Images, Perception, and Knowledge,
Dordrecht-Holland: Reidel, 1977.
This paper echoes, from a philosophical standpoint, the claim of
McCarthy and Hayes that Philosophy and Artificial Intelligence have
important relations. Philosophical problems about the use of 'intuition'
in reasoning are related, via a concept of analogical representation,
to problems in the simulation of perception, problem-solving and the
generation of useful sets of possibilities in considering how to act.
The requirements for intelligent decision-making proposed by McCarthy
and Hayes in
Some Philosophical
Problems from the Standpoint of Artificial Intelligence (1969)
are criticised as too narrow, because they allowed for the use of only
one formalism, namely logic. Instead general requirements are suggested
showing the usefulness of other forms of representation.
A much cited paper by Hayes discussing issues raised in the 1971 paper and elsewhere
was presented at the AISB Conference at Sussex University in 1974, and later
reprinted in the collection mentioned below. In view of its general significance and
unavailability online I have included the 1974 Conference version here, with the
permission of the author.
File: hayes-aisb-1974-prob-rep.pdf (PDF)
Related work includes these presentations:
Patrick J. Hayes "Some Problems and Non-Problems in Representation Theory"
in Proceedings AISB Summer Conference, 1974
University of Sussex
Eds. R.J. Brachman and H.J. Levesque, Morgan Kaufmann, Los Altos, California, 1985
Originally in Philosophy, Vol XLVI, pages 133-147, 1971
Author: Aaron Sloman
Date installed: 16 Oct 2003
The paper attempts to resolve a variety of logical and semantic
paradoxes on the
basis of Frege's ideas about compositional semantics: i.e. complex
expressions have a reference that depends on the references of the
component parts and the mode of composition, which determines a function
from the lowest level components to the value for the whole expression.
The paper attempts to show that it is inevitable within this framework
that some syntactically well formed expressions will fail to have any
reference, even though they may have a well defined sense.
This can be compared with the ways in which syntactically well-formed
programs in programming languages may fail to terminate or in some other
way fail semantically and produce run-time errors.
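A minimal sketch (my own analogy, not from the paper) of that comparison: the
toy evaluator below accepts a syntactically well-formed expression that
nevertheless fails to denote a value when evaluated, just as a well-formed
sentence may have a sense but no reference. The expression format and the
evaluator are invented for the illustration.

    def evaluate(expr):
        """Evaluate a tiny expression tree: ('num', n) or ('div', e1, e2)."""
        tag = expr[0]
        if tag == 'num':
            return expr[1]
        if tag == 'div':
            return evaluate(expr[1]) / evaluate(expr[2])  # may raise ZeroDivisionError
        raise ValueError('unknown expression: %r' % (expr,))

    # Passes every syntactic check, yet has no value when evaluated.
    well_formed = ('div', ('num', 1), ('num', 0))
    try:
        evaluate(well_formed)
    except ZeroDivisionError:
        print('well formed, but fails to denote a value at run time')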
1970
Title: 'Ought' and 'Better'
Author: Aaron Sloman
Date Installed: 19 Sep 2005
Originally published as
Aaron Sloman, 'Ought and Better'
Mind, vol LXXIX, No 315, July 1970, pp
385--394.
1969
Title: Transformations of Illocutionary Acts (1969)
Author: Aaron Sloman
First published in
Analysis Vol 30 No 2, December 1969 pages 56-59
Date Installed:
10 Jan 2007
This paper discusses varieties of negation and other logical
operators when applied to speech acts, in response to an argument by
John Searle. Searle claims it is a fallacy (the 'speech act fallacy')
to infer from the fact that
(1) in simple indicative sentences, the word W is used to perform
some speech-act A (e.g. 'good' is used to commend, 'true' is used to
endorse or concede, etc.)
the conclusion that
(2) a complete philosophical explication of the concept W is given
when we say 'W is used to perform A'.
He argues that as far as the words 'good', 'true', 'know' and 'probably'
are concerned, the conclusion is false because the speech-act analysis
fails to explain how the words can occur with the same meaning in
various grammatically different contexts, such as interrogatives ('Is it
good?'), conditionals ('If it is good it will last long'), imperatives
('Make it good'), negations, disjunctions, etc.
o Utterances of the structure 'If F(p) then G(q)' express provisional
commitment to performing G on q, pending the performance of F on p.
o Utterances of the form 'F(p) or G(q)' would express a commitment to
performing (eventually) one or other or both of the two acts, though
neither is performed as yet.
o The question mark, in utterances of the form 'F(p)?', instead of
expressing some new and completely unrelated kind of speech act, would
merely express indecision concerning whether to perform F on p together
with an attempt to get advice or help in resolving the indecision.
o The imperative form 'Bring it about that . .' followed by a suitable
grammatical transformation of F(p) would express the act of trying to
get (not cause) the hearer to bring about that particular state of
affairs in which the speaker would perform the act F on p (which is not
the same as simply bringing it about that the speaker performs the act).
It is not claimed that 'not', 'if', etc., always are actually used in
accordance with the above analyses, merely that this is a possible type
of analysis which (a) allows a word which in simple indicative sentences
expresses a speech act to contribute in a uniform way to the meanings of
other types of sentences and (b) allows signs like 'not', 'if', the
question construction, and the imperative construction, to have uniform
effects on signs for speech acts. This type of analysis differs from the
two considered and rejected by Searle. Further, if one puts either
assertion or commendation or endorsement in place of the speech acts F
and G in the above schemata, then the results seem to correspond
moderately well with some (though not all) actual uses of the words and
constructions in question. With other speech acts, the result does not
seem to correspond to anything in ordinary usage: for instance, there is
nothing in ordinary English which corresponds to applying the imperative
construction to the speech act of questioning, or even commanding, even
though if this were done in accordance with the above schematic rules
the result would in theory be intelligible.
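A minimal sketch (my illustration, not the paper's notation) of how such
schemata treat 'if', 'or' and the question construction as operators that
contribute uniformly to the meaning of whatever speech act they are applied to.
The Act class and the English paraphrases are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Act:
        """A speech act: a force F applied to a content p."""
        force: str      # e.g. 'assert', 'commend', 'promise'
        content: str    # e.g. 'it is good'

    def conditional(f_p, g_q):
        # 'If F(p) then G(q)': provisional commitment to G on q, pending F on p.
        return ('provisional commitment to %s(%r), pending %s(%r)'
                % (g_q.force, g_q.content, f_p.force, f_p.content))

    def disjunction(f_p, g_q):
        # 'F(p) or G(q)': commitment to eventually performing one or other, or both.
        return ('commitment eventually to %s(%r) or %s(%r), or both'
                % (f_p.force, f_p.content, g_q.force, g_q.content))

    def question(f_p):
        # 'F(p)?': indecision about performing F on p, plus a request for help.
        return 'undecided whether to %s(%r); advice sought' % (f_p.force, f_p.content)

    # Whatever speech acts are supplied, each operator contributes in the same way.
    print(conditional(Act('commend', 'it'), Act('promise', 'to buy it')))
    print(question(Act('promise', 'to come')))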
Title: How to derive "better" from "is",
Author: Aaron Sloman
Originally Published as:
A. Sloman, 'How to derive "better" from "is"',
American Philosophical Quarterly,
Vol 6, Number 1, Jan 1969,
pp 43--52.
Date Installed here: 23 Oct 2002
ONE type of naturalistic analysis of words like "good,"
"ought," and "better" defines them in terms of
criteria for applicability which vary from one context to another (as in
"good men," "good typewriter," "good method of
proof"), so that their meanings vary with context. Dissatisfaction
with this "crude" naturalism leads some philosophers to
suggest that the words have a context-independent non-descriptive
meaning defined in terms of such things as expressing emotions,
commanding, persuading, or guiding actions.
I was under the impression that no philosophers had ever paid any attention to this
paper. I've just discovered a counter example:
Paul Bloomfield 'Prescriptions Are Assertions: An Essay On Moral Syntax'
American Philosophical Quarterly Vol 35, No 1, January 1998
1968
(132 KBytes, via latex from OCR -- PDF)
Filename:
sloman-ExplainNecessity.pdf
(11.4 MB Scanned PDF from original)
Title: Explaining Logical Necessity
Author: Aaron Sloman
Date Installed: 4 Dec 2007 (Published originally in 1968);
Updated 19 Dec 2009
in
Proceedings of the Aristotelian Society,
1968/9,
Volume 69,
pp 33--50.
Abstract: (From the introductory section)
Summary:
I: Some facts about logical necessity stated.
II: Not all necessity is logical.
III: The need for an explanation.
IV: Formalists attempt unsuccessfully to reduce logic to syntax.
V: The no-sense theory of Wittgenstein's Tractatus merely reformulates
the problem.
VI: Crude conventionalism is circular.
VII: Extreme conventionalism is more sophisticated.
VIII: It yields some important insights.
IX: But it ignores the variety of kinds of proof.
X: Proofs show why things must be so, but different proofs show
different things. Hence there can be no
general explanation of necessity.
1967
1966
1965
Author: Aaron Sloman
Re-formatted 5 Apr 2016
Added 5 Apr 2016
This paper was originally presented at a meeting of the Association for
Symbolic Logic held in St. Anne's College, Oxford, England from 15-19
July 1963 as a NATO Advanced Study Institute with a Symposium on
Recursive Functions sponsored by the Division of Logic, Methodology and
Philosophy of Science of the International Union of the History and
Philosophy of Science.
The full paper was published in the conference proceedings:
Aaron Sloman, 'Functions and Rogators', in
Formal Systems and Recursive Functions:
Proceedings of the Eighth Logic Colloquium, Oxford, July 1963,
Eds J N Crossley and M A E Dummett,
North-Holland Publishing Co (1965), pp. 156--175
(Now online).
This paper was described by David
Wiggins as 'neglected but valuable' in his
'Sameness and Substance Renewed'
(2001).
(Published also in E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley (1963):
see the 1963 entry below.)
Abstract:
Frege, and others, have made extensive use of the notion of a
function, for example in analysing the role of quantification, the
notion of a function being defined, usually, in the manner familiar to
mathematicians, and illustrated with mathematical examples. On this view
functions satisfy extensional criteria for identity. It is not usually
noticed that in non-mathematical contexts the things which are thought
of as analogous to functions are, in certain respects, unlike the
functions of mathematics. These differences provide a reason for saying
that there are entities, analogous to functions, but which do not
satisfy extensional criteria for identity. For example, if we take the
supposed function 'x is red' and consider its value (truth or falsity)
for some such argument as the lamp post nearest my front door, then we
see that what the value is depends not only on which object is taken as
argument, and the 'function', but also on contingent facts about the
object, in particular, what colour it happens to have. Even if the lamp
post is red (and the value is truth), the same lamp post might have been
green, if it had been painted differently. So it looks as if we need
something like a function, but not extensional, of which we can say that
it might have had a value different from that which it does have. We
cannot say this of a function considered simply as a set of ordered
pairs, for if the same argument had had a different value it would not
have been the same function. These non-extensional entities are
described as 'rogators', and the paper is concerned to explain what the
function-rogator distinction is, how it differs from certain other
distinctions, and to illustrate its importance in logic, from the
philosophical point of view.
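A minimal sketch (my illustration, not from the paper) of the function/rogator
contrast in programming terms: the first procedure below is fixed by its
argument alone, while the second also consults a world state, so the same
argument could have been associated with a different value had the world been
different. The World class and the colour data are invented for the example.

    class World:
        """Toy world state: a mapping from objects to their colours."""
        def __init__(self, colours):
            self._colours = dict(colours)

        def colour_of(self, obj):
            return self._colours[obj]

    def is_even(n):
        # A function in the mathematical sense: the value is settled by the
        # argument alone.
        return n % 2 == 0

    def is_red(obj, world):
        # Rogator-like: the value also depends on contingent facts about the
        # world, not just on which object is taken as argument.
        return world.colour_of(obj) == 'red'

    actual = World({'lamp post': 'red'})
    repainted = World({'lamp post': 'green'})
    print(is_even(4))                          # True, however the world happens to be
    print(is_red('lamp post', actual))         # True
    print(is_red('lamp post', repainted))      # False: same argument, different value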
Filename:
sloman-necessary.html (HTML)
Title: 'NECESSARY', 'A PRIORI' AND 'ANALYTIC'
Author:
Aaron Sloman
First published in Analysis vol 26, No 1, pp 12-16
1965.
Abstract (actually the opening paragraph of the paper):
It is frequently taken for granted, both by people discussing logical
distinctions and by people using them, that the terms 'necessary',
'a priori', and 'analytic' are equivalent, that they mark not three
distinctions, but one. Occasionally an attempt is made to establish
that two or more of these terms are equivalent. However, it seems to me far
from obvious that they are or can be shown to be equivalent, that
they cannot be given definitions which enable them to mark important and
different distinctions. Whether these different distinctions happen to
coincide or not is, as I shall show, a further question, requiring
detailed investigation. In this paper, an attempt will be made to show
in a brief and schematic way that there is an open problem here and
that it is extremely misleading to talk as if there were only one
distinction.
1964
Filename: rules-premisses.pdf (PDF)
Title: Rules of inference, or suppressed premisses? (1964)
Author:
Aaron Sloman
Date Installed:
31 Dec 2006
First published in Mind Volume LXXIII, Number 289 Pp. 84-96,
1964.
Abstract (actually the opening paragraph of the paper):
In ordinary discourse we often use or accept as valid, arguments of the
form "P, so Q", or "P, therefore Q", or "Q, because P" where the
validity of the inference from P to Q is not merely logical: the
statement of the form "If P then Q" is not a logical truth, even if it
is true. Inductive inferences and inferences made in the course of moral
arguments provide illustrations of this. Philosophers, concerned about
the justification for such reasoning, have recently debated whether the
validity of these inferences depends on special rules of inference which
are not merely logical rules, or on suppressed premisses which, when
added to the explicit premisses, yield an argument in which the
inference is logically, that is deductively, valid. In a contribution to
MIND ("Rules of Inference in Moral Reasoning", July 1961), Nelson Pike
describes such a debate concerning the nature of moral reasoning. Hare
claims that certain moral arguments involve suppressed deductive
premisses, whereas Toulmin analyses them in terms of special rules
of
inference, peculiar to the discourse of morality. Pike concludes that
the main points so far made on either side of the dispute are "quite
ineffective" (p. 391), and suggests that the problem itself is to blame,
since the reasoning of the "ordinary moralist" is too rough and ready
for fine logical distinctions to apply (pp. 398-399). In this paper an
attempt will be made to take his discussion still further and explain in
more detail why arguments in favour of either rules of inference or
suppressed premisses must be ineffective. It appears that the root of
the trouble has nothing to do with moral reasoning specifically, but
arises out of a general temptation to apply to meaningful discourse a
distinction which makes sense only in connection with purely formal
calculi.
Title: Colour Incompatibilities and Analyticity
Author: Aaron Sloman
Analysis, Vol. 24, Supplement 2. (Jan., 1964), pp. 104-119.
Abstract: (Opening paragraph)
The debate about the possibility of synthetic necessary truths is an
old and familiar one. The question may be discussed either in a
general way, or with reference to specific examples. This essay is
concerned with the specific controversy concerning the
incompatibility of colours, or colour concepts, or colour words. The
essay is mainly negative: I shall neither assume, nor try to prove,
that colours are incompatible, or that their incompatibility is
either analytic or synthetic, but only that certain more or less
familiar arguments intended to show that incompatibility relations
between colours are analytic fail to do so. It will follow from
this that attempts to generalise these arguments to show that no
necessary truths can be synthetic will be unsuccessful, unless they
bring in quite new sorts of considerations. The essay does, however,
have a positive purpose, namely the partial clarification of some of
the concepts employed by philosophers who discuss this sort of
question, concepts such as 'analytic' and 'true in virtue of
linguistic rules'. Such clarification is desirable since it is often
not at all clear what such philosophers think that they have
established, since the usage of these terms by philosophers is often
so loose and divergent that disagreements may be based on partial
misunderstanding. The trouble has a three-fold source: the meaning
of 'analytic' is unclear, the meaning of 'necessary' is unclear, and
it is not always clear what these terms are supposed to be applied
to. (E.g. are they sentences, statements, propositions, truths,
knowledge, ways of knowing, or what?) Not all of these confusions
can be eliminated here, but an attempt will be made to clear some of
them away by giving a definition of 'analytic' which avoids some of
the confused and confusing features of Kant's exposition without
altering the spirit of his definition.
1963
Author: Aaron Sloman
Date Installed: 23 Dec 2007
A summary of the 1963 Logic Colloquium,
by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley,
with abstracts of papers
presented, including
my 'Functions and Rogators', was published in
The Journal of Symbolic Logic,
Vol. 28, No. 3 (Sep., 1963), pp. 262-272,
accessible
online here.
1962
Title: Oxford University DPhil Thesis (1962): Knowing and Understanding
Relations between meaning and truth, meaning and necessary truth,
meaning and synthetic necessary truth
Author: Aaron Sloman
In 2016, the thesis chapters were combined to form a freely available
machine readable book, in PDF and TXT/HTML formats.
o Full thesis transcribed (PDF)
o Full thesis transcribed (html)
(Added 6 Jan 2018)
(Plain text, i.e. no italics/underlining, but
with figures added, on pages 287, 288, 307)
This thesis was scanned in and made generally available by Oxford University
Research Archive (at the Bodleian library) in the form of PDF versions of the
chapters, in 2007. Those PDF files had only the scanned image content and were
viewable and printable, but not searchable. In 2014 a few of the files were
converted to text. In 2016, with the help of Luc Beaudoin, an Indian company
(Hitech) was engaged to retype the remaining chapters. All the chapters are now
available in searchable .txt and .pdf forms. Later a free book version
containing all the chapters will be made available here. (Email a.sloman if you
would like an early copy.)
Date Installed: 2 May 2007 (Last updated 6 Jan 2018)
http://ora.ox.ac.uk/objects/uuid:cda7c325-e49f-485a-aa1d-7ea8ae692877
The aim of the thesis is to show that there are some
synthetic necessary truths, or that synthetic apriori knowledge is
possible. This is really a pretext for an investigation into the
general connection between meaning and truth, or between
understanding and knowing, which, as pointed out in the preface, is
really the first stage in a more general enquiry concerning meaning.
(Not all kinds of meaning are concerned with truth.) After the
preliminaries (chapter one), in which the problem is stated and some
methodological remarks made, the investigation proceeds in two
stages. First there is a detailed inquiry into the manner in which
the meanings or functions of words occurring in a statement help to
determine the conditions in which that statement would be true (or
false). This prepares the way for the second stage, which is an
inquiry concerning the connection between meaning and necessary
truth (between understanding and knowing apriori). The first stage
occupies Part Two of the thesis, the second stage Part Three. In all
this, only a restricted class of statements is discussed, namely
those which contain nothing but logical words and descriptive words,
such as "Not all round tables are scarlet" and "Every three-sided
figure is three-angled". (The reasons for not discussing proper
names and other singular definite referring expressions are given in
Appendix I.)
Some (Possibly) New Considerations
Regarding Impossible Objects
Their significance for mathematical cognition,
and current serious limitations of AI vision systems.
Maintained by
Aaron Sloman.
Email
A.Sloman@cs.bham.ac.uk