THE UNIVERSITY OF BIRMINGHAM
School of Computer Science
THE COGNITION AND AFFECT PROJECT

PAPERS ADDED IN THE YEAR 2007 (APPROXIMATELY)

PAPERS 2007 CONTENTS LIST
RETURN TO MAIN COGAFF INDEX FILE

NOTE


This file is http://www.cs.bham.ac.uk/research/projects/cogaff/07.html
Maintained by Aaron Sloman
This file contains an index of files in the Cognition and Affect Project's FTP/Web directory that were produced or published in the year 2007. Some of the papers published in this period were produced earlier and are included in one of the lists for an earlier period: http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents

A list of PhD and MPhil theses was added in June 2003

This file was last updated:
30 Dec 2009; 13 Nov 2010; 7 Jul 2012; 26 Nov 2012; 11 Jul 2017


PAPERS (AND TALKS) IN THE COGNITION AND AFFECT DIRECTORY
Produced or published in 2007 (Approximately)
(Latest first)

Most of the papers listed here are in PostScript and PDF format. More recent papers are in PDF only. A few are in HTML only.
For information on free browsers for these formats see http://www.cs.bham.ac.uk/~axs/browsers.html


The following Contents list (in reverse chronological order) contains links to locations in this file giving further details, including abstracts, and links to the papers themselves.

JUMP TO DETAILED LIST (After Contents)

CONTENTS -- FILES 2007 (Latest First)

What follows is a list of links to more detailed information about each paper. From there you can select the actual papers, in various formats: PDF, PostScript, and in some cases HTML.

Note: Several of the items listed here were actually published several decades ago, but have only now been digitised and made available online.


DETAILS OF FILES AVAILABLE

BACK TO CONTENTS LIST


CoSy Papers and Presentations
Many CogAff papers were added to the Birmingham CoSy (EU Robotics Project 2004-2008) Web site


Filename: jablonka-sloman-chappell.html (HTML)
Filename: jablonka-sloman-chappell.pdf (PDF)
Title: Computational Cognitive Epigenetics
         (BBS Commentary on Jablonka and Lamb: Evolution in Four Dimensions.)
Authors: Aaron Sloman and Jackie Chappell
Abstract:

J&L refer only implicitly to aspects of cognitive competence that preceded both the evolution of human language and language learning in children. These are important for evolution and development but need to be understood using the 'design-stance', which the book adopts only for molecular and genetic processes, not for behavioural and symbolic processes. Design-based analyses reveal more routes from genome to behaviour than J&L seem to have considered. This both points to gaps in our understanding of evolution and epigenetic processes and may suggest ways of filling those gaps.
Previously located at http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703

Filename: meta-requirements.html (HTML)
Filename: meta-requirements.pdf (PDF)
Title: A First Draft Analysis of some Meta-Requirements for Cognitive Systems in Robots
(An exercise in logical topography analysis.)
AUTHORS: Aaron Sloman and David Vernon
DATE INSTALLED: 20 Jan 2007
Bibliographic information: Unpublished discussion paper.
ABSTRACT:

This is a contribution to discussions regarding the construction of a research roadmap for future cognitive systems, including intelligent robots, in the context of the euCognition network, and the UKCRC Grand Challenge 5: Architecture of Brain and Mind.

We argue that in the context of trying either (a) to produce working systems that elucidate scientific questions about intelligent systems, or (b) to advance long-term engineering objectives through advancing science, the task of coming up with a set of requirements that is sufficiently detailed to provide a basis for developing milestones and evaluation criteria is itself a hard research problem. One aspect of the problem is to provide an analysis of words and phrases that are commonly used to specify objectives but whose meanings are very abstract and unclear, in particular words like "robust", "flexible", "creative" and "autonomous". This document argues that these words all share a feature that could be described as expressing a "meta-requirement": none of them is directly associated with a set of features which, if found in an object, process or system, would justify the application of the label, or from which design features could be derived. In other words, the words express concepts that do not specify criteria for their instances, though they do express criteria for deriving criteria.

Concepts with these features are examples of parametric polymorphism. How they work, and what their use implies, depends in fairly systematic ways on what other concepts they are combined with. The concept "efficient" illustrates this: each of the following can be more or less efficient: a car engine, a lawn mower, a mathematical algorithm, a proof, a plan for building something, an educational policy. But what makes each type efficient depends on its function or purpose, in a fairly systematic way.

Deriving criteria from such polymorphic concepts requires further information, from which the criteria can be derived in a systematic way that differs for each of the meta-criteria. Consciousness is an example: what it means to be conscious of a sound, the colour of an object, a change that has happened, a puzzling thought, an intention, a lack of understanding, or being watched (among many other cases) involves different details that depend on what one is conscious OF.
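The borrowed programming term suggests a way of making the point concrete. Here is a minimal sketch (my illustration, not part of the paper; all names are hypothetical) of how a polymorphic concept like "efficient" yields definite criteria only once the kind of thing and its purpose are supplied:

    from typing import Callable, Generic, TypeVar

    T = TypeVar("T")

    class Efficiency(Generic[T]):
        """'Efficient' names no fixed feature set: it is a schema that,
        given the purpose of a kind of thing, yields concrete criteria."""
        def __init__(self, cost: Callable[[T], float],
                     benefit: Callable[[T], float]):
            self.cost = cost        # resource consumed: specific to the kind T
            self.benefit = benefit  # purpose served: specific to the kind T

        def score(self, item: T) -> float:
            return self.benefit(item) / self.cost(item)

    # The same polymorphic concept, with criteria derived differently per kind:
    engine = Efficiency(cost=lambda e: e["fuel_litres"],
                        benefit=lambda e: e["km_travelled"])
    sorter = Efficiency(cost=lambda a: a["comparisons"],
                        benefit=lambda a: a["items_sorted"])

    print(engine.score({"fuel_litres": 5.0, "km_travelled": 80.0}))  # 16.0
    print(sorter.score({"comparisons": 120, "items_sorted": 100}))   # 0.83...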

Analyses are presented of some of the labels used to specify meta-requirements. This is an exercise in analysis of logical topography. It is related to the paper COSY-DP-0605 ("Spatial prepositions as higher order functions: And implications of Grice's theory for evolution of language"), which also discusses meta-concepts/higher-order concepts.

Subsequent work will need to provide more detailed examples of the use of the various meta-criteria.


COSY-DP-0703 (HTML)
TITLE: Two Notions Contrasted: 'Logical Geography' and 'Logical Topography'
(Variations on a theme by Gilbert Ryle: The logical topography of 'Logical Geography'.)
AUTHOR(S): Aaron Sloman
DATE INSTALLED: 30 Dec 2007
Updated: 8 Jan 2008
Bibliographic information: Unpublished discussion paper
ABSTRACT:

In his 1949 book The Concept of Mind (CM) and in other writings the philosopher Gilbert Ryle suggested that a good way for philosophers to resolve some philosophical disputes (often by discovering that both sides were based on conceptual confusions) is to study the 'logical geography' of the concepts involved. I used to think I knew what that meant. But now I think I was using a different concept from Ryle's -- referring to a different type of analysis that is fundamentally related to the scientific project of providing explanations of how the world works. This paper provides some background, then describes the difference I have in mind, showing how a theory about how some class of objects works generates a set of possible types of states, events and processes that can be referred to as a "Logical Topography". There are different ways of carving up that space into different categories or identifying different relationships that can occur within it. Those different ways define different "Logical Geographies". This paper shows how work in AI and robotics can extend the work of Ryle and other philosophers by exposing logical topographies that support different possible logical geographies, only one of which may correspond to how our ordinary concepts work. This helps to resolve some century-old puzzles about the nature of conceptual analysis and to show how the relationships between philosophy and science can be deeper than many philosophers realise.
(This paper extends one of the points made in Appendix IV of my Oxford DPhil thesis (1962),
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1962-01 )


COSY-DP-0702 (HTML)
Title: Predicting Affordance Changes
(Alternative ways to deal with uncertainty)

AUTHOR(S): Aaron Sloman (and the CoSy PlayMate team)
Bibliographic information: Unpublished discussion paper COSY-DP-0702
ABSTRACT:
Discussion of some of the relationships between (a) predicting physical, topological and geometrical consequences of motions and (b) predicting the changes in affordances that result from such motions, including both (b.1.) changes in action affordances (changes in what the agent can do in the environment) and (b.2.) changes in epistemic affordances, i.e. changes in the information available to the agent or changes in the ease of planning or deciding.

It is suggested that in some circumstances the predictions can be based on processes operating on selected fragments of a 2-D representation of a 3-D scene (or a 2.5-D representation when occlusion is involved), reasoning by manipulating the representation. Moreover, where uncertainty is a problem for prediction it is often due to the existence of a "phase boundary" between configurations where the prediction definitely gives one result and configurations where the prediction definitely gives another result. One way of reducing uncertainty is to move an object (or even the viewing position) away from such a phase boundary.

This sometimes allows simple, deterministic, geometric reasoning to be used, instead of much more complex and unreliable reasoning with probability distributions and expected utilities.
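As a toy illustration of that strategy (my sketch, not the paper's; the scenario and numbers are invented), consider a one-dimensional prediction problem with a single phase boundary:

    # Hypothetical example: will an object at a given offset from a table
    # edge stay put or fall? The prediction is deterministic except within
    # a band around the phase boundary, where sensing error dominates.

    PHASE_BOUNDARY = 0.0   # offset at which the outcome flips (invented)
    SENSING_ERROR = 0.02   # measurement uncertainty in metres (invented)

    def predict(offset: float) -> str:
        """Deterministic prediction, except near the phase boundary."""
        if abs(offset - PHASE_BOUNDARY) <= SENSING_ERROR:
            return "uncertain"   # too close to the boundary to decide
        return "stays" if offset < PHASE_BOUNDARY else "falls"

    def reduce_uncertainty(offset: float, step: float = 0.05) -> float:
        """Move the object away from the boundary until simple geometric
        reasoning suffices, instead of reasoning probabilistically."""
        while predict(offset) == "uncertain":
            offset -= step       # move in the known-safe direction
        return offset

    print(predict(0.01))                       # 'uncertain'
    print(predict(reduce_uncertainty(0.01)))   # 'stays'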


Filename: chappell-sloman-ijuc-07.pdf
Title: Natural and artificial meta-configured altricial information-processing systems

Authors: Jackie Chappell and Aaron Sloman
Date Installed: Nov 2006, Published 2007

Where published:

Invited contribution to a special issue of The International Journal of Unconventional Computing
Vol 2, Issue 3, 2007, pp. 211--239
Abstract:
The full variety of powerful information-processing mechanisms 'discovered' by evolution has not yet been re-discovered by scientists and engineers. By attending closely to the diversity of biological phenomena, we may gain new insights into (a) how evolution happens, (b) what sorts of mechanisms, forms of representation, types of learning and development and types of architectures have evolved, (c) how to explain ill-understood aspects of human and animal intelligence, and (d) new useful mechanisms for artificial systems. We analyse tradeoffs common to both biological evolution and engineering design, and propose a kind of architecture that grows itself, using, among other things, genetically determined meta-competences that deploy powerful symbolic mechanisms to achieve various kinds of discontinuous learning, often through play and exploration, including development of an 'exosomatic' ontology, referring to things in the environment --- in contrast with learning systems that discover only sensorimotor contingencies or adaptive mechanisms that make only minor modifications within a fixed architecture.

Keywords:
behavioural epigenetics, biologically inspired robot architectures, development of behaviour, exosomatic ontology, evolution of behaviour, nature/nurture tradeoffs, precocial-altricial spectrum, preconfigured/meta-configured competences, sensorimotor contingencies.

NOTE:
This paper is a sequel to 'The Altricial-Precocial Spectrum for Robots', a paper by the same authors published in the Proceedings of IJCAI 2005.

Further work on this topic is presented in this online paper:
A.Sloman The Meta-Configured Genome (2017)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html
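The abstract's contrast between preconfigured competences (fully specified in advance) and meta-configured competences (where the genome specifies how to build a competence whose content depends on prior development) can be caricatured in a few lines of code. This is my illustrative sketch, not the authors'; all names are hypothetical:

    from typing import Callable

    Competence = Callable[[str], str]

    def preconfigured() -> Competence:
        # Fixed in advance: the precocial end of the spectrum.
        return lambda stimulus: "fixed response to " + stimulus

    def meta_competence(experience: list) -> Competence:
        # The genome specifies HOW to build the competence; WHAT gets
        # built depends on what earlier play/exploration produced.
        learned = {e: "learned response to " + e for e in experience}
        return lambda stimulus: learned.get(stimulus, "explore further")

    walker = preconfigured()
    speaker = meta_competence(["ball", "cup"])  # invented developmental history
    print(speaker("ball"))    # uses what was learned earlier
    print(speaker("spoon"))   # novel input triggers further exploration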


Filename: sloman-aaai-consciousness.pdf
Title: Why Some Machines May Need Qualia and How They Can Have Them:
Including a Demanding New Turing Test for Robot Philosophers

Invited presentation for AAAI Fall Symposium 2007
AI and Consciousness: Theoretical Foundations and Current Approaches
(Symposium Web site; Supplementary Web site)
Author: Aaron Sloman
Date Installed: 3 Sep 2008 (Previously on CoSy site)

Abstract:

This paper extends three decades of work arguing that instead of focusing only on (adult) human minds, we should study many kinds of minds, natural and artificial, and try to understand the space containing all of them, by studying what they do, how they do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding the complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness lead to muddle and confusion unless they are placed in that broader context, in part because current ontologies for specifying and comparing designs are inconsistent and inadequate. A methodology for making progress is summarised and a novel requirement proposed for human-like philosophical robots, namely that a single generic design, in addition to meeting many other more familiar requirements, should be capable of developing different and opposed viewpoints regarding philosophical questions about consciousness, and the so-called hard problem. No designs proposed so far come close.

See also this short talk given at Bielefeld on 10 October 2007:
'Why robot designers need to be philosophers'
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#bielefeld



Filename: sloman-aaai-representation.pdf
Title: Diversity of Developmental Trajectories in Natural and Artificial Intelligence

Invited presentation for AAAI Fall Symposium 2007
Computational Approaches to Representation Change during Learning and Development.
(
Symposium Web site)
Author: Aaron Sloman
Date Installed: 3 Sep 2008 (Previously on CoSy site)

Abstract:

There is still much to learn about the variety of types of learning and development in nature and the genetic and epigenetic mechanisms responsible for that variety. This paper is one of a collection exploring ideas about how to characterise that variety and what AI researchers, including robot designers, can learn from it. This requires us to understand important features of the environment. Some robots and animals can be pre-programmed with all the competences they will ever need (apart from fine tuning), whereas others will need powerful learning mechanisms. Instead of using only completely general learning mechanisms, some robots, like humans, need to start with deep, but widely applicable, implicit assumptions about the nature of the 3-D environment, about how to investigate it, about the nature of other information users in the environment and about good ways to learn about that environment, e.g. using creative play and exploration. One feature of such learning could be learning more about how to learn in that sort of environment. What is learnt initially about the environment is expressible in terms of an innate ontology, using innately determined forms of representation, but some learning will require extending the forms of representation and the ontology used. Further progress requires close collaboration between AI researchers, biologists studying animal cognition and biologists studying genetics and epigenetic mechanisms.


The Functions and Rogators paper has now been moved to a new location:
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#rog


Title: What About Their Internal Languages?
Author: Aaron Sloman
Moved to another file 8 Feb 2016


Moved to another file
Title: Explaining Logical Necessity

Author: Aaron Sloman


Filename: sloman-oii-2007.pdf
Title: Requirements for Digital Companions: It's harder than you think (OUT OF DATE)

See the final version (May 2009) here.
Author: Aaron Sloman
Date Installed: 30 Nov 2007

Abstract:

Position Paper for Workshop on Artificial Companions in Society:
Perspectives on the Present and Future
Organised by the Companions project.
Oxford Internet Institute (25th--26th October, 2007)
A closely related slide presentation is here.

Presenting some of the requirements for a truly helpful, as opposed to merely engaging (or annoying), artificial companion, with arguments as to why meeting those requirements is far beyond the current state of the art in AI.

Contents
1 Introduction
   1.1 Functions of DCs
   1.2 Motives for acquiring DCs
2 Categories of DC use and design that interest me
3 Problems of achieving the enabling functions
   3.1 Kitchen mishaps
   3.2 Alternatives to canned responses
   3.3 Identifying affordances and searching for things that provide them
   3.4 More abstract problems
4 Is the solution statistical?
   4.1 Why do statistics-based approaches work at all?
   4.2 What's needed
5 Can it be done?
6 Rights of intelligent machines
7 Risks of premature advertising
References


Filename: sloman-sunbook.pdf
Title: Putting the Pieces Together Again (Preprint)

Author: Aaron Sloman

In The Cambridge Handbook of Computational Psychology
Ed. Ron Sun, Cambridge University Press (2008)
Paperback version.
Hardback version.
Date Installed: 20 Oct 2007
(Details here updated: 22 Mar 2008)

Abstract:

This is a 'preprint' of the final chapter of a Handbook of Computational Psychology which is currently in press. The differences between this and the version to be published include British vs American spelling and punctuation. This version also has a few footnotes that had to be excluded from the published version. For some reason the publisher did not want abstracts for each chapter, so there is no official abstract. The preprint version also includes a table of contents for the chapter (copied below).

Overview
Instead of surveying achievements of AI and computational Cognitive Science as might be expected, this chapter complements the Editor's review of requirements for work on integrated systems in Chapter 1, by presenting a personal view of some of the major unsolved problems, and obstacles to solving them. It attempts to identify some major gaps, and to explain why progress has been much slower than many people expected. It also includes some recommendations for improving progress and for countering the fragmentation and factionalism of the research community.

It is relatively easy to identify long-term ambitions in vague terms, e.g. the aim of modelling human flexibility, human learning, human cognitive development, human language understanding or human creativity; but taking steps to fulfil those ambitions is fraught with difficulties. So progress in modelling human and animal cognition is slow despite many impressive narrow-focus achievements, including those reported in earlier chapters.

An attempt is made to explain why progress in producing realistic models of human and animal competences is slow, namely (a) the great difficulty of the problems, (b) failure to understand the breadth, depth and diversity of the problems, (c) the fragmentation of the research community and (d) social and institutional pressures against risky multi-disciplinary, long-term research. Advances in computing power, theory and techniques will not suffice to overcome these difficulties. Partial remedies are offered, namely identifying some of the unrecognised problems and suggesting how to plan research on the basis of 'backward-chaining' from long-term goals, in ways that may, perhaps, help warring factions to collaborate and provide new ways to select targets and assess progress.

Contents of the Chapter

1 Introduction
  1.1 The scope of cognitive modelling
  1.2 Levels of analysis and explanation
2 Difficulties and how to address them
  2.1 Institutional obstacles
  2.2 Intrinsic difficulties in making progress
3 Failing to see problems: ontological blindness
4 What are the functions of vision?
  4.1 The importance of mobile hands
  4.2 Seeing processes, affordances and empty spaces
  4.3 Seeing without recognising objects
  4.4 Many developmental routes to related cognitive competences
  4.5 The role of perception in ontology extension
5 Representational capabilities
  5.1 Is language for communication?
  5.2 Varieties of complexity: 'Scaling up' and 'scaling out'
  5.3 Humans scale out, not up
6 Are humans unique?
  6.1 Altricial and precocial skills in animals and robots
  6.2 Meta-semantic competence
7 Using detailed scenarios to sharpen vision
  7.1 Sample Competences to be Modelled
  7.2 Fine-grained Scenarios are Important
  7.3 Behavior specifications are not enough
8 Resolving fruitless disputes by methodological 'lifting'
  8.1 Analyse before you choose
  8.2 The need to survey spaces of possibilities
  8.3 Towards an ontology for types of architectures
9 Assessing scientific progress
  9.1 Organising questions
  9.2 Scenario-based backward chaining research
  9.3 Assessing (measuring?) progress
  9.4 Replacing rivalry with collaboration
10 Conclusion
References
--------------------------------------------------------------- 
NOTE:
This chapter overlaps with various other things including


Filename: challenge-penrose.pdf
Title: Perception of structure 2: Impossible Objects

Author: Aaron Sloman
Date Installed: 15 May 2007

Abstract:

This is a sequel to a challenge to vision researchers about visual perception, presented here. This sequel discusses some detailed requirements for visual mechanisms related to how typical (adult) humans see pictures of 'impossible objects'.

Many people have seen the picture by M.C. Escher representing a watermill, people, and two towers. It is simultaneously a work of art, a mathematical exercise and a probe into the human visual system. You probably see a variety of 3-D structures of various shapes and sizes, some in the distance and some nearby, some familiar, like flights of steps and a water wheel, others strange, e.g. some things in the 'garden'. There are many parts you can imagine grasping, climbing over, leaning against, walking along, picking up, pushing over, etc.: you see both structure and affordances in the scene. Yet all those internally consistent and intelligible details add up to a multiply contradictory global whole. What we see could not possibly exist. The implications of this sort of thing are discussed, with examples.

So the existence of pictures of impossible objects shows (a) that what we see is not necessarily internally consistent, even when we see 3-D structures and processes and (b) that detecting the impossibility does not happen automatically: it requires extra work, and may sometimes be too difficult. This has implications for forms of representation in 3-D vision, in particular that scene perception cannot involve building a model of the scene, since models cannot be inconsistent.
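A minimal sketch (mine, not the paper's) of how pairwise-consistent constraints can be globally impossible, and why detecting this requires extra, non-local work, treats 'in front of' relations as a graph and looks for cycles:

    # Hypothetical depth constraints like those suggested by an
    # Escher/Penrose figure: each pair is locally consistent, but the
    # whole set may describe an impossible scene (a cycle).

    def globally_consistent(in_front_of):
        """Return False if the 'in front of' relation contains a cycle."""
        graph = {}
        for nearer, farther in in_front_of:
            graph.setdefault(nearer, []).append(farther)

        def reaches(start, goal, seen):
            for nxt in graph.get(start, []):
                if nxt == goal or (nxt not in seen and
                                   reaches(nxt, goal, seen | {nxt})):
                    return True
            return False

        return not any(reaches(node, node, set()) for node in graph)

    penrose_like = [("A", "B"), ("B", "C"), ("C", "A")]
    print(globally_consistent(penrose_like))              # False: impossible
    print(globally_consistent([("A", "B"), ("B", "C")]))  # True: realisable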

For additional challenges for vision researchers see
http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane
Challenge for Vision: Seeing a Toy Crane


Filename: challenge.pdf
Title: Perception of structure: Anyone Interested?

Author: Aaron Sloman
Date Installed: 15 May 2007 (Written Feb 2005)

Abstract:

This is not strictly a paper, but a short slide presentation making a point about the state of vision research, written while making plans for a robot with vision and manipulation capabilities (the CoSy PlayMate).

I have the impression that most of the research work being done on vision in AI is concerned with:

  • Recognition/classification/tracking of objects (including face recognition).
  • Optical character recognition (special case of the previous point)
  • Self-localisation and route learning/route following.
  • Pushing things, avoiding things, blocking things (as in robot football).
  • Various special-purpose applications of the above, e.g. floor cleaning and lawn mowing.
What is missing from the above?
  • Perception of structure (at different levels), e.g.
    • perception of 3-D parts and their relationships
    • Perception of motion in which relationships between parts of one object and parts of other objects change, including things like sliding along, fitting together, pushing, twisting, bending, straightening, inserting, removing, rearranging.
  • Perception of positive and negative affordances and causal relations, e.g.
    • Possibilities for action, for achieving specific effects
    • Obstructions to action, and limitations of actions
    especially as regards parts of complex objects, which can be grasped, pulled, pushed, twisted, rotated, squeezed, stroked, prodded, thrown, caught, chewed, sucked, put on (as clothing or covering), removed, assembled, disassembled, and many more...; as well as many variations of each of the above.
A sequel to this is the discussion of impossible objects, above.



Filename: viezzer-thesis/thesis/viezzer-phd.pdf (PDF)
Filename: viezzer-thesis/thesis/main.ps (Postscript)
Abstract and further information
Title: Autonomous concept formation: An architecture-based analysis (PhD Thesis, 2007)

Author: Manuela Viezzer
Date Installed: 6 May 2007

Abstract:

Abstract, synopsis and program code available here


NOW IN ANOTHER FILE (html overview and PDF chapters)
Title: Oxford DPhil Thesis (1962): Knowing and Understanding
Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth

Author: Aaron Sloman


Filename: sloman-cognitive-modelling-chapter.pdf
Title: Putting the pieces together again

Author: Aaron Sloman

This is an out-of-date early draft of a chapter for a handbook of cognitive modelling. The final chapter is available here.

Date Installed: 27 Apr 2007


Re-located to another file
Title: The structure of the space of possible minds

Author: Aaron Sloman

Originally published in The Mind and the Machine: philosophical aspects of Artificial Intelligence, Ed. S. Torrance, Ellis Horwood, 1984.
Filename: sloman-transformations.pdf
Title: Transformations of Illocutionary Acts (1969)

Author: Aaron Sloman

First published in Analysis, Vol 30, No 2, December 1969, pages 56-59
Date Installed: 10 Jan 2007

Abstract: (extracts from paper)

This paper discusses varieties of negation and other logical operators when applied to speech acts, in response to an argument by John Searle.

In his book Speech Acts (Cambridge University Press, 1969), Searle discusses what he calls 'the speech act fallacy' (pp. 136 ff.), namely the fallacy of inferring from the fact that

(1) in simple indicative sentences, the word W is used to perform some speech-act A (e.g. 'good' is used to commend, 'true' is used to endorse or concede, etc.)
the conclusion that
(2) a complete philosophical explication of the concept W is given when we say 'W is used to perform A'.
He argues that as far as the words 'good', 'true', 'know' and 'probably' are concerned, the conclusion is false because the speech-act analysis fails to explain how the words can occur with the same meaning in various grammatically different contexts, such as interrogatives ('Is it good?'), conditionals ('If it is good it will last long'), imperatives ('Make it good'), negations, disjunctions, etc.

The paper argues that even if conclusion (2) is false, Searle's argument against it is inadequate because he does not consider all the possible ways in which a speech-act might account for non-indicative occurrences.

In particular, there are other things we can do with speech acts besides performing them and predicating their performance, e.g. besides promising and expressing the proposition that one is promising. For example, you can indicate that you are considering performing act F but are not yet prepared to perform it, as in 'I don't promise to come'. So the analysis proposed can be summarised thus:

If F and G are speech acts, and p and q propositional contents or other suitable objects, then:

o Utterances of the structure 'If F(p) then G(q)' express provisional commitment to performing G on q, pending the performance of F on p
o Utterances of the form 'F(p) or G(q)' would express a commitment to performing (eventually) one or other or both of the two acts, though neither is performed as yet.
o The question mark, in utterances of the form 'F(p)?' instead of expressing some new and completely unrelated kind of speech act, would merely express indecision concerning whether to perform F on p together with an attempt to get advice or help in resolving the indecision.
o The imperative form 'Bring it about that . .' followed by a suitable grammatical transformation of F(p) would express the act of trying to get (not cause) the hearer to bring about that particular state of affairs in which the speaker would perform the act F on p (which is not the same as simply bringing it about that the speaker performs the act).
It is not claimed that 'not', 'if', etc., always are actually used in accordance with the above analyses, merely that this is a possible type of analysis which (a) allows a word which in simple indicative sentences expresses a speech act to contribute in a uniform way to the meanings of other types of sentences and (b) allows signs like 'not', 'if', the question construction, and the imperative construction, to have uniform effects on signs for speech acts. This type of analysis differs from the two considered and rejected by Searle. Further, if one puts either assertion or commendation or endorsement in place of the speech acts F and G in the above schemata, then the results seem to correspond moderately well with some (though not all) actual uses of the words and constructions in question. With other speech acts, the result does not seem to correspond to anything in ordinary usage: for instance, there is nothing in ordinary English which corresponds to applying the imperative construction to the speech act of questioning, or even commanding, even though if this were done in accordance with the above schematic rules the result would in theory be intelligible.
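The schemata above can be read as a small algebraic datatype in which the logical constructions operate uniformly on speech-act signs rather than only on indicative propositions. A minimal sketch (mine, not the paper's; the names are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class Act:
        force: str      # e.g. 'promise', 'commend', 'assert'
        content: str    # the propositional content p

    @dataclass
    class Conditional:  # 'If F(p) then G(q)': provisional commitment to
        antecedent: Act # G(q), pending the performance of F(p)
        consequent: Act

    @dataclass
    class Disjunction:  # 'F(p) or G(q)': commitment to eventually perform
        left: Act       # one or other (or both), neither performed yet
        right: Act

    @dataclass
    class Question:     # 'F(p)?': indecision about performing F on p,
        act: Act        # plus a request for help resolving it

    # 'If I promise to come, then I promise to bring wine.'
    u = Conditional(Act("promise", "I come"), Act("promise", "I bring wine"))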


Filename: sloman-new-bodies.pdf (PDF)
Filename: sloman-new-bodies.html (HTML)
Title: New Bodies for Sick Persons: Personal Identity Without Physical Continuity

Author: Aaron Sloman

First published in Analysis, Vol 32, No 2, December 1971, pages 52-55
Date Installed: 9 Jan 2007 (Originally Published 1971)

Abstract: (Extracts from paper)

In his recent Aristotelian Society paper ('Personal identity, personal relationships, and criteria', in Proceedings of the Aristotelian Society, 1970-71, pp. 165-186), J. M. Shorter argues that the connexion between physical identity and personal identity is much less tight than some philosophers have supposed, and, in order to drive a wedge between the two sorts of identity, he discusses logically possible situations in which there would be strong moral and practical reasons for treating physically discontinuous individuals as the same person. I am sure his main points are correct: the concept of a person serves a certain sort of purpose and in changed circumstances it might be able to serve that purpose only if very different, or partially different, criteria for identity were employed. Moreover, in really bizarre, but "logically" possible, situations there may be no way of altering the identity-criteria, nor any other feature of the concept of person, so as to enable the concept to have the same moral, legal, political and other functions as before: the concept may simply disintegrate, so that the question 'Is X really the same person as Y or not?' has no answer at all. For instance, this might be the case if bodily discontinuities and reduplications occurred very frequently. To suppose that the "essence" of the concept of a person, or some set of general logical principles, ensures that questions of identity always have answers in all possible circumstances, is quite unjustified.

In order to close a loophole in Shorter's argument I describe a possible situation in which both physical continuity and bodily identity are clearly separated from personal identity. Moreover, the example does not, as Shorter's apparently does, assume the falsity of current physical theory.

It will be a long time before engineers make a machine which will not merely copy a tape recording of a symphony, but also correct poor intonation, wrong notes, or unmusical phrasing. An entirely new dimension of understanding of what is being copied is required for this. Similarly, it may take a further thousand years, or more, before the transcriptor is modified so that when a human body is copied the cancerous or other diseased cells are left out and replaced with normal healthy cells. If, by then, the survival rate for bodies made by this modified machine were much greater than for bodies from which tumours had been removed surgically, or treated with drugs, then I should have little hesitation, after being diagnosed as having incurable cancer, in agreeing to have my old body replaced by a new healthy one, and the old one destroyed before I recovered from the anaesthetic. This would be no suicide, nor murder.



[Now in another file]
Title: 'NECESSARY', 'A PRIORI' AND 'ANALYTIC'
Author: Aaron Sloman


BACK TO CONTENTS LIST


NOTE


Older files in this directory (pre-2007) are accessible via the main index.


RETURN TO MAIN COGAFF INDEX FILE

See also the School of Computer Science Web page.

This file is maintained by Aaron Sloman, and designed to be lynx-friendly, and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk