PAPERS 1962-80 CONTENTS LIST
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html
Maintained by Aaron Sloman.
It contains an index of files in the Cognition and
Affect Project's FTP/Web directory that were produced or published in the years
1962-1980. Some of the papers published in this period were produced
earlier and are included in one of the lists for an earlier period. Some
older papers recently digitised have also been included.
The main CogAff index is at http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents
A list of PhD and MPhil theses was added in June 2003.
This file last updated: 10 Jun 2012; 7 Jul 2012; 24 Mar 2014
Where published:
In: Open Peer Commentary on Shimon Ullman, 'Against Direct Perception',
Behavioral and Brain Sciences Journal (BBS), (1980) 3, pp. 401-404.
The whole publication, including commentaries, is:
S. Ullman, Against direct perception,
The Behavioral And Brain Sciences, (1980) 3, 373-415,
http://dx.doi.org/10.1017/S0140525X0000546X
Abstract:
No abstract in paper. Will add a summary here later.
Compare my more recent discussion of Gibson:
http://tinyurl.com/BhamCog/talks/#talk93
Aaron Sloman, What's vision for, and how does it work? From Marr (and earlier) to Gibson and Beyond,
Online tutorial presentation, Sep, 2011 (with later updates).
Where published:
Commentary on 'Minds, brains, and programs' by John R. Searle
in The Behavioral and Brain Sciences Journal (BBS) (1980) 3, 417-457
http://dx.doi.org/10.1017/S0140525X00005756
Also http://www.cnbc.cmu.edu/~plaut/MindBrainComputer/papers/Searle80BBS.mindsBrainsPrograms.pdf
This commentary: pages 447-448
Abstract:
Searle's delightfully clear and provocative essay contains a subtle mistake, which is also often made by AI researchers who use familiar mentalistic language to describe their programs. The mistake is a failure to distinguish form from function.
That some mechanism or process has properties that would, in a suitable context, enable it to perform some function, does not imply that it already performs that function. For a process to be understanding, or thinking, or whatever, it is not enough that it replicate some of the structure of the processes of understanding, thinking, and so on. It must also fulfil the functions of those processes. This requires it to be causally linked to a larger system in which other states and processes exist. Searle is therefore right to stress causal powers. However, it is not the causal powers of brain cells that we need to consider, but the causal powers of computational processes. The reason the processes he describes do not amount to understanding is not that they are not produced by things with the right causal powers, but that they do not have the right causal powers, since they are not integrated with the right sort of total system.
Mike Brady (MIT)
Sponsored by:
Steven Hardy (Sussex)
Joerg Siekmann (Karlsruhe)
Karen Sparck-Jones (Cambridge)
Bob Wielinga (Amsterdam)
Richard Young (Cambridge)
Abstract
This paper discusses the design of a program that tackles the ambiguity
that remains when line-drawings are interpreted by means of geometric
constraints alone. It does this by supplementing its basic geometric
reasoning with a set of models of various sizes. Earlier programs
are analysed in terms of models, and three different functions for models
are distinguished. Finally, principles for selecting models for the present
purpose are related to the concept of a "mapping event" between the picture
and scene domains.
_______________________________________________________________________
Abstract:
Some ideas are presented, derived from work on the POPEYE vision project,
concerning the nature and use of different kinds of intermediate picture
descriptions. It is suggested that there are "natural elements" in terms of
which stored models should be defined and that it is of prime importance to
search for those intermediate picture descriptions which are most
characteristic of the expression of such elements.
_______________________________________________________________________
Abstract:
Why do people interpret sketches, cartoons, etc. so easily? A theory is
outlined which accounts for the relation between ordinary visual perception
and picture interpretation. Animals and versatile robots need fast,
generally reliable and "gracefully degrading" visual systems. This can be
achieved by a highly-parallel organisation, in which different domains of
structure are processed concurrently, and decisions made on the basis of
incomplete analysis. Attendant risks are diminished in a "cognitively
friendly world" (CFW). Since high levels of such a system process
inherently impoverished and abstract representations, it is ideally suited
to the interpretation of pictures.
Title: The primacy of non-communicative language
Author: Aaron Sloman
In The Analysis of Meaning, Proceedings 5,
(Invited talk for ASLIB Informatics Conference, Oxford, March 1979),
Eds M. MacCafferty and K. Gray, pages 1--15,
ASLIB and British Computer Society, London, 1979.
Date: Originally published 1979. Added here 2 Dec 2000.
Abstract:
How is it possible for symbols to be used to refer to or describe things? I shall approach this question indirectly by criticising a collection of widely held views of which the central one is that meaning is essentially concerned with communication. A consequence of this view is that anything which could be reasonably described as a language is essentially concerned with communication. I shall try to show that widely known facts, for instance facts about the behaviour of animals, and facts about human language learning and use, suggest that this belief, and closely related assumptions (see A1 to A3 in the paper), are false. Support for an alternative framework of assumptions is beginning to emerge from work in Artificial Intelligence, work concerned not only with language but also with perception, learning, problem-solving and other mental processes. The subject has not yet matured sufficiently for the new paradigm to be clearly articulated. The aim of this paper is to help to formulate a new framework of assumptions, synthesising ideas from Artificial Intelligence and Philosophy of Science and Mathematics.
Note:
See also: What About Their Internal Languages? (1978, below)
This theme is developed in several later papers and presentations, over several decades, e.g.
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk111
Talk 111: Two Related Themes (intertwined) (2015)
What are the functions of vision? How did human language evolve?
(Languages are needed for internal information processing, including visual processing)
Where published:
In Donald Michie (Editor) Expert Systems in the Microelectronic Age (Edinburgh University Press, 1979)
Abstract:
A brief introduction to the main problems of epistemology as understood by philosophers and an explanation of (a) why they are relevant to AI, and (b) how they are transformed in the context of AI as the science of natural and artificial intelligent systems.
Title: The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind (1978)
Author: Aaron Sloman
(University of Sussex. At the University of Birmingham since 1991.)
http://www.cs.bham.ac.uk/~axs
Date installed: 29 Sep 2001
Last updated: August 2019
Abstract: See the book contents list
Published 1978: Revised Version, August 2016, August 2018
The PDF version is more suitable for printing, and shows page structure better,
but loses some of the detail, e.g. some text indentation.
The PDF version should have contents in a side-panel, e.g. if viewed in XPDF or
Acrobat Reader, but not if viewed "embedded" in
a web browser, e.g. Firefox or Chrome.
The page numbers of the PDF version are likely to change after further edits.
For citations use section numbers/headings rather than page numbers.
(Published free, with a Creative Commons Licence: details below.)
PARTIAL HISTORY
The original was photocopied by Manuela Viezzer in 2000, then scanned in by Sammy Snow. A lot of work remained to be done, correcting OCR errors and re-drawing the diagrams (for which I used the 'tgif' package on Linux). Since then most chapters have had additional notes and comments added, all clearly marked as new additions. In July 2015 the separate parts (except for the index) were combined to one integrated document with internal cross-references and made available in html and pdf formats listed above.
Some reviews of the 1978 version are listed below and in this document http://www.cs.bham.ac.uk/research/projects/cogaff/crp/concat/crp-reviews.html (also pdf)
OUT OF DATE VERSIONS
-
After the book had been scanned, a collection of separate chapters was made
available at this web site (originally HTML only, then PDF versions were added).
Those have now been merged into the new integrated version
above.
-
Note added 10 Aug 2015
I have discovered that a 2012 version of this book has been made available
on the Archive.Org web site
(https://archive.org, about: https://archive.org/about/),
a non-profit organisation building an internet library. The book is available
there in various formats:
https://archive.org/details/TheComputerRevolutionInPhilosophyPhilosophyScienceAndModelsOfMind
I don't know whether that archived version will ever be updated.
-
There is
an out of date version online at the eprints web site (PDF) of ASSC
(Association for the Scientific Study of Consciousness).
-
Kindle Ebook Version: added 18 Dec 2011
Sergei Kaunov converted the online
version available in 2011
to Amazon kindle format. (Alas now out of date.) It is available for
download at a very low cost (the minimum allowed by Amazon):
from
http://www.amazon.com/dp/B006JT8FSK
or
http://www.amazon.co.uk/dp/B006JT8FSK
Product description added by Sergei Kaunov:
-
Kindle Mobi-file: http://kaunov.webrestart.ru/upl/CRP.mobi
Created by Sergei Kaunov
-
Epub-file: http://kaunov.webrestart.ru/upl/CRP.epub
Created by Sergei Kaunov
(Hofstadter's review rightly criticises some of the unnecessarily aggressive tone and
throw-away remarks, but also gives the most thorough assessment of the main
ideas of the book that I have seen.
Like many reviewers and AI researchers, Hofstadter, like Stich (see below), regards the philosophy
of science in the first part of the book, e.g. Chapter 2, as relatively uninteresting,
whereas I think understanding those issues is central to understanding how human
minds work as they learn about the world and about themselves, and also central
to any good philosophy of science.)
Added 23 Jul 2015: Stich Review
A review of this book was published by Stephen P. Stich in 1981.
That review has now been made available, with the author's permission, here:
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/stich-review-crp.html
The review (like Hofstadter's review) criticised the notion of 'Explaining possibilities' as one of the aims of science and my use of Artificial Intelligence as an example, in Chapter 2.
Response to reviews
A partial response to the reviews by Stich and Hofstadter is
available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/explaining-possibility.html
Construction kits as explanations of possibilities
(generators of possibilities)
(Work in progress.)
Abstract:
Commentary on three target articles:
1. Premack, D., Woodruff, G., 'Does the chimpanzee have a theory of mind?', BBS 1978 1 (4): 515.
2. Griffin, D.R., 'Prospects for a cognitive ethology', BBS 1978 1 (4): 527.
3. Savage-Rumbaugh, E.S., Rumbaugh, D.R., Boysen, S., 'Linguistically-mediated tool use and exchange by chimpanzees (Pan troglodytes)', BBS 1978 1 (4): 539.
Despite the virtues of the target articles, I find something sadly lacking: an awareness of deep problems and a search for deep explanations.
Are the authors of these papers merely concerned to collect facts? Clearly not: they are also deeply concerned to learn the extent of man's uniqueness in the animal world, to refute behaviourism, and to replace anecdote with experimental rigour. But what do they have to say to someone who doesn't care whether humans are unique, who believes that behaviourism is either an irrefutable collection of tautologies or a dead horse, and who already is deeply impressed by the abilities of cats, dogs, chimps, and other animals, but who constantly wonders: HOW DO THEY DO IT?
My answer is that the papers do not have much to say about that: for that, investigation of designs for working systems is required, rather than endless collection of empirical facts, interesting as those may be.
See also The primacy of non-communicative language (Above)
Where published:
in Proceedings AISB/GI Conference, 18-20th July 1978,
Hamburg, Germany
Programme Chair: Derek Sleeman
Programme Committee: Alan Bundy (Edinburgh), Steve Hardy (Sussex), H.-H. Nagel (Hamburg), Jacques Pitrat (Paris), Derek Sleeman (Leeds), Yorick Wilks (Essex)
General Chair: H.-H. Nagel
Published by: SSAISB and GI
Abstract:
(Extract from text)
Vision work in AI has made progress with relatively small problems. We are not aware of any system in which many different kinds of knowledge co-operate. Often there is essentially one kind of structure, e.g. a network of lines or regions, and the problem is simply to segment it, and/or to label parts of it. Sometimes models of known objects are used to guide the analysis and interpretation of an image, as in the work of Roberts (1965), but usually there are few such models, and there isn't a very deep hierarchy of objects composed of objects composed of objects....
By contrast, recent speech understanding systems, like HEARSAY (Lesser 1977, Hayes-Roth 1977), deal with more complex kinds of interactions between different sorts of knowledge. They are still not very impressive compared with people, but there are some solid achievements. Is the lack of similar success in vision due to inherently more difficult problems?
Some vision work has explored interactions between different kinds of knowledge, including the Essex coding-sheet project (Brady, Bornat 1976) based on the assumption that provision for multiple co-existing processes would make the tasks much easier. However, more concrete and specific ideas are required for sensible control of a complex system, and a great deal of domain-specific descriptive know-how has to be explicitly provided for many different sub-domains.
The POPEYE project was an attempt to study ways of putting different kinds of visual knowledge together in one system.
NOTE:
Chapter 9 of The Computer Revolution in Philosophy provides further information about the Popeye system.
Commentary on Z. Pylyshyn:
Computational models and empirical constraints
Behavioral and Brain Sciences, Vol 1, Issue 1, March 1978, pp 91-99.
This commentary: pp 115-116.
Originally published in
Proceedings Summer Conference on Artificial Intelligence
AISB-2, July 12-14th 1976, pp. 242-255.
http://www.aisb.org.uk/publications/proceedings/aisb1976.pdf
Editor Mike Brady
Abstract (From first page):
POPEYE is a vision program currently being developed by a small group at Sussex University. The aim is to explore the problems of interpreting messy and complex pictures of familiar objects. Familiarity is important because knowledge of the objects helps to overcome the problems of dealing with noise and ambiguities. Pictures are presented to POPEYE in the form of a two-dimensional binary array, representing scenes containing overlapping letters made of "bars". Pictures are generated by programs either from descriptions or with the aid of an interactive graphics terminal. We are using POP2, the programming language developed for A.I. at Edinburgh University. However, we have found it useful to extend the language, and this paper describes some of the extensions. POPEYE's domain-specific knowledge will be described on another occasion. POPEYE should process the pictures in a sensible, flexible way, so that the main features to have emerged at any time can redirect the flow of attention. This applies at all levels.
NOTE:
Further details of the program were summarised in Chapter 9 of The Computer Revolution in Philosophy available online at
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#chap9
Available in two formats:
(Via LaTeX: derived from a scanned version)
(Original formatting -- but with photocopying errors: here)
Title: Afterthoughts on Analogical Representations (1975)
Author: Aaron Sloman
Originally published in
Theoretical Issues in Natural Language Processing (TINLAP-1),
Eds. R. Schank & B. Nash-Webber,
pp. 431--439,
MIT, 1975.
Date installed: 28 Mar 2005
Now available online
http://acl.ldc.upenn.edu/T/T75/
Reprinted in
Readings in knowledge representation,
Eds. R.J. Brachman & H.J. Levesque,
Morgan Kaufmann,
1985.
Abstract:
In 1971 I wrote a paper (in IJCAI 1971,
reprinted in AIJ 1971) attempting to relate some old philosophical issues
about representation and reasoning to problems in Artificial Intelligence. A
major theme of the paper was the importance of distinguishing "analogical"
from "Fregean" representations. I still think the distinction is important,
though perhaps not as important for current problems in A.I. as I used to think.
In this paper I'll try to explain why.
1974
Title: Physicalism and the Bogey of Determinism
Author: Aaron Sloman
Date: Published 1974, installed here 29 Dec 2005
Abstract:
Presented at an interdisciplinary conference on Philosophy of Psychology at the University of Kent in 1971. Published in the proceedings as: A. Sloman, 'Physicalism and the Bogey of Determinism'
(along with Reply by G. Mandler and W. Kessen, and additional comments by Alan R. White, Philippa Foot and others, and replies to criticisms)
in Philosophy of Psychology, Ed S.C. Brown, London: Macmillan, 1974, pages 293--304. (Published by Barnes & Noble in USA.)
Commentary and discussion followed on pages 305--348.
This paper rehearses some relatively old arguments about how any coherent notion of free will is not only compatible with but depends on determinism.
However the mind-brain identity theory is attacked on the grounds that what makes a physical event an intended action A is that the agent interprets the physical phenomena as doing A. The paper should have referred to the monograph Intention (1957) by Elizabeth Anscombe (summarised here by Jeff Speaks), which discusses in detail the fact that the same physical event can have multiple (true) descriptions, using different ontologies.
My point is partly analogous to Dennett's appeal to the 'intentional stance', though that involves an external observer attributing rationality along with beliefs and desires to the agent. I am adopting the design stance not the intentional stance, for I do not assume rationality in agents with semantic competence (e.g. insects), and I attempt to explain how an agent has to be designed in order to perform intentional actions; the design must allow the agent to interpret physical events (including events in its brain) in a way that is not just perceiving their physical properties. That presupposes semantic competence which is to be explained in terms of how the machine or organism works, i.e. using the design stance, not by simply postulating rationality and assuming beliefs and desires on the basis of external evidence.
Some of the ideas that were in the paper and in my responses to commentators were also presented in The Computer Revolution in Philosophy, including a version of this diagram (originally pages 344-345, in the discussion section below), discussed in more detail in Chapter 6 of the book, and later elaborated as an architectural theory assuming concurrent reactive, deliberative and metamanagement processes, e.g. as explained in this 1999 paper Architecture-Based Conceptions of Mind, and later papers.
The html paper preserves original page divisions.
(I may later add further notes and comments to this HTML version.)
Note added 3 May 2006
An online review of the whole book is available here, by Marius Schneider, O.F.M., The Catholic University of America, Washington, D.C., apparently written in 1975.
In Proceedings AISB Conference 1974, University of Sussex, pp. 173--185.
Author: Aaron Sloman
A slightly revised version (with clearer diagrams) was published as Chapter 8 of the 1978 book The Computer Revolution in Philosophy.
Date: Published/Presented 1974, installed here 3 Jan 2010.
Abstract:
The aim of this paper is methodological and tutorial. It uses elementary number competence to show how reflection on the fine structure of familiar human abilities generates requirements exposing the inadequacy of initially plausible explanations. We have to learn how to organise our common sense knowledge and make it explicit, and we don't need experimental data as much as we need to extend our model-building know-how.
First published in Analysis, vol 32, No 2, December 1971, pages 52--55.
Date Installed: 9 Jan 2007 (Originally Published 1971)
Abstract: (Extracts from paper)
In his recent Aristotelian Society paper ('Personal identity, personal relationships, and criteria', in Proceedings of the Aristotelian Society, 1970-71, pp. 165--186), J. M. Shorter argues that the connexion between physical identity and personal identity is much less tight than some philosophers have supposed, and, in order to drive a wedge between the two sorts of identity, he discusses logically possible situations in which there would be strong moral and practical reasons for treating physically discontinuous individuals as the same person. I am sure his main points are correct: the concept of a person serves a certain sort of purpose and in changed circumstances it might be able to serve that purpose only if very different, or partially different, criteria for identity were employed. Moreover, in really bizarre, but "logically" possible, situations there may be no way of altering the identity-criteria, nor any other feature of the concept of person, so as to enable the concept to have the same moral, legal, political and other functions as before: the concept may simply disintegrate, so that the question 'Is X really the same person as Y or not?' has no answer at all. For instance, this might be the case if bodily discontinuities and reduplications occurred very frequently. To suppose that the "essence" of the concept of a person, or some set of general logical principles, ensures that questions of identity always have answers in all possible circumstances, is quite unjustified.
In order to close a loophole in Shorter's argument I describe a possible situation in which both physical continuity and bodily identity are clearly separated from personal identity. Moreover, the example does not, as Shorter's apparently does, assume the falsity of current physical theory.
It will be a long time before engineers make a machine which will not merely copy a tape recording of a symphony, but also correct poor intonation, wrong notes, or unmusical phrasing. An entirely new dimension of understanding of what is being copied is required for this. Similarly, it may take a further thousand years, or more, before the transcriptor is modified so that when a human body is copied the cancerous or other diseased cells are left out and replaced with normal healthy cells. If, by then, the survival rate for bodies made by this modified machine were much greater than for bodies from which tumours had been removed surgically, or treated with drugs, then I should have little hesitation, after being diagnosed as having incurable cancer, in agreeing to have my old body replaced by a new healthy one, and the old one destroyed before recovering from the anaesthetic. This would be no suicide, nor murder.
Title: Interactions between Philosophy and Artificial Intelligence:
The role of intuition and non-logical reasoning in intelligence,
Author: Aaron Sloman
Includes short history of the paper at the beginning.
(PDF: original format scanned from IJCAI Proceedings)
Originally published in:
Proceedings IJCAI 1971
(Proceedings also available here:
http://www.ijcai.org/past_proceedings/)
Reprinted in
Artificial Intelligence, vol 2, 1971,
http://dx.doi.org/10.1016/0004-3702(71)90011-7
then in
J.M. Nicholas, ed.,
Images, Perception, and Knowledge,
Dordrecht-Holland: Reidel, 1977.
This was later revised as Chapter 7 of The Computer Revolution in Philosophy (1978), listed above.
Date added: 12 May 2004
Abstract:
This paper echoes, from a philosophical standpoint, the claim of McCarthy and Hayes that Philosophy and Artificial Intelligence have important relations. Philosophical problems about the use of 'intuition' in reasoning are related, via a concept of analogical representation, to problems in the simulation of perception, problem-solving and the generation of useful sets of possibilities in considering how to act. The requirements for intelligent decision-making proposed by McCarthy and Hayes in Some Philosophical Problems from the Standpoint of Artificial Intelligence (1969) are criticised as too narrow, because they allowed for the use of only one formalism, namely logic. Instead general requirements are suggested showing the usefulness of other forms of representation.
There were several sequels to this paper including the Afterthoughts paper written in 1975, some further developments regarding ontologies and criteria for adequacy in a 1984-5 paper and several other papers mentioned in the section on diagrammatic/visual reasoning here.
Response by Pat Hayes
A much cited paper by Hayes discussing issues raised in the 1971 paper and elsewhere was presented at the AISB Conference at Sussex University in 1974, and later reprinted in the collection mentioned below. In view of its general significance and unavailability online I have included the 1974 Conference version here, with the permission of the author.
File: hayes-aisb-1974-prob-rep.pdf (PDF)
Related work includes:
Patrick J. Hayes "Some Problems and Non-Problems in Representation Theory"
in Proceedings AISB Summer Conference, 1974
University of Sussex
Reprinted in: Readings in knowledge representation,
Eds. R.J. Brachman and H.J. Levesque, Morgan Kaufmann, Los Altos, California, 1985
- A (Possibly) New Theory of Vision (PDF)
- Two views of child as scientist: Humean and Kantian (PDF)
- Work with J. Chappell on causation in animals and robots.
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
Hidden Depths of Triangle Qualia
Theorems About Triangles, and Implications for Biological Evolution and AI
The Median Stretch, Side Stretch, and Triangle Area Theorem, Old and new proofs.
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-sum.html
The Triangle Sum Theorem
Old and new proofs concerning the sum of interior angles of a triangle.
(More on the hidden depths of triangle qualia.)
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Meta-Morphogenesis and Toddler Theorems: Case Studies
Title: Tarski, Frege and the Liar Paradox
Originally in Philosophy, Vol XLVI, pages 133-147, 1971.
Author: Aaron Sloman
Abstract:
The paper attempts to resolve a variety of logical and semantic paradoxes on the basis of Frege's ideas about compositional semantics: i.e. complex expressions have a reference that depends on the references of the component parts and the mode of composition, which determines a function from the lowest level components to the value for the whole expression. The paper attempts to show that it is inevitable within this framework that some syntactically well formed expressions will fail to have any reference, even though they may have a well defined sense. This can be compared with the ways in which syntactically well-formed programs in programming languages may fail to terminate or in some other way fail semantically and produce run-time errors.
The paper suggests that this view of paradoxes, including the paradox of the Liar, is superior to Tarski's analysis, which required postulating a hierarchy of meta-languages. We do not need such a hierarchy to explain what is going on or to deal with the fact that such paradoxes exist. Moreover, the hierarchy would not necessarily be useful for an intelligent agent, compared with languages that contain their own meta-language, like the one I am now using.
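The programming comparison can be made concrete. Here is a minimal Python sketch (an illustration added to this index, not from the paper): the definition below is syntactically well formed and has a clear sense, yet evaluating it can never settle on a truth-value, much as the Liar sentence fails to acquire a reference.

    # Illustrative sketch (not from the paper): a syntactically well-formed
    # definition whose evaluation never yields a truth-value, analogous to a
    # sentence with a well-defined sense but no reference.
    def liar():
        # "This sentence is false": the value is defined as the negation of
        # itself, so evaluation recurses forever instead of returning.
        return not liar()

    try:
        liar()
    except RecursionError:
        # Python signals a run-time failure: the expression is grammatical,
        # but semantically it denotes nothing.
        print("Well-formed, but evaluation produces no value.")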
Abstract:
Originally published as: Aaron Sloman, 'Ought and Better', Mind, vol LXXIX, No 315, July 1970, pp 385--394.
This is a sequel to the 1969 paper on "How to derive 'Better' from 'Is'", also online at this web site. It presupposes the analysis of 'better' in the earlier paper, and argues that statements using the word 'ought' say something about which of a collection of alternatives is better than the others, in contrast with statements using 'must' or referring to 'obligations', or what is 'obligatory'. The underlying commonality between superficially different statements like 'You should take an umbrella with you' and 'The sun should come out soon' is explained, along with some other philosophical puzzles, e.g. concerning why 'ought' does not imply 'can', contrary to what some philosophers have claimed.
Curiously, the 'Ought and Better' paper is mentioned at http://semantics-online.org/blog/2005/08/ in the section on David Lodge's novel "Thinks...", which refers to the paper 'What to Do If You Want to Go to Harlem: Anankastic Conditionals and Related Matters' by Kai von Fintel and Sabine Iatridou (MIT), which in turn discusses the paper on 'Ought' and 'Better'.
First published in Analysis, Vol 30, No 2, December 1969, pages 56-59.
Date Installed: 10 Jan 2007
Abstract: (extracts from paper)
This paper discusses varieties of negation and other logical operators when applied to speech acts, in response to an argument by John Searle.
In his book Speech Acts (Cambridge University Press, 1969), Searle discusses what he calls 'the speech act fallacy' (pp. 136 ff), namely the fallacy of inferring from the fact that
(1) in simple indicative sentences, the word W is used to perform some speech-act A (e.g. 'good' is used to commend, 'true' is used to endorse or concede, etc.)
the conclusion that
(2) a complete philosophical explication of the concept W is given when we say 'W is used to perform A'.
He argues that as far as the words 'good', 'true', 'know' and 'probably' are concerned, the conclusion is false because the speech-act analysis fails to explain how the words can occur with the same meaning in various grammatically different contexts, such as interrogatives ('Is it good?'), conditionals ('If it is good it will last long'), imperatives ('Make it good'), negations, disjunctions, etc.
The paper argues that even if conclusion (2) is false, Searle's argument against it is inadequate because he does not consider all the possible ways in which a speech-act analysis might account for non-indicative occurrences.
In particular, there are other things we can do with speech acts besides performing them and predicating their performance, e.g. besides promising and expressing the proposition that one is promising. E.g. you can indicate that you are considering performing act F but are not yet prepared to perform it, as in 'I don't promise to come'. So the analysis proposed can be summarised thus:
If F and G are speech acts, and p and q propositional contents or other suitable objects, then:
o Utterances of the structure 'If F(p) then G(q)' express provisional commitment to performing G on q, pending the performance of F on p.
o Utterances of the form 'F(p) or G(q)' would express a commitment to performing (eventually) one or other or both of the two acts, though neither is performed as yet.
o The question mark, in utterances of the form 'F(p)?', instead of expressing some new and completely unrelated kind of speech act, would merely express indecision concerning whether to perform F on p, together with an attempt to get advice or help in resolving the indecision.
o The imperative form 'Bring it about that . .' followed by a suitable grammatical transformation of F(p) would express the act of trying to get (not cause) the hearer to bring about that particular state of affairs in which the speaker would perform the act F on p (which is not the same as simply bringing it about that the speaker performs the act).
It is not claimed that 'not', 'if', etc., always are actually used in accordance with the above analyses, merely that this is a possible type of analysis which (a) allows a word which in simple indicative sentences expresses a speech act to contribute in a uniform way to the meanings of other types of sentences and (b) allows signs like 'not', 'if', the question construction, and the imperative construction, to have uniform effects on signs for speech acts. This type of analysis differs from the two considered and rejected by Searle. Further, if one puts either assertion or commendation or endorsement in place of the speech acts F and G in the above schemata, then the results seem to correspond moderately well with some (though not all) actual uses of the words and constructions in question. With other speech acts, the result does not seem to correspond to anything in ordinary usage: for instance, there is nothing in ordinary English which corresponds to applying the imperative construction to the speech act of questioning, or even commanding, even though if this were done in accordance with the above schematic rules the result would in theory be intelligible.
Title: How to derive "better" from "is",
Author: Aaron Sloman
Originally published as: A. Sloman, 'How to derive "better" from "is"', American Philosophical Quarterly, Vol 6, Number 1, Jan 1969, pp 43--52.
Date Installed here: 23 Oct 2002
Abstract:
One type of naturalistic analysis of words like "good," "ought," and "better" defines them in terms of criteria for applicability which vary from one context to another (as in "good men," "good typewriter," "good method of proof"), so that their meanings vary with context. Dissatisfaction with this "crude" naturalism leads some philosophers to suggest that the words have a context-independent non-descriptive meaning defined in terms of such things as expressing emotions, commanding, persuading, or guiding actions.
There are well-known objections to both approaches, and the aim of this paper is to suggest an alternative which has apparently never previously been considered, for the very good reason that at first sight it looks so unpromising, namely the alternative of defining the problematic words as logical constants.
This should not be confused with the programme of treating them as undefined symbols in a formal system, which is not new. In this essay an attempt will be made to define a logical constant "Better" which has surprisingly many of the features of the ordinary word "better" in a large number of contexts. It can then be shown that other important uses of "better" may be thought of as derived from this use of the word as a logical constant.
The new symbol is a logical constant in that its definition (i.e., the specification of formation rules and truth-conditions for statements using it) makes use only of such concepts as "entailment," "satisfying a condition," "relation," "set of properties," which would generally be regarded as purely logical concepts. In particular, the definition makes no reference to wants, desires, purposes, interests, prescriptions, choice, non-descriptive uses of language, and the other paraphernalia of non-naturalistic (and some naturalistic) analyses of evaluative words.
(However, some of those 'paraphernalia' can be included in arguments/subjects to which the complex relational predicate 'better' is applied.)
NOTE Added 7 Nov 2013
I was under the impression that no philosophers had ever paid any attention to this
paper. I've just discovered a counterexample:
Paul Bloomfield 'Prescriptions Are Assertions: An Essay On Moral Syntax'
American Philosophical Quarterly Vol 35, No 1, January 1998
In Proceedings of the Aristotelian Society, 1968/9, Volume 69, pp 33--50.
Note: some of the key ideas were in Aaron Sloman's Oxford DPhil Thesis (1962): Knowing and Understanding
Abstract: (From the introductory section)
Summary:
I: Some facts about logical necessity stated.
II: Not all necessity is logical.
III: The need for an explanation.
IV: Formalists attempt unsuccessfully to reduce logic to syntax.
V: The no-sense theory of Wittgenstein's Tractatus merely reformulates
the problem.
VI: Crude conventionalism is circular.
VII: Extreme conventionalism is more sophisticated.
VIII: It yields some important insights.
IX: But it ignores the variety of kinds of proof.
X: Proofs show why things must be so, but different proofs show different things. Hence there can be no general explanation of necessity.
An adequate theory of meaning and truth must account for the following facts, whose explanation is the topic, though not the aim, of the paper.
(i) Different signs (e.g., in different languages) may express the same proposition.
(ii) The syntactic and semantic rules in virtue of which sentences are able to express contingent propositions also permit the expression of necessary propositions and generate necessary relations between contingent propositions. E.g. although 'It snows in Sydney or it does not snow in Sydney' can be verified empirically (since showing one disjunct to be true would be an empirical verification, just as a proposition of the form 'p and not-p' can be falsified empirically), the empirical enquiry can be short-circuited by showing what the result must be, as the truth table below illustrates.
(iii) At least some such restrictions on truth-values, or combinations of truth-values (e.g., when two or more contingent propositions are logically equivalent, or inconsistent, or when one follows from others), result from purely formal, or logical, or topic-neutral features of the construction of the relevant propositions, features which have nothing to do with precisely which concepts occur, or which objects are referred to. Hence we call some propositions logically true, or logically false, and say some inferences are valid in virtue of their logical form, which prevents simultaneous truth of premisses and falsity of conclusion.
(iv) The truth-value-restricting logical forms are systematically inter-related so that the whole infinite class of such forms can be recursively generated from a relatively small subset, as illustrated in axiomatisations of logic.
Subsequent discussion will show these statements to be over-simple. Nevertheless, they will serve to draw attention to the range of facts whose need of explanation is the starting point of this paper. They have deliberately been formulated to allow that there may be cases of non-logical necessity.
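As a minimal illustration of point (ii) above (added to this index, not part of the original abstract), the truth table for \(p \lor \lnot p\), schematising 'It snows in Sydney or it does not snow in Sydney', shows why the empirical enquiry can be short-circuited: the disjunction comes out true whichever truth-value the contingent disjunct happens to have.

\[
\begin{array}{c|c|c}
p & \lnot p & p \lor \lnot p \\
\hline
T & F & T \\
F & T & T
\end{array}
\]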
Where published: Aristotelian Society Supplementary Volume, 41, pp. 77--94,
Wiley-Blackwell, 1967,
(Part of Symposium with R. McGowan.)
Abstract:
(Extract from first page of paper.)
Mr. McGowan's paper seems to have two main aims, first, to say what an inductive inference policy is and how it differs from alternative non-deductive policies, and secondly, to show that the inductive policy is better, or more rational, than the alternatives. I shall criticise his characterisation of induction, his arguments to show its superiority, and some of his undiscussed assumptions. Finally, I shall take the risk of discussing the nature of attempts to justify induction and suggesting some lines of further enquiry, based on an analysis of the logic of "better". I start with some comments on Mr. McGowan's preliminary discussion, before turning to his recursive characterisation of an inductive inference policy.
(Extract from final paragraph:)
I have criticised some of the details of his argument and put forward the counter-claim that policies based on his weaker principles of "desistence" ("all observed regularities come to an end, and sooner rather than later") are better at avoiding contradictions and conform to past experience more closely than policies based on his strong principle of persistence. Accordingly, some modifications of his predictive rules have been suggested. Perhaps most importantly of all, I have argued that the assertion that one policy or inference is better or more rational than another is an incomplete assertion until a basis of comparison has been specified, since different policies may be better or more rational in relation to different bases, and I have indicated some possible approaches for further investigation of this point. A final line of investigation which should be mentioned is the problem of deciding which of two bases of comparison is better relative to some higher-order basis of comparison, a problem which may turn out to be very important in connexion with justifications of predictive policies. It seems that I have asked more questions than I have answered. Perhaps formulating them will help someone more familiar with the field than I am to find interesting answers.
See 'How to derive "Better" from "is"', (1969)
Title: Functions and Rogators (1965)
Author: Aaron Sloman
Available in three formats:
Date Installed: 23 Dec 2007; Updated 5 Apr 2016
This paper was originally presented at a meeting of the Association for Symbolic Logic held in St. Anne's College, Oxford, England, from 15-19 July 1963, as a NATO Advanced Study Institute with a Symposium on Recursive Functions sponsored by the Division of Logic, Methodology and Philosophy of Science of the International Union of the History and Philosophy of Science.
The full paper was published in the conference proceedings:
Aaron Sloman, 'Functions and Rogators', in
Formal Systems and Recursive Functions:
Proceedings of the Eighth Logic Colloquium, Oxford, July 1963,
Eds J N Crossley and M A E Dummett,
North-Holland Publishing Co (1965), pp. 156--175.
A summary of the meeting by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley, with abstracts of papers presented, including this one, was published in The Journal of Symbolic Logic, Vol. 28, No. 3 (Sep. 1963), pp. 262-272, accessible online here.
Abstract:
This paper extends Frege's concept of a function to "rogators", which are like functions in that they take arguments and produce results, but are unlike functions in that their results can depend on the state of the world, in addition to which arguments they are applied to.
It was scanned in and digitised in December 2007. The html version was re-formatted on 5 Apr 2016 and a corresponding "lightweight" PDF version derived from it. The original 15MB scanned PDF file is now sloman-rogators-orig.pdf
The key ideas were originally presented in the author's Oxford DPhil Thesis (Aaron Sloman, 1962): Knowing and Understanding
(Now online.)
NOTE
This paper was described by David Wiggins as 'neglected but valuable' in his 'Sameness and Substance Renewed' (2001).
Frege, and others, have made extensive use of the notion of a function, for example in analysing the role of quantification, the notion of a function being defined, usually, in the manner familiar to mathematicians, and illustrated with mathematical examples. On this view functions satisfy extensional criteria for identity. It is not usually noticed that in non-mathematical contexts the things which are thought of as analogous to functions are, in certain respects, unlike the functions of mathematics. These differences provide a reason for saying that there are entities, analogous to functions, but which do not satisfy extensional criteria for identity. For example, if we take the supposed function 'x is red' and consider its value (truth or falsity) for some such argument as the lamp post nearest my front door, then we see that what the value is depends not only on which object is taken as argument, and the 'function', but also on contingent facts about the object, in particular, what colour it happens to have. Even if the lamp post is red (and the value is truth), the same lamp post might have been green, if it had been painted differently. So it looks as if we need something like a function, but not extensional, of which we can say that it might have had a value different from that which it does have. We cannot say this of a function considered simply as a set of ordered pairs, for if the same argument had had a different value it would not have been the same function. These non-extensional entities are described as 'rogators', and the paper is concerned to explain what the function-rogator distinction is, how it differs from certain other distinctions, and to illustrate its importance in logic, from the philosophical point of view.
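A rough computational analogy may help (a Python sketch added to this index, not from the paper): a function in the mathematical sense returns a value fixed by its argument alone, whereas a rogator-like procedure also consults the contingent state of the world, so the same argument can yield different values at different times.

    # Illustrative sketch (not from the paper): an extensional function versus
    # a rogator-like procedure whose value depends on the state of the world.
    world = {"lamp_post": "red"}        # contingent facts, subject to change

    def is_red_function(colour):
        # Mathematical function: the value is fixed by the argument alone.
        return colour == "red"

    def is_red_rogator(obj):
        # Rogator-like: the value depends on which object is taken as
        # argument AND on what colour the world currently assigns to it.
        return world[obj] == "red"

    print(is_red_rogator("lamp_post"))  # True: the lamp post happens to be red
    world["lamp_post"] = "green"        # the world changes (repainting)...
    print(is_red_rogator("lamp_post"))  # False: same argument, different value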
First published in Analysis, vol 26, No 1, pp 12-16, 1965.
Abstract (actually the opening paragraph of the paper):
It is frequently taken for granted, both by people discussing logical distinctions and by people using them, that the terms 'necessary', 'a priori', and 'analytic' are equivalent, that they mark not three distinctions, but one. Occasionally an attempt is made to establish that two or more of these terms are equivalent. However, it seems to me far from obvious that they are or can be shown to be equivalent, or that they cannot be given definitions which enable them to mark important and different distinctions. Whether these different distinctions happen to coincide or not is, as I shall show, a further question, requiring detailed investigation. In this paper, an attempt will be made to show in a brief and schematic way that there is an open problem here and that it is extremely misleading to talk as if there were only one distinction.
First published in Mind, Volume LXXIII, Number 289, pp. 84-96, 1964.
Abstract (actually the opening paragraph of the paper):
In ordinary discourse we often use or accept as valid, arguments of the form "P, so Q", or "P, therefore Q", or "Q, because P" where the validity of the inference from P to Q is not merely logical: the statement of the form "If P then Q" is not a logical truth, even if it is true. Inductive inferences and inferences made in the course of moral arguments provide illustrations of this. Philosophers, concerned about the justification for such reasoning, have recently debated whether the validity of these inferences depends on special rules of inference which are not merely logical rules, or on suppressed premisses which, when added to the explicit premisses, yield an argument in which the inference is logically, that is deductively, valid. In a contribution to MIND ("Rules of Inference in Moral Reasoning", July 1961), Nelson Pike describes such a debate concerning the nature of moral reasoning. Hare claims that certain moral arguments involve suppressed deductive premisses, whereas Toulmin analyses them in terms of special rules of inference, peculiar to the discourse of morality. Pike concludes that the main points so far made on either side of the dispute are "quite ineffective" (p. 391), and suggests that the problem itself is to blame, since the reasoning of the "ordinary moralist" is too rough and ready for fine logical distinctions to apply (pp. 398-399). In this paper an attempt will be made to take his discussion still further and explain in more detail why arguments in favour of either rules of inference or suppressed premisses must be ineffective. It appears that the root of the trouble has nothing to do with moral reasoning specifically, but arises out of a general temptation to apply to meaningful discourse a distinction which makes sense only in connection with purely formal calculi.
Date Installed: 6 Jan 2010; Published 1964
Where published:
Analysis, Vol. 24, Supplement 2. (Jan., 1964), pp. 104-119.
Abstract: (Opening paragraph)
The debate about the possibility of synthetic necessary truths is an old and familiar one. The question may be discussed either in a general way, or with reference to specific examples. This essay is concerned with the specific controversy concerning the incompatibility of colours, or colour concepts, or colour words. The essay is mainly negative: I shall neither assume, nor try to prove, that colours are incompatible, or that their incompatibility is either analytic or synthetic, but only that certain more or less familiar arguments intended to show that incompatibility relations between colours are analytic fail to do so. It will follow from this that attempts to generalise these arguments to show that no necessary truths can be synthetic will be unsuccessful, unless they bring in quite new sorts of considerations. The essay does, however, have a positive purpose, namely the partial clarification of some of the concepts employed by philosophers who discuss this sort of question, concepts such as 'analytic' and 'true in virtue of linguistic rules'. Such clarification is desirable since it is often not at all clear what such philosophers think that they have established; the usage of these terms by philosophers is often so loose and divergent that disagreements may be based on partial misunderstanding. The trouble has a three-fold source: the meaning of 'analytic' is unclear, the meaning of 'necessary' is unclear, and it is not always clear what these terms are supposed to be applied to. (E.g. are they sentences, statements, propositions, truths, knowledge, ways of knowing, or what?) Not all of these confusions can be eliminated here, but an attempt will be made to clear some of them away by giving a definition of 'analytic' which avoids some of the confused and confusing features of Kant's exposition without altering the spirit of his definition.
A summary of the 1963 Logic Colloquium by E. J. Lemmon, M. A. E. Dummett, and J. N. Crossley, with abstracts of papers presented, including my 'Functions and Rogators', was published in The Journal of Symbolic Logic, Vol. 28, No. 3 (Sep. 1963), pp. 262-272, accessible online here.
-- PDF version (transcribed, searchable version, 2.1MB)
(Added 2016, then revised several times, fixing multiple transcription errors.)
http://www.cs.bham.ac.uk/research/projects/cogaff/aaron-sloman-oxford-dphil.pdf
-- HTML version (transcribed, searchable version, 669KB)
(Added 6 Jan 2018 and later revised)
http://www.cs.bham.ac.uk/research/projects/cogaff/aaron-sloman-oxford-dphil.html
(Plain text, i.e. no italics/underlining, but
with figures added, on pages 287, 288, 307)
Since late 2018, the transcribed, searchable PDF version is also available at the Oxford site, along with the original, 74.1MB, scanned version, linked below.
The aim of the thesis is to show that there are some synthetic necessary truths, or that synthetic apriori knowledge is possible. This is really a pretext for an investigation into the general connection between meaning and truth, or between understanding and knowing, which, as pointed out in the preface, is really the first stage in a more general enquiry concerning meaning. (Not all kinds of meaning are concerned with truth.) After the preliminaries (chapter one), in which the problem is stated and some methodological remarks made, the investigation proceeds in two stages. First there is a detailed inquiry into the manner in which the meanings or functions of words occurring in a statement help to determine the conditions in which that statement would be true (or false). This prepares the way for the second stage, which is an inquiry concerning the connection between meaning and necessary truth (between understanding and knowing apriori). The first stage occupies Part Two of the thesis, the second stage Part Three. In all this, only a restricted class of statements is discussed, namely those which contain nothing but logical words and descriptive words, such as "Not all round tables are scarlet" and "Every three-sided figure is three-angled". (The reasons for not discussing proper names and other singular definite referring expressions are given in Appendix I.)
The scanned document, based on the carbon copy of the typed thesis has slightly fuzzy text, which is easy for humans to read, but seems to defeat OCR technology.
So, in 2016, at the instigation of my former student, Luc Beaudoin (https://cogzest.com/about/founder/), an Indian company (Hitech) was engaged to retype the remaining chapters. After much tedious checking and editing to correct transcription errors and omissions (including much help from Luc and his partner), all the chapters are now (December 2018) available as a free online book, in searchable PDF and HTML formats.
The original PDF files scanned without OCR totalled about 74Mbytes and were not searchable by computer. The transcribed PDF version installed in 2018, linked below, is searchable and much smaller, about 2.1Mbytes. There is also a new HTML version, derived from the transcribed chapters, also linked below.
Oxford ORA versions (since December 2018)
https://ora.ox.ac.uk/objects/uuid:cda7c325-e49f-485a-aa1d-7ea8ae692877
In December 2018 the scanned chapters that had originally been made available as
separate pdf files in Oxford, were concatenated into a single (non-searchable)
PDF file
(74.1MB), now available both here
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-1962/scanned_aaron-sloman-oxford-dphil.pdf
and in Oxford.
The Oxford site now also provides the much smaller transcribed PDF version of the searchable thesis linked above, as well as the scanned 74.1MByte non-searchable version.
More about the thesis
More detailed information about the thesis is also available
here, including information about the contents, the
background to the thesis, and some references to later developments and
publications.
Later work
Some of the ideas developed here were later presented, and in some cases
expanded, in the following publications.
See also the School of Computer Science Web page.
This file is maintained by Aaron Sloman:
http://www.cs.bham.ac.uk/~axs
Email a.sloman@cs.bham.ac.uk