PAPERS ADDED IN THE YEAR 2009 (APPROXIMATELY)
This file is
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html
Maintained by Aaron Sloman.
It contains an index to files in the Cognition and Affect Project's
FTP/Web directory produced or published in the year 2009. Some of the
papers published in this period were produced earlier and are included
in one of the lists for an earlier period. Some older papers recently
digitised may also be included.
The contents list of the main CogAff index is here:
http://www.cs.bham.ac.uk/research/cogaff/0-INDEX.html#contents
A list of PhD and MPhil theses was added in June 2003.
This file last updated: 11 Dec 2009; 18 Jan 2010; 25 Apr 2010; 21 May 2010;
13 Nov 2010; 7 Jul 2012; 10 Jul 2017; ... ; 26 Jun 2019
See Entries in the CoSy project
Filename: sloman-honda.pdf (PDF)
Title:
Some Requirements for Human-like Robots: Why the recent over-emphasis on
embodiment has held up progress
Author: Aaron Sloman
DATE INSTALLED:
21 May 2008 (Updated: 31 Jun 2008, 9 Sep 2008; moved here: 16 Feb 2016)
Where published:
Invited paper for inclusion in the proceedings of the symposium on "Creating
Brain-like Intelligence" at Honda Research, Frankfurt, Germany, February 2007.
Published in Creating Brain-like Intelligence,
Eds. B. Sendhoff and E. Koerner and O. Sporns and H. Ritter and K. Doya,
Springer-Verlag, Berlin, 2009.
Available online here.
Abstract:
This paper uses the well-known paper by Rodney Brooks, "Elephants don't play chess", as the basis for a critique of some recent developments (sometimes labelled "Nouvelle AI") that were inspired by that and similar papers. I argue that the good points of nouvelle AI need to be combined with the good points of symbolic AI, in contrast with those who regard them as incompatible.
Some issues concerning requirements for architectures, mechanisms, ontologies and forms of representation in intelligent human-like or animal-like robots are discussed. The tautology that a robot that acts and perceives in the world must be embodied is often combined with false premises, such as the premiss that a particular type of body is a requirement for intelligence, or for human intelligence, or the premiss that all cognition is concerned with sensorimotor interactions, or the premiss that all cognition is implemented in dynamical systems closely coupled with sensors and effectors. It is time to step back and ask what robotic research in the past decade has been ignoring. I shall try to identify some major research gaps by a combination of assembling requirements that have been largely ignored and design ideas that have not been investigated -- partly because at present it is too difficult to make significant progress on those problems with physical robots, as too many different problems need to be solved simultaneously. In particular, the importance of studying the environment about which the animal or robot has to learn, extending ideas of J.J.Gibson in (Gibson 1979), has not been widely appreciated.
Title: Colour Incompatibilities and Analyticity
Author: Aaron Sloman
Now moved to
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1964-02
Filename: sloman-ijmc.pdf
Title: An Alternative to Working on Machine Consciousness
Authors: Aaron Sloman
Date Installed: 25 Nov 2009
Where published:
In International Journal of Machine Consciousness (2010),
with commentaries and reply. Table of contents and free background paper
available here: http://www.worldscinet.com/ijmc/02/0201/S17938430100201.html
NOTE: After this was written and I had seen the commentaries, it became clear to me that this paper was making many assumptions that I had not made explicit or explained clearly. As a result I wrote a long "background" paper, which is partly a tutorial paper, available below as:
Phenomenal and Access Consciousness and the "Hard" Problem: A View from the Designer Stance
There is also an accompanying tutorial presentation:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#cons09
Why the "hard" problem of consciousness is easy and the "easy" problem hard. (And how to make progress)
(PDF Presentation, also on slideshare.net)
Abstract:
This paper extends three decades of work (by the author) arguing that researchers who discuss consciousness should not restrict themselves only to (adult) human minds, but should study (and attempt to explain and model) many kinds of minds, natural and artificial, thereby contributing to our understanding of the space containing all of them. We need to study what they do or can do, how they can do it, and how the natural ones can be emulated in synthetic minds. That requires: (a) understanding sets of requirements that are met by different sorts of minds, i.e. the niches that they occupy, (b) understanding the space of possible designs, and (c) understanding the complex and varied relationships between requirements and designs. Attempts to model or explain any particular phenomenon, such as vision, emotion, learning, language use, or consciousness lead to muddle and confusion unless they are placed in that broader context, in part because current ontologies for specifying and comparing designs are inconsistent and inadequate. A methodology for making progress is summarised and a novel requirement proposed for a theory of how human minds work, namely that the theory should support a single generic design for a learning, developing system that, in addition to meeting many other more familiar requirements, should be capable of developing different and opposed viewpoints regarding philosophical questions about consciousness, and the so-called hard problem. In other words, we need a common explanation for the mental machinations of mysterians, materialists, functionalists, identity theorists, and those who regard all such theories as attempting to answer incoherent questions. No designs proposed so far come close.
NOTE:
Several commentaries were published in the journal, but they are not freely available there. One commentary that I found particularly helpful was by Cathy Legg, available here:
http://researchcommons.waikato.ac.nz/handle/10289/3299
Commentary on "An alternative to working on machine consciousness", by Aaron Sloman
Cathy Legg's Abstract: A commentary on a current paper by Aaron Sloman where he argues that in order to make progress in AI, consciousness (and other such unclear concepts of common-sense regarding the mind), "should be replaced by more precise and varied architecture-based concepts better suited to specify what needs to be explained by scientific theories". This original vision of philosophical inquiry as the mapping out of 'design-spaces' for a contested concept seeks to achieve a holistic, synthetic understanding of what possibilities such spaces embody and how different parameters might structure them in nomic and highly inter-connected ways. It therefore does not reduce to either "relations of ideas" or "matters of fact" in Hume's famous dichotomy. It is also shown to be in interesting ways the exact opposite of the current vogue for 'experimental philosophy'.
Filename: sloman-mm09.pdf
Title: From "Baby Stuff" to the World of Adult Science:
Developmental AI from a Kantian viewpoint.
Author: Aaron Sloman
Date Installed: 23 Nov 2009
Where published:
In Proceedings of the Workshop on Matching and Meaning, AISB 2009 Convention, pp. 10--16,
Ed. Fiona McNeill,
http://www.aisb.org.uk/convention/aisb09/Proceedings/MATCHING/FILES/Proceedings.pdf
Abstract:
In contrast with ontology developers concerned with a symbolic or digital environment (e.g. the internet), I draw attention to some features of our 3-D spatio-temporal environment that challenge young humans and other intelligent animals and will also challenge future robots. Evolution provides most animals with an ontology that suffices for life, whereas some animals, including humans, also have mechanisms for substantive ontology extension based on results of interacting with the environment. Future human-like robots will also need this. Since pre-verbal human children and many intelligent non-human animals, including hunting mammals, nest-building birds and primates, can interact, often creatively, with complex structures and processes in a 3-D environment, that suggests (a) that they use ontologies that include kinds of material (stuff), kinds of structure, kinds of relationship, kinds of process and kinds of causal interaction, and (b) that, since they don't use a human communicative language, they must use information encoded in some form that existed prior to human communicative languages, both in our evolutionary history and in individual development. Since evolution could not have anticipated the ontologies required for all human cultures, including advanced scientific cultures, individuals must have ways of achieving substantive ontology extension. The research reported here aims mainly to develop requirements for explanatory designs. Developing forms of representation, mechanisms and architectures that meet those requirements will have to come later.
For a closely related slide presentation see http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#brown
Filename: ai-syllabus-apa.pdf (PDF)
Title: Teaching AI and Philosophy at School?
Authors: Aaron Sloman
Date Installed: 10 Nov 2009
Where published:
Newsletter on Philosophy and Computers, American Philosophical Association, 09, 1, pp. 42--48,
http://www.apaonline.org/publications/newsletters/v09n1_Computers_index.aspx (HTML)
http://www.apaonline.org/documents/publications/v09n1_Computers.pdf (PDF)
http://www.apaonline.org/publications/newsletters/v09n1_index.aspx (List of newsletters).
This paper is based on some of the ideas in an earlier discussion paper:
http://www.cs.bham.ac.uk/~axs/courses/alevel-ai.html
NOTES ON A POSSIBLE ARTIFICIAL INTELLIGENCE GCE/A-LEVEL SYLLABUS (March 2007)
Abstract:
This paper proposes a way of teaching computing, not as a branch of engineering, but as a way of learning to do philosophy, cognitive science, psychology, linguistics, and biology, among other things. It could be the core of a new kind of liberal education. But what I am proposing is not new and untried--what is proposed is close to the spirit and philosophy of teaching programming and AI to complete beginners, which some of us developed at Sussex University from the mid 1970s onwards. A revival of that approach might address a serious current malaise. The vision presented here overlaps with that in Jeannette Wing's (2006), but has a different emphasis.
Local (updated, expanded) version:
Filename: architecture-based-motivation.html
(local HTML -- latest version)
Filename: architecture-based-motivation.pdf
(local PDF -- based on HTML version)
Original published version (2009):
PDF version of newsletter on APA website
Title: Architecture-Based Motivation vs Reward-Based Motivation
Authors: Aaron Sloman
Date Installed: 10 Nov 2009 (Modified: 24 Jan 2014; 14 Jun 2015)
Where published:
Newsletter on Philosophy and Computers, American Philosophical Association
(including the newsletter index).
This paper was published in issue 09, 1, pp. 10--13:
(PDF) version of whole newsletter
www.apaonline.org/resource/collection/EADE8D52-8D02-4136-9A2A-729368501E43/v09n1Computers.pdf
(Now partly out of date: see local version, above.)
Abstract:
"Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them." David Hume, A Treatise of Human Nature (2.3.3.4), 1739-1740 (http://www.class.uidaho.edu/mickelsen/ToC/hume%20treatise%20ToC.htm)Whatever Hume may have meant by this, and whatever various commentators may have taken him to mean, I claim that there is at least one interpretation in which this statement is obviously true, namely: no matter what factual information an animal or machine A contains, and no matter what competences A has regarding abilities to reason, to plan, to predict, or to explain, A will not actually do anything unless it has, in addition, some sort of control mechanism that selects among the many alternative processes that A's information and competences can support.
In short: control mechanisms are required in addition to factual information and reasoning mechanisms if A is to do anything. This paper is about what forms of control are required. I assume that in at least some cases there are motives, and the control arises out of selection of a motive for action. That raises the question where motives come from. My answer is that they can be generated and selected in different ways, but one way is not itself motivated: it merely involves the operation of mechanisms in the architecture of A that generate motives and select some of them for action. The view I wish to oppose is that all motives must somehow serve the interests of A, or be rewarding for A. This view is widely held and is based on a lack of imagination about possible designs for working systems. I summarize it as the assumption that all motivation must be reward-based. In contrast, I claim that at least some motivation may be architecture-based, in the sense explained below.
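To make the contrast concrete, here is a minimal, purely illustrative Python sketch (not from the paper; all names, numbers and triggering conditions are hypothetical assumptions) of architecture-based motive generation: reactive "motive generator" mechanisms in the architecture create motives directly from percepts, and a selector picks one by insistence, with no reward signal or expected benefit to the agent computed anywhere.

    # Illustrative sketch only (hypothetical names, not from the paper):
    # motives are produced by reactive "motive generator" mechanisms built
    # into the architecture and selected by insistence -- no reward signal
    # or expected benefit to the agent appears anywhere.
    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Motive:
        description: str
        insistence: float  # how strongly the motive competes for attention

    MotiveGenerator = Callable[[str], Optional[Motive]]

    def curiosity_generator(percept: str) -> Optional[Motive]:
        # Triggered by novelty, not by any anticipated reward.
        if "unfamiliar" in percept:
            return Motive(f"examine: {percept}", insistence=0.6)
        return None

    def help_generator(percept: str) -> Optional[Motive]:
        # Triggered by perceiving someone else's need; serves no interest of the agent.
        if "struggling" in percept:
            return Motive("offer help", insistence=0.8)
        return None

    def generate_and_select(percepts: List[str],
                            generators: List[MotiveGenerator]) -> Optional[Motive]:
        """Run all generators on the current percepts, then select one motive
        purely by insistence; nothing resembling a reward is computed."""
        motives = [m for p in percepts for g in generators
                   if (m := g(p)) is not None]
        return max(motives, key=lambda m: m.insistence, default=None)

    if __name__ == "__main__":
        percepts = ["unfamiliar object on the table",
                    "someone struggling with a heavy door"]
        print(generate_and_select(percepts, [curiosity_generator, help_generator]))
        # -> Motive(description='offer help', insistence=0.8)

The only point of the sketch is that the selection machinery need not compute any benefit or reward for A: the generators are simply mechanisms in A's architecture.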
Filename: phenomenal-access-consciousness.pdf
Filename: phenomenal-access-consciousness.html
Published version freely available on journal web site
here.
The HTML version was generated automatically from the PDF version.
The HTML has not been fully checked and may contain errors,
especially in the bibliography.
Title: Phenomenal and Access Consciousness and the
"Hard" Problem: A View from the Designer Stance
Authors: Aaron Sloman
Date Installed: 25 Oct 2009; Updated 25 Nov 2009; 11 Dec 2009;
18 Jan 2010
Where published:
In International Journal of Machine Consciousness.
Table of contents and free background paper available here:
http://www.worldscinet.com/ijmc/02/0201/S17938430100201.html
Originally written to provide background to my response to commentators on
An Alternative to Working on Machine Consciousness
Available above.
Abstract:
(DRAFT)
This paper is an attempt to summarise and justify critical comments I have been making over several decades (e.g. in Ch. 10 of CRP 1978) about research on consciousness by philosophers, scientists and others. This includes (a) explaining why the concept of "phenomenal consciousness" (P-C) is semantically flawed and unsuitable as a target for scientific research or machine modelling, whereas something like the concept of "access consciousness" (A-C), with which it is contrasted, refers to phenomena that can be described and explained within a future scientific theory, and (b) explaining why the "hard problem" is a bogus problem, because of its dependence on the P-C concept. It is compared with another bogus problem, "the 'hard' problem of spatial identity", introduced as part of a tutorial on semantically flawed concepts. Different types of semantic flaw and conceptual confusion not normally studied outside analytical philosophy are distinguished. The semantic flaws of the "zombie" argument, closely allied with the P-C concept, are also explained. These topics are related both to the evolution of human and animal minds and brains and to requirements for human-like robots. The diversity of the phenomena related to the concept "consciousness" makes it a polymorphic concept, partly analogous to concepts like "efficient", "sensitive", and "impediment", all of which need extra information to be provided before they can be applied to anything, and then the criteria of applicability differ. As a result, there cannot be one explanation of consciousness, one set of neural associates of consciousness, one explanation for the evolution of consciousness, nor one machine model of consciousness. I present a way of making progress based on the designer stance, using facts about running virtual machines, without which current computers obviously could not work. I suggest the same is true of biological minds.
NOTE:
I have been informed that my presentation of Ned Block's distinction between "access consciousness" and "phenomenal consciousness" does not do justice to Block's intentions, although I am sure there are philosophers who have taken the position I attribute to him. I would be interested to hear from anyone who can pinpoint what I have misunderstood (including Ned Block, if he reads this).
See also http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#cons09
Why the "hard" problem of consciousness is easy and the "easy" problem hard. (And how to make progress)
(PDF Presentation, also on slideshare.net)
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#mos09
Virtual Machines and the Metaphysics of Science (Expanded version of presentation at: Metaphysics of Science'09)
(PDF Presentation, also on slideshare.net)
Filename: sloman-inf-chap.pdf (PDF)
Filename: sloman-inf-chap.html (HTML -- with incomplete bibliography)
(The HTML was produced from the LaTeX source by 'tth', which does not cope
well with 'apacite' references.)
Title: What's information, for an organism or intelligent machine?
How can a machine or organism mean?
Author: Aaron Sloman
Date Installed: 23 Sep 2009. Updated 6 Dec 2009. (This entry updated 26 Jun 2019)
Where published:
Preprint of a chapter (pages 393-438) in a book on Information and Computation,
published by World Scientific Publishing Co., edited by
Dr. Gordana Dodig-Crnkovic (Malardalen University, Sweden) and
Dr. Mark Burgin (UCLA, USA), 2011.
This includes a critique of most interpretations of Bateson's phrase "a difference that makes a difference", expanded later in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/information-difference.html
Abstract:
Words and phrases referring to information are now used in many scientific and non-scientific academic disciplines and in many forms of engineering. This chapter suggests that this is a result of increasingly wide-spread, though often implicit, acknowledgment that besides matter and energy the universe contains information (including information about matter, energy and information), and that many of the things that happen, including especially happenings produced by living organisms, and more recently processes in computers, involve information-processing. It is argued that the concept "information" can no more be defined explicitly in terms of simpler concepts than any of the other deep theoretical concepts of science can, including "matter" and "energy". Instead the meanings of the words and phrases referring to such things are defined implicitly, in part by the structure of the theories in which they occur, and in part by the way those theories are tested and used in practical applications. This is true of all deep theoretical concepts of science. It can also be argued that many of the pre-scientific concepts developed by humans (including very young humans) in the process of coming to understand their environment are also implicitly defined by their role in the theories being developed. A similar claim can be made about other intelligent animals, and future robots. An outline of a theory about the processes and mechanisms that various kinds of information can be involved in is presented as a partial implicit definition of "information". However, there is still much work to be done, including investigation of the varieties of information processing in organisms.
Relevance:
The ideas in this paper are central to the Turing-inspired Meta-Morphogenesis project first proposed in this book chapter (published 2013):
http://www.cs.bham.ac.uk/research/projects/cogaff/12.html#1203
And explained more fully in this web page and linked pages:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
NOTES:
After the paper was written, my ideas about information continued to develop,
and this led, among other things, to:
Title: WHY PHILOSOPHERS SHOULD BE DESIGNERS
(BBS Commentary on Dennett's Intentional Stance)
7 Oct 2018: MOVED TO
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#6a
Title: Epistemology and Artificial Intelligence
Moved to
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1972-02
17 Apr 2019
Filename: sloman-oii-2009.pdf
Title: Requirements for Digital Companions: It's harder than you think
(Final version)
Author: Aaron Sloman
Date Installed: 4 May 2009; Revised 25 Apr 2010
Where published:
Aaron Sloman, Requirements for Artificial Companions: It's harder than you think,
in Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues,
Ed. Yorick Wilks, John Benjamins, Amsterdam, pp. 179--200, 2010.
Proceedings of Workshop on Artificial Companions in Society:
Perspectives on the Present and Future
Organised by the Companions project.
Oxford Internet Institute (25th--26th October, 2007)
See the 2007 version.
A PDF presentation on this is available here.
NOTE:
Some of the text was changed for the version in the book, e.g. 'Digital Companion' and 'DC' were replaced by 'Artificial Companion' and 'AC'.
I did not have time to check all the changes before final versions were required. I object strongly to the removal of section numbers, which facilitate cross-references in a scientific volume; they have been retained in this version. So this version, not the version published in the book, should be regarded as definitive.
Abstract
(Extracts from opening sections):
Producing a system that meets the stated requirements, without arbitrary restrictions, will involve solving a great many problems that are currently beyond the state of the art in AI, including problems that would arise in the design of robotic companions helping the owner by performing practical tasks in the physical environment. In other words, even if the DC is not itself a robot and interacts with the user only via input devices such as camera, microphone, keyboard, mouse, touch-pad, and touch-screen, and output devices such as screen and audio output devices, nevertheless it will, in some circumstances, need the visual competences, the ontology, the representational resources, the reasoning competences, the planning competences, and the problem-solving competences that a helpful domestic robot would need. This is because some of the intended beneficiaries of DCs will need to be given advice about what physical actions to perform, what physical devices to acquire, and how to use such devices. I shall give examples illustrating the need for such competences.
One of the problems in producing robots with competences of type (b) is that the requirements for such systems are extremely complex and subtle, and far from obvious, whereas many researchers think that the requirements are obvious and well understood, so that the only task is to work out how to produce systems that meet the requirements. Similar problems will arise for work on DCs.
Section 2 offers a first draft high level taxonomy of types of function that might be desired for DCs, so that we can distinguish functions that might be provided on different time-scales and understand which expectations are likely to remain unfulfilled in the next decade or longer.
The following section, 3, offers a shallow taxonomy of types of motive that may drive researchers, funding organisations, carers and users involved in funding, developing, purchasing and using DCs. For ethical reasons we need to distinguish clearly (a) the motives, interests, and needs of the end-users (the people who are to be helped, advised, comforted, entertained, or whatever) and (b) the motives, interests and needs of others involved, e.g. carers, relatives, and the companies or organisations responsible for providing care, and also the scientists and engineers involved in developing DCs.
The remaining sections expand on the difficulties in achieving the more ambitious functions. There are ethical issues related to production and use of DCs, but that is not the main topic of this paper, which is more concerned with technical requirements, scientific problems and near-term feasibility. Ethical issues will, however, arise in connection with some of the more sophisticated enabling functions of DCs. The paper ends with a summary and some warnings.
CONTENTS
1 Introduction
1.1 Terminology: different human roles
2 Types of function for digital companions
2.1 Engaging functions
2.2 Enabling functions
3 Motives for developing, funding, buying or using DCs
4 Problems of achieving the enabling functions
4.1 Kitchen mishaps
4.2 Identifying affordances and searching for things that provide them
4.3 Remembering particularities: episodic memory
4.4 More abstract problems
4.5 Is training the solution?
4.6 Beyond behavioural dispositions
5 Is the solution statistical?
5.1 Why do statistics-based approaches work at all?
6 Can it be done?
6.1 What's needed
6.2 Alternatives to canned responses
7 Conclusion
7.1 Ethical issues
References
Filename: sloman_vm_cogsci.pdf
Title: What Cognitive Scientists Need to Know about Virtual Machines
Authors: Aaron Sloman
Date Installed: 1 May 2009 (Revised 8th May 2009)
Where published:
Proceedings of the 31st Annual Conference of the Cognitive Science Society,
Eds. N. A. Taatgen and H. van Rijn,
Cognitive Science Society, Austin, TX, pp. 1210--1215, 2009.
NB: This is a six-page paper -- too short to do justice to the topic!
Presented at the Cognitive Science Conference 2009
PDF Slide Presentation.
An expanded version, entitled "Virtual Machines and the Metaphysics of Science", was presented at the conference on 'Metaphysics of Science' in Nottingham, 12-14 Sept 2009.
Extended abstract here.
Slides for the presentation (expanded after the conference) give more detail here (FTP)
A more recent online introduction to these ideas is here
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
Virtual Machine Functionalism (VMF)
(The only form of functionalism worth taking seriously in Philosophy of Mind and theories of Consciousness)
Earlier documents on this topic:
Abstract:
Many psychologists, philosophers, neuroscientists and others interact with a variety of man-made virtual machines (VMs) every day without reflecting on what that implies about options open to biological evolution, and the implications for relations between mind and body. This tutorial position paper introduces some of the roles of different sorts of VMs, contrasting Abstract VMs (AVMs), which are merely mathematical objects that do nothing, and running instances (RVMs), which interact with other things and have parts that interact causally. We can also distinguish single-function, specialised VMs (SVMs), e.g. a running chess game or word processor, from "platform" VMs (PVMs), e.g. operating systems which provide support for changing collections of RVMs. (There was no space in the paper to distinguish two sorts of platform VMs, namely operating systems that can support actual concurrent interacting processes, and language run-time VMs which can support different sorts of functionality, though each instance of the language run-time VM (e.g. a Lisp VM, a Prolog VM) may not support multiple processes.) The different sorts of RVMs play important but different roles in engineering designs, including "vertical separation of concerns". The paper suggests that biological evolution "discovered" problems that require VMs for their solution long before we did. Some of the resulting biological VMs have generated philosophical puzzles relating to consciousness, mind-body relations, and causation. Some new ways of thinking about these are outlined, based on attending to some of the unnoticed complexity involved in making artificial VMs possible.
The paper also discusses some of the implications for philosophical and cognitive theories about mind-brain supervenience and some options for design of cognitive architectures with self-monitoring and self-control, along with warnings about a kind of self-deception arising out of use of RVMs.
Keywords:
virtual machine; causation; counterfactuals; evolution; self-monitoring; self-control; epigenesis; nature-nurture; mind-body;
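As a purely illustrative gloss on the AVM/RVM and SVM/PVM distinctions in the abstract above, the following toy Python sketch (class names and behaviour are hypothetical assumptions, not taken from the paper) treats an abstract VM as an inert specification, a running VM as a thread that actually does things, and a platform VM as something that hosts a changing collection of concurrently running specialised VMs.

    # Toy sketch, for illustration only: an abstract VM does nothing until
    # instantiated; running instances interact causally; a "platform" VM
    # supports a changing collection of concurrently running specialised VMs.
    import threading
    import time
    from typing import Callable, Dict

    class AbstractVM:
        """A specification (like a mathematical object): it does nothing by itself."""
        def __init__(self, name: str, step: Callable[[int], str]):
            self.name = name
            self.step = step

    class RunningVM(threading.Thread):
        """A running instance of an AbstractVM: it actually produces behaviour."""
        def __init__(self, spec: AbstractVM, ticks: int):
            super().__init__(daemon=True)
            self.spec, self.ticks = spec, ticks
        def run(self) -> None:
            for t in range(self.ticks):
                print(f"[{self.spec.name}] {self.spec.step(t)}")
                time.sleep(0.01)

    class PlatformVM:
        """Hosts a changing collection of running VMs (cf. an operating system)."""
        def __init__(self) -> None:
            self.hosted: Dict[str, RunningVM] = {}
        def launch(self, spec: AbstractVM, ticks: int = 3) -> None:
            rvm = RunningVM(spec, ticks)
            self.hosted[spec.name] = rvm
            rvm.start()
        def wait(self) -> None:
            for rvm in self.hosted.values():
                rvm.join()

    if __name__ == "__main__":
        platform = PlatformVM()
        # Two single-function, specialised VMs hosted by one platform VM.
        platform.launch(AbstractVM("chess", lambda t: f"considering move {t}"))
        platform.launch(AbstractVM("editor", lambda t: f"handling keystroke {t}"))
        platform.wait()

Nothing in this sketch captures the paper's points about causation or supervenience; it merely pins down the terminology used in the abstract.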
See also the School of Computer Science Web page.
Created: 26 Apr 2009
This file is maintained by Aaron Sloman, and designed to be
lynx-friendly and viewable with any browser.
Email A.Sloman@cs.bham.ac.uk