PAST, RECENT AND PENDING PRESENTATIONS
By
Aaron Sloman
School of Computer Science
The University of Birmingham, UK.
This is
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
Also accessible as:
goo.gl/piY2Lv
These are presentations on topics in philosophy of mind, philosophy of mathematics, philosophy of computation, various aspects of AI, cognitive science, and education, including work on the Birmingham Cognition and Affect Project (1991--, begun previously at Sussex University), work done in the CoSy Project (2004-8) and its successor the CogX Project (2008-12), covering: consciousness, emotions and other affective states and processes, reasoning, evolution (trajectories in design space and niche space), information-processing, artificial intelligence, cognitive science, biology, physics, philosophy of mind, supervenience, philosophy of mathematics, epistemology, virtual machines, implementation, vision and other forms of perception -- especially visual perception of affordances -- architectures for intelligent systems, forms of representation, software tools for exploring architectures and designing intelligent agents, and to some extent also neuroscience and psychology.
Note added 25 Sep 2010: The main list is in roughly reverse chronology, but I have started to build a list of pointers to talks on particular topics. This will take some time, so some of the pointers are just stubs, for now.
- CONTENTS: ROUGHLY REVERSE CHRONOLOGY
- CONTENTS: MAJOR TOPICS (a sort of index, to be extended).
Note: Some of these presentations are also on 'slideshare.net'. Unfortunately Slideshare no longer allows uploads to be updated, so many of my presentations there are out of date. Look for newer versions here.
WARNING:
Any of my pdf slides found at any other location are likely to be out of date. There is more information, organised by topic, in my "DOINGS" list, but it has not been updated for some time.
Below is a summary list of presentations in (roughly) reverse chronological order, followed by more details on each presentation, in (roughly) chronological order. The summary has links to the details.
The order is only "roughly" chronological since many of the older talks have been revised recently, and some have also been presented recently.
Google Scholar publications list
(N.B. DO NOT BELIEVE CITATION COUNTS. They can be inflated or incomplete.)
A revised, extended version of parts of previous presentations on virtual machines, information, and architectures.
Dagstuhl Seminar No. 08091, 24-29 Feb 2008:
Logic and Probability for Scene Interpretation. Schloss Dagstuhl, 25 Feb 2008
My slides are mostly composed in LaTeX, using home-grown macros, importing eps or jpg files produced by tgif (http://bourbon.cs.umd.edu:8001/tgif/). More recent versions were created directly with pdflatex.
From about talk 5 (May 2001) I started preparing the slides in a format more suited to fill a typical computer screen which is wider than it is tall. These need to be viewed in "Landscape" or "Seascape" mode (rotated 90 degrees to the left). Your pdf/postscript viewer should provide such an option, if the wide display format is not used automatically. Paper size is set to A4, which may cause problems printing some of the slides on US letter paper.
Some documents (including documents in the 'Misc' directory, http://www.cs.bham.ac.uk/research/projects/cogaff/misc/) are produced using HTML, for online viewing, with PDF produced using a combination of html2ps and ps2pdf.
International Joint Conference on AI 2016, 10th July 2016
Tutorial T24: If Turing had lived longer, how might he have investigated what AI and Philosophy can learn from evolved information processing systems?
Including homage to John McCarthy and Marvin Minsky, two of the founders of AI, both recently deceased and both interested in connections between AI and philosophy.
Talk Contents List
Installed: 28 Mar 2015
Updated: 15 Apr 2015; 11 Nov 2015; 7 Feb 2017
Abstract
Talk 93: What's vision for, and how does it work?
From Marr (and earlier) to Gibson and Beyond
Most people think language is essentially concerned with communication between
individuals. So they ask the wrong questions about evolution of language, and
give limited answers - concerned only with forms of communication.
A different view of language opens up more questions, requiring more
complex and varied answers:
A language is primarily a means by which information can be
represented, for any purpose, including internal purposes such as
learning, reasoning, formation of intentions and control of actions.
That includes perceptual information, e.g. visual information.
Instead of asking: how did communication using language evolve?
We can ask:
- For what purposes do organisms use information?
Learning about the environment (e.g. through visual perception), control of actions, selection of goals, formation of plans, execution of plans, making predictions, asking questions, finding answers, communication with other individuals, social teaching and learning... (add your own ideas).
There are also connections with the examples of "toddler theorems"
in the presentation on evolution and development of mathematical capabilities
below, and with ideas on learning and development in the
work of Piaget and Karmiloff-Smith included in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Vision, Action and Mathematics: From Affordances to Euclid.
Installed here: 25 Sep 2014
Part of the Turing-inspired Meta-Morphogenesis project.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
(What I suspect Alan Turing might have done if he had lived longer.)
Based on my 4th contribution to this book (pp 849-857):
http://www.cs.bham.ac.uk/~axs/amtbook
Compare Talk 93: What's vision for, and how does it work? From Marr (and earlier) to Gibson and Beyond
Lecture 1 for ICY and conversion MSc students, 5 Mar 2015
Installed here: 25 Nov 2013; Updated 5 Mar 2015
Video recording of presentation here
Lecture 2 (17 March 2015) is here.
CNCR Journal Club Meeting on Monday 7th October 2013
Partial Abstract
Presents some of the differences and relationships between philosophy, science, and engineering, illustrated in particular by the use of AI in enriching and testing philosophical concepts and theories.
http://www.pt-ai.org/2013
Installed: DRAFT PDF will be installed 21 or 22 Sep 2013
Philosophy and Theory of Artificial Intelligence
21 Sep 2013
Abstract (As originally submitted).
I originally got involved in AI many years ago, not to build new useful machines, nor to build working models to test theories in psychology or neuroscience, but with the aim of addressing philosophical disagreements between Hume and Kant about mathematical knowledge, in particular Kant's claim that mathematical knowledge is both non-empirical (apriori, but not innate) and non-trivial (synthetic, not analytic) and also concerns necessary (non-contingent) truths.
I thought a "baby robot" with innate but extendable competences could explore and learn about its environment in a manner similar to many animals, and learn the sorts of things that might have led ancient humans to discover Euclidean geometry.
The details of the mechanisms and how they relate to claims by Hume, Kant, and other philosophers of mathematics, could help us expand the space of philosophical theories in a deep new way.
Decades later, despite staggering advances in automated theorem proving concerned with logic, algebra, arithmetic, properties of computer programs, and other topics, computers still lack human abilities to think geometrically, despite advances in graphical systems used in game engines and scientific and engineering simulations. (What those do can't be done by human brains.)
I'll offer a diagnosis of the problem and suggest a way to make progress, illuminating some unobvious achievements of biological evolution.
Adam Ford also made available two related interviews recorded at the conference:
Please do not save or send anyone copies - instead keep a link
to this location and send that if necessary:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk107
The PDF files grew too long and may later be split into smaller pieces.
For more on the Meta-Morphogenesis project see:
Abstract for the tutorial
OR http://goo.gl/9eN8Ks (Main project web site.)
Because of a projector problem I gave the ALT2012 talk without slides.
Also recorded on video. Link to Youtube version below.
The video is also linked on the slideshare site (above).
Added: 1 Dec 2012:
Video of presentation at the conference
(without slides: projector not working!)
Also at Computing at School (CAS) Conference, July 2013, University of Birmingham
Original version Installed: 20 Sep 2012;
Updated: 21 Sep 2012; 24 Jan 2013; 6 Aug 2014; 23 Aug 2014
Abstract for 2012 version (relevant to 2014 version)
As an example, the presentation attempts to show that current
debates about whether to use phonics or look-and-say methods for
teaching reading cannot be resolved sensibly without thinking
computationally about the nature of reading, learning, thinking,
speaking, understanding, and how all of these depend on
multi-layered information-processing architectures that are still
growing in different ways while children are learning to read.
Michael Morpurgo mounted a campaign of criticism of the rigid use and testing of phonics in 2012, e.g. in BBC talks
http://www.bbc.co.uk/programmes/b01hxh6w
Compare: Andrew Davis
A Monstrous Regimen of Synthetic Phonics: Fantasies of Research-Based
Teaching 'Methods' Versus Real Teaching
in Journal of Philosophy of Education, Vol. 46, No. 4, 2012, pp 560--573.
https://www.dur.ac.uk/education/staff/profile/?mode=pdetail&id=617&sid=617&pdetail=82425
Those criticisms would be strengthened by use of computational thinking about
processes of education and multiple functions and mechanisms that need to be
integrated in advanced reading (e.g. fast silent reading).
Note added 21 Sep 2012
Perhaps the slides should have referred to a 2007 ACM paper by Peter
J. Denning (much better than his earlier work on "Great Principles"):
Computing is a Natural Science
Information processes and computation continue to be found abundantly in the deep structures of many fields. Computing is not--in fact, never was--a science only of the artificial.
COMMUNICATIONS OF THE ACM, July 2007, Vol. 50, No. 7, pp 13--18.
http://cs.gmu.edu/cne/pjd/PUBS/CACMcols/cacmJul07.pdf
Abstract
In my research I meander through various disciplines, using fragments of AI that I regard as relevant to understanding natural and artificial intelligence, willing to learn from anyone.
As a result, all my knowledge of work in particular sub-fields of AI is very patchy, and rarely up to date. This makes me unfit to write the history of European collaboration on some area of AI research as originally intended for this panel session.
However, by interpreting the topic rather loosely, I can (with permission from the event organisers) regard some European philosophers who were interested in philosophy of mathematics, such as Kant and Frege, as early AI researchers from whom I learnt much. Hume's work is also relevant.
Moreover, more recent work by neuro-developmental psychologist Annette Karmiloff-Smith, begun in Geneva with Piaget then developed independently, helps to identify important challenges for AI (and theoretical neuroscience), that also connect with philosophy of mathematics and the future of AI and robotics, rather than the history.
I'll present an idiosyncratic, personal, survey of a subset of AI stretching back in time, and deep into other disciplines, including philosophy, psychology and biology, and possibly also deep into the future, linked by problems of explaining human mathematical competences. The unavoidable risk is that someone in AI has done very relevant work on mathematical discovery and reasoning, of which I am unaware.
I'll be happy to be informed, and will extend these slides if appropriate.
See online paper
http://www.cs.bham.ac.uk/research/projects/cogaff/12.html#1205
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html
Theorems About Triangles, and Implications for Biological Evolution and AI: The Median Stretch, Side Stretch, Triangle Sum, and Triangle Area Theorems
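For reference, two of the four theorems named in that title are the classical ones, stated in minimal LaTeX below; the Median Stretch and Side Stretch theorems are the less standard formulations discussed on the linked page:

    % Triangle Sum: the interior angles of a planar triangle sum to a straight angle.
    \alpha + \beta + \gamma = \pi
    % Triangle Area: half the base times the perpendicular height.
    A = \tfrac{1}{2}\, b\, h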
ECAI 2012 Workshop on Computational Creativity, Concept Invention, and General Intelligence
Installed: 28 Aug 2012
Montpellier, 27th August 2012
http://www.cogsci.uni-osnabrueck.de/~c3gi
Workshop proceedings: http://www2.lirmm.fr/ecai2012/images/stories/ecai_doc/pdf/workshop/W40_c3gi_pre-proceedings_20120803.pdf
Abstract
Whether the mechanisms proposed by Darwin and others suffice to explain all details of the achievements of biological evolution remains open. Variation in heritable features can occur spontaneously, and Darwinian natural selection can explain why some new variants survive longer than others. But that does not satisfy Darwin's critics and also worries supporters who understand combinatorial search spaces.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
One problem is the difficulty of knowing exactly what needs to be explained: Most research has focused on evolution of physical form, and physical competences and behaviours, in part because those are observable features of organisms. What is much harder to observe is evolution of information-processing capabilities and supporting mechanisms (architectures, forms of representation, algorithms, etc.).
Information-processing in organisms is mostly invisible, in part because it goes on inside the organism, and in part because it often has abstract forms whose physical manifestations do not enable us to identify the abstractions easily. Compare the difficulty of inferring thoughts, percepts or motives from brain measurements, or decompiling computer instruction traces. Moreover, we may not yet have the concepts required for looking at or thinking about the right things: we may need more than the vast expansion of our conceptual tools for thinking about information processing capabilities and mechanisms in the last half century. However, while continually learning what to look for, we can collaborate in attempting to identify the many important transitions in information processing capabilities, ontologies, forms of representation, mechanisms and architectures that have occurred on various time-scales in biological evolution, in individual development (epigenesis) and in social/cultural evolution -- including processes that can modify later forms of evolution and development: meta-morphogenesis.
Conjecture: The cumulative effects of successive phases of meta-morphogenesis produce enormous diversity among living information processors, explaining how evolution came to be the most creative process on the planet. Progress in AI depends on understanding the products of this process.
Latest version of the workshop paper: http://www.cs.bham.ac.uk/research/projects/cogaff/12.html#1203
Abstract
Online Abstract
See next talk, on the Meta-Morphogenesis project.
This is the latest version of the presentation given at the Workshop "The Incomputable":
http://www.mathcomp.leeds.ac.uk/turing2012/inc/
Royal Society Kavli Centre, Chicheley: 11-15 June 2012
Abstract for talk.
Invited talk at:
Cambridge University Computing and Technology Society (Tuesday 8th May 2012)
www.cucats.org
Available HERE (PDF).
NB: Criticisms and suggestions welcome.
Installed: 14 May 2012
Abstract
See the abstract posted at:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cucats-abstract.html
See also:
- http://www.cs.bham.ac.uk/research/projects/poplog/examples/thinky.html
types of programming education, including 'thinky' programming.
- http://www.cs.bham.ac.uk/research/projects/poplog/examples/examples.html
draft notes on how to teach 'thinky' programming.
- http://www.cs.bham.ac.uk/research/projects/poplog/cas-ai/video-tutorials.html
experimental video tutorials.
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
The main Meta-Morphogenesis project site.
for Philosophy of Cognitive Science Students, Birmingham, Feb 2012.
Installed/revised: 21 Apr 2012
Abstract
We can integrate philosophy of mind with other fields, and turn vague insoluble problems into problems about what sorts of information processing architectures make different sorts of minds possible, including minds that grow and change their architectures. By considering different evolutionary and developmental trajectories in different species and in different sorts of future machines and robots we can understand each case much better, including understanding what human minds are, and how they grow and change. It's important not only to consider different sorts of minds, but also whole architectures with different sorts of components performing different functions, since the nature of each function depends on the others it interacts with.
University of Birmingham, Language and Cognition, 21st October 2011
and School of Computer Science 31st October 2011.
Also: variants at: University of Aberystwyth; Royal Society meeting on Animal Minds, Chicheley Hall; EuCognition Meeting, Oxford; University of Nottingham.
Abstract
All the presentations were informal and based on portions of these three web sites (different portions):
Slides will be added here later. See also Slides on toddler theorems below.
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/beyond-modularity.html
- http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
"Barcelona Cognition, Brain and Technology summer school - BCBT"
http://bcbt.upf.edu/bcbt11
Abstract
I first learnt about AI in 1969 when I was a lecturer in philosophy, and soon became convinced that the best way to make progress in solving a range of philosophical problems (e.g. in philosophy of mathematics, philosophy of mind, philosophy of language, philosophy of science, epistemology, philosophy of emotions, and some parts of metaphysics) was to produce and analyse designs for successively larger working fragments of minds. I think that project can be enhanced by using it to pose new questions about transitions in the evolution of biological information-processing systems. I shall try to explain these relationships between AI, biology and philosophy and show how they can yield major new insights, while also inspiring important (and difficult) new research. I hope to make the presentation interactive.
I shall post relevant reading matter on the web site being prepared for a closely related tutorial in August at AAAI, here: http://www.cs.bham.ac.uk/research/projects/cogaff/aaaitutorial/
A subset of these slides will be used for a talk at
The Barcelona Cognition, Brain and Technology summer school BCBT2011
"How to combine science and engineering to solve philosophical problems"
See also:
Also posted on slideshare by the AWARE EU project http://www.aware-project.eu/
Note: This is supplemented by a newer presentation, Talk 111 (added March 2015), which presents themes linking evolution of language and evolution of vision.
Abstract
Very many researchers assume that it is obvious what vision (e.g. in humans) is for, i.e. what functions it has, leaving only the problem of explaining how those functions are fulfilled.
So they postulate mechanisms and try to show how those mechanisms can produce the required effects, and also, in some cases, try to show that those postulated mechanisms exist in humans and other animals and perform the postulated functions.
The main point of this presentation is that it is far from obvious what vision is for - and J.J. Gibson's main achievement is drawing attention to some of the functions that other researchers had ignored.
I'll present some of the other work, show how Gibson extends and improves it, and then point out how much more there is to the functions of vision and other forms of perception than even Gibson had noticed.
In particular, much vision research, unlike Gibson, ignores vision's function in on-line control and perception of continuous processes; and nearly all, including Gibson's work, ignores meta-cognitive perception, and perception of possibilities and constraints on possibilities and the associated role of vision in reasoning.
If we don't understand that, we cannot understand how biological mechanisms, arising from the requirements of being embodied in a rich, complex and changing 3-D environment, underpin human mathematical capabilities, including the ability to reason about topology and Euclidean geometry.
See discussions of "Toddler theorems" below.
Abstract
To Be Added
Prepared for online presentation for Computing At School (CAS)
(Using elluminate conference tool.)
Background notes: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/teach-share.html
Recording of the presentation: http://bit.ly/iVhp0i
Requires JavaWebStart (javaws). Set up and test your system here first: http://www.elluminate.com/support
Abstract
I contrast the evolution of physical forms and observable behaviours with the evolution of types of information processing in organisms of various kinds.
This could be described as "invisible evolution": hard to identify but essential to understand if we want to understand the achievements of biological evolution and the nature of what it produced -- including ourselves.
Talk 90a: Piaget (and collaborators) on Possibility and Necessity
(Superseded by Talk 90b below)
Version 1: 21 Feb 2011 in Psychology and Computer Science, Birmingham
Available HERE (PDF).
Available HERE, in 2-UP PDF format.
Talk 90b:
Version 2: presented at Dagstuhl workshop 28th March 2011, and Oxford
CIAO/Automatheo workshop 6th April (after revision).
Available HERE (PDF).
Available HERE, in 2-UP PDF format.
Videos relevant to the talk: http://www.cs.bham.ac.uk/research/projects/cogaff/movies/vid/
Note added 7 Mar 2011
Since the talk I have been looking at (among other things) Annette Karmiloff-Smith's work on Representational Redescription, in her 1992 book Beyond Modularity. There is much overlap in our ideas, which I am attempting to document here. I have also expanded the slides to include a rational reconstruction of what I think Piaget was studying, expressed in terms of the concept of "Exploration Domain" (close to "micro-worlds" in AI and "microdomains" in Karmiloff-Smith).
Humans and some other animals seem to have the ability first to learn patterns of phenomena in an exploration domain, then, in some cases, to reorganise (unwittingly) the empirical information into something like a deductive system in which previous patterns (sometimes corrected) become either "examples" or "theorems". (Hence Possibility and Necessity.)
This process will depend on both features of the environment and mechanisms produced by evolution to help animals cope with various sorts of environment. Some of the features of this process are: different exploration domains are explored and learnt about in parallel; sometimes domains can be combined to form more complex domains; many, though not all, domains are closely related to the structure of space, time and matter; and most animals do not have the metacognitive ability to make their own learning an exploration domain, though humans do. Well known transitions in human language learning seem to be based on late evolutionary developments of the above mechanisms in humans. The processes of reorganisation depend on architectural growth, sometimes combined with use of new special-purpose forms of representation.
The processes of constructing the deductive reorganisations, and the processes of deploying the new systems, are sometimes buggy, as is mathematical theorem proving (Lakatos: Proofs and Refutations). Also, for each learner the trajectory (development + learning) may be unique and depend on genetic, social and physical environmental opportunities.
There seem to be deep implications for biology, developmental psychology, neuroscience, comparative cognitive science, education, AI/Robotics and philosophy (e.g. epistemology, philosophy of language and philosophy of mathematics).
Results of the human genome project cannot be understood until much more is known about what the genome (or genomes) contributes to these processes and how.
Original Abstract
It is not widely known that shortly before he died, Jean Piaget and his collaborators produced a pair of books on Possibility and Necessity, exploring questions about how two linked sets of abilities develop:
(a) The ability to think about how things might be, or might have been, different from the way they are.
(b) The ability to notice limitations on possibilities, i.e. what is necessary or impossible.
I believe Piaget had deep insights into important problems for cognitive science that have largely gone unnoticed, and are also important for research on intelligent robotics, or more generally Artificial Intelligence (AI), as well as for studies of animal cognition and how various animal competences evolved and develop.
The topics are also relevant to understanding biological precursors to human mathematical competences and to resolving debates in philosophy of mathematics, e.g. between those who regard mathematical knowledge as purely analytic, or logical, and those who, like Immanuel Kant, regard it as being synthetic, i.e. saying something about reality, despite expressing necessary truths that cannot be established purely empirically, even though they may be initially discovered empirically (as happens in children).
It is not possible in one seminar to summarise either book, but I shall try to present an overview of some of the key themes and will discuss some of the experiments intended to probe concepts and competences relevant to understanding necessary connections.
In particular, I hope to explain: (a) The relevance of Piaget's work to the problems of designing intelligent machines that learn the things humans learn. (Most researchers in both Developmental Psychology and AI/Robotics have failed to notice or have ignored most of the problems Piaget identified.) (b) How a deep understanding of AI, and especially the variety of problems and techniques involved in producing machines that can learn and think about the problems Piaget explored, could have helped Piaget describe and study those problems with more clarity and depth, especially regarding the forms of representation required, the ontologies required, the information processing mechanisms required and the information processing architectures that can combine those mechanisms in a working system -- especially architectures that grow themselves.
That kind of computational or "design-based" understanding of the problems can lead to deeper clearer specifications of what it is that children are failing to grasp at various stages in the first decade of life, and what sorts of transitions can occur during the learning. I believe the problems, and the explanations, are far more complex than even Piaget thought. The potential connection between his work and AI was appreciated by Piaget himself only very shortly before he died.
One of the key ideas implicit in Piaget's work (and perhaps explicit in something I have not read) is that the learnable environment can be decomposed into explorable domains of competence that are first investigated by finding useful, reusable patterns, describing various fragments.
Then eventually a large scale reorganisation is triggered (per domain) which turns the information about the domain into a more economical and more powerful generative system that subsumes most of the learnt patterns and, through use of compositional semantics in the internal representation, allows coping with much novelty -- going far beyond what was learnt.
(I think this is the original source of human mathematical competences.)
Language learning seems to use a modified, specialised, version of this more general (but not totally general) mechanism, but the linguistic mechanisms were both a later product of evolution and also get turned on later in young humans than the more general domain learning mechanisms. The linguistic mechanisms also require (at a later stage) specialised mechanisms for learning, storing and using lots of exceptions to the general rules induced (the syntactic and semantic rules).
The language learning builds on prior learning of a variety of explorable domains, providing semantic content to be expressed in language. Without that prior development, language learning must be very shallow and fragmentary -- almost useless.
When two or more domains of exploration have been learnt they may be combinable, if their contents both refer to things and processes in space-time. Space-time is the great bed in which many things can lie together and produce novelty.
I think Piaget was trying to say something like this but did not have the right concepts, though his experiments remain instructive.
Producing working demonstrations of these ideas in a functional robot able to manipulate things as a child does will require major advances in AI, though there may already be more work of this type than I am aware of.
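The transition described above, from stored empirical patterns to a generative system that subsumes them, can be caricatured in a few lines of code (a toy sketch of the idea only, not anything from Piaget or from these slides):

    # Phase 1: empirical learning -- memorise observed (a, b) -> a + b facts.
    observed = {(1, 1): 2, (2, 1): 3, (2, 3): 5}

    def phase1(a, b):
        # Pattern lookup: works only for cases already encountered.
        return observed.get((a, b))

    # Phase 2: "reorganisation" -- a generative rule subsumes the stored
    # patterns and copes with unbounded novelty, going beyond what was learnt.
    def phase2(a, b):
        return a + b

    # The rule reproduces every memorised case, and handles a novel one.
    assert all(phase2(a, b) == v for (a, b), v in observed.items())
    print(phase1(7, 5), phase2(7, 5))   # None 12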
See also http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1101
Evolved Cognition and Artificial Cognition: Some Genetic/Epigenetic Trade-offs for Organisms and Robots
Abstract
One of the amazing facts about human vision is how fast a normal adult visual system can respond to a complex optic array with rich 2-D structure representing complex 3-D structures and processes, e.g. turning a corner in a large and unfamiliar town.
This has implications for the mechanisms required, which I try to spell out. See also:
Aaron Sloman,
Architectural and Representational Requirements for Seeing Processes, Proto-affordances and Affordances,
In Logic and Probability for Scene Interpretation, Eds. Anthony G. Cohn, David C. Hogg, Ralf Moeller and Bernd Neumann,
Dagstuhl Seminar 08091 Proceedings, 2008, Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany,
http://drops.dagstuhl.de/opus/volltexte/2008/1656
The talk was presented on 9th Nov, showing these slides and some videos. I may later extend the slides. Suggestions welcome.
Abstract:
A task for AI is to work with biologists, not just learning from them, but also providing them with new AI-informed concepts, formalisms, questions, suggestions for experiments, theories, and working explanatory models.
The videos shown in the lecture, and a few more, are available here.
See also:
- Talk 78: Computing: The Science of Nearly Everything below.
- Philosophy as AI and AI as philosophy (AAAI 2011 Tutorial)
- LIFE and INFORMATION Self-modifying information-processing architectures
- Evolution of mind as a feat of computer systems engineering
Lessons from decades of development of virtual machinery, including self-monitoring virtual machinery.
Varieties of Self-Awareness and Their Uses in Natural and Artificial Systems
Related presentations:
Abstract
This unfinished, still somewhat disorganised, draft attempts to explain what running virtual machines are, in terms of kinds of dynamical system whose behaviours and competences are not best described in terms of physics and chemistry, even though they have to be fully implemented in physical mechanisms in order to exist and operate. It attempts to explain, in more detail than my earlier papers, how "sideways causation" and "downward causation" can occur in running virtual machines, i.e. how non-physical things can causally influence one another and also influence physical events and processes -- without any magic, mysticism, quantum mechanics etc. needed, just sufficiently tangled webs of true counterfactual conditionals supported by sophisticated machinery designed or evolved for that purpose.
Two notions of real existence are proposed: (a) being able to cause or be caused by other things (existence in our world) and (b) being an abstraction that is part of a system of constraints and implications (mathematical existence, story-relative existence, etc.). Some truths about causal connections between things with the first kind of existence can be closely related to mathematical connections between things of the second kind. (I think that's roughly Immanuel Kant's view of causation, in opposition to Hume.)
Some of the problems are concerned with concurrent interacting subsystems within a virtual machine, including co-operation, conflict, self-monitoring, and self-modulation. The patterns of causation involving interacting information are not well understood. Existing computer models seem to be far too simple to model things like conflicting tastes, principles, hopes, fears, ...
In particular, opposing physical forces and other well-understood interacting physical mechanisms are very different from these interactions in mental machinery, even though the latter are fully implemented in physical machinery. This is likely to be "work in progress for some time to come."
This presentation is intended to provide background supporting material for other presentations and papers on virtual machinery, consciousness, qualia, introspection and the evolution of mind, including Talk 84 below, explaining how Darwin could have answered some of his critics regarding evolution of mind and consciousness.
Expanded 20 Sep 2015
I don't believe the ideas about requirements are clear enough yet, or the ideas about explanatory mechanisms deep enough. See also:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
Virtual Machine Functionalism (VMF)
(The only form of functionalism worth taking seriously in Philosophy of Mind and theories of Consciousness)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html
The scientific/metaphysical explanatory role of construction kits:
fundamental and derived kits; concrete, abstract and hybrid kits.
Related presentations:
Abstract
This is one of a collection of presentations regarding virtual machines and their causal powers. See also
Talk 86, Talk 84, Talk 71, and some older talks on supervenience and virtual machinery.
This presentation provides a few notes on Dennett's views on virtual machines, extracted from Talk 73 with some revisions, including a criticism of what he says about "centres of narrative gravity" and "centres of gravity" and "point masses", e.g. in his paper "Real Patterns".
His occasional reluctance to be a realist about virtual machinery, and his reluctance to be a realist about mental states and processes (as opposed to being willing to adopt "the intentional stance"), were both attributed in an early version of this presentation to a failure to understand the significance of the explanatory power of virtual machines and their causal powers in computing systems, as discussed in various presentations listed here. However, his more recent publication
The Cultural Evolution of Words and Other Thinking Tools
is unequivocal about the importance and reality of VMs.
http://www.ncbi.nlm.nih.gov/pubmed/19687141
Slides prepared for the SAB2010 presentation
HERE (PDF)
(Out of date but may be useful with the video.)
Videos
of talks at SAB2010
Installed: 23 Nov 2010.
Last Updated: 23 Nov 2010; 10 Dec 2010;
Invited talk at:
SAB2010
Related presentations:
11th International Conference on Simulation of Adaptive Behaviour
Paris, 29 August 2010 (Presented at Clos-Lucé, Amboise)
The published paper.
Also presented 10th Sept at Conference on "Nature and Human Nature" (Consciousness and Experiential Psychology), Oxford:
http://www.bps.org.uk/conex/events/cep_2010.cfm
Abstract
Many of Darwin's opponents, and some of those who accepted the theory of evolution as regards physical forms, objected to the claim that human mental functions, and consciousness in particular, could be products of evolution. There were several reasons for this opposition, including unanswered questions as to how physical mechanisms could produce mental states and processes -- an old, and still surviving, philosophical problem.
We can now show in principle how evolution could have produced the "mysterious" aspects of consciousness if, like engineers developing computing systems in the last six or seven decades, evolution "solved" increasingly complex problems of representation and control (including self-monitoring and self-control) by producing systems with increasingly abstract, but effective, mechanisms, including self-observation capabilities, implemented in virtual machinery.
It is suggested that these capabilities are, like many capabilities of computer-based systems, implemented in non-physical virtual machines which, in turn, are implemented in lower level physical mechanisms. For this, evolution would have had to produce far more complex virtual machines than human engineers have so far managed, but the key idea of switching information processing to a higher level of abstraction, might be the same.
However it is not yet clear whether the biological virtual machines could have been implemented in the kind of discrete technology used in computers as we know them. These ideas were not available to Darwin and his contemporaries because most of the concepts, and the technology, involved in creation and use of sophisticated virtual machines has only been developed in the last half century, as a by-product of a large number of design decisions by hardware and software engineers.
Note: Some of the ideas about evolutionary pressures from the environment are summarised briefly in a commentary on a 'target article' by Margaret Boden
Can computer models help us to understand human creativity?
http://nationalhumanitiescenter.org/on-the-human/2010/05/can-computer-models-help-us-to-understand-human-creativity/
Previously at this defunct web site:
Can computer models help us to understand human creativity?
My commentary is at the end of the above web page, and also copied here.
Note: This is related to hard unsolved philosophical problems about the concept of causation.
Some ideas about this are presented here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/gentoa
And in Talk 89.
Research Awayday, 21st July 2010, Winterbourne, University of Birmingham.
Followed up by a meeting (or series of meetings) to discuss the question:
How can a genome specify an information-processing architecture that grows itself guided by interaction with the environment?
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/genome-architecture-project.html
Symposium on AI-Inspired Biology at AISB'2010 convention, 31st March--1st April 2010.
The proceedings paper is here.
Abstract (from paper in proceedings):
There is much work in AI that is inspired by natural intelligence, whether in humans, other animals or evolutionary processes. In most of that work the main aim is to solve some practical problem, whether the design of useful robots, planning/scheduling systems, natural language interfaces, medical diagnosis systems or others.
Since the beginning of AI there has also been an interest in the scientific study of intelligence, including general principles relevant to the design of machines with various sorts of intelligence, whether biologically inspired or not. The first explicit champion of that approach to AI was John McCarthy, though many others have contributed, explicitly or implicitly, including Alan Turing, Herbert Simon, Marvin Minsky, Ada Lovelace a century earlier, and others.
A third kind of interest in AI, which is at least as old, and arguably older, is concerned with attempting to search for explanations of how biological systems work, including humans, where the explanations are sufficiently deep and detailed to be capable of inspiring working designs. That design-based attempt to understand natural intelligence, in part by analysing requirements for replicating it, is partly like and partly unlike the older mathematics-based attempt to understand physical phenomena, insofar as there is no requirement for an adequate mathematical model to be capable of replicating the phenomena to be explained: Newton's equations did not produce a new solar system, though they helped to explain and predict observed behaviours in the old one.
This paper attempts to explain some of the main features of the design-based approach to understanding natural intelligence, many of them already well known, though not all.
The design based approach makes heavy use of what we have learnt about computation since Ada Lovelace. But it should not be restricted to forms of computation that we already understand and which can be implemented on modern computers. We need an open mind as to what sorts of information-processing systems can exist and which varieties were produced by biological evolution.
Or "How could evolution get ghosts into machines?"
Related presentations:
Also on my 'slideshare.net' web site
Preview of invited talk to be presented at Le Clos Lucé,
Amboise, France at
SAB2010 in August 2010.
(Final version of presentation to go in Talk 84.)
Conference paper is
here.
Presented at School of BioSciences Seminar, UoB, 11th May 2010
Abstract
Essentially the same abstract and notes as for Talk 84 above, of which this was an earlier version.
See also:
Presented at Symposium on Mathematical Practice and Cognition, AISB 2010 Convention, Leicester, March 29-30 2010.
http://homepages.inf.ed.ac.uk/apease/aisb10/programme.html
Proceedings paper available here.
Abstract
This is a progress report on a long term quest to defend Kant's philosophy of mathematics. In humans, and other species with competences that evolved to support interactions with a complex, varied and changing 3-D world, some competences go beyond discovered correlations linking sensory and motor signals. Dealing with novel situations or problems requires abilities to work out what can, cannot, or must happen in the environment, under certain conditions.
I conjecture that in humans these products of evolution form the basis of mathematical competences. Mathematics grows out of the ability to use, reflect on, characterise, and systematise both the discoveries that arise from such competences and the competences themselves. So a "baby" human-like robot, with similar initial competences and meta-competences, could also develop mathematical knowledge and understanding, acquiring what Kant called synthetic, non-empirical knowledge.
I attempt to characterise the design task and some ways of making progress, in part by analysing transitions in child or animal intelligence from empirical learning to being able to "work things out". This may turn out to include a very general phenomenon involved in so-called "U-shaped" learning, including the language learning that evolved later. Current techniques in AI/Robotics are nowhere near this. A long term collaborative project investigating the evolution and development of such competences may contribute to robot design, to developmental psychology, to mathematics education and to philosophy of mathematics. There is still much to do.
Slightly revised version of parts of previous presentations on closely related topics. See below.
I.e. not just:
- useful skills of various kinds,
- useful and/or entertaining applications,
- formal properties of computations
- hardware/software engineering.
But also:
- Powerful and deep new concepts and models
- able to illuminate many other disciplines,
- including studies of mind and life
(partly by raising questions never asked before).
Poster: Computing: The Science of Nearly Everything (PDF). (PPT - using OpenOffice).
NOTE:
Some examples of relatively unconventional kinds of programming
(including "thinky" programming),
that could be explored by young learners, are presented here:
http://www.cs.bham.ac.uk/research/projects/poplog/examples
See also Talk 87 above:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk87
Talk 87: What does AI have to do with Biology?
NOTE: Alan Bundy organised a seminar series at
the University of Edinburgh, on a related
theme, between 2006 and 2009.
Details:
http://www.inf.ed.ac.uk/research/programmes/comp-think/previous.html
Abstract
Topics for possible discussion within CAS
1. Do we want to broaden the scope of CAS to include: teaching about the ways in which computing ideas and programming experience can illuminate other disciplines, especially understanding natural intelligence, in humans and other species?
Why nearly everything? [Including the nature of mind and consciousness.]
2. How do those goals affect the choice of computing/programming concepts, techniques and principles that are relevant?
3. What are good ways to do that? E.g. what sorts of languages and tools help and what sorts of learning/teaching activities?
4. Which children should learn about this? Contrast
-- Offering specialised versions for learners interested in biology, psychology, economics, linguistics, philosophy, mathematics.
-- Offering a study of computation as part of a general science syllabus.
5. Is there any scope for that within current syllabus structures, and if not, what can be done about making space?
RELATED MATERIAL
Workshop on AI Heritage, MIT, June 11-12, 2009
Abstract
To be added.
(Includes some personal reflections, 1969 onwards.)
Theory lab lunch, School of Computer Science, Tues 23rd March 2010.
Abstract
Trying to get even computer scientists to take the ideas seriously.
Related (more recent) presentations:
- Talk 86: Supervenience and Causation in Virtual Machinery
- Talk 84: Using virtual machinery to bridge the "explanatory gap"
Or: Helping Darwin: How to Think About Evolution of Consciousness
Or: How could evolution (or anything else) get ghosts into machines?
- Talk 85: Daniel Dennett on Virtual Machines
Invited talk at:
Dagstuhl Seminar: "From Form to Function" Oct 18-23, 2009 http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=09431
A precursor talk is here.
Abstract
I discuss the need for an intelligent system, whether it is a robot, or some sort of digital companion equipped with a vision system, to include in its ontology a range of concepts that appear not to have been noticed by most researchers in robotics, vision, and human psychology. These are concepts that lie between (a) concepts of "form", concerned with spatially located objects, object parts, features, and relationships and (b) concepts of affordances and functions, concerned with how things in the environment make possible or constrain actions that are possible for a perceiver and which can support or hinder the goals of the perceiver.
Those intermediate concepts are concerned with processes that *are* occurring and processes that *can* occur, and the causal relationships between physical structures/forms/configurations and the possibilities for and constraints on such processes, independently of whether they are processes involving anyone's actions or goals.
These intermediate concepts relate motions and constraints on motion to both geometric and topological structures in the environment and the kinds of 'stuff' of which things are composed, since, for example, rigid, flexible, and fluid stuffs support and constrain different sorts of motions.
They underlie affordance concepts. Attempts to study affordances without taking account of the intermediate concepts are bound to prove shallow and inadequate.
A longer abstract is here http://www.cs.bham.ac.uk/research/projects/cogaff/misc/between-form-and-function.html
Presented Wed 10th March 2010 at the Senate House in the Inside Outside workshop
http://graham.web-stu.dcs.qmul.ac.uk/insideOutside.xhtml
This is a modified version of the talk below.
Talk at:
Language and Cognition Seminar, School of Psychology, 6 Nov 2009
(This is a sequel to Talk 73 below, presented at Metaphysics of Science 2009 on "Virtual Machines and the Metaphysics of Science".)
I have a closely related tutorial paper on this topic destined for Int. Journal of Machine Consciousness
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#906
Phenomenal and Access Consciousness and the "Hard" Problem: A View from the Designer Stance
Abstract
The "hard" problem of consciousness can be shown to be a non-problem because it is formulated using a seriously defective concept (the concept of "phenomenal consciousness" defined so as to rule out cognitive functionality and causal powers).So the hard problem is an example of a well known type of philosophical problem that needs to be dissolved (fairly easily) rather than solved. For other examples, and a brief introduction to conceptual analysis, see http://www.cs.bham.ac.uk/research/projects/cogaff/misc/varieties-of-atheism.html
In contrast, the so-called "easy" problem requires detailed analysis of very complex and subtle features of perceptual processes, introspective processes and other mental processes, sometimes labelled "access consciousness": these have cognitive functions, but their complexity (especially the way details change as the environment changes or the perceiver moves) is considerable and very hard to characterise.
"Access consciousness" is complex also because it takes many different forms, since what individuals are conscious of and what uses being conscious of things can be put to, can vary hugely, from simple life forms, through many other animals and human infants, to sophisticated adult humans,
Finding ways of modelling these aspects of consciousness, and explaining how they arise out of physical mechanisms, requires major advances in the science of information processing systems -- including computer science and neuroscience.
There are empirical facts about introspection that have generated theories of consciousness but some of the empirical facts go unnoticed by philosophers.
The notion of a virtual machine is introduced briefly and illustrated using Conway's "Game of life" and other examples of virtual machinery that explain how contents of consciousness can have causal powers and can have intentionality (be able to refer to other things).
The beginnings of a research program are presented, showing how more examples can be collected and how notions of virtual machinery may need to be developed to cope with all the phenomena.
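To make the Game of Life illustration concrete, here is a minimal sketch (illustrative only, not taken from the talk) in which a glider -- an entity that exists, and has effects, only at the virtual-machine level -- propagates across the grid while being fully implemented in the underlying substrate (here, a Python set of coordinates):

    from itertools import product

    # One Game of Life update. 'live' is the set of (x, y) cells that are on.
    def step(live):
        counts = {}
        # Count live neighbours of every cell adjacent to a live cell.
        for (x, y) in live:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    counts[cell] = counts.get(cell, 0) + 1
        # Standard rules: birth on exactly 3 neighbours, survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A glider: after 4 steps the same shape reappears, shifted one cell
    # diagonally -- a fact statable only at the virtual-machine level.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))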
Related presentations:
Abstract
Philosophers regularly use complex (running) virtual machines (not virtual realities) composed of enduring interacting non-physical subsystems (e.g. operating systems, word-processors, email systems, web browsers, and many more). These VMs can be subdivided into different kinds with different types of functions, e.g. "specific-function VMs" and "platform VMs" (including language VMs, and operating system VMs) that provide support for a variety of different (possibly concurrent) "higher level" VMs, with different functions.
Yet almost all philosophers ignore (or misdescribe) these VMs when discussing functionalism, supervenience, multiple realisation, reductionism, emergence, and causation.
Such VMs depend on many hardware and software designs that interact in very complex ways to maintain a network of causal relationships between physical and virtual entities and processes.
I'll try to explain this, and show how VMs are important for philosophy, in part because evolution long ago developed far more sophisticated systems of virtual machinery (e.g. running on brains and their surroundings) than human engineers have so far produced. Most are still not understood.
This partly accounts for the apparent intractability of several philosophical problems.
E.g. running VM subsystems can be disconnected from input-output interactions for extended periods, and some can have more complexity than the available input/output bandwidth can reveal.
Moreover, despite the advantages of VMs for self-monitoring and self control, they can also lead to self-deception.
SEE ALSO:
A longer abstract (and a workshop paper) here
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#vms
For an application of these ideas to old philosophical problems of consciousness see Talk 74: Why the "hard" problem of consciousness is easy and the "easy" problem hard. (And how to make progress)
For an attempt to show how Darwin could have used these ideas to provide answers to critics who claimed that evolution by natural selection could not produce consciousness see:
Talk 80: Helping Darwin: How to Think About Evolution of Consciousness -- Or "How could evolution get ghosts into machines?"
For an attempt to specify a (very large and ambitious) multi-disciplinary research project related to this see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/genome-architecture-project.html
A Possible Genome To Architecture Project (GenToA)
[The Meta-Genome Project?]
How can a genome specify an information processing architecture that grows itself guided by interaction with the environment?
Some early ideas about this were in Chapter 6 of The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind (1978)
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
For a lot of related material see Steve Burbeck's web site http://evolutionofcomputing.org/Multicellular/Emergence.html
Invited talk at:
Opensource Schools Unconference: NCSL Nottingham 20th July 2009
http://opensourceschools.org.uk/unconference09
The theoretical ideas, using Vygotsky's notion of a "Zone of Proximal Development" (ZPD), among other ideas, are illustrated using teaching methods based on Pop-11 and the Poplog AI programming environment, some illustrated here: http://www.cs.bham.ac.uk/research/projects/poplog/freepoplog.html#teaching
Presented at
Cognitive Science Conference 2009, CogSci'09, Amsterdam, 31st July 2009
There is an older presentation related to this here:
"Virtual Machines in Philosophy, Engineering & Biology" (presented at WPE 2008).
A later version aimed at computer scientists is Talk 76: The history, nature, and significance of virtual machinery.
Abstract
Many psychologists, philosophers, neuroscientists and others interact with a variety of man-made virtual machines (VMs) every day without reflecting on what that implies about options open to biological evolution, and the implications for relations between mind and body. This tutorial position paper introduces some of the roles of different sorts of VMs, contrasting Abstract VMs (AVMs), which are merely mathematical objects that do nothing, with running instances (RVMs), which interact with other things and have parts that interact causally. We can also distinguish single-function, specialised VMs (SVMs), e.g. a running chess game or word processor, from "platform" VMs (PVMs), e.g. operating systems, which provide support for changing collections of RVMs. (There was no space in the paper to distinguish two sorts of platform VMs, namely operating systems that can support actual concurrent interacting processes, and language run-time VMs which can support different sorts of functionality, though each instance of the language run-time VM (e.g. a Lisp VM, a Prolog VM) may not support multiple processes.)
The different sorts of RVMs play important but different roles in engineering designs, including "vertical separation of concerns". The paper suggests that biological evolution "discovered" problems that require VMs for their solution long before we did. Some of the resulting biological VMs have generated philosophical puzzles relating to consciousness, mind-body relations, and causation. Some new ways of thinking about these are outlined, based on attending to some of the unnoticed complexity involved in making artificial VMs possible.
The paper also discusses some of the implications for philosophical and cognitive theories about mind-brain supervenience, and some options for design of cognitive architectures with self-monitoring and self-control, along with warnings about a kind of self-deception arising out of use of RVMs.
The 6 page conference paper (very compressed) is available here.
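To make the SVM/PVM contrast concrete, here is a minimal runnable sketch (my illustration, not from the paper): two single-function "specialised VMs" are modelled as Python generators, and a toy "platform VM" keeps a changing collection of such running instances going by interleaving their steps. All names are invented for this sketch.

    from collections import deque

    def counter_svm(limit):
        # A single-function VM: it just counts.
        for i in range(limit):
            yield "counter: %d" % i

    def echo_svm(words):
        # Another single-function VM: it just echoes a fixed script.
        for w in words:
            yield "echo: " + w

    def platform_vm(svms):
        # The platform VM: supports a changing collection of running
        # SVM instances, interleaving their steps round-robin, and
        # coping with instances halting.
        ready = deque(svms)
        while ready:
            svm = ready.popleft()
            try:
                print(next(svm))      # one step of this running instance
                ready.append(svm)     # still running: reschedule it
            except StopIteration:
                pass                  # this SVM has halted; drop it

    platform_vm([counter_svm(3), echo_svm(["hello", "world"])])

Note that the generator definitions above are abstract specifications that do nothing; only the running instances created when platform_vm is called make anything happen.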
Talk 70: What Has Life Got To Do With Mind? Or vice versa?
(PDF)
(Thoughts inspired by discussions with Margaret Boden.)
Presented at a seminar on Margaret Boden's work, Sussex University,
22 May, 2009.
Includes some reminiscences about Cognitive Science/AI at Sussex from 1965, and
a discussion of whether mind requires life, or life requires mind (defined as
information processing, or informed control).
Session on 'The Ultimate Robot' part of FET'09 in Prague, April 2009.
Abstract
I am not trying to build a robot: I am trying to understand what problems evolution solved, how it solved them, and whether the problems can be solved on computer-based systems.
One way of doing that is trying to build things, to find out what's wrong with our theories and what the problems are. But it is also necessary to keep looking at products of evolution to compare them with what you have achieved so far.
Moreover, many of the problems come from the structure of the environment (e.g. the kinds of processes that do occur, that can occur, that can be produced or prevented, and the varieties of information that can be obtained by perceiving and acting in the environment). Most AI/Robotics/Cognitive Science researchers don't study the environment enough.
Related talks
- Talk 67: A New Approach to Philosophy of Mathematics:
Design a young explorer, able to discover "toddler theorems"
(Or: "The Naive Mathematics Manifesto").
- Ontologies for baby animals and robots.
From "baby stuff" to the world of adult science: Developmental AI from a Kantian viewpoint.
- Talk 28: Do intelligent machines, natural or artificial, really need emotions?
Revised: 14 Jan 2014
Latest version, presented at Brown University, on 10th June 2009
Available HERE (PDF).
Last modified: 27 May 2010
Older version (presented in Prague)
available HERE (PDF).
Presented at
Workshop on Matching and Meaning, at AISB'09 Edinburgh 9th April 2009.
at Spring 2009 Pattern Recognition and Computer Vision Colloquium
April 23, 2009 Czech Technical University, Center for Machine Perception
Abstract
In contrast with ontology developers concerned with a symbolic or digital environment (e.g. the internet), I draw attention to some features of our 3-D spatio-temporal environment that challenge young humans and other intelligent animals and will also challenge future robots. Evolution provides most animals with an ontology that suffices for life, whereas some animals, including humans, also have mechanisms for substantive ontology extension based on results of interacting with the environment. Future human-like robots will also need this.
Since pre-verbal human children and many intelligent non-human animals, including hunting mammals, nest-building birds and primates, can interact, often creatively, with complex structures and processes in a 3-D environment, this suggests (a) that they use ontologies that include kinds of material (stuff), kinds of structure, kinds of relationship, kinds of process (some of which are process-fragments composed of bits of stuff changing their properties, structures or relationships), and kinds of causal interaction, and (b) that, since they don't use a human communicative language, they must use information encoded in some form that existed prior to human communicative languages, both in our evolutionary history and in individual development. Since evolution could not have anticipated the ontologies required for all human cultures, including advanced scientific cultures, individuals must have ways of achieving substantive ontology extension.
The research reported here aims mainly to develop requirements for explanatory designs. The attempt to develop forms of representation, mechanisms and architectures that meet those requirements will be a long term research project.
Talk 67a: Why (and how) did biological evolution produce mathematicians?
(Title used for presentation at University of Birmingham mathematics graduate conference, 1st June 2009).
Available in PDF A4 Landscape Format
Presented at Nottingham LSRI Tuesday 2nd Feb 2010
If learning mathematics requires a teacher, where did
the first teachers come from?
Slides for the talk (PDF,
messy)
Video of the presentation at Nottingham LSRI 2nd Feb 2010.
(Includes Zeyn Saigol's refutation of my rubber-band star theorem.)
Also available as a (submitted) workshop paper
here.
(Comments welcome).
Alternative title: A New Approach to Philosophy of Mathematics:
Design a young explorer, able to discover "toddler theorems"
(Or: "The Naive Mathematics Manifesto").
Installed 16 Dec 2008 (Updated 24 Dec 2008; 30 Jan 2009; 15 Apr 2009, 7 May 2009, 25 May 2010)
Invited talk at Mathematics Graduate conference, June 2009
Abstract
Invited talk at York CS department, Wed 6th May 2009 (combined with part of talk on UKCRC Grand Challenge 5)
Previously presented at CISA seminar, Informatics, Edinburgh, Wed 8th April 2009.
At a joint meeting of the Language and Cognition Seminar and the Vision Club,
School of Psychology, University of Birmingham, Friday 12th December 2008
Presentation at Sussex University
An earlier version of the above talk on development of mathematical competences was given at University of Sussex, Tuesday 9th December 2008.
The PDF slides for the Sussex presentation are here.
The presentation at Sussex, including part of the discussion, was recorded
on video by Nick Hockings and he kindly made the resulting
video available online (in three resolutions). That is temporarily unavailable, but the medium resolution version is available here.
A link will be added here when Nick has found a new location.
Michael Brooks, a journalist who was present at the Sussex presentation wrote a report for the New Scientist here.
Unfortunately someone very silly at New Scientist gave it a totally inappropriate headline
and misrepresented my claims as being about making a mathematical robot, as opposed to understanding
human mathematical competences and their biological origins.
(I don't think it was Michael Brooks, as he seemed to understand what I was saying.)
NOTE: The slides were much revised between the successive presentations.
Some versions start with a fairly detailed example experimental domain,
concerned with shapes that can and cannot be made with a rubber band and pins.
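One reason this domain is attractive is that it supports a simple computational idealisation (mine, not from the slides): a single rubber band stretched around a set of pins settles on the convex hull of the pins, so the resulting shape is always convex, and a star outline is unreachable however the pins are placed (wrapping tricks aside). A minimal sketch under that idealisation:

    # Idealise the rubber band around pins as the convex hull of the
    # pin positions (Andrew's monotone-chain algorithm).
    import math

    def convex_hull(points):
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        def half(seq):
            chain = []
            for p in seq:
                while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                    chain.pop()
                chain.append(p)
            return chain
        lower, upper = half(pts), half(list(reversed(pts)))
        return lower[:-1] + upper[:-1]

    # Pins arranged as a five-pointed star: 5 outer tips, 5 inner notches.
    outer = [(round(10*math.cos(2*math.pi*k/5), 3),
              round(10*math.sin(2*math.pi*k/5), 3)) for k in range(5)]
    inner = [(round(4*math.cos(2*math.pi*k/5 + math.pi/5), 3),
              round(4*math.sin(2*math.pi*k/5 + math.pi/5), 3)) for k in range(5)]

    band = convex_hull(outer + inner)
    print(sorted(band) == sorted(outer))   # True: the band touches only the tips
    print(any(p in band for p in inner))   # False: notch pins never shape the band

The "toddler theorem" here is that no placement of pins can make the band concave; the code can only exhibit instances, whereas grasping why it MUST hold is the mathematical step.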
Later versions start with an introductory overview on the evolution of cognition.
Previous versions
The talks above build on and overlap with earlier presentations:
There are also connections with the ideas on learning and development in the work of Piaget and Karmiloff-Smith, and the examples of "toddler theorems" presented in:
- Could a Child Robot Grow Up To be A Mathematician And Philosopher?
Talk at University of Liverpool, 21st Jan 2008
http://www.cs.bham.ac.uk/research/projects/cogaff/talks#math-robot
- Could a Baby Robot Grow up to be a Mathematician and Philosopher?
Presented at 7th International Conference on Mathematical Knowledge Management (MKM'08)
University of Birmingham, 29 Jul 2008
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#mkm08
Conference paper available online:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802
Kantian Philosophy of Mathematics and Young Robots
A related, much longer, journal paper is
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807
The Well-Designed Young Mathematician, in Artificial Intelligence (December 2008)
Official site: http://dx.doi.org/10.1016/j.artint.2008.09.004
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Main theses
- Humans (and perhaps other species) can acquire various kinds of information about the environment empirically, then develop a new derivation of the information, giving it a wholly or partially non-empirical status -- and a kind of necessity.
- The forms of representation, perceptual and other mechanisms, and information-processing architecture that make the second step possible evolved to meet requirements imposed by complex and changing environments, including other intelligent individuals.
I'll try to show how this is important for the ability to produce creative solutions to novel problems, without having to do statistical learning/testing.
- The competences required for this are not all present at birth: they have to develop in layers (Chappell & Sloman 2007)[*].
- Those biological competences provide the basis of the ability to do mathematics, and some of that ability exists unrecognized even in toddlers.
I'll introduce the notion of a "toddler theorem" and give examples (a small computational illustration follows this list).
- Understanding the biological origins of mathematical competences provides support for Kant's philosophy of mathematics, wrongly thought to have been refuted by the discovery that physical space is non-Euclidean.
[*] http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609
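As a deliberately trivial illustration of the empirical-to-necessary transition (my example, in the spirit of the "toddler theorems" mentioned above): a child can discover empirically that counting a collection in different orders always gives the same answer, and later grasp that it must be so. The empirical phase might look like this:

    import random

    toys = ["car", "doll", "ball", "brick", "bear"]

    def count_in_some_order(items):
        order = items[:]
        random.shuffle(order)      # count the toys in a random order
        total = 0
        for _ in order:            # "one, two, three, ..."
            total += 1
        return total

    results = {count_in_some_order(toys) for _ in range(1000)}
    print(results)                 # {5}: every ordering gives the same count

The non-empirical step -- seeing that no ordering COULD give a different answer, because cardinality is invariant under permutation -- is what turns the repeated discovery into a theorem.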
Natural and artificial meta-configured altricial information-processing systems, IJUC 2007
Presented 11th November 2008 at The Workshop on Philosophy and Engineering (WPE 2008), 10-12 November 2008, Royal Academy of Engineering, London.
Longer version available above in Talk 64 on virtual machines.
A 6 page paper on this, accepted for CogSci'09 (Amsterdam, July-Aug 2009) is available here:
"What Cognitive Scientists Need to Know about Virtual Machines"
Originally intended as talk at:
Kickoff workshop for the CogX project (29 September to 3rd October, 2008, Portoroz, Slovenia)
But insufficient time was available to present the material.
Later slides, extending the material, can be found in Talk 67 on toddler theorems, and Talk 68 on ontologies for baby animals and robots.
Abstract
These slides are based on the observation that current machine perceptual abilities and machine manipulative abilities are extremely limited compared with what humans and many other animals can do.
There are mobile robots that are impressive as engineering products, e.g. BigDog -- the Boston Dynamics robot -- and some other mobile robots that are able to keep moving in fairly rough terrain, including in some cases moving up stairs or over very irregular obstacles.
However, they all seem to lack any understanding of what they are doing, or the ability to achieve a specific goal despite changing obstacles, and then adopt another goal. For more detailed examples of missing capabilities see these web sites
As far as I know, none of the existing robots that manipulate objects can perceive what is possible in a situation when it is not happening, and reason about what the result would be if something were to happen.
Neither can they reason about why something is not possible.
I.e. they lack the abilities underlying the perception of positive and negative affordances.
They cannot wonder why an action failed, or what would have happened if..., or notice that their action might have failed if so and so had occurred part way through, etc., or realise that some information was available that they did not notice when they could have used it.
A more recent version of this, aimed mainly at philosophers, is Talk 73: Virtual Machines and the Metaphysics of Science.
A much shorter version was presented at
The
2008 Workshop on Philosophy and Engineering.
(10-12 Nov 2008, Royal Academy of Engineering, London).
The slides for that are here.
Abstract
here.
Previously presented at:
I gave two presentations for which the slides are available below, as follows
- Talk A Saturday 1st November
Why virtual machines really matter -- for several disciplines
(Or, Why philosophers need to be robot designers)
A much shorter version was presented at the Workshop on Philosophy and Engineering (WPE) London 10-12 November 2008, also on slideshare.net.
- Talk B Sunday 2nd November
Evolution of minds and languages.
What evolved first and develops first in children:
Languages for communicating, or languages for thinking (Generalised Languages: GLs)
For an introduction to biological virtual machines that grow themselves in layers as a result of interacting with the environment see these draft slides.
Abstract
One of the most important ideas (for engineering, biology, neuroscience, psychology, social sciences and philosophy) to emerge from the development of computing has gone largely unnoticed, even by many computer scientists, namely the idea of a running virtual machine (VM) that acquires, manipulates, stores and uses information to make things happen.
The idea of a VM as a mathematical abstraction is widely discussed, e.g. a Turing machine, the Java virtual machine, the Pentium virtual machine, the von Neumann virtual machine. These are abstract specifications whose relationships can be discussed in terms of mappings between them. E.g. a von Neumann VM can be implemented on a Universal Turing Machine. An abstract VM can be analysed and talked about, but, like a mathematical proof, or a large number, it does not DO anything. The processes discussed in relation to abstract VMs do not occur in time: they are mathematical descriptions of processes that can be mapped onto descriptions of other processes. In contrast, a physical machine can consume, transform, transmit, and apply energy, and can produce changes in matter. It can make things happen. Physical machines (PMs) also have abstract mathematical specifications that can be analysed, discussed, and used to make predictions, but which, like all mathematical objects, cannot do anything.
But just as instances of designs for PMs can do things (e.g. the engine in your car does things), so can instances of designs for VMs do things: several interacting VM instances do things when you read or send email, browse the internet, type text into a word processor, use a spreadsheet, etc. But those running VMs, the active instances of abstract VMs, cannot be observed by opening up and peering into or measuring the physical mechanisms in your computer.
My claim is that long before humans discovered the importance of active virtual machines (AVMs), long before humans even existed, biological evolution produced many types of AVM, and thereby solved many hard design problems, and that understanding this is important (a) for understanding how many biological organisms work and how they develop and evolve, (b) for understanding relationships between mind and brain, (c) for understanding the sources and solutions of several old philosophical problems, (d) for major advances in neuroscience, (e) for a full understanding of the variety of social, political and economic phenomena, and (f) for the design of intelligent machines of the future. In particular, we need to understand that the word "virtual" does not imply that AVMs are unreal or that they lack causal powers, as some philosophers have assumed. Poverty, religious intolerance and economic recessions can occur in socio-economic virtual machines and can clearly cause things to happen, good and bad. The virtual machines running on brains, computers and computer networks also have causal powers. Some virtual machines even have desires, preferences, values, plans and intentions, that result in behaviours. Some of them get philosophically confused when trying to understand themselves, for reasons that will be explained. Most attempts to get intelligence into machines ignore these issues.
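The abstract/running distinction can be illustrated in a few lines of code (a sketch of mine, not from the talk). The list below is a mere mathematical object -- a specification that, like a proof or a large number, does nothing -- whereas calling run() creates a running instance that changes state and produces effects in time:

    # An abstract VM: a program for a tiny stack machine, as pure data.
    program = [("push", 2), ("push", 3), ("add", None), ("print", None)]

    # A running VM: an active instance that manipulates information and
    # makes things happen, implemented on an underlying machine (CPython).
    def run(program):
        stack = []                       # state of this running instance
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack[-1])         # a causal effect in the world

    run(program)   # only now does anything happen: prints 5

Two calls to run() would be two distinct running VMs of one and the same abstract VM.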
Available in PDF format.
A version (using 'flash') is also available on my 'slideshare.net' space.
Talk at 7th International Conference on Mathematical Knowledge Management Birmingham, UK, 28-30 July 2008
http://events.cs.bham.ac.uk/cicm08/mkm08/
University of Birmingham, 29 Jul 2008
Proceedings paper online here
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802
ABSTRACT:
Kantian Philosophy of Mathematics and Young Robots
A child, or young human-like robot of the future, needs to develop an information-processing architecture, forms of representation, and mechanisms to support perceiving, manipulating, and thinking about the world, especially perceiving and thinking about actual and possible structures and processes in a 3-D environment. The mechanisms for extending those representations and mechanisms are also the core mechanisms required for developing mathematical competences, especially geometric and topological reasoning competences. Understanding both the natural processes and the requirements for future human-like robots requires AI designers to develop new forms of representation and mechanisms for geometric and topological reasoning to explain a child's (or robot's) development of understanding of affordances, and the proto-affordances that underlie them. A suitable multi-functional self-extending architecture will enable those competences to be developed. Within such a machine, human-like mathematical learning will be possible. It is argued that this can support Kant's philosophy of mathematics, as against Humean philosophies. It also exposes serious limitations in studies of mathematical development by psychologists.
See also Talk 56 and
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807
The Well-Designed Young Mathematician
Artificial Intelligence, December 2008.
The paper for the proceedings is available at
http://www.cs.bham.ac.uk/research/projects/cogaff/08.html#805
and
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802
ABSTRACT:
Some AI researchers aim to make useful machines, including robots. Others aim to understand general principles of information-processing machines whether natural or artificial, often with special emphasis on humans and human-like systems: They primarily address scientific and philosophical questions rather than practical goals. However, the tasks required to pursue scientific and engineering goals overlap considerably, since both involve building working systems to test ideas and demonstrate results, and the conceptual frameworks and development tools needed for both overlap. This paper, partly based on requirements analysis in the CoSy robotics project, surveys varieties of meta-cognition and draws attention to some types that appear to play a role in intelligent biological individuals (e.g. humans) and which could also help with practical engineering goals, but seem not to have been noticed by most researchers in the field. There are important implications for architectures and representations.
Talk for Graduate School Seminar series, Biosciences, University of Birmingham, on 24th June 2008.
Given on 9th June, at Birmingham Informatics CRN Workshop on Complexity and Critical Infrastructures - Environment focus.
and earlier (May 13th 2008) at UIUC Complexity conference on Understanding Complex Systems
Dagstuhl Seminar No. 08091, 24.02.2008-29.02.2008
Logic and Probability for Scene Interpretation. Schloss
Dagstuhl, Feb 25th 2008
NOTE: a sequel to this talk is available here.
Abstract
http://www.cs.bham.ac.uk/research/projects/cogaff/dag08/
Invited Presentation at Public Session of AISB'08,
3rd April 2008, Aberdeen, Scotland
Abstract
The talk aims to:
Talk at: Intelligent Robotics Lab Seminar, Birmingham, 22nd Jan 2008
Abstract
A short history of AI vision research, introducing 'Generalised Gibsonianism (GG)', which allows for 'Proto-affordances' and use of vision in planning, reasoning and problem solving, based on seeing and manipulating possibilities. Closely related to Talk 56.
Thinking about Mathematics and Science Seminar, University of Liverpool, Monday 21 January 2008.
Some old problems going back to Immanuel Kant (and earlier) about the nature of mathematical knowledge can be addressed in a new way by asking (a) what sorts of developmental changes in a human child make it possible for the child to become a mathematician, and (b) how this could be replicated in a robot that develops through exploring the world, including its own exploration of the world.
This is relevant not only to philosophy of mathematics, developmental psychology, and robotics, but also to a future mathematical education strategy based on much deeper ideas about what a mathematical learner is than are available to current educators. How many educators could design and implement a learner?
The slides have been substantially expanded since the talk, partly in the light of comments and criticisms received. This process is likely to continue. There are partial overlaps with several other talks here.
Abstract
The original abstract is here.
A conference paper summarising some of the issues is here
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0802
Kantian Philosophy of Mathematics and Young Robots
See also Talk 67
Symposium on AI and Consciousness: Theoretical Foundations and Current Approaches
at AAAI Fall Symposium, Washington, 9-11 November 2007
Abstract
See online paper http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0705
Symposium on Computational Approaches to Representation Change During Learning and Development at AAAI Fall Symposium, Washington November 2007
Abstract
See the full paper
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0704
Invited talk at:
University of Oxford Internet Institute, 26 Oct 2007
Workshop on Artificial Companions in Society: Perspectives on the Present and Future Oxford 25th--26th October, 2007
Organised by The Companions Project
Abstract
For the position paper see (revised version, published in 2010 in a book based on the workshop):
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#oii
An early draft of the chapter is here:
http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#711
Originally for Birmingham Language and Cognition seminar, School of Psychology, Oct 2007
Also presented at Mind as Machine, Continuing Education Weekend Course Oxford 1-2 Nov 2008
Abstract
Investigating the evolution of cognition requires an understanding of how to design working cognitive systems, since there is very little direct evidence (no fossilized behaviours or thoughts).
That claim is illustrated in relation to theories about the evolution of language. Almost everyone seems to have got things badly wrong by assuming that language must have started as primitive communication between individuals that gradually got more complex, and then later somehow got absorbed into cognitive systems.
An alternative theory is presented here, namely that generalised languages (GLs) supporting (a) structural variability, (b) compositional semantics (generalised to include both diagrammatic syntaxes and contextual influences on semantics at every level) and (c) manipulability for reasoning, evolved first for various kinds of 'thinking', i.e. internal information processing. This is inconsistent with many theories of the evolution of language. It is also inconsistent with Dennett's account of the evolution of consciousness in Content and Consciousness (1969).
See the slides for more detail.
An earlier presentation in the School of Computer Science, in March 2007, is closely related to this:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0702 (PDF)
This work is based on collaboration with Jackie Chappell. See also
What is human language? How might it have evolved?
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703
'Computational cognitive epigenetics'
A. Sloman and J.Chappell (BBS 2007)
Short talk at the Inauguration ceremony of the "Research Institute for Cognition and Robotics - CoR-Lab"
This is an expanded version of the slides. Part of the argument is that control of complex systems, including complex robots and animals, can be usefully mediated by virtual machines. Where such a virtual machine also acquires and uses information about itself this can be useful, but it can also lead to the machine becoming philosophical and getting confused.
A much expanded version of these slides is in Talk 64: Why virtual machines really matter -- for several disciplines
This is a revised, clarified and expanded version of a part of Talk 14, on Symbol Grounding vs Symbol Tethering.
Revised after presentation at the University of Sussex 27 Nov 2007, and University of Birmingham 29 Nov 2007
Also listed as COSY-PR-0705 on CoSy web site.
Abstract
This is, like Talk 14, an attack on concept empiricism, including its recently revived version, "symbol grounding theory".
The idea of an axiom system having some models is explained more fully than in previous presentations, showing how the structure of a theory can give some semantic content to undefined symbols in that theory, making it unnecessary for all meanings to be derived bottom up from (grounded in) sensory experience, or sensory-motor contingencies. Although symbols need not be grounded, since they are mostly defined by the theory in which they are used, the theory does need to be "tethered", if it is to be capable of being used for predicting and explaining things that happen, or making plans for acting in the real world. These ideas were quite well developed by 20th Century philosophers of science, and I now both attempt to generalise those ideas to be applicable to theories expressed using non-logical representations (e.g. maps, diagrams, working models, etc.) and begin to show how they can be used in explaining how a baby or a robot can develop new concepts that have some semantic content but are not definable in terms of previously understood concepts. There is still much work to be done, but what needs to be done to explain how intelligent robots might work, and how humans and other intelligent animals learn about the environment, is very different from most of what is going on in robotics and in child and animal psychology.
The addition of new explanatory hypotheses is abduction. Normally abduction uses pre-existing symbols. The simultaneous introduction of new symbols and new axioms (ontology-extending abduction) generates a very difficult problem of controlling search.
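The point about theory structure partially fixing meanings can be illustrated computationally (a sketch of mine, not from the slides). Treat "before" as an undefined binary relation over a three-element domain, constrained only by axioms making it a strict total order; brute-force enumeration shows how drastically the axioms narrow the candidate interpretations, while leaving a residue of indeterminacy for tethering to reduce:

    from itertools import product

    domain = [0, 1, 2]
    pairs = [(a, b) for a in domain for b in domain]

    def satisfies_axioms(rel):
        # Axioms of the mini-theory: 'before' is irreflexive, transitive,
        # and connected (any two distinct elements are ordered).
        irreflexive = all((a, a) not in rel for a in domain)
        transitive = all((a, c) in rel
                         for (a, b) in rel for (b2, c) in rel if b == b2)
        connected = all((a, b) in rel or (b, a) in rel
                        for a in domain for b in domain if a != b)
        return irreflexive and transitive and connected

    candidates = [frozenset(p for p, keep in zip(pairs, bits) if keep)
                  for bits in product([0, 1], repeat=len(pairs))]
    models = [rel for rel in candidates if satisfies_axioms(rel)]
    print(len(candidates), len(models))   # 512 candidate relations, 6 models

The six surviving interpretations are exactly the 3! linear orderings of the domain: the axioms alone give "before" the content of a linear order, though not of any one order in particular -- that residual indeterminacy is what interaction with the world ("tethering") can reduce.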
Advertised abstract for Birmingham talk.
See also this discussion on What's information?, and the ideas about virtual machine functionalism, here.
Invited talk at ENF'2007, Emulating the Mind
1st international Engineering and Neuro-Psychoanalysis Forum
Vienna July 2007
The full paper is available http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0702
Abstract
This paper summarises a subset of the ideas I have been working on over the last 35 years or so, about relations between the study of natural minds and the design of artificial minds, and the requirements for both sorts of minds.
The key idea is that natural minds are information-processing machines produced by evolution. We still do not have a good understanding of what the problems were that evolution had to solve, nor what the solutions were: e.g. we do not know how many different kinds of information processing system evolution produced, nor what they are used for -- even in ourselves.
Discovering what sort of information-processing machine a human mind is requires much detailed investigation of the many kinds of things minds can do.
It is not clear whether producing artificial minds with similar powers will require new kinds of computing machinery or merely much faster and bigger computers than we have now. Having been studying the problems of visual perception for many years I don't believe that any model proposed so far, whether based on conventional computation, neural computation, or anything else is capable of explaining the phenomena of human visual perception, including what it achieves, how fast it achieves it, how it develops and how many non-visual tasks the visual system is used for (e.g. doing mathematics).[*]
Insofar as some sorts of psychotherapy (including psychoanalysis) are analogous to run-time debugging of a virtual machine, in order to do them well, we need to understand the architecture of the machine well enough to know what sorts of bugs can develop and which ones can be removed, or have their impact reduced, and how.
Otherwise treatment will be a hit-and-miss affair.
This requires understanding how minds work when they don't need therapy -- a distant goal.
[*] Some challenges for vision researchers are here:
Presentations by both of us, along with abstracts, and also a post-workshop presentation on varieties of causal competence available in PDF format.
- Aaron Sloman: Evolution of two ways of understanding causation: Humean and Kantian (PDF), Abstract (HTML)
- Jackie Chappell: Understanding causation: the practicalities -- Screen version with hyperlinks (PDF), Abstract (HTML), Print version without hyperlinks (PDF)
- Causal competences of many kinds (PDF)
An incomplete draft paper written after the workshop:
Invited talk at:
BBSRC funded Workshop on
Closing the gap between neurophysiology and behaviour: A computational modelling approach
Abstract
A paper for the proceedings is online here (PDF).
University of Birmingham, United Kingdom
May 31st-June 2nd 2007
Over several decades I have been trying, as a philosopher-designer, to understand requirements for a robot to have human-like visual competences, and have written several papers pointing out what some of those requirements are and how far all working models known to me are from satisfying them. This included a paper in 1989 proposing replacing 'modular' architectures with 'labyrinthine' architectures, reflecting the varieties of interconnectivity between visual subsystems and other subsystems (e.g. action control subsystems, auditory subsystems).
One of the recurring themes has been the relationship between structure and process. For instance, doing school Euclidean geometry involves seeing how processes of construction can produce new structures from old ones in proving theorems, such as Pythagoras' theorem. Likewise, understanding how an old-fashioned clock works involves seeing causal connections and constraints related to possible processes that can occur in the mechanism. In contrast, performing many actions involves producing processes (e.g. grasping), seeing those processes, and using visual servoing to control the fine details. This need not be done consciously, as in posture control and many other skilled performances. Some processes transform structures discretely, e.g. by changing the topology of something (adding a new line to a diagram, separating two parts of an object), others continuously (e.g. painting a wall or blowing up a balloon).
Another theme that has been evident for many decades is the fact that percepts can involve hierarchical structure, although not all the structures should be thought of as loop-free trees, e.g. a bicycle doesn't fit that model even though to a first approximation most animals and plants do (e.g. decomposition into parts that are decomposed into parts, etc.) Less obviously, perception (as I showed in chapter 9 of The Computer Revolution in Philosophy) can involve layered ontologies, where one sub-ontology might consist entirely of 2-D image structures and processes, whereas another includes 3-D spatial structures and processes, and another kinds of 'stuff' of which objects are made and their properties (e.g. rigidity, elasticity, solubility, thermal conductivity, etc.), to which can be added mental states and processes, e.g. seeing a person as happy or sad, or as intently watching a crawling insect. The use of multiple ontologies is even more obvious when what is seen is text, or sheet music, perceived using different geometric, syntactic, and semantic ontologies.
What did not strike me until 2005 when I was working on an EU-funded robot project (CoSy) is what follows from the combination of the two themes (a) the content of what is seen is often processes and process-related affordances, and (b) the content of what is seen involves both hierarchical structure and multiple ontologies. What follows is a set of requirements for a visual system that makes current working models seem even further from what we need in order to understand human and animal vision, and also in order to produce working models for scientific or engineering purposes.
One way to make progress may be to start by relating human vision to the many evolutionary precursors, including vision in other animals. If newer systems did not replace older ones, but built on them, that suggests that many research questions need to be rephrased to assume that many different kinds of visual processing are going on concurrently, especially when a process is perceived that involves different levels of abstraction perceived concurrently, e.g. continuous physical and geometric changes relating parts of visible surfaces and spaces at the lowest level, discrete changes, including topological and causal changes, at a higher level, and in some cases intentional actions, successes, failures, near misses, etc. at a still more abstract level. The different levels use different ontologies, different forms of representation, and probably different mechanisms, yet they are all interconnected, and all in partial registration with the optic array (not with retinal images, since perceived processes survive saccades).
The slides include a speculation that achieving all this functionality at the speeds displayed in human (and animal) vision may require new kinds of information-processing architectures, mechanisms and forms of representation, perhaps based on complex, interacting, self-extending, networks of multi-stable mutually-constraining dynamical systems -- some of which change continuously, some discontinuously.
See also these challenges for vision researchers listed below [*]
This was a poster presentation at
PAC-07 Conference, 1-3 July 2007, Bristol, on
Perception, Action and Consciousness: confronting the dual-route (dorsal/ventral) theory of visual perception and the enactivist view of consciousness.
Abstract available online (HTML)
COSPAL Workshop Aalborg, 14th June 2007, on
Cognitive Systems: Perception, Action, Learning
Poster for 10th Conference of the Association for the Scientific Study of Consciousness (ASSC).
I was ill and did not manage to present my poster at ASSC10, Oxford June 2006. This is a PDF slide presentation of the main points.
Also available at ASSC e-prints web site as eprint 112
This presentation elaborates on
'The substratum of this experience is the mastery of a technique' (Wittgenstein)
I try to show, with illustrative videos, that many 'techniques' are implicitly involved in ordinary experiences -- and that the complexities grow as a child develops, extending its ontology and therefore the variety of affordances it can experience and use. I point out that there are two interpretations of sensorimotor contingencies, one intrasomatic (relating only the contents of sensory and motor signals at various levels of abstraction), the other extrasomatic (amodal, objective), referring to an environment that exists independently of whether and how it is experienced or acted on, and that the latter provides computational advantages in some cases, supporting a Kantian rather than a Humean view of knowledge and concepts. This also suggests a re-interpretation of mirror neurons as 'abstraction neurons'.
What we are conscious of in the environment depends on the ontology we have available. A child whose ontology does not include the notion of boundary, or the notion of alignment of boundaries, may not be able to replace a cut-out wooden picture in its recess, even if he knows which recess it should go in. Careful observation of children at various stages shows transitions that involve extensions of the available ontology, which must go along with development of suitable forms of representation and mechanisms for manipulating them, and an architecture that combines them all. Thus the substratum of the more sophisticated child's experience is mastery of many 'techniques', not just one as implied by Wittgenstein (who probably did not intend that). It is suggested that there are considerable differences between precocial species, whose competences and architecture are mostly genetically determined, and altricial species, which develop most of their own competences, e.g. through playful exploration, driven by meta-level bootstrapping mechanisms.
Only when I started working in detail on requirements for a human-like robot able to manipulate 3-D objects using vision and an arm with gripper did I notice what should have been obvious long before, namely that structured objects have 'multi-strand' relationships not expressible simply as R(x, y), because the relation between x and y involves many relations between parts of x and parts of y.
For a more detailed presentation of the resulting theory see
COSY-PR-0505: A (Possibly) New Theory of Vision (PDF)
Hence, motion of such structured objects involves 'multi-strand' (concurrent) processes. That is, many relationships change in parallel -- e.g. faces, edges, corners of one block may all be changing their relationships to faces, edges and corners of another (and things get more complex when objects are flexible, e.g. your hand peeling a banana or a sweater being put on a child).
Thus seeing what you are doing in such cases can have a kind of complexity that appears not to have been noticed previously because of too much focus on simpler visual tasks like recognition and tracking.
I'll show why we need to postulate mechanisms in which concurrent processes at different levels of abstraction, in partial registration with the optic array (NOT the retina, since saccades, etc., occur frequently) are represented.
Nothing in AI comes close to modelling this, and it seems likely that it will be hard to explain in terms of known neural mechanisms. If the opportunity arises I'll try to explain some of the implications for human development, understanding of causation, and computational modelling, and spell out requirements to be addressed in future interdisciplinary research, explaining deep connections with Gibson's notion of affordance, and its generalisation to 'vicarious affordances'.
The evolution of grasping devices that move independently of eyes (i.e. hands instead of mouth or beak) had profound implications -- undermining claims about sensory-motor contingencies -- also suggesting that mirror neurons should have been called 'abstraction neurons'.
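A minimal data-structure sketch (my notation, not Sloman's) may help make 'multi-strand' relationships and processes concrete: the relation between two blocks is represented not as one fact R(x, y) but as a bundle of sub-relations between their parts, many of which change, at different moments, as one block slides past the other:

    # Toy 1-D geometry: each part has an x-coordinate within its block.
    part_x_a = {"left_edge": 0.0, "top_face": 0.5, "right_edge": 1.0}
    part_x_b = {"left_edge": 0.0, "top_face": 0.5, "right_edge": 1.0}

    def strands(offset):
        # The relation between blocks A and B as a bundle of
        # part-to-part sub-relations, for a given offset of A.
        rel = {}
        for pa, xa in part_x_a.items():
            for pb, xb in part_x_b.items():
                d = abs((xa + offset) - xb)
                rel[(pa, pb)] = ("touching" if d < 0.25 else
                                 "near" if d < 1.5 else "far")
        return rel

    # Sliding A is a multi-strand process: many strands change in
    # parallel, and different strands change at different moments.
    for offset in [3.0, 1.0, 0.0]:
        b = strands(offset)
        print(offset, b[("right_edge", "left_edge")],
              b[("left_edge", "right_edge")])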
Some of the ideas are also sketched here: COSY-DP-0601 'Orthogonal Competences Acquired by Altricial Species'
A critique of common assumptions about 'sensorimotor contingencies' is presented, including making a distinction between somatic (internal) and exosomatic (external) ontologies. Too many people expect too much to come from the somatic (intrasomatic) variety -- including knowledge of sensorimotor contingencies, a notion criticised in
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0603
Requirements for 'fully deliberative' systems are analysed in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/fully-deliberative.html
Symposium on 50 years of AI, at the KI2006 Conference, Bremen, Germany, June 17th 2006
Video recordings of the symposium talks and discussion are available at :
http://bscc.spatial-cognition.de/node/14
The video recording of my lecture is also available on this site:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#ki2006
Abstract: An extended abstract for the talk is at
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ki2006-abstract.html
Also at:
http://doi.org/10.1007/978-3-540-69912-5_33
The current emphasis on causation as correlational/statistical, i.e. Humean, as in Bayesian nets, ignores a deeper notion of causation as structure-based and deterministic, i.e. Kantian. The history of science involves wherever possible moving from Humean to Kantian causation, and that's what young children and some other animals seem to do. Where structure-based understanding is not achievable we fall back on Humean causation as a last resort. But structure-based understanding is not something unitary: it has to be learnt about over and over again in connection with many different kinds of physical matter, physical structure, physical process, and likewise structures and processes of more abstract kinds, e.g. in functional relations, in social processes, in number theory, in computational virtual machines and in mental processes. This talk is mainly about causal understanding of the physical/geometrical world of a young child (or chimp, or crow?), which I suspect provides a basis for much else.
This talk overlaps with presentation COSY-PR-0505 (PDF) on vision, and also with a paper discussing 'Orthogonal re-combinable competences'.
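To sharpen the Humean/Kantian contrast above (with a toy example of my own): a Humean learner can only tally observed co-occurrences, whereas a structure-based model derives what must happen from the structure of a mechanism, and so can answer counterfactual questions about set-ups never yet observed:

    from collections import Counter

    # Kantian, structure-based: in a train of meshed gears, each mesh
    # reverses the direction of rotation -- derivable, not just observed.
    def gear_train_direction(n_gears, first="clockwise"):
        flips = n_gears - 1
        if flips % 2 == 0:
            return first
        return "anticlockwise" if first == "clockwise" else "clockwise"

    # Humean, correlational: tally observed (cause, effect) frequencies.
    observations = [("push_lever", "light_on")] * 9 + [("push_lever", "nothing")]
    freq = Counter(observations)
    print(freq.most_common(1))     # best we get: pushing USUALLY precedes light

    # The structural model answers a counterfactual the tally cannot:
    print(gear_train_direction(5)) # a 5-gear train never yet observed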
Installed: October 2005
Last Updated: 17 Feb 2007
Seminar in School of Computer Science, University of Birmingham, 13th October 2005,
Imperial College London on 25th October 2005,
Aston University on 28th October 2005,
Osnabrück, Germany 16th November 2005
(Closely related to presentations on affordances, ontologies, causation, child as scientist, and later presentations on vision.)
Abstract:
The key idea is that whereas I previously thought (like many others) that vision involved concurrently analysing and interpreting structures at different levels of abstraction, using different ontologies at the different levels (as explained in the summary of the Popeye program in Chapter 9 of 'The Computer Revolution in Philosophy' (1978)), it is now clear that that was an oversimplification: vision should be seen as involving analysis and interpretation not just of structures but also of processes at different levels concurrently, which sometimes implies running several simulations concurrently at different levels of abstraction, using different ontologies -- in partial registration with sensory data where appropriate, and sometimes also motor signals.
The talk explains what this means, what it does not mean, presents some of the evidence, summarises some of the implications, and points to some of the (many) unsolved problems, including unsolved problems about how this could be implemented either on computers or in brains. The presentation briefly lists some of the many precursors of the theory, but does not go into detail.
The slides will go on being revised and extended in the light of comments and criticisms. It soon became clear that the topic is much broader than vision, but I have left the title. One of the implications concerns our understanding of causation, and our learning about causation, discussed in the next presentation. There are also implications regarding visual/spatial reasoning. The work of Rick Grush reported in BBS 2004 is very closely related to some of the ideas presented here.
See http://mind.ucsd.edu/papers/intro-emulation/intro-em.pdf
The theory is also very closely related to theories about the development of mathematical competences presented above, and also presentations on perception of affordances and proto-affordances here.
Talk 34: TUTORIAL ON INTEGRATION AT EC COGNITIVE SYSTEMS 'KICKOFF' CONFERENCE,
Aaron Sloman
(Bled, Slovenia, 28-30 October 2004)
Available HERE (PDF)
Abstract
The EC Cognitive Systems KickOff Conference was organised by the CoSy project on behalf of the EC Cognitive Systems Initiative. Additional presentations and videos are available at the official CoSy web site in the events section.
The main theme of the CoSy Project is integration of many kinds of functionality, normally studied separately. This tutorial presentation attempted to explain what integration implies, with some examples. This was one of 8 tutorial presentations in two parallel streams on the second day of the conference. More recent work on the topic of integration can be found at the Birmingham CoSy papers web site.
(Overlaps with several previous talks)
The Birmingham CoSy Web Site includes several sequels to this paper. See also talks on understanding of causation in animals and machines at WONAC 2007.
Much discussion of the nature of human minds is based on prejudice or fear of one sort or another -- sometimes arising out of 'turf wars' between disciplines, sometimes out of dislike of certain theories of what we are, sometimes out of religious concerns, sometimes out of ignorance of what has already been learnt in various disciplines, sometimes out of over-reliance on common sense and introspection, or what seems 'obviously' true. But one thing is clear to all: minds are active, changing entities: you change as you read this abstract and you can decide whether to continue reading it or stop here. I.e. minds are active machines of some kind. So I propose that we investigate, in a dispassionate way, the variety of design options for working systems capable of doing things that minds can do, whether in humans or other animals, in infants or adults, in normal or brain-damaged people, in biological or artificial minds. We can try to understand the trade-offs between different ways in which complete systems may be assembled that can survive and possibly reproduce in a complex and changing environment (including other minds). This can lead to a new science of mind in which the rough-hewn concepts of ordinary language (including garden-gate gossip and poetry) are shown not to be wrong or useless, but merely stepping stones to a richer, deeper, collection of ways of thinking about what sorts of machines we are, and might be. This will also help to shed new light on the recent (confused) fashion for thinking that emotions are 'essential' for intelligence. It should also help us to understand how the concerns of different disciplines, e.g. biology, neuroscience, psychology, linguistics, philosophy, etc. relate to different layers of virtual machines operating at several different levels of abstraction, as also happens in computing systems.
Other talks in this directory elaborate further on some of the themes presented.
Available here
This talk explains why 'symbol tethering' (which treats most of meaning as determined by structure, with experience and action helping to reduce indeterminacy) is more useful for explicit forms of representation and theorising than 'symbol grounding' (which treats all meaning as coming 'bottom-up' from experience of instances, and which is just another variant on the old philosophical theory 'concept empiricism', defended by empiricist philosophers such as Locke, Berkeley and Hume, and refuted around 1781 by Kant).
NOTE: following a suggestion from Jackie Chappell, I now use the phrase 'symbol tethering' instead of 'symbol attachment'.
Since writing this I have discovered another attack on concept empiricism on the web page of Edouard Machery. See Concept Empiricism: Taking a Hard Look at the Facts.
This talk overlaps in part with Talk 49 and Talk 14
The talk was originally entitled 'Varieties of meaning in perceptual processes' but I did not manage to get to the perceptual processes part, being developed in this paper.
These slides are likely to be updated when I have time to complete the planned section on varieties of meaning in perceptual mechanisms.
Talk originally given to
Cafe
Scientifique & Culturel Birmingham,
7th May 2004
Announced at
http://www.birminghamcafe.org/view.html?eid=11
Revised version presented on 24th June 2005 in Utrecht at
The 3rd multi-disciplinary symposium organized by the NWO Cognition
Programme:
How rational are we?
Also presented several other times/places.
Part of the problem is that many of the words we use for describing human mental states and processes (including 'emotion' and 'intelligence') are far too ill-defined to be useful in scientific theories. Nevertheless there are many people who LIKE the idea that emotions, often thought of as inherently irrational, are required for higher forms of intelligence, the suggestion being that rationality is not all it's cracked up to be. But wishful thinking is not a good basis for advancing scientific understanding.
Another manifestation of wishful thinking is people attributing to me opinions that are the opposite of what I have written in things they claim to have read.
So I propose that we investigate, in a dispassionate way, the variety of design options for minds, whether in animals (including humans) or machines, and try to understand the trade-offs between different ways of assembling systems that survive in a complex and changing environment. This can lead to a new science of mind in which the rough-hewn concepts of ordinary language (including garden-gate gossip and poetry) are shown not to be wrong or useless, but merely stepping stones to a richer, deeper, collection of ways of thinking about what sorts of machines we are, and might be.
For more on this see http://www.cs.bham.ac.uk/research/cogaff/
This overlaps considerably with
See also:
- Invited talk for AAAI04 Symposium on emotions
- Talk 3,
- Talk 24 and others.
Beyond shallow models of emotion, in Cognitive Processing: International Quarterly of Cognitive Science, Vol 2, No 1, pp. 177-198, 2001 http://www.cs.bham.ac.uk/research/projects/cogaff/00-02.html#74 and this review/comment: http://www.ce3c.com/emotion/?p=106
What is it that an 18 month old child has not yet grasped when he cannot see how to join two parts of a toy train, despite having excellent vision and many motor skills? And what changes soon after, when he has learnt how to do it?
This overlaps considerably with
Talk 7 and
Talk 21 on Human Vision
See also these more recent slides on
Two views of child as scientist: Humean and Kantian
(October 2005).
Talk 27: REQUIREMENTS FOR VISUAL/SPATIAL REASONING
Talk to language and cognition seminar, Birmingham, Oct 2003
Available in two formats using Postscript and PDF here:
Abstract
This is yet another set of slides about the role of vision and
spatial understanding in reasoning, but with especial emphasis on
affordances and the fact that since the possibilities for
action and the affordances are different at different spatial scales,
and in different contexts, our understanding of space will have
different components concerned with those different scales and contexts.
For many years, like many other scientists, engineers and philosophers, I have been writing and talking about "information-processing" systems, mechanisms, architectures, models and explanations, e.g.:
- My 1978 book The Computer Revolution in Philosophy, now online here: http://www.cs.bham.ac.uk/research/cogaff/crp/ (especially chapter 10).
- A. Sloman, (1993) The mind as a control system, in Philosophy and the Cognitive Sciences, Cambridge University Press, Eds. C. Hookway & D. Peterson, pp. 69--110.
Online here: http://www.cs.bham.ac.uk/research/cogaff/
Since the word "information" and the phrase "information-processing" are both widely used in the sense in which I was using them, I presumed that I did not need to explain what I meant. Alas, I was naively mistaken:
The conceptual confusions related to these notions lead to spurious debates, often at cross-purposes, because people do not recognize the unclarity in their concepts and the differences between their usages and those of other disputants. I found evidence for this at two recent workshops I attended, both of which were in other ways excellent: the Models of Consciousness Workshop in Birmingham and The UK Foresight Interaction workshop in Bristol, both held in the first week of September 2003.
- Not everyone agrees with many things now often taken as obvious, for instance that all organisms process information.
- Some people think that "information-processing" refers to the manipulation of bit patterns in computers.
- Not everyone believes information can cause things to happen.
- Some people think that talk of "information-processing" involves unfounded assumptions about the use of representations.
- There is much confusion about what "computation" means, what its relation to information is, and whether organisms in general or brains in particular do it or need to do it.
- Some of the confusion is caused by conceptual unclarity about virtual machines, and blindness to their ubiquity.
What I heard in that week, often heard in previous discussions, finally provoked me to bring together a collection of points in "tutorial" mode. Hence these slides, developing a number of claims, including those listed below.
This is work in progress. Comments and criticisms welcome. The presentation will be updated/improved from time to time. These slides are closely related to a presentation attacking the notion of 'symbol grounding' and proposing 'symbol tethering' instead. (There are also older slides attacking the notion of 'symbol grounding' (Talk 14).)
- "Information" (in the sense that refers to meaning or content, not the Shannon information-theoretic sense) is theoretical notion which, like "energy" cannot be explicitly defined in terms of unproblematic pre-theoretical concepts.
- These, like all theoretical concepts, are partly defined by a web of relationships to other concepts in a theory or collection of related theories -- and as the theories change the concepts change.
- Biological organisms all process information in that sense, but they vary in the variety of things they can do with information, the forms in which they encode it, the mechanisms used and the architectures in which the information is manipulated.
- Computers are best thought of as just another type of information processor (when they are working -- not when switched off!), which have more in common with some aspects of natural information processing than with others.
- Many of the processes occur in virtual machines rather than in physical machines, though they are all (ultimately) implemented in lower-level machines, some biological, some social, some artificial.
- The same physical computer can, at different times, instantiate different information processors (e.g. when running different operating systems, or different programs), whereas in biological organisms there is much closer coupling between the physical design and many of the types of information processing that go on (e.g. in cell-repair, digestion, low-level motor control, hormonal control, immune system processes).
- Nevertheless, in humans, and many other animals, the same physical system can run very different virtual machines concerned with perceiving, thinking about, explaining, predicting and interacting with a physical, biological, social, political, etc. environment. E.g. despite much that is in common between virtual machines in a typical human adult and a typical 5 year old child, there are also many differences, produced by decades of learning, and cultural absorption, which may also lead to great differences between adult virtual machines, e.g. in a ballet dancer, a composer of symphonies, a jazz musician, a brick-layer, a philosopher and a quantum physicist.
- Most people who discuss issues relevant to natural or artificial information processing systems do not have enough knowledge of what virtual machines are, how they are implemented in lower level virtual or physical machines, or how virtual machine events can be causes. Software engineers understand these matters and use them in their work every day, but this is craft knowledge and they do not articulate it explicitly in a manner that clarifies the philosophical issues.
- As a philosophical software engineer I have tried to explain things in a way that will, I hope, clarify some debates in philosophy, AI, cognitive science, psychology, neuroscience, and biology.
I also have some online notes on What is information? Meaning? Semantic content?
Now a book-chapter:
What's information, for an organism or intelligent machine? How can a machine or organism mean?, in
Information and Computation, Eds. G. Dodig-Crnkovic and M. Burgin, World Scientific, New Jersey,
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#905
DRAFT INCOMPLETE SET OF SLIDES (3 Jun 2003).
Available in two formats using Postscript and PDF here:
Most people think that because they experience and talk about consciousness they have a clear understanding of what they mean by the noun "consciousness". This is just one of many forms of self-deception to be expected in a sufficiently rich architecture with reflective capabilities that provide some access to internal states and processes, but which could not possibly have complete self-knowledge. This talk will approach the topic of understanding what a mind is from the standpoint of a philosophical information-engineer designing minds of various kinds.
A key idea is that besides physical machines that manipulate matter and energy there are virtual machines that manipulate information, including control information. A running virtual machine (for instance a running instance of the Java virtual machine) is not just a mathematical abstraction (like the generic Java virtual machine). A running virtual machine includes processes and events that can interact causally with one another, with the underlying physical machine, and with the environment. People rely on the causal powers of such virtual machines when they use the internet, use word processors or spelling checkers, or use aeroplanes with automatic landing systems. So they are not epiphenomenal.
Such a virtual machine may be only very indirectly related to the underlying physical machine and in particular there need not be any simple correlations between virtual machine structures and processes and physical structures and processes. This can explain some of the alleged mystery in the connections between mental entities and processes and brain entities and processes.
We'll see how some designs for sophisticated information-processing virtual machines are likely to produce systems that will discover in themselves the very phenomena that first led philosophers to talk about sensory qualia and other aspects of consciousness. This can serve to introduce a new form of conceptual analysis that builds important bridges between philosophy, psychology, neuroscience, biology, and engineering. For instance, qualia can be accounted for as internally referenced virtual machine entities, which are described using internally developed causally-indexical predicates that are inherently incommunicable between different individuals.
All this depends crucially on the concept of a virtual machine which despite being virtual has causal powers.
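The claim about internally referenced, incommunicable qualia can also be given a toy rendering. The sketch below is my own invention, purely illustrative: an agent whose self-monitoring coins internal tokens for its own states, so comparisons between tokens are well defined inside one running instance but undefined across instances.

    import itertools

    class Agent:
        def __init__(self):
            self._counter = itertools.count()
            self._labels = {}                     # internal token -> internal state

        def sense(self, signal):
            state = ("activation", signal * 0.5)  # some internal state
            token = f"q{next(self._counter)}@{id(self)}"   # internally coined name
            self._labels[token] = state
            return token                          # all it can report externally

        def same_quale(self, t1, t2):
            return self._labels[t1] == self._labels[t2]

    a, b = Agent(), Agent()
    ta, tb = a.sense(4), b.sense(4)
    print(a.same_quale(ta, a.sense(4)))   # True: intra-agent comparison works
    # a.same_quale(ta, tb) raises KeyError: b's token names nothing inside a.
    # The token's meaning is indexical to the virtual machine that coined it.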
Papers and talks providing background to the presentation can be found here:
http://www.cs.bham.ac.uk/research/cogaff/
For more information on the Association for the Scientific Study of Consciousness, see http://assc.caltech.edu/
Recent research on different layers in an integrated architecture, using differing forms of representation, different types of mechanisms, and different information, to provide different functional capabilities, suggests a way of thinking about classes of possible architectures (the CogAff schema), tentatively proposed as a framework for comparing and contrasting designs for complete systems. An exceptionally rich special case of the schema, H-Cogaff, incorporating diverse concurrently active components, layered not only centrally but also in its perceptual and action mechanisms, seems to accommodate many features of human mental functioning, explaining how our minds relate to many different aspects of our biological niche.

This architecture allows for more varieties of learning and development than are normally considered, and also for more varieties of affective states, including different kinds of pleasures, pains, motives, evaluations, preferences, attitudes, moods, and emotions, differing according to which portions of the architecture are involved, what their effects are within that and other portions of the architecture, what sorts of information they are concerned with, and how they affect external behaviour. These ideas have implications both for applications of AI (e.g. in digital entertainments, or in the design of learning environments), and for scientific theories about human minds and brains.
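For readers who want the schema in a more concrete form, here is one possible encoding in Python (my own; the project does not define the schema as code). It treats the schema as a grid of three columns (perception, central processing, action) crossed with three layers (reactive, deliberative, meta-management), with particular architectures as subsets of the grid.

    COLUMNS = ("perception", "central", "action")
    LAYERS = ("reactive", "deliberative", "meta-management")

    def architecture(cells):
        """An architecture, idealised as the set of (layer, column) cells it fills."""
        assert all(l in LAYERS and c in COLUMNS for l, c in cells)
        return frozenset(cells)

    # A purely reactive, insect-like design occupies only the bottom layer:
    insect = architecture({("reactive", c) for c in COLUMNS})

    # H-Cogaff, to a first approximation, fills the whole grid, with layered
    # perception and action as well as layered central processing:
    h_cogaff = architecture({(l, c) for l in LAYERS for c in COLUMNS})

    print(insect < h_cogaff)   # True: the insect design is a special case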
For more on these ideas see these talks http://www.cs.bham.ac.uk/research/cogaff/talks/
And the Cognition and Affect project papers http://www.cs.bham.ac.uk/research/cogaff/
A relevant paper
Available in two formats using Postscript and PDF here:
Most people think that because they experience and talk about consciousness they have a clear understanding of what they mean by the noun "consciousness". This is just one of many forms of self-deception to be expected in a sufficiently rich architecture with reflective capabilities that provide some access to internal states and processes, but which could not possibly have complete self-knowledge. This talk will approach the topic of understanding what a mind is from the standpoint of a philosophical information-engineer designing minds of various kinds.

We'll see how some designs are likely to produce systems that will discover in themselves the very phenomena that first led philosophers to talk about sensory qualia and other aspects of consciousness. This can serve to introduce a new form of conceptual analysis that builds important bridges between philosophy, psychology, neuroscience, biology, and engineering. It depends crucially on the concept of a virtual machine which despite being virtual has causal powers. Papers and talks providing background to the presentation can be found here:
http://www.cs.bham.ac.uk/research/cogaff/
The claim that the development of computers and of AI depended on the notion of a Turing machine is criticised. Computers were the inevitable result of convergence of two strands of technology with a very long history: machines for automating various physical processes and machines for performing abstract operations on abstract entities, e.g. doing numerical calculations or playing games.

Some of the implications of combining these technologies, so that machines could operate on their own instructions, were evident to Babbage and Lovelace in the 19th century. Although important advances were made using mechanical technology (e.g. punched cards in Jacquard looms and in Hollerith machines used for manipulating census information in the USA), it was only the development of new electronic technology in the 20th century that made the Babbage/Lovelace dream a reality. Turing machines were a useful abstraction for investigating abstract mathematical problems, but they were not needed for the development of computing as we know it.
Various aspects of these developments are analysed, along with their relevance to AI (which will use whatever information-processing technology turns up, whether computer-like or not). I'll discuss some similarities between computers viewed as described above and animal brains. This comparison depends on a number of distinctions: between energy requirements and information requirements of machines, between physical structure and virtual machine structure, between ballistic and online control, between internal and external operations, and between various kinds of autonomy and self-awareness. In passing, I defend Chomsky's claim that humans have infinite competence (e.g. linguistic, mathematical competence) despite performance limitations. Likewise virtual machines in computers.
These engineering ideas, which owe nothing to Turing machines, or the mathematical theory of computation, are all intuitively familiar to software engineers, though rarely made fully explicit. The ideas are important both for the scientific task of understanding, modelling or replicating human or animal intelligence and for the engineering applications of AI, as well as other applications of computers. I think Turing himself understood all this.
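The central engineering idea, a machine operating on its own instructions, is easy to exhibit without any Turing-machine apparatus. The toy interpreter below (illustrative only, with an invented instruction set) keeps its program in the same memory as its data, so one instruction can rewrite another and thereby change the machine's subsequent behaviour.

    # mem holds instructions and data in one store, as in a stored-program machine.
    mem = [
        ("SET", 5, 10),      # mem[5] = 10                  (a data write)
        ("COPY", 5, 3),      # mem[3] = mem[5]: overwrites an instruction!
        ("PRINT", 5, None),
        ("PRINT", 5, None),  # never runs: clobbered by the COPY above
        ("HALT", None, None),
        0,                   # mem[5]: a plain data cell
    ]

    pc = 0
    while True:
        cell = mem[pc]
        op, a, b = cell if isinstance(cell, tuple) else ("HALT", None, None)
        if op == "SET":     mem[a] = b
        elif op == "COPY":  mem[b] = mem[a]   # instructions are just data
        elif op == "PRINT": print(mem[a])
        elif op == "HALT":  break
        pc += 1

Running it prints 10 once and halts early, because the machine has rewritten part of itself: the combination of physical automation and abstract operations that Babbage and Lovelace foresaw.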
The talk is partly based on this paper:
A. Sloman, 'The irrelevance of Turing machines to AI' in Matthias Scheutz, Ed., Computationalism: New Directions MIT Press, 2002. (Also online at http://www.cs.bham.ac.uk/research/cogaff/),
I try to show how a full account of human vision will have to analyse it as a multi-functional system doing very different kinds of processing in parallel, serving different kinds of purposes. These include various kinds of processing that we share with animals that evolved much earlier. In particular there are processes linked to purely reactive mechanisms such as posture control and saccadic triggers, processes providing "chunks" at different levels of abstraction both in the 2-D and 3-D domains, processes providing "parsed" descriptions of complex multi-component structures (e.g. seeing a pair of scissors, reading a sentence), processes categorising types of motion (e.g. watching a swaying branch before jumping onto it, or an approaching predator), processes recognising very abstract functional and causal properties and relations (support, pushing, constraining), processes concerned with detecting various sorts of mental states in other information processors (predators, prey, and conspecifics in social species), and processes concerned with categorising things that don't exist but could exist, e.g. seeing possibilities for action, possible effects of various changes, and other visual "affordances" (generalising J. J. Gibson).

Most research on vision, whether in AI, psychology, or neuroscience, tends to be very narrowly focused on particular tasks requiring particular forms of representation and particular algorithms.
The multi-functional viewpoint presents a framework for trying to bring different research programmes together, posing new, very demanding constraints because of the great difficulty of designing such complex systems in an integrated fashion.
More detailed presentations are in papers in the CogAff directory. Some of the other talks listed here are also relevant.
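As a crude illustration of the multi-functional view (mine, not a model from the talk), the fragment below runs a fast reactive process on every frame of a stand-in "optic array" while a slower chunking process runs only intermittently, both consuming the same input in parallel.

    import random

    def reactive_saccade_trigger(image):
        # Fast, every frame: fire if anything salient appears.
        return "SACCADE" if max(image) > 0.9 else None

    def parse_scene(image):
        # Slow "chunking": label cells at a higher level of abstraction.
        return [(i, "edge" if v > 0.5 else "surface") for i, v in enumerate(image)]

    random.seed(0)
    for frame in range(6):
        image = [random.random() for _ in range(8)]  # stand-in for optic input
        event = reactive_saccade_trigger(image)      # runs every frame
        if event:
            print(frame, event)
        if frame % 3 == 0:                           # much slower time-scale
            print(frame, parse_scene(image)[:3])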
During the second half of the 20th Century, many Artificial Intelligence researchers made wildly over-optimistic claims about how soon it would be possible to build machines with human-like intelligence. Some even predicted super-human intelligent machines, which might be a wonderful achievement or a disaster, depending on your viewpoint. But we are still nowhere near machines with the general intelligence of a child, or a chimpanzee, or even a squirrel, although many machines easily outperform humans in very narrowly defined tasks, such as playing certain board games, checking mathematical proofs, solving some mathematical problems, solving various design problems, and some factory assembly-line tasks.

This talk attempts to explain why, despite enormous advances in materials science, mechanical and electronic engineering, software engineering and computer power, current robots (and intelligent software systems) are still so limited. The main reason is our failure to understand what the problems are: what collection of capabilities needs to be replicated. We need to understand human and animal minds far better than we do. This requires much deeper understanding of processes such as perception, learning, problem-solving, self-awareness, motivation and self-control. We also need to extend our understanding of possible architectures for information-processing virtual machines. I shall outline some of the less obvious problems, such as problems in characterising the tasks of visual perception, and sketch some ideas for architectures that will be needed to combine a wide variety of human capabilities. This has many implications for the scientific study of humans, and also practical implications, for instance in the teaching of mathematics. It also has profound implications for philosophy of mind.
This is a first draft of a talk on interface design that I expect to go on improving over time. It is in part motivated by hearing many talks on interface design that fail to pay any attention to questions about the kinds of information processing mechanisms that humans use when interacting with machines (or with one another). This often leads to bad designs.
This presentation gives an introduction to philosophy of science, though a rather idiosyncratic one, stressing science as the search for powerful new ontologies rather than merely laws. You can't express a law unless you have an ontology including the items referred to in the law (e.g. pressure, volume, temperature). The talk raises a number of questions about the aims and methods of science, about the differences between the physical sciences and the science of information-processing systems (e.g. organisms, minds, computers), whether there is a unique truth or final answers to be found by science, whether scientists ever prove anything (no -- at most they show that some theory is better than any currently available rival theory), and why science does not require faith (though obstinacy can be useful). The slides end with a section on whether a science of mind is possible, answering yes, and explaining how.

See also presentations on virtual machines, e.g. my talk at WPE 2008.
My presentation is now available in two versions: original and updated (Dec 2016 -- slight reformatting and a few new items added)
A more detailed record of the meeting, with slides of other speakers and pictures, can be found here: http://www.aiai.ed.ac.uk/events/ccs2002/
This paper is concerned with some methodological and philosophical problems related both to the long-term objective of building human-like robots (like those 'in the movies') and to short- and medium-term objectives of building robots with capabilities of more or less intelligent animals. In particular, we claim that organisms are information-processing machines, and thus information-processing concepts will be essential for designing biologically-inspired robots. However, identifying relevant concepts is non-trivial, since what an information processor is doing cannot in general be determined simply by observing it. A phenomenon that we label 'ontological blindness' often gets in the way. We give some examples to illustrate this difficulty. Having a general framework for describing and comparing agent architectures may help. We present the CogAff schema as a first draft framework that can be used to help overcome some kinds of ontological blindness by directing research questions.

The full paper is at the Cognition and Affect web site.
Evolution, the great designer, has produced minds of many kinds, including minds of human infants, toddlers, teenagers, and minds of bonobos, squirrels, lambs, lions, termites and fleas. All these minds are information processing machines. They are virtual machines implemented in physical machines. Many of them are of wondrous complexity and sophistication. Some people argue that they are all inherently unintelligible: just a randomly generated, highly tangled mess of mechanisms that happen to work, i.e. they keep the genes going from generation to generation.

I attempt to sketch and defend an alternative view: namely that there is a space of possible designs for minds, with an intelligible structure, and features of this space constrained what evolution could produce. The CogAff architecture schema gives a first approximation to the structure of that space of possible (evolvable) agent architectures. H-CogAff is a special case that (to a first approximation) seems to explain many human capabilities.
By understanding the structure of that space, and the trade-offs between different special cases within it, we can begin to understand some of the more complex biological minds by seeing how they fit into that space. Doing this properly for any type of organism (e.g. humans) requires understanding the affordances that the environment presents to those organisms -- a difficult task, since in part understanding the affordances requires us to understand the organism at the design level, e.g. understanding its perceptual capabilities.
This investigation of alternative sets of requirements and the space of possible designs should also enable us to understand the possibilities for artificial minds of various kinds, also fitting into that space of designs. And we may even be able to design and build some simple types in the near future, even if human-like systems are a long way off.
(This talk is closely related to several of the previous talks, e.g. on emotions, on consciousness, on perception, on architectures.)
There's a brief report on some of this work by Michael Brooks, in the New Scientist, 25 Feb 2009:
http://www.newscientist.com/article/mg20126971.800-rise-of-the-robogeeks.html
Unfortunately it emphasises the engineering potential more than the scientific and philosophical goals -- due to space limitations, I understand.
A revised version of a subset of the presentation was produced in September-November 2007: Talk 49, on model-based semantics and why theory tethering is better than symbol grounding. For most people that will be a better introduction to this topic.
Available in four formats using PDF and Postscript (which may need to be inverted) here:
Abstract
This presentation attacks concept empiricism, the theory that all concepts are abstracted from experience of instances or defined in terms of concepts previously understood, recently re-invented and called "symbol-grounding" theory. The attack is closely related to the philosopher Kant's attack on concept empiricism, when he argued that concepts are required in order to have experience, and therefore not all concepts can be derived from experience. Within this framework we explain how a person blind from birth can understand colour concepts, for example.

A newer talk on 'Varieties of Meaning' presents additional arguments and explains some of the ideas in more detail.
Several other presentations here (e.g. the presentation on information processing virtual machines) are also relevant.
A related discussion paper (HTML) asks how a learner presented with a 2-D display of a rotating Necker cube could develop the 3-D ontology as providing the best way to see what's going on:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/nature-nurture-cube.html
(including pointers to some online rotating cubes!)
A simpler example concerns continuously moving linear objects projected onto a 2-D discrete array (a runnable sketch follows the list below):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/simplicity-ontology.html
Related discussion papers and presentations on the CoSy robot project web site include
- Orthogonal Recombinable Competences Acquired by Altricial Species (Blankets, string, and plywood) (HTML)
- Sensorimotor vs objective contingencies (HTML)
- Natural and artificial meta-configured altricial information-processing systems (PDF)
(A journal paper, with biologist Jackie Chappell, to appear in IJUC).
- 'Ontology extension' in evolution and in development, in animals and machines (PDF presentation)
- And the euCognition wiki, with opposing papers on symbol grounding here: Symbol tethering: the myth of symbol grounding (HTML)
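Here is the promised sketch of the simpler example: a reconstruction from the description above (not the code behind that web page) in which a line moving continuously is sampled onto a discrete 2-D array. The learner's problem is that the succession of discrete frames is most economically explained by positing a single continuously moving object.

    def frame(x_continuous, width=10, height=3):
        col = round(x_continuous) % width       # the discretisation step
        return ["".join("#" if c == col else "." for c in range(width))
                for _ in range(height)]

    x = 0.0
    for t in range(5):
        print("\n".join(frame(x)), "\n")
        x += 1.7                                # continuous, non-integer motion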
Two papers written a few years before Harnad's symbol grounding paper presented a draft version of a theory explaining how a machine can use symbols to refer to things in a way that does not require causal connection with those things. Both papers presuppose an understanding of the way a formal system can determine a set of Tarskian models.
Old version available in PDF here:
A version (using 'flash') is also available on my 'slideshare.net' space. There are related introductory talks on this web site:
Available in two formats using Postscript and PDF here:
The slides introduce some problems about the relations between virtual machines and physical machines. I attempt to show how the philosophers' notion of "supervenience" is related to the engineer's concept of "implementation", and the computer scientist's notion of "virtual machine". This is closely related to very old philosophical problems about the relationship between mind and matter (or mind and brain).

Virtual machines are "fully grounded" in physical machines without being identical with or in other ways reducible to them.
One popular way of trying to understand virtual machines makes use of a common notion of 'functionalism'. This is often explained in terms of a virtual machine that has a state-transition table. This notion is criticised as inadequate and compared with a more sophisticated notion of a virtual machine that has multiple states of different sorts changing and interacting concurrently on different time-scales: Virtual Machine Functionalism (implicitly taken for granted by software engineers, but unfamiliar to many philosophers and others who discuss functionalism).
Multi-component virtual machines are ubiquitous in our common sense ontology, though we don't normally notice, e.g. when we talk about social, political, and economic processes. Some philosophers argue that virtual machine events are purely "epiphenomenal" and therefore cannot have any effects.
A rebuttal of this view requires a satisfactory analysis of the concept of "cause" -- one of the hardest unsolved problems in philosophy. A partial analysis is sketched, and shown to accommodate parallel causation in hierarchies of virtual machines. This allows mental causes to produce effects. This should be no surprise to software engineers and computer scientists, who frequently build virtual machines precisely because they can have desired effects. Philosophers with no knowledge of computing often find this very hard to understand. A corollary is that the training of philosophers needs to be improved, and probably the training of psychologists also.
See also Talk 5 (IJCAI Tutorial on Philosophy, 2001), and Talk 26 on Information-processing virtual machines.
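The contrast between the two notions of functionalism can be put in schematic code (an illustration of my own, not from the talk). Version (a) is the state-transition-table picture: one global state, one table. Version (b) gestures at Virtual Machine Functionalism: several coexisting states changing on different time-scales and causally influencing one another.

    # (a) Atomic-state functionalism: the whole mind is one state variable.
    TABLE = {("idle", "ping"): "alert", ("alert", "ping"): "alarmed"}
    state = "idle"
    for inp in ("ping", "ping"):
        state = TABLE[(state, inp)]
    print(state)                                 # alarmed

    # (b) Virtual Machine Functionalism: many interacting concurrent states.
    class VM:
        def __init__(self):
            self.percept, self.mood, self.goals, self.tick = None, 0.0, [], 0
        def run_tick(self, inp):
            self.tick += 1
            self.percept = inp                   # changes every tick
            self.mood = 0.9 * self.mood + (inp == "ping")  # decays slowly
            if self.mood > 1.5 and self.tick % 5 == 0:     # slower still
                self.goals.append("investigate") # one state alters another

    vm = VM()
    for _ in range(10):
        vm.run_tick("ping")
    print(round(vm.mood, 2), vm.goals)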
Available in three formats using Postscript and PDF here:
The slides are available in Postscript and PDF here:
See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/aiforschools.html
Many of the other talks overlap with this. Talk 23 is a follow-up to this, as is Talk 25.
The slides are available in Postscript and PDF here:
If reading the files using a postscript viewer, such as "gv", you may need to change the orientation (e.g. to seascape).
The slides are available in Postscript and PDF here:
Much work in AI is fragmented, partly because the subject is so huge that it is difficult for anyone to think about all of it. Even within sub-fields, such as language, reasoning, and vision, there is fragmentation, as the sub-sub-fields are rich enough to keep people busy all their lives. However, there is a risk that results of isolated research will be unsuitable for future integration, e.g. in models of complete organisms, or human-like robots. This paper offers an architectural framework for thinking about the many components of visual systems and how they relate to the whole organism or machine. The viewpoint is biologically inspired, using conjectured evolutionary history as a guide to some of the features of the architecture. It may also be useful both for modelling animal vision and designing robots with similar capabilities.
If reading the files using a postscript viewer, such as "gv", you may need to change the orientation (e.g. to seascape).
These slides were revised in August 2006, partly taking into account ideas from two recent papers with Jackie Chappell:
COSY-TR-0502: The Altricial-Precocial Spectrum for Robots
COSY-TR-0609: Altricial Self-organising Information-processing systems
An International AI Symposium in memory of Sidney Michaelson was organised by the British Computer Society, Edinburgh Branch, on 7th April 2001.
Reviewed here (with pictures).
Abstract
The event ended with a debate on the motion:
"This house believes that robots will have free will"
The review states:

The formal part of the proceedings concluded with a debate. Getting this off the ground was no mean task. Can you imagine getting a bunch of academics to agree what they will debate and who will propose and oppose the motion? The email trail this exercise generated, including debating the voting strategy, became a marathon in itself. However, we achieved agreement, and Harold Thimbleby, Chris Huyck and Yorick Wilks spoke for, and Mike Brady, Aaron Sloman and Mike Burton spoke against the motion "This house believes that robots will have free will". The debate was chaired by Ian Ritchie (recent past president of BCS) who skilfully kept the speakers to time. A vote was taken before and after the debate. Before, the Ayes had a big majority, but at the final count the outcome was even: a good way to end.

Two more serious papers on this topic are here
A picture of the opposing team is here.
The slides are available in Postscript and PDF here:
If reading the files using a postscript viewer, such as "gv" you may need to set the page size to A3.
A revised version was presented at University College London on 19th June 2002 (Gatsby Centre and Institute for Cognitive Neuroscience).
This overlaps with talk 24.
The slides are available in Postscript and PDF here:
In the last decade and a half, there has been a steadily growing amount of work on affect in general and emotion in particular, in empirical psychology, cognitive science and AI, both for scientific purposes and for the purpose of designing synthetic characters, e.g. in games and entertainments.

Such work understandably starts from concepts of ordinary language (e.g. "emotion", "feeling", "mood", etc.). However, these concepts can be deceptive: the words appear to have clear meanings but are used in very imprecise and systematically ambiguous ways. This is often because people use explicit or implicit pre-scientific theories about mental states and processes which are incomplete or vague. Some of the confusion arises because different thinkers address different subsets of the phenomena.
More sophisticated theories can provide a basis for deeper and more precise concepts, as has happened in physics and chemistry following the development of new theories of the architecture of matter which led to revisions of our previous concepts of various kinds of substances and various kinds of processes involving those substances.
In the Cognition and Affect project we have been exploring the benefits of developing architecture-based concepts of mind. We start by defining a space of architectures generated by the CogAff architecture schema, which covers a variety of information-processing architectures, including, we think, architectures for insects, many kinds of animals, humans at different stages of development, and possible future robots.
In this framework we can produce specifications of architectures for complete agents (of various kinds) and then find out what sorts of states and processes are supported by those architectures. Thus for each type of architecture there is a collection of "mental concepts" relevant to organisms or machines that have that sort of architecture.
Thus we investigate a space of architectures linked to a space of possible types of minds, and for some of those minds we find analogues of familiar human concepts, including, for example, "emotion", "consciousness", "motivation", "learning", "understanding", etc.
We have identified a special type of architecture H-Cogaff, a particularly rich instance of the CogAff architecture schema, conjectured as a model of normal adult human minds. The architecture-based concepts that H-Cogaff supports provide a framework for defining with greater precision than previously a host of mental concepts, including affective concepts, such as "emotion", "attitude", "mood", "pleasure" etc. These map more or less loosely onto various pre-theoretical versions of those concepts.
For instance H-Cogaff allows us to define at least three distinct varieties of emotions; primary, secondary and tertiary emotions, involving different layers of the architecture which we believe evolved at different times. We can also distinguish different kinds of learning, different forms of perception, different sorts of control of behaviour, all supported within the same architecture.
A different architecture, supporting a different range of mental concepts might be appropriate for exploring affective states of other animals, for instance insects, reptiles, or other mammals. Human infants probably have a much reduced version of the architecture which includes self-bootstrapping mechanisms that lead to the adult form.
Various kinds of brain damage can be distinguished within the H-Cogaff architecture. We show that some popular arguments based on evidence from brain damage, purporting to show that emotions are needed for intelligence, are fallacious because they don't allow for the possibility of common control mechanisms underlying both tertiary emotions and intelligent control of thought processes. Likewise we show that the widely discussed theory of William James, which requires all emotions to involve experience of somatic states, fails to take account of emotions that involve only loss of high-level control of mental processes without anything like experience of bodily states.
We have software tools for building and exploring working models of these architectures, but so far model construction is at a very early stage.
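A toy rendering of the architecture-based definitions may help (my own simplified encoding in Python; the actual models are explored in the SimAgent toolkit, in Pop-11). The variety of emotion is keyed to the highest architectural layer involved in the disturbance.

    def classify_emotion(layers_involved):
        """Crude architecture-based classification of an emotional episode."""
        if "meta-management" in layers_involved:
            return "tertiary"    # e.g. grief perturbing control of attention
        if "deliberative" in layers_involved:
            return "secondary"   # e.g. anxiety about a predicted outcome
        if "reactive" in layers_involved:
            return "primary"     # e.g. being startled
        return "no emotion"

    print(classify_emotion({"reactive"}))                              # primary
    print(classify_emotion({"reactive", "deliberative"}))              # secondary
    print(classify_emotion({"reactive", "deliberative",
                            "meta-management"}))                       # tertiary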
Further details can be found here http://www.cs.bham.ac.uk/research/cogaff/
The slides are available in Postscript and PDF here:
The slides are modified versions of slides used for talks at a Seminar in Newcastle University in September 2000, at talks in Birmingham during October and December 2000, Oxford University in January 2001, IRST (Trento) in 2001, Birmingham in 2003 to 2007, and York University in Feb 2004.
The SimAgent toolkit, developed in this school since about 1994 (initially in collaboration with DERA) and used for a number of different projects here and elsewhere, is designed to support both teaching and exploratory research on multi-component architectures for both artificial agents (software agents, robots, etc.) and also models of natural agents. Unlike many other toolkits (e.g. toolkits associated with SOAR, ACT-R, PRS) it does not impose a commitment to a particular class of architectures but allows rapid-prototyping of novel architectures for agents with sensors and effectors of various sorts (real or simulated) and many different kinds of internal modules doing different sorts of processing, e.g. perception, learning, problem-solving, generating new motives, producing emotional states, reactive control, deliberative control, self-monitoring and meta-management, and linguistic processing.

The toolkit supports exploration of architectures with many sorts of processes running concurrently, and interacting in unplanned ways.
One of the things that makes this possible is the use of a powerful, interactive, multi-paradigm extendable language, Pop-11 (similar in power and generality to Common Lisp, though different in its details). This has made it possible to combine within the same package support for different styles of programming for different sub-tasks, e.g. procedural, functional, rule-based, object oriented (with multiple inheritance and generic functions), and event-driven programming, as well as allowing modules to be edited and recompiled while the system is running, which supports both incremental development and testing and also self-modifying architectures.
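The toolkit itself is written in Pop-11, so the Python fragment below is only an analogy of its central idea (all names invented for the example): a scheduler gives each agent's rule-based modules a time-slice in turn, so different kinds of processing proceed concurrently over a shared internal database and can interact in unplanned ways.

    class Module:
        def __init__(self, name, rules):
            self.name, self.rules = name, rules  # rules: (condition, action) pairs
        def run_slice(self, agent):
            for condition, action in self.rules:
                if condition(agent):
                    action(agent)

    class Agent:
        def __init__(self, modules):
            self.db = {"energy": 3, "goals": []}  # shared internal database
            self.modules = modules

    def scheduler(agents, cycles):
        for _ in range(cycles):                   # round-robin time-slicing
            for agent in agents:
                for module in agent.modules:
                    module.run_slice(agent)

    metabolise = Module("body",
        [(lambda a: True, lambda a: a.db.update(energy=a.db["energy"] - 1))])
    motives = Module("motives",
        [(lambda a: a.db["energy"] < 5, lambda a: a.db["goals"].append("eat"))])

    agent = Agent([metabolise, motives])
    scheduler([agent], 3)
    print(agent.db)   # {'energy': 0, 'goals': ['eat', 'eat', 'eat']}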
A collaborative project between Birmingham and Nottingham is producing extensions to support distributed agents using the HLA (High Level Architecture) platform.
The talk will give an overview of the aims of the toolkit, show some simple demonstrations, explain how some of it works, and provide information for anyone who wishes to try using it.
The talk may be useful to students considering projects requiring complex agent architectures.
FURTHER INFORMATION
The slides are available in Postscript and PDF here:
Also presented at the University of Surrey, 7 Feb 2001, and in a modified form at a "consultation" between Christian scientists and AI researchers at Windsor Castle, 14-16 Feb 2001.
The slides are modified versions of slides used for talks at ESSLLI in August 2000, at a Seminar in Newcastle University in September 2000, at a seminar in Nottingham University November 2000.
The other main speakers at the Conference were John McCarthy and Marvin Minsky.
The slides attempt to explain (in outline) what an architecture is, what virtual machine functionalism is, what architecture-based concepts are, what the CogAff architecture schema is, what is in the H-Cogaff (Human-Cogaff) architecture, how this relates to different sorts of emotions and other mental phenomena, how architectures evolve or develop, trajectories in design space and niche space, and what some of the very hard unanswered questions are.
And a more detailed specification:
http://www.cs.bham.ac.uk/research/cogaff/manip/
Further papers on the topics addressed in the slides can be found in the Cognition and Affect Project directory http://www.cs.bham.ac.uk/research/cogaff/
Comments and criticisms welcome.
Our Software tools are available free of charge with full sources in the Free Poplog directory: http://www.cs.bham.ac.uk/research/poplog/freepoplog.html
Evolvable virtual information processing architectures for human-like minds (Oct 1999 -- June 2003), described here.
The ideas are being developed further in the context of the EC-Funded CoSy project which aims to improve our understanding of design possibilities for natural and artificial cognitive systems integrating many different sorts of capabilities. CoSy papers and presentations are here.