
PAST, RECENT AND PENDING PRESENTATIONS
By Aaron Sloman
School of Computer Science
The University of Birmingham, UK.

This is http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
Also accessible as: goo.gl/piY2Lv

These are presentations on topics in philosophy of mind, philosophy of mathematics, philosophy of computation, various aspects of AI, cognitive science, and education, including work on the Birmingham Cognition and Affect Project (1991--, begun earlier at Sussex University), work done in the CoSy Project (2004-8), and its successor, the CogX Project (2008-12). Topics include: consciousness, emotions and other affective states and processes, reasoning, evolution (trajectories in design space and niche space), information processing, artificial intelligence, cognitive science, biology, physics, philosophy of mind, supervenience, philosophy of mathematics, epistemology, virtual machines, implementation, vision and other forms of perception (especially visual perception of affordances), architectures for intelligent systems, forms of representation, software tools for exploring architectures and designing intelligent agents, and to some extent also neuroscience and psychology.

CONTENTS

Note added 25 Sep 2010: The main list is in roughly reverse chronology, but I have started to build a list of pointers to talks on particular topics. This will take some time, so some of the pointers are just stubs, for now.

There is more information organised by topic in my "DOINGS" list but it has not been updated for some time.


CONTENTS: MAJOR TOPICS (a sort of index, to be extended).

Some of these sub-headings will be revised.


CONTENTS: ROUGHLY REVERSE CHRONOLOGY

Below is a summary list of presentations in (roughly) reverse chronological order, followed by more details on each presentation, also in (roughly) reverse chronological order. The summary has links to the details.

The ordering is only "rough" since many of the older talks have been revised recently, and some have also been presented again recently.

WARNING:
Any of my pdf slides found at any other location are likely to be out of date.
I try to keep the versions on slideshare.net up to date, but sometimes forget to
upload a new version.

Google Scholar publications list,
(N.B. DO NOT BELIEVE CITATION COUNTS. They can be inflated or incomplete.)


USE OF LATEX AND TGIF PACKAGE

The diagrams in the slides were almost all produced using the small, fast, versatile, portable, reliable, and free tgif package, available for Linux and Unix systems from here:
http://bourbon.cs.umd.edu:8001/tgif/

My slides are mostly composed in LaTeX, using home-grown macros, importing EPS or JPG files produced by tgif. More recent versions were created directly by pdflatex.

From about Talk 5 (May 2001) I started preparing the slides in a format better suited to filling a typical computer screen, which is wider than it is tall. These need to be viewed in "Landscape" or "Seascape" mode (rotated 90 degrees to the left). Your PDF/Postscript viewer should provide such an option if the wide display format is not used automatically. Paper size is set to A4, which may cause problems printing some of the slides on US Letter paper.

Some documents (including documents in the 'Misc' directory, http://www.cs.bham.ac.uk/research/projects/cogaff/misc/) are produced using HTML, for online viewing, with PDF versions produced using a combination of html2ps and ps2pdf.


SUMMARY LIST OF TALKS,
with more details later

Talk 116: A short introduction to the Meta-configured Genome theory

See the second video in this playlist:
https://www.youtube.com/playlist?list=PLYC-dSilAaYa6Mk1g6hBGUyqCwrIvyOWB
The idea is explained in this document (under revision October 2019)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html
The Meta-Configured Genome
(Multi-layered, multi-stage, parametrised epigenesis)

Talk 115: Pre-recorded Video Presentation at Tehran Conference
April 2019

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sharif-talk.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sharif-talk.pdf
Invited talk on Artificial and Natural Intelligence: Nature and Philosophical Debates
Especially: "How can a physical universe produce mathematical minds? And why are they so hard to replicate in current AI systems?"
Includes link to recorded presentation and online notes on the presentation.
For Sharif University Spring School on AI Philosophy, Ethics, and Society
http://www.en.sharif.edu/

Talk 114: Short Guest talk at East Side Gallery, Nov 2018

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-eastside-2018.html
Toddler space scientists:
Empty space includes billions of potential curved paths through which a wasp or a ball could move. Mathematicians have studied space for centuries, but it is also partially understood by many animals that see things in space, move through space, and manipulate things, including nest-building birds, animals that hunt for, peel, or tear open their food, and pre-verbal human toddlers.
Note 1: Despite appearances in many impressive demos, current AI systems and robots do not share this deep spatial understanding, as pointed out in the talk.
Note 2: "billions" is an understatement!

Talk 113: Video Presentation at AGA workshop IJCAI August 2017

This was an invited talk (presented remotely using a video recording) with an associated web page expanding on various aspects of the video. Available at
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html (Also PDF)
Why can't (current) machines reason like Euclid or even human toddlers?
(And many other intelligent animals)
The web page is still under development!

Talk 112: IJCAI 2016 TUTORIAL

This quarter-day presentation at IJCAI 2016 in New York used a web page rather than slides:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-tut-ijcai-2016.html
International Joint Conference on AI 2016, 10th July 2016
Tutorial T24: If Turing had lived longer, how might he
have investigated what AI and Philosophy can learn
from evolved information processing systems?

Including homage to John McCarthy and Marvin Minsky, two recently deceased founders of AI, both interested in connections between AI and philosophy.
Talk Contents List


Talk 111: Two Related Themes (intertwined).
What are the functions of vision?
How did human language evolve?
(Languages are needed for internal information processing,
including visual processing)

Available HERE (PDF).
     (Updated 22 Jul 2019)
Also on Slideshare in Flash format, but out of date. Unfortunately Slideshare no longer allows uploads to be updated. The version above is newer than the Slideshare version.

http://www.slideshare.net/asloman/evolution-of-46383806
(Guest lectures for MSc conversion students and intercalated year students. 2015)
(School of Computer Science, University of Birmingham)
Video recording of presentation in 2015
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#ailect2-2015 (158MB)
Kindly recorded and made available by Khalid Khattak (a student on the course).
(added 1 Apr 2015).
This was the second of two lectures. Slides for the first are here.
Audio recording of presentation in 2017 (Audio with pauses while typing)
The ideas presented here contradict most theories of the evolution of language, including the delightful theory by Danny Hillis that songs evolved first and were later transformed, endorsed by the distinguished physicist Freeman Dyson in this 2014 lecture (at around 31 min 29 sec):
https://www.youtube.com/watch?v=JLT6omWrvIw

Installed: 28 Mar 2015
Updated: 15 Apr 2015; 11 Nov 2015; 7 Feb 2017

Abstract

This presentation combines major themes from two previous talks:
Talk 52: Evolution of minds and languages.

Talk 93: What's vision for, and how does it work?
From Marr (and earlier) to Gibson and Beyond

If human languages had to be learnt by users, they could not have evolved. Instead they are not learnt, but created -- collaboratively.

Most people think language is essentially concerned with communication between individuals. So they ask the wrong questions about the evolution of language, and give limited answers, concerned only with forms of communication.
A different view of language opens up more questions, requiring more complex and varied answers:
A language is primarily a means by which information can be represented, for any purpose, including internal purposes such as learning, reasoning, formation of intentions and control of actions. That includes perceptual information, e.g. visual information. Instead of asking how communication using language evolved, we can ask:

- For what purposes do organisms use information?
Learning about the environment (e.g. through visual perception), control of actions, selection of goals, formation of plans, execution of plans, making predictions, asking questions, finding answers, communication with other individuals, social teaching and learning....(add your own ideas).

- What types of information do organisms need to acquire and use?
- In what forms (languages) can the information usefully be represented?
- What mechanisms are required for acquisition, storage and use of information?
- A special case: How did languages also come to be used for communication?

Key ideas

Animals need "internal languages" (internal representations/encodings of information) for purposes that are not normally thought of as linguistic.
E.g. perceiving, experiencing, having desires, forming questions, forming intentions, working out what to do, initiating and controlling actions, learning things about the environment (including other agents), remembering, imagining, theorising, designing .....
Without the use of richly structured internal languages, human vision, thought, learning, and planning would be impossible. There would be nothing to communicate.
There would be no need for communicative languages if individuals had nothing to communicate, and had no internal means of storing and using information communicated or acquired by perception or learning.
So, having one or more internal languages is a prerequisite for using an external language. (Sloman, 1978b, 1979)
Internal languages (forms of representation) must therefore have evolved first, and must develop first in individuals: later both can develop in parallel.
This requires a "generalised" notion of a language: a GL.
Internal GLs and external languages (ELs) require forms of representation that are manipulable, with
- structural variability,
- varying complexity (e.g. for information about objects/events of varying complexity),
- compositional semantics (allowing new meanings to be assembled from simpler ones); a minimal sketch of these three properties follows below.
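
The following toy sketch (in Python, invented for this page; it is not part of any of the talks or of the CogAff software) illustrates what those three properties can mean for an internal form of representation: structured expressions of arbitrary depth whose "meanings" are computed from the meanings of their parts. All names and the tiny spatial "scene" are assumptions made purely for illustration.

    # Toy sketch of a "generalised language" (GL): structured internal
    # representations with compositional semantics. Purely illustrative;
    # the names and the tiny spatial domain are invented for this example.

    from dataclasses import dataclass
    from typing import Union

    # Atomic representations: named objects in a toy spatial scene.
    @dataclass(frozen=True)
    class Thing:
        name: str

    # Structured representations: a relation applied to sub-expressions.
    # Because a Rel can contain Things or other Rels, expressions can have
    # arbitrary structural variability and varying complexity.
    @dataclass(frozen=True)
    class Rel:
        relation: str          # e.g. "above", "and"
        parts: tuple           # sub-expressions (Thing or Rel)

    # A toy "scene": object name -> (x, y) position.
    scene = {"cup": (0, 2), "table": (0, 0), "ball": (3, 0)}

    def meaning(expr: Union[Thing, Rel]) -> Union[tuple, bool]:
        """Compositional semantics: the meaning of a complex expression is
        computed from the meanings of its parts plus the combining relation."""
        if isinstance(expr, Thing):
            return scene[expr.name]
        if expr.relation == "above":
            (x1, y1), (x2, y2) = (meaning(p) for p in expr.parts)
            return x1 == x2 and y1 > y2
        if expr.relation == "and":
            return all(meaning(p) for p in expr.parts)
        raise ValueError(f"Unknown relation: {expr.relation}")

    # New meanings assembled from simpler ones, with no new vocabulary:
    fact = Rel("and", (
        Rel("above", (Thing("cup"), Thing("table"))),
        Rel("above", (Thing("ball"), Thing("table"))),
    ))
    print(meaning(fact))   # False: the ball is beside, not above, the table
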
There is a very compressed summary of theories of vision, especially in AI and in Gibson's work (and Marr, in passing).

There are also connections with the examples of "toddler theorems" in the presentation on evolution and development of mathematical capabilities below, and with ideas on learning and development in the work of Piaget and Karmiloff-Smith included in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html


Talk 110 (poster):
What can we learn about animal cognition including biological vision, by studying evolution of varieties of biological information processing?
Available HERE (PDF)
(Subject to change: please keep links rather than copies.)

Presented at:
First ViiHM Workshop on Biological and Machine Vision, 24th and 25th September 2014, Stratford-upon-Avon
Draft abstract
Vision, Action and Mathematics: From Affordances to Euclid.
Part of the Turing-inspired Meta-Morphogenesis project.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
(What I suspect Alan Turing might have done if he had lived longer.)
Based on my 4th contribution to this book (pp 849-857):
http://www.cs.bham.ac.uk/~axs/amtbook

Compare Talk 93:
What's vision for, and how does it work? From Marr (and earlier) to Gibson and Beyond

Installed here: 25 Sep 2014

Talk 109: ARTIFICIAL INTELLIGENCE AND PHILOSOPHY
(Updated 5 Mar 2015)
(Replaces Talk 13)
How AI (including robotics) relates to philosophy, and in some ways improves on philosophy
Available HERE (PDF).
(Subject to change: please keep links not copies.)
Much older version on slideshare.

Presented at:
Lecture 1 for ICY and conversion MSc students, 5 Mar 2015
Video recording of presentation here
Lecture 2 (17 March 2015) is here.

CNCR Journal Club Meeting on Monday 7th October 2013

Installed here: 25 Nov 2013; Updated 5 Mar 2015

Partial Abstract

Presents some of the differences and relationships between philosophy, science, and engineering, illustrated in particular by the use of AI in enriching and testing philosophical concepts and theories.


Talk 108: Why is it so hard to make human-like AI (robot) mathematicians?
Especially Euclidean geometers.
DRAFT Available HERE (PDF).
(To be revised.)

Presented at:
http://www.pt-ai.org/2013
Philosophy and Theory of Artificial Intelligence
21 Sep 2013
Installed: DRAFT PDF will be installed 21 or 22 Sep 2013
(To be revised later.)

Abstract (As originally submitted).

I originally got involved in AI many years ago, not to build new useful machines, nor to build working models to test theories in psychology or neuroscience, but with the aim of addressing philosophical disagreements between Hume and Kant about mathematical knowledge, in particular Kant's claim that mathematical knowledge is both non-empirical (a priori, but not innate) and non-trivial (synthetic, not analytic), and also concerns necessary (non-contingent) truths.

I thought a "baby robot" with innate but extendable competences could explore and learn about its environment in a manner similar to many animals, and learn the sorts of things that might have led ancient humans to discover Euclidean geometry.

The details of the mechanisms and how they relate to claims by Hume, Kant, and other philosophers of mathematics, could help us expand the space of philosophical theories in a deep new way.

Decades later, despite staggering advances in automated theorem proving concerned with logic, algebra, arithmetic, properties of computer programs, and other topics, computers still lack human abilities to think geometrically. This is so despite advances in the graphical systems used in game engines and in scientific and engineering simulations. (What those systems do cannot be done by human brains.)

I'll offer a diagnosis of the problem and suggest a way to make progress, illuminating some unobvious achievements of biological evolution.


Three closely related talks on Meta-Morphogenesis:

Expanded and reorganised versions of slides originally prepared for a tutorial presentation at the 2013 conference on AGI at St Anne's College Oxford.
Video of tutorial
Video recording of the tutorial, made by Adam Ford:
http://www.youtube.com/watch?v=BNul52kFI74
(About 2 hrs 30 mins; audio problem fixed on 14 June 2013.)
Medium resolution version also available on the CogAff web site:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies#m-m-tut

Adam Ford also made available two related interviews recorded at the conference:


Talk 4: HOW TO UNDERSTAND NATURAL MINDS OF MANY KINDS.

This invited talk was presented at a workshop on Adaptive and interactive behaviour of animals and computational systems (AIBACS), organised by EPSRC and BBSRC at Cosener's House, Abingdon, on 28-29th March 2001.

The slides are available in Postscript and PDF here:

If reading the files using a Postscript viewer, such as "gv", you may need to set the page size to A3.


Talk 3: VARIETIES OF AFFECT AND THE CogAff ARCHITECTURE SCHEMA

This talk was presented in the symposium on Emotion, cognition, and affective computing, at the AISB 2001 conference held at the University of York, March 2001.

A revised version was presented at University College London on 19th Jun 2002 (Gatsby Centre and Institute for Cognitive Neuroscience).
This overlaps with Talk 24.

The slides are available in Postscript and PDF here:

New version (June 2002)

Old version (April 2001)

Abstract

In the last decade and a half, there has been a steadily growing amount of work on affect in general and emotion in particular, in empirical psychology, cognitive science and AI, both for scientific purposes and for the purpose of designing synthetic characters, e.g. in games and entertainments.

Such work understandably starts from concepts of ordinary language (e.g. "emotion", "feeling", "mood", etc.). However, these concepts can be deceptive: the words appear to have clear meanings but are used in very imprecise and systematically ambiguous ways. This is often because people use explicit or implicit pre-scientific theories about mental states and processes which are incomplete or vague. Some of the confusion arises because different thinkers address different subsets of the phenomena.

More sophisticated theories can provide a basis for deeper and more precise concepts, as has happened in physics and chemistry following the development of new theories of the architecture of matter which led to revisions of our previous concepts of various kinds of substances and various kinds of processes involving those substances.

In the Cognition and Affect project we have been exploring the benefits of developing architecture-based concepts of mind. We start by defining a space of architectures generated by the CogAff architecture schema, which covers a variety of information-processing architectures, including, we think, architectures for insects, many kinds of animals, humans at different stages of development, and possible future robots.

In this framework we can produce specifications of architectures for complete agents (of various kinds) and then find out what sorts of states and processes are supported by those architectures. Thus for each type of architecture there is a collection of "mental concepts" relevant to organisms or machines that have that sort of architecture.

Thus we investigate a space of architectures linked to a space of possible types of minds, and for some of those minds we find analogues of familiar human concepts, including, for example, "emotion", "consciousness", "motivation", "learning", "understanding", etc.

We have identified a special type of architecture H-Cogaff, a particularly rich instance of the CogAff architecture schema, conjectured as a model of normal adult human minds. The architecture-based concepts that H-Cogaff supports provide a framework for defining with greater precision than previously a host of mental concepts, including affective concepts, such as "emotion", "attitude", "mood", "pleasure" etc. These map more or less loosely onto various pre-theoretical versions of those concepts.

For instance, H-Cogaff allows us to define at least three distinct varieties of emotions: primary, secondary and tertiary emotions, involving different layers of the architecture, which we believe evolved at different times. We can also distinguish different kinds of learning, different forms of perception, and different sorts of control of behaviour, all supported within the same architecture.
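
As a rough illustration of what "architecture-based concepts" can mean, here is a minimal Python sketch, invented for this page and not taken from the project's actual software (which uses the SimAgent toolkit in Pop-11). It shows a three-layer agent in which "primary", "secondary" and "tertiary" emotions are distinguished purely by which layer's control is perturbed; the class names and the classification rule are assumptions made for illustration.

    # Toy sketch of architecture-based emotion concepts in a CogAff-like
    # three-layer agent. Purely illustrative: classes, layer names and the
    # classification rule are invented, not taken from CogAff/SimAgent code.

    from dataclasses import dataclass, field

    @dataclass
    class Layer:
        name: str
        # Events that have interrupted or redirected this layer's processing.
        perturbed_by: list = field(default_factory=list)

    @dataclass
    class Agent:
        reactive: Layer = field(default_factory=lambda: Layer("reactive"))
        deliberative: Layer = field(default_factory=lambda: Layer("deliberative"))
        meta_management: Layer = field(default_factory=lambda: Layer("meta-management"))

        def classify_emotion(self) -> str:
            """Architecture-based classification: the 'kind' of emotion is
            defined by the highest layer whose control is perturbed."""
            if self.meta_management.perturbed_by:
                return "tertiary emotion (loss of control of attention/thought)"
            if self.deliberative.perturbed_by:
                return "secondary emotion (interrupted plans/deliberation)"
            if self.reactive.perturbed_by:
                return "primary emotion (reactive alarm, e.g. startle or freezing)"
            return "no emotional state, on this toy definition"

    agent = Agent()
    agent.reactive.perturbed_by.append("sudden loud noise")
    print(agent.classify_emotion())   # primary emotion ...

    agent.meta_management.perturbed_by.append("intrusive grief-laden thoughts")
    print(agent.classify_emotion())   # tertiary emotion ...
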

A different architecture, supporting a different range of mental concepts might be appropriate for exploring affective states of other animals, for instance insects, reptiles, or other mammals. Human infants probably have a much reduced version of the architecture which includes self-bootstrapping mechanisms that lead to the adult form.

Various kinds of brain damage can be distinguished within the H-Cogaff architecture. We show that some popular arguments based on evidence from brain damage purporting to show that emotions are needed for intelligence are fallacious because they don't allow for the possibility of common control mechanisms underlying both tertiary emotions and intelligent control of thought processes. Likewise we show that the widely discussed theory of William James which requires all emotions to involve experience of somatic states fails to take account of emotions that involve only loss of high level control of mental processes without anything like experience of bodily states.

We have software tools for building and exploring working models of these architectures, but so far model construction is at a very early stage.

Further details can be found here: http://www.cs.bham.ac.uk/research/cogaff/


Talk 2: SIMAGENT: TOOLS FOR DESIGNING MINDS
A toolkit for philosophers and engineers

The slides are available in Postscript and PDF here:

Revised version March 2007

The toolkit is also described in more detail here: http://www.cs.bham.ac.uk/research/poplog/packages/simagent.html
Movie demonstrations of the toolkit are available here: http://www.cs.bham.ac.uk/research/poplog/figs/

The slides are modified versions of slides used for talks at a Seminar in Newcastle University in September 2000, at talks in Birmingham during October and December 2000, Oxford University in January 2001, IRST (Trento) in 2001, Birmingham in 2003 to 2007, and York University in Feb 2004.

Abstract

The SimAgent toolkit, developed in this school since about 1994 (initially in collaboration with DERA) and used for a number of different projects here and elsewhere, is designed to support both teaching and exploratory research on multi-component architectures, both for artificial agents (software agents, robots, etc.) and for models of natural agents. Unlike many other toolkits (e.g. toolkits associated with SOAR, ACT-R, PRS), it does not impose a commitment to a particular class of architectures, but allows rapid prototyping of novel architectures for agents with sensors and effectors of various sorts (real or simulated) and many different kinds of internal modules doing different sorts of processing, e.g. perception, learning, problem-solving, generating new motives, producing emotional states, reactive control, deliberative control, self-monitoring and meta-management, and linguistic processing.

The toolkit supports exploration of architectures with many sorts of processes running concurrently, and interacting in unplanned ways.

One of the things that makes this possible is the use of a powerful, interactive, multi-paradigm, extendable language, Pop-11 (similar in power and generality to Common Lisp, though different in its details). This has made it possible to combine within the same package support for different styles of programming for different sub-tasks, e.g. procedural, functional, rule-based, object oriented (with multiple inheritance and generic functions), and event-driven programming. It also allows modules to be edited and recompiled while the system is running, which supports incremental development and testing as well as self-modifying architectures.
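
As a very rough indication of the style of system the toolkit supports, here is an invented Python analogue (not SimAgent itself, which is written in Pop-11, and not its API): an agent is a collection of condition-action modules that a scheduler runs in interleaved time slices, so that perception, deliberation and reactive control all make partial progress on each cycle and can interact in unplanned ways. All class and module names below are made up for illustration.

    # Invented Python analogue of the agent style described above: NOT the
    # SimAgent toolkit, just a toy scheduler interleaving rule-based modules.

    import random

    class Module:
        """A named condition-action rule, run once per scheduler cycle."""
        def __init__(self, name, condition, action):
            self.name, self.condition, self.action = name, condition, action

        def step(self, agent):
            if self.condition(agent):
                self.action(agent)

    class Agent:
        def __init__(self, name, modules):
            self.name = name
            self.modules = modules
            self.beliefs = {}    # shared internal database used by the modules
            self.actions = []    # actions "sent to effectors" during the run

    def scheduler(agents, cycles):
        """Give every module of every agent one time slice per cycle, so the
        modules' effects interleave rather than running to completion."""
        for _ in range(cycles):
            for agent in agents:
                for module in agent.modules:
                    module.step(agent)

    # Example modules: simulated perception, reactive control, deliberation.
    perceive = Module(
        "perceive",
        condition=lambda a: True,
        action=lambda a: a.beliefs.update(obstacle=random.random() < 0.3))
    react = Module(
        "react",
        condition=lambda a: a.beliefs.get("obstacle", False),
        action=lambda a: a.actions.append("swerve"))
    deliberate = Module(
        "deliberate",
        condition=lambda a: not a.beliefs.get("obstacle", False),
        action=lambda a: a.actions.append("continue plan"))

    robot = Agent("robot-1", [perceive, react, deliberate])
    scheduler([robot], cycles=5)
    print(robot.actions)
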

A collaborative project between Birmingham and Nottingham is producing extensions to support distributed agents using the HLA (High Level Architecture) platform.

The talk will give an overview of the aims of the toolkit, show some simple demonstrations, explain how some of it works, and provide information for anyone who wishes to try using it.

The talk may be useful to students considering projects requiring complex agent architectures.

FURTHER INFORMATION


Talk 1: VARIETIES OF EVOLVABLE MINDS
OR
How to think about architectures for human-like
and other agents
OR
How to Turn Philosophers of Mind into Engineers

This talk was presented in Oxford on 22nd Jan 2001 in the seminar series of the McDonnell-Pew Centre for Cognitive Neuroscience

The slides are available in Postscript and PDF here:

Also presented at the University of Surrey on 7 Feb 2001, and in a modified form at a "consultation" between Christian scientists and AI researchers at Windsor Castle, 14-16 Feb 2001.

The slides are modified versions of slides used for talks at ESSLLI in August 2000, at a Seminar in Newcastle University in September 2000, at a seminar in Nottingham University November 2000.




OTHER COLLECTIONS OF SLIDES

Slides for IBM Symposium March 2002:
Architectures and the spaces they inhabit

Two invited talks were given at a workshop followed by a conference on architectures for common sense, at the IBM T.J. Watson Research Center, New York, on 13th and 14th March 2002. The slides have been collected into a single long file.

The other main speakers at the Conference were John McCarthy and Marvin Minsky.

The slides attempt to explain (in outline) what an architecture is, what virtual machine functionalism is, what architecture-based concepts are, what the CogAff architecture schema is, what is in the H-Cogaff (Human-Cogaff) architecture, how this relates to different sorts of emotions and other mental phenomena, how architectures evolve or develop, trajectories in design space and niche space, and what some of the very hard unanswered questions are.

UK Grand Challenge Project Proposal 2002

Papers and slides prepared for the workshop in November 2002
http://www.cs.bham.ac.uk/research/cogaff/gc

And a more detailed specification:
http://www.cs.bham.ac.uk/research/cogaff/manip/

Presentation at DARPA Cognitive Systems Workshop Nov 2002
How to Think About Cognitive Systems: Requirements and Designs

http://www.cs.bham.ac.uk/research/cogaff/darpa02/




NOTES and related references.

NOTE: Both Postscript and PDF versions of slides should have several coloured slides. If the colours in the Postscript version don't show up when you read it in Netscape, try saving the file and reading it with "gv". (This is probably a problem only on 8-bit displays.) The colours are not crucial: they merely help a little.


Further papers on the topics addressed in the slides can be found in the Cognition and Affect Project directory http://www.cs.bham.ac.uk/research/cogaff/

Comments and criticisms welcome.

Our Software tools are available free of charge with full sources in the Free Poplog directory: http://www.cs.bham.ac.uk/research/poplog/freepoplog.html


ACKNOWLEDGEMENTS


Some of this work arises out of, or was done as part of, a project funded by the Leverhulme Trust on
Evolvable virtual information processing architectures for human-like minds (Oct 1999 -- June 2003)
described here.

The ideas are being developed further in the context of the EC-funded CoSy project, which aims to improve our understanding of design possibilities for natural and artificial cognitive systems integrating many different sorts of capabilities. CoSy papers and presentations are here.


Creative Commons License
This work is licensed under a Creative Commons Attribution 2.5 License.
If you use or comment on our ideas please include a URL if possible, so that readers can see the original (or the latest version thereof).


Last updated: 29 Nov 2009; 7 Jan 2010; 21 Jan 2010; 18 Feb 2010; 8 Mar 2010; 12 Mar 2010; 28 Mar 2010; 13 May 2010; 19 May 2010; 23 Jul 2010; 27 Jul 2010; 8 Aug 2010; 15 Aug 2010; 24 Sep 2010; 26 Sep 2010; 30 Sep 2010; 24 Dec 2010; 16 Jan 2011; 23 Feb 2011; 27 Feb 2011; 5 Apr 2011; 26 Aug 2011; 30 Aug 2011; 16 Sep 2011; 15 Nov 2011; 1 Feb 2012; 21 Sep 2012; 1 Dec 2012; 4 Dec 2012; 1 Jan 2013; 24 Jan 2013; 3 Mar 2013; 20 May 2013; 5 Jan 2014; 14 Jan 2014; ... 14 Aug 2014; ... 12 Oct 2014; ... 21 May 2016; 29 Jul 2017; 3 Oct 2017
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham