Warning: Jackie Chappell's section is in two formats, part portrait and part landscape. A PDF viewer will adjust, and printing works as expected.
Jackie Chappell
Mike Denham
Steve Furber
Jeffrey L. Krichmar
Mark Lee
Peter Redgrave
Murray Shanahan
Aaron Sloman
Mark Steedman
Tom Ziemke
1 slide per page (PDF)
4 slides per page (PDF)
title page plus 4 slides per page (PDF)

Abstract
Animals are much more successful than current robots in their ability to gather information from the environment, detect affordances, attribute causes to effects, and sometimes generate individually novel behaviour. What kinds of mechanisms might make this possible? I will discuss different mechanisms for acquiring information in animals, and their strengths and weaknesses given different life histories and niches. I will discuss experiments which have attempted to uncover the extent of animals' abilities to use information from their environment, and the mechanisms that might be used to accomplish this. The development of these kinds of competences (in evolutionary time and over the course of an individual's lifetime) is another interesting problem. Exploration and play seem to be very important for some kinds of behaviour, particularly flexible responses to novel problems, but there is also the possibility that animals come equipped with certain kinds of 'core knowledge', which might help to direct and structure the acquisition of more complex competences.

Biographical statement
Jackie was previously a researcher at the Behavioural Ecology Group, Oxford University, and in September 2004 became a Lecturer in Animal Behaviour in the School of Biosciences, University of Birmingham, UK. After completing her DPhil at the University of Oxford, she spent several years studying various aspects of animal cognition. Most recently, her work has focussed on the cognition of tool manufacturing behaviour in New Caledonian crows. These birds manufacture and use at least three distinct types of tool: hook tools made out of twigs, stepped and tapered tools made from Pandanus leaves, and straight sticks. This behaviour is unique among free-living non-humans because of the use of hooks, the degree of standardisation of the tools, and the use of different tool types. One interesting question is whether tool manufacture is rare because of the scarcity of selection pressure on species to use tools, or whether tool use and manufacture requires advanced cognitive capabilities which most species do not possess.
Since moving to the University of Birmingham, her interests have broadened to encompass investigating the cognitive architecture involved in the perception of affordances and causality, and the way in which this develops ontogenetically and phylogenetically. For example, how do animals integrate information about affordances and relationships discovered during exploration with their pre-existing knowledge?
She is co-author of a paper on The Altricial-Precocial Spectrum for Robots presented at IJCAI-05, and also contributed to the Tutorial on Representation and Learning in Robots and Animals at IJCAI-05.
Abstract
The talk will focus on the objectives of the EPSRC- and EU-funded projects I am involved in, including the role of the neocortical laminar microcircuitry in perception, cognition, and (dare I say it) consciousness. Of particular interest is the question of how the brain uses context to modify perceptual awareness, as illustrated for example by visual illusions.
Biographical statement
Professor of Adaptive and Neural Computation, University of Plymouth.
I took my PhD in geometric dynamical systems theory in 1972 at Imperial College, followed by five years at Imperial as a post-doc and then lecturer, and ten years at Kingston University, where I was Head of the School of Computing from 1984 to 1988. I joined the University of Plymouth in 1988 as a Research Professor. During 1987 I was a Visiting Professor at the University of California, Santa Barbara. In 1991 I founded the Neurodynamics Research Group, which was involved in developing dynamical systems models of information processing in the brain, and the Plymouth Engineering Design Centre, which investigated the application of adaptive computing methods to industrial engineering design problems. In 1994 I formed the Centre for Neural and Adaptive Systems (CNAS) from the work of the Neurodynamics Research Group. By 2003 this comprised eight academic staff.

I am currently head of the Centre for Theoretical and Computational Neuroscience, which was formed as a Research Centre of the University of Plymouth in October 2003, incorporating five of the academic staff from the CNAS and one from the Department of Psychology. The focus of the Centre's research programme is the application of rigorous quantitative approaches, including mathematical and computational modelling and psychophysics, to the study of information representation, processing, storage and transmission in the brain and its manifestation in perception and action.

My external appointments have included serving as a member of the Computer Science Panel for both the 1996 and 2001 HEFCE Research Assessment Exercises. In the past I have acted as a Specialist Advisor to the House of Lords Select Committee on Science and Technology (1984), and as a member of several Engineering and Physical Sciences Research Council committees, including committees in control engineering and engineering design and the two senior committees in the computing and information technology area, the Information Engineering Committee (1981-84) and the Information Technology Advisory Board (1993-94).
In 2000, I was elected to the Board of Governors of the International Neural Network Society (www.inns.org), on which I served until 2003. In 2001 I was invited to serve as one of the founder members of the UK Computing Research Committee (UKCRC). In 2003 I was invited to join the Medical Research Council's Brain Sciences Panel in respect of the MRC's Brain Sciences Initiative. Also in 2003 I was appointed as a member of the Animal Sciences Committee of the Biotechnology and Biological Sciences Research Council.
From 1999 to 2004 I was one of the founding directors and CTO of NeuVoice Ltd (www.neuvoice.com), a "spin-out" company founded in 1999 from research carried out in the CNAS on the human auditory system, which produces state-of-the-art voice recognition products for the mobile communications/computing market.
I was the first chair/moderator of the GC5 steering group between November 2002 and May 2003.
Abstract
We propose a bottom-up computer engineering approach to the Grand Challenge of understanding the Architecture of Brain and Mind as a viable complement to top-down modelling and alternative approaches informed by the skills and philosophies of other disciplines. Our approach starts from the observation that brains are built from spiking neurons and then progresses by looking for a systematic way to deploy spiking neurons as components from which useful information processing functions can be constructed, at all stages being informed (but not constrained) by the neural structures and microarchitectures observed by neuroscientists as playing a role in biological systems. In order to explore the behaviours of large-scale complex systems of spiking neuron components we require high-performance computing equipment, and we propose the construction of a machine specifically for this task - a massively parallel computer designed to be a universal spiking neural network simulation engine.
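To make the notion of a spiking neuron as a reusable component concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the simplest kind of model such a simulation engine would need to run at scale. The parameters are generic textbook values, not figures from the proposal.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of
# the kind of spiking component the abstract treats as a building block.
# All parameters are generic textbook values, not taken from the talk.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.070,
                 v_reset=-0.075, v_threshold=-0.054, r_m=1e7):
    v = v_rest
    spike_times = []
    for t, i_ext in enumerate(input_current):
        # Membrane potential decays toward rest while integrating input.
        v += (-(v - v_rest) + r_m * i_ext) * (dt / tau)
        if v >= v_threshold:              # threshold crossed:
            spike_times.append(t * dt)    # emit a spike,
            v = v_reset                   # then reset the membrane
    return spike_times

# A constant 2 nA input drives the neuron to fire regularly.
print(simulate_lif([2e-9] * 1000))
```

Simulating very large networks of such units in real time, rather than one neuron in a workstation loop like this, is precisely what motivates dedicated massively parallel hardware.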
Biographical statement
Steve Furber is the ICL Professor of Computer Engineering in the Department of Computer Science at the University of Manchester. From 1981 to 1990 he worked at Acorn Computers Ltd, and was a principal designer of the BBC Microcomputer and the ARM 32-bit RISC microprocessor. At Manchester he established the Advanced Processor Technologies research group which has interests in asynchronous logic design, power-efficient computing and hardware support for large-scale neural systems.
Abstract
Without a doubt the most sophisticated behaviour seen in biological agents is demonstrated by organisms whose behaviour is guided by a nervous system. Thus, the construction of behaving devices based on principles of nervous systems may have much to offer. Our group has built a series of brain-based devices (BBDs) over the last 14 years to provide a heuristic for studying brain function by embedding neurobiological principles on a physical platform capable of interacting with the real world. These BBDs have been used to study perception, operant conditioning, episodic and spatial memory, and motor control through the simulation of brain regions such as the visual cortex, the dopaminergic reward system, the hippocampus, and the cerebellum. Following the brain-based model, we argue that an intelligent machine should be constrained by the following design principles: (i) it should incorporate a simulated brain with detailed neuroanatomy and neural dynamics that controls behaviour and shapes memory; (ii) it should organize the unlabeled signals it receives from the environment into categories without a priori knowledge or instruction; (iii) it should have a physical instantiation, which allows for active sensing and autonomous movement in the environment; (iv) it should engage in a task that is initially constrained by a minimal set of innate behaviours or reflexes; (v) it should have a means, called value systems, to adapt the device's behaviour when an important environmental event occurs; and (vi) it should allow comparisons with experimental data acquired from animal nervous systems. Like the brain, these devices operate according to selectional principles through which they form categorical memory, associate categories with innate value, and adapt to the environment. Moreover, this approach may provide the groundwork for the development of intelligent machines that follow neurobiological rather than computational principles in their construction.
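Principle (v) above, the role of value systems in adapting behaviour, can be illustrated with a toy value-modulated Hebbian rule. This is a generic three-factor sketch, not the learning rule actually used in the Darwin series of BBDs.

```python
# Sketch of principle (v): a "value system" gates plasticity, so synapses
# change only when a salient environmental event occurs. This is a generic
# value-modulated (three-factor) Hebbian rule, not the actual rule used in
# the brain-based devices described above.
def value_modulated_hebb(weights, pre, post, value, lr=0.1):
    """weights[i][j] connects pre-synaptic unit i to post-synaptic unit j."""
    for i, x in enumerate(pre):
        for j, y in enumerate(post):
            # Correlated activity changes the weight only in proportion
            # to the value signal (e.g. reward or salience).
            weights[i][j] += lr * value * x * y
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1.0, 0.0], [0.0, 1.0]
w = value_modulated_hebb(w, pre, post, value=1.0)  # salient event: learn
w = value_modulated_hebb(w, pre, post, value=0.0)  # neutral event: no change
print(w)  # [[0.0, 0.1], [0.0, 0.0]]
```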
Biographical statement
Jeffrey L. Krichmar received a B.S. in Computer Science in 1983 from the University of Massachusetts at Amherst, an M.S. in Computer Science from The George Washington University in 1991, and a Ph.D. in Computational Sciences and Informatics from George Mason University in 1997. Dr. Krichmar spent 15 years as a software engineer on projects ranging from the PATRIOT Missile System at the Raytheon Corporation to Air Traffic Control for the Federal Systems Division of IBM. In 1997, he became an assistant professor at The Krasnow Institute for Advanced Study at George Mason University. In 1999, he became a research fellow at The Neurosciences Institute in San Diego, where he is currently a Senior Fellow in Theoretical Neurobiology. Dr. Krichmar and his colleagues at The Neurosciences Institute have successfully constructed brain-based devices, robotic devices whose behavior is controlled by a simulated nervous system, to test theories of the nervous system having to do with perceptual categorization, primary and secondary conditioning, visual binding, motor control, and memory. Dr. Krichmar is the author of approximately 40 scientific articles, has organized international conferences on brain-based robotics, and is chair of a new Robotic Soccer league which involves Segway robots interacting with humans.
Abstract
Much research has explored the issues involved in creating truly autonomous embodied learning agents but only recently has the idea of a developmental approach been investigated as a serious strategy for robot learning. This is now emerging as a vibrant new research area. We examine the goals and methods of Developmental Robotics, and assess the current state of play. We give some requirements for a developmental system and relate these to the UK Grand Challenge 5 (The architecture of mind and brain) in terms of design issues for future robotic systems.
Biographical statement
Mark Lee is Professor of Intelligent Systems in the Department of Computer Science at the University of Wales, Aberystwyth. He is the founding Director of the Centre of Excellence for Advanced Software and Intelligent Systems, and also a founding Director of the MONET European Network of Excellence. His main research interests are Intelligent Robotics and Model-Based Systems and he has collaborated widely, including with many companies in the UK. Current research topics include sensory-motor learning, including schema-based and constructivist methods, developmental learning, and action selection models.
Abstract

As soon as an agent, biological or physical, is provided with two or more parallel processing sensory or motivational systems that can guide movement, there is a problem. Indeed, the same problem arises when a single system has the capacity to represent two or more features that can guide movement. If competing systems/features seek to guide incompatible movements (e.g. approach/avoidance), which one should be given priority? Our supposition is that one of the vertebrate brain's fundamental processing units, the basal ganglia, has evolved to deal with such issues. Throughout the brains of vertebrates, parallel processing sensory, motivational and cognitive systems that can direct movement all provide phasic excitatory inputs to the basal ganglia. In turn, the basal ganglia output nuclei provide returning tonically active inhibitory connections to all input structures. Thus, the architectural principle describing basal ganglia connections with both cortical and sub-cortical systems is one of largely segregated parallel projecting loops. Winner-take-all selection is achieved by selective disinhibition of behavioural systems targeted by basal ganglia output.

Within this general framework, the implications of having sub-cortical motivational systems (basic urges?) competing directly for behavioural expression with cortical systems (intellectual models of the world) will be considered. Additionally, an important quality of adaptive action selection systems is the capacity to adjust response probabilities (selections) based on reinforcement outcome. What is being reinforced by phasic dopaminergic neurotransmission within the basal ganglia is currently a topic of some dispute. Evidence will be considered suggesting that dopaminergic teaching signals play a central role in identifying components of context and behaviour that are critical for causing unexpected biologically significant outcomes; in other words, learning those events for which the agent is responsible.
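The selection-by-disinhibition principle can be caricatured in a few lines: competing channels inject salience, and the output stage lifts its tonic inhibition only from the winner. The sketch below is illustrative only and omits the detailed circuitry (striatum, subthalamic nucleus, GPi/SNr) of the published basal ganglia models.

```python
# Toy rendering of winner-take-all selection by disinhibition. Competing
# behavioural channels send phasic excitation ("salience") into the basal
# ganglia; output nuclei hold every channel under tonic inhibition and
# release only the winner. A caricature, not the published circuit model.
def select_by_disinhibition(saliences, tonic_inhibition=1.0):
    winner = max(range(len(saliences)), key=lambda c: saliences[c])
    # gates[c] is how much inhibition is lifted from channel c:
    # losers stay fully inhibited, the winner is disinhibited.
    gates = [tonic_inhibition if c == winner else 0.0
             for c in range(len(saliences))]
    return winner, gates

# Saliences for, say, 'approach food', 'groom', 'flee predator'.
winner, gates = select_by_disinhibition([0.4, 0.2, 0.9])
print(winner, gates)  # 2 [0.0, 0.0, 1.0]
```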
Biographical statement
Professor of Psychology, University of Sheffield.
For the past 20 years I have been interested in the sensory guidance of movement. The model system I have studied is the rodent midbrain superior colliculus. Anatomical, physiological and behavioural analyses of its relatively direct sensory input and motor output have led to the conclusion that the colliculus is critical for the re-direction of gaze towards, or away from, unexpected, biologically salient events. My work in this area has involved the study of the anatomy and physiology underlying orienting, approach, escape and avoidance, and epileptic seizures. Much of this work has been conducted in collaboration with Paul Dean, Max Westby, Safa Shehab and Shaomei Wang here in Sheffield, and with John McHaffie and Barry Stein at the Wake Forest School of Medicine in North Carolina.

A modern view of the brain is that it represents a 'society' of distributed, parallel processing functional units, each with the capacity to guide or influence movement. The superior colliculus can be seen as one of these units. Recently, in collaboration with Kevin Gurney, Tony Prescott and John Mayhew, I have become interested in the issue of how the superior colliculus might share access to the limited motor and cognitive resources of the brain with other, potentially competing functional systems. In 1999 we proposed that the architecture of the vertebrate basal ganglia is ideally configured to resolve this selection/scheduling problem. We now have several computer simulations of basal ganglia circuitry, including one that can dynamically select the actions of a mobile robot. This work has shown that the basal ganglia architecture has the capacity to generate coherent sequences of action selection which enable the robot to perform a 'purposive' task. This work continues, while collaborations with Mayhew and Paul Overton in Sheffield and with McHaffie, Stein and Terry Stanford in the USA are testing biological hypotheses derived from the computational work.
Abstract
Some contemporary theories posit an intimate link between cognition and consciousness. For example, according to Baars's global workspace theory, the hallmark of consciously processed information is that it involves competition among, and broadcast to, multiple widespread brain regions, while non-conscious information processing is localised. On this account, consciously processed information - because it integrates the activity of massively parallel processing resources, sifting out the relevant contributions given the ongoing situation for the organism - is cognitively efficacious in a way that non-conscious information processing is not. From the perspective of understanding the "architecture of mind and brain", this suggests that the issue of consciousness cannot be ignored, but should be a central element of the research programme of our Grand Challenge.
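As an illustration of the competition-plus-broadcast idea, here is a deliberately minimal workspace cycle. The specialists, their bidding rule and the toy salience measure are invented for the example and bear no relation to any particular published model.

```python
# Minimal sketch of the global workspace idea: specialist processes
# compete for access, and the winner's content is broadcast back to all
# of them. A caricature of Baars's theory, not Shanahan's actual model.
class Specialist:
    def __init__(self, name):
        self.name = name
        self.inbox = []   # receives every broadcast

    def bid(self, stimulus):
        # Toy salience: how strongly this specialist matches the input.
        return sum(1 for word in stimulus.split() if word in self.name)

def workspace_cycle(specialists, stimulus):
    # Competition: the most strongly activated specialist wins...
    winner = max(specialists, key=lambda s: s.bid(stimulus))
    # ...and broadcast: its content becomes globally available, whereas
    # losing coalitions remain local (non-conscious, on this account).
    for s in specialists:
        s.inbox.append((winner.name, stimulus))
    return winner.name

agents = [Specialist("visual motion"), Specialist("auditory pitch")]
print(workspace_cycle(agents, "sudden motion in the periphery"))  # visual motion
```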
Biographical statement
Dr. Murray Shanahan is Reader in Computational Intelligence in the Department of Computing, Imperial College London. He is on the steering committee for UKCRC Grand Challenge 5, "The Architecture of Mind and Brain", of which he is currently the acting chair. He has written many articles on logic-based artificial intelligence, and a well-known book on the Frame Problem. His main research interests at present are cognitive robotics, spatial cognition, brain-inspired cognitive architectures, and computer models of consciousness.
Abstract

I regard explaining vision as the hardest unsolved problem in AI and psychology. In part that's because identifying the functions of vision is so difficult. What are the functions of vision? There are many AI and robot systems that include a small set of visual abilities, e.g. the ability to analyse static or changing video images, usually in a very limited way, e.g. identifying instances of a few types of objects (e.g. vehicles), or tracking moving objects treated merely as blobs, or locating a robot relative to a previously stored map. A vast amount of research is driven by benchmark-based competitions which use arbitrarily selected collections of tests, based on human performances that are not understood at all, and which do not relate vision to its animal functions in enabling and controlling actions in a 3-D world.

In contrast, in humans and many animals, vision involves a rich and deep variety of functions, including perceiving static and changing 3-D structures, perceiving many kinds of positive and negative affordances, controlling actions both ballistically and online, and (at least in humans) interpreting the intentions of others, reading text and music, interpreting gestures, understanding how some mechanism works, solving mathematical problems with the aid of diagrams, and many more.
I previously thought (like many others) that most of these functions could be explained in terms of perception of structure at different levels of abstraction processed concurrently, from which perception of affordances (information about what is and is not possible) could arise. Recently, while working on 3-D manipulation tasks for the CoSy robot project, I realised that most normal perception is not of structures but of complex processes (represented at different levels of abstraction concurrently). For instance, as two objects move in relation to each other, each typically has parts that move in relation to other parts and to parts of the other object. Thus we are surrounded by "multi-strand" processes.

This simple observation has profound implications regarding requirements for explanatory models, which I shall attempt to explain. It is closely related to the Emulation Theory of Representation presented by Rick Grush in BBS 2004. In particular, detailed analysis of requirements for such capabilities sheds light on the variety of types of learning that need to occur, e.g. as a result of active and playful exploration of the environment, and also points to some deep requirements for cognition that are ignored by many researchers who emphasise the importance of embodiment, for example the requirement to perceive "vicarious affordances" (affordances for others, or for oneself in the past or future).
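One way to make the "multi-strand process" idea concrete is as a data structure: a perceived episode as a bundle of concurrent relational strands at more than one level of abstraction. The sketch below, with invented relation names and levels, only illustrates the representational requirement; it is not a proposed model.

```python
# Illustrative data structure for a "multi-strand" process: a perceived
# episode is a set of concurrent strands, each a changing relation between
# parts, represented at more than one level of abstraction. The relation
# names and levels are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Strand:
    subject: str    # e.g. "fingers", or more abstractly "agent"
    relation: str   # e.g. "approaching", "closing around"
    obj: str        # e.g. "cup handle"
    level: str      # "metrical" vs "qualitative" abstraction

@dataclass
class MultiStrandProcess:
    strands: list = field(default_factory=list)

    def at_level(self, level):
        # The same episode viewed at one level of abstraction.
        return [s for s in self.strands if s.level == level]

episode = MultiStrandProcess([
    Strand("fingers", "closing around", "cup handle", "metrical"),
    Strand("hand", "approaching", "cup", "metrical"),
    Strand("agent", "acquiring", "drink", "qualitative"),
])
print([s.relation for s in episode.at_level("qualitative")])  # ['acquiring']
```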
An incomplete overview of background ideas for my talk can be found in a PDF presentation on vision as perception of processes, a PDF presentation on Two views of child as scientist: Humean and Kantian, and this web page on Orthogonal recombinable competences acquired by altricial species.
Biographical statement
Aaron Sloman studied Mathematics and Physics in his first degree in Cape Town (1956), then did a DPhil in philosophy in Oxford (1962) after flirting with mathematical logic for a while. A few years later he decided that the best way to do philosophy was to do AI, and ever since his first AI publication at IJCAI 1971, pointing out that the logicist AI strategy should be seen as part of a broader research programme, he has been exploring a variety of aspects of the task of designing a functioning mind. Some of the ideas were reported in a 1978 book, The Computer Revolution in Philosophy, others in conference, workshop and journal papers, and in recent years on the Cognition and Affect web site. Besides vision he has worked on architectures, on diagrammatic reasoning, on the role of affect in intelligent systems (a too-fashionable topic), on consciousness (how best to study it by not mentioning it), on causation in virtual machines, on relations between design space and niche space, on the precocial (genetically preconfigured) to altricial (meta-configured) spectrum for animals and robots, and on languages and tools for AI teaching and research. He was elected a fellow of AAAI in 1991, and shortly after an honorary life fellow of SSAISB and a fellow of ECCAI. He is now formally retired but working full time at the University of Birmingham, including promoting GC5.
Abstract

For both neuro-anatomical and theoretical reasons, it has been argued for many years that language and planned action are related. I will discuss this relation using a formalization related to those used in AI planning, drawing on linear and combinatory logic. This formalism gives a direct logical representation for the Gibsonian notion of "affordance" in its relation to action representation. Its relation to universal syntactic combinatory primitives implicated in language is so direct that it raises an obvious question: since higher animals make certain kinds of plans, and planning seems to require a symbolic representation closely akin to language, why don't those animals possess language in the human sense of the term? I will argue that the lexicalization of recursive propositional attitude concepts concerning the mental states of others provides almost all that is needed to generalize planning to fully lexicalized natural language grammar. The conclusion will be that the evolutionary development of language from planning may have been a relatively simple and inevitable process. A much harder question is how the capacity for symbolic planning evolved from neurally embedded sensory-motor systems in the first place.
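The talk's formalism draws on linear and combinatory logic; as a crude stand-in, the sketch below types each affordance by its preconditions and effects and finds plans by composing actions, in generic STRIPS style. The crow-and-stick facts are invented for illustration and are not from the talk.

```python
# Crude stand-in for the formalism sketched above: an affordance is typed
# by what it requires and what it makes true, and planning searches for a
# composition of such actions. Generic STRIPS-style planning, not the
# linear/combinatory-logic formalism of the talk itself.
def plan(state, goal, affordances, depth=5):
    if goal <= state:               # goal facts already hold
        return []
    if depth == 0:
        return None
    for name, (pre, add) in affordances.items():
        # Action is applicable and actually adds something new.
        if pre <= state and not add <= state:
            rest = plan(state | add, goal, affordances, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

# Affordances: name -> (preconditions, effects), as sets of facts.
affordances = {
    "pick-up-stick":    ({"see-stick"},               {"have-stick"}),
    "probe-with-stick": ({"have-stick", "near-hole"}, {"grub-reachable"}),
    "eat-grub":         ({"grub-reachable"},          {"fed"}),
}
affordances = {k: (frozenset(p), frozenset(a)) for k, (p, a) in affordances.items()}
print(plan(frozenset({"see-stick", "near-hole"}), frozenset({"fed"}), affordances))
# ['pick-up-stick', 'probe-with-stick', 'eat-grub']
```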
Biographical statement
Professor of Cognitive Science, School of Informatics, University of Edinburgh.
Also adjunct professor in Computer and Information Science, University of Pennsylvania in Philadelphia.
Mark's PhD is in Artificial Intelligence from the University of Edinburgh. He was a Sloan Fellow at the University of Texas at Austin in 1980/81, and a Visiting Professor at Penn in 1986/87. He is a Fellow of the American Association for Artificial Intelligence, the British Academy, and the Royal Society of Edinburgh. He works in Computational Linguistics, Artificial Intelligence, and Cognitive Science, on aspects of speech, language, and gesture, and is also interested in Computational Musical Analysis and Combinatory Logic. His projects have included Generation of Meaningful Intonation for Speech by Artificial Agents, Animated Conversation, the Communicative Use of Gesture, Tense and Aspect, and Combinatory Categorial Grammar (CCG). He is the author of The Syntactic Process.
Much of Mark's current NLP research addresses probabilistic parsing and issues in spoken discourse and dialogue, especially the semantics of intonation. He is currently working with colleagues in computer animation, using these theories to guide the graphical animation of speaking virtual or simulated autonomous human agents. Some of his research concerns the analysis of music by humans and machines. His recent presentation at the IJCAI Tutorial was on Plans and the Computational Structure of Language.
Abstract
Much research in embodied AI and cognitive science emphasizes the fact that robots, supposedly unlike purely computational models of cognition, are "embodied". However, in this talk it is argued that the physical embodiment that robots share with animals provides only one aspect of the "organismic embodiment" that underlies natural cognition, emotion and consciousness. The talk discusses the living body's relevance to embodied cognition and agency, and outlines a European research project that aims to model the integration of cognition, emotion and bioregulation (self-maintenance) in robots.
Biographical statement
Tom Ziemke is Professor of Cognitive Science in the School of Humanities and Informatics at the University of Skövde. After a German diploma degree in business informatics and a Swedish master's degree in computer science, he took his PhD at the University of Sheffield. Most of his research is concerned with embodied and distributed cognition, in particular theories and neuro-robotic models of how cognitive processes are shaped by the body as well as the material and social environment. He is associate editor of the journals "New Ideas in Psychology" and "Connection Science". From January 2006 he is the coordinator of an EC-funded four-year integrated project called "Integrating Cognition, Emotion and Autonomy" (ICEA), as well as a member of the executive committee of euCognition, the European Network for the Advancement of Artificial Cognitive Systems.