GRADUATION SPEECH, SUSSEX UNIVERSITY
(Expanded version. See section 3.)
21 JULY 2006
Aaron Sloman
School of Computer Science
The University of Birmingham

To my great surprise, the University of Sussex, where I spent 27 years between 1964 and 1991 (except for a year in Edinburgh), decided to award me an honorary degree of Doctor of Science in July 2006, along with three other ex-Sussex people.

At the ceremony Ron Chrisley introduced me and my work with some kind words, and ended with a reference to the claim on my website that I tend to upset vice chancellors and other superior beings. After Ron, I had to make a short speech. I had prepared a few bullet points to be projected on the screen to remind me of what I wanted to say, but for some reason they never appeared, so I talked from memory. I remembered all the points except one, about computing education. Since that is a very important point, I have retrospectively written out an expanded version of the acceptance speech here, with the omitted point included below as point 3, and with other parts expanded and corrected, thanks to reminders from Sussex colleagues who have read and commented on this.

CONTENTS

  1. What Sussex University did for me: Thanks!
  2. Early days -- pre-COGS and COGS:
  3. Our vision for computing in education.
  4. What I've been doing.
  5. Looking Ahead: Information sciences in 3006:
  6. Here are some pictures taken on the day.


  1. What Sussex University did for me: Thanks!

    • I arrived as a young philosophy lecturer in 1964, and although I had already completed a DPhil in 1962 and spent two years as a lecturer in Hull, I was still learning, and Sussex University was a wonderful place to go on learning.

      Sussex was committed to interdisciplinarity and gave me opportunities to meet and learn from many others including physicists, e.g. Tony Leggett (with whom I taught an 'Arts/Science' course for a mixed collection of first year students, and who recently won a Nobel Prize for physics), biologists (e.g. John Maynard Smith and Brian Goodwin), psychologists of several kinds (including Marie Jahoda, Stuart Sutherland and Keith Oatley), social scientists, e.g. Jennifer Platt and Donald Winch (who, by chance, was also awarded an honorary degree, the day before me), mathematicians, e.g. John Kingman who helped me formulate a new theory of the meaning of 'better', and many others.

      In those days there was no pressure on young lecturers to get grants and publish journal articles. (My first grant did not come till about 13 years after my first job started. I did not start writing an article until I had a good idea, and I did not publish anything until the work had been suitably rounded off. And nobody hassled me to publish or get grants.)

      So I had enormous freedom to continue learning by reading, meeting people, and attending lectures and seminars outside my field, including the inspiring lectures by Max Clowes (pronounced 'clues'), who arrived in 1969 as a reader in Artificial Intelligence and persuaded me to start thinking about Philosophy in a new way: by thinking about how to design a working mind (or fragments thereof) instead of merely arguing in the abstract about necessary and sufficient conditions as philosophers normally do. This led to my spending a year in Edinburgh in 1972-3 (thanks to Bernard Meltzer, who obtained a grant to bring me to his Department of Computational Logic).

      As a result of all that, my way of doing philosophy was completely transformed, as later reported in a book mentioned by Ron: The Computer Revolution in Philosophy: Philosophy, science and models of mind.

    • I also owe a tremendous debt to my wife, Alison, who has supported me in every way since 1965, despite having a full and busy life of her own. In particular she very often drew my attention to important biological phenomena relevant to my work, including phenomena that contradicted what I had assumed.

    • Could it happen now?
      Could a new young lecturer appointed in a 21st century university spend a decade or two continuing his or her education, learning about neighbouring disciplines? Probably not. If someone like me started as a new lecturer now, there would simply not be enough time to read and explore widely.

      It is an international problem, as I found, for example, when I gave an invited talk in Bremen in June 2006. There is a dreadful situation world-wide: the mechanisms many governments use to decide how to allocate research funding now put tremendous pressure on everyone to keep publishing and getting grants. This makes it very difficult for a young academic to spend as much time learning as I did after completing a PhD and getting a job, so people have to remain narrow. It would be too risky for a young lecturer to start reading and thinking about topics that may not lead to publishable articles in the near future.

      Allocating funding on the basis of measurable targets produces pressure on people to meet the targets, instead of performing the services the nation needs, and doing the work humanity can most benefit from, as we are finding in schools, the National Health Service, universities, and probably other public service organisations. It also shifts motivation away from collaboration to competition, which is often highly counter-productive.

      This is partly a consequence of using performance metrics to evaluate individuals and determine funding allocations -- as if doing research were like selling cars. A better model for choosing researchers to support is choosing a spouse: deciding what is worth finding out is more like deciding whom to marry -- and opinions can justifiably differ.

      You certainly would not wish to select a spouse on the basis of some government list of desirable features.

      Those of you graduating today who go on to become leading politicians, captains of industry, or senior university managers should do everything in your power to reverse this disastrous process, so as to allow deep and creative research to flourish on its own time scales. Here are some alternative ways of doing things:

      1. Have very deep selection processes for academic staff in universities (more like those used in industrial research laboratories), and provide an ongoing mixture of internal guidance and monitoring, and reviews by visiting external experts, using process-based evaluation, not only performance-based evaluation.

      2. Be far more prepared to take risks with young researchers, especially if they are excellent teachers. There may be some wasted resources as a result. But there may also be much deeper and more creative contributions to our understanding of the universe, including ourselves. And the wasted resources could hardly be worse than the effects of the constant generation of large numbers of narrowly focused, mutually congratulatory, and often half-baked papers produced in order to justify tenure, promotion, research support, etc.

      3. Additional point added Sept 2006
        Governments with lofty objectives often make the mistake of trying to introduce huge changes in large monolithic projects. A recent example is the attempt to develop a massive new IT system for the National Health Service, which has already shown signs of floundering miserably even though it is still in its early stages. Another example is the project to introduce identity cards, and there are many more, e.g. massive projects to change how primary schools work. I recently wrote an article explaining why large, monolithic IT projects will inevitably fail, and proposing an alternative approach based on a collection of loosely coordinated exploratory projects, with mechanisms to ensure that what is learnt in one sub-project, including those that fail, can inform the others (as happened over 30 years in the development of the internet, and is still going on), so that taxpayers' money is not wasted. I believe my analysis is also applicable to many other large projects (including foreign invasions!) but I leave the arguments to another time.

        The analysis is presented in the form of an open letter to my MP Lynne Jones, along with a collection of news items about the NHS/iSoft fiasco, and a collection of comments from leading academics in computer science and others with practical experience, all accessible from here.

        I sincerely hope that future Prime Ministers and other national leaders will understand some of these arguments.


  2. Early days -- pre-COGS and COGS:

    • Ron Chrisley's quotation from my autobiographical note about upsetting 'superior' people reminded me that soon after I came to Sussex, Jennifer Platt and I were both elected to the Senate. We were both fresh young lecturers at the time. At our first Senate meeting, the Vice Chancellor (John Fulton, to whom we all owe a great deal because of his vision of a new kind of university) gave a long speech. Afterwards Jennifer and I both started asking questions. Later we were told that it was unheard of for anyone to question the VC: people just sat and listened. My advice to everyone: never be afraid to question and comment on what anyone says, no matter how eminent. Give them the respect of assuming that they are sufficiently intelligent and flexible to understand and learn from criticism, no matter where it comes from. (Alas, sometimes the respect is misplaced.)

    • Besides Max Clowes there were several great colleagues at Sussex, including Margaret Boden, John Lyons, Alistair Chalmers, and later Steve Hardy and Gerald Gazdar, from all of whom I learnt much, and with whom I was privileged to collaborate in founding what started as a 'Cognitive Studies' contextual programme within the School of Social Sciences and later grew into a separate School of Cognitive and Computing Sciences (COGS).

    • All this depended on strong support initially from Donald Winch who was Dean of the School of Social Sciences when the programme started, and later from Peter Lloyd, an anthropologist who was Dean of SOCS, who urged us to start an undergraduate AI degree, and Margaret McGowan, Professor of French and Pro VC for Arts and Social Studies, who had the vision to see the importance of what we were doing. Thanks to them, and the others who contributed, COGS was an internationally known research and teaching centre by the time I went to Birmingham in 1991, tempted by a research chair. I can't list all the wonderful colleagues I interacted with and learnt from before I left, with some of whom I still collaborate.


  3. Our vision for computing in education.
    (Omitted from my speech on 21st July.)
    During the early 1970s some of us, especially Max Clowes and I, partly inspired by the work of John Holt, Ivan Illich and Seymour Papert, developed a vision of the future of computing in education, which I summarised in the Preface to my 1978 book The Computer Revolution in Philosophy, as follows:
    Another book on how computers are going to change our lives? Yes, but this is more about computing than about computers, and it is more about how our thoughts may be changed than about how housework and factory chores will be taken over by a new breed of slaves.

    Thoughts can be changed in many ways. The invention of painting and drawing permitted new thoughts in the processes of creating and interpreting pictures. The invention of speaking and writing also permitted profound extensions of our abilities to think and communicate. Computing is a bit like the invention of paper (a new medium of expression) and the invention of writing (new symbolisms to be embedded in the medium) combined. But the writing is more important than the paper. And computing is more important than computers: programming languages, computational theories and concepts -- these are what computing is about, not transistors, logic gates or flashing lights. Computers are pieces of machinery which permit the development of computing as pencil and paper permit the development of writing. In both cases the physical form of the medium used is not very important, provided that it can perform the required functions.

    Computing can change our ways of thinking about many things: mathematics, biology, engineering, administrative procedures, and many more. But my main concern is that it can change our thinking about ourselves: giving us new models, metaphors, and other thinking tools to aid our efforts to fathom the mysteries of the human mind and heart. The new discipline of Artificial Intelligence is the branch of computing most directly concerned with this revolution. By giving us new, deeper insights into some of our inner processes, it changes our thinking about ourselves. It therefore changes some of our inner processes, and so changes what we are, like all social, technological and intellectual revolutions.

    This sort of vision led us to develop new kinds of teaching that allowed students to explore ways of giving computers human-like capabilities, in order to deepen their understanding of those capabilities and to teach them to think creatively and analytically about complex structures and processes and how they interact. Our work led to the development of the Poplog system, a multi-language development environment for teaching and research, whose successful marketing helped to fund the growth of COGS in the early years (thanks to the genius of its chief architect, John Gibson, building on earlier work by Steve Hardy and Chris Mellish).
    [Note added 11 Aug 2006:
    A summary of some of what we did to support student-driven learning, first in the Pop-11 system and later in Poplog, is now available here as part of a contribution opposing patents on ideas about e-learning.

    Wikipedia entries (added 2014):
    http://en.wikipedia.org/wiki/Pop11
    http://en.wikipedia.org/wiki/Poplog ]

    The teaching and research tools we developed are now freely available online at the Free Poplog web site. Now, as then, they can be used to help many people, including school children, learn to design, implement, test, debug, analyse, explain, compare and criticise working systems, instead of merely copying and rearranging what others have created, which is what many people use computers for.
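
    To give a flavour of that style of learning, here is a minimal sketch of the kind of interactive session a beginner might have at a Pop-11 prompt (my own illustrative reconstruction, not taken from the original teaching materials): define a procedure, test it immediately, and treat the result as something to question and improve.

        ;;; A first attempt at a procedure to add up a list of numbers.
        define sumlist(numbers) -> total;
            0 -> total;                ;;; Pop-11 assigns with '->'
            for item in numbers do
                total + item -> total;
            endfor;
        enddefine;

        ;;; The print arrow '=>' displays the result, prefixed by '**':
        sumlist([1 2 3 4]) =>
        ;;; prints: ** 10

    Because a definition could be edited and re-run in seconds, a student could form a hypothesis about why a program misbehaved, test it, and refine it -- the same cycle of designing, testing, debugging and explaining working systems described above.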

    This new mode of education also began to flourish in some schools with the spread of BBC micros. Many highly creative teachers inspired new adventurous and disciplined forms of learning in their pupils -- though many teachers had no idea what to do with computers because they had no suitable training.

    ALAS THE DREAM COLLAPSED.
    Politicians, parents, school teachers, and industrialists all started claiming that computers should be used to teach schoolkids how to use the tools that were being used in industry. This was a world-wide folly.

    So instead of learning how to THINK, children all round the world now use the potentially most powerful educational medium that has ever existed merely for the mundane task of learning how to USE the packages that run on Windows on a PC, such as word processors, browsers, email tools, databases and spreadsheets -- most of which will be out of date by the time their own careers are launched.

    As a result many intelligent school leavers who have never encountered programming or artificial intelligence now don't see how computing could possibly be a university degree subject: they think it's like cooking -- you learn to use a computer as you learn to use an oven. I hope to show how wrong that is. But it will not be easy. Most people are now brainwashed into thinking that a computer by definition comes with Microsoft Windows on it, and the idea that people, including people like them, can actually design and modify the tools and packages that run on computers never enters their heads.

    I was intrigued to hear a senior Microsoft person on the radio a couple of weeks ago lamenting the fact that there are so few people coming out of schools wanting to study computing, because they think it is cool to use computer systems but don't realise it is cool to create new ones. He claimed this was seriously damaging the economy. He did not mention why this is happening.

    If some of the people now graduating can be made to understand this message, then perhaps when they are teachers or politicians or parents they will not make the same drastic mistake as was made by the previous generation.

    Alas, this may now be irreversible, world-wide: a great tragedy of our time. Even if politicians recognise the mistake, it will take decades to produce enough teachers who have the competence to teach people to create working systems instead of merely using them.

    I have one small hope regarding a way of reversing this trend. If it makes progress I'll add a note here later. But I am not very hopeful.

    I've elaborated a little on these points in a paper for a conference on Grand Challenges in Computing Education in 2004. The paper is here.

    Added June 2014
    The Computing At School movement, which started in the UK in 2009, is now making a huge difference, but at present the potential for teaching AI in the spirit of Sussex in the 1970s and 1980s seems to have been mostly ignored.
    http://computingatschool.org.uk/


  4. What I've been doing.
    Since those early days when my mind was stretched by the exciting mixture of new ideas that I picked up from many very bright people, I have continued exploring the idea that the best way to do philosophy is by doing Artificial Intelligence: that is, trying to design and test various fragments of working minds, and even trying to implement some simple complete 'toy' minds in order to learn more about the problems of putting the pieces together. I have benefitted enormously from bright and creative students who found holes in my theories and started to plug them, or built new extensions that I had not dreamed of, for instance relating them to theories of attachment in infants and the mechanisms underlying human grief. There are summaries and pointers available via my home page.

    This work, begun at Sussex, and continued since I moved to Birmingham, involves learning from psychologists, neuroscientists, biologists, computer scientists, software engineers, AI researchers and philosophers.

    Recently I have realised that we can learn enormous amounts by looking at children with the mindset of an engineer asking:

        Could I design something that achieved that?

    Often it helps to look at videos rather than live children, because real life moves too fast, whereas a video can be viewed several times. Often you'll notice something important only on the third viewing, and that will generate questions that cause you to go on noticing new things on subsequent viewings of the same video and others.

    An example video of an 11-month-old child feeding his belly, his legs, the carpet and his mind, by eating and playing with yogurt using a spoon, is discussed to illustrate the point in this recent poster presentation (20 slides, PDF) at an AI conference in Boston. For example, at a certain stage he does not realise that if you wish to transfer yogurt from the tub to your leg, it is not enough to load the spoon and then press it on your leg: at that age the child's ontology does not include the idea that the bowl of the spoon prevents the transfer unless the spoon is rotated. There are many other examples. A major challenge in AI, which, for all I know, may take decades to solve, is explaining those learning processes in sufficient detail to allow us to design robots that can learn in similar ways through creative play and exploration. (People who are worried about what robots might do to us may find these notes on Asimov's Laws of Robotics useful.)

    Many people are doing similar work with the practical goal of trying to make smart, useful new machines. My main personal goal is finding out more about humans and other animals, and especially about the complex tradeoffs between knowledge and skills produced by biological evolution and those produced through individual learning. To understand more about what we are, we need to understand a lot better how we actually work. Apart from its intrinsic interest, many practical benefits could follow, including much better forms of teaching that do not turn bright kids off mathematics. There could also be applications in counselling and therapy. But that's just sugar on the strawberries: the value of deeper understanding of how we work does not need to be based on practical applications.

    Alas many very bright potential contributors to such research have been steered away by current forms of computing education in schools (and some universities), though fortunately a few do realise the mistake and use the fleeting opportunities provided by conversion masters degrees and interdisciplinary research projects to compensate partly for the failures of their schooling.


  5. Looking Ahead: Information sciences in 3006:
    I have talked about studying and modelling human minds, which are to a large extent products of biological evolution, including the mechanisms involved in absorbing a culture, learning a language, and developing personal skills and interests.

    The results of such processes are not merely products of evolution, for they depend on the environment, but the processes that produce those products depend heavily on mechanisms provided by evolution.

    People sometimes ask:

        What percentage of us comes from the genes and what percentage from the environment?

    That's a silly question, because we are not made of separate measurable bits of stuff of two kinds. The important question is:

        How do the environment and genetically determined mechanisms interact with each other within an individual in such a way as to produce complex patterns of learning and development?

    Asking how to divide up the credit for the results is just silly. Asking what the various contributions are to the processes producing those results, and asking how they work, is not silly: it is a deep and mostly unsolved problem. Solving it requires deep new explanatory theories of development and learning.

    All of this is an illustration of my main final point: biological processes involve vast amounts of information-processing of many different kinds, including the processes controlling the development of an oak tree from an acorn or a giraffe from a fertilised egg, the digestion of food and the distribution of its components to where they are needed in the body, the detection and repair of injured or malfunctioning components all round the body, the operations of the immune system, and the control processes in ecosystems.

    These processes involve many different sorts of information-processing mechanisms, all produced over millions of years by biological evolution, and most of them still not understood.

    KINDS OF MACHINES
    Up till the last century people mostly thought of machines as things that manipulate matter and energy: e.g. diggers and cranes that move large amounts of earth or pre-constructed parts of buildings, or vehicles that move people; and many kinds of engines that convert chemical, wind or water energy into mechanical energy, or convert mechanical or chemical energy into electrical energy, and so on.

    During the last century we gradually started to understand a third kind of machine: a machine that manipulates information, by acquiring, storing, transforming, analysing, combining, matching and using information. As with matter-manipulation and energy-manipulation, we learnt how to build new information-manipulating machines, namely computers of many types, and those machines were soon used to help us produce the next generation of information-processing machines, so that the process accelerated unbelievably.

    Despite all those technical advances, we currently understand only a tiny subset of the kinds of information-processing machines that exist on earth, because the vast majority are not ones we designed and built: they were 'designed' and built by evolution long before we existed, and we mostly know very little about them, including human brains. For all the advances in new techniques for peering into brains, those techniques mostly don't show us what brains do or how they do it, but merely tell us where some of the action is.

    On a different scale there are information-processing systems that exist in collections of organisms: in swarming insects, in various kinds of symbiotic systems, in human societies and in ecosystems. The processes studied by economists, historians, sociologists, anthropologists and ecologists may include some rearrangements of matter and energy, and even money, but above all else they involve the acquisition, transfer and use of information of many kinds, including the latest tunes downloaded by technologically extended kids.

    So my prediction for 3006 is that by then Informatics, the science that studies information-processing systems of all sorts, will have expanded far beyond the study and use of computers and will have discovered far more about biological and social information-processing systems, though there could still be many unsolved problems about how they work even 1000 years from now.

    If all that is correct, then in 3006 (and maybe even by 2106) Informatics departments will subsume most of biology, neuroscience, psychology, ecology, and the social sciences.

    Maybe some of that new understanding will trickle down to schools, and maybe by then schools will no longer confuse the processes of stretching young minds to the full with the processes of training industry fodder.


  6. Here are some pictures taken on the day.

Outside the Dome:

  1. Picture 1
    Dignitaries, award winners and others pose with the Chancellor, Richard Attenborough, in searing heat, outside the Dome.
  2. Picture 2
    A small subset: Ron, Chancellor, Aaron, Vice Chancellor.
  3. Picture 3
    Dickie, Alison and Aaron
Inside the Dome:
  1. Picture 4
    Chancellor coming up the steps.
  2. Picture 5
    Chancellor speaking
  3. Picture 6
    Ron's eulogy.


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham
Last updated: 19 Nov 2006; 13 Mar 2019 (removed broken links).