Aaron Sloman, Fundamental Questions,
In C. Freksa, M. Kohlhase, and K. Schill (Eds.): KI 2006, LNAI 4314, pp. 439-441, 2007. Springer-Verlag Berlin Heidelberg
DOI: 10.1007/978-3-540-69912-5_33
I was asked to focus on 1966 to 1976, which was a bit difficult as I had not heard of AI before 1969, and I am not very good at doing history.
Below is a slightly modified version of the abstract provided in advance.
An expanded version of the slides used for the presentation is available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk37
A video recording of this talk is available on this site, here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#ki2006
I first heard about AI in 1969 from Max Clowes, then an AI vision researcher, when I was a philosophy lecturer at Sussex University (with a background in mathematics and physics). Gradually I came to realise that the best way to make progress in most areas of philosophy (e.g. philosophy of mind, epistemology, philosophy of language, philosophy of science, philosophy of mathematics, and probably even aesthetics) was to do AI. Attempting (and usually failing) to design and implement working fragments of minds with human-like capabilities is a much more rapid route to understanding the real problems than the typical arm-chair analysis and smoke-filled seminar discussions of philosophers (in those days). That was partly because apriori philosophical analyses are usually based on ignorance of requirements and constraints that must be met by working systems, and also ignorance of the full range of possible mechanisms, architectures, forms of representation, virtual machine types, etc.
For example, philosophical discussions about free will are often based on simplistic assumptions about the nature of human decision making and the kinds of mechanisms that might support such processes. This leads to spurious oppositions between determinism and freedom. By exploring a wide variety of information processing architectures, whether produced by evolution or by engineers and philosopher-designers, we can show that there are more varied and complex cases than philosophers had previously considered, and explain why desirable forms of freedom and responsibility depend on deterministic mechanisms rather than being incompatible with them.
Likewise, by investigating architectures involving multiple concurrent sub-architectures, including some that monitor and modulate others, we can begin to understand more varieties of consciousness and self-consciousness than philosophers were able to dream up in their armchairs.
During those early years it became clear that whereas much AI research in the past had focused on algorithms and representations, it was also necessary to start thinking about how to put all the pieces together in an *architecture* combining multiple kinds of functionality in concurrently active components, especially if we are to explain or model the kind of autonomy and creativity found in humans and other animals.
(This was the topic of Chapter 6 of The Computer Revolution in Philosophy (1978), now online at http://www.cs.bham.ac.uk/research/cogaff/crp/ )
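To make the idea of concurrently active, mutually monitoring components slightly more concrete, here is a minimal Python sketch. It is purely illustrative: the component names and numbers are mine, and it is not drawn from Chapter 6 or any particular published architecture. One component acts continuously while a second runs alongside it, watching its recent behaviour and modulating one of its parameters.

    # Illustrative sketch only: two concurrently active components, one of which
    # monitors and modulates the other. Names and parameters are hypothetical.
    import threading, time, random

    class ReactiveComponent:
        """Acts repeatedly; its 'gain' can be modulated from outside."""
        def __init__(self):
            self.gain = 1.0      # how strongly it responds to each stimulus
            self.history = []    # record of recent actions, visible to monitors

        def step(self):
            stimulus = random.random()
            self.history.append(stimulus * self.gain)

    class MonitorComponent:
        """Watches another component's behaviour and modulates its parameters."""
        def __init__(self, target):
            self.target = target

        def step(self):
            recent = self.target.history[-5:]
            if recent and sum(recent) / len(recent) > 0.6:
                self.target.gain *= 0.8   # damp the reactive layer if it over-reacts

    def run(component, steps, delay):
        for _ in range(steps):
            component.step()
            time.sleep(delay)

    reactive = ReactiveComponent()
    monitor = MonitorComponent(reactive)
    threads = [threading.Thread(target=run, args=(reactive, 50, 0.01)),
               threading.Thread(target=run, args=(monitor, 25, 0.02))]
    for t in threads: t.start()
    for t in threads: t.join()
    print("final gain:", round(reactive.gain, 3))

Even this toy makes the structural point: the reactive component's freedom to respond is not removed by the monitoring component but implemented through it, in an entirely deterministic mechanism.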
Two other philosophers whose interest in AI grew in that period were Dan Dennett, whose book Brainstorms (1978) also attempted to build bridges between the two disciplines, and Margaret Boden, whose two books (Purposive Explanation in Psychology (1972) and Artificial Intelligence and Natural Man (1978)) helped to spread the word to wider audiences. [OUP will shortly publish her new two-volume History of Cognitive Science, which will help to illuminate the early years of AI.] Other philosophers also became interested.
Apart from the impact of AI on philosophy, there was also a need for AI researchers to develop philosophical expertise in order to help them in their work. One reason was that they were often insensitive to the crudeness of the questions they asked (e.g. how can we model emotions? learning? creativity? consciousness?) because they did not know how to analyse complex concepts, and tended (and still tend) to assume over-simple analyses. As a result they often make inflated claims, e.g. to have modelled learning, or emotions, or scientific discovery, when all they have modelled are very simple and shallow special cases.
One example was the strong tendency among many AI researchers in the first decade to think that all reasoning or problem solving had to make use of essentially logical or sentential information structures -- an assumption I challenged in my first AI paper in 1971, claiming that Fregean and analogical modes of representation and reasoning are both important. Many others have made the same point, but I think it is fair to say that whereas logicist AI has many important achievements, there has been little success in modelling visual/spatial/diagrammatic reasoning: mainly because most of the problems of vision are still unsolved in AI, even though there has been a lot of work on sub-problems, such as recognition, tracking and route-finding. There is far more to seeing a spanner than recognising it, as you can tell by watching a 3-year-old trying to use one.
[See http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0603 ]
Another reason for AI researchers to learn philosophy is that old philosophical problems can inspire new AI research. One example is the old philosophical debate between empiricists (e.g. Hume) and apriorists (mainly Kant) which, reformulated in modern terms, leads to investigations of nature-nurture tradeoffs. Unfortunately, many AI theorists just assume that any learning system must start off with as little prior knowledge as possible, and must derive all its concepts by abstraction from experienced instances (concept empiricism/symbol grounding) -- as if proposing that the human genome should discard millions of years of learning about the nature of the environment, unlike all the many animals that start off highly competent at birth. The time is ripe to re-open that discussion in collaboration with biologists studying varieties of animal cognition.
One of the philosophical conflicts between Hume and Kant that drove my own research interests concerned the nature of mathematics. Kant criticised Hume for allowing nothing to exist between the empirical knowledge acquired through the senses and trivial tautologies that are true by definition. Kant thought mathematical discoveries were not empirical, yet genuinely expand our knowledge. He was right, of course.
If we can shift from attempting to model the theorem-proving done by adult mathematicians (which many AI researchers have attempted) to modelling the processes of learning about numbers, shape, motion, and operations such as counting, grouping and constructing things, and of learning to apply such operations to themselves (e.g. counting counting operations), as happens in many human children during their first decade, we may come to understand better both what needs to go on in a robot with human-like intelligence, including mathematical intelligence, and what goes wrong in much mathematical education in primary schools because it is based on incorrect models of learning and discovery. (Piaget tried, but lacked the conceptual tools.)
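The phrase "counting counting operations" can be made concrete with a toy sketch (the function and variable names are mine, and this is not drawn from any particular model of children's learning): the same counting procedure is applied first to a collection of objects and then to a record of its own applications.

    # Toy illustration only: counting objects while recording each counting act,
    # then applying counting to that record itself ("counting counting operations").
    def count(items, log=None):
        """Count items one by one; optionally record each counting act in log."""
        total = 0
        for item in items:
            total += 1
            if log is not None:
                log.append(("counted", item))
        return total

    counting_acts = []                         # record of counting acts performed
    apples = ["apple-1", "apple-2", "apple-3"]
    print(count(apples, log=counting_acts))    # 3: counting the objects
    print(count(counting_acts))                # 3: counting the counting acts themselves

A child who can do both, and can grasp why the two answers must coincide, is making the kind of non-empirical discovery that the Hume-Kant dispute was about.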
There were many technical achievements in AI in the 1970s, many of them concerned with new engineering applications, including the early development of expert systems and of many tools now taken for granted by researchers (e.g. Matlab, Mathematica). A major robotic achievement, now generally forgotten, was Freddy the Edinburgh robot, which could assemble a toy wooden car in 1973, though it could not see and act at the same time. Minsky's frame-systems paper was very influential, and inspired many formalisms and toolkits. Logic programming started to take off.
AI vision research was also starting to get off the ground, at last moving away from pattern recognition. Pioneering work was done, for example, by Barrow and Tenenbaum (published in 1978), and by others working on ways of getting 3-D structure from static or moving image data. However, many did not appreciate the importance of the third dimension and merely tried to classify picture regions -- a task that still occupies far too many researchers who could be doing something deeper.
Gibson's ideas were just beginning to be noticed around that time, especially his emphasis on the importance of optical flow and texture gradients. Some people were already trying to resurrect neural nets. Many worked on new higher-level languages and toolkits (though not architecture toolkits); Prolog was an example. There was much work on natural language processing, including European translation projects and the DARPA speech understanding project. My own vision project (POPEYE), based on a multi-level, multi-processing visual architecture, made some progress, then hit a funding wall. I also started trying, without much success, to get people to think about surveying spaces of possibilities and the tradeoffs therein, instead of (vainly) competing to find the single best solution to a problem.
During the following decade the field started increasingly to fragment, for several different reasons (including rapid growth in numbers), with many bad effects, including killing off some major promising developments (e.g. research on 3-D vision). AI has become far more a collection of narrow specialisms, with most researchers barely aware of anything going on outside their own sub-fields. Perhaps we can now start re-integrating AI, both as engineering and as the most general science of mind. At least the hardware support is more powerful than ever before. For a 'Grand Challenge' proposal see http://www.cs.bham.ac.uk/research/cogaff/gc/ and for a suggested means to re-integrate AI see http://www.cs.bham.ac.uk/research/cogaff/gc/aisb06/sloman-gc5.pdf
[31 May 2006]