I considered signing this Open Letter about the future of AI but decided not to.
I've noticed that some well-known Fellows of the AAAI have not signed, and I wondered whether any of them shared my concerns.
I did not sign, not because it included statements I disagree with, but because
it says nothing about the most important long term aims of AI -- the reasons why
I first got into AI, at IJCAI-2, 1971, when I was a young philosophy lecturer,
pushed by Max Clowes, for whom there is now a tribute and biographical note
here:
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-clowestribute.html
Max had persuaded me that by learning about AI, and joining in doing AI, I would be able to arrive at new and much better answers to old philosophical and scientific questions about the nature of mind, the nature of knowledge, the nature of mathematics, the evolution of mind, how minds develop, what language is, how language works, and many more. Unfortunately those concerns now seem to be of no interest to the majority of AI researchers or research funding organisations, who see AI as solely, or primarily, an engineering discipline, concerned with producing new, useful technology.
The open letter seemed to me to be an inadequate response to current concerns, as if a collection of leading physicists, in response to concerns that research on fundamental physics could lead to very dangerous new experiments and technologies, responded by circulating a letter about all the useful things coming out of physics, without mentioning the main reasons for doing fundamental research.
When I decided that AI provided the best way forward in Philosophy and the science of mind, I did not think the task was easy, and I did not think the answers would be found soon. E.g. in my 1978 book, I considered the following theses:
I wrote (probably in 1977):
and in the chapter on vision:
(I was thinking of washable nappies/diapers held in place by safety pins, not the modern disposable press-on variety!)
But even without having to handle safety pins, would you let a robot do this with your baby in the foreseeable future?
https://www.youtube.com/watch?v=cblA874bplg https://www.youtube.com/watch?v=5fvrDu3v4Lw
Driving cars is much easier. (A completely different set of degrees of freedom and possibilities for disaster?)
Although a lot has been learnt since 1978 (and there has been a huge amount of progress, due mainly to several million-fold increases in the speeds and memory sizes of computers), I don't think my temporary pessimism has been shown to be misplaced, even as regards modelling human mathematical abilities, e.g. of the kinds that led to Euclid's Elements, and the topological insights used in everyday life.
Such reasoning is not about what is probable (or useful) but about structures and processes that are possible or impossible, or relationships that necessarily hold, e.g. the relationship between being a triangle and having internal angles that add up to half a rotation. These have nothing to do with probabilities and statistics (except for the results of the mathematical theories of probability and statistics, on which some great mathematicians have worked -- but not by generalising from data samples).
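For concreteness, the triangle relationship can be written as the familiar Euclidean theorem (given here only as an illustration of necessity, not as a claim about how it was first discovered):

    \[
      \alpha + \beta + \gamma = \pi \qquad \text{(half a full rotation)}
    \]

The classical justification (draw through one vertex the line parallel to the opposite side, so that the three angles reassemble along that line) turns on what must hold in every possible triangle, not on measurements of samples of triangles.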
I don't think modern logic suffices as a basis for the ancient mathematical discoveries either. Even if some of the original results can be modelled in logic, it doesn't follow that Euclid was doing logic.
Example: can any current theorem-proving system even understand this question:
how many equivalence classes of continuous, non-self-crossing, closed curves are
there on a plane, on a sphere, on a torus? -- where equivalence between two
curves in a surface means that one can be continuously deformed into the other
in the surface. Can any current AI system think about the space of possible ways
of deforming one curve continuously into another in the same surface?
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html
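To make the contrast concrete, here is a deliberately crude sketch (purely illustrative, using standard topological facts about the torus, and not offered as a solution to the problem): every class of closed curves on the torus can be labelled by a pair of winding numbers (p, q), two curves can be deformed into one another just when their labels agree (up to reversing direction), and a label can be realised by a non-self-crossing curve only when p and q share no common factor (or both are zero). A few lines of Python can shuffle such labels:

    # Illustrative sketch only: closed curves on the torus represented by their
    # winding-number pairs (p, q). This manipulates labels; it does not model
    # the deformations themselves.
    from math import gcd

    def same_class_on_torus(c1, c2):
        """True if a curve with winding numbers c1 can be continuously
        deformed, within the torus, into one with winding numbers c2."""
        return c1 == c2 or c1 == (-c2[0], -c2[1])

    def realisable_by_simple_curve(c):
        """True if the class labelled (p, q) contains a non-self-crossing
        closed curve."""
        p, q = c
        return (p, q) == (0, 0) or gcd(abs(p), abs(q)) == 1

    print(same_class_on_torus((1, 0), (0, 1)))   # False: meridian vs. longitude
    print(realisable_by_simple_curve((2, 4)))    # False: such a curve must cross itself

Shuffling the labels is trivial; seeing why those labels are appropriate, or imagining the process of deforming one curve into another within the surface, is precisely the competence that no current system models.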
I can fairly quickly get intelligent non-mathematicians to understand this problem, even if they don't all get the answer right for a torus. Our ability to think about and make discoveries about topological and geometrical relationships seems to be deeply connected with our abilities to perceive and make use of affordances, in ways that J. J. Gibson seems not to have noticed.
There have been impressive and useful AI demonstrations of language learning and speech comprehension (sort of), but humans don't learn their first languages -- they create them cooperatively, and usually while in a minority, unlike the Nicaraguan deaf children who went far beyond their teachers in creating a new sign language. (Some twins also create a new language they share.) The Nicaraguan example is presented here: https://www.youtube.com/watch?v=pjtioIFuNf8
There are many impressive vision-based AI demos of tracking, manipulation and detailed 3-D reconstruction from stereo data (which my brain can't do).
But AI systems are not yet able to see the networks of affordances, partial orderings, semi-metrical and topological relationships visible when you walk through a garden or a kitchen. (Abilities that I suspect later evolved to form the basis of discoveries in Euclidean geometry, long before there were mathematics teachers.)
[This may be connected with the fact that vision in humans, and other animals, does not use rectangular frame-grabbers, but very rapidly relocatable fine-to-coarse optical sampling devices, and other not yet understood mechanisms!]
I've never agreed with Hubert Dreyfus and others who claim that the deep explanatory goals of AI CANNOT be achieved in principle, but I still think we are scratching the surface, even as regards articulating the problems; and because of restrictions on the kinds of AI research for which funding is now available, it seems that requirements for getting jobs, tenure and promotion have pushed researchers into a narrow subset of problems -- the ones likely to lead to impressive new demos in a few years, leaving the deep, long-term AI research goals (nearly) as remote as ever. (Unless something is happening that I don't hear about.)
It's possible that advances in the understanding of biological information processing may incidentally help to lay new foundations for AI, of a sort that I suspect would have interested Turing. I don't mean neural nets: I think there may be interesting long term promise in the sorts of biological phenomena discussed by Seth Lloyd in this talk:
https://www.youtube.com/watch?v=wcXSpXyZVuY
though the gap between that and Euclid is still enormous. Moreover, I don't think new mechanisms are enough: we'll need new ideas about architectures, forms of representation, forms of reasoning, and great clarity about the functions of the various biological mechanisms -- functions usually unrelated to popular benchmarks.
We probably also need researchers whose education has been broader, deeper and more varied. That includes learning more kinds of mathematics, and having experience of a much wider variety of programming paradigms. I meet AI researchers who are expert at using Matlab, but could not design and implement a parser, a planner, or a logical reasoner. The same is true of many neuroscientists, psychologists and philosophers. This means there are questions they cannot think about. Many of them have no idea that a virtual machine can have properties that bear little relation to the physical structures of the computer on which it is implemented, or how this can come about. So they are unable to think about some of the important questions about how brains work.
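For concreteness (a purely illustrative sketch, not taken from anyone's research code): the smallest thing I would count as 'a parser' is a recursive-descent recogniser for arithmetic expressions, buildable in a few dozen lines of Python:

    # Illustrative sketch: a minimal recursive-descent parser for arithmetic
    # expressions, producing a nested-tuple parse tree.
    # Grammar:  expr   := term (('+' | '-') term)*
    #           term   := factor (('*' | '/') factor)*
    #           factor := NUMBER | '(' expr ')'
    import re

    def tokenize(text):
        return re.findall(r"\d+|[()+\-*/]", text)

    def parse(tokens):
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def take(expected=None):
            nonlocal pos
            tok = peek()
            if tok is None or (expected is not None and tok != expected):
                raise SyntaxError(f"expected {expected!r}, got {tok!r}")
            pos += 1
            return tok

        def expr():
            node = term()
            while peek() in ('+', '-'):
                node = (take(), node, term())
            return node

        def term():
            node = factor()
            while peek() in ('*', '/'):
                node = (take(), node, factor())
            return node

        def factor():
            if peek() == '(':
                take('(')
                node = expr()
                take(')')
                return node
            return int(take())

        tree = expr()
        if peek() is not None:
            raise SyntaxError(f"unexpected token {peek()!r}")
        return tree

    print(parse(tokenize("2 + 3 * (4 - 1)")))   # ('+', 2, ('*', 3, ('-', 4, 1)))

The toy itself does not matter; the point is that someone who has never constructed a parser, a planner or a reasoner of any kind lacks some of the concepts needed to think about how such competences might be organised in a virtual machine, let alone in a brain.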
Anyhow, I don't disagree with the contents of the open letter, or the closing summary:
I am concerned that the public, and prospective students and future researchers, don't hear leaders in the AI community talking about the deeper scientific and philosophical goals and the related gaps in our understanding, and especially the need for long term fundamental research in AI, including some that may not produce useful new machines in the next few years, or even decades.
I am officially retired (though still tolerated in my department), so I am lucky in not being under pressure to produce grants or publications. I just get on with trying to understand how ancient mathematical capabilities came out of products of biological evolution starting on a planet formed from a cloud of dust -- apparently based on the evolution of layer upon layer of increasingly complex construction kits, many still waiting to be discovered by humans.
Progress might be faster if there were cleverer people working on my problems. The long term contributions of AI to science, philosophy, education, and even engineering might then come much sooner.
I wonder if there are other AI Fellows who did not sign the letter for similar reasons?
Note added 16 Apr 2015, updated 11 Nov 2018
I have discovered that in a letter to W. Ross Ashby, written in 1946, Alan Turing
wrote:
See
https://www.bl.uk/collection-items/letter-from-alan-turing-to-w-ross-ashby
(Thanks to Rodney Brooks for that link. I previously linked to this site, now
defunct:
http://www.rossashby.info/letters/turing.html
)
Aaron Sloman
http://www.cs.bham.ac.uk/~axs
School of Computer Science
University of Birmingham, UK
Main current research:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html