Forum Facebook Page:
https://www.facebook.com/BBCForum
A partial index of discussion notes is at
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
I am not an expert user of Facebook or any other social website. I tried posting this comment on the Facebook BBCForum page, but it did not appear there. So I've decided to make it available here, where it will be easier to correct errors, add further references, and so on, if necessary.
I apologise for a long reply!
I turned on the BBC World Service and found myself listening to an old friend and colleague from Sussex University, Geoffrey Hinton, and two other researchers, talking to the presenter, Bridget Kendall, about deep learning. The technological achievements are very impressive, especially compared with what was possible a few years ago. But most of the work discussed concerned learning, in particular learning by collecting and analysing large amounts of information and looking for recurring patterns at different levels of abstraction.
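For readers who would like a concrete picture of "looking for recurring patterns at different levels of abstraction", here is a minimal sketch in Python. It is entirely my own illustration, not anything from the programme: the layer sizes, the random weights, and the ReLU nonlinearity are arbitrary stand-ins, and in a real deep-learning system the weight matrices would be fitted to very large datasets by gradient descent.

    # Illustrative sketch only: a tiny feed-forward 'deep' network in which
    # each layer re-describes the layer below at a higher level of abstraction.
    # Real systems fit the weight matrices to huge datasets by gradient descent;
    # here the weights are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w):
        # One level of re-description: linear map followed by a nonlinearity.
        return np.maximum(0.0, w @ x)   # ReLU keeps only 'detected' patterns

    # Raw input -> low-level features -> higher-level features -> summary.
    sizes = [16, 32, 8, 4]
    weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]

    x = rng.normal(size=16)             # stand-in for raw sensory data
    for w in weights:
        x = layer(x, w)                 # each pass abstracts the previous layer
    print(x)                            # top-level 'pattern' description

The point of the sketch is only that each layer's "knowledge" is a statistical summary of what came in from below: nothing in such a pipeline, by itself, creates anything.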
In contrast, a great deal of human and animal intelligence is about *creating* new objects, new types of action, new solutions to old problems, new ways of thinking, new languages, new kinds of machinery, new deep theories, new tunes and other works of art, and new kinds of mathematics. It looks to me as if AI (including robotics) is still way behind many animals, including squirrels, crows, elephants, dolphins, octopuses, and young children. This gap was hinted at in the programme, but I think it's important to be very clear about it.
Often what looks like learning in humans is actually creation (but not divine creation!). For example, in the Forum episode there was much discussion of computers learning to use language by being trained on examples. Young children SEEM to learn the language used by others around them. But there is evidence that human children are really doing something else: the main thing they do is *create* languages rather than *learn* them. Try searching for 'Nicaraguan deaf children' to find the most compelling evidence I know, e.g. https://www.youtube.com/watch?v=pjtioIFuNf8
Human twins sometimes create their own private language, for a community of two!
The creation process is cooperative and requires interaction with other language users. Normally the younger creators are in a minority, which constrains which creations survive. So it *appears* that they are learning how older speakers use language. But I think that's an illusion: instead, the creative inventions of the younger speakers are constrained by what the older speakers already do, insofar as one of the tests for a newly created language (or language extension) is how well it works with other language users. That role of selection among creations is very different from a process of looking for patterns in data provided by older speakers. The latter data-driven process would not allow twins or deaf children to create their own new languages.
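To make the contrast vivid, here is a toy sketch, entirely my own illustration and not a model from the linguistics literature. A "creator" invents novel candidate forms and keeps those that succeed in interaction with the existing community; a pure pattern-learner can only ever reproduce forms already present in its data. The forms and the "works with the community" test are invented placeholders.

    # Toy illustration (my own): 'create and select' versus 'learn from data'.
    import random

    random.seed(1)
    community = {"ba", "da", "ga"}       # forms older speakers already use

    def works_with_community(form):
        # Stand-in test: a new form 'works' if the community can parse it,
        # which here just means it starts like something already accepted.
        return any(form[0] == old[0] for old in community)

    def create_and_select(n_tries=20):
        invented = set()
        for _ in range(n_tries):
            form = random.choice("bdgkm") + random.choice("aeiou")  # novel creation
            if works_with_community(form):   # selected by interaction, not copied
                invented.add(form)
        return invented

    def learn_from_data(data):
        return set(data)                 # can never contain a genuinely new form

    print("created:", create_and_select() - community)        # new forms survive
    print("learned:", learn_from_data(community) - community)  # always empty

The creator ends up with forms nobody gave it, shaped but not dictated by the community; the learner, however sophisticated its statistics, ends up with nothing new.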
Moreover, the construction process works only because biological evolution has produced powerful creative mechanisms that other species seem not to have, even though they may be very good at learning and creating other things. For example, male weaver birds (and some females) have an amazing ability to develop competences that enable them to make a nest using up to a few thousand bits of vegetable matter (e.g. strips of grass, long thin leaves, or other materials). A short extract from a BBC video on weaver birds is here:
https://www.youtube.com/watch?v=6svAIgEnFvw
I don't know whether human infants could learn to do something similar.
Likewise, I am not even sure that many adult humans could learn to make weaver-bird nests as quickly as male weaver birds do. (Has anyone tried?)
I am not saying that computer-based machines will never match human or weaver-bird intelligence, only that making that happen will require human developers to acquire a much deeper understanding of animal intelligence than we now have. By 'we' I include psychologists, biologists, geneticists, neuroscientists, linguists, philosophers, education researchers, and AI researchers.
I suspect that will require giving computers kinds of mathematical ability, developed by biological evolution, that they now lack, despite their outstanding abilities in other areas of mathematics. Examples of what they lack include discovering ways of proving theorems in geometry and understanding geometric proofs produced by others, including finding flaws in inadequate proofs.
A taste of these geometrical abilities is presented here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html
and many more in this incomplete draft list of 'toddler theorems':
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
These are all part of the Turing-inspired Meta-Morphogenesis project:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
I am sure that when we have understood all these products of evolution better we'll still find a role for statistical/probabilistic learning, but it will prove much less important than many researchers now think.
Where would we be now if the main function of human intelligence were enabling us to learn to replicate what our forebears have achieved? I suspect most apparent cases of learning will turn out to be speeded-up processes of creation.
When we understand the creative processes and mechanisms of Euclid, Aristotle, Bach, Shakespeare, Beethoven, Newton, Frank Lloyd Wright, the inventors of buttons, hooks and eyes, zips, and Velcro, and toddlers learning to feed themselves and talk (all of which may one day also be demonstrated by robots), I think we'll see that they need abilities to create, manipulate, and use structures of many kinds, including both physical structures and abstract structures. I suspect that the ability to think up new possibilities, try them out, debug those that don't work, and then redesign them will play a far more important part than abilities to find correlations and patterns in records of past achievements.
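Here, purely as a schematic illustration, is the shape of the loop I have in mind. The three placeholder functions (propose, flaw_in, revise) are deliberately hypothetical: they stand for whatever mechanisms actually generate, criticise, and repair candidate structures, whether plans, proofs, tunes, or machines.

    # Schematic create / test / debug / redesign loop, with hypothetical
    # placeholders for the hard parts.
    def design_by_creation(goal, propose, flaw_in, revise, max_rounds=100):
        candidate = propose(goal)            # think up a new possibility
        for _ in range(max_rounds):
            flaw = flaw_in(candidate, goal)  # try it out
            if flaw is None:
                return candidate             # it works: a new creation
            candidate = revise(candidate, flaw)  # debug, then redesign
        return None                          # give up after too many rounds

    # Tiny worked instance: 'design' a number whose square exceeds the goal.
    result = design_by_creation(
        goal=50,
        propose=lambda g: 1,
        flaw_in=lambda c, g: "too small" if c * c <= g else None,
        revise=lambda c, flaw: c + 1,
    )
    print(result)   # 8, the first candidate to survive the test (8*8 = 64 > 50)

Nothing in that loop searches past records for correlations; its power lies entirely in the quality of the proposing, testing, and revising mechanisms, which is exactly where the hard research problems are.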
Geoffrey may claim that the top levels of his deep learning systems are already doing what I describe. But I don't think the kinds of networks he assumes have the right sort of representational power.
But I can't yet demonstrate that! It could take a century or more of further research to find out enough about human or squirrel intelligence to replicate it.
There are some very simple examples in past AI research, e.g. the analogy program of Thomas Evans in 1968, various planning and problem-solving programs, theorem-proving programs, automatic-programming programs, Harold Cohen's painting program AARON, and various others that may already demonstrate important fragments. But it may turn out that we also need new kinds of computers. It depends, for example, on how important the role of chemistry is in animal brains. There are far, far more molecules than neurons in brains!
I don't want to disparage the work reported in the Forum episode. But it needs to be viewed in the context of what we do not yet understand.
Aaron
http://www.cs.bham.ac.uk/~axs