By Aaron Sloman
http://www.cs.bham.ac.uk/~axs
School of Computer Science, University of Birmingham, UK
The Singularity Hypotheses: A Scientific and Philosophical Assessment
Eds. Amnon H. Eden, James H. Moor, Johnny H. Soraker and Eric Steinhart, pp 79--80.
Springer-Verlag, Berlin, Heidelberg, The Frontiers Collection, 2013
Table of contents and some online chapters
Schmidhuber's article is also online here:
https://docs.google.com/file/d/0BwK0OPe_m9QNbW4yUldmREx1VGs/edit?pli=1
I have problems with both the style and the content of this essay, though I have not
tried to take in the full mathematical details and may therefore have missed
something.
I do not doubt that the combination of technical advances by the author and increases
in computer power has made possible impressive new demonstrations, including
out-performing rival systems on various benchmark tests.
However, it is not clear to me that those tests have much to do with animal or human
intelligence or that there is any reason to believe this work will help to bridge the
enormous gaps between current machine competences and the competences of squirrels,
nest-building birds, elephants, hunting mammals, apes, and human toddlers.
The style of the paper makes the claims hard to evaluate because it repeatedly
says how good the systems are and reports that they outperform rivals, but does not
help an outsider to get a feel for the nature of the tasks or for the ability of the
techniques to "scale out" into other tasks. In particular I have no interest in
systems that do well at reading hand-written characters, since that is not a task for
which there is any objective criterion of correctness, and all that training achieves
is tracking human labellings, without giving any explanation as to why the human
labels are correct. I would be really impressed, however, if the tests showed a robot
assembling Meccano parts to form a model crane depicted in a picture, or passing the
related tests described here
(Sloman 2011a):
http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane/
Since claims are being made about how the techniques will lead beyond human
competences in a few decades, I would like to see sample cases where the techniques
match mathematical, scientific, engineering, musical, toy puzzle solving, or
linguistic performances that are regarded as highly commendable achievements of
humans, e.g. those of outstanding school children or university students. (Newton,
Einstein, Mozart, etc. can come later.) Readers should see a detailed analysis of
exactly how the machine works in those cases, and if the claim is that it uses
non-human mechanisms, ontologies, forms of representation, etc., then I would like to
see those differences explained. Likewise, if its internals are comparable to those of
humans, I would like to see at least a discussion of the common details.
The core problem is how the goals of the research are formulated. Instead of a robot
with multiple asynchronously operating sensors providing different sorts of
information (e.g. visual, auditory, haptic, proprioceptive, vestibular), and a
collection of motor control systems for producing movements of animal-like hands,
legs, wings, mouths, tongues etc., the research addresses:
... a learning robotic agent with a single life which consists of discrete cycles or time steps t = 1, 2, . . . , T. Its total lifetime T may or may not be known in advance. In what follows, the value of any time-varying variable Q at time t (1 <= t <= T) will be denoted by Q(t), the ordered sequence of values Q(1), . . . , Q(t) by Q(<= t), and the (possibly empty) sequence Q(1), . . . , Q(t - 1) by Q(< t). At any given t the robot receives a real-valued input vector x(t) from the environment and executes a real-valued action y(t) which may affect future inputs; at times t < T its goal is to maximize future success or utility. ...
As far as I am concerned that defines a particular sort of problem to do with
data-mining in a discrete stream of input vectors, where the future components are
influenced in some totally unexplained way by a sequence of output vectors.
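For concreteness, here is a minimal sketch in Python of the kind of discrete-cycle
loop the quoted formulation describes. This is my own illustration, not the author's
code: the environment dynamics, the reward, and the policy are arbitrary stand-ins
invented purely to show the shape of the loop.

    # Minimal sketch of the quoted discrete-cycle formulation (my illustration,
    # not the author's code). At each step t the agent receives a real-valued
    # input vector x(t), emits a real-valued action y(t), and accumulates a
    # utility signal it is supposed to maximise over its remaining lifetime.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 100                      # total lifetime (may or may not be known in advance)
    dim_x, dim_y = 4, 2          # fixed dimensionality of inputs and actions

    def environment_step(state, y):
        """Toy stand-in for the environment: the next input and the reward
        depend on the action in some way left unspecified by the formulation;
        here it is just an arbitrary linear mixing."""
        next_state = 0.9 * state + 0.1 * rng.normal(size=dim_x)
        next_state[:dim_y] += 0.05 * y          # action influences future inputs
        reward = -np.sum(next_state ** 2)       # arbitrary utility signal
        return next_state, reward

    def agent_policy(xs, ys):
        """Placeholder policy: a real learner would use the whole histories
        x(<= t) and y(< t) to choose y(t); here it just emits noise."""
        return rng.normal(size=dim_y)

    state = rng.normal(size=dim_x)              # hidden environment state
    xs, ys, total_utility = [], [], 0.0
    for t in range(1, T + 1):
        x_t = state.copy()                      # x(t): input vector from the environment
        xs.append(x_t)
        y_t = agent_policy(xs, ys)              # y(t): real-valued action
        ys.append(y_t)
        state, r_t = environment_step(state, y_t)
        total_utility += r_t                    # goal: maximise cumulative utility

Nothing more than the shape of this loop is fixed by the formulation; everything else
is hidden inside the vectors and the unspecified environment.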
I don't see how such a mathematical problem relates to a crane assembly problem where
the perceived structure is constantly changing in complexity, with different types of
relationships and properties of objects relevant at different stages, and actions of
different sorts of complexity required, rather than a stream of output vectors (of
fixed dimensionality?). I would certainly pay close attention if someone demonstrated
advances in machine learning by addressing the toy crane problem, or the simpler
problem described in (Sloman 2011d)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/simplicity-ontology.html
But so far none of the machine learning researchers I've pointed at these problems
has come back with something to demonstrate. Perhaps the author and his colleagues
are not interested in modelling or explaining human or animal intelligence, merely in
demonstrating a functioning program that satisfies their definition of intelligence.
If they are interested in bridging the gap, then perhaps we should set up a meeting
at which a collection of challenges is agreed between people NOT working on machine
learning and those who are, and then later we can jointly assess progress. Some of
the criteria I am interested in are spelled out in these documents:
(Sloman 2011b, c).
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
http://www.cs.bham.ac.uk/research/projects/cogaff/evo-creativity.pdf
Added Mar 2014:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vision
A presentation of some hard, apparently unsolved, problems about natural vision and
how to replicate the functions and the designs in AI/Robotic vision systems.
See http://www.cs.bham.ac.uk/~axs/publishing.html
It is safest to ignore all printed versions of my papers and use only the online
versions. Then at least you'll be confronted only with errors that are my fault.