Papers on Requirements to Guide Research in
Artificial Intelligence/Robotics/Cognitive Systems
Aaron Sloman
Last updated: 13 Jan 2007
Re-formatted: 25 Jul 2020 -- pdf version added
https://www.cs.bham.ac.uk/research/projects/cogaff/requirements.html
https://www.cs.bham.ac.uk/research/projects/cogaff/requirements.pdf
Introduction
I recently realised that most of what I have written since 1971 can be
viewed as contributing to a specification of
requirements for
human-like, or more generally animal-like, machines, especially robots
that can see and interact with the environment around them. This
realisation came with a conjecture that one of the main reasons for the
many failed predictions since the early days of AI is that most
researchers regard the requirements as obvious, thinking that the
remaining task is to devise designs and implementations that will meet
those requirements.
Thus, many of them attempt to estimate how soon the requirements will be
met without attempting to work out in any detail what the requirements
actually are. And since the requirements are full of hidden subtlety and
complexity that often goes unnoticed, the predictions are wildly
over-optimistic. In contrast, what I wrote in a book published in 1978
probably looked wildly pessimistic to some:
"The reasons for saying
that existing computer models cannot be accepted as
explaining how people do things include:
- People perform the tasks in a manner which is far more
sensitive to context, including ulterior motives,
emotional states, degree of interest, physical exhaustion,
and social interactions. Context may affect detailed
strategies employed, number of errors made, kinds of
errors made, speed of performance, etc.
- People are much more flexible and imaginative in coping
with difficulties produced by novel combinations, noise,
distortions, missing fragments, etc. and at noticing short
cuts and unexpected solutions to sub-problems.
- People learn much more from their experiences.
- People can use each individual ability for a wider variety
of purposes: for instance we can use our ability to
perceive the structure in a picture like Figure 1 to answer
questions about spaces between the letters, to visualise
the effects of possible movements, to colour in the
letters with different paints, or to make cardboard
cut-out copies. We can also interpret the dots in ways which
have nothing to do with letters, for instance seeing them
as depicting a road map.
- More generally, the mental processes in people are put
to a very wide range of practical uses, including
negotiating the physical world, interacting with other individuals,
and fitting into a society. No existing program or robot
comes anywhere near matching this.
These discrepancies are not directly attributable to the
fact that computers are not made of neurons, or that they
function in an essentially serial or digital fashion, or that
they do not have biological origins. Rather they arise mainly
from huge differences in the amount and organisation of
practical and theoretical knowledge, and the presence in
people of a whole variety of computational processes to do
with motives and emotions which have so far hardly been
explored."
(From section 9.13 of Chapter 9 of my 1978 book,
The Computer Revolution in Philosophy.)
A similar theme pervades a paper written in 1982 for the Rank Prize Fund
conference on image interpretation:
"There is a great discrepancy between the kinds of tasks that can be
performed by existing computer models and the experienced richness and
multiple uses of human vision. This is not merely a quantitative
difference which might easily be overcome by the use of better hardware.
There are too many limitations in our theoretical understanding for
technological advances to make much immediate difference. Given
computers many times faster and bigger than now, and much better TV
cameras, we still would not know how to design the visual system for a
robot which could bath the baby or clear away the dinner things, let
alone enjoy a ballet."
(From Image interpretation: The way ahead?, in
Physical and Biological Processing of Images,
Editors: O.J. Braddick and A.C. Sleigh,
pages 380--401, Springer-Verlag, 1982.)
Work on requirements
The work on requirements has had many strands, covering topics such as:
- Work on human modes of reasoning and the varieties of forms of
representation we use -- the topic of my first AI paper in 1971,
and several more since then.
- Requirements for architectures capable of supporting human-like
capabilities, for interacting with a complex and partly unpredictable
environment, for performing concurrent tasks, for switching focus of
attention, for learning from what happens, etc. These architectural
requirements were discussed in Chapter 6 of the 1978 book.
(Based on a paper written in 1973 while visiting
Edinburgh University to learn about AI.)
- Requirements for human-like vision, involving perception of structure
at different levels of abstraction, using different ontologies, with
concurrent bottom-up and top-down processing, discussed in
Chapter 9 of the 1978 book.
(At that stage I missed the significance of requirements for concurrent
perception of processes at different levels of abstraction.)
- Requirements for learning about numbers as humans do, in
Chapter 8 of the 1978 book.
- Requirements for machines to have their own motives, as opposed to
having only motives given to them by programmers, e.g. in section 10.13
of Chapter 10 of the 1978 book.
- Requirements for human-like emotions, in
Why robots will have emotions (1981) and
Towards a grammar of emotions (1982).
- Requirements for the evolution of human-like freedom of choice, in
a Usenet posting in 1988,
How to dispose of the free will issue
(elaborated as Chapter 2 of Stan Franklin's book
Artificial Minds (MIT Press, 1995)).
- Why we should not be looking for unique solutions, but trying to
understand how different sorts of solutions (e.g. different sorts of
architectures) are suited to different sets of requirements, e.g. in
The structure of the space of possible minds (1984),
and later in
Exploring design space and niche space (1995).
- Requirements for machines to be able to refer to things and to
understand the symbols they use, e.g. in IJCAI 1985,
with a sequel presented at ECAI 1986.
- Requirements for sensors and effectors to be shared between many
different functions, sometimes concurrently and sometimes sequentially,
in The mind as a control system (1993).
- Requirements for representational competences to develop prior
to the evolution or development of language, in
The primacy of non-communicative language (1979).
- Requirements for a mixture of innate (pre-configured) and developed
(meta-configured) competences, the latter being a result of interactions
between the former and the environment, in
The Altricial-Precocial Spectrum for Robots
and its sequel
Natural and artificial meta-configured altricial
information-processing systems
(both with Jackie Chappell).
- Requirements for the ability to develop
orthogonal, recombinable competences of many kinds.
- Requirements for architectures that include
fully deliberative competences, as well as the many intermediate cases
between purely reactive and fully deliberative systems.
- Requirements for understanding both
Humean and Kantian notions of causation.
- Requirements for going beyond sensorimotor-based (somatic) ontologies
to using exosomatic ontologies,
with some arguments based on a rotating Necker cube.
- Requirements for architectures that include
meta-semantic and meta-management competences.
- Requirements for architectures that grow themselves.
... to be extended
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham
PAPERS AND PRESENTATIONS GROUPED UNDER VARIOUS HEADINGS
TO BE COMPLETED when I get time
On requirements for vision
On requirements for forms of representation
On requirements for whatever we mean by 'consciousness'
On mathematical reasoning
On mechanisms and architectures
On motivation, emotions and other affective states and processes.
On free will
On deliberative capabilities
On varieties of learning and development
... to be extended ...