Some important questions for Computational Neuroscience,
Cognitive Robotics, and
related disciplines.
(DRAFT: Liable to change)
Installed: 3 Jul 2013
Last updated: 3 Jul 2013
Some current hard research questions relating philosophy,
cognitive science, robotics/AI, biology/evolution, and
the origins of life
- What forms of representation, and ontologies, are required for various kinds of
competence, and why?
- What's the difference between learning about sets of possibilities and their
constraints (impossibilities) and learning about probabilities? When is the former
kind of learning more powerful? (A toy contrast is sketched below.)
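As an illustration only (my own sketch in Python, not a proposal), compare a
learner that can only ever report sample frequencies with one that records which
configurations have been shown to be possible, treating the rest as candidate
impossibilities (constraints):

    # Toy contrast (illustrative only): learning frequencies vs learning
    # which outcomes are possible at all.
    from collections import Counter

    class ProbabilityLearner:
        """Estimates how often each outcome occurs -- only ever a sample estimate."""
        def __init__(self):
            self.counts = Counter()
            self.total = 0
        def observe(self, outcome):
            self.counts[outcome] += 1
            self.total += 1
        def prob(self, outcome):
            return self.counts[outcome] / self.total

    class PossibilityLearner:
        """Tracks which outcomes are known to be possible; outcomes never
        observed remain candidate impossibilities (constraints)."""
        def __init__(self, hypothesis_space):
            self.possible = set()
            self.never_seen = set(hypothesis_space)
        def observe(self, outcome):
            self.possible.add(outcome)
            self.never_seen.discard(outcome)

The former can only answer 'how often?'; the latter supports categorical
conclusions ('this combination never occurs') which, once established, cover
infinitely many future cases -- something no frequency estimate can do.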
- How does the fact that the universe is (nearly) decomposable into distinct, but
recombinable, domains of structure and process at various levels of abstraction
affect the workings of evolution, development, learning, and cultural change?
[Are there implications for brain structure and development?]
- What did James Gibson's theory of affordances achieve, and what did he miss that's
important for understanding perception and its role in intelligent systems?
-
Why does the expression of genomes in species with cognitively more sophisticated
adults have to be 'staggered', so that some features are dormant while others are
expressed? What are the implications for development of cognition/intelligence... ?
I first suggested this as part of the explanation of differences between types of
mammal (e.g. grazers and hunters) and types of bird (e.g. chickens and crows) in
stages of development at birth/hatching and adult spatial intelligence many years
ago. The idea was refined in collaboration with Jackie Chappell in two papers in 2005
(conference) and 2007 (journal)
http://www.cs.bham.ac.uk/research/cogaff/05.html#200502
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609
Curiously, on the same day I produced this web page, Alice Roberts presented a
Horizon programme on differences between humans and other species which mentioned
some of the same evidence, but completely missed the generalisation across precocial
and altricial non-human species. See
www.bbc.co.uk/iplayer/episode/b036mrrj/Horizon_20122013_What_Makes_us_Human/
- What kinds of information-processing architectures are adequate for meeting various
requirements for animals and robots, e.g. the requirement for multiple varieties of
affective states and processes (involving desires, intentions, hopes, fears, moods,
likes, dislikes, preferences, curiosity, puzzlement, values, policies, attitudes,
personalities..., and their episodic instantiations, e.g. emotions)?
[What's the structure of the space of possible architectures, and how much of that
space have we currently explored?]
- What kinds of information-processing mechanism and architecture can accommodate
internal conflicts of motivation better than current models, which merely compute two
or more numerical values and then choose the option with the highest value? (That
fails to model conflict/pressure that endures after the decision is made. A toy
version of the criticised scheme is sketched below.)
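Here is a minimal Python sketch of that scheme (my own illustration, not any
particular published model). Each motive is reduced to one number and the maximum
wins; note that nothing of the losing motive survives the choice, so there is no
enduring pressure, interruption, or regret for later processing to work on:

    # Minimal sketch of the 'compute values, pick the maximum' scheme.
    # Illustrative only: not any specific published model.

    def choose(motives):
        """motives: dict mapping motive name -> computed numerical value."""
        return max(motives, key=motives.get)

    motives = {"flee_danger": 0.72, "protect_offspring": 0.70}
    print(choose(motives))   # -> flee_danger

    # After this call the near-tie between the two motives leaves no
    # trace: the losing motive cannot continue to exert pressure or
    # colour subsequent processing.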
- What are the similarities and differences between requirements for 'external
languages' for communication, and 'internal languages' for encoding/expressing
contents of perception, learnt information, desires, plans, questions, explanations,
hopes, fears, beliefs, theories, designs, ...?
[In what ways do the internal and external languages depend on and influence each
other at various stages of evolution and development?]
- Why is the research on stereo vision inspired by Bela Julesz's random-dot
stereograms misguided, leading to seriously inadequate artificial vision systems?
(Compare motion perception. A sketch of the core matching scheme follows below.)
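For readers unfamiliar with that tradition, here is a minimal Python/NumPy sketch
(my own illustration, not any specific published system) of the patch-matching
disparity computation at the core of Julesz-inspired stereo models. Its output is
merely a map of depths, with no explicit representation of surfaces, occlusion
structure, or affordances:

    import numpy as np

    def disparity_map(left, right, max_disp=16, win=5):
        """For each pixel in the left image, find the horizontal shift of
        the best-matching window in the right image (sum-of-absolute-
        differences matching)."""
        h, w = left.shape
        half = win // 2
        disp = np.zeros((h, w), dtype=int)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y-half:y+half+1, x-half:x+half+1]
                costs = [np.abs(patch - right[y-half:y+half+1,
                                              x-d-half:x-d+half+1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))
        return disp  # a depth map is all this approach delivers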
- What are the (many) functions of visual perception, and what sort of
information-processing architecture can accommodate all of them, and serve to relate
them to other cognitive functions? (There's obviously a more general question behind
this.)
- Why is it the case that, despite extremely impressive progress in forms of
automated (computer-based) mathematical reasoning and theorem proving using logic,
algebra and arithmetic, outperforming most human reasoners, there seems to have been
a complete failure to model even the simplest forms of reasoning that our ancestors
used in the discoveries leading to Euclid's Elements, and the partly related forms of
reasoning in other species, e.g. nest-building birds?
[There are several different 'steps' in those developments that need to be
distinguished, some of them noted by Karmiloff-Smith.]
Example:
How can a robot, or animal, perceive and make use of differences in shape between
different (previously unknown) objects, or different parts of such an object, without
having metrical representations for length, angle, area, orientation, curvature,
etc.? (A toy illustration of one non-metrical encoding is sketched below.)
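Purely as an illustration (my own, not a proposed answer), one non-metrical encoding
reduces a polygon's outline to a cyclic sequence of qualitative turn labels, so that
shapes can be compared without storing any lengths, angles or curvatures. (The
coordinates below are only the sensor input, not the retained representation.)

    # Toy illustration: qualitative (non-metrical) shape description.
    # Each vertex is labelled by its direction of turn: Left/Right/Straight.

    def turn_labels(points):
        """points: ordered list of (x, y) polygon vertices."""
        labels = []
        n = len(points)
        for i in range(n):
            (x0, y0) = points[i]
            (x1, y1) = points[(i + 1) % n]
            (x2, y2) = points[(i + 2) % n]
            cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
            labels.append('L' if cross > 0 else 'R' if cross < 0 else 'S')
        return labels

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dart = [(0, 0), (2, 1), (0, 2), (0.5, 1)]  # has one reflex vertex
    print(turn_labels(square))  # ['L', 'L', 'L', 'L']
    print(turn_labels(dart))    # ['L', 'L', 'R', 'L'] -- differs qualitatively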
- Why does each generation of researchers in AI/Cognitive Science/Robotics seem to
be ignorant of major achievements from three or more decades earlier? (Unlike
physicists, chemists, biologists, ...) What can be done about it? (And which
achievements have I missed?)
To be extended/updated/corrected...
REFERENCES and LINKS (To be added)
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham