As the president of the Royal Society recently observed, a pervasive feature of life is the use of information. Evolution produced a vast panoply of mostly unobservable information-processing mechanisms, performing a vast array of functions. We cannot tinker with most of those mechanisms, so we cannot use traditional scientific experiments to test hypotheses about them. But we can explore a space of human-designed information processing systems, some biologically inspired, some solving hard engineering problems, using a theoretical framework that generates new types of testable hypotheses about biological mechanisms.

The simplest organisms used chemical mechanisms for direct control of interactions with the immediate environment, and for control of digestion, growth, repair and reproduction. Much later, a subset of organisms could cope with enduring entities of varying complexity in the environment, and could generate structured goals, plans, and actions of varying complexity. This eventually required recursive forms of representation for complex percepts, complex goals, complex action-plans, and complex meta-semantic information about the individual and other intelligent individuals. Much of that must have preceded linguistic forms of communication between individuals. A theory of "meta-morphogenesis" should explain (a) how changes in information processing can occur within individuals, (b) how changes can occur across generations, and (c) how the mechanisms for producing such changes can themselves change over time, within individuals and across generations. Some forms of morphogenesis and some examples of meta-morphogenesis involve social/cultural processes.
I'll present a research framework, with implications for forms of interdisciplinary collaboration and for education of future researchers. It extends physics and chemistry with a theory of biological information processing, enabling us to move beyond the limits of our common-sense ontologies for formulating theories, which are no more adequate for describing animal minds than for describing complex man-made computing systems.
An extended abstract will be available here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kavli-rs-abstract.html (to be revised).
However, it has become increasingly clear in recent years that the
scientific study of life requires the inclusion of information as a
third primitive notion -- not information in the sense of Shannon
but information in the very much older sense, in which information
is about something, can be true or false, can be consistent or
inconsistent, can be derived from other information, can be
communicated, can be expressed in different forms (some physical,
some virtual) and above all can be used, often long after
being acquired.
[See for example the recent lecture by Paul Nurse, FRS, Nobel Prize
winner:
http://www.guardian.co.uk/science/video/2010/nov/05/paul-nurse-life-information-networks
As he points out, the idea is not new. See Note 2.]
Variations in physical forms and physical behaviours of organisms have been studied for centuries -- with recent technology enhancing our means of investigating very small and deeply hidden structures and processes. In contrast, the study of information processing is seriously hampered by the fact that many of the processes and mechanisms are very abstract, not subject to physical detection and measurement, and leave no fossil records. Moreover, it is not clear that we have a rich enough understanding of possible forms of information processing to be able to formulate either the questions worth asking or the explanations worth considering. Scientists considering possible answers to questions about how animals or human infants do things do not normally entertain the possibility that the correct answer cannot be expressed in the concepts available to them -- concepts formed without any experience of designing, implementing or testing machines with similar capabilities.
We do have some evidence of past information processing capabilities in the form of indirect products of information processing, such as shelters found or built and used, objects apparently designed, made and used (including tools, weapons, kinds of material, and coverings of various sorts), animal prey and other food acquired and consumed, predators successfully avoided, evidence of terrain traversed, obstacles and resources that could have been encountered, and many more. But inferences from such evidence to the information-processing mechanisms involved are always shaky and need support from other sources. This is also true of attempts to use behaviours in laboratory experiments as evidence for information processing mechanisms. Any observed result of information processing can in principle be explained in very many different ways. We need to find constraints on theories capable of being true.
So investigation of biological information processing requires many parallel activities, relying on indirect evidence, and on deep theories about what the possibilities are. Deep theories often require deep new concepts, and new forms of representation, such as the differential and integral calculus in Newtonian physics, and new mathematical formalisms required for quantum physics. Many formalisms have been developed for programming computers, and for reasoning about computations, and additional forms of representation inspired by biological evidence (e.g. evidence relating to neural computation and chemical computation) have been investigated. But it is far from obvious that we have discovered or invented the concepts and forms of representation required for understanding all the important forms of biological information processing -- which include some of the most complex phenomena on this planet. Often apparent successes in computational modelling (e.g. 3-D vision) are based on mistaken theories of what animal systems do. E.g. a theory of vision that merely explains how image features are segmented and labelled is grossly inadequate for a robot that needs to be able to manipulate 3-D objects to assemble new objects. This rules out the majority of current computer vision models.
Even some of the human-created forms of information processing are not always well understood. As everyone knows, the human ability to produce new kinds of information processing technology, and to discover new uses for such technology has expanded spectacularly in the last few decades. Yet designers of sophisticated operating systems and multi-computer network systems are often surprised to discover that there are failure modes they had not anticipated, or unnoticed weaknesses that can be exploited by criminals. I think it is fair to say that human understanding of the variety of possible forms of information processing is still at a relatively primitive stage, as compared with what needs to be understood -- including the information processing involved in metabolism, digestion, the immune system, perceptual mechanisms, learning mechanisms, physical development, intellectual development, social development, and various kinds of mathematical, scientific and artistic creativity.
Some of the systems human engineers have created result from many different developments solving different technical problems: in materials science, in design of electronic components, in design of device interfaces, in design of physical networking infrastructure, and above all in design of protocols, interfacing standards, and ways of mapping functions to mechanisms through compilers, interpreters and use of layers of virtual machinery. We have begun to understand some of the variety of types of information contents, forms of storage and transmission, control capabilities, and types of manipulation of information that are possible (though Shannon's misuse of the pre-existing word 'information' has caused some confusion). It is now very clear that a distinguishing feature of all life (except perhaps on the margins) is use of information, so that instead of animal and plant behaviours simply following resultants of external forces (like a marble rolling down a helter skelter), they are products of processes of selection and control, in which internal stores of energy are selectively deployed to achieve, preserve, reduce, or prevent things being the case, and the selection among alternative ways of doing those things is based on information available. (Not necessarily always good information.)
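As a crude illustration of that contrast (a Python sketch, with all details invented for the purpose), compare a marble whose next state is simply the resultant of external forces with an agent that selectively deploys a store of energy on the basis of available, possibly unreliable, information:

    def marble_step(position, velocity, gravity=-9.8, dt=0.1):
        # A marble on a helter-skelter: the next state is just the
        # resultant of external forces; nothing is selected.
        velocity = velocity + gravity * dt
        position = position + velocity * dt
        return position, velocity

    def agent_step(position, percept, energy_store):
        # An agent: internal energy is deployed selectively, and the
        # selection among alternatives is based on the information
        # available -- which may be wrong -- not on impressed forces.
        if energy_store <= 0:
            return position, energy_store          # nothing left to deploy
        if percept == "food_ahead":
            return position + 1, energy_store - 1  # achieve: approach food
        if percept == "threat_ahead":
            return position - 1, energy_store - 1  # prevent: retreat
        return position, energy_store              # conserve energy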
It is obvious that the technology available for use in biological research has been transformed in spectacular ways in the last six or seven decades. What is not so obvious is that our grasp of concepts to use, questions to ask, forms of theory to construct, has also been transformed -- not just as a result of observing and trying to make sense of natural phenomena, as happened in the study of matter and energy, but especially as a result of the transformations in our ways of thinking that were required for hardware and software engineering, including putting many different kinds of functionality together in novel ways, to create new useful information processing systems. But all those transformations in our abilities to think about information processing may still leave our science in an early stage, in comparison with what is required. That's very clear from the limitations of current AI systems (no matter which AI fashions were followed in building them).
For example, over six or seven decades the forms of information processing human engineers have implemented have moved from being very directly mapped onto physical structures and processes involving currents flowing, voltages changing, magnetic state changes, switches switching, etc., to being so remote from physical details that the very same computational process can be implemented in computers with very different instruction sets, different materials, and making use of physical state changes and causal interactions that are only very loosely related to the informational state changes and causal interactions: we now make constant use of virtual machinery whose relationship with physical machinery is remote and very variable.
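A toy illustration of that point (class names and substrates invented here): the very same virtual process can be implemented in utterly different underlying state representations, so a description of the virtual process tells us almost nothing about its physical realisation:

    class IntMachine:
        # Stores its state as ordinary integers.
        def __init__(self): self.acc = 0
        def add(self, n): self.acc += n
        def result(self): return self.acc

    class BitStringMachine:
        # Stores its state as binary strings -- a quite different
        # "physical" substrate for the same virtual process.
        def __init__(self): self.acc = "0"
        def add(self, n): self.acc = bin(int(self.acc, 2) + n)[2:]
        def result(self): return int(self.acc, 2)

    def virtual_sum(machine, numbers):
        # The same virtual-machine process, whatever implements it.
        for n in numbers:
            machine.add(n)
        return machine.result()

    assert virtual_sum(IntMachine(), [1, 2, 3]) \
        == virtual_sum(BitStringMachine(), [1, 2, 3]) == 6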
A corollary of this is that just as human programmers are better able to monitor and debug such systems by focusing on the virtual processes rather than the physical ones, so artificial systems that learn from self-monitoring would sometimes do better to monitor abstract virtual machine states rather than physical processes. For example, an attempted file access violation will be detected by comparing account details, use privileges and access constraints, not by examining states of transistors.
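A toy sketch of the file-access example (the privilege table and function are invented for illustration): the violation is detected entirely at the level of virtual-machine records -- accounts, privileges, constraints -- with no reference to the physical substrate:

    PRIVILEGES = {  # invented example data
        "alice": {"/home/alice/notes.txt": {"read", "write"}},
        "bob":   {"/home/alice/notes.txt": {"read"}},
    }

    def check_access(user, path, mode):
        # Detect an access violation by comparing virtual-level records.
        allowed = PRIVILEGES.get(user, {}).get(path, set())
        if mode not in allowed:
            raise PermissionError(f"{user} may not {mode} {path}")
        return True

    check_access("bob", "/home/alice/notes.txt", "read")     # permitted
    # check_access("bob", "/home/alice/notes.txt", "write")  # raises PermissionError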
It is arguable that the need for some important kinds of self-monitoring and self-control to be directed at high-level virtual machine states and processes, rather than physical states and processes, was "discovered" by biological evolution long before human designers noticed it. Moreover, it seems that biological uses of virtual machinery, involving networks of non-physical causal interactions implemented in physical interactions, are more complex, more diverse and more powerful than anything produced so far by human engineers.
As some science fiction writers have noticed, in machines whose perceptual processes involve layers of processing that, apart from the very lowest levels, use virtual machinery, self monitoring by artificial intelligent systems could lead to the very same discoveries as caused human philosophers and psychologists to think about sensory contents and to engage in philosophical debates as to the nature and causal status of qualia that are the contents of sensory consciousness. Likewise it is arguable that animals and machines that have the intermediate levels but without the self monitoring mechanisms have those qualia too, but without knowing that they do. And humans probably have some that they are unaware of, e.g. in some of the lower level intermediate stages in auditory or visual processing, or vestibular processing. (Optical flow patterns used unconsciously to control posture are a well known example.)
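A toy sketch (entirely invented) of that layered architecture: the intermediate contents exist in any system that has the intermediate layers, but only a system that also has a self-monitoring layer can discover and report that they exist:

    def sense(raw):
        # Lowest level: physical transduction into a signal.
        return [x / 255.0 for x in raw]

    def intermediate(signal):
        # Mid level: e.g. flow- or edge-like features in virtual machinery.
        return [b - a for a, b in zip(signal, signal[1:])]

    class Perceiver:
        def __init__(self, self_monitoring=False):
            self.self_monitoring = self_monitoring
            self._trace = None   # intermediate contents exist regardless

        def perceive(self, raw):
            self._trace = intermediate(sense(raw))
            return sum(self._trace)   # used for control either way

        def introspect(self):
            # Only a self-monitoring system can discover its own contents.
            if not self.self_monitoring:
                return None           # has the states, doesn't know it
            return self._trace

    p = Perceiver(self_monitoring=False)
    p.perceive([10, 20, 40])   # the contents drive behaviour...
    print(p.introspect())      # ...but cannot be reported: None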
Since the use of virtual machinery can break the associations between important information processes and the underlying physical processes, it is to be expected that there are many forms of biological information processing that would be resistant to detection by brain imaging machines that can find only localised structures and processes. If so, far more indirect research methods are needed to find out what is happening.
I shall try to map out a programme for filling in the gaps in our understanding, by emphasising the need to locate all investigations of particular animals or particular cognitive phenomena within a comprehensive survey of the space of possible forms of information processing, natural and artificial.
This leads to an investigation of how new opportunities and requirements can emerge in evolution and how various sorts of evolutionary transitions are needed to cope with them, including changes in contents, changes in forms of representation, changes in function, development of new meta-semantic competences (i.e. the ability to represent things that represent), changes in architectures, changes in modes of development of individuals and changes in evolutionary mechanisms. Some of the changes require the use of social or cultural evolution.
I call this investigation of the morphogenesis of information processing systems and how the mechanisms required can change during evolution, during individual development, and during social/cultural evolution, the study of "meta-morphogenesis" (MM). Questions about MM require deep collaborations between disciplines. They also require studies of particular species or particular competences to be placed in a broad context imposing multiple requirements on adequacy of theories. Otherwise the theories are under-determined and cannot be taken seriously as explanations. (Like chemical theories based on and explaining only facts about one complex molecule.)
For example, I'll try to show how some theories about animal cognition are inadequate if they don't connect with theories about how humans can discover and prove mathematical theorems, such as theorems in Euclidean geometry. (This relates to old ideas of the philosopher Immanuel Kant in his Critique of Pure Reason (1781), the Cambridge psychologist Kenneth Craik in The Nature of Explanation (1943), and Annette Karmiloff-Smith in her 1992 book Beyond Modularity. Unfortunately, no robot that I know of comes close, so far.) I'll also show how some of the claims made for so-called "embodied cognition" make use only of a few shallow features of embodiment concerned with local real-time continuous interaction, while ignoring other aspects of embodiment that often require deep thinking before acting.
Conclusion
In future, before a researcher attempts to answer the question: "How
does this animal perform that task?", or "What information
processing capabilities do new-born Xs (humans, cats, capuchins,
squirrels) have?" it will be necessary to identify many different
kinds of relevant background knowledge that can help to constrain
possible answers. We can think of this as a process of "multiple
triangulation".
Multiple 'triangulations' are needed to fix locations in a vast
space of hypotheses that can be taken seriously as explanations of
particular empirical data.
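A toy illustration of multiple triangulation (hypotheses and constraints invented for the purpose): each independent source of evidence prunes the hypothesis space, and only hypotheses surviving every constraint deserve to be taken seriously:

    hypotheses = {"H1", "H2", "H3", "H4", "H5"}   # candidate explanations

    constraints = [                 # invented independent evidence sources
        {"H1", "H2", "H3"},         # consistent with behavioural data
        {"H2", "H3", "H5"},         # consistent with developmental evidence
        {"H2", "H4", "H5"},         # realisable by known evolutionary transitions
    ]

    surviving = set(hypotheses)
    for c in constraints:
        surviving &= c              # intersect the constraint sets

    print(surviving)                # {'H2'} -- triangulated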
An example likely to come up in the conference and workshop is the investigation of tool use by humans of various ages and epochs and by other animals. I suggest that the contrast between "tool use" and other forms of matter manipulation is ill-defined and arbitrary. Investigating many forms of matter manipulation, including manipulation of one piece of matter by another, whether or not the manipulating matter is part of the individual, provides opportunities for massive triangulation: on types of function and how they change in evolution, development and learning, and on the kinds of mechanisms that may be involved in the before-and-after stages and in producing the transitions.
Similar comments can be made about studies of "mind reading", of "causal reasoning", of "time-travel", of "imitation", of "conservation", of "language learning", of "communication", of "symbolic reasoning", of "visual perception" (and other kinds), of "number concepts", and many more.
In general we are nowhere near being able to do that now, but we can make progress by expanding the fields of investigations, and expanding the amount of cross-disciplinary communication.
Doing that is likely to expose major gaps in what we know about how animals process information.
Making progress will require educating potential researchers of many kinds in forms of computation at a much earlier stage than we do now, and not just with a focus on teaching programming to potential computer science applicants and software engineers. They form only a small subset of the important users of computational thinking, a fact that is generally ignored in recent discussions of how to reform computing education in schools. Without a broader and deeper reform, the scientific research programme outlined here will not have enough researchers to push it forward.
[Note 2]
My answer to "What's Information?" is here:
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#905
The key point is that "information" (in the sense we require here)
cannot be explicitly defined. Rather, like "matter", "energy", and
other deep scientific concepts, it is defined mainly by its role in
the deep explanatory theories that make use of it, and partly by the
experimental methods, tools, and instruments associated with such
theories, which can change over time, and help to "tether" the
theories to the world.
As the theories containing a concept are
extended the concept changes. For this to be possible it is
important that the concept is never completely defined.
Many people quote with approval a definition supposedly proposed by
Bateson: "Information is a difference that makes a difference".
This
is seductive nonsense and a misreading of Bateson, as I have argued
in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/information-difference.html
He stated that a bit of information is "a difference that
makes a difference". The insertion of "a bit of" is a difference
that makes a big difference.