Information-based compliant control
vs. Force-based compliant control
(DRAFT: Liable to change)

Aaron Sloman
School of Computer Science, University of Birmingham.

Installed: 30 Jul 2012
Last updated: 31 Jul 2012; 26 Jul 2014 (Reformatted + minor additions)
A partial index of discussion notes, including this, is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
This paper is
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/information-based-control.html
A PDF version may be added later (or use 'print to file' in your browser).
_______________________________________________________________________________

This is part of the Meta-morphogenesis project:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
The main aim is to chart some of the major transitions in information processing
in biological evolution, development and learning, since the earliest forms of
life, or pre-biota. Investigating transitions in information processing
contrasts with investigating transitions in morphology (physical form) and
transitions in behaviour. A draft incomplete illustrative list of types of
transitions in biological information processing is here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/evolution-info-transitions.html

_______________________________________________________________________________

Introduction

Throughout evolution, biological information processing has occurred in physical
processes on sub-microscopic scales: with new information being used to
determine when to turn processes on and off, to modulate them, or to select among
alternatives, and in some cases being stored for future use. (For an answer to
"What's information?" see (Sloman, 2011b).) My knowledge of research on the
earliest forms of biological information processing is very shallow, but it is
possible to get a sample of some of the experimental results and theories going
back several decades, from (Spiegelman, Haruna, Holland, Beaudreau, & Mills,
1965; Sumper & Luce, 1975; Baez, 2005).

There is no point attempting to define a sharp boundary between physical or
chemical control and information-based control, though the more indirect and
context-sensitive the connection between a cause and its effect, the more likely
it is that the cause provides information, as opposed to a force, a physical
trigger, or an energy source.
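
To make the gradation concrete, here is a toy contrast (the scenario, names and
numbers are invented, not taken from any real organism): in the force case the
effect's energy comes from the cause itself, so the effect varies smoothly with
it; in the information case a weak signal merely gates the release of much
larger stored energy, and whether it does so depends on context.

    def force_effect(push_energy):
        """The effect's energy comes from the cause: direct, proportional."""
        return 0.9 * push_energy                 # a bit lost to friction

    def information_effect(signal, context, stored_energy=100.0):
        """A weak signal gates a large, context-dependent energy release."""
        if signal == "shadow_overhead" and context == "exposed":
            return ("retract", stored_energy)    # full escape response
        return ("ignore", 0.0)

    print(force_effect(0.5))                                  # 0.45: scales with cause
    print(information_effect("shadow_overhead", "exposed"))   # ('retract', 100.0)
    print(information_effect("shadow_overhead", "sheltered")) # same cause, no effect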

In one sort of case, complex collections of chemical interactions occur in which
molecules are formed and then decomposed under the influence of catalytic
reactions and external sources of energy (e.g. geothermal heat). In other cases,
some structures capture additional molecules, and go on doing so until they can
divide, either into two replicas of the original structure, or at least into the
original plus an offshoot able to continue the accretion until the original is
copied, the copy then being able to do its own accretion and spawning. In all of
these cases there are two very different kinds of causal factor at work, namely
(a) the existence of structural correspondences between complex molecules that
make chemical combination possible, and (b) the spatial juxtaposition that
allows the chemical potential to be fulfilled. So the reproductive process
requires both growth of appropriate chemical structures and spatial
reorganisation to align portions of those structures.

As organisms became larger, with articulated bodies capable of producing motion
or manipulating other objects, and with internal and external sensors providing
information about states and events both inside the organism and in the
environment, the structures being controlled became larger, including mouths,
fins, articulated limbs, grippers, wings, etc. In all these cases, the processes
that are controlled include continuous (or partly continuous) motions, such as
translations, rotations, contraction, stretching, flapping, and coordinated
motion of two or more limbs.

Processes in the controlling mechanisms are very different from the processes
controlled. Often the mechanisms are very much smaller, and undergo much more
rapid changes. Insofar as they involve chemical mechanisms, the controlling
processes will be discrete, even if the processes they control are mostly
continuous. (I'll return to the continuous/discrete contrast later.)

Rigid vs flexible control
Engineers have known for a long time that there are tradeoffs between physical
design and software control requirements, and that compliance, e.g. in a wrist
(i.e. the tendency to move or deform slightly in response to changing forces),
often reduces the control problems, as compared with rigid control.
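
To make the tradeoff concrete, here is a minimal numerical sketch (all gains,
masses and dimensions are invented for illustration, not taken from any real
controller): a 1-D arm pressing toward a surface it believes to be at x = 0.0,
while the real surface is 2 mm higher. A stiff position controller turns the
small model error into a large sustained contact force; a compliant controller
deforms slightly instead.

    def wall_force(x, x_wall=0.002, k_wall=1.0e6):
        """Reaction force from the (very stiff) surface when penetrated."""
        return k_wall * (x_wall - x) if x < x_wall else 0.0

    def settle(kp, kd, target=0.0, mass=1.0, dt=1e-4, steps=20000):
        """Semi-implicit Euler simulation; returns the settled contact force."""
        x, v = 0.01, 0.0                     # start 10 mm above the target
        for _ in range(steps):
            f = kp * (target - x) - kd * v + wall_force(x)   # PD law + contact
            v += f / mass * dt
            x += v * dt
        return wall_force(x)

    # Rigid control: high gain turns the 2 mm model error into ~180 N of force.
    print("stiff     contact force: %.1f N" % settle(kp=1e5, kd=600.0))
    # Compliant control: the "wrist" gives, and the force stays below 1 N.
    print("compliant contact force: %.1f N" % settle(kp=200.0, kd=30.0))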

Such design principles are also evident in many biologically evolved systems.
Taking this to an extreme, it is sometimes suggested that no internal control is
required, as demonstrated by passive walking robots that "walk" down a slope
under the influence of gravity, waddling from side to side. For an example see
this video:
http://www.youtube.com/watch?v=N64KOQkbyiI

But this sort of argument can be compared with arguing that a marble
demonstrates intelligence when released at the top of a helter-skelter, since it
finds its way to the bottom. The marble is constantly controlled by its own
momentum, gravity, and the forces applied to it by physical surfaces that it
comes into contact with, and the same is true of the robot. In both cases the
competences are very limited. The passive walker robot cannot walk up a hill,
follow a spiral slope, or detect a brick in its path and take action to avoid
falling, to name but a few of the limitations that would not affect a suitably
designed, or trained, information-based walker, and which normally do not affect
biological walkers.

More generally, control often involves use of information to determine when and
how to release internal energy to oppose other influences, such as gravity,
wind, fluid currents, and forces applied by other large objects, some of them
other animals, such as predators. Information about both the environment and
current goals or needs is required for the appropriate movement options to be
selected and used.
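
A toy sketch of that point (predicates and actions invented): because the
selected action depends jointly on what is sensed and on the current need, the
same physical situation yields different releases of stored energy under
different needs, something the impinging forces alone cannot explain.

    def select_action(percepts, need):
        """Jointly keyed on sensed information and the current need."""
        if "looming_predator" in percepts:           # overriding danger signal
            return "burst_swim_away"                 # large, fast energy release
        if need == "food" and "nutrient_gradient" in percepts:
            return "swim_up_gradient"                # slow, steady release
        if need == "rest":
            return "anchor_to_substrate"             # spend almost nothing
        return "explore"

    situation = {"nutrient_gradient"}
    print(select_action(situation, need="food"))     # swim_up_gradient
    print(select_action(situation, need="rest"))     # anchor_to_substrate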

Sometimes opposition to an external force requires no special control process,
for example, a small pebble rolling down a slope hits the foot of an animal and
stops rolling, just as if it had hit a larger stone or a tree trunk. In other
cases, opposition to external forces is based on external sources of information
and is more effective than physical opposition: for example, detecting a large
rock rolling downhill and either moving out of its way or moving another large
rock into its path before it arrives can be more effective than detecting the
impact when the rock arrives and passively resisting its motion: a fatal
strategy in some cases. In such situations, effective strategies require that
internal energy resources be deployed to move out of the path of the rock, or to
push an obstacle into its path, and the details of what can be done depend on
information about the environment acquired in advance of and during the
actions.

Common human experiences show clearly that often the most effective response to
a physical situation is not based simply on immediate reactions to physical
interactions, but requires use of information acquired well in advance of some
critical situation, information that can be used to select and perform
preparatory actions that anticipate the future, for example building a dam to
meet future irrigation requirements.

In that sort of case, as in the vast majority of engineering projects, most of
the details of the construction process need to be worked out well in advance,
and last minute changes in the height, thickness or materials of the dam wall in
response to new information about rainfall, or new irrigation needs, are not
possible; whereas in other cases of anticipation some of the final details can
be left unspecified, until the last moment, for instance running to catch a ball
while allowing the precise position of hands to be based on continuously
tracking the path of the ball. Contrast that with trying to predict the exact
location and pose of the body when the ball is caught, using only information
available when the running starts: in general a humanly impossible task.

The use of partial, or imprecise, advance planning combined with continual
adjustment (sometimes up to the last moment, sometimes not) can be seen as an
information-based strategy that has much in common with the force-based strategy
using physical compliance. The former is "information-based compliant control"
(IBCC), the latter "force-based compliant control" (FBCC).

Biological evolution has clearly produced both mechanisms.

Information-based vs force-based compliant control
In some cases of IBCC, the processes of control can use widely applicable
strategies (e.g. tracking motion while heading for a continuously adjusted
interception point) so, in those cases, it is possible for a version of the
IBCC strategy to be selected by biological evolution.
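
Here is a minimal runnable sketch of that strategy (all numbers, and the
mid-flight "gust", are invented for illustration): the pursuer never commits to
an interception point computed at the start; at each step it re-aims at where
the ball will shortly be, given its currently observed motion, so information
acquired during the run keeps correcting the imprecision of the advance plan.

    import math

    def intercept(ball, ball_vel, runner, speed=6.0, dt=0.01, catch=0.3):
        """Run toward a continuously re-estimated interception point."""
        t = 0.0
        while t < 10.0:
            dx, dy = ball[0] - runner[0], ball[1] - runner[1]
            dist = math.hypot(dx, dy)
            if dist < catch:
                return round(t, 2)                   # caught the ball
            lead = dist / speed                      # rough time to get there
            aim = (ball[0] + ball_vel[0] * lead,     # where the ball will be,
                   ball[1] + ball_vel[1] * lead)     # if it flies straight on
            ax, ay = aim[0] - runner[0], aim[1] - runner[1]
            norm = math.hypot(ax, ay) or 1.0
            runner = (runner[0] + speed * dt * ax / norm,
                      runner[1] + speed * dt * ay / norm)
            ball = (ball[0] + ball_vel[0] * dt,
                    ball[1] + ball_vel[1] * dt)
            if abs(t - 1.0) < dt / 2:                # unforeseen mid-flight gust:
                ball_vel = (ball_vel[0], ball_vel[1] - 2.0)  # a fixed plan fails
            t += dt
        return None                                  # never intercepted

    # Prints a catch time despite the gust; a plan fixed at t=0 would miss.
    print(intercept(ball=(0.0, 20.0), ball_vel=(1.0, -3.0), runner=(5.0, 0.0)))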

In other cases, the anticipatory actions need to make use of specific features
of a local environment, which may be different for different members of the same
species, e.g. information about the spatial locations of various nutrients:
some might be in a cave, some on a specific tree, some on a hillside. Knowledge
of the spatial layout of the terrain might be used for route-planning when going
from one of the locations to another, which could be out of sight initially. If
these locations are unchanging across many generations, then the information
could be absorbed into the genome like the migratory information apparently
inherited by some birds. In other cases, the genome can specify only learning
mechanisms for discovering where the nutrients are located and how to use the
information to meet changing needs. What is specified genetically in this case
could include forms of representation plus associated mechanisms, usable by
individuals to construct specific (possibly map-like) stores of information
about large-scale spatial structures.
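
As a toy illustration of that last idea (place names and connections invented):
the genetically specified part corresponds to the form of representation, here
a graph of places, and a general mechanism, here breadth-first route search,
while the contents, which places exist and how they connect, are acquired by
each individual's own exploration.

    from collections import deque

    terrain = {}                         # learned adjacency: place -> places

    def learn_path(a, b):
        """Record a traversed connection in the individual's map."""
        terrain.setdefault(a, set()).add(b)
        terrain.setdefault(b, set()).add(a)

    def plan_route(start, goal):
        """Generic mechanism: breadth-first search over the learned map."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in terrain.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None                      # goal not yet on the learned map

    # Exploration fills in the contents ...
    for a, b in [("nest", "stream"), ("stream", "hillside"),
                 ("stream", "cave"), ("hillside", "fruit_tree")]:
        learn_path(a, b)
    # ... which then supports planning a route to an out-of-sight location.
    print(plan_route("nest", "fruit_tree"))
    # ['nest', 'stream', 'hillside', 'fruit_tree']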

Note:
Thinking about the evolution, development and learning of information-processing
capabilities leads to the conclusion that the "Baldwin Effect", a postulated
process by which what is first learnt is later absorbed into the genome, is just
one among many forms of trade-off between species learning and individual
learning. See this account of evolution as a "blind mathematician":
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/bio-math-phil.html

This sort of evolutionary transition may require a process of abstraction from a
working mechanism, to produce a more schematic genetic specification that can be
instantiated differently in different individuals. The best known and most
spectacular example of that is the human genome's apparently unique support for
processes of learning and development capable of leading to thousands of
significantly different human languages. But that is itself likely to be a
variant of something more general found in a wider range of species, as
suggested below. (Programmers with experience of a wide variety of programming
languages and paradigms, and of programming tasks of varying complexity, will
recognize that similar transitions occur in the mind of a programmer, or in a
programming community, over many years. These are relatively new forms of human
and cultural development, which suggest ideas for evolutionary transitions that
were previously unimaginable, e.g. by Darwin.)

More sophisticated organisms can use not only information about impending
physical events but also intentional information-based influences coming from
other organisms, e.g. threat signals, collaborative signals, invitations, sexual
approaches, arguments, demonstrations, and many more. The ability to be
influenced by external information has many forms ranging from very ancient and
simple reactions such as blinking when detecting a rapidly approaching object to
making use of complex learning and reasoning abilities. In a huge variety of
cases, though not all, the information processing requires internal manipulation
of information-bearing structures, usually implemented in microscopic
physical/chemical mechanisms.

The ability of sub-microscopic chemical processes within an organism to control
application of forces to much larger objects requires the ability of the
smallest entities to modulate release of stored energy in larger structures,
such as membranes, cells, muscles, limbs, and in some cases external machinery.
Deciding what to do and when to do it, and knowing why, all require the ability
to use information. However, as indicated in the CogAff architecture schema,
different processing layers in the same organism, which evolved at different
times, can manipulate and use very different sorts of information (Sloman, 2003).
An earlier version of this idea was proposed in MacLean's theory of the "triune
brain" (MacLean, 1990).

The core of our problem is not how varied physical forms evolved or how varied
physical behaviours became available, but how ever more sophisticated
information-processing capabilities evolved, making use of the physical forms,
to produce the behaviour -- and what those capabilities were.

A necessary condition for the observed variety of physical forms and physical
behaviours is the existence of sub-microscopic reconfigurable components (i.e.
physical sub-atomic particles, atoms, molecules, etc.), capable of being
combined to form a huge variety of more or less stable or multi-stable
structures, on many scales. Many of the structures composed from myriad parts
can move as coherent wholes through 3-D space, in some cases changing their
structures (internal relationships) and their (external) relationships to other
structures.

Some of the motions and other changes are produced solely by external
influences, while others are self controlled. Some use only information about
the immediate environment, and discard it after use (on-line intelligence),
while others refer to possible future events and possible past events that could
explain current facts, or to planned and unwanted events (off-line
intelligence). The difference is discussed in more detail later.
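
The contrast can be caricatured in a few lines of code (names and data
invented): the on-line controller consumes each sensed value immediately and
keeps nothing, while the off-line agent stores episodes, so it can later cite
possible past events that would explain a current fact, or anticipate merely
possible future ones.

    def online_balance(tilt):
        """On-line: use the sensed value immediately, keep no record."""
        return -0.5 * tilt                    # corrective push, then forgotten

    class OfflineAgent:
        """Off-line: keeps records, so past and possible events can be used."""
        def __init__(self):
            self.episodes = []                # durable (situation, outcome) store

        def record(self, situation, outcome):
            self.episodes.append((situation, outcome))

        def explain(self, outcome):
            """Suggest past situations that could explain a current fact."""
            return [s for s, o in self.episodes if o == outcome]

        def anticipate(self, situation):
            """Predict outcomes of a merely possible, not current, situation."""
            return [o for s, o in self.episodes if s == situation]

    agent = OfflineAgent()
    agent.record("stepped_on_wet_rock", "slipped")
    agent.record("stepped_on_dry_rock", "stable")
    print(online_balance(0.3))                      # -0.15, then discarded
    print(agent.explain("slipped"))                 # ['stepped_on_wet_rock']
    print(agent.anticipate("stepped_on_wet_rock"))  # ['slipped']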

Initially only physical and chemical processes were controlled, e.g. reactions
to contact with nutrients or noxious substances, and motion towards or away from
other occupants of the immediate environment. Later, control mechanisms were
controlled, e.g. selecting between two competing control mechanisms, or creation
of new control mechanisms and many other forms of information processing now
found in individuals, social systems, and ecosystems.
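
A minimal invented sketch of "control mechanisms being controlled": an arbiter
selects between two competing control laws, a second-order control process
whose subject matter is controllers rather than limbs or chemicals.

    def coarse(error):
        return 2.0 * error        # fast, strong response for large errors

    def fine(error):
        return 0.5 * error        # gentle, precise response for small errors

    def arbiter(error):
        """Second-order control: chooses which controller to deploy."""
        law = coarse if abs(error) > 1.0 else fine
        return law(error)

    print(arbiter(5.0))   # far from target: coarse law -> 10.0
    print(arbiter(0.2))   # close to target: fine law   -> 0.1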

Various sorts of (positive and negative) affordances for producing change,
provided by physical environments of different sorts, are ubiquitous, e.g.
opportunities to collide with, avoid, push, pass through, pull, twist, bend,
rotate, squeeze, tear open, or eat physical objects. As a result, it seems that
overlapping information processing capabilities emerged independently in very
different evolutionary lineages, namely the perceptual and motor information
processing required for control of widely used physical actions (e.g. in
octopus, felidae (cats), parrots, squirrels, corvids, elephants, and primates).
(I am using a generalisation of Gibson's notion of "affordance", as explained in
(Sloman, 2011c).) Convergent evolution of cognitive competences can provide
opportunities for convergent evolution of meta-cognitive competences.

In the more recent history of the planet, the growth in physical size and
complexity of animals, along with increasing sophistication of control
mechanisms, seems to have been accompanied, in some organisms, by continual
growth of meta-cognition: understanding of what is possible in advance of
action, of what has been done and what else might have happened, and of how to
select among competing cognitive and meta-cognitive capabilities
(meta-management (Beaudoin, 1994), or reflective intelligence (Minsky, 2006)),
including abilities to compare and reason about not only alternative possible
actions, but also alternative goals, or alternative planning or reasoning
strategies.

Later on, increasingly sophisticated control processes were themselves created
and controlled by other control processes. Meta-control took on increasingly
intricate and varied forms, including, in some cases, use of external records
and reasoning aids, and training or teaching of one individual by another,
shortening times required for individuals to develop some of the more
sophisticated forms of information processing. Although there have been various
isolated attempts to design meta-cognitive capabilities in AI (some of them
echoing aspects of Freud's "Super-ego" concept), e.g. (Sloman, 1978a; Shallice &
Evans, 1978; Minsky, 1987; Newell, 1990; Russell & Wefald, 1991;
Karmiloff-Smith, 1992; Beaudoin, 1994; Cox & Raja, 2007; Sloman, 2006b;
Shallice & Cooper, 2011), and very many more, most of them refer only to humans,
or to a specific proposed AI system. As far as I know nobody has attempted to
compile a comprehensive survey of the types of meta-cognition that can be useful
for biological or artificial systems, and of the environmental and other sources
of pressure to select them.

We do not yet have an adequate vocabulary to describe all these "high level"
control capabilities, though familiar words and phrases like "sense",
"perceive", "learn", "want", "imagine", "decide", "resist", "refrain from",
"plan", "attend to", "think about", "understand", "communicate", "conscious of",
and many more, point at some of these capabilities and processes. Recent
additions, such as "cognition", "meta-cognition", "executive function", and
other jargon used by philosophers and scientists, may suggest deep theoretical
advances, but often merely indicate groping towards new forms of understanding
about the varieties of information processing in humans and other animals.

We cannot expect good theories until we have much better ontologies for possible
constituents of explanatory theories, based on deeper analysis of more varied
naturally occurring control problems.

[This is a hasty first draft. There's much more work to be done.]
_______________________________________________________________________________

References (to be pruned)

Anscombe, G. (1957). Intention. Blackwell.

Apperly, I. (2010). Mindreaders: The Cognitive Basis of "Theory of Mind".
London: Psychology Press.

Baez, J. (2005, Dec). Subcellular Life Forms.
http://math.ucr.edu/home/baez/subcellular.html

Balon, E. K. (2004, May-Aug). Evolution by Epigenesis: Farewell to Darwinism,
Neo- and Otherwise. Rivista di Biologia, 97(2), 269--312.

Beaudoin, L. (1994). Goal processing in autonomous agents. Unpublished doctoral
dissertation, School of Computer Science, The University of Birmingham,
Birmingham, UK.
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#38

Block, N. (1996). What is functionalism?
http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/functionalism.html
(Originally in The Encyclopedia of Philosophy Supplement, Macmillan, 1996)

Cellerier, G., & Etienne, A. (1983). Artificial intelligence and animal
psychology: A response to Boden, Margolis and Sloman. New Ideas in Psychology,
1(3), 263--279. http://dx.doi.org/10.1016/0732-118X(83)90040-5

Chappell, J., & Sloman, A. (2007). Natural and artificial meta-configured
altricial information-processing systems. International Journal of
Unconventional Computing, 3(3), 211--239.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609

Cox, M. T., & Raja, A. (2007). Metareasoning: A manifesto (BBN Technical Memo
No. TM-2028). Cambridge, MA: BBN Technologies.
(http://www.mcox.org/Metareasoning/Manifesto/manifesto.pdf)

Craik, K. (1943). The nature of explanation. London, New York: Cambridge
University Press.

Dennett, D. (1987). The Intentional Stance. Cambridge, MA: MIT Press.

Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA:
Houghton Mifflin.

Jablonka, E., & Lamb, M. J. (2005). Evolution in Four Dimensions: Genetic,
Epigenetic, Behavioral, and Symbolic Variation in the History of Life. Cambridge
MA: MIT Press.

Kant, I. (1781). Critique of pure reason. London: Macmillan. (Translated (1929)
by Norman Kemp Smith)

Karmiloff-Smith, A. (1992). Beyond Modularity: A Developmental Perspective on
Cognitive Science. Cambridge, MA: MIT Press.

MacLean, P. (1990). The Triune Brain in Evolution: Role in Paleocerebral
Functions. New York: Plenum Press.

Maynard Smith, J., & Szathmary, E. (1995). The Major Transitions in Evolution.
Oxford, England: Oxford University Press.

McCarthy, J. (2007). From Here to Human-Level AI. Artificial Intelligence,
171(18), 1174--1182. (doi:10.1016/j.artint.2007.10.009; originally KR'96)

Minsky, M. L. (1963). Steps toward artificial intelligence. In E. Feigenbaum &
J. Feldman (Eds.), Computers and thought (pp. 406--450). New York: McGraw-Hill.

Minsky, M. L. (1987). The society of mind. London: William Heinemann Ltd.

Minsky, M. L. (2006). The Emotion Machine. New York: Pantheon.

Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard
University Press.

Russell, S. J., & Wefald, E. H. (1991). Do the Right Thing: Studies in Limited
Rationality. Cambridge, MA: MIT Press.

Scheutz, M., & Logan, B. (2001). Affective vs. deliberative agent control. In
C. Johnson et al. (Eds.), Proceedings Symposium on Emotion, Cognition and
Affective Computing, AISB'01 Convention. York.

Shallice, T., & Cooper, R. P. (2011). The Organisation of Mind. Oxford: OUP.

Shallice, T., & Evans, M. E. (1978). The involvement of the frontal lobes in
cognitive estimation. Cortex, 14, 294--303.
http://www-personal.umich.edu/~evansem/shallice-evans.doc

Singh, S., Lewis, R. L., & Barto, A. G. (2009). Where Do Rewards Come From? In
N. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of
the Cognitive Science Society (pp. 2601--2606). Austin, TX, USA: Cognitive
Science Society.
http://csjarchive.cogsci.rpi.edu/Proceedings/2009/papers/590/paper590.pdf

Sloman, A. (1971). Interactions between philosophy and AI: The role of intuition
and non-logical reasoning in intelligence. In Proc. 2nd IJCAI (pp. 209--226).
London: William Kaufmann.
http://www.cs.bham.ac.uk/research/cogaff/04.html#200407

Sloman, A. (1978a). The computer revolution in philosophy. Hassocks, Sussex:
Harvester Press (and Humanities Press).
http://www.cs.bham.ac.uk/research/cogaff/crp

Sloman, A. (1978b). What About Their Internal Languages? Commentary on three
articles, by Premack & Woodruff, by Griffin, and by Savage-Rumbaugh, Rumbaugh &
Boysen. Behavioral and Brain Sciences, 1(4), 515.
http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#713

Sloman, A. (1979). The primacy of non-communicative language. In M. MacCafferty
& K. Gray (Eds.), The analysis of Meaning: Informatics 5 Proceedings ASLIB/BCS
Conference, Oxford, March 1979 (pp. 1--15). London: Aslib.
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#43

Sloman, A. (1996). Actual possibilities. In L. Aiello & S. Shapiro (Eds.),
Principles of knowledge representation and reasoning: Proc. 5th int. conf.
(KR '96) (pp. 627--638). Boston, MA: Morgan Kaufmann Publishers.
http://www.cs.bham.ac.uk/research/cogaff/96-99.html#15

Sloman, A. (2002). Diagrams in the mind. In M. Anderson, B. Meyer, & P. Olivier
(Eds.), Diagrammatic representation and reasoning (pp. 7--28). Berlin:
Springer-Verlag.
http://www.cs.bham.ac.uk/research/projects/cogaff/00-02.html#58

Sloman, A. (2003). The Cognition and Affect Project: Architectures,
Architecture-Schemas, And The New Science of Mind. (Tech. Rep.). Birmingham, UK:
School of Computer Science, University of Birmingham.
(http://www.cs.bham.ac.uk/research/projects/cogaff/03.html#200307 (Revised
August 2008).)

Sloman, A. (2006a). Four Concepts of Freewill: Two of them incoherent.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/four-kinds-freewill.html
(Unpublished discussion paper (HTML))

Sloman, A. (2006b, May). Requirements for a Fully Deliberative Architecture (Or
component of an architecture) (Research Note No. COSY-DP-0604). Birmingham, UK:
School of Computer Science, University of Birmingham.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0604

Sloman, A. (2007). Why symbol-grounding is both impossible and unnecessary, and
why theory-tethering is more powerful anyway. (Research Note No. COSY-PR-0705).
Birmingham, UK.
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models

Sloman, A. (2008a). Evolution of minds and languages. What evolved first and
develops first in children: Languages for communicating, or languages for
thinking (Generalised Languages: GLs)? (Research Note No. COSY-PR-0702).
Birmingham, UK.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0702

Sloman, A. (2008b). The Well-Designed Young Mathematician. Artificial
Intelligence, 172(18), 2015--2034.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807

Sloman, A. (2009a). Architecture-Based Motivation vs Reward-Based Motivation.
Newsletter on Philosophy and Computers, 09(1), 10--13.
http://www.apaonline.org/publications/newsletters/v09n1

Sloman, A. (2009b). Some Requirements for Human-like Robots: Why the recent
over-emphasis on embodiment has held up progress. In B. Sendhoff, E. Koerner, O.
Sporns, H. Ritter, & K. Doya (Eds.), Creating Brain-like Intelligence (pp.
248--277). Berlin: Springer-Verlag.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0804

Sloman, A. (2010a, August). How Virtual Machinery Can Bridge the "Explanatory
Gap", In Natural and Artificial Systems. In S. Doncieux & et al. (Eds.),
Proceedings SAB 2010, LNAI 6226 (pp. 13--24). Heidelberg: Springer.
Available from http://www.cs.bham.ac.uk/research/projects/cogaff/10.html#sab

Sloman, A. (2010b). If Learning Maths Requires a Teacher, Where did the First
Teachers Come From? In A. Pease, M. Guhe, & A. Smaill (Eds.), Proc. Int. Symp.
on Mathematical Practice and Cognition, AISB 2010 Convention (pp. 30--39). De
Montfort University, Leicester.
http://www.cs.bham.ac.uk/research/projects/cogaff/10.html#1001

Sloman, A. (2010c). Phenomenal and Access Consciousness and the "Hard"
Problem: A View from the Designer Stance. Int. J. Of Machine Consciousness,
2(1), 117-169.
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#906

Sloman, A. (2011a, July). Evolution of mind as a feat of computer systems
engineering: Lessons from decades of development of self-monitoring virtual
machinery.
http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1103
(Invited talk at: Pierre Duhem Conference, Nancy, France, 19th July 2011)

Sloman, A. (2011b). What's information, for an organism or intelligent machine?
How can a machine or organism mean? In G. Dodig-Crnkovic & M. Burgin (Eds.),
Information and Computation (pp. 393--438). New Jersey: World Scientific.
Available from http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#905

Sloman, A. (2011c, Sep). What's vision for, and how does it work? From Marr (and
earlier) to Gibson and Beyond.
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk93
(Online tutorial presentation, also at
http://www.slideshare.net/asloman/)

Sloman, A. (2012a). Paper 1: Virtual machinery and evolution of mind
(part 1). In S. B. Cooper & J. van Leeuwen (Eds.),
Alan Turing - His Work and Impact (p. ???-???). Amsterdam: Elsevier.
http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1106a

Sloman, A. (2012b). Paper 2: Virtual machinery and evolution of mind
(part 2). In S. B. Cooper & J. van Leeuwen (Eds.),
Alan Turing - His Work and Impact
(p. ???-???). Amsterdam: Elsevier.
http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1106b

Sloman, A. (2012c). Paper 4: Virtual machinery and evolution of mind
(part 3) meta-morphogenesis: Evolution of information-processing machinery.
In S. B. Cooper & J. van Leeuwen (Eds.),
Alan Turing - His Work and Impact. Amsterdam: Elsevier.
http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1106d

Sloman, A. (2012d). Meta-morphogenesis and the Creativity of Evolution. In
T. R. Besold, K.-U. Kuehnberger, M. Schorlemmer, & A. Smaill (Eds.), Proc. ECAI
2012 Workshop on Computational Creativity, Concept Invention, and General
Intelligence. Montpellier, France, 27 August 2012.
http://www.cs.bham.ac.uk/research/projects/cogaff/12.html#1203

Sloman, A., & Chappell, J. (2005). The Altricial-Precocial Spectrum for Robots.
In Proceedings IJCAI'05 (pp. 1187--1192). Edinburgh: IJCAI.
http://www.cs.bham.ac.uk/research/cogaff/05.html#200502

Spiegelman, S., Haruna, I., Holland, I. B., Beaudreau, G., & Mills, D.
(1965). The synthesis of a self-propagating and infectious nucleic acid with a
purified enzyme. Proc Natl Acad Sci USA, 54(3), 919--927.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC219765

Strawson, P. F. (1959). Individuals: An essay in descriptive metaphysics.
London: Methuen.

Sumper, M., & Luce, R. (1975). Evidence for de novo production of
self-replicating and environmentally adapted RNA structures by bacteriophage
Qbeta replicase. Proc Natl Acad Sci U S A., 72, 162--166.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC432262

Turing, A. M. (1952). The Chemical Basis Of Morphogenesis. Phil. Trans. R.
Soc. London B, 237, 37--72.

Weir, A. A. S., Chappell, J., & Kacelnik, A. (2002). Shaping of hooks in New
Caledonian crows. Science, 297, 981.

Wright, I. (1997). Emotional agents. Unpublished doctoral dissertation, School
of Computer Science, The University of Birmingham.
http://www.cs.bham.ac.uk/research/cogaff/phd-theses.html

Zadeh, L. A. (2001). A New Direction in AI: Toward a Computational Theory of
Perceptions. AI Magazine, 22(1), 73--84.
_______________________________________________________________________________

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham