
Kinds Of Dynamical System: A request for help

Installed: 23 Nov 2010
Last updated: 23 Nov 2010; 13 Jul 2011; 10 Aug 2012; 7 Aug 2013

Out of date: Please see this later document:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/multipic-challenge.pdf
A Multi-picture Challenge for Theories of Vision
Including a section on types of dynamical system relevant to cognition.

Common views about dynamical systems

This is a request for help/comments/criticisms.

An early precursor of this document was a note I posted to the Usenet discussion
group Psyche-B, in August 2000, in response to a message posted by John McCrone.
That posting is available here:
    http://tinyurl.com/BhamCog/misc/dynamical-systems.txt
    NOTE ON DYNAMICAL SYSTEMS

That note partly reflected criticisms of proponents of dynamical-systems approaches
that I had previously published in

    Aaron Sloman, The mind as a control system,
    in Philosophy and the Cognitive Sciences, Cambridge University Press, 1993,
    Eds. C. Hookway and D. Peterson, pp. 69--110,
    http://tinyurl.com/BhamCog/81-95.html#18

More recently, I started to summarise some more detailed ideas about dynamical
systems in this unfinished paper:
    http://tinyurl.com/BhamCog/misc/dynamical-systems.html

I am now trying to write up in a bit more detail some ideas about the sorts of
(mostly virtual, not physical) dynamical systems that seem to run on brains and are
likely to be needed in future intelligent machines, not all of which have been
considered by dynamical system theorists.

Most discussions of dynamical systems have made a number of highly restrictive
assumptions, listed below. I shall also list ways in which these assumptions need to
be relaxed.

Help is requested, both in the form of pointers to existing work on these broader
notions of dynamical system, and in the form of assistance or collaboration in
improving the specifications below, including the production of a collection of
illustrative examples, especially of natural (biological) dynamical systems of the
types described here.

Common views about dynamical systems are too restrictive

1. A dynamical system (DS) is often assumed to have a single state defined by a
state-vector: a set of numbers ("scalar values"), or in some cases labels each drawn
from some fixed set of alternatives ("nominal values"), where the values in the
vector represent physical states of components.

The set of possible state vectors is the state space. The fact that a single state,
represented by a single state vector, characterises the whole system led to the
criticism that such theories take account only of "atomic state" dynamical systems.

Below I describe several types of dynamical system composed of multiple, relatively
independent, small or large dynamical systems which interact with one another, some
of which are created and destroyed dynamically. This amounts to a criticism of
theories that postulate a single dynamical system encapsulating all the cognitive
(including control) functions of an animal or a robot.
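
To make the contrast concrete, here is a minimal Python sketch (toy dynamics and
names invented for illustration, not a model of any real system). The classical
picture is one global state vector evolving under one law; the alternative below it
is a pair of sub-systems, each with its own state and law, coupled only weakly:

    # Classical "atomic state" picture: one state vector, one global law.
    def atomic_step(x, dt=0.01):
        """One Euler step of the toy law dx/dt = -0.5 x, applied globally."""
        return [xi + dt * (-0.5 * xi) for xi in x]

    # Contrasting picture: sub-systems with their own states and laws,
    # interacting only through narrow interfaces.
    class SubDS:
        def __init__(self, state, law):
            self.state, self.law = state, law

        def step(self, neighbour_state):
            self.state = self.law(self.state, neighbour_state)

    fast = SubDS(1.0, lambda s, n: 0.9 * s + 0.1 * n)    # tracks its neighbour
    slow = SubDS(5.0, lambda s, n: s + 0.001 * (n - s))  # barely coupled
    for _ in range(100):
        fast.step(slow.state)
        slow.step(fast.state)
    print(fast.state, slow.state)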

2. It is often assumed that state changes of a DS are subject to a set of dynamical
laws that can be expressed as differential equations or difference equations (for
instance in the case of cellular automata -- like Conway's Life). In contrast I think
we require far more varied kinds of states and transitions, including for example
construction of novel percepts, motive construction, plan formation, plan execution,
construction of questions, construction of theories, and many more.
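
For concreteness, here is a minimal sketch of the restricted kind of transition rule
referred to above: one synchronous step of Conway's Life, in which the entire next
state is a fixed function of the current state (a difference equation over a grid of
cells). The sparse set-of-live-cells representation is my own choice:

    from collections import Counter

    def life_step(grid):
        """One synchronous Life step; grid is a set of live (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in grid
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in grid)}

    blinker = {(0, 1), (1, 1), (2, 1)}        # oscillates with period 2
    assert life_step(life_step(blinker)) == blinker

Note how far this is from transitions such as constructing a percept or formulating
a question: every Life step applies the same local rule everywhere, synchronously.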

3. It is often assumed that discrete changes are synchronized (i.e. that all the
difference equations describe what happens in a single global time step). In
contrast, a more general theory allows for asynchronous operation of subsystems.
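
A minimal sketch of what asynchrony means here (periods and names invented): an
event-driven loop in which each sub-system fires on its own clock, so there is no
single global time step:

    import heapq

    def run_async(subsystems, until):
        """subsystems: list of (period, update_fn) pairs."""
        queue = [(period, i) for i, (period, _) in enumerate(subsystems)]
        heapq.heapify(queue)
        while queue and queue[0][0] <= until:
            t, i = heapq.heappop(queue)
            period, update = subsystems[i]
            update(t)                          # sub-system i changes its state
            heapq.heappush(queue, (t + period, i))

    run_async([(0.7, lambda t: print(f"fast subsystem at t={t:.1f}")),
               (1.9, lambda t: print(f"slow subsystem at t={t:.1f}"))],
              until=5.0)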

4. For continuous changes it is often assumed that there's a global time-frame, so
that rates of change in different parts of the system are comparable and coordinated.
In contrast I suggest that some of the different components of a complex biological
organism may be as uncoordinated as different parts of a forest. The requirement for
coordination and centralised control becomes stronger when needs or goals can only be
satisfied by harnessing many subsystems, e.g. when walking or running.

5. It is often assumed that a DS has (discrete, or continuous, or hybrid)
trajectories defined by the changes of the state-vector. Instead there may be many
independent trajectories of sub-systems.

6. A DS may be autonomous (e.g. it just starts off in some state and then runs, like
most versions of Conway's Life) or embedded in a larger system, which itself is a
dynamical system and can influence or be influenced by some of the components of the
state vector.

7. It is often required that a DS has attractors: regions of the state space that a
trajectory will not leave once it has entered, unless some external influence alters
the trajectory.

There are other relevant features of state-spaces, arising from relations between
attractor basins, from whether the behaviour is chaotic, and so on.
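
A one-line example of an attractor, with arbitrary toy parameters: under the map
x -> x/2 + 1, every trajectory converges to the fixed point x = 2 and stays there
unless perturbed from outside:

    def step(x):
        return x / 2 + 1              # contraction with fixed point x = 2

    for x0 in (-10.0, 0.0, 50.0):
        x = x0
        for _ in range(40):
            x = step(x)
        print(f"start {x0:+.1f}: settled near {x:.6f}")   # all near 2.0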

8. A DS is often assumed to be a fixed structure insofar as the number of components
of the state-vector is fixed and the set of dynamical laws is fixed. In contrast, we
need to allow for many types of DS, including some that grow themselves, add or remove
subsystems, modify their control mechanisms, etc.

The dynamical system constituting a newborn human infant will be very different from
the DS that the same individual has developed into several decades later.
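
A minimal sketch (all names invented) of what relaxing this assumption involves: a
system whose set of sub-systems, and hence whose set of laws, can change while it
runs:

    class GrowingSystem:
        def __init__(self):
            self.subsystems = {}              # name -> update function

        def add(self, name, update):
            self.subsystems[name] = update

        def remove(self, name):
            self.subsystems.pop(name, None)

        def step(self):
            # Snapshot first: an update may itself add or remove sub-systems.
            for name, update in list(self.subsystems.items()):
                update(self)

    s = GrowingSystem()
    s.add("grower", lambda sys: sys.add("reflex", lambda sys: None))
    s.step()
    print(sorted(s.subsystems))               # ['grower', 'reflex']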

Requirements for cognitive dynamical systems (CDSs)

We need to allow for dynamical systems that violate almost all the constraints listed
above, if we want to understand animal cognition.

I'll talk about a CDS (Cognitive Dynamical System).

1. Instead of having a single state, with a single trajectory and global attractors,
a CDS needs a collection of more or less disconnected, enduring sub-systems, each
itself a CDS, more or less closely coupled with the others, where the different CDSs
may change their states at different rates and asynchronously relative to one
another.

For example, a human-like (adult) cognitive system typically needs several varieties
of perceptual subsystems, various action control subsystems, various subsystems
concerned with generation and management of motivation, subsystems capable of
reasoning, planning, problem solving, explanation-generation and self-monitoring,
different kinds of learning, language understanding and language generation
sub-systems: putting these all into one big mush of a dynamical system would produce
something totally unmanageable -- even by evolution.
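
A toy sketch of such a decomposition (sub-system names and rates invented; nothing
here models real cognition): separate sub-systems stepping at different rates on
loosely coupled message streams, instead of one global state:

    class Subsystem:
        def __init__(self, name, period):
            self.name, self.period, self.inbox = name, period, []

        def step(self, tick):
            # Each sub-system runs at its own cadence on its own inputs.
            if tick % self.period == 0 and self.inbox:
                print(self.name, "processes", len(self.inbox), "messages")
                self.inbox.clear()

    perception = Subsystem("perception", period=1)    # fast, world-coupled
    motivation = Subsystem("motivation", period=7)    # slower
    planning   = Subsystem("planning",   period=20)   # slowest, most abstract

    for tick in range(1, 41):
        perception.inbox.append("percept")
        if tick % 3 == 0:
            motivation.inbox.append("candidate motive")
        if tick % 10 == 0:
            planning.inbox.append("replan request")
        for sub in (perception, motivation, planning):
            sub.step(tick)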

2. The changeable components of a CDS are not restricted to scalar or nominal values,
but can include a variety of structured objects including logical or algebraic
formulae, grammars (or something with similar functionality), parse trees or charts
or other structures concerned with understanding complex inputs, (representations of)
geometric shapes, fragments of shapes (e.g. a portion of a curved surface),
topological structures (including ordered lists, trees, graphs, other discrete or
continuous manifolds, and perhaps things that map onto things like chemical
structures), and perhaps also processes (e.g. program executions, vibrating
structures, a rehearsed password, fragments of tunes...)

Not all the changes are continuous, or usefully representable by differential
equations, or difference equations. The possibilities for a given substructure need
not form a linear space: e.g. the permutations of a list of names, the set of
operators applicable to a particular equation, or the set of possible quadrilateral
shapes.

For all those reasons, the state of a particular CDS is not usefully represented as
a point in a vector space of fixed dimensionality.
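
A small illustration (contents invented) of a state built from structured objects,
with a discrete transition whose possibility space is a set of permutations rather
than a region of a vector space:

    state = {
        "parse_tree": ("S", ("NP", "the cat"), ("VP", "sat")),
        "agenda":     ["grasp", "lift", "stack"],
        "map_graph":  {"kitchen": {"hall"}, "hall": {"kitchen", "garden"}},
    }

    def swap(agenda, i, j):
        # One discrete transition: the reachable agendas are permutations
        # of the list, not points along a continuous trajectory.
        new = list(agenda)
        new[i], new[j] = new[j], new[i]
        return new

    state["agenda"] = swap(state["agenda"], 0, 2)
    print(state["agenda"])                    # ['stack', 'lift', 'grasp']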

3. The subsystems need not have fixed numbers of components: e.g. the parse tree or
graph grown by a sentence-processing subsystem can have different numbers of
components at any one time. (Components may be borrowed from, and later returned to,
a pool of components available for temporary use by different sub-systems: i.e. a
'heap'.)
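
A minimal sketch of such a pool (names invented; real implementations would differ):
components are borrowed to build a temporary structure and returned when it is
discarded:

    class ComponentPool:
        def __init__(self, size):
            self.free = [{"in_use_by": None} for _ in range(size)]

        def borrow(self, owner):
            node = self.free.pop()            # IndexError if pool exhausted
            node["in_use_by"] = owner
            return node

        def release(self, node):
            node["in_use_by"] = None
            self.free.append(node)

    pool = ComponentPool(100)
    tree = [pool.borrow("sentence-parser") for _ in range(5)]   # grow a tree
    for node in tree:                         # sentence understood: recycle
        pool.release(node)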

4. to be continued...
   Some additional requirements for dynamical systems underlying animal and
   human cognition can be found in this (incomplete, but growing) survey of 'toddler
   theorems':
   http://tinyurl.com/BhamCog/misc/toddler-theorems.html

Varieties of dynamical system
Previously included in this discussion of toddler-theorems:
http://tinyurl.com/BhamCog/misc/toddler-theorems.html

This will need to be updated.

Types of dynamical system
(Updated/expanded 10 Aug 2012)

Using increasingly sophisticated ontologies: Somatic, exosomatic, meta-semantic....

  1. A simple type of dynamical system closely coupled with its environment, and with control mechanisms restricted to using a somatic ontology:

    [Figure dyn0: a simple dynamical system closely coupled with its environment]

    One kind of dynamical system, depicted crudely above, is closely coupled with the environment through sensors and effectors.
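
    A minimal sketch of such tight coupling (sensor, effector, setpoint and gain are all invented placeholders): every quantity the controller uses is a current sensor or effector signal, i.e. a purely somatic ontology.

        def tightly_coupled_controller(read_sensor, drive_effector,
                                       setpoint=0.5, steps=100):
            # Purely somatic: internal state refers only to signal values.
            for _ in range(steps):
                error = setpoint - read_sensor()
                drive_effector(0.8 * error)   # proportional feedback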

  2. Escaping from the restrictions of close environmental coupling
    The next kind, below, has many sub-systems, with different levels of abstraction and decoupling from the environment, producing hierarchical control mechanisms. (See also Rodney Brooks on Subsumption architectures).

    Some subsystems change continuously, especially those closely coupled with the environment, while others change discretely, e.g. generating new instances of known types, and new connections (for instance as the grammatical structure of a sentence is being understood, or as the structures and processes in the environment are perceived).

    At any time many subsystems are dormant, but can be activated (or instances of subsystem-types can be created and activated) very rapidly by constraint-propagation mechanisms.

    Evolution produced both kinds of subsystem (above and below) and many intermediate kinds. The more complex ones grow themselves after birth, under the influence of interactions with the environment (e.g. a human information-processing architecture).

    [Figure dyn1: a multi-layered dynamical system with hierarchical control, with sub-systems at different levels of abstraction]

    The type of system depicted above can be thought of as a hierarchical control system with layers of control influencing the interactions with the immediate environment. A more sophisticated type of system, crudely depicted below, clearly found in humans, some other organisms, and a small subset of robots, can also refer beyond the skin, and beyond what is directly sensed, to entities, properties, relationships and states in the environment (using an exosomatic ontology), including things that may be currently unperceived because they are in remote locations or because there are obstacles in the way.
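
    Before turning to that more sophisticated type, here is a toy sketch of the layered control just described (loosely inspired by subsumption-style architectures, not a faithful implementation; all names invented): a fast, closely coupled reflex layer takes priority, and a slower, more decoupled layer fills in behaviour otherwise.

        def reflex_layer(sensors):
            # Fast, closely coupled to the environment.
            return "reverse" if sensors["obstacle"] else None

        def wander_layer(sensors):
            # Slower, more abstract, less closely coupled.
            return "turn-left" if sensors["explored_right"] else "forward"

        def act(sensors, layers=(reflex_layer, wander_layer)):
            for layer in layers:              # earlier layers take priority
                command = layer(sensors)
                if command is not None:
                    return command

        print(act({"obstacle": True,  "explored_right": False}))  # reverse
        print(act({"obstacle": False, "explored_right": True}))   # turn-left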

  3. Extending the scope of the semantic contents used

    [Figure dyn2: a multi-layered dynamical system whose higher layers refer beyond the sensory interface to remote, unsensed parts of the environment]

    In this kind of multi-layer system, some layers have states and processes that are closely coupled with the environment through sensors and effectors, so that all changes in those layers are closely related to physical changes at the interface. (See the first dynamical system diagram above.) In that sort of system, all the semantic contents in the interface layers are "somatic", referring to patterns, processes and invariants in the input and output signals.

    Other subsystems, operating on different time-scales, with their own (often discrete) dynamics, can refer to more remote parts of the environment, e.g. the internals of perceived objects, past and future events, and places, objects and processes existing beyond the current reach of sensors, or possibly undetectable using sensors alone. These subsystems use "exosomatic" semantic contents (exosomatic ontologies).

    The red lines indicate such reference to remote, unsensed reality. Some of these sub-systems, using exosomatic ontologies, may be products of evolution: their main features and modes of operation are encoded in the genome ("preconfigured").

    But for some (altricial) species and some future robots (or current robots using SLAM techniques -- simultaneous localisation and mapping), the higher layers and their ontologies are constructed as a result of play and exploration, partly under the control of the environment, partly under the control of the genome (or the programmer, in the case of SLAM systems), and partly under the control of earlier developmental processes. These are referred to as "meta-configured" competences in Chappell and Sloman 2007 (above).

    Besides factual information referring to entities beyond "the skin", some of the higher level subsystems can include questions, motives, preferences, policies, plans, and other control information, also referring (amodally) to more or less remote, or past or future entities.
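
    A minimal sketch of exosomatic content (all names invented): a belief about an object persists after the object goes out of sight, and a goal can refer to the currently unperceived object.

        beliefs = {}                          # object -> believed location

        def perceive(obj, location):
            beliefs[obj] = location           # grounded while in view

        perceive("ball", "behind the sofa")
        # The ball is now out of sight; the belief persists, and a goal can
        # refer to an entity that no current sensor signal corresponds to.
        goal = ("fetch", "ball", beliefs["ball"])
        print(goal)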

  4. Using semantics with more sophisticated ontologies.

    As indicated crudely in the diagram below, some organisms (and future robots) can make use of external representations as well as internal ones, e.g. trails left by prey or conspecifics, self-produced trails, diagrams, and later on many kinds of verbal, mathematical, engineering, musical, and other forms of representation.

    [Figure dyn3: a dynamical system making use of external as well as internal representations]

    More sophisticated organisms (and robots) can also refer to past and future states and processes, using subsystems concerned with theorising, reasoning, planning, remembering, predicting and explaining, formulating questions, thinking about answers, or perhaps even free-wheeling daydreaming, wondering, inventing stories, etc.

    A more detailed discussion relevant to toddler theorems would have to distinguish representing specific states and processes (e.g. as a game-engine does) from representing and reasoning about classes of states and processes -- as required for the formulation and discovery of toddler theorems, and for the development of mathematics.


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham