
Synthetic Biology: Information Engineering
A major challenge for engineering, science and education.
Aaron Sloman
Last updated: 19 Feb 2008

The Innovation, Universities and Skills Committee, established by
the UK Parliament, has launched an inquiry into engineering:
http://www.parliament.uk/parliamentary_committees/ius/ius_290108.cfm

The terms of reference agreed for the inquiry are set out at that page.

It turns out that a relevant point has been made in a document
recently circulated by the RAE (Royal Academy of Engineering),
which states:

    The RAE has identified Synthetic Biology as an area of
    scientific and national importance and Professor Richard Kitney
    will be chairing the RAE Working Group to look into this.

What follows is an attempt to explain that a particular aspect of
synthetic biology, namely investigation and development of means of
replicating biological information processing, is potentially of
profound scientific and engineering importance.

What is synthetic biology?

The Wikipedia entry starts:

    The term synthetic biology has long been used to describe an
    approach to biology that attempts to integrate (or "synthesize")
    different areas of research in order to create a more holistic
    understanding of life. More recently the term has been used in a
    different way, signaling a new area of research that combines
    science and engineering in order to design and build
    ("synthesize") novel biological functions and systems.

Compare the definition in:
    http://syntheticbiology.org/FAQ.html

    Synthetic Biology is
    A) the design and construction of new biological parts, devices, and
    systems, and
    B) the re-design of existing, natural biological systems for useful
    purposes.

Most people in the fields of computer science and engineering regard
computation as, by definition, something done by computers roughly as
we know them (e.g. built on principles formulated by people like
Turing and von Neumann). But that is a very narrow view of
computation.

Biological evolution produced a far greater variety of forms and
mechanisms of computation (information-processing) than human
scientists and engineers have yet dreamed of. It is usually
forgotten that Turing machines were invented as a biologically
inspired model of a particular capability of humans, namely
mathematical thinking.

Turing was trying to capture, in the simplest possible form, the
essence of mathematical modes of reasoning. However, there are some
modes of mathematical reasoning that do not fit naturally into
Turing's framework, namely certain kinds of reasoning about geometry
and topology that seem to make use of human understanding of spatial
structures and processes. For more on that see this presentation:
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#math-robot
    Could a Child Robot Grow Up To be A Mathematician And Philosopher?
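
To make the classical notion of computation concrete, here is a
minimal sketch (an illustration added here, not part of the original
argument) of a Turing-style machine in Python: a finite set of rules
reading and writing symbols on an unbounded tape. This toy machine
adds one to a binary numeral.

    # A toy Turing machine: rules maps (state, symbol) to
    # (new_state, symbol_to_write, move), where move is -1 or +1.
    def run_turing_machine(tape, rules, state="start", blank="_",
                           max_steps=1000):
        cells = dict(enumerate(tape))   # sparse tape: position -> symbol
        pos = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            state, write, move = rules[(state, cells.get(pos, blank))]
            cells[pos] = write
            pos += move
        return "".join(cells.get(i, blank)
                       for i in range(min(cells), max(cells) + 1)).strip(blank)

    # Increment a binary numeral: scan to its right end, then
    # propagate a carry leftwards.
    rules = {
        ("start", "0"): ("start", "0", +1),
        ("start", "1"): ("start", "1", +1),
        ("start", "_"): ("carry", "_", -1),
        ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, carry on
        ("carry", "0"): ("halt",  "1", -1),  # 0 + carry -> 1, done
        ("carry", "_"): ("halt",  "1", -1),  # overflow: new leading 1
    }

    print(run_turing_machine("1011", rules))  # prints 1100 (11 + 1 = 12)

The point of the contrast drawn above is that this style of discrete,
serial symbol manipulation is only one corner of the space of
possible information-processing mechanisms.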

Varieties of biological information processing

Some people think that all biological computation can be subsumed
under evolutionary computation and neural computation, but this
ignores two facts:

 (a) the vast majority of organisms do not have brains or neurons

 (b) the vast majority of what goes on in individual organisms is
     chemical/molecular computation:
     e.g. human brains could not be built, or run, without it.

We are only just beginning to understand what such mechanisms can do
and to model their operation. By the end of this century the fruits
of such understanding could dominate artificial information
processing systems.
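
To make "chemical/molecular computation" concrete, here is a minimal
sketch (an added illustration with arbitrary parameter values, not
from the original text) of a well-known example: a genetic toggle
switch, in which two genes repress each other. The resulting
bistable system is in effect a one-bit memory implemented in
molecules. The standard rate equations are integrated with a simple
Euler method.

    # Mutual repression: each protein suppresses production of the other.
    #   du/dt = a / (1 + v**n) - u
    #   dv/dt = a / (1 + u**n) - v
    def simulate_toggle(u, v, a=10.0, n=2, dt=0.01, steps=5000):
        for _ in range(steps):
            du = a / (1.0 + v ** n) - u
            dv = a / (1.0 + u ** n) - v
            u += du * dt
            v += dv * dt
        return u, v

    # Different transient inputs leave the system in different stable
    # states: the chemistry "remembers" which signal it last received.
    print(simulate_toggle(u=2.0, v=0.1))  # settles with u high, v low
    print(simulate_toggle(u=0.1, v=2.0))  # settles with v high, u low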

In any case there is still much that is not understood, both about
evolution (e.g. the role of epigenesis) and about how brains work
(e.g. what the different sorts of neurons and configurations of
neurons are for, what they do, and how they do it).

The need to learn more about what whole organisms can do

One of the common mistakes is to assume that we know what humans can
do and the only problem is to find out how they do it and then
replicate that. Unfortunately we then focus on mechanisms, and fail
to study the requirements properly. For that, we need to look
closely at the environments and competences of whole organisms to
investigate the information processing demands.

    Some critical comments on ambitious artificial intelligence projects
    that focus primarily on mechanisms:
        Bill Gates in Scientific American
        The Numenta project of Jeff Hawkins

When we can specify in much more detail the functions of whole
organisms, including the many problems they have to overcome in
coping with complex environments that are sometimes hostile and
sometimes supportive, we shall be in a better position to design
working systems that explain how they do it.

The potential engineering applications will be of enormous
significance, e.g. in building far more intelligent and robust
artificial systems of many more kinds than we can build now. (There
are understandable concerns about trusting such systems, but that
is a topic for another occasion.)

There are many problems of automation that people hope can be solved
by extensions of current techniques, but which seem to require
entirely new biologically inspired approaches. E.g. what the human
visual system can do, as demonstrated in these two little
experiments

    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/multipic-challenge.pdf
    http://www.cs.bham.ac.uk/research/projects/cogaff/challenge.pdf

is far beyond anything on the horizon in AI/Machine vision (many of
whose practitioners confuse perception with recognition).

Self-constructing information-processing architectures

Human and some animal information-processing systems do not start
off fully programmed but grow their information processing
architectures, including developing their own ontologies and some
new forms of representation. In part this process is driven by
finding out from the environment what the problems are and what
works. However, innate mechanisms of suitable power and flexibility
are required to exploit the opportunities provided by the environment.

    For a more detailed analysis of the tradeoffs involved see
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609 (PDF)
    Natural and artificial meta-configured altricial
    information-processing systems
       Jackie Chappell and Aaron Sloman, IJUC, 2007
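
As a deliberately crude toy (a sketch added here, not the mechanism
proposed by Chappell and Sloman), the following agent grows its own
"ontology": when no existing category fits a new input well enough,
it coins a new category instead of forcing the input into an old one.

    def distance(x, proto):
        return sum((a - b) ** 2 for a, b in zip(x, proto)) ** 0.5

    class GrowingOntology:
        def __init__(self, novelty_threshold=1.0):
            self.prototypes = {}   # category name -> prototype vector
            self.threshold = novelty_threshold

        def classify(self, x):
            # Find the best-fitting existing category, if any.
            best = min(self.prototypes.items(),
                       key=lambda kv: distance(x, kv[1]),
                       default=None)
            if best is not None and distance(x, best[1]) < self.threshold:
                return best[0]
            # Nothing fits: extend the conceptual repertoire.
            name = "kind-%d" % len(self.prototypes)
            self.prototypes[name] = list(x)
            return name

    agent = GrowingOntology()
    for obs in [(0.0, 0.1), (0.1, 0.0), (5.0, 5.0), (5.1, 4.9)]:
        print(agent.classify(obs))
    # prints kind-0, kind-0, kind-1, kind-1: which concepts exist is
    # driven by what the environment actually presents.

Real developmental mechanisms, of course, grow far more than flat
sets of categories; the point is only that the conceptual repertoire
is not fixed in advance.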

At present, the biological information processing mechanisms that
drive and enable that process are still not understood. It may well
be that many future artificial systems will have to be developed in
a similar way, because human designers will have no way of
determining in advance in sufficient detail what the requirements
are for the systems -- e.g. systems that have to operate in
unfamiliar, hostile environments. Moreover the requirements can
change, as happens during the life of humans.

The UKCRC Grand Challenges Initiative

In 2002 the UK Computing Research Committee (UKCRC) sponsored a
discussion, led by Tony Hoare and Robin Milner, that resulted in the
adoption of a number of computing research grand challenges,
described here.

At least three of the UKCRC grand challenges are concerned with
issues that fall within the proposed synthetic biology grand
challenge:

    GC 1: http://fizz.cmp.uea.ac.uk/Research/ivis/
        In Vivo -- In Silico
        The Virtual Worm, Weed and Bug
        Breathing Life into the Biological Data Mountain
        A Grand Challenge for computational systems biology

    GC 5: http://www.cs.bham.ac.uk/research/cogaff/gc/
        The Architecture of Brain & Mind
        Integrating high level cognitive processes with brain
        mechanisms and functions in a working robot.

    GC 7: http://www.cs.york.ac.uk/nature/gc7/
        Journeys in Nonclassical Computation
        The Challenge:
            to produce a fully mature science of all forms of computation,
            that unifies the classical and non-classical paradigms

This contrasts with many analyses of important future trends by
computer scientists and software engineers in academe and industry:
very often the words 'biology', 'biological', 'neural', 'natural',
'evolution', 'intelligence', 'brain', 'perception', 'learning' and
'grand challenge' do not occur in their documents, which focus mainly
on developing current research in computing systems and formalisms
and their applications.

Summary of a manifesto for research in biologically inspired computing:

    One of the potentially important long term engineering
    developments, in which this country is already among the
    research leaders, is modelling, developing and applying
    information-processing mechanisms inspired by those produced by
    biological evolution, including far more intelligent systems
    than we currently know how to build. The potential importance
    of this is recognised in the Synthetic Biology Study of the
    Royal Academy of Engineering, led by Professor Richard Kitney.
    The UK computing research community has already begun to address
    problems in this area in its 'Grand Challenges' and in related
    research projects funded by EPSRC and the EU.

Educational implications:

Unfortunately, since the swing away from teaching programming and
the design of working systems in schools, replacing those activities
with learning to use tools thought to be required by future
employers, the opportunities for young people in schools to learn
about, and be inspired by, the exciting new problems and
opportunities related to understanding and designing information
processing systems have diminished. Worse, what replaced them taught
students to regard computers as useful but essentially boring tools,
like cookers and washing machines. If we are to grasp the new
opportunities, computing-related educational practices will need
drastic re-thinking.

    The need to produce researchers and engineers able to develop
    and apply new results in synthetic biology will require new
    initiatives in schools and universities preparing students for a
    combination of cross-disciplinary thinking and techniques for
    analysing, designing, building, testing, comparing and
    explaining complex information processing systems of many kinds,
    not just systems based on current hardware and software
    technologies. Students will also need to learn how to
    investigate and think about ethical and other implications.

NB: that problem is not addressed by educational proposals that
teach children to regard computing as "fun". Rather, the brightest
ones need to learn to appreciate it as deep and challenging, with the
potential to continue changing our lives.

A sample educational proposal for the last few years of school can
be found here.

That sort of initiative would, ideally, be supported by new forms of
learning starting much earlier.

Alas, it seems to me all too unlikely that enough people in this
country will understand the problems and support appropriate action.


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham