Life, Levels and Schrödinger

Unnoticed implications of
Schrödinger's discussion of life

How did spatial intelligence and mathematical competences evolve?

[THIS DOCUMENT IS BEING RECONSTRUCTED
PLEASE IGNORE IT UNTIL THIS NOTICE IS REMOVED]


Aaron Sloman
http://www.cs.bham.ac.uk/~axs/
School of Computer Science
University of Birmingham


DRAFT (INCOMPLETE) CONTENTS LIST
THIS CONTENTS LIST
I WISH TO THANK
ABOUT THIS DOCUMENT
NOTE ADDED 9 Oct 2020: Schrödinger on Mind and Matter
The Problem To Be Addressed
Some facts about spatial reasoning
Figure Crows Nest
Video of Kitten In Tree
Requirements for spatial reasoning mechanisms
The need for cross-level explanations
Examples of human reasoning about necessity/impossibility
Turing on mathematical intuition vs mathematical ingenuity
Two roles for biological evolution
Isomerism
Figure Isomers
REFERENCES AND LINKS

I WISH TO THANK
Jackie Chappell https://www.birmingham.ac.uk/staff/profiles/biosciences/chappell-jackie.aspx who has deeply influenced my thinking (in our Meta-Configured Genome theory) about relationships and tradeoffs between evolution and learning in humans and other intelligent animals;

Peter Tino https://www.birmingham.ac.uk/staff/profiles/computer-science/tino-peter.aspx who introduced me (around September 2019) to some of the important relevant features of chemistry-based mechanisms of gene expression;

Anthony Leggett https://physics.illinois.edu/people/directory/profile/aleggett who kindly discussed a first draft short presentation of my ideas on this topic (in September 2020), leading to the revised, improved version presented here. Some key features of the current theory concerning evolved metaphysical layers were triggered by Tony's criticism of a simpler draft proposal.

None of these three can be held responsible for any errors in this document, which they have not read.


ABOUT THIS DOCUMENT
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/schrodinger-life-levels.html
Also PDF
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/schrodinger-life-levels.pdf

This is an adjunct to an older document Commentary on Schrödinger(1944), which summarises and comments on some key points in Schrödinger's little book What is life? published in 1944, providing some of the background assumed here. The ideas in What is life? inspired many researchers, including Watson and Crick when they worked on the structure of DNA, published in 1953. (I don't know whether Rosalind Franklin had read it.) It does not seem to be widely known that Schrödinger wrote a sequel, Mind and Matter (published 1958), later re-published in combination with What is life? in Schrödinger(1967).

The key feature of the 1944 book, from my point of view, was its demonstration that quantum physics makes it possible for a collection of atoms to form a stable molecule in more than one way, illustrated by two forms of propyl alcohol (shown on page 57), which differ only in whether an oxygen-hydrogen pair is attached to an end atom or to the middle atom of a chain of carbon atoms, both configurations being stable. Switching between the two states is possible only with a temporary supply of energy (or perhaps the presence of a catalyst). Without fully anticipating all the processes by which DNA is replicated in biological reproduction, or used to produce multiple side-products during gene expression in developing individual organisms, Schrödinger pointed out that the chemical stability in complex molecules required for reliable biological reproduction is comprehensible only in the framework of quantum physics: nothing in Newtonian physics can explain this and other features of chemical compounds that are essential for accurate biological reproduction --- and for other biological uses of chemicals derived from DNA, not discussed by Schrödinger.

This paper attempts to present a new key idea developed in September 2020, about relations between levels in complex systems implemented at the "lowest" level using mechanisms with the features highlighted by Schrödinger. I'll suggest that new mechanisms built on those physical mechanisms may support the discovery and reasoning processes conjectured by Kant as required for ancient mathematical discoveries, which are not restricted to what can be derived from definitions using purely logical reasoning. I.e. ancient mathematicians (and also young children and some other intelligent animals) can make non-empirical discoveries of synthetic necessary truths concerning geometric structures and processes, although only older humans can recognise what they are doing and reflect on it, teach it to others, and organise the results in a body of shared knowledge that grows over time. (One of the earliest such products was Euclid's Elements, written around 300 BCE, perhaps the most important book ever written, in view of its profound influence in mathematics, science, engineering, architecture, education, and even philosophy.)

All this raises the question: what are the reasoning mechanisms that enable human brains to make such discoveries including finding proofs? Could computer-based machines (e.g. future robots) make the same discoveries, or the same sorts of discoveries?

The required mechanisms may not be implementable on digital computers (or Turing machines) if they necessarily include the ability to produce and inspect continuous changes (e.g. moving, rotating, folding/unfolding molecular structures), going beyond familiar logical/algebraic abilities to produce and inspect purely logical, discrete forms of representation. Digital computers can refer to and reason about continuous processes but they cannot make use of them.

The key ideas of this paper arose from a hunch, expressed in Sloman(2013), that Alan Turing's unstated motivation for his work on chemistry-based morphogenesis, reported in Turing(1952), included a suspicion (hinted at in his 1950 Mind paper) that chemical computers using both discrete and continuous processes might be needed for a full explanation of brain functions (in humans and probably also some other intelligent animals), because such mechanisms exceed the powers of digital computers (including Turing machines).

Turing's hunch (or belief?) was expressed in 1939 in terms of a distinction between mathematical ingenuity (of which computers are capable) and mathematical intuition (of which they are not capable, though he did not explain why not).


NOTE ADDED 9 Oct 2020: Schrödinger on Mind and Matter
I have discovered that there is an important supplement to What Is Life?, and that the two are available in combination as Schrödinger(1967). I have not yet had time to do more than sample portions of the new part (Mind and Matter). For example, a text search for comments on Kant (of which there are several in the book) revealed this on page 161: "Einstein has not -- as you sometimes hear -- given the lie to Kant's deep thoughts on the idealisation of space and time; he has, on the contrary, made a large step towards its accomplishment". But I have not, so far, found any agreement or disagreement with Kant's claim that there are types of mathematical knowledge that can be acquired non-empirically (they are a priori but not innate), that are also non-contingent, i.e. concerned with necessity and impossibility, and that are not based solely on logical consequences of definitions, i.e. they are synthetic, not analytic: claims which I have been trying to explain and defend since my 1962 DPhil thesis, Sloman(1962).

For a while (from around 1971 until the Turing centenary, 2012) I thought it would be possible to defend Kant's claims by building a computer model of the required methods of mathematical discovery. But I now suspect that digital computers cannot replicate the relevant sub-neural biological mechanisms for spatial reasoning, which rely on chemical operations combining discreteness (the quantum bonds discussed by Schrödinger in What is life?) and continuity, insofar as molecular interactions can include structures moving together or apart, twisting, folding, etc.

The discussion below was originally triggered by thinking about what Schrödinger wrote about quantum mechanisms permitting discrete changes between relatively stable states in genetic molecules. It now includes some draft speculations about evolved mechanisms of spatial reasoning in humans and other animals, based on (currently unidentified) hybrid sub-neural chemical interactions that make use of both discrete and continuous changes, with important roles in ancient forms of mathematical discovery about geometric structures and processes (e.g. processes of construction and modification of diagrams).

I claim that the concept of computation (or information processing) needs to be extended beyond what can be done by Turing machines and equivalent digital computers. The extension should include reasoning mechanisms that operate on a mixture of discrete and continuous processes, since biological control mechanisms (some of which are discussed below) require that mixture. That includes the forms of reasoning about spatial structures and processes presented in Euclid's Elements, which also make use of spatial structures and processes to perform the reasoning. In effect, this presupposes a concept of computation far richer than the types of computation that can be performed on Turing machines and equivalent digital computers, rejecting claims about the universality of Turing machines. (Turing himself rejected such claims in 1936.)

It is possible that a closer reading of Schrödinger's later book Mind and Matter, included in Schrödinger(1967), will reveal useful contributions to this topic, though it is not yet clear whether he thought seriously about alternative physical implementations of mechanisms of mathematical discovery.


The Problem To Be Addressed

This investigation was originally motivated by an attempt to explain what enabled human brains to make ancient mathematical discoveries of the sorts reported in Euclid's Elements, which used to be a standard part of mathematical education, e.g. when I was at school in the 1950s, but is no longer taught, for bad educational and mathematical reasons.

While a graduate student, I discovered that Immanuel Kant (1781) had described features of such ancient mathematical discoveries that corresponded well with my personal experience of finding a proof or a new construction, whereas my philosopher friends had been taught that Kant had been refuted by the discovery (by Einstein and Eddington) that physical space is not Euclidean, and by the logicisation of Euclidean geometry by David Hilbert(1899). So I switched to philosophy in order to prove them wrong.

In my DPhil thesis, Sloman(1962), I defended Kant but could not provide a detailed explanation of how ancient mathematical brains worked. After learning about AI around 1969, I hoped to build a computer model demonstrating how ancient mathematical minds made non-empirical discoveries that made essential use of kinds of spatial reasoning that did not reduce to using logic to derive consequences from definitions.

However, by the Turing centenary in 2012, neither I nor anyone else had produced such a demonstration, so, partly inspired by Turing(1952), I began to explore the possibility of using forms of computation based on brain chemistry, since chemical machinery, unlike digital computers, combines continuous processes (motion through space, twisting, etc.) with discrete processes based on switching chemical bonds. Schrödinger(1944) pointed out that quantum physics could for the first time explain the surprising reliability with which particular features (e.g. a lip abnormality) are reproduced across multiple generations, thereby contributing more generally to the discovery of the role of DNA in biological reproduction.

Is it possible that such reproductive machinery and reasoning mechanisms in brains share some features of chemical information processing?

As far as I know this is not a question considered by Schrödinger, and it is not mentioned in What is life? (It may be addressed in the later book, Mind and Matter, which I have not yet studied closely.) But it is worth exploring, because there is no established explanation of either the spatial intelligence of ancient mathematicians who made extraordinary discoveries in geometry and topology, or the spatial intelligence of young children and other animals. In various publications the physicist Roger Penrose has also drawn attention to forms of spatial reasoning about spatial impossibilities and necessary connections that are not reducible to the use of logic and definitions to draw conclusions, but it is not clear to me what sorts of mechanisms he thinks can do this. E.g. see his video presentation at the Models of Consciousness conference in Oxford, in 2019, in which he ends by referring to sub-neuronal microtubules and the ideas of Stuart Hameroff, though I have not been able to understand how they think the microtubules are able to make discoveries about spatial impossibility or necessity.
That presentation, "Sir Roger Penrose - AI, Consciousness, Computation, and Physical Law" (September 2019, Oxford), is available here:
https://www.youtube.com/watch?v=3trGA68zapw


Some facts about spatial reasoning

Many non-human animals are good at spatial reasoning, as shown by the actions they do and do not perform, in many cases unmatched by current AI systems and robots.

Examples include nest-building birds -- e.g. weaver-birds, crows, magpies and other corvids -- that assemble twigs to form a semi-rigid structure supported by branches growing out of a tree-trunk.

------------------------------------------
Crows-nest
With thanks to Reju.kaipreth (Wikimedia)
Could you make a nest like that, using only one hand, or a hand and your mouth,
to manipulate the parts, if you were given all the materials required?
(Unlike the birds who have to find and fetch all the materials.)
Figure Crows-Nest
------------------------------------------

A kitten climbing down a highly flexible tree buffeted by strong gusts of wind has to deal with different challenges, including handling, in real time, information about a rapidly changing complex environment, as illustrated in the following video:

http://www.cs.bham.ac.uk/research/projects/cogaff/movies/vid/kitten-windblown-tree.mp4
This six month old kitten did not need to be trained to climb down from the wind-blown
tree, despite the wildly changing, probably unique, stream of sensory-motor patterns.
He had, however, previously experienced climbing much simpler static
structures, such as a clothes-drying frame, and other trees.
Video of Kitten In Tree
------------------------------------------

When I describe these animals as being good at spatial reasoning, I am referring only to "online" reasoning, namely taking in information about spatial structures, opportunities, or constraints, and using information about structures and relationships that are relevant, or potentially relevant, to their immediate needs, goals, or in some cases fears (e.g. being aware of the presence of a dangerous predator, or noticing an infant moving toward a dangerous drop).


Requirements for spatial reasoning mechanisms

The above examples involve whole organisms dealing with complex changing external environments. Long before such animals existed, however, there were also internal components of organisms interacting with and manipulating complex, changing, molecular structures, in processes of reproduction, growth, decomposing and distributing molecular structures in food, dealing with waste materials, detecting and responding to infected or damaged body parts, growing new parts, e.g. massive annual growth of leaves and production of flowers, seeds or fruit, and in many cases discarding products of those processes as winter approaches.

Some of the sub-microscopic molecular processing details are presented in Peter Hoffman's 2012 video lecture, with this summary:

"Below the calm, ordered exterior of a living organism lies microscopic chaos. Our cells are filled with molecular machines, which, like tiny ratchets, transform random motion into ordered activity, and create the "purpose" that is the hallmark of life. Tiny electrical motors turn electrical voltage into motion, nanoscale factories custom-build other molecular machines, and mechanical machines twist, untwist, separate and package strands of DNA. The cell is like a city-an unfathomably, complex collection of molecular workers creating something greater than themselves."

For a few seconds from 15:00 there is a short segment of a Japanese video of a living molecular stepping/walking machine at work, though unfortunately the commentary is slightly out of sync with the video.

Note:
I don't know why Hoffman used the phrase "microscopic chaos", since the whole point of his lecture, and much of his work, is that the microscopic (and sub-microscopic) goings-on in living organisms are the opposite of chaos: they are highly controlled, in some cases orchestrated (e.g. synchronised), processes performing essential roles in the growth, development, and use of body parts on many scales of organisation. His 2012 book, Hoffman(2012), is also highly relevant.

In 1944 Schrödinger could not have known about most of those processes and his book does not indicate that he had thought about such details. (I have not yet looked closely at his later book/lectures Schrödinger(1967).)

The requirements for mechanisms able to support such internal processes are very complex. They need to make use of information about physical types, locations, spatial relationships, relative velocities, and accelerations of physical/chemical structures of varying complexity within organisms, in biological processes of reproduction, growth, and control of many internal processes performing a wide variety of different biological functions, including growth on various scales, tissue repair, provision of energy and material resources, distribution of waste products, responding to invading organisms, and many more, as discussed in Sloman(2020).


The need for cross-level explanations

It might be thought that processes of control, and scientific explanations of all those processes, need to use only information about the fundamental particles involved and their equations of motion, as happens in physical studies of relations between masses, velocities and impacts of molecules in a gas -- using relatively minor extensions of Newtonian physics, e.g. to include entropy.

However, Newtonian theories and formalisms cannot cope with the complexities of biological processes, or even the processes of formation of rocks, planets, seas, etc., because Newton did not allow for any kind of bonding of particles to form new particles or larger persistent rigid or flexible structures. Newtonian physics cannot account for the formation of planets, rocks, new chemical elements or molecules, let alone living organisms. (Could Newton's recognition of these limitations be part of the explanation for his interest in Alchemy?)

I suggest that part of that explanatory gap has nothing to do with the explanations being Newtonian. There is a deeper, unobvious point that few seem to have noticed, perhaps because philosophers of mathematics (especially Kant-inspired philosophers of mathematics) and developmental biochemists do not normally communicate about their research problems.

In particular, as far as I know, neurodevelopmental theorists who are deeply interested in mechanisms of gene expression, including gene expression in developing brains, don't typically wonder how brain mechanisms can explain abilities to detect impossibility or, equivalently, necessity. Perhaps some of those who noticed the problem believed the widely shared but mistaken view that Kant's formulation of the problem had been demolished by Hilbert's demonstration (1899) that logic-based reasoning suffices for all geometric discoveries. What Hilbert showed is that an important subset of propositions expressing the results of the ancient discoveries can be systematically organised in a tree of purely logical derivations from logical formulations of Euclid's axioms (with some gaps filled, e.g. information about the "between" relation).

In contrast, human engineers, architects, and maintainers of complex machines, including those who designed, constructed (and presumably maintained) huge ancient temples and monuments, have developed useful ways of thinking about complex structures and processes by identifying important persistent parts and their relationships, instead of having to do all their reasoning in terms of the molecules, or sub-molecular particles, involved. (Some of the parts persist only temporarily: e.g. a bomb designed to blow up a bridge. But those cases are exceptions.)

In organisms there are many processes, on many scales, concerned with reproduction, growth, development, repair, and control or use of parts, or coordinated collections of parts. The processes need to use information about those parts, and the processes in which they are involved, in order to select between alternative processes or actions available at any time: e.g. which muscles should be contracted, which tissues need repair, which waste products need to be disposed of, which body parts need more oxygen, which peristaltic contractions will produce the required motions of gut contents, etc.

Those tasks would be completely intractable if all the decisions were taken on the basis of sensing and modifying coordinates of the fundamental particles of which body parts are composed. Long before there were any human engineers or scientists thinking about such matters, evolution must have produced biological control mechanisms able to use information about states, properties and relationships of relatively complex (multi-atomic), persistent (though not all rigid) parts of the organisms.

The control processes would then be concerned with changing key "macro" relationships rather than operating directly and explicitly on all the individual sub-atomic particles. At first sight the need to control the control mechanisms threatens an infinite regress; but the regress is blocked because, as the controlled parts become more complex, each part needs only a small number of control parameters, so the amount of control information required does not keep growing with the number of particles involved.
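To make the point about parameter reduction vivid, here is a minimal sketch (Python, purely illustrative; the names and numbers are invented, and nothing here is a claim about how biological control is actually implemented). A rigid or semi-rigid macro-part containing an astronomical number of particles can be repositioned by setting a handful of pose parameters, so a higher-level controller never has to mention the particles at all:

------------------------------------------
from dataclasses import dataclass

@dataclass
class RigidPart:
    """A macro-level body part treated as a single controllable unit."""
    name: str
    num_particles: int                     # e.g. ~10**24 for a limb segment
    # The controllable state is just a pose: 3 position + 3 orientation values.
    pose: tuple = (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

def control_parameters(part: RigidPart) -> int:
    """How many values a controller must set to reposition this part."""
    return len(part.pose)                  # 6, however many particles it contains

def particle_parameters(part: RigidPart) -> int:
    """How many values a particle-level description would require."""
    return 3 * part.num_particles          # utterly intractable to control directly

limb = RigidPart("forelimb segment", num_particles=10**24)
print(control_parameters(limb))            # 6
print(particle_parameters(limb))           # 3 * 10**24
------------------------------------------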

For such control to be possible, the controlling systems must be able to acquire and represent information about the important enduring, interacting parts of organisms (including sub-microscopic parts), rather than being restricted to using and manipulating information about the most basic physical particles involved.

A crucial point that I don't think Schrödinger discussed in 1944 is that these processes involve the use of information, e.g. using information about the current situation and the alternative available possibilities in deciding which possibilities to realise. As I'll argue below, extending ideas in Sloman(2020), such "informed control decisions" can be taken at many levels in a complex organism, and at many different stages of development after an egg has been fertilized. That's why a genome (e.g. implemented as a DNA molecule) does not need to specify every detail of an organism's development. Clearly much genome-controlled development must occur before a fully formed human brain is available to start learning and taking decisions. Without more fundamental (chemistry-based) informed control decisions, brains could not be built. (Is that what motivated Alan Turing's comment on chemistry in 1950 and the explorations reported in his 1952 paper on chemistry-based development?)

RATCHET MECHANISMS

Examples of human reasoning about necessity/impossibility

Anyone who has studied Euclidean geometry, with personal experience of finding proofs and constructions, has had first-hand experience of the human abilities that Immanuel Kant discussed in 1781, which I claim are not explained by any mechanisms developed so far by AI researchers or theories produced by neuroscientists or psychologists.

In particular, any theory that assumes human mathematical discoveries are based on collecting many examples and then deriving probabilities from statistical evidence cannot explain discoveries that are about what is necessarily the case, e.g. "Every simple closed planar polygon with N sides also has N vertices", or what is impossible, for example: "It is impossible for two distinct circles in the same plane to have more than two boundary points in common", and "It is impossible for three planar surfaces to enclose a finite volume".
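The contrast can be made concrete with a toy sketch (Python, purely illustrative; the polygon generator is a crude stand-in I have invented). A program can confirm, for any finite sample of simple closed polygons, that the number of sides equals the number of vertices, but however many samples pass, the checking establishes no necessity; that requires seeing why closing the boundary forces the two counts to coincide:

------------------------------------------
import math, random

def random_convex_polygon(n):
    """Crude stand-in: n points on a circle, taken in angular order,
    form a simple closed (convex) polygon."""
    angles = sorted(random.uniform(0, 2 * math.pi) for _ in range(n))
    return [(math.cos(a), math.sin(a)) for a in angles]

def sides(vertices):
    """The polygon's sides: each vertex joined to the next, the last to the first."""
    n = len(vertices)
    return [(vertices[i], vertices[(i + 1) % n]) for i in range(n)]

for _ in range(1000):
    poly = random_convex_polygon(random.randint(3, 12))
    assert len(sides(poly)) == len(poly)
    # The assertion never fails on samples, but that is empirical evidence,
    # not a demonstration that a counterexample is impossible.
------------------------------------------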

Many additional examples can be found in this document, and others linked in it:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

A related web site presents examples of spatial reasoning ("Toddler theorems" discovered and used) by young children, including pre-verbal children, as shown by their actions:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html

Moreover, the forms of reasoning used by the ancient mathematicians who made the discoveries reported in Euclid's Elements did not start from axioms expressed in a logical formalism, using methods of logical deduction to derive consequences. They used reasoning about what is possible, impossible, or necessarily the case, based on inspection and manipulation of spatial structures, as did Mary Pardoe when she came up with her non-standard proof of the triangle sum theorem, described in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-sum.html

Turing on mathematical intuition vs mathematical ingenuity

The fact that the human ability to make such discoveries using spatial, not logical, reasoning has not yet been replicated by AI reasoners, despite the ability of some of them to make discoveries in a space of logical proofs derived from an explicit set of axioms for Euclidean geometry (e.g. Hilbert's axiomatisation of Euclid), seems to justify Alan Turing's claim in 1939 that there is a difference between mathematical intuition (including the ability to reason about spatial structures and processes by using spatial representations of those structures and processes) and mathematical ingenuity (the ability to manipulate symbols in a space of logic-based proofs), and that digital computers are capable only of mathematical ingenuity. For a discussion of that claim and its relationship to Immanuel Kant's philosophy of mathematics, see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html (also PDF).

Two roles for biological evolution

What has all this to do with Schrödinger's discussion in 1944? I'll try to show that there are two important (but unobvious!) consequences of the previously mentioned features of quantum mechanisms that he did not discuss (though I don't know whether he had ever thought about them).

The first consequence is that the presence of chemical bonds linking physical particles can produce "high level" constrained motion patterns that (I suggest) cannot be described in the basic formalism specifying possible configurations of individual particles and their possible motions relative to other particles, without using intractably large disjunctions of combinations of coordinates to cover all cases -- e.g. all the cases of grasping a physical object by bringing finger and thumb closer together with the object between them, a class of processes which I have just shown can be described in English using 16 words. It seems obvious that young pre-verbal humans can think about such processes, as can many intelligent non-human animals that can manipulate objects, e.g. peeling a banana, or making a nest out of twigs or woven leaves. I'll return to this point about complexity reduction, and some of its consequences, later in this document.
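As a toy illustration of that complexity reduction (Python, purely illustrative; the names are invented, and no claim is made about how brains represent such processes): the whole class of grasping processes can be captured by a schema in which only one relationship varies, namely the finger-thumb gap, together with the condition that the object stays between the digits, whereas a particle-level description would need a different enormous set of coordinate trajectories for every instance:

------------------------------------------
def is_grasping(gap_over_time, object_between_digits):
    """High-level schema for the whole class of grasping processes:
    the finger-thumb gap shrinks while the object remains between them."""
    shrinking = all(later < earlier
                    for earlier, later in zip(gap_over_time, gap_over_time[1:]))
    return object_between_digits and shrinking

# Two very different particle-level episodes, one high-level description:
print(is_grasping([0.08, 0.05, 0.03, 0.02], object_between_digits=True))  # True
print(is_grasping([0.03, 0.04, 0.05], object_between_digits=True))        # False
------------------------------------------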

The second consequence is that this poses both challenges and opportunities for biological evolution, insofar as many organisms perceive (or sense) and act in a space that includes not only interacting basic particles but also interacting linked assemblages of particles constrained by quantum bonds. Reasoning about what can and cannot happen in such an assemblage, or about the consequences of various changed relationships, is an intractable task if conducted in terms of the space-time coordinates of all the individual molecules, atoms, or sub-atomic particles involved. But there is often a different space in which the only things that vary are relationships between relatively rigid substructures. The number of such changing relationships is tiny compared with the number of individual particles, so although reasoning about consequences of changes at the particle level is intractable, reasoning about interactions between a small number of parts, each composed of millions of fundamental particles, is not. For example, consider what will happen if a sphere made of plasticine is placed between the jaws of a pair of pliers whose handles are then moved together. Answering this question, either for a particular configuration of plasticine and pliers or in the general form posed above, would be completely intractable if the reasoning had to be done in terms of all the sub-atomic particles making up the plasticine, the pliers, and perhaps the hand (or machine) squeezing the handles together. But many humans (excluding very young individuals and humans with certain kinds of brain damage) can reason at a different level: the two main movable parts of the pliers are rigid, therefore if the handles are moved together the jaws must also move together, and therefore the portion of plasticine between the jaws will be squashed (flattened). That is a general description covering a huge variety of cases differing in their precise details (e.g. size and shape of the original lump, amount of compression applied by the jaws, numbers of molecules of various sorts involved, etc.).
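Here is a minimal sketch of that level-shifted reasoning (Python, purely illustrative; the names, dimensions and the particular gap formula are all invented). The pliers are modelled as two rigid levers sharing a pivot, so closing the handles entails closing the jaws, and a deformable lump between the jaws cannot remain thicker than the jaw gap. Nothing in the sketch mentions the particles composing the pliers or the plasticine:

------------------------------------------
import math
from dataclasses import dataclass

@dataclass
class Pliers:
    """Two rigid levers sharing a pivot: the handle angle fixes the jaw gap."""
    handle_angle: float                    # radians between the handles
    jaw_length: float = 0.03               # metres (rough, fixed by rigidity)

    def jaw_gap(self) -> float:
        # Rigidity means the gap shrinks as the handle angle shrinks;
        # the exact formula is irrelevant to the qualitative inference.
        return 2 * self.jaw_length * math.sin(self.handle_angle / 2)

@dataclass
class DeformableLump:
    thickness: float                       # metres

def squeeze(pliers: Pliers, lump: DeformableLump, new_angle: float) -> DeformableLump:
    """Close the handles; the lump cannot stay thicker than the resulting gap."""
    pliers.handle_angle = new_angle
    return DeformableLump(thickness=min(lump.thickness, pliers.jaw_gap()))

pliers = Pliers(handle_angle=1.0)
ball = DeformableLump(thickness=0.02)
flattened = squeeze(pliers, ball, new_angle=0.2)
print(flattened.thickness <= pliers.jaw_gap())   # True: follows from rigidity alone
------------------------------------------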

Such organisms will benefit from control mechanisms that use more sophisticated forms of representation and reasoning than those that suffice for representing and reasoning about motions of individual particles, or of collections of particles not bonded to form larger structures.

Moreover if different sorts of linked assemblages (e.g. different edible matter, including other organisms) are found in different environments then genetically specified control mechanisms may need to be incompletely specified, i.e. parametrised, with gaps to be filled at relatively late stages of development in ways that depend on the individual's current environment and previously acquired information (as clearly happens during layers of linguistic development in very different linguistic environments, under the control of a common human genome).

In such cases the gene-expression mechanisms may have to produce partially unspecified (abstract) designs, that are instantiated within each organism using information acquired previously by the developing individual organism. For example, genetic mechanisms that allow each normal human to produce and understand a wide variety of complete (spoken or written) linguistic communications (like this sentence) must use products of multiple earlier stages of learning and development that allow use of smaller linguistic fragments that can vary widely across languages, including minimal linguistic sounds, syllables, words, word-modifiers (tense, number, mood), phrases, etc. Jackie Chappell and I proposed that similarly layered environmentally influenced developmental processes are required for a wide variety of forms of development, using what we called a "meta-configured" genome (MCG), described in Chappell and Sloman (2007) and later papers.

So the development of complex components and control mechanisms whose details make use of information acquired at earlier developmental stages is not only a challenge for biological evolution, but also an opportunity, insofar as organisms whose information-processing mechanisms include abilities to represent and reason about larger structures, their motions and their interactions will derive important benefits from those multi-layered abilities, as illustrated by the variety of contents and uses of linguistic communications. Our claim is that biological mechanisms enabling this sort of layered individual development must have evolved for many aspects of complex development, in many species, long before such mechanisms were used for human languages.

In the case of human linguistic development a common meta-configured genome supports individual development in a vast variety of different linguistic communities using languages that (partially) share types of layers (e.g. phonemes, morphemes, lexemes, syntactic forms, pragmatic functions, etc).

A meta-configured genome drives processes of learning and creation at different levels of abstraction, where higher levels are instantiated in ways that depend on how lower levels were instantiated.
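The layered-instantiation idea can be sketched abstractly as follows (Python, a toy illustration of the meta-configured genome idea, not a formal specification of the theory; all names and data are invented). The "genome" fixes a sequence of schemas, and each schema's gaps are filled using the current environment together with the products of earlier layers:

------------------------------------------
# Toy sketch of "meta-configured" development: each layer is an abstract schema
# whose parameters are filled in using (a) the environment and (b) the
# already-instantiated products of earlier layers. Purely illustrative.

def phoneme_layer(environment, _products):
    # Earliest layer: which sound units matter is learnt from the environment.
    return {"phonemes": sorted(set(environment["heard_sounds"]))}

def lexical_layer(environment, products):
    # Words are built only from phonemes already acquired in the previous layer.
    phonemes = set(products["phonemes"])
    return {"words": [w for w in environment["heard_words"]
                      if set(w) <= phonemes]}

def syntactic_layer(_environment, products):
    # Phrase patterns are abstracted over the acquired lexicon.
    return {"two_word_phrases": [(a, b) for a in products["words"]
                                 for b in products["words"]][:5]}

# The "genome" fixes the sequence of layers and how each uses earlier results,
# but not the specific phonemes, words or phrases: those depend on where the
# individual happens to develop.
GENOME = [phoneme_layer, lexical_layer, syntactic_layer]

def develop(genome, environment):
    products = {}
    for layer in genome:
        products.update(layer(environment, products))
    return products

english_like = {"heard_sounds": list("abdgikmnostu"),
                "heard_words": ["dog", "bit", "man", "xylo"]}
print(develop(GENOME, english_like)["words"])   # 'xylo' excluded: x, y, l not acquired
------------------------------------------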

So there are not different genomes for Latin speakers, French speakers, Swahili speakers, Xhosa speakers, etc. Similar remarks apply to the genetic mechanisms that support acquisition of sign languages. I've argued elsewhere that internal languages must have evolved before both spoken and sign languages, e.g. internal languages for encoding sensory/perceptual contents, action-control processes, intention formation, intention modification, etc.

For example, a nest-building bird doesn't merely react to the current environment but forms intentions that produce actions that (normally) lead to finding, fetching, and using appropriate materials at different stages of construction -- where the materials, stages, and products can vary enormously across species.

All this suggests that many animal species have evolved powerful abilities to represent and reason about structures and processes involving bonded assemblages, instead of having to do all their reasoning at the level of fundamental particles and their interactions. There are organisms that make use of much smaller, simpler portions of matter, for example the millions of bacteria in the gut of each human, without which humans could not digest and make use of food. The information they use to control their actions, and the mechanisms of control are very much simpler than those of the hosts, but they still illustrate the point that life essentially involves abilities to acquire and use information in selecting and performing actions, on many scales, including information used in producing and using various kinds of food crops, information used in hunting for and preparing (e.g. tearing open) animals that are eaten, and information used in digestive processes and a vast array of other processes that make use of products of digestion -- including waste-disposal processes.

The difference this makes is unobvious but of great biological importance.


Isomerism

In the 1944 book Schrödinger points out that the same group of atoms can unite in more than one way to form a molecule. Such molecules are called isomeric ('consisting of the same parts'). This is illustrated by two isomers of propyl alcohol, shown below in Fig. Isomers.


[Image: the two isomers of propyl alcohol]

Two molecules with the same types of atoms connected differently
Each may be stable in the absence of a disruptive external influence
Figure Isomers
------------------------------------------

As shown above, the two isomers of propyl alcohol differ only in whether the oxygen atom (the blue "O" in the figure) is bound to the central carbon atom or to an end carbon atom. Each state is stable because all its neighbouring states have more energy, so a change to a neighbouring state cannot occur without an external source of energy. If a sufficiently energetic impulse is received, it can push the molecule over the energy "hump" and into another stable state. (In some cases states can be switched by catalysts, without requiring so much external energy.) This example is used in section 39 of the 1944 book as the basis of several observations relevant to biological evolution.

The figure Isomers above is copied from the book. The two molecules have the same constituents, but because the oxygen atom occupies different positions in the two molecules they have very different physical and chemical properties. And neither state can easily be transformed into the other, because the transition requires the molecule to pass through intermediate configurations which have significantly more energy than either of them. Schrödinger writes (in 1944):

The remarkable fact is that both molecules are perfectly stable, both behave as though they were 'lowest states'. There are no spontaneous transitions from either state towards the other.

The transition from one to the other can only take place over intermediate configurations which have a greater energy than either of them: the oxygen has to be extracted from one position and has to be inserted into the other. There does not seem to be a way of doing that without passing through configurations of considerably higher energy.

The stability of such configurations explains why they are useful for encoding genetic information that should not easily be perturbed. However, for reproduction it is also necessary that the structures can reliably be copied, a point Schrödinger apparently ignored in 1944. Genetic copying mechanisms began to be understood only after the discovery of the double helix structure of DNA, several years later.
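The strength of that stability can be made quantitative. Schrödinger's argument in the 1944 book relies on a thermal-activation relation of essentially the following form (my paraphrase; the numerical illustrations below are my own calculations from the formula, not figures quoted from the book):

    t = \tau \, e^{W/kT}

where t is the expected waiting time for a thermal fluctuation to lift the molecule over an energy barrier of height W at absolute temperature T, k is Boltzmann's constant, and \tau is a very short characteristic vibration time, of the order of 10^{-13} to 10^{-14} seconds. Because the dependence on W/kT is exponential, a barrier of about 30 kT gives an expected lifetime of roughly a tenth of a second (taking \tau = 10^{-14} s), whereas doubling the barrier to about 60 kT gives a lifetime of the order of 10^{12} seconds, i.e. tens of thousands of years -- ample for genetic material that must remain unchanged across many generations.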

Evolved construction-kits


REFERENCES AND LINKS
(Not yet sorted)


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham