The Turing-inspired Meta-Morphogenesis (M-M) project asks:
How can a cloud of dust give birth to a planet
full of living things as diverse as
life on Earth?
Part of the answer:
By producing layers of new derived construction kits
based on the fundamental construction kit: Physics/Chemistry.
(Including quantum mechanisms.)
Additional topics are included or linked at the main M-M web page:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
This paper is
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/quantum-evolution.html
NOTE: some of the methodology being developed here is presented in
a separate document on "Explanations of possibilities",
defending Chapter 2 of
The Computer Revolution in
Philosophy (1978) against criticisms made by reviewers:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/explaining-possibility.html
A partial index of discussion notes here is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
One type of evolutionary change seems to be the production of new construction kits, used in producing individuals of more recently evolved species. Different construction kits are used in different species. For instance, some plants need construction kits able to produce very large, very strong, vertical structures, e.g. giant redwood trees, able to cope not only with downward gravitational forces but also with the twisting and bending forces produced by wind. Quite different construction kits are needed to produce organisms that depend on large networks of fibres transporting chemicals and supported by the surrounding soil: these have no need for the strength of a tree trunk.
Construction kits that evolved at different times in different lineages are also needed to produce a wide variety of information processing mechanisms, from the simplest control mechanisms in the earliest organisms to those of highly intelligent vertebrates, including crows, orangutans, elephants and humans.
Another document, [Sloman 2015] (work in progress), introduces a (still evolving) theory based on the hypothesis that, starting with the initial "Fundamental Construction Kit" (FCK) provided by "low level" physics and chemistry, evolution directly and indirectly produces and uses increasingly complex concrete (physical/chemical) construction kits, abstract construction kits and hybrid construction kits with concrete and abstract components. As a result the FCK is supplemented with increasingly complex and powerful "Derived Construction Kits" (DCKs) produced by evolution, development, learning and culture.
Humans have also produced and used all three types of derived construction kit, especially in the last century.
Concrete construction kits (e.g. Lego, Meccano, plasticine, sand) use physical components and relationships. Others (e.g. grammars, proof systems and programming languages) are abstract construction kits, producing abstract entities, e.g. sentences, proofs, and new abstract construction kits. Mixtures of the two are hybrid kits, illustrated by board games that combine static and movable physical parts with abstract sets of rules specifying permitted changes.
Concrete construction kits have no rules, merely intrinsic sets of possibilities and constraints on possibilities, though intelligent animals may discover those possibilities and constraints and use an abstract construction kit to build a theory, make inferences, etc. There are also meta-construction kits: able to create, modify or combine construction kits.
These ideas are introduced and elaborated (partially) in [Sloman 2015] (work in progress). A part of that paper was concerned with the possible role of quantum mechanisms in construction kits for information processing. As it was very speculative, and grew rather large, I moved the discussion into this new paper, which will make use of some of the ideas in the construction-kit paper in an exploration of possible links between quantum mechanisms, concurrency and intelligence.
The two papers (and possibly later spin-offs) introduce a large research programme that I hope will turn out to be "progressive", in the sense of Imre Lakatos (1980) (i.e. not what Lakatos called "a degenerating research programme").
There are two main ideas that may or may not turn out to have deep connections with Quantum Mechanics:
First, there are different sorts of causation involved in the processing of information, and some of them seem to be capable of being simulated (to some level of approximation) on normal computers without necessarily being replicated in the simulation -- e.g. because the simulations do not reproduce exactly the same causal relationships, including the same true counterfactual conditional statements about what could have happened, and what would have happened if various things had happened. (I have made this point about the difference between true concurrency and simulated concurrency in several previous publications, listed below.)
It may be that some form of quantum entanglement provides the appropriate kind of causal connection in biological construction kits (and perhaps also future AI systems).
Second, the causal differences between processes that truly run in parallel and processes in which parallelism is simulated on a very fast serial computer may be important for animal intelligence. It may, or may not, be possible to remove these differences on a quantum-based computer. This paper makes no strong assertions about the actual role of quantum mechanisms. The aim is merely to raise some questions, though not the questions about consciousness and quantum mechanics raised by Hameroff and Penrose in many publications over the last few decades, most recently in (2014), summarised below.
Some of the causal differences between real and simulated parallelism were discussed in earlier papers, e.g. Sloman (1974), Sloman (1981), Sloman (1986b, on Searle), Sloman (1992), Sloman (1993), and others listed below. My arguments are completely different from those used by Searle against what I have called a "weak strong" AI theory (see Sloman (1986b)), though I suspect Searle may have been trying to make similar points, while lacking the required experience of design, implementation, testing and debugging of working systems (like most philosophers, psychologists and neuroscientists -- though things are beginning to change slowly).
Spatial embedding of products allows new construction kits to be formed by combining two or more concrete kits. In some cases this will require modification of a kit, e.g. supporting combinations of Lego and Meccano by adding new pieces with Lego studs or holes alongside Meccano-sized screw holes. In other cases mere spatial proximity and contact suffice, e.g. when one construction kit is used to build a platform and another to assemble a house resting on the platform. In organisms, products of different construction kits may use complex mixtures of juxtaposition and adaptation. Evidence presented by Seth Lloyd and others (referenced below) suggests that some organisms make essential use of non-local quantum effects in complex mechanisms made of multiple interacting components. In these cases spatially separated entities can interact in ways that solve hard computational or control problems. The ideas about possible biological uses of quantum mechanisms presented below start from a set of design problems I have encountered in thinking about and implementing AI models, including AI vision systems and multi-layered control mechanisms.
To avoid misunderstanding I want to distance my arguments from claims about consciousness made in Penrose [1994]. Penrose, an outstanding mathematician and theoretical physicist, attempted to show how features of quantum physics explain obscure features of human consciousness, especially mathematical consciousness. Several other scientists have made related claims, including Stuart Hameroff, Henry Stapp, and many more.
Very often those who make claims about human consciousness, and especially human mathematical abilities, ignore the intermediate products of biological evolution on which mental functions in many animals rely. Human mathematics, at least the ancient mathematics done before the advent of modern algebra and logic, such as the ancient discoveries in geometry, topology and arithmetic recorded in Euclid's Elements, must have built on previously evolved animal abilities, for instance abilities to see various types of affordance, including those discussed in [Gibson 1979]. These older biological mechanisms are likely to form part of the explanation for the modern forms of mathematical reasoning about diagrams and spatial processes discussed by Penrose, and need to be taken far more seriously than his claims about Gödel's theorem. Compare [Sloman 1971].
In particular, it seems unlikely that there are very abstract human mathematical abilities that somehow grow directly out of quantum mechanical aspects of the FCK, without depending on the layers of perceptual, planning, and reasoning competences produced by billions of years of evolution.
I'll offer a different possible role for quantum mechanisms below.
There are some fairly uncontroversial facts about the relevance of quantum mechanisms to biology that have been known for decades. Quantum mechanics added important constraints to 19th century chemistry, including both the possibility of highly stable structures (e.g. biological molecules with structures that withstand thermal buffeting, as required for genetic materials such as DNA) and also the possibility of chemical locks and keys that can rapidly and precisely create and disassemble chemical structures in catalytic processes. Both the stability and the precise control are essential for life as we know it, including forms of information-processing produced by evolution (mostly not yet charted).
Research in fundamental physics is a search for the construction kit that has the generative power to accommodate all the possible forms of matter, structure, process, and causation that exist in our universe. However, physicists generally seek only to ensure that their construction kits are capable of accounting for phenomena observed in the physical sciences. Normally they do not examine the features of living matter, or the processes of evolution, development and learning found in living organisms, and check that their fundamental theories can account for those features too. There are notable exceptions, mentioned above, such as Schrödinger and Penrose.
Quantum mechanisms were mentioned at various points in a separate (still evolving) document on evolved construction kits [Sloman 2015], including their role in catalysis and the long term stability of chemical structures important for life, including DNA, emphasised by the physicist Philip Morrison in lectures broadcast on BBC television several decades ago. The topic had earlier been discussed in some depth in Schrödinger (1944), emphasising the role of quantum mechanisms in explaining multi-stable molecular structures.
Various philosophers, biologists, neuroscientists and physicists have made further claims about links between quantum mechanisms and biology, consciousness, free will, non-computability, speed of processing, and other matters. Some of these claims are dubious and have been criticised by others.
But recent discoveries indicate that some biological mechanisms use quantum-mechanical features of the FCK that we do not yet fully understand, providing forms of information-processing that are very different from what current computers do.
E.g. a presentation by Seth Lloyd summarises quantum phenomena used in deep-sea photosynthesis, avian navigation, and odour classification.[6] These examples may turn out to be the tip of an iceberg of quantum-based information-processing mechanisms important for biology.
I don't think there is anything to be gained by attempting to link quantum mechanics, or any other aspect of physics, directly to free will or consciousness, for reasons I have explained elsewhere:
In the context of such detailed computational explanations of phenomena of consciousness, the noun "consciousness" is best thought of as having a meaning that is derived from the adjective "conscious", as in "X is conscious of Y" where the meaning of the adjective is highly polymorphic since what is said in a sentence of that form depends on what X and Y are. E.g. there are huge differences between a microbe being conscious of contact with something noxious and a human being conscious of his growing unpopularity in the community.
This can be compared with the polymorphism of "efficient" in "X is an efficient Y", or "X is efficient at/for Y". An efficient method of solving an equation is very different from an efficient machine for mowing a lawn, and both are different from an efficient deep sea drilling mechanism. This phenomenon is directly supported by many programming languages, namely those that support parametric polymorphism. Long before programming language designers discovered the need for this, similar phenomena had existed in natural languages. There is some evidence that Gilbert Ryle at least partly understood this sort of thing when he wrote The Concept of Mind in 1949. The analysis of "better", "ought" and related concepts in Sloman (1969, 1970) used similar ideas.
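As an illustrative aside (mine, not from the original text), the schema behind "X is efficient at/for Y" can be sketched using parametric polymorphism in Python. The function name and cost functions are invented for the example: one generic definition, instantiated differently for each type of thing being assessed.

```python
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")  # the "X" slot: equation solvers, mowers, drills, ...

def most_efficient(candidates: Sequence[T], cost: Callable[[T], float]) -> T:
    """Return the candidate with the lowest cost.

    The schema is the same for every type T; what "efficient" amounts
    to is fixed only when a domain-specific cost function is supplied.
    """
    return min(candidates, key=cost)

# E.g. for equation-solving methods cost might be steps to converge;
# for lawn mowers, fuel used per square metre. Same schema, different
# content: a rough analogue of the polymorphism of "efficient".
```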
I am not aware of any wide-ranging survey of types of consciousness parametrised by what X is and what Y is. I am particularly interested in varieties of mathematical consciousness: cases where someone is aware of and can think about possible variations in a structure, and cases where such a person notices limits on what is possible within such variations. Some examples of consciousness of properties of triangles relevant to theorems in Euclidean geometry, and of consciousness of properties of curves on the surface of a torus, are presented in the documents listed in note [21] below.
Added 4 Nov 2018
I recently learnt that in 1938 Alan Turing had distinguished mathematical
intuition from mathematical ingenuity, suggesting that computers
were capable only of the latter, but without saying anything about mechanisms
underlying the former (intuition). A discussion of that distinction was added
here in December 2018.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html
(also available as PDF)
I once encountered a young child who was pleasantly surprised to discover that counting the fingers on one of my hands gave the same result when the order of counting was reversed. He was not yet conscious of features of the counting process that made it impossible for the two results to differ (if the counting includes each finger, without any repetition).
Contrary to a common philosophical opinion, a child who has understood what "or" means, and can use it, for example, in answering questions, may not have noticed that if Mary is in the hall or in the pantry, but is not in the hall, she must be in the pantry (disjunctive syllogism).
This realisation can be triggered by the experience of seeing Mary go into a corridor that leads only to the kitchen and the pantry, then seeking her in the kitchen. When Mary is found not to be in the kitchen, a child who has not made the discovery may then look to see whether she is in the pantry, whereas a child who is conscious of the mathematical structure of the problem may realise that there is already enough information to report that Mary is in the pantry, because all other locations have been ruled out. (There seem to be philosophy students who learn that rules of logic have to be taught and memorised: they are not given the experience of discovery, unfortunately.)
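The inference the child makes can be written out formally. A minimal rendering in Lean (my illustration, not part of the original text):

```lean
-- Disjunctive syllogism: from (P ∨ Q) and ¬P, infer Q.
-- P: "Mary is in the kitchen"; Q: "Mary is in the pantry".
example (P Q : Prop) (h : P ∨ Q) (hnp : ¬P) : Q :=
  match h with
  | Or.inl p => absurd p hnp  -- "in the kitchen" contradicts what was seen
  | Or.inr q => q             -- so she must be in the pantry
```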
This suggests that there can no more be a unique physical basis for consciousness than there can be a unique physical basis for efficiency, or betterness. But there may be particular physical mechanisms at the core of different sorts of efficiency, and of different sorts of consciousness. A failure to deal with the possibility of polymorphism can vitiate important ideas, e.g. those presented by Hameroff and Penrose over many years, most recently in (2014). In particular, I make no use of meta-mathematical incompleteness theorems.
Although these ideas provide a framework for demonstrating that many varieties of consciousness can be produced by evolution, there remain many issues about the detailed mechanisms, and especially their power and speed. It may be that some of the ideas of Hameroff and Penrose about the role of Quantum mechanisms will turn out to be relevant, though I think they will need to be more deeply integrated with ideas about forms of computation, e.g. in constraint-propagation systems.
Many have discussed two "strange" features of quantum phenomena: Entanglement/Non-locality and the process of "collapse" from a state in which alternative possibilities coexist to a state in which only one of the alternatives remains (Schrödinger's cat, explained in http://en.wikipedia.org/wiki/Schroedinger's_cat). Since the alternative possibilities can involve entities separated in space (even by huge distances) the collapse process seems to involve instantaneous causation faster than the speed of light. One of the commonly expressed ideas is that for the superposition to collapse into a determinate state (e.g. the cat is alive or the cat is dead) an observation must be made by a conscious observer. This suggestion is usually thought to be absurd (e.g. by Einstein). Another alternative, the "objective collapse" or "objective reduction" (OR) theory http://en.wikipedia.org/wiki/Objective_collapse_theory, is that physical interactions (e.g. interactions involving gravitational forces) can produce the collapse without any conscious observer, as proposed by Penrose and others. A third view, suggested by Hugh Everett, is that there is no collapse, and the alternative possibilities continue to exist in parallel in many branching universes as explained in http://en.wikipedia.org/wiki/Many-worlds_interpretation.
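To make the "superposition then collapse" idea concrete, here is a toy classical simulation (my addition; the state and the amplitudes are invented for illustration). It reproduces the measurement statistics of a two-state superposition, though, importantly, not the non-local causal structure that entanglement involves, which is part of what is at issue in this paper.

```python
import random

# Toy two-state system: |psi> = a|alive> + b|dead>, with a^2 + b^2 = 1.
amplitudes = {"alive": 0.8 ** 0.5, "dead": 0.2 ** 0.5}

def measure(amps):
    """Simulate one measurement: the superposition 'collapses' to a
    single basis state, chosen with probability |amplitude|^2."""
    outcomes = list(amps)
    weights = [amps[o] ** 2 for o in outcomes]
    return random.choices(outcomes, weights=weights, k=1)[0]

counts = {"alive": 0, "dead": 0}
for _ in range(10_000):
    counts[measure(amplitudes)] += 1
print(counts)  # roughly 8000 "alive", 2000 "dead"
```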
A variant of the objective collapse mechanism that seems more likely to be of use in understanding brains, for reasons that I'll try to explain below, could be called a "distributed continuous causation" mechanism.
Something like this seems to be needed for complex forms of perception in which multiple percepts are formed concurrently at different levels of abstraction, as illustrated crudely in the following figure, showing the functioning of the POPEYE vision system, from Chapter 9 of Sloman [1978].
This configuration could be the result of an interpretation process triggered by low-level visual configurations (e.g. in (a)) that form groups concurrently, with the groups activating various previously learnt types of image and scene fragment, and with multiple constraints allowing some fragments to be grouped together while others are excluded. Since 1978 such systems have become far more sophisticated, using many new techniques for constraint propagation.
This is extremely vague, and I cannot offer a mathematical formulation of the proposal, but the idea has some important consequences despite its vagueness. In particular it could be a mechanism allowing forms of concurrent interacting computation that would not be possible on a normal computer, and there are important differences between concurrent processes and serial simulations of concurrent processes that are often ignored.
It is possible that evolution produced mechanisms that allowed animals to learn things about the environment which caused large numbers of information structures relevant to the environment to become available, capable of being triggered very rapidly by new perceptual cues, by other available structures, by current intentions, and by recent history. These mechanisms may also be relevant to processes of rapid reading and comprehension of text, sight-reading of music, such as a piano score, and perception of mathematical structures and relationships in spatial configurations and processes. It may be that neural linkages are too simple and too restrictive to allow the rapid assembly of multi-layered percepts.
In that case, the quantum-mechanical mechanisms that seem to allow non-local forms of interaction, proposed by researchers such as Hameroff and Penrose to explain very ill-defined notions of consciousness, may have a more specific explanatory role in the forms of constraint propagation and coordination that evidently occur in human (and other animal?) vision, but not yet in robots.
Some of the ideas about such mechanisms are presented in a crude preliminary form here:
Related ideas about links between functions
of vision and evolution of language are
tentatively proposed here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk111
That example is a simple case of a general phenomenon with many more complex instances: the laws of physics allow some physical configurations to have one or more stable states, in which the energy of the system is at a local or global minimum, so that changes away from that state will require a supply of energy -- possibly a lot of energy.
The lid of a car boot ("trunk" in USA) is often built with a system of levers and springs producing two stable states with energy minima: (a) fully open, with the springs overcoming gravity, (b) fully shut, with gravity overcoming the springs. Between those two states there may be an intermediate region in which the forces produced by springs and gravity are equal, but the lid is unstable: any slight change will cause it to continue moving towards one of the stable states. However, if there is a lot of friction in the hinges, states in that intermediate region may be stable but easily perturbed: a slight movement of the lid up or down beyond a certain limit will cause motion towards one of the two more stable states.
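A crude numerical sketch of the two-minimum picture (my illustration; the constants are invented, not measurements of any real lid): model the lid's potential energy as a gravity term that grows as the lid rises plus a spring term that relaxes as the lid opens. The shut and open states then sit below an energy barrier between them.

```python
import math

# Invented, illustrative constants:
A = 100.0      # gravitational torque scale (weight x distance to hinge)
K = 40.0       # torsion-spring stiffness
THETA0 = 2.2   # spring's relaxed angle (radians), beyond fully open

def energy(theta):
    """Potential energy of the lid at opening angle theta (0 = shut)."""
    gravity = A * math.sin(theta)            # lifting the lid's weight
    spring = 0.5 * K * (theta - THETA0) ** 2 # spring unwinds as lid opens
    return gravity + spring

# Sample the range shut..fully open (theta = 2.0) and find the barrier.
ts = [i * 2.0 / 400 for i in range(401)]
us = [energy(t) for t in ts]
barrier = max(range(1, 400), key=lambda i: us[i])
print(f"shut: U={us[0]:.1f}  open: U={us[-1]:.1f}  "
      f"barrier at theta={ts[barrier]:.2f}: U={us[barrier]:.1f}")
# Both end states lie below the barrier: each is stable, and pushing
# the lid past the barrier lets it fall into the other state.
```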
When large numbers of physical components with two or more stable states are connected in such a way that for each component the state can be changed if some combination of connected states change, and some combinations of states in a subset of items are consistent with only one combination of states in another subset, then forced small changes in some parts can cause massive numbers of flips in the rest of the system.
In a mechanically linked system those influences may propagate slowly across the system, and be damped by friction and viscosity, so that the system as a whole can be in "intermediate" or "mixed" states. In a digital computer influences can propagate very rapidly, with many binary switches changing state and no intermediate switch states allowed by the electronics. However, if a change in one part of a constraint network requires many changes in the remainder of the network to preserve consistency, then in a single serial computer, or a network of computers with fewer CPUs than network nodes that need to be changed, the computer will have to pass through "inconsistent" stages in which some but not all of a collection of bits have been flipped. The only way to avoid this incoherent intermediate state would be to arrange for all the relevant switches to flip in parallel. That might be done by a process of setting up all the state changes in advance, to be executed at a certain time.
This would require every memory cell to be far more complex than a mere flip-flop. More importantly, it would require the final consistent combinations of states to be known in advance, and in general they can only be found by searching through sets of alternatives. So no amount of parallelism in a conventional computer can produce instant solutions to hard constraint propagation problems.
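A toy illustration of the serial case just described (my construction, not from the paper): a chain of binary switches constrained so that neighbours must agree. A forced change at one end requires every other switch to flip, and a serial updater necessarily passes through globally inconsistent intermediate states.

```python
def violations(states):
    """Count violated 'neighbours must agree' constraints."""
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

chain = [0] * 8   # a consistent network of coupled bistable components
chain[0] = 1      # a forced small change at one end

flips = 0
while violations(chain) > 0:         # serial constraint propagation
    for i in range(1, len(chain)):
        if chain[i] != chain[i - 1]:
            chain[i] = chain[i - 1]  # flip one switch at a time
            flips += 1
            print(f"after {flips} flips: {chain}, "
                  f"violations = {violations(chain)}")

# Seven serial flips are needed, and every intermediate state violates
# a constraint; only the final state satisfies them all simultaneously.
```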
If different layers of networks organised in this way can switch modes many times per second, that might be useful for an artificial visual system in which many retinal cells are constantly receiving new information from which large-scale global interpretations need to be derived. For this to work, gaze direction may need to be constantly held "fixated" on particular scene locations, long enough for the lowest-level network to "settle down" and transmit its state to another part of the system, before the input pattern is changed by a saccade, or something else.
Consider a multiple-constraint problem, such as the problem of simultaneously interpreting a very complex image with many ambiguous components, where the interpretations assigned to different fragments may or may not be mutually consistent. AI systems have addressed this problem in various contexts, including vision research e.g. using Waltz filtering, Relaxation and related techniques (See Freuder & Mackworth (1994)).
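As a concrete (and much simplified) sketch of the kind of technique meant here, not the POPEYE code itself: Waltz-style filtering repeatedly deletes any candidate interpretation of one fragment that is incompatible with every remaining interpretation of a neighbouring fragment, until no more deletions are possible.

```python
from itertools import permutations

def waltz_filter(domains, compatible):
    """domains: {fragment: set of candidate labels}.
    compatible(f, lf, g, lg): may f carry label lf while g carries lg?
    For simplicity every pair of fragments is treated as constrained;
    real systems restrict this to an explicit list of arcs."""
    changed = True
    while changed:
        changed = False
        for f, g in permutations(domains, 2):
            for label in list(domains[f]):
                # Prune a label with no consistent partner in g's domain.
                if not any(compatible(f, label, g, lg) for lg in domains[g]):
                    domains[f].discard(label)
                    changed = True
    return domains

# Toy use: two edge fragments meeting at a junction must agree.
doms = {"edge1": {"convex", "concave"}, "edge2": {"convex"}}
print(waltz_filter(doms, lambda f, a, g, b: a == b))
# => edge1 loses "concave"; only mutually consistent labels survive.
```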
Instead of a static image to be interpreted there could be a continuously changing view of a complex changing scene such as a garden with a mild intermittent breeze, illustrated in the videos here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vision/plants
Producing a useful, detailed, and consistent interpretation of a stream of dense data in real time is a major challenge for AI vision systems. Moreover, specifying what sort of information content a "useful interpretation" would need to have is an unsolved problem for AI, psychology and biology.
Designers of stereo vision systems often assume that the output of stereo vision should be something like a 3-D depth map, or a 3-D model of all the visible surfaces. But that leaves open the question how such data-structures could be useful for intelligent systems. They are useful for displaying images of 3-D scenes but that's not what brains use vision for.
Most researchers seem to assume that it is obvious what the functions of vision
are, though they don't all agree on what they are. Moreover, very few vision
researchers seem to have noticed that one of the functions of vision is
perception of necessary connections between geometrical and topological
relationships, of the sorts that led to discoveries presented in Euclid's
Elements. For more on the functions of vision see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vision-functions.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html
Conventional computers can only consider one proposed interpretation at a time, where each proposal may involve setting possible values for a large collection of variables. A global measure of "goodness" or "cost" (negative goodness) may be computable for each solution, or partial solution, e.g. by counting how many constraints are violated, the fewer the better. Then the problem is to find a set of assigned values for which the total cost is minimised. If there are two or more such "best" solutions they should all be found so that they can be matched against other criteria later.
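A brute-force sketch of that serial scheme (illustrative; the variable names and constraints are invented): enumerate assignments one at a time, score each by the number of violated constraints, and keep every assignment tied for minimum cost so all "best" solutions survive for later matching.

```python
from itertools import product

def best_assignments(variables, values, constraints):
    """Serially enumerate assignments; cost = number of violated
    constraints; return ALL minimum-cost assignments (ties kept)."""
    best, best_cost = [], float("inf")
    for combo in product(values, repeat=len(variables)):  # one at a time
        assignment = dict(zip(variables, combo))
        cost = sum(1 for c in constraints if not c(assignment))
        if cost < best_cost:
            best, best_cost = [assignment], cost
        elif cost == best_cost:
            best.append(assignment)   # keep ties for later criteria
    return best, best_cost

# Toy example: two image fragments whose interpretations must differ.
solutions, cost = best_assignments(
    ["f1", "f2"], ["edge", "shadow"],
    [lambda a: a["f1"] != a["f2"]])
print(cost, solutions)  # 0 violations; both consistent assignments kept
```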
A highly parallel computing system can explore alternative sets of values in parallel using multiple CPUs, though some book-keeping scheme will be needed for preventing duplication (the same possible solution being tried more than once) and for rejecting partial solutions that are already worse than the best so far. Moreover, propagating consequences of changes in a part of a constraint network may require a significant number of time steps even in a multi-CPU implementation.
I have no idea whether there is some feature of quantum mechanics that could make this possible, using superposition of possible states, entanglement to implement constraints, with multiple non-local interactions based on constraints to be satisfied by good solutions. If animal brains could do something like that, solving different constraint problems at different levels of abstraction in parallel, as indicated for a simple problem in the figure above, it could be of enormous value for animals that need to move quickly in complex changing environments, e.g. birds or apes moving quickly between branches of trees, animals running across rocky terrain, or animals engaged in fights or attempts to bring down prey struggling to escape.
Perhaps such a system could also be part of an information processing system
able to reason about spatial possibilities and constraints, in the way that
mathematicians do on the basis of a mixture of discrete and continuous
mechanisms.
See
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html
Warning: the ideas being developed and presented on this web site may go on
changing for a long time. Links to recent versions of the most important papers
should be available in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
and
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html
Notes

[1] Euclid's Elements: http://www.gutenberg.org/ebooks/21076
[2] http://plato.stanford.edu/entries/democritus/#2 and http://en.wikipedia.org/wiki/Democritus
[3] http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#chap2
[4] Extended in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
[5] http://www.cs.bham.ac.uk/research/projects/cogaff/misc/explaining-possibility.html
[6] https://www.youtube.com/watch?v=wcXSpXyZVuY
[7] See http://en.wikipedia.org/wiki/Control_theory and http://en.wikipedia.org/wiki/Nonlinear_control
[8] The role of entropy is discussed briefly in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/entropy-evolution.html
[9] http://www.theguardian.com/cities/2014/feb/18/slime-mould-rail-road-transport-routes
[10] http://www.cs.bham.ac.uk/research/projects/cogaff/misc/shirt.html
[11] http://www.it.bton.ac.uk/Research/CIG/Believable%20Agents/
[12] http://en.wikipedia.org/wiki/Two-streams_hypothesis
[13] Some examples are here: http://bicasociety.org/cogarch/
[14] The Birmingham SimAgent toolkit is an example: http://www.cs.bham.ac.uk/research/projects/poplog/packages/simagent.html
[15] As discussed in connection with "toddler theorems" in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html (Contributions from observant parents and child-minders are welcome. Deep insights come from individual developmental trajectories rather than statistical patterns of development across individuals.)
[16] For more on Kantian vs Humean causation, see the presentations on different sorts of causal reasoning in humans and other animals by Chappell and Sloman at the Workshop on Natural and Artificial Cognition (WONAC, Oxford, 2007): http://www.cs.bham.ac.uk/research/projects/cogaff/talks/wonac
[17] http://en.wikipedia.org/wiki/Symbiogenesis
[18] Some of them are listed in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/mathstuff.html
[19] http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html#blind-theorem
[20] For more on this see http://en.wikipedia.org/wiki/Church-Turing_thesis
[21] Examples of human mathematical reasoning in geometry and topology that have, until now, resisted replication on computers are presented in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html and http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-sum.html
[22] http://en.wikipedia.org/wiki/Pupa and http://en.wikipedia.org/wiki/Holometabolism
[23] http://en.wikipedia.org/wiki/J._B._S._Haldane
[24] http://www.cs.bham.ac.uk/research/projects/cogaff/misc/befm-sloman.pdf
[25] Illustrated in these discussion notes: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changing-affordances.html http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-theorem.html http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html
[26] http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
[27] One of many online explanations is http://www.theprojectspot.com/tutorial-post/simulated-annealing-algorithm-for-beginners/6
[28] An interview with the author (Wagner) is online at https://www.youtube.com/watch?v=wyQgCMZdv6E