The examples were from Euclidean geometry and variants of Euclidean geometry, including the triangle sum theorem and [Sloman Trisect], explaining angle trisection using the neusis construction, in the context of what I've called P-geometry (as a tribute to Mary Pardoe, a former student, who gave me the idea). These examples illustrate a collection of issues relating mathematics, philosophy, psychology, evolution, animal intelligence, human development, and gaps in current AI.
Gaps between biological and artificial intelligence arise partly from serious gaps in the implicit or explicit requirements analysis behind much research in AI/Cognitive science/Neuroscience, and the inadequacy of currently available modes of representation and limits of both logic-based reasoners and statistical/probabilistic learning mechanisms.
A tentative partial analysis of representational and architectural requirements for more human-like AI mathematical reasoners was presented. As far as I know the capabilities involved in making these mathematical discoveries are not yet available in any computer-based mathematical reasoner. Moreover, the "deep learning" mechanisms that are producing very useful practical applications in AI and stunningly good game-playing abilities, e.g. in Go, are not capable of making these mathematical discoveries, e.g. in geometry and topology.
The gaps have mostly gone unnoticed by almost all AI researchers, even though the abilities involved are relevant to far more AI goals than modelling human mathematicians. For example, some precursors of these mathematical competences can be found in pre-verbal human toddlers and intelligent non-human species, such as squirrels, elephants and weaver birds.
This is part of the Turing-inspired Meta-Morphogenesis (M-M) project which
investigates evolution of varieties of information processing between the very
simplest organisms (or pre-biota) and current life forms. For an overview see:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Some of the M-M ideas are developed further in various other documents and videos on this web site, including discussions on evolution of "construction kits" of various sorts, introduced in the tutorial presentation to the ESSENCE Summer school, shortly after this talk.
The rest of this document presents some of the theoretical background to the discussion of gaps in mathematical reasoning abilities of AI systems, and gaps in understanding of capabilities of animal brains and the mechanisms supporting those capabilities.
Perhaps the single most important point is that these gaps seem to have been invisible to the vast majority of researchers.
A tentative answer: The Meta-Morphogenesis (M-M) project: an attempt to identify and explain significant transitions in types of information-processing capabilities produced by evolution, and products of evolution, between the earliest (proto-)life forms and current organisms, including changes that modify evolutionary mechanisms.
Many products of evolution exploit mathematical structures and constraints, e.g. in construction and use of new physical/physiological mechanisms, new control mechanisms, new types of information, new forms of representation, new ways to store and manipulate information, and new layers of increasingly sophisticated virtual machinery.
A simple example is homeostasis: use of mathematical properties of negative feedback loops in controlling temperature, pressure, or amount of some chemical in a fluid. Many biological control mechanisms make use of the mathematical properties of negative feedback to implement control functions. A recent scientific discovery about this is reported here:
http://phys.org/news/2016-06-negative-feedback-loops-function-mutated.html
"Negative feedback loops help maintain the function of mutated proteins", June 22, 2016
How the useful properties of negative feedback loops (homeostasis) were "discovered" by biological evolution is, as far as I know, unknown. This is just one type of example of a use of a mathematical structure in living things. There are many more examples.
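The mathematical property being exploited can be sketched in a few lines of code. This is a minimal illustration, not a biological model: the setpoint, gain, and step count are arbitrary choices made for the example.

```python
# A negative feedback loop: each step applies a correction proportional
# to, and opposing, the current error, so the controlled quantity
# converges toward the setpoint regardless of where it starts.

def regulate(value, setpoint, gain=0.5, steps=20):
    """Repeatedly apply a correction opposing the error."""
    for _ in range(steps):
        error = setpoint - value
        value = value + gain * error   # negative feedback
    return value

print(round(regulate(5.0, 37.0), 3))   # ≈ 37.0, starting well below
print(round(regulate(50.0, 37.0), 3))  # ≈ 37.0, starting above
```

The convergence does not depend on the starting value, only on the sign of the correction: that is the mathematical property evolution "discovered".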
More complex examples were studied by D'Arcy Thompson, the biologist Brian Goodwin, and others. E.g. the design for a four limbed organism that controls its movements during an extended period of growth, with varying sizes (and relative sizes), angles, weights, shapes, strengths of parts, seems to need control mechanisms based on parametrized mathematical abstractions that can use changing parameters provided during growth and development.
There are also transitions between different modes of locomotion in the same organism, i.e. transitions between different control regimes -- for instance differences between walking, trotting, cantering and galloping and their usefulness at different speeds. In some cases control changes may use a fixed mechanism with modified parameters, whereas in others it is necessary to turn different control mechanisms on and off, for example switching between galloping across a field and grazing.
Far more complex examples of use of mathematical structures, including non-numerical structures, for control purposes occur in language evolution and language development. Linguists inspired by Noam Chomsky (building on ideas of Frege mentioned below) have developed theories according to which human genes provide an abstract characterisation of a type of linguistic structure or function that can be instantiated in many different ways by filling gaps in the structure, or providing arguments for the function. This is often referred to as "Government and binding" theory, or "Principles and parameters".
The key idea is that despite the huge variations found between different human languages they all share a common mathematical abstraction, or collection of mathematical abstractions, somehow genetically encoded, whose instantiations cover thousands of different human languages.
It seems that most linguists discussing this are completely unaware that it is a special case of a very familiar general notion used in logic, mathematics and computer science, for which different labels are used by different groups of researchers. For example, one of the labels used in computer science is parametric polymorphism. Implementations of this idea are used in a variety of kinds of programming languages, including so-called "Object Oriented (OOP)" languages.
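A minimal sketch of parametric polymorphism, using Python's standard typing machinery: one abstract structure is instantiated with different type "parameters", loosely analogous to one genetically provided schema being instantiated by different languages. The structure and the example values are invented for illustration.

```python
# One parametrized schema, many instantiations.
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Pair(Generic[T]):
    first: T
    second: T

p1 = Pair[int](1, 2)                # the schema instantiated with numbers
p2 = Pair[str]("subject", "verb")   # the same schema with words
```

The point is that `Pair` itself is neither a pair of numbers nor a pair of words: it is an abstraction whose instances are determined by the parameter supplied.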
The Chappell+Sloman "Evo-devo" ideas depicted below suggest that the relationships between genetic specification and instantiations in individuals can be more complex than that description suggests because of the use of several layers of instantiation within an individual's development, each requiring a period of environmental interaction for detailed development, and with different sorts of environmental influences possible at different stages, allowing more varied patterns of mathematical instantiation, leading to a much greater variety of differences between adult individuals sharing the same (or mostly the same) genome.
x is Red = Red(x)
x is bigger than y = Bigger(x, y)
fred is between mary and jane = Between(mary, fred, jane)

But some functions produce results other than truth values, e.g.
the mother of fred = mother(fred)
the eldest child of mary and joe = eldest(mary, joe)

But most spectacular of all, he showed that higher order functions could take lower order functions as arguments, and that this notion gave a deep new understanding of old linguistic concepts like "all", "some", and "exists".
It also turned out that functions could produce other functions as results, or values. A trivial example would be a function that transforms a two-input function into a one-input function (sometimes referred to as "partial application" in programming languages). Let's call such a function "freeze-first". It takes a two-input function, such as sum-of, or product-of, and a number, and produces a one-input function.
For example, let's use 'sum(x,y)' for the function that adds two numbers to produce a number as a result, and 'mult(x,y)' for the function that multiplies two numbers to produce a number. Then we can apply freeze-first to produce new functions as follows:
freeze-first(sum, 3) produces a function sum3, such that
    sum3(4) = 7
    sum3(9) = 12
freeze-first(mult, 3) produces the function mult3, such that
    mult3(4) = 12
    mult3(9) = 27

What should happen if freeze-first is applied to the function sub (where sub(x,y) is x - y) or the function div (where div(x,y) is x/y, i.e. x divided by y)?
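The freeze-first idea can be written directly in code. The sketch below adopts one reasonable answer to the question about sub and div: freezing fixes the first argument, so the resulting one-input function still expects the second argument. (Python's standard library provides the same operation as functools.partial.)

```python
# freeze-first: turn a two-input function plus a number into a
# one-input function, by fixing the FIRST argument.

def freeze_first(f, a):
    def frozen(b):
        return f(a, b)
    return frozen

def sum_(x, y): return x + y
def mult(x, y): return x * y
def sub(x, y):  return x - y

sum3 = freeze_first(sum_, 3)
mult3 = freeze_first(mult, 3)
sub3 = freeze_first(sub, 3)

print(sum3(4), sum3(9))    # 7 12
print(mult3(4), mult3(9))  # 12 27
print(sub3(1))             # 2, since sub(3, 1) = 3 - 1
```

For non-commutative functions like sub and div the choice of which argument to freeze matters, which is exactly why the question in the text is worth asking.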
Those are trivial examples, but mathematicians are familiar with far more complex examples of "higher order" functions that can be applied to functions to produce new functions, including the functions "integrate" and "differentiate" used in the differential and integral calculus independently invented by Newton and Leibniz.
Frege attempted to show that all the concepts, truths and proofs in arithmetic (i.e. the study of the natural numbers) could be shown to be parts of pure logic, because logic was richer than older philosophers had realised. That ambition was undermined by Russell's discovery that Frege's logical system led to contradictions (Russell's paradox, based on the concept of the set of all sets that do not contain themselves, a topic that need not concern us here).
In contrast Frege did not believe that geometry could be shown to be based on logic, despite the fact that David Hilbert had produced a logical axiomatisation of Euclidean geometry. For further details see the excellent discussion in Blanchette, 2014. So Frege sided with Kant as regards the nature of geometry, but not arithmetic. Compare the defence of Kant in Sloman(1962).
Frege's work helped philosophers and linguists realise that such function-transforming higher-order functions are commonplace in human languages, raising the question how it was possible for biological evolution to be capable of creating such sophisticated higher-order information processing mechanisms.
Wild speculation: It is possible that some "higher order" aspects of the parametric polymorphism in human languages evolved in connection with attempts to use more primitive forms of language to communicate about language, e.g. to help younger members of a community to develop more sophisticated uses of language. Requirements for self-debugging after plans have failed, or predictions proved erroneous may also have added tendencies to favour meta-cognitive and meta-semantic mechanisms, provided that evolutionary mechanisms had the power to produce them.
In order for such novel functionality to be selected because of its value (ultimately its reproductive value), the genetic mechanisms must already have had the power to generate mechanisms producing the novel functionality. That's a general point about pre-requisites for natural selection that is forgotten by those who over-estimate the explanatory power of the Darwin-Wallace notion of natural selection. I have argued in a separate paper that fundamental and evolved construction kits of many kinds, with different mathematical properties, are essential parts of the explanation [Sloman 2014ck].
This can be expressed by saying that in linguistic contexts Frege's functions have an implicit additional argument, something like "The world", or "The relevant portion of the world", where only a restricted region of space-time is relevant to what is being said and the current actual contents of that region are implicitly referenced.
This world-dependence of values of functions expressed by linguistic terms has the consequence that Frege's notion of the "Course of values" (German Wertverlauf) needs to be amended. In arithmetic, the course of values of a function F corresponds roughly to the set of pairings of arguments of F with corresponding values, or, if F is a multi-input function, like "plus" or "average", the set of pairings of sets of arguments of F with corresponding values. In arithmetic the pairings are uniquely determined by F. Frege's notion is closely related to the mathematical concept of the "extension" of the function (a set of ordered pairs of inputs and outputs, where each input may be a vector). For technical reasons that are not important here, Frege could not simply use that concept.
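For a function over a small finite domain the notion of an extension (and, roughly, a course of values) can be made concrete in a few lines. The function and domain below are invented purely for illustration.

```python
# The "extension" of a function over a finite domain: the set of
# argument-value pairs, uniquely determined by the function in the
# arithmetical case.

def double(x):
    return 2 * x

extension = {(x, double(x)) for x in range(4)}
print(sorted(extension))   # [(0, 0), (1, 2), (2, 4), (3, 6)]
```

Nothing about the state of the world enters here: given the function, the pairings are fixed. The contrast with world-dependent "functions" of ordinary language is the subject of the next paragraph.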
Frege's extension of the mathematical notion of function to include concepts of human languages raises a problem that is often ignored. Unlike mathematical functions, the value of a function like "the wife of..." or "the tallest tree in..." for given arguments will depend on the state of the relevant portion of the world. The properties of a number and its numerical relationships could not have been different from what they are now, whereas many of the objects referenced in ordinary language, e.g. "the wife of Fred" (assuming Fred is uniquely identified), depend on how the world is or was at the relevant time. In such cases the "course of values" combining all the argument-value or input-output pairings will also depend on how the world is. So, for such non-mathematical functions, instead of a unique Wertverlauf a given function/rogator will have a different Wertverlauf in different possible configurations of the world, a point that, as far as I know, was never noticed or acknowledged by Frege.
The metaphor of asking the environment, or the state of some portion of the world, a question, in order to find out the value associated with an expression, led to the proposal in Sloman(1962) (also presented in Sloman(1965b)) to replace the word "function" with "rogator", based on the Latin for "to ask", namely "rogare".
This implicit reference to the current state of some relevant portion of the world, as an implied extra argument for linguistic functions, is especially important for predicates describing properties and relations that can change rapidly, e.g.
length-of(obj),
  where obj is a piece of stretchable elastic,
location-of(obj),
  where obj is something that could have been in a different location
between(obj1, obj2, obj3),
  where the three objects could have been arranged in a different order.
As mentioned above, the problem of accommodating situation dependence could not be solved simply by adding a time as an extra argument, as in

length-of(obj, t),
location-of(obj, t),
between(obj1, obj2, obj3, t),

since if the time is in the past, then in principle things could have been different at that time if some event had or had not occurred. Likewise, if the time is in the future, then there are different possible ways the future may develop. In either case it is not merely the time that makes a difference to the value but how things are at that time, i.e. the properties and relationships of objects, structures, and processes in some portion of the world. Many portions of the world could have been different in different possible situations, e.g. if some earlier portion of the world had been different.
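One way to sketch the rogator idea in code is to make the state of (a portion of) the world an explicit argument: the same question, put to different possible worlds, yields different answers, and hence a different course of values per world. The world states and object names below are toy data invented for the example.

```python
# A rogator: a mapping whose argument-value pairings depend on the
# state of the relevant portion of the world, not on the argument alone.

def location_of(obj, world):
    """Ask the given world-state where obj is."""
    return world["locations"][obj]

world_a = {"locations": {"ball": "garden", "cup": "table"}}
world_b = {"locations": {"ball": "shed",   "cup": "table"}}

print(location_of("ball", world_a))  # garden
print(location_of("ball", world_b))  # shed: same question, different world
```

Note that only the relevant portion of the world is consulted, not the whole universe, which is the contrast drawn with possible-world semantics below.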
As mentioned above, this requirement to extend the standard concept of function to include an implicit extra argument (the current state of the relevant portion of the world) was discussed in Sloman(1962) and Sloman(1965b). There are important consequences, including consequences for a theory of modality (possible, impossible, necessary, contingent), as explained in the defence of the philosophy of mathematics of Kant (1781). This is totally different from theories of "possible-world" semantics for modal concepts, since rogators need not be concerned with the whole universe -- only a small relevant portion of the universe.
As far as I know the distinction between functions and rogators has received no attention from logicians and philosophers, even after it was presented at an international logic colloquium in Oxford in 1963. (The exception is a footnote in a paper by David Wiggins.) However, the distinction is crucial for understanding the mathematical achievements of biological evolution, since in many forms of biological control there are mappings from inputs to outputs that depend on possibly changing environmental features. In particular, mappings between sensory inputs and motor outputs for organisms whose size, shape, weight, and muscular strength change during development cannot use the fixed input-output pairings of normal mathematical functions. The pairings must be context sensitive.
Insofar as evolution also discovered abstract designs for control subsystems that could be used across different species (e.g. species derived from some common ancestor), the input-output mappings had to be susceptible to modification across genomes for different lineages -- e.g. the genomes of mammalian quadrupeds with different shapes, sizes, and environments.
For some quadrupeds the variety of forms of motion can vary enormously, possibly requiring separately evolved control subsystems that can be turned on or off. In particular the control requirements for four-limbed organisms, like orangutans, that can move both across roughly horizontal terrain and in climbing up tree-trunks and along branches, and by swinging on vines or lianas, are far more demanding than for organisms that can move only along surfaces that constantly provide upward forces, e.g. various kinds of roughly horizontal or slightly sloping terrain. Non-climbing organisms are constantly prevented from moving downwards by the upward pressure of the terrain on feet, hooves, or equivalent, whereas climbing animals may hang from horizontal supports, cling to the sides of vertical supports, and in some cases swing while supported by hanging vines or lianas. Organisms that can constantly switch between different modes of locomotion on the ground, on tree-trunks, along branches, etc., require rapid yet smooth transitions between very different control regimes. As far as I know, how their motion control systems evolved, how they develop in individuals, and how they work in expert adults are all unknown. (I suspect current methods of training robots are incapable of producing similar combinations of competences.)
Frege's ideas were extended in various directions by Bertrand Russell and other logicians and mathematicians, especially Alonzo Church, whose lambda calculus had a tremendous impact on developments in programming languages, the theory of computation, and significant portions of modern mathematics. (It was the basis of Lisp, which for a while was the main AI programming language used for exploring designs for intelligent machines, though it was never the only language.)
Early examples of such use of abstraction in information processing show that evolution implicitly discovered and used a wide range of mathematical features of the world: refuting the common belief that mathematics is a human creation.
Such mathematical information, for example information about topology and geometry, seems to be acquired by many young animals on the basis of playful exploration during development, a process that is often grossly mis-described as acquisition of statistical information about the environment, or information about sensori-motor regularities. (However, I am not denying that part of what is learnt may fit that description.)
Most of the details of these processes of learning and development are unknown, though some ideas about them developed with Jackie Chappell, are summarised in this diagram crudely showing features of epigenetic development of intelligence, based on Chappell and Sloman 2007.
Chris Miall helped with the original
version of this diagram, published in
2007.
Alan Bundy commented usefully on an early version.
This is intended to replace Waddington's epigenetic landscape idea:
here an individual's "landscape" of opportunities for further development
is constantly being rebuilt, on the basis of both genetic influences
and influences from the environment of the individual.
The ability of developing individuals to find new useful abstractions
at different stages of development, in very different environments
depends crucially on evolution having found powerful, higher level,
though possibly simple, abstractions in the (possibly distant) past.
Discovery of new concepts and relations, of new possibilities for combining old ones, and of new constraints found in new structures realising those possibilities, has nothing to do with statistical learning. (I think Piaget realised this dimly, though Kant saw it very clearly much earlier, in Kant (1781).)
However, I suspect neither understood the full implications of the kind of parametric polymorphism implicit in Frege's theory of functions of many levels.
In other words, I claim that various aspects of intelligent information processing in humans, including pre-verbal toddlers (as discussed here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html), and in many other intelligent species that can make discoveries about their environments enabling them to behave intelligently in novel ways, all essentially involve this ability to use new combinations of old concepts, along with novel invented concepts that are found useful. Such concepts are created at first by environmentally driven novel instantiations of genetically provided schemata, and later by use of newly formed high-level abstractions to open up new spaces of possibilities.
This is a truly deep form of biological creativity, based on mechanisms that, at least in the case of humans, and some of their domesticated animals, never seem to stop adding new complexity and new functionality, often in response to problems generated by making use of previous creations, either in practical (engineering) applications or in processes of reasoning about new possibilities.
That is, either mechanisms of evolution, or developmental products of those mechanisms in individual animals or communities or ecosystems, continually produce new competences based on use of increasingly complex or increasingly abstract new mathematical structures, which in most cases are used unconsciously, like the uses of grammatical constructs in human verbal communication.
These capabilities, although based on abstract mathematical powers implicitly provided by the genome, do not, contrary to some misconceptions about mathematics, rigidly programme cognitive developments in the individuals, because the application of functions at various levels of abstraction can depend on previous achievements during development and on new problems posed by the environment.
Many of these developments include grasping and using relationships between possibilities, including new combinations of possibilities, rather than mere use of previously acquired laws in predicting what will happen. The ability to play thoughtfully (or think playfully) about spaces of possible changes, and chains of changes in the environment, is a crucial feature of the ability to make plans, apply them, debug them, extend them, etc. Some aspects of this have been demonstrated in AI planning and debugging systems, including Sussman's HACKER program (G.J. Sussman, 1975).
The key role of discovery and explanation of new possibilities, as opposed to new laws or statistical regularities, in the advance of science was presented in Chapter 2 of "The Computer Revolution in Philosophy" (1978) http://www.cs.bham.ac.uk/research/projects/cogaff/crp/#chap2
Evolution's "proofs of possibility" are implicit in evolutionary and developmental trajectories that lead up to instances of the possibilities. So mathematics is not a human creation, as many believe, and the early forms of representation and reasoning are not necessarily similar to recently invented logical, algebraic, or probabilistic forms.
The "blind mathematician" later produced at least one species with mathematical meta-cognitive mechanisms. These allow individuals who have previously made "blind" mathematical discoveries, e.g. what I've called "toddler theorems" (see below) and implicit grammatical discoveries, to notice their discoveries and then go on to modify them and apply them more selectively; in later developments they allow the discoveries to be parametrised, combined, tested, and improved on the basis of increased understanding of constraints.
Later still, meta-meta-(etc?)cognitive mechanisms allow products of earlier meta-cognition to be communicated to other members of the species, challenged, defended, organised, and formally taught, eventually leading to collaborative advances, and documented discoveries and proofs, e.g. Euclid's Elements (sadly no longer a standard part of the education of our brightest learners). Many forms of applied mathematics grew out of the results.
Unfortunately, most of this pre-history is still unknown and may have to be discovered piece-meal, using intelligent guesswork and cross-species comparisons.
Similar trajectories occur, largely unnoticed, in young children. Theories that assume they learn mathematics, language, and much else from adults ignore the fact that something more than passive learning is required, since originally there were no adult speakers and mathematicians to learn from.
Related points can be made about requirements for evolution of vision in intelligent animals, including humans, nest-building birds, elephants, squirrels, and many others:
Mathematical advances are built on discoveries of new possibilities and impossibilities (constraints on possibilities): this is utterly different from statistical learning about probabilities as in Bayesian learning. Possibility and necessity are not points on a probability scale.
Illusionists implicitly use the fact that humans don't need a mathematical education to understand some mathematical (e.g. topological) impossibilities. E.g. young non-mathematicians can understand (how?) that it is impossible for two solid rings to become linked, and then to become unlinked, as shown by their responses to apparent counter examples e.g. http://www.ellusionist.com/messado-linking-rings-magic.html
How do humans and other animals discover, and represent, possibilities and impossibilities -- including humans who lived long before the development of modern logic and topology? How do intelligent nest builders, such as crows and weaver birds, (mostly) avoid wasting time on "obviously" impossible construction steps? Does every useful and obstructive configuration have to be learnt empirically?
I also have a collection of online examples of human abilities to discover and reason about possibilities and impossibilities with various kinds of mathematical structure, some very simple others requiring considerable sophistication.
An example (presented in the seminar) is the use of P-geometry, which is a
variant of Euclidean geometry discovered (implicitly) by Mary Pardoe several
decades ago when trying to get her pupils to understand the triangle sum
theorem, as explained in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/triangle-sum.html
An extension to Euclidean geometry that is slightly richer than P-geometry makes it possible to trisect an arbitrary angle in a plane, as shown in [Sloman Trisect].
Shirt mathematics -- the set of types of 3-D trajectories by which a shirt can
be put on a child is discussed in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/shirt.html
A more complex example involves equivalence classes of continuous closed curves on the surface of a torus http://www.cs.bham.ac.uk/research/projects/cogaff/misc/torus.html
Some of the abilities of human toddlers to discover "toddler theorems" are
illustrated and discussed here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Features of 3-D visual perception involving abilities to perceive and reason
about possibilities
and impossibilities, and a host of related facts about vision
and space, are discussed here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html
There is a very interesting (and deep?) collection of examples in a lecture presented in Edinburgh in 2014 by the well-known mathematician and computer scientist Dana Scott:
https://www.youtube.com/watch?v=sDGnE8eja5o
Prof. Dana Scott - Geometry Without Points
Recorded on Monday 23 June at the University of Edinburgh.
The slides for the lecture seem to be available on line here:
https://www.logic.at/latd2014/2014%20Vienna%20Scott.pdf
Look at how he uses not only diagrams but also hand-motions to convey concepts and forms of reasoning. It seems to me that much of what he is communicating could not be translated into a logical notation. Moreover a logical version would not engage with common human reasoning powers of the sorts used in making the original discoveries in geometry and topology in the distant past, and until fairly recently used by bright young mathematics students learning geometry in school and university.

For people who are unfamiliar with traditional presentations of geometry, a good overview video tutorial of some of the history of mathematics, including ancient mathematics, can be found here:
https://www.youtube.com/watch?v=YsEcpS-hyXw
History of Mathematics in 50 Minutes
Published on 21 Sep 2012
Professor John Dersch reviews many historical innovations in math.
A version accompanied by an approximate textual transcript is here:
http://www.allreadable.com/528bBf4
THIS SECTION NEEDS TO BE ENRICHED AND ROUNDED OFF
Can it be modelled? As far as I know, there are no AI theorem provers or learning robots that are capable of modelling these commonplace mathematical (or proto-mathematical) discoveries in humans and other animals, and the need for this has mostly gone unnoticed (though Piaget studied examples in his last two books, on "Possibility" and "Necessity"). But that does not mean this is beyond the scope of AI/Automated reasoning. It may merely require major advances!
I think Kant [Critique of Pure Reason, 1781] made some important steps towards characterising mathematical knowledge and its acquisition, using three distinctions (logical, epistemic, and metaphysical) to characterise mathematical truths: they are synthetic, not analytic; apriori, not empirical; and necessary, not contingent. Here 'apriori' has nothing to do with being innate (Sloman 1965) or infallible (Lakatos 1976). But so far his ideas have not been instantiated in AI systems, partly because he formulated only rather abstract requirements, not designs for working systems, or neural theories.
In my 1962 DPhil thesis I attempted to defend Kant's theories, using some of the ideas presented here. But at that stage I knew nothing about AI or computers. The thesis has recently been digitised and made available here: http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-1962.
Contemporary work on Foundations of mathematics seems to focus only on ways of constructing formal systems capable of modelling known mathematical structures, e.g. following Frege's nearly successful attempt to model arithmetic in logic. It seems to be widely believed that doing that would show arithmetical knowledge to be analytic. But Frege argued (against Hilbert) that modelling geometry in logic and arithmetic changes the subject.
In any case, the fact that one branch of mathematics M1 can be modelled in another M2 does not show that M1 is a part of M2. This also applies to arithmetic, the branch of mathematics concerned with numbers as known to ancient mathematicians long before formal axiomatic systems and modern logic had been developed: a demonstration that the whole of arithmetic can be modelled in logic would not show that arithmetic is logic. This point is independent of Goedel's incompleteness theorem, which can be interpreted as demonstrating that it is impossible to model all of arithmetic in logic.
A deep and general characterisation of the nature of mathematics, including arithmetic, topology and geometry, requires a new approach. I'll present some ideas in the context of the Meta-Morphogenesis project, using the example of trisection of an arbitrary angle: impossible in Euclidean geometry because of its restriction to unmarked straight edge and compasses, but made possible by a simple extension known to Archimedes. This fact has deep implications about the nature of geometrical knowledge.
Some evolutionary transitions seem to be recapitulated in individual development: e.g. human toddlers discover and use mathematical possibilities and constraints ("toddler theorems") before they can talk. Likewise intelligent non-human species: squirrels, corvids, elephants, apes, etc.
However, research in AI/Robotics and neural modelling seems to have ignored the cognitive processes and mechanisms involved in mathematical discoveries of new classes of possibilities (requiring ontological extensions), and new limitations on possibilities (mathematical laws, not probabilities).
But mathematical discovery began long before the discovery of logic or formal methods. By the time of Euclid's Elements a great deal had already been learnt about geometry and arithmetic, providing powerful tools for subsequent science and engineering. (Sample Euclid here: http://www.gutenberg.org/ebooks/21076).
After Descartes showed how geometry could be modelled (or partly modelled) in arithmetic this extended some of the applications of geometrical reasoning, e.g. in Newton's mechanics.
Later Frege and others showed how arithmetic could be (partly?) modelled in logic.
It usually goes unnoticed (though it was not unnoticed by David Hilbert) that the use of spatial notations for logic (e.g. marks on paper) shows that logic can be modelled in geometry. However, all those structural relationships between branches of mathematics leave unexplained the discovery processes that are possible for human mathematicians and future possible robot mathematicians. What sorts of mechanisms did the ancient mathematicians use? A related question is: how did biological evolution produce those mechanisms? How do they develop in individual human mathematicians? To what extent do other animals have such capabilities?
Can they be replicated in computers?
[This document needs to be reorganised]
REFERENCES
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/essence-kits-tut.html
Tutorial introduction to the task of investigating
evolved construction kits (concrete, abstract, and hybrid).
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham
--