(And some surprisingly complex variants of a simple problem.)
With grateful thanks for help from
Auke Booij
School of Computer Science, University of Birmingham
Diana Sofronieva
Department of Philosophy, Leeds University
N.B. Aaron Sloman is responsible for any errors in this document.
Why can't (current) machines reason like Euclid or even human toddlers?
(And many other intelligent animals)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html
I'll call that "The workshop web page" below.

After I mentioned the triangle-stretching example in a CS theory group seminar in Birmingham on 29th September 2017, Auke Booij pointed out the answer to a question I had left open. That prompted me to move the discussion of the example into this new document. This paper is available at:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/deform-triangle.pdf
The PDF version may be slightly out of date, from time to time.

On 3rd Nov 2017 Diana Sofronieva heard me talk about this in a seminar at Leeds University. The next day, at a conference in Leeds, she gave me a surprisingly complex answer to one of the questions I raised below, using the Apollonius construction in Euclidean geometry.
Her answer is presented and discussed in a separate document created on 17 Nov 2017.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/apollonius.html
An incomplete draft paper on requirements for understanding cardinal and ordinal numbers that generally go unnoticed by researchers on mathematical cognition and its neural underpinnings can be found here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cardinal-ordinal-numbers.html

A closely related side-shoot of all these discussions is a speculation about missing mechanisms in AI, neuroscience, and psychology:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html
A Super-Turing Membrane Machine for Geometers
(Also for toddlers, and other intelligent animals)

These are all part of the Turing-inspired Meta-Morphogenesis project
(Partly repeating the IJCAI workshop web page,
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html.)
Updated 27 Nov 2017
Why talk about deforming triangles? Because I think there are deep, largely
unnoticed, aspects of the ways human and non-human animal minds work that are
closely connected with the mechanisms underlying important non-numerical
mathematical discoveries by ancient mathematicians, i.e. topological and
geometrical discoveries (which I've argued elsewhere are at the root of
competences relating to cardinal and ordinal numbers, as opposed to
spuriously similar pattern recognition capabilities).
It is not always remembered that for ancient mathematicians the axioms and postulates of Euclidean geometry were not arbitrarily chosen starting formulae from which conclusions could be derived using pure logic, as in modern axiom-based mathematics: the ancient axioms were all discoveries.
There were also important discoveries that were not included: for example the possibility of extending Euclidean geometry with the "neusis" construction, which was known to Archimedes and which made it easy to trisect an arbitrary angle -- impossible in Euclidean geometry. For more information see: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html
I suggest that the mechanisms involved in thinking about what happens to angles of a triangle as it gets stretched by motion of one vertex relative to the other two were also available to ancient mathematicians, whether they thought of this example (explained below) or not.
Other things those ancient mechanisms are good at include topological categorisations, e.g. X and Y are connected/disconnected, X is contained in, overlaps with, or lies outside Y, and more complex cases.
Those mechanisms, or closely related mechanisms, are also needed for intelligent perception and action in richly structured, extremely varied spatial environments.
I suspect that all the core mechanisms for making and using such discoveries were available in brains that had evolved in environments in which how things look, and what you can see, change as you move, or as objects move. In some animals those mechanisms were extended to enable consideration of structures and processes that are not actually occurring, but which might occur, whether caused by the perceiver or not. (I have previously labelled these, and other examples of perceived possibilities for change in the environment, "proto-affordances", extending Gibson's theory of affordances Gibson(1979).)
The reflective ability to notice uses of such abilities, think about them, discuss them, and understand why they work, probably evolved only in humans, though the perception and use of proto-affordances and other sorts of affordances seems to occur in many intelligent species.
Many researchers, especially AI vision and robotics researchers, and some psychologists and neuroscientists, assume that spatial percepts necessarily require use of numerical measures, e.g. of distance, width, height, speed of motion, angles, areas, etc., or at least probability distributions over numerical measures.
However, biological sensors are mostly not good at precise numerical measurement, though some of them are good at detecting changes, or direction of change (increasing/decreasing) in physical relationships across space or time -- often using partial orderings, e.g. A is further than B, A is further than B by more than C is further than D, and many more, without being able to tell whether A is further from C than from D.
Such competences with partial orderings allow use of descriptions of structures and processes that are accurate but not precise, and which therefore, when they are available, suffice for control of actions without complex numerical computations of probabilities, now widely used in AI, including robotics. (For some simple examples explaining the power of partial orderings and topological relationships, see Sloman(2007-14).)
I suspect that at some (remote?) future date we'll understand how those non-metrical spatial detection mechanisms can contribute to rich and extremely useful structural and ordering information used by our remote ancestors and some other intelligent species interacting with complex structures and processes, including other agents -- offspring, mates, friends, live food, and foes, and perceived structures and materials used for many purposes, including feeding, building nests, and in some cases making tools.
I suggest that the current concern of scientists and engineers with numerical measures and numerical control mechanisms, combined with use of probabilities to compensate for limitations of accuracy and completeness in available measures, in studying and modelling various aspects of natural intelligence, is more a product of the prejudices currently built into our scientific and engineering educational practices than of the requirements for working mechanisms -- natural or artificial.
Biological optical sensors in birds and mammals (e.g. as described in https://en.wikipedia.org/wiki/Visual_system#Retina) seem to have far more complex functionality built in to their low level design than any human-engineered video camera, even those with the complexity of "Three-CCD" cameras, summarised in https://en.wikipedia.org/wiki/Three-CCD_camera.
Perhaps in future we'll learn more by finding out what all that complexity is actually used for than by building models based on assumptions about what it must be used for!
Some of the hypothesised topological and geometrical reasoning
abilities are analysed below, and will be discussed further in a separate
document exploring the idea of a Super Turing machine, mentioned briefly below.
Many more examples of mathematical impossibilities and necessities are
presented in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html
Later I'll discuss the problem of specifying mechanisms that can perform these tasks. Since the relationships found involve the extreme modalities of necessity and impossibility, they cannot be inferred from empirical evidence: they do not involve probabilities, for example, as explained in my DPhil thesis, Sloman 1962, defending claims in Kant(1781).
Note:
I think related topological mechanisms were originally required for (gradual) development and use of concepts of cardinal and ordinal number, both based on implicit understanding of properties of 1-1 correlations, long before anyone thought of modern axioms for natural numbers. Some relevant ideas are in chapter 8 of Sloman 1978, and Sloman(2016) (in the Section: "What about arithmetic?" Pages 10-13), expanded in this draft document (Nov 2017): http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cardinal-ordinal-numbers.html

If those conjectures are correct then all claims that cardinal or ordinal number concepts are innate are false. What seems to be innate (and shared across several species) is merely a superficially related template-based pattern recognition ability that for small numbers produces what look like answers to questions about cardinality. [Links/references to be added.]
This is one of several online discussions of human (and in some cases non-human) abilities to perceive impossibilities and necessary connections that seem to have evolved in the mechanisms required for dealing with spatial affordances, and later formed the basis of number competences. Those abilities were used in ancient mathematical discoveries long before the development of modern logic and formal systems. Some aspects of those ancient mathematical competences also seem to characterise abilities of pre-verbal toddlers and other animals, illustrated in videos linked from my IJCAI 2017 AGA Workshop web page: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/ijcai-2017-cog.html.
The currently dominant strategy in AI, psychology, and neuroscience for explaining natural intelligence postulates use of statistical evidence to build up re-usable information about probabilities. In contrast, for many years I have been collecting examples suggesting that that is a completely misguided theory of natural intelligence, and a poor basis for robot intelligence. (My work is partly inspired by ideas in Kant(1781).)
Some of the evidence comes from close observations (preferably video recordings) of pre-verbal humans and other animals and from analysis of requirements for intelligent decision making and the kinds of information (e.g. about topological structures and partial orderings rather than numerical measurements) readily available in the environment, discussed in Sloman(2007-14) and related online documents.
For example, you can safely walk through a doorway by aiming for a location between the left and right doorposts and well clear of both, without having either precise measurements of the locations and distances of the doorposts or probability distributions over numerical values.
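As a purely illustrative sketch (my own, not a proposal from this document; the function and parameter names are invented), the following shows how a controller could make such a decision using only ordering comparisons between perceived quantities -- which side has less clearance, and whether each clearance is acceptable -- with no metric coordinates for the doorposts and no probability distributions over measurements:

```python
# Hypothetical sketch: steering through a doorway using only ordering comparisons
# between perceived quantities, not precise measurements or probability distributions.

def steer_through_doorway(bearing_left_post, bearing_right_post, margin):
    """Return a qualitative steering decision.

    bearing_left_post, bearing_right_post: perceived angular bearings of the two
    doorposts relative to the current heading (negative = to the left).
    margin: the smallest clearance on either side that counts as "well clear".
    Only comparisons (orderings) of these quantities are used.
    """
    clearance_left = -bearing_left_post   # how far the left post is to the left of the heading
    clearance_right = bearing_right_post  # how far the right post is to the right of the heading

    if clearance_left <= 0 or clearance_right <= 0:
        return "heading lies outside the gap: turn towards the doorway"
    if clearance_left < margin and clearance_right < margin:
        return "the gap looks narrow: slow down and stay midway between the posts"
    if clearance_left < clearance_right and clearance_left < margin:
        return "too close to the left post: veer right"
    if clearance_right < clearance_left and clearance_right < margin:
        return "too close to the right post: veer left"
    return "keep going: well clear of both posts"

print(steer_through_doorway(-5, 30, margin=10))   # too close to the left post: veer right
print(steer_through_doorway(-20, 25, margin=10))  # keep going: well clear of both posts
```

The numbers in the example are only stand-ins for perceived magnitudes; the point it is meant to illustrate is that the decision depends on which of them is larger, not on their precise values.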
Many examples can be found in spontaneous, often surprising, reactions to objects or opportunities in the environment, in young children and other animals. Many of these are not repeatable, but that does nothing to diminish their relevance to deep science in which the primary advances come from discovery and explanations of what is possible, rather than discovery of laws or regularities, as explained in Chapter 2 of Sloman 1978. (This contradicts Popper's widely, but mistakenly, accepted claim that all scientific statements should be empirically falsifiable.)
For several decades I have been collecting examples of ancient mathematical
discoveries in geometry and topology, and some previously apparently unnoticed
discoveries (like the triangle stretch discovery below) that seem to be linked
to achievements of pre-verbal humans and other intelligent animals, e.g. when
using perceived spatial structures and relationships to select and control
actions. How such abilities, not specified in the genome, can depend on and be
related to more general and abstract features of the genome is the topic of
another document: The meta-configured genome (based on work with Jackie
Chappell):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html
In contrast, most published research on human and non-human mathematical intelligence focuses on numerical competences, ignoring the possibility that those competences were built on older, more fundamental abilities to reason about spatial or spatio-temporal structures and relationships, including features of structural changes from which ideas about one-to-one correspondences (bijections) emerged, later followed by spatial reasoning abilities used in discoveries and theories concerned with numbers of various types. Centuries (or millennia) later, those were supplemented by use of much more formal mechanisms known to and understood by professional mathematicians, but irrelevant to most of those who regularly use number concepts.
The developments I am trying to understand must have occurred long before the advent of modern formal methods in mathematics. Numerical competences are not the main topic here, however. This paper is about older, deeper competences.
Any theory of intelligence or design for intelligent machines that does not take account of these ancient spatial reasoning components of natural intelligence will fail to explain commonplace abilities of humans and other intelligent animals, and computer models based on it will fail to replicate important aspects of natural intelligence, including processes of mathematical discovery, even if they produce results that look superficially similar, achieved by very different mechanisms.
"If ABC is a triangle of any shape, and the vertex A is moved in a straight line away from the opposite side BC, what happens to the size of the angle at A?"
Everyone I talked to (adult academics of various sorts) seemed to find it obvious that the angle at A must steadily decrease in size as A moves further from the opposite side -- no matter how far it has moved and how small the angle already is.
I use this question, and the general form of the responses obtained, to motivate questions about the nature of the perception and reasoning mechanisms that allow such a discovery to be made and to be seen to express a necessary truth (provided that the line on which A moves passes between B and C, i.e. crosses the opposite side of the triangle). This raises interesting questions about the cognitive machinery required to grasp such a necessary truth, including questions discussed by Immanuel Kant in Kant(1781).
Insofar as people notice that the angle necessarily grows smaller as the distance from the opposite side increases, and do so without having done any measurements or physical experiments or collected statistics (except perhaps implicitly and unwittingly over sub-ranges of shape change), they cannot be reporting an empirical generalisation.
There's an interesting discussion by Francesco Beccuti of what happens in the limit, in Beccuti(2018).
Instead they are somehow conscious that there is a necessary connection between the change of length and the change of angle. Consciousness of such necessities is a characteristic feature of mathematical discoveries (as Kant pointed out). So any theory of consciousness that does not account for mathematical consciousness (including consciousness of mathematical necessities and impossibilities) must be inadequate as a general theory of consciousness: which applies to almost everything I have read about consciousness, except in Kant's work. (Mathematical theories of consciousness are easily found online, but the examples I have encountered do not explain, or even describe, mathematical consciousness!)
During a presentation to the Theoretical Computer Science group in my department on 29th Sep 2017 based on the material in the workshop web page, I mentioned these examples of reasoning about consequences of triangle deformations, including the conjecture that motion of a vertex away from the opposite side always caused the angle at the vertex to shrink in size. This was challenged by Auke Booij.
During a subsequent email discussion he pointed out my omission: I had not thought about some of the details, presented below, using diagrams depicting shape-changing triangles. Later it turned out that this problem had extraordinary mathematical complexity linked to a problem and a solution discovered by ancient mathematicians -- about which I first learnt from Diana Sofronieva after she heard me talk about this topic at Leeds University in November 2017.
This suggests to me that requirements for perception and reasoning about spatial structures in the physical environment led to the evolution of increasingly sophisticated topological and geometrical (mostly non-numerical) forms of information processing in biological organisms, eventually producing forms of perception and reasoning that could consider unperceived, but possible, spatial changes, with structural features that increased in sophistication over many generations, long before the knowledge acquired was organised by ancient mathematicians.
By the time the first humans existed the mechanisms that had evolved to meet a wide collection of practical perception and reasoning problems, including those presented in Sloman(2007-14), already had the power also to support hypothetical, theoretical reasoning about surprisingly complex changing structures and processes. I suggest that these laid the foundations for later discoveries presented in Euclid's Elements.
Why the stretched triangle example?
The example of reasoning about a stretched triangle is worth considering as one
among many windows into unobvious sophistication in those animal competences.
The initial observation about the effect of stretching a triangle seems so
obvious and simple that it is (at least to me) surprising how quickly it leads
into a sophisticated problem in Geometry, pointed out to me by Diana Sofronieva
(henceforth Diana).
I'll start with the observations of Auke Booij (who had previously helped me notice some unobvious complexities in other geometric problems). A separate document presents the analysis provided by Diana, who identified a connection with the problem of Apollonius, which I had not previously encountered.
What will happen if you start with a triangle, like the blue triangle ABC in the figure, and move the top vertex (corner), A, away from the opposite side (the base of the triangle) along a line going through the base? (Other traversal lines are considered below.) The red triangle in the figure, with a new vertex A', illustrates one of the possible new locations to which the triangle could be moved. Try to formulate an answer that is independent of the size, shape, and orientation of the triangle.
You probably find it obvious that the angle at the moving vertex will continually decrease in size as the vertex moves further from the base, the side opposite it in the triangle. How can one know that?
Most people asked the question about the figure on the left seem to find the answer obvious, saying that as the vertex A moves further from BC the angle at A must get smaller. It is not at all clear what form of reasoning they are using -- nor what their brains are doing.
Two mathematical continua
The people I have asked don't seem to be aware that in answering the question
they have identified two mathematical continua (the
continuum of locations of the moving vertex on the line through the triangle,
and the continuum of angle sizes for the vertex as it moves), and a systematic
(monotonic) relationship between them.
Each continuum is involved in the motion of the vertex, and each can be attended to separately. Everyone I have asked also finds it obvious that the two are rigidly connected: changes in the location cannot occur without changes in the angle size. This is not an explicit mapping between individual vertex locations and individual angle sizes, rather it is a rigid relationship between directions of change in the two: each direction of motion of the vertex is necessarily connected with one of the directions of change of size of the angle at the vertex. What sort of reasoning mechanism can make such a discovery so quickly will be discussed later in connection with a proposed new non-digital computing machine.
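The monotonic relationship can be illustrated (though certainly not explained, and not by the kind of non-metrical mechanism under discussion) with a small numerical check. The coordinates and the line of motion below are my own arbitrary choices:

```python
# Illustrative numerical check (not a model of the reasoning mechanism discussed here):
# as vertex A moves along a line that crosses the segment BC, moving away from BC,
# the angle at A keeps decreasing.
import math

def angle_at(A, B, C):
    """Angle at vertex A of triangle ABC, in radians."""
    v1 = (B[0] - A[0], B[1] - A[1])
    v2 = (C[0] - A[0], C[1] - A[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

B, C = (0.0, 0.0), (4.0, 0.0)
crossing_point = (1.5, 0.0)          # the moving vertex's line crosses BC here
direction = (0.3, 1.0)               # direction of motion, away from BC

angles = []
for t in [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]:
    A = (crossing_point[0] + t * direction[0], crossing_point[1] + t * direction[1])
    angles.append(angle_at(A, B, C))

# The sampled angles decrease monotonically as A moves further from BC.
assert all(a1 > a2 for a1, a2 in zip(angles, angles[1:]))
print([round(math.degrees(a), 2) for a in angles])
```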
I have so far not asked anyone whether returning the vertex to a previous location will necessarily restore the previous angle size. Anyone with knowledge of Euclidean geometry will find the answer obvious because the initial and restored triangles must be congruent, insofar as restoring the (relative) location will produce sides of the same lengths as the initial triangle. But that background knowledge is not required for answering the question about direction of change.
In general, however, two processes may have necessarily linked directions of change without having necessarily linked locations in the space of possibilities. (E.g. there could be "hysteresis" i.e. dependence of state on previous history. I'll leave that possibility for discussion another time.)
Moving on a different line
The situation changes if the direction of motion of the vertex changes so that
it moves on a line that does not intersect the base of the triangle (the side
opposite the moving vertex). This case was discussed briefly in the workshop web
page, leading up to the idea of the membrane mechanism. Below I'll discuss
alternative lines through A along which A can move, and ask what difference the
line makes to how the size of the angle at A varies. This apparently simple
question uncovered (with help from Auke, then Diana) a surprising "bag of worms",
extending requirements for the Super-Turing machine, for which ideas are under
development here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html
I claim, but will not argue here, that all of these examples are related to mechanisms that form part of everyday perception of spatial structures, processes, and affordances in humans (including infants) and other intelligent animals Sloman(2007-14). In particular, changes of relative size or angle as you move towards or away from something, or move one thing relative to another, can produce perceived changes in visible relationships between edges, corners and other features of objects. I argue in Sloman(2007-14) that this plays a far more important role in spatial intelligence than has been generally noticed, and if implemented in robots could simplify some of their control problems.
Evolution also produced mechanisms for envisaging or imagining such changes when they are not actually occurring, without doing any numerical calculations of values of point coordinates or angles or areas, as a typical current robot or other AI system would have to do. (Future AI systems may be more like humans as a result of this work.)
An example is looking at two blobs of colour and deciding whether one is larger than another by visualising one sliding over the other rather than doing measurements and calculations.
There have been attempts to give machines related competences, e.g. machines that have mechanisms for painting an image into a spatial data-structure. Those may be fairly close to some of the cases considered below, but without the meta-cognitive abilities to notice and reason about consequences of such spatial operations.
Traversal in a visual field can arise in various contexts, including monitoring of an object moved by the perceiver, perception of an object that is moving independently of the perceiver, monitoring of effects of changes of view direction (e.g. saccades), consideration of a possible, visually imagined motion, or motion of the perceiver.
In some cases the object seen to be moving is fixated, so that everything else moves in the visual field. In other cases an object moves without being fixated, so that it moves across the visual field. We'll ignore those differences here, though they are important for a complete account of visual cognition.
These features of spatial perception are important in previously unnoticed ways, for a complete account of mathematical cognition. Consideration of unnoticed details in many different geometrical examples can help to draw attention to unnoticed features of much broader classes of perception, reasoning, planning, and control of actions.
Questions:
What will happen to the appearance of the triangle in that case? In particular,
what will happen to the perceived angle at vertex A as the tilting causes A to
move away from you? Will the angle in your visual field increase in size, remain
the same, or decrease? How do you know?
The variety and prevalence of perceived changes involving two or more related changes of structures or relationships, when intelligent organisms perceive or produce changes in the environment (some of them produced by other intelligent organisms), do not explain how evolution produced these detection mechanisms, but they do explain why such mechanisms should be preserved by reproductive and other processes after they first become available, and why their absence in current AI spatial perception mechanisms is a very serious deficiency.
NOTE:
This problem about how angles change as one corner of a triangle moves is
loosely related to ways of reasoning about how the area of a triangle
changes as the triangle is deformed (the "area stretch" theorems) explored in
another document.
The new type of case is harder to reason about. In Figure 2 neither of the top angles is contained in the other. The new example requires a notion of comparison of size of two angles that is independent of orientation of the angles, and whether one angle can obviously be moved so as to be contained in another. In the workshop web page, finding a proof for the new configuration was left as an exercise for the reader, and as stimulation to investigate the differences between the two cases.
I turn now to an objection raised when I conjectured that in Figure 2 it is also the case that as the vertex moves up along the line intersecting the base of the triangle (extended here) the angle at the vertex constantly gets smaller.
If the vertex starts far enough to the right on the line of motion, e.g. at A', close to where the line of motion meets the extension of the base of the triangle, at location X in Figure 3, below, the angle at A' is smaller than the angle at A, and as A' moves closer to the intersection at X, the angle at A' must get smaller. (Why?)
Consider the red triangle whose shape is represented by A'BC, in Figure 3 above, and how it might be deformed into red triangle A"BC by moving the vertex at A' in a straight line towards A". At an intermediate stage the triangle's location is shown in blue as triangle ABC. (Compare the blue triangles in Figure 1 and Figure 2 above.) There are some "implicit" discontinuities in the trajectory depending on
Figure 4 shows how lines perpendicular to BC can divide up the sloping line. A mathematician may investigate whether one of those perpendiculars defines a location of maximum angle for the moving vertex, and then conclude that perpendiculars to BC do not help to identify a point at which the angle at A is maximal. However, this is not at all obvious. Is there any other way to identify a point at which the angle at A is largest?
Ingenious readers may be able to think of more subdivisions between possible types of location of the moving vertex, arising out of the relationships between items involved in the initial problem formulation. For example, another subdivision involves circles.
There are no circles visible in the earlier diagrams (before Figure 5), yet the Euclidean plane allows infinitely many circles to exist. In particular, there are infinitely many different circles passing through the two points B and C, the bottom vertices of triangle ABC in Figure 5 above. It turns out, surprisingly, that the answer to the question of where the angle size is maximal depends on circles!
Three examples are the blue circles shown. Each circle passes through points B and C, and because the centre of each circle must be the same distance from B and from C, the centres must all be on the perpendicular bisector of the line BC, shown in Figure 5 as a vertical black dashed line.
Exercise for the reader
Added 10 Nov 2017
Think about circles with centres at various locations on that vertical line, with various diameters, and see if you can divide the circles into different categories in terms of how they relate to points B and C and the line XA". For points consider whether they are outside the circle, or on the circle, or inside the circle. For an infinite line consider whether it goes through the interior of the circle, or merely touches the circle as a tangent, or has no point in common with the circle.

What brain mechanisms make it possible to engage in such thinking?
Could you design a robot that could have such thoughts -- visualising various configurations and noticing how certain changes inevitably produce other changes, and also discovering impossibilities, like the impossibility discussed as a "Transitional case" below?
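A conventional program can certainly compute the classification asked for in the exercise, even though computing it is not the same as seeing why the categories must exhaust the possibilities. Here is a hypothetical sketch, with arbitrarily chosen coordinates standing in for B, C and the line XA":

```python
# Hypothetical sketch (coordinates chosen arbitrarily): classify a circle with respect
# to a point (inside / on / outside) and with respect to an infinite line (the line
# misses the circle, touches it as a tangent, or goes through its interior).
import math

def classify_point(point, centre, radius, eps=1e-9):
    d = math.dist(point, centre)
    if abs(d - radius) < eps:
        return "the point is on the circle"
    return "the point is inside the circle" if d < radius else "the point is outside the circle"

def classify_line(p0, direction, centre, radius, eps=1e-9):
    """Classify the infinite line through p0 with the given direction vector."""
    dx, dy = direction
    px, py = centre[0] - p0[0], centre[1] - p0[1]
    dist = abs(px * dy - py * dx) / math.hypot(dx, dy)  # perpendicular distance to the line
    if abs(dist - radius) < eps:
        return "the line touches the circle as a tangent"
    if dist < radius:
        return "the line goes through the interior of the circle"
    return "the line has no point in common with the circle"

B, C = (0.0, 0.0), (4.0, 0.0)
line_point, line_dir = (8.0, 0.0), (-2.0, 1.0)       # a line like XA", meeting BC extended at X
for k in (-3.0, 0.0, 1.0, 4.0):                      # centres on the perpendicular bisector of BC
    centre, radius = (2.0, k), math.hypot(2.0, k)    # every such circle passes through B and C
    print("centre height", k, "->", classify_point(B, centre, radius),
          ";", classify_line(line_point, line_dir, centre, radius))
```

The interesting question, of course, is not whether the classification can be computed, but what kind of mechanism can see that these categories must exhaust the possibilities.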
Contemplating Fig 5 should convince you that there are infinitely many circles that pass through the points B and C and are entirely below the line from A' to A", and share no point with that line, as illustrated by the lowest blue circle in Figure 5. Likewise there are infinitely many circles through B and C that pass above the line A'A", cutting it in two different points, as illustrated by the highest blue circle.
However, as the diameter of a circle through B and C shrinks or expands, while the centre moves up or down the perpendicular bisector of B and C, there must be one, and only one, circle through points B and C that has line A'A" as a tangent, i.e. exactly one circle that touches the line at exactly one location above the line through the base of the triangle, indicated by the middle circle in Fig 5.
There is another, much larger, circle through points B and C, that touches the line A"X extended from below, far to the right of the points in the diagram. For details see the paper on the Apollonius construction.
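The existence and location of the tangent circles can be illustrated numerically (this is only an illustration for one arbitrarily chosen configuration, not the kind of reasoning that reveals the necessity): solving for centres on the perpendicular bisector of BC gives exactly two tangent circles, and the tangency point of the one touching the line above the base is where the angle subtended by BC is largest, as the inscribed-angle theorem leads one to expect.

```python
# Numerical illustration with arbitrarily chosen coordinates (not taken from the figures).
import math

B, C = (0.0, 0.0), (4.0, 0.0)
X, d = (8.0, 0.0), (-2.0, 1.0)        # the vertex moves along the line X + t*d,
                                      # which meets the extension of BC at X

def subtended_angle(P):
    """The angle BPC in radians."""
    v1 = (B[0] - P[0], B[1] - P[1])
    v2 = (C[0] - P[0], C[1] - P[1])
    cosang = (v1[0]*v2[0] + v1[1]*v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(cosang)

# Circles through B and C have centres (2, k) and radius sqrt(4 + k^2). The line is
# x + 2y - 8 = 0, so tangency requires |2k - 6| / sqrt(5) = sqrt(4 + k^2), i.e.
# k^2 + 24k - 16 = 0. The root below gives the circle touching the line above the
# base; the other root gives the much larger second tangent circle.
k = -12 + 4 * math.sqrt(10)
centre, radius = (2.0, k), math.hypot(2.0, k)

# The tangency point is the foot of the perpendicular from the centre to the line.
s = (8 - (centre[0] + 2 * centre[1])) / 5.0
tangency = (centre[0] + s, centre[1] + 2 * s)
assert abs(math.dist(tangency, centre) - radius) < 1e-9   # it really lies on the circle

# Sample many positions of the moving vertex on the line and compare subtended angles.
best_angle, best_t = max(
    (subtended_angle((X[0] + t * d[0], X[1] + t * d[1])), t)
    for t in (i / 100.0 for i in range(1, 1000)))
print("angle at the tangency point:", round(math.degrees(subtended_angle(tangency)), 2))
print("largest sampled angle:", round(math.degrees(best_angle), 2), "at t =", best_t)
# Both are close to 72 degrees, at t ~ 2.53, i.e. at the tangency point.
```

The algebra and the sampling only confirm the result for one configuration; they do not provide the kind of insight into why there must be exactly one such circle above the base that the discussion here and the Apollonius document are concerned with.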
But symbols on a Turing machine tape cannot be superimposed in the way that generates new geometrical sub-structures when two shapes are superimposed, e.g. a circle and a triangle, or a straight line and a triangle. A Turing machine does not allow two or more symbols to be superimposed: each has a part of the tape to itself.
Example: Circle and triangle problem
For example you can combine a circle and a triangle on a planar surface, and by
sliding them around you can produce different numbers of points common to the
triangle and the circle, including no points if they do not overlap, a maximum
of six points, and intermediate numbers. (Exercise for the reader: apart from 0
and 6 points common to the circle and triangle, what other numbers are possible,
and in what configurations?)
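To make the exercise concrete, here is a hedged sketch (the triangle and the circle placements are my own hypothetical choices) that simply counts the points a given circle has in common with the boundary of a given triangle; it does not, of course, answer the question of which counts are possible and why:

```python
# Count the points a circle has in common with the boundary of a triangle,
# for a few hand-picked placements of the circle.
import math

def segment_circle_points(P, Q, centre, r, eps=1e-9):
    """Points where segment PQ meets the circle; a tangency contributes one point."""
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    fx, fy = P[0] - centre[0], P[1] - centre[1]
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < -eps:
        return []
    disc = max(disc, 0.0)
    roots = {(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)}
    return [(P[0] + t * dx, P[1] + t * dy) for t in roots if -eps <= t <= 1 + eps]

def common_points(triangle, centre, r):
    points = []
    for i in range(3):
        for p in segment_circle_points(triangle[i], triangle[(i + 1) % 3], centre, r):
            if all(math.dist(p, q) > 1e-6 for q in points):  # don't count a shared vertex twice
                points.append(p)
    return len(points)

triangle = [(0.0, 0.0), (6.0, 0.0), (3.0, 5.0)]
for centre, r in [((20.0, 0.0), 1.0),   # far away from the triangle: 0 points
                  ((3.0, 2.0), 1.2),    # entirely inside the triangle: 0 points
                  ((3.0, -1.0), 1.0),   # touches one side from outside: 1 point
                  ((0.0, 0.0), 1.0),    # centred on a vertex: 2 points
                  ((3.0, 1.8), 1.9)]:   # crosses all three sides: 6 points
    print(centre, r, "->", common_points(triangle, centre, r), "common points")
```

Note that the tangential ('touching') cases are numerically delicate: they occur only for exactly-right placements, which is arguably one reason why randomly sampling configurations is a poor way to discover all the possibilities.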
Below, I'll try to show how exploration of the initial hypothesis, that the angle at a vertex always decreases steadily with distance from the opposite side of a triangle, can lead to unobvious counter-examples as a result of unanticipated consequences of interactions between parts of the diagram: changing some geometric relationships can cause surprising new relationships, or constraints, to emerge.
These examples help to extend the requirements specification
for a conjectured "Super-Turing membrane mechanism" for spatial reasoning,
explored in a parallel document:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html
I conjecture that biological evolution originally produced such a mechanism to support processing of visual information of mobile animals, including perception of changing affordances, as discussed in Sloman(2007-14), and from time to time added increasing complexity to the mechanisms, as the organisms became more sophisticated in their needs, physical capabilities, and uses of spatial information.
The changes included forms of meta-cognition, and meta-meta-cognition, including what could be called "experimental meta-cognition" namely consideration of how contents of cognition (percepts) would change under various conditions.
It is worth noting that humans born blind may have a-modal spatial reasoning mechanisms that originally evolved as parts of visual perception then visual reasoning, but became amodally accessible so that they could also be related to tactile, haptic and auditory spatial perception. Perhaps also vestibular perception of self-motion?
I think such a mechanism was used by ancient mathematicians (unwittingly of course). Parts of the mechanism that evolved long before humans are already present in human toddlers and other intelligent animals, but without the meta-cognitive additions available to adult humans, especially mature mathematicians, which allow the spatial reasoning processes to be attended to and recorded for later access, and which eventually facilitated teaching and learning processes in which one individual thinks about, refers to and comments on the thinking processes of another.
Understanding the nature of those ancient (mostly unconscious) mathematical competences, and their evolutionary and developmental histories is an important part of understanding what animal minds are and how they work (as I think Kant understood, though I shall not argue for that here).
It is also an important part of the project to replicate intelligence of humans and other animals in machines. It is not yet clear to me whether virtual machines implemented on digital computers can support implementation of this competence, or whether another kind of machine, tentatively labelled a "Super Turing Membrane machine" can do the job.
Note: Kenneth Craik
There is an extraordinarily perceptive (partly prophetic), though incomplete, discussion of how brains might represent and reason about geometric figures and their relationships in Craik (1943), written before digital computers were available, in Chapter 6, in the section headed "Abstraction and brain mechanisms".

Craik also anticipated some of the arguments in Sloman(2007-14) concerning the relative importance of partial orderings vs absolute measures for intelligent perception and action. (It's possible that I was influenced by having read Craik's book around 1967, before I encountered AI.)
Note: Answer to question about triangle and circle
The number of points common to a triangle and a circle can vary between 0 and 6. Some of them are 'touching' (tangent) points, others are crossing points. What sort of machine can discover that there are exactly 7 possibilities, that all of them can occur, and that some of the numbers can occur in more than one way?

Would inspecting lots of randomly generated pictures of a circle and a triangle be a good strategy? If not, why not?
Relevance to theories of consciousness
Any theory of intelligence, or theory of consciousness, that does not
address this ancient kind of mathematical intelligence or associated types of
consciousness of impossibility or necessity cannot be a
serious candidate for a general theory of human or animal consciousness,
including consciousness of pre-verbal
toddlers, nest-building birds, hunting mammals, octopuses,
cetaceans, orangutans, elephants, and ancient mathematicians!
Eventually I hope that the evidence collected in this and related documents will help us assemble new deep requirements for the information-processing mechanisms that explain those ancient discoveries and related aspects of natural intelligence, including perception of affordances, and reasoning about affordances.
Some of the material previously in the IJCAI-17 workshop web page(above) has
been moved out into a first draft (incomplete) attempt to specify features of
the information-processing machinery that evolution seems to have provided that
so far has not been replicated in AI and has not been studied in psychology or
neuroscience, which I have loosely labelled "The Super-Turing membrane machine",
with requirements and speculative partial designs collected in this incomplete
draft:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html
The remainder of this document focuses on a collection of increasingly complex spatial discoveries that can emerge from use of spatial imagination in idle reflection on possible distortions of a triangular shape. A slightly different problem, first drawn to my attention by Auke Booij, turns out to have a far more complex solution, found by Diana Sofronieva, and explained in a separate document. It raises issues about highly specialised cognitive abilities of ancient mathematicians, whereas this document is about mathematical features of commonplace, widely shared, but largely unnoticed, abilities.
Sloman and Chrisley(2003) showed how certain kinds of virtual machinery could explain the ineffability and incomparability of forms of consciousness in different individuals. Further work on the causal powers of virtual machine events and processes that, in an important sense, are not reducible to the physical mechanisms in which they are fully implemented is explained in Sloman (2013). A closely related viewpoint was presented in Maley and Piccinini (2013).
Gilbert Ryle's theory of consciousness as polymorphous is related to this, but I don't think he ever applied it to mathematical consciousness. However, the chapter on imagination in Ryle (1949) shows that he had come close to pre-inventing the idea of a virtual machine with causal powers.
A quick reading of https://www.wired.com/story/new-math-untangles-the-mysterious-nature-of-causality-consciousness/ suggests that Erik Hoel has produced some related ideas, without noticing what has been learnt about phenomena in engineered virtual machinery that are not based on the mathematical mechanisms of noise reduction.
Evolution is a deeply creative (blindly mathematically sophisticated) engineer, so far unmatched by human engineers in many respects, though overtaken in others, as explained in the theory of evolved construction kits Sloman(2017) (work in progress).
A new paper, added: 23 Jul 2020
Some new, more complex, ideas regarding biological evolution, individual
development, environmental changes, and the complex interactions between all
three used by genetic mechanisms in humans and other animals, are presented in
Aaron Sloman, (2020),
Varieties Of Evolved Forms Of Consciousness, Including Mathematical Consciousness,
Entropy, MDPI, 22(6), 615,
https://doi.org/10.3390/e22060615
Kenneth Craik, 1943, The Nature of Explanation, Cambridge University Press, London, New York,
Euclid and John Casey,
The First Six Books of the Elements of Euclid,
Project Gutenberg,
Salt Lake City, Apr, 2007,
http://www.gutenberg.org/ebooks/21076
Also see "The geometry applet"
http://aleph0.clarku.edu/~djoyce/java/elements/toc.html
(HTML and PDF)
Gallistel, C.R. & Matzel, L.D., 2012(Epub),
The neuroscience of learning: beyond the Hebbian synapse,
Annual Review of Psychology,
Vol 64, pp. 169--200,
https://doi.org/10.1146/annurev-psych-113011-143807
J. J. Gibson, The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, MA, 1979.
Seth G.N. Grant, 2010,
Computing behaviour in complex synapses: Synapse proteome complexity and the
evolution of behaviour and disease,
Biochemist 32, pp. 6-9,
http://www.biochemist.org/bio/default.htm?VOL=32&ISSUE=2
Immanuel Kant,
Critique of Pure Reason,
Macmillan, London, 1781. Translated
(1929) by Norman Kemp Smith.
Various online versions are also available now.
I. Lakatos, 1976, Proofs and Refutations, Cambridge University Press, Cambridge, UK,
Tom McClelland, (2017)
AI and affordances for mental action, in
Computing and Philosophy Symposium,
Proceedings of the AISB Annual Convention 2017
pp. 372-379.
April 2017.
http://wrap.warwick.ac.uk/87246
Corey Maley and Gualtiero Piccinini, (2013)
Get the Latest Upgrade: Functionalism 6.3.1,
in
Philosophia Scientiae, 17 (2) 2013, pp. 1--15,
Journal:
http://poincare.univ-nancy2.fr/PhilosophiaScientiae/
Philippe Rochat, 2001,
The Infant's World,
Harvard University Press,
Cambridge, MA,
Gilbert Ryle, 1949, The Concept of Mind, Hutchinson, London,
Erwin Schrödinger,
What is life?,
CUP, Cambridge, 1944.
Commented extracts available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/schrodinger-life.html
Aaron Sloman, Jackie Chappell and the CoSy PlayMate team, 2006, Orthogonal recombinable competences acquired by altricial species (Blankets, string, and plywood) School of Computer Science, University of Birmingham, Research Note COSY-DP-0601, http://www.cs.bham.ac.uk/research/projects/cogaff/misc/orthogonal-competences.html
Dana Scott, 2014,
Geometry without points.
(Video lecture,
23 June 2014, University of Edinburgh)
https://www.youtube.com/watch?v=sDGnE8eja5o
A. Sloman, 1962,
Knowing and Understanding: Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth
DPhil thesis, Oxford University (now online)
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-1962
A. Sloman, 1971, "Interactions between philosophy and AI: The role of
intuition and non-logical reasoning in intelligence", in
Proc 2nd IJCAI,
pp. 209--226, London. William Kaufmann. Reprinted in
Artificial Intelligence,
vol 2, 3-4, pp 209-225, 1971.
http://www.cs.bham.ac.uk/research/cogaff/62-80.html#1971-02
An expanded version was published as chapter 7 of Sloman 1978,
available here.
A. Sloman, 1978
The Computer Revolution in Philosophy,
Harvester Press (and Humanities Press), Hassocks, Sussex.
http://www.cs.bham.ac.uk/research/cogaff/62-80.html#crp
A. Sloman, 1984,
The structure of the space of possible minds,
in
The Mind and the Machine: philosophical aspects of Artificial Intelligence,
Ed. S. Torrance,
Ellis Horwood,
Chichester,
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#49a
A. Sloman, 1996,
Actual Possibilities, in
Principles of Knowledge Representation and Reasoning
(Proc. 5th Int. Conf on Knowledge Representation (KR '96)),
Eds. L.C. Aiello and S.C. Shapiro,
Morgan Kaufmann,
Boston, MA,
pp. 627--638,
http://www.cs.bham.ac.uk/research/cogaff/96-99.html#15
A. Sloman, 2002,
The irrelevance of Turing machines to AI, in
Computationalism: New Directions, Ed. M. Scheutz,
MIT Press,
Cambridge, MA,
pp. 87--127,
http://www.cs.bham.ac.uk/research/cogaff/00-02.html#77
Aaron Sloman and Ron Chrisley, 2003,
Virtual machines and consciousness,
Journal of Consciousness Studies, 10, 4-5, pp. 113--172,
http://www.cs.bham.ac.uk/research/projects/cogaff/03.html#200302
NOTE:
A detailed commentary (and tutorial) on this paper by Marcel Kvassay, comparing and
contrasting our ideas with the anti-reductionism of David Chalmers, was posted on
August 16, 2012: http://marcelkvassay.net/machines.php
A. Sloman (2007-2014),
Discussion Paper: Predicting Affordance Changes:
Steps towards knowledge-based visual servoing.
(Including some videos).
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changing-affordances.html
A. Sloman (2013, revised later),
Virtual Machine Functionalism (VMF)
(The only form of functionalism worth taking seriously
in Philosophy of Mind and theories of Consciousness),
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
Aaron Sloman, 2016,
Natural Vision and Mathematics: Seeing Impossibilities, in
Proceedings of Second Workshop on: Bridging the Gap between Human and Automated
Reasoning,
IJCAI 2016, pp.86--101, Eds. Ulrich Furbach and Claudia Schon,
July 9, New York,
http://ceur-ws.org/Vol-1651/
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-bridging-gap-2016.pdf
Aaron Sloman 2017,
"Construction kits for evolving life (Including evolving minds and mathematical
abilities.)" Technical report on ongoing long term project.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html
An earlier (slightly more polished) version, Construction kits for biological evolution, frozen during 2016, was published in a Springer Collection in 2017:
https://link.springer.com/chapter/10.1007%2F978-3-319-43669-2_14
in The Incomputable: Journeys Beyond the Turing Barrier
Eds: S. Barry Cooper and Mariya I. Soskova
https://link.springer.com/book/10.1007/978-3-319-43669-2
Arnold Trehub,
1991,
The Cognitive Brain,
MIT Press,
Cambridge, MA,
http://people.umass.edu/trehub/
Trettenbrein, Patrick C., 2016, The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?, Frontiers in Systems Neuroscience, Vol. 10, Article 88, http://doi.org/10.3389/fnsys.2016.00088
A. M. Turing, (1952) "The Chemical Basis Of Morphogenesis", Phil. Trans. Royal Soc. London B 237, pp. 37-72.
Note: A presentation of Turing's main ideas for non-mathematicians can be found in
Philip Ball, 2015, "Forging patterns and making waves from biology to geology: a commentary on Turing (1952) `The chemical basis of morphogenesis'",
http://dx.doi.org/10.1098/rstb.2014.0218
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham