School of Computer Science
(DRAFT: Liable to change)

Misty Window Mathematics

Visual experiences of a bus passenger
and ancient mathematical competences:
a challenge to current theories about brains and minds,
and to current forms of computation

Abstract

A type of visual experience is described, involving several different scenes superimposed, with objects moving in different directions. Implications for brain mechanisms are discussed, especially the forms of representation that might be capable of implementing such experiences. These experiences pose strong challenges for theories of natural and machine vision, since they do not seem to be explicable by any standard theory of visual perception, and they suggest the need for a new mathematical specification of the structures and processes involved.

The bus-window experience

I suspect I am not the only person to have had the following sort of experience. I was in a sideways-facing seat in a bus at night with dust/condensation on the windows, looking out through the windows at a dimly lit scene outside. As the bus started moving I could see the following scenes superimposed:

3D: The people and other objects *outside* the bus, and their movements, with optical flow from that scene generally going from right to left.
3D: The *reflections* of people and their movements inside the bus, including my own reflection whose movements I could control. Direction of optical flow of reflected objects depended on whether they were moving or stationary relative to the bus.
3D: The motion of the bus itself through the external scene, observed from inside (probably also making use of proprioceptive and inertial sensors -- the semicircular canals -- detecting my acceleration, as well as the sensed velocity of window texture (dust, condensation, scratches, stickers, etc.) across the external scene).
2D: Patterns of dust/condensation on the windows, rigidly moving with the bus and fixed in my field of view as I stared out through an approximately fixed location on the window surface.

I described this experience during a discussion session at a conference on vision in London in 1981, though I can't now find any online record of that discussion. It was not included in the conference proceedings (Braddick & Sleigh, 1983).

All of those different, more or less changing, perceived configurations grow in parallel out of one pair of retinal sensory arrays (partly aided by vestibular and other bodily sensors detecting pressure, inertia, etc.).

On the basis of experience with one eye shut or covered, I suspect the above processes would also work with one eye: i.e. stereo is not essential. I also suspect there are connections with the task of the visual system of a bird flying through branches and foliage to its nest, or to some fruit or insect prey in a tree, or the visual requirements of squirrels and other animals moving rapidly through tree-tops. A sample video demonstrating some aspects of the bus-window scenario is available here:
https://photos.app.goo.gl/tcTtEXD6wSayGuPn9 (This works only in browsers that can handle mp4 format. If the video is not displayed immediately, click on the window.)

The bus window example and the video illustrate the fact that by combining a wide variety of relatively imprecise measures, including measures of change and rates of change, our brains can derive complex interacting percepts involving 3D structures and their motions. (A more precise description of the achievement is needed, to act as a target for system builders.)
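To make that target slightly more concrete, here is a minimal, purely illustrative Python sketch (not a model of any brain mechanism, and not tied to any particular vision library): it takes coarse, noisy local flow estimates from a single 2D sensor array and groups them into candidate superimposed layers, such as window texture, the external scene, and reflections, using only qualitative direction categories. All function names, direction bins and thresholds below are invented for illustration.

    # Illustrative sketch only: group coarse local flow estimates into candidate
    # superimposed "layers" (window texture, outside scene, reflections) using
    # qualitative direction categories rather than precise 3D measurements.
    # All names and thresholds are invented; this is not a model of brain function.

    import math
    from collections import defaultdict

    def direction_category(dx, dy, speed_threshold=0.2):
        """Map a local flow vector to a coarse qualitative category."""
        speed = math.hypot(dx, dy)
        if speed < speed_threshold:
            return "static"          # e.g. dust fixed relative to the viewer
        angle = math.degrees(math.atan2(dy, dx)) % 360
        # Eight coarse direction bins suffice for qualitative grouping.
        bins = ["right", "up-right", "up", "up-left",
                "left", "down-left", "down", "down-right"]
        return bins[int(((angle + 22.5) % 360) // 45)]

    def group_flow_field(flow_samples):
        """flow_samples: list of ((x, y), (dx, dy)) local flow estimates.
        Returns a mapping from qualitative category to image locations,
        i.e. candidate superimposed layers."""
        layers = defaultdict(list)
        for (x, y), (dx, dy) in flow_samples:
            layers[direction_category(dx, dy)].append((x, y))
        return dict(layers)

    # Toy example: dust (no flow), outside scene (right-to-left flow),
    # and a reflected passenger moving upward.
    samples = [((10, 5), (0.0, 0.05)), ((11, 6), (-1.2, 0.1)),
               ((30, 8), (-1.1, 0.0)), ((40, 9), (0.1, 0.9))]
    print(group_flow_field(samples))

The point of the sketch is only that qualitative grouping of imprecise 2D measures can begin to separate the superimposed bus-window layers without any precise pointwise 3D reconstruction.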

I suspect that current AI vision systems intended to achieve 3D scene perception by using triangulation algorithms and laser mechanisms, identifying directions and distances of a large but rapidly changing subset of surface points, would fail dismally in the bus scenario. Because they collect large amounts of very precise but constantly changing pointwise 3D information, such systems make it hard to work out what is changing and what is preserved across changes in complex scenes while the perceiver moves. (This needs a mathematical argument.)

Conjecture:

Use of 2D information projected to a common surface (or sensor array) gives up the high 3D point precision of Kinect-like systems, but facilitates far more useful complex grouping across space and time.

Walking through a botanical centre full of unfamiliar plants with constantly changing visibility of surfaces at different distances would be another test scenario. (E.g. https://www.birminghambotanicalgardens.org.uk/ now closed, alas.)

Does my admittedly vague description of uses of constantly changing, imprecise (partially ordered) collections of information point to some well-known mathematical structure, which I have not encountered, that answers Steve's question?

A crazy(?) idea: mobile robot experiments might use a large vertical mirror with lots of dust on its surface, so that the robot views a reflected cluttered scene through which it moves, using a mixture of directly visible and reflected structures.

Implications

After a recent discussion in a theoretical computer science seminar, a colleague posted some reflections regarding where real numbers come from in the experience of mathematicians. He wrote:

"Sensor inputs may look like rationals, but I guess for the numerical analysis it would be a blunder to treat them as exact rationals."

The rest of this document expands my response to his question. It suggests that the physical functions and mathematical properties of some biological sensor values are far more complex, and are used in far more sophisticated ways, than most researchers imagine, and that this may be related to ancient abilities to do mathematical reasoning about spatial structures and processes -- abilities that are currently missing from AI systems and unexplained by neuroscience. This is closely related to Immanuel Kant's claims about mathematical knowledge, widely, but mistakenly, believed to have been refuted over a century ago by the work of Einstein and Eddington. (Compare the criticisms of Kant in Hempel (1945).)

In his Critique of Pure Reason (1781) https://archive.org/details/immanuelkantscri032379mbp, Kant claimed that important kinds of mathematical knowledge:

(a) are not empirical (i.e. they are not derived from, and are not subject to refutation by, sensory experience), but
(b) are also not innate, since a new-born infant does not already have the knowledge, and
(c) involve necessary, not contingent, truth, e.g. because it is impossible (spatially impossible, not logically impossible) for situations to exist that refute them.

I suspect important properties of spatial sensor mechanisms that underpin those kinds of ancient mathematical discovery are also relevant to unreflective intelligent uses of spatial information by many other species, e.g. squirrels, apes and elephants, and also to the spatial intelligence of pre-verbal human toddlers, illustrated in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html

As far as I know these (still half-baked) ideas have never been developed in philosophy of mathematics, in theories of biological vision, or in neuroscience. At present I can only present some incomplete suggestions about what may be included in some future testable theory of spatial cognition, in intelligent animals and robots, and especially in ancient mathematicians.

Making all this more precise may require new kinds of mathematics and new forms of biologically inspired computation. I shall give a few pointers and illustrative examples, which suggest the need for forms of computation using networks of linked partial orderings.
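As a very rough indication of what "networks of linked partial orderings" might mean computationally, here is a small illustrative Python sketch. It reflects my assumption about one possible reading of the phrase, not a specification of the required mechanisms: scene fragments are nodes, qualitative relations such as nearer-than or faster-flow-than are recorded only between fragments that have actually been compared, further relations are derived by transitivity, and incomparable pairs are simply left undecided. The fragment names and relations below are invented.

    # Illustrative sketch only: a "network of linked partial orderings" over scene
    # fragments, storing qualitative relations (e.g. nearer-than) only between
    # fragments that have actually been compared, and deriving further relations
    # by transitivity.  Names and relations are invented for illustration.

    from collections import defaultdict

    class PartialOrderNet:
        def __init__(self):
            # relation name -> node -> set of nodes known to come "later" in that ordering
            self.edges = defaultdict(lambda: defaultdict(set))

        def assert_less(self, relation, a, b):
            """Record that a precedes b under the given relation, e.g. a is nearer than b."""
            self.edges[relation][a].add(b)

        def holds(self, relation, a, b):
            """True if 'a precedes b' follows from recorded comparisons by transitivity."""
            seen, frontier = set(), [a]
            while frontier:
                node = frontier.pop()
                for succ in self.edges[relation][node]:
                    if succ == b:
                        return True
                    if succ not in seen:
                        seen.add(succ)
                        frontier.append(succ)
            return False   # unknown or incomparable, not necessarily false

    net = PartialOrderNet()
    net.assert_less("nearer", "window_dust", "reflected_passenger")
    net.assert_less("nearer", "reflected_passenger", "lamp_post_outside")
    net.assert_less("faster_flow", "window_dust", "lamp_post_outside")  # a separate, linked ordering

    print(net.holds("nearer", "window_dust", "lamp_post_outside"))   # True, by transitivity
    print(net.holds("nearer", "lamp_post_outside", "window_dust"))   # False: not derivable

The design choice worth noting is that nothing in such a network requires global numerical coordinates: only locally established comparisons and whatever follows from chaining them.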

A popular, but mistaken (or incomplete), view of biological sensors

A fairly common (but misleading) theory is that individual biological sensors produce numerical values with very low precision and reliability, compensated for by use of complex neural networks to extract reliable, high-precision numerical values from large numbers of unreliable, low-precision neural sensor outputs, e.g. William Calvin, How Brains Think: Evolving Intelligence, Then and Now (1996). https://en.wikipedia.org/wiki/William_H._Calvin

There is a very different, more subtle, but still underspecified view, which I'll try to explain.

Many years ago I learnt from a colleague, Mike Ward, who worked on designing an artificial robot skin to replicate properties of (e.g.) the skin on human fingers, that although biological sensors are poor at measuring exact values, they can be very good at detecting the direction of change across time or space: e.g. whether a pressure or other input at a particular location is increasing or decreasing over time, or whether inputs to a collection of spatially adjacent sensors increase or decrease across the surface at a particular time, using abilities to compare changes across space rather than time.

A suitably connected array of such sensors could tell that something is sliding across the surface of the skin in a particular direction, or that skin is sliding across the surface of an external object -- using additional sensor and/or motor information, e.g. proprioceptive sensors in joints, or motor signals sent TO joint controllers. E.g. interpreting sensed changes in roughness at a fingertip may require using motor control information about how the finger is being moved across the surface.
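Here is a minimal sketch of that idea, assuming a one-dimensional row of pressure sensors that report only whether their input is rising, falling, or roughly steady (the thresholds and the scenario are invented for illustration): tracking where the rising region lies at successive moments indicates the direction in which something is sliding across the skin, even though no sensor ever reports an accurate absolute pressure.

    # Illustrative sketch: a 1D row of pressure sensors that report only the
    # direction of change of their input over time ("rising", "falling", "steady").
    # Tracking where the rising region is at successive moments indicates the
    # direction in which something slides across the skin, without any sensor
    # ever reporting an accurate absolute pressure value.
    # Thresholds and the scenario are invented for illustration.

    def change_signs(previous, current, threshold=0.05):
        """Qualitative temporal change at each sensor location."""
        signs = []
        for p, c in zip(previous, current):
            if c - p > threshold:
                signs.append("rising")
            elif p - c > threshold:
                signs.append("falling")
            else:
                signs.append("steady")
        return signs

    def slide_direction(previous, current):
        """Infer sliding direction from where pressure is rising vs falling."""
        signs = change_signs(previous, current)
        rising = [i for i, s in enumerate(signs) if s == "rising"]
        falling = [i for i, s in enumerate(signs) if s == "falling"]
        if not rising or not falling:
            return "no clear slide"
        if sum(rising) / len(rising) > sum(falling) / len(falling):
            return "sliding towards higher sensor indices"
        return "sliding towards lower sensor indices"

    # Toy readings: a pressure bump moves from sensors 2-3 to sensors 3-4.
    t0 = [0.0, 0.1, 0.9, 0.8, 0.1, 0.0]
    t1 = [0.0, 0.0, 0.2, 0.9, 0.8, 0.1]
    print(slide_direction(t0, t1))   # sliding towards higher sensor indices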

Detecting speed as well as direction of motion requires more complex collaboration between sensors and effectors, etc. These comments are related to some of the ideas in James Gibson's books, Gibson (1966) and Gibson (1979), though I think there are phenomena Gibson did not notice, including detection that something is possible (though not occurring), or impossible, or will necessarily have certain consequences if it occurs. Note: these are not probabilities: possibility, impossibility and necessity don't have degrees.

What I think is much more surprising, and generally ignored, is a closely related but more complex collection of modes of cooperation across visual sensors, producing complex collections of information about spatial structures and processes changing over space (e.g. faster motion in one location than in another) and over time (e.g. texture flow increasing at a particular location).

I think there are complex mixtures of such capabilities that, as far as I know, nobody understands at present.
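The toy fragment below merely illustrates the kind of qualitative information involved, not the unknown mechanisms: each image region is assumed to deliver only a coarse flow magnitude at two successive moments, from which relations such as "faster flow in this region than in that one" and "flow increasing at this region" are extracted -- relations that could then be linked into the kind of partial-order network sketched above. The region names, values and comparison margin are invented.

    # Toy illustration (not a proposed mechanism): derive qualitative relations
    # "faster flow in region A than region B" and "flow increasing at region A"
    # from coarse per-region flow magnitudes at two successive moments.
    # Region names, values and the margin are invented for illustration.

    def qualitative_flow_relations(flow_t0, flow_t1, margin=0.1):
        relations = []
        regions = list(flow_t1)
        # Spatial comparisons at the later moment.
        for a in regions:
            for b in regions:
                if a != b and flow_t1[a] > flow_t1[b] + margin:
                    relations.append(("faster_flow", a, b))
        # Temporal comparisons per region.
        for a in regions:
            if flow_t1[a] > flow_t0[a] + margin:
                relations.append(("flow_increasing", a))
            elif flow_t0[a] > flow_t1[a] + margin:
                relations.append(("flow_decreasing", a))
        return relations

    flow_earlier = {"window_dust": 0.0, "outside_scene": 0.8, "reflection": 0.3}
    flow_later   = {"window_dust": 0.0, "outside_scene": 1.2, "reflection": 0.3}
    for r in qualitative_flow_relations(flow_earlier, flow_later):
        print(r)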

---------


I suspect there is nothing in current neuroscience that explains the bus window
phenomenon.

(Does Stuart Hameroff's theory about the functions of sub-neural microtubules
provide any relevant mechanisms? See his
recorded talk at the Oxford models of consciousness conference:
https://www.youtube.com/channel/UCWgIDgfzRDp-PmQvMsYiNlg/videos
Roger Penrose had an invited talk at that conference, but his examples were
all very different from this, and I don't think he specified relevant
mechanisms.)

References

O.J. Braddick and A.C. Sleigh (Eds.), 1983, Physical and Biological Processing of Images, (Proceedings of an international symposium organised by The Rank Prize Funds, London, 1982.) Springer-Verlag.

Jordana Cepelewicz, 2016, How Does a Mathematician's Brain Differ from That of a Mere Mortal?, Scientific American Online, April 12, 2016,
https://www.scientificamerican.com/article/how-does-a-mathematician-s-brain-differ-from-that-of-a-mere-mortal/

Jackie Chappell and Aaron Sloman (2007a). Natural and artificial meta-configured altricial information-processing systems. International Journal of Unconventional Computing, 3(3), 211-239. http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#717

Jackie Chappell and Aaron Sloman, (2007b) Two ways of understanding causation: Humean and Kantian,
Contributions to WONAC: International Workshop on Natural and Artificial Cognition Pembroke College, Oxford, June 25-26, 2007, http://www.cs.bham.ac.uk/research/projects/cogaff/talks/wonac

Kenneth Craik, 1943, The Nature of Explanation, Cambridge University Press, London, New York
Craik drew attention to previously unnoticed problems about biological information processing in intelligent animals. For a draft incomplete discussion of his contribution, see
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kenneth-craik.html

Euclid and John Casey (2007) The First Six Books of the Elements of Euclid, Project Gutenberg, Salt Lake City, Third Edition, Revised and enlarged. Dublin: Hodges, Figgis, & Co., Grafton-St. London: Longmans, Green, & Co. 1885,
http://www.gutenberg.org/ebooks/21076

H. Gelernter, 1964, Realization of a geometry-theorem proving machine, reprinted in Computers and Thought, Eds. Edward A. Feigenbaum and Julian Feldman, McGraw-Hill, New York, pp. 134-152,
http://dl.acm.org/citation.cfm?id=216408.216418

Robert Geretschlager, 1995. Euclidean Constructions and the Geometry of Origami, Mathematics Magazine, 68, 5, pp. 357--371, Mathematical Association of America, http://www.jstor.org/stable/2690924

J. J. Gibson, 1966, The Senses Considered as Perceptual Systems, Houghton Mifflin, Boston, MA, USA.

J. J. Gibson, 1979 The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, MA, USA.

Carl G. Hempel, Geometry and Empirical Science, 1945, American Mathematical Monthly, Vol 52, Reprinted in Readings in Philosophical Analysis, eds. H. Feigl and W. Sellars, New York: Appleton-Century-Crofts, 1949,
http://www.ditext.com/hempel/geo.html

David Hilbert, 1899, The Foundations of Geometry, translated 1902 by E.J. Townsend from the 1899 German edition, available at Project Gutenberg, Salt Lake City, 2005, http://www.gutenberg.org/ebooks/17384

Immanuel Kant's Critique of Pure Reason (1781) has relevant ideas and questions, but he lacked our present understanding of information processing (which is still too limited). An online version is here:
https://archive.org/details/immanuelkantscri032379mbp

Imre Lakatos, Proofs and Refutations,
Cambridge University Press, 1976,

John McCarthy and Patrick J. Hayes, 1969, "Some philosophical problems from the standpoint of AI", Machine Intelligence 4, Eds. B. Meltzer and D. Michie, pp. 463--502, Edinburgh University Press,
http://www-formal.stanford.edu/jmc/mcchay69/mcchay69.html

David Mumford, 2016, Grammar isn't merely part of language, Online Blog,
http://www.dam.brown.edu/people/mumford/blog/2016/grammar.html

Tuck Newport, 2015, Brains and Computers: Amino Acids versus Transistors,
Kindle edition.
Discusses implications of von Neumann (1958).
https://www.amazon.com/dp/B00OQFN6LA

Jean Piaget, (1952). The Child's Conception of Number. London: Routledge & Kegan Paul.

Piaget, 1981, 1983. Jean Piaget's last two (closely related) books, written with collaborators, are relevant, though I don't think he had good explanatory theories:

Possibility and Necessity
Vol 1. The role of possibility in cognitive development (1981)
Vol 2. The role of necessity in cognitive development (1983)
University of Minnesota Press, Tr. by Helga Feider from French in 1987

(Like Kant, Piaget had deep observations but lacked an understanding of information processing mechanisms, required for explanatory theories.)

Erwin Schrödinger (1944) What is life? CUP, Cambridge,
I have an annotated version of part of this book here (also PDF):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/schrodinger-life.html

Dana Scott, 2014, Geometry without points. (Video lecture, 23 June 2014, University of Edinburgh)
https://www.youtube.com/watch?v=sDGnE8eja5o

Frege on the Foundation of Geometry in Intuition, Journal for the History of Analytical Philosophy, Vol 3, No 6, pp. 1-23,
https://jhaponline.org/jhap/issue/view/271

Siemann, J., & Petermann, F. (2018). Innate or Acquired? - Disentangling Number Sense and Early Number Competencies. Frontiers in psychology, 9, 571. doi:10.3389/fpsyg.2018.00571
https://www.ncbi.nlm.nih.gov/pubmed/29725316

Sloman, A. (1962). Knowing and Understanding: Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth (DPhil Thesis), Oxford University. (Transcribed version online.)
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1962

Aaron Sloman, 1965, "Necessary", "A Priori" and "Analytic", Analysis, Vol 26, No 1, pp. 12--16.
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1965-02

A. Sloman, 1971, "Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence", in Proc 2nd IJCAI, pp. 209--226, London. William Kaufmann. Reprinted in Artificial Intelligence, vol 2, 3-4, pp 209-225, 1971.
http://www.cs.bham.ac.uk/research/cogaff/62-80.html#1971-02
A slightly expanded version was published as chapter 7 of Sloman 1978, available here.

A. Sloman, 1978 The Computer Revolution in Philosophy,
Harvester Press (and Humanities Press), Hassocks, Sussex.
Free, partly revised, edition online:
http://www.cs.bham.ac.uk/research/cogaff/62-80.html#crp

A. Sloman, (1978b). What About Their Internal Languages? Commentary on three articles by Premack, D., Woodruff, G., by Griffin, D.R., and by Savage-Rumbaugh, E.S., Rumbaugh, D.R., Boysen, S. in BBS Journal 1978, 1 (4). Behavioral and Brain Sciences, 1(4), 515.
http://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1978-02

Aaron Sloman (2012-...), The Meta-Morphogenesis (Self-Informing Universe) Project (begun 2012, with several progress reports, but still work in progress).
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.pdf

"Some (possibly) new considerations regarding impossible objects" Aaron Sloman, 2015, ff.). (Including their significance for (a) mathematical cognition, (b) serious limitations of current AI vision systems, and (c) philosophy of mind, i.e. possible contents of consciousness).
The web page is based on a set of notes and examples prepared for an invited talk on vision at Bristol University, on 2nd Oct 2015, substantially extended at various times since then:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

Aaron Sloman, 2013--2018, Jane Austen's concept of information (Not Claude Shannon's)
Online technical report, University of Birmingham,
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.pdf

Aaron Sloman, 2016, Natural Vision and Mathematics: Seeing Impossibilities, in Proceedings of Second Workshop on: Bridging the Gap between Human and Automated Reasoning, IJCAI 2016, pp.86--101, Eds. Ulrich Furbach and Claudia Schon, July, 9, New York,
http://ceur-ws.org/Vol-1651/
http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-bridging-gap-2016.pdf

A. Sloman (with help from Jackie Chappell), 2017-8, The Meta-Configured Genome (unpublished)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html

A. Sloman, 2018a, A Super-Turing (Multi) Membrane Machine for Geometers Part 1
(Also for toddlers, and other intelligent animals)
PART 1: Philosophical and biological background
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-phil.html

A. Sloman, 2018b A Super-Turing (Multi) Membrane Machine for Geometers Part 2
(Also for toddlers, and other intelligent animals)
PART 2: Towards a specification for mechanisms
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/super-turing-geom.html

Aaron Sloman, 2018c,
Biologically Evolved Forms of Compositionality
Structural relations and constraints vs Statistical correlations and probabilities
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/compositionality.html (also PDF).
Expanded version of paper accepted for First Symposium on Compositional Structures (SYCO 1)
Sept 2018 School of Computer Science, University of Birmingham, UK
http://events.cs.bham.ac.uk/syco/1/

Trettenbrein, Patrick C., 2016, The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?, Frontiers in Systems Neuroscience, Vol 10, Article 88,
http://doi.org/10.3389/fnsys.2016.00088

A. M. Turing, (1950) Computing machinery and intelligence,
Mind, 59, pp. 433--460, 1950,
(reprinted in many collections, e.g. E.A. Feigenbaum and J. Feldman (eds)
Computers and Thought McGraw-Hill, New York, 1963, 11--35),
WARNING: some of the online and published copies of this paper have errors,
including claiming that computers will have 109 rather than 10^9 bits
of memory. Anyone who blindly copies that error cannot be trusted as a commentator.

A. M. Turing, (1952), 'The Chemical Basis Of Morphogenesis', in
Phil. Trans. R. Soc. London B 237, pp. 37--72.
(Also reprinted (with commentaries) in S. B. Cooper and J. van Leeuwen, Eds. (2013).)

A useful summary of Turing's 1952 paper for non-mathematicians is:
Philip Ball, 2015, Forging patterns and making waves from biology to geology: a commentary on Turing (1952) `The chemical basis of morphogenesis', Royal Society Philosophical Transactions B,
http://dx.doi.org/10.1098/rstb.2014.0218

John von Neumann, 1958, The Computer and the Brain (Silliman Memorial Lectures), Yale University Press. 3rd Edition, with Foreword by Ray Kurzweil. Originally published 1958.

Wikipedia contributors, 2018, Mathematics of paper folding Wikipedia, The Free Encyclopedia,
https://en.wikipedia.org/w/index.php?title=Mathematics_of_paper_folding&oldid=862366869


This work, and everything else on my website, is licensed under a Creative Commons Attribution 4.0 License.
If you use or comment on my ideas please include a URL if possible, so that readers can see the original, or the latest version.




Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham