School of Computer Science THE UNIVERSITY OF BIRMINGHAM

Yet Another Singularity of Intelligence
SOCC: The Singularity of Cognitive Catch-up

(This one may already have arrived.)

Aaron Sloman


Installed: 19 Apr 2010
Last updated: 19 Apr 2010; 7 Aug 2010; 11 Jun 2011
7 Aug 2010: Added section on a possible escape from the singularity.
16 Feb 2018: Reformatted

Various Singularities Predicted

Several people (e.g. Ray Kurzweil) have discussed singularities involving the development of machines that are more intelligent than humans (often ignoring the obscurities of the notion 'more intelligent' and the possibility that intelligence is of so many kinds that there is no sensible way of assigning them a linear order).

The Singularity of Cognitive Catch-up (SOCC)

This is a first draft document reporting on a different sort of singularity concerning human intelligence, of a kind that does not depend on the development of intelligent machines, but which may force us to rely more and more on intelligent machines. It could be named "the Singularity of Cognitive Catch-up (SOCC)".

This singularity may already be upon us.

Human evolution produced mechanisms that make it unnecessary for each individual to repeat the learning done by predecessors. Instead, once something has been learnt, it is often possible for others to learn it in a different way, far more quickly, guided by those who have learnt it previously.

Those mechanisms are not well understood. I am trying, with a biologist colleague, to understand some of their general features. See

   Jackie Chappell and Aaron Sloman,
   Natural and artificial meta-configured altricial information-processing systems,
   International Journal of Unconventional Computing, 3, 3, 2007, pp. 211--239.

Until the recent past, this has meant that individuals could advance knowledge by first learning what had previously been achieved and then building on it by making significant advances: acquiring new concepts, new ontologies, new forms of representation, new theories, new skills, and new facts about the world and its history, and producing new machines, toys, shelters, games, etc.

I call this process (standing on the shoulders of predecessors in order to see ahead) 'Cognitive Catch-up'.

It is a process that, until recently, has allowed each generation to cognitively overtake its predecessors.

It depends on three products of evolution:

  1. Cognitive architectures that allow individuals

    (a) to become aware of things they have learnt (concepts, ontologies, theories, explanations, particular facts about what exists or has happened or will happen, including spatio-temporally remote, inaccessible facts) and

    (b) to find ways of representing things they have learnt other than simply using them.

  2. Forms of communication, verbal and non-verbal, including forms of instruction, teaching and training, and types of toys and games, that allow what has been learnt to be communicated to others, either explicitly or implicitly through accelerated re-discovery.

  3. Cognitive architectures including mechanisms that allow individuals to learn things from others (both explicitly and implicitly) far more rapidly than if they had to discover them for themselves.

The Singularity of Cognitive Catch-up (SOCC)

This continual cognitive catch-up and overtaking of one generation by another must break down when so much has already been learnt, discovered, invented, etc., that every significant advance that is possible in principle requires more prior learning from predecessors than can be accommodated in a lifetime -- either because of lack of time in a normal life span, or because of individual memory limits.

It is possible that this singularity can be postponed if rates of learning from predecessors can be increased, or if the learning processes can be shared, or delegated, e.g. to machines.
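One way to make this concrete (a purely illustrative formalisation; the symbols K,
r and L are my own notation, not taken from anything above): let K(t) be the amount
of prior learning needed to reach the frontier of a field at time t, let r be the
rate at which an individual can absorb it, and let L be the length of a working
life. Catch-up remains possible only while

    \frac{K(t)}{r} < L

Better teaching raises r, and sharing or delegating the learning across n
individuals or machines in effect replaces r by nr; but if K(t) grows without bound
while r and L stay bounded, the inequality must eventually fail, so such measures
can only postpone the crossing point.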

But there must be a capacity limit -- even if we consider *only* mathematical knowledge, since there are infinitely many incompressible mathematical truths to be discovered.
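A standard counting argument from algorithmic information theory (not part of the
original argument here, but it supports the claim) shows why incompressible truths
cannot run out: for each length n there are 2^n binary strings, but at most

    2^0 + 2^1 + \cdots + 2^{n-1} = 2^n - 1

descriptions shorter than n bits, so for every n at least one string of length n
has no shorter description, i.e. is incompressible. True statements recording the
incompressibility of such strings therefore exist at every scale.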

Even if superhuman machines are eventually developed (machines already have some superhuman competences, though with serious limitations), those machines will also have a limit if they are restricted to what can be accommodated on one planet.

A corollary

If we have reached the singularity (perhaps only in some fields of enquiry?) then we are doomed from now on (in those fields) to merely continually reinvent old ideas, possibly with new names, and slight variations in the arguments.

I see evidence of this happening in the fields that interest me: especially philosophy, cognitive science, AI, though there are sub-fields where some progress is still being made.

A Possible Escape From The Singularity

This idea came from a conversation with Helge Ritter about this singularity, when he visited a couple of weeks ago. The finality of the singularity as described here could be avoided if it is true that, for every level of description of any kind of domain, as the details become richer and more complex there will inevitably be a new, higher level at which structures can be observed, analysed and thought about, and where the complexity is hugely reduced despite new gains in explanatory power. (Or something like that.)

This is not an entirely new idea: the notion that there is always a new kind of simplicity to be found is (if I remember correctly) a major theme of two books by Jack Cohen and Ian Stewart:

Jack Cohen and Ian Stewart, The Collapse of Chaos: Discovering Simplicity in a Complex World,
Penguin Books, 1994.

Ian Stewart and Jack Cohen, Figments of Reality: The Evolution of the Curious Mind,
Cambridge University Press, 1997.

Some responses to the above

In May 2011, following a flurry of online discussion by philosophers reacting to Hawking's claim that "Philosophy is Dead", I posted a link to this page and a summary. This produced a small number of responses, including one from Anne Jaap Jacobson, a philosopher at the University of Houston. With her permission, I've added here some of our subsequent interchange (with a few minor changes). I also received some comments from others, which I will later attempt to summarise.


From: Aaron Sloman Sat May 28 12:29:18 BST 2011
To: Anne Jacobson
Subject: Re: Hawking and philosophy -- and the singularity of cognitive catchup

Thanks for your message. This is the sort of thoughtful comment I was hoping for. I
have looked at your web page, and it seems that you are one of the unusually broad
philosophers (Professor of Philosophy and Electrical and Computer Engineering -- a
great combination! -- did you do both at Oxford?) who are exceptions to the
excessively narrow education and vision that afflicts most philosophers I encounter
-- face to face or in print.

I would have liked to keep "philosophy" in my title (I started as a lecturer in
philosophy in 1962, and my title was changed to "Artificial Intelligence and
Cognitive Science" 22 years later), but the powers that be, first at Sussex
University, then here in Birmingham, decided on "Cognitive Science and Artificial
Intelligence".

You should make your papers available online. I would like to read these two, if
possible:

    Comments on E. Machery's Précis, Behavioral and Brain Sciences,

    "What Should a Theory of Vision Look Like?", Philosophical
    Psychology, 2008, 21 (5), pp. 641-655.

I've been collecting requirements for a theory of vision for many years, in part
because of its relevance to philosophy of mathematics, the subject of my Oxford
DPhil, and the topic that got me into AI and fuels my interest in robotics.

Examples:

    http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane/
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk35
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk88
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk59

> I usually tell students that when they want to challenge what someone
> has said, they should try to see why what the person said might be true.

Yes, that's very important. Popper's version, which I recommend to students (and
colleagues), is that before attacking a thesis you should try to present it in as
strong a form as possible (which may be better than the proponent's presentation) so
as to avoid rebuttals claiming misunderstanding etc.

[I don't always succeed in following my own advice... It's quite hard sometimes when
a theory is very bad but very popular.]

>  There are obvious advantages to doing so, and I think it is possible to read in
> Hawking's comment a strong challenge to philosophy.  It involves reading the part
> about not keeping up with the sciences as meaning philosophy is not keeping pace with
> the sciences in providing transformative new views.

Yes.

The fuss about Hawking echoes an earlier fuss between philosophy and physics: Arthur
Eddington's popular book on the nature of the physical world started with the claim
that there are two tables: the table we think we see, which is solid, etc., and the
table that's really there, which is mostly empty space, etc., and which, he
suggested, philosophers ignore. Another distinguished physicist, James Jeans, also
challenged philosophers. Susan Stebbing wrote an extended critique ("Philosophy and
the Physicists", 1937). I forget most of the details of the debate, though!

(I recently stumbled across an overview discussion of some of the issues online here:

    http://articles.adsabs.harvard.edu/full/seri/QJRAS/0035//0000249.000.html
    A. H. Batten, "A Most Rare Vision - Eddington's Thinking on the Relation
        Between Science and Religion",
    Quarterly Journal of the Royal Astronomical Society, Vol. 35, No. 3,
    p. 249, September 1994.)

One of the examples on which I've made several failed attempts to engage with other
philosophers is what we have learnt (implicitly, but mostly failed to attend to)
about virtual machinery and causation in the last half century.  Is that something
you have thought about?

I shall be making another attempt at a philosophy of science conference in Nancy in
July. I think that we are now in a position to defend Darwin against his critics who
argued that natural selection can only produce changes in physical form and
behaviour, and cannot account for the existence of minds and consciousness, etc.
(I.e. we can close/bridge Huxley's "Explanatory gap" -- frequently
reinvented/rediscovered and relabelled.)

A draft of that paper is now here:
    http://www.cs.bham.ac.uk/research/projects/cogaff/11.html#1103
    Evolution of mind as a feat of computer systems engineering: Lessons
    from decades of development of self-monitoring virtual machinery.

> To some extent this connects with Aaron Sloman's concern that we might be coming to
> the end of transformative new views.  But I find Sloman's remarks puzzling on two
> counts:
>
> 1.  I would have thought there is a fair amount of thought in the sciences about how
> to get one generation's knowledge absorbed by the next, and this has led to a
> distribution of scientific knowledge within and over research groups.

Many attempts have been made. Whether they have succeeded is another matter. One of
my summary observations is: interdisciplinarity has to happen in minds not buildings.

(Perhaps that should be "not only in buildings".)

Perhaps you'll recognise what I am complaining about. Maybe you were more successful
than most in your university.

Sometimes there is true interdisciplinarity in a new research group, but usually
achieved at the cost of a new kind of narrowness. I see this in groups concerned with
so-called "embodied cognition" for example. E.g. most of them seem to focus entirely
on the sorts of information processing involved in dynamic online interaction with
the environment, ignoring most of the cognition involved in science, engineering,
architectural design, use of complex machines and buildings, etc.

They don't get anywhere close to social interaction based on meta-semantic
competences, including dealing with referential opacity and related problems. (A few
try, but progress is very slow.)

> In addition to groups, conferences have become important ways to pool knowledge.  It
> is possible that in fact knowledge acquisition has always been more of a group
> endeavor than our Western individualism has allowed.

I don't know that there's anything 'western' about this. The reason I mentioned
printing in the message to which you responded is that before modern communications
printed papers (and letter writing) allowed geographically dispersed leading thinkers
to share ideas and criticise one another. Kant's criticisms of Hume illustrate this,
don't they?

But unless you are claiming that somehow research groups can produce a kind of
collective understanding leading to collective major new insights and theories, the
use of groups will not overcome limits to what individuals can absorb in a lifetime.
Group memories (e.g. in libraries) often become non-functional, in my experience. The
use of things like Google can help to counter this, but I don't know if that sort of
thing merely postpones the time when there are no new, deep, broad ideas being
developed -- even if lots more specific facts are being discovered.

Going back to education: since around 2004 I have been involved in Europe-wide
attempts to advance robotics by bringing together ideas from different disciplines
about how to assemble different functions in a working robot. It has been extremely
difficult to find post-docs and PhD students whose education prepares them for the
kind of work that is required. The educational system in the UK and Europe certainly
does not address the need (even when paying lip service to it). Maybe things are
different in the USA?  (I'd be very surprised.)

[A presentation attempting to characterise the need and making some suggestions about
how to address it is here:
    http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk92
Prepared for http://www.computingatschool.org.uk/ ]

A dreadful recent development that makes things worse (at least recent in the UK,
over the last two or three decades) is the enormous pressure on young academics to
get grants and to publish in prestigious journals, almost immediately after they
start their first jobs. This is only possible with a very narrowly focused, highly
concentrated effort. In contrast, at Sussex University in the 1960s nobody cared
whether I got grants or not, and I published only when I thought I had something
written that was worth publishing. (I had my first grant 13 years after I started
as a lecturer at Hull in 1962.)

So, in those days, people like me could continue to broaden their education at
post-doctoral level in ways that would now be fatal to the career prospects of young
academics. (I think a biologist who has been collaborating with me on theoretical
issues has harmed her career as a result: partly because it is very hard to get
grants for research of our kind.)

> 2.  On the site linked to, Sloman says "
>
> If we have reached the singularity (perhaps only in some fields of enquiry?) then we
> are doomed from now on (in those fields) to merely continually reinvent old ideas,
> possibly with new names, and slight variations in the arguments.   I see evidence of
> this happening in the fields that interest me: especially philosophy, cognitive
> science, AI, though there are sub-fields where some progress is still being made. "
>
> I find this surprising because over the last 10-15 years, cognitive science, and
> particularly recent cognitive neuroscience, has produced a transformation of our view
> of human cognition that is quite significant.

I think there have been massive advances in neuroscience. I spend quite a lot of time
with the psychologists and neuroscientists in Birmingham. Unfortunately the attempts
to link the real advances regarding neural mechanisms to issues in cognition strike
me as mostly very shallow and based on very narrow views of what the problems are
that need to be solved. E.g. I keep meeting people who think that the function of
vision is to recognise objects in the environment, ignoring most of the things they
use vision for in everyday life.

>    It has in effect been done in groups widely spread out.   Taken together, it
> arguably surpasses Kant, Freud and other (supposedly) great theorists of human
> cognition.

Yes: collectively we now know much more. But there are, for example, deep things in
Kant's ideas about the nature of mathematical knowledge that are completely beyond
the ken of most people in developmental psychology and neuroscience who are
interested in mathematical cognition.

Annette Karmiloff-Smith's 1992 book Beyond Modularity makes some important
steps in the right direction. Unfortunately I only read it recently despite knowing
of its existence for some time. I've started writing a very personal, still very
disorganised, review of it here, if you are interested:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/beyond-modularity.html

(I never manage to finish anything. So my web pages just grow and are reorganised
from time to time.)


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham