School of Computer Science THE UNIVERSITY OF BIRMINGHAM CoSy project

Online and Offline Creativity
Aaron Sloman
http://www.cs.bham.ac.uk/~axs

Installed: 21 Jan 2009
Last updated: 10 Feb 2009; 14 Jan 2018 (re-formatted)

Introduction and background

I recently learnt that this very interesting 2005 paper by Karen Adolph is available online:
http://www.psych.nyu.edu/adolph/PDFs/MinnSymp2005.pdf
This is the full reference:
Karen Adolph,
Learning to Learn in the Development of Action, in
Action As An Organizer of Learning and Development: Volume 33 in
the Minnesota Symposium on Child Psychology Series, 2005,
Edited by John J. Rieser, Jeffrey J. Lockman, Charles A. Nelson

I had never read it, though I heard her talk about the work at the CoSy Project workshop in Sept 2007. Workshop website:
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/latest.html

She addresses the question of whether great variability in some aspect of the environment can *prevent* associative learning from happening:

     "Variable contents may help discourage simple associative pairings."

I think that is somewhat misleading.

It's not a case of discouraging, but simply of not enabling the appropriate frequencies to be encountered. If variable contents are encountered often enough, some associative learning will happen, and that's one of the mechanisms by which fluency in complex new areas (including language learning, mathematics, music, athletic performances, etc.) develops. For example, if children learning arithmetic memorise various simple cases, such as all the sums and products of pairs of single-digit numbers, and other frequently encountered patterns, that can help to make them more fluent in creative problem solving in mathematics. A more subtle kind of pattern recognition is learning indicators of potential errors and of techniques that don't work well, which can help learners avoid false trails in mathematical problem solving, though it could also block some fruitful lines of exploration.
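
A very rough illustration in Python (my own construction, not anything in Adolph's paper): memorised 'number facts' can be thought of as a lookup table, so that frequently met cases are recalled instead of being recomputed:

    # Memorised single-digit 'number facts', treated as a simple lookup table.
    facts = {(a, b): a * b for a in range(10) for b in range(10)}

    def product(a, b):
        # Recall the result if it is a memorised fact, otherwise work it out.
        return facts.get((a, b), a * b)

    print(product(7, 8))    # recalled from the table: 56
    print(product(23, 17))  # not in the table, so it has to be worked out: 391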

There is a more complex story about this in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html

One reason why pattern learning is of limited value in arithmetic is that every multiplication of two numbers is different from every other, and the first time you need to know what this product is

   5 x 13
you have to work it out. If you encounter that problem often enough you can 'cache' the result. Some people only need to do it once to remember the result forever. But attempting to use easily recognized patterns in pairs of numbers in order to avoid the effort of multiplication in future will certainly lead to error. There cannot be a stage at which you have encountered enough multiplications to be able to recognize every new example, or even more than a tiny subset of all possible new examples, since there are infinitely many pairs of numbers that can be multiplied, and as they get bigger the process of working out the result gets longer, except for a tiny minority of very special cases, e.g. multiplying a number by 1 or 0.
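
The 'caching' could be sketched like this (again my own illustrative Python, not part of the original discussion): the result is stored the first time it is worked out, so that later encounters need only recall; but no such store can ever cover more than a finite fraction of the infinitely many possible pairs:

    # Results worked out so far, kept so they never have to be worked out again.
    cache = {}

    def cached_product(a, b):
        if (a, b) not in cache:
            cache[(a, b)] = a * b   # first encounter: the result has to be worked out
        return cache[(a, b)]        # later encounters: the result is simply recalled

    print(cached_product(5, 13))  # worked out: 65
    print(cached_product(5, 13))  # recalled: 65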

Further, as any mathematician should know, attempting to use simple patterns in digits to distinguish prime numbers from non-prime numbers will fail, as will attempting to use pattern recognition techniques to decide whether one number is a factor of another, except in simple cases, such as recognising multiples of 2, 3, 4, 5 or 9.
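
To illustrate the asymmetry (another sketch of mine, with the obvious caveat that real pattern learners are not hand-written rules): a few divisibility tests really can be done from digit patterns, whereas recognising primes requires genuine computation, here plain trial division:

    def divisible_by_5(n):
        return str(n)[-1] in "05"                      # last-digit pattern

    def divisible_by_9(n):
        return sum(int(d) for d in str(n)) % 9 == 0    # digit-sum pattern

    def is_prime(n):
        # No simple digit pattern does this: some real computation is unavoidable.
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    print(divisible_by_5(1235), divisible_by_9(729), is_prime(97))  # True True True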

Humans transfer vast amounts of knowledge from temporary results of perception or reasoning to long term stores, in some cases only if the results are repeated. Presumably that can also happen to a subset of the arboreal locomotion problems of birds. (There is also some one-shot learning, e.g. listening to a lecture on a mathematical proof and remembering how the proof goes, or Mozart hearing a performance and then writing it down).

But when the combinatorics imply that the total set of possibilities for acting or reasoning in a type of situation is very large, or even unbounded, many examples that are encountered for the first time will require the use of reasoning rather than pattern matching to work out the meaning (which includes numerical calculation in some cases, and working out an appropriate action in others, e.g. avoiding a new obstacle while running, or describing a complex object seen for the first time).

For example, it is possible for two or more convex polygons (triangles, quadrilaterals, etc.) to be drawn (or imagined) on the same flat surface, with or without overlaps. There's an infinite variety of possible configurations. What mechanism allows you to recognize a new structure of that kind the first time you see it, provided that it is not too complex? If it is very complex, being sure that all the polygons are convex could be difficult.
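
One small part of the task, checking that a single polygon is convex, is easy to state. Here is a minimal sketch of mine, assuming the polygon is given as an ordered list of vertices of a non-self-intersecting boundary: all the turns around the boundary must be in the same direction.

    def is_convex(vertices):
        # Convex iff the cross products of successive edge vectors all have the same sign.
        n = len(vertices)
        signs = set()
        for i in range(n):
            (x0, y0) = vertices[i]
            (x1, y1) = vertices[(i + 1) % n]
            (x2, y2) = vertices[(i + 2) % n]
            cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
            if cross:
                signs.add(cross > 0)
        return len(signs) <= 1

    print(is_convex([(0, 0), (4, 0), (4, 3), (0, 3)]))   # a rectangle: True
    print(is_convex([(0, 0), (4, 0), (1, 1), (0, 3)]))   # has a reflex vertex: False

Of course this says nothing about how the whole configuration of several, possibly overlapping, polygons is recognised and described the first time it is seen; that is the harder question.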

In some cases a collection of lines that appears to include a non-convex polygon can be re-analysed in terms of overlapping convex polygons. For instance, the figure on the left, below, may appear at first to contain two crescent-like non-convex polygons, but it can also be analysed as composed of two convex but partly overlapping polygons, in this case black versions of the two octagons indicated in red and blue on the right.

[Figure: left, the black outline that seems to contain two crescent-like non-convex polygons; right, the same outline analysed as two partly overlapping convex octagons, shown in red and blue.]

So instead of saying that variety 'prevents' learning, we should say that the learning is prevented by what does not happen: the relevant cases are not encountered often enough for the appropriate frequencies to be learnt.

===

An important distinction: online/offline planning or intelligence

Reading Adolph's paper reminded me of an important distinction that needs to be made explicit.

It's the distinction between dealing with novel situations 'online' (the word she uses) and dealing with them 'offline', e.g. during planning of possible actions, reasoning about what would have happened if, making predictions about consequences of something observed, creating explanations of observed evidence.

Adolph's examples all (or nearly all?) involve dealing 'online' with novelty, and she emphasises that explicitly.

This means that the performance of a complex action proceeds incrementally: each sub-step is actually performed, and then a subsequent sub-step is selected from the possibilities that then become available (sometimes from a continuous range of possibilities rather than a discrete set).

This does not require the information processing system to be able to represent multiple branching sets of future states and actions. There is only one set of branches at any time, the set of possible next things to do -- where the agent is already physically poised to select and do one of them.

The situations we were envisaging testing were computationally more sophisticated because they required envisaging choices in possible *future* situations, where the choices could lead to new choices.
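
A toy contrast, in illustrative Python of my own devising (nothing like the actual mechanisms), between choosing only among the moves available *now* and representing branching future states before acting at all:

    from collections import deque

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": ["G"], "G": []}

    def online_step(state, prefer):
        # Online: a single set of branches, the next things that can be done from here.
        options = graph[state]
        return min(options, key=prefer) if options else None

    def offline_plan(start, goal):
        # Anticipatory: breadth-first search over possible future states before acting.
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])

    print(online_step("A", prefer=lambda s: ord(s)))  # one locally chosen move: B
    print(offline_plan("A", "G"))                     # a whole anticipated route: ['A', 'B', 'D', 'G']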

Perhaps 'anticipatory' could be contrasted with 'online' in this context.

I don't think I have ever read, or written, or said anything very clear about the difference between online creativity and anticipatory creativity, though I point to the difference in the video of the child pushing a broom, which I think you have seen me use:
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/sloman/vid/
In that example we can distinguish the two sorts of creativity.

Online creativity involves dealing with situations where the broom handle is caught between rails, or when the broom has been pushed into a skirting board that restricts its motion, and maybe when the broom is being steered down a corridor for the first time.

Anticipatory creativity is demonstrated when, in advance of a doorway leading off to the right, the toddler starts moving the broom handle to the left so that it can then be pushed forward into the room when he has reached the doorway. That's only one-step anticipation, whereas I think the orangutans have to solve more complex problems.

In part they can do the more complex anticipation by ignoring details that cannot be discarded in the online case, e.g. the precise positions of limbs, the precise contents of the visual field of view, the precise momentum produced by previous movements, etc.

Making the problem more abstract by ignoring details increases the sophistication of the mechanisms and forms of representation required (the ability both to derive abstractions from detailed percepts and detailed sets of motor signals, and also to use the abstractions in producing detailed behaviours), but that greater sophistication results in much reduced computational load -- there's much less information to process in selecting the actions.
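
A crude sketch of that trade-off (names and numbers invented by me for illustration only): the detailed online state is rich and continuous, the abstract state used for anticipation is a small discrete description derived from it, and detailed motor signals are produced only when an abstract choice is finally acted on:

    # Detailed 'online' state: precise continuous quantities.
    detailed_state = {"hand_x": 0.4173, "elbow_angle": 97.3, "momentum": 0.82}

    def abstract(state):
        # Derive a coarse description from the detailed percept/motor data.
        return ("hand_near_handle" if state["hand_x"] < 0.5 else "hand_far",
                "arm_bent" if state["elbow_angle"] < 120 else "arm_straight")

    def refine(abstract_action):
        # Turn an abstract choice back into (a stand-in for) detailed motor signals.
        return {"move_left": {"shoulder": -5.0}, "push_forward": {"shoulder": +8.0}}[abstract_action]

    plan = ["move_left", "push_forward"]     # planned over a handful of abstract states
    print(abstract(detailed_state))          # ('hand_near_handle', 'arm_bent')
    print([refine(a) for a in plan])         # detailed signals produced only when acting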

It seems to require a large extension of the brain to process that kind of much-reduced information! (Including frontal lobes?)

Is what I have just written familiar from any of the animal behaviour literature? I don't recall ever having read it in the context of AI/Robotics, but it is so obvious that someone must have noted it.

Two sorts of creativity

Links with AI and Cognitive Science viewpoints

To be added.


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham