Date: Mon, 3 Nov 1997 11:35:07 +0000
Reply-To: "PSYCHE Discussion Forum (Biological/Psychological emphasis)"
<[log in to unmask]>
Sender: "PSYCHE Discussion Forum (Biological/Psychological emphasis)"
<[log in to unmask]>
From: Aaron Sloman <[log in to unmask]>
Subject: Re: Volition, levels, causation, enslavement
I found myself resonating agreeably with some of the comments from
Walter Freeman <[log in to unmask]> though I would express
some details differently.
> Electrophysiological studies by GW
> Walter, Kornhuber, Deecke, Libet and others of humans engaged in
> self-paced voluntary acts reveal neural processes that precede awareness
> of an intended act by about half a second.
Later on you seem to talk of awareness as a high level dynamical control
state in a circular causal relationship to lower level neural states.
This seems correct to me.
In that case it's not obvious that awareness can have sufficiently
precise temporal bounds to stand in such a precise relation to neural
events, though of course a behavioural manifestation of awareness (like
pressing a button) can do so.
Suppose becoming aware of something is more like a football team coming
to dominate its opponents than like a ball crossing a goal line. Then
just as global relationships between teams are not necessarily embedded
in the same temporal topology as relations between physical events
involving the ball, so events involving "higher level" mental
states (control states) need not be embedded in the same fine-grained
time frame as the physical events that implement them -- making precise
temporal comparisons meaningless.
For a system which not only detects patterns in complex phenomena but
also applies that capability to itself, the process of becoming aware of
becoming aware may have temporal properties and relations that are even
more disconnected from time-scales of the physical/physiological
implementation details.
Compare a self-monitoring computer operating system which detects that
it is thrashing (spending more time paging and swapping etc. than doing
useful work, i.e. running user programs). It may first of all detect
that thrashing state and then start to take remedial action of various
kinds e.g. blocking new logins, changing time quanta for processes,
lowering priority of long running processes, taking more care over
memory allocation so as to optimise CPU usage, etc. In some cases this
may "restore normal functioning".
However, although the computer's fundamental operations may be so fast
that there are several hundred million every second, it will not
necessarily make any sense to ask, to the nearest hundred millionth of a
second, when the operating system initially detected the need for
remedial action, nor when it had fixed the problem.
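To make the operating system analogy concrete, here is a deliberately
toy sketch in Python of such a self-monitoring mechanism. Nothing here
resembles how any real operating system is coded; the class, the
thresholds and the named remedies are all invented for illustration.

    import collections

    class ThrashingMonitor:
        """Toy self-monitor: tracks the fraction of time recently spent
        paging/swapping and judges whether the system is thrashing."""

        def __init__(self, window=50, threshold=0.6):
            # Recent samples of the fraction of time spent paging.
            self.samples = collections.deque(maxlen=window)
            self.threshold = threshold

        def record(self, paging_fraction):
            self.samples.append(paging_fraction)

        def thrashing(self):
            # The relevant property is a moving average, which fluctuates:
            # there need be no single instant at which thrashing "begins".
            if not self.samples:
                return False
            return sum(self.samples) / len(self.samples) > self.threshold

        def remedial_actions(self):
            # The kinds of remedy mentioned above, named but not
            # implemented, since the point is the monitoring itself.
            return ["block new logins", "shrink time quanta",
                    "lower priority of long-running processes"]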
There may be several different issues here. For instance:
(a) the global state may change gradually (e.g. interactive
response delays reduce by degrees as control is regained)
(b) the important properties may be statistical averages which
    fluctuate so that thresholds are not crossed at a well defined
    instant (a toy demonstration follows this list)
(c) if a multi-component global state is not defined in terms of
necessary and sufficient conditions there may not be any well
defined thresholds anyway: the change may be qualitative and
structural rather than numerical.
(d) requirements for adequacy of response may be relative to the
environment -- e.g. someone who is distracted from his terminal
by a phone call may not care that the last command took 20
seconds instead of 0.2 seconds to complete.
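Points (a) and (b) can be made concrete with the hypothetical monitor
sketched above: a gradually rising, noisy paging load makes the moving
average cross and re-cross the threshold, so there is no well defined
instant at which "the problem was detected".

    import random

    random.seed(1)
    mon = ThrashingMonitor(window=20, threshold=0.6)
    state, crossings = False, []
    for t in range(300):
        # The load drifts upward gradually, with noise: points (a), (b).
        load = min(1.0, 0.3 + t / 400 + random.uniform(-0.2, 0.2))
        mon.record(load)
        if mon.thrashing() != state:
            state = mon.thrashing()
            crossings.append((t, state))
    # Typically several on/off flickers are recorded, not one clean onset.
    print(crossings)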
People who think that becoming conscious of something is an "all or
nothing" event that has a definite temporal location are probably
deceived by introspection. Introspection is just another biologically
based process of information acquisition, i.e. perception, and like all
forms of perception it can be misleading in many ways, quite apart from
the difficult conceptual problems arising out of reflexivity.
> In my book, "Societies of Brains" (1995),
Sounds like yet another book I ought to read...
> I make a case that *all*
> intentional acts are initiated by limbic dynamics without prior
> awareness; that after initiation only a fraction of them come to
> awareness; and that in Western cultures there has developed the illusion
> of control of action by awareness.
I suspect that what you are trying to say is correct, but how you
say it could be improved!
It's not so much
an illusion of control by awareness,
more
an illusion of awareness of control,
partly based in an illusion of conceptual clarity and precision where
none is present. (Compare the discussions of "free will" in Stan
Franklin's book Artificial Minds, and Dennett's Elbow Room.)
We are (normally) in control, but not necessarily aware of whatever it
takes to be in control. We just know that it's a normal situation. In
that sense I am as confident (in normal situations) of your being in
control of your actions and decisions as I am of my being in control.
You may partially lose control without being aware of it, even though
your friends find it obvious that you are being driven by infatuation or
jealousy which you deny, even to yourself. (Poets, novelists,
playwrights, etc. often make use of this *fact* about human minds.)
If you ask what it means to be in control of yourself, to be responsible
for your actions, it is very difficult to define this notion positively.
Defining types of reduction of control is easier.
If you are a normal person functioning normally in normal circumstances
then you are as responsible and in control as it is *reasonable* to
require anyone to be: your actions and your decisions flow from and are
in accordance with your desires, preferences, attitudes, standards,
personality, beliefs, etc. As many philosophers have asked: if all those
conditions are satisfied what *additional* type of freedom (or control)
could anyone want? (All the answers most often suggested are
unsatisfactory: they either include some sort of causal gap, which is
hardly any kind of control, or they postulate some unexplained additional
causal entity: a "self" or "soul" or whatever. It's not clear that any
additional entity besides your beliefs, desires, attitudes, preferences
etc. can leave YOU in control.)
However, when there is clearly something abnormal interfering with your
actions or your decisions, e.g. drug addiction, hypnotic effects, a
revolver held at your head, extreme emotional disturbance (e.g.
overwhelming grief or jealousy), intense conflict between desire and
duty, brain damage, powerful distorting influence from a deviant
sub-culture, etc. etc. then you are not in "normal" control.
You may still be in control in the sense that nobody is physically
forcing you to say things or move your limbs or fall off a cliff. But
that sort of control is not a desirable sort: there's "too much"
extraneous influence. Too much for what? One answer is: too much for
normal moral judgements to be applicable.
There are other more biologically oriented answers. E.g. something like
"too much external influence to allow normal biological optimisation to
be achieved". (That's not quite right: optimisation is too strong.)
There may be all sorts of intermediate states in which there is no well
defined answer to the question whether you are in control. That's
because, like so many of these concepts, being "in control" (like "being
conscious", "being intelligent", "intending an action", etc.) is a
*cluster concept* referring to a partially ill-defined cluster of
conditions.
The lack of definition shows up most clearly when you ask at what stage
a foetus becomes in control of its actions, or conscious, or whether
various kinds of drugged states, brain damage or deterioration, do or do
not involve consciousness, or which other animals are conscious. Or
which animals are in control of their actions.
Were the suicide bombers in control? Or were they, like their neurons,
functioning as parts of a larger system where the control mechanisms are
not easily detectable by looking at the components?
Are women in control when they intensely desire to have children despite
the very high risks to their health (and marital relationships) and
inconvenience involved in child-bearing? Or are they actually being
controlled by powerful genetic mechanisms without which the rational
decisions of individuals would prevent survival of gene pools in
intelligent animals able to reflect on the consequences of their
actions?
>
> In other words, Descartes got it backwards. The problem of
> self-destructive action, what Donald Davidson (1980) called
> "incontinence", from the physiological point of view is as follows.
> How can awareness of past actions, some remote, some as recent as half a
> second, shape an impending next action?
Compare: how can the momentum of a wave shape an impending future
process (conveying a surfer to the beach)? Looking at the motion of
individual particles in the wave will not give you a good answer: it's
the wrong level to see the important pattern, even though at that level
there is a causal story that is complete. The story leaves out more
global processes that are far more important to the surfer.
The brain is its own surfer! (Or rather the organism is.)
> More specifically, what is the
> neural form of the dynamical state of awareness, such that it can serve
> as an "order parameter" to "enslave" (Hermann Haken, "Synergetics",
> 1983) the limbic populations of neurons, guiding them into the next step
> of their self-determined trajectory?
I believe that the logical conclusion of this line of thinking, which I
think is fundamentally on the right track, is that there will not be any
such thing as "*the* neural form of the dynamical state of awareness".
I.e. there could be so many different ways in which this high level
control state (awareness) can be implemented (even in brains which are
largely similar at a low level of detail) that no *neural* description
can capture the important high level features of the system. (A very
crude analogy: a complete conjunctive description of the position,
velocity, acceleration, etc. of every atom involved in a tornado will
not capture what it is that interests us about tornadoes. A better
analogy: no complete electronic description of a chess playing computer
will capture what it means to say that it is a chess machine of grand
master strength.)
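The point about implementation-level descriptions can be made with a
trivial programming example (invented here, and of course nothing like
a brain): two procedures that share no low-level operations at all, yet
satisfy exactly the same high level description.

    # Two implementations of the same high level capability: deciding
    # whether a number is even. One uses modular arithmetic, the other
    # inspects a decimal string; they share no low-level operations.

    def even_by_arithmetic(n):
        return n % 2 == 0

    def even_by_string(n):
        return str(n)[-1] in "02468"

    # The high level description ("detects evenness") is true of both,
    # but no description at the level of the operations performed
    # captures what they have in common.
    assert all(even_by_arithmetic(n) == even_by_string(n)
               for n in range(100))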
> Neurobiologists will be helped in their formulations of experiments by
> neurophilosophers who might wish to comment on this hypothesis.
I've never called myself a neurophilosopher before, but I think the
hypothesis looks approximately right, subject to some detailed niggles,
begun above and continued below.
My first close encounter with the "scientific consciousness community"
in large numbers was at the Elsinore conference in August. There I got
the distinct impression that there were two classes of people present:
(a) those who felt that all the philosophical issues would ultimately be
resolved when we understand the kinds of architectures which support
multi-level causality, including circular causation (simultaneous
"downwards" and "upwards" causation of the kind discussed by Hermann
Haken, whose work on "synergetics" I had never previously encountered,
and by a number of others).
(b) those who found the ideas incomprehensible or irrelevant.
Those in category (a) mostly seem to agree that we still have a long way
to go in grasping the variety of possible (information processing)
architectures that can occur in the physical, biological, and social
worlds, and the many kinds and levels of "order parameters" and
"enslaving" that are waiting to be explored and understood.
Solving the problems requires coming up with a new, much deeper,
analysis of the concepts of "causation", "supervenience" and
"implementation" than we currently have. (I've put some first
draft thoughts on this in a long, but incomplete paper
http://www.cs.bham.ac.uk/~axs/misc/supervenience )
Many of those in category (b) think they can solve the philosophical
problem of consciousness with a *quick fix*. E.g. bring in quantum
collapse, or some other "key" idea and all will become clear. These
ideas are all far too shallow to account for the deep conceptual tangles
involved in thinking about systems capable of bootstrapping themselves
into semantic states.
An example: consider a system with a complex array of sensory inputs,
which uses some adaptive mechanism to develop ways of categorizing its
inputs. It doesn't make much difference what sort of adaptive mechanism:
it could be some sort of neural net, or a large collection of weighted
condition-action rules. (For present purposes the difference doesn't
matter.)
Here are some reasons why it may be impossible usefully to define the
classes it "learns" in terms of patterns of sensory stimulation. The
induced partitioning of the hyperspace of possible inputs may depend on
the "training history" in a complex fashion, making it impossible to give
necessary and sufficient conditions for a particular pattern of
stimulation to be in category X, or Y or Z.
(a) e.g. because some of the patterns may lead to oscillation or some
other kind of non-convergence
(b) e.g. because the categorization changes over time according to the
    sequence of inputs. (One of the panelists at Elsinore gave a nice
    example of hysteresis in our visual categorisations; a toy sketch of
    such hysteresis follows this list.)
(c) e.g. because what's important about the categorizations is not just
the sensory patterns but how they relate to the current context,
including the needs of the organism and the adaptive mechanism,
which may itself be in the process of adapting: i.e. the
categorizations involve not only the patterns in the sensory input but
also their immediate functional relevance to the rest of the system.
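Point (b), hysteresis, can be illustrated with a minimal invented
classifier whose verdict depends on its input history rather than on
the current input alone (a toy, not a model of any neural mechanism):

    class HystereticClassifier:
        """Toy classifier with hysteresis: the decision boundary depends
        on the current category, hence on the history of inputs."""

        def __init__(self, lo=0.3, hi=0.7):
            self.lo, self.hi = lo, hi
            self.category = 'X'

        def classify(self, x):
            # Asymmetric thresholds: having entered category 'Y' the
            # system is reluctant to leave it, and vice versa.
            if self.category == 'X' and x > self.hi:
                self.category = 'Y'
            elif self.category == 'Y' and x < self.lo:
                self.category = 'X'
            return self.category

    c = HystereticClassifier()
    print([c.classify(x) for x in (0.5, 0.8, 0.5, 0.2, 0.5)])
    # ['X', 'Y', 'Y', 'X', 'X'] -- the same input, 0.5, is categorised
    # differently on each encounter, so no necessary and sufficient
    # conditions on the current input alone define the categories.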
Now suppose that such a system attempts to categorise its own states
using a similar adaptive mechanism, whose inputs are signals recording
the states and processes in various *internal* sub-mechanisms, and whose
outputs play some sort of role in a higher order control process. (Many
people have suggested reasons why various kinds of self-monitoring may
be biologically useful, or useful from an engineer's viewpoint.)
Suppose its self-categorisations are also constantly influenced by a
mixture of training history and current functional requirements (i.e.
changing purposes for which categorisations are being made). These high
level self-descriptions and self-modifications may in turn cause
changes in the original lower-level mechanisms classifying external
inputs.
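Continuing the same invented example, a crude sketch of such a second
order layer: it monitors an internal signal of the lower level
classifier (how often its category has recently flipped), categorises
that signal as "stable" or "unstable", and feeds its verdict back down
as a control signal. Every name and number here is hypothetical.

    class SelfMonitor:
        """Second-order layer: categorises the *internal* behaviour of
        a lower-level classifier and retunes it accordingly."""

        def __init__(self, classifier):
            self.classifier = classifier
            self.last = classifier.category
            self.flips = 0  # internal signal: recent category changes

        def observe(self):
            if self.classifier.category != self.last:
                self.flips += 1
                self.last = self.classifier.category
            if self.flips > 3:  # crude self-categorisation: "unstable"
                # Downward influence: widen the hysteresis band, making
                # the lower level categorisation more conservative.
                self.classifier.lo -= 0.05
                self.classifier.hi += 0.05
                self.flips = 0

    c = HystereticClassifier()
    m = SelfMonitor(c)
    for x in (0.8, 0.2, 0.8, 0.2, 0.8, 0.2):
        c.classify(x)
        m.observe()
    print(c.lo, c.hi)  # the band has widened (roughly 0.25 and 0.75)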
Further, consider that there are "communities" of agents whose brains
are implemented using such systems, and who endeavour to adapt
cooperatively in such a way as to be able to benefit from various kinds
of division of labour (surgeons, philosophers, street sweepers, etc.) in
less than idyllic conditions which are themselves not static (lean
years, periods of plenty, etc.).
This kind of multi-level feedback in multi-layered adaptive systems may
produce complete chaos. Alternatively it may produce many levels of
relatively stable (though constantly changing) patterns of activity with
horizontal and vertical circular causation.
(I've not yet read what Haken has written about synergetics, though from
his talk in Elsinore I would regard all this as a natural development if
he has not already said it all.)
From this viewpoint, trying to *define* or precisely characterise a
high level state in terms of its neural implementation would be quite
misguided. Just as the particular water molecules making up a breaking
wave and the patterns of motion of those individual molecules keep
changing throughout the enduring history of a particular wave, so too
the precise neural implementation of high level control states may keep
changing and may differ from one individual to another, even though
there is something constant about how that control state relates to
other states at a similar level, and to biological and social functions
of the complete organism.
Still, there may be coarse-grained regularities in the ways that the
neural architecture (at different stages in the development of an
individual) constrains the kinds of higher level "virtual machines" that
can be supported.
Much brain research seems to focus on what's common to human and other
brains. But there may be major differences waiting to be understood. (A
human can sometimes be in control of thought processes, sometimes not,
e.g. when infatuated or grieving or jealous. Can a rat ever be in
control of its thought processes -- does it have any thought processes
to control or lose control of: if not, what's missing? Is it neural or
environmental or what? What about bonobos?)
I suspect a lot of cross-species work, as well as work on differences
between infants and toddlers and schoolkids and professors and people
with various brain disorders and people in different cultures, will be
needed before we can understand what we are talking about. (At Elsinore
Doug Watt repeatedly stressed the need for adequate theories to
incorporate "disorders of consciousness" and development of mental
states, processes and capabilities in childhood, and not just the
familiar phenomena that come to mind when you are having a philosophical
discussion on consciousness.)
I've not yet commented on the Baars & McGovern article on global
workspace theory because I think a very detailed analysis is required to
show why, although the theory may at first look like a contribution to
the above discussion of architectures, it is in fact still too wedded to
a collection of tempting metaphors which will have to be jettisoned and
replaced by more detailed architectural and functional specifications.
A full analysis of consciousness must account for a whole range of cases
including the evolution of a wide variety of forms of consciousness, a
host of stages through which a human mind bootstraps itself to normal
consciousness, many kinds of disorders of consciousness, etc.
I think we are going to need a new language for doing this. That will
require another kind of bootstrapping, in which a variety of metaphors
will be discarded in favour of concepts and techniques that are
applicable to control engineering (in the broadest sense).
Aaron
==
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs )
School of Computer Science, The University of Birmingham, B15 2TT, UK
EMAIL [log in to unmask]
Phone: +44-121-414-4775 (Sec 3711) Fax: +44-121-414-4281