The following message was posted on 30 May 2005 in response to messages from two other posters to the ISRE (International Society for Research on Emotions) discussion list.
I have quoted (with her permission) parts of a previous message written
by Louise Sundararajan
> Text from her message is shown in the format used for this line.

To: ISRE-L MAILING LIST
Subject: Re: [ISRE-L] slipping and sliding (and virtual machines)
From: Aaron Sloman
Date: 30 May 2005
Dear Isre-lites
I have been watching a lot of the discussion on the side-lines without
intervening because I know that usually the sorts of things I want to
say are found very hard to understand or to accept by most of the
people who study emotions.
That's because my comments are based on an approach to the science of
mind that attempts to integrate philosophy of mind, philosophy of
science, AI, computer science, software engineering, psychology,
neuroscience, and biology. (I am not expert in all these areas, but I
try to compensate for that by reading and talking to colleagues in other
departments.)
One consequence of that integration is that it leads to disagreements
with everyone, for each of the subfields is changed by the integration
and of course there's a subset of the community that cannot tolerate the
idea that new ideas from computing might be deeply relevant to
understanding how human and other animal minds work.
But I've decided to take the risk of mystifying/alienating most people
again, because recent postings, especially by Jaak and Louise, have come
so close to things I have been working on for a long time, especially
the philosopher's notion of 'supervenience' (introduced originally by
G.E.Moore about 100 years ago to describe the relation between ethical
and factual statements, and more recently extended to cover other
things, including the relation between mind and matter, as discussed by
Jaegwon Kim, referred to by Jaak).
I apologise for length: the topic is complex. Ignore this message if you
are not interested in how levels of analysis and explanation can be
related.
What I have to say reaches conclusions that sound similar to what Louise
wrote in response to Jaak's comments on supervenience:
> This hierarchical model of linkage is what I call foundationalism. Even
> granted that some levels of analysis are more basic or fundamental than
> others, linkage to the more basic level of analysis is not always
> warranted. For instance, the physics of particles is at a level of
> analysis more basic than biology, but not too many neuroscientists are
> convinced by Penrose, who resorts to quantum mechanics to explain
> consciousness. In my opinion, the argument for the "basicness" of the
> linkage is circular: linkage to the brain is basic, because the brain
> is the "basic" level of analysis, and the brain is the most basic level
> of analysis, because the linkage stops there and does not go further
> down the ladder of more basic levels of analysis. . . because further
> down the ladder wouldn't be relevant. If in the final analysis what is
> basic is a matter of relevance, then it is the linkage, the inference
> making, that decides the basicness of a particular level of analysis,
> not the other way around. Let me propose an alternative paradigm: every
> tub on its own bottom. Otherwise put, every level of analysis has its
> own basics. For instance, my AI colleague Len admitted unashamedly on
> many occasions that he did not know much about the computer.
There are several themes here. I'll try to develop them in my own way.
This seems to lead to a closely related viewpoint. Or perhaps I have
misunderstood and hallucinated similarities.
KEY IDEA:
The concept of 'supervenience' is very close to the notion familiar to
software engineers as 'implementation' or 'realisation': both the
philosophers and the engineers are concerned (as Louise pointed out)
with differences of levels in reality and their relationships.
But the philosophers (and psychologists, neuroscientists, etc.) often
don't know enough about details of software engineering systems (e.g.
how they are designed and built, how they go wrong, how they are
debugged, extended, etc. and how the different layers interact during
all of those processes).
So many have wrong assumptions and formulate false generalisations about
possible relations between levels.
Meanwhile, the software engineers and some computer scientists, who have
deep 'craft' knowledge of these matters, which they use in designing,
building, or analysing working systems, do not have the philosophical
expertise required for analysing their own concepts and making their
presumptions and the concepts and theories they use explicit.
So both philosophers and engineers can over-simplify or make errors in
things they say about relations between levels, as can people looking in
from outside. We need to combine their knowledge in a new synthesis.
One common error is the 'nothing-buttery' mistake, thinking that what
goes on in a computer is 'nothing but' processes in a physical machine
(which might be viewed as a collection of digital electronic processes,
or a collection of interactions of atoms and molecules, or something
that can only be described using the abstruse mathematics of quantum
mechanics).
But, as Louise pointed out, there is something important about the fact
that a software engineer can design, implement, test, discover bugs, fix
bugs, or otherwise create, modify, or extend very complex software
systems without knowing much about the underlying computing machinery,
let alone quantum mechanics (or its future successors!).
What he or she does know about is a complex collection of real states,
processes, properties, relationships, and causal interactions that are
not physical: they occur in what are technically labelled 'virtual
machines'. They run on physical machines but they are not physical
machines: they cannot be weighed or measured using the devices employed
by the physical sciences.
A rule that decides whether to turn on a new sub-process has no mass,
volume, charge, velocity, momentum or even a location in space, though
we may metaphorically speak of it as 'in' the computer.
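To make that concrete, here is a minimal sketch in Python (an invented
toy, not taken from any real system) of such a rule. When the program
runs, the rule does real causal work in the virtual machine, yet it has
no mass, volume or location that any physical instrument could measure:

    # A condition-action rule in a running virtual machine. A thread
    # stands in for the 'new sub-process'. The rule is causally
    # effective when the program runs, but it is not a physical object.

    import threading

    def rule_fires(load):
        # The 'rule': start a new worker when load exceeds 80%.
        return load > 0.8

    def worker():
        print("new sub-process started")

    current_load = 0.93
    if rule_fires(current_load):     # the rule 'turns on' a sub-process
        threading.Thread(target=worker).start()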
DISTINCT ONTOLOGIES
Moreover, the concepts required for thinking about what happens in a
virtual machine are NOT definable in terms of concepts used to describe
the lower levels (on which more below). In that sense the ontology could
be called 'emergent' (one of many senses of the word!).
Philosophers whose theories of causation are too simple tend either to
deny the reality of the higher level entities and processes, or try to
identify them with certain collections of entities and processes at
the lower levels.
One reason for denying the 'independent' existence of the higher level
entities is that some popular over-simple theories of causation imply
that there is no 'logical space' for events in 'high level' machines to
have causal powers if everything that happens depends on causal
processes in underlying machines.
If that were true it would be impossible for poverty to cause crime, for
jealousy to cause war or murder, for pity to cause alleviation of
suffering, for economic inflation to cause poverty among pensioners, or
for newly acquired knowledge to cause diseases to be cured, skyscrapers
to be built or horrible weapons to be made.
It is not just because we are feeble-minded that we have to think of all
these kinds of causation as 'real': they exist, but what that means is
very hard to analyse. (Perhaps analysing the concept of 'cause' is the
hardest unsolved problem in philosophy -- Hume's proposed solution,
roughly reducing causality to correlation, is accepted by some
philosophers, but not all, and it doesn't cope with all the facts).
CAUSATION AND TRUE COUNTERFACTUAL CONDITIONALS
I can't offer a complete account of causation here but merely observe
that what we are referring to when we speak of A causing B has to do
with the truth or falsity of various complex conditional statements
about whether B would occur, or would have occurred, in various kinds of
circumstances in which A might or might not have occurred. So causation
and truth-values of counterfactual conditionals are intimately
connected. (No simple formula suffices to express the relationship.)
[Some philosophers will object to everything I have been writing e.g. by
denying that counterfactual conditionals can have truth values, denying
that causation should be analysed in terms of relations between
existence of causes and truth of statements expressed using 'if', or by
defending a thesis of identity between phenomena at different levels:
which is a form of 'nothing buttery'.]
What makes such causal statements or counterfactual conditional
statements true may be facts of many kinds, at many levels. But that
does not alter the fact that statements at a particular level, or
statements about causation between levels, are sometimes true: poverty
can cause crime and jealousy can cause a knife to be plunged through a
heart.
It is true that if X had not been intensely jealous of Y he would not
have stabbed Y ('other things being equal', to cut a long story short).
Its being true is not altered by the fact that the truth depends on a
huge variety of very complex mechanisms that make it possible for
jealousy and actions like stabbing to exist, including the existence of
physical atoms and molecules in X, in Y, in the knife, in the air
breathed by X, and in many other things.
Bricks and windows can be thought of as entities in a virtual machine
layer implemented in more remote and obscure layers of physical
mechanisms. But the claim that a brick hitting a window caused the
window to break is not undermined by showing that there is another true
description of the process expressible only in the abstruse mathematical
language of quantum physics.
Anyone who thinks otherwise must believe that we will never know
anything about what causes what until physics has found some rock-bottom
description of reality where the 'real' causes happen.
That view is based on a failure to understand how the notion of
causation works in our ontology: we could not cope with everyday life if
we did not use it at multiple levels. But until recently we have not
understood much about some of the more complex ways in which multiple
levels of causation can coexist.
[NOTE:
The new understanding I am trying to characterise has come from building
new kinds of 'high level' machines which are mostly much simpler than
those produced by evolution, and therefore easier to understand.
It also helps if you can make and modify examples of something you wish
to understand. As Louise points out, that can be done using knowledge
and actions concerned only with higher level machines.
At that level it may not matter whether the software runs on a PC, an
Apple Mac, a Sun SPARC, or some other general purpose computer for which
suitable compilers and interpreters are available. Those compilers and
interpreters, in different ways, provide some of the mappings between
levels in computing systems.]
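As a hedged illustration of that last point (a toy of my own, not a
description of any real compiler or interpreter), here is a tiny stack
language implemented in Python. The little program at the bottom belongs
to a high-level virtual machine that knows nothing about Python, just as
Python knows nothing about the circuitry several levels further down:

    # A toy interpreter: one of the mappings between a high-level
    # virtual machine and the machinery beneath it.

    def run(program):
        stack = []
        for op, *args in program:
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "print":
                print(stack[-1])
        return stack

    # A 'high level' program: compute and print 2 + 3.
    run([("push", 2), ("push", 3), ("add",), ("print",)])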
ENGINEERING LEVELS
An AI software engineer or cognitive modeller may be concerned only with
'high level' processes in a computational virtual machine (e.g.
processes like parsing a sentence, finding a plan, weighing up the pros
and cons of different strategies, resolving a conflict, detecting an
emergency or reacting to an important opportunity that requires rapid
global re-direction of activities).
The designers of operating systems, compilers and interpreters, unlike
the designers of systems that use them, have to have a very deep
understanding of relationships between the levels, though even they can
differ in what they know, depending on what sorts of programming
languages they write compilers and interpreters for, what sorts of
operating systems they produce, or which parts of an operating system
they work on.
For the pure software engineers it may be enough to assume that a
certain sort of high level machine (a virtual machine) exists and works
and can be programmed in a certain kind of language. Such engineers may
know that some things that go wrong are hardware failures or design
faults, others software bugs, and they learn techniques for
distinguishing the causes, and then either call the hardware engineers
or take steps to identify and fix the software bugs.
All of you who produce web pages use only a high level language for
driving a very high level, distributed, abstract machine. For that
purpose you don't need to know anything about the physical details,
whether the communication mechanisms use copper, optical fibre, or
wireless, what CPUs are involved or how they work, how much of the low
level control is done by changeable software and how much is compiled
into fixed circuitry, etc.
People who work on software learn that some improvements can come from
hardware changes which they are unable to bring about, except by buying
new hardware that other people have produced (e.g. smaller, faster
processors using less energy, larger memories, more robust hardware,
etc.) whereas other improvements come from software changes while using
the same hardware (e.g. a wider range of contingencies considered by the
computer in planning its actions, better self-awareness to enable the
running system to learn from its mistakes, more checking of
preconditions, faster detection of and reactions to critical conditions,
etc.)
A LITTLE LEARNING CAN CONFUSE
Too many people know what a Turing machine is or have heard about the
Turing test (a giant red herring), and think that's all they need to
know in order to know what virtual machines in computers are or can do.
They are wrong.
One of the most important scientific and philosophical advances of the
last half century has been growing understanding of the variety of kinds
of relationships between virtual machines (running software systems) and
physical machines on which they are implemented.
In particular we have learnt deep reasons why most forms of behaviourism
are false: many virtual machines have externally unobservable
interacting sub-systems, whose properties and states cannot be defined
in terms of input-output relations of the whole system, and which cannot
easily be discovered by anything like either the laboratory methods of
experimental psychology (which cannot probe deeply enough into a
collection of interacting virtual machines) or the brain-measuring
methods of neuroscientists (because those methods produce information
about the wrong level of machinery: like trying to work out what a
computational virtual machine is doing by measuring changing voltages
and currents in electronic circuits.)
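A deliberately trivial Python sketch, invented for this message, of why
input-output relations underdetermine internal organisation: the two
systems below cannot be distinguished by any test of their input-output
behaviour over the tested range, yet their internal virtual machinery is
quite different:

    # Two systems with identical input-output behaviour but different
    # internal virtual machinery: behavioural tests cannot separate them.

    class Memoriser:
        # Answers by looking up a stored table.
        def __init__(self):
            self._table = {n: n * n for n in range(100)}
        def respond(self, n):
            return self._table[n]

    class Calculator:
        # Answers by computing, keeping a hidden running log.
        def __init__(self):
            self._log = []
        def respond(self, n):
            self._log.append(n)  # internal state invisible from outside
            return n * n

    for system in (Memoriser(), Calculator()):
        assert all(system.respond(n) == n * n for n in range(100))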
There is more on this in some online tutorial slides (PDF):
http://www.cs.bham.ac.uk/research/cogaff/talks/#inf
and various papers at the same web site (cogaff).
WHAT'S MORE BASIC? WHAT DOES 'BASIC' MEAN?
Virtual machines are highly dependent on the physical mechanisms on
which they run. The software cannot run if there is no physical
mechanism; and if the physical mechanisms break, the behaviour of the
virtual machines at the software level can change, or even abort totally
(as happens with human minds). That makes the physical mechanisms
'basic' in some sense, but that can be misleading.
Equally some physical machines cannot perform their functions without
the software-based virtual machines that process information, recognise
problems and opportunities, learn, take decisions, and so on. There's a
mutual dependence.
In some cases a virtual machine can detect signs of hardware failures
and take evasive action, e.g. using another part of the machine, or
using 'error correcting' algorithms that compensate for a subset of
hardware failures.
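A minimal sketch, assuming a deliberately simplified fault model, of the
kind of compensation meant here: the virtual machine reads three
redundant copies of a value and takes a majority vote, so a single
hardware fault makes no difference at the higher level:

    # Triple modular redundancy: a virtual-machine-level strategy that
    # masks a single hardware fault by majority voting over three copies.

    from collections import Counter

    def read_with_voting(stores, address):
        values = [store[address] for store in stores]
        return Counter(values).most_common(1)[0][0]   # majority value

    good1, good2 = {0: 42}, {0: 42}
    faulty = {0: 17}                  # one store has a corrupted cell
    print(read_with_voting([good1, good2, faulty], 0))   # prints 42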
Hardware can control software and software can control hardware.
But the mappings can be complex and subtle.
Some virtual machines, including the internet and various higher level
virtual machines that run on it, e.g. email systems, distributed
project-management systems, airline-booking systems, etc., are
distributed over many physical machines, and some of the physical
machines can be removed or replaced without the higher level virtual
machines stopping.
Some virtual machines are capable of having their states saved,
transmitted to a new physical machine, and then resurrected on it --
sometimes using new better technology. That can make some virtual
machines more enduring than the underlying physical machines on which
they run. Some people might say the same about gene-pools in an
ecosystem.
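Here is a minimal sketch of such a resurrection, with invented names and
a toy 'machine': the running state is serialised to a snapshot that
could be transmitted to different, perhaps better, hardware, and an
equivalent virtual machine is reconstructed from it:

    # Saving and resurrecting a virtual machine's state. The snapshot
    # could travel to a new physical machine: the virtual machine can
    # outlive the hardware it started on.

    import json

    class CounterMachine:
        def __init__(self, count=0):
            self.count = count
        def step(self):
            self.count += 1
        def snapshot(self):
            return json.dumps({"count": self.count})
        @classmethod
        def resurrect(cls, blob):
            return cls(**json.loads(blob))

    vm = CounterMachine()
    vm.step()
    vm.step()
    blob = vm.snapshot()    # ...transmit to another physical machine...
    vm2 = CounterMachine.resurrect(blob)
    assert vm2.count == 2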
Safety-critical systems and various kinds of 'mission-critical' systems
(e.g. where customers depend on constantly available services)
increasingly use redundant hardware, where the virtual machines
automatically adjust themselves to take account of performance
bottlenecks, network blockages, machine failures or 'maintenance
downtime'.
SOCIO-CULTURAL-ECONOMIC VIRTUAL MACHINES
This is in some ways analogous to social, economic, legal and political
virtual machines that are distributed over many humans and their
products. Those socio/economic/political virtual machines may have a
'life' of their own that cannot easily be redirected by human decisions,
even though they are to a large extent products of past human decisions
and depend for their continued operation on many human decisions. They
can sometimes preserve themselves through control of human decisions
using processes of cultural transmission across generations (including
forms of indoctrination), and using various mechanisms for transmitting
information (rumours, etc.) that can marshal defences against attacks
from sub-systems.
Emotions within individuals can play both good and bad roles in such
social virtual machines.
OVERLAPPING VIRTUAL MACHINES
Emotions, desires, attitudes, preferences, moods, values can also be
mechanisms within biological virtual machines produced by evolution
concerned with propagation of genes. In some cases a process (e.g.
arousal of sexual desire) can be simultaneously part of a social virtual
machine and part of a gene-replicating biological virtual machine. Its
role in both depends on extremely complex events and processes in
physico-chemical machines, but nevertheless the arousal of desire is a
real process that can cause things to happen.
Understanding this depends on noticing some of the variety that can
exist in virtual machines. Some have homeostatic properties without
having any representation of a goal to be achieved, like the mechanisms
that keep the water surface level in a goldfish bowl as it is tilted.
Others have a prior representation of what is sought that controls
behaviour (e.g. an animal seeking food or a mate), and some of those
merely respond to opportunities whereas others (e.g. humans and
hook-making crows) use those representations to create new
opportunities. The variety of possible teleological mechanisms in
virtual machines is huge, and still largely unexplored, except in simple
cases. (I fear current theories of human and animal emotions are based
on over-simple conceptions of what is possible.)
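The contrast drawn above, between homeostasis without any representation
of a goal and control by an explicit prior representation, can be made
vivid with a small invented sketch (the threshold in the first mechanism
is built into its structure, not stored as a goal it consults):

    # Two teleological mechanisms. The first merely reacts to a local
    # condition and represents nothing. The second stores an explicit
    # representation of what is sought and acts to reduce the difference.

    def reactive_valve(pressure):
        # Opens under high pressure; 'seeks' nothing.
        return "open" if pressure > 1.0 else "closed"

    class GoalSeeker:
        def __init__(self, goal_temperature):
            self.goal = goal_temperature      # explicit representation
        def act(self, temperature):
            if temperature < self.goal:
                return "heat"
            if temperature > self.goal:
                return "cool"
            return "rest"

    print(reactive_valve(1.4))          # open
    print(GoalSeeker(20.0).act(17.5))   # heat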
DEFINITIONAL DISCONNECTION
One of the important points about levels implicit in the previous
comments is that they can have definitionally disconnected ontologies:
the concepts required for describing level C may be incapable of being
defined in terms of the concepts used to describe things, processes,
events and causal interactions at lower levels, B, A, etc.
For example the chess concepts used to describe and explain the
behaviour of a chess virtual machine (concepts like 'pawn', 'queen',
'capture', 'win', 'draw', 'attack' etc.) cannot be defined in terms of
the 'bit-level' concepts that define a typical CPU architecture. That's
obvious from the fact that we used the concepts of chess long before we
knew anything about the possibility of implementing a chess player on a
bit-manipulating virtual machine. (Incidentally Ada Lovelace had a good
appreciation of these ideas a century before Turing, von Neumann, etc.)
Likewise, concepts like jealousy, poverty, economic inflation, learning,
appreciating a joke, are not definable in terms of concepts used to
describe the low level implementation machines on which they depend, not
least because those concepts can be understood and used without knowing
anything about the lower level virtual machines.
But, as in the case of computer-based chess-players, that
indefinability does not rule out implementability. We just have to
understand more about the very subtle relations between definability and
implementability.
(That's part of understanding what can be supervenient on what: e.g. a
chess-playing machine cannot be supervenient on a computer with so few
bits that it cannot support all the state transitions required. But in
some cases infinite virtual machines can be implemented in finite
physical machines, if the implementation admits the possibility of
run-time extensions, like Turing's tapes. The mind and brain of a
mathematician -- i.e. all of us -- are probably similarly related.)
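A standard trick makes the point about run-time extension concrete. In
this minimal sketch (a toy tape, not a full Turing machine) the tape is
finite at every moment but grows whenever the machine steps beyond its
current edge, so the virtual machine it supports is unbounded in the way
Turing's tapes are:

    # A tape that is always finite but extended on demand: an 'infinite'
    # virtual machine implemented in finite storage.

    from collections import defaultdict

    tape = defaultdict(int)   # unwritten cells read as 0; storage grows
    head = 0
    for _ in range(5):        # a trivial machine: write 1s moving right
        tape[head] = 1
        head += 1
    print(sorted(tape.items()))   # only the visited cells ever exist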
The biological virtual machines that run on biological 'hardware',
including the aforementioned 'social' virtual machines, are in many ways
far more sophisticated than anything we have so far been able to design
and build.
In particular, human-engineered systems of the kinds that we have so far
designed and know how to debug, maintain and modify, usually have at
most a few (e.g. up to about twenty?) levels of virtual machinery, with
relatively few kinds of two-way causal interactions between levels.
In contrast, biological systems, like humans and other animals, and the
larger systems that include them, e.g. ecosystems, have far more levels
of organisation with much richer coupling between levels (which is why
you can feel unwell 'all over your body' during some infections, or why
tiredness, physical stress, hormonal changes can both influence high
level 'mental' virtual machines and be influenced by them).
One corollary of all this is that there need not be a simple, linear
hierarchy of levels -- there can be 'parallel' branching layers of
virtual machines that recombine to support higher level virtual
machines, as illustrated crudely in this diagram
http://www.cs.bham.ac.uk/~axs/fig/levels.jpg
(though it leaves out many kinds of things).

VIRTUAL MACHINES PARTLY IMPLEMENTED IN THE ENVIRONMENT: SEMANTIC MACHINES
Another fact that is sometimes ignored is that part of the
implementation of a virtual machine can be outside the physical machine
on which the virtual machine 'runs'.
E.g. if a robot plans to climb through a particular window, part of what
makes its mental state have semantic content that refers to that
window, rather than other windows exactly like it, is the collection of
spatio-temporal and causal relations between the robot and the window.
I.e. the implementation of virtual machines with referential content
cannot lie wholly within the 'bodies' associated with the virtual
machines, for some virtual machine states are inherently relational.
(This point, like many of its subtle complications, was noted by
P.F.Strawson in his 1959 book, Individuals. Sometimes this is referred
to by more recent philosophers as 'broad content'.)
Clearly, since so much of a human mind is semantic content referring to
all sorts of things outside the individual, each of us is implemented
partly in a web of relationships to many things in the environment,
past, present and future, and not just in our brains.
As I've already remarked, philosophers' discussions of supervenience
(the alleged relations between levels) often do not take account of
many of the things we have learnt in the last half-century.
COMPLEX STRUCTURE-MAPPING
For example some people believe that every distinct component of a
virtual machine has to be implemented in a distinct component of the
underlying virtual machine. That is just false; virtual machines P, Q,
and R can be distributed over physical machines W, X, Y, Z sharing
resources (like the distinct distributed internet virtual machines that
share the CPUs of multiple computers).
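A minimal invented sketch of such a many-to-many mapping, with
hypothetical virtual machines and hosts:

    # Virtual machines P, Q, R distributed over physical hosts
    # W, X, Y, Z. No virtual component lives in exactly one
    # physical component.

    placement = {
        "P": {"W", "X", "Y"},
        "Q": {"X", "Y", "Z"},
        "R": {"W", "Z"},
    }

    for vm, hosts in sorted(placement.items()):
        sharers = sorted(other for other, h in placement.items()
                         if other != vm and h & hosts)
        print(vm, "runs on", sorted(hosts), "shares hardware with", sharers)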
But that does not refute claims of modularity in the virtual machines.
It merely refutes the assumption that high level modularity must mirror
lower level (e.g. physiological) modularity. For instance, motor control
modules may be distributed over many different regions of the brain
concerned with higher and lower level aspects of the organisation of
actions and different kinds of perception that are involved in the
fine-grained or coarse-grained control of those actions. E.g. part of
the control of walking uses optical flow to control posture.
CARICATURES OF FUNCTIONALISM/COMPUTATIONALISM
Another common mistake is to assume that all virtual machines can be
analysed in terms of the notion of a finite state machine which is
always in one state, with a transition table that defines
state-transitions that change input-output relations, as crudely
depicted here:
http://www.cs.bham.ac.uk/~axs/fig/fsm.jpg
This is wrong because, as illustrated previously, a virtual machine can
be made of many interacting virtual machines that run asynchronously and
to some extent independently, some discrete and some continuously
varying, so that there is no 'unitary' functional state as assumed in
finite state machines. A virtual machine with multiple concurrent
interacting sub-virtual machines partly sharing input and output
mechanisms is crudely depicted here
http://www.cs.bham.ac.uk/~axs/fig/vmf-io-varied.jpg
But that picture does not allow for the fact that virtual machines can
grow extensions to themselves over time.
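To make both points concrete, here is a small invented Python sketch:
several sub-machines run concurrently, share one input channel, and one
of them extends the running system by spawning a brand-new sub-machine,
so there is no single 'unitary' state and no fixed transition table:

    # Concurrent sub-virtual-machines sharing an input channel, one of
    # which extends the system by adding a new sub-machine at run time.

    import queue
    import threading

    inputs = queue.Queue()
    machines = []

    def add_machine(name, grow=False):
        def run():
            while True:
                item = inputs.get()
                if item is None:          # shutdown token
                    inputs.task_done()
                    break
                print(name, "handled", item)
                if grow and item == "spawn":
                    add_machine("reflector")  # grow the running system
                inputs.task_done()
        thread = threading.Thread(target=run)
        machines.append(thread)
        thread.start()

    add_machine("perceiver", grow=True)
    add_machine("deliberator")
    for item in ["see", "plan", "spawn", "reflect"]:
        inputs.put(item)
    inputs.join()             # wait until every input has been handled
    for _ in machines:
        inputs.put(None)      # one shutdown token per sub-machine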
The simpler notion of a virtual machine as a finite-state automaton
completely fails to represent a modern operating system, for example,
and also fails to represent a mind made of enduring interacting
components including concurrently active subsystems concerned with
motivation, evaluation, learning, deliberating, reflecting, imagining,
controlling physical actions, etc.
Many of the sub-systems in human virtual machines, like many of the
physical subsystems, are 'shared' with much older species, and most of
them operate unconsciously. Moreover, new sub-systems can be produced by
learning, and development, e.g. when learning to read, learning to play
a musical instrument, developing various kinds of athletic or social
competences, acquiring new forms of self-control or self-awareness.
> This hierarchical model of linkage is what I call foundationalism.
> Even so, I think this hierarchical
> perspective misses the creativity inherent in forging linkages. In the
> best scenarios, the linking of different levels of analysis may entail a
> merging of horizons, which, among other things, means that the end
> result may be different and better than either level of analysis could
> have produced alone.
I hope it is now clear that virtual machines need not be organised in a
linear hierarchy, that they need not be conceptually reducible, that
they can involve causal powers, that physical machines can sometimes
depend on virtual machines for their continued functioning or existence,
and that we probably still understand very little of the variety of
possible types of virtual machine, and the variety of types of
relationships between levels.
I think that's what I am saying also.
> Our two models of linkage have different implications for research. One
> reason I am leery about the hierarchy model is the spectre of
> eliminationism, which predicts that science will one day supersede "folk
> psychology," when the former finally figures out how things REALLY work.
> I don't think you subscribe to this extreme view, but the ubiquitous
> presence of this spectre makes me bark at the slightest hint.
I think I have at least shown in outline how to demolish this ghost,
even in connection with computer-based virtual machines.
> let me
> give a hypothetical case scenario to see how our two models play out. A
> while ago there was the move to vote pride into the pantheon of basic
> emotions. Let's assume that in the near future the biochemistry of
> pride is unraveled, and that it turns out that good and bad pride each
> has its discrete neuro circuits. But this picture may not jive with the
> cultural narratives of pride.
Again I think we agree. I was once asked by a journalist under what
conditions a machine can have pride. Some of our correspondence is here
http://www.cs.bham.ac.uk/research/cogaff/pride.html
> For instance, in the classical Confucian
> texts, bad pride (arrogance) is bad and banned, and good pride (self
> esteem) does not fare any better--it is hardly mentioned. In contrast
> is the Taoist tradition, in which pride, both good and bad, is flaunted
> as one of the hallmarks of a lofty hermit and/or a creative genius, such
> as Li Po in whom the poet and the eremitic have coalesced. What to do
> when the neuroscience of pride and the cultural narrative of the same do
> not tally?
One of the problems may be over-simple notions of what it is for
neuroscience to say anything relevant, given the definitional
discontinuities mentioned above.
> The hierarchical model of linkage may dismiss the latter as
> uninformed speculations that bark up the wrong tree. My model would
> suggest otherwise. The fact that what's good or bad is contingent upon
> cultural contexts does not necessarily make cultural phenomena more
> "slippery" than the wet ware of neuroscience. What the hypothetical
> scenario of pride illustrates is that cultural narratives may involve
> one dimension of emotional experience that falls outside the pale of the
> animal model of emotions. I am referring to the so-called "second order
> desires" (Frankfurt, 1971; Sundararajan, in press) -- the desire of
> desire, a metacognition that decides whether a desire is desirable
> (good pride) or not (bad pride). Of course metacognition has its neuro
> circuits.

Not necessarily. It may be one of the distributed virtual machines in
our brains that share their implementation with other virtual machines.
Assuming that there are dedicated metacognition neuro-circuits that do
it all, or even most of it, may be a mistake.
(Some kinds of meta-cognition probably do require architectural
extensions at the neural level. Others may emerge from interactions
among sub-systems that have other functions.)
> So
> folk psychology, in this hypothetical scenario, can pose interesting
> questions that stimulate further research in neuroscience.

Yes. Likewise reading what novelists, poets, and playwrights have to
say.

> The position
> I arrive at is not any different from what you and Ross have already
> stated; I am simply pointing out that my model of linkage is more
> conducive to this position than your hierarchical model. Or am I
> beating a dead horse?

Alas no: I keep encountering it in many different contexts.
But I also encounter emotional, unscientific, ill-defined
anti-reductionism which also needs to be flogged...
> > What is the tragedy for your field?
>
> I am too much of a post-modernist to be talking about tragedies (the
> postmodern word is "banality"). But if you insist, I would say that it
> would be a tragedy for any field, if it were to have enough seriousness
> to stifle the playfulness of ideas, but not serious enough to experience
> a crisis in theory.

An enormous tragedy I see is that academic, administrative, competitive,
economic, and intellectual pressures on young academics make it more and
more difficult and risky (e.g. if tenure is at stake) for them to
spread their wings and learn and think about many things. They have to
focus more and more narrowly to get journal papers published, grant
proposals funded, and keep their department heads happy.
The long term effect of this will be to kill a most important virtual
machine in which ideas from many disciplines concerned with many levels
of enquiry interact to generate profound new insights into deep
problems.
I am not sure I want to remain a member of that kind of academic
community.
> Thanks again for the thoughtful response. To paraphrase a Chinese
> proverb: we did not get to know each other till we had a dog fight.
> Louise
I think we are chewing on the same bone in a friendly fashion.
Again: apologies for length (and for British spelling). Shortening this
could take me several days. I hope it's of some use to someone in its
present form.
Aaron
http://www.cs.bham.ac.uk/~axs/
Installed 30 May 2005