AUTHOR: Jim Cunningham, Communicating Agents Group, Department of Computing, Imperial College, 180 Queen's Gate, London SW7 2BZ
http://comma.doc.ic.ac.uk, http://medlar.doc.ic.ac.uk/~rjc

POSTER TITLE: Towards an Axiomatic Theory of Consciousness

ABSTRACT: This is a step towards an axiomatic theory of consciousness as a quantified form of introspective awareness. A crucial step in the presentation of the theory is the use of an interval temporal logic to give formal expression to ongoing conditions such as those represented by the progressive aspect in natural language. In this way we are able to enrich otherwise stative mental models, so that an agent's internal activities and its perception of external processes can be expressed more faithfully as current but durative and changeable logical properties.
One route to the realisation of abstract axiomatic theories of
agent intelligence is through the computational mechanisation of
the intensional logics in which they are typically expressed. But
from the perspective of an agent designer, extant intensional
theories of rational agents focus on stative concepts like
belief, desire and intention; realisations of the intentional
stance, notably variants of the BDI (belief-desire-intention)
architecture, share this emphasis on states rather than on
ongoing activities.
Once we have the ability to express temporal relations between
interval based activities as logical properties, the interactions
between activities and between activities and other mental states
can be expressed by axioms. We may for instance consider an
axiom expressing the idea that sensory perception amounts to an
ongoing inferential interaction between sense and belief. Of course we
have the ability to express other tenses, and also atemporal
conditions, such as the idea that to sense a condition is
coextensive with sensing by some subset of available senses,
including in the human case the haptic sense as part of the sense
of feel. This latter stipulation may be important if we are to
allow a perception of self to enter into a property of awareness.
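The perception axiom alluded to above does not survive in this text. Purely as an illustration of the kind of formula intended, and with the notation entirely my own assumption rather than the author's (Prog for a progressive interval operator, Sense and Bel for sensing and belief modalities of agent a), one candidate reading is:

```latex
% Illustrative sketch only, not the author's axiom.
% Perceiving phi holds just while both the sensing of phi and the
% belief it inferentially sustains are in progress over the interval.
\mathrm{Perc}_a\,\varphi \;\leftrightarrow\;
   \mathbf{Prog}\,(\mathrm{Sense}_a\,\varphi)
   \;\wedge\;
   \mathbf{Prog}\,(\mathrm{Bel}_a\,\varphi)
```

The biconditional makes perception durative on both sides: it lapses when either the sensing activity or the belief it sustains ceases.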
To be aware is closer to a mental activity than to a data
state: awareness includes both perception and belief, but differs
from them in degree of mental focus. We switch awareness of a
condition on and off by paying attention to it, either in
response to a change in perception through the senses, or through
deliberate selection and volition, primitive mental processes
whereby mental activity and ultimately action are controlled. In
positing these processes we follow Carl Ginet, arguing not only
for such philosophical abstractions themselves but also in the
appreciation that they relate to notions of free will and
causality through neurological elements such as the motor cortex.
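The account of awareness above, as perception and belief gated by attention, can be caricatured in a few lines of code. This is a minimal sketch under my own assumptions (the class and method names are hypothetical, not the author's formalism): awareness holds of a condition exactly when it is sensed or believed and currently attended to, and attention shifts either with a change in perception or by deliberate selection.

```python
class Agent:
    """Toy model: awareness = (sensed or believed) and attended to."""

    def __init__(self):
        self.percepts = set()   # conditions currently sensed
        self.beliefs = set()    # conditions currently believed
        self.attention = set()  # conditions the agent is focused on

    def sense(self, condition):
        # A change in perception draws attention to the condition.
        if condition not in self.percepts:
            self.attention.add(condition)
        self.percepts.add(condition)
        self.beliefs.add(condition)  # sensing feeds belief

    def attend(self, condition):
        # Deliberate selection: a volitional shift of focus.
        self.attention.add(condition)

    def ignore(self, condition):
        # Withdrawing attention switches awareness off without
        # erasing the underlying percept or belief.
        self.attention.discard(condition)

    def aware(self, condition):
        return condition in self.attention and (
            condition in self.percepts or condition in self.beliefs)


agent = Agent()
agent.sense("light-on")
print(agent.aware("light-on"))   # True: the new percept captured attention
agent.ignore("light-on")
print(agent.aware("light-on"))   # False: still sensed, but not attended to
```

The point of the sketch is only that awareness is switchable independently of the stored percepts and beliefs, which is what distinguishes it from a data state.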
We remark at this stage that, although they are merely elements
of an axiomatic theory, we do not consider the properties of
sensory and control activities discussed so far to be distinct
from those which a sophisticated but unconscious automaton could
deploy in pursuit of a preset goal; enhancing its capacity to
plan and to learn does not change this, except in level of
autonomy. To introduce consciousness, and critically a
consciousness of responsibility, we first allow the activity of
being aware to be positively introspective to some degree, so
that when an agent is aware it is also, at least for some
conditions, aware of this fact; and if it learns or itself
posits a causal relationship, an agent with adequate
introspective power can become aware that it is aware of this
relationship, and of its consequences. Because the scope and
degree of introspection can be graded, there seems to be no
evolutionary argument against the acquisition of introspective
awareness; indeed it seems necessary for a sense of social
responsibility. However, the degree of introspection will always
be limited, for full positive and negative introspection would
each lead to forms of omniscience.
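The graded-introspection constraint can be given a minimal formal sketch. The notation here is my own assumption, not the author's: Aw is a hypothetical awareness modality for agent a, and the introspectible class of conditions bounds the scope of the schema.

```latex
% Positive introspection, restricted to an introspectible class I_a:
\mathrm{Aw}_a\,\varphi \;\rightarrow\; \mathrm{Aw}_a\,\mathrm{Aw}_a\,\varphi
   \qquad (\varphi \in \mathcal{I}_a)

% Unrestricted negative introspection, by contrast, would make the agent
% aware of everything it is not aware of, a form of omniscience about
% its own state, and so is rejected in full generality:
\neg\,\mathrm{Aw}_a\,\varphi \;\rightarrow\; \mathrm{Aw}_a\,\neg\,\mathrm{Aw}_a\,\varphi
```

Grading the class I_a is what keeps the schema evolutionarily cheap: introspective reach can grow incrementally without ever requiring the full, omniscient versions of either schema.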
Finally we claim that once an agent has mental activities of
positive introspective awareness it also has a form of
consciousness, and that in its weakest form consciousness is just
an activity of being introspectively aware of something. We also
note that, by providing a progressive temporal model of durative
mental activity, leading to activities of focused awareness, and
by indicating that there are graded degrees of consciousness, we
have a plausible evolutionary justification with which to answer
some of the substantive criticism from sceptics of mechanisation
such as Tallis.
J. Allen, "Towards a General Theory of Action and Time"
M. Bratman,
P.R. Cohen and H.J. Levesque, "Intention is choice with commitment"
C. Ginet, On Action, Cambridge University Press, 1990
M.F. Leith, "Modelling Linguistic Events", PhD Thesis, Imperial College, University of London, 1997
A.S. Rao and M.P. Georgeff, "BDI agents: from theory to practice", Proc. Internat'l Conf. on Multi-Agent Systems (ICMAS-95), San Francisco, CA, 1995, pp. 312-319
R. Tallis,
================================================================
Short CV:
Formally I am a Senior Lecturer in the Department of Computing at
Imperial College, and lead an interdepartmental research group on
Communicating Agents. The group embraces pure and applied
research on software agents, and has been involved in six recent
ACTS projects on agents, spanning telecoms, e-commerce, and
human-computer interaction. Its interests include agent software,
cognitive robotics, automated reasoning, speech, language, and
human face and gesture recognition.
As an academic I have always been research active, but an
under-achieving eclectic, despite some technical and managerial
skills. I left control systems for programming language theory in
'66; introduced object-oriented programming (Simula 67) in
1972-78, before tiring of the pressure for Pascal and functional
languages; was an early user of Prolog, but insufficiently
doctrinal; and left formal methods, despite award-winning papers,
once the evidence pointed to poor human-computer interaction as
the main deficiency in safety-critical systems. I have had a long
interest in making automated reasoning more applicable and
accessible for human cognition: I introduced Knuth-Bendix term
rewriting methods to the UK, but tired of less productive detail,
then decided that intensional rather than extensional logics were
more appropriate for expressing human language and contextual
reasoning, and that agent technology is an appropriate
experimental vehicle.