WHAT IS AN ONTOLOGY?
Notes arising out of CoSy email discussion (May 2005)

A colleague wrote:

> When we start thinking about ontologies, when we start thinking about
> modeling reality on the basis of ontologies -- we are, in the end,
> talking about a notion of truth, and that in relation to the old
> "division" between the physical and the mental. Some say, truth is
> normative assessment, and it is normative assessment that sets an
> (intentional) agent apart from non-mental entities.
>
> What does that mean for the systems we are developing? Particularly,
> when we start to think about models and ontologies. The question in the
> end, I think, (and in the beginning) seems to be: where is the truth in
> thinking about reality? where do we **put** the truth?

My approach is much more simple-minded! I have been doing something much less philosophical when writing about the ontology needed for our robots (especially PlayMate), e.g. here:

http://www.cs.bham.ac.uk/research/cogaff/misc/ontology-for-a-manipulator.txt

Some early draft notes on requirements:

http://www.cs.bham.ac.uk/research/cogaff/misc/question-ontology.html
http://www.cs.bham.ac.uk/research/cogaff/misc/ontology.pdf

Two versions of a longer draft, driven in part by reflections on how to characterise the space of questions and puzzles (i.e. explicitly identified information gaps) that can arise in the robot's mind:

http://www.cs.bham.ac.uk/research/cogaff/challenge.pdf

Some questions about the ontology required for seeing surfaces as collections of positive and negative affordances, for a robot with a hand:

http://www.cs.bham.ac.uk/research/cogaff/sloman-vis-affordances.pdf
(on varieties of implicit and explicit information in organisms and machines, especially section 3)

This task has not been concerned with the design of any *formal* specification, language, tool, inference procedures, etc. I have merely been trying to think about some of the long-term, ambitious scenarios we have stated (in the CoSy proposal) that we will aim at.

In the above documents (some of which I began writing before CoSy) I tried writing down notes on the kinds of entities, properties, relationships, states, events, processes, .... that a robot (or crow) may have to be able to refer to in its percepts, thoughts, puzzles, plans, intentions, plan execution, beliefs, and communications (including, for human-like robots, its assertions, questions, requests, instructions, or interpretations of what others say or write).

My research on the web suggested that there were no existing ontologies in that sense that came anywhere near meeting our requirements, although there are various useful fragments available. There are also quite rich ontologies developed for other kinds of applications (e.g. business systems).

The main gaps I found concern the sort of thing Pat Hayes referred to about 30 years ago as 'naive physics', a profoundly important project that more or less ground to a halt, mainly because it was so difficult (though some people are still working on it -- e.g. Tony Cohn, who talked to us in Bled). I think one of the things that project lacked was the driving force of very detailed robot scenarios: it was done too much in the abstract. It also focused too much on structure and not enough on affordances. (I may have mis-remembered.)

This exploration of the ontologies required for a robot to produce particular sorts of scenarios is a very informal and very open-ended process, and would have to be gradually narrowed down as the scenarios are narrowed down.
E.g. if there are no scenarios in which someone is concerned about someone else who has been injured, then perhaps pity, regret, guilt, etc. would not be part of the ontology. On the other hand, if PlayMate is trying to explain something to a learner, then mental states like knowing about something, not knowing about something, being puzzled or confused, needing help, having a false belief, etc. would all be things in the ontology.

If the robot has a meta-management system that is able to monitor, categorise, evaluate, reflect on, generalise about, and intervene in its own information processing (e.g. perception, learning, planning, problem-solving, plan execution, motor control), then it will need a meta-semantic ontology in which it refers to things that have semantics (i.e. things with information content). My understanding is that very few current robots do that, but CoSy is committed to doing that -- or, more precisely, to *trying* to do that and reporting on requirements for the task even if we don't have successful demonstrations.

If the robot has some very fluent, fast behaviour-control mechanisms, e.g. used for rapidly moving towards and grasping something, or catching a ball, or quickly understanding a sentence without deliberating about its meaning, then that will probably use reactive mechanisms employing a previously learnt collection of low-level cues of kinds that emerge from patterns in the robot's previous history. This 'low-level' competence is likely to require an ontology that we may find very hard to think about in advance. (In fact such low-level ontologies may be partly unique to each individual -- they will certainly differ between species of animal, or robot.)

[Ricardo Poli once described the task of finding out what had been learnt or produced in a robot by such processes of learning or simulated evolution as 'artificial psychoanalysis'. It can be extremely difficult to do: but without doing it we have no science, just another craft, which may or may not work.]

For now I am going to ignore the objection that any behaviour can be produced in infinitely many different ways, including use of huge lookup tables. That's true in principle, but constraints on sizes of physical memories, on opportunities to construct complete tables, on limited time and opportunities for various kinds of training procedures, etc. can rule out many theoretically possible implementations. [That's part of the point of Jackie's and my IJCAI paper.]

However, that leaves open the possibility of remaining alternatives: if we can't choose between them on the basis of implementability, or some other criterion that interests us (e.g. biological plausibility), then we may have to explore more than one option. And maybe we'll learn what doesn't work! (Negative results can be important for science.) But if we can demonstrate that two or more quite different approaches can produce scenarios with similar depth and generality, that will be a very significant result.

Both PlayMate and Explorer will probably have to be able (eventually) to think about causation (e.g. what caused the tower to collapse, what makes the structure stable, what caused him to misunderstand me, etc. etc.), so causation will have to be part of the robot's ontology. It's essential for a learner that does any debugging of its actions and strategies.

> ....
It's worth noting that in the scenarios I am thinking about, causes would be as much part of the world of the robot (or for that matter a young child, or a nest-building crow) as surfaces, affordances, actions, gaps between objects, intentions, etc. [We have to give up the idea that only what can be sensed can be referred to: that's no way to engage with a world like ours.]

Identifying such an ontology is part of the process of producing an engineering design specification for an intelligent robot rather than a philosophical enterprise, though it benefits from experience of philosophical (conceptual) analysis.

Of course designers can get the ontology wrong, usually because they fail to think about many of the details that will actually turn out to be needed in the working system. (E.g. vision researchers did not at first realise the importance of specularity, and many still don't pay attention to affordances.) That an ontology is wrong or incomplete (for a planned scenario) shows up in the incompetence or mistakes of the robot, or during the process of implementation, when issues often turn up that designers had not thought of when first analysing the scenario. That's why actual implementation is so important for a project like this. (I feel that not having our arm yet may have led us down wrong paths. But I hope that I will be proved wrong.)

So there is no *general* problem of truth that we need to solve for the purposes of CoSy, only the problem of design error or design omission (relative to a planned set of scenarios).

A more subtle point (related to your comments on multiple ontologies) is that some of the ontology that a learning robot may need to develop for itself (or which may be produced across generations by an artificial or natural evolutionary process) may have a kind of richness that is not easily expressible in the kind of language that *we* can use for our thinking and writing. I.e. our language and our ontologies may need to be extended. That's especially a problem with some of the so-called sub-symbolic information that a robot may use in its reactive mechanisms, e.g. information expressed in patterns of activation of neurons or in distributions of weights in synapses. So we may need to invent new meta-ontologies for talking about such things. But if we use the working systems without understanding them we are not doing science: just craft (which can be very useful, of course -- or dangerous if we don't know their reliability envelopes).

In your first message you wrote:

> By "ontology" I understand a structure that describes concepts, and
> relations between concepts, whereby that structure enables inheritance
> or a subtype/supertype characterization.

I think that's an important point, but a special case. Inheritance is just one example of inference (derivation of information). E.g. if all birds can fly and tweety is a bird, then tweety can fly. There are other types. E.g. the fact that a relation is transitive may allow other kinds of inference, such as: if dumbo is bigger than snoopy and snoopy is bigger than tweety, then dumbo is bigger than tweety. Part-of is another important case. Specifying the properties of relationships, e.g. transitivity, symmetry, etc., will be part of the task of specifying the ontology used by the robot.

A meta-semantic ontology will have to include differences in existential implications: if dumbo kicks a rock, then the rock exists. But if dumbo thinks about or refers to or asks about a rock, then the rock need not exist.
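To make that a little more concrete, here is a minimal, purely illustrative sketch in Python (all names, such as Relation and KB, are invented for the example; this is not any proposed CoSy formalism) of how an ontology might record properties of relations like transitivity, together with the differing existential commitments of predicates like 'kicks' and 'thinks-about', so that a simple inference procedure can exploit them.

# Minimal illustrative sketch (invented names, not CoSy code): an ontology
# that records properties of relations, plus a toy inference step that
# exploits them.

from dataclasses import dataclass, field

@dataclass
class Relation:
    name: str
    transitive: bool = False    # e.g. bigger-than, inside, part-of
    symmetric: bool = False     # e.g. touching (declared but unused below)
    existential: bool = True    # does rel(x, y) imply that y exists?
                                # 'kicks' does; 'thinks-about' does not

@dataclass
class KB:
    relations: dict = field(default_factory=dict)   # name -> Relation
    facts: set = field(default_factory=set)         # (relation, a, b) triples

    def add(self, rel: str, a: str, b: str) -> None:
        self.facts.add((rel, a, b))

    def holds(self, rel: str, a: str, b: str) -> bool:
        """Check a fact, chaining through transitivity where the ontology
        licenses it (no cycle handling; enough for the example)."""
        if (rel, a, b) in self.facts:
            return True
        if self.relations[rel].transitive:
            return any(self.holds(rel, m, b)
                       for (r, x, m) in self.facts
                       if r == rel and x == a)
        return False

    def implies_existence_of_object(self, rel: str) -> bool:
        """Does asserting rel(x, y) commit the asserter to y existing?"""
        return self.relations[rel].existential

kb = KB(relations={
    'bigger-than':  Relation('bigger-than', transitive=True),
    'kicks':        Relation('kicks', existential=True),
    'thinks-about': Relation('thinks-about', existential=False),
})
kb.add('bigger-than', 'dumbo', 'snoopy')
kb.add('bigger-than', 'snoopy', 'tweety')

print(kb.holds('bigger-than', 'dumbo', 'tweety'))      # True, by transitivity
print(kb.implies_existence_of_object('kicks'))         # True
print(kb.implies_existence_of_object('thinks-about'))  # False

Of course a real robot's ontology would not be a tidy table like this, and much of it might not be explicitly represented at all; the sketch is only meant to show that 'specifying the properties of relationships' is something that can be made operational.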
PlayMate may have to be able to tell whether someone talking to it has made such a mistake about what does or does not exist. (Compare research on 'theory of mind' in children.) One of the interesting questions about learning/development will be whether there are intermediate stages in which the robot's ontology does not make these sharp characterisations of properties of properties and relations. How ontologies grow is a hard problem for our project. Another interesting question is how the robot learns to deploy properties of its own ontology in solving problems (e.g. learning to use transitivity of 'inside' when planning, or when debugging plans). There are probably many interesting intermediate stages to be explored.

AN IMPORTANT DISTINCTION: ROBOTS vs THEIR DESIGNERS

One thing that we shall eventually need to be clear about, though I've not bothered much about it so far (apart from the comments above about self-organised ontologies in robots), is the distinction between the ontologies required by robots performing various tasks in our scenarios and the ontologies required by *us* when we design such robots (or design simpler robots that can develop into such robots), or when we try to understand how our robots work.

There will need to be some overlap between the ontologies used by us and by the robots (especially if we want to interact with the robots), but their requirements are different. E.g. if the robots are not designing robots they may never need to include ontologies in their ontologies! However, if we start introducing philosophical discussions with a robot into our scenarios (see page 31 of the CoSy workplan), then the robot may have to think about ontologies (e.g. realising that some of its mistakes were previously due to not including friction, or momentum, in its ontology, or making the discovery that it is capable of having false beliefs or muddled concepts).

====

> .....

I'll send a separate message about the history of the word 'ontology' and the variety of meanings with which it is now used. Arguing about which is the *right* meaning is, I am sure you will agree, completely pointless.

Cheers.
Aaron

=======================================================================
From Aaron Sloman Thu May 12 11:51:11 BST 2005
Subject: Meanings of 'ontology' and some history

The multiplicity of meanings of the word 'ontology' can generate confusion. There's no *right* meaning. Here's a 'potted' history (very much over-simplified) to draw out a few main types of use of the word 'ontology', showing (crudely) their conceptual and historical relationships. I end with the question whether CoSy needs ontology tools. Apologies for over-simplification, and for boring people who already know all this.

A: The oldest sense of the word, going back to Aristotle I think, and perhaps earlier, as indicated by the suffix 'ology', is as a name for a subject of study or expertise or discussion (Greek: 'logos' = 'word', 'meaning', 'thought'....). In that sense ontology is a branch of philosophy close to metaphysics (in the same way as these are names of subjects of study or areas of expertise: zoology, biology, geology, philology, oncology, etc.).

In that sense 'ontology' refers to an investigation of the nature of being, what exists, why there is anything rather than nothing, constraints on possible worlds (e.g. could causation exist without space and time?), and perhaps whether what exists could be improved on or is the best of all possible worlds, etc.

I presume we are not going to spend much time on doing ontology in that old sense.
Note that in that sense 'ontology' is not a count noun (e.g. it has no plural and you can't easily refer to 'an ontology' or 'different ontologies'). However, like other 'ologies' it developed different uses, including uses as a count noun. E.g. 'geology' can be a name for an area of investigation, but can also refer to features of a part of the world, e.g. the Alps and the Himalayas have different geologies. (?) Ecology has also developed that way quite recently (damaging the ecology of X isn't damaging the science, but damaging the features of X described by ecology, the science).

B: Recently, philosophical usage of the word 'ontology' changed, to refer to specific conceptual frameworks. (I think this was a 20th century change, but I may be misinformed.) This change was partly inspired by Strawson's 1959 book 'Individuals: an essay in descriptive metaphysics', and also by the discovery by anthropologists and others (developmental psychologists?) that different people have different views about what exists or can exist (e.g. some do and some don't include souls of dead ancestors, tree spirits, transfinite ordinals, uncountable infinities, quarks, etc.). I.e. they have different ontologies.

Strawson distinguished

o Descriptive metaphysics: the task of expounding the conceptual/metaphysical framework *actually* in use by some community or by various communities (we could add: or various individuals at different stages of development, or various species)

from

o Revisionary metaphysics: the task of arguing about which is the *correct* framework, usually including the claim that we've got it wrong so far.

The products of descriptive metaphysics are offered as accounts of ontologies actually in use, not as accounts of how things have to be.

In this new usage (sense B) the word 'ontology' became a count noun referring (roughly) to the most general conceptual assumptions underlying a specific system of beliefs. In that sense there could be *different* ontologies, e.g. the ontology of the ancient Greeks, the ontology of Buddhists, a modern scientific ontology that includes genes, ecosystems, quarks, economic inflation, etc., an intuitionist or a platonist mathematical ontology, the capitalist economic ontology, etc. This sort of ontology can be a complex structure including several sub-ontologies (as our current scientific ontology does).

This second sense (B) is the one that I have mostly been using recently in talking about the kind of ontology required for CoSy (as described in my previous message).

C: Even more recently the word 'ontology' has been taken up by people in software engineering and AI, as people came to realise the following:

(a) Engineers designing complex systems need to think clearly about what sorts of things they are designing and what sorts of things their machines have to interact with, prevent, produce, maintain, etc., so they need to think about and, if possible, make explicit the ontology they use as designers. This became increasingly important as engineering moved beyond systems that can be characterised by sets of differential equations and the like.

and

(b) Insofar as these complex systems process information, designers need to specify *what* kinds of information a system can be expected to process, which semantic contents it may have, how the information will be represented, how it will be manipulated and used, etc. So they started using 'ontology' to refer to the ontology used by their products.
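Just to make sense (b) a little more concrete: here is a toy sketch (again with invented names, and not a proposal for CoSy) of the simplest kind of explicitly represented, machine-usable ontology fragment: a concept hierarchy with subtype/supertype links supporting inheritance of properties, in the spirit of the characterisation quoted earlier.

# Toy sketch of sense (b): an explicit, machine-usable ontology fragment --
# a concept hierarchy with subtype/supertype links and inherited properties.
# Names are invented for illustration; this is not a CoSy design.

SUPERTYPE = {                      # concept -> its supertype
    'graspable-object': 'physical-object',
    'block': 'graspable-object',
    'cup':   'graspable-object',
}

PROPERTIES = {                     # properties asserted directly of a concept
    'physical-object':  {'has-location', 'has-weight'},
    'graspable-object': {'affords-grasping'},
    'cup':              {'affords-containing'},
}

def all_properties(concept: str) -> set:
    """Collect the properties of a concept and of all its supertypes."""
    props = set()
    while concept is not None:
        props |= PROPERTIES.get(concept, set())
        concept = SUPERTYPE.get(concept)
    return props

print(sorted(all_properties('cup')))
# ['affords-containing', 'affords-grasping', 'has-location', 'has-weight']

Real ontology formalisms and tools offer far richer machinery than this, of course; the point is only that 'the ontology used by a product' can be an explicit, machine-manipulable structure rather than something left implicit in the code.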
Because developing either kind of ontology can be a difficult and in some cases very complex process, it was soon realised that designers need tools, techniques, and formalisms to specify *what* kinds of information a system can be expected to process, which semantic contents it may have, how the information will be represented, how it will be manipulated, etc. Some of the tools might also be used by the systems produced, e.g. ones that don't have a fixed ontology, but go on extending their ontology. (Outside AI this was connected with the growth of object-oriented programming languages and so-called object-oriented design -- both of which generated much confusion as well as useful programming techniques).

So engineers started moving away from informal descriptions and started producing formally specified ontologies for themselves to use (i.e. of type (a)) and for their machines to use (i.e. of type (b)), where the two could, of course, overlap in some cases.

D: Ontologies as formal structures

Ontologies of type (a) and (b) are often expressed informally using familiar human forms of representation such as natural language, diagrams of various sorts, tables, etc. But as part of the drive towards theoretical clarity and elegance, and the need for more automated tools for producing working systems, there was also a move towards requiring ontologies, even when produced by humans, to use a specific formalism with well defined conventions. So there is now a sub-community of researchers whose only experience of the word 'ontology' is in the context of that kind of science and engineering, so for them the word usually refers to a formal structure specifying an ontology. (E.g. they may say 'My ontology uses XML and requires a megabyte of filestore'.)

In parallel with all those developments, some people started producing more or less formal ontologies specific to their field of study or research, e.g. an ontology to describe the various stages, techniques, formalisms, products, processes, etc. involved in doing software engineering, or the ontology of biology. This kind of discipline meta-ontology can be very important for teaching a discipline, or for preventing reinvention of wheels among researchers, etc. That contrasted with producing ontologies for specific application domains, e.g. banking, weather forecasting, process control, etc.

FORM vs CONTENT OF SOME BIT OF THE WORLD

Sometimes ontologies are referred to as 'meta-models', because people sharing an ontology (general set of concepts) might use it to produce different competing models of the same domain: the ontology merely specifies types of things that are conceptually possible, leaving open alternative theories specifying what actually exists and happens. This distinction (which in the past I've labelled a distinction between the form of the world and its content) is very important in science, and often ignored in philosophy of science:

http://www.cs.bham.ac.uk/research/cogaff/crp/chap2.html

ONTOLOGY TOOLS

Because the tasks can be very difficult, especially for people not well trained in philosophy, software engineers and AI researchers have started developing (and re-inventing) more and more sophisticated methodologies and tools to aid the process of ontology development, especially collaborative development of domain ontologies. However, most of them (as far as I know) tend to assume a uniform representation (e.g.
some kind of logic) and, as GJ pointed out, that may be too restrictive for a project like CoSy, for the reasons I tried to expand on in my last message.

ONTOLOGY TOOLS FOR COSY?

If we don't find suitably general tools that already exist, we may need to develop our own tools for producing, maintaining and using ontologies suited to multi-level, multi-functional robot architectures.

[Lots more stuff can be found by giving search engines phrases like
    ontology "software engineering"
or
    ontology engineering
]

Later I'll put this stuff on the web.

Aaron
http://www.cs.bham.ac.uk/~axs/