This file can be referenced at
http://www.cs.bham.ac.uk/research/projects/cogaff/meta-requirements.html
An automatically generated PDF version is here.
This is also referenced on the Birmingham Cosy Project web site:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0701
A meeting on the euCognition roadmap project was held at Munich Airport on 11th Jan 2007. Details of the meeting, including links to the presentations, are available online at http://www.eucognition.org/six_monthly_meeting_2.htm.
It is often assumed that research starts from a set of requirements, and tries to find out how they can be satisfied. For long term ambitious scientific and engineering projects that view is mistaken. The task of coming up with a set of requirements that is sufficiently detailed to provide a basis for developing milestones and evaluation criteria is itself a hard research problem. This is so both in the context of (a) trying to produce systems to elucidate scientific questions about intelligent animals and machines, as in the UKCRC Grand Challenge 5 project, and (b) trying to advance long term engineering objectives through advancing science, as in the EU's Framework 7 Challenge 2: "Cognitive Systems, Interaction, Robotics" presented by Colette Maloney here.
An explanation of why specifying requirements is a hard problem, and why it needs to be done, along with some suggestions for making progress, can be found in this presentation:
Working on that presentation led to the realisation that certain deceptively familiar words and phrases frequently used in this context (e.g. "robust", "flexible", "autonomous") appear not to need explanation because everyone understands them, whereas in fact they have obscure semantics that need to be elucidated. Only then can we understand what the implications are for research targets. In particular, they need explanation and analysis if they are to be used to specify requirements and research goals, especially for publicly funded projects.
First draft analyses are presented below. In the long term we would like to expand and clarify those analyses, and to provide many different examples to illustrate the points made. This will probably have to be a collaborative research activity.
Following an early draft, David Vernon contributed a substantial expansion of the scope of this paper based on the Software Engineering research literature on 'ilities' mentioned below.
The 'ilities' (pronounced 'ill'+'it'+'ease') are things like 'flexibility', 'usability', 'extendability'. It seems that software engineers have been discussing them for some time and regard them as expressing 'non-functional' specifications. In contrast, we suggest they are higher-order (or schematic) functional specifications, as explained in this document.
Instead of expressing a concept that specifies criteria for instances, a word like 'robust' or 'flexible' expresses a concept that specifies ways of deriving criteria or requirements when given a set of goals or functions. (Such a concept could be called a "meta-concept", a "higher-order concept" or a "schematic-concept".)
There are many words of ordinary language that are like that, as philosophers and linguists have noted. For example, if something is described as "big" you have no idea what size it is, whether you would be able to carry it, kick it, put it in your pocket, or even whether it is a physical object (for it could be a big idea, a big mistake, or a big price reduction). If it is described as a big pea, a big flea, a big dalmatian, or a big tractor, etc. you get much more information about how big it is, though the information remains imprecise and context-sensitive.
In that sense "big", when used in a requirement, specifies a meta-requirement: in order to determine what things do or do not satisfy the requirement, a meta-requirement M has to be applied to some other concept C, and often also W, a state of the world, so that the combination M(C, W) determines the criteria for being an instance in that state of the world. Without W, you get another meta-requirement or schematic requirement M(C) that still requires application to a state of the world to produce precise criteria. E.g. what counts as a big tree, or a big flea, can depend on the actual distribution of sizes of trees or fleas, in the environment in question.
Thus the combinations Big(Flea) and Big(Tree) determine different ranges of sizes; and further empirical facts (about the world) determine what counts as a normal size or a larger than normal size, in that particular state of the world, or geographical location. In another place the average size of fleas or trees might be much larger or much smaller.
It's more subtle than that, because sometimes in addition to C, the concept, and W, the state of the world, a goal or purpose G must also be specified, in order to determine what counts as "big enough" (e.g. a big rock in a certain context might be one that's big enough to stand on in order to see over a wall, independently of the range of sizes of rocks in the vicinity). Often we don't explicitly specify W or G, because most people can infer them from the context, and use what they have inferred to derive the criteria. Notice that such meta-requirements can be transformed in various ways, e.g. 'big enough', 'very big', 'not too big', 'bigger than that', etc. using syntactic constructs that modify requirements.
Another example is 'efficient'. If you are told that something is efficient, you have no idea what it will look like, feel like, smell like, what it does, how it does it, etc. If it is described as an efficient lawnmower, or an efficient supermarket check-out clerk, or an efficient procedure for finding mathematical proofs, then that combination of meta-concept 'efficient' with a functional concept (e.g. 'lawnmower') will provide information about a set of tasks or a type of function, and what kinds of resources (e.g. time, energy, fuel, space, human effort, customer time) the thing uses in achieving those tasks or performing those functions. Someone who understands the word 'efficient', knows how to derive ways of testing whether X is efficient in relation to certain tasks or functions, by checking whether X achieves the tasks or functions in question well, while using (relatively) few of the resources required for the achievement. The object in question will not in itself determine the criteria for efficiency: the object in your shed may be an efficient lawnmower but not an efficient harvester. Or it could be an efficient doorstop in strong winds while being an inefficient lawnmower.
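The derivation pattern M(C, W) can be made vivid with a small programming sketch. The following Python fragment is purely illustrative: the World class, the size figures and the 1.5 factor are all invented for the example, and are not proposed as an analysis of 'big'.

```python
from statistics import mean

class World:
    """Toy state of the world W: observed sizes (in mm) per category."""
    def __init__(self, sizes):
        self.sizes = sizes                       # {category: [observed sizes]}

    def mean_size(self, category):
        return mean(self.sizes[category])

def big(category, world):
    """The meta-concept M applied to concept C and world W: returns a
    concrete test for instances.  The 1.5 factor is arbitrary."""
    threshold = 1.5 * world.mean_size(category)
    return lambda size: size > threshold

w = World({'flea': [1.0, 1.2, 0.8], 'tree': [8000.0, 12000.0]})
big_flea = big('flea', w)    # M(C, W) with C = flea
big_tree = big('tree', w)    # M(C, W) with C = tree
print(big_flea(2.0))         # True: 2 mm is big for a flea here
print(big_tree(2.0))         # False: 2 mm is tiny for a tree
```

An 'efficient' analogue would follow the same pattern, with the set of tasks and the typical resource use in that world as its extra parameters.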
There are many meta-concepts in ordinary language which are often not recognised as such, leading to pointless disputes about their meaning. But this paper deals only with a small subset relevant to requirements for intelligent machines.
For each meta-requirement M, the criteria will be determined in a specific way that depends on M. So different meta-requirements, such as 'robustness', 'flexibility', and 'efficiency', will determine specific criteria in different ways. Each one does so in a characteristic uniform way, just as "efficient" has roughly the same meaning (or meta-meaning) whether combined with "lawnmower", "proof procedure" or "airliner", even though in each case the tests for efficiency are different. Likewise "big" has the same meta-meaning when applied to "flea", "pea", "tree" and "sea", even though the size ranges are very different. (Though its use in connection with "idea" or "mistake" is more complex.)
It is a non-trivial task to specify the common meaning, or the common meta-requirement, for a word referring to meta-criteria for cognitive systems. So what follows is an incomplete first draft, which is liable to be extended and revised. This draft will need to be followed up later with detailed examples for each meta-requirement. We start by giving a list of commonly mentioned meta-requirements and provide a first draft 'high level' analysis for each of them.
Each of the meta-requirements is capable of being applied to some category of behaving system. This system may or may not have a function, or intended purpose, though in most cases there is one or a set of functions or purposes, assumed by whoever applies the label naming the meta-requirement. For example, if we talk about a domestic robot that deals flexibly with situations that arise, then we are presupposing a specific (though possibly quite general) function that the robot is intended to serve in those situations. So the meta-requirement, in combination with the function, allows us to derive specific requirements concerned with forms of behaviour, or more generally with kinds of competences that are capable of being manifested in behaviour, even if they are not actually manifested. I.e. the derived criteria are dispositional, not categorical.
This is closely related to the notions of "polymorphism" and "parametric-polymorphism" used in connection with object-oriented programming, where there are different classes of objects and certain functions, predicates or relations are capable of being applied to one or more class-instances at a time, with results that depend on the types of the instances (the parameters). See Sloman-Poly.
For example, the concept "X gave Y to Z" allows the variables X, Y and Z to be instantiated by different sorts of entity: e.g. X and Z can be humans, other animals, families, communities, corporations, or nations, and what makes an instance of the schema true can depend in complex ways on what types of entity X, Y and Z are. Consider what happens when X is not a donor in the normal sense, but something abstract that gives an idea (Y) to a thinker or reader (Z).
It should be obvious that many of the meta-requirements below exhibit such parametric polymorphism, e.g. "X is safe for Y to use", "X can easily teach the use of Y to Z", where X is some abstract tool.
This concept of polymorphism is relevant to many meta-requirements, and also to philosophical analysis of many complex concepts, such as "consciousness" (as Gilbert Ryle noted in 1949).
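For readers who know the programming notion, here is a minimal Python sketch of the parallel. The Human and Corporation classes and the two accounts of giving are invented for illustration: the point is that the same schema is applied to different types of instance, with type-dependent results.

```python
from functools import singledispatch

class Human:
    pass

class Corporation:
    pass

# 'X gave Y to Z' as a schema whose truth conditions depend on the
# type of X: the same relation, applied polymorphically.
@singledispatch
def gave(x, y, z):
    raise NotImplementedError(f"no account of giving for {type(x).__name__}")

@gave.register
def _(x: Human, y, z):
    # for a person, giving may involve a physical handover
    return f"{y} handed over in person to {z}"

@gave.register
def _(x: Corporation, y, z):
    # for a corporation, giving is a legal/accounting transaction
    return f"{y} transferred to {z} by deed of gift"

print(gave(Human(), "a book", "Ann"))
print(gave(Corporation(), "a grant", "a charity"))
```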
The generic formula is: a meta-requirement M, applied to a specification of the system's functions or purposes F and its envelope of possible behaviours E in a world W, yields a derived set of specific requirements M(F, E, W).
In other words, our meta-criteria are concerned with features of what the machine can do in relation to the envelope: e.g. how varied the behaviour transitions are within the envelope, and how the machine can extend or modify the envelope over time. E.g. a meta-requirement transforms the specified behaviour envelope in a systematic way to produce a new set of functional requirements. What that means will differ according to what the set of behaviours and purposes is, and what the meta-criterion is.
The concrete requirements derived from the meta-requirements all relate to a space of circumstances in which behaviour can occur and a space of possible behaviours in those circumstances. Given a specification of the behaviour envelope, the notions of robustness, flexibility, etc. determine requirements for the behaviours and the envelope, but they do so in different ways.
Some of the meta-requirements are concerned only with the variety of behaviours available within a fixed envelope, whereas others are concerned with ways of extending or modifying the envelope itself, or with the speed and ease with which such extensions and modifications can be produced.
This is not a lexicographical exercise to determine what should go into a dictionary (though dictionary makers are welcome to make use of this). Rather it is an exercise in what has been labelled the study of 'Logical topography', which is a modified version of Gilbert Ryle's notion of 'Logical Geography'. The difference is explained in this Web document: Two Notions Contrasted: 'Logical Geography' and 'Logical Topography' Variations on a theme by Gilbert Ryle: The logical topography of 'Logical Geography'.
Roughly, 'logical topography' refers to the space within which a set of concepts can be carved out, and 'logical geography' refers to a particular way of carving out that space, which may correspond to how a particular community conceives of some aspect of reality. The logical topography supports the possibility of dividing things up in different ways with different tradeoffs, as different cultures divide up articles of furniture, or animals or plants in different ways, though they are all talking about the same underlying logical topography, whether they recognise it or not. Our logical topography is concerned with the variety of relationships between a machine and its envelope of possible behaviours, or the possible sequences of envelopes if the envelope can change over time.
Robustness can be construed as resistance to dysfunctional perturbation of behaviour within an envelope of possible states and processes in which the system is behaving. Exactly what that means in each case will depend on what functional behaviour is, what the environment is, what sorts of causes of perturbation or disruption can occur, etc. Only when those have been specified can we derive the specific requirements for a system to be robust.
Thus the requirements for a robust robot vacuum cleaner could be concerned with its ability to cope with difficult configurations of furniture as well as a range of possible mechanical deficiencies, whereas a robust robot guide for a blind person would have far more complex requirements concerned with obstacles on pavements (side-walks to Americans), traffic conditions, kerb heights, other pedestrians, and the ability to detect the need for re-charging batteries before moving too far from the nearest charging point.
It seems that all these cases of robustness are concerned with the number of 'defensive' transitions between behaviours available for the system within its envelope of possible behaviours, and the consequences of those transitions in overcoming or preventing problems.
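One way to make this concrete, purely as an illustrative sketch: represent the behaviour envelope as a graph of states and available transitions, and count the transitions leading out of perturbed states. The toy vacuum-cleaner states below, and the crude counting metric, are invented for the example and are not offered as a definition of robustness.

```python
# Behaviour envelope as a directed graph: nodes are states, edges are
# available behaviour transitions.  States outside 'functional' are
# perturbed; a 'defensive' transition leads back towards function.

envelope = {
    'cleaning':          ['stuck_on_rug', 'low_battery'],
    'stuck_on_rug':      ['reverse_and_retry'],     # defensive option
    'reverse_and_retry': ['cleaning'],
    'low_battery':       ['return_to_charger'],     # defensive option
    'return_to_charger': ['cleaning'],
}
functional = {'cleaning', 'reverse_and_retry', 'return_to_charger'}

def defensive_transitions(envelope, functional):
    """Count transitions available from perturbed states: one crude
    indicator of how many ways the system has of recovering."""
    return sum(len(succs) for state, succs in envelope.items()
               if state not in functional)

print(defensive_transitions(envelope, functional))   # 2
```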
How this extension is produced can vary. The triggers and mechanisms for extending competence beyond the old envelope will arise from new contexts of different sorts. Contexts that trigger extension can include: explicit training by a teacher; instruction in some high level language (e.g. something like a human language, use of maps or diagrams, or possibly some more formal and restricted notation); training by example, which may use the system's ability to imitate observed behaviours (a much more subtle and complex capability than many who discuss imitation, mirror neurons, etc. appreciate); or the system itself doing retrospective analysis of its own poor or failed performances (self-debugging).
The changes in competence may involve either persistence or expansion of function. In the former case no new tasks or goals can be achieved, but they can be achieved in a wider range of contexts, or in the face of a wider range of obstacles or difficulties. In the latter case (expanded function) the system acquires the ability to perform new kinds of tasks or achieve new kinds of goals.
As indicated above the process that produces the change may or may not involve explicit intervention by a user: that is part of what determines whether the system is autonomous or not, a notion discussed below.
These competence-extending developments can sometimes include combining old competences in new ways, e.g. noticing that the functions of a missing object with a particular function (e.g. screwdriver) can be provided, albeit imperfectly, by something else (e.g. a knife, or a coin), or noticing that a new type of task can be achieved by giving an old object a new function, e.g. using a big book as a step to reach a high shelf easily. (Note: some people would refer to these cases as 'creative'.)
In some cases the change makes use of the pre-existing ontology used by the robot for categorising goals, objects, situations, and processes, and merely depends on the robot learning a new correlation, e.g. that doing action A in situation S can achieve a goal of type G. Action A will then be selected in situations where it would previously not have been selected.
In other cases the changes require kinds of learning that extend the system's ontology by adding new sub-categories as a result of finding empirically, or being told, that it is useful to distinguish those sub-categories. For instance, a robot for a blind person may notice that it can define sub-categories within its action repertoire that are correlated with indications of discomfort or difficulty from the person and other sub-categories that produce expressions of pleasure or gratitude. On that basis the robot may attempt to avoid the former types of actions when there is an alternative and to select the latter types when they are available.
Or it may learn to distinguish states of the person that correlate with different preferences, e.g. wishing to move more slowly when tired or when in unrecognised surroundings. In some cases the learning involves noticing the possibility of grouping things into a new super-category. E.g. bricks, wooden blocks, large books, and small step-ladders may be grouped into a category of "things useful for getting at high shelves by stepping on them". (AI work on learning sub-categories and super-categories in various kinds of training goes back at least 30 years. The same mechanisms could work without explicit training.)
More generally, the ability to create new categories can be used in detecting patterns in sequences of things that occur, patterns expressible as generalisations that can be used in subsequent predictions and decisions.
The discovery of useful new sub-categories and super-categories presupposes that the existing ontology of the robot and the formalisms it uses already include means of defining the new categories. So the learning involves realisation of potential that was already implicit in the system's knowledge, even though specific types of events were required to trigger the realisation. Insofar as the sub-categories of objects, actions, situations, etc. are related to things the robot can do and ways in which its goals can be achieved or its functions fulfilled, this kind of learning includes learning of new affordances.
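A toy Python sketch of the two kinds of learning just described (the categories, attributes and statistics are invented for illustration): a new correlation is recorded over the existing ontology of actions, situations and goals, and a new super-category is defined using attributes the ontology already contains.

```python
from collections import defaultdict

record = defaultdict(lambda: [0, 0])   # (action, situation) -> [successes, trials]

def observe(action, situation, achieved_goal):
    """Record a new correlation over the existing ontology."""
    stats = record[(action, situation)]
    stats[1] += 1
    if achieved_goal:
        stats[0] += 1

def choose(actions, situation):
    """Prefer actions with the best observed success rate (smoothed)."""
    return max(actions, key=lambda a: (record[(a, situation)][0] + 1) /
                                      (record[(a, situation)][1] + 2))

# A derived super-category, definable within the pre-existing ontology:
# 'steppable' groups bricks, blocks, big books, small step-ladders, etc.
def steppable(obj):
    return obj['rigid'] and obj['flat_top'] and obj['height_cm'] >= 10

observe('slow_down', 'person_tired', achieved_goal=True)
print(choose(['slow_down', 'speed_up'], 'person_tired'))              # 'slow_down'
print(steppable({'rigid': True, 'flat_top': True, 'height_cm': 12}))  # True
```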
A further kind of flexibility, which requires more detailed discussion, would involve modification of the set of goals or functions of the machine, either under the control of users or as a result of the system having motive generators that enable it to acquire new kinds of motives (as discussed in Chapter 10 of The Computer Revolution in Philosophy (1978)).
NOTE (b):
The specification of 'Target outcome (a)' for IPs and STRePS on page 3 of Colette Maloney's slides includes two criteria, of which the first, namely artificial systems that:
covers both robustness and flexibility as defined here.
The second criterion might be given a label something like 'usability', 'naturalness' or even 'sociability', namely systems that:
This remains a meta-criterion that only specifies actual criteria when information about the function or purpose of the system has been provided.
An example of competence extension could be the machine noticing the possibility of combining several objects or actions in a new way to achieve some goal or provide some new function. This typically requires the ability to construct, manipulate and compare representations of alternative combinations of objects or actions without actually combining the objects or performing the actions.
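A minimal sketch of what such consideration of combinations might look like computationally, with invented actions and a toy state representation: sequences of actions are tried out on representations of states, not by acting in the world.

```python
# Toy forward search over short action sequences.  Each 'action' is a
# function from a state-representation to a new state-representation,
# so alternatives can be constructed and compared without acting.

actions = {
    'move_book_to_shelf': lambda s: {**s, 'book_at_shelf': True},
    'climb_on_book':      lambda s: {**s, 'raised': s.get('book_at_shelf', False)},
    'reach':              lambda s: {**s, 'has_item': s.get('raised', False)},
}

def plan(state, goal, depth=3):
    """Search for a sequence of actions whose *represented* outcome
    satisfies the goal, up to a small depth bound."""
    if goal(state):
        return []
    if depth == 0:
        return None
    for name, effect in actions.items():
        rest = plan(effect(state), goal, depth - 1)
        if rest is not None:
            return [name] + rest
    return None

print(plan({}, lambda s: s.get('has_item', False)))
# ['move_book_to_shelf', 'climb_on_book', 'reach']
```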
In some cases creativity may involve extending the ontology by introducing new concepts that are not definable in terms of the pre-existing ontology and formalism. This is deemed impossible by the explicit or implicit theories of many philosophers and AI researchers, but has clearly happened many times in human history and in individual human development. The claim in Fodor's book The language of thought, that all such concepts are explicitly definable in terms of some innate language that is already present in a newborn infant is entirely without foundation.
A type of robot that is to be used in a wide variety of cultures, in many different households, over many years, will require the ability to extend its ontology from time to time, not simply by adding new explicitly definable sub-categories, but by introducing a new explanatory theory in which there are new theoretical terms that are not definable using pre-existing categories, as has happened many times in the history of science and mathematics. For a robot helper this could include coming up with new theories about mental states and processes in humans which explain otherwise unpredictable changes in their behaviour, or new theories about unobservable properties of different kinds of materials or kinds of foods, which explain some of their effects, or new concepts and theories related to new developments in household goods, games, medicines, diseases, legal requirements, etc.
The ability to extend an ontology in ways that go beyond formulating definitions expressible using an old ontology has been demonstrated many times in the history of human science and culture.
The changes of behaviour or of the envelope of behaviours mentioned in connection with robustness, flexibility, and creativity may be fast or slow. It seems that 'agility' is often used to refer to the speed with which such accommodations can occur: the faster they occur the higher the agility. Note that this is orthogonal to the quality and significance of the changes. For example, we would not wish a flexible robot vacuum cleaner to require thousands of training examples to discover the need to avoid a type of situation that causes it problems.
A related notion (or dimension) of 'agility' would refer to the ease with which users can bring about the changes required for flexibility and creativity.
Agility, and more generally speed, may or may not come with the cost of reduced efficiency (e.g. faster consumption of resources, increased wear and tear).
So one sort of autonomy, which we could call 'minimal autonomy', is concerned with the ability of a system to be left to get on with its task without anyone constantly overseeing and taking decisions for it. A typical automobile does not have that sort of autonomy though some tram and rail services do. Airliners may have it for parts of their flights, but not necessarily through all stages.
More significant kinds of autonomy include the system having the ability to change long term goals, short term goals, preference criteria, ways of resolving conflicts, ways of deriving means from ends, and all the other things listed in connection with robustness, flexibility and creativity. From this viewpoint each of robustness, flexibility and creativity, enhances autonomy, in different ways and to different extents.
Asimov's laws of robotics were proposed as principles for limiting the autonomy of machines, including robots, but since they were first proposed, many difficulties and objections have been pointed out.
Note that in some cases 'autonomy' refers not to capabilities of the system but to permissions that have been granted to it. So something that is perfectly capable of taking certain decisions may be forbidden from doing so without consulting a human. That would be an extrinsic (organisational) reduction of autonomy.
to be continued/expanded.
For more on sharing sensors, motors and other subsystems between sub-functions in a complex architecture see The Mind as a Control System (1993)
Moreover, the AI researchers who defend the requirement of common sense do not always realise that it too is a meta-criterion, because the detailed requirements will be different for a nursery-school teacher, a car mechanic, a chicken farmer, a house-builder, a chef, or a researcher in mathematics. However, insofar as there is a collection of widely relevant common sense demands that arise out of acting in a 3-D physical environment, we can see that the common-sense requirement has a complex structure.
For some examples of common-sense challenges see The Common Sense Problem Page, for example the egg-cracking problem.
Challenges close to the common sense required in a kitchen are posed in this crockery manipulation scenario. Some examples relevant to what a young child learns about manipulating objects are in Orthogonal Recombinable Competences Acquired by Altricial Species (Blankets, string, and plywood).
There are many more in McCarthy's unpublished paper 'The well designed child', and other papers on his web site. Roboticists need to heed many of the comments in that paper, including
"the world is not structured in terms of human input-output relations"
"Animal behavior, including human intelligence, evolved to survive and succeed in this complex, partially observable and very slightly controllable world. The main features of this world have existed for several billion years and should not have to be learned anew by each person or animal."
Exactly how this is achieved could vary according to the nature of the task. In some cases it might involve training a neural net controller to take over from a planner and plan-execution module.
When that happens the planning and plan-execution sub-mechanisms may be able to perform additional tasks concurrently with the task 'taken over' by the neural net. In particular, complex and difficult novel tasks might be performed slowly by one mechanism while another mechanism rapidly executes some old and familiar actions.
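A schematic sketch of such a hybrid arrangement (the policy, the planner and the confidence threshold are stand-ins invented for the example, not a real design): a fast learned controller handles familiar situations, leaving the slow deliberative mechanism free for novel tasks.

```python
class Controller:
    """Dispatch between a fast learned policy and a slow planner."""
    def __init__(self, policy, planner, confidence_threshold=0.9):
        self.policy = policy            # fast, e.g. a trained network
        self.planner = planner          # slow, deliberative
        self.threshold = confidence_threshold

    def act(self, situation):
        action, confidence = self.policy(situation)
        if confidence >= self.threshold:
            return action               # fluent, 'taken over' behaviour
        return self.planner(situation)  # fall back to slow deliberation

fast = lambda s: ('grasp_cup', 0.95) if s == 'familiar kitchen' else ('?', 0.1)
slow = lambda s: 'deliberate plan for ' + s
c = Controller(fast, slow)
print(c.act('familiar kitchen'))       # 'grasp_cup' via the fast route
print(c.act('novel cluttered room'))   # falls back to the planner
```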
Other kinds of fluency could result from caching of previous results, e.g. storing generalised and flattened parse trees for frequently encountered sentence forms, or phrase forms. Some kinds of fluency are concerned with external behaviour involving control of actions, whereas others are concerned with modifying internal information processing, such as parsing, planning, checking inferences, doing calculations.
These can all be seen as contributions to efficiency, or agility (speed) mentioned earlier, but they can be more than that, insofar as smoothness of behaviours is also increased.
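The caching idea mentioned above can be sketched with a memoised stand-in for a parser (the 'parse tree' returned below is only a placeholder for a real analysis):

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def parse(sentence_form: str):
    """Stand-in for an expensive parser; with the cache, a frequently
    encountered form is analysed once and thereafter just looked up."""
    return tuple(sentence_form.split())   # placeholder 'parse tree'

parse("the robot fetched the cup")   # computed on first encounter
parse("the robot fetched the cup")   # retrieved from the cache
```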
Different sorts of semantic competence can be distinguished, e.g. having the ability only to refer to what is currently being sensed or done, being able to refer to past or future events, states, and processes, being able to refer to things that cannot be sensed or acted on, and being able to refer to things that themselves refer or have semantic contents (second-order, third-order, or higher-order intentionality), etc.
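The nesting involved in higher-order intentionality can be sketched as a recursive data structure (an illustrative toy, not a theory of intentionality):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Refers:
    """A referring state whose content may itself be a referring state."""
    subject: str
    content: Union[str, 'Refers']

    def order(self) -> int:
        """1 for reference to a non-referring thing; +1 per nesting."""
        return 1 if isinstance(self.content, str) else 1 + self.content.order()

belief = Refers('Ann', Refers('Bob', Refers('the cat', 'the food bowl')))
print(belief.order())   # 3: Ann thinks Bob believes the cat wants the bowl
```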
Debates on the topic include
See:
......
......
It is important to find as many of these as possible and to describe them as accurately and as early as possible. Since they describe ways in which whole sets of functional requirements must be satisfied, they are effort multipliers to develop. So, for example, if a set of functions has to be secured, then the effort needed to secure a single function must be multiplied across each of the functions to be secured.
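A trivial worked example of the multiplier point, with invented figures:

```python
# An ility such as 'securability' must be applied to every function in
# the system, so its cost scales with the number of functions.
functions = ['navigation', 'speech', 'grasping', 'charging']
effort_to_secure_one = 5                       # person-days, say
total = len(functions) * effort_to_secure_one  # multiplied across the set
print(f"{total} person-days, not {effort_to_secure_one}")   # 20, not 5
```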
We have not yet found a characterisation of the ilities like the one offered for meta-requirements in this discussion paper, namely in terms of functions that transform behaviour envelopes in a manner that can be specified at a level of abstraction that is independent of the actual behaviours. However, our specification of those transformations is still very informal.
Non-functional?
The characterisation of these requirements as 'non-functional' by some theorists seems to us to be mistaken. They are highly important for functionality, but they are higher-level characterisations that determine the specific form of functionality only in combination with additional information, just as a higher-order function determines a specific result only when given its remaining arguments. For example, map(list, sqrt) returns a list of the square roots of the numbers in the original list. The fact that 'map' requires another function as argument does not stop it being a function itself. It is merely a second-order function.
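In Python the same second-order function exists as the built-in 'map', though it takes the function as its first argument, the reverse of the order used in the notation above:

```python
from math import sqrt

# 'map' is a second-order function: it takes another function (sqrt)
# as an argument, yet is itself an ordinary, fully functional function.
numbers = [1, 4, 9, 16]
print(list(map(sqrt, numbers)))   # [1.0, 2.0, 3.0, 4.0]
```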
A difference between the goals of this document and the software engineering discussions is that we have tried to discuss meta-criteria (ilities) that could be equally relevant to biological organisms and engineering artifacts, since the EU FP7 Challenge 2 is in part about biologically inspired systems, though not necessarily inspired at the level of mechanisms, which is what the phrase 'biologically inspired' often indicates.
"Such assumptions, however, are frequently false -- in many cases, requirements aren't fully developed and they change over time. Typically, one true goal of software is to respond to changes in requirements without extensive additional rework. SOA is a particularly effective approach in such situations, and the broad recognition that the build-to-today's-requirements approach to software is no longer effective is one of the primary motivations for SOA."
(Partly overlaps with this paper - not yet studied closely.)
Paper by Joel Moses (added 16 Nov 2008)
More to be added
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham