School of Computer Science THE UNIVERSITY OF BIRMINGHAM CoSy project euCognition www.eucognition.org

A First Draft Analysis of Some Meta-Requirements
(or Meta-Functional Requirements?)
for Cognitive Systems in Robots
(An exercise in logical topography analysis.)

Aaron Sloman and David Vernon

Last updated: 16 Nov 2008; 25 Apr 2010; 15 Jul 2013; 25 Jul 2013; 1 Feb 2016 (reformatted)

This file can be referenced at
http://www.cs.bham.ac.uk/research/projects/cogaff/meta-requirements.html
An automatically generated PDF version is here.

This is also referenced on the Birmingham CoSy Project web site:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0701


Contents

  • Introduction and background
  • Towards a generic analysis
  • Meta-Requirements
  • Meta-requirements and behaviour envelopes
  • Overview of meta-requirements for intelligent systems
  • Other topics to be added (perhaps)
  • NOTE on software 'ilities'
  • NOTES
  • REFERENCES


Introduction and background

This is a contribution to the construction of a research roadmap for future cognitive systems, including intelligent robots, in the context of the euCognition network, and UKCRC Grand Challenge 5: Architecture of Brain and Mind (one of the grand challenges proposed by the UK Computing Research Committee as a result of a conference in 2002).

A meeting on the euCognition roadmap project was held at Munich Airport on 11th Jan 2007. Details of the meeting, including links to the presentations, are available online at http://www.eucognition.org/six_monthly_meeting_2.htm.

It is often assumed that research starts from a set of requirements and tries to find out how they can be satisfied. For long-term, ambitious scientific and engineering projects that view is mistaken. The task of coming up with a set of requirements that is sufficiently detailed to provide a basis for developing milestones and evaluation criteria is itself a hard research problem. This is so both in the context of (a) trying to produce systems to elucidate scientific questions about intelligent animals and machines, as in the UKCRC Grand Challenge 5 project, and (b) trying to advance long-term engineering objectives through advancing science, as in the EU's Framework 7 Challenge 2: "Cognitive Systems, Interaction, Robotics", presented by Colette Maloney here.

An elaboration of the EU FP 7 challenge is now available here:
ftp://ftp.cordis.europa.eu/pub/ist/docs/cognition/fp7-challenge2-background_en.pdf

An explanation of why specifying requirements is a hard problem, and why it needs to be done, along with some suggestions for making progress, can be found in this presentation:

http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0701
"What's a Research Roadmap For? Why do we need one? How can we produce one?"

Working on that presentation led to the realisation that certain deceptively familiar words and phrases frequently used in this context (e.g. "robust", "flexible", "autonomous") appear not to need explanation because everyone understands them, whereas in fact they have obscure semantics that need to be elucidated. Only then can we understand what their implications are for research targets. In particular, they need explanation and analysis if they are to be used to specify requirements and research goals, especially for publicly funded projects.

First draft analyses are presented below. In the long term we would like to expand and clarify those analyses, and to provide many different examples to illustrate the points made. This will probably have to be a collaborative research activity.

Following an early draft, David Vernon contributed a substantial expansion of the scope of this paper based on the Software Engineering research literature on 'ilities' mentioned below.

The 'ilities' (pronounced 'ill'+'it'+'ease') are things like 'flexibility', 'usability', 'extendability'. It seems that software engineers have been discussing them for some time and regard them as expressing 'non-functional' specifications. In contrast, we suggest they are higher-order (or schematic) functional specifications, as explained in this document.


Towards a generic analysis

Many of the words discussed below refer to what could be called "meta-requirements" (or possibly "schematic requirements"). What that means is that none of the labels is directly associated with a set of criteria for meeting the implied requirements. For instance, the meaning of 'robust' does not specify features (whether physical properties or behaviours) that would justify the application of the label. Rather the meaning of such a word is more abstract.

Instead of expressing a concept that specifies criteria for instances, a word like 'robust' or 'flexible' expresses a concept that specifies ways of deriving criteria or requirements when given a set of goals or functions. (Such a concept could be called a "meta-concept", a "higher-order concept" or a "schematic-concept".)

There are many words of ordinary language that are like that, as philosophers and linguists have noted. For example, if something is described as "big" you have no idea what size it is, whether you would be able to carry it, kick it, put it in your pocket, or even whether it is a physical object (for it could be a big idea, a big mistake, or a big price reduction). If it is described as a big pea, a big flea, a big dalmatian, or a big tractor, etc. you get much more information about how big it is, though the information remains imprecise and context-sensitive.

In that sense "big", when used in a requirement, specifies a meta-requirement: in order to determine what things do or do not satisfy the requirement, a meta-requirement M has to be applied to some other concept C, and often also W, a state of the world, so that the combination M(C, W) determines the criteria for being an instance in that state of the world. Without W, you get another meta-requirement or schematic requirement M(C) that still requires application to a state of the world to produce precise criteria. E.g. what counts as a big tree, or a big flea, can depend on the actual distribution of sizes of trees or fleas, in the environment in question.

Thus the combinations Big(Flea) and Big(Tree) determine different ranges of sizes; and further empirical facts (about the world) determine what counts as a normal size or a larger than normal size, in that particular state of the world, or geographical location. In another place the average size of fleas or trees might be much larger or much smaller.

It's more subtle than that, because sometimes in addition to C, the concept, and W, the state of the world, a goal or purpose G must also be specified, in order to determine what counts as "big enough" (e.g. a big rock in a certain context might be one that's big enough to stand on in order to see over a wall, independently of the range of sizes of rocks in the vicinity). Often we don't explicitly specify W or G, because most people can infer them from the context, and use what they have inferred to derive the criteria. Notice that such meta-requirements can be transformed in various ways, e.g. 'big enough', 'very big', 'not too big', 'bigger than that', etc. using syntactic constructs that modify requirements.
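
To make this more concrete, here is a minimal sketch in Python (all names, numbers and thresholds are invented purely for illustration, not part of any analysis defended here) treating 'big' as a higher-order function: it yields a usable test only after being applied to a concept C, a state of the world W (here the observed sizes of instances of C), and optionally a goal G.

    from statistics import mean, stdev
    from typing import Callable, Optional

    # Illustrative only: 'big' as a higher-order function M(C, W, G).
    # It returns a concrete test (a predicate on sizes) only once given a
    # concept C (here just a label), a world W (the sizes of instances of C
    # in the environment in question), and optionally a goal G.
    def big(concept: str,
            world_sizes: list,
            goal_threshold: Optional[float] = None) -> Callable[[float], bool]:
        if goal_threshold is not None:
            # 'Big enough' for a purpose: the goal fixes the criterion directly.
            return lambda size: size >= goal_threshold
        # Otherwise the criterion is relative to the local population, e.g.
        # noticeably larger than average for this kind of thing, in this place.
        threshold = mean(world_sizes) + stdev(world_sizes)
        return lambda size: size > threshold

    # Big(Flea) and Big(Tree) determine different ranges of sizes, because the
    # world supplies a different size distribution for each concept.
    is_big_flea = big("flea", [1.5, 2.0, 2.5, 3.0])     # millimetres
    is_big_tree = big("tree", [5.0, 12.0, 20.0, 30.0])  # metres
    print(is_big_flea(3.5))   # True: very large for a flea
    print(is_big_tree(3.5))   # False: tiny for a tree

    # With a goal G the criterion changes again: a rock is 'big enough' to
    # stand on if it exceeds the height needed to see over the wall.
    big_enough_rock = big("rock", [], goal_threshold=0.4)
    print(big_enough_rock(0.5))  # True

The same schematic definition yields quite different concrete criteria for fleas, trees and goal-relative uses such as 'big enough', which is what makes 'big' a meta-requirement rather than a requirement.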

The point that communication often uses words and phrases whose meaning has to be combined with non-linguistic information available to either or both of speaker and hearer is elaborated in more detail in this draft discussion paper
Spatial prepositions as higher order functions: And implications of Grice's theory for evolution of language.

Another example is 'efficient'. If you are told that something is efficient, you have no idea what it will look like, feel like, smell like, what it does, how it does it, etc. If it is described as an efficient lawnmower, or an efficient supermarket check-out clerk, or an efficient procedure for finding mathematical proofs, then that combination of meta-concept 'efficient' with a functional concept (e.g. 'lawnmower') will provide information about a set of tasks or a type of function, and what kinds of resources (e.g. time, energy, fuel, space, human effort, customer time) the thing uses in achieving those tasks or performing those functions. Someone who understands the word 'efficient' knows how to derive ways of testing whether X is efficient in relation to certain tasks or functions, by checking whether X achieves the tasks or functions in question well, while using (relatively) few of the resources required for the achievement. The object in question will not in itself determine the criteria for efficiency: the object in your shed may be an efficient lawnmower but not an efficient harvester. Or it could be an efficient doorstop in strong winds while being an inefficient lawnmower.

The word 'relatively' was added in parentheses because whether something is efficient sometimes depends on what the competition is -- just as whether something is big sometimes depends on what the competition is. Something that is efficient at one time may turn out to be highly inefficient later because much better versions have been developed. This is related to the meaning of "better", analysed in How to derive 'better' from 'is' (1969).
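
A similar sketch can be given for 'efficient' (again purely illustrative, with invented names and an arbitrary tolerance): the meta-concept combines a task specification and a measure of resource use with the currently available competition, so the derived test can change when better rivals appear.

    from typing import Callable, List

    # Illustrative only: 'efficient' as a meta-concept. Given a test for
    # achieving the task, a measure of resource use, and the available
    # competition, it returns a concrete efficiency test. The test is
    # relative: something efficient now may count as inefficient later,
    # once better rivals exist.
    def efficient(achieves_task: Callable[[dict], bool],
                  resource_cost: Callable[[dict], float],
                  competition: List[dict]) -> Callable[[dict], bool]:
        rivals = [resource_cost(c) for c in competition if achieves_task(c)]
        benchmark = min(rivals) if rivals else float("inf")
        def test(candidate: dict) -> bool:
            # Efficient: achieves the task using no more resources than the
            # best available rival (within a small tolerance).
            return (achieves_task(candidate)
                    and resource_cost(candidate) <= benchmark * 1.1)
        return test

    # The same object can pass one derived test and fail another, just as the
    # thing in the shed can be an efficient doorstop but an inefficient mower.
    mowers = [{"cuts_grass": True, "fuel_per_hour": 2.0},
              {"cuts_grass": True, "fuel_per_hour": 5.0}]
    is_efficient_mower = efficient(lambda m: m["cuts_grass"],
                                   lambda m: m["fuel_per_hour"],
                                   competition=mowers)
    print(is_efficient_mower(mowers[0]))  # True
    print(is_efficient_mower(mowers[1]))  # False: a much better rival exists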

There are many meta-concepts in ordinary language which are often not recognised as such, leading to pointless disputes about their meaning. But this paper deals only with a small subset relevant to requirements for intelligent machines.


Meta-Requirements

Our claim is that many of the words used apparently to specify requirements are actually labels for meta-requirements in the sense explained in the previous section. That means that they have to be combined with further information in order to generate actual requirements. For instance, a meta-requirement M may have to be applied to a concept C, defining some class of entities with a function or purpose, a state of the world W, and possibly also a goal G, for which instances of C are being chosen. Then the combination M(C,W,G) can determine a set of criteria for satisfying the meta-requirement.

For each meta-requirement M, the criteria will be determined in a specific way that depends on M. So different meta-requirements, such as 'robustness', 'flexibility', and 'efficiency', will determine specific criteria in different ways. Each one does so in a characteristic uniform way, just as "efficient" has roughly the same meaning (or meta-meaning) whether combined with "lawnmower", "proof procedure" or "airliner", even though in each case the tests for efficiency are different. Likewise "big" has the same meta-meaning when applied to "flea", "pea", "tree" and "sea", even though the size ranges are very different. (Though its use in connection with "idea" or "mistake" is more complex.)

It is a non-trivial task to specify the common meaning, or the common meta-requirement, for a word referring to meta-criteria for cognitive systems. So what follows is an incomplete first draft, which is liable to be extended and revised. This draft will need to be followed up later with detailed examples for each meta-requirement. We start by giving a list of commonly mentioned meta-requirements and provide a first draft 'high level' analysis for each of them.

Each of the meta-requirements is capable of being applied to some category of behaving system. This system may or may not have a function, or intended purpose, though in most cases there is a function, or a set of functions or purposes, assumed by whoever applies the label naming the meta-requirement. For example, if we talk about a domestic robot that deals flexibly with situations that arise, then we are presupposing a specific (though possibly quite general) function that the robot is intended to serve in those situations. So the meta-requirement, in combination with the function, allows us to derive specific requirements concerned with forms of behaviour, or more generally with kinds of competences that are capable of being manifested in behaviour, even if they are not actually manifested. I.e. the derived criteria are dispositional, not categorical.

This is closely related to the notions of "polymorphism" and "parametric-polymorphism" used in connection with object-oriented programming, where there are different classes of objects and certain functions, predicates or relations are capable of being applied to one or more class-instances at a time, with results that depend on the types of the instances (the parameters). See Sloman-Poly.

For example, the schema "X gave Y to Z" allows the variables X, Y and Z to be instantiated by different sorts of entity: e.g. X and Z can be humans, other animals, families, communities, corporations, nations, and what makes an instance of the schema true can depend in complex ways on what types of entity X, Y and Z are. Consider what happens when X is not a donor in the normal sense, but something abstract that gives an idea (Y) to a thinker or reader (Z).

It should be obvious that many of the meta-requirements below exhibit such parametric polymorphism, e.g. "X is safe for Y to use", "X can easily teach the use of Y to Z", where X is some abstract tool.

This concept of polymorphism is relevant to many meta-requirements, and also to philosophical analysis of many complex concepts, such as "consciousness" (as Gilbert Ryle noted in 1949).
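
For readers familiar with typed or object-oriented languages, the analogy can be shown in code. The sketch below (invented names, using Python's typing module) defines one schematic predicate that applies across many classes of entity; unlike the earlier sketch, where the criterion came from a separately supplied state of the world, here it is supplied by the type of the instance, i.e. by the parameter.

    from typing import Protocol

    # Illustrative only: parametric polymorphism. One schematic test applies
    # to any type providing the right 'parameters'; what the test amounts to
    # depends on what kind of thing the instance is.
    class Sized(Protocol):
        def typical_size(self) -> float: ...
        def size(self) -> float: ...

    def is_big(x: Sized) -> bool:
        # The comparison is the same for every type; the numbers compared
        # depend entirely on the type of x.
        return x.size() > 1.5 * x.typical_size()

    class Flea:
        def typical_size(self) -> float: return 2.0    # millimetres
        def size(self) -> float: return 3.5

    class Tree:
        def typical_size(self) -> float: return 15.0   # metres
        def size(self) -> float: return 3.5

    print(is_big(Flea()))  # True: big for a flea
    print(is_big(Tree()))  # False: small for a tree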


Meta-requirements and behaviour envelopes

All the meta-requirements for future robots discussed here assume that the functions of the robots define a collection of possible behaviours (or task+behaviour pairs). That collection will have an "envelope" (or "bounding envelope"), with the relevant behaviours lying within the envelope, while other possible behaviours, which are either beyond the machine's capabilities or never relevant to the goals or functions, lie outside it. We can use that idea as a framework for a first draft specification of at least a significant subset of meta-criteria.

The generic formula is:

Given such an envelope E for a set of behaviours, and a meta-criterion M, the application of M to E, written M(E), produces a modified specification for the set of behaviours -- e.g. expanding or contracting the set of behaviours, or the set of transitions between behaviours.

In other words, our meta-criteria are concerned with features of what the machine can do in relation to the envelope: e.g. how varied the behaviour transitions are within the envelope, and how the machine can extend or modify the envelope over time. That is, a meta-requirement transforms the specified behaviour envelope in a systematic way to produce a new set of functional requirements. What that means will differ according to what the set of behaviours and purposes is, and what the meta-criterion is.

The concrete requirements derived from the meta-requirements all relate to a space of circumstances in which behaviour can occur and a space of possible behaviours in those circumstances. Given a specification of the behaviour envelope, the notions of robustness, flexibility, etc. determine requirements for the behaviours and the envelope, but they do so in different ways.
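
As a very rough illustration of the generic formula, the sketch below models an envelope as a set of (situation, behaviour) pairs and each meta-requirement as a function from envelopes to envelope specifications. The particular readings given to 'robust' and 'flexible' are placeholders invented for the example, not the analyses this paper aims at; the point is only that different meta-requirements transform the same envelope in systematically different ways.

    from typing import Callable, FrozenSet, Tuple

    # Illustrative only: an envelope as a set of (situation, behaviour) pairs,
    # and a meta-requirement as a function M: Envelope -> Envelope that turns
    # one envelope specification into another.
    Envelope = FrozenSet[Tuple[str, str]]
    MetaRequirement = Callable[[Envelope], Envelope]

    def robust(envelope: Envelope) -> Envelope:
        # Placeholder reading: every situation already covered must also be
        # handled in a perturbed variant of that situation.
        return envelope | frozenset((situation + " (perturbed)", behaviour)
                                    for situation, behaviour in envelope)

    def flexible(envelope: Envelope) -> Envelope:
        # Placeholder reading: every behaviour in the envelope must be
        # available as an alternative in every covered situation.
        situations = {s for s, _ in envelope}
        behaviours = {b for _, b in envelope}
        return frozenset((s, b) for s in situations for b in behaviours)

    base: Envelope = frozenset({("doorway blocked", "re-plan route"),
                                ("object dropped", "pick it up")})
    print(sorted(robust(base)))    # original pairs plus perturbed variants
    print(sorted(flexible(base)))  # every situation/behaviour combination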

Some of the meta-requirements are concerned only with

  1. what happens within the envelope,

whereas others are concerned with

  2. possible changes to the envelope,

  3. the system's ability to make changes to the envelope without being 'led' to make them by an external influence such as a teacher, or

  4. the speed or other features of the changes to the envelope.

This will now be made clearer by showing how the meta-requirements differ in their implications.

Disclaimer regarding the analyses presented

No claim is made here that the analyses below provide definitions of the words as they are ordinarily used.

This is not a lexicographical exercise to determine what should go into a dictionary (though dictionary makers are welcome to make use of this). Rather it is an exercise in what has been labelled the study of 'Logical topography', which is a modified version of Gilbert Ryle's notion of 'Logical Geography'. The difference is explained in this Web document: Two Notions Contrasted: 'Logical Geography' and 'Logical Topography' Variations on a theme by Gilbert Ryle: The logical topography of 'Logical Geography'.

Roughly, 'logical topography' refers to the space within which a set of concepts can be carved out, and 'logical geography' refers to a particular way of carving out that space, which may correspond to how a particular community conceives of some aspect of reality. The logical topography supports the possibility of dividing things up in different ways with different tradeoffs, as different cultures divide up articles of furniture, or animals or plants in different ways, though they are all talking about the same underlying logical topography, whether they recognise it or not. Our logical topography is concerned with the variety of relationships between a machine and its envelope of possible behaviours, or the possible sequences of envelopes if the envelope can change over time.


Overview of meta-requirements for intelligent systems

We start with high level summaries of the meta-requirements in terms of behaviour envelopes.

......


Other topics to be added (perhaps).

The next lot require meta-semantic competences.

How should we decide which meta-requirements/dimensions are relevant to a long term human-like robotics project?
Compare: how could we decide which are relevant to various kinds of domestic animals?
Compare pets vs various kinds of working animals -- guard dogs, guide-dogs, animals for riding, animals to help with physical labour, animals that can get to places more easily and quickly, etc., animals that help with herding other animals, animals that perform and entertain....


NOTE on software 'ilities' (Added 22 Jan 2007)

David Vernon pointed out that some of the topics discussed here have also been discussed in the software engineering research community, under the heading of 'ilities' and other generic labels. Examples are these web sites:
  • Architecture Requirements are Ilities
    (The 'Software Architecture Notes' web site: includes a 'Starter List of "Ilities"')
    An "ility" is a characteristic or quality of a system that applies across a set of functional or system requirements. So, performance is an "ility" because it is applied against some of the functional or system requirements. Anything that can be expressed in the form "for a set of functional or system requirements, the system must fulfil them this way (this fast, this reliable, etc.)" is an "ility."

    ......

    It is important to find as many of these and describe them as accurately and as early as possible. Since they describe ways that sets of functional requirements must be satisfied, they are effort multipliers to develop. So, for example, if a set of functions have to be secured, then the effort to secure a single function must be multiplied across each of the functions to be secured.

  • Software's Best Kept Secret: 'ilities', by Luc K. Richard
    (Distinguishes external and internal ilities.)
    Many developers make the mistake of thinking that quality attributes -- commonly referred to as nonfunctional requirements, technical requirements or ilities -- are somewhat superfluous. Convinced that these nonfunctional requirements are not as critical to the end product as functional requirements, ilities are rarely documented or even understood.

  • Wikipedia's list of 'ilities'
    (Also referred to as 'non-functional' requirements.)
It turns out that searching Google for both "software" and "ilities" produces a very large number of documents.

We have not yet found a characterisation of the ilities like the one offered for meta-requirements in this discussion paper, namely in terms of functions that transform behaviour envelopes in a manner that can be specified at a level of abstraction that is independent of the actual behaviours. However, our specification of those transformations is still very informal.

Non-functional?
The characterisation of these requirements as 'non-functional' by some theorists seems to us to be mistaken. They are highly important for functionality, but they are higher-level characterisations that determine the specific form of functionality only in combination with additional information,

just as the map function, given a list, returns a new list only if it is also provided with an extra argument: e.g. given a list of numbers AND a function, such as sqrt,
    map(list, sqrt)
returns a list of the square roots of the original numbers. The fact that 'map' requires another function as an argument does not stop it being a function itself. It is merely a second-order function.
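
In a language with a built-in map (Python is used below; note that its map takes the function first), the point is easy to check: map needs a function among its arguments, yet it is a perfectly ordinary, if second-order, function itself.

    import math

    # map takes a function and a list; only the combination yields a new list.
    numbers = [1.0, 4.0, 9.0]
    print(list(map(math.sqrt, numbers)))  # [1.0, 2.0, 3.0]
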
The quotation from the 'Software Architecture Notes' web site above seems to be making a similar point, namely that the ilities (or what we have called 'meta-requirements') add further specification to a set of functional requirements (e.g. specifying the 'way' the requirements should be met).

A difference between the goals of this document and the software engineering discussions is that we have tried to discuss meta-criteria (ilities) that could be equally relevant to biological organisms and engineering artifacts, since the EU FP7 Challenge 2 is in part about biologically inspired systems, but not inspired at the level of mechanisms, as the phrase 'biologically inspired' often indicates.


NOTES

  1. For further discussion of meta-concepts, or higher-order concepts, see this draft discussion paper
    Spatial prepositions as higher order functions: And implications of Grice's theory for evolution of language.


  2. Note added 20 Jan 2007
    Since deciding to use the label 'meta-requirement' we have discovered that others already use it, sometimes in a different way from the above (e.g. to specify requirements for requirements) and sometimes in a similar way (i.e. to specify schematic requirements that only determine specific requirements in relation to other behavioural requirements).
    An example is in this paper on Service-Oriented Architecture (SOA):
SOA Quality and Governance: Satisfying the Metarequirement of Agility
    By Jason Bloomberg
    Posted: Aug. 24, 2006
    "Change Time Quality
    Traditional software quality management essentially consists of design time and deployment time activities. Basically, given the requirements, make sure that the software is as defect-free as possible given budget and schedule constraints, and then continually monitor the working software to make sure that it meets the requirements set out for it when you deploy it. That basic approach to quality is fine for organizations that know in advance what their requirements are, when those requirements are stable, and when the goal is simply to build software that meets those requirements.

    Such assumptions, however, are frequently false -- in many cases, requirements aren't fully developed and they change over time. Typically, one true goal of software is to respond to changes in requirements without extensive additional rework. SOA is a particularly effective approach in such situations, and the broad recognition that the build-to-today's-requirements approach to software is no longer effective is one of the primary motivations for SOA."


REFERENCES

http://www.scielo.cl/pdf/rfacing/v13n1/art08.pdf
Hernan Astudillo, Five Ontological Levels to Describe and Evaluate Software Architectures, in Rev. Fac. Ing. Univ. Tarapaca, vol. 13 no. 1, 2005, pp. 69-76

(Partly overlaps with this paper - not yet studied closely.)

Paper by Joel Moses (added 16 Nov 2008)

At The Workshop on Philosophy and Engineering (WPE'2008) in London, November 10-12 2008, Joel Moses presented a paper 'Toward an Ontology for Systems-related Terms in Engineering and Computer Science', closely related to this one. His two-page extended abstract is included in the workshop proceedings (unfortunately an MS Word 'doc' file; however, the freely available OpenOffice package converts that to PDF).

[Sloman-Poly]

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/family-resemblance-vs-polymorphism.html
Family Resemblance vs. Polymorphism A comparison:
Wittgenstein's Family Resemblance Theory vs. Ryle's Polymorphism and Polymorphism in Computer Science/Mathematics
Online discussion note (2011-2016)


See additional references in the main text.

More to be added


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham