Various attempts have been made to formulate a theory of innate language universals to explain the ability of humans to learn any language if exposed to it in the right way at the right time, an ability not shared with any other species.
However, the proposal is highly controversial, and the empirical evidence in support of it is weak or contested. A paper by Nicholas Evans and Stephen Levinson (2009) claims that all of the claimed universal features have exceptions in human languages and that
"While there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition."
We have been developing an approach, linking philosophy, AI/Robotics and Biology, which we think offers an alternative way of addressing questions about linguistic universals, and complements the work reported in the target article.
The key idea is this: consider the competences of some non-human animals and pre-verbal children, especially competences involving manipulation of complex 3-D structures that permit vast combinatorial variation (e.g. the varieties of sub-processes involved in nest-building; in hunting, catching, and eating prey; and in using hands to pick up food and manipulate 3-D objects, coping with various sizes, shapes, spatial relationships and materials with different properties -- different kinds of stuff). If we look very closely at such competences then, from a robot designer's standpoint, it is very hard to see how they could exist without the use of internal information structures linked to and used by mechanisms of perception, goal formation, planning, reasoning, plan execution, action control, and recovery from failures.
Some of the implications are mentioned briefly in
Computational Cognitive Epigenetics
Behavioral and Brain Sciences, 30, 4, 2007, pp. 375--6
(Commentary on Jablonka and Lamb, Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life),
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#glang
and spelled out more fully in this PDF presentation:
Evolution of minds and languages. What evolved first and develops first in children:
Languages for communicating, or languages for thinking (Generalised Languages: GLs)
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
Although many details regarding the internal forms of representation used by other species and pre-verbal children are still unknown, there seem to be the following requirements to be met by any proposed form of representation (information encoding) that can explain the observed competences. We call such forms of representation Generalised Languages (GLs) because they have so much in common with the forms that are normally called languages, despite two main differences, namely (a) GLs are used for internal purposes (e.g. contents of perception, expressing goals, reasoning, planning, controlling actions, learning) rather than for communication with other individuals, and (b) they need not take linear formats nor be composed only of discrete elements. Note that human sign languages illustrate (b).
Requirements for a GL:
The requirements will include uses of GLs for self-monitoring, self-evaluation, self-control, etc.
These forms of representation (mainly structures in virtual machines, not physical structures) need not make use of linear sequences of discrete units, e.g. because they can use other structures, mostly still unknown, perhaps some of them more like maps, diagrams, networks, though possibly more abstract and flexible ... and the ways they are combined are richer than the composition of sequences of words.
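As a crude illustration of the contrast (a sketch under our own assumptions; the entities and relations are invented for the example, not taken from any proposed theory), the same information can be encoded as a linear sentence or as a map-like network from which further relations can be read off by traversing the structure:

    # Sketch (hypothetical): the same information as a linear sentence and as a
    # map-like network. In the network, conclusions not explicitly stated can be
    # derived by traversing the structure rather than re-parsing a word sequence.

    sentence = "the nest is above the fork, and the fork is on the branch"

    scene = {
        "nest": {"above": "fork"},
        "fork": {"on": "branch"},
    }

    def path(entity, scene):
        """Follow stored relations outward from an entity (toy traversal)."""
        steps = []
        while entity in scene:
            relation, target = next(iter(scene[entity].items()))
            steps.append((entity, relation, target))
            entity = target
        return steps

    print(path("nest", scene))
    # -> [('nest', 'above', 'fork'), ('fork', 'on', 'branch')]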
For a partial analysis of ontological requirements see
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#prague09
Ontologies for baby animals and robots
(Nobody really knows how many forms of virtual machinery are implemented in animal brains.)
For instance, if juxtapositions are geometric or topological rather than merely applicative, that allows richer interactions between substructures in a representation. E.g., if a representation includes (representations of) two gear wheels in the same plane, the implications of rotating one of them will depend on whether the two are represented with teeth meshing or not -- a minor representational change with major inferential consequences.
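To illustrate from a designer's standpoint, here is a minimal sketch (purely hypothetical: the class names and the propagation rule are our assumptions, not anything in the target article) in which the inference "rotating A counter-rotates B" becomes available only when the meshing relation is part of the representation:

    from dataclasses import dataclass, field

    @dataclass
    class Gear:
        name: str
        teeth: int

    @dataclass
    class GearAssembly:
        gears: list
        meshed: set = field(default_factory=set)   # pairs of gear names whose teeth mesh

        def mesh(self, a, b):
            self.meshed.add(frozenset((a, b)))

        def rotations(self, driver, direction):
            """Propagate rotation: meshed gears turn in opposite directions.
            Returns {gear_name: +1 (clockwise) or -1 (anticlockwise)}."""
            result = {driver: direction}
            frontier = [driver]
            while frontier:
                g = frontier.pop()
                for pair in self.meshed:
                    if g in pair:
                        (other,) = pair - {g}
                        if other not in result:
                            result[other] = -result[g]   # meshing reverses direction
                            frontier.append(other)
            return result

    asm = GearAssembly(gears=[Gear("A", 20), Gear("B", 40)])
    # Whether the inference goes through depends on the represented spatial relation:
    asm.mesh("A", "B")               # teeth meshing: rotating A forces B to counter-rotate
    print(asm.rotations("A", +1))    # {'A': 1, 'B': -1}
    # Without the mesh("A", "B") step, nothing follows about B's rotation.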
This is something like a set of requirements for languages of various sorts, not just for human languages used for communication. Because this requires the notion of "language" to be generalised, we now call them Generalised Languages, or GLs. In particular, the GLs that evolved first in organisms, and which develop first in young humans, are not used for communication.
It is important not to think of the instances of such languages as physical structures (as suggested by the phrase "Physical symbol system" used by A. Newell and H.A. Simon). Rather they are structures in virtual machines that are implemented in physical machines.
For an introduction to some of the important points about virtual machines in brains and computers see
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#wpe08
Virtual Machines in Philosophy, Engineering & Biology
http://www.cs.bham.ac.uk/research/projects/cogaff/09.html#vms
What Cognitive Scientists Need to Know about Virtual Machines
Paper for CogSci'09 (Needs expanding)
If correct, all this potentially has profound importance for human communicative language, since all human languages used for communication would then develop out of, build on, and make use of these GLs. (E.g. parsing processes would need them, as would information learnt about syntactic rules!) We suspect further investigation will reveal more complex collections of requirements, derived from the structures and processes that can exist in the environment and the kinds of ways in which animals (and robots) can relate to and interact with the environment (including other intelligent systems).
Instead of a rigid set of universals, this will be a fairly complex set of requirements, different subsets of which may be met in different organisms -- or the same organism at different stages of development -- using different designs, including different forms of representation. We think these ideas can be extended to include human (and possibly other) communicative languages, including, for instance, artificial languages that might be developed for machines to use.
Then instead of a fixed set of features allegedly common to all languages, there will be a complex set of possible requirements to be met by a human language, not all of which will be met by all human languages. E.g. humans wanting to do mathematics or engineering collaboratively will have requirements that are not necessarily met by all adult languages, and almost certainly are not met by the language developed by typical five-year-old humans. (E.g. how many five-year-olds could understand or even express the mathematical definition of the limit of a sequence, which requires two universal quantifiers and one existential quantifier with scopes appropriately nested -- and more besides?)
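For readers who want the quantifier structure spelt out, the standard definition can be written (in LaTeX notation) as:

    % The sequence (a_n) converges to the limit L if and only if:
    % for every epsilon > 0 there exists N such that for all n > N, |a_n - L| < epsilon
    \lim_{n\to\infty} a_n = L
    \quad\iff\quad
    \forall \epsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n > N : \; |a_n - L| < \epsilon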
Likewise, languages for describing concurrent processes that are synchronised have different requirements from languages for describing complex spatial structures. (Compare programming languages for specifying multi-processing operating systems, etc. with CAD formalisms in computers: both are languages used by humans as well as by machines.)
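As a toy contrast (our own sketch; the primitives shown are illustrative assumptions, not anything proposed in the target article), the first fragment below needs synchronisation primitives, while the second needs geometric primitives and relations:

    # Toy contrast (hypothetical): describing synchronised concurrent processes
    # requires quite different primitives from describing spatial structure.
    import threading

    # (1) Concurrency: ordering/synchronisation primitives matter.
    ready = threading.Event()

    def producer():
        # ... compute something ...
        ready.set()                  # signal: result available

    def consumer():
        ready.wait()                 # block until the producer has signalled
        # ... use the result ...

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t2.start(); t1.start(); t1.join(); t2.join()

    # (2) Spatial structure: shapes, sizes and spatial relations matter.
    assembly = {
        "plate":  {"shape": "box", "size": (100, 100, 5), "at": (0, 0, 0)},
        "pillar": {"shape": "cylinder", "radius": 10, "height": 50,
                   "on_top_of": "plate"},    # a geometric/topological relation
    }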
So the missing 'Universal' could be a species-neutral partial ordering on sets of requirements, specifying which subsets are necessary for other subsets to be developed, or in some cases which alternative subsets are sufficient for a new competence, and those requirements will constrain (though not uniquely determine) designs for forms of representation, supporting mechanisms, and architectures.
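A minimal sketch of what such a partial ordering might look like, under our own (entirely hypothetical) choice of competences and requirement-sets:

    # Sketch (hypothetical): prereqs maps a competence to the ALTERNATIVE
    # requirement-sets that suffice for it -- a partial ordering on sets of
    # requirements rather than a fixed list of universals.
    prereqs = {
        "planning":      [{"percept-structures", "goal-representation"}],
        "communication": [{"planning", "other-agent-model"},
                          {"imitation", "shared-attention"}],   # alternative routes
    }

    def achievable(competence, available):
        """A competence is achievable if every member of SOME sufficient
        requirement-set is either directly available or itself achievable."""
        if competence in available:
            return True
        return any(all(achievable(r, available) for r in reqset)
                   for reqset in prereqs.get(competence, []))

    print(achievable("communication",
                     {"percept-structures", "goal-representation",
                      "other-agent-model"}))
    # -> True: planning is derivable, and with other-agent-model that suffices.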
Where the requirements allow alternative designs, it could turn out that both local details and historical precursors determine which designs actually develop in particular species, or even particular linguistic communities. We suggest that some of the above ideas are just below the surface in the discussion of biological issues and forms of diversity in the target article.
One of the (many) requirements for further progress is finding out just what sorts of information processing occur in pre-verbal children and other animals. We are trying to explore this in relation to cognitive competences of orangutans and parrots.
One common way of rejecting these ideas is spurious, namely asserting that the word "language" is defined to refer to something used for communication, so that there could not possibly be languages used for non-communicative purposes. We have answered that objection already by explaining what we mean by a "Generalised Language" in terms of the features it should possess, which turn out to be most of the core features of human languages that make them suitable for communicative purposes. But those features also make them suitable for expressing percepts, thoughts, goals, plans, (internal) questions, etc.
Merlin Donald seems to have considered this question in
http://psycserver.psyc.queensu.ca/donaldm/reprints/Preconditions6.PDF
Preconditions for the evolution of protolanguages
in The Descent of Mind, Eds. M.C. Corballis & S.E.G. Lea, OUP, 1999, pp. 355-365.
He talks about two preconditions for the evolution of a protolanguage:
(a) prior evolution of a 'more powerful central motor capacity' giving hominids a higher degree of voluntary motor control than other species, allowing them to vary and elaborate on the entire motor repertoire;
(b) evolution of a more abstract capacity for expressive modelling than in any other species, providing the basis for a capacity for lexical invention.
He concludes that the evolutionary precursors for the development of human language are 'distinct from the roots of language per se', and that 'protolanguage, even in human infants, floats on the surface of such a system', which must exist before children can learn language. If only he had been able to adopt the design standpoint of an AI researcher/roboticist, and ask how his 'powerful central motor capacity', and the perceptual and planning mechanisms needed to support it, could actually work, he too would, perhaps, have invented the idea of a GL.
Of course, there is still much empirical research to be done testing this idea, and modelling research to be done showing its feasibility as an explanation of some of the capabilities of non-human animals and also as part of the explanation of how young children develop linguistic competences.
See also
Evans, Nicholas, and Stephen C. Levinson (2009). "The Myth of Language Universals: Language Diversity and Its Importance for Cognitive Science." Behavioral and Brain Sciences 32, no. 5: pp. 429-448.
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/myth-of-language-universals-language-diversity-and-its-importance-for-cognitive-science/25D362A6566FCA4F51054D1C41104654

Aaron Sloman,
What About Their Internal Languages?
Commentary on three articles by Premack, D. & Woodruff, G.; by Griffin, D.R.; and by Savage-Rumbaugh, E.S., Rumbaugh, D.R. & Boysen, S., in Behavioral and Brain Sciences, 1978, 1 (4), p. 515,
https://www.cs.bham.ac.uk/research/projects/cogaff/62-80.html#1978-02

A. Sloman, The primacy of non-communicative language,
in The Analysis of Meaning: Informatics 5, Proceedings ASLIB/BCS Conference, Oxford, March 1979, Eds. M. MacCafferty and K. Gray, Aslib, London, 1979, pp. 1--15,
http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#43
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham