This article by Oliver Sparrow was posted to two newsgroups as a
commentary on my paper
http://www.cs.bham.ac.uk/~axs/misc/turing-relevant.html

Newsgroups: comp.ai.philosophy,sci.cognitive
Message-ID:
References: <8et81i$4v3$1@soapbox.cs.bham.ac.uk> <8f4det$tkh$1@nnrp1.deja.com>
Reply-To: ohgs@chatham.demon.co.uk
Date: Wed, 10 May 2000 08:55:24 +0100
Organization: RIIA
Subject: Re: Computation, Turing Equivalence and AI
From: Oliver Sparrow

Aaron Sloman wrote:
> I'd welcome comments and criticisms,

I am sorry to have been slow to view the paper. Further, the time that
I have given it is less than it merits. My comments will therefore be
general.

In opening, I agree with both the goals and sentiment of the paper. The
six(?) key points seem important, although the five(?) additions -
'interrupt handling' - seem unnecessary complications to the issues
raised.

I suggest that the fundamental point that you are making is that
real-world systems of complex computation show little meaningful
analytical division between 'hardware', 'software' and 'data', a
division fundamental to most theoretical structures that set out to
limit and prescribe. For example, if a computer is seen as a set of
switches driven by software, possibly to work on data, then whatever
one defines as true about switches becomes true of the whole ensemble:
computability equivalence and so forth. Embedded in the engineering
approach - first the widget, then the operating system, then... - is a
mass of assumptions which make tacit claims to a universality which
they in fact lack.

Why do they exhibit this lack? Chiefly, for the reasons defined
operationally by your six(?) points, but conceptually as follows. In
order to set theoretical limits - as opposed to observed,
phenomenological limits - on a structure, I need to have a model of it.
A linear sequence of 'begots' - lo! hardware begot software... -
implies (a) a complete model and (b) an unchanging model. Thus the
behavioural repertoire (the trajectory through state space, and the
dimensions of that state space) is fixed, however complex the momentary
behaviour of the locus may appear. A child may be astonished by the
chiming of a musical clock, but the engineer begot it to be
predictable, and tick by tick it winds itself through this
predetermined set of states.

Suppose that I cannot have a model of a system. Then, de facto, I
cannot predict its repertoire with circumscribed confidence. I can only
mutter that "'tain't done that before" when something new comes into
view, and add this to a list of What The Widget Did Next. Generalising
this, we can come up with heuristics of how people behave and how
weather patterns develop, but we can only predict (= understand) within
confidence bars based on past observations. When we can model with
certainty of exact homology (as with the clock, as with
meta-descriptions of computation) then the error bars go away, to the
exact degree that we are sure of our model.

Why might we not be able to model a system? There are two fundamental
reasons.

1: One is inaccessibility, of which quantum systems are the paradigm;
but there are, of course, a myriad of NP-hard and worse structures
which are equally intractable, non-linear systems in which divergence
rests on ultimately infinitesimal and unobservable quantities, and
systems in which necessary historical data has been lost.

2: The other issue has been touched upon in your main points. It is
that information-processing systems can change the rules that a model
is attempting to fix, as the toy sketch below illustrates.
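A minimal, deliberately toy sketch of that point in Python (all names
and details here are illustrative only, not drawn from the paper): the
'rules' of the machine are ordinary data, and the running machine edits
them, extending its own state space; any model fixed to the initial
rule table mispredicts the trajectory from that moment on.

    # Toy machine whose transition table is ordinary data that the
    # machine itself rewrites while running.
    def run(steps=6):
        state = "A"
        rules = {"A": "B", "B": "A"}   # the initial 'model' of the machine
        history = [state]
        for t in range(steps):
            if state == "B" and t == 3:
                # The system amends its own rule base mid-run,
                # extending the state space with a new state "C".
                rules["B"] = "C"
                rules["C"] = "A"
            state = rules[state]
            history.append(state)
        return history

    print(run())   # ['A', 'B', 'A', 'B', 'C', 'A', 'B']

A modeller handed only the initial table would predict an endless A-B
oscillation; the state "C" lies outside the state space of that model
altogether.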
Phrased another way, the dimensionality of the state space can be both
pruned and extended, and the repertoire of trajectories in it can be
changed. Further, it is customary to think of the system in question in
isolation from its origins and from the world of data and events. Few
real-world structures, save those engineered specifically to be so,
enjoy such isolation. The state space of the system under consideration
arose from the interaction of other state spaces (such as that
representing its designer, or parents, or home ecology or industry) and
exists by virtue of interaction with many such structures.

Let me enlarge a little on this last. Intellectual tidiness causes us
to think in handy boxes: a hammer is, therefore, considered in terms of
being a tool, sitting in a box, being used and hurting my thumb. A full
model of a given hammer-in-situ would, however, need to draw exactly
and specifically upon a myriad of contributory threads. There need to
be context-setting structures that put tools in their place in human
affairs. The specifics of the metal and chemical trades must be
addressed, and economics and retailing become a part of the general
context. Psychology and circumstance would illuminate the mechanisms of
choice whereby this model of hammer (and this particular chunk of iron
and plastic) was acquired. Yet more models would have to be evoked to
place me and it in the loft, and still more to 'explain' hurt thumbs.
And indeed, what would such an explanation look like, save experience
of the event or an iterative, circular set of references? To what
synthesis engine would all this story be told, that it could model and
understand without error bars?

The real world does not model itself, and there appears to be no need
to evoke a grand modeller. Existence is an endless train of events that
spark off each other. We can laboriously isolate bits of this and carry
out experiments - thus assuring ourselves of the underpinning
elementary clockwork - but these 'preparations' specifically set out to
chop off an important component of reality: the complex specifics that
it encodes. That is, irrespective of what it is that makes up the
phenomena that we call space, time, matter and potential, the specific
configurations into which the manifestations of this are arranged
encode specifics in all manner of ways. These break the symmetry innate
to a disordered system. Isolated structures - 'preparations' for
experiment - have this particularity erased from them, on purpose, so
that we can explore this symmetry. That is what science usually sets
out to undertake. It is very successful in its aims, but this very
success may blind us to the operational importance of the specific.

It is, however, the encoded information - for want of a better word -
that defines how the world comprises objects and properties that go
well beyond the qualities of the unstructured, elementary constituents
of the universe. This encoded information can exist in independent,
abutting, nested or hierarchical structures. These structures display
the full range of properties that you impute to computers. They can
change their rule base. They can reference other structures. When we
miss this feature of reality - its specificity, its messiness - we
delude ourselves. When we model 'preparations' (and worst of all, when
we model *conceptual* preparations) we delude ourselves greatly.

_______________________________
Oliver Sparrow