Some years ago I came across Arnold Trehub in online discussions regarding consciousness, and I often felt that his contributions were more sensible than most of the others. So when I came across his book The Cognitive Brain (MIT Press, 1991) I decided to buy it. Although I could not follow all the details, because of my inadequate knowledge of neuroscience and because some of the details were very densely written, I felt it was extremely important. Yet somehow it seemed to have been ignored by the relevant research communities -- possibly because most researchers lack the breadth of interest required to engage with the book. An exception is the review here by Luciano da Fontoura Costa.

On several occasions in the last few years we exchanged messages, partly because I was not sure his theory could accommodate some of the phenomena I was investigating, such as the variety of forms of deliberative competence discussed in this html file
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0604
    Requirements for a Fully Deliberative Architecture (Or component of an architecture)
and the use of an ontology of 3-D processes even when exposed to static or changing 2-D images, as discussed here
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/nature-nurture-cube.html
    Requirements for going beyond sensorimotor contingencies to representing what's out there
    (Learning to see a set of moving lines as a rotating cube.)

After one of our recent exchanges (March 2007) I asked his permission to make some of our correspondence public. He agreed, and this file is a consequence. I have very slightly edited the messages to avoid the risk of giving offence to other people and to make the format more consistent.

There are many more discussion notes by him on the Psyche-D and Psyche-B archives -- where I first encountered him, probably in the 1990s:
http://listserv.uh.edu/archives/psyche-d.html
http://listserv.uh.edu/archives/psyche-b.html
From Aaron Sloman Sun Mar 18 23:36:17 GMT 2007
To: trehub@psych.umass.edu
Subject: Re: Phenomenal experience
In-Reply-To: <1174236529.45fd6d7149e3e@mail-www.oit.umass.edu>

Thanks Arnold

> On the chance that it will interest you, here is my review of Revonsuo's
> new book on consciousness/phenomenal experience:
>
> http://sci-con.org/ (March 10, 2007)

I have not read Revonsuo's book. I had not even heard of it, though I tend to steer clear of things that seem to me to be more concerned with philosophical issues than with design issues. I've had a quick look at the review. ...

I prefer your work, because it adopts the design stance, not least because I think our pre-scientific concepts related to consciousness, self, experience, etc. are so ill-defined that trying to come up with the 'correct' explanation of the phenomena they allegedly refer to is doomed. (That's roughly what Turing said about intelligence, in his 1950 paper.)

Anyhow, I think I am making slow progress with understanding the scope of your design ideas, though not the details of their neural underpinning. I promised to let you know what I thought would be difficult to accommodate in the retinoid model in its current form as I understand it (though I can imagine ways of generalising it that may help to address the problems, but will make its implementation in neural mechanisms much more complex).

Part of the problem is representing the variety of processes we can perceive. In particular I think it's a very awkward form of representation for something like a rotating wire-frame cube where the axis of rotation is not parallel to either the line of sight or the view-plane.

I also have concerns about how it would cope with a scene in which there are various objects of different shapes and sizes all moving at the same time, in different directions, some of them not moving in a straight line, not all of them rigid (e.g. a typical street scene in the middle of a busy town), and some of them becoming temporarily unobservable, like a toy train going through a tunnel or an opaque cube rotating so that various faces, edges and corners go temporarily out of sight and then reappear in new locations with different directions of movement.

I don't know how you would propose to represent differences in the matter of which things are made, including their dispositional properties, e.g. differences in rigidity, solidity, elasticity, hardness, softness, density, temperature, strength, fragility, viscosity, stickiness, etc. I think that's an important part of what a young child learns in the first few years. (Meals are a rich source of such information.) Some high level constraints must be innate, but not the detailed ontology, e.g. because we did not evolve in an environment containing plastic, or wire, or plasticine, rubber, glass, etc. But even if the variety is innate, that still leaves open how the information about instances is represented, e.g. so as to allow predictions to be made about the behaviour of parts of objects -- e.g. the different parts of a length of string do different things in various situations where you pull one end.

There are also questions about perception of affordances, and the ability to reason about affordances (e.g. how would I see that this object cannot get through that gap unless it is moved using a 'screwing' motion; or how would I see the possible positions where I could grasp something using two fingers, and what the approach routes for getting my fingers to the required location and orientation would be, etc.).
There are many open questions concerning how *general* information about various types of things, types of relationships, types of processes, can be represented in a usable form, as opposed to the specific information about a particular situation in which one is immersed. There are also questions about how visual, auditory, and various kinds of haptic/tactile information about shape, spatial relations, and kinds of material and their properties get integrated. I've seen some studies regarding ways in which they can come apart as a result of either brain damage or unusual experimental setups, but that does not explain what form of representation is used when they don't come apart. I realise that it's a bit unfair to challenge *you* with these and similar questions, because as far as I know you are one of the very few people in the world who seems to have even begun to think about how brains could represent detailed information about a changing 3-D environment in which the observer is moving and acting, and I don't think anyone else in the world actually has answers to my questions -- certainly not among the AI/Robotics experts I have been interacting with or whose work I have read, including those who claim to be biologically inspired, and not among the developmental and cognitive psychologists or neuroscientists either (e.g. Daniel Wolpert, Alain Berthoz). They all address only a small part of the problem. I think my main task, on which progress is very slow, is clarifying the *requirements* for an adequate theory. Maybe designs to meet those requirements will eventually be found. Incidentally, two recent things in which I refer briefly to your work are http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0702 Machines in the Ghost (DRAFT PDF paper) Invited presentation for ENF07 http://www.indin2007.org/enf/ (This hastily written first draft is likely to change after criticisms from the organisers who are now looking at it.) http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0702 What is human language? How might it have evolved? (Seminar presented a couple of weeks ago: PDF slides still being revised) In the latter I am trying to re-shape questions people ask about how language evolved. Most of the stuff I have seen written about this assumes that language is essentially about communication, and that initially there were simple forms of communication, either using sounds or gestures, and then gradually they became more sophisticated, whereas I am arguing that something with many of the most sophisticated features of external languages evolved earlier to support perceptual, reasoning, planning, and other cognitive processes in various kinds of animals, and also exist in pre-linguistic human children. I introduce the notion of a g-language (generalised language) which doesn't have to be used for communication, but includes representations with rich structural variability and compositional semantics (which I've generalised), though not necessarily full systematicity (in the linguist's sense), and which can be used for multiple purposes, including representing contents of perception, goals, questions, plans, hypotheses, memories, generalisations, predictions, etc. all of which are required to explain the competences of nest building birds, hunting mammals, primates, and pre-linguistic children. In this context, the question of how human language for communication evolved, or how children learn to speak, looks very different. 
It's possible that you said something about this in your book, which I've forgotten! Best wishes. Aaron http://www.cs.bham.ac.uk/~axs/
From trehub@psych.umass.edu Mon Mar 19 19:27:40 2007
Message-ID: <1174332437.45fee4150e51a@mail-www.oit.umass.edu>
Date: Mon, 19 Mar 2007 15:27:17 -0400
From: trehub@psych.umass.edu
To: Aaron Sloman
Subject: Re: Phenomenal experience

Aaron,

I appreciate the feedback.

> I promised to let you know what I thought would be difficult to
> accommodate in the retinoid model in its current form as I
> understand it (though I can imagine ways of generalising it that
> may help to address the problems, but will make its
> implementation in neural mechanisms much more complex).

Some of the representational tasks that you describe below would be difficult or impossible for the retinoid system alone. They are more tractable with interactive coupling between the retinoid system and a level-1 synaptic matrix together with the rotation and size transformer mechanisms I describe. This is represented in the block diagram of the extended system (Fig. 16.1). But I do agree that these mechanisms would have to be elaborated or augmented to handle certain kinds of representations.

> In particular I think it's a very awkward form of representation for
> something like a rotating wire-frame cube where the axis of rotation
> is not parallel to either the line of sight or the view-plane.

I don't see why the binocular input of a rotating wire frame cube would not be properly represented as a rotating retinotopic and spatiotopic excitation pattern in the retinoid system regardless of the orientation of the axis of rotation. The problem would lie in *recalling* an image of a wire frame cube and then internally rotating it to a desired orientation when the axis of rotation is not parallel to the line of sight or view-plane. This would obviously require a more complicated neuronal mechanism. But I believe this task would be difficult for the average person as well.

> I also have concerns about how it would cope with a scene in which
> there are various objects of different shapes and sizes all moving
> at the same time, in different directions, some of them not moving
> in a straight line, not all of them rigid (e.g. a typical street
> scene in the middle of a busy town), and some of them becoming
> temporarily unobservable, like a toy train going through a tunnel
> or an opaque cube rotating so that various faces, edges and corners
> go temporarily out of sight and then reappear in new locations with
> different directions of movement.

This is indeed a perceptual challenge! However, it's important to recognize that we never see such an extended complicated scene all at once with uniform clarity. The foveal window for sharp definition is no more than 2 to 5 degrees in visual angle. In order to apprehend the entire scene with even moderate resolution we need to shift our attention (the heuristic self-locus) and the locus of our visual fixation many times. At the same time, the direction of motion of salient objects is traced by the excursion paths of the heuristic self-locus. Incidentally, one of the bounties of the heuristic self-locus is that it can signal object permanence by continuing an uninterrupted track of an occluded moving object; e.g., a toy train going through a tunnel.

> I don't know how you would propose to represent differences in the
> matter of which things are made, including their dispositional
> properties, e.g. differences in rigidity, solidity, elasticity,
> hardness, softness, density, temperature, strength, fragility,
> viscosity, stickiness, etc.
These would have to be represented in propositional structures. See TCB Ch. 6 "Building a Semantic Network". Obviously, these representations are not innate but are the product of learning in the modified synaptic matrices that constitute the semantic networks. > There are also questions about perception of affordances, and the > ability to reason about affordances (e.g. how would I see that this > object cannot get through that gap unless it is moved using a > 'screwing' motion; or how would I see the possible positions > where I could grasp something using two fingers, and what the > approach routes for getting my fingers to the required location > and orientation would be, etc.). I think it is important to distinguish between what I would call manifest affordances and cryptic affordances. Manifest affordances can be immediately perceived --- a gap in a wall, a handle that can be easily grasped. Cryptic affordances, perhaps like the efficacy of a screwing motion in getting an object through a gap, might have to be learned by trial and success. In a broad sense, planning a successful action can be a matter of chaining affordances, some manifest, others cryptic. See TCB Ch. 8 "Composing Behavior: Registers for Plans and Actions", pp. 146-151. > There are many open questions concerning how *general* information > about various types of things, types of relationships, types of > processes, can be represented in a usable form, as opposed to the > specific information about a particular situation in which one is > immersed. In principle, I don't see why this cannot be achieved by learned mappings from tokens to types in the synaptic matrices. See TCB Ch. 3 "Learning, Imagery, Tokens and Types: The Synaptic Matrix". > There are also questions about how visual, auditory, and various > kinds of haptic/tactile information about shape, spatial relations, > and kinds of material and their properties get integrated. I've > seen some studies regarding ways in which they can come apart as a > result of either brain damage or unusual experimental setups, but > that does not explain what form of representation is used when they > don't come apart. My claim is that all these get integrated by their common projected locations in the egocentric space of the retinoid system. > I introduce the notion of a g-language (generalised language) which > doesn't have to be used for communication, but includes > representations with rich structural variability and compositional > semantics (which I've generalised), though not necessarily full > systematicity (in the linguist's sense), and which can be used for > multiple purposes, including representing contents of perception, > goals, questions, plans, hypotheses, memories, generalisations, > predictions, etc. all of which are required to explain the > competences of nest building birds, hunting mammals, primates, and > pre-linguistic children. This is an extremely important point. Before language is used for public communication it is used in internal brain mechanisms adapted for coping with a complex world. I have introduced the notion of self query in semantic networks to draw inferences on which to base plans of action (see my comment above re chaining affordances and TCB pp. 114-115). Best, Arnold
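Trehub's appeal above to learned mappings from tokens to types in the synaptic matrices (TCB Ch. 3) can be loosely illustrated in a few lines of Python. This is only a sketch of the general idea of an adaptive detection matrix with competing class cells, not Trehub's neuronal circuit: the Hebbian-style weight update, the random binary tokens and the winner-take-all read-out are all invented for the illustration.

    import numpy as np

    # Toy 'synaptic matrix': rows are class cells, columns are input lines.
    # Learning superimposes each training token on the row for its type;
    # recall lets the class cells compete, winner-take-all.
    n_inputs, n_classes = 12, 3
    W = np.zeros((n_classes, n_inputs))

    def learn(token, type_index):
        W[type_index] += token             # strengthen synapses driven by this token

    def recall(token):
        activation = W @ token             # each class cell sums its weighted input
        return int(np.argmax(activation))  # the most strongly excited class cell 'fires'

    rng = np.random.default_rng(0)
    prototypes = (rng.random((n_classes, n_inputs)) > 0.5).astype(float)

    # train on noisy tokens of each type (about 10% of bits flipped)
    for t in range(n_classes):
        for _ in range(20):
            noisy = np.abs(prototypes[t] - (rng.random(n_inputs) > 0.9))
            learn(noisy, t)

    probe = np.abs(prototypes[1] - (rng.random(n_inputs) > 0.9))
    print(recall(probe))   # 1: a previously unseen noisy token is still mapped to its type

Anything resembling the full retinoid/synaptic-matrix system would of course also need normalisation, recall of learned exemplars (imagery), and coupling to the spatial registers, none of which is attempted here.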
From A.Sloman@cs.bham.ac.uk Mon Mar 19 23:53:33 2007 Return-path:Date: Mon, 19 Mar 2007 23:53:32 GMT From: Aaron Sloman Message-Id: <200703192353.l2JNrW9J012629@acws-0203.cs.bham.ac.uk> To: trehub@psych.umass.edu Subject: Re: Phenomenal experience Cc: Aaron Sloman Arnold, > I appreciate the feedback. I wonder how you would feel about my making your replies to my questions available (with the questions) on a web site here. You don't seem to have a web site of your own. Google would very soon index it. I'd make sure it had full details of your book and the recent paper. Thanks for your responses. I'll try to take in the details when a host of current demands has died down, but there's one point on which I probably wasn't clear. > > In particular I think it's a very awkward form of representation for > > something like a rotating wire-frame cube where the axis of rotation > > is not parallel to either the line of sight or the view-plane. > > I don't see why the binocular input of a rotating wire frame cube would not > be properly represented as a rotating retinotopic and spatiotopic excitation > pattern in the retinoid system regardless of the orientation of the axis of > rotation. I was not thinking of problems of representing the binocular input but of representing the 3-D interpretation, e.g. what you see if you look (even with one eye shut) at the images of rotating necker cubes and opaque cubes with links here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/nature-nurture-cube.html under the heading: Online rotating cubes and other 3-D structures. It's very hard to see those little movies just as changing 2-D images (though probably not impossible, with a bit of practice -- though what that means is another problem). In the first image, all you see are the 'wire' edges, and as they move around some don't change their orientation in 3-D much while others change a lot, whereas they all vary their depth, location and direction of movement (in 3-D). Some of the examples show opaque rotating objects. Some (which require java) are opaque and their amount and direction of rotation can be varied with the mouse. I guess I'll have to re-read what you say about the representation of 3-D scenes to see whether I have misremembered. > The problem would lie in *recalling* an image of a wire frame cube > and then internally rotating it to a desired orientation when the axis of > rotation is not parallel to the line of sight or view-plane. This would > obviously require a more complicated neuronal mechanism. But I believe this > task would be difficult for the average person as well. I think seeing the rotating cubes is not at all difficult. Also, I don't think recalling an image, which is a 2-D structure, can account for the percept of a 3-D rotating object in which parts are not just moving in an image, but in 3-D space. Moreover, some of the examples pointed to on that web site are probably not familiar to everyone (e.g. one is a little movie of a plane slicing through a rotating torus). But that does not stop the 3-D interpretation being seen. As Gunnar Johansson showed even a set of moving 2-D light points will be seen as a moving 3-D human body under some circumstances. In fact the rotating necker cube can flip as static necker cubes do, and the 3-D locations and movements will then change. But nothing in the image is different when that happens: the lines continue moving in the image exactly the same way no matter which of the two possible 3-D processes you see. 
(A similar comment can be made about the static necker cube: it's just more dramatic in the rotating case.)

I am not claiming that we see everything with great precision, as might be achieved by representing every 'voxel' and its changing occupancy. In some sense the totally precise replication of a 3-D structure, even a moving structure, is not too difficult in a computer, and that sort of thing is done in computer graphic systems, e.g. graphical game engines. But a computer can do that without seeing the various high level features which we see.

If you look at a ferris wheel in action, e.g. along a line of sight that's oblique in relation to the plane of the wheel and its axis of rotation, you can see several things going on at the same time: the whole wheel rotating about its axis, the cars/seats swinging in their mountings, people sitting in the seats possibly waving and moving their heads, arms, legs, etc., all in 3-D, but not necessarily in high precision nor with uniform resolution, and not all in synchrony.

Aaron
=============================
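The point made above about the rotating cube -- that nothing in the image changes when the 3-D percept flips -- can be checked numerically: under orthographic projection (and very nearly under weak perspective) a rotating wire-frame cube and its depth-reversed counterpart, which rotates the opposite way about the mirrored axis, produce exactly the same 2-D image at every instant. A minimal sketch in Python, with an arbitrarily chosen oblique rotation axis:

    import numpy as np

    def rotation(axis, theta):
        # Rodrigues' formula: rotation by theta about a unit axis
        a = np.asarray(axis, float)
        a = a / np.linalg.norm(a)
        K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    # wire-frame cube vertices, centred on the origin (columns are points)
    verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float).T
    mirror = np.diag([1.0, 1.0, -1.0])   # depth reversal: the 'flipped' interpretation
    axis = [1.0, 0.4, 0.7]               # oblique: parallel to neither line of sight nor view plane

    worst = 0.0
    for theta in np.linspace(0.0, 2 * np.pi, 60):
        scene_a = rotation(axis, theta) @ verts   # one 3-D interpretation
        scene_b = mirror @ scene_a                # the depth-reversed interpretation
        # orthographic image = x and y coordinates only (depth is discarded)
        worst = max(worst, np.abs(scene_a[:2] - scene_b[:2]).max())
    print(worst)   # 0.0

So the two 3-D processes seen when the figure flips really are different interpretations of one and the same changing 2-D image.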
From trehub@psych.umass.edu Tue Mar 20 21:35:39 2007 Return-path:Message-ID: <1174426503.46005387c1c11@mail-www.oit.umass.edu> Date: Tue, 20 Mar 2007 17:35:03 -0400 From: trehub@psych.umass.edu To: Aaron Sloman Subject: Re: Phenomenal experience Aaron, > I wonder how you would feel about my making your replies to my > questions available (with the questions) on a web site here. You > don't seem to have a web site of your own. Google would very > soon index it. I'd make sure it had full details of your book and > the recent paper. I think open exchanges on these issues should be promoted. You certainly have my approval for making my replies to your questions available on your web site. More later in response to your comments about 3D interpretations. Arnold =============================
From trehub@psych.umass.edu Fri Mar 30 17:49:41 2007
Message-ID: <1175273351.460d3f87e9931@mail-www.oit.umass.edu>
Date: Fri, 30 Mar 2007 12:49:11 -0400
From: trehub@psych.umass.edu
To: Aaron Sloman
Subject: Re: Phenomenal experience

Aaron,

The 3D perceptual experience when viewing the rotating cubes in the online 2D displays is striking. How can the brain possibly accomplish this? Here is my explanation based on the neuronal machinery of the 3D retinoid.

Let's start with a static Necker cube. Recall that excursions of the heuristic self-locus (HSL) trace neuronal excitation patterns over the autaptic cells within the 3D retinoid structure. How can the 2D wire diagram of the Necker cube be forced to outline a volumetric structure in our 3D phenomenal space? Focus on the lower left junction of 3 edges in the bottom parallelogram of the cube. Two form a right angle, while the third edge slants up to the left. Suppose we perform a trace of this oblique edge by the HSL over contiguous Z-planes from near to far. This immediately forces the 2D stimulus into a 3D cube pattern slanting up to the left. Now focus on the upper left junction of edges in the bottom parallelogram. If we trace the oblique edge over Z-planes from near to far, the cube flips and slants down to the right in 3D space. This kind of bistable pattern depends on the particular part of the complex image that is captured on the normal foveal axis (the center of focus, see TCB, pp. 252-255). The transformation of a 2D pattern into a phenomenal 3D image depends on HSL tracing of oblique edges over contiguous Z-planes in the 3D retinoid. I assume that the same kind of retinoid activity is responsible for our perception of 3D volumes in "solid" 2D displays. Is this kind of HSL tracing innate or learned? I don't know, but it would be an interesting question to investigate.

Recently, there was an important experiment reported which tends to confirm my 3D retinoid model and my explanation for the efficacy of perspective in 2D displays. Recall the mechanism for the conservation of intrinsic size (size constancy) in my retinoid model. This model predicted a real expansion in size of the brain's representation of an object according to its perceived distance. The investigators presented a 2D drawing in 3D perspective containing two identical spheres, one within the nearer perspective and the other within the more distant perspective. Subjects perceived the "more distant" sphere to be significantly larger than the "nearer" sphere though both were identical (a size illusion). Crucially, fMRI imaging during perception showed that the spatial extent of brain activation in visual area V1 changed in accordance with the perceived size of the identical spheres (see Murray et al. (2006). "The representation of perceived angular size in human primary visual cortex". *Nature Neuroscience*, 9, 429-434).

Incidentally, in the ferris wheel display, if you fixate the left hemi-rim, the left side of the ferris wheel will appear to be nearer to you than the right side. Fixate the right hemi-rim and the orientation of the ferris wheel flips in depth. This happens for the same reason 2D wire-cubes flip in depth -- HSL tracing of contours over contiguous Z-planes in the 3D retinoid. This is the only explanation I can find within the cognitive-brain design stance.

Best regards,

Arnold
=============================
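A back-of-the-envelope version of the size-distance relation mentioned above: if two objects subtend the same visual angle but the perspective context assigns them different distances, the linear size attributed to the 'farther' one scales with that distance. This is only the classical geometry, with invented numbers; it is not a model of the retinoid size transformer or of the V1 result reported by Murray et al.

    import math

    def apparent_linear_size(visual_angle_deg, assumed_distance):
        # classical size-distance relation: S = 2 * d * tan(angle / 2)
        return 2.0 * assumed_distance * math.tan(math.radians(visual_angle_deg) / 2.0)

    angle = 2.0             # both spheres subtend the same visual angle (degrees)
    near, far = 5.0, 15.0   # distances implied by the perspective context (arbitrary units)
    print(apparent_linear_size(angle, near))   # ~0.175
    print(apparent_linear_size(angle, far))    # ~0.524 -- three times 'larger', same retinal image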
From Aaron Sloman Sat Mar 31 01:06:15 BST 2007
To: trehub@psych.umass.edu
Subject: Re: Phenomenal experience
In-Reply-To: <1175273351.460d3f87e9931@mail-www.oit.umass.edu>
References: <1175273351.460d3f87e9931@mail-www.oit.umass.edu>

Arnold,

Thanks for your message. I will read it more carefully later. For now, having just finished a long overdue book chapter, I have at last produced a draft web page with our recent exchanges, which I'll continue to extend as appropriate.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trehub-dialogue.html

Let me know if you are happy with the contents and format or would prefer anything altered. If I insert a link to it from my home page, Google will start to index it.

Just one comment on your message now: I don't need to look at any particular part of the necker cube for it to flip. I can stare through the centre, and occasionally, especially if I blink, it flips. But I don't need to blink. I can almost, but not quite, make it flip at will while fixating the centre, but I cannot describe what I DO to make it flip any more than I can describe what I do to make a phrase like 'hello there' come to mind (which I did just before writing it).

I think there is some sort of unstable (bistable) dynamical system involved in perceiving the cube. But what the format and mechanisms of that system are is not clear to me.

Also I can sometimes see the figure as an inconsistent object with one half of it corresponding to one of the two main views and the rest corresponding to the other view, though it is painful to do. Several years ago I announced that on the psyche-b list, to the consternation of Bernie Baars. I think some other people said that they had never previously tried it, but when they did they found to their surprise that they too could produce an inconsistent percept.

At some stage we should discuss other ambiguous figures. E.g. while experiencing the ambiguity of the necker cube requires only a geometric ontology, the duck-rabbit is far more subtle, as discussed in this 1982 paper:
http://www.cs.bham.ac.uk/research/projects/cogaff/06.html#604
Image interpretation: The way ahead?

More later.

Aaron
http://www.cs.bham.ac.uk/~axs/
==============================
From trehub@psych.umass.edu Mon Apr 02 16:50:09 2007 Return-path:Message-ID: <1175528988.4611261c14850@mail-www.oit.umass.edu> Date: Mon, 02 Apr 2007 11:49:48 -0400 From: trehub@psych.umass.edu To: Aaron Sloman Subject: Re: Phenomenal experience References: <1175273351.460d3f87e9931@mail-www.oit.umass.edu> <200703310006.l2V06Fwm011077@acws-0203.cs.bham.ac.uk> Aaron, > Let me know if you are happy with the contents and format or would > prefer anything altered. It looks fine to me. > Just one comment on your message now: I don't need to look at any > particular part of the necker cube for it to flip. I can stare > through the centre, and occasionally, especially if I blink, > it flips. But I don't need to blink. Visual fixation does not always correspond to the locus of visual attention (the target of the heuristic self locus [HSL]). It has been demonstrated by Posner and many others that attention can be shifted covertly over visual space without corresponding eye movements (see TCB, pp. 63-65). So you can stare through the center of the Necker cube and still capture a particular junction of edges by selective shifts of your HSL. I believe spontaneous flips may be related to neuronal fatigue. > I can almost but not quite, make it flip at will while fixating the > centre, but I cannot describe what I DO to make it flip anymore than > I can describe what I do to make a phrase like 'hello there' come to > mind (which I did just before writing it). Of course. The activity of the neuronal machinery that drives the HSL to a target in 3D retinoid space is outside of your awareness, just like the dynamic processes in the motivational, semantic, and lexical networks that select your phonological and motor routines before you experience and type 'hello there'. > I think there is some sort of unstable (bistable) dynamical system > involved in perceiving the cube. But what the format and mechanisms > of that system are is not clear to me. My claim is that the bistable dynamical system in question is the 3D retinoid system with its capacity to trace selected 2D contours in depth over contiguous Z-planes. The challenge is to specify the operative details of a different kind of dynamical brain system that can do the job. Have you seen any other biological mechanisms that have been proposed? > Also I can sometimes see the figure as an inconstent object with one > half of it corresponding to one of the two main views and the rest > corresponding to the other view, though it is painful to do. I think this experience is similar to the perception of what are commonly called 'impossible figures'. Arnold =============================
From Aaron Sloman Tue Apr 10 23:41:31 BST 2007
To: trehub@psych.umass.edu
Subject: Impossible figures
In-Reply-To: <1175528988.4611261c14850@mail-www.oit.umass.edu>
References: <1175273351.460d3f87e9931@mail-www.oit.umass.edu> <200703310006.l2V06Fwm011077@acws-0203.cs.bham.ac.uk> <1175528988.4611261c14850@mail-www.oit.umass.edu>

Arnold, thanks for checking the Web site.
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trehub-dialogue.html

> > Let me know if you are happy with the contents and format or would
> > prefer anything altered.
>
> It looks fine to me.

I shall go on adding our correspondence there unless you ask me not to include something. The rest of this message, about ambiguous figures and impossible objects, is rather long.

[AS]
> > Just one comment on your message now: I don't need to look at any
> > particular part of the necker cube for it to flip. I can stare
> > through the centre, and occasionally, especially if I blink,
> > it flips. But I don't need to blink.

[AT]
> Visual fixation does not always correspond to the locus of visual
> attention (the target of the heuristic self locus [HSL]). It has been
> demonstrated by Posner and many others that attention can be shifted
> covertly over visual space without corresponding eye movements
> (see TCB, pp. 63-65). So you can stare through the center of the Necker
> cube and still capture a particular junction of edges by selective
> shifts of your HSL. I believe spontaneous flips may be related to
> neuronal fatigue.

I think you are here focusing mainly on what triggers the flip, whereas I am more concerned with how the different 3-D interpretations and the sensed 2-D features of the optic array can be represented in the virtual machine running on the brain, which was the topic of your retinoid theory. What triggers flips between different representations is of secondary importance, as I expect you will agree.

Another important question for any theory of how the different interpretations are represented is which of these is correct:

When there are two interpretations of the same visual input,

  o two different structures are created and coexist, one for each
    interpretation, with only one of them 'active' at a time,

  o the two interpretations do not coexist: rather one or the other
    is created, or recreated from the 2-D information whenever the
    interpretation flips

  o an intermediate case occurs: a structure is created that
    mostly represents what is common to the two 3-D interpretations
    and only a small part of it needs to change between flips
    (e.g. the common topology if the 3-D interpretation is
    fixed, but the relative depth orderings and slopes of
    parts change.)

I suspect that something like the last answer is correct, but I don't know if it is compatible with your retinoid theory. My impression is that you claim something more explicitly modelling the scene is created and mapped (in a collection of discrete chunks?) within the different depth layers.

Despite my regarding triggering as of secondary interest, your comment about changing fixation as required to produce flips did not strike me as correct:

[AS]
> > I can almost, but not quite, make it flip at will while fixating the
> > centre, but I cannot describe what I DO to make it flip any more than
> > I can describe what I do to make a phrase like 'hello there' come to
> > mind (which I did just before writing it).

[AT]
> Of course.
> The activity of the neuronal machinery that drives the
> HSL to a target in 3D retinoid space is outside of your awareness,
> just like the dynamic processes in the motivational, semantic, and
> lexical networks that select your phonological and motor routines
> before you experience and type 'hello there'.

[AS]
> > I think there is some sort of unstable (bistable) dynamical system
> > involved in perceiving the cube. But what the format and mechanisms
> > of that system are is not clear to me.

[AT]
> My claim is that the bistable dynamical system in question is the 3D
> retinoid system with its capacity to trace selected 2D contours in
> depth over contiguous Z-planes. The challenge is to specify the
> operative details of a different kind of dynamical brain system that
> can do the job. Have you seen any other biological mechanisms that
> have been proposed?

I think most people in psychology and neuroscience just treat the ambiguous figures as curiosities, or investigate conditions under which one or other interpretation is experienced, or look for neural correlates of various experiences. But they don't adopt the design stance seriously, as you do. There's another research community that does try to produce working models, but not always based on biological mechanisms.

I was introduced to these topics by Max Clowes, an AI vision researcher who introduced me to Artificial Intelligence around 1969. (Alas, he died in 1981.) He and his colleagues implemented a program about 37 years ago that took line drawings and produced interpretations of those line drawings as 3-D opaque polyhedra. (In those days it could take many minutes to do something that would take a tiny fraction of a second now.) His program, instead of representing 3-D structures using depth information, interpreted lines as representing convex, concave or occluding edges (work that was later extended by Dave Waltz at MIT to include cracks and shadow edges). The interpretations created by the program were 'labelled diagrams', i.e. networks of symbolic structures representing the 2-D configuration of lines and junctions, where the interpretation process attached labels to the lines indicating what sorts of edges they represented. Junctions where two or more lines meet could be shown to allow only certain combinations of interpretations of those lines. E.g. an ELL junction cannot represent two concave edges, though it can represent two convex edges. The program searched for a consistent interpretation of the whole figure. In some cases it would find only one interpretation. In others more than one was possible, but I think it settled for the first it found. However, the mechanism could easily have been extended to produce all the consistent interpretations and alternate between selecting one or another as the favoured one.

Toy tutorial programs in pop-11 demonstrated these ideas when I was teaching AI at Sussex, e.g.
    http://www.cs.bham.ac.uk/research/projects/poplog/doc/popteach/labelling
    INTERPRETING PICTURES OF OVERLAPPING LAMINAE WITH HOLES
    (figure-ground interpretations in 2-D)
    http://www.cs.bham.ac.uk/research/projects/poplog/doc/popteach/waltz
    WALTZ FILTERING (interpreting 2-D images as 3-D polyhedra)

In the model developed by Clowes (as in the models of Huffman, Waltz, and others who extended this work) depth in the scene was only implicitly represented, insofar as it was implied by labelling edges as concave, convex or occluding, etc.
E.g. in a "T" junction the cross-bar would be the edge of a surface that occluded part of the edge represented by the stem of the "T"; and therefore the parts of the occluded edge that are visible near the junction, and the surfaces meeting at that edge, would have to be further away than the edges meeting at the cross-bar (one visible and one self-occluded). Such constraints are all very local, but their effects can propagate. The same general idea could be implemented in a constraint network that is an unstable dynamical system.

Around 1976 Geoffrey Hinton extended those ideas to allow the interpretation to be constructed in a 'relaxation network' which instead of using hard constraints used soft constraints (where local interpretations could be given higher or lower preferences instead of being accepted or rejected). The earlier methods would fail completely if a bit of a picture was missing or had a spurious line, because elimination of possible interpretations caused by such 'noise' could propagate all round the constraint network. (He called that 'gangrene'.) Instead he allowed local continuously variable *preferences* whose effects propagated around the network, and then the network as a whole tried to maximise preferences globally. This could, in effect, override noise in some parts of the image. He proved that under some circumstances it would converge to a unique interpretation, but I guess the cases where there is more than one interpretation are just as interesting. (This work was reported in his PhD thesis in 1978.)

I think those approaches, using abstract data-structures to represent the fixed topology of visible parts of complex objects and attaching information about the interpretations of parts, are different from the retinoid model, because the latter, as I understand it, implies construction of something partly isomorphic with the 3-D scene.

[AS]
> > Also I can sometimes see the figure as an inconsistent object with one
> > half of it corresponding to one of the two main views and the rest
> > corresponding to the other view, though it is painful to do.

[AT]
> I think this experience is similar to the perception of what are
> commonly called 'impossible figures'.

Partly. In the impossible figures there are no local discontinuities, whereas in my inconsistent view of the Necker cube I think there were. But I am not sure. Max Clowes claimed years ago that ambiguous and impossible figures both gave important information about how visual systems work.

I've been thinking more about this recently in the context of trying to work out requirements for the visual system of a human-like robot, in the CoSy robot project. I've come to the conclusion that the differences between cases where impossibility is obvious, like the Penrose triangle, and cases where detecting the impossibility requires some work tracing relationships, do give us important clues as to how 3-D structures are represented. I've started preparing a slide presentation on this, which is here if you are interested (10 PDF slides):
http://www.cs.bham.ac.uk/research/projects/cogaff/challenge-penrose.pdf

Sorry to go on so long.

Aaron
http://www.cs.bham.ac.uk/~axs/
=============================
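For readers who have not met the Clowes/Huffman/Waltz approach, the core filtering step is easy to show: each junction starts with a set of candidate labellings drawn from a catalogue, and a labelling is deleted whenever the label it assigns to some line is not used by any surviving labelling at the line's other end, repeating until nothing changes. The sketch below (in Python rather than Pop-11) uses an invented two-junction 'picture' and a deliberately tiny made-up catalogue, so only the mechanism, not the data, should be taken seriously.

    # A toy version of Waltz-style filtering over an invented two-junction 'picture'.
    # Line labels ('+', '-', '>', '<') are treated as opaque symbols here.

    junction_lines = {'J1': ('a', 'b'), 'J2': ('a', 'c')}   # which lines each junction meets

    # Candidate labellings per junction (tuples parallel to junction_lines[j]).
    # An invented mini-catalogue, NOT the Huffman-Clowes junction tables.
    candidates = {
        'J1': {('+', '+'), ('+', '>'), ('-', '<')},
        'J2': {('-', '-'), ('+', '-'), ('>', '+')},
    }

    def label_of(j, labelling, line):
        return labelling[junction_lines[j].index(line)]

    def waltz_filter(junction_lines, candidates):
        # repeatedly delete any junction labelling whose label for a shared line
        # is not used by any surviving labelling at the line's other junction
        changed = True
        while changed:
            changed = False
            for j, cands in candidates.items():
                for labelling in list(cands):
                    for line in junction_lines[j]:
                        for k in (k for k in junction_lines if k != j and line in junction_lines[k]):
                            if not any(label_of(k, other, line) == label_of(j, labelling, line)
                                       for other in candidates[k]):
                                cands.discard(labelling)
                                changed = True
                                break
                        if labelling not in cands:
                            break
        return candidates

    print(waltz_filter(junction_lines, candidates))
    # Only J2's ('>', '+') is eliminated: no labelling at J1 lets line 'a' carry '>'.
    # Each survivor is still supported at the other end of every shared line (arc
    # consistency); a global search over the survivors would then pick mutually
    # consistent whole-figure interpretations, as the programs described above did.

Hinton's relaxation version replaces the hard deletions with continuously variable preferences pushed towards a global optimum, which is what lets it survive missing or spurious lines.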
From trehub@psych.umass.edu Fri Apr 13 21:23:08 2007
Date: Fri, 13 Apr 2007 16:23:00 -0400
From: trehub@psych.umass.edu
To: Aaron Sloman
Subject: Re: Impossible figures

Aaron,

You wrote:

> Another important question for any theory of how the different
> interpretations are represented is which of these is correct:

I'm not comfortable with the use of the term "interpretation" to describe the occurrent perception of a Necker cube in a particular 3D orientation. I take this experience to be an unanalyzed phenomenal event reflecting a particular 3D excitation pattern in the brain.

> When there are two interpretations of the same visual input,
>
> o two different structures are created and coexist, one for each
>   interpretation, with only one of them 'active' at a time, [1]
>
> o the two interpretations do not coexist: rather one or the other
>   is created, or recreated from the 2-D information whenever the
>   interpretation flips [2]
>
> o an intermediate case occurs: a structure is created that
>   mostly represents what is common to the two 3-D interpretations
>   and only a small part of it needs to change between flips
>   (e.g. the common topology if the 3-D interpretation is
>   fixed, but the relative depth orderings and slopes of
>   parts change.) [3]
>
> I suspect that something like the last answer is correct, but I
> don't know if it is compatible with your retinoid theory. My
> impression is that you claim something more explicitly modelling the
> scene is created and mapped (in a collection of discrete chunks?)
> within the different depth layers.

I think cases [2] and [3] are both possible and compatible with the retinoid model. You are correct in assuming that an explicit retinotopic and spatiotopic model (excitation pattern) is created and mapped within contiguous depth layers (Z-planes) of the 3D retinoid. The brain does this by moving the heuristic self-locus (HSL) to enhance excitation over the appropriate figure contours (tracing) through retinoid depth. Case [2] would happen with a relatively complete and proper trace of the cube. Case [3] would happen with a partial HSL trace in which one part of the cube is distinctly represented in 3D, while the rest of the figure is indistinct and ambiguous. Think of the ability to represent perspective 2D patterns in depth as a perceptual skill analogous to a motor skill. A highly skilled perceptual "athlete" could conceivably multiplex his HSL tracings to achieve an experience of a Necker cube in both orientations "at the same time".

> Despite my regarding triggering as of secondary interest, your
> comment about changing fixation as required to produce flips did not
> strike me as correct:

I thought I said that a shift in attention (changing the target of the heuristic self-locus) does not require a change in foveal fixation, although a change in fixation usually follows a shift in attention. If you believe that a shift in attention is not required to produce a flip, I'd be interested in the reason for your doubt.

Arnold
=============================
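Case [3] above -- a representation most of which is shared between the two percepts, with only a small depth-ordering component changing at a flip -- can be spelled out as a toy data structure. This is just one way of making the idea concrete, not a claim about the retinoid model or the brain; the vertex names, coordinates and the single 'depth sign' field are invented for the illustration.

    from dataclasses import dataclass, field

    @dataclass
    class NeckerPercept:
        # shared between the two interpretations: image positions and connectivity
        image_xy: dict = field(default_factory=lambda: {
            'A': (0, 0), 'B': (2, 0), 'C': (2, 2), 'D': (0, 2),     # one square
            'E': (1, 1), 'F': (3, 1), 'G': (3, 3), 'H': (1, 3)})    # the offset square
        edges: tuple = (('A','B'),('B','C'),('C','D'),('D','A'),
                        ('E','F'),('F','G'),('G','H'),('H','E'),
                        ('A','E'),('B','F'),('C','G'),('D','H'))
        # the only part that changes at a flip: which face is taken to be nearer
        depth_sign: int = +1

        def depth(self, vertex):
            # ABCD at one depth, EFGH at the other; the sign decides which is nearer
            return self.depth_sign * (+1 if vertex in 'ABCD' else -1)

        def flip(self):
            self.depth_sign = -self.depth_sign   # everything else is left untouched

    p = NeckerPercept()
    print([p.depth(v) for v in 'AE'])   # [1, -1]
    p.flip()
    print([p.depth(v) for v in 'AE'])   # [-1, 1] -- same image_xy, same edges

On this way of putting it, a flip changes one field while the image geometry and the connectivity are untouched, which is roughly what option [3] requires.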
From A.Sloman@cs.bham.ac.uk Thu Apr 26 02:23:34 2007 Date: Thu, 26 Apr 2007 02:23:34 +0100 From: Aaron SlomanMessage-Id: <200704260123.l3Q1NY6X027969@acws-0203.cs.bham.ac.uk> To: trehub@psych.umass.edu Subject: Re: The foundation of logic Arnold > Here is a commentary that I recently sent to PSYCHE-D. For some reason, it > was not accepted for distribution. I have the impression that moderation can be arbitrary at times. I only have one small point for now. > Laureano Luna wrote: > > > Consider the three worlds of Popper and Eccles: > > > W1: objects of external senses: spatial and temporal; > > real. > > > W2: objects of thr internal sense; non spatial; > > temporal; real. > > > W3 (or perhaps better Frege's third reign): objects of > > reason; non spatial; intemporal; ideal. > ..... You wrote: > What is the evidence that the "ideal dimension" of W3 actually exists > outside of W2; i.e., as an invention of the human brain? I can find no > such evidence. Popper wrote a whole book on this 'Objective knowledge' and there is also a long history of discussion in philosophy of mathematics about whether numbers and other mathematical entities exist independently of human minds. The claim that there were no numbers, or shapes, or causal relationships, or true generalisations, before humans (or other mathematically minded animals) evolved, seems to me to be totally implausible, or to be more precise, meaningless: such things don't exist in time so there's no before or after for them. I suppose it is possible that the moderator thought you had not tried very hard to find evidence! In fact Popper's third world contains mostly products of human activity, e.g. fashions, designs, games, etc. But there is a collection of things that humans seem to discover rather than create that are neither physical nor psychological. Regarding your claims about the need for innate mechanisms for the discovery of some of these things we are in partial agreement. I think that humans are not born with the full set of mechanisms, but develop them using innate meta-competences that produce more meta-competences through interacting with the environment and other individuals, etc. I.e. it's a complex multi-level bootstrapping process (cognitive epigenesis). I hope we'll be able to build working models of this before very long though it will not be easy. [ These are ideas I have been developing with biologist Jackie Chappell: a short exposition is in our submitted BBS commentary on the recent book by Jablonka and Lamb (Evolution in four dimensions) available in html and pdf http://www.cs.bham.ac.uk/research/projects/cosy/papers/jablonka-sloman-chappell.html http://www.cs.bham.ac.uk/research/projects/cosy/papers/jablonka-sloman-chappell.pdf I suspect you won't disagree with this, but I still don't know whether the details of your theories are compatible with this. ] Must sleep now! I still have to get back to you on impossible objects and ambiguous figures. Aaron =============================
From Aaron Sloman Tue May 29 18:44:02 BST 2007 To: PSYCHE-D@LISTSERV.UH.EDU Subject: Re: Decomposibility and recomposibility of conscious content Arnold Trehub wrote: > This is only one of several papers by this group that give evidence of > single-neuron selectivity/categorization of complex stimuli. Other findings > include, for example, Kreiman, Koch, and Fried (2000), *Nature Neuroscience*, > and Quiroga, Reddy, Kreiman, Koch, and Fried (2005), *Nature*. Many other > investigators, as well, have found *selective* single-cell responses to > complex input patterns. > ... > Of course, all of the relevant cells that are involved from the input pattern > to the detection/recognition of the input are part of the processing activity. > But the question at issue is the claim that a single cell can *process* its > proximal input to provide a selective and reliable recognition signal of the > distal sensory pattern. Jonathan, I, and many other investigators claim that > the activity of a single neuron can be a reliable indicator of a particular > pattern of stimulation. I guess this raises some questions: what follows from this? and what does it have to do with what can be said about states of the whole animal - e.g. such as that it recognizes something or takes a decision? I'll address those questions in an analogy below. [Arnold] > Consider this simple case: > > - There are two different input patterns, [A B] and [B A]. > - There are two detection neurons, (C1) and (C2). > - Patterns [A B] and [B A] provide synaptic input to *both* detection neurons. > - However the synaptic structure and dynamics of (C1) and (C2) differ so that: > > [A B] +++> (C1) (discharge) and [B A] ///> (C2) (no discharge) > [B A] +++> (C2) (discharge) and [A B] ///> (C1) (no discharge) > > In such a case, the single cells clearly *process* their inputs to provide > a selective detection/recognition response. This is analogous to the much > more complex pattern recognition involved in the studies mentioned above. Fair enough. But what makes the firing count as 'detection', or 'recognition' ? Those are terms that have implications regarding the function that the processes serve within the larger system. What that function is can depend on many different things, including the causal consequences of the firing. Suppose someone asks about some country C Q1. How does C choose its president? Q2. How does C choose its favourite make of car? These are questions about very different forms of information[*] processing. The answer to Q1 usually refers to a formalised, centrally controlled, process of counting votes; and normally there is an explicit recognition of the result of that process (i.e. the selection of an individual) by some formal mechanism which makes the result generally known to many other parts of the system. The answer to Q2 need not involve any formalised, centrally controlled, process and there need not be any explicit recognition of the result, e.g. if nobody collects all the statistics. But there may still be a make of car that is chosen more often than any other make in millions of individual decisions, and those choices can have all sorts of consequences, e.g. some manufacturers going out of business, people becoming unemployed, some companies growing, share prices changing, flows of capital across national or regional boundaries, changes in total fuel consumption, numbers of deaths on roads, etc. 
Many of those things can be going on without anyone knowing that they are all going on (though individual events, like a company going out of business would be noticed). Many biological systems seem to be like that: lots of things going on but without any centralised control or summarisation. In some countries the information about car-buying may be available in principle, but not actually collected, or collected but not used, etc. E.g. we can distinguish cases where the information cannot be collected because the mechanisms for recording and transmitting individual decisions do not exist or are not in place, cases where the information can be collected but the mechanisms have not been 'turned on', cases where the information is collected but not analysed, cases where the results are analysed but not made available to any decision makers, etc. etc. Those are all patterns of distributed 'decision making' where the 'global preferences' are real, and have real consequences, but are never represented as such, and no summary information about the decisions or their consequences is ever used, although a company going out of business (because its debts exceed some threshold, perhaps) could be an explicit localised consequence of the distributed decision making, even though it is not recognized as such. Like the firing of a cell, the company going out of business could be described as a reliable detection or recognition of a pattern in the distal 'sensory' records of individual purchases. (In this case the pattern is a large drop in the purchases of a particular make of car.) Would you call that a recognition or detection mechanism for that pattern? Many biological systems process all their information in that distributed, de-centralised fashion, without any explicit summary representation of what is going on, but it is not clear whether all major brain processes or which subsets of brain processes are like that. I expect some organisms have *only* totally decentralised distributed decision making (eg plants, slime moulds ?), whereas others have partially centralised decision making with 'localised' events produced by cumulative effects like a company going out of business, e.g. turning the eyes to look left under certain conditions of combined auditory and visual stimulation. Now compare Q1: the question about choosing a president. Selection of a president is a process that can also take various forms, but usually includes use of a centralised mechanism that represents the selection explicitly. That is, instead of the selection being represented only transiently in a pattern of causal influences, at least one enduring record of the outcome is made which is then capable of playing a role in many different causal processes, in combination with other items of information. Let's look at some typical features of presidential elections in typical geographically large democratic countries. And then some other things that may or may not go in parallel with the official processes. Depending on the country C, the answer to Q1 will usually refer to an elaborate formalised process which involves candidates being nominated, followed by formalised (i.e. rule-based) voting procedures being followed. 
If C is a large country made up of different regions, the votes from the regions may be counted up separately and the totals for each candidate for each region communicated to the chief voting officer O who gets the totals for each region and then adds them up, and in a prearranged way announces that candidate X has won, which in turn triggers a whole lot of activities, subsequently leading to the old president being replaced by X in many physical, legal, political and social contexts. (Let's ignore the cases where the result is challenged, etc. Also if necessary replace the officer O by a computer, or a committee: it makes no difference for now. Another possibility that we ignore for now is use of intermediate stages where votes are counted for sub-regions then reported centrally within each region, etc.) Now consider what happens if somehow an illegal copy of all the regional totals is sent to someone, e.g. a financier F, who manages to get them and add up the totals before O does, and takes actions for his/her own benefit, e.g. buying and selling shares. Now O and F are both localised bits of the country C, and each can "*process* its proximal input to provide a selective and reliable recognition signal of the distal sensory pattern (i.e the votes cast in the regions). But the consequences are very different. What O does is part of the process of choosing the president whereas what F does is not. It is a side-effect of an initial part of the process. There could be many different similar (legal or illegal) processes going on, involving interception of the voting information at various stages and re-routing it to various individuals or organisations who use the information, in some cases before the central counting has been finished and the result announced -- e.g. servants and collaborators of the old, defeated, president who immediately start looking for new jobs, and people who support the winner who start actions designed to facilitate the transfer of power, or who start jockeying for positions in the new government, etc. This begins to take on some of the features of the answer to Q2 (the distributed implicit choice of a favourite type of car), except that in addition to all the distributed and nowhere collated decision-making there is also a formal generally recognized centralised decision-making process. The two sorts of processes can coexist and play different roles in the whole system at the same time. Of course, many variants on these stories are possible. There could be formalised mechanisms whereby the results from the regions, or even the individual votes, are transmitted concurrently to different subsystems to be used for various purposes (e.g. statistical analysis of voting patterns, checks against voting irregularities, speeding up processes connected with regime change, etc. etc.). As a precaution, the official process could involve collating the individual votes in two (or more) different ways, using two sets of routes for information transfer and the officer O may need to check that the different routes produce the same result before the decision is announced. (Compare adding rows and columns in an array of numbers to check for errors in addition.) 
So although there is a clear sense in which the nation as a whole does not know the result, and has not formally decided until the officer O has completed his/her task and announced the result, the information about the result could be available and used implicitly in many formal and informal, legal and illegal, sub-processes that start up before the final decision, some of which help to improve and accelerate the implementation of the high level decision. Moreover, some aspects of those distributed processes may be noticed and reported either locally or nationally or in organisations that are involved in administration and administration changes. So *subsystems* may be conscious of them even if the whole system is not.

On the basis of what I know about humans and brains I would expect that the correct account of how we work is something like the multi-functional mixture of centralised and distributed information processing and decision making just described. (I referred to this as a 'labyrinthine' architecture, as opposed to a 'modular' architecture, in a paper on vision in 1989: http://citeseer.ist.psu.edu/758487.html )

When all the bits work smoothly together, as they normally do, we think a belief has been acquired, a sensation has been experienced, a decision has been taken, etc., and we think this is a simple process about which we can ask questions like 'where does it occur?', 'when does it occur?', 'what is its function?', etc. When things go wrong or become abnormal, e.g. because of brain damage, or effects of drugs or anaesthetics, or hypnotism, or dreaming, or because abnormal development interferes with the construction of properly functioning information management subsystems, the hidden complexity begins to be more visible, and 'neat' theories look less plausible.

Moreover, in some cases, as Neil Rickert pointed out in his message of Fri, 25 May 2007, it may be far more useful to describe what's going on in terms of *virtual* machine processes (possibly in several levels of virtual machinery) rather than in terms of underlying *physical* implementation details. For instance, my answers to Q1 mentioned votes, counting, information communication, etc., not the physical mechanisms used to implement those processes. Most of the sciences, apart from physics, talk about virtual machines implemented in physical systems. But physics also has layers. An event in a virtual machine, e.g. a bad decision taken because some information was corrupted or because the rules used lack the required generality, can be a real cause, with real effects -- as any software engineer knows: debugging software involves identifying such unwanted virtual machine events and changing the virtual machine so that they don't occur or their effects are changed. Our intuitive ideas about causation that lead many people to reject that notion are based on a false model of causation as a kind of fluid that flows through the universe subject to conservation laws. If, instead, we analyse causal relations in terms of truth and falsity of various sets of counterfactual conditional statements, we can admit causes in both virtual machines and also the underlying physical machines. But that's another, long, story.

-----------
[*] Yet another long story is what I mean by 'information', a word I have deliberately used many times above, rather than talking about e.g. neuronal excitation patterns.
The word 'information' is as indefinable as 'matter', 'energy', and other deep concepts developed in our attempts to understand the universe. Their meanings are determined not by explicit definitions (which always end up circular or vacuous -- like many definitions proposed on this list) but by the powerful theories in which they are used. It's primarily the theories and their associated research programmes (which, as Imre Lakatos pointed out, can be progressive or degenerative) that have to be tested and compared. Sometimes that can take decades or centuries, because we don't know enough. I've written more about the (non-Shannon) concept of information here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/whats-information.html Comments and criticisms welcome. Aaron http://www.cs.bham.ac.uk/~axs/ =============================
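[Note: the following toy Python fragment is not part of the correspondence; it is a minimal illustration of two points made in the message above -- that an event in a virtual machine (here, a decision rule acting on corrupted information) can be treated as a cause, and that the causal claim can be cashed out as a counterfactual. The scenario and names are invented.]

    def decide(reported_temperature):
        """A trivial 'virtual machine': a control rule acting on information."""
        return "open_valve" if reported_temperature > 100 else "do_nothing"

    def corrupt(reading):
        """A fault in the information channel, e.g. a dropped digit."""
        return reading // 10

    actual_reading = 120
    observed = decide(corrupt(actual_reading))   # 'do_nothing' -- the bad decision
    counterfactual = decide(actual_reading)      # 'open_valve' -- what would have happened

    # The corruption caused the bad decision in the counterfactual sense:
    # with it the decision is one thing, without it it would have been another.
    print(observed, counterfactual, observed != counterfactual)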
From owner-psyche-d@LISTSERV.UH.EDU Wed May 30 20:53:56 2007 Date: Wed, 30 May 2007 14:00:16 -0400 From: Arnold Trehub Subject: Re: Decomposibility and recomposibility of conscious content To: PSYCHE-D@LISTSERV.UH.EDU Aaron Sloman has raised several important issues in his recent post. Each of us approaches the subjects on PSYCHE-D with a theoretical predilection. In the interest of full disclosure, since the entire conceptual contents of our written exchanges are products of each of our brains (any dissenters?), I have tried to understand the relevant concepts in terms of competent brain mechanisms and systems. In this spirit, I'll first jump to a later statement among Aaron's comments. [Aaron] > Their [concepts] meanings are determined not by explicit definitions > (which always end up circular or vacuous -- like many definitions proposed > on this list) but by the powerful theories in which they are used. I assume that Aaron means this as a desired goal rather than as a description of common usage. In this sense, I'm in full agreement. The particular theory that shapes my conception of *detection/recognition* and *information* is presented in *The Cognitive Brain* (TCB). [Arnold] > > - There are two different input patterns, [A B] and [B A]. > > - There are two detection neurons, (C1) and (C2). > > - Patterns [A B] and [B A] provide synaptic input to *both* detection > > neurons. > > - However the synaptic structure and dynamics of (C1) and (C2) differ > > so that: > > > > [A B] +++> (C1) (discharge) and [B A] ///> (C1) (no discharge) > > [B A] +++> (C2) (discharge) and [A B] ///> (C2) (no discharge) > > > > In such a case, the single cells clearly *process* their inputs to provide > > a selective detection/recognition response. [Aaron] > Fair enough. But what makes the firing count as 'detection', or > 'recognition' ? I think this is a good example of a definition being determined by the theory in which it is being used. The firing of (C1) or (C2) in this minimal theoretical model counts as *detection* or *recognition* just because these different discharges discriminate between the two possible input patterns in the model universe. Reasonable generalizations can be drawn from this simple proposal. One generalization that I would defend is that any "self"-competent biological information processing system which is equipped for planning and a public language must have pattern detection systems of this kind. > Those [detection, recognition] are terms that have implications regarding > the function that the processes serve within the larger system. What that > function is can depend on many different things, including the causal > consequences of the firing. Absolutely. For a detailed treatment of some fundamental causal consequences of neuronal firing in a cognitive brain system see TCB. > Moreover, in some cases, as Neil Rickert pointed out in his message > of Fri, 25 May 2007, it may be far more useful to describe what's > going on in terms of *virtual* machine processes (possibly in > several levels of virtual machinery) rather than in terms of > underlying *physical* implementation details. Notice that Aaron emphasizes the *usefulness* of describing these *physical* processes in terms of *virtual* machines. I agree with this pragmatic approach, but the intellectual risk lies in treating virtual entities, abstractions, and ideal concepts as ontological categories independent of the physical world.
This often happens when computer programs are treated as meaningful causal entities apart from any physical implementation. Consider the following program: L8, R21, L15, R7 What is its meaning? What is its causal efficacy? It has no particular meaning until you are told that it is for a combination lock (a physical machine). And it has no causal efficacy unless it serves as a set of *instructions* for opening a *particular* combination lock or one that works the same way. So it is with computer programs. They *derive* their meaning and efficacy from the physical design of the computing machinery to which they are applied. The attempt to understand concepts in terms of physical design has been dubbed by Daniel Dennett as the "design stance". Here's a sketch of an effort to deal with the concepts of *meaning* and *understanding* as biological events within the framework of the design stance: The central ideas conveyed by the OED definitions of meaning and understanding are that (a) *meaning* is the significance or purpose of something, and (b) *understanding* is the ability of a person to capture the meaning of something. The problem, however, as in most verbal definitions, is circularity. To assert that meaning is the significance or purpose of something is to beg the question: How is the significance or purpose of something actually determined? And what gives a person the ability to capture meaning? What follows is a brief summary of how the neuronal mechanisms in *The Cognitive Brain* (TCB) embody and provide a biological explanation for our sense of meaning and understanding. In this move, the burden of definition is shifted from the nominal notions of significance and purpose to the structure and dynamics of putative brain mechanisms which actually embody significance and purpose. In keeping with the OED definitions, I assume that, in the most general sense, meaning is a construction within the brain. I also assume that understanding is the capacity of the brain to construct meaning in particular instances. It seems straightforward that any particular meaning cannot exist in the absence of a corresponding understanding. There are four critical competencies of the TCB model which enable meaning and understanding: 1. The ability to learn and to establish long-term memories. 2. The ability to represent learned objects and events in both analogical and sentential neuronal structures with interactive recall. 3. The ability to represent goals and compose plans of action. 4. The ability to imagine and analyze the properties of possible (hypothetical) worlds, particularly as they relate to goals and affordances. Back to the OED notion that "meaning" is the significance or purpose of something. I believe that the systematic interrelationships between the neuronally represented properties of objects, events, goals, and plans, both veridical and imaginary, which are generated in TCB, warrant the claim that these integrated neurocognitive states are the embodiment of significance and purpose. In my view, this is the biological grounding of "meaning" and "understanding". An indication of how all of this can be accomplished is given by the structure and dynamics of the neuronal mechanisms and systems in TCB. Arnold Trehub =============================
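[Note: the following toy Python sketch is not part of the correspondence and is not Trehub's circuitry; it is one crude way of realising the [A B] / [B A] example quoted in the message above -- two 'cells' that share the same afferents but, because their synaptic weights differ, each discharges for only one arrangement of the input. The encoding and threshold are invented.]

    import numpy as np

    def encode(pattern):
        """Encode a two-position pattern such as ('A', 'B') as
        [A at pos 1, B at pos 1, A at pos 2, B at pos 2]."""
        vec = []
        for symbol in pattern:
            vec += [1 if symbol == "A" else 0, 1 if symbol == "B" else 0]
        return np.array(vec)

    C1_weights = np.array([1, 0, 0, 1])   # tuned to A-then-B
    C2_weights = np.array([0, 1, 1, 0])   # tuned to B-then-A
    THRESHOLD = 2

    def discharges(weights, pattern):
        """A cell fires iff its summed synaptic input reaches threshold."""
        return int(weights @ encode(pattern)) >= THRESHOLD

    for pattern in [("A", "B"), ("B", "A")]:
        print(pattern, "C1:", discharges(C1_weights, pattern),
              "C2:", discharges(C2_weights, pattern))
    # ('A', 'B') -> C1 fires, C2 does not; ('B', 'A') -> C2 fires, C1 does not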
From Aaron Sloman Fri Jun 1 01:40:14 BST 2007 To: PSYCHE-D@LISTSERV.UH.EDU Cc: Arnold Trehub Subject: Re: Decomposibility and recomposibility of conscious content Arnold Trehub responded to my post of Tue May 29. As I am in the midst of a 3-day conference I'll comment now only on part of his message. We agree on some things, but not about virtual machines. Apologies for length: I've tried to make this as clear as I can. > [Aaron] > > Their [concepts] meanings are determined not by explicit definitions > > (which always end up circular or vacuous -- like many definitions proposed > > on this list) but by the powerful theories in which they are used. > > I assume that Aaron means this as a desired goal rather than as a > description of common usage. No. I am merely summarising results of research on philosophy of science in the early 20th Century in which it was shown (I forget historical details, but I think the researchers in question included R.Carnap, A.Pap, C.G.Hempel, K.Popper, W.V.O. Quine and others) that all attempts to show that new deep scientific concepts can be explicitly defined in terms of pre-existing concepts (e.g. in terms of experimental tests) have failed. That failure includes, for instance, P.W. Bridgman's 'operationalism', which has misled many psychologists and social scientists. Usually only shallow concepts can be so defined. Restricting ourselves to such concepts can seriously hold up the advance of science. This is related to the failure of concept empiricism, discussed in a bit more detail here, with some references (including links to papers for and against concept empiricism): http://www.eucognition.org/wiki/index.php?title=Symbol_Tethering Symbol Tethering: The myth of symbol grounding NOTE: 29 Jul 2011 That link is now broken. Try instead http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models Why symbol-grounding is both impossible and unnecessary, and why theory-tethering is more powerful anyway. So I am making a substantive claim, not just reporting common usage. In particular, energy, matter, valence, neutrino, isotope, and gene are all concepts that cannot be defined except by their role in theories that use those concepts. Formally, a theory, by its structure, determines a class of models, as shown by A.Tarski and others. However, in itself a theory usually cannot determine a unique model. Selection of the intended model usually employs some bridging rules relating concepts of the theory to possible experiments and observations. Those links to pre-existing concepts do not define the new concept. They merely help to constrain the interpretation. Likewise, what is meant by one of the uses of the word "information", as it is increasingly being used in all areas of science and engineering, cannot be explicitly defined in terms of non-theoretical concepts. It is implicitly defined by the many things that can be said in a theory about information, which I've started to expound in the paper cited previously. ("What's information?") [Some concepts turn out to be incoherent when the theories surrounding them are incoherent. I believe this is true of the concept of "phenomenal consciousness" insofar as it is *defined* to have no function by contrasting it with e.g. "access consciousness". I would put some uses of the concept of "self", and of "free will", and many theological concepts into the same basket -- they turn out incoherent when examined closely.]
[AS] > > Moreover, in some cases, as Neil Rickert pointed out in his message > > of Fri, 25 May 2007, it may be far more useful to describe what's > > going on in terms of *virtual* machine processes (possibly in > > several levels of virtual machinery) rather than in terms of > > underlying *physical* implementation details. [AT] > Notice that Aaron emphasizes the *usefulness* of describing these *physical* > processes in terms of *virtual* machines. When I said it was more 'useful' that was a bit misleading, because it could be taken as a statement about mere convenience. I should have said 'It may be more accurate'. For there really are virtual machines in which events and processes can interact causally, and if you try to understand what's going on in terms of the underlying physical machinery you may fail to understand important details. E.g. if you try to understand in purely physical, or even digital electronic terms, how a bug in a virtual machine caused files to be lost you'll not understand what really happened, which would have been the same type of virtual machine process even if the process had been running on a different physical machine (e.g. a Sparc CPU instead of an Intel CPU), or if the current mapping between virtual and physical memory had been different, since typically on a modern multi-processing computer that mapping keeps changing. (I suspect something similar happens between minds and brains, but that's another story.) Every virtual machine needs to be implemented (ultimately, possibly via intermediate virtual machines) in some physical machine. But when you talk about the virtual machine you are not talking about the physical machine. [AT] > I agree with this pragmatic > approach, but the intellectual risk lies in treating virtual entities, > abstractions, and ideal concepts as ontological categories independent of > the physical world. The categories refer to a non-physical ontology in the sense that the concepts used cannot be *defined* in terms of the concepts used by the physical sciences. E.g. when a chess virtual machine is running, various concepts are needed to describe what's going on, including 'pawn', 'king', 'capture', 'threaten', 'checkmate', 'win', 'illegal move', etc., and those concepts cannot be defined in terms of concepts of physics, chemistry, geometry, etc. Even a list of specifications of all possible physical machines that could implement a chess virtual machine would be something different from a specification of the virtual machine. The latter specification would be required to explain why some physical machines are included in the list and some not. Likewise the concepts used to talk about what goes on in a spelling checker cannot be defined in terms of concepts of the physical sciences. In part that's because the notion of 'spelling' in general, and 'incorrect spelling' in particular, cannot be so defined. The fact that the concepts used in talking about some virtual machine cannot be *defined* in terms of those of the physical sciences does not imply that the virtual machines can *exist* independently of the physical world. These are very different notions of independence. What makes a particular physical system an implementation of this or that virtual machine is a complex topic. (Matthias Scheutz, among others, has written about this. I have not yet seen a completely satisfactory account, though in many cases we know that such an implementation exists because we, or people we trust, have created it, and we experience its effects -- e.g.
being beaten by the chess machine, or finding spelling mistakes in our documents and seeing the results of their being corrected.) [AT] > This often happens when computer programs are treated > as meaningful causal entities apart from any physical implementation. It is very important *not* to think of a computer program as a virtual machine in the sense in which I was using 'virtual machine'. (This is a very common mistake, made by many critics of Artificial Intelligence. The mistake is encouraged by computer scientists and AI theorists who suggest that the mind/brain relation is like the software/hardware relation. People who don't understand what that means think it refers to the program/hardware relation, which is something very different.) A running program, or more generally, a running collection of interacting programs, can be described as software, and can constitute a virtual machine. But that includes many events, processes, causal interactions, etc. A piece of program includes no events, processes or causal interactions: it is just an inert collection of symbols. So it is completely irrelevant to what I think Neil Rickert was talking about, and to what I was talking about. In some cases a compiled program or interpreted program will be part of the mechanism used to implement the virtual machine. But the chess virtual machine running on the computer is not the same thing as the program. In the virtual machine, pieces move, pieces get captured, etc., but not in the program, which typically does not change while the VM runs. There are virtual machines that include incremental compilers or other devices that are capable of altering the programs defining the virtual machines. That's another way of saying that some virtual machines can change themselves. They may be able to alter themselves beyond recognition, so that the original designer has no idea what's going on in the resulting system. I believe infant human minds include virtual machines that grow themselves into completely different virtual machines, partly aided by delayed growth in brain mechanisms. But that's yet another long story. [AT] > Consider the following program: > > L8, R21, L15, R7 > > What is its meaning? What is its causal efficacy? It has no particular > meaning until you are told that it is for a combination lock (a physical > machine). > .... etc..... All of that is irrelevant to what Neil or I wrote, as explained above. [AT] > And it has no causal efficacy unless it serves as a set of > *instructions* for opening a *particular* combination lock or one that works > the same way. So it is with computer programs. They *derive* their meaning > and efficacy from the physical design of the computing machinery to which > they are applied. The attempt to understand concepts in terms of physical > design has been dubbed by Daniel Dennett as the "design stance". I think that over the years he has loosened the concept of the "design stance" to refer to the same thing as I've called "the design-based approach" (e.g. to minds). The looser notion allows someone adopting the design stance to be concerned merely with the design features of *virtual* machines that explain some phenomena. That contrasts with his 'intentional stance', which makes an assumption of rationality, which I think is a red herring. (That's another long story. Most biological organisms are neither rational nor irrational, but that doesn't stop them being highly sophisticated information processing machines.)
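[Note: the following toy Python fragment is not part of the correspondence; it is a minimal illustration of the program / running-virtual-machine distinction drawn above, using the combination-lock 'program' quoted earlier. The instruction string is an inert piece of text; events and causal consequences appear only when some machine -- here a trivial interpreter standing in for the lock -- runs it, and what the string does depends on the machine it is applied to. The class and names are invented.]

    program_text = "L8, R21, L15, R7"        # inert: no events, no processes

    class LockVM:
        """A minimal 'virtual lock' in which running the text produces events."""
        def __init__(self, secret):
            self.secret = secret             # stands in for the lock's physical design
            self.entered = []

        def run(self, text):
            for step in text.split(", "):    # each step becomes an *event*
                self.entered.append(step)    # a state change in the running machine
                print("dial turned:", step)
            return self.entered == self.secret

    print("opened:", LockVM(secret=["L8", "R21", "L15", "R7"]).run(program_text))
    # The same string applied to a differently built lock opens nothing:
    print("opened:", LockVM(secret=["L1", "R2", "L3", "R4"]).run(program_text))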
[AT] > Here's a sketch of an effort to deal with the concepts of *meaning* and > *understanding* as biological events within the framework of the design > stance: > > The central ideas conveyed by the OED definitions of meaning and > understanding are that (a) *meaning* is the significance or purpose of > something, and (b) *understanding* is the ability of a person to capture the > meaning of something. Note that OED definitions may be good records of actual current and historical usage. That does not make them useful for the purpose of defining concepts to be used in explanatory scientific theories. In particular, any concept of 'meaning' or 'information' that requires a *person* to be involved has rendered the concept useless for our purposes. The information processing that goes on in an ant does not require what the OED would call a person to capture the meaning involved. Information is, however, relevant to actual or possible users. The user could, in some cases, be a microbe or an automated factory controller, etc. (Of course not all information can be used by microbes. E.g. they probably cannot use the information that there are no prime numbers between 62 and 66. Nor can many humans.) [AT] > In keeping with the OED definitions, I assume that, in the most general sense, > meaning is a construction within the brain. There are many organisms that have no brains and machines that have no brains, yet can acquire, store, manipulate and use information. So that notion of 'meaning' is too restrictive for our purposes. [AT] > I also assume that understanding > is the capacity of the brain to construct meaning in particular instances. Note the looming circularity... [AT] > It > seems straightforward that any particular meaning cannot exist in the absence > of a corresponding understanding. There's a huge amount of information encoded in the genomes of many animals. That information is used by processes that produce instances of the genome (phenotypes). I would not call that 'understanding', though you may. [AT] > There are four critical competencies of the > TCB model which enable meaning and understanding: > > 1. The ability to learn and to establish long-term memories. > > 2. The ability to represent learned objects and events in both analogical and > sentential neuronal structures with interactive recall. > > 3. The ability to represent goals and compose plans of action. > > 4. The ability to imagine and analyze the properties of possible (hypothetical) > worlds, particularly as they relate to goals and affordances. > > Back to the OED notion that "meaning" is the significance or purpose of > something. I believe that the systematic interrelationships between the > neuronally represented properties of objects, events, goals, and plans, both > veridical and imaginary, which are generated in TCB, warrant the claim that > these integrated neurocognitive states are the embodiment of significance and > purpose. In my view, this is the biological grounding of "meaning" and > "understanding". An indication of how all of this can be accomplished is given > by the structure and dynamics of the neuronal mechanisms and systems in TCB. Apart from some minor circularity this is OK as a *partial* theory of the kinds of information processing that can go on in a particular sort of organism. As a general account of meaning or information it is too restrictive, for reasons given above. Moreover, even as an account of information processing in humans it is too restrictive.
Your posture-control subsystem uses information about optical flow in maintaining posture while you walk, but that use of information can occur in animals that lack some of the abilities you list as requirements. It is important that there are different kinds of information-users, and a generic specification of that variety, extending the sort of thing you have begun above, would present a theory implicitly defining the notion of 'information'. And like all deep concepts in deep theories the concept would be subject to revision in the light of future scientific advance (as happened to the concept of 'energy' for instance, between Newton and now.) There's more about this in L.J.Cohen's 1962 book 'The diversity of meaning'. Related to all this is how ordinary usage relates to technical and scientific usage of concepts. Gilbert Ryle talked about the 'Logical geography' of sets of concepts. I've argued that in some cases scientific advances can reveal underlying 'logical topography' suggesting that it's useful to revise the logical geography, as part of the process of theory development. This happened in the physical sciences, e.g. as a result of the discovery of the atomic structure of matter, and the periodic table of the elements. We should expect similar advances revising ordinary language, in the science of mind: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/logical-geography.html That will raise the hackles of some philosophers. Aaron http://www.cs.bham.ac.uk/~axs/ =============================
From owner-psyche-d@LISTSERV.UH.EDU Fri Jun 01 20:01:09 2007 Date: Fri, 1 Jun 2007 14:43:26 -0400 From: Arnold Trehub Subject: Re: Decomposibility and recomposibility of conscious content To: PSYCHE-D@LISTSERV.UH.EDU It seems that there are four particularly thorny concepts at issue in recent exchanges: 1. Virtual machines 2. Meaning 3. Understanding 4. Information These are concepts about which it may be difficult to arrive at an agreement among discussants, but I think our efforts toward mutual clarification will be worthwhile. In *The Cognitive Brain* (TCB), I have commented on the difficulty in arriving at a common understanding of linguistic terms because of the very nature (design) of the neuronal mechanisms in our cognitive systems, e.g., see TCB pp. 300-301, "The Pragmatics of Cognition". [Arnold] > > Notice that Aaron emphasizes the *usefulness* of describing these > > *physical* processes in terms of *virtual* machines. [Aaron] > When I said it was more 'useful' that was a bit misleading, > because it could be taken as a statement about mere convenience. > > I should have said 'It may be more accurate'. For there really are > virtual machines in which events and processes can interact > causally [1], and if you try to understand what's going on in terms of > the underlying physical machinery you may fail to understand > important details [2]. [1] Aaron, it would be helpful to have an example of the simplest virtual machine you can think of in which events and processes *really* interact causally, i.e., not an "as if" interaction. [2] It seems to me that here you are agreeing with me that virtual machinery does not exist without corresponding physical machinery. Also, you seem to be saying (as I suggested above) that there is a pragmatic advantage (usefulness) to examining the operational implications of a virtual machine because it is easier to understand than the highly complex details of its corresponding physical implementation. [Aaron] > The fact that the concepts used in talking about some virtual > machine cannot be *defined* in terms of those of the physical > sciences does not imply that the virtual machines can *exist* > independently of the physical world. These are very different > notions of independence. Good. If I understand you correctly, we seem to be in agreement that virtual machines cannot exist independently of the physical world. I take your follow-on argument to be that the conceptual terms of the contemporary physical-sciences canon cannot be used to define some kinds of virtual machines (e.g., minds?). If so, we also agree on this point. But then I would argue that the biophysically based conceptual terms derived from my theoretical model of the cognitive brain *can* be used to define some significant properties of the human mind when we contemplate it as a virtual machine. For example, some of the new conceptual terms would be *synaptic matrix*, *filter cell*, *class cell*, *detection matrix*, *imaging matrix*, *retinoid*, *self locus*, *heuristic self-locus*, etc. The structural and dynamic properties of these new conceptual packages are detailed in the theoretical model. It seems to me that this is all consistent with the design stance and suggests how a new scientific model can promote our understanding of cognitive competence and, perhaps, our understanding of phenomenal experience. I think *meaning*, *understanding*, and *information* would be interesting grist for further discussion. Arnold Trehub =============================
From trehub@psych.umass.edu Fri Jun 08 20:39:18 2007 Date: Fri, 08 Jun 2007 15:39:04 -0400 From: trehub@psych.umass.edu To: Aaron Sloman Subject: Virtual machines Aaron, I would really like to clarify my understanding of what you mean by the term *virtual machine*. I'm not sure that we disagree, though we might. [Aaron] > For there really are virtual machines in which events and processes can > interact causally, and if you try to understand what's going on in terms > of the underlying physical machinery you may fail to understand > important details. Can you give a simple example of a virtual machine in which events and processes actually interact causally? For example, I think of the brain mechanisms that are simulated in TCB as virtual machines, but the real events and processes that are actually interacting causally are the mechanisms in the computer which is running the simulation program. So, for me, the simulation program is the virtual machine which is transduced (so to speak) by a real machine (the computer). By my account, the physical specifications of the brain mechanisms that are captured by the simulation program also constitute a virtual machine. In this case, the real machine that acts as the transducer (from specifications to the simulation program) is a human brain. Where am I going wrong? On your advice, I have now put my book and other papers online here: http://www.people.umass.edu/trehub/ Arnold =============================
From Aaron Sloman Fri Jun 8 22:34:35 BST 2007 To: trehub@psych.umass.edu Subject: Re: Virtual machines Dear Arnold, Very sorry not to have responded to you and several other people who commented publicly and privately on my messages to psyche-d. I thought I would be able to keep up but have been swamped by other things. Maybe I should not post at all if I find it so difficult to reply in a reasonable time to people who respond. Anyhow > I would really like to clarify my understanding of what you mean by the > term *virtual machine*. I'm not sure that we disagree, though we might. I suspect you agree about all the *facts* I refer to but for reasons which I think are a hangover from a mistaken philosophy you don't want to use the terminology that I think is perfectly adequate for describing those facts -- e.g. you have a philosophical view that restricts the use of words like 'real' and perhaps 'exists' unreasonably. > [Aaron] > > For there really are virtual machines in which events and processes can > > interact causally, and if you try to understand what's going on in terms > > of the underlying physical machinery you may fail to understand > > important details. > > Can you give a simple example of a virtual machine in which events and > processes actually interact causally? I have given lots in the past. When you type a character into a word processor the physical keypress can cause an event in a virtual machine, namely a character is inserted in a line. That increases the length of the line. Depending on the type of word processor and the current length of the line of text (not physical length, but either number of characters in the line or else the size of the line of text on the page in the virtual machine which is the running word processor) that may or may not cause the line to be broken. That's a virtual machine event. A bit of the line moving to the next line can cause the next line to be broken. The effects can, in some cases, propagate down to the bottom of the page (another virtual entity) so that part of the page has to be moved to the next page. Eventually this can even cause a new page to be added to the whole document (a larger virtual machine entity). If there was a page limit on the document and the software allows that to be specified, the addition of the page can cause an alarm to be triggered, which might mean that you get sent an email message (another virtual machine event) or some sort of message is sent to the window manager that causes a panel (in the virtual machine) to be displayed on the screen with a warning (a physical event). The character you typed may also cause your spelling checker (another virtual machine in the larger virtual machine) to detect a mistake that needs to be corrected, etc. etc. My example was a word-processor, but I could have chosen any of dozens of other virtual machines that you interact with frequently including operating systems, file management systems, networking systems, email systems, security systems, etc. etc. > For example, I think of the brain > mechanisms that are simulated in TCB as virtual machines, but the real > events and processes that are actually interacting causally are the > mechanisms in the computer which is running the simulation program.
As I said, you use a restrictive terminology, possibly because you subscribe to some philosophical theory according to which only physical things can be real, or only physical things can be causes or effects (ruling out desires, decisions, attention switches, economic inflation, political swings, and many other things that most people most of the time regard as perfectly real, until they move into some austere philosophical mode of thinking). For someone who thinks only physical things can be real, and can be causes, I could go into why a billiard ball bumping into another billiard ball, or a chemical catalyst triggering a chemical reaction, is just another event in a virtual machine running on lower level virtual machines whose existence was not known until recently, and which may turn out to rest on several layers of virtual machines as physics digs deeper and deeper. I don't know if there is some well-defined 'bottom' level on which everything is implemented or what it is like. I am inclined to think there must be but I have no idea how close we are to discovering what it is. But that doesn't mean that I am not sitting on a real chair that is really supporting me. > So, for me, the simulation program is the virtual machine which is > transduced (so to speak) by a real machine (the computer). By my account, > the physical specifications of the brain mechanisms that are captured by > the simulation program also constitute a virtual machine. In this case, > the real machine that acts as the transducer (from specifications to the > simulation program) is a human brain. Where am I going wrong? I don't know why you bring in transduction. I note that you yourself have some hesitancy about that because of your parenthesis. As I understand the word 'transduce', it has many different uses all of which are metaphorically related to the notion of something being carried from one place to another. In physics it is usually energy (e.g. a photocell can be a transducer from light energy to electrical energy). But in all those cases the energy or physical entity transduced starts in one place and ends up in another. In biology it may be a molecule, or a cell, or an organism, or a gene, or perhaps larger scale components of ecosystems, that moves from one place to another. When a virtual machine runs it does not move around in that way. I suspect that you are thinking of what a compiler does, which can be described metaphorically as transducing, or more accurately translating, information from one form (often a high level, human-readable programming language) to another form (e.g. bit-patterns constituting machine code, or an intermediate level virtual machine program, which itself has to be compiled or interpreted). But compiling happens before the program starts running. When compilation happens the virtual machine specified by the program being compiled is not running (though another virtual machine is: the compiler). Not all programs have to be compiled: some are interpreted. Either way, when a program runs, all sorts of virtual machine entities are created which can exist for an extended time, can interact, can change some of their contents or relationships, can produce physical effects, can be modified as a result of physical effects. Such virtual machine entities can include arrays, lists, trees, graphs, images, simulations of physical or other systems, board states in games, steps in a proof, incomplete or complete plans, rules, etc.
What is confusing for many people is that there are coexisting parallel streams of causation that interact with one another, and people cannot accept that because they assume a mistaken philosophical theory of causation as being analogous to some kind of force or perhaps a fluid that is conserved. Saying what 'cause' means is perhaps the hardest unsolved problem in philosophy, but I think it is clear that a major component of the correct analysis is that any statement about something causing something else is in effect a short-hand for a very complex collection of statements about what would or would not happen if various things were or were not the case: i.e. a collection of counterfactual conditionals. But exactly what collection corresponds to any particular causal relation is not easy to specify, and the logic of counterfactual conditionals is still not clear. (Bayes nets are a recent attempt to provide the analysis and have turned out very useful, but I think they cover only a subset of types of causation.) I believe that software engineering would be impossible if software engineers could not think about virtual machine events and processes as causes and effects. However, explaining exactly how the virtual machines operate and what their relationship is to the underlying physical machines and physical processes has never been spelled out very clearly: it is in some sense well-understood by software engineers and computer scientists, but only in the sense in which English is well understood by English speakers: they mostly cannot say what it is they understand, or how it works. So sometimes a computer scientist who expresses an opinion about these matters will give an incorrect answer which is incompatible with what he actually does with computers, compilers, programming languages, operating systems, etc., just as linguists can make the mistake of expressing theories about their language that are incompatible with how they actually speak and understand it. (Like the early thinkers who said that every sentence has a subject and a predicate, which was refuted by many of the sentences they themselves produced, including conditional and disjunctive sentences.) I have a hunch that you may have a wrong model of what goes on when computer programs run. But perhaps you have an accurate model but describe it in terms that are unfamiliar to me -- e.g. using 'transduce' in a way that I have not encountered. Note that all my examples are from computing systems. I added that sentence as an afterthought, after writing the bit below on your new web page. Why did I go back and insert that? A specific event occurred in the virtual machine (or one of the many virtual machines) constituting my mind, namely realising that you might have been expecting a psychological example. I don't know what made me realise that: but I think it was connected with thinking about what would get people to read your book chapters. I am sure virtual machine events and processes exist and interact in minds, and they are implemented in brains, in combination with the environment. E.g. my thought that it is getting a bit dark and I should switch the light on is a thought that refers to this room at this time, and that semantic content cannot be fully implemented in states and processes in a brain, for the same states and processes could occur in someone else in another very similar room at a different time and place, and he would not be referring to my room now, whereas I am.
That reference depends on my being causally embedded in this bit of the world. My ability to think about infinite subsets of the set of integers does not have that dependence, but also involves events and processes in virtual machinery. The difference between computer based virtual machines and mental virtual machines is that we still understand very little about how they work. Your theory goes much further than most in addressing a wide range of requirements, but I still think it is incomplete in ways that we have started discussing in the past. However that may be because I still don't understand it fully. I was amazed at the recent workshop I attended when none of the distinguished psychologists and neuroscientists talking about vision and all the brain routes through which visual information flows ever mentioned that what is seen can persist across saccades and therefore cannot be implemented in any retinotopic map, however abstract. Of course I told them about your book. I don't know if I have answered your question. Maybe I misunderstood the question and answered another one. Or maybe I gave an answer but you think it is mistaken. If I have made a mistake it must be that events in virtual machines cannot cause anything to happen. Proving that will require a detailed analysis of what events in virtual machines actually are and what 'cause' means. Intuitions are not enough. This is truly excellent news: > On your advice, I have now put my book and other papers online here: > > http://www.people.umass.edu/trehub/ I have been recommending for a long time that people read it, and when I ask them later they usually have not done so. I shall now start adding that link to my references (including the slides I used for a talk at a conference a few days ago, which I have to tidy up and put on the web). Thanks for doing that. I guess it won't take google long to get it all indexed. If you can find time to add to the web page a little abstract for each chapter and each paper that will pull in even more readers. Aaron http://www.cs.bham.ac.uk/~axs/ =============================
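[Note: the following toy Python sketch is not part of the correspondence; it is a much simplified illustration of the word-processor cascade described in the message above -- one virtual machine event (a character inserted in a line) can cause a line break, which can push text onto following lines, eventually add a new 'page', and trigger a warning. Sizes and limits are invented.]

    LINE_WIDTH = 10       # characters per line in the virtual document
    LINES_PER_PAGE = 3
    PAGE_LIMIT = 2

    def insert_char(lines, line_index, char):
        """Insert a character, then let the effects propagate down the document."""
        events = ["char inserted in line %d" % line_index]
        lines[line_index] += char
        i = line_index
        while i < len(lines) and len(lines[i]) > LINE_WIDTH:
            overflow = lines[i][LINE_WIDTH:]     # text pushed off the end of the line
            lines[i] = lines[i][:LINE_WIDTH]
            if i + 1 == len(lines):
                lines.append("")
                events.append("new line added")
            lines[i + 1] = overflow + lines[i + 1]
            events.append("line %d broken, text pushed to line %d" % (i, i + 1))
            i += 1
        pages = (len(lines) + LINES_PER_PAGE - 1) // LINES_PER_PAGE
        if pages > PAGE_LIMIT:
            events.append("page limit exceeded: warning displayed")
        return events

    doc = ["aaaaaaaaaa", "bbbbbbbbbb", "cccccccccc",
           "dddddddddd", "eeeeeeeeee", "ffffffffff"]
    for event in insert_char(doc, 0, "X"):
        print(event)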
From Aaron Sloman Fri Jun 8 23:06:36 BST 2007 To: trehub@psych.umass.edu Subject: Your web page, afterthought Why not add a link to this review: http://psyche.cs.monash.edu.au/v1/psyche-1-15-dacosta.html Getting the Ghost out of the Machine: A Review of Arnold Trehub's The Cognitive Brain Luciano da Fontoura Costa Cybernetic Vision Research Group IFSC-USP, Caixa Postal 369 Sao Carlos, SP, 13560-970 BRAZIL PSYCHE, 1(15), January 1995 Aaron =============================
From A.Sloman@cs.bham.ac.uk Sat Jun 16 22:30:00 2007 Date: Sat, 16 Jun 2007 22:30:08 +0100 From: Aaron Sloman To: trehub@psych.umass.edu Subject: Re: Thanks Arnold, Thanks for pointing this out: > Thanks for posting the notice about *The Cognitive Brain* on your website. > I wonder if you can fix the typo in "Cognitive" as it appears on your web > page. Sorry about the typo -- fixed now. I've been doing too much in too much haste. > I very much appreciate your effort to have my book reach a wider > audience. I had another chance at this workshop in Denmark a couple of days ago http://www.cospal.org/ Although people were trying to relate problems in designing robots to theories about how brains work, nobody had heard of your book. I don't know whether it's because MIT Press did not market it properly, or because you require a rare breadth of competence and interests in the reader, so most people are too specialised to want to read it. When I tried, I found it quite difficult because of my lack of knowledge about some of the neural mechanisms discussed in the book. But by then I had encountered you in psyche-b discussions and knew that what you had to say would be important, even if I found it difficult. I guess most readers don't start with that prejudice. Not only had the people at the workshop not encountered your book -- the people working on vision appeared not to have thought about the fact that what is seen survives saccades and head movements, and therefore the perceived scene cannot be in registration with the retina or any retinotopic map, e.g. in V1. I still have things of yours and others on psyche-d to respond to, but some urgent deadlines are keeping me busy. Incidentally, have you ever thought about what goes on in brains when someone visualises the process of a nut being turned on a bolt, or a screw going into a threaded hole? What goes on inside the mechanism is nothing anyone has perceived unless they have seen a transparent nut being turned on a bolt, for instance, which I don't recall ever having seen. I suspect that a child of 3 or 4 can learn that you have to turn the bolt (or nut) one way to tighten an assembly, and the other way to loosen it, but has no idea why. (An example of Humean causality.) At some later stage (I don't know at what age) it's possible to visualise what's going on in a way that explains *why* the rotary motion produces the relative longitudinal motion. (An example of Kantian causality.) I wonder what's going on in the brain when the latter happens. (There's a related question about how the change occurs from the first stage to the second, and how many intermediate steps there are.) No doubt understanding the processes involves assembling in a novel way information about surfaces sliding in contact, information about tilted and curved surfaces, information about rigidity and impenetrability, information about forces transmitted by one surface to another, etc. Increasingly I am convinced that the greatest obstacle to understanding cognition (at least in humans) is our lack of understanding of how 3-D processes of different sorts, at different levels of abstraction, some continuous, some discrete (e.g. topological changes), and with concurrent sub-processes doing different things, are represented and used for various purposes (e.g. inventing nuts and bolts, then using them). Most people think about perception of structures, not processes. A structure is just the special case of a process in which nothing is changing.
I've been asking psychologists and neuroscientists, and I think nobody has even thought about the problem, except for those who think about echoic auditory memories. And I've no idea if anyone has good theories about how those work. But the requirements for representing a process consisting of a discrete sequence of sounds, or words, or numbers are much simpler than the requirements for representing 3-D movements including rotation and translation of structured objects. Maybe representing a Bach fugue comes closer. Another thought. Although I've been trying to cut down on travel (and failing miserably) I agreed to give an invited talk at a workshop on consciousness at the AAAI Fall Symposium to be held in Arlington, Virginia in November. If you have funds and time and inclination to attend it would be good to have a chance to meet and talk. The call for participation will probably go out in July. The current information is here: http://www.consciousness.it/CAI/CAI.htm My draft abstract is here http://www.cs.bham.ac.uk/research/projects/cogaff/misc/consciousness-aaai-fall07.html Aaron http://www.cs.bham.ac.uk/~axs/ =============================
From trehub@psych.umass.edu Thu Jul 19 19:50:15 2007 Date: Thu, 19 Jul 2007 14:50:24 -0400 From: trehub@psych.umass.edu To: Aaron Sloman Subject: Crane-episodic-memory Aaron, I like your crane challenge. http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane I think TCB Ch. 7 "Analysis and Representation of Object Relations" is particularly relevant to the task of determining the causal consequences of setting a static machine like your toy crane in motion (in imagination). For humans, I think it is necessary to deploy the heuristic self-locus to tag parts of the image, trace their paths of possible motion in 3D space, and determine their final locations. I wonder if a machine-vision robot will require something like a heuristic self-locus in a retinoid-like system. Arnold http://www.people.umass.edu/trehub/ =============================
From Aaron Sloman Thu Jul 19 21:31:00 BST 2007 To: trehub@psych.umass.edu Subject: Re: Crane-episodic-memory Arnold, Nice to hear from you. Every now and again, when I am not at my machine, I remember that I owe you responses to interesting messages (all raising points that need some work!), and then when I am at the terminal there are always other things that grab my attention including trying to finish another big grant proposal or preparing for workshops or conferences (my next one is this one in Vienna http://www.indin2007.org/enf/ and I really wish I were not going as I have not recovered yet from a cold/cough I caught at my last conference!) or producing another little web site with some new ideas, like the crane site (partly a result of my learning to use my wife's digital camera -- not entirely successfully as the out of focus pictures show). > I like your crane challenge. I think TCB Ch. 7 "Analysis and Representation > of Object Relations" is particularly relevant to the task of determining the > causal consequences of setting a static machine like your toy crane in > motion (in imagination). I really must find a way to make myself re-read your book cover to cover. There are always other easier, shorter, more urgent tasks however! So I partially compensate by promoting it, most recently to the attendees at this workshop about six weeks ago: http://comp-psych.bham.ac.uk/workshop.htm some of whom said they would be very interested to look at it, though I've not had any feedback since then. > For humans, I think it is necessary to deploy the > heuristic self-locus to tag parts of the image, trace their paths of possible > motion in 3D space, and determine their final locations. One of the things that the crane task requires if done in full generality is the construction of 3-D interpretations of the images. AI vision systems are, to the best of my knowledge, totally incapable of doing this. I believe that one of the reasons for this is that researchers who have tried to do that have tried to find ways of going from the image contents to precise 3-d surface models, faithfully specifying location, orientation and curvature at every point. Because they have no idea how to do it with monocular images (though we obviously can see with one eye) they try to use stereo instead (which is useless for looking at pictures, of course) and their stereo algorithms are so awful that they get totally vague and inaccurate results. I believe that doing stereo properly requires getting lots of monocular structure *first* and then using the high level monocular structure to drive the process of finding low level correspondences, to add precision to the metrical information already derived, which is not how most vision researchers seem to think of the problem. They think stereo is required to see in 3-D despite the clear evidence that we don't need two eyes except for tasks requiring precision and speed, and that only works for nearby objects. Anyhow, I think that trying to find ways of going from the image contents to precise 3-d surface models, faithfully specifying location, orientation and curvature at every point, is pointless (a) because the task cannot be done, especially in noisy and low resolution images where we nevertheless see clear 3-D structure (e.g. my cup+saucer+spoon challenge pictures which are only about 160x120 pixels, taken in poor light, etc.)
(b) even if it could be done the result would be useful for generating new images from different angles by projecting to 2-D, but would not be useful for planning and acting (c) what we see does not have precise metrical properties (e.g. exact lengths and angles) but a host of topological and qualitative relationships in the scene (d) nobody knows a good form of representation to encode the information about scenes that would be useful for a robot (or animals) acting in the environment, in particular things like surface curvature, kind of stuff of which things are made, possibilities for motion, constraints on motion, possibilities for action, obstacles to action etc. The ideas in your book provide some steps in the right direction, but I think a lot more is needed. (Have I written to you about the ideas for generalising aspect graphs we've been discussing here?) I must try to make all this more precise, but I'll be able to do it better (or find my errors) after re-reading your book! > I wonder if a > machine-vision robot will require something like a heuristic self-locus in > a retinoid-like system. Yes, several of them dealing with representations of different sorts, where I am in relation to the table, the room, the town, the country.... Various people have come up with related ideas, including Zenon Pylyshyn's notion of a 'virtual finger' that can represent positions in the image or scene. I think he had a paper around 1989 about that. It must have been while you were writing your book. There are references on his web page: http://ruccs.rutgers.edu/faculty/pylyshyn.html Do you know his work? He seems to be one of the very few people (apart from you and me) who acknowledge the importance of the fact that the mapping between what is seen and what's projected from retina to V1 keeps changing. However, I have not looked at his work for years. Something else I should re-read. ...... Aaron =============================
From trehub@psych.umass.edu Mon Jul 23 02:08:11 2007 Date: Sun, 22 Jul 2007 21:08:25 -0400 From: trehub@psych.umass.edu To: Aaron Sloman Subject: AI and psychopathology Aaron, After reading your "Machines in the Ghost" paper, it occurred to me that before you go to the Vienna meeting it might be useful for you to read TCB Ch. 9 "Set Point and Motive: The Formation and Resolution of Goals". In particular, I think the parts dealing with *Pleasure and Displeasure*, *The Central Hedonic System*, and *Development of Secular Goals* would be relevant to the aims of the meeting. Arnold =============================
From Aaron Sloman Sun Jul 29 23:41:47 BST 2007 To: trehub@psych.umass.edu Subject: Re: AI and psychopathology Arnold, Thanks for your message, which arrived while I was in Vienna, without internet access. > After reading your "Machines in the Ghost" paper, it occurred to me that > before you go to the Vienna meeting it might be useful for you to read > TCB Ch. 9 "Set Point and Motive: The Formation and Resolution of Goals". > In particular, I think the parts dealing with *Pleasure and Displeasure*, > *The Central Hedonic System*, and *Development of Secular Goals* would be > relevant to the aims of the meeting. You are probably right, but alas that message got to me too late. In any case, they allowed so little time per speaker that I don't think it was possible to communicate anything effectively. However I did have the chance to tell people about your book and its being available online. I also had quite a long chat with Jaak Panksepp who claims that we attended experimental psychology seminars together many years ago when he was at Sussex University (though I don't recall meeting him then). He also has very fond memories of interacting with you at an important stage in his career, a long time ago. In one of the discussion sessions he reported an experiment you did that showed that signals coming into the eyes of an animal were transmitted to every part of the brain. I did not know about that, but it's consistent with the H-CogAff architecture sketch. .... I have no idea what the psychoanalysts made of my presentation: restricted to 20 minutes, and therefore probably incomprehensible to anyone who was not already familiar with AI and robotics. I've started re-reading your 'Space, self, and the theater of consciousness', which I had previously only skimmed, and will try to produce some comments on what I agree with and what I think doesn't meet the requirements (or meets wrong requirements: e.g. I think Hume was right in saying that there is no such thing as the self: there's you, your arms, legs, thoughts, hopes, percepts, and many other things, but the notion of a self is just based on a confused interpretation of how words and phrases like 'I did it myself', 'For myself, ...', 'Her actions were selfless', and many more are used). It's like thinking you have a sake because I can do something for your sake. But that's just a detail. I'll try to find more time before long: I've been swamped with the task of finishing a new large grant proposal (due end of July) and being ill from too much travel. Aaron =============================
From trehub@psych.umass.edu Mon Sep 10 16:21:25 2007 Date: Mon, 10 Sep 2007 11:21:52 -0400 From: trehub@psych.umass.edu To: Aaron Sloman Subject: The self Aaron, Concerning our views of the *self*, there were two interesting experiments published recently in *Science*. One was reported in SCR and I made some comments in response. See here: http://sci-con.org/2007/08/manipulating-bodily-self-consciousness/ I hope you are now feeling better after all of your summer travelling. Best wishes, Arnold http://www.people.umass.edu/trehub/ =============================
From Aaron Sloman Wed Sep 12 23:35:05 BST 2007 To: trehub@psych.umass.edu Subject: Re: The self Thanks Arnold > Concerning our views of the *self*, there were two interesting experiments > published recently in *Science*. One was reported in SCR and I made some > comments in response. See here: > > http://sci-con.org/2007/08/manipulating-bodily-self-consciousness/ I don't have access to Science, but I think I have seen reports of the work. There are many ways in which humans are subject to perceptual and other illusions about themselves and other things. I think I recall Kevin O'Regan describing work that causes people to feel that their body length has been substantially reduced because of some contrived mixture of visual and tactile sensory input. Nothing I have read or heard has ever led me to conclude that there is any sensible use for the phrase "the self", though there are many things we can say in which the particle 'self' forms part of larger constructs, e.g.
  He did it himself
  He did not recognize himself in the picture / mirror
  He felt angry with himself
  He was very self-conscious about the wound on his face.
  He did not realise how much harm he was doing to himself...
  He always asserts himself in annoying ways.
  He is much too self-satisfied.
  He is much too self-critical.
  He willed himself to continue
  He chastised himself
  He was ashamed of himself
  He unselfconsciously took charge of the situation
  He deceived himself about his motives
  His self-importance irritated his colleagues.
  His self-deprecation embarrassed his colleagues.
  He reported himself as ready for duty.
  His self-control surprised his colleagues
  Our sense of self-location can be dissociated from our actual body location (Your words! Or, in plainer English: We can sometimes be confused or deceived about where we are and what we are looking at)
  etc.
and many similar things we can say without using 'self', e.g.
  He misjudged
    how far he could jump
    how much he could eat
    how much he had understood
    how close he was to his destination
    what he wanted out of life
    the extent of his guilt
    which direction he was facing
    how well he was hidden from view
    etc.
  He thought he was in a different street.
  He thought someone was in front of him, but it was his own shadow on the wall.
  He was aware that he was shaking
  He was unaware that he was angry
  He thought the broken chair could hold his weight
  He thought he would enjoy the movie
  He thought he was hungry
  He feared he might lose interest in ...
I would sum it all up thus: Central point: it is of the essence of any information-processing system that it can get some things wrong. Humans have vast amounts of information about themselves of many different kinds. Sometimes the information is erroneous. Humans are able to perform many kinds of actions, both physical and mental. Sometimes they lose control or partially lose control. Sometimes they mistakenly think they are performing an action when nothing is happening, or when something is being done but not by them. Sometimes they are unaware of influences on their decisions and actions, or misconstrue the influences. (Schizophrenia involves many bizarre phenomena.) Sometimes they do things unawares. Sometimes they lose control - of physical or mental processes (thoughts, feelings, reactions...) Sometimes they don't know what they want, what they will regret, what they will enjoy, what they prefer, ....
In other words, there are many things people know about themselves and about
other things, and many things they can do to or about themselves and other
things, and in all those cases things sometimes go wrong, for different
sorts of reasons -- some commonplace, and some requiring very special
experimental manipulation (e.g. hypnotism).

All of these and many more are summaries of very complex states and
processes involving interactions between many different parts of their minds
and bodies (and between their brains and bodies, and between their minds and
their brains). We don't yet know what all the subsystems are, what
information they acquire and use, and how they interact with one another.

Among all the various sub-systems, and the ongoing states and processes
using various subsets of sub-systems, there is no one thing that can be
shown to be the referent of the English phrase 'the self'. Anything
interesting and true that is said using that phrase can be said with greater
precision and clarity without it. But much that is said using the phrase is
just confused (in which bit of the brain is the self located?).

Everything else you say in that web discussion seems to be fine. However,
using the phrase 'the self' seems to me to be pointless: it adds no new
content that I can understand. I would say it probably causes some people to
be less likely to appreciate the real content in your work.

===

By the way, I was recently reading what a colleague had written about the
blind spot, and that reminded me that many people regard the existence of
the 'invisible' blind spot as mystifying, whereas I think it is a direct
consequence of any theory like yours which keeps the constructed percepts
separate from the current contents of the retinal sensory array -- an
essential requirement for a theory that allows us to move our bodies, rotate
our heads and perform saccades while perceiving an unchanging immediate
environment. There is absolutely no reason for some bit of information to be
removed merely because it is temporarily not supported by retinal input,
i.e. because the blind spot happens for a time to be mapped onto that
location in the perceived scene. [A schematic sketch of this point is
appended after the end of this message.]

I searched for 'blind spot' in your book and could not find any mention of
it there, or in your 'theatre' paper. I feel sure you must have written
about it somewhere.

> I hope you are now feeling better after all of your summer travelling.

I've still got a persistent cough, unfortunately. I'll be seeing a doctor
about it. Unfortunately I have a lot more travelling, including the trip to
Washington for the Fall AAAI symposium on consciousness. I've also been
asked to talk to a parallel symposium on representation and learning, and am
now struggling to finish off my paper for that by the deadline on Friday.

The consciousness one is nearly done -- you saw the abstract with the
robot-philosopher 'Turing test'. That's now been generalised: for many
different opposed philosophical views about mind, science, mathematics,
knowledge, etc., a good theory about how human minds work should show how
both sides are capable of developing in a mind of the sort specified, e.g.
when implemented in a robot that learns. (A bifurcation theory of how minds
work?)

The paper, still to be polished before electronic submission tomorrow, is
here:

    http://www.cs.bham.ac.uk/research/projects/cogaff/sloman-aaai-consciousness.pdf

Unfortunately it suffers from my haste and disorganisation, and lack of
clarity on some of the points I want to make. It's a real pity you won't be
there.
This coming weekend our EU-funded robotics project is organising a three-day
workshop with psychologists and biologists, mostly invited by two people in
Paris. It's supposed to be for us to discuss the potential relevance of
biological systems to robotics and vice versa. Again I wish you could be
there. This one's a closed meeting unfortunately, for various reasons.

....

Still disorganised and fighting overdue deadlines!

Aaron

PS The organisers of the Vienna meeting on psychoanalysis and artificial
intelligence are looking for potential contributors to a book.

The conference web site is here:

    http://www.indin2007.org/enf/

The call for contributions to the book is here:

    http://www.indin2007.org/enf/cfp.php

I expect you are as likely as anyone to make a worthwhile contribution, if
you have time, interest, etc.
=============================
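The blind-spot point in the message above can be illustrated with a minimal
sketch. This is not Trehub's retinoid model and is not code from the
correspondence; the scene width, the gaze positions and the blind-spot
offset are arbitrary values chosen only for illustration. The idea is
simply that an internal scene buffer is overwritten only where current
retinal input supports it, and locations that temporarily lack support --
e.g. because the blind spot currently maps onto them -- are left alone
rather than blanked, so no hole appears in the percept.

    # Toy sketch in Python (illustrative only, not Trehub's retinoid model):
    # a persistent scene buffer updated only where current retinal input
    # supports it, and never erased where input happens to be missing.

    WIDTH = 20
    WORLD = ['#' if 6 <= x <= 13 else '.' for x in range(WIDTH)]  # a static bar

    def supported_locations(gaze, blind_spot_offset=3):
        """Scene locations supported by retinal input at a given gaze position.
        The blind spot sits at a fixed offset on the retina, so the scene
        location it hides changes whenever the gaze shifts."""
        hidden = gaze + blind_spot_offset
        return [x for x in range(WIDTH) if x != hidden]

    scene = [' '] * WIDTH              # internal scene buffer, initially empty
    for gaze in (5, 7):                # two fixations, with a saccade between
        for x in supported_locations(gaze):
            scene[x] = WORLD[x]        # overwrite only where there is support
        # locations without support are left untouched -- never blanked out

    print('world  :', ''.join(WORLD))
    print('percept:', ''.join(scene))  # complete bar: no hole at the blind spot

After the second fixation the buffer contains the complete bar even though,
at each instant, one scene location received no retinal support; nothing was
ever removed merely for lack of support, which is the point being made about
the 'invisible' blind spot.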
TO BE CONTINUED
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham