Posted Wed Jul 10 11:42:46 BST 1996
Newsgroups: comp.ai.philosophy
References: <4rcir5$ioh@usenet.srv.cis.pitt.edu> <4roit4$p80@sun4.bham.ac.uk> <4ru8om$fjg@usenet.srv.cis.pitt.edu>
Subject: Re: Sloman on logical norms (was re: Ryle, imagery, cognitive science)

andersw+@pitt.edu (Anders N Weinstein) writes:
> Date: 9 Jul 1996 18:35:02 GMT
> Organization: University of Pittsburgh
>
> In article <4roit4$p80@sun4.bham.ac.uk>, Aaron Sloman wrote:
>
> >[AW]
> >> Similarly for particular ascriptions of beliefs and desires. They set
> >> a standard -- a rational ought -- to which the "design stance" stuff
> >> ought to conform, but need not in practice.
> >
> >Actually I am not normally interested in moralising about what other
> >people *ought* to do.
>
> I shouldn't have brought in moral oughts, it is a red herring.

OK. Apologies for reacting inappropriately.

[AS]
> >When I talk or think about their beliefs, desires, fears, ambitions,
> >disappointments, emotions, attitudes, etc. in others, I am interested in
> >what's ACTUALLY going on inside them, not what ought to happen when they
> >manifest what's inside them.

I expressed that a bit too generally. I should have listed some exceptions, such as cases where people are trying to engage in persuasive argument. In that case I can and do comment on where they went wrong and what they should have done in order to prove what they were trying to prove, etc.

Anders rightly spotted the gap, and wrote:

[AW]
> Oh really. I have a feeling that if I were to say "SOME cognitive
> scientists make mistake X, therefore ALL cognitive scientists are
> confused" I might provoke a certain response from you, a response that
> might go something like:
>
> >> Watch your quantifiers!!

Yes.
If someone is apparently engaging in a philosophical or scientific argument, THEN I assume he is trying to achieve certain objectives:
(a) justify his claims
(b) convince others

If he uses argument steps that are invalid (and appears to do so regularly) then I will happily point out the reasons for the invalidity. But just because *some* forms of communication in which people express their beliefs justify the charges of inconsistency, invalidity, etc., it doesn't follow that *all* descriptions of mental states have some reference to norms, standards, commitments, rational oughts, or whatever (moral or non-moral). (Sorry. I could not resist.)

So if you tell me that you have a bad toothache, but you don't go to the dentist or do anything to relieve the pain, and moreover you continue to eat the junk food that you think helped to cause the tooth decay, I can speculate as to why you don't do the rational thing; but your statement that you have the toothache involves no commitment to do it, and puts you under no sort of obligation to do it.

I suspect you may be confusing two things:

(a) Some mental states involve dispositions which, in particular contexts, would be manifested in behaviour, and if the relevant behaviour does not occur then an explanation is needed (like the person who is in pain not wincing or showing the pain or taking steps to reduce it -- the explanation may be that he has recently joined some stoic-based religious cult, or that he wants to impress his girlfriend, or....)

(b) Some statements apparently about mental states are actually social acts that involve commitments, incur obligations, and generate expectations to behave in certain ways, which if not fulfilled would justify accusations of dishonesty, treachery, letting friends down, or whatever.
So if you ask me if I'll be at the station with my car when your train comes in and I say "Yes, I intend to arrive at 2.50 pm", then this may look grammatically like a mere description of my state of mind, but in that social context it has an additional pragmatic role, which is to give you the right to expect me to be there, and (depending on the relationship, and the importance to you of that commitment) may give you the right to accuse me of letting you down if I don't turn up.

(a) is, I suppose, a Rylean point. (b) is more a Wittgensteinian point?

Anyhow, some statements about mental states, made in appropriate contexts, are like (b), and what you say is right as far as they are concerned. Others are like (a) and not like (b). I wonder whether you've failed to notice the difference?

[AW]
> or perhaps a less animated observation that "SOME does not imply ALL".

Yup. That's a good thing to remember in all these discussions!

[AW]
> I suppose on your view I could respond:
>
> Well there's nothing particularly binding about logical norms, my
> cognitive scientist friends have discovered. I guess it must just
> be that my control system is one in which 'some' is taken to imply 'all' --
> I must just have one of those different kinds of minds in which this
> is the correct form of inference -- after all we know there are
> multiple possible minds, and no good reasons to be logical. Alas mine is
> one of the cheap but efficient ones that is insensitive to these norms.

Nice try. But you really are doing more than just describing your mental state: you are trying to convince others, and to produce strong arguments, aren't you? If not, then I apologise for engaging in an inappropriate form of interaction with you.

[AW]
> I have a hunch rather that you think the fallacious move is actually
> some kind of *mistake*, a failure to argue as one rationally ought.

Yes and no. There certainly is a mistake.
The argument doesn't achieve what it is apparently trying to achieve, namely show that some conclusion is TRUE. It fails because of the invalid move from "some" to "all". Of course I would never dream of saying that anyone rationally ought to produce arguments that do achieve what the author was trying to achieve, because I think the conclusion is FALSE. OK?

[AW]
> This rather intense normative attitude is visible in many of your public
> interactions, right down to the double exclamation points.

You are right. I should resist showing exasperation as much as I do.

(I must try to analyse why I use exclamation marks. I've never thought about it. Sometimes it's just drawing attention to something that might otherwise be glossed over, I suspect. And sometimes it's drawing attention to an admission I make, which might be found surprising, like the first "!" below. But I could well be overdoing the other cases.)

But notice that when I show exasperation it is (always? usually? mostly? I need to check...) about what I take to be: false premisses, invalid inferences, unnoticed ambiguities, confused concepts, evasive responses to criticism, and similar things. Criticising those things is appropriate in commenting on postings to comp.ai.philosophy because I take this newsgroup to be a forum for philosophical and scientific discussion and debate. If it were one of the alt.support.* groups (e.g. alt.support.grief) I would (or at least should!) take the postings to have a totally different pragmatic context and my responses therefore would (I hope) be different in tone and content.

[AW]
> Now you might also wish to seek *causes* in the control system for why
> this sort of fallacy is so seductive -- maybe a heuristic bias towards
> quick generalization which is biologically useful in normal contexts.

Sometimes.
In fact I think a lot of philosophical confusion arises from the natural workings, over a long period of time, of our conceptual learning apparatus, which is why many such confusions cannot be "cured" by argument and debate but need long-term philosophical therapy to debug complex thinking and reasoning strategies. (That's my version of a similar point made by Wittgenstein and John Wisdom.) The therapy often fails if the confusions have been compiled into very widely distributed substrategies and information stores scattered around the control system. They can then become permanently resistant to retraining or debugging. It's very sad.

[AW]
> But this doesn't affect the fact that if we argue over the cogency of a
> claim like this, nothing about our control systems is relevant to what
> we are doing.

Yes. If an argument is invalid, the diagnosis of flaws in the mechanisms that produced the argument is irrelevant to showing that the argument is invalid. The diagnoses can, however, be relevant to a process of helping people change their minds. Really good teachers have to do that often.

[AW]
> .You asked me to consider the possibility of internet
> dialog from the design stance. I would say: one should be *much* more
> impressed

Sorry, I don't understand. I wasn't talking about anything being *impressive*. I just thought it was a project that might interest you, and I predict that if you actually spent some time doing it (or designing some other sort of software agent with moderately "intelligent" capabilities), you would end up with a different view of things, because you would become aware of possibilities which right now you seem to be unable to conceive of.

[AW]
> ....by the ability of people to engage in argument over the
> internet *without* ever forming any hypotheses about each others' inner
> guidance systems,

I wonder why you brought in the word "guidance"?
I used the phrase "control system", partly because I want to radically revise and extend what's known as "control theory" to incorporate much richer and more varied classes of architectures, including a wider range of types of causal interactions between components. Is the difference significant? Control seems to me to be very much stronger than guidance. Maybe you did not intend anything by this?

[AW]
> ...simply in virtue of their capacity to express
> themselves in a public language. *That* capacity is what is so special
> about us

So you want to draw some sharp line between us and bonobos, monkeys, cats, fleas, etc.? Why? Have you seen the videos of Kanzi the bonobo?

> -- that is what it is to have a mind

So animals with no public language have no minds? They have no beliefs, percepts, pains, intentions, fears?

> -- and its implementation
> doesn't matter to its nature.

If what you are saying is that any such capacity could be implemented in a wide variety of different ways, and which way is used doesn't matter because they are all equally effective, then I agree with you completely. If you say that what its nature is is not defined by its implementation, then I agree with you. Any engineer will distinguish a high-level specification of a design from the specification of the implementation.

But the implementation is not irrelevant to explaining how a particular instance actually works. Moreover, subtle differences in the performance of different instances may not be explicable without paying attention to the implementation. And if the system starts going wrong then differences between the normal behaviour and the abnormal behaviour may be explicable in terms of differences between the original implementation and its current state. Such differences can occur in high-level virtual machines where faults analogous to software bugs have crept in, or they may be differences in the physical mechanisms which have been diseased or damaged in some way.
[AS]
> >Similarly, if I tell someone about my beliefs, desires, fears, etc.
> >I am giving them my views about what's currently going on in me
> >(about which I can sometimes be mistaken), not telling them anything
> >about how I ought to behave.

[AW]
> I disagree completely. I can and do present and argue for my opinions,
> but I have never formed any speculations about what is going on in me

You don't need to speculate in cases where you have enough information. E.g. when you have a toothache. However, someone who says he really does want to please his father may be deceiving himself. You may be deceiving yourself in claiming that you never speculate about what is going on in you. Have you never wondered why you made a hurtful comment to someone? Have you never speculated about whether your grasp of a difficult subject is complete, or perhaps weak in places? Have you ever wondered whether you really are falling in love with someone despite not wanting to?

[AW]
> when I do so, nor do I have the least ability to scan my own control
                                                  ^^^^^^^^^^^^^^^^^^^
> system.

What exactly do you mean by "scan my own control system"? I suspect you are thinking of some sort of simple mechanical (para-mechanical?) model? (That's why I suggest spending time on a real AI project, to enrich your models of possible mechanisms of mind.)

Certainly I can ask you to pay attention to some details of your current mental state and tell me about them: are you hungry, do you remember the third line of "Mary had a little lamb", etc. I can ask you to tell me whether you can hear the difference between someone else saying, quickly: "good morning" and "goob morning" (an example I got from a linguist).

Others you cannot access. E.g. if I ask you to scan through the words you know and count the number containing two occurrences of the letter "i" you cannot do it. If I study you long enough I may be able to report that you know a hundred such words.
So you have some ability to inspect your current state, but not full access to all aspects. So what? I have no idea what is supposed to follow from this.

> Every last detail about it is news to me, from my point of
  ^^^^^^^^
> view.

I suspect you are making some assumptions about the sort of control system that I am talking about which I don't recognize. I am not talking about the sort of thing that you will read about in standard text books on control theory (full of circuit diagrams and partial differential equations). I agree you cannot scan your brain circuits nor inspect their current state.

[AW]
> Here is just a quick sketch of an alternative view:
> ...

Here's my summary of your alternative. When we say "I think that p", "I believe that p", and similar things we are not making statements about the current contents of our minds, but rather qualifying in some way the claim that p.

Yes, sometimes. But not always. (There we go again...)

However, if I say that last week I thought that p but now I don't, I am not qualifying any claim that p, but telling you about my change of mind. Similarly when I tell you that Fred thinks that p I am not telling you anything about how Fred qualifies claims that p is the case. I am telling you something about what's in Fred's mind, which may or may not later manifest itself as a result of interacting with other things that are in his mind, such as what he wants, fears, etc.

Moreover, it is true that sometimes "I intend to do A" is not so much a statement about the speaker's current state of mind as an expression of a commitment to do A. But it isn't always. (Again "some" -|-> "all")

And third-person or past-tense first-person reports are not like that at all, or hardly ever (I have not been able to think of a single example, actually). I can truly say that I intended to fetch your book from the library when I set out (that was my state of mind at that time) but that I later forgot.
Forgetting here is a subtle matter: the intention did not go away, but it did not enter into control processes in the way I now wish it had at the time when it needed to in order for the intention to be carried out. That may be because my attention was diverted by other things. Exactly how and why my attention was diverted may require explanation. The explanation may not be fully accessible to me. It may have something to do with non-optimal attention control strategies that I am using without realising it.

Moral: many linguistically oriented philosophers have taken the fact that first person present tense statements using mental predicates ("think", "believe", "know" (only partly mental), "intend", and others) *sometimes* have very special pragmatic functions as a demonstration that these predicates do not *ever* describe facts about mental states. The conclusion does not follow. Neither is there any evidence to suggest that the non-descriptive first person present tense uses are in any sense "primary", as you claim:

>...

[AW]
> In general, the idea is that philosophy of mind should take as primary
> the overt expressions of attitudes in the way we operate with the
> public language, and, in particular, in the normative inferential
> relations which already obtain (autonomously) among these vehicles.
> One should explain the concept of inner attitudes derivatively, in
> terms of their relations to the overt expressions, and consider the
> concept of lying as similarly secondary. This is one line of thought
> according to which the very concept of the inner state might be
> "internally related" to that of its overt expression in language, as
> Wittgenstein suggested. And it suggests these states are not inner
> guidance system states at all.

One reason I think this is completely wrong is that I don't see human beings as totally separate from other animals. Neither are adult humans totally unlike young children who cannot yet talk (though there are big differences).
Our linguistic capabilities (along with some other capabilities, such as more sophisticated self-monitoring capabilities) developed on top of a very sophisticated collection of information processing mechanisms (perception, memory, motive generation, planning, deciding, learning, motor control, and much more) that exist in animals that do not have our public language, and some of them are already well developed in toddlers who cannot yet talk, and also in brain-damaged people who never learn to talk.

But if you wish to believe that there's some profound gulf between us and other animals, I am sure that no discussion in this forum will convince you.

Cheers
Aaron

From Aaron Fri Jul 12 03:01:30 BST 1996
Newsgroups: comp.ai.philosophy
References: <4rcir5$ioh@usenet.srv.cis.pitt.edu> <4roit4$p80@sun4.bham.ac.uk> <4ru8om$fjg@usenet.srv.cis.pitt.edu> <4s01eg$pin@sun4.bham.ac.uk> <4s0uos$p6l@usenet.srv.cis.pitt.edu>
Subject: Non-uniqueness of external language (Was Sloman on logical norms ..)

[Subject line changed to reflect content]

andersw+@pitt.edu (Anders N Weinstein) writes:
> Date: 10 Jul 1996 19:02:52 GMT

[AS]
> >So if you tell me that you have a bad toothache, but you don't go to
> >the dentist .... your statement that you have the toothache
> > involves no commitment ... puts you under no sort of obligation...

[AW]
> It may not involve a socially generated commitment. It still however
> determines a rational "ought" in your mind.

Not my mind. Maybe yours. I am coming round to thinking you come from a strange culture unlike anything I have ever lived in. Anyhow, how can you be so confident about what's determined in my mind? I thought you didn't believe in things going on in minds?

[AS]
> >(a) Some mental states involve dispositions, which, in particular
> >contexts would be manifested in behaviour, and if the relevant

[AW]
> There is an important difference between the concept of a capacity or
> ability and that of a disposition.

Yes - I have no problem with that.
There are all sorts of subtle differences between dispositions, abilities, capabilities, capacities, competences, skills, tendencies, propensities, inclinations, trends, opportunities, dangers, affordances [I've had years of practice of pulling families of related concepts out of my information store]. And I agree that many mental states and processes can be analysed at least partly in these terms. But all of these are concerned with things, the mechanisms in and around them, and the possible or likely or regular or actual events that they produce. These are NOT concerned with norms, values, what ought to be done, nor with what any social group or culture prefers, condones or condemns.

[AW]
> ...Many of the terms we
> use to talk about people's "minds" refer to capacities, abilities
> ......

Yes. The idea of a collection of unrealised possibilities of all these kinds is an important aspect of what we mean by a mind (as Chomsky emphasised in distinguishing competence from performance).

[AW]
> It is pretty obviously true that there is *some* kind of internal or
> criterial relation between possession of a capacity and its
> manifestation under public tests.
                ^^^^^^^^^

Nope. Who said the tests have to be public? As Ryle remarked, you can have a disposition (or for that matter capability, ability or competence) whose manifestation can be triggered by completely internal events, and whose manifestations can also be completely internal. Knowing how a tune or poem goes is one of the examples Ryle discussed.

[AW]
> ..But the relation is not one of simple
> verificationism...

Who said anything about verificationism? As far as I am concerned verificationist theories have nothing to recommend them. They are among the last throes of dying empiricist philosophies of meaning.

[AW]
> ...There is also a pretty
> clear distinction between the question about what it is to have such a
> capacity and the question about what inner structures make it
> possible.

Yes.
That's what I've been talking about all along. That's what cognitive science is about: the internal (virtual machine) structures and processes that *explain the possibility* of learning a language, the possibility of extracting information about an enduring 3-D environment from rapid sampling of a 2-D optic array, the possibility of suddenly acquiring a new motive, the possibility of becoming emotionally attached to someone, the possibility of extreme states of grief following someone's death, and much, much more besides.

And "make it possible" is a very important type of explanatory relationship. The structure of a chlorine atom makes possible various kinds of chemical compounds. But it does not suffice to make any of them actually occur.

[AW]
> In talking about capacities, etc., we are characterizing people in
> terms of what can be gotten out of them in certain circumstances, which
> is usually what we care about, e.g. in evaluating a job applicant. In
> Aristotelian terms, we are considering what potentialities have been
> actualized in them, not in what inner control system states realize
> those potentialities.

Who is/are "we"? Speak for yourself (or for some, but not for all). When I am trying to find better ways of teaching particular students I am often interested in what it is about the current conceptual system, or short-term memory mechanisms, or attentional mechanisms, or visual parsing capabilities, or logical inference strategies, that prevents those students picking up some of the things I want them to pick up. I may probe and explore to find out what's in there (which the student may not know) and then use that to help me collaborate with the student's own mind (or brain if you prefer) in that mysterious bootstrapping process we call education.

When I discovered in the early 70s that my children had list-like memory capabilities (i.e. chains of linked pairs, with a unidirectional link) that discovery enabled me to help them learn some things faster.
A tiny example: one of them reached the stage at which he could count fluently, could tell me what comes before the number N when a clock face with the numbers was in view, but could not answer the question "what's before seven?" away from the clock. I drew his attention to the fact that if he counted up to the number I had asked him about, his short-term memory would retain the previous number and he could therefore access that to answer the question. It took a few minutes of practice and he was transformed (and very pleased with himself).

I used some similar analyses to produce changes which Piaget had claimed were impossible to produce by teaching, e.g. answering questions like "Are there more bananas in the box or more fruit?" when the box has 6 bananas and 2 apples.

So by introducing people to their own information processing sub-mechanisms we can (sometimes) enhance their powers. (Not always.) Lots of memory tricks are based on such things. (One is a bun, two is a shoe, etc.)

[AW]
> Now I think our acquired potentiality for engaging in intersubjective
> dialogue, criticism and debate is pretty darned special.
                                   ^^^^^^^^^^^^^^^^^^^^^

Now that's a nice clear description. I wonder how anyone can decide whether to agree or disagree with it?

[AW]
> ..It determines
     ^^^^^^^^^^^
> its own level of description and its own autonomous notion of content,
> I think.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^

You are treating a linguistic capacity as if it were some kind of agency. I cannot make any sense of this. Or rather, I recognise it as the bizarre kind of theory that was prevalent in Oxford in the late 50s partly under the influence of the then dead W, but which had nothing to support it apart from the oddities of a few first person present tense idioms (discussed in a previous message), which you take to be some kind of profound "core" of mental reality.
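The list-like memory Sloman describes -- chains of linked pairs with a unidirectional link -- and the trick he taught the child can be sketched as follows. This is an illustrative model, not Sloman's own code: forward links only, so "what comes before N?" is answered by counting up from the start while retaining just the previous item.

```python
# Illustrative model of a unidirectional, list-like number memory.
# There are no backward links, so the predecessor of a number cannot
# be looked up directly; the taught trick is a forward traversal that
# keeps only the previous item in short-term memory.

def what_comes_before(target, sequence):
    """Answer 'what comes before target?' via one forward traversal."""
    previous = None
    for item in sequence:      # follow the one-way links
        if item == target:
            return previous    # the retained previous item is the answer
        previous = item
    return None                # target not in the remembered chain

counting_chain = list(range(1, 11))          # 1, 2, ..., 10
print(what_comes_before(7, counting_chain))  # prints 6
```

The same one-pass, one-slot pattern is why the child could answer with the clock face in view (the spatial layout supplies the backward link) but not away from it, until the traversal trick was made explicit.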
I see it merely as a useful consequence of the simultaneous evolution of (a) human ability to communicate more and more richly about everything and (b) human ability to do certain amounts of internal self-monitoring. The stuff about norms (e.g. expressed intentions generate expectations in others which you have an obligation to fulfil), far from being some core fact about the nature of mind, is just a useful side effect.

Arguing otherwise is like arguing that statements about what can occur or what is possible are actually statements about what is socially permitted (because a subset of such utterances are, i.e. "you can..." = "you may..." sometimes) and arguing that *questions* about what is possible or what someone can do are really polite *requests* for action (because a subset of utterances of the form "Can you do X?" really are polite requests to do it, e.g. open the door, pass the pepper, etc.).

To say that sodium can combine with chlorine to produce ordinary salt is to say nothing about what is socially permitted or expected. It's a statement about what's enabled by the architectures of the relevant atoms and molecules. Likewise statements about what your perceptual mechanisms can do, what your memory can do, what your problem-solving subsystems can do, etc.

[AW]
> ....
> ...stuff on quine....
> ...A similar mistake I would say is thinking of this activity as
> merely the "output side" of a theoretical entity -- the hidden
                                                      ^^^^^^
> inner computational system where the real font of meaning and
                                    ^^^^^^^^^^^^^^^^^^^^
> mentality is to be located.

I don't say it's "merely" anything: it's all too amazing for that. As for what a "font of meaning" might be, I don't think I'd recognize one if I found one staring me in the face. What's that supposed to mean?
[AW]
> I think there is no reducing the norm-laden language of assertion of a
> content, justification or argument or pointing out an inconsistency to
> the norm-free terms of mechanistic science,

There certainly is some norm-laden language, but when the mother asks her injured child if his leg still hurts, she's trying to find out what's going on in him, not engaging in some ritualistic exchange of commitments or values.

[AW]
> ...something Wittgenstein and
> Ryle more or less took for granted. We can still say that human
> mentality is largely a set of capacities -- potentialities -- which
> often find their actuality wholly on the outside, in the public sphere,

Often yes; always, no. (As usual.)

> with no inner mental processes preceding their exercies.

No *conscious* inner mental processes. By definition you can't know directly about the others. But we are learning more and more about them.

[AW]
> ...This happens when
> I simply conduct an argument or even speak aloud or write equations and
> notes to myself without *in any way* thinking or planning what I am saying.

Your faith in your own beliefs about what's going on inside you is touching. And misplaced. I might agree that YOU are not necessarily planning how to say things, but even when you are not, there could be (unconscious) planning going on within you, and sometimes the planning leads to errors which are detected in mid-sentence and you then have to backtrack and re-plan. Of course, someone who believes that our linguistic capacities are a kind of inexplicable magic that could just as well emerge from sawdust won't believe a word of what I'm saying.

[AW]
> In these cases, we can say the meaning is all visibly carried in the
> overt activity with the public symbols,

You lucky man. Never to have been totally misunderstood by those who watched and heard you talk.
Some of the rest of us poor folk often have ideas which we fail to communicate in our visible and audible public performances even when the audience knows the language we are using.

[AW]
> These cases are far from all, but they're pretty interesting to me. In
> these cases, what goes on inside consciousness may be, as Wittgenstein
> emphasized, nothing, so God himself could not determine what someone is
> doing by looking into his mind.

I am unimpressed by speculative theorising about mythical beings. People seem to be able to attribute all sorts of magical powers to their gods. Your lack of generosity to yours is an interesting exception.

[AW]
>..The meaning and intentionality are all
> there informing the outer activity, and don't *derive* from antecedent
> causes in an inner sphere.

Who said meanings *derive* from antecedent causes? You seem determined to mix up all sorts of categories. Do meanings derive from anything? What does "meanings derive from X" mean anyway?

I agree that certain empiricist philosophers thought that all meanings were somehow abstracted from inner experiences and then new meanings were created from those. If that's the sort of view you are arguing against then I am sorry to have intruded: it's not something I'd want to defend.

[AW]
>...In these cases, one doesn't find the
> Cartesian duality of an inner and an outer side at all. Rather the
> overt exercise of a capacity on the outside, and wholly non-rational
> circuitry that makes it possible on the inside.

There you go again: nothing inside but circuitry. Try to remember what you know about sophisticated information processing systems. E.g. a computer controlling a chemical plant.

[AW]
> It is possible to posit lots of "unconscious mentation" to take up the
> slack and so preserve the idea of rationality as grounded in an inner
> sphere which cognitive science might study, but I think there is
> really no need for this.

Frankly, I don't recognize any of this. Take up what slack?
Rationality? Who brought in rationality? Why should rationality be grounded in something? What on earth could that mean?

I get the feeling more and more that I am observing a kind of Don Quixote tilting at imagined windmills. I think you must have hallucinated onto me a very weird kind of theory, whose exact nature I cannot imagine. And all your counter-arguments have therefore probably been lost on me because I have completely failed to understand what you were objecting to. (Except that it seems to be linked to an extraordinarily strong desire to elevate linguistic abilities to some special status that sets humans apart from other animals.)

[AW]
> ...I do tend to see the idea that intentionality
> is mainly a property of control system states inside the brain
                                                ^^^^^^

I don't recall saying anything like that.

[AW]
> ...Perhaps one could think of the
> control system as pipelining its intermediate results and products
> directly out into the world, so that what's on on the inside alone
> is only one part of the story, and not an autonomous natural domain.

I have no idea what this is supposed to mean. What sorts of intermediate results and products are you thinking of? (Why are the intermediate ones sent out?) What do you understand by "pipelining" here? (It is usually a reference to a chain of processes or processors through which information flows.) I haven't a clue what an "autonomous natural domain" is supposed to be. Nothing that I know about is totally autonomous: reality may come in layers or levels, but there are deep interconnections between them.

[AS]
> >I wonder why you brought in the word "guidance"? I used the phrase
> >"control system" .....

[AW]
> I didn't mean any difference. I used "guidance system" because it
> sounds more derogatory to my ear -- more like paramechanizing.
So, instead of trying to understand what I am saying (which, alas, is
not a publicly visible feature of my utterances), you are trying to
transform it into something else ("paramechanizing") which you can then
attack?

> .....
>
> [AW]
> > ...simply in virtue of their capacity to express
> > themselves in a public language. *That* capacity is what is so special
> > about us

[AS]
> >So you want to draw some sharp line between us and bonobos, monkeys,
> >cats, fleas, etc.?

[AW]
> Well yes of course.

There's no "of course" about it as far as I am concerned. The design
space of animal minds is not a totally smooth continuum, for there are
many discontinuities; but neither is it a dichotomy with some major
divide between us and the rest. Especially if, among "us", you include
all the varieties of human beings of all ages, including those with
genetic brain defects, degenerative brain diseases, brain damage, and
all the pathologies that don't stem from physical abnormality.

> ...But no animals engage in human-level
> conceptual thought.

This sounds like amazing arrogance. Which humans? A two-week-old? A
ten-month-old toddler? A three-year-old?

I don't know of other animals that worry about quantum physics. But I
have no doubt that (some) other animals can think about spatial
relationships, can wonder whether they are being observed, can be
curious about the contents of a container, can look for a gap in a fence
or wall.

[AW]
> ..Like many philosophers I think this amounts to a
> new and different level of mentality than that possessed by lower animals
> and is itself made possible wholly by initiation into the use of a
> public language.

Well, this does sound to me like the expression of a set of norms or
values, rather than the utterance of someone trying to find the truth.
Pity.

[AW]
> If as I believe the concept of an inner state is internally related to that of
> its expressions,

If you mean *external* expressions then go and read Ryle again.
[AW]
> ..we can say: animals can have those mental states that are
> expressible in behavior;

You cannot express in your behaviour anything remotely as rich as your
mental state on standing on a bridge watching the swirling rapids of a
river below. You can't even describe it except at a fairly coarse level
(J. L. Austin: "Fact is richer than diction").

[AW]
> ..but we can in addition also have mental states
> with finer grained contents which are only expressible in language.

It's not the *fine* grained stuff that we express in language. It's the
*coarse* grained carving up of reality into large scale chunks about
which it is useful to pass on information: chunks like "river",
"bridge", "swirling", "rapids", but not the really fine grained,
ever-changing stuff we can see (and sometimes visualise).

[AW]
> ...It
> is not as though a lion might be thinking about quantum mechanics but,
> sad for him, never acquired a language in which to express his thoughts. We
                                                                           ^^^
> are contstantly tempted to say it doesn't talk because it doesn't think,
  ^^^^^^^^^^^^^^^

We???? Maybe you. Please don't overgeneralise from your reflections on
the world to claims about the intellectual failings of others.

[AW]
> The problem is that from my subjective point of view, that
> representation you point out to me is an *alien* thing, as alien as if
> it were written in someone else's diary.

Ah, but when you discover it is written in YOUR diary that's different?
Well, just try joining the rest of us and using the portable diaries
that evolution so carefully crafted for us over millions of years.

Anyhow, your reaction to my examples of self-discovery was very strange.
You seem to think that anyone who points to facts about your mental
state is necessarily trying to change you or to help you. The thought
that I might merely be concerned with what is the case, and what
explains the things you are aware of, doesn't even occur to you. You
think I want only to change the world.
The point is to understand it. Which is why all this is totally
irrelevant to what I was talking about.

[AW]
> ..You would have to reframe for
> me my feelings or redescribe my actions or otherwise get me to modify
> my pattern of acting and describing what I feel with respect to her to
> get me to acknowledge the truth about my feelings.

[AS]
> >What exactly do you mean by "scan my own control system"?
> >I suspect you are thinking of some sort of simple mechanical
> >(para-mechanical?) model?

[AW]
> I mean I can't normally find out anything that's going on in there.

Exactly: most of what's in your mind is inaccessible to you. E.g. you
cannot read off the grammatical rules you know. An experienced flautist
can put his/her fingers on the flute and play a top G sharp, but if you
ask which fingers have to be up and which down when that happens, he/she
may not be able to tell you. But that doesn't mean the information is
not stored there in a form in which it is useful for performance (e.g.
when sight-reading new music which includes a top G sharp).

[AW]
> ..I
> also consider my control system data structures to be alien things from
> the perspective of the phenomenological subject (the person).

That may be so. It's of no interest to me, except as a strange fact
about your personality. That you find such information about yourself
alien does not imply that anyone else does. Thousands of students sign
up for degrees in psychology because they (mistakenly) think they will
learn lots of deep things about how their minds work which are not
directly accessible to them.

> ....For
> example, I can be immediately conscious of my desk, and I can be
> reflectively aware that I am conscious of a desk, but I can't acquire
> knowledge of any data structure (physical or virtual, it makes no
> difference) floating around inside my control system unless you give me
> special instruments.

What sorts of instruments are you thinking of?
What kinds of instruments would enable you to discover that the computer
on your desk is running a Prolog program that is parsing French
sentences and interpreting them as querying a database? What kinds of
instruments would tell you that it was running an artificial neural net
that implements a highly parallel content-addressable memory, allowing
items to be fetched very rapidly on the basis of partial matches between
queries and stored information?

"Suzie had a little goat, its fleece...."

[AW]
> ...and one should not forget about how these

I'll happily forget about it when it is irrelevant.

> things appear from the first person point of view, the view from which
> control system data structures are wholly alien things I can only
> discover by looking at a brain scan, things that have no role in my
> phenomenology.

This is extraordinary: (a) that you should think I am talking about what
a brain scan could show you; (b) that you go on so about what's alien.

I have no idea what sort of discussion this has turned into. It started,
I think, from a question about whether some form of cognitive science
can give deep, interesting, true, and perhaps useful explanations of
facts about how our minds work. Some people may find the knowledge that
their hearts pump blood and their colons digest food alien. That does
not make it any the less true.

> ..First, "in
> the mind" is a dangerous phrase, from my point of view, since the mind is
> not a place where things happen.

Nobody said it was a place. There are many uses of "in" besides
reference to a spatial relation. (In any set of prime numbers, finite or
infinite, there's always a smallest one.)

> ...I think of the mind in Aristotelian fashion
> as mainly a collection of capacities and abilities actualized in human
> flesh and bone.

Well, it's time you started learning about complex information
processing systems. But if you wish to remain forever a 1950s
philosopher, that's fine.
Don't expect to be taken seriously by most people on this newsgroup. (I
think I have met one or two others here, over the years, but I've not
kept records.)

[AW]
> Also one can say that there are facts about mental states just as there are
> facts about promises, without supposing these are facts about control system
> states.

I think you may have been using a very narrow notion of control system
(e.g. something you can inspect only by using a brain scanner).

[AW]
> ...Still it may be right to say that the relevant
> domain of facts is one in which first-person avowals have a very
> special role and even the idea of past or third-person ascriptions is
> related to that special role.

Sounds like an extreme form of wishful thinking to me.

> ....just as the emergence of the social brings with it a
> new and emergent level of description, so does the power of
> language.

Yes, I have no problem with that. There's no way we could have
parliaments, economic systems, games of Scrabble, legal contracts, and
many other things but for the existence of a public language.

> ..It might be implemented in the same or in different sorts
> of control systems, but its the power that makes the difference, not
> the control system which enables it, in my view.

Of course there are different levels of description. But explaining the
possibility of a working instance of level A may require reference to
level B, and explaining some of the divergences between different
instances of level A may be impossible without reference to level B.
Similarly for lower levels, in cases where there's a multi-level
implementation.

[AS]
> >But if you wish to believe that there's some profound gulf between
> >us and other animals, I am sure that no discussion in this forum
> >will convince you.

[AW]
> I hardly think the view that language is special is out of place in
I don't think you appreciate the extent to which an external language is but an extension of and depends on the vast representational capabilities required for many non-human animal capabilities, including perception, learning, problem-solving, nest-building, hunting, etc. Aaron