From: Aaron Sloman
Date: Mon, 10 Mar 2008 12:13:40 GMT
To: pop-forum@cs.bham.ac.uk
Subject: Turing non-Test [Was pop-forum progress dots in ved]

Jonathan L Cunningham wrote:
> ....
> The machine I used this year [brief google] is probably around 3000
> VAX11/780 MIPS. (I once estimated, back in the 1980s, that 50 MIPS would
> be enough for a machine to pass the Turing Test, if only we knew how to
> program it.

The following is not a criticism of Jonathan, but of the majority of
people who refer to 'the Turing Test'.

The prediction that Turing actually made in his 1950 article came true
before the end of the last century. This is what he wrote:

    It will simplify matters for the reader if I explain first my own
    beliefs in the matter. Consider first the more accurate form of the
    question. I believe that in about fifty years' time it will be
    possible to programme computers, with a storage capacity of about
    10^9, to make them play the imitation game so well that an average
    interrogator will not have more than 70 per cent chance of making
    the right identification after five minutes of questioning. The
    original question, "Can machines think?" I believe to be too
    meaningless to deserve discussion.

Available online in various places, e.g.:
    http://www.abelard.org/turpap/turpap.htm
    http://www.loebner.net/Prizef/TuringArticle.html
        (has some OCR errors)

I assume he meant 10^9 bits = about 125 Mbytes. Lots of PCs had more
than that by 2000.

I suspect something not very much more complex than the Pop-11 eliza
is needed to fool 70 per cent of 'average interrogators' for five
minutes.
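For anyone who has not looked at it, the core mechanism is shallow
pattern matching over lists of words. Here is a minimal sketch using
the Pop-11 matcher (the rules are invented for illustration; this is
not the eliza code distributed with Poplog):

    ;;; A tiny eliza-style responder: match the input word list
    ;;; against a few patterns and build a canned reply.
    define respond(input) -> reply;
        vars x;    ;;; dynamic local, used as a matcher variable
        if input matches [== i am ??x] then
            [why are you ^^x ?] -> reply
        elseif input matches [== mother ==] then
            [tell me more about your family] -> reply
        elseif input matches [== computer ==] then
            [do machines worry you ?] -> reply
        else
            [please go on] -> reply
        endif
    enddefine;

    ;;; e.g.
    ;;; respond([i am rather unhappy]) =>
    ;;; ** [why are you rather unhappy ?]

A usable eliza needs a much larger rule set, some randomisation among
alternative replies, and a little memory of earlier inputs, but the
underlying mechanism need be no deeper than this.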
Of course, knowledgeable people can insert probe sentence sequences
that will identify a current AI system as non-human within a much
shorter time than five minutes.

Many people mistakenly believe that Turing was proposing his scenario
as a serious *test* for whether a machine can think, or whether a
machine is intelligent. Turing rightly regarded that question as too
ill-defined for any test to be able to answer it. He proposed the
scenario only as a basis for considering and rebutting objections to
his claim that his prediction would come true within 50 years:

    (1) The Theological Objection
    (2) The "Heads in the Sand" Objection
    (3) The Mathematical Objection
    (4) The Argument from Consciousness
    (5) Arguments from Various Disabilities
    (6) Lady Lovelace's Objection
    (7) Argument from Continuity in the Nervous System
    (8) The Argument from Informality of Behaviour
    (9) The Argument from Extrasensory Perception

Of course, what an 'average interrogator' knows can change as the
general population becomes better educated. Right now the level of
education about computers and AI in the population at large is still
generally so poor that Turing's prediction is probably still true,
though it might become false if stated about some future time! At
present most people know only how to use computers for a few
information-manipulation tasks, and know nothing about how the
software works or how it can be tested, improved, etc.

I have proposed a new test in this paper:

    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0705
    COSY-TR-0705 (PDF)
    Why Some Machines May Need Qualia and How They Can Have Them:
    Including a Demanding New Turing Test for Robot Philosophers
    Paper for AAAI Fall Symposium 2007

It is a test not for a specific machine but for an AI *design*. It
should be possible for the same design to be implemented in two
identical machines (young robots) that 'grow up' to hold opposed views
on *every* substantive philosophical question about minds and bodies,
minds and machines, etc. E.g. the generic design should make it
possible for a robot to develop (possibly after going to school and
university) so as to hold philosophical views like those of Stevan
Harnad, or David Chalmers, or Daniel Dennett (or even like mine -- I
disagree with all of them). Likewise such a machine could be capable
of growing up to be like Richard Dawkins or like a creationist who
believes in 'Intelligent Design'. Different external influences on the
two machines are allowed, of course.

It need not be possible to predict how the machine will develop in any
particular environment, since some developmental trajectories could be
significantly influenced by decisions made partly on a random basis,
e.g. whether to go to hear an advertised lecture or to see a popular
movie.

A presupposition of the paper is that machines with human-like
intelligence will be capable of getting into human-like muddles,
confusions, and wickedness -- as a side-effect of using their
intelligence to make sense of the world, including learning from
others, etc. So the frequently quoted goal of producing 'human-level
AI' may not be a good engineering goal, if we can do better than that.
However, it is conceivable that products of biological evolution have
already hit some theoretical limit in intelligence. I doubt it.

Aaron
http://www.cs.bham.ac.uk/~axs/