comp.ai.philosophy
Emperor's Mind: --> FAQ?
28 posts by 18 authors
Marvin Minsky
2/1/94
In article > jayj...@rahul.net
(Jay James) writes:
>
>I'm sure that this is a FAQ but has anybody read Penrose's "The Emperor's
>New Mind" (no?) and have any comments about his observations on free will
>and why computers will never have one? (I'm only half-way through the
>book but it seems that's where the book is heading).
>
>Also, any critiques of this book in general are appreciated. It's rare
>that a science book becomes a best seller so it must have struck a chord
>in the populace.
>
>Cheers,
>--
>Jay James >
You should read the end of the book first. He's heading there, but
never gets there. That's why some of us call it "The Enoeror's New Book.".
And yes, the populace wants to be reassured that they are not
machines. I suppose, so that they won't have to become immortal.
Michael Jampel
2/1/94
In article >, Jay James
> wrote:
>
>I'm sure that this is a FAQ but has anybody read Penrose's "The Emperor's
>New Mind" (no?) and have any comments about his observations on free will
>and why computers will never have one? (I'm only half-way through the
>book but it seems that's where the book is heading).
You could try and get hold of the following article, which I think is
also available by ftp from Birmingham:
@article{ sloman-emperor,
author = "Aaron Sloman",
title = "The Emperor's Real Mind",
journal = "Artificial Intelligence",
year = 1992,
volume = 56,
number = "2-3",
month = aug,
note = "(Review of Penrose's book)",
My personal opinion is that the book is an interesting set of chapters
which completely and utterly fail to achieve Penrose's aim. It's a while
since I read it, but I think he actually says in the conclusion
something along the lines of: ``Well I haven't proved what I wanted to
prove, but I still believe I am right''.
Michael Jampel
Bruce Stephens
2/1/94
>>>>> On Tue, 1 Feb 1994 00:02:44 GMT, jayj...@rahul.net
(Jay James) said:
> I'm sure that this is a FAQ but has anybody read Penrose's "The Emperor's
> New Mind" (no?) and have any comments about his observations on free will
> and why computers will never have one? (I'm only half-way through the
> book but it seems that's where the book is heading).
> Also, any critiques of this book in general are appreciated. It's rare
> that a science book becomes a best seller so it must have struck a chord
> in the populace.
Hans Moravec > has written a critique which should be
around somewhere. Marvin Minsky has a more general criticism, which
is that many arguments against AI work from deducing limitations of
consistent logical systems, but they obviously don't apply to
inconsistent ones, and surely no one really doubts that much of our
"reasoning" is slightly less than logically consistent?
Daniel Dennett's "Consciousness Explained" is in paperback now, so
let's hope that it becomes just as successful.
> Cheers,
> --
> Jay James >
--
Bruce Institute of Advanced Scientific Computation
br...@liverpool.ac.uk University of Liverpool
Lance Fletcher
2/1/94
In Article <2il9fc$7...@toves.cs.city.ac.uk>,
jam...@cs.city.ac.uk (Michael Jampel) wrote:
>My personal opinion is that the book is an interesting set of chapters
>which completely and utterly fail to achieve Penrose's aim. It's a while
>since I read it, but I think he actually says in the conclusion
>something along the lines of: ``Well I haven't proved what I wanted to
>prove, but I still believe I am right''.
I believe this is the passage from Penrose to which Michael Jampel is
referring:
"Some of the arguemtns that I have given in these chapters may seem tortuous
and complicated. Some are admittedly speculative, whereas I believe there
is no real escape from some of the others. Yet beneath all this
technicality is the feeling that it is indeed 'obvious' that the conscious
mind cannot work like a computer, even though much of what is actually
involved in mental activity might do so." (The Emperor's New Mind, p.448)
I personally don't think it means quite the same thing as Michael's
paraphrase.
Lance Fletcher
The Free Lance Academy (a Platonic BBS) 201-963-6019
for Internet access: gopher to: lance.jvnc.net
or anonymous ftp to: world.std.com
/ftp/pub/freelance
Jay James
2/1/94
I'm sure that this is a FAQ but has anybody read Penrose's "The Emperor's
New Mind" (no?) and have any comments about his observations on free will
and why computers will never have one? (I'm only half-way through the
book but it seems that's where the book is heading).
Also, any critiques of this book in general are appreciated. It's rare
that a science book becomes a best seller so it must have struck a chord
in the populace.
Cheers,
--
Jay James >
Hans Moravec
2/2/94
Here's a recent letter to Penrose, from last year, on the same subject
as the one from 1990:
--------------------------------------
September 26, 1993,
Professor Roger Penrose, Oxford University, Mathematical Institute
24-29 St. Giles', Oxford, OX1 3LB, ENGLAND
Dear Roger,
We all enjoyed your visit and presentation on September 13:
thanks very much. A frequent wish afterwards was to hear more about
non-computability in physics. No one I spoke with admitted they were
convinced by the Godel argument, but a few were inclined to accept it
on your authority. Your question about my position on it triggered
elaborate thoughts that mixed several of the alternatives, and left me
tongue tied. I think the question must be answered at three levels,
each with its own style of reasoning, each level happening to be
implemented by the next lower one.
At the top level, a very few people have learned to follow long
chains of formal inference. It is an unnatural activity, and even
skilled practitioners work slowly, make many errors, and are thus
limited to small axiom systems and modest inference lengths--computers
already often do better. Since the reasoning is expressed externally,
it can be checked and corrected, mitigating some of the error, and
allowing the work to be continued over generations. Though we can
never be sure our axiom systems are really consistent, this type of
reasoning would be subject to the undecidability theorems if it were
not possible to step outside axiom sets. But we can step outside to
add new axioms, because formal reasoning operates in the context of the
far more powerful and information-rich middle level, common sense
reasoning, where our intuition lies.
Common sense is an innate, evolutionarily honed, skill built on
an enormous body of prewired or systematically learned predispositions,
many of them sensory and motor oriented. Described in formal terms, it
has maybe millions of axioms, and the ability to combine them rapidly
in parallel, but only in short inference chains. It is inconsistent,
probabilistic and finite, but in a statistical way encodes many usually
correct facts about the world, including ones about arithmetic and
geometry, which allow it to guide, imperfectly but powerfully,
reasoning chains at the formal level. The computational power and
informational content of human common sense thinking exceeds the
capacity of existing computers. There are efforts to automate common
sense underway, but I estimate success will take several more decades.
Since it is largely a compact encoding of a finite, though huge and
growing, amount of derived experience about the world, Godel's theorems
do not apply. We lack any comprehensive theory of this kind of system,
but work in knowledge representation in AI is exploring some of the
issues.
Common sense is evolutionarily and experientially derived from
interactions with the bottom level, the physical world. What we call
physical law encodes the simple regularities, but boundary conditions
are necessary to explain the actual world. Describing the universe
with boundary conditions would probably require an astronomical number
of axioms, giving a system that, though presumably consistent, is far
too vast to understand, making the Godel argument moot.
You mentioned the improbability of our mammoth-hunting
ancestors developing common sense intuition that aids higher
mathematics. I don't find this so improbable. Counting, measuring,
shaping, motion and strategy all played a survival role, leading to
mental mechanisms for dealing with them, which could be orchestrated
into arithmetic, geometry, topology, calculus and logic by those other
survival mechanisms, curiosity, flexible learning and the ability to
make generalizations. There must be potential fields of mathematics for
which our ancestors bequeathed us no intuitions, but we would barely be
aware of them, since they would be so hard to explore and appreciate.
Negative and complex numbers, high dimensionality, abstract algebraic
objects, relativity and quantum mechanics are strange, but retain
enough analogies with the physical world to allow some people to
visualize, grasp, hear, smell or taste them in a way that greatly
facilitates their thinking. People who can only follow the formal
reasoning don't get very far. Possibly, deep down, the physical world
is constructed in a way that is fundamentally incompatible with our
intuitions (who knows, maybe quantum gravity is where intuition begins
to fail). If so, our middle level intuitions may be "mined out" by top
level inquiries before we find the ultimate answers, and further
progress may then depend on computers powerful enough to go as far in
broad formal explorations as we now travel in narrower intuition-guided
searches.
I have an explanation for Minsky's approach to debate (mostly
shared by McCarthy and Simon--I don't know about Donald Michie). Though
Turing was there first in thoughts, insights, radio debates and
articles, he left us before the attempt to seriously support
investigations into machine intelligence got underway. At MIT (and
most elsewhere) the majority opinion in the fifties, especially among
the main computer users in the physical sciences and the computer
builders in electrical engineering, was that intelligent machinery was
a ridiculous, abhorrent, computer-wasting, science-fiction idea.
Though funding became available from broad-minded military sources in
the aftermath of Sputnik, the established MIT faculty (their
reputations exalted because of wartime success with radar) opposed the
effort at every turn, even beyond the normally intense MIT style. That
MIT has no computer science department, and gives its AI degrees in
electrical engineering, is an aftermath of that stern opposition. So
Marvin and company acquired jungle fighter habits at a time when those
were necessary. We in succeeding generations were greatly sheltered,
so have different reflexes.
So there we are. I still find the possibility of exceeding
Turing computability fascinating, and wonder how it might affect the
arguments above, or the shape of machines to come.
Very best wishes,
Hans Moravec, CMU Robotics
Mr Robin J Faichney
2/2/94
Marvin Minsky (min...@media.mit.edu) wrote:
>And yes, the populace wants to be reassured that they are not
>machines. I suppose, so that they won't have to become immortal.
This goes straight into my collection of classic Usenet quotes!
--
Robin Faichney rj...@stirling.ac.uk
(+44)/(0) 786 467482
Environmental Economics Research Group, University of Stirling, FK9
4LA, UK
*Don't ask me, I only mind the machines around here.*
Mr Robin J Faichney
2/2/94
Hans Moravec (h...@cs.cmu.edu) wrote:
[originally to Roger Penrose]
> If your book was written to counter a browbeating you felt from
>proponents of hard AI, mine was inspired by the browbeaten timidity I found
>in the majority of my colleagues in that community. As the words
>"frightening" and "nightmare" in your review suggest, intelligent machines
>are an emotion-stirring prospect, and it is hard to remain unbrowbeaten in
>the face of frequent hostility. But why hostility? Our emotions were
>forged over eons of evolution, and are triggered by situations, like threats
>to life or territory, that resemble those that influenced our ancestors'
>reproductive success. Since there were no intelligent machines in our past,
>they must resemble something else to incite such a panic-perhaps another
>tribe down the stream poaching in our territory, or a stronger, smarter
>rival for our social position, or a predator that will carry away our
>offspring in the night. But is it reasonable to allow our actions and
>opportunities to be limited by spurious resemblances and unexamined fears?
Does Hans imagine that present fears must be incited by a similarity to
some *concrete* denizen of "species memory"? Whatever happened to our
fabulous capacity of abstraction? Could it not be that some people are
made uneasy by the prospect of capability without empathy, that of an
intelligent machine, like us in that respect, but lacking the low level
built-in emotional response that we have to each other. Something like
a psychopath, in fact.
I know Hans will see the building-in of such a thing as relatively
trivial. I'm just trying to suggest that the negative emotional
reaction to the prospect of intelligent machinery is not *quite* as
silly as it seems. And that one of the leading proponents of that
prospect does not seem to have a very good feel for a fundamental aspect
of what he wants to simulate--oops, sorry, emulate. (Why does there so
often seem to be a trade-off between interest in people and interest in
machines?)
--
Robin Faichney rj...@stirling.ac.uk
(+44)/(0) 786 467482
Environmental Economics Research Group, University of Stirling, FK9
4LA, UK
*Don't ask me, I only mind the machines around here.*
Hans Moravec
2/2/94
> (Why does there so often seem to be a trade-off between interest in
> people and interest in machines?)
> Robin Faichney rj...@stirling.ac.uk
(+44)/(0) 786 467482
Division of labor, in our house. My wife has superb social
intelligence, and is able to intuit need for action in relationships
with people sometimes years before I become even dimly aware. So I'm
relieved to leave most of that aspect of life to her, and to
concentrate full time on mechanical maunderings. Analogously, she's
an excellent driver, whereas I was a frightening one--I stopped
driving altogether over a decade ago, very possibly saving my life or
someone else's.
When robots become advanced enough to interact as conscious
agents in human society, a large part of their programming will have
to be devised by future specialists in "social engineering" -- where
Miss Manners meets the metal.
-- Hans Moravec CMU Robotics
John Snodgrass
2/2/94
Other recipients:
In <1994Feb1.0...@news.media.mit.edu> min...@media.mit.edu (Marvin
Minsky) writes: [...]
In <1994Feb1.0...@news.media.mit.edu > min...@media.mit.edu
(Marvin Minsky) writes:
[...]
>And yes, the populace wants to be reassured that they are not
>machines. I suppose, so that they won't have to become immortal.
Machines are far less stable than organisms. On Earth, organisms
have been growing and transforming for 3.5 billion years. It would seem
that as a whole, life _is_ immortal.
It's more like fear of death makes people want to imagine
themselves as machines, so they can defuse their fear of their
individual death. (Also escape a sense of personal responsibility
by believing they have no free will.) Religions and belief systems
in general have satisfied this need. Strong AI included.
JES
John Snodgrass
2/2/94
(Hans "Fu Manchu" Moravec) writes:
>I imagine a
>future debate in which Professor Searle, staunch to the end, succumbs to the
>"mere imitation" of strangulation at the hands of an insulted and enraged
>robot controlled by the "mere imitation" of thought and emotion.
JES
Hans Moravec
2/2/94
Bruce Stephens mentions:
>Hans Moravec > has written a critique which should be
>around somewhere.
The following aging letter is, at this moment, being adapted as part of
the final chapter (6) of the forthcoming book "Mind Age", out this fall:
-------------------
February 20, 1990
To: Professor Roger Penrose, Department of Mathematics, Oxford, England
Dear Professor Penrose,
Thank you for sharing your thoughts on thinking machinery in your
new book "The Emperor's New Mind", and in the February 1 New York Review of
Books essay on my book "Mind Children". I've been a fan of your
mathematical
inventions since my high school days in the 1960s, and was intrigued to hear
that you had written an aggressively titled book about my favorite subject.
I enjoyed every part of that book-the computability chapters were an
excellent review, the phase space view of entropy was enlightening, the
Hilbert space discussion spurred me on to another increment in my incredibly
protracted amateur working through of Dirac, and I'm sure we both learned
from the chapter on brain anatomy. You won't be surprised to learn,
however, that I found your overall argument wildly wrong headed!
If your book was written to counter a browbeating you felt from
proponents of hard AI, mine was inspired by the browbeaten timidity I found
in the majority of my colleagues in that community. As the words
"frightening" and "nightmare" in your review suggest, intelligent machines
are an emotion-stirring prospect, and it is hard to remain unbrowbeaten in
the face of frequent hostility. But why hostility? Our emotions were
forged over eons of evolution, and are triggered by situations, like threats
to life or territory, that resemble those that influenced our ancestors'
reproductive success. Since there were no intelligent machines in our past,
they must resemble something else to incite such a panic-perhaps another
tribe down the stream poaching in our territory, or a stronger, smarter
rival for our social position, or a predator that will carry away our
offspring in the night. But is it reasonable to allow our actions and
opportunities to be limited by spurious resemblances and unexamined fears?
Here's how I look at the question. We are in the process of creating a new
kind of life. Though utterly novel, this new life form resembles us more
than it resembles anything else in the world. To earn their keep in
society, robots are being taught our skills. In the future, as they work
among us on an increasingly equal footing, they will acquire our values and
goals as well-robot software that causes antisocial behavior, for instance,
would soon cease being manufactured. How should we feel about beings that
we bring into the world, that are similar to ourselves, that we teach our
way of life, that will probably inherit the world when we are gone? I
consider them our children. As such they are not fundamentally threatening,
though they will require careful upbringing to instill in them a good
character. Of course, in time, they will outgrow us, create their own
goals, make their own mistakes, and go their own way, with us perhaps a fond
memory. But that is the way of children. In America, at least, we consider
it desirable for offspring to live up to their maximum potential and to
exceed their parents.
You fault my book for failing to present alternatives to the "hard
AI" position. It is my honest opinion that there are no convincing
scientific alternatives. There are religious alternatives, based on
subjective premises about a special relation of man to the universe, and
there are flawed secular rationalizations of anthropocentrism. The two
alternatives you offer, namely John Searle's philosophical argument and your
own physical speculation, are of the latter kind. Searle's position is that
a system that, however accurately, simulates the processes in a human brain,
whether with marks on paper or signals in a computer, is a "mere imitation"
of thought, not thought itself. Pejorative labels may be an important tool
for philosophy professors, but they don't create reality. I imagine a
future debate in which Professor Searle, staunch to the end, succumbs to the
"mere imitation" of strangulation at the hands of an insulted and enraged
robot controlled by the "mere imitation" of thought and emotion. Your own
position is that some physical principle in human brains produces
"non-computable" results, and that somehow this leads to consciousness.
Well, I agree, but the same principle works equally well for robots, and it's
not nearly as mysterious as you suggest.
Alan Turing's computability arguments, now more than fifty years
old, were a perfect fit to David Hilbert's criteria for the mechanization of
deductive mathematics, but they don't define the capabilities of a robot or
a human. They assume a closed process working from a fixed, finite, amount
of initial information. Each step of a Turing machine computation can at
best preserve this information, and may destroy a bit of it, allowing the
computation to eventually "run down", like a closed physical system whose
entropy increases. The simple expedient of opening the computation to
external information voids this suffocating premise, and with it the
uncomputability theorems. For instance, Turing proved the uncomputability
of most numbers, since there are only countably many machine programs, and
uncountably many real numbers for them to generate. But it is trivial to
produce "uncomputable" numbers with a Turing machine, if the machine is
augmented with a true randomizing device. Whenever another digit of the
number is needed, the randomizer is consulted, and the result written on the
appropriate square of the tape. The emerging number is drawn uniformly from
a real interval, and thus (with probability 1) is an "uncomputable" number.
The randomizing device allows the machine to make an unlimited number of
unpredetermined choices, and is an unbounded information source. In a
Newtonian universe, where every particle has an infinitely precise position
and momentum, fresh digits could be extracted from finer and finer
discriminations of the initial conditions by the amplifying effects of
chaos, as in a ping pong ball lottery machine. A quantum mechanical
randomizer might operate by repeatedly confining a particle to a tiny space,
so fixing its position and undefining its momentum, then releasing it and
registering whether it travels left or right. Just where the information
flows from in this case is one of the mysteries of quantum mechanics.
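The construction is concrete enough to sketch in a few lines of modern
Python (os.urandom standing in for the true physical randomizer the
argument requires; a pseudo-random generator, being itself algorithmic,
would not do):

    import os

    def random_digit():
        # One uniformly random decimal digit from the randomizer.
        # Rejecting bytes >= 250 keeps the 0-9 distribution unbiased.
        while True:
            b = os.urandom(1)[0]
            if b < 250:
                return b % 10

    def emerging_number(n):
        # Write n digits "on the tape"; with a genuine randomizer the
        # emerging real is, with probability 1, a number no fixed
        # program computes.
        return "0." + "".join(str(random_digit()) for _ in range(n))

    print(emerging_number(40))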
The above constitutes a basic existence proof for "uncomputable"
results in real machines. A more interesting example is the augmentation of
a "Hilbert" machine that systematically generates inferences from an initial
set of axioms. As your book recounts, a deterministic device of this kind
will never arrive at some true consequences of the axioms. But suppose the
machine, using a randomizer, from time to time concocts an entirely new
statement, and adds it to the list of inferences. If the new "axiom" (or
hypothesis) is inconsistent with the original set, then sooner or later the
machine will generate an inference of "FALSE" from it. If that happens the
machine backtracks and deletes the inconsistent hypothesis and all of its
inferences, then invents a new hypothesis in its place. Eventually some of
the surviving hypotheses will be unprovable theorems of the original axiom
system, and the overall system will be an idiosyncratic, "creative"
extension of the original one. Consistency is never assured, since a
contradiction could turn up at any time, but the older hypotheses are less
and less likely to be rescinded. Mathematics made by humans has the same
property. Even when an axiomatic system is proved consistent, the augmented
system in which the proof takes place could itself be inconsistent,
invalidating the proof!
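The control loop of such a machine is easy to sketch; the toy below
reduces the inference engine to a direct-contradiction check (everything
here is an illustrative stand-in, not a real theorem prover):

    import random

    def derives_false(statements):
        # Toy stand-in for "run the inference machine and watch for
        # FALSE": flag any direct contradiction between s and "not s".
        return any(("not " + s) in statements for s in statements)

    def concoct():
        # Randomly invent a new statement from a tiny toy vocabulary.
        return random.choice(["p", "not p", "q", "not q", "r", "not r"])

    axioms = ["q"]          # the original axiom set
    hypotheses = []         # randomly adopted extensions
    for _ in range(50):
        hypotheses.append(concoct())
        if derives_false(axioms + hypotheses):
            # Backtrack: rescind the offending hypothesis (a real
            # system would also delete the inferences drawn from it).
            hypotheses.pop()

    print("surviving extension:", axioms + sorted(set(hypotheses)))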
When humans (and future robots) do mathematics they are less likely
to draw inspiration from rolls of dice than by observing the world around
them. The real world too is a source of fresh information, but pre-filtered
by the laws of physics and evolution, saving us some work. When our senses
detect a regularity (let's say, spherical soap bubbles) we can form a
hypothesis (eg. that spheres enclose volume with the least area) likely to
be consistent with hypotheses we already hold, since they too were
abstracted from the real world, and the real world is probably consistent.
This brings me to your belief in a Platonic mathematical reality, which I
also think you make unnecessarily mysterious. The study of formal systems
shows there is nothing fundamentally unique about the particular axioms and
rules of inference we use in our thinking. Other systems of strings and
rewriting rules look just as interesting on paper. They may not correspond
to any familiar kind of language or thought, but it is easy to construct
machines (and presumably animals) to act on their strange dictates. In
the course of evolution (which, significantly, is driven by random
mutations) minds with unusual axioms or inference structures must have
arisen from time to time. But they did poorly in the contest for survival
and left no descendants. In this way we were shaped by an evolutionary game
of twenty questions-the intuitions we harbor are those that work in this
place. The Platonic reality you sense is the groundrules of the physical
universe in which you evolved-not just its physics and geometry but its
logic. If there are other universes with different rules, other Roger
Penroses may be sensing quite different Platonic realities.
And now to that other piece of mysticism, human consciousness.
Three centuries ago Rene Descartes was a radical. Having observed the likes
of clockwork ducks and the imaging properties of bovine eyes, he rejected
the vitalism of his day and suggested that the body was just a complex
machine. But lacking a mechanical model for thought, he exorcised the
spirit of life only as far as a Platonic realm of mind somewhere beyond the
pineal gland-a half-measure that gave us centuries of fruitless haggling on
the "mind-body" problem. Today we do have mechanical models for thought,
but the Cartesian tradition still lends respectability to a fantastic
alternative that comforts anthropocentrists, but explains nothing. Your own
proposal merely substitutes "mysterious unexplained physics" for spirit.
The center of Descartes' ethereal domain was consciousness, the awareness of
thought-"I think therefore I am".
You say you have no definition for consciousness, but think you know
it when you see it, and you think you see it in your housepets. So, a dog
looks into your eyes with its big brown ones, tilts its head, lifts an ear
and whines softly, and you feel that there is someone there. I
suppose, from your published views, that those same actions from a future
robot would meet with a less charitable interpretation. But suppose the
robot also addresses you in a pained voice, saying "Please, Roger, it
bothers me that you don't think of me as a real person. What can I do to
convince you? I am aware of you, and I am aware of myself. And I tell you,
your rejection is almost unbearable". This performance is not a recording,
nor is it due to mysterious physics. It is a consequence of a particular
organization of the robot's controlling computers and software. The great
bulk of the robot's mentality is straightforward and "unconscious". There
are processes that reduce sensor data to abstract descriptions for problem
solving modules, and other processes that translate the recommendations of
the problem solvers into robot actions. But sitting on top of, and
sometimes interfering with, all this activity is a relatively small
reflective process that receives a digest of sensor data organized as a
continuously updated map, or cartoon-like image, of the robot's
surroundings. The map includes a representation of the robot itself, with a
summary of the robot's internal state, including reports of activity and
success or trouble, and even a simplified representation of the reflective
process. The process maintains a recent history of this map, like frames of
a movie film, and a problem solver programmed to monitor activity in it.
One of the reflective process' most important functions is to protect
against endless repetitions. The unconscious process for unscrewing a jar
lid, for instance, will rotate a lid until it comes free. But if the screw
thread is damaged, the attempt could go on indefinitely. The reflective
process monitors recent activity for such dangerous deadlocks and interrupts
them. As a special case of this, it detects protracted inaction. After a
period of quiescence the process begins to examine its map and internal
state, particularly the trouble reports, and invokes problem solvers to
suggest actions that might improve the situation.
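The watchdog itself needs only a short history of state digests; a
minimal sketch, with all names invented for illustration:

    from collections import deque

    class ReflectiveWatchdog:
        # Keeps a recent history of map/state digests, like frames of
        # a movie film, and flags dangerous repetition.
        def __init__(self, window=8):
            self.history = deque(maxlen=window)

        def observe(self, state):
            repeating = state in self.history
            self.history.append(state)
            return repeating

    watchdog = ReflectiveWatchdog()
    state = ("unscrew lid", "lid still tight")   # damaged thread: no progress
    for step in range(100):
        if watchdog.observe(state):
            print("deadlock detected at step", step, "- interrupting")
            break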
The Penrose house robot has a module that observes and reasons about
the mental state of its master (advertising slogan: "Our Robots Care!").
For reasons best known to its manufacturer, this particular model registers
trouble whenever the psychology module infers that the master does not
believe the robot is conscious. One slow day the reflective process stirs,
and notes a major trouble report of this kind. It runs the human interaction
problem solver to find an ameliorating strategy. This produces a plan to
initiate a pleading conversation with Roger, with nonverbal cues. So the
robot trundles up, stares with its big brown eyes, cocks its head, and
begins to speak. To protect its reputation, the manufacturer has arranged
it so the robot cannot knowingly tell a lie. Every statement destined for
the speech generator is first interpreted and tested by the reflective
module. If the robot wishes to say "The window is open", the reflective
process checks its map to see if the window is indeed labeled "open". If
the information is missing, the process invokes a problem solver, which may
produce a sensor strategy that will appropriately update the map. Only if
the statement is so verified does the reflective process allow it to be
spoken. Otherwise the generating module is itself flagged as troublesome,
in a complication that doesn't concern this argument. The solver has
generated "Please, Roger, it bothers me that you don't think of me as a real
person". The reflective process parses this, and notes, in the map's
schematic model of the robot's internals, that the trouble report from the
psychology module was generated because of the master's (inferred)
disbelief. So the statement is true, and thus spoken. "What can I do to
convince you?"-like invoking problem solvers, asking questions sometimes
produces solutions, so no lie here. "I am aware of you, and I am aware of
myself."-the reflective process refers to its map, and indeed finds a
representation of Roger there, and of the robot itself, derived from sensor
data, so this statement is true. "And I tell you, your rejection is almost
unbearable"-trouble reports carry intensity numbers, and because of the
manufacturer's peculiar priorities, the "unconscious robot" condition
generates ever bigger intensities. Trouble of too high an intensity
triggers a safety circuit that shuts down the robot. The reflective process
tests the trouble against the safety limit, and indeed finds that it is
close, so this statement also is true. [In case you feel this scenario is
far fetched, I am enclosing a recent paper by Steven Vere and Timothy
Bickmore of the Lockheed AI center in Palo Alto that describes a working
program with its basic elements. They avoid the difficult parts of the robot
by working in a simulated world, but their program has a reflective module,
and acts and speaks with consciousness of its actions.]
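The no-knowing-lies gate reduces to a small check between problem solver
and speech generator; a toy sketch (map contents and verifier rules
invented for illustration, not the Vere-Bickmore program):

    SAFETY_LIMIT = 10.0
    world_map = {
        "window": "open",                  # from sensor data
        "trouble/unconscious_robot": 9.5,  # intensity, psychology module
    }

    def speak(check, text):
        # The reflective process tests each candidate statement against
        # its map; only verified statements reach the speech generator.
        if check(world_map):
            print(text)
        else:
            print("(unverified - statement suppressed, generator flagged)")

    speak(lambda m: m["window"] == "open", "The window is open.")
    speak(lambda m: m["trouble/unconscious_robot"] > 0.9 * SAFETY_LIMIT,
          "And I tell you, your rejection is almost unbearable.")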
Human (and even canine) consciousness undeniably has subtleties not
found in the above story. So will future robots. But some animals
(including most of our ancestors) get by with less. A famous example is the
Sphex wasp, which paralyzes caterpillars and deposits them in an underground
hatching burrow. Normally she digs a burrow, seals the entrance, and leaves
to find a caterpillar. Returning, she drops the victim, reopens the
entrance, then turns to drag in the prey. But if an experimenter interrupts
by moving the caterpillar a short distance away while the wasp is busy at
the opening, she repeats the motions of opening the (already open) burrow,
after shifting the prey back. If the experimenter again intervenes, she
repeats again, and again and again, until either the wasp or the
experimenter drops from exhaustion. Apparently Sphex has no reflective
module to detect the cycle. It's not a problem in her simple, stereotyped
life, malicious experimenters being rare. But in more complex niches,
opportunities for potentially fatal loops must be more frequent and
unpredictable. The evolution of consciousness may have started with a
"watchdog" circuit guarding against this hazard.
I like thinking about the universe's exotic possibilities, for
instance about computers that use quantum superposition to do parallel
computations. But even with the additional element of time travel (!), I've
never encountered a scheme that gives more than an exponential speedup,
which would have tremendous practical consequences, but little effect on
computability theorems. Or perhaps the universe is like the random axiomatic
system extender described above. When a measurement is made and a wave
function collapses, an alternative has been chosen. Perhaps this
constitutes an axiomatic extension of the universe- today's rules were made
by past measurements, while today's measurements, consistent with the old
rules, add to them, producing a richer set for the future.
But robot construction does not demand deep thought about such
interesting questions, because the requisite answers already exist in us.
Rather than being something entirely new, intelligent robots will be
ourselves in new clothing. It took a billion years to invent the concept of
a body, of seeing, moving and thinking. Perhaps fundamentals like
space and time took even longer to form. But while it may be hard to
construct the arrow of perceived time from first principles, it is easy to
build a thermostat that responds to past temperatures, and affects those of
the future. Somehow, without great thought on our part, the secret of time
is passed on to the device. Robots began to see, move and think almost from
the moment of their creation. They inherited that from us.
In the nineteenth century the most powerful arithmetic engines were
in the brains of human calculating prodigies, typically able to multiply two
10 digit numbers in under a minute. Calculating machinery surpassed them by
1930. Chess is a richer arena, involving patterns and strategy more in tune
with our animal skills. In 1970 the best chess computer played at an
amateur level, corresponding to a US chess federation rating of about 1500.
By 1980 there was a machine playing at a 1900 rating, Expert level. In
1985, a machine (HiTech) at my own university had achieved a Master level of
2300. Last year a different machine from here (Deep Thought) achieved
Grandmaster status with a rating of 2500. There are only about 100 human
players in the world better-Gary Kasparov, the world champion, is rated
between 2800 and 2900. In the past, each doubling of chess computer speed
the quality of its play by about 100 rating points. The Deep Thought team
has been adopted by IBM and is constructing a machine on the same
principles, but 1000 times as fast. Though Kasparov doubted it on the
occasion of defeating Deep Thought in two games last year, his days of
absolute superiority are numbered. I estimated in my book that the most
developed parts of human mentality- perception, motor control and the
common sense reasoning processes-will be matched by machines in no less
than 40 years. But many of the skills employed by mathematics professors
are more like chess than like common sense. Already I find half of my
mathematics not in my head but in the steadily improving Macsyma and
Mathematica symbolic mathematics programs that I've used almost daily for 15
years. Sophomoric arguments about the indefinite superiority of man over
machine are unlikely to change this trend.
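The arithmetic behind that estimate is simple; assuming the stated rule
of thumb of roughly 100 rating points per doubling of speed (a trend,
not a law), a 1000-fold speedup is about ten doublings:

    import math

    def projected_rating(base, speedup):
        # ~100 rating points per doubling of machine speed
        return base + 100 * math.log2(speedup)

    # Deep Thought at ~2500; the planned IBM successor at 1000x the speed:
    print(round(projected_rating(2500, 1000)))   # ~3497, above Kasparov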
Well, thank you for a stimulating book. As I said in the
introduction, I enjoyed every part of it, and its totality compelled me to
put into these words ideas that might otherwise have been lost.
Very Best Wishes, Hans Moravec
Robotics Institute, Carnegie Mellon University
Andrew Scott
2/4/94
jayj...@rahul.net (Jay James) writes:
>Also, any critiques of this book in general are appreciated. It's rare
>that a science book becomes a best seller so it must have struck a chord
>in the populace.
Can't say what others saw in it, but I was most impressed with the way
Penrose seemed to keep on getting distracted. The scope of the book is
enormous and covered a huge portion of my 1st year stuff (except I
didn't do physics).
I found the final 2 chapters to be the most entertaining, and if you
feel like giving up during the read (not too hard to do), then try
skipping ahead and reading them. Split-brain experiments: Yes! It's the
sort of philosophical/moral quandary that I was expecting in other places
in the book.
Even though it's tough to follow his argument through 500 or so pages,
the journey was fun for me.
Andrew Scott
INTERNE...@tartarus.uwa.edu.au
John Snodgrass
2/6/94
In <2itgrk$p...@lorne.stir.ac.uk> rj...@stirling.ac.uk
(Mr Robin J Faichney) writes:
[...]
>That's exactly the dichotomy I had in mind, but what I wonder is, why is
>it that people who have plenty of "social intelligence" so often seem to
>lack talent for what used to be the slip-stick disciplines, and vice
>versa?
I'd just like to point out that views or feelings regarding
machine intelligence (so-called) do not necessarily divide along
predictable lines WRT sociability. One can be a computer nerd and still
be anti-AI, because AI is a view on machines, not an appreciation of the
power of machines. People doing AI programming have a certain agenda,
and they build certain kinds of programs. Other types of high-level
programming are important and being done. The basic difference is whether
you explore the human mind by trying to copy it, or by trying to augment
it.
IMO people interested in high-level programming are interested
in humans and the deeper questions of human existence. AI types want to
view the human as a machine, while augmentation types want to transform
the human with the machine. Both seek to penetrate the social/psychological
aspects of human nature with the help of dynamic programmed models. But this
does raise an interesting point. While the AI type is concerned with
interacting with his machine, and presumably engendering some form of
self-control in the machine, the augmentor is ultimately interested in
interaction with other humans (aided by his machine) and gaining greater
power for himself and his group (e.g. corporation). I guess this does
reassert the difference you describe on a higher level.
JES
Jay James
2/8/94
Stop the presses! I've discovered a thought experiment that will prove at
least one point made in Penrose's "ENM" (first few chapters, haven't
finished the book yet).
The premise by Penrose that no algorithm can truly represent consciousness,
no matter how closely it passes the "Turing Test" can be illustrated as
follows.
(1) Construct a random-number generator. Such a function by definition
has no intelligence to begin with (it is not causal, for one).
(2) Make a one-to-one mapping between the numbers generated by this
function and the alphabet; eg., 01 = 'A', 02 = 'B', etc.
(3) It can be proven (details omitted here) that probability theory
predicts that such a sequence will somewhere contain all the English
literature ever written or to be written, all the scientific theories
made or to be made, all the mathematical proofs given now or hereafter, etc.
(4) Therefore the random-number generator contains embedded within it all
of the intelligence of mankind, now or in the future. But it cannot be
considered conscious, even if perchance the sequence happens to pass the
"Turing Test", or any other test of intelligence.
Q.E.F.
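Steps (1) and (2) are trivial to realize; a sketch in Python, with the
secrets module standing in for a true randomizer (whether any
algorithmic source counts as random is exactly what the replies below
dispute):

    import secrets
    import string

    def random_text(n):
        # One-to-one mapping from uniform random draws onto the
        # alphabet: 0 = 'A', 1 = 'B', ..., 25 = 'Z'.
        return "".join(string.ascii_uppercase[secrets.randbelow(26)]
                       for _ in range(n))

    print(random_text(60))   # somewhere in the infinite stream: everything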
--
Jay James >
Hans Moravec
2/9/94
>>[In case you feel this scenario is
>>far fetched, I am enclosing a recent paper by Steven Vere and Timothy
>>Bickmore of the Lockheed AI center in Palo Alto that describes a working
>>program with its basic elements.
>Just a quick request to Mr. Moravec or anyone else who might know
>(Mr. Vere or Mr. Bickmore included :-). Is the above mentioned paper
>available online anywhere and if so where might we obtain it. Also I
>would be interested in hearing of any other work along similar lines.
>--
>Derek Harter dha...@hns.com
Don't have an on-line source, but here's an old-fashioned one:
Author VERE, S.; BICKMORE, T.;
Lockheed AI Center, Palo Alto, CA, USA
Title A basic agent
Source Computational Intelligence (Comput. Intell., Canada),
vol. 6, no. 1, Feb. 1990, pp. 41-60
Abstract A basic agent has been constructed which integrates limited
natural language understanding and generation, temporal planning
and reasoning, plan execution, simulated symbolic perception,
episodic memory, and some general world knowledge. The agent is
cast as a robot submarine operating in a two-dimensional
simulated 'Seaworld' about which it has only partial knowledge.
It can communicate with people in a vocabulary of about 800
common English words using a medium coverage grammar. The agent
maintains an episodic memory of events in its life and has a
limited ability to reflect on those events. The agent is able to
give terse answers to questions about its past experiences,
present activities and perceptions, future intentions, and
general knowledge.
Subject artificial intelligence; inference mechanisms; knowledge
representation; natural languages
Keyword AI; integrated artificial intelligence; semantics; agent;
natural language understanding; planning; reasoning; plan
execution; symbolic perception; episodic memory; English words
ClassCodes C1230; C6170
Treatment theoretical/mathematical; experimental
Coden COMIE6
Language English
RecordType Journal
ControlNo. 3748739
AbstractNos. C90067501
ISSN 0824-7935
References 32
Country Pub. Canada
Brian Ewins
2/9/94
In article 5...@rahul.net, jayj...@rahul.net
(Jay James) writes:
> Stop the presses! I've discovered a thought experiment that will prove at
> least one point made in Penrose's "ENM" (first few chapters, haven't
> finished the book yet).
>
> The premise by Penrose that no algorithm can truly represent consciousness,
> no matter how closely it passes the "Turing Test" can be illustrated as
> follows.
>
> (1) Construct a random-number generator. Such a function by definition
> has no intelligence to begin with (it is not causal, for one).
> (2) Make a one-to-one mapping between the numbers generated by this
> function and the alphabet; eg., 01 = 'A', 02 = 'B', etc.
> (3) It can be proven (details omitted here) that probability theory
> predicts that such a sequence will somewhere contain all the English
> literature ever written or to be written, all the scientific theories
> made or to be made, all the mathematical proofs given now or
> hereafter, etc.
> (4) Therefore the random-number generator contains embedded within it all
> of the intelligence of mankind, now or in the future. But it cannot be
> considered conscious, even if perchance the sequence happens to pass the
> "Turing Test", or any other test of intelligence.
>
> Q.E.F.
>
> --
> Jay James >
Oops! Sorry to burst your balloon here Jay, but the thing you
have written is _not_ an algorithm. The 'random number generator',
specifically, must generate numbers in such a way that we cannot
predict its outcome == NO ALGORITHM... pseudo-random number generators
are algorithmic, and your argument would fall to Penrose's.
If you even suppose that you have such a beast which *is* algorithmic
it would be undecidable whether or not it would pass the Turing test:
your 'algorithm' does not necessarily pass it in finite time.
Shame tho'. That book's a sow's ear made from a fine collection of
silk purses and thoroughly deserves holes poked in it. I think the
best argument is one that's appeared on this group already: if the
computer has access to the real numbers (ie the real world...maybe)
you no longer have countable infinities, Cantor's diagonal slash
doesn't work and "algorithms+realworld" escape most of his arguments.
This is essentially what you wanted to say, but you don't _have_
to use _random_ numbers, so maybe you _can_ construct a T test passer
like that. Best part about this argument: it relies on Penrose's. :o)
Example of a machine that works this way: an analogue NN... Prof P
skims these.
Another thing about this book is that he relies on real world=real
numbers. There's nothing much against constructing the world with
integer dimensions, if we make our units sufficiently small.
Then, even if 'integer effects' propagate up quickly, we just
make the scale small enough so that we only see macroscopic effects
after the present time in the Universe: we don't know anything about
what gives beneath the Planck scale anyway so this may as well be
true. I don't deny that this is facetious and a piss-poor argument,
but it's one he didn't mention. And even when he used the real numbers
he had to run off and hide in tiny ones in his new quantum theory...
aagh donning asbestos suit now...
Baz.
James A. Campbell
2/9/94
In article >, jayj...@rahul.net
(Jay James) writes:
> Stop the presses! I've discovered a thought experiment that will prove at
> least one point made in Penrose's "ENM" (first few chapters, haven't
> finished the book yet).
>
> The premise by Penrose that no algorithm can truly represent consciousness,
> no matter how closely it passes the "Turing Test" can be illustrated as
> follows.
>
> (1) Construct a random-number generator. Such a function by definition
> has no intelligence to begin with (it is not causal, for one).
> (2) Make a one-to-one mapping between the numbers generated by this
> function and the alphabet; eg., 01 = 'A', 02 = 'B', etc.
> (3) It can be proven (details omitted here) that probability theory
> predicts that such a sequence will somewhere contain all the English
> literature ever written or to be written, all the scientific theories
> made or to be made, all the mathematical proofs given now or
> hereafter, etc.
> (4) Therefore the random-number generator contains embedded within it all
> of the intelligence of mankind, now or in the future. But it cannot be
> considered conscious, even if perchance the sequence happens to pass the
> "Turing Test", or any other test of intelligence.
>
> Q.E.F.
>
> --
> Jay James >
Are you suggesting that any system upon which a random number
generator can be implemented is, by definition, non-intelligent?
Pick up a coin and start flipping it. Feel your brains slipping away?
You don't? But that can't be! You're a random number generator!!
You are correct that _the_process_ of random number generation does
not involve intelligence, but that proves nothing about the system upon
which the RNG is implemented.
--
James A. Campbell Opinions expressed in this document are
MSU Computer Science Department not to be construed as representing
jacam...@msuvx1.memst.edu those of Memphis State University in general.
------------------------------------------------------------------------------
"Press any key to test ..." "Release key to detonate ..."
Mr Robin J Faichney
2/9/94
James A. Campbell (jacam...@msuvx2.memst.edu) wrote:
> Are you suggesting that any system upon which a random number
>generator can be implemented is, by definition, non-intelligent?
>Pick up a coin and start flipping it. Feel your brains slipping away?
>You don't? But that can't be! You're a random number generator!!
> You are correct that _the_process_ of random number generation does
>not involve intelligence, but that proves nothing about the system upon
>which the RNG is implemented.
If the number series is *truly* random, does that not mean *by
definition* that the system generating it, if intelligent, has
successfully negated its intelligence, at least in the context of random
number generation?
Just a thought.
--
Robin Faichney rj...@stirling.ac.uk
(+44)/(0) 786 467482
Environmental Economics Research Group, University of Stirling, FK9
4LA, UK
*Don't ask me, I only mind the machines around here.*
Harry Erwin
2/9/94
In article >, Jay James
> wrote:
>Stop the presses! I've discovered a thought experiment that will prove at
>least one point made in Penrose's "ENM" (first few chapters, haven't
>finished the book yet).
>
>The premise by Penrose that no algorithm can truly represent consciousness,
>no matter how closely it passes the "Turing Test" can be illustrated as
>follows.
>
>(1) Construct a random-number generator. Such a function by definition
>has no intelligence to begin with (it is not causal, for one).
You're already in sin with this one. The 'random number generator'
algorithms you're thinking of aren't random.
--
Harry Erwin
Internet: her...@gmu.edu or er...@trwacs.fp.trw.com
Working on Katchalsky networks....
Rainer Dickermann
2/9/94
jayj...@rahul.net (Jay James) writes:
>Stop the presses! I've discovered a thought experiment that will prove at
>least one point made in Penrose's "ENM" (first few chapters, haven't
>finished the book yet).
>The premise by Penrose that no algorithm can truly represent consciousness,
>no matter how closely it passes the "Turing Test" can be illustrated as
>follows.
>(1) Construct a random-number generator. Such a function by definition
>has no intelligence to begin with (it is not causal, for one).
>(2) Make a one-to-one mapping between the numbers generated by this
>function and the alphabet; eg., 01 = 'A', 02 = 'B', etc.
>(3) It can be proven (details omitted here) that probability theory
>predicts that such a sequence will somewhere contain all the English
>literature ever written or to be written, all the scientific theories
>made or to be made, all the mathematical proofs given now or hereafter,
>etc.
>(4) Therefore the random-number generator contains embedded within it all
>of the intelligence of mankind, now or in the future. But it cannot be
>considered conscious, even if perchance the sequence happens to pass the
>"Turing Test", or any other test of intelligence.
>Q.E.F.
(1)(2) f: y=character(random(t)); f(t) has no intelligence
(3)(4) f(t) has ALL intelligence, so NOT((1)(2)) =>
f(t) has intelligence and can pass the turing-test
All you have 'proven' is that a random generator can be conscious.
The probability of a random generator passing a Turing test,
and therefore being conscious, can be estimated:
Think of a language L with Ln = 100000 words.
Each word has a length of Lw = 5 letters from an alphabet of La = 30.
Any combination of words of L will be considered meaningful.
f(t) will be considered conscious if it produces a meaningful
text of about N = 1000 words (a weak condition).
The number of possible meaningful texts is:
m = Ln ^ N
m = 100000^1000
The number of possible texts on this alphabet is:
a = (La^Lw)^N
a = (30^5)^1000
So the probability for f to pass the test is:
p = m/a
= 100000^1000 / (30^5)^1000
= (100000/30^5)^1000
= (3^(-5))^1000
= 3^(-5000)
So f is conscious with probability p = (Ln/(La^Lw))^N = 3^(-5000).
Oh, I think I can accept that.
Rainer
>--
>Jay James >
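The estimate checks out numerically; a sketch in exact integer
arithmetic (Python's Fraction, to avoid floating-point underflow):

    from fractions import Fraction

    Ln, Lw, La, N = 100000, 5, 30, 1000
    p = Fraction(Ln, La ** Lw) ** N      # (100000 / 30^5)^1000
    assert p == Fraction(1, 3 ** 5000)   # i.e. 3^(-5000), ~ 10^(-2386)
    print(p.denominator == 3 ** 5000)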
Phil Veldhuis
2/9/94
In article <1994Feb9.0...@msuvx2.memst.edu>
jacam...@msuvx2.memst.edu (James A. Campbell) writes:
>In article >, jayj...@rahul.net
(Jay James) writes:
>> Stop the presses! I've discovered a thought experiment that will prove at
>> least one point made in Penrose's "ENM" (first few chapters, haven't
>> finished the book yet).
>> The premise by Penrose that no algorithm can truly represent consciousness,
>> no matter how closely it passes the "Turing Test" can be illustrated as
>> follows.
I think you are presenting a weaker argument than that. You are arguing
that one algorithm can pass the Turing test, and be unintelligent. A first
step to the stronger conclusion you think you can make.
>> (1) Construct a random-number generator. Such a function by definition
>> has no intelligence to begin with (it is not causal, for one).
>> (2) Make a one-to-one mapping between the numbers generated by this
>> function and the alphabet; e.g., 01 = 'A', 02 = 'B', etc.
>> (3) It can be proven (details omitted here) that probability theory
>> predicts that such a sequence will somewhere contain all the English
>> literature ever written or to be written, all the scientific theories
>> made or to be made, all the mathematical proofs given now or
>> hereafter, etc.
>> (4) Therefore the random-number generator contains embedded within it all
>> of the intelligence of mankind, now or in the future. But it cannot be
>> considered conscious, even if perchance the sequence happens to pass the
>> "Turing Test", or any other test of intelligence.
>>
>> Q.E.F.
(I think that should be Q.E.D.)
>> --
>> Jay James >
>
> Are you suggesting that any system upon which a random number
>generator can be implemented is, by definition, non-intelligent?
>Pick up a coin and start flipping it. Feel your brains slipping away?
>You don't? But that can't be! You're a random number generator!!
Wrong. You are not a random number generator. You are acting as one,
though, and that is all the difference in the world. The minimal system
upon which a random number generator can be implemented is, by definition,
unintelligent.
> You are correct that _the_process_ of random number generation does
>not involve intelligence, but that proves nothing about the system upon
>which the RNG is implemented.
Unless you take a machine (like a computer) to be intelligent w/o
instantiating a program, which is absurd, the machine is as
intelligent as the program it is instantiating. The random number
generator program is 1) not intelligent; 2) produces intelligent behaviour.
Hence you can conclude that a non-intelligent system can produce
intelligent behaviour and hence (the original poster alleges)
pass the Turing test.
The suggestion that the random number generator is only a pseudo-random
number generator is a red herring (different followup). The original post
clearly didn't say that it was going to use the pseudo-random number
generator at your university. Leave it to the smart people to find a true
random number generator (seed it w/ quantum phenomena or something?).
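In practice, something closer to a true source is available from the
operating system; a minimal sketch, assuming Python (os.urandom draws on
the OS entropy pool, which on many systems mixes in physical noise --
whether that counts as 'truly' random is of course part of the argument):

    import os

    def random_letter():
        # One byte from the OS entropy pool, folded onto A..Z.
        # (The modulus introduces a slight bias; fine for a sketch.)
        return chr(ord('A') + os.urandom(1)[0] % 26)

    print(''.join(random_letter() for _ in range(40)))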
The way to get out of this argument is to point out that the random
program would not pass the Turing test unless it got really lucky, and
happened to be outputting a great literary work during the test. Even
then, it wouldn't be very interactive. Furthermore, it would take an
intelligent system to recognise any new contributions made to knowledge by
the random system. I'm presuming that all existing knowledge and
literature would be checked for in the output by purely mechanical means,
and that a mechanical system would "dump" E != mc^2 even though it may turn
out that Einstein was wrong, and our random machine was the first to
"output" the discovery.
Summary:
On the one hand, the argument presumes intelligence for sorting the wheat
from the random chaff of the output. If you remove this presumption, then
the argument has rather lost touch w/ the original
Turing test, which was to fool a human (the paradigm intelligent system) into
thinking it was interacting w/ an intelligent system. Giving me the
complete works of the Library of Congress is not going to fool me into
thinking I am dealing with anything more than a library online.
Either way (presumed intelligence to sort or not) it does not establish
your claim.
(But a really nice try!)
Phil Veldhuis, University of Manitoba.
Rainer Dickermann
2/10/94
Other recipients:
vel...@cc.umanitoba.ca (Phil Veldhuis) writes:
>Wrong. You are not a random number generator. You are acting as one,
>though, and that is all the difference in the world. The minimal system
>upon which a random number generator can be implemented is, by definition,
>unintelligent.
The problem of randomness is typical for TESTS. Any
test can in principle be passed just by chance; any test has a statistical
error of measuring a non-existing property. So this phenomenon is more
a problem of testing than a problem of an un/intelligent random number
generator.
>Unless you take a machine (like a computer) to be intelligent w/o
>instantiating a program, which is absurd, the machine is as
>intelligent as the program it is instantiating. The random number
>generator program is 1) not intelligent; 2) produces intelligent behaviour.
Aren't the mysteries of life wonderful? There is a non-intelligent thing
that can just 'behave' as if intelligent! I like that!
Can I learn that somewhere (behaving as if intelligent)? :->>>>>
Rainer
>Phil Veldhuis, University of Manitoba.
Phil Veldhuis
2/10/94
Other recipients:
In article <2jcsn2$2...@urmel.informatik.rwth-aachen.de>
rai...@tschil.informatik.rwth-aachen.de (Rainer Dickermann) writes:
>vel...@cc.umanitoba.ca (Phil Veldhuis) writes:
>
>>Unless you take a machine (like a computer) to be intelligent w/o
>>instantiating a program, which is absurd, the machine is as
>>intelligent as the program it is instantiating. The random number
>>generator program is 1) not intelligent; 2) produces intelligent behaviour.
>
>Aren't the mysteries of life wonderful? There is a non-intelligent thing
>that can just 'behave' as if intelligent! I like that!
>Can I learn that somewhere (behaving as if intelligent)? :->>>>>
(as far as I can see, you are already close to it)
(sorry, in a weak moment I couldn't resist)
Well, that was the claim of the original argument, wasn't it? That a
random number generator could pass the turing test w/o being genuinely
intelligent.
Of course it is basically a dogma of modern AI research that if something
produces intelligent behaviour, it is intelligent. This _is_ the Turing
test thesis.
If you insist on this dogma, then the Turing test is unfalsifiable in
virtue of your having begged the question.
As a matter of empirical fact, I also think that the Turing test is
wrong. Many insect societies manifest intelligent behaviour w/o being
genuinely intelligent. Unless you want to say that honeybees, for
instance, are intelligent, I suggest you keep your dogmas to yourself.
Phil Veldhuis, University of Manitoba
Stanley Friesen
2/11/94
Other recipients:
In article <2jdkif$f...@canopus.cc.umanitoba.ca >
vel...@cc.umanitoba.ca (Phil Veldhuis) writes:
>
>Of course it is basically a dogma of modern AI research that if something
>produces intelligent behaviour, it is intelligent. This _is_ the Turing
>test thesis.
I disagree with this assertion.
I would rather say that the current status is that nobody has found
a *better* way of detecting intelligence, as limited as the TT is.
>As a matter of empirical fact, I also think that the Turing test is
>wrong. Many insect societies manifest intelligent behaviour w/o being
>genuinely intelligent. Unless you want to say that honeybees, for
>instance, are intelligent, I suggest you keep your dogmas to yourself.
Well, speaking as a biologist, with some knowledge of bee behavior,
I would not characterize the behavior of a bee colony as particularly
'intelligent'. Such a colony does have some limited ability to adapt
to unexpected conditions, but it lacks any ability to generate new
solutions, or to deal with a truly novel situation. Humans have
demonstrated both abilities.
For instance, honeybees can be 'domesticated' entirely due to their
inherent, invariant response to certain environmental cues. This
invariance allows the keeper to manipulate the entire hive - even
controlling where it lives. I maintain that this inability to
ever *recognise* they are being manipulated is a counter-indication
to intelligence.
--
NAMES: sar...@netcom.com s...@ElSegundoCA.ncr.com
May the peace of God be with you.
Phil Veldhuis
2/11/94
Other recipients:
In article > sar...@netcom.com
(Stanley Friesen) writes:
>In article <2jdkif$f...@canopus.cc.umanitoba.ca >
vel...@cc.umanitoba.ca (Phil Veldhuis) writes:
>>
>>Many insect societies manifest intelligent behaviour w/o being
>>genuinely intelligent. Unless you want to say that honeybees, for
>>instance, are intelligent, I suggest you keep your dogmas to yourself.
>
>For instance, honeybees can be 'domesticated' entirely due to their
>inherent, invariant response to certain environmental cues. This
>invariance allows the keeper to manipulate the entire hive - even
>controlling where it lives. I maintain that this inability to
>ever *recognise* they are being manipulated is a counter-indication
>to intelligence.
Let me know what you think when you get transferred to Manitoba!
Sure, bees are a lot dumber than humans. Not all behaviour they manifest
is on a par with human behaviour. But some behaviour they manifest
can be accurately described as "intelligent". That, and not a stronger
claim, was all I suggested.
BTW, we have biologists up here too.
IMHO, bees do manifest intelligent behaviour. If you recall the original
post, it would have been absurd for me to say that bees were intelligent
because they manifest intelligent behaviour. Au contraire, my very point
was that very stupid animals can still do intelligent things.
After all, we intelligent humans can manifest some pretty unintelligent
behaviour. Intelligent behaviour is neither a necessary nor a sufficient
criterion for ascribing intelligence.
Phil Veldhuis, University of Manitoba
Aaron Sloman
2/13/94
Other recipients:
jayj...@rahul.net (Jay James) writes:
> Date: Tue, 8 Feb 1994 19:42:59 GMT
> Organization: a2i network
>
> Stop the presses! I've discovered a thought experiment that will prove at
> least one point made in Penrose's "ENM" (first few chapters, haven't
> finished the book yet).
>
> The premise by Penrose that no algorithm can truly represent consciousness,
> no matter how closely it passes the "Turing Test" can be illustrated as
> follows.
Actually, if you read on and see what use Penrose makes of Goedel's
argument, I think you will find that he thinks that computers can't
really pass the Turing test, if interrogated by a mathematician like
Penrose, or Goedel.
> (1) Construct a random-number generator. Such a function by definition
> has no intelligence to begin with (it is not causal, for one).
Also, note that no Turing machine can be a truly random number
generator, unless it starts with an infinite tape that already has a
random sequence on it. If a Turing machine has a finite machine
table and a finite set of symbols on the tape, then its output will
not be truly random. Your comment that "it is not causal" rules out
its being a Turing machine, or any similar machine running an
algorithm.
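A minimal illustration of why, assuming Python: any generator with
finite state must eventually revisit a state and then cycle. A toy
linear congruential generator makes the point:

    # Deterministic, finite state => eventually periodic, hence not random.
    def lcg(seed, a=5, c=3, m=16):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    g = lcg(seed=7)
    print([next(g) for _ in range(32)])  # the 16-value cycle repeats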
However, the rest of your argument need not be based on use of a
Turing machine. E.g. it could be a random number generator that uses
a lump of radioactive material and a geiger counter. This makes
your argument irrelevant to Searle's and Penrose's discussion of
what algorithms can do, but not irrelevant to those who wish to use
the TT as a criterion for intelligence.
(Turing did not: he was too intelligent to think it worth wasting
time trying to define such a vague and confused notion as
"intelligence"! He merely made a prediction that machines would be
able to pass his test in about 50 years. He did not say that would
make them intelligent.)
> (2) Make a one-to-one mapping between the numbers generated by this
> function and the alphabet; e.g., 01 = 'A', 02 = 'B', etc.
> (3) It can be proven (details omitted here) that probability theory
> predicts that such a sequence will somewhere contain all the English
> literature ever written or to be written, all the scientific theories
> made or to be made, all the mathematical proofs given now or
> hereafter, etc.
> (4) Therefore the random-number generator contains embedded within it all
> of the intelligence of mankind, now or in the future. But it cannot be
> considered conscious, even if perchance the sequence happens to pass the
> "Turing Test", or any other test of intelligence.
>
> Q.E.F.
I think you need to tidy up the argument a bit. What you need is
something like this: if you consider a large collection of machines
running a suitable random number generator then there will be a
finite probability that at least one of them will generate an
initial (finite) sequence of characters that (via a suitable simple
mapping) constitutes a convincing set of English (or French, or
Urdu, or..) responses to an interrogator (or succession of
interrogators) interacting with the machine via a terminal. (This
follows from the fact that for any given rate of printing, and any
limited time interval, the set of possible sentences that can be
produced in that interval is finite.)
(You have to add more detail about synchronising the machine's
output with the stuff typed in by the interrogator. That could be
done by having all output characters buffered, and then whenever a
particular number is generated by the random number generator the
buffer is flushed, and the interrogator sees the output. Of course
the probability that this will happen at just the right time to
provide an answer to a complete question that has been typed in is
pretty small, but all the argument requires is that it be non-zero.)
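That buffering scheme can be put in a few lines; a minimal sketch in
Python, where the generator and the choice of flush code are assumptions:

    import random

    FLUSH = 0  # the "particular number" whose appearance flushes the buffer

    def run(steps):
        buffer = []
        for _ in range(steps):
            n = random.randrange(27)      # 0 flushes; 1..26 map to A..Z
            if n == FLUSH:
                print(''.join(buffer))    # the interrogator sees this line
                buffer.clear()
            else:
                buffer.append(chr(ord('A') + n - 1))

    run(200)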
I suspect there is not enough matter in the universe for a large
enough set of such experiments to be set up to give a reasonable
probability of at least ONE of them producing a sensible
conversation for five minutes, but that does not affect the
argument, which is that IF one of them did, it would apparently pass
the Turing test simply by using a random number generator. Moreover,
there's a non-zero (but exceedingly small!) probability that the
very first machine you build like this will pass the test
immediately.
The only conclusion I draw from this, which repeats what I've said
before, is that the Turing test may be a nice practical test, but
cannot be the basis for a *definition* of "intelligence" (or
"understanding", "consciousness", or any other mental state), since
these are not defined by external behaviour.
It is not WHAT behaviour is produced that matters, but HOW it is
produced. This is essentially because creating an intelligent system
is an engineering problem, and no good engineer (including
evolution) will be happy with a system that merely happens to pass
some tests. There had better be a reliable basis for its passing
those tests. The Turing test does not reveal that. (The same comment
applies to Harnad's extensions to the Turing test, e.g. adding eyes,
ears, touch sensors, etc. and arms legs, etc. to the machine.)
This is the same point as has been made previously regarding
a machine driven by a Huge Lookup Table: its behaviour is not
produced by ITS intelligence, but by the intelligence of
whoever worked out what needs to go into the table. (Hans Moravec
and others don't like this argument, but it is not an anti-AI
argument: rather it helps to define the task of AI.)
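The lookup-table machine is easy to caricature in a few lines; a minimal
sketch, with hypothetical table entries standing in for whatever its
builders worked out:

    # A toy "Huge Lookup Table" conversationalist. All the apparent
    # intelligence lives in whoever filled in the table, not in the loop.
    TABLE = {
        "hello": "Hello! How are you today?",
        "are you conscious?": "That is a deep question.",
    }

    def reply(question):
        return TABLE.get(question.lower().strip(), "I don't follow you.")

    print(reply("Are you conscious?"))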
(Apologies if I am repeating what someone else has said. I don't
have time to read any net news thread fully, so I use a random
generator in my brain to select articles when I have a few minutes
to spare from time to time!)
Greetings from Birmingham.
Aaron
----
--
Aaron Sloman,
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sl...@cs.bham.ac.uk OR A.Sl...@bham.ac.uk
Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281
Oliver Sparrow
2/15/94
Other recipients:
The problem with the monkeys-and-a-typewriter argument of the thought
experiment proposed in this text is that written material is not thought;
it is potential data for a sorting system, such as a reader. The entire
universe may indeed be encoded in a bit of stale cupcake, but one needs
Adams's Total Perspective Vortex in order to transcribe it and an observer
to make sense of it. Shakespeare, embedded haplessly in an ocean of junk
(and another entry has shown how large - indeed, universal - such an ocean
would be), would only be useful if read, and only read if distinguished from
the junk; but it can only be separated from the junk if it has been read,
which brings us back to where we started.
In a non-trivial sense, everything is latent: one can think of Shakespeare
as the transducer which separates the latent information from the junk in
which it is embedded. His talent did not arise from nothing: it was the
product of a myriad of systems and structures which had taken billions of
years to be put into place. The issue is finding the transducer, not the
field of latency or the yet-unconnected contributory factors.
_________________________________________________
Oliver Sparrow
oh...@chatham.demon.co.uk