
Comments received after the Cognition and Affect Workshop
Cognition and Affect: Past and Future
Held on Monday 24th April 2017
School of Computer Science
University of Birmingham

Organiser: Aaron Sloman
School of Computer Science, University of Birmingham

From: Liangpeng Zhang
Date: Tue, 25 Apr 2017 15:31:24 +0100
Subject: Some random thoughts on the topics discussed in Cogaff workshop

Dear Aaron,

Thank you very much for organising such an interesting workshop
yesterday. I am Liangpeng, the PhD student who obviously didn't have
much background knowledge about Cogaff but still attended the workshop
out of curiosity. The following are some of my unprofessional random
thoughts on this topic.

1. About your homework: one of my friends is suffering from some kind
of mental disorder that makes her feel depressed without reason (I
don't know the exact name of it; perhaps it is simply called
depression). She told me she didn't want to feel depressed at all, but
the depression still came to her rather randomly, without any apparent
patterns or causes, and the only thing she has found effective in
relieving such depression is taking the medicine given by her doctor.
I asked her whether there had been some specific event or experience
that might explain this disorder, but she said she couldn't think of
any such events, so she thought it was purely a bodily phenomenon
(e.g. being unable to produce enough of one chemical to suppress the
effect of another). Therefore, although it's not the case for me, I
think it is possible for people to feel emotions without the things
you mentioned in the homework (beliefs, preferences, desires, etc.),
and it is reasonable for psychiatrists to treat this kind of mental
disorder purely as a bodily defect/malfunction.

That being said, I agree that many emotions are explicitly related to
cognitive processes. The following is my own experience; I'm not sure
whether it counts as evidence, but I guess it's at least relevant to
this matter. I was born in China and lived there until I was 5 years
old, speaking Chinese of course. At 5 I was taken to Japan by my
parents and lived there until I was 9. During those four years I
started to speak Japanese and gradually forgot Chinese. When I
returned to China at 9, I was almost unable to speak Chinese and had
to study it again from the very basics. It took me quite some time to
learn to speak Chinese fluently again.

The odd thing is, I found myself increasingly often described as an
emotionless person by my classmates in China, which had never happened
when I was in Japan. When I became more self-aware (around 15 years
old), I also began to notice that when I was thinking in Chinese it
was much harder to evoke emotions than when thinking in Japanese, so
my classmates' observations might have been correct. This phenomenon
became even more evident when I began to use English frequently: when
I think in English, it is almost purely rational, with hardly any
emotion involved. Since Japanese is effectively my mother tongue and
English is a language I still struggle to use, my conclusion is that
the ability to have/feel an emotion is directly related to the ability
to recognise and express that emotion (at least for me).

2. About the unimplemented phenomenon where different choices come
back again and again in one's mind even after that person has made a
decision: I think this may actually occur in reinforcement learning,
where an agent may decide to follow one plan (e.g. go for lunch) but
stop executing it after several time-steps and switch to another plan
(e.g. return to work), because the agent now thinks the latter plan
leads to higher values/rewards.

This inconsistency of plan execution is caused by the agent updating
its estimated value function at every time-step using the information
collected so far, and choosing the action at that time-step according
to the updated function. Therefore, an option that has been discarded
by such an agent is never literally thrown into the bin; rather, it
pops back into consideration at every time-step, which I think is
quite similar to what you said about humans, although the exact
mechanisms can differ.

(Actually I think it can be quite close to a human's if we add random
noise to each estimated value, representing the degree of uncertainty
and hesitation. With such a control system, you are very likely to see
the agent leave its desk for lunch but then decide to continue working
as soon as it steps out of its office.)
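
To make this concrete, here is a minimal Python sketch of the kind of
control loop I have in mind. It is not any standard reinforcement
learning algorithm; the option names, value estimates, "true" payoffs,
learning rate and noise scale are all invented for illustration. The
only point it shows is that the agent re-compares every option at
every time-step using its latest (noisy) value estimates, so a plan
"discarded" a moment ago can win the comparison again later.

    # Toy sketch of plan switching under noisy, continually updated
    # value estimates.  All numbers and option names are made up.
    import random

    random.seed(0)

    options = ["go for lunch", "return to work"]

    # The agent's current value estimates (hypothetical starting beliefs).
    values = {"go for lunch": 1.0, "return to work": 0.8}

    # Assumed "true" payoffs the agent only discovers while acting.
    true_values = {"go for lunch": 0.6, "return to work": 1.2}

    learning_rate = 0.3  # how quickly estimates move toward observations
    noise_scale = 0.2    # uncertainty/hesitation added at decision time

    current_plan = None
    for t in range(10):
        # The discarded option is never thrown away: at every time-step
        # the agent re-compares ALL options using its noisy estimates.
        chosen = max(options,
                     key=lambda o: values[o] + random.gauss(0, noise_scale))
        if chosen != current_plan:
            print("t=%d: switching plan to '%s'" % (t, chosen))
            current_plan = chosen

        # Executing one step of the chosen plan yields an observation,
        # and the estimate for that option is nudged towards it.
        observation = true_values[chosen] + random.gauss(0, 0.1)
        values[chosen] += learning_rate * (observation - values[chosen])

With these made-up numbers the agent will usually set off for lunch
first, find that its estimate was too optimistic, and switch back to
work after a few steps, which is roughly the behaviour described in
the parenthesis above.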

I think the main reason this doesn't appear much in papers and
applications is that it doesn't seem to be practically useful (and
sometimes it can be harmful in applications), so people tend to "fix"
it rather than report it as a feature.

3. About the baby Euclid machine: I think there is a difference
between interesting, novel results that are worth reporting, and
results that help one understand things better but are already known
or simply uninteresting to other people. If a machine produces
something like "if a+b=b+a then 0+1=1+0", should we dismiss it as an
incapable mathematician? Euclid might have produced tons of such
uninteresting results to help himself understand mathematics before
writing his famous book.
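
As a toy illustration of how cheaply a machine can churn out results
of exactly this kind (this is my own made-up sketch, not any actual
system discussed at the workshop), consider instantiating a general
identity with small concrete numbers:

    # Instantiate the schema "a+b=b+a" with small concrete numbers.
    # Every instance is formally valid; almost none is worth reporting.
    from itertools import product

    schema = "if a+b=b+a then {a}+{b}={b}+{a}"

    for a, b in product(range(3), repeat=2):
        assert a + b == b + a              # trivially true
        print(schema.format(a=a, b=b))     # e.g. "if a+b=b+a then 0+1=1+0"

The hard part is not producing such statements, but giving the machine
useful feedback about which of them, if any, deserve attention.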

I think what prevents current AI from growing up into a Euclid is that
we neither provide it with a sufficiently good curriculum, one that
could turn an ordinary human student, step by step, into a Euclid, nor
provide it with an efficient self-criticism system so that it can
gradually become a Euclid all by itself. There has been research on
automatically generating texts/music/theories/game content/etc. that
can learn/evolve under a certain degree of human supervision, but it
seems to me that when a machine produces something like "if a+b=b+a
then 0+1=1+0", people cannot give it sufficiently informative feedback
to help it improve itself, because it is unclear even to the human
supervisors whether producing such things would eventually lead to a
Shakespeare/Bach/Euclid. Rather, the human feedback tends to be either
noisy (a fairly arbitrary numerical evaluation) or uninformative
(simply telling the machine that its result is uninteresting), making
it difficult for machines to find a direction in which to make real
progress.

In short, even if a machine can process information in a sophisticated
way, I guess it won't grow up into a Euclid if we can't provide it
with a system that consistently guides its growth. How would you
address this problem?

Those are all my random thoughts. Thank you for reading this long email,
and thank you again for hosting such an informative workshop.

Best wishes,
Liangpeng

This document is
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cogaff-sem.html

A partial index of discussion notes is in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html


Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham