DRAFT VERSION

Comments received after Cogaff Workshop
Cognition and Affect: Past and Future
Held on Monday 24th April 2017
School of Computer Science
University of Birmingham


Workshop web site
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cogaff-sem-apr-2017.html
Organiser: Aaron Sloman
School of Computer Science, University of Birmingham

THE FORMAT OF THIS DOCUMENT IS PROVISIONAL AND WILL BE RECONSIDERED IF/WHEN MORE COMMENTS ARE RECEIVED

LIST OF COMMENTS

Installed: 11 May 2017
Updated:
From: Liangpeng Zhang (1)
Date: Tue, 25 Apr 2017 15:31:24 +0100
Subject: Some random thoughts on the topics discussed in Cogaff workshop

Dear Aaron,

Thank you very much for organising such an interesting workshop
yesterday. I am Liangpeng, the PhD student who obviously didn't have
much background knowledge about Cogaff but still attended the workshop
out of curiosity. The following are some of my unprofessional random
thoughts on this topic.

1. About your homework: one of my friends is suffering from some kind of
mental disorder that makes her feel depressed without reason (I don't
know its exact name; perhaps it is simply called depression). She told
me she didn't want to feel depressed at all, but the depression still
came to her rather randomly, without any apparent patterns or causes,
and the only way she had found effective in relieving it is taking the
medicine prescribed by her doctor. I asked her whether there had been
some specific event or experience that might explain this disorder, but
she said she couldn't think of any such events, so she thought it was
quite purely a bodily phenomenon (e.g. being unable to produce enough of
some chemicals to suppress the effect of some other chemicals).
Therefore, although it's not the case for me, I think it's possible that
people can feel emotions without the things you mentioned in the
homework (beliefs, preferences, desires, etc.), and it's reasonable for
psychiatrists to treat this kind of mental disorder purely as a bodily
defect/malfunction.

[Note 1 inserted by AS]
    Yes: there are many different affective states and processes of different
    sorts, and any simple general claim about what emotions are is likely to
    have counter examples. The case of anger and its variants was chosen for
    "homework" simply because it is very easy to discern the requirement for
    fairly rich cognitive/descriptive contents, requiring something like
    grammatical structures for their characterisation, rather than numerical
    measures of bodily states. I think you agree with some of this in your
    second message below.

    The CogAff project investigated a wide range of types of affect, but we
    have never claimed to have a complete account.

That being said, I agree that many emotions are explicitly related to
cognitive processes. The following is my own experience; I'm not sure
whether it works as evidence, but I guess it's at least relevant to this
matter. I was born in China and lived there until I was 5 years old,
speaking Chinese of course. At 5 I was brought to Japan by my parents
and lived there until 9. In those four years, I started to speak
Japanese and gradually forgot Chinese. When I moved back to China at 9
years old, I was almost unable to speak Chinese and had to study it from
the very basics. It took me a long time to learn to speak Chinese
fluently again.

The odd thing is, I found myself increasingly frequently described as an
emotionless person by my classmates in China, which never happened when
I was in Japan. When I became more self-aware (~15 years old), I myself
also began to notice that when I was thinking in Chinese, it was much
harder to evoke emotions than when thinking in Japanese, so the
observations of my classmates might be correct. This phenomenon became
more evident when I began to use English frequently: when I think in
English, it's almost purely rational with hardly any emotion involved.
Since Japanese is effectively my mother tongue and English is a language
I still struggle to use, my conclusion is that the ability to have/feel
an emotion is directly related to the ability to recognise and express
that emotion (at least for me).

2. About the unimplemented phenomenon where different choices come back
again and again in one's mind even after that person has made a
decision: I think this actually may occur in reinforcement learning,
where the agent may decide to follow a plan (e.g. go for lunch), but
stop executing that plan after several time-steps and decide to follow
another plan (e.g. return to work) because the agent now thinks the
latter plan leads to higher values/rewards.

This inconsistency of plan execution is caused by the agent updating its
estimated value function at every time-step using the collected
information, and deciding its action at that time-step according to the
updated function. Therefore, the option that has been discarded by such
an agent is never literally thrown into the bin; rather, it still pops
back up to the agent at every time-step, which I think is quite similar
to what you said about humans, although the exact mechanisms can differ.

(Actually I think it can be quite close to a human's behaviour if we add
random noise to each estimated value, representing the degree of
uncertainty and hesitation. With such a control system, you'll very
likely see the agent leave its desk for lunch but then decide to
continue working as it steps out of its office.)
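The plan-switching behaviour described above can be sketched in a few
lines of Python. This is only a toy illustration under made-up
assumptions: the plan names ("lunch", "work"), the underlying values,
and the noise level are all invented for the sketch, not taken from any
published model.

```python
import random

def simulate(steps=20, seed=0, noise=0.3):
    """Toy sketch: an agent re-estimates the value of two competing
    plans at EVERY time-step, with Gaussian noise standing in for
    uncertainty/hesitation, so a 'discarded' plan keeps popping back
    and can win again mid-execution.
    Plan names and numbers are illustrative assumptions only."""
    rng = random.Random(seed)
    base_value = {"lunch": 1.0, "work": 1.1}  # assumed underlying values
    history = []
    for _ in range(steps):
        # Re-decide from fresh noisy estimates at every step; nothing
        # is ever permanently thrown into the bin.
        noisy = {plan: v + rng.gauss(0, noise)
                 for plan, v in base_value.items()}
        history.append(max(noisy, key=noisy.get))
    return history

if __name__ == "__main__":
    h = simulate()
    switches = sum(a != b for a, b in zip(h, h[1:]))
    print("choices:", h)
    print("plan switches:", switches)
```

When the noise is comparable to the gap between the two values, the
agent typically abandons and resumes plans several times over twenty
steps, much like someone who sets off for lunch and then turns back to
work.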

I think the main reason it doesn't appear much in papers and
applications is that it doesn't seem to be practically useful (and
sometimes it can be harmful for the applications), so people tend to
"fix" it rather than report it as a feature.

[Note 2 inserted by AS]
    The reference to a "value function" seems to be connected with the widely
    shared assumption that all motivation is based on some estimation of
    a reward or value or utility to be gained, or negative value to be
    avoided, where these are measured on some scale, so that the values of
    different motives can be compared.

    Although this is very widely believed by researchers in many disciplines,
    I think the assumed universal requirement for such a value function is
    just a myth. In a separate paper I have proposed a biologically useful
    type of mechanism, produced by evolution, that generates motives in
    individuals without making use of any measure of value for the individuals.
    See:
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/architecture-based-motivation.html

[Note 3 inserted by AS]
    The next paragraph relates to points made during the discussion about
    limitations of current AI. I claimed that the kinds of geometrical and
    topological discoveries made unwittingly by human toddlers and other
    animals are precursors to the sorts of mathematical discoveries made by
    ancient mathematicians that current AI systems are incapable of making,
    e.g. discovering that internal angles of a planar triangle must add up to
    half a rotation or that it is impossible for two rigid linked rings to
    become unlinked without temporarily changing the shape of at least one
    of them. That topological impossibility is obvious to children and adults
    who have not been taught topology. More details here: [linked rings]
    I claimed that current AI techniques could not produce a baby robot able
    to grow up to be a mathematician like Euclid, Archimedes, etc.

Liangpeng comments on the above:

3. About the baby Euclid machine: I think there is a difference between
interesting and novel results that are worth reporting, and results that
help one understand things better but are already known or just
uninteresting to other people. If a machine produces something like "if
a+b=b+a then 0+1=1+0", should we dismiss it as an incapable
mathematician? Euclid might have produced tons of such uninteresting
results to help himself understand mathematics before writing his famous
book.

I think what prevents current AI from growing up into a Euclid is that
we neither provide it with a sufficiently good curriculum that could
turn an ordinary human student step by step into a Euclid, nor provide
it with an efficient self-criticism system so that it can gradually
become Euclid all by itself. There has been research on automatically
generating texts/music/theories/game content/etc. that can learn/evolve
under a certain degree of human supervision, but it seems to me that
when machines produce something like "if a+b=b+a then 0+1=1+0", people
can't give them sufficiently informative feedback to help them improve
themselves, because it is unclear even to human supervisors whether
producing such things would finally lead to a Shakespeare/Bach/Euclid.
Rather, the human feedback tends to be either noisy (a quite random
numerical evaluation) or uninformative (simply telling the machine it's
uninteresting), making it difficult for machines to find a direction in
which to make real progress.

In short, even if a machine can process information in a sophisticated
way, I guess it won't grow up into a Euclid if we can't provide it with
a system that can consistently guide its growth. How would you address
this problem?

That's all of my random thoughts. Thank you for reading this long email,
and thank you again for hosting such an informative workshop.

Best wishes,
Liangpeng
Response by AS:
    Current AI mechanisms used in robots are not capable even of expressing
    the knowledge acquired by ancient mathematicians. Providing more training
    examples will not give robots the ability to represent impossibility.
    But this is a discussion for another context. Many different examples
    are presented, and some similarities and differences are shown or
    discussed, in this incomplete document:
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html

    One topic relevant to cognition and affect is why some people find
    mathematics so enthralling. I don't think it has anything to do with wanting
    to gain recognition, win prizes, etc. The desire to understand something
    complex and difficult can simply be very strong, without any ulterior
    motive involved. Unfortunately modern education (or perhaps modern culture
    generally) does not cultivate such desires, at least not for mathematical
    content.

  • Liangpeng Zhang 2 (Computer Science)
    From: Liangpeng Zhang 2
    Date: Wed, 26 Apr 2017 15:59:31 +0100
    To: Aaron Sloman
    Subject: Re: Update on Workshop on CogAff/Emotions Monday 24 April 2pm onwards
    
    Dear Aaron,
    
    I'm a first year PhD student in our school, supervised by Xin Yao. My
    research mainly focuses on reinforcement learning, but I also have
    interests in other topics related to AI.
    
    (More precisely, I am mostly interested in understanding myself, as I
    find it very difficult to understand what exactly I am thinking and how
    I come up with those thoughts. However, I was strongly advised by my
    parents and friends that I shouldn't choose philosophy or psychology as
    my major because that would make me starve, so I picked AI instead,
    hoping it's relevant enough to answer my questions, and meanwhile sounds
    practical enough to my parents and friends so that they won't need to
    worry. My current research has nothing to do with those questions,
    though.)
    
    Response by AS:
    
        The original AI researchers, including Turing, McCarthy, Simon,
        Minsky and others, were all primarily interested in the hard
        scientific questions, though they were also interested in engineering
        issues.
    
        Unfortunately, after several ups and downs in the history of AI, as the
        practical applications became more spectacular and sub-fields were
        shown to allow progress to be made by the use of statistical learning
        mechanisms, the interest of the public and of funders seemed to
        focus more and more on the practical applications of AI. So now very
        few AI researchers are still using AI as a new form of philosophy of
        mind, or theoretical psychology.
    
    > ...... [Regarding the notes on anger HERE.] I've just read that page
    > and I think the "angry without having physical symptoms" part is rather
    > interesting. I'd never thought of anything like that, but as I read
    > your note, I immediately realised that such things do exist (especially
    > for adults).
    >
    > On the other hand, I think I frequently experience the opposite, i.e.
    > emotions that I can feel through physical symptoms but can't
    > immediately figure out what they are about. For example, very recently
    > I found myself feeling unhappy/irritated, and I had to recall
    > everything I had done that day (which took ~10 mins) to find the exact
    > reason (or the target I was unhappy at). Finally I realised it was
    > because I had forgotten to buy milk the day before, and had to eat
    > muesli with hot water (which I didn't enjoy very much) for breakfast
    > that morning. It also seemed that I didn't notice being unhappy during
    > that breakfast, because I was watching something funny on youtube at
    > the same time. It is clear this event caused those negative feelings,
    > but the feelings remained even when I had almost totally forgotten the
    > cause. So which part of this example do you think is the emotion -- is
    > it the unhappy feeling (more like physical symptoms), or the
    > information processing involved when I ate muesli with hot water? (Or
    > both, seeing it as a two-part thing?)
    >
    > Finally, one question about perturbance/interrupt (it's called
    > interrupt in your notes, but I guess they're the same thing?): if A
    > turns into full anger at B, and immediately starts punishing B, with
    > nothing but anger in A's mind, does this anger also count as a
    > perturbance? Because it's like anger.exe at 100% CPU without
    > interrupting anything (except, say, thinking whether A should punch B
    > or kick B or hit B with a club, which I think is done unconsciously in
    > A's mind if A is really in 'full anger').
    >
    > Best wishes,
    > Liangpeng
    Response by AS:
        In the terminology of the CogAff project, the notion of "perturbance"
        was used as distinct from "interrupt" because a perturbant state can
        have a tendency, or propensity, to interrupt (or more precisely to gain
        control) without actually doing so. So it is a dispositional concept.
    
        An important fact about biological minds is that different conflicting
        dispositions can coexist. They do not necessarily sum to a single
        "resultant" disposition as different forces do.
    
        Several different perturbant states, concerned with different motives or
        concerns, can coexist in the same person. A very well controlled person
        can choose which ones to act on, and in what order, or in which
        circumstances. Other individuals may have great difficulty getting
        anything done when in a highly perturbant state because of inability to
        suppress the interruptions of (temporarily) rejected motives. Luc
        Beaudoin, who was one of the speakers at the workshop, has written a great
        deal about perturbance, both in his PhD thesis (Birmingham, 1994) and in
        recent online papers. See his link on the main workshop page.
    
        The work of Dean Petters on varieties of attachment, also introduced at
        the workshop, includes similar kinds of phenomena, though his examples
        are different.
    

    This document is
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/cogaff-sem.html

    A partial index of discussion notes is in
    http://www.cs.bham.ac.uk/research/projects/cogaff/misc/AREADME.html


    Maintained by Aaron Sloman
    School of Computer Science
    The University of Birmingham