
Bartlett School of Architecture, UCL


Cybernetic systems for self-awareness @heyhexx: Interactive social media puppetry theatre for grasping emotions


This thesis aims to explore the possibilities of designing systems of interaction for current standard communication methods, such as social media, to facilitate a process of self-reflection leading to deeper self-awareness. The main concepts to be discussed are cybernetics, specifically second-order cybernetics, as a guide to designing an interaction system, as well as the topics of social media, data science, emotion theories and system architecture as the context in which the thesis project @heyhexx sits. Case studies and fields of design and research that provided inspiration for the project will be analysed and tied into how these different methods were implemented in the system of the thesis project.

The thesis project @heyhexx is a robotic puppet theatre installation that exists in the physical world but facilitates a two-way conversation through Twitter[1]. Twitter users can send tweets to Hexx, and Hexx analyses the emotion of the tweets. The robot and its environment physically react in response to the emotion, and Twitter users can watch a video of the short theatre piece, recorded in real time and sent back to the person who sent the tweet. The purpose is for Twitter users to become aware of their own emotions expressed on social media in a unique way.

Keywords: Cybernetics, Social Media, Emotion, Affect, System Design

[1] Microblogging service that allows users to post tweets, which are short pieces of text limited to 140 characters.

1. Introduction

The social culture of today relies heavily on social media platforms such as Facebook, Twitter or Instagram to keep people connected to the social networks in which they exist. This has bred a generational culture in which people feel a need to stay connected with others more than ever, even leading to a form of social anxiety termed ‘FOMO,’ the fear of missing out. The attention of social media users today is increasingly directed outward, toward keeping up with what others are saying, doing, and thinking. As a result, a culture is being created in which it is becoming more difficult to reflect inward at the self. This motivated the research topic: ‘Is it possible to use the strengths of social media and flip its purpose, from a tool for connecting outward to a tool for reflecting inward? How can the design of a system of interaction, based on the standard methods of communication today, facilitate a process of self-reflection and self-awareness?’ The thesis project @heyhexx aims to provide a unique approach to facilitating the self-reflection of emotions within social media users, and this thesis paper examines the theories and methods that were considered and implemented to achieve the interaction system for the project.

2. Theory and Context

2.1 Cybernetics

To understand the central motive of the project @heyhexx, the concept of cybernetics must first be explained. The term cybernetics, used in the context of defining systems of control and communication, was made famous by mathematician Norbert Wiener, who wrote the book Cybernetics: or Control and Communication in the Animal and the Machine in 1948. The word cybernetics originates from the Greek for ‘governor’ or ‘art of steering,’ and it concerns taking action toward a goal, and the communication necessary to take that action. (Pangaro, 2013) According to Andrew Pickering, ‘Unlike more familiar sciences such as physics, which remain tied to specific academic departments and scholarly modes of transmission, cybernetics is better seen as a form of life, a way of going on in the world, even an attitude, that can be, and was, instantiated both within and beyond academic departments, mental institutions, businesses, political organizations, churches, concert halls, theaters, and art museums.’ (Pickering, 2010, p.9)

Cybernetics is fundamentally a closed circular feedback loop, and the ways of observing this circular feedback loop splits cybernetics into two orders. The first order is the study of the observed system, where a feedback loop is observed from outside of the loop.

Figure 1: First order cybernetics

Second order cybernetics is the shift from the observed system, to the cybernetics that considers observing, meaning the observer is observing from within the loop. (Glanville, 2003, p.1)

Figure 2: Second order cybernetics

Cybernetician Ranulph Glanville claims that what second order cybernetics offers is a consideration of observing through the first person, rather than the neutral and detached observations of first order cybernetics. (Glanville, 2003, p.3) What makes this interesting is that second order cybernetics no longer ignores the observer’s involvement in a closed cycle of feedback, and the observer becomes aware of its own role and own subjectivity in relation to the cycle. Heinz von Foerster wrote in Cybernetics of Cybernetics, that ‘…it appears to be clear that social cybernetics must be a second-order cybernetics–a cybernetics of cybernetics–in order that the observer who enters the system shall be allowed to stipulate his own purpose: he is autonomous. If we fail to do so somebody else will determine a purpose for us. Moreover, if we fail to do so, we shall provide the excuses for those who want to transfer the responsibility for their own actions to somebody else: “I am not responsible for my actions; I just obey orders.” Finally, if we fail to recognize autonomy of each, we may turn into a society that attempts to honor commitments and forgets about its responsibilities.’ (von Foerster, 2003, p.286) By this, von Foerster suggests the importance of applications of second-order cybernetics in a social context, so as to hold each person’s observations and own role in their society accountable.

Socio-cybernetician Walter Buckley applies cybernetics to consciousness, stating that a brain itself does not start by having consciousness, but that consciousness is generated through feedback from the entire bodily system (hormonal, emotive, endocrine, motor, and nervous systems), in response to interacting with its environment. This is through the ongoing process of ‘(1) environmental inputs to the sensory apparatus; (2) the sensory, perceptual, and cognitive processing of this input; leading often to (3) motor processes back out onto the environment (or to further internal mental processing), all occurring with many levels of recursion and feedback in a continuous dynamical, self-referential, control system cycle.’ (Geyer and Zouwen, 2001, p.42) Although Buckley focuses on the bodily sensory feedback which contributes to mental consciousness, he emphasizes that the organism-social environment dynamic loop needs to be considered for a complete theory of consciousness. The interactions between an organism and its sociocultural linguistic community are fundamental to the formulation of higher cognition and a sense of self. (Geyer and Zouwen, 2001, p.53) What is interesting about Buckley’s observation is the application of cybernetics to the self biologically, and to the self as part of a dynamic social feedback loop. Further, it would be even more interesting if second-order cybernetics were applied to what Buckley states: a person becoming aware of how they themselves are affected by the socio-environmental inputs received, the sensory processing of those inputs, and the output that they process back out to the environment. By becoming self-aware of one’s own cause and effect from the environment, and cause and effect back to the environment, the way of relating to the environment would surely evolve over time.

2.2 Social Media

In designing a cybernetic interaction system for today’s society, the common methods of interaction should be considered. With the establishment of Web 2.0[2] at the turn of the millennium, the way in which we interact in society has shifted dramatically. Before Web 2.0, personal information and communication was shared in person, only between selected individuals; now much of this sharing is done in a public domain on the internet, and the effects of the shared information reach further and wider than before. (Dijck, 2013, p.7) The environment in which we interact daily with society is no longer only our immediate surroundings, but has expanded widely to the social network that exists on the internet. Even when alone in a room, the internet can keep us connected to others. The number of people that information is shared with increases with the widening of a social network, and the amount of information that internet users absorb daily is significantly greater than before Web 2.0. This means that people using social media are being affected by others much more frequently than in the past.

The choice of social media platform also affects what type of influence a person outputs and receives. The three most widely used platforms at present are Facebook, Twitter and Instagram, and each serves a different purpose. On Facebook, since there must be mutual consent to become friends with someone and see what they post, the purpose of the platform is to connect with those you know. This is where people generally get the most influence from their personal network of acquaintances. Twitter is a microblogging platform on which people can create short posts of up to 140 characters and follow people’s accounts without the account holder’s consent. Often people who have never met follow each other. Although Twitter users do post personal opinions and thoughts, the very public nature of the platform means it is often used for self-promotion or cross-promotion. Instagram serves a similar purpose to Twitter, with the main difference being that the sharing is done through posting photos.

[2] ‘Web 2.0 is the current state of online technology as it compares to the early days of the Web, characterized by greater user interactivity and collaboration, more pervasive network connectivity and enhanced communication channels.’ (

2.3 Case Study 1: us+

Us+ is a Google Hangouts[3] video chat application created by artists Lauren McCarthy and Kyle McDonald, which analyses video chats to balance and optimize conversations. Users get pop-up notifications suggesting how to improve their conversation, such as “try to be more positive” or “stop talking about yourself so much,” and the application also takes automatic actions when the conversation becomes unbalanced; for example, when a participant talks too much they get auto-muted. The application also measures the tone of the conversation, such as the levels of positivity, self-absorption, and aggression, amongst others. (Bosker, 2013)

Figure 3: us+ by Lauren McCarthy

According to McCarthy, the intent for us+ is for it to be part social experiment, and part critique of people’s general willingness to be analysed and controlled by machines. McCarthy is critical of whether people should start to rely on applications like this to improve their conversations, saying that too much trust in these applications could lead to manipulation of thoughts and behaviours toward the agenda of the creators of the applications. (Bosker, 2013)

Although McCarthy is critical of our reliance on applications like us+, some important points become clear through this social experiment: people readily accept advice from machines and algorithms, even on quite personal matters such as emotions, and people can gain different perspectives on their behaviour and emotions through real-time feedback from machines. Human beings analyse situations through cognitive bias, forming opinions from their own perception of the inputs received; machines, although also biased, are perceived to be much less biased than humans. This makes machines and algorithms appear more trustworthy even when their analyses are not necessarily accurate. Us+ also follows a second order cybernetic model, whereby the application sends constant real-time feedback about the participant’s communication, making the participant aware in detail of how they themselves are affecting the conversation, and prompting the participant to adjust their way of conversing. People are not accustomed to receiving detailed real-time feedback about their own habits of communication, so beyond the immediate action and reaction, the feedback from the machine would surely leave lingering thoughts of self-questioning about their usual habits of communication. This could be powerful as a tool to instigate self-awareness.

[3] Google’s communication platform with text, voice or video chat services for one-to-one or group conversations.

2.4 Communication: Data Visualisation

Methods to communicate computerised analysis of human interaction can be learnt from the fields of data science and data visualisation. Data science uses scientific methods to extract insight from data. As the field rapidly grows, an enormous amount of data now exists, giving us more insight into the society and world in which we live. As a result, there is a need for more advanced practices of data visualisation to make sense of the data. A landmark early use of data visualisation came after the Crimean War, when Florence Nightingale[4] called for reforms of the sanitary conditions of the British Army hospitals. In order to convince the government to put her reforms into practice, she used graphic visualisation to give a quick, impactful representation of the scale of the sanitary issues in the hospitals.

Figure 4: Florence Nightingale’s Diagram of the Causes of Mortality in the Army in the East.  Red: Death from battlefield wounds Blue: Death from preventable diseases caused by unsanitary conditions Black: Death from other causes

Although data visualisation such as the graph that Florence Nightingale created is common now, Nightingale’s graphs were revolutionary at a time when such methods of communicating the meaning within data did not exist. In the context of the standards of that time, her work follows the key principles of effective data visualisation as stated in the book Beautiful Visualization: Looking at Data Through the Eyes of Experts, which are:

  1. Novel: ‘…a fresh look at the data or a format that gives readers a spark of excitement and results in a new level of understanding.’
  2. Informative: ‘The key to the success of any visual, beautiful or not, is providing access to information so that the user may gain knowledge.’
  3. Efficient: ‘A beautiful visualization has a clear goal, a message, or a particular perspective on the information that it is designed to convey.’
  4. Aesthetic: ‘Often, novel visual treatments are presented as innovative solutions. However, when the goal of a unique design is simply to be different, and the novelty can’t be specifically linked to the goal of making the data more accessible, the resulting visual is almost certain to be more difficult to use.’ (Steele & Iliinsky, 2010, pp.1-3)

In the context of today, when data visualisation is becoming an increasingly active discipline, people are already familiar with common methods of data visualisation such as graphics and animations. In order to achieve data visualisation that can create an impact, current popular methods of communicating ideas, such as real-time interaction and computational arts, should be considered.

[4] Founder of modern nursing, and was also a statistician and social reformer.

3. The Project @heyhexx

Figure 5: First prototype of Hexx the Robot

3.1 Concept and Research Topic

@heyhexx started out as a simple cardboard robot that could be puppeteered by input from Twitter. To our surprise, the robot elicited much joy from the people who saw it. We saw the potential of using the charm of the robot as a powerful medium, connecting the emotional attachment of people who interact with it to a social topic. Social media platforms such as Twitter and Facebook are often used to express and exchange personal opinions, but this can lead to arguments, harassment, and miscommunication. As with any form of written message, it is much more difficult to understand the context or emotional intent behind messages exchanged on social media platforms than when talking face-to-face or even on the phone. Much of communication is based on nonverbal cues such as gestures and facial expressions, so written messages, which cannot express any of these, can easily be misinterpreted. A study on how often emails are misinterpreted demonstrated this. (Kruger et al., 2005) The results showed that although 80 percent of the participants writing the emails thought that the tone of their email would be interpreted correctly by the receiver, only 50 percent on the receiving end were able to correctly identify the intended tone, suggesting that the tone of written communication is misinterpreted as often as half the time. Thus, we decided to explore whether Hexx the robot could be used as a medium to prompt self-reflection and self-awareness of the emotions expressed in written messages by social media users.

In designing an interaction system that can prompt self-reflection, consideration of second-order cybernetics became key. The aim of the interaction system is to visualise social media users’ emotions in an endearing way that makes the emotion results more easily relatable to the users. Through this, the goal is for users to self-reflect and gain awareness of how the emotions within their written messages may affect the physical world beyond the digital screen, something that is often not taken into consideration when posting messages on social media. Once this can be achieved, the next step would be for the system to facilitate a continuation of the interactions between users and Hexx, to achieve ongoing two-way conversations and to tailor the interactions to the users. The purpose of establishing a two-way interaction would be to explore whether users can recognise the longer-term effects of their accumulated emotions or habits of communication on the physical world around them, rather than only through separate individual interactions.

Figure 6: Hexx’s second order cybernetic system
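The second order loop described above can be sketched in code. The following is a minimal, hypothetical sketch of one interaction cycle; the function names and the hard-coded scores are our own illustrations for demonstration, not the project's actual implementation (which connects to the Twitter API, an emotion-analysis service, and a physical robot stage).

```python
def analyse_emotion(text):
    # Stub standing in for the real emotion-analysis service: returns
    # intensities for the five primary emotions used by the project.
    return {"joy": 0.8, "sadness": 0.1, "anger": 0.0,
            "fear": 0.05, "disgust": 0.05}

def dominant_emotion(intensities):
    # Reduce the five intensity scores to the single strongest emotion.
    return max(intensities, key=intensities.get)

def interaction_cycle(tweet_text):
    # 1. Input: a user's tweet enters the system.
    intensities = analyse_emotion(tweet_text)
    # 2. Processing: the strongest emotion drives the puppet's behaviour.
    emotion = dominant_emotion(intensities)
    # 3. Output: a recorded performance is sent back to the user,
    #    closing the feedback loop that prompts self-reflection.
    return f"Hexx performs '{emotion}' and replies with a video"

print(interaction_cycle("I had a wonderful day!"))
```

The key design point is that the loop closes back on the user: the output of step 3 becomes a new input to the user's self-observation, which is what makes the system second order rather than a one-way analysis.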

3.2 Gustav Freytag’s Five Act Structure

The aim of @heyhexx can be clearly defined in the terms of Gustav Freytag’s five act structure for constructing a narrative. Freytag separates a dramatic structure into five parts, which are introduction, rise, climax, return or fall, and catastrophe. (Freytag, 1894, pp.114-140)

Figure 7: Gustav Freytag’s Five Act Structure

Freytag’s five act structure is helpful in organizing the construction of the interaction narrative for @heyhexx: the arc of how the interaction system affects the user. The five parts can be related to the @heyhexx system as follows:

Introduction (Exposition on diagram)- Sets the context of the narrative. In Hexx’s system, this is a user’s pre-existing knowledge of their own emotions.

Rise (Rising action)- The motivation or action which sets the story in motion. A user’s tweet initiates the interaction with Hexx.

Climax- The peak of a narrative, when the results of the rising action are determined. The user anticipates the results of their tweet, whilst Hexx analyses the emotion and translates it visually.

Return or fall (Falling action)- The return after the peak. The results of the interaction with Hexx are revealed to the user.

Catastrophe (Denouement)- The closing action. After the user sees the results, the user gains a different perspective on their emotion from their interaction with Hexx.
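As a summary, the five-act mapping above can be expressed as a simple lookup table (stage descriptions paraphrased from this section):

```python
# Freytag's five acts mapped to stages of the @heyhexx interaction,
# paraphrased from the descriptions above.
freytag_to_hexx = {
    "exposition":     "user's pre-existing knowledge of their own emotions",
    "rising action":  "a tweet sent to Hexx initiates the interaction",
    "climax":         "Hexx analyses the emotion and translates it visually",
    "falling action": "the result of the interaction is revealed to the user",
    "denouement":     "the user gains a new perspective on their emotion",
}

for act, stage in freytag_to_hexx.items():
    print(f"{act}: {stage}")
```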

3.3 Emotion

3.3.1 Case Study: Pixar’s Inside-Out

A case study that provided insight as to how to create a unique visual language for a new understanding of emotions was Pixar’s film Inside-Out. The film successfully helped a large audience of all ages to start understanding their own emotions, by creating an animated storyline based around the core theories within the study of emotions, as consulted by expert emotion psychologists Dacher Keltner and Paul Ekman. (Keltner and Ekman, 2015) The animated film follows the inner emotions of 11-year-old Riley, during a transition of moving and adjusting to a different city, by creating a fantastical world in which emotions are given bodies and personalities, and memories are stored as glowing orbs, coloured by the emotions associated with those memories.

Figure 8: Five main characters in Inside-Out

Figure 9: Memories as orbs in Inside-Out

What the film successfully achieves is leaving a memorable, vivid impression in viewers’ minds of abstract concepts such as emotions and memories. By fabricating an entertaining world in which these concepts exist, and creating storylines for each of them, it provides viewers with concrete references for ambiguous, difficult-to-grasp concepts. Also making the film successful was its simplification of complex theories down to their essential elements, making it possible to deliver the key messages with clarity. Although many more complex emotions are being studied in science, the creators of the film chose to depict only five: one reason being that those five are widely regarded by a large group of psychologists as the primary emotions, and another being that five or six emotions would add enough complexity to the storyline without overwhelming it. (Judd, 2015)

3.3.2 Emotion Theories

In the process of deciding which emotions to analyse, it was important to consider the various theories of emotion that exist, which ones would be relevant and helpful to the project, and which would be possible with the tools available. The three main theories focused on were by psychologists Paul Ekman, Robert Plutchik and James A. Russell. Ekman’s prominent theory is based on his study of facial expressions. He identified six universally expressed emotions: anger, sadness, happiness, disgust, fear and surprise. Later studies on facial and vocal expressions suggested that there are more universal emotions, such as awe, desire, and sympathy, amongst others.

Figure 10: Paul Ekman’s six Primary Emotions

Plutchik’s theory is also based on primary emotions, but he suggested that there are 8 primary emotions, arranged in 4 pairs of polar opposites: joy and sadness, anger and fear, trust and disgust, and surprise and anticipation. He mapped these onto a 3D (cone) and a 2D (wheel) model, suggesting that these emotions can have different intensities and can be mixed with each other to create secondary emotions.

Figure 11: Plutchik’s 2D and 3D emotion model

According to Plutchik’s wheel of emotions, the intensities of the 8 primary emotions are named; for example, the 3 intensities of the primary emotion joy, from lowest to highest, are serenity, joy and ecstasy. Two primary emotions can mix to form more complex secondary emotions, such as anger and disgust mixing to form contempt. Russell’s theory took a different approach from Ekman and Plutchik’s primary-emotion theories: instead of primary emotions, he suggested that emotions can be mapped onto a two-dimensional space, with one axis being valence (pleasure or displeasure), and the other being the degree of arousal.

Figure 12: James A Russell’s Circumplex Model of Affect

Excitement, for example, is placed at moderately high pleasure and moderately high arousal, whereas depression is placed at moderately high displeasure and moderately low arousal.
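To illustrate, Russell’s circumplex can be sketched as points in a two-dimensional valence/arousal space. The coordinates below are illustrative guesses for demonstration, not values from Russell’s published model:

```python
import math

# Emotions as points in a 2-D valence/arousal space, each axis in [-1, 1].
# Coordinates are rough illustrative placements, not Russell's data.
circumplex = {
    "excited":   ( 0.6,  0.7),   # (valence, arousal)
    "happy":     ( 0.8,  0.3),
    "calm":      ( 0.5, -0.6),
    "depressed": (-0.6, -0.4),
    "tense":     (-0.5,  0.7),
}

def nearest_emotion(valence, arousal):
    # Classify an arbitrary (valence, arousal) reading by its nearest
    # labelled point on the circumplex.
    return min(circumplex,
               key=lambda e: math.dist(circumplex[e], (valence, arousal)))

print(nearest_emotion(0.7, 0.6))  # "excited"
```

A model like this would let a sentiment (valence) score and an arousal score from text analysis be translated into a named emotion, which is the practical appeal of Russell’s approach.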

3.3.3 Emotion Analysis Methods for Text

These emotion theories require different approaches when implementing them in text analysis. Ekman and Plutchik’s theories require first the extraction of primary emotions from the text, then processing of the emotion results. Russell’s approach would require sentiment analysis (positivity and negativity) and analysis of arousal within text. As emotion analysis from text is still quite a new field of study within Natural Language Processing[5], there are no gold-standard methods that we could simply adopt for the project. We instead considered the various methods of emotion analysis being researched, and weighed their pros and cons and their achievability in terms of the knowledge, skills, and time that we had. The main methods for emotion analysis are affect lexicons, rule-based linguistic approaches, and machine learning.[6]

[5] A field of computer science and artificial intelligence concerning the computerized processing of natural human languages.

[6] Overviews of the three common approaches are from the papers by Neviarouskaya et al. and Krcadinac et al.

Lexicons/Keywords

A common approach to extracting sentiment and affect from text is the use of lexicons[7] containing keywords and the emotions associated with those words. This approach extracts the emotions contained within only the keywords of text, rather than from the entire body of text. Using lexicons is an intuitive approach to extracting emotion from text, since only words with no ambiguity in the emotion classification are included. The words picked out from text using lexicons will result in accurate classification of emotions. However, the use of lexicons is limiting, since it does not have the information to analyse complexity in text resulting from subtle ambiguities in meaning and relations between words. For the initial prototypes of Hexx’s system, we experimented with using the AFINN-111[8] lexicon to extract the overall sentiment from text. We found that although it was accurate for sentences with clear positivity or negativity, it was unable to detect sentiment from sentences with more complexity, such as sentences containing many of both negative and positive keywords.
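The lexicon approach described above can be illustrated in a few lines of code. The entries below are a tiny made-up subset in the style of AFINN-111, not the actual lexicon:

```python
# A minimal lexicon-based sentiment scorer in the style of AFINN-111.
# The entries here are a tiny illustrative subset, not the real
# 2,477-entry lexicon.
afinn_subset = {"wonderful": 4, "happy": 3, "sad": -2, "terrible": -3}

def sentiment(text):
    # Sum the valence scores of any keywords found; words outside the
    # lexicon contribute nothing, which is exactly the limitation
    # described above for complex sentences.
    words = text.lower().split()
    return sum(afinn_subset.get(w, 0) for w in words)

print(sentiment("what a wonderful day"))    # 4 (clearly positive)
print(sentiment("a sad and terrible day"))  # -5 (clearly negative)
```

Note that a sentence mixing strong positive and negative keywords sums toward zero, illustrating why this method struggles once positivity and negativity co-occur.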

[7] ‘A lexicon is a collection of information about the words of a language about the lexical categories to which they belong.’ (

[8] Lexicon of 2,477 words and phrases rated with a valence score (positivity or negativity).

Rule-Based Approach

Another method is to use advanced rule-based linguistic approaches. Here, a set of common-sense rules is defined by hand, and the emotion is extracted based on these rules. There are also approaches where different lexicons are used together, or lexicons and rule-based approaches are combined into hybrid approaches. Synesketch is an open-source emotion visualisation tool that implements a hybrid approach to analyse text and visualise six primary emotions based on Ekman’s theory, as well as the weight (intensity) of the emotion and its positivity or negativity. The method first parses text through a set of rules that considers negation, emotional weight, and elements of surprise, along with other factors. Then a combination of two lexicons is used, a word lexicon and an emoticon[9] lexicon, to determine the emotion of the text. This method performs well on sentences with clear emotions, but struggles with more complex sentences with less clear emotional cues, and it requires a feedback system with users to continuously improve the lexicon and rules for accuracy. (Krcadinac et al., 2013)
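As an illustration of the rule-based idea, the toy example below adds a single hand-written negation rule on top of a keyword lexicon. This is a simplification for demonstration only, not Synesketch’s actual rule set:

```python
# A toy hybrid of the lexicon and rule-based approaches: a negation word
# flips the valence of the next keyword found.
lexicon = {"happy": 3, "sad": -2}
negations = {"not", "never", "no"}

def rule_based_sentiment(text):
    words = text.lower().split()
    score, negate = 0, False
    for w in words:
        if w in negations:
            negate = True          # remember a pending negation
        elif w in lexicon:
            value = lexicon[w]
            score += -value if negate else value
            negate = False         # a keyword consumes the negation
    return score

print(rule_based_sentiment("i am happy"))      # 3
print(rule_based_sentiment("i am not happy"))  # -3
```

A plain lexicon scorer would rate both sentences identically; the one added rule is enough to tell them apart, which is why hybrid approaches outperform lexicons alone on sentences with simple linguistic structure.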

[9] ‘…a pictorial representation of a facial expression using punctuation marks, numbers and letters, usually written to express a person’s feelings or mood.’ (

Corpus and Machine-learning Approach

A corpus[10] and machine learning[11] based approach is not based on the classification of keywords, but on the classification of bodies of text. In this approach, bodies of text are tagged with emotion classifiers, which train a machine learning algorithm to automatically recognize the emotions of text. EmoTweet-28 is an impressive corpus, consisting of 15,553 tweets annotated with 28 emotion categories to recognize not only primary emotions, but 28 fine-grained emotions. The dataset of tweets was annotated to a gold standard by first employing Amazon Mechanical Turk[12] workers to annotate the large set of tweets; researchers then manually reviewed the annotations to reach a gold standard. The algorithm trained on the EmoTweet-28 gold-standard corpus achieved mixed results for accuracy: some emotions very low, many moderate, and certain emotions reaching high accuracy of above 70 percent. (Liew, Turtle and Liddy, 2016) It appears to be a promising approach to detecting fine-grained emotions beyond Ekman’s six primary emotions, but the disadvantage is that it requires a large corpus to achieve consistently high accuracy across all the emotions classified.
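The principle of the corpus-based approach can be sketched with a toy example: a handful of annotated texts build per-emotion word profiles, and new text is classified by word overlap. EmoTweet-28 uses 15,553 annotated tweets and proper machine-learning algorithms; this sketch (with invented example tweets) only illustrates the idea:

```python
from collections import Counter

# A tiny annotated "corpus" of (text, emotion) pairs, invented for
# illustration; a real corpus would contain thousands of examples.
corpus = [
    ("i love this so much", "joy"),
    ("this makes me so happy", "joy"),
    ("i miss you terribly", "sadness"),
    ("i feel so alone today", "sadness"),
]

# Build a bag-of-words profile per emotion from the annotated corpus.
profiles = {}
for text, emotion in corpus:
    profiles.setdefault(emotion, Counter()).update(text.split())

def classify(text):
    # Score each emotion by how often the text's words appear in its
    # profile, and return the best match.
    words = text.split()
    return max(profiles, key=lambda e: sum(profiles[e][w] for w in words))

print(classify("so happy today"))  # "joy"
```

The sketch also makes the stated disadvantage concrete: with only four training examples, words outside the profiles contribute nothing, so accuracy depends directly on corpus size and coverage.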

[10] ‘A corpus is a large body of natural language text used for accumulating statistics on natural language text.’ (
[11] A branch of artificial intelligence. ‘Machine learning is said to occur in a program that can modify some aspect of itself, often referred to as its state, so that on a subsequent execution with the same input, a different (hopefully better) output is produced.’ (
[12] ‘Amazon Mechanical Turk is a crowdsourcing marketplace that enables individuals or businesses to use human intelligence to perform tasks that computers are currently unable to do.’ (Amazon)

Emotion Analysis Services

The methods for emotion detection described above come from researchers in computer science, specifically natural language processing. As we had neither the specialist skills nor the time to implement these methods properly, it would not have been realistic to build them into the project ourselves. Instead, we considered existing emotion analysis services that are publicly available for use in applications, such as IBM Watson, Qemotion, AYLIEN, PreCeive, and Microsoft Azure. These services generally achieve the same task of analysing primary emotions and outputting their intensities, but the specific primary emotions analysed vary from service to service. Interested in implementing Ekman and Plutchik’s theories of emotion, we wanted to find a service that analysed the emotions closest to the primary emotions they suggested.

IBM Watson’s Natural Language Understanding (NLU) service analyses five primary emotions (sadness, joy, anger, fear and disgust), the same emotions represented in the film Inside-Out, and the same as Ekman’s suggested primary emotions (leaving out surprise). IBM Watson is a supercomputer built on machine learning algorithms that can understand the complex structures of natural language based on the rules of grammar, context and culture. (IBM, 2014) Watson’s various cognitive computing[13] services make it possible for such advanced artificial intelligence to be easily used by non-experts in natural language processing. We chose Watson NLU because IBM Watson services are generally trusted and widely used for a range of projects, such as fashion label Marchesa’s use of Watson to change the colour of a dress at the Met Gala to reflect, in real time, the public sentiment surrounding the event on Twitter. Watson’s range of other cognitive services also offered the flexibility to expand Hexx’s interaction system if we chose to in the future. NLU is a practical solution for fulfilling the emotion analysis requirements of Hexx’s system. Most of the time the analysis is very accurate, but there are still instances of it being wrong. Since Watson’s algorithms constantly receive feedback from the use of the services and learn from their mistakes, the hope is that the emotion analysis will continue to improve.

[13] Computing modelled on the way the human brain works, rather than on traditional mathematical computing models.

3.3.4 Post-processing Emotions

Watson NLU’s analysis outputs the intensities of each of the five emotions for the text analysed, so a step of interpreting the results is required to narrow it down to one emotion.

Figure 13: Example of IBM Watson Natural Language Understanding results
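The narrowing step can be sketched as picking the strongest of the five intensities. The result shape follows NLU’s documented emotion output (five named intensities between 0 and 1); the code is an illustration, not the project’s exact implementation:

```javascript
// Sketch: reduce Watson NLU's five emotion intensities (0–1) to the
// single strongest emotion. The field names follow NLU's documented
// response; treat the exact shape as an assumption here.
function dominantEmotion(scores) {
  // scores: e.g. { sadness: 0.1, joy: 0.7, anger: 0.05, fear: 0.1, disgust: 0.05 }
  return Object.entries(scores).reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  )[0];
}
```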

For @heyhexx to ignite interest in users’ own emotions, the emotions output by the system must go beyond only five, so it was important that the processed results could still express a range of emotions. Plutchik suggested that dyads (pairs) of emotions create complex secondary emotions, so we theorised that the results from NLU could be combined to generate secondary emotion results.

Figure 14a: Plutchik’s 2D and 3D model of emotions Figure 14b: Plutchik’s emotion dyads

Figure 15: Chart of Hexx’s potential behaviours for secondary emotions based on Plutchik’s emotion dyads
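Combining two strong primaries into a secondary emotion can be sketched as a pair lookup. The labels below are illustrative examples drawn loosely from Plutchik’s dyads; the project’s actual mapping is the chart in Figure 15:

```javascript
// Sketch: look up a secondary emotion for a pair of dominant primaries.
// Labels are illustrative examples only; keys are stored in sorted
// order so the lookup is order-independent.
const DYADS = {
  'disgust+sadness': 'remorse',
  'anger+disgust': 'contempt',
  'fear+joy': 'guilt',
  'fear+sadness': 'despair',
  'disgust+joy': 'morbidness',
};

function dyadLabel(a, b) {
  const key = [a, b].sort().join('+'); // normalise pair order
  return DYADS[key] || null; // null: no secondary emotion defined
}
```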

3.3.5 Experiments Using a Simplified Machine-learning Approach

To process the results into secondary emotions, we tested a simplified approach based on EmoTweet-28’s fine-grained emotion classification method, using Wekinator[14], a user-friendly machine learning software that accepts input values via OSC[15]. Since Watson’s results are numerical, and Wekinator only accepts numerical inputs, this approach trains the algorithm on numbers, differing from EmoTweet-28’s method of training on text. The first experiment defined 16 of the more common emotions within the list of 28 analysed by EmoTweet-28, and manually found tweets fitting those 16 emotions. As manually filtering through tweets took a significant amount of time, we started with 80 gold-standard tweets, which were classified into the 16 emotions using Wekinator. For such a small data set this produced relatively accurate results for a few of the secondary emotions, but a much larger data set would be necessary to produce results accurate enough to be useful for the project. Since the time and resources this method requires were not available, the next step was to try automating the annotation process. The first experiment revealed that each emotion did show patterns in which of the primary emotions were strongest. The 16 emotions were categorised by these patterns, which were used to filter a dataset of 5,000 tweets. After training Wekinator with the filtered datasets, the strengths of the five emotions were adjusted for each new emotion category until the results reached moderate accuracy. However, for many of the fine-grained emotion categories, filtering the dataset with this method was ineffective, since most of the data filtered in was irrelevant.
The method showed some promise, but adjusting the filters to produce accurate results would require a much deeper understanding of natural language processing, which was not achievable in the time frame given for the project.
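Feeding Watson’s results into Wekinator can be sketched as follows, assuming the third-party node-osc package and Wekinator’s default input port (6448) and OSC address (/wek/inputs); only the vector construction is exercised here, the network call is left commented out:

```javascript
// Sketch: send Watson's five emotion intensities to Wekinator as a
// fixed-order input vector over OSC. Package and port details are
// assumptions based on Wekinator's defaults.
const EMOTIONS = ['sadness', 'joy', 'anger', 'fear', 'disgust'];

// Pure step: fixed ordering so Wekinator always sees the same inputs.
function toInputVector(scores) {
  return EMOTIONS.map((e) => scores[e] || 0);
}

// Network step (not run here):
// const { Client } = require('node-osc');
// const wek = new Client('127.0.0.1', 6448);
// wek.send('/wek/inputs', ...toInputVector(scores));
```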

[14] Open source machine learning software intended for the creation of new musical instruments and interactive systems.
[15] ‘Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology.’

3.3.6 Case Study: TrumpFeels

An application called TrumpFeels achieved exactly what we were aiming for, using a remarkably simple approach. TrumpFeels analyses current United States President Donald Trump’s daily tweets using Watson NLU, then processes the results to tag the tweets with creative emotion labels. Although not always accurate, the application achieves enough accuracy, and is entertaining enough, to prompt critical consideration of the emotions contained within the President’s tweets.

Figure 16: TrumpFeels application

The TrumpFeels method for processing the NLU emotion results consists of three parts:

    1. For each of the five primary emotions there is an array of five to ten emotional words describing its different intensities. If there is a clear dominant emotion, the word corresponding to its intensity is chosen from that emotion’s list. For example, a result where anger is dominant but fairly low may yield the word ‘irritated.’
    2. If there are two dominant emotions of close intensity, a special emotion label is chosen. For example, if joy and disgust were the only two dominant emotions, the tweet might be labelled ‘smug.’
    3. Other functions handle special cases, such as more than two dominant emotions.

    (see Appendix A)
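The three rules above can be sketched as follows; the word lists, thresholds and dyad table are illustrative stand-ins, not TrumpFeels’ actual values:

```javascript
// Sketch of TrumpFeels-style post-processing. Word arrays, thresholds
// and dyad labels here are illustrative stand-ins only.
const INTENSITY_WORDS = {
  anger: ['irritated', 'annoyed', 'angry', 'furious'], // low → high
  joy: ['content', 'pleased', 'happy', 'elated'],
  sadness: ['glum', 'sad', 'sorrowful', 'devastated'],
  fear: ['uneasy', 'worried', 'afraid', 'terrified'],
  disgust: ['put off', 'disgusted', 'revolted', 'repulsed'],
};
const DYAD_WORDS = { 'disgust+joy': 'smug' };

function label(scores, threshold = 0.5, closeness = 0.1) {
  const ranked = Object.entries(scores).sort((a, b) => b[1] - a[1]);
  const [top, second] = ranked;
  // Rule 2: two dominant emotions of roughly equal intensity → dyad word.
  if (second && top[1] - second[1] < closeness && second[1] >= threshold) {
    const key = [top[0], second[0]].sort().join('+');
    if (DYAD_WORDS[key]) return DYAD_WORDS[key];
  }
  // Rule 1: one clear dominant emotion → word matching its intensity.
  if (top[1] >= threshold) {
    const words = INTENSITY_WORDS[top[0]];
    const idx = Math.min(words.length - 1, Math.floor(top[1] * words.length));
    return words[idx];
  }
  // Rule 3: special cases (here, no dominant emotion at all).
  return 'indifferent';
}
```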

The TrumpFeels method of processing Watson NLU’s results is simple to achieve, yet still follows the fundamental theories of emotion which Ekman and Plutchik suggest. Because of the method’s simplicity, it produces consistent predictable results from a developer’s perspective, allowing for fine control of the parameters that affect the processing, while still providing seemingly unpredictable and interesting results for users. We chose to implement this method into Hexx’s system, but changed the number of final output emotions and the labels of these emotions to match Plutchik’s wheel of emotions and his emotion dyads.

For Hexx’s emotion processing, we chose to have three possible intensities of each primary emotion, resulting in 15 possible emotion labels for the varying intensities of the primary emotions alone. For the emotion dyads, we followed the chart of Hexx’s potential behaviours [Figure 15], which yields 10 possible complex emotions. An additional emotion, ‘indifferent,’ covers very low results across all five emotions. This makes 25 total possible emotion outputs that users could receive back from their tweet, enough variety to hold users’ interest.

3.4 Design of Hexx’s World

Since the project started with a physical robot puppet, it was clear from the beginning that the emotions from tweets would be visualised in tangible objects, and that Hexx would be the main medium through which the emotion was expressed. However, expressing the emotions through Hexx alone did not provide enough novelty, so it was decided that a world for Hexx should be created. In theatre, stage design plays a vital role in providing context for a storyline and enhances the storytelling experience. Specific to this project, an advantage of Hexx being a small robot is that the objects in Hexx’s world can be easily moved by small motors, something that is possible but difficult to achieve in stage design at the human scale. By animating the physical objects in Hexx’s world, it becomes possible to enhance the emotion that Hexx displays in its behaviour, and to use the set’s motion to provide triggers in the storyline for Hexx’s behaviours.

Figure 17: Hexx’s environment

Figure 18: One location in Hexx’s world- The park

Creating multiple different locations for Hexx’s world meant that an automated puppeteer would be required to move Hexx around in its world, so we made use of a UR10[16] robot arm which was made available to us by the university.

Figure 19: UR10 Robot Arm as Puppeteer for Hexx

[16] Universal Robot UR10- Collaborative robot arm used to interact physically with humans in a shared space. In a work process, they are intended to be safe to share in work tasks with humans.

3.5 Design of an Automated System

The design of the software architecture[17] is an important aspect of supporting the conceptual cybernetic system that we wished to create. The way the software’s architecture is structured affects the user’s experience and the scalability of the software. We went through several iterations of the overall system’s structure, but what we wished to achieve overall is shown in the following diagram.

Figure 20: Main elements of @heyhexx system

[17] The structures of a software system

3.5.1 Monolithic vs Microservice Architecture

Hexx’s system started out as a monolithic[18] system, with all elements apart from the applications controlling the behaviours of the objects being contained in one central node.js[19] application.

Figure 21: Initial @heyhexx monolithic system

When the concept of Hexx’s interaction system was still in its initial stages, a monolithic system was useful for testing the skeletal elements of the system together. However, as the project developed and more detailed requirements were added, it became difficult to continue with a monolithic system. With only one application, its various functions connect to each other in a specific sequence and structure, so we faced difficulty restructuring the application whenever new functions needed to be added.

Restructuring the system as a microservice[20] system solved these issues. By having a modular system of multiple separate applications that each perform one function (sometimes two), and having them run autonomously, the structure of the system stays flexible. This makes it easier to scale up the system, as new features can be added easily. A microservice system also makes debugging[21] easier, since it becomes clear which application in the chain produced an error; in one large application, as the code grows with added functionality, the same error may be harder to spot. Currently, Hexx’s system consists of five node.js applications, which connect by passing JSON[22] files between one another.

Figure 22: @heyhexx microservices system

Application 1: Twitter Streaming

The first application uses the Twitter API[23] to stream tweets that mention the Twitter handle @heyhexx. Tweets are temporarily placed in a queue and processed in first-in, first-out order.
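A minimal sketch of this streaming-and-queueing step, with the Twitter client call (assumed here to be the ‘twit’ npm package of the time) commented out so the first-in, first-out logic stands alone:

```javascript
// Sketch: queue incoming mentions first-in, first-out.
const queue = [];

function enqueue(tweet) {
  queue.push(tweet); // newest tweet goes to the back
}

function dequeue() {
  return queue.shift(); // oldest tweet comes out first
}

// Streaming step (assumed client, not run here):
// const Twit = require('twit');
// const T = new Twit({ /* credentials */ });
// T.stream('statuses/filter', { track: '@heyhexx' }).on('tweet', enqueue);
```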

Application 2: Queue Hold

The second application regulates when the next tweet at the top of the queue can go through the rest of the system. If there is currently a tweet still being processed by the system, it will wait until that one cycle finishes to let the next tweet through.
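The hold logic can be sketched as a simple gate that releases the next tweet only when the current cycle reports completion; names and structure are illustrative:

```javascript
// Sketch: let one tweet through at a time. process(tweet, done) is the
// rest of the pipeline; it calls done() when the reply video has been
// sent, completing one cycle.
function makeGate(process) {
  const waiting = [];
  let busy = false;

  function tryNext() {
    if (busy || waiting.length === 0) return;
    busy = true;
    process(waiting.shift(), function done() {
      busy = false;
      tryNext(); // release the next queued tweet, if any
    });
  }

  return (tweet) => { waiting.push(tweet); tryNext(); };
}
```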

Application 3: Emotion Analysis

This application uses IBM Watson’s NLU API to analyse the emotion of the tweet, and processes the results to output one of the 25 emotion labels.

Application 4: Communication to Other Applications and Recording Video

This application sends the results from application 3 to the two applications that control the behaviours of the objects. A message is sent to Grasshopper[24], which controls the robot arm that determines Hexx’s location within its world, and to the Processing[25] application, where the behaviours of the motorised objects (Hexx and the environment) are programmed.

In this same fourth node.js application, a video is taken of the physical objects expressing the emotion. A Logitech C920 Pro webcam mounted on the base of the robot arm is triggered to record a 10-second video using FFmpeg[26]. Text containing the tweet’s emotion result is overlaid to customise each video.
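The capture step can be sketched as building an FFmpeg argument list; the input device, overlay styling and output path are assumptions (some FFmpeg builds also require a fontfile for drawtext):

```javascript
// Sketch: FFmpeg arguments for a 10-second webcam capture with the
// emotion label overlaid via the drawtext filter.
function ffmpegArgs(emotionLabel, outFile) {
  return [
    '-f', 'v4l2',              // Linux webcam input (avfoundation on macOS)
    '-i', '/dev/video0',       // assumed device path
    '-t', '10',                // 10-second clip
    '-vf', `drawtext=text='${emotionLabel}':x=20:y=20:fontsize=36:fontcolor=white`,
    '-y', outFile,
  ];
}

// Usage (not run here):
// const { spawn } = require('child_process');
// spawn('ffmpeg', ffmpegArgs('remorse', 'reply.mp4'));
```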

Application 5: Replying

The final application sends a tweet containing the video back to the account that tweeted at Hexx. This is the end of one cycle, and the system is ready for the next incoming tweet.

[18] ‘A monolithic architecture is the traditional unified model for the design of a software program. Monolithic, in this context, means composed all in one piece.’
[19] ‘Node.js (Node) is an open source development platform for executing JavaScript code server-side. Node is useful for developing applications that require a persistent connection from the browser to the server and is often used for real-time applications such as chat, news feeds and web push notifications.’

[20] ‘A microservice architecture (MSA) is a logical structure for the design of a software program involving loosely-coupled modular components known as microservices.’
[21] The process of identifying and removing errors from code.

[22] JavaScript Object Notation (JSON) is a data exchange format used commonly to collect and exchange data between web applications.
[23] Application Program Interface (API) is code that allows for an application to access the services of another application.

[24] Visual programming language for the computer aided drafting application Rhinoceros 3D.

[25] Programming environment using the Java language, built for the arts and design.
[26] Free command line software for streaming, recording, converting and editing audio and video.

3.6 Future Development

Once we have fine-tuned all the elements of the project and it works reliably enough that users can identify the correlation between their tweet and the puppetry theatre piece, we would like to implement a second phase of the concept and system, as mentioned earlier in this paper. The second phase would widen the target audience beyond Twitter, and then develop a system to facilitate a continuous two-way conversation. Exploring theories of causation could craft interesting narratives in conversations between Hexx and users. Within the study of causation, the theory of probabilistic causation is of particular interest. Its main idea is that a certain cause is likely to produce a certain effect, and that this cause must raise the probability of that effect more than other causes do. (Sosa and Tooley, 1993, pp.152-53) Introducing probabilistic causation into Hexx’s system could give new meaning to the project. Currently, Hexx’s output is a direct reflection of the emotion analysed from the tweet. Incorporating probabilistic causation would change Hexx’s output into a response to the emotion analysed, rather than a reflection of it.
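Such a probabilistic response could be sketched as sampling a behaviour from an emotion-conditioned distribution rather than mapping it directly; the behaviours and weights here are purely illustrative:

```javascript
// Sketch: the analysed emotion raises the probability of certain
// behaviours instead of determining a single one.
const RESPONSES = {
  anger: [['sulk', 0.5], ['storm off', 0.3], ['mirror anger', 0.2]],
  joy: [['dance', 0.6], ['play coy', 0.4]],
};

// rand is injected (0 <= rand < 1) so the sampling is testable; the
// live system would pass Math.random().
function respond(emotion, rand) {
  const dist = RESPONSES[emotion] || [['idle', 1]];
  let acc = 0;
  for (const [behaviour, p] of dist) {
    acc += p;
    if (rand < acc) return behaviour; // cumulative-probability sampling
  }
  return dist[dist.length - 1][0];
}
```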

Figure 23: Current interaction system

A further step that can be taken would be to create a personality for Hexx based on the accumulations of the interactions with each user, resulting in a continuously evolving personality of Hexx customised to each user. Each new interaction could build on the previous interactions, and this system could simulate the effects of one’s emotions on the physical world over time.

Figure 24: Future proposed interaction system

These future developments would add complexity to the concept of the system, and would require careful reconsideration of the design of the interface through which users interact and receive the video feedback. Without this, the correlation between users’ interactions and Hexx’s output could easily become obscured, failing to deliver the original intention of the project. Due to the limitations of Twitter as a platform, it may become necessary to add these new features on a different social media platform, such as Facebook, which offers a wider variety of functionality; or it may prove necessary to create a standalone platform on the internet that could be freely customised to aid the storytelling.

4. Conclusion

In its current state, the @heyhexx system has the potential to instigate a certain degree of self-reflection in users, through the analysis of emotions more complex than the five primary emotions. However, it is questionable whether this could prompt enough self-reflection to influence users’ future communication. To achieve a truly second-order cybernetic system for self-awareness of emotions, the accuracy of the emotion analysis should be improved, and the concept should reach the future steps of facilitating continuous two-way interactions by incorporating causation. Achieving these could lead to the creation of a unique tool that augments the experience of using social media.

Appendix A

Email from creator of TrumpFeels 16 June, 2018:

Not sure it will be helpful, but I’d be happy to give you an outline of how the tweets are being processed. As you may have guessed, I am using Watson for the 5 basic emotions. I spent some time considering different ways to boil the analyses down into one word–including using logistic regression on survey data collected through something like Amazon’s Mechanical Turk–but wound up going with something much simpler and far easier to implement:

  1. I have an array of 5—10 words for each of the 5 basic emotions, roughly ordered by intensity. If the analysis is strongly dominated by one of the five emotions, I go to the appropriate array and choose a word describing that emotion with the appropriate intensity. For example, if anger is dominant but the anger is fairly low, I would go to the anger array and the word might be something like “irritated.”
  2. There are special words to handle cases where an analysis is dominated by 2 emotions of roughly equal intensity. For example, an analysis ranking high for joy and disgust and low for the other three basic emotions might be labeled as “smug.”
  3. I handle some other special cases (such as several roughly equal) separately.

My goal was to have a system that gives a plausible word for the results of each Watson analysis, and while this system could be improved, I think it accomplishes that goal reasonably well.

Figure References

Figure 1: First order cybernetic diagram. Sana Yamaguchi (2018)

Figure 2: Second order cybernetic diagram. Sana Yamaguchi (2018)

Figure 3: us+ by Lauren McCarthy. Lauren McCarthy (2013)

Figure 4: Florence Nightingale’s Diagram of the Causes of Mortality in the Army in the East. Florence Nightingale (1858) Available at: pioneer-statistician

Figure 5: First prototype of Hexx the Robot. Sana Yamaguchi, Patsaraporn Liewatanakorn (2018)

Figure 6: Hexx’s second order cybernetic system. Sana Yamaguchi (2018)

Figure 7: Gustav Freytag’s Five Act Structure. Available at:

Figure 8: Five main characters in Inside-Out. Pixar (2015) Available at: emotions-in-inside-out-male

Figure 9: Memories as orbs in Inside-Out. Pixar (2015) Available at:

Figure 10: Paul Ekman’s six Primary Emotions. Available at:

Figure 11: Plutchik’s 2D and 3D emotion model. Swiss Miss (2011) Available at:

Figure 12: James A Russell’s Circumplex Model of Affect. (Russell, 1980)

Figure 13: Example of IBM Watson Natural Language Understanding results. IBM (2018)

Figure 14a: Plutchik’s 2D and 3D model of emotions

Figure 14b: Plutchik’s emotion dyads. Available at: emotions_fig3_308972170

Figure 15: Chart of Hexx’s potential behaviours for secondary emotions based on Plutchik’s emotion dyads. Sana Yamaguchi (2018)

Figure 16: TrumpFeels application. Quizzical Therapy LLC (2017)

Figure 17: Hexx’s environment. Sana Yamaguchi, Patsaraporn Liewatanakorn, Parvin Farahzadeh (2018)

Figure 18: One location in Hexx’s world- The park. Sana Yamaguchi, Patsaraporn Liewatanakorn, Parvin Farahzadeh (2018)

Figure 19: UR10 Robot Arm as Puppeteer for Hexx. Sana Yamaguchi, Patsaraporn Liewatanakorn, Parvin Farahzadeh (2018)

Figure 20: Main elements of @heyhexx system. Sana Yamaguchi (2018)

Figure 21: Initial @heyhexx monolithic system. Sana Yamaguchi (2018)

Figure 22: @heyhexx microservices system. Sana Yamaguchi (2018)

Figure 23: Current interaction system. Sana Yamaguchi (2018)

Figure 24: Future proposed interaction system. Sana Yamaguchi (2018)


Ashby, W. (1970). An introduction to cybernetics. London: University Paperbacks.

Bain, M. (2016). IBM Watson co-designed the most high-tech dress at the Met Gala. [online] Quartz. Available at: most-high-tech-dress-at-the-met-gala/ [Accessed Jun. 2018].

Becker-Asano, C. and Wachsmuth, I. (2009). Affective computing with primary and secondary emotions in a virtual human. Autonomous Agents and Multi-Agent Systems, 20(1), pp.32-49.

Benarous, X. and Munch, G. (2016). Inside Childrenʼs Emotions. Journal of Developmental & Behavioral Pediatrics, 37(6), p.522.

Bosker, B. (2013). Finally, An App To Fix Your Terrible Personality. [online] HuffPost UK. Available at: app_n_4455788?guccounter=1&guce_referrer_us=aHR0cHM6Ly93d3cuZ29vZ2xlL mNvbS8&guce_referrer_cs=O-iliRHdPo1FssJ1Ix23Gg [Accessed Mar. 2018].

Canales, L., Strapparava, C., Boldrini, E. and Martinez-Barco, P. (2016). Innovative Semi-Automatic Methodology to Annotate Emotional Corpora. Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pp.91-100.

Dijck, J. (2013). The culture of connectivity. New York: Oxford University Press.

Freytag, G. (1894). Freytag’s Technique of the drama : an exposition of dramatic composition and art. An authorized translation from the 6th German ed. by Elias J. MacEwan. 3rd ed. Chicago: The Henry O. Shepard Co., pp.114-140.

Geyer, R. and Zouwen, J. (2001). Sociocybernetics. Westport, Conn.: Greenwood Press.

Glanville, R. (2003). Second Order Cybernetics, in Systems Science and Cybernetics, [Ed. Francisco Parra-Luna], in Encyclopedia of Life Support Systems (EOLSS), Developed under the Auspices of the UNESCO, Eolss Publishers, Oxford, UK. [Retrieved August 31, 2008]

Haselton, M., Nettle, D. and Andrews, P. (2005). The evolution of cognitive bias. In: The Handbook of Evolutionary Psychology, 1st ed. New York: Wiley, pp.724—746.

IBM (2014). IBM Watson: How it Works. [image] Available at: [Accessed 22 Sep. 2018].

Judd, W. (2015). A Conversation With the Psychologist Behind ‘Inside Out’. [online] Pacific Standard. Available at: psychologist-behind-inside-out [Accessed 13 Sep. 2018].

Kaur, J. and R. Saini, J. (2014). Emotion Detection and Sentiment Analysis in Text Corpus: A Differential Study with Informal and Formal Writing Styles. International Journal of Computer Applications, 101(9), pp.1-9.

Keltner, D. and Ekman, P. (2015). The Science of ‘Inside Out’. [online] Available at: inside-out.html [Accessed 13 Sep. 2018].

Kim, Y., Smith, D. and Thayne, J. (2016). Chapter 6: Designing Tools that Care: The Affective Qualities of Virtual Peers, Robots, and Videos. In: S. Tettegah, ed., Emotions, Technology, Design, and Learning, 1st ed. [online] London: Elsevier, pp.115-129. Available at: [Accessed May 2018].

Krcadinac, U., Pasquier, P., Jovanovic, J. and Devedzic, V. (2013). Synesketch: An Open Source Library for Sentence-Based Emotion Recognition. IEEE Transactions on Affective Computing, 4(3), pp.312-325.

Krishnan, H., Elayidom, M. and Santhanakrishnan, T. (2017). Emotion Detection of Tweets using Naïve Bayes Classifier. International Journal of Engineering Technology Science and Research, 4(11), pp.457-462.

Kruger, J., Epley, N., Parker, J. and Ng, Z. (2005). Egocentrism over e-mail: Can we communicate as well as we think?. Journal of Personality and Social Psychology, 89(6), pp.925-936.

Liew, J. (2015). Discovering Emotions in the Wild: An Inductive Method to Identify Fine-Grained Emotion Categories in Tweets. In: Twenty-Eighth International Florida Artificial Intelligence Research Society Conference. Syracuse: Association for the Advancement of Artificial Intelligence, pp.317-322.

Liew, J., Turtle, H. and Liddy, E. (2016). EmoTweet-28: A Fine-Grained Emotion Corpus for Sentiment Analysis. In: 10th International Conference on Language Resources and Evaluation. LREC, pp.1149—1156.

McCarthy, L. (n.d.). us+. [online] Available at: http://lauren- [Accessed Dec. 2017].

Neviarouskaya, A., Prendinger, H. and Ishizuka, M. (2010). Affect Analysis Model: novel rule-based approach to affect sensing from text. Natural Language Engineering, 17(01), pp.95-135.

Newman, S. (2015). Building microservices. Sebastopol, CA: O’Reilly Media, pp.1- 11.

Nielsen, F. (2011). AFINN. [online] Available at: [Accessed Mar. 2018].

Pangaro, P. (2013). Cybernetics – A Definition. [online] Available at: [Accessed 3 Jan. 2018].

Pask, G. “A Comment, A Case History, and a Plan”, in Cybernetic Serendipity, J. Reichardt, (Ed.), Rapp. And Carroll, 1970. Reprinted in Cybernetics, Art and Ideas, Reichardt, J., (Ed.) Studio Vista, London, 1971, 76-99.

Paul Ekman Group. (2017). Paul Ekman Group. [online] Available at: [Accessed 18 Sep. 2018].

Picard, R. (1995). Affective Computing. M.I.T Media Laboratory Perceptual Computing Section Technical Report, (321).

Pickering, A. (2010). The Cybernetic Brain: Sketches of Another Future. Chicago, USA: The University of Chicago Press.

Plutchik, R. (1991). The Emotions Revised Edition. Lanham, Maryland: University Press of America, Inc.

Ranellucci, J., Poitras, E., Bouchet, F., Lajoie, S. and Hall, N. (2016). Chapter 5: Understanding Emotional Expressions in Social Media Through Data Mining. In: S. Tettegah, ed., Emotions, Technology, and Social Media, 1st ed. [online] London: Elsevier Inc., pp.85-102. Available at: 6.00005-1 [Accessed May 2018].

Rooney, D. (2016). Florence Nightingale: the pioneer statistician. [online] Science Museum. Available at: pioneer-statistician [Accessed 14 Sep. 2018].

Russell, J. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), pp.1161-1178.

Sidana, M. (2017). Top Five Emotion / Sentiment Analysis APIs for understanding user sentiment trends.. [online] Medium. Available at: 116cd8d42055 [Accessed Jun. 2018].

Sosa, E. and Tooley, M. (1993). Causation. New York: Oxford University Press.

Steele, J. and Iliinsky, N. (2010). Beautiful visualization. Beijing: O’Reilly.

(2018). TrumpFeels. [online] Available at: [Accessed Jun. 2018].

Von Foerster, H. (2003). Understanding understanding. New York: Springer, pp.283- 286.

Zappavigna, M. (2012). Discourse of Twitter and social media. London: Continuum International Publishing Group.
