


Social Media-responsive Robot Week 3


After our tutorial last week, we felt it was necessary to take the time this week to solidify the concept of the project, then resume making with clear goals in mind for the next three weeks.

 

Aims for the week:

- Establish the visual language of the project

- Establish the rules of the digital and physical aspects

- Work out a method to analyse the emotion of tweets

 

Concept:

Normally on social media, when we send messages to someone we are not able to see their reaction, and even in person it is difficult to see people’s inner emotional reactions to the things we say. The purpose of this project is to visualise the physical and emotional reactions to messages posted on social media, which we normally cannot see. This may bring Twitter users to realise the real-world effects of words written on the internet.

One main question about our concept was how the physical objects intersect with the digital aspects. On the one hand, the robot and projector are clearly physical objects that operate mechanically; on the other, the project has a highly digital side that involves analysing and sending data from Twitter. Questions were raised about why the physical object has a significant impact on the project, and after discussion we found that, rather than rendering everything as digital animation, the tangible quality of a physically moving object incites a fascination different from viewing a digital animation. Since we intend to run a live video stream of the robot, this reminded us of reality television, and of the fascination society has with watching people’s lives play out on their screens. Clearly a lot of what happens in reality television is not very realistic, but there is something about seeing a physical person on the screen that makes it more relatable than viewing an animation.

To establish a set of rules for the digital and physical aspects of this project, we have decided on two main rules going forward. The virtual world of Twitter will serve solely as the interaction channel between Twitter users and the robot: users send input to the robot, and the robot sends output back to them through Twitter. Everything else will exist in the physical world. This includes the robot’s physical behaviour in response to the input tweet, and its emotional response, which will be expressed by projections of images and colours. We would like these projected images to change through physical mechanics, rather than projecting digital content, because the plan is for the livestream to show the images changing physically, building up anticipation of how the robot will react to the input tweet. In the end, the robot will send a meme of itself and its mood back to the person who tweeted.
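To make these rules concrete, here is a rough sketch of one input-to-output cycle in Python. Every function name and body here is a hypothetical placeholder for hardware and Twitter plumbing we have not built yet; the sketch only shows where the digital/physical boundary sits.

# Hypothetical sketch of one interaction cycle; every stub below
# stands in for hardware or Twitter plumbing that does not exist yet.

def analyse_emotion(text):
    return "curious"  # stub: sentiment analysis of the tweet (see below)

def perform_physical_response(mood):
    print(f"robot moves to express '{mood}'")  # stub: motor control

def project_mood_images(mood):
    # stub: mechanically swap the projected imagery, not digital content
    print(f"projection changes to '{mood}' imagery")

def capture_meme(mood):
    return f"robot_{mood}.jpg"  # stub: photograph the robot and its mood

def reply_with_meme(user, image):
    print(f"tweeting {image} back to @{user}")  # stub: Twitter reply

def handle_tweet(user, text):
    """Digital input -> physical behaviour -> digital output."""
    mood = analyse_emotion(text)     # the only digital input: a tweet
    perform_physical_response(mood)  # physical: the robot's behaviour
    project_mood_images(mood)        # physical: the projected emotional response
    meme = capture_meme(mood)
    reply_with_meme(user, meme)      # the only digital output: a meme tweet

handle_tweet("someone", "hello robot, how are you today?")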

 

 

Sentiment Analysis

In order to achieve all of this, we looked into sentiment analysis as a method of defining the tone of the tweets. Sentiment analysis is a computational process used to determine the attitude of a piece of language: whether it is positive, negative or neutral. It can be implemented as software rules, with machine learning, or with a lexicon that assigns each word a score representing how positive or negative it is. It is widely used in business to analyse customer feedback and to understand the attitudes expressed on online platforms, in order to develop a product in a better direction. In the digital art and design scenes, the method is also used as a data input to visualise the emotion or state of society, translated into various types of output such as lighting, sound, etc.
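To make this concrete, below is a minimal sketch of the lexicon-based approach in Python, assuming the open-source vaderSentiment package (VADER assigns each word a positive or negative weight and combines them into a single score). This is only an illustration of the technique, not necessarily the method we will end up using.

# Minimal lexicon-based sentiment scoring with VADER
# (pip install vaderSentiment); an illustration, not our final method.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyser = SentimentIntensityAnalyzer()

def tone_of(tweet):
    """Classify a tweet as positive, negative or neutral."""
    # 'compound' is a normalised score from -1 (most negative)
    # to +1 (most positive); +-0.05 is the conventional cut-off.
    compound = analyser.polarity_scores(tweet)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(tone_of("I love this robot!"))        # positive
print(tone_of("This is the worst thing."))  # negative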

The following are some interesting projects that use sentiment analysis.

First, Fuse Studios, an Italy-based digital art and design studio, created Amygdala, a lighting installation exhibited at CUBO Centro Unipol in Bologna, Italy. It presents a collective inner state of mind of the world, caused by stimuli from the outside world, achieved by analysing the emotions of tweets. The installation processes 30 tweets per second and uses sentiment analysis to classify tweets into six types of emotion: happiness, sadness, fear, anger, disgust and amazement, which are then translated into patterns of LED light motion, sound and infographics.

[vimeo 154049756 w=640 h=360]

http://fuseworks.it/en/project/amygdala-en/

Another project is by Lauren McCarthy, a US-based artist and the creator of p5.js. The project, called us+, is a video chat app (built on Google Hangouts) “that uses audio, facial expression, and linguistic analysis to optimise conversations based on the Linguistic Inquiry Word Count (LIWC) database and the concept of Linguistic Style Matching (LSM)”. Participants can see a real-time analysis of when a person expresses positivity, self-absorption, temerity, aggression or honesty. The app can also take action, for example muting the conversation when somebody is talking too much. This project is an interesting way of using sentiment analysis to improve communication and to visualise real-time feedback on how we express ourselves to one another.

[vimeo 81903116 w=640 h=360]

us+ from Kyle McDonald on Vimeo.

http://lauren-mccarthy.com/us