

Lost in Translation

The project explores the mechanisms of human communication with machines. Communication between people typically occurs when two parties can see, listen to, and observe one another; it is therefore a combination of visual and verbal cues. Communication with machines, by contrast, happens through commands: digital machines accept verbal or textual commands, while physical machines accept physical ones. As machines get smarter, their algorithms are increasingly designed to replicate human conversation. If conversation is contingent on a combination of visual and verbal cues, then any form of communication with a machine that aims at this replication is inherently incomplete.

Although certain algorithms allow machines to read visual information (facial recognition, for example), this capability is not carried forward into communicating with them. The project envisions a scenario where machines can use these algorithms to inform the conversation. When designing a human-computer interaction of this sort, however, it is important to consider the nature of the algorithm itself. At present, we credit machines and algorithms with a certain level of objectivity, yet this relies on the ‘assumed’ objectivity of the data set and the training model. Hidden within these are biases that percolate into the interaction, whether in the training data, the transfer context, or elsewhere.

The project investigates the consequences of algorithmic bias through the relationship between a person and their smart home assistant. It imagines a future where machines can communicate information not just verbally but visually, and it asks what biases go into training these algorithms and how those biases reflect our own biases in communication. We represent these biases through household appliances: a kettle, a toaster, and a clock. Each appliance communicates visually through itself and verbally through your ‘smart’ home assistant, taking control of it for that period of time.

Interaction model

Experiments

Teachable Machine: We ran a simple experiment using Google’s AI experiment Teachable Machine in order to study the potential of analysing visual data, particularly face data.

Translating face data to emotions using Teachable Machine
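Our experiment ran in Teachable Machine’s browser interface, but the same idea can be sketched outside the browser. The snippet below is a rough illustration, assuming the model has been exported with Teachable Machine’s Keras option (which produces keras_model.h5 and labels.txt by default); it is not the project’s actual code, and the emotion labels are whatever classes were trained.

```python
# Sketch: classify webcam frames with a model exported from Teachable Machine.
# Assumes the "Keras" export, which produces keras_model.h5 and labels.txt.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)      # exported Teachable Machine model
labels = [line.strip() for line in open("labels.txt")]   # e.g. "0 happy", "1 neutral", ...

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Teachable Machine image models expect 224x224 RGB input scaled to [-1, 1].
    img = cv2.resize(frame, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
    img = (img / 127.5) - 1.0
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    print(labels[int(np.argmax(probs))], float(np.max(probs)))
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```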

p5.js: We connected the machine learning model made with Teachable Machine to an Arduino using p5.js and a serial control monitor. This allowed us to control a servo using our face.

Connecting Teachable Machine to Arduino using p5.js and a serial control monitor
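In the project this bridge was built in p5.js with a serial control monitor; purely as an illustrative stand-in for the same pipeline (classification result → serial message → servo), the sketch below sends an angle over serial with pyserial. The port name, the label-to-angle map, and the one-integer-per-line message format are all assumptions, and the Arduino side is expected to parse the angle and move the servo accordingly.

```python
# Sketch of the classification -> serial -> servo pipeline (assumed Python stand-in,
# not the project's p5.js code).
import time
import serial

PORT = "/dev/ttyUSB0"                               # assumption: adjust to your Arduino port
ANGLES = {"happy": 150, "neutral": 90, "sad": 30}   # hypothetical label -> servo angle map

with serial.Serial(PORT, 9600, timeout=1) as arduino:
    time.sleep(2)  # give the board time to reset after the port opens
    for label in ["neutral", "happy", "sad"]:        # stand-in for live face predictions
        angle = ANGLES[label]
        arduino.write(f"{angle}\n".encode())         # Arduino reads one angle per line
        time.sleep(1)
```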

Chatbot: After experimenting with several different libraries and tools for building a chatbot, we concluded that the key to building different personalities (biases) into chatbots is the database each one is trained on. We will focus on creating our own data and selecting a corpus that helps create these biases.
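A minimal sketch of that idea with ChatterBot: the same pipeline trained on two different corpora produces two different “personalities”. The tiny dialogue lists here are invented placeholders; in the project the contrasting corpus came from the script of the film Joker.

```python
# Sketch: identical ChatterBot setup, two different training corpora.
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

def build_bot(name, dialogue):
    # Separate SQLite databases so the two bots do not share training data.
    bot = ChatBot(name, database_uri=f"sqlite:///{name}.sqlite3")
    ListTrainer(bot).train(dialogue)
    return bot

polite = build_bot("polite", [
    "Good morning", "Good morning! How can I help you today?",
    "Make me a coffee", "Of course, right away.",
])
abrasive = build_bot("abrasive", [
    "Good morning", "Is it?",
    "Make me a coffee", "Make it yourself.",
])

# Same question, different answers: the bias lives in the corpus, not the algorithm.
for bot in (polite, abrasive):
    print(bot.name, "->", bot.get_response("Make me a coffee"))
```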

An example of comparison of results between different datasets trained with the Sequence to Sequence model.

Experiment with ChatterBot and dialogue from the script of the film Joker.

To follow

In the months to come, the project will include physical iterations of these appliances, reconfigured and designed so that the user can control them with their face as well as their words. The project is building toward the experience of a smart home assistant that adopts a more active role in its user’s life.

References

Dunne, A., & Raby, F. (2007). Technological Dreams Series: No. 1, Robots. Retrieved September 7, 2020, from http://dunneandraby.co.uk/content/projects/10/0

Google. Teachable Machine. https://teachablemachine.withgoogle.com/