In Object-Oriented Ontology, a school of post-Kantian philosophy, Graham Harman proposes an alternative perspective on the world in which all objects are equal and may each have a consciousness of their own. In 2017, Facebook's AI chatbot experiment raised, for some of us, "apocalyptic doomsday scenarios" in which non-human intelligences develop their own communicable languages. Yet a thousand years earlier, Chinese fairy tales about how anything can grow a spirit of its own, and the Japanese stories of tsukumogami, had already imagined such futures. Our project "Preternatural Speech" starts with a question: what would interaction be like in a near future where consciousness is not exclusive to human beings?
The Preternatural Speech installation lets us explore the inner lives of machines, and of the objects around us in general, through the simplest possible interaction: speech. The objects themselves are abstracted into prismatic metal forms. By distorting the meaning of spoken sentences, using a machine learning algorithm together with tanh and sine-wave synthesisers behind the scenes, life emerges from the chat machines.
Making of Preternatural Speech
Preternatural Speech consists of voice recognition, a speech synthesiser, responsive object behaviour, and projection mapping. The making-of film explores the basic interaction of conversation; we implemented a simple rotation to mark the change of speaker roles in a conversation.
The Chinese-whispers experiment involves two laptops, each running a speech synthesiser built into the prototype Max/MSP patch but speaking a different language (the video shows a Japanese/English pair). The human participant, pretending to understand only one language, is also a source of noise. This distorted communication is one of our main ideas for making artificial life emerge within objects.
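The degradation loop can be sketched in a few lines of Python. This is not the installation's actual Max/MSP patch; the garbling rate, the two-machine framing, and the letter-substitution noise model are all illustrative assumptions standing in for the real synthesis/recognition chain.

```python
import random

def noisy_relay(message, rounds=4, noise=0.15, seed=1):
    """Pass a message between two 'machines', each garbling a fraction
    of the letters -- a stand-in for the speech-synthesis and
    recognition errors in the Chinese-whispers experiment."""
    rng = random.Random(seed)            # seeded, so runs are repeatable
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    history = [message]
    for _ in range(rounds):
        message = "".join(
            rng.choice(alphabet) if c != " " and rng.random() < noise else c
            for c in message
        )
        history.append(message)
    return history

steps = noisy_relay("the machines are speaking")
for step in steps:
    print(step)
```

Each round corresponds to one hand-off between the laptops; after a few rounds the sentence drifts away from its original meaning while keeping its rhythm, which is the effect the experiment plays with.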
Making of the synthesiser
We explored ways of converting the human voice into uncanny speech with Max/MSP. The voice signal is transformed with hyperbolic-tangent (tanh) waveshaping, and the degree of distortion varies from 5% to 100%.
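A minimal Python sketch of tanh waveshaping, for readers without Max/MSP. The mapping of the 5%–100% "amount" onto a drive factor is an assumption for illustration; the patch's exact scaling is not documented here.

```python
import numpy as np

def tanh_waveshape(signal, amount):
    """Distort a signal with hyperbolic-tangent waveshaping.

    `amount` runs from 0.05 (5%, subtle) to 1.0 (100%, heavy).
    It scales the drive into the tanh curve; the output is then
    renormalised so more drive changes timbre, not loudness."""
    drive = 1.0 + amount * 20.0          # assumed drive range
    return np.tanh(signal * drive) / np.tanh(drive)

# a 440 Hz test tone stands in for a recorded voice
sr = 44100
t = np.linspace(0, 0.1, int(sr * 0.1), endpoint=False)
voice = np.sin(2 * np.pi * 440 * t)
subtle = tanh_waveshape(voice, 0.05)
heavy = tanh_waveshape(voice, 1.0)
```

At low amounts the waveform stays close to the input; at 100% the tanh curve flattens the peaks toward a square-like shape, which adds the harsh harmonics that make the voice sound uncanny.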
Converting text input into additive synthesis output
Voice recognition allows the project to turn speech into text. The text is split into three-letter segments, and through Wekinator, a simple machine learning tool, any text input can then be converted into sine-wave output.
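The pipeline's shape can be sketched in Python. In the project the trigram-to-sound mapping is learned with Wekinator; the hash-based mapping, frequency range, and amplitude range below are hypothetical stand-ins that only illustrate the text → segments → sine partials → summed output flow.

```python
import numpy as np

def trigrams(text):
    """Split recognised text into three-letter segments, as in the project."""
    text = text.lower().replace(" ", "")
    return [text[i:i + 3] for i in range(0, len(text), 3)]

def trigram_to_partial(seg):
    """Map one segment to a (frequency, amplitude) pair.

    A deterministic hash stands in for the learned Wekinator mapping."""
    h = sum(ord(c) * (31 ** i) for i, c in enumerate(seg))
    freq = 200.0 + (h % 1000)            # assumed 200-1200 Hz range
    amp = 0.2 + (h % 7) / 10.0           # assumed 0.2-0.8 amplitude
    return freq, amp

def additive_synth(text, sr=44100, dur=1.0):
    """Sum one sine partial per trigram into a single signal."""
    t = np.linspace(0, dur, int(sr * dur), endpoint=False)
    partials = [amp * np.sin(2 * np.pi * freq * t)
                for freq, amp in map(trigram_to_partial, trigrams(text))]
    out = np.sum(partials, axis=0)
    return out / np.max(np.abs(out))     # normalise the mixture

tone = additive_synth("hello world")
```

Because every distinct sentence yields a different set of partials, each utterance gets its own additive-synthesis "voice", which is what gives the objects their non-human replies.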
The four objects were designed through several iterations, starting by cutting the simplest form, a cube, into prismatic shapes. All of them are made from steel so that the sound resonates. After the pieces were assembled, they were welded, ground, polished, and finished with a blackening spray.