
Bartlett School of Architecture, UCL


Reality into Music

  • On April 7, 2018
  • https://yildiztufan.squarespace.com/

I’m interested in the spatial representation of sound and the translation of space into sound. I’ve been researching methods of notating music: experimental scores that allow performers to interpret pieces freely, and visualisations of computer music that can’t be notated in conventional forms. Sound described through graphical notation can be represented spatially to create immersive experiences, and the spatial qualities of sound can create an immaterial architecture that affects our perception of space.

 

Cornelius Cardew – Treatise
http://socks-studio.com/2015/10/05/the-beauty-of-indeterminacy-graphic-scores-from-treatise-by-cornelius-cardew/

 

I’ve been working on a system in which the machine, as both performer and instrument, translates reality into music. The generated sound is site-specific and constantly changing, resulting in emergent music. The site I worked with was the remains of Winchester Palace: I translated the texture of the wall into a musical score resembling the stacked-waveform visualisation of a pulsar signal (as on Joy Division’s Unknown Pleasures cover). The score is read by a graphical sequencer connected to samplers that generate ambient, harmonic drone music. The machine becomes both the performer and the instrument, and the space becomes the composition. The machine listens to itself and sends messages back to the environment; these change the score and so create generative music. The changing score is then mapped back onto the space.
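The texture-to-score feedback loop described above could be sketched roughly as follows. This is a minimal illustration, not the actual system: the function names (`texture_to_score`, `feedback_step`), the pentatonic scale, and the brightness-to-pitch mapping are all my assumptions, and a NumPy array stands in for both the wall texture and the sequencer environment the real piece runs in.

```python
import numpy as np

# NOTE: hypothetical sketch -- scale choice and mappings are assumptions,
# not the project's actual parameters.
PENTATONIC = (48, 50, 52, 55, 57)  # MIDI note numbers, C minor pentatonic

def texture_to_score(texture, n_steps=16, scale=PENTATONIC):
    """Read a grayscale texture (values 0-255) as a step-sequencer score.

    Each step reads one vertical slice of the texture: the slice's mean
    brightness selects a scale degree, and its variance sets the velocity,
    so rougher texture plays louder.
    """
    slices = np.array_split(texture, n_steps, axis=1)
    score = []
    for sl in slices:
        degree = min(int(sl.mean() / 256 * len(scale)), len(scale) - 1)
        velocity = int(np.clip(sl.std() * 4, 20, 127))
        score.append((scale[degree], velocity))
    return score

def feedback_step(texture, score, gain=3.0):
    """Send the generated notes back to the 'environment': each note
    brightens the region of texture it was read from, so the next pass
    yields a different score -- a simple generative feedback loop."""
    new = texture.astype(float).copy()
    regions = np.array_split(np.arange(texture.shape[1]), len(score))
    for (_pitch, velocity), cols in zip(score, regions):
        new[:, cols] += velocity / 127 * gain
    return np.clip(new, 0, 255)
```

A run would alternate the two functions: read a score from the texture, play it, feed it back, and read again, so the score drifts over time even though the wall itself never changes.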

 

Joy Division’s Unknown Pleasures album cover detail (left); Allen Beechel, in “Pulsars,” by Anthony Hewish, Scientific American, October 1968 (right). https://blogs.scientificamerican.com/sa-visual/pop-culture-pulsar-the-science-behind-joy-division-s-unknown-pleasures-album-cover/

 

Later, I used a camera so that the machine could read the space independently. I worked with the luminance values of the space to create a mesh that is then read as the score, and I tracked brighter and darker regions to let participants interact with and transform the sound. I found that for the luminance values to work effectively I need a controlled environment; otherwise the lighting changes constantly, the sound keeps transforming on its own, and the effect of the participants is not clearly audible.
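The camera-based tracking could be sketched as below. Again this is only an assumed illustration: the Rec. 709 luma weights are standard, but the threshold, the centroid-based tracking, and the mapping of position to pan and pitch are stand-ins for whatever the installation actually used.

```python
import numpy as np

def luminance(frame_rgb):
    """Rec. 709 luma from an RGB frame of shape (H, W, 3), values 0-255."""
    return frame_rgb @ np.array([0.2126, 0.7152, 0.0722])

def bright_region_params(luma, threshold=200):
    """Track the brightest region of the frame and map its centroid to
    sound parameters (hypothetical mapping): horizontal position becomes
    stereo pan in [-1, 1], vertical position becomes a pitch offset in
    semitones (top of frame = +24). Returns None if nothing is bright
    enough, so the sound is left unchanged."""
    ys, xs = np.nonzero(luma > threshold)
    if xs.size == 0:
        return None
    h, w = luma.shape
    pan = (xs.mean() / (w - 1)) * 2 - 1
    semitones = (1 - ys.mean() / (h - 1)) * 24
    return pan, semitones
```

The threshold is exactly where the controlled-environment problem shows up: under uncontrolled lighting the whole frame drifts above or below it, so a participant's torch or shadow is indistinguishable from ambient change.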

 

 

Going forward, I will work on a site-specific sound installation that brings people together: a piece that comes alive through interaction, but one that passers-by can choose to engage with or simply walk past. The interaction is also the plot device that drives the system, yet it remains irrelevant to the participant.

Max Neuhaus – Times Square Sound Installation, https://www.onlineopen.org/times-square
