
Bartlett School of Architecture, UCL




The Ear and The Other. Collective Listening.


Featured image: Christina Kubisch – Electrical Walks, Bremen (2006).

1.1 What does our future city sound like?

Virtual and augmented reality have become increasingly accessible over the last couple of years with the rise of mobile virtual technology such as the Google Cardboard and the Gear VR. The rise of these new virtual reality platforms has also created a new interest in spatial binaural sound.

As David Beer points out, listening to virtual sound through headphones can often become an isolating experience (Beer, 2007), but can these experiences go from being isolating to becoming social through the use of virtual reality? As researchers Barry Blesser and Linda Salter point out, it is difficult to have long-term memories of soundscapes (Blesser and Salter, 2009). Can virtual reality become our collective memory of our cities, and can it, through this memory, help us decide what our future cities will sound like?

This research into virtual soundscapes is part of the larger project The Urban Palimpsest. It is being developed together with Russel Beaumont and Takashi Torisu as part of the Interactive Architecture Lab at The Bartlett, UCL. More information about the project can be found here: The Urban Palimpsest.

By taking a critical look at existing practices of recording and listening to what R. Murray Schafer defines as soundscapes (Schafer, 1977), the research aims to create a set of tools relevant to how urban spaces are designed. The thesis will look at how existing personalised virtual soundscapes can become shared and integrated into the city.

Through what R. Murray Schafer refers to as the electric revolution, sounds have become detached from their sound sources. This detachment has made it possible to create personalised soundscapes that can be experienced by a single individual. The consequence is that listeners become isolated not only from the natural soundscape but also from each other. The rise of mobile virtual reality through smartphones has opened up new opportunities to create virtual soundscapes that are easily accessible to a larger part of the population. It is time to investigate which tools are available and how they can be implemented in the field of architecture.

The palimpsest, as described in the following chapter, deals with the opposition between connection and disconnection. On one side this refers to being socially connected or disconnected; on the other, to the connection between our perception and a virtual environment, defined here as the embodiment or disembodiment of an environment. The embodiment of virtual spaces will often be referred to as immersion. The paper will focus on which aspects are important in creating a connection between the body and virtual soundscapes, and on how to move beyond soundscapes that cause a disconnection between individuals and the city.

This text examines, and expands upon, these shared virtual soundscapes as applied to the Project Tango Development Kit. The built-in depth-sensing capabilities of the device enable rapid prototyping of virtual experiences. The design will investigate how we can navigate virtual environments through bodily movement, and how people can use these tools to create virtual soundscapes that feed directly into an open debate on the development of our future cities.

The following steps will be taken throughout the text: First, the relevant research on soundscapes and spatial hearing will be interrogated to see what is relevant in the context of soft architecture. The key references will be used to discuss the design project. The design project is divided into three main parts: Early Research, Traces of Reality, and The Palimpsest. The three parts are the result of a continuous design process, and even though they have taken different turns, each part has informed the next. The Palimpsest, the final part, ties together the research on soundscapes and shared virtual spaces and connects it to the urban fabric. The palimpsest becomes a toolset that allows us to collectively experience and interrogate an overlapping of past, present and future conditions of the city.

1.2 Keywords:


Soundscapes; Binaural sound; Virtual reality; Public space.


2.1 Introduction


An urban environment cannot be understood purely from its visual information; sound plays a significant role in how these conditions are experienced. Therefore it is crucial that sound becomes an integral part of how we design our urban environments. New virtual tools are investigated to predict how the soundscapes of urban environments change and can evolve over time.

Through the literature review conducted during the early stages of the project, R. Murray Schafer's concept of soundscapes was compared to Softness in Architecture (Tveito, 2016). Schafer defines the soundscape as an "acoustic field of study" (Schafer, 1993). He further argues that "We can isolate the acoustic environment as a field of study just as we can study the characteristics of a given landscape." (Schafer, 1993 p.7) As a continuation of this study, the current text examines the concept of soundscapes as an urban palimpsest. The palimpsest is understood as a spatiotemporal document that can be written and rewritten to manifest the changes of the city, its inhabitants, and its soundscape.

Virtual reality experiences have until now mainly focused on visual aspects, but recently there has been an increase in interest in the sonic aspects of VR (Poeschl, Wall, & Doering, 2013 p.1), with designers and companies increasingly focusing on the added immersion that sound provides. The accessibility of spatial sound has increased with the rise of the Google Cardboard and Gear VR. This has given the field both a new interest and a relevance that was never achieved with expensive speaker-based spatial sound systems.

2.2 Research question


Can virtual, mobile soundscapes go from being individual to becoming collective experiences that inform the way we shape our cities? More specifically, can virtual collective soundscapes inform the way that past, present and future merge in a virtual environment? Which tools are available to create spatial virtual soundscapes, and where is there a need to create a new platform?


2.3 Research scope

This thesis will primarily focus on sound in the context of virtual reality. The scope is limited to two senses: hearing and vision. The focus is on sound because it has been underdeveloped within an architectural context compared to the visual sense. The project does not focus on haptic feedback due to the fact that it is less accessible to the public. Spatial sound can today be rendered through a normal, low-cost set of headphones in tandem with a head-mounted display or virtual reality headset. The fact that any modern smartphone can be turned into such a display gives spatial sound a new relevance, both in terms of its artistic potential and its ability to reach a larger number of people.

2.4 The Palimpsest

The project is an urban palimpsest that examines the ongoing changes in an urban environment. The first site for the investigation is the municipality of Camden. The reference to a palimpsest refers both to its use as "a manuscript or piece of writing material on which later writing has been superimposed on effaced earlier writing" (Oxford) and to a building which has been layered upon. This connection between architecture and the continuous storage of knowledge made it particularly relevant as a starting point for the project. Browning and Slatin even refer to the computer as a virtual palimpsest (Browning, Slatin, 1998) when investigating the use of computers in learning and education.

fig 1. The Archimedes Palimpsest, on display at the Walters Art Museum. Without the aid of ultraviolet light, the effaced earlier writing is nearly invisible. (Image: The Walters Art Museum)

As virtual mobile technology becomes ubiquitous and available to consumers from their pockets (Lenovo, 2016), new questions arise. What is the relationship between the virtual, our built environment, and the communities that they house? Moreover, specifically, how does this change our relationship to our surrounding soundscapes?

The project seeks to test the relationship between the connection and disconnection to a physical site. Virtual soundscapes can be experienced anywhere, so what happens when they are tied to a particular context? The palimpsest is placed in the context of St. James Garden as it is a centre point for a large planned urban infrastructure project. The palimpsest seeks to layer past, present and future to record the changes in the environment.

2.5 Past, present and future

The past represents an archive of the conditions that have been recorded through scans, sound recordings, and three-dimensional interviews. The present represents the current state and the debate over where the community is headed. Here the palimpsest lends itself as a tool for participatory design. The paper will investigate what this means for the design of soundscapes and collective acts of listening.

The future represents the outcome of the joint efforts of the municipality, architects, artists and the government. The aim is to create an assemblage of urban conditions that can generate new discussions on how we develop our cities. The palimpsest offers a connection between the recorded visual and sonic inputs feeding directly into the senses, though there may be a disconnection between the different perspectives recorded in the palimpsest. For instance, how does the soundscape of a quiet apartment or a neighbourhood cafe overlay with that of a proposed building site? Alternatively, how will the recorded soundscape change over time? Moreover, what is the friction between what was proposed and what was realised? To answer any of these questions, we have to analyze how we perceive spaces through our hearing.

3.1 Hearing space. What constitutes natural spatial hearing?

When constructing virtual aural environments, it is necessary to consider how we hear spatial cues in a natural context. Our hearing is an important part of how we sense space, and it is closely connected with our other senses. The way we localise a sound in space can be broken down into three parts: "Interaural time difference (ITD) caused by the propagation delay between the ears, interaural intensity difference (IID) caused by the head shadowing effect, and spectral cues caused by reflections in the pinna." (Talagala, Zhang & Abhayapala, 2014 p.1207) These three cues combined give us a three-dimensional image of where the sound is positioned relative to ourselves.
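As a minimal illustration of the first of these cues, the interaural time difference for a distant source can be approximated with Woodworth's classic spherical-head model. The head radius and speed of sound below are assumed typical values, not figures from this text:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Approximate the interaural time difference (in seconds) for a
    distant source at a given azimuth, using Woodworth's
    spherical-head model: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

# A source straight ahead produces no delay between the ears; a source
# 90 degrees to one side produces the maximum delay of roughly 0.66 ms.
print(itd_woodworth(0))                     # 0.0
print(round(itd_woodworth(90) * 1000, 2))   # 0.66 (milliseconds)
```

Delays of well under a millisecond are enough for the auditory system to lateralise a source, which is why binaural playback must preserve them sample-accurately.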

Listening is a highly complex task that begins with the mechanical transmission of air pressure through the outer ear, also called the pinna, and the ear canal. When the pressure wave hits the eardrum, it is transformed into mechanical motion as it travels into the middle ear. Here the signal is transferred to three bones: "The Ossicles, are strung from ligaments so that the ear drum pushes against the first (the malleus or the "hammer"), which yanks the second (the incus or "anvil") which shoves the third (the stapes or "stirrup") into an opening to the fluid-filled inner ear where neurons await". (Jourdain 1997, p.8)

The pinna is constructed in such a way that its shape filters the sound reflected off it. These reflections cause spectral differences that enable us to differentiate where the sound originally came from. The spectral changes caused by the pinna also highlight the frequencies of the human voice, so we have a passive equaliser that focuses on communication with other humans.

Binaural microphones

Fig. 2. In ear binaural microphone (2015)

When listening, the most important cue is the proximity of the sound; this is even more important than what the sound is, because this information is needed to know where to escape in case the sound turns out to be coming from a threat. "Evolution's priority was to find out where sounds come from rather than what they are. There's not much point in distinguishing the sound of prey or predator when you cannot tell which way to approach or flee." (Jourdain, 1997 p.20)

The final cue used to perceive spaces through listening is reverberation. Reverberation is “caused by repeated reflections of a sound source from the surfaces of an enclosure. Like a light source on a painting, the sound reflections from the surfaces of an enclosure or in the outdoor environment can potentially cause a significant effect on how a sound source is perceived.” (Begault, 1995 p.3). Through these cues, we can get a perception of a space that surrounds us and the events that are taking place. But many of the situations we find ourselves listening to today are recorded events. Is it possible to replicate all these spatial cues to fully perceive a three-dimensional space in a recording?

3.2 Listening in virtual environments. Why binaural sound?

Binaural means two ears, and the term stems from the fact that we need two ears to locate sounds in space (Begault, 1995). Binaural sound recording is an old technique dating back to the late 1800s, when it was first used to broadcast theatre performances in Paris through a double telephone line (Blesser & Salter, 2009 Kindle Locations 4358-4359). Binaural sound gives the listener the perception of a three-dimensional soundscape. Several systems can be used to replicate 3D sound, so why binaural sound and not a surround system? To answer this question, I will briefly go through the competing systems and weigh their pros and cons in relation to the design project.

The most widely used setup is the surround sound system, found in formats such as Dolby Surround and THX-certified theatre systems. Surround sound has become popular in commercial cinemas and home theatres. The positive side of a surround system is that it caters to a group of people and that it is a widely recognised format in the film industry. The main issue with a surround setup is that it is costly and time-consuming to set up and calibrate. The other issue is that it handles the localisation of sound in elevation quite poorly: you perceive the sound as a horizontal band around you.

The main benefit of a binaural setup is that it is incredibly easy to distribute. In its simplest form, it can be broadcast through any stereo medium as long as it is received through a set of headphones. The main drawback is that binaural sound does not work with speakers. However, what it offers is a sweet spot for every listener. In the context of the design, the portability and accessibility are the main advantages.

There are two ways of achieving binaural sound. The first is to record the sound through a binaural microphone. This can be done either with a set of artificial ears attached to a dummy head, or by placing a set of microphones in a person's ears. Both methods have their pros and cons. Recording with a dummy head gives the possibility of a stable recording that can be positioned anywhere. By recording through a person's ears, you get the benefit of the actual materiality of the individual and any spectral change caused by the body, torso, head or even hair.

Fig. 3. Beck and Chris Milk – Hello again (2014).

In 2013, Chris Milk collaborated with Beck on "Hello Again". The recording of the 360 video introduced a new type of binaural dummy head, which allows a recording that captures a 360-degree rotation of binaural sound. The recordings were played back through a head-tracking system using a web camera to synchronise the video with the viewer's head.

The second way of achieving binaural sound is to synthesise the binaural cues through convolution, a technique similar to convolution reverbs. To achieve this, one needs a set of HRTF files. The files are recorded in a system similar to an ambisonic speaker array: a sine wave sweep is played into the room and measured by a binaural microphone at a series of points distributed in a spherical array. The resulting analysis files can be used to shape a sound so that it appears to come from a specific position in space. Soundscapes that are directly recorded with a binaural setup are beneficial as they sound natural to the listener, but such recordings are not possible if the soundscape does not yet exist. Through the recent development of digital binaural technology, it is now possible to shape sound so that it appears in a virtual space. In other words, we can synthesise space.
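The convolution step at the heart of this synthesis can be sketched in a few lines. The HRIRs below are toy stand-ins (a simple delay and attenuation for the far ear), not measured HRTF data, which in practice comes from dummy-head or in-ear measurements:

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal for two ears by convolving it with a
    head-related impulse response (HRIR) pair for one direction."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

fs = 44100                                # sample rate in Hz
# Toy HRIRs standing in for measured ones: the right ear hears the
# source slightly later and quieter, as if it sat off to the left.
hrir_l = np.zeros(64); hrir_l[0] = 1.0    # direct path, full level
hrir_r = np.zeros(64); hrir_r[28] = 0.6   # ~0.63 ms later and quieter

mono = np.random.randn(fs)                # one second of test noise
stereo = binauralize(mono, hrir_l, hrir_r)
print(stereo.shape)                       # (2, 44163)
```

A real binauralisation engine swaps in a new HRIR pair whenever the source or the listener's head moves, which is what the head-tracking systems described above make possible.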

3.3 Virtual soundscapes. Synthesizing space

The research into virtual soundscapes has aimed to uncover what is already known and available in the field and what may be missing. Concretely, there are already several tools that enable us to synthesise and simulate soundscapes in real time, available through game engines such as Unity. Binauralisation engines that utilise HRTFs are available as plugins such as 3DCeption. The company behind it, Two Big Ears, was recently purchased by Facebook, which is moving towards integrating virtual reality as a platform for social interaction online and has made the binaural sound software available to the public for free (Two Big Ears, 2016). Oculus and Google also offer similar plugins for free (Oculus Spatializer, 2015). This can be seen as a shift towards making spatial sound available to a mass market. (The software can be found here.)

There is a range of tools to record the visual aspects of an environment. A landscape can be represented through a single image frozen in time, but it requires more to capture the soundscape: sounds are temporal and dependent on their sound sources. The project aims to create tools to record the sonic qualities of a given environment, whether it is a single building or a complex urban environment. To achieve this, known methods from audio engineering have been utilised, such as the production of convolution reverbs. In this technique "the sequence of signal "events" is reversed: A Space is excited by a signal and recorded, and the resulting sound or ambience of that space is then processed and used to treat and react to an entirely different signal." (Hamberg, 2015) This allows the user to capture the acoustic profile of an environment, much as one can capture a three-dimensional scan of a space. By doing so, new sonic events can take place in this acoustic space, and the relationship between an acoustic environment and its sound sources can be deconstructed.
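A minimal sketch of the technique, with a synthetic impulse response standing in for one measured from a real space:

```python
import numpy as np

def apply_convolution_reverb(dry, impulse_response):
    """Place a dry signal inside a captured acoustic space by
    convolving it with that space's impulse response, then
    normalise the result to avoid clipping."""
    wet = np.convolve(dry, impulse_response)
    return wet / np.max(np.abs(wet))

fs = 44100
t = np.arange(fs) / fs
# A synthetic impulse response: exponentially decaying noise roughly
# mimics the one-second reverb tail a sweep measurement would capture.
ir = np.random.randn(fs) * np.exp(-6.9 * t)

dry = np.zeros(fs); dry[0] = 1.0   # a single click as the dry signal
wet = apply_convolution_reverb(dry, ir)
print(wet.shape)                   # (88199,)
```

In practice the impulse response would be derived from a recorded sine-sweep excitation of the actual space, so any new signal convolved with it inherits that space's acoustic profile.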

Fig 4. Recording convolution reverb (2016).

Furthermore, there exist tools that enable us to simulate reverberation in real time based on the virtual geometry of the environment. The plugin available from Phonon (Phonon, 2016) uses a ray-casting model that operates similarly to the ray-casting models used for simulating the lighting of virtual geometry in traditional render engines. Phonon can simulate the reverb of the space as well as the sonic occlusion of objects. So if an object like a tree or a column is obstructing a door, this will be perceived through alterations in volume and frequency. The research has shown, however, that no visual feedback of acoustics is available for real-time game engines; the only tools available are for traditional sound visualisation, which shows changes in energy along a frequency spectrum. This has little to do with the spatial quality of the sound and is not effective as a design tool. Now we will take one step back to analyze how spatial hearing takes place on a societal level. How do the sounds of our environment affect us and our society?
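Phonon's actual model is far more elaborate, but the core idea of ray-cast occlusion can be sketched as follows: a single ray is cast from listener to source, and the gain is reduced if any wall segment blocks it. The 0.3 occluded gain is an arbitrary illustrative value, not Phonon's:

```python
def segments_intersect(p1, p2, q1, q2):
    """Return True if 2D segment p1-p2 properly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def occlusion_gain(listener, source, walls, occluded_gain=0.3):
    """Cast one ray from listener to source; if any wall blocks it,
    return a reduced gain, mimicking muffled occluded sound."""
    for w1, w2 in walls:
        if segments_intersect(listener, source, w1, w2):
            return occluded_gain
    return 1.0

# A column between listener and source attenuates the signal;
# an unobstructed path leaves it at full level.
walls = [((5, -1), (5, 1))]
print(occlusion_gain((0, 0), (10, 0), walls))   # 0.3
print(occlusion_gain((0, 0), (10, 5), walls))   # 1.0
```

A real engine would cast many rays against full 3D geometry and apply frequency-dependent filtering rather than a flat gain, but the principle is the same.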

4.1 Sound, silence and the other

The soundscapes that surround us have become increasingly loud ever since the industrial revolution and what R. Murray Schafer refers to as the electric revolution (Schafer, 1993), as machines have replaced many of the natural sounds that used to be part of our environment. As the noise of machines and other people around us increases, what is the role of silence?

John Cage wrote the piece 4'33 after having visited an anechoic chamber. The piece is often described as consisting of silence, but it was in fact born of Cage's realisation that we can never witness pure silence. Cage later remarked, "There is no such thing as silence. Something is always happening that makes a sound." (Schafer, 1993 locations 5274-5275) In 4'33 the sound of the piece is, in fact, the sound of the audience, the sound of the other. (Schafer, 1993 locations 165-166) The role of silence today can be seen as a search for peace and a form of introspection. However, can it also become an act of isolation? The electric revolution also introduced the opportunity to isolate sounds from their sound sources; this enables us to curate our soundscape through the use of sound recordings. The playback devices are becoming increasingly portable and interactive, and thus they are used to an ever larger extent. David Beer introduces the concept of tuning out (Beer, 2007). He critiques the conception of the use of mobile music devices as a complete disconnection from the built environment; instead, it is seen as a distraction. "Tuning out is to use mobile music reproduction devices, such as the iPod, to actively stimulate and prioritize the virtual mise-en-scene over the physical one" (Beer, 2007 p. 858). Does the way we prioritise and curate these personal soundscapes, in short, which sounds we choose to listen to and which we choose to shut out, say anything about the values that we assign to sound?

Fig. 5. Cage – 4’33 (1952).

M. Hagood analyses the urge to silence the other through the aid of noise-cancelling headphones. The text looks at how Bose noise-cancelling headphones offer a personal space in air travel, in line with the neo-liberal ideal of personal freedom (Hagood, 2011 p.574). Silence thus becomes privatised, something that can be bought. The cabin of the airplane becomes an office space for those commuters who can afford to create silence through technological means. Modern soundscapes are coloured by the opposition between the noisy soundscape of the city and the personal soundscapes achieved through portable media players and personal headsets.

To deal with sound in virtual reality, we must first define what virtual reality is; furthermore, the term reality must be defined in the current context. As Begault states: "This contrast underlies an important difference between the phrases virtual environment and virtual reality–no correlation to reality is necessarily assumed grammatically by the first term, and we may certainly want to create virtual experiences that have no counterpart in reality." (Begault, 1995 p.6)

Studies show that even if the sound pressure levels of spatial and monaural recordings are the same, the listener's responses differ greatly. Maori Kobayashi, Kanako Ueno and Shiro Ise studied how the body reacts to virtual sounds that intrude on our personal space: "The results of the physiological measures showed that the sympathetic nervous system was activated to a greater extent by the spatialized sounds compared with the non-spatialized sounds, and the responses to the three-dimensional reproduced sounds were similar to those that occur during intrusions into personal space in the real world." (Nagendran, Pillat, Adam, Welch & Hughes, 2013 p.163) These studies show that virtual soundscapes can cause effects similar to those of events occurring in a natural situation. However, virtual reality also allows us to explore soundscapes that otherwise would be impossible to experience.

4.2 A change of perspective

Fig. 6. Marshmallow laser feast – In the eyes of the animal (2015). More information about the artist in residency at the IALAB can be found here: MLF at the IALAB

In the Eyes of the Animal by Marshmallow Laser Feast lets the viewer take the perspective of four animals that are part of the food chain in the forest of Grizedale. The experience creates a connection between each animal and the way it perceives the surrounding landscape. For instance, a midge can see the CO2 around it, and the viewer is thus allowed to see the flows of CO2 around them in the forest. This also creates a deliberate disconnection between the viewer's normal way of perceiving the forest and the way the animal would perceive it. The experience is necessarily a translation of the animal's perception, but the sensation creates an expression of the scientific facts. The installation also uses binaural sound to evoke the soundscape of the forest. What makes the installation relevant to my research is that it creates an experience the viewer can quickly grasp in a complex situation. In the Eyes of the Animal renders something as intangible as CO2 tangible. Could we, in that case, see sound?

4.3 I do not believe it until I see it. Seeing sound

Some VR experiences have experimented with different forms of synaesthesia, or seeing sound. Notes on Blindness is another installation that started as an audio-only experience, created by the directors Peter Middleton and James Spinney in collaboration with the French company Audio Gaming (Spinney and Middleton, 2016). Notes on Blindness launched as both a VR experience and a feature film. It is based on the story of John M. Hull, a professor of theology who gradually started to lose his sight. He began to record his thoughts on becoming blind onto a series of cassettes, and these recordings become the narration of the film.

Fig. 7. Spinney and Middleton – Notes on blindness (2015).

The experience starts in an almost entirely dark park, and as you listen to the voice of Hull explaining how he experiences the park through sound, more and more starts to appear around you. The landscape is rendered in blue points that shimmer in the dark. In another scene, you can see the landscape by blowing a wind through it. As the wind hits the objects, they start to become visible, while the narrator explains that he can only perceive the surrounding landscape through the sounds that surround him.

Middleton and Spinney use sound as an effective medium in their installation to drive the narrative as well as to give you a sense of what it is like to lose your sight. Even more so, it enables the listener to explore what it is like to listen to one's surroundings. The interactive element in Notes on Blindness is effective, but can it be pushed further? In the installation, you sit in one position throughout the whole experience; what happens if you are allowed to move freely in the virtual environment? The intersection between the physical and the virtual environment opens up a new set of design decisions.

4.4 Disconnected

Once an HMD is attached to the listener's head, the person is no longer able to navigate their surroundings unless there is feedback between the virtual and the physical environment. This becomes an architectural design challenge: how do we manage the intersection between the virtual and the physical environment?


Fig. 8. Diagram. Visual feedback from the environment (2016).

One can augment a purely sonic virtual environment onto the physical environment; the listener can then navigate the physical environment by vision. This has been done in projects such as the LISTEN project at the Kunstmuseum Bonn (Zimmermann & Lorenz, 2008), which used a user-adaptive audio interface to augment an exhibition of paintings by the artist August Macke. While these systems can work well in a gallery situation, they become limited as a disconnection quickly occurs between the visual and the sonic input. If a listener is immersed in both a visual and a sonic environment, there is a need for feedback from the physical environment for the listener to navigate. The feedback can take the form of both visual and sonic cues. The diagram shows a grid representing a physical boundary, in four stages ranging from full immersion to an entirely visible grid representing a wall.
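One hypothetical way to drive such staged feedback is to make the grid's opacity a simple function of the listener's distance to the boundary; the fade distances below are illustrative assumptions, not values from the project:

```python
def boundary_opacity(distance_to_wall, fade_start=2.0, fade_end=0.5):
    """Map the listener's distance to a physical boundary onto the
    opacity of a virtual grid: invisible while far away (full
    immersion), fully visible once the wall is close."""
    if distance_to_wall >= fade_start:
        return 0.0
    if distance_to_wall <= fade_end:
        return 1.0
    return (fade_start - distance_to_wall) / (fade_start - fade_end)

print(boundary_opacity(3.0))    # 0.0  (fully immersed)
print(boundary_opacity(1.25))   # 0.5  (grid fading in)
print(boundary_opacity(0.3))    # 1.0  (wall fully visible)
```

The same mapping could drive a sonic cue instead, raising the level of a warning sound as the listener approaches the boundary.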

4.5 Connected. Collective listening

Furthermore, the project investigates how people can connect through virtual soundscapes. What happens when people are no longer experiencing VR as an isolated experience? In virtual spaces, the relationship between the performer, the listener and the space is completely malleable. Spatial relations manifest a hierarchy both of information and, as we have seen, even of the spectral content of the sound.

When working in a game engine, it is possible to use its ability to simulate physics. Because games rely on physics simulations to convey a realistic interaction between a player and the environment, game engines such as Unity can simulate physical properties in real time. Though the accuracy may be lower than that of dedicated simulation software created for the building industry, game engines can create a rich interaction between the actors.

Through several experiments with different scanning technologies, we ended up using a Faro Lidar scanner for the final installation. The scanner captures high-definition scans with a laser and creates high-density point clouds. The Lidar scanner has unparalleled precision in comparison to the infrared scanner on Google's Project Tango, but the quality of the Tango's scanning capabilities is rapidly improving through the development of its software. Pattern recognition is also improving, so that the scanned geometries can be rapidly refined and cleaned up (Matterport, 2016). When the accessible Project Tango reaches the level of a Lidar scanner, it will become an effective tool for scanning large environments in real time.

4.6 Sound City. Festivals as a myth and an urban laboratory

Signe Brink Pedersen is a curator for the Roskilde music festival. Her work with the festival has culminated in an urban laboratory where artists, architects, and volunteers are invited to build their own festival city. In fact, "for eight days each year, the Roskilde festival creates a kind of temporary town with 100,000 inhabitants, which becomes one of the most densely populated areas in the world." (Petersen, 2015) Pedersen points out that the festival can give people an experience of how they can affect the society that they live in. This can give a sense of empowerment, and this is something that can be taken back into the city.


Fig. 9. Diagram – Lacy – Imagining degrees of engagement as concentric circles (1991).

Pedersen also refers to the work of Suzanne Lacy and looks at the festival as something that lives on as a myth in the shared memory of the audience. Festivals can be seen as one of the largest forms of collective listening experiences. Some have such an impact that they become a collective memory that alters the idea of a specific place and time; the Woodstock festival is an example of this.

5.1 Augmented hearing. Early research.

The early research into virtual soundscapes focused on the intersection between the physical and the virtual environment. Supra hearing was the first iteration, featuring the design of a custom binaural microphone. The microphone enabled the listener to directly affect the visual aspects of the virtual environment, experienced through an Oculus Rift virtual reality headset. The listener experienced four different acoustic environments, each with its own specific character. The installation aimed to create a close connection between the visual and auditory aspects and to give the listener agency over the outcome. The design process led to two main realizations. The first was that many listeners felt too inhibited to make any sound at all, and thus did not change the virtual environment. Because the system was stationary, the listener was also tied to one specific place in the virtual space. Most of the interaction happened between members of the audience interacting with the microphone and the person wearing the headset viewing the changes in the virtual environment. The work of Spinney and Middleton with “Notes on Blindness” shows that creating a link between the visual and the auditory experience is key, even though the audio can take a leading role. However, the work with Supra hearing showed that the visual output does not necessarily have to mimic the sound or directly visualize aspects of it; a certain friction or disconnection can be allowed to happen.

The other takeaway from the project was that the pipeline used to make the experience was too cumbersome. The setup used the node-based visual programming tool VVVV to create the visual aspects of the experience. The information from VVVV was then sent to the audio tool Max/MSP over OSC messaging. Having to network two separate pieces of software across two computers turned out to be a bottleneck in the process. After further research, using the game engine Unity solved the issue by providing binaural sound with head-tracked movement in one software package. Unity also allows the experiences to be exported to mobile platforms.
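The OSC messages that linked VVVV and Max/MSP follow a simple binary layout: a null-padded address pattern, a type tag string, then big-endian arguments. As a minimal sketch (not the project's actual patch), the following encodes such a message by hand using only the standard library; the address `/source/position` and its float arguments are hypothetical:

```python
import struct

def osc_pad(s: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    return s + b"\x00" * (4 - len(s) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    type_tags = "," + "f" * len(floats)
    msg = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for value in floats:
        msg += struct.pack(">f", value)  # big-endian float32
    return msg

# A hypothetical message carrying a sound-source position to the audio engine.
packet = osc_message("/source/position", 1.5, 0.0, -2.0)
```

Such a packet could then be sent over UDP with `socket.sendto`, which is essentially what the networked setup did between the two machines.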


Fig. 10. Augmented hearing. 360 binaural microphone inspired by Chris Milk.

Together with a colleague, an early test was done to combine binaural recordings made in a small forest in Norway with scans of the environment captured with Google’s Project Tango. This showed the potential of creating mobile virtual environments with a small amount of portable equipment. The research was carried forward to explore the possibility of making a mobile experience that uses the area learning functionality of the Project Tango.
The first experiments for Traces of Reality included binaural recordings made with in-ear microphones. The recordings were done in a small auditorium at The Bartlett and later played back in the same space to a group of listeners through a set of headphones. Each listener would sit in the same position as the person who recorded the binaural audio. The microphones used were a pair of Roland CS-10EM, recorded onto an Edirol R-09 portable flash recorder at 16-bit, 44.1 kHz.

The listeners heard a conversation recorded between two actors, but while the conversation was going on, a third actor moved the seating in the auditorium. The chairs were moved with increasing intensity. Several of the listeners would first remark that they did not hear anything in particular and that it seemed like there was something wrong with the recording. After about 30 to 45 seconds, the listeners would look around and realise that the sounds they were hearing were, in fact, coming from the recording and not from the space itself. This realization left several in disbelief. Some listeners described the experience as shocking or uncomfortable, while others reacted with laughter.

The test uncovers a link between the visual information of the actual space and the three-dimensional binaural recording. The listeners experienced a cognitive dissonance as they saw the dimensions of the room and the objects represented in the recordings; however, the objects did not move as they did in the recordings. Some listeners described it as listening to ghosts. These experiments on creating tension between connections and disconnections between the virtual soundscape and the physical environment were taken further into the next step of the design work, Traces of Reality. The next aim was to create a collective experience that could be shared by multiple actors and to tie this experience to a specific site, giving it relevance to the discussion of how the virtual impacts the way we use physical spaces.


5.2 Traces of reality. The Roundhouse

As a part of our design project, we produced a virtual reality installation commissioned by the We Are Now festival. The festival is housed by The Roundhouse at Chalk Farm in London. The building was purpose-built as a roundhouse for repairing trains (History Of The Roundhouse – Roundhouse). Later, because the trains grew too big for the rotating tables in the roundhouse, it was used as a storage space for gin barrels. In the 1960s, it became a space for theatre performances and concerts, where prominent bands and artists such as Jimi Hendrix and Pink Floyd performed. The history of the building is the foundation of the installation.

The primary objective of the installation was to create a collective virtual experience. The research laid the groundwork for the future progress of the Palimpsest. The installation explored interactions between two participants while connecting the virtual to the context of The Roundhouse. In the early stages of the design, 3D scanning was done with a Project Tango, and convolution reverbs were recorded to create a virtual model of the building and to understand how the experience could be integrated into the space. This was key to designing a site-specific interaction between the building, the audience, and the virtual installation. The scan was later replaced by a real-time scan to give the audience visual feedback on their own position in the space as well as the positions of other members of the audience. This served two purposes: first, it gives the viewer the opportunity to navigate freely in the space; second, it merges the virtual and the real, creating a new environment that is entirely time- and site-specific.


Fig. 11. Tveito, Beaumont and Torisu – Traces of reality (2016)

As you enter the experience, you enter a wormhole. As time and space merge, you can experience the three most important eras of The Roundhouse. This gives the experience a connection to the context and the site, and it contrasts with the notion of virtual reality being free from ties to a specific place. The participants are free to move in the virtual space, using the movement of their own bodies to move in the virtual world. This decision was made because earlier experiments showed that one feels less motion sickness in an HMD when moving one’s own body rather than being moved by an external force.

Research shows that the vestibular organs, “working as gyroscopes, inform us when our own angular velocity changes, but they are unable to report constant-velocity rotation; the otolith organs, in turn, measure the direction of accelerations, but, as any accelerometer, they cannot distinguish between gravity and inertial forces” (Bertolini & Straumann, 2016, p. 2). This means that our body is best able to detect “natural self-propelled motion” (Bertolini & Straumann, 2016, p. 2) rather than motion caused by an external force. This is true both for physical motion, such as being on a moving boat, and for visual illusions such as motion in a VR experience.

The ability to retain control through self-directed movement in virtual environments correlates with the experience of the audience at The Roundhouse. During three days of exhibiting the piece, we received no complaints about motion sickness. This came as a surprise in many ways, as the experience suffered from low frame rates due to the relatively small computational power of the Project Tango tablet. This may indicate that low frame rates reduce immersion but do not directly cause motion sickness.

The main issue with the Project Tango is its relatively low computational power. Because stereoscopic rendering requires every frame to be rendered twice, the demands on processing power increase rapidly. In this regard, virtual sound has the advantage of being less demanding on the hardware. Also, sound does not suffer from the pixelation found in most HMDs. Thus the experience was focused more on the interaction with the virtual environment. The installation allowed two participants to interact with each other in the virtual environment, letting it become a social platform rather than an isolated experience. Furthermore, each person was visible in the world through their avatar. Based on earlier experiments, the avatars were rendered with a large degree of abstraction. This allows a certain openness to take on a different identity, often associated with virtual communities found online.

The experience relied heavily on the use of binauralised sound. The soundscape would change for each scene, giving each a different character and sense of time. The experience starts with the psychedelic rock sounds of the 1960s. Each group of instruments is separated and tied to a specific object in the scene. All the sonified objects float in a zero-gravity state, which allows the users to push the sounds around in the space. The effect achieved sonically resembles a rotating Leslie speaker, often used on Hammond organs, or a guitar effect pedal like a phaser, which Jimi Hendrix often used. Here all the spatial cues described in the section about natural spatial hearing are at work: the binaural engine applies subtle filtering to position the sound sources in relation to the listener, and updates this filtering as the sound sources move after being pushed by the listeners. The sounds are layered in three layers: collision sounds, which only occur when a listener touches an object; object sounds, which are tied to an object but play continuously; and finally ambient sounds. The last category is not tied to a specific object but gives the scene its ambience.


Fig. 12. Layers of sound in Traces of reality (2016).
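The three layers in Fig. 12 can be summarised as a simple trigger rule: collision sounds fire only on contact, while object and ambient sounds play continuously. A minimal sketch of that rule, with hypothetical clip names (the installation itself was built in Unity):

```python
from dataclasses import dataclass

@dataclass
class SoundSource:
    clip: str   # audio file for this source (names are illustrative)
    layer: str  # "collision", "object" or "ambient"

def should_play(source: SoundSource, touched: bool) -> bool:
    """Collision sounds play only while an object is touched;
    object and ambient sounds loop continuously."""
    return touched if source.layer == "collision" else True

drum_hit = SoundSource("drum_hit.wav", "collision")
organ_loop = SoundSource("organ_loop.wav", "object")
crowd = SoundSource("crowd_ambience.wav", "ambient")
```

The distinction matters for interaction design: only the collision layer gives the listener direct, momentary feedback, while the other two layers define the persistent character of the scene.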

The interactive aspect allows the participants to alter the soundscape together and to compose the way it sounds in a playful manner. The ambient sound sources overlap each other, and their volume is adjusted according to their distance from the listener.


Fig. 13. Diagram. Overlapping sound sources with volume adjusted according to the listeners position in space (2016).
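The overlap in Fig. 13 can be read as a cross-fade: each source contributes a gain that falls off with distance and reaches zero at an audible radius. A simplified linear sketch (positions and radius are illustrative; game engines offer several fall-off curves):

```python
import math

def gain(listener, source, radius=10.0):
    """Linear fall-off: full volume at the source, silent beyond `radius`."""
    d = math.dist(listener, source)
    return max(0.0, 1.0 - d / radius)

# Two overlapping ambient sources; walking between them cross-fades the mix.
sources = {"train": (0.0, 0.0), "crowd": (8.0, 0.0)}
listener = (2.0, 0.0)
mix = {name: gain(listener, pos) for name, pos in sources.items()}
# the closer "train" source dominates the mix at this position
```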

The installation at The Roundhouse gave an insight into all the elements needed to create a shared virtual reality installation. Traces of Reality was installed on site at The Roundhouse during the We Are Now festival, running for three days in total. Having a public display of the work gave us valuable experience of which aspects worked and which needed to be improved. Many participants tried VR for the first time through Traces of Reality; this made us aware that the introduction menu designed to lead the participant into the experience was not intuitive enough for first-time users.


Fig. 14. Diagram. Two Project Tango units networked for collective listening (2016)

The menu was designed to give the user an introduction to the experience, much like the opening of a film. After the main narrative of the time travel is explained, the user activates a black hole through gaze control. The black hole, consisting of a mass of swirling particles, would grow until it absorbed the viewer and transformed the space into the psychedelic era of The Roundhouse.

Because the real-time scanning started during the introduction, several people thought that this was the whole experience, so they would walk around and scan the environment without any other content added. To avoid this, we edited out the introduction. This made us realise that experiencing virtual reality in the setting of a festival is quite different from doing so at home. A large number of people try the installation in a short amount of time, so to accommodate this, the menu was removed and people would go directly into the experience, knowing that they were in the right place. The introduction and the necessary instructions were given orally instead.

During the three days of exhibiting the work, we had the chance to get feedback on the project. At one point a family came in to try the experience. One of their children was sitting in a motorized wheelchair. The child was able to try the experience simply by having the headset held to his face, and with assistance from his parents the wheelchair could be moved forward. This reminded us that in virtual environments one can attain possibilities that one has either lost or never had. By the use of the wheelchair, the child could move through the soundscape like anyone else. Interestingly, the children were often the most unafraid to maneuver in the virtual environment. Many children today grow up with smartphones, tablets, and connected objects as a given, and they seemingly adapted to the virtual experience with ease. That being said, a large span of age groups tried the experience and, after some initial steps, were able to walk freely around and experience the sights and sounds.

Fig. 16. The Roundhouse – Traces of reality  (2016).

Many of those who tried the installation responded to the connection between the objects and the sounds they produced. For instance, hearing the sound of a train approaching from a distance, then suddenly seeing it appear with the corresponding sound increasing until they found themselves completely immersed in the train sounds, was described by many as a new and surprising experience. The fact that the train both looked and sounded spatial added to the experience, but when they found themselves in the middle of the point cloud of the train, the experience transcended what they could observe outside a virtual environment.


Fig. 17. Custom cardboard headset designed for Traces of Reality (2016).
After finishing the installation, the aim was to expand the virtual environment to a public space. Having experienced the relatively limited computational power of the Project Tango, we aimed to create two experiences that illustrate the two main potentials of mobile virtual reality and soundscapes. The decision was made to show people the highest immersion that can be achieved through a commercial VR headset, such as the Oculus Rift CV1 connected to a stationary computer with a high-end graphics card that can handle rendering all the needed information twice. However, what the stationary setup gains in visual fidelity and power to handle large amounts of information, it loses in mobility: because you are tethered to a stationary machine, movement is limited by a cable. Certain companies have experimented with lightweight stationary computers that fit in backpacks, but this will most probably only be a passing phase until truly powerful, virtual-reality-ready hardware is integrated into cell phones.

5.3 The Camden Palimpsest

The Palimpsest is the next step in the investigation, as it is introduced into an urban context. Taking the public space of St. James Garden, the idea of collective soundscapes is incorporated as a method of participatory design. The design requires no physical manifestation other than the context it is situated in; the VR headset becomes a gateway to the virtual space. The main potential of the project lies in a combination of its accessibility and its potential to be a shared experience. As stated earlier, binaural sound can be achieved even through the simplest pair of headphones; it does not require an expensive set of bespoke speakers and calibrated listening rooms. Furthermore, mobile virtual reality is accessible through smartphones in combination with a cardboard headset. In a few years’ time, smartphones will also be equipped with depth-sensing cameras such as the Project Tango’s. This opens up the possibility for a large number of people to use and contribute to virtual environments.

Fig. 18. Tveito, Beaumont and Torisu – The Camden Palimpsest. Overlapping soundscapes (2016)

In connection with the concept of soft architecture, the Palimpsest functions in a similar way to an urban legend or a myth. The sonic aspect reinforces this, as it is invisible and can fade gradually into the listener’s attention, merging the virtual layer with the environment. The urban legend is here understood as a narrative, legend, or myth that informs the way we view a specific site or context. Songs and anthems may have the same function, as a piece of music can give a place a certain character. As Brink Pedersen’s work shows, music festivals are another example of collective memories with a strong connection to sound. A festival is often a temporary affair that transforms a place for a set period. The rest of the year, the space of the festival exists only within the memories and anticipations of the audience and those who are involved in other ways.

The research from The Roundhouse shows that it is possible to create shared virtual social environments through the networking capabilities of game engines such as Unity. The implication is that it is possible to create soundscapes that can be shared by a large group of people in real time. The outcome of the project is a prototype of how the Palimpsest could evolve if it were implemented in St. James Garden. Though the end goal is to create a platform for shaping collective soundscapes, the main outcome is a mediation, because the software necessary to produce the content still requires a fair amount of prior knowledge of software development and architectural design to yield a meaningful result. The first part of the Palimpsest is made to explore its potential.

Fig. 19. Tveito, Beaumont and Torisu – Simulated Soundscapes (2016)

The process would be first to scan spaces and record sounds of spaces or objects that have a value or importance to you. The data is then brought into Unity to be edited. Once the data is uploaded to the Palimpsest, it can be placed within the context of the park. This can be done as a collective effort. Models and sounds can also be collected online or made in 3D modelling software such as Rhino or SketchUp. When the soundscapes are situated in the space, they can be experienced by walking through St. James Garden. As you move, past, present, and future merge. In this way, new connections can be made that otherwise would be outside of our perception of time. As Blesser and Salter point out: “To preserve our experience of aural architecture, most of us depend on long-term memory, which, without extensive training and practice, is even more unreliable than short-term memory. For this reason, few of us accumulate aural experiences of spaces.” (Blesser & Salter, 2009). The Palimpsest can function as a spatial record that stores soundscapes evolving over time; both acoustic properties and sonic events can be experienced. The aim is to create a platform that can be used by members of the municipality, architects, designers, politicians, and developers alike. As the recordings sit side by side in the Palimpsest, they can provide a pluralism that otherwise would be hard to achieve in traditional media channels such as the printed press, radio, television, or even web pages. Over time, the amount and richness of the information will only increase as it is constantly overlaid and rewritten.

The experience was used to test several forms of recording and scanning the environments, ranging from the rough mesh-based scans of the Project Tango to the high definition of the Lidar scan. The expressions narrate the difference between past, present, and future. The past is rendered in abstracted, loose particles, like distant memories. The present is represented through interviews, sound recordings, and scans.


Fig. 20. Tveito, Beaumont and Torisu – Within the soundscape of an apartment in Camden (2016).

The design discusses different ways of rendering the scanned data. The way it is presented is key to the narrative conveyed through the experience. In the early version, the point clouds were rendered in a static manner, but this hindered a dynamic experience: when static point clouds sit alongside sounds, which are temporal, a disconnection arises between the audio and the visuals. In “In the Eyes of the Animal”, the dynamic treatment of point clouds makes them part of the narrative of the experience and creates another level of interaction. When you take on the vision of an owl, the points in the point cloud grow as they appear in your peripheral vision, a reference to the owl’s vision, which is extremely sharp towards the centre and blurry towards the periphery. The dynamics of the point clouds used in the Palimpsest can be ascribed new characteristics and behaviours depending on the narratives that occur.
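The owl-vision treatment described above can be expressed as a function of the angle between the gaze direction and the direction to each point: the further a point sits from the centre of vision, the larger it is drawn. A simplified sketch with assumed `base` and `growth` parameters (the real effect would run per-point in a shader):

```python
import math

def point_size(gaze, point_dir, base=1.0, growth=4.0):
    """Scale a point's drawn size with the angle between the gaze
    direction and the direction to the point (both unit vectors)."""
    dot = sum(g * p for g, p in zip(gaze, point_dir))
    angle = math.acos(max(-1.0, min(1.0, dot)))  # 0 = centre of vision
    return base + growth * (angle / math.pi)

centre = point_size((0, 0, 1), (0, 0, 1))  # looked at directly: stays small
side = point_size((0, 0, 1), (1, 0, 0))    # 90 degrees into the periphery
```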

5.4 The festival

Through the knowledge of natural spatial hearing, the design of the Palimpsest aims to alter the way we hear a space in real time. This allows a group of people to share the same virtual environment and alter it as they see fit. It enables an intuitive discussion about the surroundings that remains spatial and relates directly to the perception of the actors. The actors are able to experience the changes both in terms of the environment and in relationship to the other actors. The Palimpsest is designed to discuss possible outcomes of the situation and what it could generate if it were given to people.

As an example, we looked at Drummond Street, which is situated close to the proposed renovation of Euston station. The restaurant owners of Drummond Street are concerned that the long-term construction process of the high-speed train line will inhibit their business, both through the obstruction of traffic and access to the area and through noise from the construction site itself. As a possible outcome of the Palimpsest, we propose a street market and festival. As a compromise with the ongoing construction, the festival could transform the street one day every weekend as well as one afternoon during the week. This would allow both access and a complete change in the soundscape. During the rest of the week, the memory of the Drummond Street festival sits within people’s minds as a collective memory. This is further enhanced by the fact that it is possible to visit the festival within the Palimpsest, where it can be improved and evolved until it is implemented for the next event.


Fig 21. Tveito, Beaumont and Torisu – The festival (2016).

The Palimpsest is seen as a way to test new ideas coming from both the municipality and the developers. Like the Electrical Walks of Christina Kubisch (Polli, 2012), the project also aims to create awareness of the soundscape that surrounds the garden. As Kubisch’s work renders electrical circuits found in the urban environment audible to the human ear, the Palimpsest renders past, present, and possible future soundscapes audible to the listener.

5.5 Further studies

The scans accumulated by the Palimpsest become the geometry needed to simulate the acoustics of the environment. Once the existing environment is present, new layers can be added, and a group of people can interrogate how the soundscape will change. Sound sources can be recorded and placed in the environment, or they can be simulated through ray casting. For instance, a group of people from a municipality can interrogate what a concert in their new gathering place will sound like, or how a planned infrastructural project will affect the public spaces in terms of noise.

The tools available for synthesizing soundscapes in game engines allow us to position sound sources in space through binaural technology, to create acoustic properties based on the virtual geometry, and even to model the occlusion of these objects. But there is a gap when it comes to visualising acoustic properties in real time. Acoustic simulation packages such as Odeon, which utilise scatter-based models, are not widely accessible due to their high cost (Odeon price list, 2015). However, the particle system in Unity can be programmed in a similar fashion, as it allows collision detection. Particles can be reflected off a surface, and the system can generate a collision report, which can be used to read specific properties assigned to the objects that the particles collide with. This enables the listeners to assign acoustic properties to the materials of the virtual geometry: if a sound particle collides with a wooden panel, it registers this. Upon collision it could further change colour, as well as generate a report that could be uploaded to the Palimpsest or given directly to others who are involved.
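A minimal illustration of this particle approach, reduced to 2D: a sound particle reflects off surfaces and accumulates a collision report recording the material struck and the energy remaining after absorption. The materials, normals, and absorption coefficients here are assumptions for the sketch, not values from the project:

```python
def reflect(direction, normal):
    """Mirror a 2D direction vector about a surface normal."""
    dot = direction[0] * normal[0] + direction[1] * normal[1]
    return (direction[0] - 2 * dot * normal[0],
            direction[1] - 2 * dot * normal[1])

# Hypothetical room: material name -> (absorption coefficient, wall normal)
walls = {"wooden panel": (0.10, (0.0, -1.0)),
         "concrete":     (0.02, (-1.0, 0.0))}

def trace(direction, hits, energy=1.0):
    """Follow a particle through a fixed sequence of wall hits,
    attenuating its energy and collecting a collision report."""
    report = []
    for material in hits:
        absorption, normal = walls[material]
        energy *= 1.0 - absorption
        direction = reflect(direction, normal)
        report.append((material, round(energy, 3)))
    return report

report = trace((0.6, 0.8), ["wooden panel", "concrete"])
```

In Unity the same idea maps onto particle collision messages: each collision event would look up the material of the surface it hit and append an entry to the report, which could then be visualised or exported.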

Fig. 22. MIT (CSAIL) – Visually indicated sounds.

Another aspect to integrate is the technique of generative soundscapes. These can be split into two main categories: soundscapes based on pre-recorded sounds and soundscapes based on synthesized sounds. Today the first gives the most convincing results, but researchers at MIT have found ways to predict what objects sound like when struck by another object such as a stick. Through machine learning, the sound of the object can be anticipated, much as humans anticipate what an object will sound like. It is easy to see how this can inform the future soundscapes of the Palimpsest. What if we could anticipate, and further on simulate, what the future city will sound like? This could give a unique impression when sitting alongside the layers of past and present states of the city.

6.1 Conclusion

The thesis raises the main question: “Can virtual, mobile soundscapes go from being individual to becoming collective experiences that inform the way we shape our cities?” Through the research it has become clear that the infrastructure to create shared soundscapes linked to physical spaces already exists.

Which tools are available to create virtual soundscapes that are spatial and where is there a need to create a new platform? The tools can be found within game engines such as Unity, and these experiences can be made accessible through mobile technology. This also allows the soundscapes to be tied to a physical environment through area learning.

Can virtual collective soundscapes inform the way that past, present, and future merge in a virtual environment?
The project proposes a Palimpsest that can be written and rewritten to record the past, present, and future states of the urban soundscape. The Palimpsest uses virtual reality to create shared spatial experiences that allow the listeners to take other perspectives and together draw new conclusions about the future of the city.

By incorporating the element of sound, discussions of how our cities change will become richer. Historically, there are few documents recording how urban soundscapes have changed, but today the tools are available to incorporate soundscapes into virtual archives. The design of soundscapes can be used as a real-time collective design tool through virtual reality, and it can create an experiential output that generates discussion around how our soundscapes and cities evolve.

Fig 23. Tveito, Beaumont and Torisu – The Urban Palimpsest (2016).


6.2 References


Schafer, R. Murray (1993-10-01). The Soundscape: Our Sonic Environment and the Tuning of the World (Kindle Locations 5274-5275). Inner Traditions/Bear & Company. Kindle Edition.

Blesser, B. Salter, L.R. (2009). Spaces Speak, Are You Listening?: Experiencing Aural Architecture The MIT Press. Kindle Edition.

Talagala, D. S., Zhang, W., & Abhayapala, T. D. (2014). Binaural sound source localization using the frequency diversity of the head-related transfer function. J Acoust Soc Am.

Begault, D. R. (1995). 3-D Sound for Virtual Reality and Multimedia. Computer Music Journal, 19(April), 99.

Jourdain, R. (1997). Music, the Brain and Ecstasy: How Music Captures Our Imagination. Harper Collins Publishers, New York.

Otondo, F. and Barrett, N. (2007). Creating Sonic Spaces: An Interview with Natasha Barrett. Computer Music Journal, Vol. 31, No. 2, Creating Sonic Spaces (Summer, 2007). The MIT Press. Accessed: 29-03-2016.

Oculus Connect 2: 3D Audio: Designing Sounds for VR
Tom Smurdon, Audio Content Lead, Oculus

Beer, D. (2007). Tune out: Music, soundscapes and the urban mise-en-scène. Information, Communication & Society.

Zimmermann, A., & Lorenz, A. (2008). LISTEN: A user-adaptive audio-augmented museum guide. User Modeling and User-Adapted Interaction, 18(5), 389—416.

Oxford Dictionary [Online] (2016)

Two Big Ears [Online] (2016)
(accessed on 10.07.2016)

Odeon price list 2016 [Online] (2016)
(accessed on 10.07.2016)

Matterport Brings 3D Capture to Mobile. Matterport. [Online] (2016) (accessed on 10.07.2016)

Lenovo 2016 [Online] (2016)
Oculus Spatializer 2016 [Online] (2016)

Zeng, X., Lynge, C., & Rindel, J. H. (2006). Practical methods to define scattering coefficients in a room acoustics computer model, 67, 771—786.

Spinney, J and Middleton P, Notes on Blindness, 2015.

Hamberg, K, Convolution reverb explained [Online] (2016)

Signe Brink Pedersen, City Link symposium [Online] (2015)

“History Of The Roundhouse – Roundhouse.” Roundhouse [Online] 2016.

Smith, S. (2016). Mapping the Terrain, Again.

Schafer, R. Murray (1993-10-01). The Soundscape: Our Sonic Environment and the Tuning of the World (Kindle Location 1486). Inner Traditions/Bear & Company. Kindle Edition.

Blesser, Barry; Salter, Linda-Ruth (2009-09-18). Spaces Speak, Are You Listening?: Experiencing Aural Architecture (Kindle Locations 463-465). The MIT Press. Kindle Edition.

Skovgaard Petersen, Casper – Roskilde Festival: a laboratory of cities [Online] (2015) (accessed on 20.07.2016)

Image references:

Fig 1. The Associated Press – Archimedes Palimpsest.

Fig. 2 Tveito, H. Beaumont, R. Torisu, T, 2016.
In ear binaural microphone.

Fig. 3. Milk, C – Being there, 2015.

Fig. 4 Tveito, H. Beaumont, R. Torisu, T, 2016.
Recording convolution reverb impulse response.

Fig. 5. Cage – 4’33” Score (1952). Live at the Barbican, BBC Four.

Fig. 6. Marshmallow Laser Feast – In the Eyes of the Animal (2015).

Fig 7. Spinney, J and Middleton P, Notes on Blindness, 2015.

Fig. 8. Tveito, H. 2016
Diagram. Visual feedback from the physical environment

Fig. 9. Diagram – Lacy, S – Imagining degrees of engagement as concentric circles.  (1991) Mapping the Terrain.

Fig 10. Tveito, H.  Aghakouchak, A. Chaturvedi, S.  Augmented hearing. 360 binaural microphone inspired by Chris Milk.

Fig 11. Tveito, Beaumont and Torisu – Traces of reality (2016)

Fig 12. Tveito, Beaumont and Torisu. (2016) Layers of sound in Traces of reality.

Fig 13. Diagram. Tveito. H, (2016)
Overlapping sound sources with volume adjusted according to the listeners position in space.

Fig 14. Diagram. Two Project Tango units networked for collective listening (2016)

Fig 15. Tveito, Beaumont and Torisu (2016) – Traces of reality. The train in the Roundhouse

Fig 16. Tveito, Beaumont and Torisu. (2016)
The Roundhouse – Traces of reality

Fig 17. Tveito, Beaumont and Torisu. (2016) Custom cardboard headset designed for Traces of Reality.

Fig. 18. Tveito, Beaumont and Torisu – The Camden Palimpsest. Overlapping soundscapes (2016)

Fig 19. Tveito, Beaumont and Torisu (2016)
– Simulated soundscapes

Fig 20. Tveito, Beaumont and Torisu (2016)
Within the soundscape of an apartment in Camden

Fig 21. Tveito, Beaumont and Torisu — The festival (2016).

Fig. 22. MIT (CSAIL) — Visually indicated sounds.

Fig 23. Tveito, Beaumont and Torisu — The Urban Palimpsest (2016).



