
Bartlett School of Architecture, UCL


Exploring Spatiality Through Sound – A Sonic Interpretation of Movements for Live Performance


Cross-disciplinary collaboration and complex systems have contributed to interactive live performances that translate the physical world into the virtual, or the other way around. Meanwhile, spatial representation has merged various theories and technologies, extending the conventional ways in which people perceive space and interact with it. This design report presents an experimental system that brings spatial representation and live performance together through sound. Contemporary dance, computer vision, sonification, and spatial sound each relate to spatial representation in different ways, and this project builds connections amongst them. The study starts from the theory of spatiality, exploring spatial expression through dance performance and using computer vision and sonification as the digital approaches. By translating body movements into sonic expression and mapping it back to physical space, one can perceive movements and space by listening, or create sound through space (using body movements). Potentially, the system allows multi-user and multidisciplinary collaboration on designed performances, thereby providing users and audiences with an unconventional experience of spatial representation.

  1. Introduction

The study of spatial cognition and representation has permeated various areas. Illustrations can be found in neuroscience and psychology, where Gibson (1979) explored human perception; in computational modelling, where Hirtle (2013) investigated models for spatial cognition; and in choreography and the digital performing arts, where body movement meets digital devices and computer techniques (Rubidge and MacDonald, 2004; Leach and deLahunta, 2017). This phenomenon has led to many cross-disciplinary collaborations and the development of complex systems concerning spatial representation. In particular, the collaboration of live performance and digital techniques has created unconventional spatial experiences and opened new conversations amongst domains such as dance, digital sound, visual arts, and space design. As Christiane Paul (2003) states, digital artworks and space frequently suggest possible relationships between virtual and physical space, and the threshold between them depends on the approaches bringing one space into the other and the balance between them. By translating virtual elements into the physical environment, mapping the physical world into the virtual, or fusing the two (ibid.), relevant concepts and techniques benefit from such interdisciplinary communication, and such collaboration contributes to unconventional experiences of perceiving and interpreting physical and virtual space.

The purpose of this project is to explore modalities of spatial representation in multidisciplinary domains and their correlation. This means that different subjects each play a role yet are gathered as a whole in one complex system. Specifically, the project brings live dance performance, a camera-assisted system, and movement sonification together into a system that interprets spatial information: a performer is invited to dance in front of a webcam, and the dancing movements are translated into spatial sound in real time.

In this context, the movements are considered the dancer’s personal understanding of the given space and a personal creation of transitory space. It is important to observe that body movements through dancing are a tangible interpretation and expression of the space, unique and unrepeatable, while the sound is an intangible expression that nevertheless influences the performer’s next move. In other words, the input (the body movement at this moment) is also partly an output of the last movement, which completes a loop in this complex system. In this way, cross-disciplinary theories and techniques can be combined in terms of spatial interpretation, offering a chance to break through conventional spatial representation, as found in typical architecture for example.

Given that this project brings several subjects together, this design report presents the relevant theories in Chapter Two: it starts with a theoretical discussion of spatiality, explaining the fundamental relationship between space perception and human action; moves on to investigate how contemporary dance relates to spatial representation and how digital approaches enhance such expression and contribute to the performing arts; then briefly reviews computer vision as an interface for interactive art, with an emphasis on camera-assisted live performance; and finally considers how interactive performance can benefit from responsive sound and real-time movement sonification, reviewing the work of Sarah Rubidge and Alistair MacDonald (2004), among others. Chapter Three presents the approach of this project, including the space built for performance and its main digital process, as well as the limitations of the current system. In Chapter Four, criteria are given to assess the interactivity level of the current system, together with users’ feedback and the possibilities for dancers, architects, and programmers to contribute to a live performance.

  2. Background Study and Related Work

2.1. Concept of Spatiality

Space and motion underpin the concept of this project. Although space perception and human action are traditionally considered independent disciplines (Warren, 1995), an increasing number of theories and studies addressing spatial topics have argued that the two fields are closely linked and that neither would be complete without the other (Fajen and Phillips, 2013). From a biological perspective, spatial competence has played a vital role in human adaptation and the development of human society, influencing the full range of human activity from basic survival and reproduction to high-level pursuits (Newcombe and Huttenlocher, 2000). Arguably, such ability covers not only basic daily activities such as walking from A to B, but also higher-level activities such as sophisticated mathematical modelling, geographic navigation and mapping, and diagrammatic summarising. Studies concerning spatial competence have shown growing interest in cross-disciplinary projects, complex human activities, and multiple modalities of spatial representation and reasoning. Examples are diverse: multisensory contributions and information extraction in the processing of spatial perception (Mohler, Luca and Bülthoff, 2013); self-motion and spatial memory in terms of landmark-based navigation and path integration (Philbeck and Sargent, 2013); and the spatial reasoning tools used in the cognitive simulation developed by Ferguson and Forbus (1999).

Motivated by this, the study lays its emphasis on representing and manipulating spatial information, and on communicating such information in different forms, from both physical and virtual aspects. By bringing spatial representation and reasoning into different domains and building communication between physical and virtual space, our perception of space can be extended and various ways of interpreting space explored.

However, a primary question needs to be put forward before spatial representation can be discussed: what is space? From a physical perspective, space is considered extension in all directions from a given point (Curl and Wilson, 2015). However, when one speaks of space, one also means information such as location, distance, shape, size, colour, order, and reference system (Warren, 1995). These are the “extensional properties” of space, which Daniel Montello and Martin Raubal (2013) also define as spatiality. Yet spatial perception, as stated before, would not be complete without human action. Warren (1995) implies that the perception of space is the preparation of subsequent purposeful actions rather than the mere obtaining of basic information. Given the context of this study, “extensional properties” alone is a questionable definition of spatiality. The spatiality discussed here is therefore defined as below:

Spatiality: “A term that denotes socially produced space, rather than space conceived in absolute terms. That is, spatiality recognizes the roles people play in creating space and the interaction between space and human action. Spatiality denotes the idea that rather than space being a backdrop to social life, it is constitutive of social life” (Castree, Kitchin, and Rogers, 2013).

This definition emphasises the spatial effects on human action, indicating that space and activities are mutually embedded rather than independent. It also means that human action is an interpretation of space, while spatial expressions potentially lead to a variety of human activities in the context of human society. In other words, as Warren (1995) claims, there is a bipartite process in such interaction: perceiving, through which the current state of affairs is known, and acting, which alters the surroundings. He describes such circular causality as a “perception-action loop” (ibid.). Therefore, spatiality shall be discussed from both spatial and behavioural points of view.

To narrow the scope, architecture is considered the domain for further discussion of space, for three reasons. Most importantly (an architectural perspective), space touches the immediacy of individuals’ sensory perceptions (Watford, 2013) and transcends its physical boundaries, entering people’s imaginations and touching their emotions (Holl, 2006). Watford (2013) concludes that such communication provides rich, meaningful multi-sensory experiences. Gaston Bachelard (1994) also describes the experience of space as a “phenomenological event” that centres on emotional and personal experiences of place and space, through which individuals project their emotions and associations onto architectural space (Pallasmaa, 2009). This means that spatiality, combined with architectural space and individual activities, especially individual creativity, becomes akin to a scene in a performance where the action of life unfolds.

Secondly (a straightforward reason), people engage with the built environment every day; in other words, we are constantly surrounded by changing space. Architects bring interaction into constructions by creating and modifying space. In relation to the application in this project, the potential for architectural collaboration will be discussed in Chapter Four.

Another reason is the similarity between architecture and dance in terms of interaction, which will be discussed in the next section.

2.2. Contemporary Dance and Digital Performance

Human action as discussed in the last section is rather broad, mainly referring to general daily activities. As Britt Elena Bandel Jeske (2013) has observed, daily movement draws on the same spatial abilities as dance performance, though the latter is more readily observable. Therefore, to narrow the scope, while the system described in Chapter Three can respond to any kind of activity, it is contemporary dance that is considered in this project. This section presents how contemporary dance relates to spatial representation, with an emphasis on how its expression benefits from digital performance.

It is important to note that both architects and dancers strive to create and define space whilst designing for movement in space (Bandel Jeske, 2013). This is reflected in many studies on the relationship between dance and architecture (Bronet and Schumacher, 1999; Gavrilou, 2003; Maletic, 1987). In The Production of Space, Henri Lefebvre (1974) claimed that the dancing body has the capability to shape the given space while it is simultaneously shaped by the surrounding constructions, and that many social spaces are even given a certain rhythm by the gestures acting within them. From this point of view, dancers do the same thing as architects, even though body movements are impermanent. Bernard Tschumi (1990) agreed and explained more explicitly that movement creates space in parallel with physical space (such as walls and columns). This transient action of space can be easily amplified and observed in the time-lapse photo of Heidi Wilson (Figure 1).

Figure 1: Movement-created space. Time-lapse photo of Heidi Wilson, Dance Workshop.

The features of contemporary dance also provide various potentials for interaction. Unlike classical dance, contemporary dance is not constrained by performing technique, established movement patterns, audience types, styles, or culture, but strives for dynamics and unconventionality, which frequently leads to dialogues with other disciplines such as digital media, computer technologies, and cognitive science. Computer-based collaborations have played a dynamic and increasingly important role in dance, live theatre, and the new forms of performance that have emerged in interactive installation since the end of the twentieth century (Dixon, 2007). In the 1990s, the choreographer Wayne McGregor, together with his company Random Dance, started a decade-long collaboration with a team of cognitive neuroscientists concerning the relationship between mind and body movement, as well as digital approaches to dance-making (Leach and deLahunta, 2017). This research spanned several fields, leading to innovative ideas and mutual benefits for all the domains involved (McCarthy, Blackwell, deLahunta and Wing et al., 2006). Software for Dance Project (London, Autumn 2001), a workshop facilitated by Scott deLahunta, brought together four choreographers and five artists/programmers to discuss how computers might specifically assist the rehearsal process, with a focus on open source code (Dixon, 2007). Another workshop facilitated by deLahunta, New Performance Tools: Technologies/Interactive Systems (Ohio State University, January 2002), gathered a group of experts and artists to explore the implications of collaborating with interactive tools and computer-controlled systems, from both practical and conceptual standpoints, within live performance and installations (ibid.). Examples can also be found using various digital tools: Yacov Sharir choreographed an entire dance on the computer using Life Forms and Poser software, and Merce Cunningham projected images of virtual dancers on stage, created by combining motion-capture techniques and advanced animation software (ibid.).

Although the valuable sharing of experience and research among these projects raised more questions than answers, with no particular conclusions discerned (ibid.), they are a reminder that choreography and live performance are becoming increasingly available to specialists and non-specialists alike (ibid.), and that by building “ready-to-use” facilities, computer software and programming are becoming appealing to performing artists and a wider range of users. Dixon (2007) has observed that the words “digital” and “computer” have been used less to highlight special “magic” within dance performance since the 2000s. This phenomenon indicates that digital tools and techniques have been deeply embedded in and thoroughly assimilated into dance practice (ibid.). It also means that the live presentation, the interaction, and the implications, rather than any digital technique, are always the highlight of live performance. Thus in this project, contemporary dance acts as both the input to the digital process and, more importantly, the form of presentation. This will be detailed in Chapter Three, and brief background on the digital approaches used in this design work is addressed in the following sections.

2.3. Computer Vision as an Interface for Live Performance

Dixon (2007) has observed that most visual images used in digital performance originate from lens-based camera systems and are subsequently digitised and manipulated within the computer, rather than being generated entirely by computer. The camera-assisted interface therefore provides a primary starting point for exploring and discussing digital performance and its liveness. Morrison (2004) likewise believes that, as human-machine interfaces, computer vision systems are more attractive than other input devices because they are more versatile and add capabilities similar to human vision. Senior and Jaimes (2010) agree that by giving computers a sense of sight, computer vision has opened a wider scope of potential for perceiving and interacting.

Figure 2: image collection of VIDEOPLACE (Krueger, Gionfriddo and Hinrichsen, 1985)

There is little doubt that computer vision as a technique and tool has become a significant component of digital media and human-computer interaction through spatial cognition and representation, spanning from simple photography to sophisticated ambient intelligence. As early as the nineteenth century, Eadweard Muybridge investigated the body and human action using photographic sequences (Muybridge and Taft, 1955). In the 1970s, Myron Krueger developed VIDEOPLACE, which combined live video of a participant with a computer-generated image that adjusts its own shape according to the participant’s movements (Figure 2) (Krueger, Gionfriddo and Hinrichsen, 1985); it is considered the earliest interactive artwork using computer vision (Senior and Jaimes, 2010). More recently, a project on real-time three-dimensional dynamic scene reconstruction using a single depth camera has laid foundations for ambient intelligence such as simultaneous localisation and mapping (SLAM) systems (Lu et al., 2018).

Furthermore, in the context of live interactive performance, the computer vision techniques employed have significantly extended artistic expression and liveness, and are frequently linked with sound generation. For example, a tiny device developed by Lyons, Haehnel and Tetsutani (2003) generates MIDI sound by detecting the user’s mouth gestures.

Figure 3: Flavia Sparacino demonstrating her DanceSpace Installation (Paradiso and Sparacino, 1997)

In the work of Valenti, Jaimes and Sebe (2008), facial expressions recognised by the developed system are digitised and mapped to sound waveforms and a set of sound-modulating parameters. EyeMusic, in which computer vision detects eye movements that are subsequently processed in a music and multimedia environment (Max/MSP/Jitter), is another system designed for music performance (Hornof, Rogers and Halverson, 2007). Paradiso and Sparacino (1997) also used optical tracking systems as human-computer interfaces collaborating with music and dance in interactive multimedia performance. In their project, an interactive stage, DanceSpace, presents audio and visual effects while participants dance within it, regardless of their dancing skills (ibid.). By moving their limbs, users create a series of multicoloured trails across the wall screen (Figure 3). (See more at http://paradiso.media.mit.edu/SpectrumWeb/captions/DanceSpace.html)

Similar examples can be found frequently in camera-assisted live performance, and one can easily notice that the data provided by the camera for subsequent processing are flexible: they can be selected on purpose, mapped, or manipulated in various ways. More importantly, neither the movements nor the generated sound can be repeated in these live performances. This means a unique performance is created each time, which reinforces the importance of liveness and the presence of the performer. These features are considered very important in this project and will be further discussed in Chapters Three and Four.

 

2.4. Sonification of Movements

As mentioned in the last section, sound generation is frequently linked with camera-assisted live performance; with spatial sound engaged, both spatial representation and the immersive experience can be extended. Dixon (2007) suggests:

“Responsive sound is often key to the physical and immersive experience of installation (over its observation merely to be seen), establishing moods and eliciting emotional responses that emphasise and enhance the visitor’s connectedness to the space.”

Referring back to the discussion of spatial representation and dance performance, there is little doubt that enhancing the performer-space connection is essential: through it, there is technically more information that can be interpreted by the dancer’s movements, potentially leading to more delicate interaction within the performance.

Sensuous Geographies, an inspiring interactive work by choreographer and digital artist Sarah Rubidge and composer Alistair MacDonald (2004), demonstrates this theory articulately. Blindfolded participants are instructed to enter the space and stand stationary to locate themselves by listening in this resonant space; they can then move to follow the sound that has been identified as their personal sound (ibid.). In this installation, a video camera system tracks visitors’ motions individually within the active space (ibid.), and the sound generated for each visitor is modulated by their own moving direction and speed (ibid.). The system therefore encourages a group of people to affect and build the sound environment together, which The Herald described as “a personal/group signature tune that constantly shifts and cannot be repeated” (Brennan, 2003). Dixon (2007) commented that such an experience is delicate, corporeal, and uplifting, as participants perceive space through senses other than sight, and such subtle sensations permeate the body while the space is “played” in a musical way. Individuals therefore listen and mark space for others, respond, contradict each other, and build duets and ensemble sequences together, as shown in the video (MacDonald, 2003; see https://vimeo.com/123180961). Another movement sonification work is “afaao” (short for “as far as abstract objects”), a multimedia contemporary dance performance in which the dancer’s live movements are complemented and enhanced by a space-filling electronic sound installation in collaboration with digitally animated images (Papadopoulou and Schulte, 2016). This transdisciplinary research, as Papadopoulou and Schulte (2016) state, is designed to augment the space-time limits of a live performance and to articulate the hidden dynamics of contemporary dance movement. Thus the sonification of dance movement is more than simple translation; it is wordless communication between space, body, perception, and expression, as discussed in the previous sections.

In terms of digital tools for sonifying data, Max/MSP is the software most frequently used in sonic interaction; it was created by Miller Puckette, with an open-source counterpart released in the 1990s (Senior and Jaimes, 2010; Dixon, 2007). It uses a graphical interface instead of textual code manipulation, and accepts inputs from various sensors, including cameras, whilst presenting visual and/or sonic responses in real time (Dixon, 2007). A “live” relationship is therefore created between the performer and the multimedia, whether digital video or audio material is presented, which has encouraged performances requiring liveness and dynamics. For example, Troika Ranch, a dance-theatre group from New York, has developed its own interactive system using Max/MSP (Dixon, 2007). The members of the group explain that, unlike recorded soundtracks or images, a dancer can hardly present exactly the same performance and deliver precisely the same emotions each time, even with the same media material (Broadhurst, 2008). Therefore, by translating body movements into live music, the media elements accompanying the dancer during the performance are alive, given the same sense of liveness and “the chaos of the human body” (ibid.). As Dixon (2007) comments, “Troika Ranch conceptualised media-activating computer-sensing systems as something equivalent to a musical instrument”. The interactive sound system in Sensuous Geographies is also built in Max/MSP.

Given the background theories discussed and the related design work reviewed in this chapter, spatiality, dance and body movement, camera-assisted systems, and sonified live performance are closely linked with each other and constitute a complex, multi-strand system as a whole. With such a combination, an unconventional representation of space can be developed through body movements and responsive sound in real time.

  3. Approaches

3.1. Setting Space

The shape of the active space provided for visitors to interact in may suggest the basic direction of their movements; for example, a linear space is more directional in terms of which way to move. This project provides two shapes of active space: a rectangular space (directional) and a round/square space (non-directional). The space settings for these two differ slightly in the positions of the webcam and the speakers. The rectangular space is approximately 5 m × 2 m, although the actual size is flexible, depending on the focal length of the webcam and the size preferred for the performance. It requires four speakers (Genelec 8030C in this project) positioned in a line along one side of the site, about 1.8 m above the floor, and a webcam placed at the middle of the opposite side (see Figure 4). Alternatively, the webcam can be placed on the same side to keep the speakers out of the background. The four speakers are connected to a computer through a 4-channel soundcard, and the webcam is connected to the same computer, where Max/MSP is running (see Figure 4). Users can feel the generated sound “follow” their movements from one side to the other.

Figure 4: plan view of a rectangular space setting for performance

The round space is surrounded by eight speakers connected to an 8-channel soundcard, and the webcam hangs overhead, about 2.7 m above the site (see Figure 5).

Figure 5: plan view of a round space setting for performance

The size of the active space depends on the height and focal length of the camera. Hung overhead, the camera is not noticed by users at first, so they can focus more on the generated sound and their own movements. Alternatively, the speakers can be placed at the eight corners of a cubic space, providing a more immersive resonant space as the sound comes from different heights (see Figures 6 and 7). In this configuration, the speakers indicate users’ movements in a slightly different way, which is detailed in the next section.

Figure 6: plan view of an alternative square space setting for performance

Figure 7: a more immersive resonant space to interact
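To make the two settings easier to compare, the sketch below records them as configuration data, in Python purely for illustration. Only the figures stated above (the roughly 5 m × 2 m site, speakers at about 1.8 m, the overhead camera at about 2.7 m, and the channel counts) come from the text; all coordinates, names, and other values are hypothetical placeholders rather than measurements from the actual setup.

```python
# Illustrative configuration of the two space settings; values marked
# "assumed" are placeholders, not project measurements.
RECT_SPACE = {
    "site_m": (5.0, 2.0),                 # approx. 5 m x 2 m, directional
    "soundcard_channels": 4,
    # Four speakers in a line along one side, about 1.8 m above the floor
    # (x positions assumed):
    "speakers_xyz": [(0.5, 0.0, 1.8), (2.0, 0.0, 1.8),
                     (3.0, 0.0, 1.8), (4.5, 0.0, 1.8)],
    "camera_xyz": (2.5, 2.0, 1.5),        # middle of opposite side (height assumed)
}

CUBIC_SPACE = {
    "site_m": (4.0, 4.0),                 # square site (size assumed)
    "soundcard_channels": 8,
    # Eight speakers at the corners of a cube: four low, four overhead
    # (heights assumed for the cubic variant):
    "speakers_xyz": [(x, y, z) for z in (0.3, 2.7)
                     for x in (0.0, 4.0) for y in (0.0, 4.0)],
    "camera_xyz": (2.0, 2.0, 2.7),        # hangs overhead, about 2.7 m up
}
```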

 

3.2. Data Process in Max/MSP

Figure 8: 400 pixels separated into 16 areas

This section presents the algorithm of the developed system, using the square space (see Figures 6 and 7) as an example, as this is the latest version. The Max/MSP patch recognises the grayscale of the pixels in the live images captured by the webcam. In other words, the system reads only the grayscale value (an integer, 0-255) of each pixel, so the number of values obtained each time equals the total pixel count. To reduce the processing load, the resolution has been set to 20×20 pixels, giving 400 values per frame (at 24 frames per second). These 400 pixels are separated into 16 areas, as shown in Figure 8, and the 400 values are processed in two ways at the same time.

The first procedure compares the grayscale changes against the previous second (not the previous frame), calculating the mean grayscale difference over the whole frame and within each of the 16 areas. It is important to notice that this step has the same effect as a time-lapse photo, in which the transient movement-created space stays (refer to Figure 1). The system thus knows the highest difference value and which area it comes from; that area is identified as “the most active area”, and its activity level (together with the overall difference) controls the tempo of the generated sound.

In the second procedure, a list is created for each area. The 25 values from the most active area are selected, and their mean acts as a MIDI note number, which is turned into the carrier frequency of a synthesizer (also developed in Max/MSP), where it is modulated by the list of 25 values. In other words, the grayscale values of these 25 pixels decide the pitch of the note and “shape” it. This step is essential because, from a statistical point of view, the mean of a group of numbers indicates the overall activity level yet says nothing about how the values are distributed within the group. By modulating with the full list, each pixel speaks individually, and the same carrier frequency does not necessarily generate exactly the same sound. Meanwhile, the maximum and minimum values in the list also generate sound through a slightly different synthesizer, added to the output; this step, in a way, amplifies the character of the active area.

The output sound is then assigned to the eight speakers according to the index of the most active area. For example, if area_1 is the most active, the generated sound is output from the two speakers placed at that corner (one hanging overhead), with speaker_1 louder than speaker_2 (see Figure 9-a); area_6 triggers these two speakers the other way around (see Figure 9-c); for area_2 and area_5, the two speakers are equally loud. The other speakers are triggered in the same way.
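As a minimal sketch of the two analysis procedures, assuming a 20×20 grayscale frame held as a NumPy array and a one-second-old frame kept for comparison, the logic above could be written as follows. This is a Python illustration rather than the actual Max/MSP patch: the rescaling of the area mean onto a MIDI note is an assumption, while the MIDI-to-frequency conversion is the standard one. The speaker assignment is sketched separately after Figure 9.

```python
import numpy as np

def analyse_frame(frame: np.ndarray, frame_1s_ago: np.ndarray) -> dict:
    """Both inputs are 20x20 grayscale frames with values 0-255."""
    diff = np.abs(frame.astype(int) - frame_1s_ago.astype(int))

    # Procedure 1: mean grayscale difference overall and per 5x5 area
    # (the "time-lapse" comparison that reveals movement-created space).
    overall = diff.mean()                                    # drives the tempo
    areas = diff.reshape(4, 5, 4, 5).mean(axis=(1, 3))       # 4x4 area means
    r, c = np.unravel_index(areas.argmax(), areas.shape)     # most active area
    activity = areas[r, c]

    # Procedure 2: the 25 raw grayscale values of the most active area.
    cell = frame[5*r:5*r+5, 5*c:5*c+5].astype(float).ravel()
    midi_note = cell.mean() * 127.0 / 255.0                  # assumed rescaling
    carrier_hz = 440.0 * 2.0 ** ((midi_note - 69.0) / 12.0)  # MIDI -> Hz
    # The full 25-value list modulates the carrier, so identical means with
    # different distributions still sound different; min and max feed a
    # second synthesiser that amplifies the character of the area.
    return {
        "tempo_drivers": (overall, activity),
        "active_area": (int(r), int(c)),
        "carrier_hz": carrier_hz,
        "modulators": cell,
        "extremes": (cell.min(), cell.max()),
    }
```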

Figure 9: assigning speakers according to active area
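The speaker assignment of Figure 9 could be sketched as below: the 4×4 area index is mapped to the nearest corner’s pair of speakers (one low, one overhead). The gain values themselves are assumptions; only their relative ordering follows the description, with a rim area such as area_1 favouring the lower speaker, an inner area such as area_6 favouring the overhead one, and the areas in between driving both equally.

```python
def speaker_gains(r: int, c: int):
    """Map a 4x4 area index (r, c) to a corner and (low, overhead) gains."""
    corner = (0 if r < 2 else 1, 0 if c < 2 else 1)  # nearest corner of the cube
    # Steps inward from the corner: 0 on the rim (e.g. area_1),
    # 1 one step in (e.g. area_2 or area_5), 2 at the centre (e.g. area_6).
    depth = min(r, 3 - r) + min(c, 3 - c)
    gains = {0: (1.0, 0.6),        # rim: lower speaker louder
             1: (0.8, 0.8),        # in between: equally loud
             2: (0.6, 1.0)}[depth] # centre: overhead speaker louder
    return corner, gains
```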

Figure 10: logic of data process in Max/MSP

The brief description above is summarised in Figure 10, although some details related to data scaling are not included. For the result at the current stage, please refer to the project video (see Related Links).

Additionally, the synthesizers in this system are still under development in terms of tones and tunes, so the final effects may differ.

3.3. Limitations

In both the latest and the previous versions of this system, one obvious limitation is that colours are not recognised. This was decided at the beginning of programming to lower the data-processing load and complexity, because in colour (RGB) mode each pixel contains three values instead of one. Another reason is that the system was meant to avoid, in the first place, any distraction of colour effects on the interpretation of space and movement. However, some visitors argue that colours should also be interpreted, as the same grayscale value might stand for different colours. Besides, with colours engaged in the system, there would be more potential in terms of costume and scenes within a designed performance, which is discussed further in Chapter Four.
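A one-line conversion makes both points concrete: colour mode triples the data per pixel, and distinct colours can collapse to the same grayscale value. The weights below are the common ITU-R BT.601 luminance coefficients, used here only for illustration; the conversion inside the actual patch may differ.

```python
def luminance(r: int, g: int, b: int) -> float:
    """Collapse an RGB triple (three values) into one grayscale value."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# A saturated red and a plain grey read as (almost) the same grayscale,
# which is why visitors argued that colour should also be interpreted:
assert round(luminance(255, 0, 0)) == round(luminance(76, 76, 76)) == 76
```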

The restricted frame resolution is another limitation. Referring back to the idea of spatial and movement interpretation discussed in the previous chapters, it is important for the system to detect motion within the active area; however, the low resolution (Figure 11) can hardly meet this expectation. For example, the system is not yet sensitive enough to tell apart different movements taking place in the same position, and some invited dancers and artists felt that the sound output lacks breaking points when there is a sudden move in the real space. Therefore, increasing the resolution while keeping each pixel speaking for itself is one of the challenges for the next stage.

Figure 11: low-resolution image in current system

  4. Discussion

4.1. Comparison and Discussion

As theories and technologies from different subjects are engaged in this system, discussion spans cross-disciplinary domains. However, it is essential to review the system from the perspective of interaction and performance, as interaction in some sense takes place in any performance, yet not all performances are defined as interactive art. The discussion in this section therefore unfolds from the categories of interactivity within digital performance, with an emphasis on the levels of user interaction.

Both academics and artists have their own definitions of interactivity, emphasising real-time response, the interface, or the roles the participants play. Interestingly, Jaron Lanier (1996) uses a dance metaphor to explain the importance of interaction: interactivity is the way to “dance with the computer”. Bolter and Gromala (2003) suggest the notion of performance as a substitute for interaction in digital design: users enter into a performative relationship as they are “performing the design”. Dixon (2007) defines his categories by discerning how deeply users can “dance with the computer” and “perform the design”, and suggests four levels of interactivity in ascending order: navigation, participation, conversation, and collaboration. According to Dixon (2007), navigation means that visitors can make simple choices about moving to the next scene, all scenes being predetermined. Participation allows visitors to play with the system following some instruction, while conversation requires a reciprocating information exchange within the performance, meaning that participants undertake a dialogue with the system (ibid.). In collaborative interaction, users become major performers or designers of the performance or experience, together with the computer/virtual environment or other participants (ibid.).

Given the criteria above, both performers and beholders can discern a reciprocating dialogue during the interaction with this system, as analysis of the interaction process shows. When a dancer enters the active space (movement detected), the system gives its first sonic response, translated from spatial information such as the dancer’s position and moving speed. The dancer then makes another move, or a series of movements, to see how the system interprets his/her movements in this space, which leads to subsequent personal actions. This is similar to Warren’s “perception-action loop” discussed in Chapter Two, and it also makes the dancer the designer of the sonic space. Furthermore, the dancer and the computer start to “learn” each other’s language; some visitors even learnt to make a certain sound through such self-exploration.

However, it is hard to give a verdict on interactivity, because the depth of user interaction can be ambiguous in designed performances. In other words, having learnt the “language” of the system, the performer could technically move in a certain way to control the tempo and create a desired sound. This can be done by a single performer or by a group of performers arranged to move in a certain way, or to stand stationary, in the space. In this situation, the performers’ next movements are less likely to be affected by the sound output than in self-exploration, so their conversations with the space and the computer are slightly different and more debatable in terms of the depth of interaction.

On the one hand, performers are completing a task rather than self-exploring, as they are told what to do during the performance. Movements being predetermined, the interaction can hardly be a conversation even though the reciprocation of spatial information still exists. On the other hand, this predetermined mode can lead to a complex, collaborative interaction, allowing experts from different domains to design a performance. For example, a choreographer might design the movements for a dance performance following the responsive sound output, effectively creating a storyboard for the performance. This storyboard can then be used by an architect to design the background or a series of scenes that change during the performance; even a lighting designer could be involved, enhancing certain sonic effects by changing the lighting conditions. Such cooperation differs from conventional theatre performance because the sound output becomes unpredictable when different elements are detected by the system at the same time. As a result, the dancer might adjust his/her movements according to the live sound, and the other co-operators can adjust their actions accordingly, creating an increasingly complex loop. Although such a performance might verge on chaos, it is prearranged yet improvisational. More importantly, all co-operators take part in the design of the performance and modulate it in their unique ways before and during the performance, which can hardly be repeated. From this point of view, such performances can be considered a collaborative interaction.

4.2. Potentials of Collaboration

Although this system has not yet achieved the level of complexity that would allow the multiuser collaboration presented above, there is still potential to support certain types of collaboration while avoiding chaos. Different elements can be arranged properly by pre-programming them into the system. This means an agency links all the collaborators together, potentially providing a more desirable result in terms of spatial expression and sonification. This does not mean predetermining everything, but rather inviting experts from different backgrounds to be the designers of live performances; unpredictability can also be achieved through programming. Thus, architects, dancers (choreographers), and musicians (composers) can work together to design a live performance while maintaining the unpredictability of live interaction. This is, in a sense, the opposite way of applying the system compared with the self-exploring mode described in the last section, yet it does not go against the concept of spatial representation and human-computer interaction.

Another element that can be engaged is a projector. Some visitors suggested that the interaction would be more unpredictable if the live images from the camera were projected onto the surface of the performing space. For a more complex collaboration, this system could work with a sound-visualising system that visualises the sound output and projects it into the performing space, making the sound output an input for another process. This method does increase the level of interactivity by making the performing space itself dynamic and unpredictable, and more attractive for audiences as well; yet it remains debatable, as the visuals could distract both performers and audiences from focusing on the sonic interpretation.

  5. Conclusion

The rethinking of space and movement unfolds into an exploration of the theories of spatiality. From the biological findings of Newcombe and Huttenlocher (2000) to Warren’s “perception-action loop” (1995), it is self-evident that human action and physical space are deeply embedded in each other, and neither can be isolated when discussing the other. More importantly, spatial representation and reasoning is considered the ability through which people perceive the physical world and interact with it. Meanwhile, by looking into the features of contemporary dance movement, the implications of dancing have been brought into the scope of architecture, and spatial representation into dance performance. Furthermore, the collaboration between live performance and digital techniques provides great potential for unconventional experiences of spatial representation and interpretation, as discussed in Chapter One.

Inspired by the work Sensuous Geographies (Rubidge and MacDonald, 2004), this project brings camera-assisted interactive performance and a sonic ambience together, attempting to extend spatial perception through body movement and sound rather than sight, and to explore spatial representation in a sonic and transitory way rather than through typical architecture. As presented in this design report, both the physical settings and the digital process strive for this purpose. The real-time expression reinforces the importance of liveness, which also breaks the boundary between users and designers. Although some limitations have yet to be resolved at the current stage, it is clear that spatial interpretation can be achieved in a complex computer-assisted performance through dance and sonic output.

Some potentials for collaboration have also been discussed. Although the involvement of some elements remains debatable, experts from different domains are encouraged to bring their modalities of spatial representation together to design live performances, and also to alter their roles by creating sound or space through choreography, or by interpreting space through movements.

 

Bibliography

Bachelard, G., 1994. The Poetics of Space: The Classic Look at How We Experience Intimate Places. Boston, MA: Beacon Press.

Bandel J. B., Iarocci, L. and Strauss, D., 2013. Performing space: a centre for contemporary dance. Master dissertation, University of Washington.

Bolter, J. and Gromala, D., 2003. Windows and Mirrors. Cambridge: MIT Press.

Brennan, M., 2003. Performance Morphia Series/ Sensuous Geographies, The Arches, Glasgow, The Herald, 6 February. Available at: http://www.sensuousgeographies.co.uk/review.html (accessed 4 September 2018)

Broadhurst, S., 2008. Troika Ranch: making new connections — A deleuzian approach to performance and technology. Performance Research, 13(2), pp. 109-117.

Bronet, F. and Schumacher, J., 1999, Design in movement: the prospects of interdisciplinary design, Journal of Architectural Education, 53(2), pp. 97-109.

Castree, N., Kitchin, R. and Rogers, A., eds., 2013. A Dictionary of Human Geography, Oxford University Press. Available at: http://www.oxfordreference.com/view/10.1093/acref/9780199599868.001.0001/acref-9780199599868-e-1771 (accessed 6 August 2018).

Curl, J. and Wilson, S., eds., 2015. The Oxford Dictionary of Architecture, Oxford University Press. Available at: http://www.oxfordreference.com/view/10.1093/acref/9780199674985.001.0001/acref-9780199674985-e-6976 (accessed 6 August 2018).

Dixon, S., 2007. Digital Performance: A History of New Media in Theatre, Dance, Performance Art, and Installation. Cambridge: MIT Press.

Fajen, B. R. and Phillips, F., 2013. Spatial Perception and Action. In Waller, D. and Nadel, L., eds., Handbook of spatial cognition. Washington: American Psychological Association, pp. 67-80.

Ferguson, R.W. and Forbus, K. D., 1999. GeoRep: A flexible tool for spatial representation of line drawings, Proceedings of the Qualitative Reasoning Workshop. Loch Awe, Scotland.

Gavrilou, E., 2003. Inscribing structures of dance into architecture, Proceedings of the 4th International Space Syntax Symposium. London.

Gibson, J. J., 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

Hirtle, S. C., 2013. Models of spatial cognition. In Waller, D. and Nadel, L., eds., Handbook of spatial cognition. Washington: American Psychological Association, pp. 211-226.

Holl, S., Pallasmaa, J. and Pérez-Gómez, A., 2006. Questions of Perception: Phenomenology of Architecture (new ed.). San Francisco: William Stout.

Hornof, A., J., Rogers, T. and Halverson, T., 2007. EyeMusic: performing live music and multimedia compositions with eye movements, Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME07), pp. 299-300. New York, USA.

Krueger, M., Gionfriddo, T. and Hinrichsen, K., 1985. VIDEOPLACE: an artificial reality. ACM SIGCHI Bulletin, 16(4), pp. 35-40.

Lanier, J., 1996. “The Prodigy.” In Brockman, J., ed., Digerati. London: Orion Business Books, pp. 163-174.

Leach, J. and deLahunta, S., 2017. Dance becoming knowledge: designing a digital “body”, Leonardo, 50(5), pp. 461-467.

Lefebvre, H., 1991. The Production of Space. Translation of Lefebvre, H.,1974 La Production de L’espace, Oxford: Blackwell.

Lu, F., Zhou, B., Zhang, Y. and Zhao, Q., 2018. Real-time 3D scene reconstruction with dynamically moving object using a single depth camera. The Visual Computer, 34, pp. 753-763.

Lyons, M. J., Haehnel, M. and Tetsutani, N., 2003. Designing, playing, and performing with a vision-based mouth interface, Proceedings of the 2003 Conference on New Interfaces for Musical Expression (NIME-03), pp. 116-121. Montreal, Canada.

MacDonald, A., 2003. Sensuous Geographies. Available at: https://vimeo.com/123180961 (accessed 4 September 2018)

Maletic, V., 1987. Body — Space — Expression: The Development of Rudolf Laban’s Movement and Dance Concepts. Berlin; New York; Amsterdam: Mouton de Gruyter.

McCarthy, R., Blackwell, A., deLahunta, S. and Wing, A., et al., 2006. Bodies meet minds: choreography and cognition, Leonardo, 39(5), pp. 475-477.

Mohler, B. J., Luca, D. M., and Bülthoff, H. H., 2013. Multisensory Contributions to Spatial Perception. In Waller, D. and Nadel, L., eds., Handbook of spatial cognition. Washington: American Psychological Association, pp. 81-97.

Montello, D. R. and Raubal, M., 2013. Function and Applications of Spatial Cognition. In Waller, D. and Nadel, L., eds., Handbook of spatial cognition. Washington: American Psychological Association, pp. 249-264.

Morrison, G. D., 2004. Camera-based man-machine interface for computer application control. Proceedings of IS&T/SPIE Electronic Imaging. California.

Muybridge, E. and Taft, R. F., 1955. The Human Figure in Motion. London: Constable.

Newcombe, N. and Huttenlocher, J., 2000. Making Space: The Development of Spatial Representation and Reasoning. Cambridge, Massachusetts: The MIT Press.

Pallasmaa, J. M. N., 2009. The Thinking Hand: Existential and Embodied Wisdom in Architecture. Chichester: Wiley.

Papadopoulou, F. and Schulte, M., 2016. Movement notation and digital media art in the contemporary dance practice: aspects of the making of a multimedia dance performance. Proceedings of the 3rd International Symposium on Movement and Computing, pp. 1-7.

Paradiso, J. A. and Sparacino, F., 1997. Optical tracking for music and dance performance, Proceedings of the 4th conference on optical 3D measurement techniques. ETH, Zurich.

Paul, C., 2003. Digital Art, London: Thames & Hudson.

Philbeck, J. W. and Sargent, J., 2013. Perception of Spatial Relations During Self-motion. In Waller, D. and Nadel, L., eds., Handbook of spatial cognition. Washington: American Psychological Association, pp. 99-115.

Rubidge, S. and MacDonald, A., 2004. Sensuous Geographies: a multi-user interactive/ responsive installation. Digital Creativity, 15(4), pp. 245-252.

Senior, A. W. and Jaimes, A., 2010. Computer Vision Interfaces for Interactive Art. In Aghajan, H., Delgado, R. L. and Augusto, J. C., eds., Human-Centric Interfaces for Ambient Intelligence. Elsevier, pp. 33-48.

Tschumi, Bernard and Architecture Association, 1990. Question of Space: Lectures on Architecture. London: AA Publications.

Valenti, R., Jaimes, A. and Sebe, N., 2008. Facial expression recognition as a creative interface, Proceedings of the 13th International Conference on Intelligent User Interfaces, pp. 433-434. Gran Canaria, Spain.

Warren, W. H., Jr., 1995. Self-Motion: Visual Perception and Visual Control. In Epstein, W. and Rogers, S., eds., Perception of Space and Motion. San Diego: Academic Press, pp. 263-325.

Watford, R. R. et al., 2013. Architectural space: in search of sensory balance for contemporary spaces. PhD dissertation, California Institute of Integral Studies.

 

Image References

Figure 1: Bandel J. B., Iarocci, L. and Strauss, D., 2013. Performing space: a centre for contemporary dance. Master dissertation, University of Washington. - Movement-created space

Figure 2: Krueger, M., Gionfriddo, T. and Hinrichsen, K., 1985. VIDEOPLACE: an artificial reality. ACM SIGCHI Bulletin, 16(4), pp. 35-40. - Image collection of VIDEOPLACE

Figure 3: Paradiso, J. A. and Sparacino, F., 1997. Optical tracking for music and dance performance, Proceedings of the 4th conference on optical 3D measurement techniques. ETH, Zurich. - Flavia Sparacino demonstrating her DanceSpace Installation

Figure 4: Li, Y., 2018. Plan view of a rectangular space setting for performance

Figure 5: Li, Y., 2018. Plan view of a round space setting for performance

Figure 6: Li, Y., 2018. Plan view of an alternative square space setting for performance

Figure 7: Li, Y., 2018. A more immersive resonant space to interact

Figure 8: Li, Y., 2018. 400 pixels separated into 16 areas

Figure 9: Li, Y., 2018. Assigning speakers according to active area

Figure 10: Li, Y., 2018. Logic of data process in Max/MSP

Figure 11: Li, Y., 2018. Low-resolution image in current system

 

Related Links


https://vimeo.com/277989044

 
