Reactor for Awareness in Motion Workshop

Tools that assist the composition of choreography have always been an important part of the creative process in dance. Terms like space, body, time and geometry have been used and abused repeatedly by choreographers, dancers and artists in order to trigger new ways of thinking, moving and composing, especially in improvisation techniques. From Rudolf Laban and his “devices” (e.g. isolation, contact, kinespheric space) to the newest digital choreographic interfaces (by Trisha Brown, William Forsythe, Wayne McGregor and others), it has always been an exciting task for choreographers and their teams to discover new tools that can assist and extend thinking within the field of choreography.

The above-mentioned terms (space, body, time and geometry) have also been used by architects in previous attempts to describe architecture as a non-static, flexible, kinetic subject matter. One of our research fields in the IALab aims to explore new ways of thinking and designing in architecture with the help of tools used in dance and choreography. Last month at Watershed, I had the pleasure of participating in a two-day workshop with the Yamaguchi Center for Arts and Media (YCAM) and a diverse set of creatives (dancers, choreographers, coders) in an attempt to discover the potential of the Reactor for Awareness in Motion (RAM) toolkit for the first time in the UK.


The Reactor for Awareness in Motion (RAM) Dance Toolkit is a creative coding application for dancers and choreographers developed by the Yamaguchi Center for Arts and Media and Yoko Ando, a dancer from The Forsythe Company. It provides a graphical user interface (GUI) for accessing several digital environments (“scenes”) for dancers. Using motion sensors, it re-creates abstract ideas of a tracked body under different digital environmental conditions. It enables, affects or comments on choreography and dance movement by giving real-time feedback, with scenes written as relatively simple code in openFrameworks (C++).
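For the technically curious, a custom scene is essentially a small C++ class that the toolkit calls every frame with the latest motion-capture data. The sketch below is a minimal illustration only: the class, header and method names (ramBaseScene, ramActor, drawActor and so on) are assumptions based on the toolkit's published examples, so check the current RAM Dance Toolkit source before relying on them; the drawing calls themselves are standard openFrameworks.

```cpp
// Minimal sketch of a custom RAM scene: draw a small cube at every tracked
// joint of each actor. Class, header and method names are assumptions based
// on the toolkit's examples; verify against the RAMDanceToolkit source.
#include "ramMain.h" // RAM umbrella header (name may differ between versions)

class JointCubes : public ramBaseScene
{
public:
    std::string getName() const { return "Joint Cubes"; }

    void setup()  {}
    void update() {}
    void draw()   {}

    // Called once per tracked body per frame with the latest mocap data.
    void drawActor(const ramActor& actor)
    {
        ofSetColor(255);
        for (int i = 0; i < actor.getNumNode(); i++)
        {
            const ramNode& node = actor.getNode(i);    // one joint
            ofDrawBox(node.getGlobalPosition(), 5.0f); // small cube at the joint
        }
    }
};
```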

Credit: YCAM Interlab

During the workshop, Lisa May Thomas (dancer and choreographer) and I were invited to experiment with the existing virtual environments (scenes) and to explore the possibilities of creating new “scenes” in the software, according to our own interests and research. Here is a taste of what we did in those two days:

 


We started by altering the existing scenes in the system and playing with terms like touch, dialogue, negative-positive space, permanent-temporary traces, tuning/unison moments and memory of the body. Some of those scenes included unnaturally extended arms, multiplication of ourselves and realisation of unison movement, control of active objects on screen (rotating cubes connected to different joints of our bodies), connected loops and lines, floors rotated into walls (creating digital space that does not exist in physical space), and the sculpting of space with our exterior or interior body lines.
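To make the “rotating cubes connected to joints” idea concrete, here is a small, hypothetical openFrameworks helper (not the toolkit's own scene code) that spins a wireframe cube around a single joint position, such as a hand or a knee, supplied as a 3D point.

```cpp
// Illustrative helper, not RAM's own scene code: spin a wireframe cube around
// one tracked joint position (e.g. a hand or knee), passed in as a 3D point.
#include "ofMain.h"

void drawSpinningCube(const glm::vec3& joint, float size)
{
    ofPushMatrix();
    ofTranslate(joint);                                 // move to the joint
    ofRotateDeg(ofGetElapsedTimef() * 90.0f, 0, 1, 0);  // ~90 deg/sec around Y
    ofNoFill();
    ofSetColor(0, 200, 255);
    ofDrawBox(0, 0, 0, size);                           // cube centred on the joint
    ofFill();
    ofPopMatrix();
}
```

Called from a scene's per-actor drawing routine, a helper like this gives the dancer an active object that follows and orbits their movement in real time.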

Credit: Akiko Takeshita, YCAM

In one of the scenes (called Monster), parts/joints of our bodies (arms, knees, legs, pelvis, neck etc.) were connected in a random way, resulting in two asymmetrical stick figures. First, we had to identify which part of our body was moving in the physical world and, correspondingly, in the digital one (for example, my arm appeared as Lisa’s long neck and her calf was my torso). Then we tried to reach a tuning/unison moment by moving the same part of our on-screen bodies simultaneously. Our synchronised movement in the virtual world was not synchronised in the physical world. It was a moment of exploring an extremely uncanny, hybrid version of ourselves and the effect it had on our movement, synchronisation and self-perception.
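The rewiring behind that effect can be pictured as a fixed random permutation of joints. The sketch below is not YCAM's Monster implementation, just a plain openFrameworks illustration of the idea: shuffle the joint order once, then draw a hybrid figure whose segments run from one dancer's joints to the other dancer's randomly re-assigned joints.

```cpp
// Sketch of the idea behind the "Monster" scene (not YCAM's implementation):
// shuffle the joint order once, then draw a hybrid figure whose segments run
// from dancer A's joints to randomly re-assigned joints of dancer B.
#include "ofMain.h"
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

class HybridFigure
{
public:
    // Build one fixed random re-wiring for a given number of joints.
    explicit HybridFigure(size_t jointCount)
        : mapping(jointCount)
    {
        std::iota(mapping.begin(), mapping.end(), 0);
        std::shuffle(mapping.begin(), mapping.end(),
                     std::mt19937{std::random_device{}()});
    }

    // One line segment per joint: A's joint i connects to B's joint mapping[i].
    void draw(const std::vector<glm::vec3>& dancerA,
              const std::vector<glm::vec3>& dancerB) const
    {
        ofSetColor(255, 80, 80);
        for (size_t i = 0; i < mapping.size(); i++)
        {
            if (i < dancerA.size() && mapping[i] < dancerB.size())
                ofDrawLine(dancerA[i], dancerB[mapping[i]]);
        }
    }

private:
    std::vector<size_t> mapping; // fixed random joint re-assignment
};
```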

Credit: Yohei Miura, YCAM

Several times we came to the realisation that things that are possible in digital space meet physical limits in the real world. In one of our “scenes” we created a wall between us that was not apparent in physical space. We experimented with terms like contact, touch, and virtual contact versus physical distance, and vice versa. We were thinking with our bodies whilst trying to follow the digital rules.
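As a sketch of how such a purely digital boundary might be drawn (an assumed geometry, not the scene we actually built), the helper below places a translucent panel between the dancers and turns it red whenever any tracked joint crosses to the “wrong” side.

```cpp
// Hypothetical sketch of a purely digital wall (assumed geometry and units,
// not the workshop's scene): a translucent panel at x = wallX that turns red
// whenever any tracked joint of either dancer crosses to the wrong side.
#include "ofMain.h"
#include <vector>

void drawVirtualWall(const std::vector<glm::vec3>& dancerA,
                     const std::vector<glm::vec3>& dancerB,
                     float wallX = 0.0f)
{
    // Dancer A is expected to stay on the negative-x side, dancer B on positive-x.
    bool breached = false;
    for (const auto& j : dancerA) if (j.x > wallX) breached = true;
    for (const auto& j : dancerB) if (j.x < wallX) breached = true;

    ofPushMatrix();
    ofTranslate(wallX, 100, 0);            // panel centred above the floor (assuming cm)
    ofRotateDeg(90, 0, 1, 0);              // orient the panel across the x axis
    ofEnableAlphaBlending();
    ofSetColor(breached ? ofColor(255, 0, 0, 120) : ofColor(255, 255, 255, 60));
    ofDrawRectangle(-150, -100, 300, 200); // translucent rectangle, roughly 3 m x 2 m
    ofPopMatrix();
}
```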

Credit: Yohei Miura, YCAM

We were constantly shifting the priority in our decisions between the digital and the physical space. We observed the effects on our movement, our decision-making, our perception of the other dancer, and our perception of ourselves and the space around us. We were aware of the interaction in both the physical and the digital world, and our senses were stimulated on both levels. We could choose to react to the cues between us in the actual physical world or to follow the lines/shapes/results generated by the bodies on the screen.

Finally, we observed the importance of the audience. We finished the workshop with a Lunchtime Talk Showcase, sharing the workshop process with an audience at Watershed. Some scenes allowed space for narrative, whilst others were entirely abstract. When the screen presented the energy of the two dancers with simple graphics and abstract shapes, the audience was more drawn to the real world. When the screen presented something more permanent, like an architectural form or a body sculpture in space, the interest of the audience shifted back to the screen more often, and the audience became less interested in the movement quality and more excited about the story/narrative. We were inviting the spectators into the loop of physical versus digital thinking by shifting and altering their attention between physical and digital movement.

The notion of digital versus physical, with regard to improvised choreographic composition and new thinking processes, was explored in an entirely new way during the course of the workshop. It is hoped that this is the start of a much longer process with the RAM software and of an ongoing collaboration between Watershed, YCAM and the IALab.

Credit: Yohei Miura, YCAM

Credit: Verity McIntosh, Watershed

 
