
Bartlett School of Architecture, UCL



Aura of Jianghu


Today, as new technologies continue to emerge, we as architects no longer merely construct traditional spaces and functions; we re-architect the interactive logic of space, perception, memory, and time. How perception fuses things that coexist in the same time-space, and how memories and spaces from different time-spaces blend together, define an interactive space that needs to be restructured.

What is the future of interactive architecture? It certainly does not lie in function or technology alone.

1. “Aura” - the interaction of the perceived temperament of things

The space in which the “temperament” of things and people is understood and expressed is what I call the “aura”.

It refers to the influence that the temperament of people and things exerts on their surroundings, and to the spatial and sensory effects produced when an event fluctuates; it can reflect an individual’s current emotions and thinking.

Studying it can provide theoretical support for the subsequent application of interactive space.

2. Cooking - temperament, venue, time

Cooking is the most everyday of activities, yet it contains countless pieces of temperamental information: the locality of the ingredients, for example, expresses their place of origin.

Studying the steps and processes of an individual’s cooking can reveal this “field” well.

2.1 The process of making Mapo Tofu - sound - an experiment in generating a new city model

In this experiment, I captured the sounds of the Mapo Tofu-making process, matched them perceptually with the architectural functional modules of the city, and then reordered the sounds to encode a new city model.

The experiment shows that sound is a good medium for perceiving the temperament of food, people, and the city.

3. Sound - the medium for identifying “aura”

During cooking, sound carries many kinds of information: spatial changes, directions, speeds, accelerations, materials, and the steps of each action.
I use sound as a medium for identifying the “aura”, in order to study the personal temperament and emotional field that people express when cooking.

I use a contact microphone to capture the sounds of different knives cutting different foods on a cutting board, and study the information in the spectrogram in order to match it with information about people moving through space.
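The spectrogram analysis can be sketched with a short-time Fourier transform. The signal below is a synthetic stand-in for a contact-mic recording (a board resonance plus an impact transient), not real experimental data:

```python
import numpy as np

def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform:
    rows are frequency bins, columns are time frames."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, n_frames)

# Synthetic stand-in for a knife stroke on a cutting board:
# a 1 kHz resonance with a sharp broadband burst at the moment of impact.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
sig = 0.3 * np.sin(2 * np.pi * 1000 * t)
sig[2000:2050] += np.random.default_rng(0).normal(0, 1, 50)  # impact burst

S = spectrogram(sig)
peak_bin = S.mean(axis=1).argmax()
peak_freq = peak_bin * fs / 256
print(S.shape, peak_freq)  # dominant energy at the 1000 Hz resonance
```

In the actual study, features like the dominant frequency (material resonance) and the timing of transients (knife strokes) would be read from such a spectrogram and matched to movement information.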

3.1 Interactive machine learning can recognize the sounds of different cutting actions in real time and match them to the corresponding actions

By studying Max/MSP’s MuBu system, I tested how reliably the sounds of different vegetable-cutting actions can be recognized, and then, through post-synthesis, obtained the sound effects produced by new actions.
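MuBu itself runs inside Max/MSP, but the recognition step can be sketched as nearest-neighbour matching on spectral features. The cutting-action classes and synthetic sounds below are assumptions standing in for real contact-mic training recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000

def make_cut(freq, n=1024):
    """Synthesize a toy cutting sound: a decaying tone at a hypothetical
    board resonance, plus a little noise."""
    t = np.arange(n) / fs
    return np.exp(-20 * t) * np.sin(2 * np.pi * freq * t) + rng.normal(0, 0.01, n)

def features(sound):
    """Normalized average magnitude spectrum - a crude spectral fingerprint."""
    mag = np.abs(np.fft.rfft(sound))
    return mag / np.linalg.norm(mag)

# One labelled example per cutting action (action names are illustrative).
train = {
    "slice": features(make_cut(600)),
    "dice": features(make_cut(1200)),
    "chop": features(make_cut(2400)),
}

def classify(sound):
    """Match an incoming sound frame to the nearest trained action."""
    f = features(sound)
    return min(train, key=lambda k: np.linalg.norm(train[k] - f))

print(classify(make_cut(1200)))  # → dice
```

A real-time version would run `classify` on each incoming audio frame, which is roughly what the MuBu recognition patch does with richer features and temporal modelling.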


4. Externalizing the “Aura”: Cooking Movement + Wuhan City Story + AudioVisual

DanJ = Dancer + DJ + VJ experiment

Everyone’s ability to perceive the aura of things is limited, so I explore ways of externalizing it.
Food, space, events, time, actions, sounds, and the aura of the city will be integrated in my future experiments.

5. Wuhan Hot-Dry Noodles Experiment

Was COVID-19 the first time you heard of the city of Wuhan?
As a Wuhan native living in London, I feel the world’s misunderstanding of this city: it is not a demonized hometown of bat-eating.
It is an international cultural city with a rich heritage and one of the largest concentrations of universities in China.
Sitting on the Yangtze River, Wuhan’s overall “gas field” (aura) is straightforward.

Hot dry noodles are every Wuhan person’s breakfast, and the process of making them can show this kind of “gas field”.

I capture the sounds of making hot dry noodles and match them to the hand movements of each step of the process.
The dancers are given the right to express these sounds, together with the urban sounds of Wuhan.
The textures of the steam, the Yangtze River, and the seasoning particles of the hot dry noodles are expressed around the dancer as abstract elements.

This gives the dancers more ability to develop their sense of movement.

In terms of technology, I initially separated the visual and sound effects:
Visual: Kinect v2 + TouchDesigner + smoke that moves with people
Audio: Genki Instruments + Ableton Live + Wuhan dialect

The problem is that there is considerable visual and audio latency, which instead delays the dancer’s expression of temperament. Using sound to recognize movements, and combining movements with sound and visual elements, is a good way to express aura; technically, however, the latency needs to be reduced to about 0.2 s, the human body’s reaction time, to achieve a good sensory effect. In the future, I will try VIVE controllers and motion-capture suits to better capture the effect of the aura.
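The 0.2 s budget above can be framed as a simple end-to-end check over the pipeline stages. The per-stage delays below are illustrative stand-ins, not measurements of the actual Kinect + TouchDesigner + Live chain:

```python
import time

# Sketch of the latency-budget check: the whole chain
# (capture -> recognition -> render) must stay under ~0.2 s,
# roughly human reaction time. Stage delays are hypothetical.
REACTION_BUDGET = 0.2  # seconds

def measure(stage_fn):
    """Time one pipeline stage with a monotonic high-resolution clock."""
    start = time.perf_counter()
    stage_fn()
    return time.perf_counter() - start

# Hypothetical per-stage delays for a tracking + audiovisual chain.
stages = {
    "capture": lambda: time.sleep(0.03),
    "recognition": lambda: time.sleep(0.05),
    "render": lambda: time.sleep(0.04),
}

total = sum(measure(fn) for fn in stages.values())
print(f"total latency {total:.3f}s, within budget: {total <= REACTION_BUDGET}")
```

In practice each stage would be timestamped in the real pipeline; the point of the check is that the stage delays must sum under the reaction-time budget, not each individually.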