Knowing where we are is rarely a problem for humans: our solutions range from relying on our basic senses of sight, sound and smell to using sophisticated technologies such as GPS. But how can a robot know where you are?
James George is a media artist and the developer of RGBD Toolkit, an experimental software for filmmaking with depth-sensor cameras. TrackStand is a spatial interaction project developed by James George and the Yamaguchi Center for Arts and Media (YCAM). All the on-screen particles and the sound are controlled by a timeline on the floor, which is triggered by the participant's position. Through his work and films, he addresses the emotional response to science-fiction technologies as they become reality.
James George developed ofxTimeline, an openFrameworks add-on that provides a user interface for a flexible timeline, enabling users to compose sequences of change over time, control various types of data including his RGBD format, and create time-based interactions. With just a few lines of code, users can add a visual editor that smoothly interpolates curves, colors, video, oscillators, audio and 3D cameras.
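The curve interpolation at the heart of such a timeline can be sketched in plain C++. This is an illustration of the underlying idea only, not ofxTimeline's actual API; the `CurveTrack` type and its method names are hypothetical:

```cpp
#include <algorithm>
#include <map>

// A keyframe track maps times (in seconds) to values; sampling it at an
// arbitrary time linearly interpolates between the surrounding keyframes.
struct CurveTrack {
    std::map<float, float> keys;  // time -> value, kept sorted by time

    void addKeyframe(float t, float v) { keys[t] = v; }

    float valueAt(float t) const {
        if (keys.empty()) return 0.0f;
        auto hi = keys.lower_bound(t);
        if (hi == keys.begin()) return hi->second;          // before the first key
        if (hi == keys.end()) return std::prev(hi)->second; // after the last key
        auto lo = std::prev(hi);
        float f = (t - lo->first) / (hi->first - lo->first);
        return lo->second + f * (hi->second - lo->second);  // linear blend
    }
};
```

A richer timeline would swap the linear blend for easing curves or splines and attach tracks for colors, cameras and audio, but the sample-between-keyframes pattern stays the same.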
This multi-touch, high-resolution, interactive floor/screen system provides a reactive, real-time user experience that can be incorporated into stage productions. Along with the fluid response comes accurate tracking of object positions: 64 built-in proximity sensors per panel give 6.25 cm precision, combined with an LED pixel pitch of 7 mm for high-resolution graphics. Participants can be as creative as they want. Moreover, the simple modular design is easy to install and maintain.
‘Sniff’ is another on-screen interactive project developed by James George. Sniff, a lovely pup, exists on screen and interacts with visitors by virtue of machine-vision-based sensing. The key feature of Sniff is that it can single out one of several visitors as its primary interaction partner, and it has a sophisticated behavioral memory. This last quality is particularly important, because it gets at the crux of how the 3D animated dog's reactions come across as intelligent.
Instead of using proximity sensors, our project Trespass uses an 8×6 grid of pressure pads to sense the participant's position in real time. Each pad continuously reports its pressure state: ‘0’ indicates that nothing is on the pad, while ‘1’ indicates that something (a person or object) is pressing on it. Because of the robot's weight, the four pressure pads in the central area always send ‘1’ to the Arduino. Once the robot knows where you are, it reacts by changing the direction or speed of its rotation.
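The sensing logic described above can be sketched in C++. The grid layout follows the text (8×6 binary pads, four central pads masked out as the robot's footprint), but the function names, the exact central-pad coordinates, and the rotation rule are assumptions for illustration:

```cpp
#include <cstdint>

const int COLS = 8, ROWS = 6;

// The four central pads (assumed here to be columns 3-4, rows 2-3)
// always read '1' because of the robot's own weight, so they are ignored.
bool isRobotPad(int col, int row) {
    return (col == 3 || col == 4) && (row == 2 || row == 3);
}

// Returns the column of the first pressed non-robot pad, or -1 if the
// floor is empty. A fuller version would debounce readings and track
// every occupied pad, not just the first one found.
int participantColumn(const uint8_t pads[ROWS][COLS]) {
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            if (pads[r][c] == 1 && !isRobotPad(c, r))
                return c;
    return -1;
}

// Hypothetical reaction rule: rotate toward the participant's side.
// +1 = clockwise, -1 = counter-clockwise, 0 = keep the current motion.
int rotationDirection(int participantCol) {
    if (participantCol < 0) return 0;
    return (participantCol < COLS / 2) ? -1 : +1;
}
```

On the actual hardware this scan would run inside the Arduino `loop()`, reading each pad's digital input pin instead of an in-memory array.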
A further development of this research would improve the robot's learning capabilities while taking full advantage of the pressure pads and touch sensors, achieving higher accuracy in tracking the positions and gestures of dancers. Finally, such techniques, if applied to the realm of architecture, could change the way we interact, co-exist and grow within our everyday individual and collective built environment.