



Combining mobile robotics and projection mapping, Phyxelbots explores the future of playful learning environments. Merging digital information spaces with physical spatial interaction, a community of kinetic interactive objects performs in response to human gesture and can also organise into pre-coded configurations. Experiments in the Lab have explored their potential applications as teaching aids: narrating stories, participating in group educational games (such as the colour mixing seen in the film), and serving as personal companion devices for children.


The project is a projected surface on a table, actuated by an object placed on it. Placing the object on the table creates a pattern known as a glider in John Conway's cellular automaton; it glides along the surface, meeting other gliders formed by the object being placed in different positions. The surface runs the Game of Life, and its previous steps remain visible at successive depths below the top surface. Creating gliders in various locations generates interesting patterns, including certain stable forms such as the block, blinker, and beehive. Interactions among these forms continue the game in response to the user's input.



Our study aims to understand past research on tangible interfaces and the history of artificial life.

Tangible Interaction

The history of tangible interaction begins in the 1990s. At the time, interactions between people and digital space were largely limited to the traditional Graphical User Interface (GUI). The notion of a 'Tangible User Interface' constituted an alternative vision for computer interfaces, one that brings computing back 'into the real world'.

One pioneering work is the Marble Answering Machine by Durrell Bishop (1992), in which a marble represents a single message left on the answering machine. Another early work is the DigitalDesk by Wellner, which showed a way to merge physical and digital documents using video projection.

In 1997 the term "Tangible User Interface" was coined by Hiroshi Ishii, head of the MIT Tangible Media Group. His theoretical work "Tangible Bits" allows users to grasp and manipulate bits through physical objects. In 2003, a 3D tangible interface system was produced by the Tangible Media Group, consisting of a ceiling-mounted laser scanner and a computer projector.

In performance, the Reactable is a well-known installation with a tabletop Tangible User Interface; the table itself is the display, and as a tangible is placed on it, various animated symbols appear. Commercially, Microsoft has been making an interactive surface called Microsoft PixelSense since 2007, allowing people to touch and share information.

Part of the tangible interaction design community focuses on adding a feedback loop to computer output by using magnetic forces to move objects on a table in two dimensions. In 2013, another three-dimensional tangible interface was developed, called inFORM: a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way.

Since 2007, tangible interaction has had its own conference, Tangible and Embedded Interaction (TEI). The TEI conference focuses strongly on how computing can bridge atoms and bits into cohesive interactive systems. Several design groups have now started working in this area.


Artificial Life

There has always been a desire to capture the generative and emergent qualities of nature. Artificial life studies the logic of life-like systems in artificial environments in order to understand the complex information processing that defines such systems. Examples of mimicking life existed in the pre-computer era as well: Vaucanson's digesting duck exhibited life-like metabolism, though it was designed merely to mimic it, and automatons were built using mechanical, pneumatic, and hydraulic means.

John von Neumann was one of the earliest thinkers on artificial life. He described an automaton as a machine whose behaviour proceeds logically, step by step, combining information from the environment with its own programming. He devised a logical automaton based on cells with 29 possible states on an infinite grid, far more complicated than later cellular automata. An example of physically instantiated artificial life from this period is Grey Walter's autonomous robots, called tortoises, which were capable of moving in response to light stimuli.

In 1970 John Conway made his own version of cellular automata and called it the Game of Life. He applied simple rules repeatedly to create successive generations forming complex patterns, providing an example of emergence and self-organisation. Christopher Langton officially coined the term "artificial life" in the late 1980s; he created the ant and loop artificial-life simulations based on self-reproducing cellular patterns.

Stephen Wolfram investigated the elementary cellular automata and built a classification scheme for the complexity of their behaviour. In his book A New Kind of Science he explores how simple computational programs can lead to unimagined complexity, applying this to natural patterns such as seashell markings and plant growth, which relate to the study of fractals. Craig Reynolds created the Boids artificial-life simulation to produce recognisable flocking behaviour in a computer program using a minimum of rules. These self-organising theories have been applied to research on the behaviour of social insects, where individual actions are dictated by those of the neighbours.

In the early 1990s Thomas Ray developed Tierra, a computer simulation in which computer programs compete for CPU time and memory space. These programs are considered evolvable: they can self-replicate, mutate, and recombine.

Since then, artificial life has had practical and theoretical implications in virtual reality, robotics, real-time computer graphics, game animation, art, and interactive installations.



We worked on an elementary automaton with a grid of 8 cells, each cell having two states, 0 or 1. The state of each cell in the next generation is defined by the states of its two adjacent cells, called neighbours. The rules are: if exactly one neighbour is in state 1, the cell becomes (or stays) 1; if both neighbours are in the same state, whether 0 or 1, the cell becomes 0, regardless of its previous state. We made a version of this automaton with different starting conditions to explore the emerging patterns.
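The rule described above (a cell becomes 1 exactly when its two neighbours differ) can be sketched in a few lines of Python. The wrap-around boundary is an assumption; the write-up does not say how the edges of the 8-cell grid were handled.

```python
def step(cells):
    """One generation: a cell becomes 1 iff its two neighbours differ (XOR)."""
    n = len(cells)
    # Wrap around at the edges (an assumption; the boundary rule is not stated)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

# A single live cell on the 8-cell grid
gen = [0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(3):
    gen = step(gen)
```

With both neighbours equal the XOR yields 0, and with exactly one live neighbour it yields 1, matching the two rules above; this is the elementary automaton known as Rule 90.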

We explored John Conway's 2D CA, in which the state of each cell is defined by the states of its 8 neighbours: a cell dies if it has fewer than 2 or more than 3 live neighbours, and a dead cell comes alive when it has exactly 3. We explored this CA using a pattern called the glider, which appears to move across the screen with each step in time. Starting from a random pattern, the grid displays different configurations in each generation; some stabilise while others move, merge, or grow.
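The birth/survival rule and the glider can be sketched with a sparse set-of-live-cells representation, a common implementation choice but not necessarily the one used in the project:

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation over a sparse set of live (x, y) cells."""
    # Count how many live neighbours every candidate cell has
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
# After 4 generations the glider has shifted one cell diagonally
```

The same function reproduces the stable forms mentioned earlier: a block, for example, maps to itself under `life_step`.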

We tried a 3D CA as well, applying different sets of rules to observe the changing generations. Here each cell's future state is defined by the states of its 26 neighbours.
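A 3D variant only changes the neighbourhood: each cell looks at the 26 cells of its 3D Moore neighbourhood. The birth and survival counts below are hypothetical placeholders, since the write-up does not record which rule sets were tried:

```python
from collections import Counter
from itertools import product

def step3d(live, birth={4}, survive={4, 5}):
    """One generation of a 3D CA over a sparse set of live (x, y, z) cells.

    birth/survive counts are illustrative placeholders, not the project's rule.
    """
    # Count live neighbours over the 26-cell 3D Moore neighbourhood
    counts = Counter((x + dx, y + dy, z + dz)
                     for (x, y, z) in live
                     for dx, dy, dz in product((-1, 0, 1), repeat=3)
                     if (dx, dy, dz) != (0, 0, 0))
    return {c for c, n in counts.items()
            if (n in birth and c not in live) or (n in survive and c in live)}
```

Trying different `birth`/`survive` sets is exactly the "different sets of rules" experiment described above.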

We experimented with object tracking using markers and colour, to find which works best in dim light. Colour tracking seemed the better option for the dimly lit environment our project is based in. We also experimented with head tracking using a Kinect, in order to modify the observer's perspective view of the game according to the position of their head.
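The essence of colour tracking (threshold pixels near a target colour, then take the centroid of the matches) can be sketched without a vision library; in practice a library such as OpenCV would do this, and the per-channel tolerance here is an illustrative assumption:

```python
def track_colour(frame, target, tol=60):
    """Return the centroid (x, y) of pixels within `tol` per channel of `target`.

    frame: 2D list of (r, g, b) tuples; returns None when nothing matches.
    The tolerance of 60 is an assumed value, not the one used in the project.
    """
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if abs(r - target[0]) <= tol
            and abs(g - target[1]) <= tol
            and abs(b - target[2]) <= tol]
    if not hits:
        return None
    return (sum(x for x, _ in hits) / len(hits),
            sum(y for _, y in hits) / len(hits))
```

Thresholding on colour distance rather than on marker geometry is what makes this approach tolerant of dim lighting, provided the tracked colour remains distinct from the background.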


  1. Levy, S., 1992. Artificial Life: The Quest for a New Creation. London: Jonathan Cape.
  2. Penny, S., 2009. Art and artificial life: a primer. Digital Arts and Culture 2009.
  3. Shiffman, D., Fry, S. and Marsh, Z., 2012. The Nature of Code. D. Shiffman.
  4. John Conway's Game of Life, available at: (accessed on: 21/03/2016)
  5. Tangible Media Lab, available at: (accessed on: 21/03/2016)