Interaction Dynamics of Behaviour: An artificial life approach to augmented space design
The research is an attempt to embed intelligence in spaces combining virtual and physical media using an artificial life approach. The report looks into the design of objects that can sense their use, analyse their context and act instantaneously.
The aim of the project is to build autonomous, physically reconfigurable modules which develop their behaviour while interacting with the environment. The study of artificial life acts as a means to impart decision-making ability to these modules. To this end, theories of artificial life and tangible interaction have been studied. Project references are drawn from the fields of interactive architecture, robotics, cybernetics and tangible media.
Spaces are increasingly being dominated by devices and technology, and the report questions our lifestyle in such a device ecology. The theory informing the design is based on virtual augmentation of space and our interaction with this virtual world through tangible responses, leading to a physical, dynamic, behavioural design approach.
Further, the report analyses two forms of artificial life, virtual and physical agents, to understand the principles of building them. These principles are developed through previous projects and experiments and then carried into the final project. Through experimentation, it is learnt how to build emergent properties and generate behaviour to create a dynamically adaptive system.
Finally, a layered design method is used to design Physical Pixels, a modular self-organising system with a virtual representation of data, applicable to a playful learning environment. Further developments and applications of the project are discussed to envisage the design method in a larger architectural context.
In the 1960s, when Brodey wrote about the design of intelligent environments, an idea that he called soft architecture (Brodey, 1967), public familiarity with computing was limited. But his idea was envisaged for a context where the ecology is governed by computers and devices. With the growing number of devices, embedding intelligence in architectural spaces becomes essential to building more resilient and adaptive systems.
Intelligence has been defined in various ways by researchers in artificial intelligence, psychology and cognitive science. The most comprehensive definition in my view is by J. Peterson, who defines it as the ability to adapt oneself by responding to a complex stimulating environment in the form of a unified behaviour (Pfeifer et al., 2001). This definition of intelligence finds similarity with Gordon Pask's conversation theory, in which a description is derived from a constant communication feedback loop between the object and its environment (Chaturvedi, 2016). The environment, or context, therefore becomes crucial when defining intelligence in architecture. Since the future environment consists of a growing digital ecology, building a relationship between the digital (also referred to as 'virtual') and the physical is a major area of study for this report.
Another important aspect of this definition of intelligence is the 'unified behaviour'. Behaviour, being an observed rather than specified property, is difficult to predefine in the system, though it should be a governing aim of the design (Bondin, 2013). This behaviour-based approach to studying intelligence has opened up 'an artificial life route to artificial intelligence', also referred to as the 'bottom-up design' method (Steels, 1993).
The aim of this project is to build a responsive behavioural system that adapts to a changing environment, interacts with the user, and is coherent with an artificial living system. Langton's view of a living system as a physical manifestation of information processing (Levy, 1992) further strengthens the coupling of physical and digital processes.
1.1 Questions/Hypothesis
The growth of computation has rearticulated our way of living. This influences architecture and its elements; the report therefore focuses on ways to design a dynamic system which can adapt to changes in its physical space. Moreover, our relation to the digital world is disconnected from the physical space, so the report questions human interaction in this device ecology to establish a relationship between physical and digital elements.
Robotics and artificial life theory offer instances of real-time response to the surroundings; they can therefore be effective media through which to study and apply design methods to architecture.
1.2 Aims of the project
The major objectives of the design project are to achieve responsiveness, adaptability and reconfigurability in design.
Responsiveness is explored as the ability to engage the user in a physical space augmented with virtual data. Objects situated in the environment act with immediate reflex to changing situations, developing a stronger interaction with people.
Adaptability is explored as the behaviour of modifying states, providing decision-making ability in the space.
Reconfigurability is perceived as an emergent quality of a living system. Comprehensive self-organisation through a bottom-up approach, however, remains an ambitious objective.
1.3 Scope and limitations of the project
Scope: The scope of the report is to explore a method, based on an artificial life approach, for augmenting spaces. The study builds artificial life objects through research and experimentation, develops principles from them, applies these principles to a prototype exhibiting intelligence, and speculates on their application to a wider architectural context in future.
The dynamism of the environment as discussed in this report is mostly user-driven, within interior built environments or controlled outdoor spaces.
Limitations: The study of intelligence has a wide knowledge base across different subjects. Since this report is a study in architectural design, the nature of intelligence and its relevance to biology, psychology and the philosophy of mind are beyond its scope. The augmentation discussed is a computational interface in physical space, not to be confused with augmentation through wearable devices. The term 'virtual' as interpreted in this report is explained in section 2a.
Since the project is specific to the environment it dwells in, a playful learning environment was chosen to base the study on and derive context from. The study uses the learning space as an environment to test the project. The report does not analyse or substantiate any psychological, physiological or social impact on the users of this environment.
The dream of embedding intelligence in everyday objects has been pursued since the 1990s. Today we have a larger networking infrastructure and greater public familiarity with computing, and the number of devices in our living ecology will continue to grow. As the world becomes more connected through computation, our relation to space has extended globally through virtual screens.
2.1 Virtual spaces
Virtual here refers to the representation of information that surrounds us. It sits close to Benedikt's interpretation of the virtual in cyberspace: a computer-sustained, computer-accessed and computer-generated reality (Benedikt, 1994). The virtual is perceived through actions and characters made of data and information. Access to this space is usually via computer screens or wearable devices, but augmented spaces allow users to stay in their own physical world and yet perceive the virtual.
Ambient Room demonstrates an augmented wall in a space
This is evident in the Ambient Room, a project by the MIT Media Lab (Ishii et al., 1998). Here, activity in an atrium elsewhere in the building is represented on an augmented wall in a room. The project demonstrates a visual representation of information which can be accessed by any person in the room without having to physically go to the atrium.
Designing a virtual world can also involve designing its inherent objects, which have certain properties. These properties establish the relationships between the objects. A circle in virtual space can be considered an object and related to a ball in three-dimensional space; its location could be a property defining its position with respect to time. Another important aspect of designing such virtual spaces is the physics governing activity in the space. Since these spaces need to interact with people, their physics is usually based on real-world physics. Human beings learn the cause-and-effect rules of how reality works from an early age, and they tend to apply those rules in the virtual world unless a conscious action tells them otherwise (Bartle, 2004).
Tangible interaction at the Augmented shadow exhibit
Augmented Shadows (Moon et al., 2010) uses these principles to build up the narration of a story. Here, the visual representations of houses, people and trees have been assigned properties to appear in relation to the other virtual objects. The physics can be attributed to the movement of virtual creatures according to the position of physical objects in the space.
2.2 Interacting with virtual space
Virtual worlds create experiences using their own rules and characters that play their roles within the environment. They are actuated by a user but tend to live in their own virtual spaces without much interaction with the physical dynamics around them, limiting themselves to visual and auditory appeal. Since they are computing platforms, tangible human interaction is limited to control through a keyboard or mouse: clicking, pressing or touching the screen. Augmented Shadows (Moon et al., 2010) is exemplary in this respect: it allows users to physically manipulate objects to participate in the narration and create their own version of the story.
Tangible interaction with Reactable, an electronic music builder
This approach is further explored in the Reactable (Jorda et al., 2007) (Figure 3), where a user can not only move tangible objects but also interact with the visuals on the table using fingertips to create music. Both of these projects allow users to stay connected to physical objects while exploring the virtual, though the connection is merely moving objects on a table.
Informal education experience through augmented sandbox
The Augmented Reality Sandbox (Reed et al., 2014) takes tangible interaction further by allowing users to play with sand particles, creating valleys and mountains and watching their actions in physical space transform the virtual world. Involving users in such an augmented experience extends their spatial boundaries to include the virtual. The experience incorporates the virtual space into the physical environment, but this one-way communication cannot be interpreted as intelligent behaviour. Referring back to Pask's conversation theory (Spiller, 2002), intelligence is derived from a constant feedback loop between the objects and the environment. Objects here are interpreted in their physical manifestation, informing the environment, which is the virtual space. To complete the feedback loop, the physical objects must also be informed by the virtual world. This requires an actuation process for the physical space to be initiated via computation.
Computational control of physical spaces has been widely explored. The Generator project (Price, 2002) is one of the earliest proposals for a reconfigurable and responsive architectural space. The building was expected to control its own organisation in response to the user; if not changed, it would become bored and propose an alternative arrangement. The project was intended to show intelligence in space using computational ability to sense and respond to the environment. This ability would create a metabolic balance in the system, whose artificial architectural elements live in symbiosis with the natural environment. Frazer investigated developmental and evolutionary principles to achieve this metabolic balance in architecture (Frazer, 2002). He considered architecture a form of artificial life, a position this report aligns with.
2.3 Behaviour-based artificial life
Since the 1980s, artificial life has been used to understand biological mechanisms through digital computation and to create living systems. It offers a new type of interactivity in which systems have the potential to respond in ways that have not been explicitly defined. It can be categorised in two forms: computational forms and physical models of artificial life.
- Artificial life in the virtual sense consists of computer programs or algorithms whose many small levels of interaction sum up to the actuation of digital creatures, in coherence with Christopher Langton's views. He calls this final actuation 'global behaviour', noting that any behaviour at levels higher than the individual programs is emergent (Levy, 1992). Virtual life was discussed previously with Augmented Shadows as an example; another example is discussed in section 3a.
- The other form of artificial life is physically instantiated. It aims at building physical models of devices aided with sensing and actuation mechanisms. Behaviour in artificial life is observed in physical agents designed to mimic animal behaviour, of which there are examples even from the pre-computer era: automatons built with mechanics, pneumatics and hydraulics. Examples of physically instantiated artificial life are Grey Walter's autonomous robots (called tortoises) and Braitenberg vehicles. Both were phototactic, capable of moving in response to the stimulus of light.
The Braitenberg phototropic vehicle consists of a chip-based controller programmed to react to light. Light sensors on its body sense the surroundings and move the vehicle around according to a predefined stimulus. Yet the vehicle shows complex trajectories in complex lighting environments, exhibiting unexpected behaviour even though it is based on very simple action parameters (Soler et al., 2014).
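The sensor-to-motor coupling described above can be reproduced in a short simulation. The code below is an illustrative Python sketch, not the controller of any actual vehicle: it assumes an inverse-square light model and crossed excitatory wiring (left sensor drives right motor), and reports the vehicle's closest approach to the light source.

```python
import math

def sensor_reading(px, py, lx, ly):
    """Inverse-square light intensity seen by a sensor at (px, py)."""
    return 1.0 / (1.0 + (lx - px) ** 2 + (ly - py) ** 2)

def simulate(steps=3000, lx=4.0, ly=2.0):
    """Crossed-wire light-seeking vehicle; returns the closest approach
    to the light source over the run."""
    x = y = heading = 0.0
    dt = 0.05
    min_dist = math.hypot(lx, ly)
    for _ in range(steps):
        # Two sensors mounted ahead of the body, angled left and right.
        sl = sensor_reading(x + 0.3 * math.cos(heading + 0.6),
                            y + 0.3 * math.sin(heading + 0.6), lx, ly)
        sr = sensor_reading(x + 0.3 * math.cos(heading - 0.6),
                            y + 0.3 * math.sin(heading - 0.6), lx, ly)
        # Crossed excitatory wiring: the left sensor drives the right
        # motor, so the vehicle steers towards the brighter side.
        ml, mr = sr, sl
        heading += 8.0 * (mr - ml) * dt          # differential turn
        speed = min(1.0, 20.0 * (ml + mr))       # capped forward speed
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        min_dist = min(min_dist, math.hypot(lx - x, ly - y))
    return min_dist
```

Even this toy model shows the property the report notes: the trajectory is a complex spiral-and-orbit around the source despite the trivially simple action rule.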
Grey Walter's Machina Speculatrix was one of the first autonomous agents to exhibit life in the form of free will. The machine had two sensory organs and a few movement mechanisms to choose from (Walter, 1950). These were quite complex machines in the analogue form in which they were initially built.
Both of these examples demonstrate an unpredictability in response to their environment. They are based on a simple principle of providing action as a direct response to the sensor.
This hypothesis is elaborated by Rodney Brooks as 'subsumption architecture'. His theory suggests coupling sensory information to action selection in an intimate, bottom-up fashion. It is explained as a "decomposition of complete behaviour into sub behaviours and organising them in a hierarchy where higher levels are able to subsume the lower levels to create a viable behaviour" (Penny, 2009). This theory forms the basis of this report's interpretation of the bottom-up design methodology for the final project.
The sub-behaviours can be formed by independent sensory-actuator couplings, with the priority of action provided via a predefined hierarchy. Each component in the system serves its function independently, but according to the priority order it cooperates with, competes with, or reinforces the action of the agent. Steels considered the ingredients of cooperation, competition, selection, hierarchy and reinforcement between components, identified at a behavioural level, to be crucial for the emergence of complexity (Steels, 1993). It is interesting to see how simply programmed characteristics can give rise to complicated behaviour.
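This priority-ordered arbitration can be illustrated with a minimal sketch. The sensor names and action labels below are hypothetical; the point is only that each layer either claims the actuators or defers to the layer below it.

```python
def subsume(sensors):
    """Priority-ordered action selection: each sub-behaviour either
    claims the actuators or defers to the layer below it."""
    # Layer 2 (highest priority): collision reflex subsumes all else.
    if sensors.get("bump"):
        return "reverse"
    # Layer 1: steer towards the brighter side, if the difference matters.
    diff = sensors.get("light_left", 0.0) - sensors.get("light_right", 0.0)
    if diff > 0.1:
        return "turn_left"
    if diff < -0.1:
        return "turn_right"
    # Layer 0 (default): wander forward.
    return "forward"
```

Each clause is an independent sensory-actuator coupling; removing a higher layer leaves the lower ones still viable, which is the essence of the bottom-up decomposition.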
To apply the contextual study to the project, the design is explored through two experimental projects. The first, 'Coded Existence', aimed to create a physical connection to virtual space and explore emergent behaviour in virtual space through a study of cellular automata. The second, 'Recreating Machina Speculatrix', investigated independent sensory-motor couplings to observe navigational behaviour in a complex changing environment, finally deriving design parameters for Physical Pixels.
3.1.1 Exploring generative virtual spaces
Coded Existence explored computational artificial life worlds by studying the phenomenon of emergence in virtual space through a cellular automaton (CA). Von Neumann describes an automaton as a machine which proceeds logically, step by step, combining information from its environment with its own programming (Levy, 1992). A cellular automaton is a computerised version of this machine in which the environment is defined by a discrete grid of cells.
Experimenting with a one-dimensional CA was the first step in observing the emergence of a pattern, using a row of 8 cells each capable of two states. The state of each cell was defined by the states of its adjacent cells in the previous generation. Since applying the rules would otherwise require an infinite grid of cells, it was important to define the end conditions: whether to terminate at the ends or continue in cyclic order. Different initial orders of cell states produce different results over repeated generations, though the pattern becomes consistent and repetitive after a series of steps.
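The one-dimensional experiment with wrap-around end conditions can be reproduced in a few lines. The sketch below uses Wolfram's rule-number convention, with rule 90 purely as an illustration; the report's own rule set may differ.

```python
def step_1d(cells, rule=90):
    """One generation of an elementary cellular automaton with cyclic
    (wrap-around) end conditions. `rule` is a Wolfram rule number."""
    n = len(cells)
    nxt = []
    for i in range(n):
        # Read the (left, centre, right) neighbourhood as a 3-bit index
        # into the rule table; index -1 and (i + 1) % n wrap cyclically.
        idx = (cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> idx) & 1)
    return nxt
```

On a cyclic row of 8 two-state cells there are only 256 possible configurations, so any rule is guaranteed to fall into a repeating cycle, matching the repetitive patterns observed in the experiment.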
A two-dimensional CA offers far more possibilities for exploring patterns, and the program can continue indefinitely; we therefore used John Conway's Game of Life to exhibit life as a quality of emergence. It is based on a small set of rules defining the two states of each cell, as graphically represented in the figure. Conway incorporated rules that let cells live, die or reproduce depending on the states of their eight neighbours (Levy, 1992), thereby providing them with a free will to demonstrate life in their environment. The character of the virtual life was a popular pattern called the glider, observed in Conway's CA experiment. The glider changes between consecutive patterns at each step while shifting its position diagonally on the grid. The program shows life in the virtual space by creating and transforming patterns autonomously, exhibiting emergence that lives internally in the virtual space.
In fact, only a few formations show living characteristics in a 2D CA, the glider being one of them. Most other formations settle into one of a few stable states, or else they die. The characters of this performance are thus limited and unpredictable.
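The glider dynamics described above can be verified with a minimal implementation of Conway's rules. This is a generic sketch of the Game of Life on an unbounded grid, not the project's own code; after four generations the glider reappears one cell away diagonally.

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life; `alive` is a set of
    (x, y) live-cell coordinates on an unbounded grid."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# The canonical glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
```

Tracking the glider through four generations shows the diagonal translation that makes it a natural marker of 'life' in the grid.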
3.1.2 Building the physical and virtual interaction
Coded Existence was developed to include human interaction by combining the computational artificial life with the physical world, using a solid object as the external influence actuating the program. The environment was defined by a grid of cells visible in three dimensions, its perspective modified according to the user's position around the table.
The program is actuated by tracking an object on the grid and creating a glider at its position. The glider is therefore the first creature in the 3D mesh, with the ability to move and merge with one or more other gliders, creating new patterns or creatures in the space. A user can populate the environment with these creatures by manipulating the object on the table. Most of them die, but a few stay. The interesting aspect was the formation of stable patterns which either do not move from their position or fluctuate between two forms.
The user has no control over the emerging or dying particles, and there is diversity in the transformations of existing and newly created agents. Since a number of paths can lead to the creation of stable forms, the program remains unpredictable for the user.
These forms of computer-simulated life show an immense resemblance to biological creatures and provide a vision for creating physical living systems. Physical agents, along with virtual life, should exhibit behaviour that lets a space be experienced as living and intelligent by completing the constant feedback loop of Pask's theory (Spiller, 2002) discussed earlier in the report. This leads to a study of behaviour-oriented artificial life.
3.1.3 Behaviour based physical form
To learn how to impart free-will behaviour via sensory-motor couplings, we recreated Grey Walter's tortoise (Walter, 1950) using digital tools. The machine had two sensory inputs: light sensing of the environment and a tactile reflex to any obstacle in its path. The output was the movement of the vehicle to the left, right, forwards or backwards. Light in the environment gave the machine a direction: it moved forward, or turned left or right, towards greater brightness. When the antenna of the device hit an obstacle in its path, it reversed the wheel rotation to move backwards.
The processes defined were independent of each other, so they could be tested on the model even before the entire system was created. Designing the prototype made us realise the importance of defining the environment in which the agent lives: since different surface materials have different friction requirements, a rubber wheel was found better suited to moving on diverse surfaces than a wooden one.
The first prototype had one servo motor and one DC motor. The servo controlled both the tracking of light as input and the direction of the vehicle as output, with a time slot allotted to each alternating task, which made the system not very proactive. The DC motor's job was to drive the vehicle forward while the servo guided the direction. In this arrangement, the vehicle would take time to sense the environment before starting to follow a path; if the light source moved during this time, the vehicle would still move in the premeditated direction until the next periodic search for maximum light. This made the system imprecise in a frequently changing environment.
The second prototype had two motors independently controlling two wheels and a servo working continuously to test the lighting conditions. The motors were given a defined speed to move forwards and backwards, and each wheel could individually stop so that the vehicle could turn. Though the motors are intended to provide four basic movements (forward, backward, left and right), the vehicle can combine them into six different dynamic possibilities: forward straight, forward left, forward right, backwards straight, backwards left and backwards right. The second prototype had better reflexive capabilities than the previous one.
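The six dynamic possibilities can be enumerated by classifying the two wheel commands. The function below is an illustrative sketch of this mapping (assuming +1 forward, -1 backward and 0 stopped as wheel commands), not the prototype's firmware.

```python
def movement(left, right):
    """Classify the vehicle's motion from two wheel commands
    (+1 forward, -1 backward, 0 stopped)."""
    if left == 0 and right == 0:
        return "stopped"
    if left == right:
        return "forward straight" if left > 0 else "backwards straight"
    if left == -right:
        return "spin"  # opposite wheels: not among the six listed moves
    # One wheel slower or stopped: the vehicle arcs towards that side.
    direction = "forward" if left + right > 0 else "backwards"
    side = "right" if left > right else "left"
    return f"{direction} {side}"
```

Stopping one wheel while driving the other yields the four turning cases, which together with the two straight cases give the six movements described above.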
There is no control over the time taken by the vehicle to respond to its sensor; it depends on the processing time, giving the vehicle free will to move in any direction depending on the information it receives. It is observed to move towards the light from a far distance, but on coming closer it turns away from the light and moves out of the way. We therefore modified the decision-making by adding an extreme condition that stops the direction search when the vehicle is too close to the light, which made it head straight into the light source and recoil back, hitting it repeatedly and changing the course of its movement, as explained in figure 14. In certain situations the vehicle shows unexpected behaviour: it keeps colliding with the surface close to the source on both sides until it turns completely away, then moves on to search for a second light or goes around until it finds its way back to the first.
A mirror in the environment, or a reflection of the light from any surface in the box, has a similar effect: the vehicle follows the reflection, taking it for the source. A hindrance in the path of the light makes the vehicle go around it in search of the source hidden from its straight path. The vehicle visibly shows intelligence in making its own judgement on encountering obstacles.
The vehicle is suited to dark environments; any ambient lighting in the room makes it move uncannily, unable to recognise the source of light. The observation implies that such phototactic agents are quite biased towards their environment, and any change in lighting conditions requires a change to the threshold ranges specified in the code.
3.2 Deriving principles
Artificial life can demonstrate basic reflexes and deliberative thinking to perform in diverse environments. The previous experiments explore methods to achieve this in both virtual and physical forms, and both types exhibit emergence in their behaviour: the virtual life form follows its instinct to transform, shift, merge or reproduce graphical shapes, while the physical life form continuously changes its course of movement to display free will.
Another aspect common to both is the use of simple rules to produce complex responses. The cellular automata experiment incorporated three basic rules defining the state of patterns, yet was able to create complex structures; the virtual space offers a tremendous advantage for self-organisation, which is difficult to observe in physical systems with complex transformation possibilities. The physical life form likewise had two basic sensory inputs delivering six types of modified output; although the output was based on only four dynamic movements, the object behaves intelligently by traversing even complex or misleading environments. It is therefore relevant to design sub-behaviours in accordance with Brooks's theory (discussed earlier), leading to a final behaviour in such systems. This follows an approach of combining, competing and reinforcing particular sub-behaviours to witness the resultant behaviour.
Aspects necessary to achieve this are:
Autonomous response: The system must respond autonomously to the dynamics of the user and the environment, owing to its characteristic of independent decision-making. The project is considered as a whole system for achieving autonomy, rather than as individual parts. This aspect is explored as a real-time continuous feedback loop between all sensors and actuators in the system.
Emergence: Dynamic adaptability is essential to demonstrating life, so generative or emergent qualities are an important consideration. Emergence imparts deliberative thinking to the system dynamics and is explored through the bottom-up method of design.
Reconfigurability: To provide utility to the space and demonstrate flexibility in transformation, a morphology that executes the dynamic behaviour is required. Reconfigurability is explored as a physical attribute of this project.
4. DESIGN OF PHYSICAL PIXELS
The principles derived in the study are demonstrated in the final project, which aims to achieve autonomous reconfigurable modules called Physical Pixels. Versions of these modules have been built and tested for different behaviours, and the project culminates in a prototype exhibited at the end of this thesis.
The design focuses on developing behavioural patterns in a physical object enhanced with a virtual representation of digital information. Building on Coded Existence, it establishes communication between the virtual layer and the physical modules, with the virtual layer updated according to the position of the objects. As learnt from Machina Speculatrix, the objects develop their behaviour via sensory-motor coupling in a hierarchical approach.
4.2 Layers of design
The project has several layers of design that explore the bottom-up method with a graphical visualisation of information.
4.2.1 The feedback loop connecting virtual and physical
Similar to Coded Existence, this project develops a virtual-physical connection using a camera to track the position of objects. The virtual layer is projected onto the modules and updated in real time in response to their changing positions.
There were challenges in counteracting computational timing delays to obtain a more real-time projection output, but communication was established using both the input (camera) and output (projector) as external sources. Though this limits the span of the environment to the range of both devices, it provides a good testing ground for the objects. The virtual layer is supplemented with basic graphics and colours, since the testing environment is an informal learning environment. This connection establishes one side of the constant feedback loop between the objects and the environment.
The second part of the feedback loop is established by giving the modules autonomous movement, allowing them to move around independently according to their own will. This 'will' is a coded program within each module, an aspect derived from Machina Speculatrix that provides the objects autonomy of movement in the space.
This was achieved using radio frequency (RF) signals to communicate with each module, linking the modules to each other and to the central controller. The central controller receives data from the camera tracking and processes commands for each module, also in real time, as the modules operate in an ever-changing environment.
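One frame of such a controller might look like the sketch below. It is a hypothetical illustration: the steering policy (head for the group centroid) and the `send` callback standing in for the RF link are assumptions, not the project's actual protocol.

```python
import math

def control_step(positions, send):
    """One frame of a hypothetical central controller: read camera-
    tracked module positions and issue one heading command per module.
    `send(module_id, message)` stands in for the RF link."""
    cx = sum(x for x, _ in positions.values()) / len(positions)
    cy = sum(y for _, y in positions.values()) / len(positions)
    for mid, (x, y) in positions.items():
        # Example policy: steer each module towards the group centroid.
        heading = math.atan2(cy - y, cx - x)
        send(mid, {"heading": heading})
```

Because only a heading is broadcast, the low-level wheel control stays inside each module, consistent with keeping the RF traffic minimal.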
4.2.2 Bottom-Up behaviour
The crucial aim of the report is to understand how to develop behaviour in such dynamical systems. This was achieved through a three-step process involving sensing, actuation and a hierarchy of response. Sensing covers all the information received from the environment; the hierarchy of response is the decision-making process, the selection among choices of actuation; and actuation is the motor or movement control, which can also be interpreted in the virtual sense as actuation of the visual display.
The system incorporates three types of visual sensing to gauge the environment.
The objects are provided with independent distance sensors to operate effectively in their environment and stay away from hindrances. This aspect, as achieved in Grey Walter's tortoises, has been useful for establishing navigational skills in a changing environment.
Another mode of external sensing is the camera, which retrieves the position of each module and can provide information relating the modules to each other. This sensing is essential for establishing the physical communication with the virtual layer and for any other networking required between the modules. The camera data is processed information: it not only calculates the position of each module in global coordinates, but also determines the target of approach and the distance to the nearest module.
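The nearest-module computation from the tracked positions can be sketched as follows; the dictionary format for positions is an assumption made for illustration.

```python
import math

def nearest(module_id, positions):
    """From camera-tracked positions {id: (x, y)}, find the module
    closest to `module_id` and the distance to it."""
    mx, my = positions[module_id]
    others = ((math.hypot(x - mx, y - my), other)
              for other, (x, y) in positions.items()
              if other != module_id)
    dist, other = min(others)
    return other, dist
```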
The third sensor is a Kinect camera, which tracks the movement of people nearby together with their gestures. The gestures form an input layer for responsive action.
The prototype for Physical Pixels is able to move on level surfaces using omni-directional wheels. These give the omni-bot the flexibility to move in any direction simply by specifying an angle of approach, which is determined in response to the sensors or to a target. The wheels also allow it to spin about its own axis. Though each motor is capable only of forward and backward rotation, a combination of speeds and directions lets the robot reach any direction of approach.
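The mapping from a desired direction of approach to the three motor speeds might be sketched with standard three-wheel omni-drive inverse kinematics. The wheel mounting angles and radius below are assumptions, not the prototype's measured geometry.

```python
import math

# Inverse kinematics sketch for a three-wheel omni-directional base with
# wheels assumed at 0, 120 and 240 degrees around the body. Each motor
# only runs forward or backward; the angle of approach emerges from the
# mix of signed speeds.
WHEEL_ANGLES = [0.0, 120.0, 240.0]

def wheel_speeds(vx, vy, omega, radius=1.0):
    """Map a desired body velocity (vx, vy) and spin rate omega to the
    signed rolling speed of each of the three wheels."""
    speeds = []
    for a in WHEEL_ANGLES:
        rad = math.radians(a)
        # Project the body velocity onto the wheel's rolling direction,
        # then add the contribution of spinning about the body axis.
        speeds.append(-vx * math.sin(rad) + vy * math.cos(rad) + radius * omega)
    return speeds
```

Pure spin gives all three wheels the same speed, while pure translation produces speeds that sum to zero, which is why mixing the two lets the robot translate and rotate at once.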
The choice of speed and direction for each of the three motors is pre-programmed in each module and is therefore not centrally controlled. The movement behaviour of an individual module is thus independent of the central control, which helps simplify the data networked over the radio-frequency (RF) link.
The virtual form of actuation uses the tracking data to update the image on the modules in real time. It was tested with basic graphics to observe the timeliness of the response during changes in modes of behaviour. Because this aspect controls graphical data it suffers some computational delay in updating information, but it functions effectively.
- Hierarchy of responses
The selection of choices follows Brooks's theory of subsumption architecture (Penny, 2009), discussed earlier in section 2c, decomposing the final behaviour into sub-behaviours or responses. These responses are arranged in a hierarchy of selection coupled with the sensor data. Independent responses for each module include moving forward, retreating backwards or rotating. Although the omni-bot is capable of moving in any direction, it is oriented in one direction according to its shape for ease of understanding its movements. The group responses are based on collective sensing, where the distance to other modules and their orientation, combined with the gesture of a person, determine the choice. The choices made are for the modules to come closer, move apart or orient themselves in a particular direction.
The hierarchy of responses allows the omni-bot to form emergent behaviours: individual responses can be added or removed without affecting the others.
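This subsumption-style arbitration might be sketched as follows, in the spirit of Brooks's layered control. The specific behaviours, thresholds and command names are illustrative, not the project's actual responses.

```python
# Subsumption-style arbitration: behaviours are ordered from highest to
# lowest priority, each may or may not fire for the current readings,
# and a higher layer suppresses everything below it. Adding or removing
# a layer leaves the other layers untouched.
def avoid(readings):
    # Highest priority: back away from close obstacles.
    return "reverse" if readings.get("obstacle_cm", 999) < 15 else None

def approach(readings):
    # Middle priority: move toward an assigned target, if any.
    return "forward" if readings.get("has_target") else None

def wander(readings):
    # Lowest priority: default exploratory rotation.
    return "rotate"

LAYERS = [avoid, approach, wander]  # priority order, top first

def arbitrate(readings, layers=LAYERS):
    for behaviour in layers:
        command = behaviour(readings)
        if command is not None:
            return command  # this layer subsumes all lower layers
```

Because each layer is independent, dropping `approach` from `LAYERS` changes the composed behaviour without modifying `avoid` or `wander`, which is the emergent-composition property described above.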
4.2.3 Reorganizational control
This aspect of the design is essential not only to provide a utility to the omni-bots but also to shape the form of the physical module.
Since they work in a learning environment, reorganisation makes it possible to show group behaviours of orthogonal or radial alignment and invites playful interaction with the users. The virtual representation further adds to the character of the omni-bots and makes them a playful teaching aid or a medium for group interaction.
For reorganising the modules orthogonally, a cube is the perfect form, as evident from the Generator project. The modules were instead given an angular form, which lets them organise in both orthogonal and radial fashion.
All the reorganisational behaviours are centrally controlled and might be perceived as top-down control. Looking back at the Machina Speculatrix experiment, the target for the vehicle was the light source, and it was tested in environments where the position of the light source was modified or multiple light sources were present. The vehicle navigates its way to one of them or shows unpredictable behaviour. It can therefore be argued that the central controller in this project behaves as a target-specifying parameter, or an external sensing point, computed and changed frequently. The final behaviour is hence not controlled by the computer but finds its way through the hierarchical layers of movement. Thus the omni-bots, navigating through the space, settle into self-organisations of various possibilities (Figure 23).
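The division of labour argued for above, where the controller only publishes targets and each module reaches its target through its own movement layers, might be sketched as follows for a radial formation. The formation geometry is illustrative, not the project's actual configuration.

```python
import math

# Sketch of target-specifying central control: the controller computes
# evenly spaced points on a circle (a radial formation) and assigns one
# to each module; local behaviour layers handle the navigation.
def radial_targets(n, cx, cy, radius):
    """Return n evenly spaced target positions on a circle
    centred at (cx, cy)."""
    targets = []
    for i in range(n):
        a = 2 * math.pi * i / n
        targets.append((cx + radius * math.cos(a),
                        cy + radius * math.sin(a)))
    return targets
```

Swapping this function for an orthogonal grid generator changes the formation without touching any module's on-board behaviour, which is the sense in which the controller is a parameter rather than a puppeteer.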
Reorganisation of the modules imparts a dynamic, utilitarian behaviour to the project. They materialise as self-sustaining objects in any architectural space, a trait useful for flexibility and modularity. Reconfigurability is therefore achieved by virtue of the design of the modules and their control method.
The bottom-up approach to design is explored throughout the project by assigning behaviours and a hierarchy of responses. This gives the project its responsive and adaptive character. The modules react with immediate reflexes to a user's presence and prove themselves living entities of the space, building a more resilient environment out of artificial objects.
Their autonomous reorganisation and movement make them appear intelligent compared with the other, static objects in the space. The project achieves autonomous control over navigation and decision-making. There remains a possibility of making each module more autonomous, extending its operational range by embedding the external sensing (camera) and the external actuator (projection) within the module itself. This would provide a larger functioning area, making each module robust and independent.
5. APPLICATION IN A LARGER ARCHITECTURAL CONTEXT
The project demonstrates a method for embedding intelligence in interactive dynamical systems using bottom-up control. It achieves navigational skills in a changing environment and modifies its states autonomously to demonstrate different behaviours.
5.1 Exploring learning spaces
Full-scale modules in a learning space are intended to behave as self-organising furniture with an interactive display. The modules can then act as individual teaching aids or invigilators for students. Different behaviours are attributed to them while they function together in groups for play or informal learning activity. The behaviours, or mode changes, can be programmed in response to a child's interaction or can be based on a certain duration of time.
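The two triggers for a mode change, a child's interaction or elapsed time, might be combined in a small scheduler like the one below. The mode names and dwell time are hypothetical, chosen only to illustrate the mechanism.

```python
import time

# Hypothetical mode scheduler for the learning-space scenario: a module
# advances to its next behaviour either when a child interacts with it
# or after a fixed dwell time, whichever comes first.
MODES = ["teach", "play", "regroup"]

class ModeScheduler:
    def __init__(self, dwell_seconds=60.0, clock=time.monotonic):
        self.dwell = dwell_seconds
        self.clock = clock      # injectable clock, eases testing
        self.index = 0
        self.entered = clock()  # when the current mode began

    @property
    def mode(self):
        return MODES[self.index]

    def update(self, interaction=False):
        """Advance to the next mode on interaction or on timeout."""
        if interaction or self.clock() - self.entered >= self.dwell:
            self.index = (self.index + 1) % len(MODES)
            self.entered = self.clock()
        return self.mode
```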
Such augmentation provides a space which is proactive, autonomous and adaptable: it modifies with time and responds to its users, affording greater engagement with people. Augmented spaces lead to an enhanced learning environment where a room is not just made of static walls and redundant objects, but where the walls speak and the objects move around to form a narration, helping to distribute knowledge. The final project illustrates a part of this idea by bridging the physical and the virtual to demonstrate intelligence.
5.2 Other augmented environments
Physical Pixels can be used in various other environments as well. They can serve as self-organising furniture in a contained public area; apart from providing different configuration options, they could also act as advertising or display screens, as interactive information displays, or as guides leading people to specific areas.
They are a viable option for any other space where information needs to be communicated visually and in tangible ways. Physical Pixels find application in office spaces, where they could be organised as individual work desks and self-organise into a meeting table; a similar aspect could be utilised in a residential living space. They can also be programmed to network amongst each other and perform coordinated tasks.
Another speculative direction would be to provide physical objects with modes of transformation other than wheels. These transformations would respond to the users and demonstrate intelligent self-reconfiguration possibilities, extending the method developed in the project to external facades or transforming roofing systems.
The report is an attempt to investigate tangible ways of interaction in a lifestyle dominated by connected devices. These devices claim to accomplish time-consuming tasks through faster computing, yet they require active human engagement to understand their users' requests, and precious time goes into calibrating and modifying their intent according to the space in which they are situated. Centrally controlled systems do this job effectively, but the control lies in the hands of the few who design them for the masses, without attention to the individual user or to changes of environment.
The design thesis is an effort to bridge this gap by connecting the physical dynamic environment with the virtual space that we increasingly accommodate in our lives. It is intended to make the space more responsive and adaptive to change, an aspect that has been investigated in robotics and artificial life; building robots as fundamental elements of space therefore becomes relevant.
The report derives its context from the field of tangible interaction and from the theory of artificial life, exploring these aspects through experimentation and envisaging the design parameters of autonomy, emergence and reorganisation. The modules in Physical Pixels have been designed to achieve these aspects through the various layers of design, playing an active role in otherwise static architectural spaces. This can develop a 'sometimes fragile stability' (Gage, 2005) in a not-so-centrally-controlled space. The design is hence informed by constructing behavioural aspects to achieve autonomy, responsiveness and reconfigurability, embedding visible intelligence in physical objects performing in a dynamic environment. Physical Pixels offer several possible applications for the flexible use of spaces and deliver promising results in internal or controlled built environments.
6.1 Further Research
Designing intelligence into space is a challenge which needs further study of possible methods of imparting knowledge. The behaviour-based approach is complementary to other ways of imparting intelligence.
An augmented space in which physical objects are, in reality, part of the virtual space, with each influencing the other, can achieve greater engagement of the user. In such a space the virtual does not stay disconnected from the user: actuation is not just a touch on a screen or a button, but gesture and presence influence the dynamics of the space in both real and virtual objects. The space behaves intelligently, recognising the presence of the user and functioning to communicate in visual and tangible ways.
While integrating information into architectural space, Mark Weiser's theory of calm computing (Weiser, 1997) can be further explored. He advocates an ambient user interface in which technology does not dominate its user; rather, the user is in command of the technology. This is achieved by communicating information digitally in a way the user understands through peripheral vision. In my opinion this is an intelligent approach to designing augmented spaces, since it takes away the constant attention usually required to operate devices.
Another exploration in this area could be memorising certain aspects of the use of these physical modules, or learning from people's interaction, in order to perform deliberative tasks. Self-learning algorithms and testing in different environments would help advance the project to the next level. Such features, accompanied by the bottom-up strategy, can make the system more robust: able to modify its approach to a specified task depending on the information it receives from the present environment and the memory that stores the past.
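As one very simple illustration of this memory idea, a module could tally which of its responses users engaged with and bias future choices toward the most engaged-with one. This is a speculative sketch, a stand-in for a proper learning algorithm; the class and method names are hypothetical.

```python
from collections import Counter

# Speculative sketch: a per-module memory of user engagement that
# biases future response selection toward past favourites.
class ResponseMemory:
    def __init__(self):
        self.engagement = Counter()

    def record(self, response):
        """Log one observed user engagement with a response."""
        self.engagement[response] += 1

    def preferred(self, candidates):
        """Pick the candidate users engaged with most in the past,
        falling back to the first candidate when nothing is known."""
        known = [c for c in candidates if self.engagement[c] > 0]
        if not known:
            return candidates[0]
        return max(known, key=lambda c: self.engagement[c])
```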
- Brodey, W.M. and Lindgren, N., 1967. Soft architecture: the design of intelligent environments. Landscape, 17(1), pp.8-12.
- Chaturvedi, S., 2016. Synthesis of intelligence and embodiment. [online] interactivearchitecture.org. Available at: http://two.wordpress.test/synthesis-of-intelligence-and-embodiment.html [Accessed 19 Jul. 2016].
- Pfeifer, R. and Scheier, C., 2001. Understanding intelligence. MIT Press.
- Spiller, N., 2002. Cyber reader: Critical writings for the digital era. BPR Publishers.
- Bondin, W., 2013. Embodied Dynamics: The Role of Externalised and Embodied Cognition in Kinetic Architecture.
- Steels, L., 1993. The artificial life roots of artificial intelligence. Artificial Life, 1(1-2), pp.75-110.
- Levy, S., 1992. Artificial life: the quest for a new creation. Random House Inc.
- Benedikt, M., 1994, June. On Cyberspace and Virtual Reality. In Symposium on "Man and Information Technology".
- Ishii, H., Wisneski, C., Brave, S., Dahley, A., Gorbet, M., Ullmer, B. and Yarin, P., 1998, April. ambientROOM: integrating ambient media with architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems (pp. 173-174). ACM.
- Bartle, R.A., 2004. Designing virtual worlds. New Riders.
- Moon, J. and Nam, S., 2016. Augmented Shadow – Joon Moon portfolio. [online] Joonmoon.net. Available at: http://joonmoon.net/Augmented-Shadow [Accessed 19 Jul. 2016].
- Jordà, S., Geiger, G., Alonso, M. and Kaltenbrunner, M., 2007, February. The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces. In Proceedings of the 1st international conference on Tangible and embedded interaction (pp. 139-146). ACM.
- Reed, S.E., Kreylos, O., Hsi, S., Kellogg, L.H., Schladow, G., Yikilmaz, M.B., Segale, H., Silverman, J., Yalowitz, S. and Sato, E., 2014, December. Shaping watersheds exhibit: An interactive, augmented reality sandbox for advancing earth science education. In AGU Fall Meeting Abstracts (Vol. 1, p. 01).
- Price, C., 2002. Generator Project. Cyber Reader: Critical Writings for the Digital Era, pp.86-89.
- Frazer, J.H., 2002. A natural model for architecture: the nature of the evolutionary model 1995. Cyber reader: Critical writings for the digital era, pp.246-255.
- Soler-Adillon, J. and Penny, S., 2014, June. Self-organization and novelty: pre-configurations of emergence in early British Cybernetics. In Norbert Wiener in the 21st Century (21CW), 2014 IEEE Conference on (pp. 1-8). IEEE.
- Walter, W.G., 1950. An Imitation of Life. Scientific American, 182(5), pp.42-45.
- Penny, S., 2009. Art and artificial life – a primer. Digital Arts and Culture 2009.
- Gage, S.A. and Thorne, W., 2005. Edge monkeys – the design of habitat specific robots in buildings. Technoetic Arts, 3(3), pp.169-179.
- Weiser, M. and Brown, J.S., 1997. The coming age of calm technology. In Beyond calculation (pp. 75-85). Springer New York.
- Negroponte, N., 1975. Soft architecture machines. Cambridge, Mass.: The MIT Press.
- Pfeifer, R., Lungarella, M. and Iida, F., 2007. Self-organization, embodiment, and biologically inspired robotics. Science, 318(5853), pp.1088-1093.
- Brooks, R., 1986. A robust layered control system for a mobile robot. IEEE Journal on Robotics and Automation, 2(1), pp.14-23.
- Lenser, S., Bruce, J. and Veloso, M., 2001, August. A modular hierarchical behavior-based architecture. In Robot Soccer World Cup (pp. 423-428). Springer Berlin Heidelberg.
- Engels, C. and Schöner, G., 1995. Dynamic fields endow behavior-based robots with representations. Robotics and Autonomous Systems, 14(1), pp.55-77.
- Frazer, J., 1995. An evolutionary architecture.
- Ishii, H., Ratti, C., Piper, B., Wang, Y., Biderman, A. and Ben-Joseph, E., 2004. Bringing clay and sand into digital design – continuous tangible user interfaces. BT Technology Journal, 22(4), pp.287-299.