
Bartlett School of Architecture, UCL


Artificial Intelligence and Evolutionary Machines

 

How can high-level artificial intelligence emerge in architecture machines, enabling them to learn and evolve by themselves? Can we use machine intelligence to help us optimize design solutions, or to create robust architecture machines that perform complex behaviors and adapt to the ever-changing demands of occupants and society at large? How can these objectives be achieved? With these questions in mind, this literature review examines key factors that contribute to the emergence of high-level intelligence. It surveys some of the key references in the fields of artificial intelligence and cognitive science, compares and relates them to one another, and discusses design projects that engage with these topics.

 

Keywords: Artificial intelligence, Cognitive system, Evolution, Interaction.  

 

Introduction

 

Alan Turing introduced the Turing Test for machine intelligence in 1950 (‘Computing Machinery and Intelligence’): an evaluator poses questions and receives answers that may come either from a person or from a machine; if the evaluator cannot tell the difference between the two, the machine is said to have passed the Turing Test. It appears intelligent, but is it conscious? The philosopher John Searle argued strongly in his ‘Chinese Room Argument’ (1980) that such AI is only ‘weak AI’ and that no consciousness is present: computers simulate thought, but there is no real understanding behind their apparent understanding. A machine that passes the Turing Test merely creates an illusion of the high-level intelligence we possess.

In discovering how a machine could learn and behave intelligently, we benefit greatly from observing examples of learning and adaptation in nature; it therefore becomes essential to draw on cognitive science in order to observe how learning is conducted in the natural world. J. Holland stressed the importance of natural metaphor in the development of artificial intelligence:

“It would miss the point that the very ideas of adaptation and learning are concepts invented by the most recent representatives of the species Homo sapiens from the careful observation of themselves and life around them. ”[1]

 

Embodied and Embedded

 

The central issue seems to be how to endow the machine with that indefinable capacity called ‘understanding’. The evidence of ‘understanding’, in humans and in machines, is intelligent responses that are ‘meaningful’ and pertinent. Nicholas Negroponte argues that the machine needs to have a body like ours in order to think and behave like us [2]; the embodiment of the machine is crucial to its learning process and to the emergence of intelligence. This raises the question of how the body of a machine contributes to the evolution of its ‘mind’. We can refer to the relationship between our own minds and bodies to look for evidence.

J. Protevi describes two standard approaches that use a computer metaphor for the brain-mind connection: brains, like computers, are physical symbol systems, and minds are the “software” run on those computers. The difference lies in the respective computer architectures. Computationalism sees cognition as the rule-bound manipulation of discrete symbols in a serial or von Neumann architecture, which passes through a CPU (central processing unit). Connectionism, the second standard approach, is based on another computer metaphor, but with a different, allegedly more biologically realistic architecture: parallel distributed processing. In connectionism’s so-called neural nets, cognition is the change in network properties, that is, the strength and number of connections [3]. A similar approach in cybernetics was discussed by Valentino Braitenberg, who used a range of vehicle models to demonstrate the concept of connectionism: there is no conventional central processing unit, but numerous parallel distributed processing routes that decide the vehicle’s actions in response to stimuli.
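
Braitenberg’s vehicles make the connectionist point concrete: behavior arises from direct, parallel sensor-to-motor connections rather than from a central program. The following is a minimal sketch of such a vehicle written for this review; the names, numbers and geometry are illustrative assumptions, not Braitenberg’s own code.

```python
import math

# A minimal sketch of a Vehicle-2-style agent: each wheel speed is driven
# directly by one light sensor, so behavior emerges from two parallel
# sensor-motor routes with no central processing unit.

def light_at(point, light_pos):
    """Light intensity falls off with distance from the source."""
    d = math.hypot(light_pos[0] - point[0], light_pos[1] - point[1])
    return 1.0 / (1.0 + d)

def step(pos, heading, light_pos, crossed=True, dt=0.1):
    """One update: two independent sensor-to-motor connections steer the vehicle."""
    # Sensors sit to the front-left and front-right of the vehicle.
    left_sensor = (pos[0] + math.cos(heading + 0.5), pos[1] + math.sin(heading + 0.5))
    right_sensor = (pos[0] + math.cos(heading - 0.5), pos[1] + math.sin(heading - 0.5))
    s_left, s_right = light_at(left_sensor, light_pos), light_at(right_sensor, light_pos)
    # Crossed wiring (Braitenberg's vehicle 2b) turns the vehicle towards the
    # light; uncrossed wiring (2a) turns it away. No central decision is made.
    m_left, m_right = (s_right, s_left) if crossed else (s_left, s_right)
    speed = (m_left + m_right) / 2.0
    heading += (m_right - m_left) * dt          # differential steering
    pos = (pos[0] + speed * math.cos(heading) * dt,
           pos[1] + speed * math.sin(heading) * dt)
    return pos, heading
```

The apparent ‘attraction’ or ‘aggression’ of the vehicle is nowhere represented explicitly; it is a product of how the two routes are wired.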

 

Fig 1. Valentino Braitenberg (1984; 1986 ed.) Vehicles: Experiments in Synthetic Psychology


Protevi argues that the 4EA approach (embodied, embedded, enactive, extended, affective) [4] breaks with any unidirectional information-processing model, in which cognition is the middle slice in what Susan Hurley called the “classical sandwich” [5]: sensory input / processing of representations / motor output. The 4EA schools likewise rethink the allegedly central role of “representation” in cognition: the 4EA thinkers restrict representation to a few “offline” problems, and see the vast majority of cognition as the real-time interaction of a distributed and differential system composed of brain, body, and world. [6]

Ruairi Glynn’s installation Performative Ecologies provides an example that embodies the ideas of the 4EA approach. The robots perform dance moves generated by a genetic algorithm to attract people’s attention; their performances evolve over time by retaining ‘successful’ moves that attract attention and eliminating unsuccessful ones, so the robots develop a ‘character’ through their physical interactions with people.
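
The selection principle described above can be sketched as follows. This is a hypothetical illustration written for this review, not Glynn’s implementation; the representation of a move and the attention measure are assumptions.

```python
import random

# Hypothetical sketch: dance moves that attract more attention survive and
# are varied; the rest are discarded and replaced.

def mutate(move, rate=0.1):
    """A move is assumed to be a list of joint angles; perturb some slightly."""
    return [a + random.gauss(0.0, 0.2) if random.random() < rate else a for a in move]

def evolve_repertoire(population, attention_for, generations=20):
    """attention_for(move) returns observed attention, e.g. seconds of eye contact."""
    for _ in range(generations):
        ranked = sorted(population, key=attention_for, reverse=True)
        survivors = ranked[: len(ranked) // 2]        # keep 'successful' moves
        offspring = [mutate(m) for m in survivors]    # vary them slightly
        population = survivors + offspring            # unsuccessful moves are replaced
    return population
```

The crucial point for the 4EA reading is that the fitness signal comes from the world, through the audience’s behavior, rather than from an internal representation alone.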

 

Fig 2. “Performative Ecologies” by Ruairi Glynn, 2008


 

Boundaries Between Mind, Body and Context

 

If we are aiming for an architecture machine with a level of intelligence that enables it to learn and evolve, the key issue seems to be the development of the machine’s cognitive system. Andy Clark and David Chalmers built a case for a new way of thinking about the human mind and its boundaries in ‘The Extended Mind’, investigating how closely human cognition depends on environmental resources [7]. Clark and Chalmers claimed that the devices we use to solve a problem competently are extensions of our mind. Kim Sterelny argued instead that the mind is scaffolded rather than extended [8]; he questioned the extended mind idea by pointing to the difference between internal and external resources (e.g. tools and devices for cooking do not thereby become extensions of our digestive system). These two arguments raise a question about the relationship between humans and intelligent architecture machines: do the machines become extensions of our minds or bodies, or do we also become part of the environmental scaffolding of the machines’ minds?

The relationship between us and the machine might be seen as Wexler’s ‘mother-infant dyad’ [9], in which we provide the scaffold for the machine mind to develop through an interactive process before it can accomplish any meaningful task. In my view, this responds to Protevi’s explanation of connectionism mentioned above: the strength and number of connections in a neural network, triggered by external stimuli, drive the cognitive process and the formation of mind. This metaphor also provides the foundation for my assumptions about the practice of an intelligent architecture machine: at the beginning, the machine will internalize sensor inputs and user feedback data, build an initial model of the surrounding context in its mind, and then start to adjust its behaviors in an attempt to achieve its objectives (set by the user), just like a baby.
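
A minimal sketch of this ‘infant’ learning loop, written for this review with hypothetical names and a deliberately crude model, might look like this:

```python
# Hypothetical sketch: the machine first internalizes sensor readings into a
# crude model of its context, then nudges a behavior parameter whenever that
# model indicates it is falling short of a user-set objective.

class ArchitectureMachine:
    def __init__(self, objective):
        self.objective = objective      # set by the user, e.g. a target light level
        self.context_model = None       # internal model of the surroundings
        self.behavior = 0.5             # e.g. louvre opening, from 0 (closed) to 1 (open)

    def internalize(self, sensor_readings):
        """Fold new sensor readings into a running model of the context."""
        reading = sum(sensor_readings) / len(sensor_readings)
        if self.context_model is None:
            self.context_model = reading
        else:
            self.context_model = 0.9 * self.context_model + 0.1 * reading

    def adjust(self):
        """Move the behavior in the direction that reduces the error to the objective."""
        error = self.objective - self.context_model
        self.behavior = min(1.0, max(0.0, self.behavior + 0.1 * error))
        return self.behavior
```

Here the ‘scaffolding’ is literal: without a stream of readings and a user-set objective, the machine has nothing from which to form its internal model.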

In Joseph Malloch and Ian Hattwick’s project ‘Instrumented Bodies’, they designed a family of prosthetic musical instruments, including an external spine and a touch-sensitive rib cage, that create music in response to body gestures. The example shows the connection between mind, body and external instruments: the body acts as an externalized representation of the dancer’s mind, and the instruments act as extensions of the dancer’s body, augmenting the body’s gestures while at the same time influencing the dancer’s performance.

 

Fig 3. “Instrumented Bodies” by Joseph Malloch and Ian Hattwick, 2013


Furthermore, in ‘The Extended Mind’, Clark and Chalmers give the example of Otto, an agent with memory failures who compensates for his failing internal memory by keeping information in a notebook he keeps to hand. Clark and Chalmers argued that the items in the notebook functioned among Otto’s memories and beliefs. Kim Sterelny argued that there are functional differences between Otto’s notebook and internally represented information: external representations are stable, physically discrete and accessed through perception, whereas internal resources such as memories are unstable. Internal resources can be inherited, whereas external resources can be shared, individualised or entrenched (Sterelny, 2010). He argues that shared resources (e.g. the physical organization of a theatre that helps actors memorise their roles in vast repertoires [10]) are hard to shoehorn into the extended mind model; human problem-solving activity is often social and much more dependent on communal resources.

The practice of social intelligence in problem solving can also be found in animals [11]; for example, a flock of birds or a school of fish adapts its formation to the environment. A research group at Harvard University’s Wyss Institute for Biologically Inspired Engineering, supported by the National Science Foundation, developed a swarm of a thousand robots that use vibration motors for locomotion and infrared for communication; the shape of the formation is programmable, and a self-assembly algorithm allows the robots to robustly assemble into an ‘organism’ without human intervention. This project is a good example of an intelligent model borrowed from a natural biological system, programmed to perform complex and non-linear behaviors to accomplish a set task. Other natural models have also been adopted in the development of artificial intelligence, e.g. the genetic algorithm, which simulates the process of biological evolution and associates closely with the type of machine learning I intend to explore in relation to architecture machines.
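
One ingredient of such collective self-assembly is that each robot derives its position in the formation purely from messages heard from nearby robots, with no central controller. The sketch below is a simplified, hypothetical illustration of this hop-count ‘gradient’ idea, written for this review; it mirrors the spirit of the Kilobot experiment but is not its published algorithm.

```python
# Each robot holds a gradient value (hop count from a seed robot) and adopts
# the smallest value it hears from its neighbours plus one, until the whole
# swarm stabilises. Decisions are entirely local.

def update_gradients(gradient, neighbours, max_rounds=100):
    """gradient: dict robot_id -> hop count (0 for seed robots, None if unknown).
    neighbours: dict robot_id -> list of robot_ids within communication range."""
    for _ in range(max_rounds):
        changed = False
        for rid in gradient:
            heard = [gradient[n] for n in neighbours[rid] if gradient[n] is not None]
            if heard and (gradient[rid] is None or min(heard) + 1 < gradient[rid]):
                gradient[rid] = min(heard) + 1      # adopt the best value heard, plus one hop
                changed = True
        if not changed:                             # stable: every robot knows its distance
            break
    return gradient

# Example: a chain of four robots, robot 0 is the seed.
# update_gradients({0: 0, 1: None, 2: None, 3: None},
#                  {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})  ->  {0: 0, 1: 1, 2: 2, 3: 3}
```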

 

Fig 4. Kilobots, Wyss Institute for Biologically Inspired Engineering, Harvard University, 2012


 

Genetic Algorithms

 

Genetic algorithms are probabilistic search procedures designed to work on large spaces. These methods use a distributed set of samples from the space to generate a new set of samples, and the system is inherently parallel. J. Holland uses the term ‘building block’ [] to describe these samples. Learning programs designed to exploit this building-block property gain a substantial advantage in complex spaces where they must discover both the “rules of the game” and the strategies for playing that “game”.
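
A minimal genetic algorithm in this spirit can be sketched as follows. The fitness function used here (a count of 1-bits) is only a stand-in for a real design criterion, and all names are illustrative, not drawn from Holland’s or Frazer’s code.

```python
import random

# Minimal genetic algorithm sketch: a population of bit-string samples is
# scored by a fitness function, and new samples are produced by recombining
# high-fitness 'building blocks' and mutating them.

def fitness(genome):
    return sum(genome)                          # placeholder design criterion

def crossover(a, b):
    cut = random.randrange(1, len(a))           # exchange building blocks at a random point
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, length=32, generations=100):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional selection: better samples parent more offspring.
        weights = [fitness(g) + 1e-9 for g in population]
        parents = random.choices(population, weights=weights, k=2 * pop_size)
        population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                      for i in range(pop_size)]
    return max(population, key=fitness)
```

In an architectural setting the placeholder fitness would be replaced by explicitly stated design criteria, which is precisely the level of control discussed below in the comparison with neural networks.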

 

Fig 5. Ichiro Nagasaka (1992) Genetic algorithm


The system responds to a set of environmental inputs and evaluates the relative success of that response. Environmental signals can be taken from any of the antennae and the response transmitted to the output antennae. The nature of the response is based on feedback from the environment, and more successful responses are gradually developed.

 “(An internal) model is used to direct behavior, and learning is triggered whenever the model proves to be an inadequate basis for generating behavior in a given situation. This means that overt external rewards are not necessarily the only or the most useful source of feedback for inductive change.”[12]

Lashon B. Booker, 1988

This computational approach corresponds to the natural model of learning mentioned previously, in which the strength and number of connections in a neural network drive the internalization of a model of the context. However, there are differences between genetic algorithms and neural networks. Genetic algorithms have the advantage that criteria can be clearly stated and controlled within the fitness function; the learning that neural networks rely upon does not afford this level of control over what is to be learned. [13]

One may argue that the evolutionary process of machine intelligence is very slow; after all, it took humans millions of years to evolve to the current level of intelligence. However, researchers have already learned that evolutionary processes need not be “slow”: the discovery and recombination of building blocks, allied with the speedup provided by implicit parallelism, provides a powerful tool for learning in complex domains. An approach based on the abstraction of natural examples, combined with careful theoretical and computational investigation, will continue to chart useful territory on the landscape of machine learning. [14]

 

Conclusion

 

To sum up, the realization of evolutionary, intelligent architecture machines involves three key factors: 1) the embodiment of an artificial cognitive system that enables machines to perceive the needs of the occupants and the context through interaction, and hence to develop a representational system for processing information and giving instant feedback; 2) a communication network adopting parallel distributed processing, both internally and externally between machines in the manner of neural networks, so that artificial intelligence can emerge in a collective form; 3) an algorithm simulating the natural process of evolution, generating complex and non-linear behaviors that respond to user and context. These factors shape my understanding of the soft architecture machine: softness in terms of the ability to adapt to and interact with its occupants and context in real time, and to develop a character through the unique internal model of context internalized in its mind. The next question is how such a machine could deal with both quantifiable and unquantifiable requirements (e.g. aesthetics). My next step will be to carry out research into these questions and to explore methods for addressing them.

 

References

 

[1] David E. Goldberg & John H. Holland (1988) Genetic Algorithms and Machine Learning, Machine Learning 3: 95-99

[2] Nicholas Negroponte (1976) Soft Architecture Machine, The MIT Press

[3] Protevi, John. (2012) Draft: Deleuze And Wexler: Thinking Brain, Body And Affect In Social Context, Brain

[4] Hubert Dreyfus (1972) What Computers Can’t Do (Cambridge, Mass.: MIT Press; revised and reissued as What Computers Still Can’t Do in 1992) influenced the embodied mind school with its critique of computationalism and connectionism.

[5] Susan Hurley (2001), Perception and Action: Alternative Views, Synthese 129 (2001) 3-40.

[6] Michael Wheeler (2007), Reconstructing the Cognitive World, The MIT Press

[7] Andy Clark & David J. Chalmers (1998) ‘The Extended Mind‘, ANALYSIS 58: 1: 1998 p.7-19

[8] Kim Sterelny. (2010) “Minds: Extended Or Scaffolded?”, Phenomenology and the Cognitive Sciences

[9] Bruce E. Wexler (2006) Brain and Culture, The MIT Press

[10] John Sutton, Celia B. Harris, and Amanda J. Barnier, ‘Memory and Cognition’, ch. 14 in Susannah Radstone & Bill Schwarz (eds.) Memory: Histories, Theories, Debates, Fordham University Press

[11] Andrew Whiten, Carel P van Schaik (2007) The evolution of animal ‘cultures’ and social intelligence, The Royal Society April 2007 Volume: 362 Issue: 1480

[12] Lashon B. Booker (1982) Intelligent Behavior as an Adaptation to the Task Environment, PhD Thesis, University of Michigan

[13] John Frazer (1995) An Evolutionary Architecture, London, Architectural Association

[14] David E. Goldberg & John H. Holland (1988) Genetic Algorithms and Machine Learning, Machine Learning 3: 95-99

 

 

Bibliography

 

 Andy Clark & David J. Chalmers (1998) ‘The Extended Mind’, ANALYSIS 58: 1: 1998 p.7-19

Kim Sterelny. (2010) Minds: Extended Or Scaffolded?, Phenomenology and the Cognitive Sciences

 Protevi, John. (2012) Draft: Deleuze And Wexler: Thinking Brain, Body And Affect In Social Context. Brain

 Nicholas Negroponte. (1976) Soft Architecture Machine, The MIT Press

 John Frazer. (1995) An Evolutionary Architecture, London, Architectural Association

Valentino Braitenberg (1984; 1st MIT Press pbk. ed. 1986) Vehicles: Experiments in Synthetic Psychology, MIT Press

 Peter J. Bentley. (1999) Evolutionary Design by Computers, Morgan Kaufmann

 

Illustrations

 

Fig 1. Valentino Braitenberg (1984; 1st MIT Press pbk. ed. 1986) Vehicles: Experiments in Synthetic Psychology, MIT Press

Fig 2. Ruairi Glynn, Performative Ecologies (2008) [Online] Available from: http://www.ruairiglynn.co.uk/portfolio/performative-ecologies/ [Accessed: 24th Nov 2015]

Fig 3. Joseph Malloch and Ian Hattwick, Instrumented Bodies (2013), Dezeen [Online] Available from: http://www.dezeen.com/2013/08/12/instrumented-bodies-by-joseph-malloch-and-ian-hattwick/ [Accessed: 24th Nov 2015]

Fig 4. Harvard University, Kilobots (2012) [Online] Available from: http://www.eecs.harvard.edu/ssr/projects/progSA/kilobot.html [Accessed: 21st Nov 2015]

 Fig 5. Ichiro Nagasaka (1992) Genetic algorithm. John Frazer. (1995) An Evolutionary Architecture, London, Architectural Association

 
