2001: A Space Odyssey (1968)
So what does the future hold for us? And what will technology enable us, or indeed disable us from, achieving? Futuristic visions from science fiction cinema borrow much from the leading technology of their time to imagine our future homes, workplaces and cities. I’ve always had a laugh watching the 1940s-50s films which show a housewife getting her personal tin-foil-wrapped robot to do the dishes while she gets on with other important jobs, like putting her makeup on to greet her husband when he comes in from a hard day at the office.
Well, it’s hardly surprising that filmmakers had less glossy visions of the future, and I’ve been making a special effort recently to watch films that have explored the kinds of future architecture and cities we may one day inhabit. I recently saw on Regine’s excellent We Make Money Not Art a link to the Top 50 Dystopian Movies of All Time, which has given me plenty of darker visions to examine. One idea in particular which captured the imagination of many writers and directors was that of artificial intelligence and the kinds of power struggles that could ensue between humanity and intelligent agents. With the recent development of my own work in adaptive systems, I have invested a considerable amount of time in understanding the current state of artificial intelligence research. I’m pleased to see that the dystopian visions of man vs machine are, for the time being at least, some way off, since we can’t get much more than insect-level intelligence out of computational systems.
Nonetheless, progress is being made, and it was while visiting MIT last week that I got an opportunity to listen to Marvin Minsky speak a little about his involvement in the development of AI since the 1960s. He currently believes "we need to find more complicated ways to explain our most familiar mental events"; we need to break our thought processes down into the most precise steps possible. In fact, in order to truly understand the human mind, Minsky suggests, we’ll probably need to reverse-engineer a machine that can replicate those functions so we can study it. Thus, he rejects the idea of consciousness as a unitary "Self" in favor of "a decentralized cloud" of more than 20 distinct mental processes. In this view, emotional states like love and shame are not the opposite of rational cogitation; both, Minsky says, are ways of thinking.
A free draft copy of his recent book The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind is available from his website (find links below).
The Emotion Machine : Marvin Minsky
Introduction
Chapter 1. Falling in Love
Chapter 2. Attachments and Goals
Chapter 3. From Pain to Suffering
Chapter 4. What in the world is Consciousness?
Chapter 5. Levels of Mental Activities
Chapter 6. Common Sense
Chapter 7. Thinking
Chapter 8. Resourcefulness
Chapter 9. The Self
Bibliography
Or you can buy the completed book from Amazon:
The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind (Hardcover)
One of Minsky’s long-standing claims is that common sense is very hard to explain or program. Here is an excerpt from a recent interview:
Back when I was writing The Society of Mind, we worked for a couple of years on making a computer understand a simple children’s story: "Mary was invited to Jack’s party. She wondered if he would like a kite." If you ask the question "Why did Mary wonder about a kite?" everybody knows the answer — it’s probably a birthday party, and if she’s going that means she has been invited, and everybody who is invited has to bring a present, and it has to be a present for a young boy, so it has to be something boys like, and boys like certain kinds of toys like bats and balls and kites. You have to know all of that to answer the question. We managed to make a little database and got the program to understand some simple questions. But we tried it on another story and it didn’t know what to do. Some of us concluded that you’d have to know a couple million things before you could make a machine do some common-sense thinking.
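The sheer amount of chained background knowledge Minsky describes can be sketched as a toy forward-chaining rule base. To be clear, this is my own illustration, not Minsky’s actual program; every fact and rule below is invented for the example:

```python
# Toy common-sense knowledge base for the kite story.
# Each rule says: if the premise is known, conclude the consequent.
# Facts and rules are hypothetical, purely to show the chaining.

FACTS = {"mary_invited_to_jacks_party"}

RULES = [
    ("mary_invited_to_jacks_party", "party_is_probably_a_birthday"),
    ("party_is_probably_a_birthday", "guests_bring_presents"),
    ("guests_bring_presents", "mary_needs_a_present_for_jack"),
    ("mary_needs_a_present_for_jack", "present_should_suit_a_young_boy"),
    ("present_should_suit_a_young_boy", "boys_like_toys_such_as_kites"),
    ("boys_like_toys_such_as_kites", "a_kite_is_a_plausible_present"),
]

def infer(facts, rules):
    """Forward chaining: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer(FACTS, RULES)
print("a_kite_is_a_plausible_present" in derived)  # True
```

The point of the toy is how brittle it is: every link in the chain had to be hand-coded, and swapping in a different story means the chain simply breaks, which is exactly the failure Minsky describes.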
I Robot (2004)
He goes on to explain that emotions enable us to swap between different modes of thinking depending on the situation:
The main idea in the book is what I call resourcefulness. Unless you understand something in several different ways, you are likely to get stuck. So the first thing in the book is that you have got to have different ways of describing things. I made up a word for it: "panalogy." When you represent something, you should represent it in several different ways, so that you can switch from one to another without thinking.
Silent Running (1972)
The second thing is that you should have several ways to think. The trouble with AI is that each person says they’re going to make a system based on statistical inference or genetic algorithms, or whatever, and each system is good for some problems but not for most others. The reason for the title The Emotion Machine is that we have these things called emotions, and people think of them as mysterious additions to rational thinking. My view is that an emotional state is a different way of thinking.
When you’re angry, you give up your long-range planning and you think more quickly. You are changing the set of resources you activate. A machine is going to need a hundred ways to think. And we happen to have a hundred names for emotions, but not for ways to think. So the book discusses about 20 different directions people can go in their thinking. But they need to have extra meta-knowledge about which way of thinking is appropriate in each situation.
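Minsky’s idea of an emotion as a wholesale switch between sets of mental resources could be sketched like this. The states and resource names below are my own hypothetical choices, not anything from the book; the sketch only illustrates the switching mechanism:

```python
# Sketch: an "emotional state" selects which thinking resources are
# active. Switching state swaps the whole resource set at once.
# States and resources are invented for illustration.

MODES = {
    "calm":    {"long_range_planning", "deliberation", "reflection"},
    "angry":   {"fast_reaction", "threat_focus"},
    "curious": {"exploration", "analogy_making", "reflection"},
}

class EmotionMachine:
    def __init__(self, mode="calm"):
        self.mode = mode

    def switch(self, mode):
        # An emotion isn't an addition to thinking here; it IS the
        # selection of a different way of thinking.
        self.mode = mode

    @property
    def active_resources(self):
        return MODES[self.mode]

m = EmotionMachine()
print("long_range_planning" in m.active_resources)  # True
m.switch("angry")
print("long_range_planning" in m.active_resources)  # False: anger drops planning
```

Note how "angry" simply has no long-range planning resource at all, which matches the quote above: the state change is the change in what you can think with.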
Minsky also expresses disappointment about "how few people have been working on higher-level theories of how thinking works", complaining that too many "people look around to see what field is currently popular, and then waste their lives on that. If it’s popular, then to my mind you don’t want to work on it."
Artificial Intelligence: AI (2001)
I’m personally more a fan of Rodney Brooks, who claims that Minsky erred in not putting the concepts of situatedness and embodiment onto the AI research agenda. From the work I’ve personally done building simple robotic systems, Brooks’ approach is more applicable to the kinds of systems I create, but Minsky’s ideas do raise the question: will we need to spoon-feed our architecture common sense? There is no clear-cut answer for all circumstances, but personally I find the bottom-up strategies of situatedness and embodiment more appealing from an architectural perspective.
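For readers unfamiliar with Brooks’ bottom-up approach, the core of his subsumption architecture can be sketched in a few lines: simple behaviours are layered by priority, and a higher layer suppresses the layers below it when its trigger fires. The sensor names and behaviours here are hypothetical, chosen only to show the layering:

```python
# Minimal sketch of Brooks-style subsumption control. Behaviours are
# ordered by priority; the first one that produces an action wins,
# subsuming everything below it. Sensors/behaviours are invented.

def avoid_obstacle(sensors):
    if sensors.get("obstacle_near"):
        return "turn_away"
    return None  # no opinion: defer to lower layers

def seek_light(sensors):
    if sensors.get("light_detected"):
        return "move_toward_light"
    return None

def wander(sensors):
    return "move_forward"  # default behaviour, always fires

# Highest priority first.
LAYERS = [avoid_obstacle, seek_light, wander]

def act(sensors):
    for behaviour in LAYERS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(act({"light_detected": True}))                         # move_toward_light
print(act({"light_detected": True, "obstacle_near": True}))  # turn_away
print(act({}))                                               # move_forward
```

Notice there is no knowledge base anywhere: competent-looking behaviour emerges from the interaction of the layers with the world, which is exactly why the spoon-feeding question above has bite.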