Recreating a system of audience control using coloured paddles
Last week I created a series of experiments in collaborative control which produced some interesting results and I plan to continue down this road.
I want to recreate Loren Carpenter’s Pong experiment, because I expect I will be able to learn from going through the process. It will be interesting to see, in person, how an audience reacts and I hope it will inspire more ideas. This week I’m beginning investigations into how to do this technically.
Thinking about the technology involved in the Pong experiment, it’s pretty amazing that it was achieved in 1991. That’s the same year that Tim Berners-Lee put the first website online at CERN. I even started to wonder whether the description of the experiment in AWOBMOLG was entirely accurate. After all, this was an age when a good computer ran at 25 or 33 MHz. The laptop I’m writing this on today runs at 2.4 GHz, almost a hundred times faster. There were no programming languages designed for visuals, such as Processing or openFrameworks. Certainly there were no off-the-shelf libraries for computer vision or blob tracking. How did they manage to do it?
This skepticism was quickly quashed when I found Carpenter’s patent for the technology. Physically, the system relies on a light mounted next to the camera and reflective material on the audience’s paddles, which makes them much easier to pick out. On the software side, the entire thing was coded from scratch in C.
Luckily for me, these days OpenCV and YouTube tutorials are here to help. I started experimenting with blob detection using this tutorial by Daniel Shiffman. With a little adjustment of the thresholds, and some coloured cards, I managed to get reliable results.
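At its core, blob detection of this kind boils down to two steps: threshold the image so only the target colour survives, then group adjacent surviving pixels into connected regions. Here's a minimal pure-Python sketch of that idea on a toy grid of brightness values – not the OpenCV code I'm actually running, just an illustration of what the library is doing for me:

```python
# Minimal sketch of blob counting: threshold a grid of values, then
# flood-fill 4-connected regions. Illustrative only - the real prototype
# uses OpenCV's blob tracking as shown in Shiffman's tutorial.

def count_blobs(grid, threshold):
    """Count 4-connected regions of cells whose value exceeds threshold."""
    rows, cols = len(grid), len(grid[0])
    mask = [[v > threshold for v in row] for row in grid]
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1                      # found a new region
                stack = [(r, c)]
                while stack:                    # flood-fill it
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

# Two bright regions on a dark background -> two blobs
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
print(count_blobs(frame, threshold=5))  # -> 2
```

Raising the threshold is exactly the adjustment I was making with the coloured cards: too low and background objects merge into phantom blobs, too high and the cards disappear.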
Prototyping. I'm planning to recreate Loren Carpenter's Pong experiment from 1991, where a crowd played the game by voting to move their paddle up or down by showing red or green on cards they were given. This works pretty well but to get it to track 100+(?) cards, which are all a lot further from the camera than this will be interesting. #processing #blobtracking #blobs #pong #lorencarpenter #siggraph #allwatchedoverbymachinesoflovinggrace #adamcurtis #collaborative #collaboration #crowd #interactive #experiment #prototype #test
It does pick up some background objects, and I expect that in an audience there would be lots of false positives from people’s clothes, so I ordered some reflective tape to experiment with Carpenter’s method.
I thought I would need a particular type of light and a dark room to get this to work at all. In fact, using my phone’s torch next to my Canon 7D in the normally lit auditorium worked really well.
Reflective tape seems to work super well as long as it's not too far away. The yellow picks up a lot better than the red, it's just brighter. I'm only using my phone light next to the camera lens at the moment. The reflection is lost quite easily when the paddle is turned to the side. Maybe a bigger light source will help, or 3-5 lights across the front. #experiment #lorencarpenter #blobtracking #blobdetection #processing #proofofconcept #prototype #test #interactiveart #interactive #shiny
Problems do happen when the cards are too far away, as the light from my phone isn’t reaching far enough. The tape also needs to be aimed at the camera fairly well to reflect the light properly. But neither issue stops the system being usable. I’ll explore improvements: more lights, brighter lights, or perhaps more omnidirectional reflectors.
In the meantime I decided to code up the Pong game and take advantage of my friend’s Pancake Day gathering to test out how the control actually works. The camera feed is split into two halves, creating two teams, each controlling one of the paddles.
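The team split is simple: a blob's x-position decides which paddle it votes for. A sketch of that step, with the colour classification stubbed out (in the real Processing sketch it comes from the blob detection; the function name and tuple format here are my own invention for illustration):

```python
# Sketch of the two-team split: blobs in the left half of the frame vote
# for the left paddle, blobs in the right half for the right paddle.

def team_votes(blob_positions, frame_width):
    """Tally card colours per team.
    blob_positions: list of (x, colour) tuples, colour in {'red', 'yellow'}."""
    left = {'red': 0, 'yellow': 0}
    right = {'red': 0, 'yellow': 0}
    for x, colour in blob_positions:
        team = left if x < frame_width / 2 else right   # split down the middle
        team[colour] += 1
    return left, right

blobs = [(100, 'red'), (200, 'yellow'), (500, 'red'), (600, 'red')]
left, right = team_votes(blobs, frame_width=640)
print(left, right)  # left: 1 red, 1 yellow; right: 2 red, 0 yellow
```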
People seemed to enjoy playing and got into the competitiveness. (One friend asked what the consensus was on leaning over and putting his card into the other team’s space to mess them up.) I quickly coded up a counter for how many passes a rally had achieved, and a “top score” for the longest rally so far. This made it more of a group activity than a competitive one.
I found that, unsurprisingly, people weren’t keen on having a bright light shone in their faces, and we ended up reverting to the plain colours – not using reflection. At this size of space, webcam resolution was sufficient, and I was able to control the space to avoid false positives (we removed a red cushion and a red pair of scissors), so this worked fine. I’ll experiment with offsetting the angles of the camera and light to reduce the glare.
The biggest issue with the control is that it is hard to make the pong paddle stay still. Currently, if there are more red cards than yellow, the paddle moves down; more yellow than red, and it moves up. I tried adjusting the code so that a larger majority was needed to make the paddle move, but everyone said it felt a lot less responsive: on a three-person team, everyone had to have their card in agreement to make the paddle move, and we soon reverted back. Perhaps in a larger group this would work better.
People said that the game felt very responsive, which was encouraging to hear. I wonder what it will be like when there are many more people working together to control the paddle – will individuals still feel like they have control? How will it feel to be part of the group?