Infrared Tracking Projection

This is a really exciting invention I came up with for our production of Sleepy Hollow.  I wanted the pumpkin head to talk – like really talk – and I also wanted it to explode into pieces when the horseman dramatically throws it across the stage at Ichabod.

I wanted to use projection to create the animation on a destructible pumpkin, but I knew it would be impossible to perfectly line up the position of the pumpkin or even the horse with a pre-set projection location.  I came up with a rough idea that I could conceal a simple and very durable infrared LED on a battery in the pumpkin somewhere, and I could write a program that would move the pumpkin’s projected face around in the projection area based on the position of the IR LED.

I needed an infrared wavelength far enough from visible light that it wouldn’t get too much interference from stage lights and warm bodies, and a matching camera lens filter that would block out all the normal visible light to make it easier for my program to pinpoint the tracker.  I settled on a 950nm IR wavelength and bought a bag of LEDs and a filter from Amazon.  The LEDs had a forward voltage range that made them a perfect fit for a single AA battery, which simplified the electronics that were going to take a beating every night.  I had 10 LEDs to work with, knowing I’d probably destroy or lose a couple in testing.
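For the curious, the electronics really are that simple.  As a sanity check (using assumed datasheet numbers, not values from my actual parts: a typical 950nm IR LED drops around 1.2–1.5V and tolerates tens of milliamps, and a fresh AA cell sits near 1.5V), the classic LED resistor formula shows why a single cell is such a good match:

```python
# Rough sanity check for driving an IR LED straight off one AA cell.
# Assumed numbers (not from the original build): a typical 950 nm IR LED
# has a forward voltage around 1.2-1.5 V and a max continuous current of
# a few tens of mA; a fresh alkaline AA sits near 1.5 V.

def series_resistor(v_supply, v_forward, i_target):
    """Classic (v_supply - v_forward) / i formula for an LED resistor."""
    drop = v_supply - v_forward
    return drop / i_target if drop > 0 else 0.0  # 0 => no resistor needed

# With a 1.5 V cell and a 1.4 V forward drop, the resistor needed for
# 20 mA is tiny -- which is why a bare AA cell is a reasonable driver here.
print(round(series_resistor(1.5, 1.4, 0.020), 1))  # prints 5.0 (ohms)
```

With only about a tenth of a volt of headroom, the required resistor is tiny, and in practice the battery’s internal resistance and the LED’s rising forward voltage help do the limiting.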

I started by prototyping the projection and tracking.  My early tests involved a different LED wavelength and camera (I ended up getting a higher speed camera and a higher frequency IR setup later), but from the first test I could tell that with some tweaking my idea would work, and it gave me what I needed to start developing the software to track the LED.  I used Visual C# and a free webcam library called AForge.NET, which grabs an image from the camera every frame, and I bought the cheapest LED projector I could find that seemed to have the range I needed.  The final camera was 40fps at 800×600 resolution, a compromise between performance and low cost.  My first tests projected my own face onto a paper plate across my bedroom (my dog was not a fan of this, and sorry, there’s no video of that).
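The tracking logic itself is simple enough to sketch.  The real code is C# on top of AForge.NET, but the core idea is language-agnostic: with the 950nm filter on the lens, the LED is by far the brightest thing in the frame, so “tracking” is just thresholding and taking the centroid of the bright pixels.  An illustrative Python version over a toy grayscale frame:

```python
# Illustrative sketch of the per-frame tracking step (the real project
# used Visual C# with AForge.NET). With the IR-pass filter on the lens,
# the frame is nearly black except for the LED, so we threshold and take
# the centroid of the bright pixels. The threshold value is made up.

def find_led(frame, threshold=200):
    """Return the (x, y) centroid of pixels above threshold, or None."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # signal lost: pumpkin off camera (or mid-throw)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Toy 4x4 "frame": one bright 2x2 blob in the lower right corner.
frame = [
    [0,   0,   0,   0],
    [0,   0,   0,   0],
    [0,   0, 255, 255],
    [0,   0, 255, 255],
]
print(find_led(frame))  # prints (2.5, 2.5)
```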

Next, I needed to figure out the pumpkin itself, because the kids were going to have to start working with it early on.  I liked the idea of using a real pumpkin, but there were safety and cleanup concerns with having a kid throw a real pumpkin across the stage.  I didn’t want to settle for a plastic pumpkin, because I wanted it to look like a real pumpkin exploding on the stage, so I carved a pumpkin out of layered styrofoam, cut it into 5 jagged, jigsaw-puzzle-shaped pieces, and applied the “guts” to the insides with spray foam and paint-soaked yarn.  I drove steel drywall screws into the cut edges of the pieces and superglued small neodymium magnets to one side of each screw pair so that the pieces would stick to one another magnetically but easily break apart on impact.  Finally I applied a killer paint job and brought it down to the theater for a test.

There were some obvious issues with the test.  The IR LED was very directional, so if I didn’t aim the front of the pumpkin right at the camera I’d lose the signal.  I solved this by adding a big “diffuser lens” of hot glue over the LED so that the light became a bulb instead of a beam.

The positioning of the camera/projector was somewhat tricky to calibrate, so I added code to my program to make it easier to adjust the boundaries of the projection space relative to the camera boundaries.  I also needed it to be a little more forgiving about loss of signal, continuing to project to the same location slightly longer before assuming the pumpkin had been thrown, which was a simple software tweak.
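Here’s a rough Python sketch of those two tweaks.  The projector resolution, calibration rectangles, and hold length below are made-up illustrative numbers, not the values from my actual program:

```python
# Sketch of the two software tweaks: a simple linear rescale from camera
# coordinates into projector coordinates, plus a "hold" counter so a few
# dropped frames don't count as a throw. Numbers here are illustrative.

CAM_W, CAM_H = 800, 600        # camera resolution from the build
PROJ_W, PROJ_H = 1280, 720     # assumed projector resolution
HOLD_FRAMES = 8                # assumed grace period (~0.2 s at 40 fps)

def cam_to_proj(pt, cam_box=(0, 0, CAM_W, CAM_H),
                proj_box=(0, 0, PROJ_W, PROJ_H)):
    """Map a camera-space point into projector space by simple scaling.

    The adjustable boxes stand in for the calibration UI: they let you
    tweak the projection boundaries relative to the camera boundaries.
    """
    cx, cy, cw, ch = cam_box
    px, py, pw, ph = proj_box
    return (px + (pt[0] - cx) * pw / cw,
            py + (pt[1] - cy) * ph / ch)

class Tracker:
    """Keeps projecting at the last known spot for HOLD_FRAMES misses."""
    def __init__(self):
        self.last = None
        self.misses = 0

    def update(self, led_pt):
        if led_pt is not None:
            self.last, self.misses = led_pt, 0
        else:
            self.misses += 1
            if self.misses > HOLD_FRAMES:
                self.last = None  # now assume the pumpkin has been thrown
        return None if self.last is None else cam_to_proj(self.last)

t = Tracker()
print(t.update((400, 300)))  # center of camera -> (640.0, 360.0)
print(t.update(None))        # brief dropout: still (640.0, 360.0)
```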

tracker screenshot

The last step was to make a real jack-o’-lantern face that could speak the correct line on cue.  That presented a bit of a conundrum.

If I had been a 3D graphics designer I might have rendered a CGI jack-o’-lantern face speaking the words.  In fact I did some experiments with Adobe’s Characterizer, and even tried drawing frames for a few different face shapes, similar to the way cartoons are made.  I wasn’t happy with any of those experiments.  I settled on actually applying some grease paint to my own face and using video processing effects in Adobe Premiere to mask out all but the elements that I wanted to project.

Here is the video clip of my painted face that I started with:


And here is the finished version after applying processing in Adobe Premiere:

horseman talking

I found that an animated GIF could be moved around the screen much faster than a full-blown video file, so I converted the clip to GIF format from Premiere, and the audio was separately processed in Audacity to make it much scarier:

To get the flaming video, I first inverted my face video above to give me white eyes, nose, and mouth on a black face.  I played with the brightness and contrast quite a bit to eliminate as much grayscale and as many human facial features as possible, masked out the area around my face, and added masked brightness/contrast layers to the inside of my mouth and eyes.  Once I had the correct black and white line-art animated face, I added flames by using YouTube footage of flames with “screen” opacity mixing, and a video of a flaming marshmallow with partial transparency in each eye to achieve the licking flames.
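As an aside, “screen” mixing has a neat closed form, which is why it works so well for compositing flames shot on black: for channels normalized to 0..1, the result is 1 - (1 - base) * (1 - blend), so the black pixels in the flame layer leave the face untouched and the bright flame pixels lighten it.  A one-channel sketch:

```python
# "Screen" blend mode, the compositing rule used for the flame footage.
# For channel values normalized to 0..1:
#     result = 1 - (1 - base) * (1 - blend)
# Black in the blend layer is a no-op; white always wins.

def screen(base, blend):
    """Screen blend for one normalized (0..1) channel value."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

print(screen(0.0, 0.0))  # black over black stays black: 0.0
print(screen(0.5, 0.5))  # two mid-grays screen to 0.75
print(screen(0.2, 1.0))  # white always wins: 1.0
```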

The audio was processed in Audacity with a frequency shift, reversing the sound, applying a reverb (echo), and then reversing it forwards again.  If you pause the audio player at the beginning of the word “Ichabod,” it’s very easy to unpause it when the face starts to mouth that word and enjoy the combination of video and audio.
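If you’ve never tried the reverse-reverb trick, the reason it sounds so unearthly is that the echo tail lands *before* each sound instead of after it.  A toy sketch with a bare one-tap echo standing in for the reverb (the delay and decay numbers are made up; the real audio went through Audacity):

```python
# Toy model of the reverse-reverb trick: reverse the signal, add an echo,
# then reverse it forwards again, so the echo precedes each sound.
# A single-tap echo stands in for a real reverb; numbers are made up.

def echo(samples, delay=2, decay=0.5):
    """Add one delayed, attenuated copy of the signal to itself."""
    out = list(samples) + [0.0] * delay
    for i, s in enumerate(samples):
        out[i + delay] += s * decay
    return out

def reverse_reverb(samples):
    backwards = samples[::-1]   # 1. reverse
    wet = echo(backwards)       # 2. apply the reverb (echo)
    return wet[::-1]            # 3. reverse it forwards again

# A single impulse: the echo now arrives BEFORE the original sound.
print(reverse_reverb([0.0, 0.0, 0.0, 1.0]))
# prints [0.0, 0.0, 0.0, 0.5, 0.0, 1.0]
```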

Here’s another clip from a better position (in the audience) taken after the show:


In the future, the same tracking could be used to project anything onto anything whose position is not known in advance, and I could even do the face processing in real time rather than from an animated GIF.  If I were ever to commercialize this application, or use it in a situation with more advanced requirements, I’d tweak the software to include self-calibration and quadrilateral mapping instead of the simple scaling it uses now.  The software could also be augmented to work with multiple cameras and projectors to cover a larger area or more angles.  Finally, I might look for a better projector: the cheap one worked for a simple lantern face, but anything more ambitious would call for a brighter, more powerful unit.
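For reference, one lightweight way to do that quadrilateral mapping (short of a full homography) is bilinear interpolation: normalize the camera point to 0..1, then blend between the four measured projector-space corners of the calibration quad.  The corner coordinates below are invented for illustration:

```python
# Quadrilateral mapping by bilinear interpolation: a step up from simple
# scaling, handling a keystoned projection area. The camera point is
# normalized to 0..1, then blended between four measured projector-space
# corners (top-left, top-right, bottom-right, bottom-left). The corner
# values here are invented for illustration.

def quad_map(pt, cam_size, corners):
    """Map a camera point into the quad (tl, tr, br, bl) bilinearly."""
    u = pt[0] / cam_size[0]
    v = pt[1] / cam_size[1]
    (tlx, tly), (trx, try_), (brx, bry), (blx, bly) = corners
    top = (tlx + (trx - tlx) * u, tly + (try_ - tly) * u)   # along top edge
    bot = (blx + (brx - blx) * u, bly + (bry - bly) * u)    # along bottom edge
    return (top[0] + (bot[0] - top[0]) * v,
            top[1] + (bot[1] - top[1]) * v)                 # between edges

# A slightly keystoned quad (wider at the bottom), with an 800x600 camera.
corners = [(100, 50), (700, 60), (760, 560), (60, 540)]
print(quad_map((400, 300), (800, 600), corners))  # prints (405.0, 302.5)
```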