The video above is an early panel prototype, just to demonstrate the sensor arrangement, the math that projects sensor data onto a grid of rings, the audio configuration and capabilities, and a little logic to make it pretty.
I am currently working with a sheet of black MDF that LiveWork CNC’ed for me to house all the components perfectly, and I’m developing the software for the panels. I’ll post a video of the new prototypes soon. Just this week I made a major change to the hardware and read functions that will improve the isolation between the different sensors. At the moment I’m focused on being able to clearly read and propagate a current across a panel; next I’ll work on passing it on to the next panel. The communications protocols will take some development, but they shouldn’t be nearly as surprising and elusive as getting good readings from all the sensors involved in reading a gesture.
When the software reaches the point that the panels can auto-update in a chain, we’ll start building almost a hundred of them to fill up the shipping container from floor to ceiling.
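To make the hand-off idea concrete, here is a rough Python sketch of the kind of panel-to-panel chaining I have in mind: each panel tracks a “current” moving across its columns and forwards it to its neighbor at the far edge. The class and method names are hypothetical illustrations, not the real firmware.

```python
class Panel:
    """Hypothetical sketch of one panel in the chain (not the real firmware)."""

    def __init__(self, width, neighbor=None):
        self.width = width          # number of sensor columns on this panel
        self.neighbor = neighbor    # next panel in the chain, if any
        self.position = None        # column the current occupies, or None
        self.intensity = 0

    def receive(self, intensity):
        """Accept a current entering from the previous panel."""
        self.position = 0
        self.intensity = intensity

    def step(self):
        """Advance the current one column; hand off at the far edge."""
        if self.position is None:
            return
        self.position += 1
        if self.position >= self.width:
            self.position = None
            if self.neighbor is not None:
                self.neighbor.receive(self.intensity)
```

The real protocol will need addressing and error handling on top of this, but the basic shape is a bucket brigade.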
For Sleepy Hollow, we needed a scary steed for the Headless Horseman to ride, for the grand finale scene of the children’s play. The script suggests putting a child on a dressed up ladder, stationary in the corner of the stage, revealed in a flash of light. Obviously, that was just not good enough for me.
I wanted to build a horse on a rolling platform, to be operated by one adult inside and a child on the horse’s back. I wanted the head to be articulated to some degree to create some realism, but was willing to forgo realistic legs in favor of ghostly gliding for this show.
I watched some videos of the Handspring Puppet Company’s War Horse puppet, https://www.youtube.com/watch?v=h7u6N-cSWtY, which I found incredibly inspiring, but also much too complicated for me to build in a month and much too difficult to be operated by me and one child. I really liked the way the ears were articulated, and the way that War Horse combined real horse movements with artistic abstraction. I found my own simpler compromise between the two for this piece.
I started with some simple mock-ups using Popsicle sticks and rivets to get the basic movements right. I wanted a natural bobbing movement for walking/galloping, an up-and-down nodding movement for whinnying, and a side-to-side motion to turn the head to look at Ichabod Crane during the chase.
I built a skeleton from lauan, with one pipe nested inside another for the steering/nodding mechanism. The initial design used a wooden dowel, but the wood split during tests, so I replaced it with an aluminum pipe. In the final version of the mechanism, I used Chicago screws as axles for the pivot points in the lauan.
In addition to the head movements, I built an articulated ear mechanism, which was to be operated by the rider. The ears, constructed from cardstock sandwiched between sheets of black craft felt, cut and folded into ear shape, were mounted on small cardboard tubes normally used for housing mason bees, which pivot inside PVC pipe sleeves. The mechanism is operated from the base of the neck by a lever that spans the neck lauan. A little wool applied to the inside and outside in the correct proportions finished the effect.
The eyes were sculpted by heating transparent Christmas ornament halves and squeezing them into eye shapes, lightly spray-painting them black inside, and bordering them with a halved corrugated plastic tube. The deep red LEDs sit on large heatsinks behind those shells, wired to a very simple knob-operated dimmer circuit.
I used paper mache techniques to construct a foundation for the nose, which I later coated with a layer of water-based spray foam, followed by black latex paint. A couple of spurts of lacquer added wetness to the nostrils and mouth. The hair on the whole head was made from roving wool, which I applied directly to the wire and paper mache foundation using spray adhesive and a pressing and hair-pulling technique to build up layers. Following is a progression of the skinning of the horse…
We plan to re-use the foundation of this horse for our summer production of Pippi Longstocking; of course, it will be re-skinned with white wool, happier, blunter teeth, and no glowing red demon eyes. I might add some legs for fun, although that depends on how busy I am, as the summer shows have a very short timeline and the horse is not a main character.
This is a really exciting invention I came up with for Sleepy Hollow. I wanted the pumpkin head to talk – like, really talk – and I also wanted it to explode into pieces when the horseman dramatically throws it across the stage at Ichabod.
I wanted to use projection to create the animation on a destructible pumpkin, but I knew it would be impossible to perfectly line up the position of the pumpkin, or even the horse, with a pre-set projection location. My rough idea was to conceal a simple, very durable infrared LED on a battery somewhere in the pumpkin, and write a program that would move the pumpkin’s projected face around the projection area based on the position of the IR LED.
I needed an infrared wavelength far enough from visible light that it wouldn’t get too much interference from stage lights and warm bodies, and a matching camera lens filter that would block all the normal visible light to make it easier for my program to pinpoint the tracker. I settled on a 950nm IR wavelength and bought a bag of LEDs and a filter from Amazon. The LEDs had a voltage range that made them a perfect fit for a single AA battery, which simplified the electronics that were going to take a beating every night. I had 10 LEDs to work with, knowing I’d probably destroy or lose a couple in testing.
I started by prototyping the projection and tracking. My early tests involved a different LED wavelength and camera – I ended up getting a higher-speed camera and higher-frequency IR setup later – but from the first test I could tell that with some tweaking my idea would work, and it gave me what I needed to start developing the software to track the LED. I used Visual C# and a free webcam library called AForge.NET, which grabs an image from the camera every frame, and I bought the cheapest LED projector I could find that seemed to have the range I needed. The final camera was 40fps at 800×600 resolution, a compromise between performance and low cost. My first tests projected my own face onto a paper plate across my bedroom – my dog was not a fan of this; sorry there’s no video of that.
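The real tracker is written in Visual C# with AForge.NET, but the core idea fits in a few lines. Here is a hedged Python/NumPy sketch of it: threshold the filtered frame, take the centroid of the bright blob, and scale camera coordinates into projector coordinates. The function names and the projector resolution are illustrative, not taken from the actual program.

```python
import numpy as np

def track_led(frame, threshold=200):
    """Find the centroid of the brightest (IR) blob in a grayscale frame.
    Returns (x, y) in camera pixels, or None if nothing passes threshold."""
    mask = frame >= threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def camera_to_projector(pt, cam_size=(800, 600), proj_size=(1280, 720)):
    """Map a camera coordinate to projector coordinates with simple scaling,
    the same basic approach my program used before any fancier calibration."""
    x, y = pt
    return (x * proj_size[0] / cam_size[0],
            y * proj_size[1] / cam_size[1])
```

With the 950nm filter on the lens, nearly everything except the LED falls below the threshold, which is what makes such a naive tracker workable.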
Next, I needed to figure out the pumpkin itself, because the kids were going to have to start working with it early on. I liked the idea of using a real pumpkin, but there were safety and cleanup concerns with having a kid throw a real pumpkin across the stage. I didn’t want to settle for a plastic pumpkin, because I wanted it to look like a real pumpkin exploding on the stage, so I carved a pumpkin out of layered styrofoam, cut it into five jagged jigsaw-puzzle pieces, and applied the “guts” to the insides with spray foam and paint-soaked yarn. I drove steel drywall screws into the cut edges of the pieces and superglued small neodymium magnets to one side of each screw pair so that the pieces would stick to one another magnetically but easily break apart on impact. Finally I applied a killer paint job and brought it down to the theater for a test.
There were some obvious issues with the test. The IR LED was very directional, so if I didn’t aim the front of the pumpkin right at the camera, I’d lose the signal. I solved this by adding a big “diffusing lens” of hot glue over the LED so that the light became a bulb instead of a beam.
The positioning of the camera/projector was somewhat tricky to calibrate, so I added code to my program to make it easier to adjust the boundaries of the projection space relative to the camera boundaries. I also needed it to be a little more forgiving about loss of signal, continuing to project to the same location slightly longer before assuming the pumpkin had been thrown, which was a simple software tweak.
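The loss-of-signal tweak amounts to a small grace period. This Python sketch illustrates the idea; it is not the actual C# code, and the frame count is a made-up number.

```python
class TrackerWithGrace:
    """Keep projecting at the last-known spot for a few frames after the
    LED disappears, before deciding the pumpkin has been thrown."""

    def __init__(self, grace_frames=8):
        self.grace_frames = grace_frames
        self.last_pos = None
        self.missed = 0

    def update(self, detection):
        """detection is an (x, y) tuple, or None if the LED wasn't seen.
        Returns the position to project at, or None once grace runs out."""
        if detection is not None:
            self.last_pos = detection
            self.missed = 0
            return detection
        self.missed += 1
        if self.missed <= self.grace_frames:
            return self.last_pos   # keep projecting where it was
        return None                # assume the pumpkin has been thrown
```

Tuning the grace period is a trade-off: too short and a briefly occluded LED blanks the face; too long and the face lingers after the throw.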
The last step was to make a real jack-o’-lantern face that could speak the correct line on cue. That presented a bit of a conundrum.
If I had been a 3D graphics designer, I might have rendered a CGI jack-o’-lantern face speaking the words. In fact, I did some experiments with Adobe’s Characterizer, and even tried drawing frames for a few different face shapes, similar to the way cartoons are made. I wasn’t happy with any of those experiments. I settled on applying some grease paint to my own face and using video-processing effects in Adobe Premiere to mask out all but the elements I wanted to project.
Here is the video clip of my painted face that I started with:
And here is the finished version after applying processing in Adobe Premiere:
I found that an animated GIF could be moved around the screen much faster than a full-blown video file, so I converted the clip to GIF format from Premiere, and the audio was separately processed in Audacity to make it much scarier:
To get the flaming video, I first inverted my face video above to give me white eyes, nose, and mouth on a black face; played with the brightness and contrast quite a bit to eliminate as much grayscale and as many human facial features as possible; masked out the area around my face; and added masked brightness/contrast layers to the insides of my mouth and eyes. Once I had the correct black-and-white line-art animated face, I added flames using YouTube footage of flames with “screen” opacity mixing, and a video of a flaming marshmallow with partial transparency in each eye to achieve the licking flames.
The audio was processed in Audacity with a frequency shift, followed by reversing the sound, applying a reverb (echo), and then reversing it forwards again. If you pause the audio player at the beginning of the word “Ichabod,” it’s very easy to unpause it when the face starts to mouth that word and enjoy the combination of video and audio.
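For the curious, the reverse-reverb trick can be approximated in a few lines of NumPy. This sketch uses a simple multi-tap echo in place of Audacity’s reverb and leaves out the frequency shift, so it’s an illustration of the idea rather than the exact processing chain.

```python
import numpy as np

def reverse_reverb(audio, sr=44100, decay=0.4, delay_s=0.08, taps=4):
    """Reverse the clip, apply a simple multi-tap echo, then reverse again,
    so the reverb tail 'leads into' each sound instead of trailing it."""
    x = np.asarray(audio, dtype=float)[::-1]
    delay = int(sr * delay_s)
    out = np.zeros(len(x) + taps * delay)
    out[:len(x)] += x
    # Each tap is a delayed, quieter copy of the (reversed) signal.
    for i in range(1, taps + 1):
        out[i * delay : i * delay + len(x)] += x * (decay ** i)
    return out[::-1]
```

Because the echo is applied to the reversed signal, the final clip has eerie pre-echoes swelling up before every syllable, which is what makes the voice sound so wrong.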
Here’s another clip from a better position (in the audience) taken after the show:
In the future, the same tracking could be used to project anything onto anything whose position is not absolutely known; I could even do the face processing in real time rather than from an animated GIF. If I were ever to commercialize this application, or use it in a situation with more advanced requirements, I’d tweak the software to include self-calibration and quadrilateral mapping instead of the simple scaling it uses now. The software could also be augmented to work with multiple cameras and projectors to cover a larger area or more angles. And I might look for a more powerful projector; this cheap one worked for a simple lantern face, but to do something more I’d want a brighter, more expensive projector.
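For reference, the quadrilateral mapping upgrade boils down to fitting a 3×3 homography from four camera/projector point pairs instead of scaling each axis independently. A NumPy sketch of that (not part of the current program) might look like this:

```python
import numpy as np

def fit_homography(cam_pts, proj_pts):
    """Solve the 3x3 homography mapping four camera points to four
    projector points via the standard direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A)
    # The homography is the right singular vector for the smallest
    # singular value of A (the null vector for exact correspondences).
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def apply_homography(H, pt):
    """Map one camera point through H into projector coordinates."""
    x, y = pt
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Unlike simple scaling, this handles a projector that is keystoned or off-axis relative to the camera, since the four corners can map to an arbitrary quadrilateral.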
The GCP Junior Stars production of Seussical required some really bold lighting, so a special DMX-controlled lighting rig was developed to allow six 4-foot segments of flats to be lit individually by pairs of RGB LED strips. This could have been accomplished with commercial theatrical lighting equipment, but the budget would not allow for it, and the learning and future customization opportunities added a lot of value to the rig. The rig went through some field repairs on a couple of occasions, so it’s a little beat up in this post-production picture.
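As an illustration of how simple the DMX patch for a rig like this can be, here’s a hypothetical Python helper that gives each segment three consecutive channels (red, green, blue), with both strips in a pair driven as one fixture. The actual rig’s patch may differ.

```python
def dmx_address(segment, channel_names=("r", "g", "b")):
    """Return the DMX channel for each color of one wall segment.
    Segments are numbered 0-5; DMX channels are 1-based, so segment 0
    gets channels 1-3, segment 1 gets 4-6, and so on."""
    base = 1 + segment * len(channel_names)
    return {name: base + i for i, name in enumerate(channel_names)}
```

Six segments at 3 channels each use only 18 of the 512 channels in a DMX universe, leaving plenty of room to grow.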
In addition to the wall lights, a scrim technique was prototyped and developed for the show so that children could appear and disappear in a whimsical way. This early prototype test video with Isaac in it is pretty funny:
Besides the electronic lights, the bathtub, the “drooping” tree effect, and Gertrude’s measuring-tape-derived pop-out tail feathers were handmade for this show. It was a lot of fun!
Doreen Horn created the gorgeous Seuss-inspired backdrop for Whoville.
I bought a cheap LED projector from Amazon that boasted a “water effect,” hoping I would be able to hack it for my own purposes, and I was not disappointed.
I picked up a 700mA LED constant-current driver board and some cheap 365nm UV LEDs from China, took out the original RGB LED, soldered a couple of connections to the board where the DC adaptor plugs in, and voilà!
The 365nm LEDs are the best, because most of that light spectrum is truly invisible. Unfortunately, the plastic lens on the LED itself fluoresces a cyan color under the powerful UV light, which ruins the effect. To solve this, I glued a broken piece of incandescent blacklight-bulb glass over the lens inside the projector. The glass used for these bulbs, called Wood’s glass, is deep purple and blocks all but UV and IR light, and the bulbs are much cheaper than fancy photography filters made of the same material, so it worked perfectly for this need.
What you see in the video are small containers of UV-responsive pigment that will be used to color reefs and fish, against my off-white tile floor in the dark. I left the packaging off to the side, because that white paper picks up the UV too, but the wider area it covers better shows the texture of the water effect.
I spread the containers out so you can see the contrast between the UV-reactive materials and the non-reactive tile floor; it’s stunning how well these pigments fluoresce, as if the light came from within them.
Michael volunteered to create art/technology props and effects for the parody show “Star Chix” at the St. Petersburg City Theatre. A couple of weeks into the work, he was also given the role of the villain, which put a bit of a time crunch on what technical effects could be completed, but the work completed is impressive nonetheless.
Have a look at the tech reel, an edit of the filmed show that includes just the portions involving technical effects. You will see the backlit black-hole screen, color-changing console, “time-altering life suspension laser ray,” infrared-activated phaser-fire strobes, and the programmable ship’s engine.
The phasers were made from toy plastic guns, which were taken apart and retrofitted with circuitry to recognize when the trigger is pulled and set off a small infrared burst, just like hitting the power button on a TV remote. The IR emitter was tucked into a black paper tube so that only a very narrow beam is emitted, allowing the shooter to target just one girl without setting off all the phaser strobes at once. The redshirt girls carry tricorder bags (shown in the title image), which have a 100W COB LED and IR sensor built into the strap and disguised with colored sheer fabric, and have 3 AA batteries and supporting circuitry inside that can generate a 100W pulse when the IR sensor picks up the right signal.
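The emitter/receiver handshake works like a TV-remote key code: a fixed pattern of IR burst durations that the bag’s receiver must match within a tolerance before it fires the strobe. Here’s a hypothetical Python sketch of the matching side; the code pattern and tolerance are made up for illustration.

```python
# Made-up mark/space pattern for the phaser "key code", in microseconds.
PHASER_CODE = [900, 450, 900, 450, 1800, 450]

def matches(measured, expected=PHASER_CODE, tolerance=0.2):
    """True if every measured burst duration is within +/-20% of the
    expected one, so stage-light IR noise won't trigger the strobe."""
    if len(measured) != len(expected):
        return False
    return all(abs(m - e) <= e * tolerance
               for m, e in zip(measured, expected))
```

Requiring the whole pattern, rather than any IR pulse, is what keeps hot stage lights and stray remotes from firing the 100W flash.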
Here is a close-up of the life-suspension laser ray, with a slow motion replay at the end. This prop is supposed to freeze all the characters on-stage (and unfreeze them). It is made with mirrored cardboard, automotive vinyl wrap, computer fans, 30W green COB LEDs, EL wire, supporting circuitry, a huge capacitor bank, and a big green arcade button.
Part of the show was a lip synch battle in which two members of the audience were invited onto the stage to participate. Michael made this sign to encourage participation and make sure that the audience would see the sign-up sheet while milling around in the lobby. It’s made from black posterboard, EL wire, sheer fabric to mask the “turned off” EL wire, and supporting circuitry on the back.
Here is a close-up of the Data PADDs. The redshirt chorus starts off the show from the back of the audience, dancing their way up the aisles, and these tablets were an accessory they could use in the dancing while out of the stage lights, then slide into the tricorder bags (in the title image above) when the dance was over.
They are made of two pieces of sturdy plexiglass, sanded lightly on the surfaces that were to glow, with LED strips and watch-battery holders with an on/off switch between them, all sandwiched between two pieces of heavy cardboard with a vinyl skin and Sharpie. The holographic strip is made from bird-scare tape. The light-up graphics are two back-to-back layers of printed transparency on top of the sanded, backlit plexi.
Here is a close-up of the starship engine taken during early construction, about a month before showtime. The engine is made from a large piece of PVC pipe bent into a semicircle and attached to a metal L-brace in the back, drilled and fitted with wooden dowels, which are connected at each vertex with clear vinyl tubes bundled with a brass brad (the type with foldable tines). The skeleton is overlaid with cheap frosted shower curtains cut into the correct shapes, and WS2812 addressable LED strips were used for the animation. The engine needed to be light and safe enough to lift up on top of the stage flats; you can see it completed in the tech demo reel above.
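As an example of the kind of pattern the WS2812 strips run, here’s a small Python sketch that generates one frame of a wrapping “warp pulse” chase: a bright head with a fading tail traveling along the strip. It’s an illustrative pattern, not the actual show animation, and the strip length and colors are made up.

```python
def engine_frame(t, num_leds=60, speed=1.0, brightness=1.0):
    """Compute one frame of a wrapping pulse chase as a list of (R, G, B)
    tuples, one per LED. t is time in seconds; speed is laps per second."""
    head = int(t * speed * num_leds) % num_leds
    frame = []
    for i in range(num_leds):
        # Distance this LED sits behind the pulse head, with wrap-around.
        d = (head - i) % num_leds
        level = max(0.0, 1.0 - d / 10.0)   # 10-LED fading tail
        b = int(255 * level * brightness)
        frame.append((0, b // 4, b))       # blue-ish warp color
    return frame
```

The remote’s speed and brightness sliders map naturally onto the `speed` and `brightness` parameters, and the mode switch would simply select among several frame generators like this one.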
The engine animation patterns are controlled by a radio-enabled remote that can switch modes, with sliders to control speed and brightness. This remote was operated by one of the actors on stage.