Today, I revisited footage I shot back in January 1995 with my trusty Hi8 camera—the first significant purchase I made after booking a Lee Jeans commercial. My original plan was to create a documentary about my mentor and landlord in Malibu. Now, nearly 30 years later, I’m viewing the footage for the very first time.
Initially, I thought I’d use the tapes as background material for Rogue Wave, the book I’m working on. But a new idea struck: why not finally remix and release the project? The catch? Those tapes weren’t just a creative endeavor—they were also my safety net. Back then, when things felt chaotic, the camera became my shield. It gave me a purpose, a reason to say, “I’m making a documentary,” and keep moving forward.
In the 90s, the indie film dream was alive and well. Everyone in Hollywood aspired to be the next Tarantino, hoping their quirky, low-budget project would catapult them into stardom. I was no different, chasing that elusive Swingers or El Mariachi moment.
Fast forward to today, and the biggest hurdle to releasing this footage is consent. Some of the situations I shot were, let’s say, less than straightforward, and I’m not confident all the subjects would be on board with signing release forms. My workaround? Innovating with animation and sound.
I developed a workflow where I alter the pitch of the audio to anonymize voices and layer in sound effects. For the visuals, I’m recreating scenes with animated sequences using MetaHumans and a toon shader in Unreal Engine 5.5. For body animation I pull from Mixamo’s vast library, retargeted onto the MetaHumans with the Mixamo Conversion Tool by Terribilis, which saves a ton of time. Unreal’s new Voice-to-Lipsync feature has been a game-changer, finally delivering believable lip sync without veering into uncanny valley territory.
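If you’re curious what the pitch-shifting step looks like, here’s a minimal sketch in Python using librosa. The file names are placeholders, and the four-semitone shift is just an illustrative amount, not my exact setting:

```python
import librosa
import soundfile as sf

# Hypothetical file names; the real captures live in my project folder.
IN_PATH = "interview_raw.wav"
OUT_PATH = "interview_anonymized.wav"

# Load the mono mix of the digitized Hi8 audio at its native sample rate.
audio, sr = librosa.load(IN_PATH, sr=None, mono=True)

# Shift the pitch down four semitones: enough to mask the voice's
# identity while keeping the speech intelligible for lip sync.
shifted = librosa.effects.pitch_shift(audio, sr=sr, n_steps=-4)

sf.write(OUT_PATH, shifted, sr)
```

In practice you’d tune the shift per speaker and then layer the sound effects back in on top of the anonymized track.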
Here’s how it all comes together: I “shoot” MetaHumans on a virtual greenscreen, mask out the green, and overlay them on backgrounds pulled from the original 90s footage. I don’t need to anonymize myself, since I fully consent to appear.
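The keying step is simple enough to sketch. Here’s roughly what the masking looks like in Python with OpenCV; the file names and HSV bounds are placeholders, though a virtual greenscreen is so uniform that almost any sane range keys cleanly:

```python
import cv2
import numpy as np

# Hypothetical frame paths; in practice these come from rendered
# MetaHuman frames and digitized Hi8 stills.
fg = cv2.imread("metahuman_greenscreen.png")   # rendered character
bg = cv2.imread("hi8_background.png")          # frame from the '95 tape
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

# Key out the green in HSV space. These hue/saturation bounds are
# rough starting values to tighten per shot.
hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))

# Wherever the mask fires, show the background; elsewhere, the character.
composite = np.where(green_mask[..., None] == 255, bg, fg)

cv2.imwrite("composite.png", composite)
```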
The results? A surprisingly solid hybrid style. With MetaHumans, Mixamo animations, and assets from the Unreal FAB store—props, vehicles, and more—I’ve created a blend of the old and the new. The lip sync looks decent, and the animations feel smooth, making this approach a viable way to breathe life into this decades-old project.
That said, there are still kinks to iron out. I noticed ghosting in some shots, where the animations bleed through the background, and I need to split the dialogue into one track per character so the lip-sync AI doesn’t animate both mouths on the same lines (see the sketch below). For now, I’ve tested just a minute of footage, keeping things manageable so I don’t get in too deep before the process is refined.
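One way that split could work, sketched in Python with pydub: slice the mixed scene audio by hand-marked, per-speaker timestamps, then rebuild one otherwise-silent track per character so the lip-sync pass only ever hears one voice. The timestamps and file names here are hypothetical:

```python
from pydub import AudioSegment

# Hypothetical per-speaker timestamps (in milliseconds), marked by hand
# while reviewing the scene; the real edit list lives in my NLE.
SEGMENTS = [
    ("character_a", 0, 3200),
    ("character_b", 3200, 5900),
    ("character_a", 5900, 9100),
]

mix = AudioSegment.from_file("scene_mix.wav")
silence = AudioSegment.silent(duration=len(mix))

# Build one track per character: their own lines at the original
# positions, silence everywhere else.
tracks = {}
for speaker, start, end in SEGMENTS:
    base = tracks.setdefault(speaker, silence)
    tracks[speaker] = base.overlay(mix[start:end], position=start)

for speaker, track in tracks.items():
    track.export(f"{speaker}_only.wav", format="wav")
```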
Stay tuned for V2! This is shaping up to be something different. It was a fun day!