SIGGRAPH 2020: A Peek Behind LAIKA’s Stop Motion Magic

Oscar-nominated stop-motion film studio discusses how AI and Machine Learning accelerate rotoscoping

At SIGGRAPH 2020, LAIKA discusses the use of AI to accelerate rotoscoping in stop-motion animation. Image courtesy of LAIKA / Intel

When you hear the words stop motion, you think of a small army of artists manipulating clay dolls and figurines against miniature backdrops to create the illusion of motion. That's how Ray Harryhausen breathed life into the mythical monsters in Clash of the Titans (1981) and Nick Park told the story of a quirky inventor and his canine companion in the Wallace and Gromit shorts, beginning with A Grand Day Out (1989).

This week, at SIGGRAPH 2020 Virtual, the talents behind the animation studio LAIKA raise the curtain to reveal what it takes to create modern stop-motion classics such as Coraline, The Boxtrolls, Kubo and the Two Strings, and Missing Link. One of the unsung heroes behind the scenes turns out to be AI (artificial intelligence), powered by Intel CPUs.

Stop motion animation studio LAIKA uses Intel workstations and a render farm to speed up rotoscoping using AI / machine learning. Image courtesy of LAIKA/Intel.

AI-Powered Rotoscoping

Part of stop-motion filmmaking is rotoscoping—the process of cleaning up unwanted artifacts left in the manually shot stop-motion frames. The studio uses Intel CPU-powered workstations, with a render farm for reinforcement. It also enlisted the help of Intel's Applied Machine Learning team to speed up the labor-intensive rotoscoping process.

“It is a task that consists of connecting a series of points with curves to create a shape that identifies a portion of a film frame to alter in some way,” explained Steve Emerson, VFX Supervisor at LAIKA. 
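As a rough illustration of the kind of shape Emerson describes, the sketch below (plain Python, purely illustrative and not LAIKA's actual tooling) connects a handful of landmark points into a closed outline using cubic Bezier segments whose inner handles come from per-point tangents:

```python
# Illustrative sketch: a roto shape as a closed outline built from
# control points connected by cubic Bezier segments (point + tangents).
# Names and structure here are assumptions, not LAIKA's software.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic Bezier segment at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def roto_shape(points, tangents, samples_per_segment=16):
    """Connect each point to the next (wrapping around) with a cubic
    Bezier whose handles come from the outgoing/incoming tangents."""
    outline = []
    n = len(points)
    for i in range(n):
        p0, p3 = points[i], points[(i + 1) % n]
        t0, t1 = tangents[i], tangents[(i + 1) % n]
        p1 = (p0[0] + t0[0], p0[1] + t0[1])  # outgoing handle
        p2 = (p3[0] - t1[0], p3[1] - t1[1])  # incoming handle
        for s in range(samples_per_segment):
            outline.append(cubic_bezier(p0, p1, p2, p3, s / samples_per_segment))
    return outline

# A simple four-point matte region with symmetric tangents:
pts = [(0, 0), (10, 0), (10, 10), (0, 10)]
tans = [(3, 0), (0, 3), (-3, 0), (0, -3)]
outline = roto_shape(pts, tans)
```

Artists adjust those points and tangents frame by frame, which is exactly what makes the task tedious enough to be worth automating.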

“When rotoscoping, artists create shapes in arbitrary ways. But to make the task trainable [for automation], the points and tangents that make up each shape needed to be consistent. The team began by creating a set of persistent landmark points and tangents on the puppet's faces that would drive the shapes and ensure consistency. This information, along with the associated images, was augmented with additional data, including frames that featured randomized backgrounds, color values, warps and blurs. Furthermore, because the puppet's facial performances are created digitally prior to being 3D printed, they were also able to leverage data from the object files and CG renders of various angles and expressions,” he added.
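The augmentation step Emerson describes can be sketched roughly as follows. The specific perturbations and parameters below are illustrative assumptions, not LAIKA's pipeline; a horizontal flip and a box blur stand in for the warps and blurs mentioned above, and the landmark points are transformed alongside the image so the training labels stay valid:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(frame, landmarks):
    """Randomly perturb a training frame (color scaling, flip, blur)
    while keeping its (x, y) landmark labels in sync with the pixels."""
    h, w, _ = frame.shape
    out = frame.astype(np.float32)
    pts = landmarks.astype(np.float32).copy()

    # Random per-channel color scaling.
    out *= rng.uniform(0.8, 1.2, size=3)

    # Random horizontal flip; landmark x-coordinates mirror with it.
    if rng.random() < 0.5:
        out = out[:, ::-1]
        pts[:, 0] = (w - 1) - pts[:, 0]

    # Cheap 3x3 box blur as a stand-in for lens blur.
    if rng.random() < 0.5:
        padded = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
        out = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0

    return np.clip(out, 0, 255).astype(np.uint8), pts

frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
landmarks = np.array([[10.0, 20.0], [40.0, 50.0]])
aug_frame, aug_pts = augment(frame, landmarks)
```

Each original frame can be augmented many times this way, multiplying the effective size of the training set without any extra puppet photography.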

Deep learning neural networks analyze the sample footage provided, scan the frames, and identify the key points on a puppet's face, explained Jeff Stringer, Director of Production Technology at LAIKA, in a video presentation. The networks are optimized to run on Intel's recently announced open-standard oneAPI libraries. (For more on Intel's oneAPI toolkit, read the news here.)
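In spirit, such a network maps a frame to a set of (x, y) landmark coordinates. The toy forward pass below (NumPy, with assumed sizes such as K = 24 landmarks and a 32x32 input) only gestures at that idea; LAIKA's actual networks are deep models trained on the augmented data described above, and nothing here reflects their architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a landmark-detection network: one hidden layer that
# maps a downsampled grayscale frame to (x, y) coordinates for K points.
K = 24                                            # assumed landmark count
W1 = rng.normal(0, 0.01, size=(32 * 32, 64))      # input -> hidden weights
W2 = rng.normal(0, 0.01, size=(64, K * 2))        # hidden -> coordinates

def predict_landmarks(frame_32x32):
    """Forward pass only; training (e.g., regressing against the
    persistent landmark points) is omitted for brevity."""
    h = np.maximum(frame_32x32.reshape(-1) @ W1, 0.0)  # ReLU hidden layer
    return (h @ W2).reshape(K, 2)                      # K (x, y) pairs

coords = predict_landmarks(rng.random((32, 32)))
```

Once the landmarks are predicted, the roto shapes they drive can be generated automatically instead of being hand-placed on every frame.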

LAIKA employs AI-powered neural networks to automatically scan the frames and identify areas that need digital cleanup in rotoscoping. Image courtesy of LAIKA / Intel

CPU-powered machine learning and neural networks are the unsung heroes behind stop-motion films by LAIKA. Image courtesy of LAIKA / Intel

AI for the Objective, Humans for the Subjective

Though an early prototype of the AI-powered rotoscoping tool was developed during the making of Missing Link (2019), the studio didn't get the chance to use it. But the tool is expected to become part of the studio's upcoming animated features, according to Emerson.

“Based on our initial tests, we’re seeing a 50% time savings for rotoscoping tasks, which we anticipate could translate to an overall savings of up to 30% on puppet cosmetic work,” estimated Emerson.
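How a 50% rotoscoping speedup becomes roughly a 30% overall saving depends on how much of puppet cosmetic work is rotoscoping; the share assumed below is purely illustrative, not a figure LAIKA has stated:

```python
# Hypothetical breakdown: assume rotoscoping accounts for 60% of the
# hours in puppet cosmetic work (an illustrative figure only).
roto_share = 0.60
roto_speedup = 0.50   # the reported 50% time savings on rotoscoping tasks

overall_savings = roto_share * roto_speedup
print(f"Overall cosmetic-work savings: {overall_savings:.0%}")  # prints 30%
```

Any real figure would depend on the actual mix of tasks in a given production, which varies from film to film.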

Emerson divides the rotoscoping work involved in stop-motion filmmaking into two classes: objective and subjective. “The objective tasks are things like removing a rig from a shot or cleaning up a seam line on a character’s face. When the work is done, there’s no subjectivity—either the rig is there or it’s not. The seam line is visible, or it’s not,” he said.

The subjective tasks are more artistic in nature: they make a scene more aesthetically pleasing or visually innovative. If the studio spends too much time and effort on the objective tasks, it runs up against the two biggest enemies of movie making—time and budget.

“Streamlining or automating objective tasks through AI will enable us to move more resources to the subjective side and we’ll be able to invest more time to create even more compelling content,” said Emerson.

In the future, the tool may be applied more widely to other areas of production, “from character and object match-moving [inserting digital objects into live film footage] to full-body rotoscoping to image-denoising, and color correction,” Emerson said. “It truly feels like we’re stepping into an entirely new era of filmmaking. I’m excited to see what the future holds.”

For the on-demand SIGGRAPH 2020 presentation by LAIKA, go here.

About the Author

Kenneth Wong
Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected].
