Familiar with Cartoon Donald Trump? A U of T computer scientist helped make it possible
If you tune into late-night TV, you’ve probably caught a cartoon version of U.S. President Donald Trump on The Late Show with Stephen Colbert.
The cartoon was created by the Late Show’s lead animator Tim Luecke and senior digital producer and writer Rob Dubbin. But the animation itself was made possible with Adobe’s Character Animator software – software that integrated research by Alec Jacobson, an assistant professor in the University of Toronto’s department of computer science in the Faculty of Arts & Science.
“Real-time character animation – almost like puppeteering – is the kind of thing our research is enabling that couldn’t be done in the past,” says Jacobson.
Traditionally, he says, animating a character is extremely tedious because the artist has to painstakingly set the character’s pose for every frame, or every few frames. Seconds of film can require hours and hours of work.
“My research also looks at reducing the workload of the individual artist and so this culminates in some new and exciting ways of controlling characters.”
Jacobson is a recent recipient of the Eurographics Young Researcher Award, which the European Association for Computer Graphics awards to two young researchers who have made a significant contribution to the graphics field.
U of T’s Nina Haikara talked with Jacobson (below) about his work in computer graphics and Cartoon Donald Trump.
When we see an animation on-screen we often don’t think of the math enabling it. How are geometric methods used?
Nearly all animated movies made today use computer graphics. The characters’ shapes are stored on the computer as geometric surfaces and their movements are controlled by geometric curves.
By understanding the mathematics of curves and surfaces, computer graphics makes it possible to animate significantly more complex characters. Computer graphics is truly a playground for applied mathematicians. It draws on areas of mathematics ranging from geometry and calculus to group theory.
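As a small illustration of the kind of geometric curve Jacobson describes (this is a generic sketch, not code from his research or Adobe’s software), a character’s motion is often shaped by cubic Bézier curves, which blend four control points into a smooth path:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a 2D cubic Bezier curve at parameter t in [0, 1].

    The four control points are hypothetical; an animator would place
    them to shape a motion path or an easing curve.
    """
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# The curve passes exactly through the first and last control points,
# while the middle two pull it into shape.
pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
print(cubic_bezier(*pts, 0.0))  # (0.0, 0.0) -- starts at the first point
print(cubic_bezier(*pts, 1.0))  # (4.0, 0.0) -- ends at the last point
print(cubic_bezier(*pts, 0.5))  # the midpoint of the arc
```

Evaluating such a curve at many values of `t` gives the in-between positions that would otherwise have to be posed by hand, frame by frame.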
What led to your research being implemented by Adobe Character Animator?
My research began during my internship at Adobe during my PhD studies. I was deeply grateful for the opportunity and see it as a turning point in my career. We identified that a variety of methods for controlling 2D and 3D characters could be unified using a new mathematical framework.
Preparing a character for animation used to be a tedious, manual process for the artist. Our initial work replaced it with a fast, fully automatic computer program.
Building on top of this research, Adobe has done amazing work creating a practical tool for live cartoon puppetry. The key is automating tasks that are otherwise tedious and time-consuming – animation that previously could only be done by large visual effects studios. In fact, Industrial Light & Magic and Pixar have also incorporated our work into their visual effects software.
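To give a flavour of how per-vertex weights let a skeleton drive a character (a generic linear blend skinning sketch, not the actual method from Jacobson’s papers or Adobe’s tool – his research concerns computing such weights automatically rather than having an artist paint them):

```python
import math

def rotate(point, angle):
    """Rotate a 2D point about the origin by the given angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * point[0] - s * point[1], s * point[0] + c * point[1])

def skin_vertex(vertex, bone_transforms, weights):
    """Deform one surface vertex as a weighted blend of bone transforms.

    Each bone transform is a hypothetical (rotation, translation) pair;
    the weights say how strongly each bone influences this vertex.
    """
    x = y = 0.0
    for (angle, translation), w in zip(bone_transforms, weights):
        rx, ry = rotate(vertex, angle)
        x += w * (rx + translation[0])
        y += w * (ry + translation[1])
    return (x, y)

# Two bones: one at rest, one rotated 90 degrees. A vertex weighted
# entirely to the resting bone stays put; shifting its weight toward
# the rotated bone sweeps it along.
bones = [(0.0, (0.0, 0.0)), (math.pi / 2, (0.0, 0.0))]
print(skin_vertex((1.0, 0.0), bones, [1.0, 0.0]))  # stays at (1.0, 0.0)
print(skin_vertex((1.0, 0.0), bones, [0.5, 0.5]))  # halfway between the two poses
```

Computing good weights for every vertex of a detailed character is exactly the step that used to demand hours of manual tuning.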
How is Cartoon Donald Trump able to interact with Colbert?
The Cartoon Donald Trump is controlled by an actor off-screen. The actor’s speech and motions are mapped directly, in real time, onto the cartoon via Adobe’s software. It’s possible that Stephen Colbert is also seeing that cartoon in real time on an off-screen monitor and responding accordingly. Any live improvisation is immediately incorporated into the animation.
Julie Andrews and Dick Van Dyke danced with animated penguins in the film Mary Poppins. How is this animated work different from what’s possible now?
In the past, cartoon characters have been added to live-action films such as Mary Poppins or Who Framed Roger Rabbit, but these were placed into the scene in post-production, so improvisation would be difficult. Traditional hand-drawn or computer-generated animation takes days, making a live broadcast impossible. In contrast, our research has helped enable Adobe’s software to animate characters instantaneously.
Are there other artists’ tools you’ve worked on?
In my animation research, I grew increasingly frustrated using the mouse and keyboard to control the limbs of 3D characters. I just wished that I could reach into the screen and directly manipulate the character’s body. Working with colleagues at ETH Zurich and New York University, we invented a physical input device to assist character animation. Using our device, a novice user can construct a 3D skeleton out of Lego-like pieces. Each piece has a sensor to map its movement onto the virtual character. This research has the potential to help anyone – even children – animate 3D characters.
What’s next in your research?
3D printing and virtual reality have been in the news a lot lately. Both of these domains have tantalizing possibilities to change our daily lives. However, gathering and controlling geometric data is still very difficult and error-prone. My current research aims at making better tools for everyday people to acquire and interact with 3D geometry, and at making these tools robust to the noise and uncertainty found in geometric data gathered “in the wild” – that is, the everyday world around us.