As the above video shows, bringing an image to life requires the aforementioned driving video, which essentially acts as the input. Mapping the driving video’s facial movements onto those in the target image lets the researchers create a “coarse” animation, which they then fill in with details that are “hidden” in the original image. Probably the best example of those hidden details is what’s inside your mouth: seeing your teeth when you smile. Those are filled in, along with other “fine detail” fixes like removing unnatural shadows, and the result is a still photo that can make the same faces as those in the driving video.
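As a loose illustration only (not the paper’s actual method), the two-stage idea above can be sketched in a few lines of NumPy: a coarse warp moves the source pixels according to the driving motion, which leaves “holes” where the motion reveals regions the original photo never showed, and a second step fills those holes in. The constant flow field and the single fill value here are simplifications; a real model predicts dense motion and hallucinates plausible detail (teeth, shadows) instead.

```python
import numpy as np

def coarse_warp(source, flow):
    # Backward-warp a grayscale image: output[y, x] = source[y - dy, x - dx].
    # Lookups that fall outside the frame become NaN "holes" -- the regions
    # the driving motion reveals but the source image never contained.
    h, w = source.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy, sx = ys - flow[0], xs - flow[1]
    out = np.full((h, w), np.nan)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out[valid] = source[sy[valid], sx[valid]]
    return out

def fill_holes(frame, fill_value):
    # Stand-in for the learned detail-synthesis step: paint the revealed
    # regions with a plausible value instead of leaving them empty.
    filled = frame.copy()
    filled[np.isnan(filled)] = fill_value
    return filled

src = np.arange(9, dtype=float).reshape(3, 3)
warped = coarse_warp(src, (0, 1))   # shift everything one pixel right
frame = fill_holes(warped, 0.0)     # fill the revealed left column
```

The point of the sketch is only the division of labor: the warp handles motion cheaply, and a separate synthesis step is responsible for everything the motion uncovers.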
Since Facebook was involved with this project, one real-world application would be letting users animate their profile pictures to react to things they see on the social network. Facebook added more reactions to posts than the simple “thumbs up” a while ago, and the researchers tested making faces respond with happiness, anger or surprise, much like Facebook’s post reaction options. The results actually look pretty good, although they do feel slightly unnatural. But with more work and automation, something like this could end up on our Facebook profiles sometime down the line.