My Digital Face

Real-Time Live!

This project puts the ability to produce a photorealistic digital face into the hands of nearly anyone, without an expensive rig, special hardware, or 3D expertise.

Using a single commodity depth sensor (Intel RealSense) and a laptop computer, the research team captures several scans of a single face in different expressions. From those scans, a near-automatic pipeline creates a set of blendshapes, which are puppeteered in real time using tracking software. A key stage of the pipeline automatically establishes correspondences between the geometry and textures of the different scans, greatly reducing texture drift between blendshapes. To extend control beyond individual shapes, the system automatically generates blendshape masks over different regions of the face, so that effects from separate regions can be mixed, enabling independent control of, for example, blinks and lip shapes.
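As a rough illustration of masked blendshape mixing (a minimal sketch only, not the team's actual pipeline; the function name, array layouts, and mask handling below are assumptions), per-vertex region masks can confine each shape's effect to one part of the face, which is what makes a blink weight and a lip-shape weight independent:

    import numpy as np

    def blend_face(neutral, deltas, weights, masks):
        # Illustrative masked linear blendshape evaluation (assumed layout):
        #   neutral: (V, 3) float array of neutral-pose vertex positions
        #   deltas:  dict name -> (V, 3) offsets (expression minus neutral)
        #   weights: dict name -> float in [0, 1], e.g. from a face tracker
        #   masks:   dict name -> (V,) per-vertex mask in [0, 1]
        result = neutral.copy()
        for name, delta in deltas.items():
            w = weights.get(name, 0.0)
            m = masks.get(name, np.ones(len(neutral)))  # default: whole face
            # The mask keeps each shape's influence inside its own region,
            # so eye and mouth shapes can be driven without interfering.
            result += w * m[:, None] * delta
        return result

In the real system the region masks would be derived over facial areas such as the eyes and mouth, and the per-frame weights would come from the tracking software rather than being set by hand.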

The results are photorealistic and faithful enough to the captured subjects that they could be used in social media, video conferencing, business communications, and other settings where an accurate representation (as opposed to an artistic or stylized one) is desired or appropriate.

During the demo, the team scans two people, who then puppeteer their own digital faces in real time.

Dan Casas
USC Institute for Creative Technologies

Oleg Alexander
USC Institute for Creative Technologies

Andrew Feng
USC Institute for Creative Technologies

Graham Fyffe
USC Institute for Creative Technologies

Ryosuke Ichikari
USC Institute for Creative Technologies

Paul Debevec
USC Institute for Creative Technologies

Ruizhe Wang
University of Southern California

Evan Suma
USC Institute for Creative Technologies

Ari Shapiro
USC Institute for Creative Technologies