Face Reality

Technical Papers

Monday, 10 August, 3:45 PM - 5:35 PM | Los Angeles Convention Center, Room 150/151
Session Chair: Xin Tong, Microsoft Research Asia


Detailed Spatio-Temporal Reconstruction of Eyelids

The first method for detailed spatio-temporal reconstruction of eyelids, faithfully tracking the lid where visible and creating plausible geometry where hidden. It is the first to capture the complex folding behavior in this important region of the face.

Amit Bermano
Disney Research Zürich, ETH Zürich

Thabo Beeler
Disney Research Zürich

Yeara Kozlov
Disney Research Zürich, ETH Zürich

Derek Bradley
Disney Research Zürich

Bernd Bickel
Institute of Science and Technology Austria, Disney Research Zürich

Markus Gross
Disney Research Zürich, ETH Zürich

Dynamic 3D Avatar Creation from Hand-Held Video Input

A complete pipeline for creating fully rigged and detailed 3D facial avatars from hand-held video. Using a minimalistic acquisition process, the system facilitates a range of new applications in computer animation and consumer-level online communication based on personalized avatars.

Alexandru Ichim
École Polytechnique Fédérale de Lausanne

Sofien Bouaziz
École Polytechnique Fédérale de Lausanne

Mark Pauly
École Polytechnique Fédérale de Lausanne

Driving High-Resolution Facial Scans With Video Performance Capture

A method for animating facial geometry and reflectance from video performances, borrowing geometry and reflectance detail from high-quality static expression scans. Combining multiple optical-flow constraints weighted by confidence maps eliminates drift.

Graham Fyffe
USC Institute for Creative Technologies

Andrew Jones
USC Institute for Creative Technologies

Oleg Alexander
USC Institute for Creative Technologies

Ryosuke Ichikari
National Institute of Advanced Industrial Science and Technology

Paul Debevec
USC Institute for Creative Technologies

Real-Time High-Fidelity Facial Performance Capture

The first method capable of capturing facial performances in real time at high fidelity, including medium-scale details such as wrinkles. The system requires only a single uncalibrated camera as input and is generic, since it does not require any offline training or manual steps for new users.

Chen Cao
Zhejiang University, Disney Research Zürich

Derek Bradley
Disney Research Zürich

Kun Zhou
Zhejiang University

Thabo Beeler
Disney Research Zürich

Facial-Performance-Sensing Head-Mounted Display

The first system that produces compelling 3D facial performance capture through an HMD to enhance communication in virtual reality. The use of signals obtained from flexible electronic materials combined with an RGB-D camera presents a unique, innovative, and ergonomic solution for real-time facial-performance sensing on wearable devices.

Hao Li
University of Southern California

Laura Trutoiu
Oculus VR, LLC

Kyle Olszewski
University of Southern California

Lingyu Wei
University of Southern California

Tristan Trutna
Oculus VR, LLC

Pei-Lun Hsieh
University of Southern California

Aaron Nicholls
Oculus VR, LLC

Chongyang Ma
University of Southern California