Real-Time Live!

Tuesday, 11 August 5:30 PM - 7:15 PM | Los Angeles Convention Center, West Hall B


BabyX

BabyX is an autonomously animated psychobiological simulation of an infant that reacts and learns in real time.

Face-to-face interaction is vital to social learning, but detailed interactive models that capture the richness and subtlety of human expression do not currently exist. BabyX is a step toward this goal. It is an experimental computer-generated psychobiological simulation of an infant that combines models of the facial motor system with theoretical computational models of the basic neural systems involved in interactive behavior and learning. These models are implemented in a novel modeling language for neural systems designed for animation and embodied through advanced 3D computer graphics models of an infant’s face and upper body. The model reacts in real time to visual and auditory input and to its own evolving internal processes, behaving as a dynamic system. The live state of the model that generates the resulting facial behavior can be visualized through graphs and schematics or by exploring the activity mapped to the underlying neuroanatomy.
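
As an illustration only, the following minimal Python sketch shows the kind of dynamic-system behavior described above: a single leaky-integrator neural unit whose activation rises in response to a stimulus and decays when the stimulus disappears. The names, constants, and structure are hypothetical and are not drawn from BabyX's actual modeling language.

    # Illustrative only: a leaky-integrator neural unit of the general kind used in
    # dynamic-system models of behavior; names and constants are hypothetical.
    import numpy as np

    def step(activation, stimulus, dt=0.033, tau=0.3, gain=2.0):
        """Advance one neural unit by one frame: decay toward zero, excited by input."""
        d_act = (-activation + gain * stimulus) / tau
        return float(np.clip(activation + dt * d_act, 0.0, 1.0))

    # Drive a "smile" muscle weight from a fluctuating visual stimulus, frame by frame.
    act = 0.0
    for frame in range(90):                             # ~3 seconds at 30 fps
        stimulus = 1.0 if 30 <= frame < 60 else 0.0     # a face appears, then leaves
        act = step(act, stimulus)
        smile_muscle_weight = act                       # would feed the facial motor model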

Mark Sagar
Auckland Bioengineering Institute, University of Auckland

David Bullivant
ABI Laboratory for Animate Technologies

Paul Robertson
ABI Laboratory for Animate Technologies

Oleg Efimov
ABI Laboratory for Animate Technologies

Khurram Jawed
ABI Laboratory for Animate Technologies

Ratheesh Kalarot
ABI Laboratory for Animate Technologies

Tim Wu
ABI Laboratory for Animate Technologies

Auckland Face Simulator

The Auckland Face Simulator has been developed to create realistic, autonomously interactive, highly expressive human faces for use in applications from perceptual psychology research to new human-computer-interface technologies. The face models can represent the full range of facial muscle actions and can be precisely controlled to allow creation of novel static and dynamic stimuli for perceptual experiments. The models can also be driven by cognitive architectures in an integrated interactive system to create synthetic muscle activations, or by motion capture, animation controls, or any combination of these, achieving a high degree of realism and a full range of expression in real time.
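
As a hedged illustration of the driving scheme described above, the sketch below blends several activation sources (a cognitive model, motion capture, and animation controls) into one set of per-muscle activations. The muscle names, weights, and interface are hypothetical, not the simulator's actual API.

    # Illustrative sketch of combining several drivers into one set of facial
    # muscle activations; muscle names and weights are hypothetical.
    import numpy as np

    MUSCLES = ["frontalis", "orbicularis_oculi", "zygomaticus_major", "orbicularis_oris"]

    def combine(sources, weights):
        """Weighted blend of activation vectors (one value in [0, 1] per muscle)."""
        total = np.zeros(len(MUSCLES))
        for name, acts in sources.items():
            total += weights.get(name, 0.0) * np.asarray(acts)
        return np.clip(total, 0.0, 1.0)

    cognitive = [0.1, 0.0, 0.6, 0.2]   # e.g. output of a behavioral model
    mocap     = [0.0, 0.2, 0.4, 0.1]   # e.g. solved from a performer's face
    keyframe  = [0.3, 0.0, 0.0, 0.0]   # e.g. hand-animated controls
    frame_activations = combine(
        {"cognitive": cognitive, "mocap": mocap, "keyframe": keyframe},
        {"cognitive": 0.5, "mocap": 0.5, "keyframe": 1.0},
    )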

Mark Sagar
Auckland Bioengineering Institute, University of Auckland

David Bullivant
ABI Laboratory for Animate Technologies

Paul Robertson
ABI Laboratory for Animate Technologies

Oleg Efimov
ABI Laboratory for Animate Technologies

Khurram Jawed
ABI Laboratory for Animate Technologies

Ratheesh Kalarot
ABI Laboratory for Animate Technologies

Tim Wu
ABI Laboratory for Animate Technologies

Werner Ollewagen
ABI Laboratory for Animate Technologies

Balloon Burst

This is the first demonstration of a large-scale physical simulation of the interaction among water, a thin elastic surface, and a rope in real time. The simulation allows the user to look at and interact with the bursting water balloon through a 4,000-fps high-speed camera. The target location of the bullet can be chosen with the mouse. The user can also manipulate the camera and switch to “bullet time” mode, in which the simulation is slowed down to 1/8 of the frame rate.

The objects in the scene are represented by 250,000 particles and simulated using the NVIDIA unified solver Flex. An additional set of 512,000 diffuse particles is added as spray and small droplets. The balloon is modeled with a cloth mesh of particles linked with distance constraints, while the water particles are simulated with a position-based fluid method. The bullet is represented by particles grouped with a shape-matching constraint. The demonstration shows results for three different bullet-hit locations, which cause dramatically different balloon explosions and water splashes.
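
The balloon's distance constraints are the simplest example of the position-based approach used throughout the demo. The following toy CPU sketch (not NVIDIA Flex itself) shows the distance-constraint projection step that such a solver iterates each frame.

    # Toy CPU version of position-based distance-constraint projection.
    import numpy as np

    def project_distance_constraint(p0, p1, rest_length, stiffness=1.0):
        """Move two particles so their separation approaches rest_length."""
        delta = p1 - p0
        dist = np.linalg.norm(delta)
        if dist < 1e-9:
            return p0, p1
        correction = stiffness * (dist - rest_length) * (delta / dist) * 0.5
        return p0 + correction, p1 - correction

    # Two particles stretched beyond their rest length of 1.0:
    a = np.array([0.0, 0.0, 0.0])
    b = np.array([1.5, 0.0, 0.0])
    for _ in range(10):                       # solver iterations per frame
        a, b = project_distance_constraint(a, b, rest_length=1.0)
    print(np.linalg.norm(b - a))              # converges to 1.0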

Miles Macklin
NVIDIA Corporation

Nuttapong Chentanez
NVIDIA Corporation

Matthias Mueller
NVIDIA Corporation

Tae-Yong Kim
NVIDIA Corporation

The Blacksmith

“The Blacksmith” was created by a team of three people working full-time: an artist, an animator, and a programmer. External contractors were used only for asset creation, motion capture, and audio.

The film makes extensive use of physically based shading in the Unity 5 game engine, which makes it possible to achieve high visual quality and a cinematic look in real time. It also takes advantage of real-time global illumination, applied in an innovative way in the film’s production: per-camera lighting and shot dressing, typically an offline-CG and traditional-filmmaking process, becomes possible in real time as well. This allowed the authors of the film to control props, characters, and both small and large environmental assets; change their positions between shots; and instantly swap the scene’s entire lighting rig at each shot switch.
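
As a rough, hypothetical sketch of the per-shot workflow described above (not Unity 5's actual API), the snippet below applies a shot's camera, lighting rig, and prop overrides the instant a cut occurs.

    # Illustrative per-shot dressing and lighting: shot names and fields are hypothetical.
    SHOTS = [
        {"camera": "close_up_A",  "lighting_rig": "rig_warm_key", "prop_offsets": {"sword": (0.2, 0, 0)}},
        {"camera": "wide_valley", "lighting_rig": "rig_overcast", "prop_offsets": {}},
    ]

    def apply_shot(shot, scene):
        """Apply one shot's camera, lighting rig, and prop tweaks to the scene state."""
        scene["active_camera"] = shot["camera"]
        scene["lighting_rig"] = shot["lighting_rig"]    # swap the entire rig at the cut
        for prop, offset in shot["prop_offsets"].items():
            scene["props"][prop] = offset               # re-dress props per camera

    scene = {"active_camera": None, "lighting_rig": None, "props": {"sword": (0, 0, 0)}}
    for shot in SHOTS:         # in the real film this is driven by the edit's cut times
        apply_shot(shot, scene)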

The fast iteration and low resource requirements of real-time filmmaking lead to considerably reduced production time and costs for high-quality film projects. In this way, real-time game engines could democratize the filmmaking process.

Veselin Efremov
Unity Technologies

Torbjorn Laedre
Unity Technologies

Disney Infinity 3.0 Physics-Based Animation

A behind-the-scenes glimpse of a revolutionary technology called physically based animation (PBA), which allows unscripted player interaction with video game characters. Attendees get a close look at a fully simulated AT-AT within Disney Infinity 3.0 and learn how its animation is achieved.

Daniel Zimmermann
Studio Gobo Limited

Gioaccino Noris
Studio Gobo Limited

Huw Bowles
Studio Gobo Limited

Stelian Coros
Carnegie Mellon University

Robert W. Sumner
Disney Research Zürich

Jose Villeta
The Walt Disney Company

Fast Teeth Scanning for Advanced Digital Dentistry

In traditional dentistry, a patient has to bite into a block of silicone to create an impression of the teeth. The procedure can be uncomfortable, and it causes gagging for some patients. The impression is typically poured with gypsum to create a positive copy, a process that can damage the accuracy of the model.

This presentation demonstrates a system for creating digital impressions using a 3D scanner to capture the surface geometry of a patient’s teeth. The scanner uses an advanced optical system to extract depth images of the surface. It can deliver up to 25 frames per second with micron-level resolution and can capture both semi-translucent and reflective surfaces. Automatic surface recognition and filtering improves the scanning experience, making it easy to operate the scanner in a patient’s mouth.
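
As a hedged illustration of the per-frame processing such a scanner pipeline might perform (the intrinsics, thresholds, and function below are hypothetical, not 3Shape's implementation), this sketch unprojects a single depth frame into camera-space surface points with a simple validity filter.

    # Illustrative only: unproject one depth frame (in mm) into 3D surface points.
    import numpy as np

    def depth_to_points(depth_mm, fx, fy, cx, cy, near=5.0, far=30.0):
        """Return an (N, 3) array of camera-space points from a depth image."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = (depth_mm > near) & (depth_mm < far)   # drop dropouts / out-of-range pixels
        z = depth_mm[valid]
        x = (u[valid] - cx) * z / fx
        y = (v[valid] - cy) * z / fy
        return np.stack([x, y, z], axis=-1)

    depth = np.full((480, 640), 15.0)                  # stand-in for one scanner frame
    points = depth_to_points(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)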

The system is more comfortable for patients, and it is more accurate than traditional methods. It applies the latest research in 3D scanning to provide reconstructions with very high accuracy in real time.

Peter Dahl Ejby Jensen
3Shape A/S

Michael Bing
3Shape A/S

Jens Christian Jørgensen
3Shape A/S

Sverker Rasmuson
3Shape A/S

Lene Lillemark
3Shape A/S

Morten Ryde Holm-Hansen
3Shape A/S

Henrik Öjelund
3Shape A/S

Interactive Performance: The Inheritance

This project combines a live dancer with real-time projected stereoscopic video as the performance backdrop. The dancer wears an IMU-based wireless motion capture system that streams his performance data to a workstation, which generates a real-time, interactive stereoscopic CGI projection of his body movements.

Hsin-Chien Huang
Xsens Technologies B.V.

My Digital Face

This project puts the capability of producing a photorealistic face into the hands of nearly anyone, without an expensive rig, special hardware, or 3D expertise.

Using a single commodity depth sensor (Intel RealSense) and a laptop computer, the research team captures several scans of a single face with different expressions. From those scans, a near-automatic pipeline creates a set of blendshapes, which are puppeteered in real time using tracking software. An important stage of the blendshape pipeline is automated to identify and create correspondences between the geometry and textures of different scans, greatly reducing the amount of texture drifting between blendshapes. To expand the amount of control beyond individual shapes, the system can automatically include blendshape masks across various regions of the face in order to mix effects from different parts, resulting in independent control over blinks and lip shapes.
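
The masked-blendshape idea can be illustrated with a small, hypothetical sketch (not the team's actual pipeline): per-region masks let a blink and a lip shape be mixed with no cross-talk between regions.

    # Illustrative masked blendshapes on a tiny hypothetical mesh (4 vertices, xyz).
    import numpy as np

    neutral    = np.zeros((4, 3))
    blink      = neutral + np.array([0, -0.1, 0])         # delta that closes the eyes
    mouth_open = neutral + np.array([0, -0.3, 0])         # delta that opens the mouth

    eye_mask   = np.array([1.0, 1.0, 0.0, 0.0])[:, None]  # first two verts = eye region
    mouth_mask = np.array([0.0, 0.0, 1.0, 1.0])[:, None]  # last two verts = mouth region

    def blend(neutral, shapes, weights, masks):
        """neutral + sum of masked, weighted blendshape deltas."""
        out = neutral.copy()
        for shape, w, m in zip(shapes, weights, masks):
            out += w * m * (shape - neutral)
        return out

    # Blink fully while opening the mouth halfway, independently per region:
    mesh = blend(neutral, [blink, mouth_open], [1.0, 0.5], [eye_mask, mouth_mask])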

The results are photorealistic and sufficiently representative of the capture subjects, so they could be used in social media, video conferencing, business communications, and other places where an accurate representation (as opposed to an artistic or stylized one) is desired or appropriate.

During the demo, the team scans two people who then puppeteer their own faces in real time.

Dan Casas
USC Institute for Creative Technologies

Oleg Alexander
USC Institute for Creative Technologies

Andrew Feng
USC Institute for Creative Technologies

Graham Fyffe
USC Institute for Creative Technologies

Ryosuke Ichikari
USC Institute for Creative Technologies

Paul Debevec
USC Institute for Creative Technologies

Ruizhe Wang
University of Southern California

Evan Suma
USC Institute for Creative Technologies

Ari Shapiro
USC Institute for Creative Technologies

Pushing Photorealism in "A Boy and His Kite"

The 100-square-mile setting for "A Boy and His Kite" is based on the Isle of Skye in Scotland and populated using photogrammetric assets: photographs of real-world objects are used to reconstruct textured 3D models. The challenge of rendering a large, photorealistic world led to development of several new rendering features and tools. A distance field representation of the scene provides efficient visibility queries on the GPU in the form of cone traces. Efficient cone tracing enables a number of high-quality dynamic-lighting features used in the demo: detailed soft shadows for the sun up to one kilometer from the camera, ambient occlusion to shadow the sky, and single-bounce global illumination from the terrain. Because the lighting is dynamic, the time-of-day can be changed at runtime, and artists can modify the scene and see the results instantly. Physically based depth-of-field and motion-blur effects help to create images that more closely resemble photographs. New foliage tools to procedurally populate the world allow artists to quickly create a natural landscape filled with millions of trees, bushes, rocks, and grass.
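
As an illustration of cone tracing through a distance field (a toy CPU sphere scene, not Unreal Engine 4's GPU implementation), the sketch below estimates a soft shadow by tracking how closely a shadow ray grazes the surface relative to the cone's radius.

    # Toy cone trace through a signed distance field for a soft-shadow estimate.
    import numpy as np

    def sphere_sdf(p, center=np.array([0.0, 1.0, 0.0]), radius=1.0):
        return np.linalg.norm(p - center) - radius

    def soft_shadow(origin, light_dir, cone_angle=0.1, t_max=50.0):
        """March toward the light; the closest miss relative to the cone radius darkens the result."""
        occlusion = 1.0
        t = 0.1
        while t < t_max:
            d = sphere_sdf(origin + t * light_dir)
            occlusion = min(occlusion, d / (cone_angle * t))   # penumbra estimate
            if d < 1e-4:
                return 0.0                                     # fully shadowed
            t += max(d, 0.01)
        return max(occlusion, 0.0)

    # A ground point just beside the sphere receives a soft partial shadow:
    s = soft_shadow(np.array([1.05, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))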

The interactive demo and cinematic are rendered at 30Hz in Unreal Engine 4 on a desktop PC.

Nick Penwarden
Epic Games

Gavin Moran
Epic Games

Real-Time Cinematic Shot Lighting in The Order: 1886

The rendering and content-creation pipelines from The Order: 1886 were built from the ground up to deliver a seamless, real-time experience across both gameplay and cinematics, and to achieve, in real time, a visual fidelity generally associated with pre-rendered CG. A key ingredient in delivering on these ambitions was a system of camera-based shot lighting developed for shot-locked cinematic sequences, each of which uses a highly directed and polished dynamic lighting setup tailored precisely to its camera cut.

This demonstration shows a cinematic sequence running live on a PlayStation 4, then unlocks the camera in a debug build to show the same sequence again from behind the scenes, as the engine rapidly sequences composed groups of lights drawn from a collection of hundreds or even thousands of dynamic lights in the scene, each group tailored exactly to the actor’s performance and camera framing in each shot. The end result in the game sequence is a highly polished, cohesive cinematic that seamlessly blends into gameplay and, from the perspective of the debug camera, looks quite ridiculous.
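
As a rough, hypothetical sketch of camera-based shot lighting (not Ready at Dawn's engine), the snippet below enables only the small group of lights composed for the current cut, out of a much larger authored pool.

    # Illustrative per-shot light sequencing; shot and light names are hypothetical.
    LIGHT_POOL = {f"light_{i:04d}": {"enabled": False} for i in range(2000)}

    SHOT_GROUPS = {                       # each camera cut gets its own composed group
        "shot_012_closeup": ["light_0003", "light_0117", "light_0840"],
        "shot_013_reverse": ["light_0009", "light_0117", "light_1502", "light_1999"],
    }

    def sequence_lights(active_shot):
        """Enable only the lights composed for this cut; everything else stays off."""
        wanted = set(SHOT_GROUPS[active_shot])
        for name, light in LIGHT_POOL.items():
            light["enabled"] = name in wanted

    sequence_lights("shot_012_closeup")   # called whenever the cinematic cuts cameras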

Nathan Phail-Liff
Ready at Dawn Studios

Dushyant Agarwal
Ready at Dawn Studios