Transfer & Capture

Technical Papers

Wednesday, 12 August, 9:00 AM - 10:30 AM | Los Angeles Convention Center, Room 152
Session Chair: Aseem Agarwala, Adobe Systems Incorporated


LazyFluids: Appearance Transfer for Fluid Animations

A novel flow-guided texture synthesis algorithm that preserves the appearance of realistic fluid exemplars, including rich boundary effects. It transfers the desired look from a static image or video onto an existing fluid animation.

Ondřej Jamriška
Czech Technical University in Prague

Jakub Fišer
Czech Technical University in Prague

Paul Asente
Adobe Research

Jingwan Lu
Adobe Research

Eli Shechtman
Adobe Research

Daniel Sýkora
Czech Technical University in Prague

Fluid-Volume Modeling From Sparse Multi-View Images by Appearance Transfer

This method reconstructs volume sequences of fluids from sparse multi-view images (for example, a single-view input or a pair of front- and side-view inputs). To create production-ready fluid animations, the paper also proposes a method for rendering and editing the reconstructed fluids with a commercially available fluid simulator.

Makoto Okabe
The University of Electro-Communications, JST CREST

Yoshinori Dobashi
Hokkaido University, JST CREST

Ken Anjyo
OLM Digital, Inc., JST CREST

Rikio Onai
The University of Electro-Communications

Garment Replacement in Monocular Video Sequences

A full processing pipeline that realistically augments monocular video with complex, animated three-dimensional virtual objects, using virtual garments as an example.

Lorenz Rogge
Gesellschaft für Optische Messtechnik

Felix Klose
Technische Universität Braunschweig

Michael Stengel
Technische Universität Braunschweig

Martin Eisemann
Fachhochschule Köln

Marcus A. Magnor
Technische Universität Braunschweig

Deformation Capture and Modeling of Soft Objects

This data-driven method for deformation capture and modeling of general soft objects enables realistic motion reconstruction, as well as synthesis of virtual soft objects that respond to user interaction.

Bin Wang
Shenzhen Institute of Advanced Technology, National University of Singapore

Longhua Wu
Shenzhen Institute of Advanced Technology

Kangkang Yin
National University of Singapore

Uri Ascher
The University of British Columbia

Libin Liu
The University of British Columbia

Hui Huang
Shenzhen Institute of Advanced Technology