Character Fashion & Style

Technical Papers

Wednesday, 12 August, 3:45 PM - 5:35 PM | Los Angeles Convention Center, Room 150/151
Session Chair: Aaron Hertzmann, Adobe Research


Animating Human Dressing

This technique synthesizes human dressing by controlling a human character as it puts on an article of simulated clothing.

Alexander Clegg
Georgia Institute of Technology

Jie Tan
Georgia Institute of Technology

Greg Turk
Georgia Institute of Technology

Karen Liu
Georgia Institute of Technology

A Perceptual Control Space for Garment Simulation

This perceptual control space for cloth simulation works with any physical simulator. It provides intuitive, art-directable control over simulation behavior through a learned mapping from common cloth descriptors (flowiness, softness, etc.) to the parameters of the simulation.
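To give a flavor of the idea, here is a deliberately minimal sketch of a descriptor-to-parameter mapping: a one-dimensional least-squares fit from a single perceptual rating to a single simulator setting. This is not the paper's model; the training data, the descriptor name, and the parameter name are all hypothetical.

```python
# Hedged sketch (not the paper's method): fit a linear map from a perceptual
# descriptor ("flowiness") to a simulator parameter (bend stiffness).
# All training data below is hypothetical.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a * x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical ratings: flowier swatches came from lower bend stiffness.
flowiness = [0.1, 0.3, 0.6, 0.9]
bend_stiffness = [9.0, 7.0, 4.0, 1.0]

a, b = fit_linear(flowiness, bend_stiffness)

def stiffness_for(flow):
    """Map a desired flowiness rating to a bend-stiffness setting."""
    return a * flow + b
```

An artist would then dial "flowiness" directly and let the fitted map pick the stiffness; the paper learns a richer, multi-descriptor version of this mapping from perceptual experiments.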

Leonid Sigal
Disney Research

Moshe Mahler
Disney Research

Spencer Diaz
Disney Research

Kyna McIntosh
Disney Research

Elizabeth Carter
Disney Research

Timothy Richards
The Walt Disney Company

Jessica Hodgins
Disney Research

Space-Time Sketching of Character Animation

This paper presents a space-time curve that allows animators to control a character's shape over time, together with its trajectory, using a single stroke. Different space-time curves can be composed, producing rich motion, including squash and stretch, from only a few strokes.
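As a toy illustration of composing space-time curves (a simplification, not the paper's formulation), one can model a curve as a function of normalized time returning position plus a squash factor, and compose curves by summing displacements and multiplying squash. The specific curves below are invented for the example.

```python
# Hedged sketch: a space-time curve as t in [0, 1] -> (x, y, squash),
# with composition by summing displacements and multiplying squash factors.
import math

def hop(t):
    """A single hop: forward motion, a parabolic arc, squash near landing."""
    x = t                            # forward progress
    y = 4.0 * t * (1.0 - t)          # parabolic height
    squash = 1.0 - 0.5 * max(0.0, t - 0.8) / 0.2  # compress at landing
    return x, y, squash

def wobble(t):
    """A secondary curve adding a small lateral oscillation."""
    return 0.05 * math.sin(6.0 * math.pi * t), 0.0, 1.0

def compose(a, b):
    """Compose two space-time curves into one."""
    def c(t):
        ax, ay, asq = a(t)
        bx, by, bsq = b(t)
        return ax + bx, ay + by, asq * bsq
    return c

motion = compose(hop, wobble)
x, y, s = motion(0.5)  # sample the composed motion mid-flight
```

Sampling `motion(t)` per frame yields trajectory and deformation together, which is the spirit of driving both from a single stroke.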

Martin Guay
INRIA, Université de Grenoble

Rémi Ronfard
INRIA, Université de Grenoble

Michael Gleicher
University of Wisconsin

Marie-Paule Cani
Université de Grenoble, INRIA

Real-Time Style Transfer for Unlabeled Heterogeneous Human Motion

This paper presents a novel solution for real-time generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles. The key idea is an online learning algorithm that constructs a series of local mixtures of autoregressive models to capture the complex relationships between styles of motion.
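A drastically simplified stand-in for the paper's local mixtures of autoregressive models is a single scalar first-order autoregressive model fit to one joint angle: estimate x[t+1] = a * x[t] + b from an observed sequence, then roll it forward to synthesize frames in that style. The "style" signal below is hypothetical.

```python
# Hedged sketch: a scalar AR(1) model as a stand-in for the paper's
# local mixtures of autoregressive models. Data is hypothetical.

def fit_ar1(signal):
    """Least-squares fit of x[t+1] = a * x[t] + b over a sequence."""
    xs, ys = signal[:-1], signal[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def synthesize(a, b, x0, frames):
    """Roll the fitted autoregressive model forward from a start value."""
    out = [x0]
    for _ in range(frames):
        out.append(a * out[-1] + b)
    return out

# Hypothetical "style": a joint angle decaying toward a rest value of 1.0.
style = [3.0]
for _ in range(30):
    style.append(0.8 * style[-1] + 0.2)  # generated with a = 0.8, b = 0.2

a, b = fit_ar1(style)
continuation = synthesize(a, b, style[-1], 10)  # new frames in that style
```

The paper's online algorithm fits many such models locally and mixes them, which is what lets it handle unlabeled, heterogeneous input in real time.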

Shihong Xia
Institute of Computing Technology, Chinese Academy of Sciences

Congyi Wang
Institute of Computing Technology, Chinese Academy of Sciences

Jinxiang Chai
Texas A&M University

Jessica Hodgins
Carnegie Mellon University

Dyna: A Model of Dynamic Human Shape in Motion

Dyna is a model of realistic human soft-tissue deformation that is learned from 40,000 body scans of people in motion using a novel 4D scanner. It relates body shape and motion to shape deformations, producing realistic soft-tissue animations that can be varied and applied to new bodies and stylized characters.
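For intuition only, a crude procedural stand-in for a motion-to-deformation model is a damped spring whose soft-tissue offset is excited by body acceleration; Dyna instead learns this relationship from 4D scan data. All constants below are invented.

```python
# Hedged sketch: soft-tissue "jiggle" as a damped spring driven by body
# acceleration, a toy stand-in for Dyna's learned deformation model.

def jiggle(accels, stiffness=30.0, damping=4.0, dt=1.0 / 30.0):
    """Integrate a per-vertex soft-tissue offset excited by acceleration."""
    offset, vel = 0.0, 0.0
    out = []
    for a in accels:
        # Spring pulls the offset back to rest; body acceleration excites it.
        vel += (-stiffness * offset - damping * vel - a) * dt
        offset += vel * dt
        out.append(offset)
    return out

# A sudden stop (one frame of acceleration) yields a decaying oscillation.
trace = jiggle([5.0] + [0.0] * 59)
```

Where this sketch hand-tunes stiffness and damping per vertex, Dyna learns how deformation depends on both body shape and motion, so the same animation transfers across bodies.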

Gerard Pons-Moll
Max-Planck-Institut für Intelligente Systeme

Javier Romero
Max-Planck-Institut für Intelligente Systeme

Naureen Mahmood
Max-Planck-Institut für Intelligente Systeme

Michael Black
Max-Planck-Institut für Intelligente Systeme