VR, Display, and Interaction

Technical Papers

Tuesday, 11 August, 9:00 AM - 10:30 AM | Los Angeles Convention Center, Room 152
Session Chair: Wolfgang Heidrich, King Abdullah University of Science and Technology


Augmented Airbrush for Computer-Aided Painting

The Augmented Airbrush system allows novices to produce complex physical paintings using specialized hardware that provides force-trigger feedback and high-speed tracking, combined with a risk-estimation algorithm.

Amit Zoran
The Hebrew University of Jerusalem

Roy Shilkrot
MIT Media Lab

Pattie Maes
MIT Media Lab

Joseph Paradiso
MIT Media Lab
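
The risk-estimation algorithm is not detailed in this listing; as a rough illustration of the idea only (a toy Gaussian spray model with hypothetical names, not the authors' method), the sketch below scores a candidate spray location by predicting whether releasing paint there would move the canvas toward or away from a reference image.

```python
import numpy as np

def spray_footprint(shape, center, sigma):
    """Gaussian paint-deposition footprint centered at the nozzle's
    projected position on the canvas (toy model)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((xs - center[0])**2 + (ys - center[1])**2) / (2 * sigma**2))

def spray_risk(canvas, reference, center, sigma, opacity=0.2, paint=0.0):
    """Estimate the risk of spraying at `center`: the predicted increase in
    per-pixel error against the reference if paint were released now.
    Positive risk -> spraying here would likely hurt the painting."""
    w = opacity * spray_footprint(canvas.shape, center, sigma)
    predicted = (1.0 - w) * canvas + w * paint        # alpha-blend dark paint
    err_now = np.abs(canvas - reference)
    err_after = np.abs(predicted - reference)
    return float((err_after - err_now).sum())

# Example: risk is negative (helpful) over a region that should be dark,
# positive (harmful) over a region that should stay white.
reference = np.ones((64, 64)); reference[:, :32] = 0.0   # left half dark
canvas = np.ones((64, 64))                               # blank white canvas
print(spray_risk(canvas, reference, center=(16, 32), sigma=5))  # < 0: spray
print(spray_risk(canvas, reference, center=(48, 32), sigma=5))  # > 0: hold back
```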

eyeSelfie: Self-Directed Eye Alignment Using Reciprocal Eye-Box Imaging

Eye self-alignment is critical for HMDs, biometrics, and retinal imaging. This paper frames the problem as a user-interface challenge and exploits the reciprocity principle, “If you see me, I see you,” to develop near-eye alignment displays.

Tristan Swedish
Massachusetts Institute of Technology

Karin Roesch
Massachusetts Institute of Technology

Ik Hyun Lee
Massachusetts Institute of Technology

Krishna Rastogi
Massachusetts Institute of Technology

Shoshana Bernstein
Massachusetts Institute of Technology

Ramesh Raskar
Massachusetts Institute of Technology
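
As a toy illustration of the reciprocity idea (not the paper's optics), the sketch below models fixation cues viewed through a small shared aperture in 1D: the full cue pattern is visible only when the pupil sits in the intended eye box, at which point a camera behind the same aperture also has a clear view of the eye. All geometry, values, and names are assumptions.

```python
import numpy as np

# Toy 1D geometry (millimetres): fixation cues sit on a plane inside the
# device at depth D_CUE behind a small aperture; the pupil sits E_RELIEF
# in front of it. A cue is "seen" only if the chief ray from the cue to
# the pupil centre passes inside the aperture.
D_CUE, E_RELIEF, APERTURE_R = 30.0, 20.0, 1.0
CUES = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # cue positions on the cue plane

def visible_cues(pupil_x):
    """Return the subset of cue positions visible from a pupil centred at pupil_x."""
    t = D_CUE / (D_CUE + E_RELIEF)              # where the ray crosses the aperture plane
    crossing = (1 - t) * CUES + t * pupil_x
    return CUES[np.abs(crossing) <= APERTURE_R]

print(visible_cues(0.0))   # aligned: the whole pattern is visible
print(visible_cues(2.0))   # misaligned: only part of the pattern survives
# Reciprocity: when the user reports seeing the full pattern, a camera sharing
# the same aperture has an unobstructed view of the eye.
```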

Optimal Presentation of Imagery With Focus Cues on Multi-Plane Displays

A technique for displaying three-dimensional imagery of general scenes with nearly correct focus cues on multi-plane displays. Such displays present an additive combination of images at a discrete set of optical distances, allowing the user to focus at different distances in the simulated scene.

Rahul Narain
University of Minnesota

Rachel A. Albert
University of California, Berkeley

M. Abdullah Bulbul
University of California, Berkeley

Gregory J. Ward
Dolby Laboratories, Inc.

Martin S. Banks
University of California, Berkeley

James F. O'Brien
University of California, Berkeley
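
For context, a common baseline for multi-plane presentation (not the optimized presentation this paper proposes) is linear depth-weighted blending, which splits each pixel's intensity between its two neighbouring planes in proportion to dioptric distance. A minimal sketch, with assumed array shapes and plane placements:

```python
import numpy as np

def split_to_planes(image, depth_d, plane_d):
    """Distribute each pixel's intensity across an additive multi-plane stack
    with linear depth-weighted blending in dioptres.
    image:   (H, W) intensity
    depth_d: (H, W) per-pixel depth in dioptres
    plane_d: (P,) plane distances in dioptres, sorted ascending
    Returns a (P, H, W) stack whose per-pixel sum equals the input image."""
    plane_d = np.asarray(plane_d, dtype=float)
    d = np.clip(depth_d, plane_d[0], plane_d[-1])
    stack = np.zeros((len(plane_d),) + image.shape)
    hi = np.clip(np.searchsorted(plane_d, d), 1, len(plane_d) - 1)
    lo = hi - 1
    w_hi = (d - plane_d[lo]) / (plane_d[hi] - plane_d[lo])
    rows, cols = np.indices(image.shape)
    stack[hi, rows, cols] = w_hi * image
    stack[lo, rows, cols] += (1.0 - w_hi) * image
    return stack

# Example: a 4-plane stack at 0.5, 1.5, 2.5, and 3.5 dioptres.
img = np.random.rand(4, 4)
depth = np.full((4, 4), 1.9)                    # scene content at 1.9 D
stack = split_to_planes(img, depth, [0.5, 1.5, 2.5, 3.5])
assert np.allclose(stack.sum(axis=0), img)      # planes sum back to the image
```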

The Light-Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light-Field Displays With Focus Cues

This practical, inexpensive solution for creating comfortable VR experiences that support all depth cues is a near-eye stereoscopic light-field display that presents a different 4D light field to each eye, producing correct or nearly correct retinal blur and allowing the viewer to focus freely within the scene.

Fu-Chung Huang
Stanford University

Kevin Chen
Stanford University

Gordon Wetzstein
Stanford University
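
As a simplified illustration of the "factored" part of the title (assuming a display built from stacked attenuation layers, and not reproducing the paper's solver), a two-plane ray parameterization turns the target light field into a matrix that can be factored into rear- and front-layer patterns with rank-1 nonnegative multiplicative updates:

```python
import numpy as np

def factor_light_field(L, iters=200, eps=1e-9):
    """Rank-1 nonnegative factorization L[i, j] ≈ rear[i] * front[j].
    With a two-plane ray parameterization, L[i, j] is the intensity of the
    ray crossing rear-layer pixel i and front-layer pixel j, so the two
    factors are the patterns shown on the stacked panels.
    (Multiplicative-update sketch, not the paper's exact method.)"""
    rear = np.random.rand(L.shape[0]) + 0.1
    front = np.random.rand(L.shape[1]) + 0.1
    for _ in range(iters):
        front *= (L.T @ rear) / (front * (rear @ rear) + eps)
        rear *= (L @ front) / (rear * (front @ front) + eps)
    return rear, front

# Example: a light field that is exactly rank-1 is recovered up to scale.
true_rear, true_front = np.random.rand(32), np.random.rand(48)
L = np.outer(true_rear, true_front)
rear, front = factor_light_field(L)
print(np.max(np.abs(np.outer(rear, front) - L)))   # small reconstruction error
```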