CHILDHOOD: Wearable Suit for Augmented Child Experience
Emerging Technologies

Understanding and perceiving the world from a child’s point of view is very important in designing products and architectural spaces, and in providing safe, productive environments in hospitals and kindergartens. CHILDHOOD virtually recreates a child’s eye level and hand movements via a viewpoint translator and hand exoskeletons. The viewpoint translator presents a child’s point of view using a head-mounted display and a pan-tilt stereo camera attached at the waist; the pan-tilt mechanism follows the user’s head motion with low latency. The passive hand exoskeletons simulate a child’s small grasping motion using multiple quadric crank mechanisms and a child-sized rubber hand, designed from a grasping motion analyzed with a motion capture system. The system has no actuators or sensors; it is driven passively by the user’s actions, so the user receives complete real-time haptic feedback.
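The head-following behavior of the viewpoint translator can be illustrated with a minimal control sketch. The function names, angle limits, and smoothing gain below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a pan-tilt follower: map the user's head yaw/pitch
# (e.g., from an HMD's orientation tracker) to pan/tilt angles for the
# waist-mounted stereo camera. Limits and gain are assumed values.

PAN_LIMIT = 90.0    # degrees; assumed mechanical range of the pan axis
TILT_LIMIT = 45.0   # degrees; assumed mechanical range of the tilt axis

def clamp(value, limit):
    """Keep a commanded angle inside the mechanism's range."""
    return max(-limit, min(limit, value))

def follow_head(prev_pan, prev_tilt, head_yaw, head_pitch, gain=0.8):
    """One update step: move the camera toward the head orientation.

    A first-order low-pass (gain < 1) trades a little latency for
    smoothness; gain = 1 would track the head exactly.
    """
    pan = prev_pan + gain * (clamp(head_yaw, PAN_LIMIT) - prev_pan)
    tilt = prev_tilt + gain * (clamp(head_pitch, TILT_LIMIT) - prev_tilt)
    return pan, tilt

# Example: starting level, the user looks 120 degrees right and 10 degrees up.
pan, tilt = follow_head(0.0, 0.0, 120.0, 10.0)
# The pan command is pulled toward the 90-degree limit, never beyond it.
```

In practice such a loop would run at the camera's control rate; the clamping step reflects that a physical pan-tilt stage cannot match every head pose.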
When it was demonstrated at the National Science Museum in Tokyo, CHILDHOOD received first prize, and more than 400 museum visitors experienced the system. It was also tested at the Department of Neurosurgery, University of Tsukuba Hospital. These two studies indicated that the system successfully provides a child’s view and grasping action, and they demonstrate its potential for assisting product and spatial design by allowing the user to walk around and interact freely with others.
Jun Nishida
University of Tsukuba
Hikaru Takatori
University of Tsukuba
Kosuke Sato
University of Tsukuba
Kenji Suzuki
University of Tsukuba