Lung-Pan Cheng

PhD student with Patrick Baudisch at Hasso Plattner Institute.
2016 MSR Intern with E. Ofek, C. Holz, H. Benko, and A. D. Wilson
2014 Apple Intern with Sean Kim and Camille Moussette

Human Actuation

My research is about advancing immersion. Today, users see and hear virtual worlds; I want users to also feel virtual worlds. The main challenge I am tackling is that such large-scale force feedback traditionally requires big machinery, such as industrial robots. The key idea behind my research is to bypass this machinery by instead leveraging human power. I thus create software systems that orchestrate humans in doing this mechanical labor: this is what I call human actuation. As part of my PhD work, I first created the basic concept and then extended it to real-walking. My two most recent projects now attempt to close the circle: they eliminate the need for dedicated human actuators by instead letting regular VR users perform the necessary labor as a side effect.

CHI/UIST full papers on human actuation

Mutual Human Actuation

Lung-Pan Cheng, Sebastian Marwecki and Patrick Baudisch

In Proceedings of UIST '17.

We introduce mutual human actuation, a version of human actuation that works without dedicated human actuators. The key idea is to run pairs of users at the same time and have them provide human actuation to each other. Our system, Mutual Turk, achieves this by (1) offering shared props through which users can exchange forces while obscuring the fact that there is a human on the other side, and (2) synchronizing the two users’ timelines such that their way of manipulating the shared props is consistent across both virtual worlds.
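
To illustrate the synchronization idea, here is a minimal sketch (hypothetical names and events, not the actual Mutual Turk engine): both users' scripts are derived from one shared list of force exchanges, so the giver's instructed action and the receiver's perceived effect always coincide in time.

    from dataclasses import dataclass

    @dataclass
    class Exchange:
        time_s: float          # when the exchange happens on the shared timeline
        prop: str              # the shared prop the force travels through
        giver_action: str      # instruction shown in the giver's virtual world
        receiver_effect: str   # what the receiver perceives in theirs

    def timelines(exchanges):
        """Derive both users' scripts from one shared exchange list,
        alternating who gives and who receives the force."""
        script_a, script_b = [], []
        for i, e in enumerate(exchanges):
            giver, receiver = (script_a, script_b) if i % 2 == 0 else (script_b, script_a)
            giver.append((e.time_s, f"{e.prop}: {e.giver_action}"))
            receiver.append((e.time_s, f"{e.prop}: {e.receiver_effect}"))
        return script_a, script_b

    shared = [Exchange(3.0, "rod", "swing the rod upward", "feel the kite tug"),
              Exchange(7.0, "rod", "pull the rod back", "be reeled toward the pier")]
    a, b = timelines(shared)
    print(a)
    print(b)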

paper video

Sparse Haptic Proxy: Touch Feedback in Virtual Environments Using a General Prop

Lung-Pan Cheng, Eyal Ofek, Christian Holz, Hrvoje Benko and Andrew D. Wilson

In Proceedings of CHI '17, p3718-3728.

We propose a class of passive haptics that we call Sparse Haptic Proxy: a set of geometric primitives that simulate touch feedback in elaborate virtual reality scenes. Unlike previous passive haptics that replicate the virtual environment in physical space, a Sparse Haptic Proxy simulates a scene’s detailed geometry by redirecting the user’s hand to a matching primitive of the proxy. To bridge the divergence of the scene from the proxy, we augment an existing Haptic Retargeting technique with an on-the-fly target remapping: We predict users’ intentions during interaction in the virtual space by analyzing their gaze and hand motions, and consequently redirect their hand to a matching part of the proxy.
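
As an illustration of the redirection step, here is a minimal sketch of the hand-warping idea (illustrative names and geometry; not the system's actual implementation): the rendered hand is offset toward the virtual target in proportion to the reach progress, so the physical hand lands on the proxy primitive exactly when the virtual hand reaches the virtual geometry.

    import numpy as np

    def redirected_hand(hand, start, proxy_target, virtual_target):
        """Offset the rendered hand so the physical hand lands on the proxy.

        The offset grows from zero at the start of the reach to the full
        proxy-to-target displacement at the proxy itself."""
        total = np.linalg.norm(proxy_target - start)
        travelled = np.linalg.norm(hand - start)
        alpha = np.clip(travelled / total, 0.0, 1.0) if total > 0 else 1.0
        return hand + alpha * (virtual_target - proxy_target)

    start = np.array([0.0, 0.0, 0.0])    # where the reach began
    proxy = np.array([0.5, 0.0, 0.0])    # face of a proxy primitive
    button = np.array([0.6, 0.1, 0.0])   # virtual button being reached for
    for x in (0.1, 0.3, 0.5):
        print(redirected_hand(np.array([x, 0.0, 0.0]), start, proxy, button))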

paper video

Providing Haptics to Walls and Heavy Objects in Virtual Reality by Means of Electrical Muscle Stimulation

Pedro Lopes, Sijing You, Lung-Pan Cheng, Sebastian Marwecki, and Patrick Baudisch

In Proceedings of CHI '17, p1471-1482.

In this project, we explored how to add haptics to walls and other heavy objects in virtual reality. Our main idea is to prevent the user’s hands from penetrating virtual objects by means of electrical muscle stimulation (EMS). As the user lifts a virtual cube, for example, our system lets the user feel the weight and resistance of the cube. The heavier the cube and the harder the user presses on it, the stronger the counterforce the system generates. The system implements the physicality of the cube by actuating the user’s opposing muscles with EMS.
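
The mapping from virtual load to stimulation can be pictured with a toy model like the following (hypothetical gains and limits; a real system is calibrated per user):

    def ems_intensity(weight_kg, press_force_n,
                      k_weight=2.0, k_press=0.5, user_max_ma=30.0):
        """Map a virtual load to a stimulation current in milliamps.

        k_weight, k_press, and user_max_ma are hypothetical values that a
        real system would set during per-user calibration."""
        current = k_weight * weight_kg + k_press * press_force_n
        return min(current, user_max_ma)  # never exceed the calibrated maximum

    print(ems_intensity(weight_kg=1.5, press_force_n=4.0))  # 5.0 (mA)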

paper video

TurkDeck: Physical Virtual Reality Based on People

Lung-Pan Cheng, Thijs Roumen, Hannes Rantzsch, Sven Koehler, Patrick Schmidt, Robert Kovacs, Johannes Jasper, Jonas Kemper and Patrick Baudisch

In Proceedings of UIST '15, p417-426.

TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck allows creating arbitrarily large virtual worlds in finite space and using a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by making a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called “human actuators” by displaying visual instructions that tell the human actuators when and where to place props and how to actuate them.
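
The scheduling idea can be sketched as follows (illustrative data structures, not TurkDeck's actual scheduler): instructions are shown only for props that become reachable within a short lookahead window.

    from dataclasses import dataclass

    @dataclass
    class PropTask:
        prop: str
        position: tuple    # (x, y) in room coordinates
        due_segment: int   # scene segment in which the user can reach it

    def next_instructions(tasks, current_segment, lookahead=1):
        """Instructions for props the user reaches within the lookahead window."""
        return [f"Place {t.prop} at {t.position}"
                for t in tasks
                if current_segment < t.due_segment <= current_segment + lookahead]

    tasks = [PropTask("wall panel", (2.0, 1.0), 1),
             PropTask("ledge", (3.5, 0.5), 2)]
    print(next_instructions(tasks, current_segment=0))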

paper video

Haptic Turk: a Motion Platform Based on People

Lung-Pan Cheng, Patrick Lühne, Pedro Lopes, Christoph Sterz and Patrick Baudisch

In Proceedings of CHI '14, p3463-3472.

We present Haptic Turk, a different approach to motion platforms that is light and mobile. The key idea is to replace motors and mechanical components with humans. All Haptic Turk setups consist of a player who is supported by one or more human actuators. The player enjoys an interactive experience, such as a flight simulation. The motion in the player’s experience is generated by the actuators, who manually lift, tilt, and push the player’s limbs or torso.
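
The core timing idea can be sketched in a few lines (hypothetical events and lead time; not the actual system): each motion event is announced to the actuators slightly ahead of time so they can react in sync with the experience.

    def cue_schedule(motion_events, lead_time_s=1.5):
        """Turn (time, motion) pairs into sorted (show_at, cue) pairs so each
        cue is displayed lead_time_s before the motion must happen."""
        return sorted((t - lead_time_s, cue) for t, cue in motion_events)

    events = [(5.0, "tilt left"), (2.0, "lift front"), (8.0, "shake")]
    for show_at, cue in cue_schedule(events):
        print(f"{show_at:4.1f}s: show '{cue}'")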

paper video

Earlier CHI/UIST full papers on mobile interaction

iGrasp: Grasp-Based Adaptive Keyboard for Mobile Devices

Lung-Pan Cheng, Kate Hsiao, Andrew Liu and Mike Y. Chen

In Proceedings of CHI '13, p3037-3046.

We present iGrasp, which automatically adapts the layout and position of virtual keyboards based on how and where users are grasping the devices without requiring explicit user input. Our prototype uses 46 capacitive sensors positioned along the sides of an iPad to sense users’ grasps, and supports two types of grasp-based automatic adaptation: layout switching and continuous positioning.
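
A minimal sketch of such grasp-based adaptation, assuming a simple nearest-centroid classifier over the 46 sensor readings (illustrative; not the paper's actual recognizer):

    import numpy as np

    N_SENSORS = 46  # capacitive sensors along the device's sides

    # Hypothetical centroids: a left-hand grasp activates the left half,
    # a two-handed grasp activates both halves.
    CENTROIDS = {
        "left-hand": np.r_[np.ones(23), np.zeros(23)],
        "two-hands": np.ones(N_SENSORS),
    }
    LAYOUTS = {"left-hand": "keyboard anchored under the left thumb",
               "two-hands": "split keyboard"}

    def adapt_keyboard(readings):
        """Pick the nearest grasp centroid and return its keyboard layout."""
        grasp = min(CENTROIDS,
                    key=lambda g: np.linalg.norm(readings - CENTROIDS[g]))
        return LAYOUTS[grasp]

    print(adapt_keyboard(np.r_[np.ones(23) * 0.9, np.zeros(23)]))  # left-hand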

paper video

iRotate: Automatic Screen Rotation based on Face Orientation

Lung-Pan Cheng, Kate Hsiao and Andrew Liu, Mike Y. Chen

In Proceedings of CHI '12, p2203-2210.

Current approaches to automatic screen rotation are based on gravity and device orientation. Our survey shows that most users have experienced auto-rotation that leads to an incorrect viewing orientation. iRotate solves this problem by automatically rotating the screens of mobile devices to match users’ face orientation, using the front camera and face detection.
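
The rotation decision itself can be sketched as follows (a simplification under assumed conventions; the actual pipeline first runs face detection on the front camera): snap the screen to the 90-degree orientation closest to the detected face's roll angle.

    def screen_orientation(face_roll_deg):
        """Map a face roll angle (degrees, 0 = upright) to one of four
        screen orientations."""
        orientations = {0: "portrait", 90: "landscape-left",
                        180: "portrait-upside-down", 270: "landscape-right"}
        snapped = round(face_roll_deg / 90) % 4 * 90
        return orientations[snapped]

    print(screen_orientation(-80))  # landscape-right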

paper video

TUIC: Enabling Tangible Interaction on Capacitive Multi-touch Displays

Neng-Hao Yu, Li-Wei Chan, Seng Yong Lau, Sung-Sheng Tsai, I-Chun Hsiao, Dian-Je Tsai, Fang-I Hsiao, Lung-Pan Cheng, Mike Y. Chen, Polly Huang and Yi-Ping Hung

In Proceedings of CHI '11, p2995-3004.

TUIC enables tangible interaction on capacitive multi-touch devices, such as the iPad, iPhone, and multi-touch displays, without requiring any hardware modifications. TUIC simulates finger touches on capacitive displays using passive materials and active modulation circuits embedded inside tangible objects, and can be used simultaneously with multi-touch gestures. After the system recognizes the pattern of a TUIC object, users can manipulate the object by translating and rotating it on the surface to control the virtual object.
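
For the active tags, the decoding idea can be sketched as follows (hypothetical pulse rates and names; not the actual TUIC decoder): the embedded circuit pulses a simulated touch at an ID-specific frequency, which the host recovers from the timing of the touch events.

    TAG_IDS = {5.0: "rotary knob", 8.0: "stamp"}  # hypothetical pulse rates (Hz)

    def decode_tag(touch_times):
        """Estimate the pulse frequency from inter-touch intervals and
        return the nearest registered tag."""
        intervals = [b - a for a, b in zip(touch_times, touch_times[1:])]
        freq = 1.0 / (sum(intervals) / len(intervals))
        return min(TAG_IDS.items(), key=lambda kv: abs(kv[0] - freq))[1]

    print(decode_tag([0.0, 0.2, 0.4, 0.6]))  # ~5 Hz -> "rotary knob"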

paper video

CHI/UIST short papers

Level-Ups: Motorized Stilts that Simulate Stair Steps

Dominik Schmidt, Robert Kovacs, Vikram Mehta, Udayan Umapathi, Sven Koehler, Lung-Pan Cheng and Patrick Baudisch

In Proceedings of CHI '15, p2157-2160.

We present “Level-Ups”, computer-controlled stilts that allow virtual reality users to experience walking up and down steps. Each Level-Up unit is a self-contained device worn like a boot. Its main functional element is an actuation mechanism mounted to the bottom of the boot that extends vertically. Unlike traditional solutions that are integrated with locomotion devices, Level-Ups allow users to walk around freely (“real-walking”).
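
A toy version of the height control (assumed constants, not the device firmware): each stilt extends to the virtual terrain height under that foot, clamped to the actuator's physical stroke.

    MAX_STROKE_CM = 12.0  # hypothetical actuator travel

    def stilt_extension(terrain_height_cm, floor_height_cm=0.0):
        """Extension needed so the foot feels the virtual step height."""
        target = terrain_height_cm - floor_height_cm
        return max(0.0, min(target, MAX_STROKE_CM))

    for h in (0.0, 8.0, 20.0):
        print(stilt_extension(h))  # 0.0, 8.0, 12.0 (clamped)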

paper video

iRotateGrasp: Automatic Screen Rotation based on Grasp of Mobile Devices

Lung-Pan Cheng, Meng-Han Lee, Che-Yang Wu, Fang-I Hsiao, Yen-Ting Liu, Hsiang-Sheng Liang, Yi-Ching Chiu, Ming-Sui Lee and Mike Y. Chen

In Proceedings of CHI '13, p3051-3054.

iRotateGrasp automatically rotates the screens of mobile devices to match users’ viewing orientations based on how users are grasping the devices. It rotates screens correctly across different postures and device orientations without explicit user input. Our insight is that users’ grasps are consistent within each viewing orientation but differ significantly between orientations. We implemented several prototypes that successfully sense users’ grasps and classify them into viewing orientations.
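
A minimal sketch of the classification step, assuming a 1-nearest-neighbor matcher over stored grasp samples (illustrative; not the paper's classifier):

    import numpy as np

    # Hypothetical training samples: (sensor vector, viewing orientation).
    SAMPLES = [
        (np.r_[np.ones(23), np.zeros(23)], "portrait"),
        (np.r_[np.zeros(23), np.ones(23)], "landscape"),
    ]

    def viewing_orientation(readings):
        """Return the orientation label of the closest stored grasp."""
        vec, label = min(SAMPLES,
                         key=lambda s: np.linalg.norm(readings - s[0]))
        return label

    print(viewing_orientation(np.r_[np.ones(23) * 0.8, np.zeros(23)]))  # portrait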

paper video