2018 MSR Intern
My research is about advancing immersion. Today, users see and hear virtual worlds; I want them to also feel virtual worlds. The main challenge I am tackling is that such large-scale force feedback traditionally requires bulky machinery, such as industrial robots. The key idea behind my research is to bypass this machinery by leveraging human power instead. I thus create software systems that orchestrate humans in performing this mechanical labor--this is what I call human actuation. As part of my PhD work, I first created the basic concept and then extended it to real walking. My two most recent projects now attempt to close the circle: they eliminate the need for dedicated human actuators by instead letting regular VR users perform the necessary labor--as a side effect.
Lung-Pan Cheng, Li Chang, Sebastian Marwecki, and Patrick Baudisch
Full paper in CHI '18
We present a system that complements virtual reality experiences with passive props, yet still allows modifying the virtual world at runtime. The main contribution of our system is that it does not require any actuators; instead, our system employs the user to reconfigure and actuate otherwise passive props. We demonstrate a foldable prop that users reconfigure to represent a suitcase, a fuse cabinet, a railing, and a seat. A second prop, suspended from a long pendulum, not only stands in for inanimate objects, but also for objects that move and demonstrate proactive behavior, such as a group of flying droids that physically attack the user.
Sebastian Marwecki, Maximilian Brehm, Lung-Pan Cheng, Florian 'Floyd' Mueller, and Patrick Baudisch
Full paper in CHI '18
Although virtual reality hardware is now widely available, the uptake of real walking is hindered by the fact that it often requires impractically large amounts of physical space. To address this, we present VirtualSpace, a novel system that allows overloading multiple users immersed in different VR experiences into the same physical space. VirtualSpace accomplishes this by containing each user in a subset of the physical space at all times, which we call tiles; app-invoked maneuvers then shuffle tiles and users across the entire physical space.
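The tile bookkeeping above can be sketched as follows. This is an illustrative toy, not VirtualSpace's actual code; all class and method names are invented. Each user is contained in exactly one tile, and an app-invoked maneuver swaps two users' tiles so everyone eventually reaches the whole floor:

```python
class TileManager:
    """Toy sketch: map each VR user to one tile of the shared physical floor."""

    def __init__(self, tile_ids):
        self.tiles = list(tile_ids)   # e.g. ["A", "B", "C", "D"]
        self.assignment = {}          # user -> tile currently occupied

    def admit(self, user):
        """Place a newly arriving user on the first free tile."""
        occupied = set(self.assignment.values())
        free = [t for t in self.tiles if t not in occupied]
        if not free:
            raise RuntimeError("no free tile for new user")
        self.assignment[user] = free[0]
        return free[0]

    def shuffle(self, user_a, user_b):
        """An app-invoked maneuver: two users trade tiles."""
        self.assignment[user_a], self.assignment[user_b] = (
            self.assignment[user_b],
            self.assignment[user_a],
        )

mgr = TileManager(["A", "B", "C", "D"])
mgr.admit("alice")              # alice starts on tile "A"
mgr.admit("bob")                # bob starts on tile "B"
mgr.shuffle("alice", "bob")     # a maneuver swaps their tiles
```

In the real system, of course, a maneuver must be motivated inside each user's VR experience so the walk between tiles feels natural.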
Lung-Pan Cheng, Sebastian Marwecki, and Patrick Baudisch
Full paper and best demo award in UIST '17, p797-805.
We introduce mutual human actuation, a version of human actuation that works without dedicated human actuators. The key idea is to run pairs of users at the same time and have them provide human actuation to each other. Our system, Mutual Turk, achieves this by (1) offering shared props through which users can exchange forces while obscuring the fact that there is a human on the other side, and (2) synchronizing the two users' timelines such that their way of manipulating the shared props is consistent across both virtual worlds.
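The timeline synchronization can be sketched as a toy pairing function. This is invented for illustration, not Mutual Turk's implementation: each user's experience is a list of timed events, and events that exchange forces through a shared prop are aligned so both users act at the same moment, with the later of the two scheduled times winning:

```python
def pair_timelines(script_a, script_b):
    """Return (time, event_a, event_b) triples for paired force exchanges.

    script_a and script_b are lists of (scheduled_time, event) tuples; the
    i-th entries of the two scripts are assumed to be the same physical
    exchange seen from each user's virtual world.
    """
    paired = []
    for (t_a, ev_a), (t_b, ev_b) in zip(script_a, script_b):
        # Delay the earlier user so both manipulate the shared prop together.
        paired.append((max(t_a, t_b), ev_a, ev_b))
    return paired

shared = pair_timelines(
    [(0.0, "kite tugs rope")],    # what user A feels in world A
    [(1.5, "fish pulls line")],   # the matching exchange in world B
)
```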
Lung-Pan Cheng, Eyal Ofek, Christian Holz, Hrvoje Benko and Andrew D. Wilson
Full paper in CHI '17, p3718-3728.
We propose a class of passive haptics that we call Sparse Haptic Proxy: a set of geometric primitives that simulate touch feedback in elaborate virtual reality scenes. Unlike previous passive haptics that replicate the virtual environment in physical space, a Sparse Haptic Proxy simulates a scene's detailed geometry by redirecting the user's hand to a matching primitive of the proxy. To bridge the divergence of the scene from the proxy, we augment an existing Haptic Retargeting technique with an on-the-fly target remapping: We predict users' intentions during interaction in the virtual space by analyzing their gaze and hand motions, and consequently redirect their hand to a matching part of the proxy.
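The core of such hand redirection can be sketched as a simple warp, shown below as an illustrative toy rather than the paper's implementation: as the tracked hand travels from its start point toward the physical primitive, the rendered hand is blended toward the virtual target, so that both are reached at the same moment. All function names are invented.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def rendered_hand(hand, start, physical_target, virtual_target):
    """Warp the tracked hand position toward the virtual target.

    progress is 0 when the hand is at its start point and 1 when it
    reaches the physical primitive, so the rendered hand touches the
    virtual geometry exactly when the real hand touches the proxy.
    """
    total = norm(sub(physical_target, start))
    remaining = norm(sub(physical_target, hand))
    progress = max(0.0, min(1.0, 1.0 - remaining / total))
    offset = sub(virtual_target, physical_target)
    return [h + o * progress for h, o in zip(hand, offset)]
```

The on-the-fly remapping in the paper additionally re-chooses `virtual_target` and `physical_target` mid-reach, based on the predicted intention from gaze and hand motion.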
Pedro Lopes, Sijing You, Lung-Pan Cheng, Sebastian Marwecki, and Patrick Baudisch
Full paper and demo in CHI '17, p1471-1482.
In this project, we explored how to add haptics to walls and other heavy objects in virtual reality. Our main idea is to prevent the user’s hands from penetrating virtual objects by means of electrical muscle stimulation (EMS). Figure 1a shows an example. As the shown user lifts a virtual cube, our system lets the user feel the weight and resistance of the cube. The heavier the cube and the harder the user presses the cube, the stronger a counterforce the system generates. Figure 1b illustrates how our system implements the physicality of the cube, i.e., by actuating the user’s opposing muscles with EMS.
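The "heavier and harder means stronger" rule can be sketched as a simple mapping. The constants and function names below are made up for illustration; real EMS output must be calibrated per user and per muscle:

```python
def ems_intensity(mass_kg, press_force_n, k_mass=0.1, k_press=0.05, user_max=1.0):
    """Toy counterforce model: stimulation grows with the virtual object's
    mass and with how hard the user presses, clamped to a per-user maximum."""
    raw = k_mass * mass_kg + k_press * press_force_n
    return min(raw, user_max)

light_touch = ems_intensity(2.0, 4.0)      # light cube, gentle press
hard_press = ems_intensity(100.0, 100.0)   # clamps at the calibrated maximum
```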
Lung-Pan Cheng, Thijs Roumen, Hannes Rantzsch, Sven Koehler, Patrick Schmidt, Robert Kovacs, Johannes Jasper, Jonas Kemper and Patrick Baudisch
Full paper in UIST '15, p417-426.
TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck allows creating arbitrarily large virtual worlds in finite space and using a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by making a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called “human actuators” by displaying visual instructions that tell the human actuators when and where to place props and how to actuate them.
Lung-Pan Cheng, Patrick Lühne, Pedro Lopes, Christoph Sterz and Patrick Baudisch
Full paper and demo in CHI '14, p3463-3472.
We present Haptic Turk, a different approach to motion platforms that is light and mobile. The key idea is to replace motors and mechanical components with humans. Every Haptic Turk setup consists of a player who is supported by one or more human actuators. The player enjoys an interactive experience, such as a flight simulation. The motion in the player's experience is generated by the actuators, who manually lift, tilt, and push the player's limbs or torso.
Lung-Pan Cheng, Kate Hsiao, Andrew Liu and Mike Y. Chen
Full paper in CHI '13, p3037-3046.
We present iGrasp, which automatically adapts the layout and position of virtual keyboards based on how and where users are grasping the devices without requiring explicit user input. Our prototype uses 46 capacitive sensors positioned along the sides of an iPad to sense users’ grasps, and supports two types of grasp-based automatic adaptation: layout switching and continuous positioning.
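One simple way to picture the layout-switching half is a nearest-template match over the sensor readings. This is a toy sketch, not iGrasp's classifier; the templates and labels below are invented (and shortened to six sensors instead of 46):

```python
def classify_grasp(readings, templates):
    """Return the grasp label whose template is nearest in squared distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: sq_dist(readings, templates[label]))

LAYOUT_FOR_GRASP = {"two-thumb": "split", "one-hand": "shifted-right"}

templates = {
    "two-thumb": [1, 1, 0, 0, 1, 1],   # toy sensor templates, not real data
    "one-hand":  [0, 0, 1, 1, 1, 0],
}
grasp = classify_grasp([1, 1, 0, 0, 1, 0], templates)
layout = LAYOUT_FOR_GRASP[grasp]       # keyboard layout chosen from the grasp
```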
Lung-Pan Cheng, Kate Hsiao, Andrew Liu, and Mike Y. Chen
Full paper and demo in CHI '12, p2203-2210.
Current approaches to automatic screen rotation are based on gravity and device orientation. Our survey shows that most users have experienced auto-rotation leading to an incorrect viewing orientation. iRotate solves this problem by automatically rotating the screens of mobile devices to match users' face orientations, using the front camera and face detection.
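The final step, from a detected face angle to a screen orientation, can be sketched as snapping the face's roll relative to the device to the nearest 90 degrees. This is an invented illustration; the actual system's face detector and decision thresholds are not shown:

```python
ORIENTATIONS = ["portrait", "landscape-left", "upside-down", "landscape-right"]

def orientation_for_face(face_roll_deg):
    """Snap the detected face roll angle to one of the four screen orientations."""
    return ORIENTATIONS[round(face_roll_deg / 90.0) % 4]
```

Because the decision follows the face rather than gravity, the screen stays correct even when the user lies down with the device.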
Neng-Hao Yu, Li-Wei Chan, Seng Yong Lau, Sung-Sheng Tsai, I-Chun Hsiao, Dian-Je Tsai, Fang-I Hsiao, Lung-Pan Cheng, Mike Y. Chen, Polly Huang and Yi-Ping Hung
Full paper in CHI '11, p2995-3004.
TUIC enables tangible interaction on capacitive multi-touch devices, such as the iPad, iPhone, and multi-touch displays, without requiring any hardware modifications. TUIC simulates finger touches on capacitive displays using passive materials and active modulation circuits embedded inside tangible objects, and can be used alongside multi-touch gestures simultaneously. After recognizing the pattern on a TUIC object, users can manipulate the object by translating and rotating it on the surface to control the virtual object.
Dominik Schmidt, Robert Kovacs, Vikram Mehta, Udayan Umapathi, Sven Koehler, Lung-Pan Cheng and Patrick Baudisch
Short paper in CHI '15, p2157-2160.
We present “Level-Ups”, computer-controlled stilts that allow virtual reality users to experience walking up and down steps. Each Level-Up unit is a self-contained device worn like a boot. Its main functional element is an actuation mechanism mounted to the bottom of the boot that extends vertically. Unlike traditional solutions that are integrated with locomotion devices, Level-Ups allow users to walk around freely (“real-walking”).
Lung-Pan Cheng, Meng-Han Lee, Che-Yang Wu, Fang-I Hsiao, Yen-Ting Liu, Hsiang-Sheng Liang, Yi-Ching Chiu, Ming-Sui Lee and Mike Y. Chen
Short paper and demo in CHI '13, p3051-3054.
iRotateGrasp automatically rotates the screens of mobile devices to match users' viewing orientations based on how users are grasping the devices. It rotates screens correctly across different postures and device orientations without explicit user input. Our insight is that users' grasps are consistent within each orientation, but differ significantly between orientations. Our prototypes successfully sense users' grasps and classify them into the corresponding viewing orientations.