J. Lee, P. Asente, W. Stuerzlinger (2023). Designing Viewpoint Transition Techniques in Multiscale Virtual Environments, IEEE VR, 9 pages. To appear. 2023-03

Abstract: Viewpoint transitions have been shown to improve users' spatial orientation and help them build a cognitive map when they are navigating an unfamiliar virtual environment. Previous work has investigated transitions in single-scale virtual environments, focusing on trajectories and continuity. We extend this work with an in-depth investigation of transition techniques in multiscale virtual environments (MVEs). We identify challenges in navigating MVEs with nested structures and assess how different transition techniques affect spatial understanding and usability. Through two user studies, we investigated transition trajectories, interactive control of transition movement, and speed modulation in a nested MVE. We show that some types of viewpoint transitions enhance users' spatial awareness and confidence in their spatial orientation and reduce the need to revisit a target point of interest multiple times.


J. Lee, P. Asente, W. Stuerzlinger (2022). A Comparison of Zoom-In Transition Methods for Multiscale VR, ACM SIGGRAPH '22, 2 pages. Poster. To appear.  2022-08

Abstract: When navigating an unfamiliar virtual environment in VR, transitions between pre-defined viewpoints are known to facilitate a user's spatial awareness. Different viewpoint transition techniques have previously been investigated, but mainly for single-scale environments. We present a comparative study of zoom-in transition techniques, in which a user's viewpoint is smoothly transitioned from a large level of scale (LoS) to a smaller LoS in a multiscale virtual environment (MVE) with a nested structure. We find that orbiting before zooming in is preferred over the alternatives when transitioning to a viewpoint at a small LoS.
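The orbit-then-zoom transition this poster favors can be sketched as a two-phase camera path: first orbit the target at constant radius until the final approach direction is reached, then move radially inward. This is a minimal 2D illustration; the function name, parameters, and the 50/50 phase split are assumptions for exposition, not the paper's implementation.

```python
import math

def orbit_then_zoom(cam, target, end_dir, end_dist, t, split=0.5):
    """Two-phase viewpoint transition: orbit first, then zoom in.
    cam: start position (x, y); target: orbit center (x, y);
    end_dir: final approach angle in radians; end_dist: final distance
    from the target; t: normalized transition time in [0, 1].
    The 50/50 phase split is an illustrative assumption."""
    sx, sy = cam[0] - target[0], cam[1] - target[1]
    r0 = math.hypot(sx, sy)      # starting distance from target
    a0 = math.atan2(sy, sx)      # starting angle around target
    if t < split:
        # Phase 1: rotate around the target at constant radius
        u = t / split
        a = a0 + (end_dir - a0) * u
        r = r0
    else:
        # Phase 2: zoom in along the final viewing direction
        u = (t - split) / (1.0 - split)
        a = end_dir
        r = r0 + (end_dist - r0) * u
    return (target[0] + r * math.cos(a), target[1] + r * math.sin(a))
```

Keeping the radius fixed during the orbit phase lets the user watch the target from a constant apparent size before any change of scale begins, which is one plausible reason this ordering was preferred.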


J. Lee, P. Asente, B. Kim, Y. Kim, W. Stuerzlinger (2020). Evaluating Automatic Parameter Control Methods for Locomotion in Multiscale Virtual Environments, ACM VRST '20, article no. 11, 10 pages.    2018-04  

Abstract: Virtual environments with a wide range of scales are becoming commonplace in Virtual Reality applications. Methods to control locomotion parameters can help users explore such environments more easily. For multiscale virtual environments, point-and-teleport locomotion with a well-designed distance control method can enable mid-air teleportation, which makes it competitive with flying interfaces. Yet, automatic distance control for point-and-teleport has not been studied in such environments. We present a new method to automatically control the distance for point-and-teleport. In our first user study, we used a solar system environment to compare three methods: automatic distance control for point-and-teleport, manual distance control for point-and-teleport, and automatic speed control for flying. Results showed that automatic control significantly reduces overshoot compared with manual control for point-and-teleport, but the discontinuous nature of teleportation made users prefer flying with automatic speed control. We conducted a second study to compare automatic-speed-controlled flying and two versions of our teleportation method with automatic distance control, one incorporating optical flow cues. We found that point-and-teleport with optical flow cues and automatic distance control was more accurate than flying with automatic speed control, and both were equally preferred to point-and-teleport without the cues.
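One common way to realize automatic distance control for point-and-teleport is to map controller pitch to teleport distance with an exponential curve whose range grows with the current level of scale. The sketch below illustrates that idea only; the function name, the exponential mapping, and every constant are illustrative assumptions, not the parameters used in the paper.

```python
import math

def teleport_distance(pitch_deg, scale, d_min=0.5, d_max_base=100.0, k=3.0):
    """Map controller pitch to a teleport distance, scaled by the
    current level of scale (LoS). All constants are illustrative.

    pitch_deg: controller elevation above horizontal, 0..90 degrees.
    scale:     current environment scale factor (1.0 = human scale).
    """
    # Clamp pitch to the valid range and normalize to [0, 1].
    t = max(0.0, min(1.0, pitch_deg / 90.0))
    # The reachable maximum grows with the LoS, so the same gesture
    # covers planetary distances at large scales.
    d_max = d_max_base * scale
    # Exponential mapping: fine-grained control for nearby targets,
    # rapid growth toward far ones.
    return d_min + (d_max - d_min) * (math.expm1(k * t) / math.expm1(k))
```

Because the curve is shallow at low pitch, small aiming errors near the user translate into small distance errors, which is one way an automatic mapping can reduce the overshoot observed with manual control.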


B. Lee, S. Kim, A. Oulasvirta, J. Lee, E. Park (2018). Moving Target Selection: A Cue Integration Model, ACM CHI '18, pp. 1–12.  2017-10

Abstract: This paper investigates a common task requiring temporal precision: the selection of a rapidly moving target on a display by invoking an input event when it is within some selection window. Previous work has explored the relationship between accuracy and precision in this task, but the role of the visual cues available to users has remained unexplained. To expand modeling of timing performance to multimodal settings, common in gaming and music, our model builds on the principle of probabilistic cue integration. Maximum likelihood estimation (MLE) is used to model how different types of cues are integrated into a reliable temporal estimate for the task. The model deals with temporal structure (repetition, rhythm) and the perceivable movement of the target on the display. It accurately predicts error rates in a range of realistic tasks. Applications include the optimization of difficulty in game-level design.
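The MLE cue-integration principle the abstract builds on has a standard textbook form: independent Gaussian cues are combined by weighting each estimate with its inverse variance, and the fused estimate is more reliable than any single cue. The sketch below shows that generic rule; the function name and the Gaussian-independence assumption are illustrative, and this is not the paper's fitted model.

```python
def integrate_cues(estimates, variances):
    """Maximum-likelihood integration of independent Gaussian cues.
    Each cue's estimate is weighted by its reliability (inverse
    variance); the fused variance is the reciprocal of the summed
    reliabilities, so it never exceeds any single cue's variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    var = 1.0 / total
    return mean, var
```

For example, fusing a rhythm-based timing estimate with a visual-motion estimate of equal reliability halves the variance of the resulting temporal estimate, which is why adding cues improves selection accuracy in this framework.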


J. Lee, S. Kim, M. Fukumoto, B. Lee (2017). Reflector: Distance-Independent, Private Pointing on a Reflective Screen, ACM UIST '17, pp. 351–364.  

Abstract: Reflector is a novel direct pointing method that utilizes hidden design space on reflective screens. By aligning a part of the user’s onscreen reflection with objects rendered on the screen, Reflector enables (1) distance-independent and (2) private pointing on commodity screens. Reflector can be implemented easily in both desktop and mobile conditions through a single camera installed at the edge of the screen. Reflector’s pointing performance was compared to today’s major direct input devices: eye trackers and touchscreens. We demonstrate that Reflector allows the user to point more reliably, regardless of distance from the screen, compared to an eye tracker. Further, due to the private nature of an onscreen reflection, Reflector shows a shoulder surfing success rate 20 times lower than that of touchscreens for the task of entering a 4-digit PIN.