Journal
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
Volume 28, Issue 11, Pages 3759-3766
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TVCG.2022.3203098
Keywords
Virtual Reality; Interpupillary Distance; Depth Perception
Funding
- NSERC Collaborative Research and Development (CRD) grant
- Qualcomm Canada Inc
- Canada First Research Excellence Fund (CFREF) for the Vision: Science to Application (VISTA) program
This study evaluated the impact of inter-lens and inter-axial camera separations on depth perception in VR headsets. The results showed that a mismatch between these separations and the user's inter-pupillary distance (IPD) can distort depth perception, a phenomenon largely ignored in previous studies.
Stereoscopic AR and VR headsets have displays and lenses that are either fixed or adjustable to match a limited range of user inter-pupillary distances (IPDs). Projective geometry predicts a misperception of depth when either the displays or the virtual cameras used to render images are misaligned with the eyes. However, misalignment between the eyes and lenses might also affect binocular convergence, which could further distort perceived depth. This possibility has been largely ignored in previous studies. Here, we evaluated this phenomenon in a VR headset in which the inter-lens and inter-axial camera separations are coupled and adjustable. In a baseline condition, both were matched to observers' IPDs. In two other conditions, the inter-lens and inter-axial camera separations were set to the maximum and minimum allowed by the headset. In each condition, observers were instructed to adjust a fold created by two intersecting, textured surfaces until it appeared to have an angle of 90 degrees. The task was performed at three randomly interleaved viewing distances, monocularly and binocularly. In the monocular condition, observers underestimated the fold angle, and there was no effect of viewing distance on their settings. In the binocular conditions, when the lens and camera separations were smaller than the viewer's IPD, observers exhibited compression of perceived slant relative to baseline. The reverse pattern was seen when the separations were larger than the viewer's IPD. These results were well explained by a geometric model that considers shifts in convergence due to lens and display misalignment with the eyes, as well as the relative contribution of monocular cues.
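The direction of the reported distortions follows from standard small-angle stereo geometry: a depth interval rendered with inter-axial camera separation a and recovered by a viewer with IPD i is rescaled by roughly a/i. The sketch below is a minimal simplification of that textbook relation, not the authors' full model (which also includes lens-induced convergence shifts and monocular cue weighting); the function name and parameters are illustrative.

```python
def perceived_depth_interval(true_dz_mm, view_dist_mm, cam_sep_mm, viewer_ipd_mm):
    """Predict the depth interval a viewer recovers from stereo disparity.

    Small-angle approximation: a depth interval true_dz at viewing
    distance D, rendered with inter-axial separation a, produces a
    relative disparity of about a * true_dz / D**2 (radians).
    """
    disparity = cam_sep_mm * true_dz_mm / view_dist_mm**2
    # A viewer with IPD i inverts the same geometry, so the recovered
    # interval equals disparity * D**2 / i, i.e. true_dz * (a / i).
    return disparity * view_dist_mm**2 / viewer_ipd_mm

# Illustrative values (not from the paper): 63 mm IPD, 600 mm viewing
# distance, 10 mm true depth interval.
matched   = perceived_depth_interval(10, 600, 63, 63)  # -> 10.0 (veridical)
too_small = perceived_depth_interval(10, 600, 54, 63)  # ~8.6: compressed
too_large = perceived_depth_interval(10, 600, 72, 63)  # ~11.4: expanded
```

Consistent with the abstract, a camera/lens separation smaller than the viewer's IPD compresses predicted depth, and a larger separation expands it.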