Holodepth: Programmable Depth‑Varying Projection via Computer‑Generated Holography | ECCV 2024

Dorian Chan, Matthew O'Toole, Sizhuo Ma, and Jian Wang

Paper | Supplement | Code

TL;DR: we built a projector that can simultaneously program unique content at multiple depths per pixel using a variant of computer-generated holography. This capability could be useful for future interfaces, depth sensing systems, and more.

Overview

Imagine a projector capable of displaying depth-dependent content. Different images could be projected onto different objects at different depths, enabling new modalities of projection mapping and depth sensing. The goal of our paper is to create such a projector.

However, engineering such a system is challenging with traditional projector configurations. Many past approaches, like coded aperture and light field projectors, offer only limited programmability in this setting. Others, such as multifocal systems, impose steep bandwidth and speed requirements that limit their practicality. To address these challenges, we turn to computer-generated holography and leverage the wave propagation of laser light to engineer a projector system. Under the right configuration, such a holographic system significantly improves the visual quality of depth-dependent content compared to past projectors.
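To make the core idea concrete, below is a minimal sketch of multi-plane hologram optimization: a phase-only SLM pattern is optimized by gradient descent so that the laser field, propagated with the angular spectrum method, reproduces a different target image at each depth. The wavelength, pixel pitch, depths, resolution, and optimizer settings are illustrative assumptions, not the calibrated parameters of our prototype.

```python
# Minimal multi-plane CGH sketch (assumed parameters throughout).
import torch

def angular_spectrum_propagate(field, z, wavelength, pitch):
    """Propagate a complex field by distance z via the angular spectrum method."""
    n, m = field.shape
    fx = torch.fft.fftfreq(m, d=pitch)
    fy = torch.fft.fftfreq(n, d=pitch)
    fyy, fxx = torch.meshgrid(fy, fx, indexing="ij")
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z) * (arg > 0).float()  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

wavelength, pitch = 532e-9, 8e-6      # green laser, 8 um SLM pixels (assumed)
depths = [0.10, 0.15]                 # two target planes, in meters (assumed)
targets = [torch.rand(512, 512), torch.rand(512, 512)]  # stand-in images

phase = torch.zeros(512, 512, requires_grad=True)  # phase-only SLM pattern
opt = torch.optim.Adam([phase], lr=0.05)

for _ in range(500):
    opt.zero_grad()
    field = torch.exp(1j * phase)     # unit-amplitude, phase-only modulation
    loss = sum(
        torch.nn.functional.mse_loss(
            angular_spectrum_propagate(field, z, wavelength, pitch).abs(),
            target)
        for z, target in zip(depths, targets))
    loss.backward()
    opt.step()
```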



To maximize the depth variation of projected patterns, we find that the étendue of the holographic system needs to be increased. We do so by introducing a lens array into the optical path and calibrating its effect on how light propagates through the system.
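As a rough illustration of how a lens array can be folded into a wave-propagation model, the sketch below constructs a thin-lens phase mask for a square microlens array; in simulation, the mask simply multiplies the complex field at the plane of the array. The thin-lens assumption and all names and parameters here are ours for illustration, not the calibrated model from the paper.

```python
# Thin-lens model of a microlens array as a multiplicative phase mask
# (an illustrative assumption, not the paper's calibrated model).
import numpy as np

def lens_array_phase(n_pixels, pixel_pitch, lenslet_pitch, focal_length,
                     wavelength):
    """Phase mask of a square microlens array under the thin-lens model."""
    coords = (np.arange(n_pixels) - n_pixels / 2) * pixel_pitch
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    # Wrap coordinates so each lenslet has a local origin at its center.
    xl = (xx + lenslet_pitch / 2) % lenslet_pitch - lenslet_pitch / 2
    yl = (yy + lenslet_pitch / 2) % lenslet_pitch - lenslet_pitch / 2
    k = 2 * np.pi / wavelength
    # Quadratic thin-lens phase per lenslet: exp(-i k r^2 / (2 f)).
    return np.exp(-1j * k * (xl ** 2 + yl ** 2) / (2 * focal_length))

# Usage: multiply the field at the array plane, then keep propagating, e.g.
# field = field * lens_array_phase(512, 8e-6, 400e-6, 5e-3, 532e-9)
```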



Using our hardware prototype, we demonstrate a number of potential applications of such a depth-varying projector. For instance, it could enable next-generation screenless AR interfaces, where users interact with a pattern projected onto their palm. A different button could appear depending on where the hand is placed.



Such a projector could also be useful for depth sensing. By displaying a depth-dependent projection and identifying which pattern appears on each object, depth can be directly extracted. The projected patterns can even be optimized to further improve depth reconstruction quality.
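One simple (assumed) decoder for this cue: given the known pattern projected at each candidate depth, compute a windowed normalized cross-correlation against the camera image and assign each pixel the best-matching depth. The function names, window size, and use of ZNCC are illustrative choices, not necessarily the reconstruction used in the paper.

```python
# Sketch of depth decoding from a depth-dependent projection
# (illustrative; window size and matching score are assumptions).
import numpy as np
from scipy.ndimage import uniform_filter

def local_zncc(a, b, win=15):
    """Windowed zero-mean normalized cross-correlation of two images."""
    mu_a, mu_b = uniform_filter(a, win), uniform_filter(b, win)
    var_a = uniform_filter(a * a, win) - mu_a ** 2
    var_b = uniform_filter(b * b, win) - mu_b ** 2
    cov = uniform_filter(a * b, win) - mu_a * mu_b
    return cov / np.sqrt(np.clip(var_a * var_b, 1e-12, None))

def decode_depth(camera_image, depth_patterns, depths):
    """Per pixel, pick the depth whose known pattern matches best."""
    scores = np.stack([local_zncc(camera_image, p) for p in depth_patterns])
    return np.asarray(depths)[np.argmax(scores, axis=0)]
```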



This depth cue can also be used to build a light curtain system: we simply compute a difference image between the pattern that should be visible at the curtain depth and the pattern the camera actually observes. This allows multiple light curtains to be formed simultaneously without loss of resolution, and unlike past approaches, it requires no stereo calibration or synchronization.
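A minimal sketch of that difference test, with assumed names and a hand-picked threshold: wherever the observed image closely matches the pattern designed to appear at the curtain depth, something is intersecting the curtain. Repeating the test with a different pattern per curtain yields multiple simultaneous curtains.

```python
# Light-curtain test by image differencing (assumed names and threshold).
import numpy as np

def curtain_hits(camera_image, curtain_pattern, threshold=0.2):
    """Boolean mask of pixels where this curtain is intersected: the
    observed image matches the pattern designed for the curtain depth."""
    return np.abs(camera_image - curtain_pattern) < threshold

def multi_curtain_hits(camera_image, curtain_patterns, threshold=0.2):
    """One mask per curtain; no stereo calibration or sync needed."""
    return [curtain_hits(camera_image, p, threshold) for p in curtain_patterns]
```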

Acknowledgements

We thank Benjamin Attal, Shree Nayar, and Gurunandan Krishnan for the helpful discussions, and Nancy Pollard, Srinivasa Narasimhan, and Arkadeep Narayan Chaudhury for their feedback on the paper. We also acknowledge the support of an NSF CAREER award (IIS 2238485) and a gift from Snap.