Holospeed: High-Speed Holographic Displays for Dynamic Content

Dorian Chan, Oliver Cossairt, Nathan Matsuda, and Grace Kuo

ICCP 2025


TL;DR: Time multiplexing is a commonly used modality for mitigating the speckle of holographic displays. However, such systems produce significant perceptual artifacts when displaying dynamic content, due to a fundamental mismatch between expected and displayed motion. To tackle these problems, we propose a paradigm of high-speed display using the fast SLMs traditionally employed for time multiplexing. By accurately modeling the content perceived by the eye in such a setting, our proposed approach can display speckle-free, high-contrast dynamic content that is free of motion artifacts.

The Problem

Holographic displays are an attractive choice for future AR/VR devices. By illuminating a spatial light modulator (SLM for short) with laser light, such systems can naturally display 3D content with accurate focus cues in an extremely compact form factor, addressing key limitations of current AR/VR display architectures. However, holographic displays are not without their challenges. Chief among them is speckle—noise-like artifacts caused by the coherent interference of laser light. Intuitively, these effects occur because 2D control over the propagation of laser light is used to reproduce 3D content—such a mismatch in degrees of freedom results in speckle.

To tackle these problems, a variety of solutions have been proposed in the research literature, but perhaps the most enduring of these has been so-called time multiplexing. In short, to reproduce a single output frame, multiple differently-speckled versions of the target are displayed. With a sufficiently fast SLM, only the despeckled average of these frames will be perceived by the user. Given the rapidly increasing availability of fast SLMs, we expect time multiplexing to be the despeckling modality of choice for future holographic architectures.
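As a toy numerical illustration (not the paper's pipeline), fully developed speckle on a uniform patch can be modeled as exponentially distributed intensity noise; averaging N independent speckle realizations then reduces the speckle contrast (standard deviation over mean) by roughly 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.full((64, 64), 0.5)   # uniform target intensity patch
N = 24                            # number of time-multiplexed frames

# Fully developed speckle: each frame's intensity is the target modulated
# by unit-mean exponential noise (a standard statistical speckle model).
frames = target * rng.exponential(1.0, size=(N, *target.shape))

perceived = frames.mean(axis=0)   # persistence-of-vision average

def contrast(img):                # speckle contrast = std / mean
    return img.std() / img.mean()

print(contrast(frames[0]))        # ~1.0 for a single frame
print(contrast(perceived))        # ~1/sqrt(N) after averaging
```

With N = 24 frames, the contrast drops from roughly 1.0 to roughly 0.2, which is why a sufficiently fast SLM makes the multiplexed average look speckle-free.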

While effective, we observe a number of perceptual artifacts that can appear when using such systems in practice for dynamic content. For clarity, consider a scenario where an object moves relative to a user (or the user moves relative to a static object) in the real world. If we want to study the detail or read the text of this moving object, our eye will rotate to track the continuous movement, such that the image of the object remains stationary on the retina—in the real world, this results in a sharp perceived image despite the object's motion.

Now, consider the case of a time-multiplexed holographic display. Under such an architecture, the display will first show a set of noisy frames for the moving object at one location, then another set of frames at the next location, and so forth. However, the user's eye will still rotate continuously, as our visual system expects the continuous dynamics of real-world moving objects. This means the display is actually time-multiplexing the moving object at incorrect locations relative to the user's eye, and this mismatch between eye motion and displayed content manifests as motion blur.
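To make the mismatch concrete, here is a small 1-D sketch with made-up rates (`slm_rate`, `video_rate`, and `speed` are illustrative numbers, not from the paper) of where a tracked object lands on the retina when the content only updates at 60 Hz while the eye pursues smoothly:

```python
slm_rate = 2880                      # hypothetical fast-SLM frame rate (Hz)
video_rate = 60                      # content update rate (Hz)
burst = slm_rate // video_rate       # SLM frames shown per content frame (48)
speed = 600.0                        # object speed on the display (px/s)

retina = []
for k in range(4 * burst):                       # simulate a few content frames
    shown = (k // burst) / video_rate * speed    # stepped display position
    eye = k / slm_rate * speed                   # smooth-pursuit eye position
    retina.append(shown - eye)                   # object position on the retina

blur_extent = max(retina) - min(retina)          # retinal smear (px)
print(round(blur_extent, 2))                     # close to speed / video_rate = 10 px
```

The object smears across roughly `speed / video_rate` pixels of retina per content frame, even though each individual SLM frame is sharp—exactly the motion blur described above.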

A slightly different scenario occurs when the eye does not track the motion of an object. In the real world, such an object would move continuously relative to the user's eye, manifesting as natural motion blur.

In a time-multiplexed holographic display, however, this object is displayed at a discrete set of locations. With fast enough motion, sharp stroboscopic artifacts appear: ghost copies of the moving object become visible.
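Continuing the toy 1-D setup (illustrative numbers only; the integration window is a made-up stand-in for retinal persistence), a non-tracking eye sees the stepped display collapse the object's motion onto a few discrete positions—sharp ghost copies—instead of a continuous smear:

```python
video_rate = 60      # content update rate (Hz)
speed = 600.0        # object speed on the display (px/s)
window = 1 / 20      # hypothetical retinal integration window (s)
samples = 1200

shown = []
for i in range(samples):
    t = i / samples * window
    # Stepped display position: the object only moves at content-frame boundaries.
    shown.append(int(t * video_rate) / video_rate * speed)

# A stationary eye integrates these positions; the real object would sweep
# continuously over ~speed * window = 30 px, but the display shows only:
ghosts = sorted(set(round(p, 6) for p in shown))
print(ghosts)        # [0.0, 10.0, 20.0] — three sharp ghost copies
```

Each distinct position becomes a sharp copy of the object on the retina, which is the stroboscopic ghosting described above.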

Without getting too deep into the perceptual weeds, both of these effects are unfortunately well known in the AR/VR literature to induce nausea and visual discomfort. Today's AR/VR displays are carefully architected to avoid such artifacts, e.g., with short persistence times and increasingly fast displays. If we do not address these issues in holographic displays, they could become a major showstopper for the integration of holographic displays into real AR/VR devices.

Our Proposed Solution

To tackle these challenges, in our work, we propose a paradigm of high-speed display. Intuitively, if we had a perfect high-speed display, we would be able to perfectly replicate real-world moving objects, free of any perceptual artifacts. Such a system obviously does not exist, but what we do have in the case of holographic time multiplexing is a fast SLM that produces speckled output, i.e., a high-speed but noisy display. It turns out that if we use this noisy high-speed system to directly display high-speed content, we obtain content free of motion artifacts, but contrast and speckle convergence suffer, as this strategy does not consider the interplay between different output frames. We term this approach "independent high-speed holographic display".

To mitigate these effects, we propose building a model of the perceived high-speed video that accounts for both the eye's persistence of vision and its motion. Given this model, we can then optimize for the set of output frames that best perceptually reproduces the target video.
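One way to write such an objective, as a hedged sketch in our own notation (the paper's exact formulation may differ), is:

```latex
% Sketch: jointly optimize the SLM patterns \phi_t so that the
% motion-compensated, persistence-weighted average of the displayed
% intensities matches each target video frame T_k.
\{\phi_t\}^\star = \arg\min_{\{\phi_t\}}
  \sum_k \Big\|
    \underbrace{\sum_{t \in \mathcal{W}_k} w_t \,
      \mathcal{S}_{e(t)}\!\big( |\mathcal{P}(\phi_t)|^2 \big)}_{\text{perceived frame } k}
    \; - \; T_k \Big\|_2^2
```

Here $\mathcal{P}$ denotes wave propagation from the SLM pattern $\phi_t$ to the image plane, $\mathcal{S}_{e(t)}$ shifts the displayed intensity onto the retina according to the eye position $e(t)$, $w_t$ are persistence-of-vision weights, and $\mathcal{W}_k$ is the set of SLM frames falling within target frame $k$'s integration window.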

In practice, such an approach requires accurate knowledge of eye motion; however, real-world eye trackers can be noisy, and some devices may not have eye trackers installed in the first place. We therefore propose approaches that can handle distributions of eye motion instead. One particularly effective solution is simply to optimize over multiple eye motions at a time; we term this approach "stochastic motion-aware high-speed holographic display", as we stochastically select one eye motion per iteration of optimization. We show that such distributions may be estimated directly from the target videos if eye tracking is not available.
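The stochastic idea can be sketched in a self-contained toy (1-D intensities instead of holograms, a made-up eye-motion distribution, and no SLM physics): sample one eye trajectory per iteration and take a gradient step on that trajectory's perceptual loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, width = 8, 32
target = np.zeros(width)
target[10:14] = 1.0                      # toy 1-D target frame

def sample_eye_motion():
    # Hypothetical eye-motion distribution: constant drift of -1, 0,
    # or +1 px per SLM frame, sampled uniformly.
    v = int(rng.integers(-1, 2))
    return [v * t for t in range(n_frames)]

x = rng.random((n_frames, width))        # displayed frame intensities
lr = 2.0
for _ in range(600):
    shifts = sample_eye_motion()         # one eye trajectory per iteration
    # Perceived frame: persistence-of-vision average of the displayed
    # frames, each mapped onto the retina by the sampled eye motion.
    perceived = np.mean([np.roll(x[t], -s) for t, s in enumerate(shifts)], axis=0)
    resid = perceived - target
    for t, s in enumerate(shifts):       # analytic gradient of the L2 loss
        x[t] -= lr * (2 / n_frames) * np.roll(resid, s)

# After optimization, the perceived frame is close to the target for every
# eye motion in the distribution. (The real method additionally models SLM
# physics and constraints such as non-negativity or phase-only modulation.)
for v in (-1, 0, 1):
    shifts = [v * t for t in range(n_frames)]
    perceived = np.mean([np.roll(x[t], -s) for t, s in enumerate(shifts)], axis=0)
    print(float(np.square(perceived - target).mean()))
```

Because each iteration samples a different trajectory, the optimized frames hedge across the whole eye-motion distribution rather than committing to a single tracked path.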

Results

We start with a scene where the user moves relative to a static road sign. Traditional time multiplexing results in significant motion blur when the eye tracks this sign, and ghosting artifacts when it does not. Independent high-speed display mitigates these artifacts, but suffers from loss in contrast and still-visible speckle. Our stochastic motion-aware approach still avoids motion artifacts, but has much improved contrast and speckle convergence.

In this scene, a ball moves relative to the background to simulate a VR basketball game. Traditional time multiplexing results in blurry lines on the basketball when the eye tracks it, and repeated lines when it does not. The independent high-speed display approach yields more similar results to expected perception, but the contrast between the dark lines and orange ball is lost. Our stochastic motion-aware approach remedies these issues.

A bird flies across the screen in this scene. During time multiplexing, the typical motion blur and stroboscopic artifacts appear as shown by the bird's eye. However, stroboscopic effects also appear in the wings when the eye tracks the bird, as the wings do not follow the exact overall motion of the bird. Independent high-speed display avoids all of these issues, but at the cost of muted colors in the feather pattern and background buildings. Our stochastic motion-aware approach avoids all of these problems.

Please see here for video visualizations!