When the human eye follows a moving object, we expect to see a sharp image. Conversely, when object motion does not match the eye's motion, we expect motion blur. Traditional time-multiplexing approaches for holographic displays fail on both counts: objects our eyes track appear blurry due to sample-and-hold blur, while untracked objects produce phantom copies due to stroboscopic effects.
By incorporating motion-aware optimization, we ensure that a sharp image appears when the eye tracks a moving object. Additionally, with high-speed regularization, motion the eye does not follow degrades gracefully into natural motion blur. In practice, if the motion-aware optimization assumes the wrong trajectory, artifacts can reappear. Our stochastic and kernel approaches let us handle multiple potential eye motions, but the kernel approach degrades without high-speed regularization.
To visualize these effects, we render high-speed videos of what a human would perceive. Here, we show a 1/60th-second snippet of these videos in slow motion. Without high-speed regularization, we assume that 24 SLM patterns, displayed at 1440 FPS, are used to create one output frame, so each perceived frame integrates a 24/1440 = 1/60 s window. We assume an equivalent persistence-of-vision duration for the high-speed regularization setting.
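As a rough illustration of this integration model, the sketch below averages the intensities of the 24 subframes shown within one persistence-of-vision window to form a single perceived frame. This is a hypothetical simplification for intuition only; the names (`perceived_frame`, `SLM_RATE_HZ`, `PATTERNS_PER_FRAME`) and the plain intensity-averaging model are our assumptions, not the paper's actual rendering pipeline.

```python
import numpy as np

# Assumed display parameters from the video description.
SLM_RATE_HZ = 1440        # SLM patterns shown per second
PATTERNS_PER_FRAME = 24   # subframes integrated into one output frame

# Each perceived frame integrates a 24/1440 = 1/60 s window.
frame_duration_s = PATTERNS_PER_FRAME / SLM_RATE_HZ

def perceived_frame(subframe_fields: np.ndarray) -> np.ndarray:
    """Model retinal integration over the persistence-of-vision
    window by averaging the intensities |u|^2 of the complex
    subframe fields (shape: [num_subframes, H, W])."""
    return np.mean(np.abs(subframe_fields) ** 2, axis=0)

# Toy usage: 24 random complex fields on a small grid.
rng = np.random.default_rng(0)
fields = rng.standard_normal((24, 8, 8)) + 1j * rng.standard_normal((24, 8, 8))
frame = perceived_frame(fields)  # one simulated perceived frame
```

For moving content, each subframe would be rendered at its own instant along the assumed eye trajectory before averaging, which is what makes the tracked object sharp and untracked motion blur naturally.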
If any playback errors occur, try switching between the different methods a couple of times.