
Ultrahigh-fidelity full-color holographic display via color-aware optimization


Abstract

Holographic displays can generate high-quality images with a wide color gamut since they are laser-driven. However, many existing holographic display techniques fail to fully exploit this potential, primarily due to the system's imperfections. Such flaws often result in inaccurate color representation, and an efficient way to address this color accuracy issue has been lacking. In this study, we develop a color-aware hologram optimization approach for color-accurate holographic displays. Our approach integrates both the laser and the camera into the hologram optimization loop, enabling dynamic optimization of the laser's output color and the acquisition of physically captured feedback. Moreover, we improve the efficiency of the color-aware optimization process for holographic video displays by introducing a cascade optimization strategy, which leverages redundant neighbor-hologram information to accelerate the iterative process. We evaluate our method through both simulations and optical experiments, demonstrating its superiority over previous algorithms in terms of image quality, color accuracy, and hologram optimization speed. Our approach verifies a promising way to realize high-fidelity images in holographic displays, providing a new direction toward practical holographic displays.

Introduction

Laser-based displays can generate high-contrast, wide-color-gamut images [1,2,3]. Holographic displays not only inherit these advantages but also pave the way for realistic three-dimensional (3D) displays by reproducing the wavefront [4,5,6]. In the realm of augmented reality (AR) and virtual reality (VR) displays, holography is regarded as a promising technique for next-generation 3D displays due to its ability to provide continuous focus cues and full parallax [7,8,9,10] without vergence-accommodation conflict [11, 12]. Furthermore, holographic displays also benefit those with visual impairments, as they support vision aberration correction [6, 13,14,15] without the need for additional optical elements. These advantages make holographic displays promising in various applications [8, 9, 16].

Despite the capability of holographic displays to generate arbitrary 3D image content, the generated image inevitably suffers from quality degradation issues. These issues are mainly caused by the interference of coherent random phases [17,18,19], constraints of the spatial light modulator (SLM) [20, 21], and imperfections of the optical system [22,23,24]. These detrimental factors become an obstacle in making holographic displays a practical technique. Over the past few decades, significant research efforts have been directed toward addressing these challenges. The speckle noise issue [25,26,27], caused by the interference of coherent random phases, has been tackled through techniques such as temporal or spatial multiplexing [7, 28, 29], and complex-amplitude encoding methods [1, 30, 31]. Additionally, the deep learning-based method provides an efficient approach for reducing speckles in holographic displays by generating and encoding the complex wavefront [32,33,34]. The imperfections of the spatial light modulators, such as non-linear voltage-to-phase responses, zeroth diffraction order, and issues related to phase-only or amplitude-only modulation, can heavily degrade the quality of holographic images [21, 35]. Solutions, including phase calibration [36, 37], off-axis encoding [38, 39], and phase-only or amplitude-only hologram generation methods [30, 40], have been proposed to alleviate these problems. When considering the optical system, lens distortion, dust and scratches on optical element surfaces, and poor illumination conditions [23, 41] can also significantly contaminate the holographic display results. Recently, the camera-in-the-loop (CITL) method [17, 42,43,44] has been introduced to address these issues, using the physically captured image as feedback during the hologram optimization process. This camera-integrated optimization accounts for the system’s imperfections and can effectively suppress the noises. 
With these advantages, the CITL method can realize a noiseless full-color holographic display. However, the optimization process is not efficient enough, especially for video hologram generation, since it optimizes the hologram for each frame individually. More importantly, the accuracy of the displayed color has not been considered, which results in a color-shift problem in the reconstructed images.

As one of the crucial display quality evaluation standards, color accuracy has been widely recognized by commercial display manufacturers [45, 46]. Color accuracy has a significant influence on the audience's emotions [47]. Inaccurate color representations can diverge from our expectations and give us an impression of discrepancy. These inaccurate colors can divert our attention from the ongoing scenario on the screen, thereby deteriorating our visual experience [48, 49]. Color accuracy also plays an important role in delivering an immersive visual experience for the audience in holographic displays. Current research mainly focuses on achieving full-color holographic displays through methods such as time multiplexing [7, 50], spatial multiplexing [51, 52], and frequency multiplexing [53, 54], rather than addressing the issue of color accuracy. These methods typically generate holograms for the R, G, and B channels separately and manually adjust the power ratio for each channel to achieve color-corrected full-color displays. However, this approach is less efficient and lacks effective feedback to improve color accuracy. Until now, the problem of color accuracy in full-color holographic displays remains inadequately investigated.

In this study, we propose an effective color-aware hologram optimization strategy to address the color accuracy and hologram optimization speed issues in the full-color holographic display. In our approach, we utilize the gradient descent optimization technique and integrate color-dependent learnable s parameters into the illumination. The full-color loss function is employed to simultaneously update the s parameters and SLM phase patterns for the R, G, B channels, facilitating color-accurate optimization. Moreover, we develop an efficient cascade color-aware hologram generation strategy to speed up the iterative process for holographic video displays while maintaining high image quality. In the cascade color-aware hologram optimization, we adopt the previous hologram and s parameter as the initial optimization conditions for the following frame to accelerate the hologram optimization speed. Our method can reproduce high-quality images with superior color accuracy in an efficient way, realizing an ultrahigh-fidelity full-color holographic display.

Methods

Color-aware hologram optimization

To realize the color-accurate holographic display, we introduce the color-aware laser-camera-in-the-loop (LCITL) optimization. The principle of the proposed LCITL optimization is depicted in Fig. 1. Initially, three randomly initialized phase-only holograms (POH) are separately uploaded to the SLM. Following light propagation, the camera captures the reconstructed full-color results. Subsequently, we compute the full-color loss. Then, based on back-propagation, the three POHs and the laser’s power can be updated. \(p_r\), \(p_g\), and \(p_b\) represent the initial voltage of the laser’s RGB channels, while \(s_r\), \(s_g\), and \(s_b\) denote the learnable parameters, which can be utilized to adjust the laser’s intensity ratio and strength in the R, G, B channels.

Fig. 1
figure 1

Principle of color-aware LCITL optimization. Initially, the color of the light source is not well balanced. After loading the hologram onto the SLM, the light is modulated by the SLM and passes through the 4-f filter system. The display results are then captured by the camera. The captured image can be used to calculate the actual loss value. Based on the back-propagation and gradient descent process, the holograms and s factors can be updated. Meanwhile, the laser’s voltages can be dynamically adjusted based on the s factors to achieve optimized color output. Images Credits: Unsplash License [55]

We employ the angular-spectrum method to simulate light propagation. The expression is defined as follows:

$$\begin{aligned} \tilde{g}(s_{r/g/b}, \varphi _{r/g/b}){} & {} = \mathcal {F}^{-1}\left\{ \mathcal {F}\left\{ s_{r/g/b} \cdot \exp (i\varphi _{r/g/b})\right\} \cdot H(f_x,f_y,\lambda )\right\} , \nonumber \\ H(f_x,f_y,\lambda ){} & {} =\left\{ \begin{array}{ll} e^{ikz\sqrt{1-(\lambda f_x)^2-(\lambda f_y)^2}}, &{} \textrm{if}\ \sqrt{f_x^2+f_y^2} < \frac{1}{\lambda },\\ 0, &{} \textrm{otherwise}, \end{array}\right. \end{aligned}$$
(1)

where \(\varphi _{r/g/b}\) and \(\tilde{g}(s_{r/g/b},\varphi _{r/g/b})\) represent the input phase and the output complex amplitude distribution, respectively. \(s_{r/g/b}\) is a learnable variable with an initial value of 1.0. \(\mathcal {F}\) and \(\mathcal {F}^{-1}\) denote the Fourier transform and its inverse, respectively. \(H(f_x,f_y,\lambda )\) indicates the transfer function, and \(\lambda\) corresponds to the wavelength of light. k denotes the wavenumber \(2\pi /\lambda\). z signifies the propagation distance, and \(f_x\), \(f_y\) are the spatial frequencies.
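The band-limited angular-spectrum propagation of Eq. (1) can be sketched in a few lines of NumPy. This is an illustrative single-channel sketch, not the authors' implementation; function and variable names are our own:

```python
import numpy as np

def angular_spectrum(field, wavelength, z, pitch):
    """Propagate a complex field over distance z via Eq. (1).

    field      : complex SLM-plane field, e.g. s * exp(1j * phi)
    wavelength : illumination wavelength in meters
    z          : propagation distance in meters
    pitch      : SLM pixel pitch in meters
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies f_x
    fy = np.fft.fftfreq(ny, d=pitch)          # spatial frequencies f_y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.zeros((ny, nx), dtype=complex)     # transfer function H(f_x, f_y, lambda)
    mask = arg > 0                            # evanescent cutoff: |f| < 1/lambda
    k = 2 * np.pi / wavelength                # wavenumber
    H[mask] = np.exp(1j * k * z * np.sqrt(arg[mask]))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

For instance, `angular_spectrum(s_r * np.exp(1j * phi_r), 678e-9, 80e-3, 3.74e-6)` would correspond to the red-channel propagation with the wavelength, distance, and pixel pitch quoted later in the experiments.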

In the LCITL optimization, the objective is to identify the optimized \(\varphi _{r/g/b}\) and \(s_{r/g/b}\), which can be reformulated as the following minimization problem:

$$\begin{aligned} \underset{{\phi },{s}}{\textrm{argmin}} \;\mathcal{L}_m\left( |\tilde{g}(s, \phi )|^{2},{I_{target}}\right) , \end{aligned}$$
(2)

where \(I_{target}\) represents the intensity of the target image, and \(\mathcal {L}_{m}\) represents the loss function, such as the mean squared error loss. Different from the vanilla CITL method, we do not use any scale factor between \(|\tilde{g}(s, \phi )|^{2}\) and \(I_{target}\) in the loss function, since we need an absolute value instead of a relative value to achieve accurate color reproduction. A relative value could potentially cause overexposure or underexposure issues, thereby exacerbating the color-shift problem in the full-color holographic display. Based on the loss function, we can update our RGB POHs \(\varphi _r\), \(\varphi _g\), \(\varphi _b\) and s factors \(s_r\), \(s_g\), \(s_b\) through the gradient descent process. The basic update rules are provided as follows:

$$\begin{aligned} \varphi _{r/g/b}^{k+1} = \varphi _{r/g/b}^{k} - \alpha _{\varphi }\cdot \left( \frac{\partial \mathcal {L}_m}{\partial g_{r/g/b}} \cdot \frac{\partial \tilde{g}_{r/g/b} }{\partial \varphi _{r/g/b}}\right) , \end{aligned}$$
(3)
$$\begin{aligned} s_{r/g/b}^{k+1} = s_{r/g/b}^{k} - \alpha _{s}\cdot \left( \frac{\partial \mathcal {L}_m}{\partial g_{r/g/b}} \cdot \frac{\partial \tilde{g}_{r/g/b} }{\partial s_{r/g/b}}\right) , \end{aligned}$$
(4)

where \(\tilde{g}_{r/g/b}\) and \(g_{r/g/b}\) represent the amplitudes of the simulated and the captured reconstruction results, respectively. \(\alpha _{\varphi }\) and \(\alpha _s\) denote the learning rates. In the gradient descent process, we replace the value of \(\tilde{g}_{r/g/b}\) with that of \(g_{r/g/b}\), but retain the gradient. In our experiment, we employ the Nadam optimizer [56] instead of the Adam optimizer [57] as the optimization update rule. Compared to the Adam optimizer, the Nadam optimizer introduces additional Nesterov momentum [58], which accelerates convergence and finds a superior solution.
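The captured-value substitution with retained simulated gradient can be illustrated with a toy scalar model. This is our own sketch, not the paper's code: the forward model g = s · a stands in for the full propagation, the "camera" is simulated as reporting 90% of the predicted amplitude, and plain gradient descent replaces the Nadam optimizer for brevity:

```python
import numpy as np

def s_update(s, a, captured, target, lr=0.05):
    """One gradient step on the laser factor s in the spirit of Eq. (4).

    The value of the simulated amplitude is replaced by the captured one,
    but the gradient d g / d s = a is kept from the simulation.
    """
    g = captured                                # value: from the camera
    dg_ds = a                                   # gradient: from the model g_sim = s * a
    dL_dg = 2.0 * (g ** 2 - target) * 2.0 * g   # d/dg of the loss (g^2 - I)^2
    return s - lr * dL_dg * dg_ds

# Toy loop: the "camera" sees only 90% of the simulated amplitude,
# so s must over-drive the laser to reach the target intensity of 0.25.
s, a, target = 1.0, 1.0, 0.25
for _ in range(300):
    captured = 0.9 * s * a                      # stand-in for the physical measurement
    s = s_update(s, a, captured, target)
# converged: (0.9 * s)^2 ~= 0.25, i.e. s ~= 0.556
```

Because the loss is evaluated on the captured value, the optimizer automatically learns to compensate for the unmodeled 10% loss, which is the essence of putting the laser and camera inside the loop.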

Fig. 2
figure 2

Compensation process for the laser's voltage-to-intensity response. a Original voltage-to-intensity response of the red laser. b The calculated continuous LUT mapping between the laser's original voltage and the new voltage. c The laser's new voltage-to-intensity response

Moreover, we compensate for the non-linear relationship between the laser's voltage and the output intensity to improve the accuracy of the voltage update. In our case, the relationship between the voltage and intensity of the red laser exhibits obvious non-linearity, as depicted in Fig. 2a. To compensate for this non-linearity, we employ a 12th-order polynomial interpolation to create a continuous look-up table (LUT) that makes the red channel's voltage-to-intensity response more linear. Figure 2b represents the new relationship between the laser's original voltage and the new voltage with the LUT, and Fig. 2c shows the relationship between the laser's new voltage and the output intensity, which demonstrates significantly improved linearity after the compensation. More details can be found in the supplementary material. Through the proposed color-aware optimization process, we can dynamically adjust the laser's intensity ratio and strength in the R, G, and B channels and update holograms to achieve accurate color representation.
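The compensation amounts to fitting the inverse of the measured voltage-to-intensity curve. A sketch with synthetic calibration data follows; the `v ** 2.2` response is an assumed stand-in, whereas the real curve of Fig. 2a is measured:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Synthetic calibration: measured intensity vs. normalized drive voltage.
v = np.linspace(0.0, 1.0, 64)
intensity = v ** 2.2                       # stand-in non-linear laser response

# Fit a 12th-order polynomial mapping desired intensity -> required voltage,
# i.e. a continuous LUT that inverts the measured response (Fig. 2b).
lut = Polynomial.fit(intensity, v, deg=12)

# With the LUT in front of the laser, the new voltage-to-intensity
# response is close to linear (Fig. 2c).
desired = np.linspace(0.1, 0.9, 9)
achieved = lut(desired) ** 2.2
```

`Polynomial.fit` rescales its domain internally, which keeps the high-order fit numerically well-conditioned compared to a raw Vandermonde fit.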

Cascade hologram optimization strategy

In holographic video displays, employing the proposed LCITL method requires optimizing holograms for each frame separately. However, this strategy is not efficient for holographic video display. Inspired by video compression methods [59], we propose a color-aware cascade laser-camera-in-the-loop (CLCITL) optimization strategy that leverages redundant information from neighboring frames to expedite the color-aware optimization process for holographic video display. Besides, this cascade optimization strategy also provides an effective way to mitigate the impact of the system's strong noise and further improve the image quality of the LCITL method. The schematic of the cascade hologram optimization is depicted in Fig. 3.

Fig. 3
figure 3

Schematic of the cascade hologram optimization. Images Credits: Big Buck Bunny, Blender Foundation [60]

Firstly, the randomly initialized holograms and s factors are used in the LCITL optimization of “Frame1”, and the optimized holograms and s factors serve as the initialization conditions for the subsequent frame. We then obtain the newly optimized holograms and s factors, and repeat the process until the final frame. The pseudo-code of the CLCITL optimization can be found in the supplementary material. In this optimization strategy, the previously optimized holograms and s factors are not discarded, but rather used to expedite the optimization process for the next frame. This strategy can significantly reduce the number of optimization steps. Moreover, this strategy can suppress the system's noises at a very early stage, as the previous hologram already contains the noise suppression phase pattern.
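The cascade warm-start can be sketched as a loop over frames. This is a toy stand-in: the inner `optimize` is plain gradient descent on a trivial forward model rather than a full LCITL run, and the iteration counts are illustrative:

```python
import numpy as np

def optimize(target, phi, s, iters, lr=0.3, lr_s=0.01):
    """Toy stand-in for one LCITL run: gradient descent on ||s*phi - target||^2."""
    for _ in range(iters):
        err = s * phi - target
        # simultaneous updates of the "hologram" phi and the scale factor s
        phi, s = phi - lr * s * err, s - lr_s * np.mean(phi * err)
    return phi, s

def cascade(frames, iters_first=200, iters_next=30):
    """CLCITL-style loop: each frame starts from the previous solution."""
    rng = np.random.default_rng(0)
    phi = rng.standard_normal(frames[0].shape)      # random init for Frame1 only
    s = 1.0
    outputs = []
    for i, frame in enumerate(frames):
        iters = iters_first if i == 0 else iters_next   # warm start: far fewer steps
        phi, s = optimize(frame, phi, s, iters)
        outputs.append(s * phi)
    return outputs
```

Because successive frames reuse the converged phi and s, the small per-frame budget only has to absorb the inter-frame difference rather than redo the whole optimization.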

Fig. 4
figure 4

The soft-mask generation process. We subtract the intensities of the two frames and take the absolute value to obtain the difference mask; the soft-mask is then generated by applying a Gaussian blur to the difference mask. Images Credits: Big Buck Bunny, Blender Foundation [60]

In the CLCITL optimization, we propose a soft-mask-based loss function to apply different learning rates to the background and foreground. The soft-mask is crucial for background areas that undergo only minor changes. A large learning rate in such regions might force the optimized hologram to deviate from the optimum value, subsequently generating noise in the background. The soft mask, used to distinguish between the background and foreground, is generated by comparing the intensity difference between frames. Figure 4 shows an example of the soft-mask generation. The generation of the soft mask \(M_s\) is defined by the following equation:

$$\begin{aligned} M_s = \mathrm{GaussianBlur}\left\{ |I_{frame2}-I_{frame1}|\right\} . \end{aligned}$$
(5)

The final loss function in the cascade hologram optimization with the soft mask is defined as follows:

$$\begin{aligned} \mathcal {L}_{c}= \alpha \cdot \mathcal{L}_m\left( M_s\cdot |\tilde{g}(s, \phi )|^{2},M_s\cdot {I_{target}}\right) + (1-\alpha )\cdot \mathcal{L}_m\left( (1-M_s)\cdot |\tilde{g}(s, \phi )|^{2},(1-M_s)\cdot {I_{target}}\right) . \end{aligned}$$
(6)

The first and second terms in Eq. (6) represent the loss functions for the foreground and background, respectively. \(\alpha\) is a weight factor used to balance the foreground and background loss functions.
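Equations (5) and (6) translate directly into code. The following is a minimal NumPy sketch; the blur kernel size and the value of alpha are illustrative choices, not the paper's settings:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur (minimal stand-in for the blur in Eq. 5)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blur1d = lambda m: np.convolve(m, k, mode="same")
    return np.apply_along_axis(blur1d, 1, np.apply_along_axis(blur1d, 0, img))

def soft_mask(frame1, frame2, sigma=2.0):
    """Soft mask M_s of Eq. (5): blurred absolute inter-frame difference."""
    return gaussian_blur(np.abs(frame2 - frame1), sigma)

def cascade_loss(recon, target, mask, alpha=0.9):
    """Soft-mask weighted loss of Eq. (6), with MSE as L_m."""
    mse = lambda a, b: np.mean((a - b) ** 2)
    return (alpha * mse(mask * recon, mask * target)
            + (1 - alpha) * mse((1 - mask) * recon, (1 - mask) * target))
```

With alpha close to 1, gradients in unchanged background regions are scaled down, which is exactly the slow-background behavior motivated above.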

Fig. 5
figure 5

Cascade optimization results without and with the soft-mask. a When the soft-mask is not applied, noise appears in the background due to the overly large learning rate. b The introduced soft-mask efficiently eliminates this noise. Images Credits: Big Buck Bunny, Blender Foundation [60]

Figure 5 shows the necessity of the soft-mask in the cascade hologram optimization. In the hologram optimization of “Frame2”, the background and foreground have the same learning rate when there is no soft mask. This leads to optimization issues in the background and requires more iterations to fix, as shown in Fig. 5a. With the soft-mask, the learning rate in the background is reduced, thereby avoiding this background noise issue, as shown in Fig. 5b.

Results and discussion

We conduct both simulations and optical experiments to demonstrate the superiority of our method. We simulate an SLM with a resolution of 3840\(\times\)2160 and a pixel pitch of 3.74 \(\upmu\)m, and the propagation distance between the SLM plane and the target plane is 80 mm. The wavelengths of the RGB light source are 678 nm, 520 nm, and 450 nm. The original resolution of the input image is 2560\(\times\)1440, which we zero-pad to 3840\(\times\)2160. In the simulation, we add intensity variations to the light propagation model to simulate the imperfect illumination conditions of the optical experimental system.

Fig. 6
figure 6

Numerical simulation results. a Due to the non-uniform intensity distribution of the light source, noise appears in the result of the SGD method. b The CITL method shows better noise reduction, while the color tone is shifted. c The LCITL method demonstrates better performance in terms of color accuracy and image quality compared to the CITL and SGD methods. d Target images. Additional numerical simulation results can be found in the supplementary material. Images Credits: Unsplash License [55]

Figure 6 presents the numerical simulation results of the stochastic gradient descent (SGD), CITL, and the proposed LCITL methods. To better evaluate the color accuracy of the reconstructed results, we introduce the color difference metric \(\Delta E\) [61]. This metric is developed based on the human vision system and thus offers a more accurate reflection of color differences as perceived by the human eye. A lower value of \(\Delta E\) indicates better color representation. More details can be found in the supplementary material. In the result of the SGD method, the image quality is the lowest among these methods, and the color tone of the image is shifted to blue. The CITL method successfully suppresses the noise, but the color representation is still unsatisfactory. In the results of the proposed LCITL method, it is evident that the noise is almost eliminated and the color tone aligns well with the target image. In comparison to the CITL method, the LCITL method demonstrates a significant improvement, yielding a 10.08 dB increase in peak signal-to-noise ratio (PSNR) and an 82.30% decrease in \(\Delta E\).
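For reference, a common way to compute a per-pixel \(\Delta E\) is the CIE76 Euclidean distance in CIELAB. This sketch assumes linear sRGB input and a D65 white point; the exact \(\Delta E\) formula used in the paper may differ (see [61]):

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65) conversion matrix
M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
WHITE = M_RGB2XYZ @ np.ones(3)          # XYZ of linear-RGB white

def rgb_to_lab(rgb):
    """Convert one linear-RGB color to CIELAB."""
    t = (M_RGB2XYZ @ np.asarray(rgb, float)) / WHITE
    d = 6.0 / 29.0
    f = np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4.0 / 29.0)
    return np.array([116 * f[1] - 16,          # L*
                     500 * (f[0] - f[1]),      # a*
                     200 * (f[1] - f[2])])     # b*

def delta_e76(rgb1, rgb2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(rgb_to_lab(rgb1) - rgb_to_lab(rgb2)))
```

Averaging `delta_e76` over all pixels gives a scalar score; differences around 2 or below are often quoted as near the just-noticeable threshold for human observers.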

Fig. 7
figure 7

Numerical simulation results of the CLCITL optimization. a-d Simulated results for “Frame1”; most of the color and noise issues are well addressed. e-h Simulated results for “Frame2”; the reconstructed image approaches the target image faster than in “Frame1”. Images Credits: Big Buck Bunny, Blender Foundation [60]

Fig. 8
figure 8

Experimentally captured holographic images with different methods. a, e In the results of the SGD method, the images are contaminated by noise, and the color tone is shifted due to the color of the light source. b, f The CITL method suppresses the noise well, while the accuracy of color reproduction is limited. Among these methods, the LCITL method c, g achieves the best image quality and color accuracy, and the color tone of the reproduced images is almost the same as that of the target images d, h. Additional captured results can be found in the supplementary material. Images Credits: Unsplash License [55]

The simulation results of the CLCITL optimization are presented in Fig. 7. The first row illustrates the optimization process of “Frame1”, and we can observe that the color tone of the image is progressively corrected by the color-aware optimization. The second row displays the optimization results of “Frame2”. It is evident that the background noise is effectively suppressed within a few iterations, and the reconstructed image fully transitions from “Frame1” to “Frame2” at approximately 15 iterations in this case. The convergence speed of “Frame2” is much faster than that of “Frame1”. A larger discrepancy between frames may require additional iterations in the CLCITL method to obtain comparable image quality. Techniques such as early stopping in CLCITL optimization can be employed to adaptively reduce hologram generation time for different frames.

Fig. 9
figure 9

Experimentally captured 3D holographic images with different methods. Reconstruction results when focusing on the “flower” a and on the “butterfly” b. Images Credits: Unsplash License [55]

In our holographic display prototype, we utilize an RGB fiber-coupled module with three optically aligned laser diodes as the light source. The wavelengths of the lasers in the R, G, and B channels are the same as those of the light source in the simulation. The SLM is a HOLOEYE MEGA phase-only liquid-crystal-on-silicon device, which has the same resolution and pixel pitch as the SLM in the numerical simulation. The bit depth and diffraction efficiency of the SLM are 8 bits and over 90%, respectively. We employ a \(4-f\) system to filter high-order light and noise. The laser's initial voltages are the same for all methods. A color charge-coupled device camera is used to capture the results. Additional hardware details can be found in the supplementary material.

Fig. 10
figure 10

Color distribution comparison among the target image and the reconstructed images from the SGD, CITL, and LCITL methods in the CIE 1931 color space. a The SGD method shows a severe color-shift issue, as the blue distribution shifts away from the red one. b The situation is slightly better in the CITL method, while the blue distribution still shifts from the center of the red one. c Our color-aware LCITL method shows the best color accuracy, with most of the blue and red distributions overlapping each other

Figure 8 presents experimentally captured 2D holographic images using the SGD, CITL, and proposed LCITL methods. In the results of the SGD method, it is evident that both system noise and color-shift issues are not adequately addressed, and the image quality is the lowest among these methods. In the CITL method, the noise is effectively suppressed while the color-shift problem persists. The proposed LCITL method presents the best color reproduction performance and maintains high image quality. Compared to the CITL method, the LCITL method demonstrates a substantial improvement, achieving a 4.35 dB increase in PSNR and a 37.3% decrease in \(\Delta E\) when averaged over 10 distinct images. We also demonstrate the color representation capability of our method under different illumination conditions, and the details can be found in the supplementary material. Figure 9a and b show the captured 3D results when focusing on the “flower” and the “butterfly”. The enlarged images provide a comparison among the SGD method, the vanilla CITL method, the proposed LCITL method, and the target image. It is evident that the proposed LCITL method exhibits superior performance in terms of color accuracy.

Since \(\Delta E\) may not intuitively reflect how the reproduced color shifts away from the target color, we project all pixel colors of Fig. 8a-d into the International Commission on Illumination (CIE) 1931 color space [62]. Figure 10 illustrates the color distributions of Fig. 8a-d in the CIE 1931 color space. Note that before projecting the images into other color spaces, we apply the same Gaussian blur operation to them to avoid the effect of noise on color. In Fig. 10, the red and blue areas in the color space indicate the distributions of the projected colors, and the red and blue edges represent the maximum ranges of the projected colors in the color space. From Fig. 10a and b, the color-shift issue in the SGD and CITL methods can be clearly observed in the chromatic domain. Our proposed color-aware LCITL optimization mitigates this issue effectively. The color reconstruction achieved by our method closely aligns with that of the target image, as demonstrated in Fig. 10c.
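Projecting image pixels into the CIE 1931 chromaticity diagram amounts to converting to XYZ and normalizing. The sketch below assumes linear-RGB pixel values and the standard sRGB/D65 conversion matrix, which may differ from the primaries of the actual lasers:

```python
import numpy as np

M_RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])   # linear sRGB -> XYZ (D65)

def chromaticity(image):
    """Project an H x W x 3 linear-RGB image to CIE 1931 (x, y) coordinates."""
    xyz = image.reshape(-1, 3) @ M_RGB2XYZ.T
    s = xyz.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0                   # leave black pixels at the origin
    return xyz[:, :2] / s             # x = X/(X+Y+Z), y = Y/(X+Y+Z)
```

Scatter-plotting the returned (x, y) pairs for the target and reconstructed images reproduces the kind of overlap comparison shown in Fig. 10.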

Fig. 11
figure 11

Experimentally captured holographic images of the CLCITL optimization. a-d The reconstruction results in the optimization of “Frame1”. e-h The reconstruction results in the optimization of “Frame2”. The optical experimental results match well with the simulation results; both the convergence speed and the noise suppression are improved in the second frame's optimization (see visualization 1 in supplementary material). Images Credits: Big Buck Bunny, Blender Foundation [60]

Fig. 12
figure 12

Experimentally captured holographic images with different methods. a SGD method, b CITL method, c LCITL method, d CLCITL method, e Target image. The CLCITL method outperforms the other methods, achieving the best image quality and color accuracy; the heavy system noise indicated in the blue box is also eliminated (see visualization 2 in supplementary material). Additional comparisons can be found in the supplementary material. Images Credits: Big Buck Bunny, Blender Foundation [60]

In the CLCITL optimization, since the second hologram reuses the previous frame's hologram, which has already effectively eliminated the system noise, the subsequent optimization can focus on the differences between frames. As illustrated in Fig. 11, the first row displays the captured results during the optimization of “Frame 1”, where the color is gradually corrected and noise is well suppressed, albeit requiring many iterations. The second row presents the optimization results of “Frame 2”. As shown in Fig. 11g, 15 iterations already guarantee good image quality in this case, much better than that of Fig. 11c. Figure 12 presents the comparison results of the SGD, vanilla CITL, LCITL, and CLCITL methods. Notably, the CLCITL method outperforms the others, achieving superior image quality and color accuracy while effectively mitigating the system's noise.

To demonstrate the effectiveness of our method for holographic video display, we compute holograms using different methods for continuous frames. Figure 13 illustrates a comparison among the SGD, CITL, LCITL, and CLCITL methods at different frames. It is evident that the CLCITL method requires only 30 iterations while achieving the best image quality.

Fig. 13
figure 13

Experimentally captured results with the SGD, CITL, LCITL, and CLCITL methods at different frames. Compared to the SGD method a-e, the CITL method f-j can eliminate most of the noise, while the color-shift issue still exists in both methods. When we adopt the color-aware LCITL optimization k-o, both the noise and color accuracy issues are addressed well. p-t The CLCITL method can significantly speed up the color-aware optimization while maintaining similar image quality (see visualization 3 in supplementary material). Images Credits: Big Buck Bunny, Blender Foundation [60]

Figure 14 illustrates the averaged PSNR and overall iterations of the SGD, CITL, LCITL, and CLCITL methods over 50 frames. The overall iterations of the SGD, CITL, and LCITL methods are the same, while the LCITL method achieves the best image quality, showing 9.03 dB and 4.46 dB improvements over the SGD and CITL methods, respectively. The overall iterations for the CLCITL method are reduced by 57.2% compared to the LCITL method, while the image quality is still maintained at a high level. Note that the per-iteration times of the CITL, LCITL, and CLCITL methods are similar, which means the CLCITL method reduces the hologram generation time by approximately 57.2% compared to the CITL and LCITL methods.

Fig. 14
figure 14

Comparison among the SGD, CITL, LCITL, and CLCITL methods in terms of the averaged PSNR and overall iterations over 50 frames. The averaged PSNR of the CLCITL method shows a 4.76 dB improvement compared to the CITL method. Meanwhile, the overall iterations are reduced from 5000 to 2140

Conclusion

In this paper, we have proposed a color-aware hologram optimization strategy, which could reproduce color-accurate high-quality holographic images. In our approach, the laser’s output color and phase pattern were dynamically adjusted to achieve optimal color reproduction. Compared to the SGD method and the original CITL method, our method demonstrated a clear advantage in terms of color accuracy and image quality. The proposed CLCITL method also exhibited significant improvement in hologram generation time. With the proposed color-aware hologram generation strategy, we could produce a vivid full-color holographic video display. We believe our method has the potential to be used in the area of AR/VR display and holographic projection since our approach is capable of providing high-quality holographic images, which may give audiences a more immersive visual experience.

However, our method has room for improvement. The current optimization model is based on the camera’s color feedback, which may differ from the human visual system. In the future, we will aim to improve the optimization process from the viewer’s perspective to realize a more realistic color holographic display. Another improvement can focus on the differentiable color-difference loss functions (such as \(\Delta E\)). Currently, the loss functions for the full-color image are based on the RGB color space. Nevertheless, this color space cannot accurately reflect the human eye’s perception of color difference, and thereby, we may lose some accuracy in color reproduction. A possible solution is to calculate the differentiable color-difference loss function in other color spaces (such as CIELab) to further enhance our control of color accuracy.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Change history

  • 12 June 2024

    The supplementary material has been added

Abbreviations

3D:

Three-dimensional

2D:

Two-dimensional

AR:

Augmented reality

VR:

Virtual reality

SLM:

Spatial light modulator

POH:

Phase-only hologram

SGD:

Stochastic gradient descent

LUT:

Look-up table

CIE:

International Commission on Illumination

CITL:

Camera-in-the-loop

PSNR:

Peak signal-to-noise ratio

LCITL:

Laser-camera-in-the-loop

CLCITL:

Cascade-laser-camera-in-the-loop

References

  1. Qi Y, Chang C, Xia J. Speckleless holographic display by complex modulation based on double-phase method. Opt Express. 2016;24(26):30368–78.


  2. Akram MN, Chen X. Speckle reduction methods in laser-based picture projectors. Opt Rev. 2016;23(1):108–20.


  3. Maimone A, Wang J. Holographic optics for thin and lightweight virtual reality. ACM Trans Graph (TOG). 2020;39(4):67–1.


4. Gabor D. A New Microscopic Principle. Nature. 1948;161:777–8.

  5. Jang C, Bang K, Li G, Lee B. Holographic Near-Eye Display with Expanded Eye-Box. ACM Trans Graph (TOG). 2018;37(6):14. https://doi.org/10.1145/3272127.3275069.

    Article  Google Scholar 

  6. Chang C, Bang K, Wetzstein G, Lee B, Gao L. Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. Optica. 2020;7(11):1563–78.

    Article  Google Scholar 

  7. Choi S, Gopakumar M, Peng Y, Kim J, O’Toole M, Wetzstein G. Time-multiplexed neural holography: a flexible framework for holographic near-eye displays with fast heavily-quantized spatial light modulators. In: ACM SIGGRAPH 2022 Conference Proceedings. Association for Computing Machinery; 2022;1–9. https://doi.org/10.1145/3528233.3530734.

  8. Shi L, Huang FC, Lopes W, Matusik W, Luebke D. Near-Eye Light Field Holographic Rendering with Spherical Waves for Wide Field of View Interactive 3D Computer Graphics. ACM Trans Graph (TOG). 2017;36(6):17. https://doi.org/10.1145/3130800.3130832.

    Article  Google Scholar 

  9. Maimone A, Georgiou A, Kollin JS. Holographic Near-Eye Displays for Virtual and Augmented Reality. ACM Trans Graph (TOG). 2017;36(4):16. https://doi.org/10.1145/3072959.3073624.

    Article  Google Scholar 

  10. Kim D, Nam SW, Lee B, Seo JM, Lee B. Accommodative holography: improving accommodation response for perceptually realistic holographic displays. ACM Trans Graph (TOG). 2022;41(4):1–15.

    Google Scholar 

  11. Gao Q, Liu J, Duan X, Zhao T, Li X, Liu P. Compact see-through 3D head-mounted display based on wavefront modulation with holographic grating filter. Opt Express. 2017;25(7):8412–24.

    Article  Google Scholar 

  12. Xiong J, Hsiang EL, He Z, Zhan T, Wu ST. Augmented reality and virtual reality displays: emerging technologies and future perspectives. Light Sci Appl. 2021;10(1):216.

    Article  Google Scholar 

  13. Yaraş F, Kang H, Onural L. State of the Art in Holographic Displays: A Survey. J Disp Technol. 2010;6(10):443–54.

    Article  Google Scholar 

  14. Hong J, Kim Y, Choi HJ, Hahn J, Park JH, Kim H, et al. Three-dimensional display technologies of recent interest: principles, status, and issues [Invited]. Appl Opt. 2011;50(34):H87–115. https://doi.org/10.1364/AO.50.000H87.

    Article  Google Scholar 

  15. Nam SW, Moon S, Lee B, Kim D, Lee S, Lee CK, et al. Aberration-corrected full-color holographic augmented reality near-eye display using a Pancharatnam-Berry phase lens. Opt Express. 2020;28(21):30836–50.

    Article  Google Scholar 

  16. Li G, Lee D, Jeong Y, Cho J, Lee B. Holographic display for see-through augmented reality using mirror-lens holographic optical element. Opt Lett. 2016;41(11):2486–9.

    Article  Google Scholar 

  17. Peng Y, Choi S, Kim J, Wetzstein G. Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration. Sci Adv. 2021;7(46):eabg5040.

  18. Lee S, Kim D, Nam SW, Lee B, Cho J, Lee B. Light source optimization for partially coherent holographic displays with consideration of speckle contrast, resolution, and depth of field. Sci Rep. 2020;10(1):18832.

    Article  Google Scholar 

  19. Wang D, Liu C, Shen C, Xing Y, Wang QH. Holographic capture and projection system of real object based on tunable zoom lens. PhotoniX. 2020;1:1–15.

    Article  Google Scholar 

  20. Arrizon V, Carreon E, Testorf M. Implementation of Fourier array illuminators using pixelated SLM: efficiency limitations. Opt Commun. 1999;160(4–6):207–13.

    Article  Google Scholar 

  21. Zhang H, Xie J, Liu J, Wang Y. Elimination of a zero-order beam induced by a pixelated spatial light modulator for holographic projection. Appl Opt. 2009;48(30):5834–41.

    Article  Google Scholar 

  22. Kim D, Nam SW, Bang K, Lee B, Lee S, Jeong Y, et al. Vision-correcting holographic display: evaluation of aberration correcting hologram. Biomed Opt Express. 2021;12(8):5179–95.

    Article  Google Scholar 

  23. Chakravarthula P, Tseng E, Srivastava T, Fuchs H, Heide F. Learned Hardware-in-the-loop Phase Retrieval for Holographic Near-Eye Displays. ACM Trans Graph (TOG). 2020;39(6):186.

    Article  Google Scholar 

  24. Piao YL, Erdenebat MU, Kwon KC, Gil SK, Kim N. Chromatic-dispersion-corrected full-color holographic display using directional-view image scaling method. Appl Opt. 2019;58(5):A120–7.

    Article  Google Scholar 

  25. Jones R, Wykes C. Holographic and speckle interferometry, 6. Cambridge University Press; 1989.

  26. Dainty JC. Laser speckle and related phenomena, vol. 9. Springer science & business Media; 2013.

  27. Lee D, Jang C, Bang K, Moon S, Li G, Lee B. Speckle reduction for holographic display using optical path difference and random phase generator. IEEE Trans Ind Inform. 2019;15(11):6170–8.

    Article  Google Scholar 

  28. Takaki Y, Yokouchi M. Speckle-free and grayscale hologram reconstruction using time-multiplexing technique. Opt Express. 2011;19(8):7567–79.

    Article  Google Scholar 

  29. Lee B, Kim D, Lee S, Chen C, Lee B. High-contrast, speckle-free, true 3D holography via binary CGH optimization. Sci Rep. 2022;12(1):2811.

    Article  Google Scholar 

  30. Chen C, Lee B, Li NN, Chae M, Wang D, Wang QH, et al. Multi-depth hologram generation using stochastic gradient descent algorithm with complex loss function. Opt Express. 2021;29(10):15089–103. https://doi.org/10.1364/OE.425077.

    Article  Google Scholar 

  31. Yoo D, Jo Y, Nam SW, Chen C, Lee B. Optimization of computer-generated holograms featuring phase randomness control. Opt Lett. 2021;46(19):4769–72.

    Article  Google Scholar 

  32. Liu K, Wu J, He Z, Cao L. 4K-DMDNet: diffraction model-driven network for 4K computer-generated holography. Opto-Electron Adv. 2023;6(5):220135–1.

    Article  Google Scholar 

  33. Lee J, Jeong J, Cho J, Yoo D, Lee B, Lee B. Deep neural network for multi-depth hologram generation and its training strategy. Opt Express. 2020;28(18):27137–54.

    Article  Google Scholar 

  34. Wu J, Liu K, Sui X, Cao L. High-speed computer-generated holography using an autoencoder-based deep neural network. Opt Lett. 2021;46(12):2908–11.

    Article  Google Scholar 

  35. Engström D, Persson M, Bengtsson J, Goksör M. Calibration of spatial light modulators suffering from spatially varying phase response. Opt Express. 2013;21(13):16086–103.

    Article  Google Scholar 

  36. Shi L, Li B, Matusik W. End-to-end learning of 3d phase-only holograms for holographic display. Light Sci Appl. 2022;11(1):247.

    Article  Google Scholar 

  37. Li R, Cao L. Progress in phase calibration for liquid crystal spatial light modulators. Appl Sci. 2019;9(10):2012.

    Article  Google Scholar 

  38. Takaki Y, Tanemoto Y. Band-limited zone plates for single-sideband holography. Appl Opt. 2009;48(34):H64–70.

    Article  Google Scholar 

  39. Lee B, Yoo D, Jeong J, Lee S, Lee D, Lee B. Wide-angle speckleless DMD holographic display using structured illumination with temporal multiplexing. Opt Lett. 2020;45(8):2148–51.

    Article  Google Scholar 

  40. Chang C, Cui W, Gao L. Holographic multiplane near-eye display based on amplitude-only wavefront modulation. Opt Express. 2019;27(21):30960–70.

    Article  Google Scholar 

  41. Chakravarthula P, Tseng E, Srivastava T, Fuchs H, Heide F. Learned Hardware-in-the-Loop Phase Retrieval for Holographic near-Eye Displays. ACM Trans Graph (TOG). 2020;39(6):18. https://doi.org/10.1145/3414685.3417846.

    Article  Google Scholar 

  42. Peng Y, Choi S, Padmanaban N, Wetzstein G. Neural Holography with Camera-in-the-Loop Training. ACM Trans Graph (TOG). 2020;39(6):14. https://doi.org/10.1145/3414685.3417802.

    Article  Google Scholar 

  43. Choi S, Kim J, Peng Y, Wetzstein G. Optimizing image quality for holographic near-eye displays with Michelson Holography. Optica. 2021;8(2):143–6. https://doi.org/10.1364/OPTICA.410622.

    Article  Google Scholar 

  44. Chen C, Kim D, Yoo D, Lee B, Lee B. Off-axis camera-in-the-loop optimization with noise reduction strategy for high-quality hologram generation. Opt Lett. 2022;47(4):790–3.

    Article  Google Scholar 

  45. Zhao B, Xu Q, Luo MR. Color difference evaluation for wide-color-gamut displays. JOSA A. 2020;37(8):1257–65.

    Article  Google Scholar 

  46. Witt K. CIE guidelines for coordinated future work on industrial colour-difference evaluation. Color Res Appl. 1995;20(6):399–403.

    Article  Google Scholar 

  47. Kaya N, Epps HH. Relationship between color and emotion: A study of college students. Coll Stud J. 2004;38(3):396–405.

    Google Scholar 

  48. Stone MC. Color and brightness appearance issues in tiled displays. IEEE Comput Graph Appl. 2001;21(5):58–66.

    Article  Google Scholar 

  49. Sharples S, Cobb S, Moody A, Wilson JR. Virtual reality induced symptoms and effects (VRISE): Comparison of head mounted display (HMD), desktop and projection display systems. Displays. 2008;29(2):58–69.

    Article  Google Scholar 

  50. Kazempourradi S, Ulusoy E, Urey H. Full-color computational holographic near-eye display. J Inf Disp. 2019;20(2):45–59.

    Article  Google Scholar 

  51. Pi D, Liu J, Wang Y. Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display. Light Sci Appl. 2022;11(1):231.

    Article  Google Scholar 

  52. Xue G, Liu J, Li X, Jia J, Zhang Z, Hu B, et al. Multiplexing encoding method for full-color dynamic 3D holographic display. Opt Express. 2014;22(15):18473–82.

    Article  Google Scholar 

  53. Kozacki T, Chlipala M. Color holographic display with white light LED source and single phase only SLM. Opt Express. 2016;24(3):2189–99.

    Article  Google Scholar 

  54. Lin SF, Kim ES. Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods. Opt Express. 2017;25(10):11389–404.

    Article  Google Scholar 

  55. Unsplash. Unsplash License. https://unsplash.com/license. Accessed 2018.

  56. Dozat T. Incorporating nesterov momentum into adam. Technical Report (Stanford University). 2015. Preprint at: http://cs229.stanford.edu/proj2015/054_report.pdf.

  57. Kingma DP, Ba J. Adam: a method for stochastic optimization. 2014. arXiv preprint arXiv:1412.6980.

  58. Sutskever I, Martens J, Dahl G, Hinton G. On the importance of initialization and momentum in deep learning. In: International conference on machine learning. PMLR; 2013. pp. 1139–1147.

  59. Le Gall DJ. The MPEG video compression algorithm. Signal Process Image Commun. 1992;4(2):129–40.

    Article  Google Scholar 

  60. Foundation B. Big buck bunny. https://peach.blender.org/. Accessed 2013.

  61. Zhang Y, Wang R, Peng Y, Hua W, Bao H. Color contrast enhanced rendering for optical see-through head-mounted displays. IEEE Trans Vis Comput Graph. 2021;28(12):4490–502.

    Article  Google Scholar 

  62. Hunt R, Pointer M. A colour-appearance transform for the CIE 1931 standard colorimetric observer. Color Res Appl. 1985;10(3):165–79.

    Article  Google Scholar 

Acknowledgements

This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No.2020-0-00548, (Sub3) Development of technology for deep learning-based real-time acquisition and pre-processing of hologram for 5G service, No.2021-0-00091, Development of real-time high-speed renderer technology for ultra-realistic hologram generation).

Funding

Institute of Information & Communications Technology Planning & Evaluation grant funded by the Korean government (MSIT) (2021-0-00091, 2020-0-00548).

Author information

Contributions

C. C. conceived the original experimental scheme and wrote the initial draft of the manuscript. S-W. N., D. K. and J. L. contributed to the discussion and analysis. Y. J. and B. L. advised and supervised the project.

Corresponding author

Correspondence to Yoonchan Jeong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1. See supplementary material for supporting content.

Supplementary Material 2. Supplementary document: Ultrahigh-fidelity full-color holographic display via color-aware optimization.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Chen, C., Nam, SW., Kim, D. et al. Ultrahigh-fidelity full-color holographic display via color-aware optimization. PhotoniX 5, 20 (2024). https://doi.org/10.1186/s43074-024-00134-7
