
Deep learning wavefront sensing and aberration correction in atmospheric turbulence

Abstract

Deep learning neural networks are used for wavefront sensing and aberration correction in atmospheric turbulence without any wavefront sensor, i.e., the wavefront aberration phase is reconstructed directly from the distorted image of the object. We compare the characteristics of the direct and indirect reconstruction ways: (i) directly reconstructing the aberration phase; (ii) reconstructing the Zernike coefficients and then calculating the aberration phase from them. We verify the generalization ability and performance of the network for a single object and for multiple objects. Furthermore, we verify the correction effect in a turbulence pool and the feasibility of the method in a real atmospheric turbulence environment.

Introduction

In general, the wavefront aberrations induced by fluids (such as atmospheric turbulence) or biological tissues in an imaging system can be corrected by a deformable mirror (DM) or a spatial light modulator (SLM) [1]. To obtain the appropriate DM or SLM control signal, there are two types of methods: optimization methods and wavefront sensing methods. The former search for the appropriate control signal by stochastic, local or global search algorithms [2], which is time-consuming because of the large number of iterations and measurements. The latter restore the wavefront distortion with a wavefront sensor (such as a Hartmann–Shack sensor) to guide the control signal of the DM or SLM [3], which suffers from costly optical elements, multiple measurements and strict calibration requirements.

For an imaging system without wavefront aberration, the object can be clearly imaged. When atmospheric turbulence or another wavefront-affecting medium exists in the imaging path, the image of the object is distorted. Different wavefront aberrations lead to different image distortions, which means that there is a mapping relationship between them. Supervised deep learning has played an important role in computer vision [4, 5]. For example, convolutional neural networks are used for classification and recognition [6, 7], that is, learning the mapping relationship from images to categories and locations; encoder–decoder neural networks are used for semantic segmentation [8, 9], that is, learning the mapping relationship from the image to the category of each pixel. It is natural to ask: can a deep learning neural network learn the mapping relationship from image distortion to wavefront aberration?

Deep learning has now become a powerful tool to solve various inverse problems in computational imaging by learning the corresponding mapping relationship, such as digital holography (from hologram to phase and intensity images of objects) [10, 11], phase unwrapping (from wrapped phase to absolute phase) [12, 13], and imaging through scattering media (from speckle map to object image) [14, 15]. In addition, the phase distribution of an object can be directly restored from a single intensity image by a deep learning neural network [16, 17]. Similarly, from the distorted intensity image, deep learning neural networks have also been used to reconstruct the wavefront aberration phase [18, 19] or its Zernike coefficients [20–28], an approach called deep learning wavefront sensing. As an end-to-end method, deep learning wavefront sensing requires only a camera, without traditional wavefront sensors, which is of great significance for free-space optical communications, astronomical observations and laser weapons. However, in these works [18–28], only one of the two reconstruction ways (aberration phase or Zernike coefficients) was studied, and no real-environment experiments were performed: the studies are either purely numerical simulations [18, 20, 21, 25–27] or use an SLM or lens movement to simulate aberration phases [19, 22–24, 28].

In this paper, we test the generalization ability of the deep learning neural network for the single-object and multiple-object cases, and compare the performance of using the wavefront aberration phase or its corresponding Zernike coefficients as the ground truth (GT) in simple and complex cases. Furthermore, the correction effect in a turbulence pool and the feasibility in real atmospheric turbulence are verified.

Method

As shown in Fig. 1(a), due to atmospheric turbulence, a wavefront aberration φ(x, y) is introduced into the object field O(x, y), where x and y represent the transverse spatial coordinates. The distorted intensity distribution I(x, y) is then given by

$$I(x,y)=\left| \mathrm{FT}\left\{ O(x,y)\,\mathrm{e}^{\,\mathrm{j}\varphi(x,y)} \right\} \right|^{2},$$
(1)

where FT{·} represents the Fourier transform. That is, there exists a mapping relationship between the wavefront aberration, the object field and the intensity distribution:

$$\varphi = f(O,I).$$
(2)

The deep learning neural network can learn this mapping relationship from a large number of data pairs and reconstruct the wavefront aberration phase (or its corresponding Zernike coefficients) from the intensity distribution, as shown by the red part in Fig. 1; this is the main pursuit of this paper. A DM or SLM can then be used to correct the wavefront aberration under the guidance of the network output, as shown in Fig. 1(b).
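The forward model in Eq. (1) is straightforward to simulate. The following is a minimal NumPy sketch of it; the grid size, FFT centering and normalization are our own assumptions rather than the paper's exact settings.

```python
import numpy as np

def distorted_intensity(obj, phase):
    """Eq. (1): I(x, y) = |FT{ O(x, y) * exp(j*phi(x, y)) }|^2 (minimal sketch)."""
    field = obj * np.exp(1j * phase)                       # object field with aberration
    I = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2   # far-field intensity
    return I / I.max()                                     # normalized for network input

# Example: a 128 x 128 grid-like object with an arbitrary phase for illustration
obj = np.zeros((128, 128))
obj[32:96:8, 32:96] = 1.0
obj[32:96, 32:96:8] = 1.0
I = distorted_intensity(obj, np.random.randn(128, 128))
```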

Fig. 1

Schematic diagram of the deep learning wavefront aberration correction. a Production of distortion; b wavefront aberration reconstruction and correction

The convolutional neural network (CNN) architectures are inspired by U-Net [8], the Residual block [29] and the Inception module [30], as illustrated in Fig. 2. CNN1 consists of an encoding path (left), a decoding path (right) and a bridge path (middle). The encoding and decoding paths each contain four Residual blocks; each Residual block in the encoding path is followed by max pooling for downsampling, and each Residual block in the decoding path is preceded by a transposed convolution for upsampling. CNN2 consists of an encoding path and two fully connected layers. The numbers in Fig. 2(a) and (b) represent the number of channels in each convolutional layer and the number of neurons in each fully connected layer. We add the idea of the Inception module to the Residual block, as shown in Fig. 2(c), where the four right paths use one, two, three and four 3 × 3 convolutions, respectively, for more effective extraction of features at different scales. The concatenations in Fig. 2(a) transmit gradients and information to improve the convergence speed of the CNN, while the concatenations in Fig. 2(c) merge the features at different scales. CNN1 and CNN2 are used to reconstruct the wavefront aberration phase and its Zernike coefficients from the intensity distribution, respectively. The number of parameters in CNN2 is approximately half that of CNN1.
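As a rough illustration of the block in Fig. 2(c), the PyTorch sketch below builds four parallel branches of one to four 3 × 3 convolutions with N/4 channels each, concatenates them and adds a shortcut. The exact placement of batch normalization, activations and the shortcut convolution is our assumption, since the paper does not spell them out.

```python
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    """Sketch of the Inception-style residual block of Fig. 2(c) (details assumed)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        b = out_ch // 4                              # N/4 channels per branch

        def branch(depth):                           # 'depth' stacked 3x3 convolutions
            layers, c = [], in_ch
            for _ in range(depth):
                layers += [nn.Conv2d(c, b, 3, padding=1),
                           nn.BatchNorm2d(b), nn.ReLU(inplace=True)]
                c = b
            return nn.Sequential(*layers)

        self.branches = nn.ModuleList([branch(d) for d in (1, 2, 3, 4)])
        self.shortcut = nn.Identity() if in_ch == out_ch else nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([br(x) for br in self.branches], dim=1)   # merge multi-scale features
        return self.act(y + self.shortcut(x))                   # residual connection
```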

Fig. 2

CNN architectures. a CNN1 for aberration phase reconstruction; b CNN2 for Zernike coefficients reconstruction, in which ‘M’ indicates the number of neurons; c residual block, in which ‘N’ and ‘N/4’ indicate the number of channels

Adaptive moment estimation (ADAM) based optimization is used to train all the networks. The batch size is 64 and the initial learning rate is 0.01 (decreased by 75% per epoch while it remains greater than 10⁻⁷). The networks are trained for 200 epochs on 10,000 pairs of data. The L2-norm and cross-entropy loss functions are used for CNN1 and CNN2, respectively.
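A minimal PyTorch training-loop sketch consistent with these settings is shown below; `cnn1`, `train_loader` and the manual learning-rate schedule are illustrative assumptions (the 75% drop could equally be implemented with a built-in scheduler).

```python
import torch
import torch.nn as nn

optimizer = torch.optim.Adam(cnn1.parameters(), lr=0.01)   # ADAM, initial lr = 0.01
criterion = nn.MSELoss()                                   # L2 loss for CNN1 (phase maps)

for epoch in range(200):                                   # 200 epochs
    for img, phase_gt in train_loader:                     # batches of 64 image/phase pairs
        optimizer.zero_grad()
        loss = criterion(cnn1(img), phase_gt)
        loss.backward()
        optimizer.step()
    for group in optimizer.param_groups:                   # drop lr by 75% per epoch
        if group["lr"] > 1e-7:                             # while it stays above 1e-7
            group["lr"] *= 0.25
```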

All the networks are implemented in PyTorch 1.0 with Python 3.6.1 and run on a PC with a Core i7-8700K CPU (3.8 GHz), 16 GB of RAM and an NVIDIA GeForce GTX 1080 Ti GPU. The training time is about 6 h for CNN1 and 4 h for CNN2, while the testing time is about 0.05 s for CNN1 and 0.04 s for CNN2.

Three parameters are used to evaluate the accuracy of the neural networks (a minimal computation sketch follows the list):

  i. SSIM: structural similarity index.

  ii. RMSE: root mean square error.

  iii. MAE: mean absolute error (and its percentage of the aberration phase range).
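The sketch below shows one plausible way to compute these three metrics for a reconstructed phase map against its GT; the scikit-image SSIM routine and the use of the GT range for the MAE percentage are our assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate(pred, gt):
    """Return SSIM, RMSE, MAE and MAE as a percentage of the GT phase range."""
    rng = gt.max() - gt.min()
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    mae = np.mean(np.abs(pred - gt))
    return ssim(gt, pred, data_range=rng), rmse, mae, 100.0 * mae / rng
```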

Results and discussion

Simulation

In the simulation, the samples are distorted by wavefront aberration phases generated from the 2nd–15th order Zernike polynomials, with coefficients randomly set within the range [−5, 5]. The aberration phases (or their Zernike coefficients) and the corresponding distorted images are used as the GT and the input, respectively.
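A compact sketch of this dataset-generation step is given below, building a randomly weighted sum of Zernike polynomials (orders 2–15) that can be paired with the forward model sketched earlier; the Noll indexing convention and the omitted normalization factors are our assumptions.

```python
import numpy as np
from math import factorial

def noll_to_nm(j):
    """Map a Noll index j to radial degree n and azimuthal frequency m."""
    n, j1 = 0, j - 1
    while j1 > n:
        n += 1
        j1 -= n
    m = (-1) ** j * ((n % 2) + 2 * ((j1 + (n + 1) % 2) // 2))
    return n, m

def zernike(j, rho, theta):
    """Zernike polynomial Z_j on the unit disk (normalization omitted)."""
    n, m = noll_to_nm(j)
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    if m == 0:
        return R
    return R * (np.cos(m * theta) if m > 0 else np.sin(-m * theta))

# Random aberration phase from Zernike orders 2-15 with coefficients in [-5, 5]
N = 128
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0
coeffs = np.random.uniform(-5, 5, size=14)
phase = sum(c * zernike(j, rho, theta) for c, j in zip(coeffs, range(2, 16))) * pupil
```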

For the mapping relationship in Eq. (2), there are two cases: the same object with different wavefront aberration phases (single object) or different objects with different wavefront aberration phases (multiple objects). It is thus necessary to compare the performance of the network in these two cases.

For the single object, we use a grid as the object to generate 11,000 pairs of data (10,000 for training and 1,000 for testing), partially shown in Fig. 3. The shape of the grid deforms correspondingly with the aberration phases, which guides the convergence of the neural network.

Fig. 3

Examples of aberration phase and distorted image

For the multiple objects, the Number (from EMNIST [31]) and LFW [32] datasets are separately used as the objects to generate 10,000 pairs of data for training and 1,000 pairs for testing, while Letter (from EMNIST) and ImageNet [33] are used to generate 1,000 pairs of data for testing. Note that the aberration phases used in generating these datasets are the same as those used for the single object.

After training, the three CNN1s are tested. The accuracy evaluation of the networks is shown in Table 1 and Fig. 4, from which the following can be observed:

Table 1 Accuracy evaluation of the single object and multiple objects.
Fig. 4

Demonstration of partial results of the single object and multiple objects

  i. Whether it is a single object or multiple objects, the neural network is able to learn the corresponding mapping relationship.

  ii. The accuracy of the neural network on a single object is higher than that on multiple objects.

  iii. The neural network trained with one type of dataset (Number or LFW) also works on another similar dataset (Letter or ImageNet).

  iv. However, when the EMNIST-trained network is used to reconstruct LFW or ImageNet distorted images, wrong results are obtained, and vice versa. Therefore, in actual applications, it is recommended to create the dataset from objects similar to the target objects.

In addition to directly reconstructing the aberration phase, it is also an option to reconstruct the Zernike coefficients, which are then used to calculate the aberration phase. We compare these two ways in two cases: an aberration phase without internal details (simple) and an aberration phase with internal details (complex).

For the simple case, the Zernike coefficients of the aberration phases from the Grid dataset in Sect. 3.2 are used as the GT of CNN2 (M = 14). For the complex case, as shown in Fig. 5, a random phase is added to the sample aberration phase to generate the complex aberration phase used as the GT of CNN1; the Zernike coefficients (orders 2–101) calculated from the complex aberration phase are then used as the GT of CNN2 (M = 100).
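One plausible way to obtain the 2nd–101st order Zernike coefficients from the complex aberration phase is a least-squares fit over the pupil, as sketched below. It reuses zernike(), rho, theta and pupil from the earlier sketch, and the fitting procedure itself is our assumption rather than the paper's stated method; `complex_phase` stands for the simple phase plus the random phase (Fig. 5(d)).

```python
import numpy as np

# Least-squares fit of Zernike coefficients (orders 2-101) to the complex phase
orders = range(2, 102)
basis = np.stack([zernike(j, rho, theta)[pupil] for j in orders], axis=1)
coeffs, *_ = np.linalg.lstsq(basis, complex_phase[pupil], rcond=None)

# Inverse step used at test time: rebuild a phase map from CNN2's predicted coefficients
rebuilt = np.zeros_like(complex_phase)
rebuilt[pupil] = basis @ coeffs
```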

Fig. 5

Dataset generation for Zernike coefficients reconstruction. a Zernike coefficients (orders 2–15); b simple aberration phase; c random phase; d complex aberration phase; e Zernike coefficients (orders 2–101). a and b are from the Grid dataset in Sect. 3.2. For the simple case, (a) and (b) are used as the GT of CNN2 and CNN1, respectively. For the complex case, (d) and (e) are used as the GT of CNN1 and CNN2, respectively

After training, the three networks are tested; the Zernike coefficients output by CNN2 are converted back to phase maps for comparison with CNN1. The accuracy evaluation of the networks is shown in Table 2 and Fig. 6, from which the following can be observed:

Table 2 Accuracy evaluation of the simple and complex cases.
Fig. 6

Demonstration of partial results of the simple and complex cases

  i. For the simple case, CNN1 and CNN2 have the same accuracy.

  ii. For the complex case, the accuracy of CNN2 drops considerably, due to the loss of detailed information (lower resolution).

  iii. Given that an SLM generally has a higher resolution than a DM, CNN1 (direct reconstruction of the aberration phase) offers higher resolution and is more suitable for an SLM, while CNN2 (reconstruction of Zernike coefficients) has fewer network parameters but lower resolution and is more suitable for a DM.

Correction experiment

In order to verify the correction effect of this method, we trained and tested CNN1 (direct reconstruction of the wavefront aberration phase) with a turbulence pool. As shown in Fig. 7, the setup contains five parts: the aberration phase acquisition part, the distorted image acquisition part, the correction part, the calculation part and the turbulence generating part:

Fig. 7

Setup for the correction experiment. TL: telecentric lens, CP: calibration plane, ISLM: intensity-type SLM, PSLM: phase-type SLM

  i. The aberration phase acquisition part includes a laser source (532 nm), a Mach–Zehnder interferometer for generating the hologram, a telecentric lens for conjugating the calibration plane with the CCD1 target plane, and CCD1 for recording the hologram.

  ii. The distorted image acquisition part includes an ISLM (intensity-type SLM) with a white LED for generating objects (grid), a double lens for adjusting the beam size, and CCD2 with a lens (300 mm) for imaging.

  iii. The correction part includes a triplet lens for adjusting the beam size while conjugating the calibration plane with the PSLM (phase-type SLM) target plane, and the PSLM for correction.

  iv. The calculation part is a computer for reconstructing the phase from the hologram, training the neural network, reconstructing the phase from the distorted image, and controlling the PSLM to achieve turbulence correction.

  v. The turbulence generating part is a 1.5-meter-long turbulence pool heated at the bottom (200 °C) and cooled at the top (20 °C).

When collecting the dataset, a constant phase is loaded onto the PSLM. We use CCD1 to record the hologram and reconstruct the aberration phase as the GT, and use CCD2 to record the distorted image of the object displayed on the ISLM as the input, as partially shown in Fig. 8.

Fig. 8

Examples of aberration phase and distorted image for the turbulence pool

After training, the computer controls the PSLM in real time with the aberration phase reconstructed by the network to correct the turbulence (the correction frequency is about 100 Hz). In order to verify the correction effect, we use CCD1 to continuously record the hologram (phase) and then turn on the correction system. As shown in Fig. 9, we calculate the standard deviation (StdDev) of the phase recorded by CCD1 and display the phases of frames 1, 101, 201, 301, 401, 501, 601, 701, 801 and 901 below. The average StdDev of the phase for the first 500 frames (before correction) is 7.51, while that for the next 500 frames (after correction) is 1.79.
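The closed-loop correction step can be summarized by the loop below; grab_frame(), pslm.load() and phase_to_graylevels() stand in for the actual camera and PSLM drivers, which the paper does not describe, so this is only an illustrative sketch.

```python
import numpy as np
import torch

cnn1.eval()
with torch.no_grad():
    while correcting:                                    # runs at roughly 100 Hz
        img = grab_frame()                               # distorted image from CCD2
        x = torch.from_numpy(img[None, None]).float()    # shape (1, 1, H, W)
        phase = cnn1(x)[0, 0].cpu().numpy()              # reconstructed aberration phase
        pslm.load(phase_to_graylevels(-phase))           # load the conjugate phase onto the PSLM
```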

Fig. 9

Phase StdDev before and after correction

To further test the correction effect, we blocked the reference light in the setup, replaced the TL with a convex lens and moved CCD1 to the focal plane of the convex lens. The focal spots before and after correction were then recorded and compared in Fig. 10. From Fig. 10(a) and (b), it can be seen that the energy of the corrected spot is more concentrated. To be more quantitative, in Fig. 10(c) we plot the intensity across the middle horizontal lines of Fig. 10(a) and (b), which shows that the maximum intensity of the focal spot after correction is about 2.5 times that before correction.

Fig. 10

Focal spots before and after correction. a Focal spot before correction; b focal spot after correction; c intensity across the middle horizontal lines of (a) and (b)

Real atmospheric turbulence experiment

In order to verify the feasibility of this method in real atmospheric turbulence, we transferred the setup in Fig. 7 to an open-air environment. Since the reference beam of the holographic part is not stable enough over a long distance, a Hartmann–Shack sensor is used to measure the wavefront aberration phase as the GT. The focal length of the CCD2 lens is increased to 600 mm to photograph a stationary object near the Hartmann–Shack sensor as the input. The atmospheric turbulence path is about 130 m long. The Hartmann–Shack sensor and the camera are triggered synchronously through a section of optical fiber at a frequency of 10 Hz. 11,000 pairs of data are collected to train and test CNN1 (10,000 for training and 1,000 for testing).

Partial results of the network are shown in Fig. 11; the SSIM, RMSE and MAE are 0.961, 2.912 and 2.419 (5.69%), respectively, which means that the network can reconstruct the real turbulent phase, although the performance is lower than in the single-object case in Sect. 3.2. As indicated by the red arrow in the second column, the reconstruction results of individual samples show relatively large errors (8%). We attribute this performance degradation to additional constantly changing factors in the real environment, such as ambient light intensity, wind speed and humidity. A more in-depth exploration will be carried out in our follow-up work.

Fig. 11

Demonstration of partial results for real turbulence

Conclusions

In this paper, we have verified the feasibility of deep learning wavefront sensing for single and multiple objects. Compared with a single object, the network performance for multiple objects is slightly reduced. We compared the two ways of direct phase reconstruction and Zernike coefficient reconstruction by the network, and found that the direct way is more accurate for complex aberration phases. In addition, the correction effect of this method has been verified in a turbulence pool, and its feasibility has been verified in a real atmospheric turbulence environment.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available due to confidentiality but are available from the corresponding author on reasonable request.

Abbreviations

DM: Deformable mirror

SLM: Spatial light modulator

GT: Ground truth

FT: Fourier transform

CNN: Convolutional neural network

ADAM: Adaptive moment estimation

LFW: Labeled Faces in the Wild

PSLM: Phase-type spatial light modulator

ISLM: Intensity-type spatial light modulator

StdDev: Standard deviation


References

  1. Tyson R. Principles of adaptive optics. Boca Raton: CRC Press; 2010. https://doi.org/10.1201/EBK1439808580.

  2. Vorontsov MA, Carhart GW, Cohen M, Cauwenberghs G. Adaptive optics based on analog parallel stochastic optimization: analysis and experimental demonstration. J Opt Soc Am A. 2000;17:1440. https://doi.org/10.1364/JOSAA.17.001440.


  3. Platt BC, Shack R. History and Principles of Shack-Hartmann Wavefront Sensing. J Refract Surg. 2001;17:573–7. https://doi.org/10.3928/1081-597X-20010901-13.


  4. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44. https://doi.org/10.1038/nature14539.


  5. Voulodimos A, Doulamis N, Doulamis A, Protopapadakis E. Deep learning for computer vision: a brief review. Comput Intell Neurosci. 2018;2018:7068349. https://doi.org/10.1155/2018/7068349.

  6. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556; 2015. https://arxiv.org/abs/1409.1556.

  7. Redmon J, Divvala S, Girshick R, Farhadi A. You Only Look Once: unified, real-time object detection. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas: IEEE; 2016, p. 779–88. https://doi.org/10.1109/CVPR.2016.91.

  8. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. arXiv preprint arXiv:1505.04597; 2015. https://arxiv.org/abs/1505.04597.

  9. Badrinarayanan V, Kendall A, Cipolla R. SegNet: a deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561; 2016. https://arxiv.org/abs/1511.00561.

  10. Rivenson Y, Zhang Y, Günaydın H, Teng D, Ozcan A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci Appl. 2018;7:17141. https://doi.org/10.1038/lsa.2017.141.


  11. Wang K, Dou J, Kemao Q, Di J, Zhao J. Y-Net: a one-to-two deep learning framework for digital holographic reconstruction. Opt Lett. 2019;44:4765. https://doi.org/10.1364/OL.44.004765.


  12. Spoorthi GE, Gorthi S, Gorthi RKSS. PhaseNet: a deep convolutional neural network for two-dimensional phase unwrapping. IEEE Signal Process Lett. 2018;26:54–8. https://doi.org/10.1109/LSP.2018.2879184.


  13. Wang K, Li Y, Kemao Q, Di J, Zhao J. One-step robust deep learning phase unwrapping. Opt Express. 2019;27:15100. https://doi.org/10.1364/OE.27.015100.


  14. Borhani N, Kakkava E, Moser C, Psaltis D. Learning to see through multimode fibers. Optica. 2018;5:960. https://doi.org/10.1364/OPTICA.5.000960.


  15. Rahmani B, Loterie D, Konstantinou G, Psaltis D, Moser C. Multimode optical fiber transmission with a deep learning network. Light Sci Appl. 2018;7:69. https://doi.org/10.1038/s41377-018-0074-1.


  16. Sinha A, Lee J, Li S, Barbastathis G. Lensless computational imaging through deep learning. Optica. 2017;4:1117. https://doi.org/10.1364/OPTICA.4.001117.


  17. Wang K, Di J, Li Y, Ren Z, Kemao Q, Zhao J. Transport of intensity equation from a single intensity image via deep learning. Opt Lasers Eng. 2020;134:106233. https://doi.org/10.1016/j.optlaseng.2020.106233.


  18. Liu J, Wang P, Zhang X, He Y, Zhou X, Ye H, et al. Deep learning based atmospheric turbulence compensation for orbital angular momentum beam distortion and communication. Opt Express. 2019;27:16671. https://doi.org/10.1364/OE.27.016671.


  19. Guo H, Xu Y, Li Q, Du S, He D, Wang Q, et al. Improved machine learning approach for wavefront sensing. Sensors. 2019;19:3533. https://doi.org/10.3390/s19163533.


  20. Paine SW, Fienup JR. Machine learning for improved image-based wavefront sensing. Opt Lett. 2018;43:1235. https://doi.org/10.1364/OL.43.001235.


  21. Li J, Zhang M, Wang D, Wu S, Zhan Y. Joint atmospheric turbulence detection and adaptive demodulation technique using the CNN for the OAM-FSO communication. Opt Express. 2018;26:10494. https://doi.org/10.1364/OE.26.010494.


  22. Jin Y, Zhang Y, Hu L, Huang H, Xu Q, Zhu X, et al. Machine learning guided rapid focusing with sensor-less aberration corrections. Opt Express. 2018;26:30162. https://doi.org/10.1364/OE.26.030162.


  23. Ju G, Qi X, Ma H, Yan C. Feature-based phase retrieval wavefront sensing approach using machine learning. Opt Express. 2018;26:31767. https://doi.org/10.1364/OE.26.031767.


  24. Nishizaki Y, Valdivia M, Horisaki R, Kitaguchi K, Saito M, Tanida J, et al. Deep learning wavefront sensing. Opt Express. 2019;27:240. https://doi.org/10.1364/OE.27.000240.


  25. Ma H, Liu H, Qiao Y, Li X, Zhang W. Numerical study of adaptive optics compensation based on Convolutional Neural Networks. Opt Commun. 2019;433:283–9. https://doi.org/10.1016/j.optcom.2018.10.036.


  26. Tian Q, Lu C, Liu B, Zhu L, Pan X, Zhang Q, et al. DNN-based aberration correction in a wavefront sensorless adaptive optics system. Opt Express. 2019;27:10765. https://doi.org/10.1364/OE.27.010765.


  27. Andersen T, Owner-Petersen M, Enmark A. Neural networks for image-based wavefront sensing for astronomy. Opt Lett. 2019;44:4618. https://doi.org/10.1364/OL.44.004618.


  28. Chen M, Jin X, Xu Z. Investigation of Convolution Neural Network-Based Wavefront Correction for FSO Systems. 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), Xi’an: IEEE; 2019, p. 1–6. https://doi.org/10.1109/WCSP.2019.8927850.

  29. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas: IEEE; 2016, p. 770–8. https://doi.org/10.1109/CVPR.2016.90.

  30. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas: IEEE; 2016, p. 2818–26. https://doi.org/10.1109/CVPR.2016.308.

  31. Cohen G, Afshar S, Tapson J, van Schaik A. EMNIST: an extension of MNIST to handwritten letters. arXiv preprint arXiv:1702.05373; 2017. https://arxiv.org/abs/1702.05373.

  32. Huang GB, Mattar M, Berg T, Learned-Miller E. Labeled faces in the wild: a database for studying face recognition in unconstrained environments. 2008. https://hal.inria.fr/inria-00321923.

  33. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami: IEEE; 2009, p. 248–55. https://doi.org/10.1109/CVPR.2009.5206848.


Acknowledgements

The authors thank all members of the Key Laboratory of Atmospheric Composition and Optics of the Chinese Academy of Sciences for providing an experimental site and turbulence pool. The authors thank the Lijiang Astronomical Observatory of the Chinese Academy of Sciences for providing an experimental site.

Funding

National Natural Science Foundation of China (61927810, 62075183).

Author information


Contributions

KW: Conceptualization, experiment, methodology, writing - original draft. MZ: Experiment, software. JT: Experiment. LW: Experiment. LH: Experiment. XW: Experiment. WL: Experiment. JD: Conceptualization, investigation, resources, project administration. GL: Supervision, project administration, funding acquisition. JZ: Writing - review & editing, supervision, funding acquisition. All the authors analyzed the data and discussed the results. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Jianlin Zhao.

Ethics declarations

Competing interest

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, K., Zhang, M., Tang, J. et al. Deep learning wavefront sensing and aberration correction in atmospheric turbulence. PhotoniX 2, 8 (2021). https://doi.org/10.1186/s43074-021-00030-4

