
Intelligent optoelectronic processor for orbital angular momentum spectrum measurement

Abstract

Orbital angular momentum (OAM) detection underpins almost all aspects of vortex beams’ advances such as communication and quantum analogy. Conventional schemes are frustrated by low speed, complicated systems, and limited detection range. Here, we devise an intelligent processor composed of photonic and electronic neurons for OAM spectrum measurement in a fast, accurate and direct manner. Specifically, optical layers extract invisible topological charge information from incoming light and a shallow electronic layer predicts the exact spectrum. The integration of optical computing promises a compact single-shot system with high speed and energy efficiency (optical operations / electronic operations ~\({10}^{3}\)), necessitating neither a reference wave nor repetitive steps. Importantly, our processor is endowed with salient generalization ability and robustness against diverse structured light and adverse effects (mean squared error ~\(10^{-5}\)). We further propose a universal model interpretation paradigm to reveal the underlying physical mechanisms in the hybrid processor, as distinct from conventional ‘black-box’ networks. Such an interpretation algorithm can improve the detection efficiency up to 25-fold. We also complete the theory of the optoelectronic network, enabling its efficient training. This work not only contributes to the exploration of OAM physics and applications but also broadly inspires advanced links between intelligent computing and physical effects.

Introduction

Vortex beams carrying orbital angular momentum are ubiquitous in optical sciences. All OAM states of light constitute a Hilbert space, providing a new avenue to high-capacity communications [1], micro-manipulation [2], and quantum information processing [3]. Consequently, precise and efficient detection of the OAM distribution, or OAM spectrum, becomes pivotal for these promising applications. Several previously proposed detection methods, such as interferometry [4, 5], diffractometry [6, 7], and coordinate transformation [8, 9], are suitable only for detecting the existence of OAM within a limited range and inappropriate for extracting the accurate power distribution of OAM states. To this end, schemes based on mode projection and phase retrieval have been explored [10,11,12], yet they require not only many repetitive steps in data acquisition and postprocessing but also strict experimental calibration. The rotational Doppler effect has also been employed to construct an OAM complex spectrum analyzer [13], at the cost of a complicated detection setup and low speed. Sequential weak and strong measurements in single-photon scenarios successfully reconstruct the complex probability amplitudes of 27-dimensional OAM states [14] but still fall short in terms of system conciseness and working speed. In short, despite these realizations [4, 6, 8,9,10,11, 13,14,15,16,17,18], the limitations in detection speed, accuracy, range, robustness, generalization ability, and system conciseness hinder their practicability towards stable, fast and accurate information transfer in the modern age.

Along with enduring efforts in vortex beams comes the development of artificial intelligence. In particular, deep learning (DL) [19] has gradually revolutionized wide-ranging disciplines such as genetics [20], biomedical diagnosis [21] and physics [22]. In optics, data-driven DL algorithms are becoming pervasive tools to augment performance and infuse new functionalities in imaging [23], holography [24], ultrafast photonics [25], optical metrology [26], etc. Recently, DL has also been introduced to recognize OAM modes [27,28,29,30,31] owing to its inherent ability to analyze complex patterns. However, distinguishing degenerate intensity patterns that carry different OAM spectra remains a major challenge. More importantly, previous DL-related endeavors based on digital signal processing techniques often suffer poor generalization ability due to their ‘black-box’ nature and heavy computational consumption.

The flourishing of DL drives immense demands for computing hardware, especially in the era of big data. The computing power required to train or execute state-of-the-art DL models increases vastly, while expectations of faster computing continue to rise. On the other hand, the development of integrated electronic circuits is unable to keep pace with the well-known Moore’s law. Owing to its advantages of low latency, high energy efficiency and parallelism, photonics thus establishes itself in a central position in the search for alternative technologies for continued performance improvements [32, 33]. Many seminal photonic computing schemes have been proposed recently [34,35,36,37,38,39]. Among them, the wave-optics-based Diffractive Deep Neural Network (D2NN) distinguishes itself with great flexibility [35, 40], depth advantage [41], and scalability [42]. In addition to many machine-vision demonstrations [35, 42], it has been successfully incorporated into diffuser imaging systems [43], pulse shaping systems [44], and optical logic operation systems [45], along with extensions to different wavelengths [46,47,48,49].

Here, we demonstrate a single-shot measurement scheme called POAMS (processor for OAM spectrum) with high speed and interpretability, leveraging a hybrid optoelectronic neural network (Fig. 1). An optical diffractive network is synergized with a shallow electronic readout layer to predict the exact OAM spectrum of incoming structured light in a regressive manner. The results obtained on unknown, experimentally generated single and multiplexed modes show that POAMS could be an optimal solution compared with the most advanced alternatives (see Supplementary Table S1 for comparison details), featuring several critical properties: (1) high speed and energy efficiency: it works at the microsecond level with most computation operations (~ 99.98% of all operations) conducted optically at little to no energy cost; (2) high accuracy: it can reconstruct sophisticated, even random, relative weights with mean squared error (MSE) around \(10^{-5}\)~\(10^{-3}\); (3) conciseness: it entails neither a reference wave nor repetitive measurements, with a system size of ~\(100\lambda \times 100\lambda \times 200\lambda\); (4) high robustness: it delivers successful results even in the presence of adverse effects such as atmosphere turbulence and spatial dislocations; (5) great generalization ability: it functions well on experimental modes with diverse OAM spectra without necessitating massive experimental training data; (6) great extendibility: it can be extended to directly calculate the OAM complex spectrum (i.e. the relative power and phase distributions). Moreover, we propose an efficient training method for optoelectronic models and demonstrate a universal model interpretation/visualization algorithm to comprehend the underlying physical mechanisms of our hybrid processor, thus (1) removing the ‘black-box’ nature of neural networks and (2) improving the system’s detection efficiency. The synergy of optical and electronic neurons promises a powerful and concise platform to facilitate OAM-based high-speed information processing and to explore new opportunities and mechanisms of hybrid optoelectronic neural networks.

Fig. 1

Illustration of the hybrid processor for OAM spectrum measurement. The incident structured light with a certain OAM distribution is first processed by the diffractive optical neural network, whereby the OAM information is transformed into high-dimensional sparse features in the photoelectric detector plane. For this model, the optical-diffraction-based processing part can ‘split’ the input beam into two main lobes that relate to positive and negative topological charges respectively. Once the complex optical field is turned into real-numbered intensity patterns, the shallow fully connected layer can recover the OAM spectrum in a regressive manner (see detailed model in Methods)

Results

Processor for OAM spectrum

A general structured light field \(E(\rho , \theta ,z)\) can be characterized by mathematically decomposing it onto orthogonal vortex modes:

$$\begin{array}{c}E\left(\rho,\theta,z\right)=\sum\nolimits_{\ell=K_N}^{K_P}c_{\ell}\left(\rho,z\right)\text{exp}\left(i\ell\theta\right)\end{array},$$
(1)

where \((\rho , \theta , z)\) are cylindrical coordinates, \({K}_{P}\) and \({K}_{N}\) denote the positive and negative OAM spectrum bounds for a given detection scenario (normally \(\left|{K}_{P}\right|=\left|{K}_{N}\right|\), and they could be infinite), and \({c}_{\ell}\left(\rho ,z\right)\) represents the complex coefficient function with respect to a certain helical mode. Here we limit our attention to the degree of OAM by setting \({c}_{\ell}\left(\rho ,z\right)={a}_{\ell}{\text{L}\text{G}}_{0,\ell}\left(\rho ,z\right)\) with \({\text{L}\text{G}}_{0,\ell}\left(\rho,z\right)={\text{L}\text{G}}_{0,\ell}\left(\rho,\theta,z\right)\text{exp}\left(-i\ell\theta\right)\) representing the radial amplitude of the Laguerre-Gaussian mode \({\text{L}\text{G}}_{0,\ell}\left(\rho ,\theta , z\right)\) (radial index 0, azimuthal index \(\ell\)). Therefore, the normalized complex coefficient \({a}_{\ell}=\left|{a}_{\ell}\right|\text{exp}\left(i{\varphi }_\ell\right)\) contains the OAM spectrum information, where the amplitude of \({a}_{\ell}\) satisfies \(\sum _{\ell={K}_{N}}^{{K}_{P}}{\left|{a}_{\ell}\right|}^{2}=1\) and \({\varphi }_\ell\) indicates the intermodal phase with respect to a global reference phase. The OAM spectrum that elucidates the relative power distribution of all components can thus be expressed as a vector \(\varvec{s}=\left[{\left|{a}_{{K}_{N}}\right|}^{2}, {\left|{a}_{{K}_{N}+1}\right|}^{2},\dots ,{\left|{a}_{{K}_{P}}\right|}^{2}\right]\) (see derivation in Supplementary Note 1). Calculating \(\varvec{s}\) for a given structured light field is generally not easy and can be treated as an inverse problem. Figure 1 illustrates the workflow of our system to retrieve the OAM spectrum \(\widehat{\varvec{s}}\). Accordingly, the whole picture of information processing in our scheme can be expressed as:

$$\begin{array}{c}\widehat{\varvec{s}}={\mathcal{F}}_{E}\left({\mathcal{F}}_{N}\left({\mathcal{F}}_{O}\left(E\left(\rho , \theta ,z\right)\right)\right)\right)\end{array},$$
(2)

with \({\mathcal{F}}_{E}\) and \({\mathcal{F}}_{O}\) the electronic regression function and the optical-diffraction-based wavefront transforming function respectively, and \({\mathcal{F}}_{N}\) the natural quadratic (nonlinear) function brought by the photoelectric effect of a sensor. The optical neural network is composed of five cascaded phase layers with a fixed spacing \(\left(40\lambda \right)\), and each layer is endowed with \(200\times 200\) programmable neurons. Once trained, these layers serve as a special mapping that projects the incident complex optical wave into a latent feature space. The transformed complex features are converted into measurable real signals by the photoelectric sensor. The real features are then fed into the shallow electronic fully connected layer (FCL) to obtain the spectrum (detailed in Methods). Note that earlier approaches involving all-optical or hybrid D2NNs are normally applied to classification tasks in machine vision, such as the MNIST database [35, 50]. The POAMS in this work is distinct from previous endeavors because OAM spectrum retrieval is in essence a regression problem in photonics, which is rather challenging using only optical neurons due to the lack of nonlinearity. In this regard, the POAMS represents a new physical ‘smart sensor’ [38] for structured light processing, in addition to recent demonstrations in fiber nonlinearity compensation [51] and optical computational imaging [43].
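As a concrete illustration of Eq. (1) and of how training/test samples can be synthesized, the following minimal NumPy sketch builds a multiplexed beam with random weights on \(\ell =-10\dots 10\) together with its ground-truth spectrum vector \(\varvec{s}\); the grid size, window and beam waist here are illustrative assumptions rather than the exact settings used in this work.

```python
import numpy as np
from scipy.special import genlaguerre

def lg_mode(ell, rho, theta, w0=1.0, p=0):
    # Laguerre-Gaussian mode LG_{p,ell} at the beam waist (z = 0), unnormalized
    r2 = (rho / w0) ** 2
    radial = (np.sqrt(2) * rho / w0) ** abs(ell) * genlaguerre(p, abs(ell))(2 * r2)
    return radial * np.exp(-r2) * np.exp(1j * ell * theta)

n, ells = 200, np.arange(-10, 11)            # grid size and TC range (assumed)
x = np.linspace(-4, 4, n)
X, Y = np.meshgrid(x, x)
rho, theta = np.hypot(X, Y), np.arctan2(Y, X)

rng = np.random.default_rng(0)
a = rng.standard_normal(len(ells)) + 1j * rng.standard_normal(len(ells))
a /= np.linalg.norm(a)                       # enforce sum_l |a_l|^2 = 1

modes = [lg_mode(l, rho, theta) for l in ells]
modes = [m / np.sqrt((np.abs(m) ** 2).sum()) for m in modes]  # unit-norm modes
E0 = sum(w * m for w, m in zip(a, modes))    # input structured light, Eq. (1)
s = np.abs(a) ** 2                           # ground-truth OAM spectrum vector
```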

Results and analysis

The proposed system architecture is shown in Fig. 1, where we craft the OAM spectrum analyzer through iterative training with the error backpropagation technique. The objective function can be expressed as:

$$\begin{array}{c}\underset{{\theta }_{o},{ }{\theta }_{E}}{\text{min}}\mathcal{L}\left(\varvec{s}, \widehat{\varvec{s}};{\theta }_{o}, {\theta }_{E}\right)+\mathcal{C}{\parallel{\theta }_{o}\parallel}_{2}^{2}+\mathcal{C}{\parallel{\theta }_{E}\parallel}_{2}^{2}\end{array},$$
(3)

where \(\mathcal{L}\left(\varvec{s}, \widehat{\varvec{s}};{\theta }_{o}, {\theta }_{E}\right)\) refers to the loss function comparing the processor’s output \(\widehat{\varvec{s}}\) and the ground truth \(\varvec{s}\), and \({\theta }_{o}, {\theta }_{E}\) are the optical and electronic neurons. The efficient training of hybrid networks is notoriously challenging [50]: though the parameters \({\theta }_{o}\) and \({\theta }_{E}\) are updated simultaneously, their gradients are initially unbalanced, which leaves \({\theta }_{o}\) nearly static. We introduce two additional hyperparameters (see Eq. (17) in Methods) for the smooth convergence of the hybrid model, as shown in Fig. 2(a). The latter two terms with constant \(\mathcal{C}\), known as \({L}_{2}\) regularization, are added as penalties to prevent overfitting and to increase model parameter sparsity. More training details are in Supplementary Note 2.
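For concreteness, a minimal training-step sketch of the objective in Eq. (3) is given below, assuming `model` implements the hybrid forward pass (a matching sketch appears in Methods). The summed MSE loss mirrors the per-batch loss in Fig. 2(a), and the optimizer’s `weight_decay` stands in for the two \(L_2\) penalty terms with constant \(\mathcal{C}\); the learning rate and decay value are assumptions.

```python
import torch

criterion = torch.nn.MSELoss(reduction="sum")   # loss summed over one batch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

def train_step(E_in, s_true):
    # One joint update of optical (theta_o) and electronic (theta_E) neurons
    optimizer.zero_grad()
    s_hat = model(E_in)                         # predicted OAM spectrum
    loss = criterion(s_hat, s_true)             # L(s, s_hat) in Eq. (3)
    loss.backward()                             # gradients reach both parts
    optimizer.step()
    return loss.item()
```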

The two adjacent loss curves in Fig. 2(a) validate that the model is not overfitting and that training converges after dozens of epochs. The optical processing part of our converged model is presented in Fig. 2(b). The resultant five diffractive layers work together to transform implicit OAM information into hierarchical features in the detector plane. Note that these layers are not the same as those of Fig. 1 (trained with different hyperparameters and datasets), and the following test results are all based on the system in Fig. 2(b) instead of Fig. 1. Inspired by the beam steering effect induced by phase gradients (e.g. a focusing lens), we analyze the fifth layer’s gradient values in Fig. 2(b6) to show straightforwardly, from the perspective of optics, what the diffractive layers are actually doing. It turns out that as the structured light propagates and interacts with these layers consecutively, the spatial wavevectors are mixed sufficiently in an ‘intelligent’ manner such that the light fields are redirected towards different regions of the sensor plane. We repeat the training process several times under different hyperparameters and conclude that in most cases the incident optical field is transformed into a speckle-like pattern, which contains high-dimensional features related to topological charges (TCs). Yet, an interesting case we obtain is the scheme sketched in Fig. 1: the optical part can ‘split’ the input beam into two main spatial lobes, where one lobe determines the OAM spectrum components with positive TCs and the other determines those with negative TCs (see Supplementary Video 1). This phenomenon indicates that our optical neural network is actually transforming the wavefront of the structured field. In this way, our scheme naturally avoids the ambiguity caused by degenerate intensity patterns, which confronted earlier works using intensity-based DL recognition algorithms [27, 28, 30, 52].

Fig. 2

Training results and experimental data collection. a The loss curves of the training set and validation set versus training epochs. The learning rate is tuned dynamically every 15 epochs. The POAMS converges after dozens of epochs and the inset indicates that the neural network is not overfitting. The loss value is the sum over 300 (one batch) samples. b Final designs of the optical diffractive network. (b1 - b5) Five cascaded diffractive layers with a fixed distance of \(40{\uplambda }\) between two successive layers. (b6) The gradient distribution of the 5th layer, shown as a color-encoded gradient map. (b7) Phase value distributions of the 5 diffractive layers. c The optical setup for generating experimental structured light. CW laser, continuous-wave laser; BE, beam expander; HWP, half-wave plate; BS, beam splitter; SLM, spatial light modulator; P, polarizer; L1, L2, lenses. For each data sample, we reconstruct the complex optical fields using the four-step phase-shift method as depicted in the inset (I1, I2, I3, I4). See details in Supplementary Note 1. d Four selected reconstruction results based on c, which are, from left to right: single mode (TC = 5), multiplexed (mul.) mode (TC = \(-\)4, \(-\)1, equal weights), mul. mode (TC = \(-\)4, \(-\)3, 2, 5, equal weights), and mul. mode (TC = \(-\)10 ~ 10, random weights)

Furthermore, we calculate the statistical distributions of the optical (Fig. 2(b7)) and electronic (Supplementary Fig. S3) neurons. All optical neurons are initialized with the value \(\pi\). After training, though most phase values remain close to \(\pi\), especially for the first layer, the gradual trend from the 1st layer to the 5th layer reflects that the interaction region between each layer and the local optical field expands. This quantitative observation not only indicates that the optical part is mixing the wavevectors towards the final target but also shows that the values of both optical and electronic neurons are sparse. In neural network studies, parameter sparsity benefits feature selection and model interpretability [53]. In optics, this sparsity could also ease the physical fabrication/implementation of our engineered layers.

Next, we present numerical and experimental results from the POAMS for different tasks. Though the optical layers are implemented in silico, various structured light is experimentally generated as blind test samples to validate the reliability of the trained model. We first reconstruct the complex optical fields through the four-step phase-shift method using the experimental setup in Fig. 2(c), which is also adopted in Refs. [10, 54] for OAM spectrum detection (see Supplementary Note 1 and Supplementary Note 3 for details and comparison). Note the setup in Fig. 2(c) is not used for implementing the optical layers; instead, it generates the various experimental test sets. Some representative results are depicted in Fig. 2(d). We can clearly see the intensity and phase imperfections brought by the nonideal laser source, possible astigmatism and distortion, sensor noise, and other sources of error in our setup. Even so, the inference results from the processor are encouraging, as we will exemplify in Fig. 3.

We first investigate the blind test performance on single (pure) vortex modes. We average the results from 30 repeated experiments and show them in Fig. 3(a). The dominant diagonal values indicate that the processor outputs nearly perfect OAM spectra for experimental single-mode inputs, with only a little deviation for Gaussian modes. The processor is then employed to measure the spectra of multiplexed OAM beams, and two typical results based on 30 repeated experiments are illustrated in Fig. 3(b), where sample 1 denotes a multiplexed beam with equal weights and sample 2 a beam with random weights on the OAM bases (see more in Supplementary Video 2). We can see a decent match between the output results and ground truths, even for very complicated OAM distributions. To evaluate the measured results quantitatively, we calculate the R-squared (R²) and mean squared error (MSE) values between the outputs and the corresponding ground truths, as shown in Fig. 3(c). The average R² values for single modes, multiplexed modes with equal weights, and multiplexed modes with random weights are 0.9924, 0.9268 and 0.8409, respectively, while the averaged MSE values are \(2.878\times {10}^{-4}\), \(4.822\times {10}^{-4}\) and \(1.617\times {10}^{-4}\) respectively, validating the success of the POAMS in detecting OAM spectra. Interestingly, the R² metric reveals that the model performs better on single modes while the MSE metric shows better results on multiplexed modes with random OAM weights. This can be partly explained by generalization behavior: the MSE metric and simulated multiplexed modes are employed as the loss function and the training dataset respectively. Beyond that, we find the hyperparameter \(T\) (also called the temperature value) at the readout layer can sharpen/smoothen the OAM output (see Methods, Eq. (8)). To further explore the influence of \(T\), we train another 11 models with different \(T\) values (from 0.01 to 1, logspace, each model trained for 60 epochs) and present the averaged MSE values on different experimental datasets in Fig. 3(d). Indeed, \(T\) impacts the convergence of the models as well as the final performance. With an optimized value, e.g. \(T\approx {10}^{-1.2}\), one can achieve better results on multiplexed modes, single modes, or a mixture of them.
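For reference, the two metrics used above can be computed as follows, assuming the standard definitions of MSE and R² over the 21-component spectrum vectors.

```python
import numpy as np

def spectrum_metrics(s_hat, s_true):
    # MSE and R-squared between a measured spectrum and its ground truth
    mse = np.mean((s_hat - s_true) ** 2)
    ss_res = np.sum((s_true - s_hat) ** 2)
    ss_tot = np.sum((s_true - s_true.mean()) ** 2)
    return mse, 1.0 - ss_res / ss_tot
```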

In addition, once the complex spectrum (weights \({\left|{a}_{\ell}\right|}^{2}\) and relative intermodal phases \({\varphi }_{\ell}\)) is reconstructed, the complex OAM state can be fully attained via the superposition principle, enabling analysis of in-depth optical properties (e.g. beam quality M², wavefront at any longitudinal distance) [11, 13, 55]. We extend our processor to regress the OAM complex spectrum by retraining a hybrid neural network with two parallel FCLs, which are responsible for reading out the weights and relative phases respectively. Note this extension is readily implementable without sacrificing the system’s speed and energy advantages. The blind test results in Fig. 3(e) - (f) on simulated structured light are quite promising. The obtained diffractive layers and more results are shown in Supplementary Video 3.
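A hedged sketch of such a two-headed readout is given below; the \(\tanh\) bounding of the phase head and the temperature value are our assumptions, not necessarily the exact design used here.

```python
import torch
import torch.nn as nn

class ComplexSpectrumReadout(nn.Module):
    # Two parallel FCLs on the flattened sensor intensities: one head
    # regresses the weights |a_l|^2, the other the intermodal phases.
    def __init__(self, n_in, n_tc=21, T=0.1):
        super().__init__()
        self.fc_w = nn.Linear(n_in, n_tc)    # weight head
        self.fc_p = nn.Linear(n_in, n_tc)    # relative-phase head
        self.T = T

    def forward(self, feats):
        w = torch.softmax(self.fc_w(feats) / self.T, dim=-1)   # sums to 1
        phi = torch.pi * torch.tanh(self.fc_p(feats))          # bound to (-pi, pi)
        return w, phi
```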

Fig. 3

OAM spectrum results for different structured light. a The results of experimental single modes with TC from \(-\)10 to 10. The horizontal axis denotes different modes under test and the vertical axis represents the measured OAM spectrum. b Two selected results of experimental mul. modes. #1: mul. mode with equal weights. #2: mul. mode with random weights. Results of (a) and (b) are calculated based on 30 repeated experiments. Error bars represent standard deviations. c Quantitative metrics to assess OAM spectrum measurement performance on different experimental datasets. Horizontal axis: index of each sample. R²: R-squared. MSE: mean squared error. d Model performance comparison on different datasets versus temperature value \(T\). Each MSE value is calculated as the average over the whole experimental dataset. Each \(T\) corresponds to a trained model. The opaque cyan region means that the models with corresponding \(T\) values did not converge during training. The vertical gray line marks the model in Fig. 2a - b. e - f OAM complex spectrum on a simulated mode. Left: weights. Right: relative phases

Robustness against several adverse effects

In realistic circumstances, incident structured light is prone to distortion by many uncontrollable adverse factors. Consequently, pronounced channel crosstalk (or OAM redistribution) arises, which is detrimental to practical communication links [1, 56]. Therefore, the robustness of an OAM spectrum detection system is of great significance [57]. Here, we quantitatively analyze how the POAMS reacts when facing spatial dislocations, including transverse rotation (TR), longitudinal shift (LS), transverse shift (TS) and angular shift (AS), as well as atmosphere turbulence (AT), as illustrated in the left panels of Fig. 4. The system is robust against these conditions if it can output OAM spectra matching the distorted ground truths. We evaluate our model on diverse test sets including pure vortex modes and multiplexed modes with equal or random intermodal weights.

Transverse rotation

It usually occurs when the detection module is inadvertently rotated. Additionally, as the optical layers in Figs. 1 and 2(b) possess no rotational symmetry, it is necessary to explore whether the POAMS is rotation-invariant. As shown in the right panel of Fig. 4(a), when the rotation angle varies from \(-\pi\) to \(\pi\), the averaged MSE values barely change, remaining within \(10^{-5}\)~\(10^{-4}\). Some randomly selected spectrum results are displayed in the inset, implying that the POAMS exhibits strong robustness against TR.

Longitudinal shift

Without changing the OAM spectrum, a distance change between the generation and detection modules (i.e. longitudinal shift) deforms the incident wavefront via the Gouy phase. Meanwhile, due to the divergent nature of propagating beams, LS also (de)magnifies the beam width. As shown in Fig. 4(b), when changing the LS from 0 to \(1.0{z}_{R}\) (Rayleigh range), the output spectra remain accurate, demonstrating the distance-invariance of the POAMS. It is worth mentioning that, like any other optical system, our model has an effective entrance pupil, which can be observed from the interaction regions of the diffractive network as depicted in Fig. 2(b1). When the incident beam width exceeds the effective entrance pupil, there may be insufficient modulation/processing and the processor loses efficacy.

Transverse shift & Angular shift

Another crucial factor is misalignment between the generation and detection modules due to transverse or angular shift [58], as shown in the left panels of Figs. 4(c) and (d). It is an essential hurdle because it is unavoidable in realistic setups. Unlike TR and LS, which do not affect the ground-truth OAM spectra, misalignment induces distinct OAM spectrum changes [57]. Accordingly, our preliminary results on single modes show a mismatch between the outputs and ground truths; specifically, the induced sidelobes of single-mode spectra are suppressed after the softmax output layer. To compensate for such mismatch, we leverage the flexibility of the electronic neurons of the hybrid processor, i.e. we conveniently retrain the electronic neurons (fine-tuning) with another 16,000 distorted data samples (8,000 for each) while keeping the optical layers fixed. The blind test results are presented in Figs. 4(c) and (d). Owing to the TR robustness, we study TS varying from \(-1\) to 1 (in beam width units) and AS varying from \(-9.1\times {10}^{-3}\) to \(9.1\times {10}^{-3}\) (in rad) only in the \(xoz\) plane. The symmetric curves indicate identical performances.
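In code, this fine-tuning amounts to freezing the optical parameters and retraining only the electronic readout; the sketch below follows the naming of the forward-model sketch in Methods, and the data loader and learning rate are assumptions.

```python
import torch

for theta in model.theta:                    # freeze the diffractive layers
    theta.requires_grad = False
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

for E_in, s_true in distorted_loader:        # e.g. 16,000 distorted samples
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(E_in), s_true, reduction="sum")
    loss.backward()                          # gradients reach only the FCL
    optimizer.step()
```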

Atmosphere turbulence

Besides spatial imperfections, another critical factor that distorts structured light during propagation is atmosphere turbulence (AT) [59]. To demonstrate the corresponding robustness, we put the POAMS into different AT environments. Specifically, the modified von Kármán model is employed to generate random phase plates, where the outer and inner scales of turbulence are set to 10 m and 0.01 m respectively. The AT refractive index structure parameter \({C}_{n}^{2}\) varies from \({10}^{-4.5}\) to \({10}^{-3}\) \({\text{m}}^{-2/3}\), and five different phase plates are used to accumulate a 500 m propagation distance in turbulence. Like TS and AS, AT also changes the OAM spectrum, so we again fine-tune the electronic weights. Test results from the retrained model are shown in the right panel of Fig. 4(e).
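For readers wishing to reproduce such distortions, the sketch below generates one FFT-based random phase screen from the modified von Kármán spectrum. It is parameterized by the Fried parameter \(r_0\), related to \(C_n^2\) through \(r_0=(0.423{k}^{2}{C}_{n}^{2}\varDelta z)^{-3/5}\) for a plane wave; the normalization follows the standard FFT method and is not necessarily the authors’ exact implementation.

```python
import numpy as np

def von_karman_phase_screen(n, dx, r0, L0=10.0, l0=0.01, rng=None):
    # One random phase screen (in rad) via the FFT method with the
    # modified von Karman phase spectrum (outer scale L0, inner scale l0).
    rng = rng or np.random.default_rng()
    df = 1.0 / (n * dx)                              # frequency grid spacing
    f = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    fm, f0 = 5.92 / (2 * np.pi * l0), 1.0 / L0       # inner/outer cutoffs
    psd = 0.023 * r0 ** (-5 / 3) * np.exp(-(fr / fm) ** 2) \
          / (fr ** 2 + f0 ** 2) ** (11 / 6)          # phase power spectrum
    psd[n // 2, n // 2] = 0.0                        # remove the piston term
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    spectrum = noise * np.sqrt(psd) * df
    screen = np.fft.ifftshift(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return screen.real * n ** 2                      # phase screen in radians
```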

Fig. 4

Robustness analysis of the POAMS against adverse conditions. Left panel: schematic diagrams of different adverse conditions. In the right panel, different curves represent the MSE values between POAMS outputs and ground truths after distortions on different datasets. Inset: randomly selected OAM spectrum results (TC range: \(-\)10 ~ 10) under corresponding adverse conditions. Every MSE value on curves is obtained from the average of 21 test modes. The test models in (c), (d) and (e) are fine-tuned with new electronic neurons to compensate for potential mismatches

To sum up, the POAMS exhibits considerable robustness against the above five adverse effects, as shown by the curves in Fig. 4. This robustness further verifies the wavefront-transforming nature of the optical diffractive neural network: our system is actually learning the global OAM information hidden in phase structures rather than memorizing local trivial features. The analysis of LS manifests a large tolerance of the system’s effective entrance pupil, which benefits high efficiency and signal-to-noise ratio. As for TS, AS and AT, the OAM spectrum changes after their distortions; advanced fine-tuning techniques in neural networks help us maintain the robustness of the POAMS against them without compromising other advantages. Such robustness can effectively improve the practicability of OAM-based communication links and can inspire novel applications of vortex beams; e.g. the POAMS could act as a novel sensor for fast AT prediction [60]. Besides, we further investigate the robustness of the hybrid model itself, including detector noise, interlayer distances and out-of-range mode inputs, and provide extended data in Supplementary Note 4.

Model interpretation

As shown in Fig. 5(a), a ubiquitous convolutional neural network (CNN) is composed of several convolutional layers for extracting features and FCLs for combining the learned features. Despite the popularity of CNN-enabled breakthroughs, a criticism often raised is their ‘black-box’ nature, i.e. it is hard to explain why a given input produces a corresponding output [61]. On the other hand, clearly understanding how neural networks work and what computations they perform can help us improve the system. Otherwise, model development is reduced to trial-and-error, and misleading results may arise without warning or explanation when models collapse [62,63,64]. To date, researchers have developed various techniques to visualize CNN working principles, e.g. t-SNE [65], CAM [66], and Grad-CAM [63]. However, the interpretation of hybrid optoelectronic neural networks remains elusive. With the aid of optical diffraction theory [35] and previous visualization methods [62], here we propose a method that (1) analyzes the complex features behind each optical layer and (2) connects the high-dimensional features in the sensor plane with the final OAM spectrum results.

For the POAMS, we treat the diffractive layers as the CNN’s convolutional layers and also investigate the interaction between the last diffractive layer and the readout layer. Specifically, we first present the complex optical fields behind each diffraction-modulation unit to see how the incident structured light is processed optically. Figure 5(b) elucidates the hierarchical features that the optical neural network is extracting. Though the features may seem unintelligible to us, they are critical to the network. Stated differently, the blended OAM information of structured light is transformed into easily separable high-dimensional features in the latent space, so as to be globally regressed to the model outputs. From our theoretical model in Methods, the latent space in the detector plane plays a significant role in communicating complex optical signals with real electronic signals. On the one hand, complex optical signals become real-valued, greatly reducing parametric complexity. On the other hand, the compressed features enable us to interpret how each TC number connects to spatial regions in the detector plane. In particular, here we propose a method akin to occlusion sensitivity analysis in neural network studies [62], in accordance with the workflow in Fig. 5(c) and the following steps (a minimal code sketch follows the list):

i) Utilize a window ahead of the detector that leaks only a portion of the signals into the readout layer and occludes the rest.

ii) Monitor the resulting OAM spectrum output and pick the OAM component that sticks out from the others.

iii) Repeat step ii) for many (e.g. 200) different input modes and decide one OAM component by maximum likelihood.

iv) Slide the window with a fixed stride across the whole plane and repeat steps i)-iii) to obtain a feature map.
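A minimal sketch of this scan is given below; `optical_forward` and `electronic_forward` are hypothetical handles to the two halves of the trained model, and the window/stride follow Fig. 5(c).

```python
import numpy as np

def characteristic_graph(model, test_modes, n=200, n_tc=21, win=2, stride=1):
    # Occlusion-style scan: let only one window of sensor intensity reach
    # the readout layer and record which OAM component dominates the output.
    votes = np.zeros((n, n, n_tc))                   # per-pixel TC votes
    for field in test_modes:                         # e.g. 200 random inputs
        intensity = model.optical_forward(field)     # |E|^2 on the sensor
        for y in range(0, n - win + 1, stride):
            for x in range(0, n - win + 1, stride):
                masked = np.zeros_like(intensity)    # occlude everything...
                masked[y:y + win, x:x + win] = intensity[y:y + win, x:x + win]
                spectrum = model.electronic_forward(masked)  # ...but one window
                votes[y:y + win, x:x + win, np.argmax(spectrum)] += 1
    return votes.argmax(axis=-1)                     # maximum-likelihood TC map
```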

Fig. 5

Visualization of the POAMS. a Architecture comparison between a conventional CNN (left) and the POAMS (right) for model interpretation. The red dashed rectangles represent the latent space connecting electronic convolutional layers (or optical diffractive layers) and FCLs. b Visualization of the complex optical features behind each diffractive layer. Intensity distributions are normalized. It can be seen that the diffractive layers mix and redirect the spatial wavevectors ‘intelligently’. c The visualization workflow for searching the specific spatial regions in the detector plane that connect to the corresponding TC numbers. Window size (white box size): 2 \(\times\) 2, stride size: 1. d Evolution of the obtained characteristic graphs during training. Colormap encoded with TC number. The model at epoch 108 is the best model (with the lowest validation loss)

Here, we define such a map, which identifies the connection between feature spatial regions and exact OAM components, as the characteristic graph. Without occlusion, the model results show high consistency with the ground truth. With a sliding window, we obtain the graphs shown in Fig. 5(d). The characteristic graph exhibits several interesting properties. First, from the evolution of these graphs in Fig. 5(d), the distribution converges after adequate training (~ 45 epochs), i.e. the general structure remains stable, with only tiny changes along the evolution. Second, the distribution of the graph is globally invariant against the window size. Third, the graph tends to become complete during training, meaning that it contains all 21 OAM components. More importantly, we find that once the graph is obtained, we are able to (1) determine the existence of certain OAM components and (2) reconstruct the OAM spectra quite accurately, by detecting only the corresponding spatial regions in the graph. In other words, according to the graph, one can obtain the OAM information with a much smaller scanning region and fewer measurement steps when using a single-pixel detector (photodiode), or with a much smaller detector size when using an array, which improves detection efficiency by up to 25-fold (see more details and discussions in Supplementary Note 5, Figs. S7 to S11). This liberates us from 2D full-screen demodulating devices [12] and detectors [10, 11] when measuring OAM, pushing the speed to the limit: one can use only single-pixel detectors to obtain the OAM information. We should note that although in modal decomposition approaches like ref. [12] one can also use a single-pixel detector to record the intensity signals, the bottleneck is actually the SLM/DMD ahead of the detector used for loading the demodulating mask; therefore one cannot exploit the high refresh rate of the 1D photodetector in practice for that approach. Besides, the characteristic graph also reflects that the optically extracted features are sparse in the detector plane.

In short, visualization of this model helps us better understand the interaction mechanisms between layers. Especially, we can determine which part of the intensity signals in the detector plane contributes to a certain OAM component. In this regard, the POAMS is particularly suitable for structured light detection: it optically transforms the corresponding wavefronts, extracts decisive features, decouples the mixed TC information, and electronically combines the global features into the final results. Compared to all-optical or all-electronic neural networks, the POAMS reported in this work strikes a remarkable balance among model expressivity, application scenario, inference speed and energy efficiency.

Discussions

Model analysis

The above results clearly show the exceptional power of the hybrid optoelectronic processor for OAM spectrum detection. First, the POAMS contains 0.2 million optical neurons and 5021 electronic neurons, leading to remarkable speed and energy efficiency. In other words, the optically achieved \(7.68\times {10}^{10}\) computation operations cost little to no energy and are performed nearly at the speed of light, while the electronically achieved \(1.26\times {10}^{7}\) operations cost relatively low energy on a moderate computer. Second, compared with existing OAM spectrum measurement methods, the POAMS functions in a direct single-shot manner, obviating the need for strict alignment for mode projection, a reference wave for phase retrieval, etc. On the one hand, this promises a compact and elegant system (approximately \(100\lambda \times 100\lambda \times 200\lambda\) with \(\lambda\) the working wavelength); on the other hand, it saves time and energy (see Supplementary Note 3 for details).
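As a hedged back-of-the-envelope check of the stated optical figure, treating each of the six free-space diffraction steps as a dense complex matrix-vector product over \(200\times 200\) neurons, with one complex multiply-accumulate counted as 8 real operations (our accounting assumption), reproduces the quoted count:

```python
neurons = 200 * 200                  # optical neurons per layer
optical_ops = 6 * 8 * neurons ** 2   # 6 diffraction steps, 8 real ops per MAC
print(f"{optical_ops:.2e}")          # 7.68e+10, matching the stated figure
```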

Regarding neural network convergence, despite fruitful discussions on all-optical D2NN training, instructional analysis of hybrid optoelectronic neural networks is elusive [50], especially in terms of a smooth and direct training pipeline, which hampers the advanced applications of these models. In this work, we identify an imbalance between the optical and electronic neurons. Specifically, the gradient-descent-based algorithm updates the electronic neurons effectively while the optical parameters stay static, thus hindering the successful updating of the joint model. By calculating the backpropagated errors analytically (see Methods), we introduce two hyperparameters to augment the errors with respect to the optical neurons, yielding a synchronized network parameter update.

Implementation

The overall system is composed of three parts: structured light generation, optical diffractive layers and the electronic readout layer. For the first part, we experimentally generate various optical states as blind test sets to evaluate the POAMS. We emphasize that the generalization ability of the POAMS is highly satisfactory. Typically, applications of DL in optics are hindered by the need for massive labelled experimental data to drive the training of the network parameters, because collecting substantial experimental data in optics is prohibitively time-consuming and sometimes impractical [67]. However, in this work, even though the model is trained in silico using a limited simulated dataset (only multiplexed modes with random OAM weights), the obtained processor is capable of predicting experimental single modes and multiplexed modes with equal as well as random weights collected via the setup in Fig. 2(c) (see results in Fig. 3 and Supplementary Video 2). Training and test set information is also presented in Supplementary Note 1 to fully reflect this generalization ability. Freedom from experimental training data and great generalization ability are desirable for real-world applications that fast-decode various unknown OAM beams. For the second part, we test the optical layers in simulation. Note that in addition to many insightful simulation studies on D2NN [40, 41, 68,69,70], experimental advances also demonstrate the feasibility of transferring in silico diffractive layers to real devices [35, 47,48,49], indicating that computer-optimized models are accurate and convincing. Note the working wavelength is decisive when physically implementing these layers. 3D-printed layers have been realized in the terahertz spectral range [35], the radio frequency range [48], and an acoustic wave scheme [47]; these are relatively large in size and thus easy to fabricate. Considering vortex beams are mostly investigated at optical frequencies, e.g. 532 nm, the fabrication of optical neurons is harder. Even so, given sufficient laboratory resources, we can resort to multi-step photolithography-etching techniques [49], five cascaded SLMs [71], (multi-reflective) metasurfaces [72,73,74], or micro-structured liquid crystal devices [75, 76] to optically extract TC features. To further improve reliability, in this work we conduct the free-space propagation simulation based on the angular spectrum method with zero padding [77], which has been proven accurate enough in holography [78]. For the third part, since the electronic FCL is shallow and only covers limited computational operations, we use a moderate computer that is qualified to achieve high-speed processing. More importantly, the FCL can mitigate possible discrepancies between experimental results and their simulation counterparts through the adaptive training technique [42]. Specifically, given 5 fabricated optical layers, one can compensate for imperfections arising from fabrication or the experimental setup, e.g. misalignment, by only fine-tuning the electronic FCL without changing any devices, as demonstrated in the robustness section above. This is another advantage of our hybrid system. The codes and data for the whole implementation, to train/test the system and to develop new processors, are available at https://github.com/hao-focus/ModelForOAMLight.

Future improvement

The presented POAMS is scalable and can be further improved. For example, precise detection of structured light in the OAM basis in other optical spectra can be achieved on this platform. Also, one can easily extend the OAM detection range by adjusting the number of neurons in the last layer, without compromising other advantages. To advance the results reported in Figs. 3 and 4, one can scale up the training dataset by adding more diverse data samples. Note that we do not have to train the network from scratch; the transfer learning technique [79] can be employed to fine-tune the POAMS. In this work, we limit our attention to scalar structured beams and the engineered diffractive layers are polarization-insensitive. A vectorial counterpart is also feasible [70, 72], which could inspire efficient vector structured light detection. In addition to the azimuthal mode spectrum discussed in this work, crafting the POAMS into a radial mode spectrum analyzer is relevant future work [80]. In that case, one would expect to obtain two characteristic graphs, one for \(\ell\) (azimuthal index) and another for \(p\) (radial index). Since structured light is rich in members [81, 82], we expect to extend the POAMS to other beams generally, for instance, tailoring another optoelectronic network to measure Bessel beams, fractional-order OAM beams, perfect vortex beams, among many others. But one should note that the trained POAMS in this work cannot be used to predict other types of beams; instead, one may add more training data to fine-tune the POAMS or start from scratch for a new prediction task. Thanks to the proposed model interpretation method, one does not have to use two-dimensional cameras to detect the intensity signals, which unlocks the potential of such hybrid systems in processing low-intensity vortex beams, such as in the quantum domain [14]. Besides, the POAMS can be recognized as a solution to a regression problem in essence, and it could be extended to solve similar problems in other fields that necessitate high speed, accuracy and robustness; for example, we may adapt it to image generation [83], image super-resolution, natural language generation, etc. Notably, beyond these applications, the proposed model visualization method is also ready to be extended to stimulate new insights and phenomena through the combination of hardware, software and algorithms.

Conclusion

In summary, we have demonstrated a compact optoelectronic processor for structured mode analysis, which can directly detect the OAM spectrum of structured light in a fast, accurate and robust manner. Our processor allows one to immediately obtain the TC information without any interference measurements or repetitive steps, empowered by its hybrid computing nature. We sharply ease the workload of deep-learning models in collecting massive experimental training data while obtaining a hybrid model with great generalization ability. We validate the performance on experimental and simulated modes with diverse (complex) OAM spectra, even under nonideal conditions such as atmosphere turbulence and misalignment. In addition, we observe interesting connections between TC numbers and the optical neurons, and consequently propose a universal model interpretation paradigm for hybrid neural networks, which not only helps us understand the overall system but also further improves the detection efficiency. This study highlights the advantages of fusing optics and electronics to settle photonics problems through physical smart sensors. More specifically, this work closes a practical gap in OAM-based high-speed information processing and facilitates studies of both OAM and optoelectronic neural networks.

Methods

Forward propagation model

In addition to Eq. (2) and Fig. 1, here we provide a detailed illustration of the forward model. The optical computing part is based on the Rayleigh-Sommerfeld diffraction principle [35]. Suppose there are \(M\) diffractive layers in the system and the vectorized output field after the modulation of the \(p\)th layer is denoted as \({\varvec{E}}_{p}\), \(p=1,2,3,\dots,M\), whose dimensions are \({n}^{2}\times 1\) with \(n\) the neuron number along one direction of the layer. Then the complex transform between two modulation layers can be written as

$$\begin{array}{c}{\varvec{E}}_{p}=\text{d}\text{i}\text{a}\text{g}\left({\varvec{T}}_{p}\right){\varvec{D}}_{p}{\varvec{E}}_{p-1}\end{array},$$
(4)

where \(\text{diag}(\cdot)\) represents the diagonalization of a vector, \({\varvec{T}}_{p}={\varvec{\alpha }}_{p}\exp\left(j{\varvec{\phi }}_{p}\right)\) with \({j}^{2}=-1\) is the modulation function, and \({\varvec{D}}_{p}\) is the diffraction weight matrix related to the propagation distance and wavelength. Since only phase modulation is employed in our case, we set \({\varvec{\alpha }}_{p}\equiv 1\). We define \({\varvec{\phi }}_{p}=\alpha \pi \left[\sin\left(\beta {\varvec{\theta }}_{p}\right)+1\right]\), where the \(\sin\left(\cdot\right)\) function satisfies the periodicity condition of the argument and the hyperparameters \(\alpha\) and \(\beta\) facilitate the efficient update of these layers. Consequently, the output optical field in the sensor’s plane is

$$\begin{array}{c}{\varvec{E}}_{M+1}={\varvec{D}}_{M+1}\left({\prod }_{p=1}^{M}\text{d}\text{i}\text{a}\text{g}\left({\varvec{T}}_{p}\right){\varvec{D}}_{p}\right){\varvec{E}}_{0}\end{array},$$
(5)

with \({\varvec{E}}_{0}\) the incident structured light. A sensor then measures the intensity distribution \({\varvec{A}}_{0}\) of the output field based on the photoelectric effect, which also provides a quadratic nonlinearity and can be written as

$$\begin{array}{c}{\varvec{A}}_{0}={\varvec{E}}_{M+1}{\odot}{\varvec{E}}_{M+1}^{*}\end{array},$$
(6)

where \(\odot\) is the Hadamard product and \({\varvec{E}}_{M+1}^{*}\) represents the complex conjugate of the output field. The high-dimensional sparse TC features thus become real-valued and serve as inputs to the electronic readout layers. In our case, fully connected layers are used to regress the results. The weighted input of the \(q\)th layer can be formulated as

$$\begin{array}{c}{\varvec{z}}_{q}={\varvec{W}}_{q}{\varvec{A}}_{q-1}+{\varvec{B}}_{q}\end{array},$$
(7)

where \(q=1,2,3,\dots,N\), \({\varvec{W}}_{q}\) is the weight matrix between two layers, \({\varvec{A}}_{q-1}\) denotes the activation value of the \((q-1)\)th layer, and \({\varvec{B}}_{q}\) is the bias vector. We add the nonlinear function \(\sigma \left(\cdot\right)\) (Rectified Linear Units) to the electronic part and obtain the activation value \({\varvec{A}}_{q}=\sigma \left({\varvec{z}}_{q}\right)\). At the last layer, the neurons’ input is \({\varvec{z}}_{N}\). Considering that all weight components are non-negative and must sum to 1, we adopt the \(\text{softmax}(\cdot)\) function with a temperature parameter \(T\) to confine \({\varvec{z}}_{N}\),

$$\begin{array}{c}\widehat{\varvec{s}}=\text{s}\text{o}\text{f}\text{t}\text{m}\text{a}\text{x}\left(\frac{{\varvec{z}}_{N}}{T}\right)\end{array},$$
(8)

where \(\widehat{\varvec{s}}\) is the OAM spectrum output (or \({\varvec{A}}_{N}\)). Interestingly, the temperature \(T\) is normally set to 1, and a higher value leads to a softer probability distribution in knowledge distillation studies [84]. Here we find that a proper value of \(T\) (\(<1\)) can improve the robustness and performance of the POAMS: it successfully mitigates channel crosstalk, especially for single modes, via the ‘sharpening effect’ \(T\) brings [85]. Figure 3(d) clearly illustrates the influence of \(T\) on model performance.
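To make the forward model concrete, below is a minimal PyTorch sketch of Eqs. (4)-(8). The angular-spectrum propagator works in units of the wavelength with zero padding omitted for brevity, and the single-FCL readout is a simplification; the real readout’s layer sizes may differ. Note that initializing \({\varvec{\theta }}_{p}=0\) gives \({\varvec{\phi }}_{p}=11\pi \equiv \pi \;(\text{mod } 2\pi)\), consistent with the \(\pi\) initialization mentioned in the main text.

```python
import torch
import torch.nn as nn

def asm_propagate(E, dz, dx=1.0, wavelength=1.0):
    # Free-space propagation over distance dz via the angular spectrum method
    n = E.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz = 2 * torch.pi * torch.sqrt((1 / wavelength) ** 2 - FX**2 - FY**2 + 0j)
    return torch.fft.ifft2(torch.fft.fft2(E) * torch.exp(1j * kz * dz))

class POAMS(nn.Module):
    def __init__(self, n=200, layers=5, n_tc=21, T=0.1, alpha=11.0, beta=2.0):
        super().__init__()
        self.theta = nn.ParameterList(
            [nn.Parameter(torch.zeros(n, n)) for _ in range(layers)])
        self.fc = nn.Linear(n * n, n_tc)            # shallow electronic readout
        self.T, self.alpha, self.beta = T, alpha, beta

    def forward(self, E):                           # E: complex field (B, n, n)
        for theta in self.theta:
            E = asm_propagate(E, dz=40.0)           # 40-wavelength layer spacing
            phi = self.alpha * torch.pi * (torch.sin(self.beta * theta) + 1)
            E = E * torch.exp(1j * phi)             # phase-only modulation, Eq. (4)
        E = asm_propagate(E, dz=40.0)               # to the sensor plane, Eq. (5)
        I = (E * E.conj()).real                     # photoelectric intensity, Eq. (6)
        z = self.fc(I.flatten(1))                   # weighted input, Eq. (7)
        return torch.softmax(z / self.T, dim=-1)    # temperature softmax, Eq. (8)
```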

Error Backpropagation

To effectively drive the update of the POAMS parameters and minimize the loss function \(\mathcal{L}\left(\varvec{s}, \widehat{\varvec{s}}\right)\) in Eq. (3) of the main manuscript, we apply the error backpropagation algorithm, deriving the key derivatives via the chain rule. First, we explore the derivatives of the electronic layers and define the error at the \(i\)th layer as \({\varvec{\delta }}_{i}\equiv \partial \mathcal{L}/\partial {\varvec{z}}_{i}\). The error in the output layer can then be directly written as

$$\begin{array}{c}{\varvec{\delta }}_{N}=\frac{\partial \mathcal{L}}{\partial {\varvec{A}}_{N}}{\odot}{\text{softmax}}^{{\prime }}\left({\varvec{z}}_{N}\right)\end{array}.$$
(9)

The error \({\varvec{\delta }}_{q}\) in terms of the error in the next layer \({\varvec{\delta }}_{q+1}\) can be derived as

$$\begin{array}{c}{\varvec{\delta }}_{q}=\left({\varvec{W}}_{q+1}^{T}{\varvec{\delta }}_{q+1}\right){\odot}{{\upsigma }}^{{\prime }}\left({\varvec{z}}_{q}\right)\end{array}.$$
(10)

By combining Eq. (9) with Eq. (10) we can calculate the error \({\varvec{\delta }}_{q}\) for any layer (\({\varvec{\delta }}_{1}, {\varvec{\delta }}_{2}, \dots ,{\varvec{\delta }}_{N}\)). In this regard, the rate of change of the loss with respect to any weight and bias in the electronic layer can be obtained:

$$\begin{array}{c}\begin{array}{c}\frac{\partial \mathcal{L}}{\partial {\varvec{W}}_{q}}={\varvec{\delta }}_{q}{\varvec{A}}_{q-1}^{T}\\ \frac{\partial \mathcal{L}}{\partial {\varvec{B}}_{q}}={\varvec{\delta }}_{q }\end{array}\end{array}.$$
(11)

For the optical neurons, we first calculate the error in the sensor plane,

$$\begin{array}{c}\frac{\partial \mathcal{L}}{\partial {\varvec{A}}_{0}}={\left(\frac{\partial {\varvec{z}}_{1}}{\partial {\varvec{A}}_{0}}\right)}^{T}\frac{\partial \mathcal{L}}{\partial {\varvec{z}}_{1}}={\varvec{W}}_{1}^{T}{\varvec{\delta }}_{1}\end{array}.$$
(12)

Then to obtain the gradient of the \(p\)th diffractive layer, i.e.

$$\begin{array}{c}\frac{\partial \mathcal{L}}{\partial {\varvec{T}}_{p}}={\left(\frac{\partial {\varvec{A}}_{0}}{\partial {\varvec{T}}_{p}}\right)}^{T}\frac{\partial \mathcal{L}}{\partial {\varvec{A}}_{0}}\end{array},$$
(13)

we need to derive \(\partial {\varvec{A}}_{0}/\partial {\varvec{T}}_{p}\), which can be represented as

$$\begin{array}{c}\begin{array}{c}\frac{\partial {\varvec{A}}_{0}}{\partial {\varvec{T}}_{p}}=\frac{\partial {\varvec{A}}_{0}}{\partial {\varvec{E}}_{M+1}}\frac{\partial {\varvec{E}}_{M+1}}{\partial {\varvec{T}}_{p}}+\frac{\partial {\varvec{A}}_{0}}{\partial {\varvec{E}}_{M+1}^{*}}\frac{\partial {\varvec{E}}_{M+1}^{*}}{\partial {\varvec{T}}_{p}}\\ =2\text{R}\text{e}\left[\frac{\partial {\varvec{E}}_{M+1}}{\partial {\varvec{T}}_{p}}\text{d}\text{i}\text{a}\text{g}\left({\varvec{E}}_{M+1}^{*}\right)\right]\end{array}\end{array},$$
(14)

where \(\text{Re}(\cdot)\) denotes the operation of extracting the real part of complex values. Within Eq. (14), \(\partial {\varvec{E}}_{M+1}/\partial {\varvec{T}}_{p}\) can be easily formulated as

$$\begin{array}{c}\frac{\partial {\varvec{E}}_{M+1}}{\partial {\varvec{T}}_{p}}={\varvec{D}}_{M+1}\left({\prod }_{i=M}^{p+1}\text{diag}\left({\varvec{T}}_{i}\right){\varvec{D}}_{i}\right){\varvec{D}}_{p}\text{diag}\left({\varvec{E}}_{p-1}\right)\end{array}.$$
(15)

Substituting Eq. (15) into Eq. (14), and then Eqs. (14) and (12) into Eq. (13), we have

$$\begin{array}{c}\frac{\partial \mathcal{L}}{\partial {\varvec{T}}_{p}}={\left\{2\text{R}\text{e}\left[{\varvec{D}}_{M+1}\left({\prod }_{i=M}^{p+1}\text{d}\text{i}\text{a}\text{g}\left({\varvec{T}}_{i}\right){\varvec{D}}_{i}\right){\varvec{D}}_{p}\text{d}\text{i}\text{a}\text{g}\left({\varvec{E}}_{p-1}\right)\text{d}\text{i}\text{a}\text{g}\left({\varvec{E}}_{M+1}^{*}\right)\right]\right\}}^{T}{\varvec{W}}_{1}^{T}{\varvec{\delta }}_{1}\end{array}.$$
(16)

Considering that for the POAMS only phase modulation is applied, more specifically \({\varvec{T}}_{p}=\exp\left\{j\alpha \pi \left[\sin\left(\beta {\varvec{\theta }}_{p}\right)+1\right]\right\}\), we are able to obtain the final gradient formula for the optical parameters

$$\begin{array}{c}\begin{array}{c}\frac{\partial \mathcal{L}}{\partial {\varvec{\theta }}_{p}}=\frac{\partial {\varvec{T}}_{p}}{\partial {\varvec{\theta }}_{p}}\frac{\partial \mathcal{L}}{\partial {\varvec{T}}_{p}}\\ =\text{d}\text{i}\text{a}\text{g}\left\{\alpha \beta \pi \text{c}\text{o}\text{s}\left(\beta {\varvec{\theta }}_{p}\right)\right\}\text{d}\text{i}\text{a}\text{g}\left\{{je}^{j\alpha \pi \left[\text{s}\text{i}\text{n}\left(\beta {\varvec{\theta }}_{p}\right)+1\right]}\right\}\frac{\partial \mathcal{L}}{\partial {\varvec{T}}_{p}}\end{array}\end{array}.$$
(17)

In short, Eq. (11) and Eq. (17) represent the updates of the electronic and optical neurons respectively. In the training phase, owing to the nontrivial nonlinearity and mature training algorithms for electronic neurons, the calculated errors and gradients optimize \({\varvec{W}}_{q}\) and \({\varvec{B}}_{q}\) effectively. However, the backpropagated errors cannot update \({\varvec{\theta }}_{p}\) efficiently. In other words, when we first trained the POAMS from scratch, we found the optical part barely changed as training proceeded. For example, if every phase plate is initialized with the value \(\pi\), after ~ 30 epochs the loss drops but each phase plate still resembles a \(\pi\)-equiphase plate. This tells us that the electronic neurons are effectively trained while the optical neurons stay static (inefficient training); we call this the imbalance between the two types of neurons. To address this hurdle, we introduce \(\alpha\) and \(\beta\) to enhance the corresponding gradients as indicated in Eq. (17), yielding a successful update of the overall system. Empirically, we set \(\alpha =11\) and \(\beta =2\). Note the final phase distribution of each diffractive layer is the result of mod(\(\alpha \pi [\sin\left(\beta {\theta }_{p}\right)+1]\), \(2\pi\)), where mod(\(\cdot\), \(2\pi\)) denotes taking the remainder w.r.t. \(2\pi\).
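The gradient gain provided by this reparameterization can be verified numerically: autograd recovers \(\partial {\varvec{\phi }}_{p}/\partial {\varvec{\theta }}_{p}=\alpha \beta \pi \cos\left(\beta {\varvec{\theta }}_{p}\right)\) from Eq. (17), roughly a 69-fold amplification at \({\varvec{\theta }}_{p}=0\) for the chosen hyperparameters.

```python
import torch

alpha, beta = 11.0, 2.0
theta = torch.zeros((), requires_grad=True)
phi = alpha * torch.pi * (torch.sin(beta * theta) + 1)   # reparameterized phase
phi.backward()
print(theta.grad)   # tensor(69.1150) = alpha * beta * pi at theta = 0
```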

Availability of data and materials

The datasets, codes and several pretrained models for training and testing the POAMS are available at https://github.com/hao-focus/ModelForOAMLight.

Abbreviations

OAM: orbital angular momentum
DL: deep learning
D2NN: diffractive deep neural network
POAMS: processor for OAM spectrum
MSE: mean squared error
FCL: fully connected layer
TC: topological charge
CW: continuous-wave
BE: beam expander
HWP: half-wave plate
BS: beam splitter
SLM: spatial light modulator
P: polarizer
L1, L2: lenses
R²: R-squared
TR: transverse rotation
LS: longitudinal shift
TS: transverse shift
AS: angular shift
AT: atmosphere turbulence
CNN: convolutional neural network
DMD: digital micromirror device

References

1. Willner AE, Pang K, Song H, Zou K, Zhou H. Orbital angular momentum of light for communications. Appl Phys Rev. 2021;8:041312.
2. Yang Y, Ren Y, Chen M, Arita Y, Rosales-Guzmán C. Optical trapping with structured light: a review. Adv Photonics. 2021;3:034001.
3. Erhard M, Fickler R, Krenn M, Zeilinger A. Twisted photons: new quantum perspectives in high dimensions. Light Sci Appl. 2018;7:17146.
4. Xie Z, et al. Ultra-broadband on-chip twisted light emitter for optical communications. Light Sci Appl. 2018;7:18001.
5. Lin Z, Hu J, Chen Y, Brès C-S, Yu S. Single-shot Kramers-Kronig complex orbital angular momentum spectrum retrieval. 2022;ArXiv.2206.12883. Preprint at https://arxiv.org/abs/2206.12883.
6. Hickmann JM, Fonseca EJS, Soares WC, Chávez-Cerda S. Unveiling a truncated optical lattice associated with a triangular aperture using light’s orbital angular momentum. Phys Rev Lett. 2010;105:053904.
7. Lv Y, et al. Sorting orbital angular momentum of photons through a multi-ring azimuthal-quadratic phase. Opt Lett. 2022;47:5032–5.
8. Wen Y, et al. Spiral transformation for high-resolution and efficient sorting of optical vortex modes. Phys Rev Lett. 2018;120:193904.
9. Grillo V, et al. Measuring the orbital angular momentum spectrum of an electron beam. Nat Commun. 2017;8:15536.
10. Fu S, et al. Universal orbital angular momentum spectrum analyzer for beams. PhotoniX. 2020;1:19.
11. D’Errico A, D’Amelio R, Piccirillo B, Cardano F, Marrucci L. Measuring the complex orbital angular momentum spectrum and spatial mode decomposition of structured light beams. Optica. 2017;4:1350–7.
12. Schulze C, Dudley A, Flamm D, Duparré M, Forbes A. Measurement of the orbital angular momentum density of light by modal decomposition. New J Phys. 2013;15:073025.
13. Zhou H-L, et al. Orbital angular momentum complex spectrum analyzer for vortex light based on the rotational Doppler effect. Light Sci Appl. 2017;6:e16251.
14. Malik M, et al. Direct measurement of a 27-dimensional orbital-angular-momentum state vector. Nat Commun. 2014;5:3115.
15. Chen P, et al. Digitalizing self-assembled chiral superstructures for optical vortex processing. Adv Mat. 2018;30:1705865.
16. Forbes A, Dudley A, McLaren M. Creation and detection of optical modes with spatial light modulators. Adv Opt Photonics. 2016;8:200–27.
17. Zhang S, et al. Broadband detection of multiple spin and orbital angular momenta via dielectric metasurface. Laser Photonics Rev. 2020;14:2000062.
18. Xu C-T, et al. Tunable band-pass optical vortex processor enabled by wash-out-refill chiral superstructures. Appl Phys Lett. 2021;118:151102.
19. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–44.
20. Eraslan G, Avsec Ž, Gagneur J, Theis FJ. Deep learning: new computational modelling techniques for genomics. Nat Rev Genet. 2019;20:389–403.
21. Elmarakeby HA, et al. Biologically informed deep neural network for prostate cancer discovery. Nature. 2021;598:348–52.
22. Lu L, Jin P, Pang G, Zhang Z, Karniadakis GE. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nat Mach Intell. 2021;3:218–29.
23. Lim J, Ayoub AB, Psaltis D. Three-dimensional tomography of red blood cells using deep learning. Adv Photonics. 2020;2:1–9.
24. Ren Z, Xu Z, Lam EY. End-to-end deep learning framework for digital holographic reconstruction. Adv Photonics. 2019;1:1–12.
25. Genty G, et al. Machine learning and applications in ultrafast photonics. Nat Photonics. 2021;15:91–101.
26. Feng S, et al. Fringe pattern analysis using deep learning. Adv Photonics. 2019;1:1–7.
27. Giordani T, et al. Machine learning-based classification of vector vortex beams. Phys Rev Lett. 2020;124:160401.
28. Liu Z, Yan S, Liu H, Chen X. Superhigh-resolution recognition of optical vortex modes assisted by a deep-learning method. Phys Rev Lett. 2019;123:183902.
29. Wang H, et al. Deep-learning-based recognition of multi-singularity structured light. Nanophotonics. 2022;11:779–86.
30. Feng F, et al. Deep learning-enabled orbital angular momentum-based information encryption transmission. ACS Photonics. 2022;9:820–9.
31. Wang J, Fu S, Shang Z, Hai L, Gao C. Adjusted EfficientNet for the diagnostic of orbital angular momentum spectrum. Opt Lett. 2022;47:1419–22.
32. Wetzstein G, et al. Inference in artificial intelligence with deep optics and photonics. Nature. 2020;588:39–47.
33. Goi E, Zhang Q, Chen X, Luan H, Gu M. Perspective on photonic memristive neuromorphic computing. PhotoniX. 2020;1:3.
34. Shen Y, et al. Deep learning with coherent nanophotonic circuits. Nat Photonics. 2017;11:441–6.
35. Lin X, et al. All-optical machine learning using diffractive deep neural networks. Science. 2018;361:1004–8.
36. Feldmann J, et al. Parallel convolutional processing using an integrated photonic tensor core. Nature. 2021;589:52–8.
37. Rafayelyan M, Dong J, Tan Y, Krzakala F, Gigan S. Large-scale optical reservoir computing for spatiotemporal chaotic systems prediction. Phys Rev X. 2020;10:041037.
38. Wright LG, et al. Deep physical neural networks trained with backpropagation. Nature. 2022;601:549–55.
39. Zuo Y, et al. Optical neural network quantum state tomography. Adv Photonics. 2022;4:1–7.
40. Li J, Mengu D, Luo Y, Rivenson Y, Ozcan A. Class-specific differential detection in diffractive optical neural networks improves inference accuracy. Adv Photonics. 2019;1:1–13.
41. Kulce O, Mengu D, Rivenson Y, Ozcan A. All-optical information-processing capacity of diffractive surfaces. Light Sci Appl. 2021;10:25.
42. Zhou T, et al. Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit. Nat Photonics. 2021;15:367–73.
43. Luo Y, et al. Computational imaging without a computer: seeing through random diffusers at the speed of light. eLight. 2022;2:4.
44. Veli M, et al. Terahertz pulse shaping using diffractive surfaces. Nat Commun. 2021;12:37.
45. Qian C, et al. Performing optical logic operations by a diffractive neural network. Light Sci Appl. 2020;9:59.
46. Goi E, et al. Nanoprinted high-neuron-density optical linear perceptrons performing near-infrared inference on a CMOS chip. Light Sci Appl. 2021;10:40.
47. Weng J, et al. Meta-neural-network for real-time and passive deep-learning-based object recognition. Nat Commun. 2020;11:6309.
48. Liu C, et al. A programmable diffractive deep neural network based on a digital-coding metasurface array. Nat Electron. 2022;5:113–22.
49. Chen H, et al. Diffractive deep neural networks at visible wavelengths. Engineering. 2021;7:1483–91.
50. Mengu D, Luo Y, Rivenson Y, Ozcan A. Analysis of diffractive optical neural networks and their integration with electronic neural networks. IEEE J Sel Top Quantum Electron. 2020;26:1–14.
51. Huang C, et al. A silicon photonic–electronic neural network for fibre nonlinearity compensation. Nat Electron. 2021;4:837–44.
52. Wang Z, et al. Recognizing the orbital angular momentum (OAM) of vortex beams from speckle patterns. Sci China Phys Mech Astron. 2022;65:244211.
53. Venkatesh B, Anuradha J. A review of feature selection and its methods. Cybern Inf Technol. 2019;19:3–26.
54. Fu S, et al. Orbital angular momentum comb generation from azimuthal binary phases. Adv Photonics Nexus. 2022;1:016003.
55. Lin Z, et al. Single-shot Kramers-Kronig complex orbital angular momentum spectrum retrieval. 2022;ArXiv.2206.12883. Preprint at https://arxiv.org/abs/2206.12883.
56. Shen Y, et al. Optical vortices 30 years on: OAM manipulation from topological charge to multiple singularities. Light Sci Appl. 2019;8:90.
57. Wang X, et al. Learning to recognize misaligned hyperfine orbital angular momentum modes. Photonics Res. 2021;9:B81–6.
58. Lin J, Yuan XC, Chen M, Dainty JC. Application of orbital angular momentum to simultaneous determination of tilt and lateral displacement of a misaligned laser beam. J Opt Soc Am A. 2010;27:2337–43.
59. Fu S, Gao C. Influences of atmospheric turbulence effects on the orbital angular momentum spectra of vortex beams. Photonics Res. 2016;4:B1–4.
60. Lavery M, Chen Z, Cheng M, McKee D, Yao A. Sensing with structured beams. Proc SPIE. 2021;11926.
61. Huff DT, Weisman AJ, Jeraj R. Interpretation and visualization techniques for deep learning models in medical imaging. Phys Med Biol. 2021;66:04TR01.
62. Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer Vision – ECCV 2014. Cham: Springer; 2014. p. 818–33.
63. Selvaraju RR, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. Int J Comput Vis. 2020;128:336–59.
64. Yosinski J, Clune J, Nguyen A, Fuchs T, Lipson H. Understanding neural networks through deep visualization. 2015;ArXiv.1506.06579. Preprint at https://arxiv.org/abs/1506.06579.
65. van der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res. 2008;9:2579–605.
66. Zhou B, et al. Learning deep features for discriminative localization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016. p. 2921–9. https://doi.org/10.1109/CVPR.2016.319.
67. Li J, et al. Deep learning-based quantitative optoacoustic tomography of deep tissues in the absence of labeled experimental data. Optica. 2022;9:32–41.
68. Rahman MSS, Li J, Mengu D, Rivenson Y, Ozcan A. Ensemble learning of diffractive optical networks. Light Sci Appl. 2021;10:14.
69. Sakib Rahman MS, Ozcan A. Computer-free, all-optical reconstruction of holograms using diffractive networks. ACS Photonics. 2021;8:3375–84.
70. Li J, Hung Y-C, Kulce O, Mengu D, Ozcan A. Polarization multiplexed diffractive computing: all-optical implementation of a group of linear transformations through a polarization-encoded diffractive network. Light Sci Appl. 2022;11:153.
71. Chen R, et al. Physics-aware complex-valued adversarial machine learning in reconfigurable diffractive all-optical neural network. 2022;ArXiv.2203.06055. Preprint at https://arxiv.org/abs/2203.06055.
72. Luo X, et al. Metasurface-enabled on-chip multiplexed diffractive neural networks in the visible. Light Sci Appl. 2022;11:158.
73. Georgi P, et al. Optical secret sharing with cascaded metasurface holography. Sci Adv. 2021;7:eabf9718.
74. Faraji-Dana M, et al. Compact folded metasurface spectrometer. Nat Commun. 2018;9:4196.
75. Zhu L, et al. Pancharatnam–Berry phase reversal via opposite-chirality-coexisted superstructures. Light Sci Appl. 2022;11:135.
76. Chen P, Wei B-Y, Hu W, Lu Y-Q. Liquid-crystal-mediated geometric phase: from transmissive to broadband reflective planar optics. Adv Mat. 2020;32:1903665.
77. Matsushima K, Shimobaba T. Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields. Opt Express. 2009;17:19662–73.
78. Shi L, Li B, Kim C, Kellnhofer P, Matusik W. Towards real-time photorealistic 3D holography with deep neural networks. Nature. 2021;591:234–9.
79. Zhuang F, et al. A comprehensive survey on transfer learning. Proc IEEE. 2021;109:43–76.
80. Zhou Y, et al. Sorting photons by radial quantum number. Phys Rev Lett. 2017;119:263602.
81. Wang H, et al. Deep-learning-assisted communication capacity enhancement by non-orthogonal state recognition of structured light. Opt Express. 2022;30:29781–95.
82. Forbes A, de Oliveira M, Dennis MR. Structured light. Nat Photonics. 2021;15:253–62.
83. Wu C, et al. Harnessing optoelectronic noises in a photonic generative network. Sci Adv. 2022;8:eabm2956.
84. Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. 2015;ArXiv.1503.02531. Preprint at https://arxiv.org/abs/1503.02531.
85. Berthelot D, et al. MixMatch: a holistic approach to semi-supervised learning. 2019;ArXiv.1905.02249. Preprint at https://arxiv.org/abs/1905.02249.


Acknowledgements

The authors thank all anonymous reviewers for their time and effort in improving this work.

Supplementary information

For details regarding the dataset generation, computing speed and energy efficiency, training loss function, network hyperparameters, training configurations, more robustness analyses, more model visualization discussion and more results, please refer to the Supplementary information.

Funding

This work is funded by the National Natural Science Foundation of China (Grants 61975087 and 62275137).

Author information


Contributions

H.W. and Z.Z. conceived the original idea, developed the theory, conducted the experiment, analyzed the data and prepared the manuscript with input from all authors. Q.L. and X.F. supervised the project. All authors provided critical feedback and helped shape the research.

Corresponding authors

Correspondence to Xing Fu or Qiang Liu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

43074_2022_79_MOESM1_ESM.docx

Additional file 1: Table S1. Comparison with SOTA. Supplementary Note 1. Dataset acquisition and processing details. Figure S1. Figure S2. Supplementary Note 2. Training-related details. Figure S3. Supplementary Note 3. Computing speed and energy efficiency. Supplementary Note 4. Further robustness analyses. Figure S4. Figure S5. Figure S6. Supplementary Note 5. Visualization-related details. Figure S7. Figure S8. Figure S9. Figure S10. Figure S11.

Additional file 2: Supplementary Video 1. This video is a supplement to Fig. 1 of the main manuscript. We select the POAMS model in Fig. 1 and showcase test results of multiplexed structured modes after occluding the right or left half of the intensity signals in the sensor plane. The positive and negative topological charges are clearly related to intensity signals in distinct regions.

Additional file 3: Supplementary Video 2. This video is a supplement to Fig. 3 of the main manuscript. We select the POAMS model in Fig. 2 and illustrate more OAM spectrum results for unseen experimental and simulated samples.

Additional file 4: Supplementary Video 3. This video is a supplement to Fig. 3 of the main manuscript. We present the optical neurons of the POAMS in the upper panel and showcase more blind test results on OAM complex spectrum retrieval in the lower panel. Note that the optical part of this POAMS is initialized with random values rather than with the initialization described above.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wang, H., Zhan, Z., Hu, F. et al. Intelligent optoelectronic processor for orbital angular momentum spectrum measurement. PhotoniX 4, 9 (2023). https://doi.org/10.1186/s43074-022-00079-9

