
Fiber laser development enabled by machine learning: review and prospect

Abstract

In recent years, machine learning, especially in the form of deep neural networks, has emerged as a powerful technique for data analysis and processing and has brought novel insights into the development of fiber lasers, in particular complex, dynamical, or disturbance-sensitive fiber laser systems. This paper highlights recent representative research that adopts machine learning in the fiber laser field, including design and manipulation for on-demand laser output, prediction and control of nonlinear effects, reconstruction and evaluation of laser properties, as well as robust control for lasers and laser systems. We also comment on the remaining challenges and potential directions for future development.

Introduction

Fiber lasers feature good beam quality, high efficiency, and compact structure, and they can be tuned extensively and operated efficiently from continuous-wave operation to ultrashort optical pulses [1, 2] and from low-power to high-power schemes [3,4,5]. They have been widely applied in nonlinear microscopy [6], optical communication [7, 8], and materials processing [9]. In the past several decades, the performance enhancement of fiber lasers has mainly relied on fiber development, system optimization, algorithm improvements, and other means [10,11,12,13,14]. Among them, the role of machine learning is becoming ever more prominent.

Machine learning (ML) is an umbrella term, broadly defined as the “field of study that gives computers the ability to learn without being explicitly programmed” [15]. As an emerging tool, it has been introduced into many fields and has achieved impressive results in speech recognition [16], object classification [17], chemical health and safety studies [18], computational imaging [19,20,21], optical metrology [22], optical communications and networking [23, 24], sensing [25], and photonic design [26,27,28,29]. Recently, various machine learning methods, particularly deep neural networks (DNNs), have attracted growing attention for solving problems in fiber lasers. For example, learning can build approximate models of the underlying physics or dynamics of complex fiber laser systems in the form of a “black box”, serving as a proxy for the measurement and tracking control of physical parameters.

The purpose of this review is to highlight the recent progress in utilizing machine learning techniques for developing advanced fiber lasers in terms of design and manipulation for on-demand laser output, prediction and control of nonlinear effects, reconstruction and evaluation of laser properties, as well as robust control for lasers and laser systems. Challenges and perspectives are discussed at the end.

General description

The field of machine learning draws on multiple sources, involving disciplines as diverse as probability theory, statistics, adaptive control theory, psychological models, and complexity theory. Different sources bring different methods and terms into the machine learning field. At the same time, machine learning continues to develop rapidly, and new techniques continue to emerge, so it is not easy to summarize all of its content perfectly. Here we give a general description of machine learning and its application in fiber lasers, aiming to provide a reference for readers in the fiber laser community.

Machine learning basics

This section first introduces the concept of machine learning, followed by the learning algorithm taxonomy, and emphasizes a widely adopted algorithm, artificial neural networks (ANNs).

Concept

The fields of machine learning and optimization are intertwined: most machine learning problems are ultimately transformed into optimization problems. Some researchers place work using purely adaptive and robust optimization algorithms in the category of machine learning, for example, evolutionary algorithms (typically genetic algorithms) for coherent control of ultrafast dynamics [30], intelligent breathing-soliton generation [31], and self-tuning mode-locked fiber lasers [32]. More common definitions of machine learning emphasize “learning” and “gaining knowledge” from data; a classical one is “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E” [33]. Generally, experience is presented in the form of data, and learning algorithms are methods of generating models from data. With the learned model, the machine can make predictions or take actions in tasks. Obviously, datasets, models, and learning algorithms are the three core elements of machine learning.

The collection of data from experiments or numerical simulations of a specific task is called a dataset, written as \(D=\{(x_i,y_i)\}_{i=1,2,\dots,N}\), where \((x_i,y_i)\) is an example and N is the number of examples. \(x_i\) is a property description of an example, usually called a ‘sample’ or ‘feature vector’. For example, \(x_i=\{x_{ij}\}_{j=1,2,\dots,d}\) is a feature vector of dimensionality d, where each dimension contains a value \(x_{ij}\) that describes the example in some way. \(y_i\) is the label of \(x_i\), which can take the form of one of a finite set of classes, a vector, a matrix, a graph, or others. In some tasks, \(y_i\) may not exist. Training (also known as learning) is the process of using data to generate models through learning algorithms; the undetermined parameters of the model are modified during training. Therefore, the model can be regarded as the parameterized representation of the learning algorithm on the given data and model parameter space. The data used in training is called the training dataset. Sometimes, a validation dataset is split proportionally from the training dataset to monitor the performance of the model during training. After training, the model is tested on an independent dataset drawn from the same or a similar statistical distribution as the training dataset, the testing dataset, to evaluate how well it generalizes to new data. Figure 1 shows the general working framework of machine learning, including data preparation, algorithm selection, training, and testing.
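
To make the workflow in Fig. 1 concrete, the following is a minimal sketch on a synthetic regression task; the dataset, the 80/10/10 split, and the linear least-squares “learning algorithm” are purely illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# data preparation: N examples with d-dimensional feature vectors x_i and labels y_i
N, d = 1000, 4
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

# split into training / validation / testing datasets (80/10/10)
idx = rng.permutation(N)
train, val, test = idx[:800], idx[800:900], idx[900:]

# training: fit the model parameters (here, ordinary least squares) on the training set
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# validation / testing: check generalization on held-out data
mse = lambda a, b: float(np.mean((a - b) ** 2))
print("validation MSE:", mse(X[val] @ w, y[val]))
print("testing MSE:   ", mse(X[test] @ w, y[test]))
```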

Fig. 1 The working framework of machine learning

Learning algorithm taxonomy

Machine learning covers a very broad field and has developed a variety of learning algorithms to handle different types of learning tasks. Below we describe four rough ways of classifying machine learning algorithms. In different tasks, the available data take different forms: labeled or unlabeled, the research object itself or only a metric evaluating it. According to the form of the data they use, machine learning algorithms can be divided into supervised, unsupervised, semi-supervised, and reinforcement learning (RL) [34,35,36,37,38,39,40]. The data for supervised learning are labeled, that is, \(D=\{(x_i,y_i)\}_{i=1,2,\dots,N}\). Using the difference between the actual label \(y_i\) and the model output, the model parameters can be iteratively modified to better reproduce the labels. Supervised learning aims to find a mapping \(f:\chi \to y\), where \(x_i \in \chi\) (the sample space), \(y_i \in y\) (the label space), and \(D \subseteq \chi \times y\), such that \(f(x_i)=y_i\). Typical supervised learning problems include classification and regression. Unsupervised learning specializes in learning the internal representation or the potential relationships or structures of samples without labels, where \(D=\{x_i\}_{i=1,2,\dots,N}\). Clustering and dimensionality reduction are two common unsupervised learning problems. Semi-supervised learning adopts partially labeled datasets, \(D=D_1 \cup D_2\) with \(D_1=\{(x_i,y_i)\}_{i=1,2,\dots,N}\) and \(D_2=\{x_i\}_{i=1,2,\dots,M}\), where \(M \gg N\). Reinforcement learning attempts to learn what to do and how to map situations to actions so as to maximize a reward function [41]. To some degree, deep reinforcement learning is a control strategy that does not require accurate object models because it can adapt to the environment through interaction [42].

Machine learning algorithms can also be classified according to the learning tasks, such as classification algorithms, regression algorithms, clustering algorithms, and dimensionality reduction algorithms. For example, principal component analysis and manifold learning are popular dimensionality reduction algorithms. Some learning algorithms can work for more than one kind of task, like support vector machines for classification and regression tasks [43] and ANNs for almost all machine learning tasks [44,45,46,47,48].

Depending on whether physical knowledge is involved, machine learning algorithms can be categorized as physics-informed or physics-free. Physics-free machine learning is a purely data-driven method. Its core is data-driven modeling: extracting hidden physical and mathematical models from available system data and representing them by learned models [49]. Unlike physical and mathematical models represented by explicit equations, data-driven models are empirical models that can act as universal function approximators, serving as a black box that allows people to solve problems without professional background or expertise. Generally, physics-free machine learning requires big data for training and is not feasible for tasks where the cost of data acquisition is prohibitive. By contrast, physics-informed machine learning integrates data-driven modeling and prior knowledge [50]. For example, a physics-informed neural network (PINN) is designed to satisfy some physical constraints automatically, improving accuracy and enhancing generalization in small-data regimes [51]. In some cases, prior physical laws can act as a regularization term that constrains the space of admissible solutions to a manageable size, enabling the model to steer itself towards the right solution and converge quickly [51,52,53].
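
As an illustration of how a physical prior can enter as a regularization term, the sketch below combines a data-fitting loss with a physics-residual loss evaluated at collocation points; the residual function, the weighting factor `lam`, and the tensor shapes are assumptions for illustration rather than a specific published formulation.

```python
import torch

def physics_informed_loss(model, x_data, y_data, x_colloc, residual_fn, lam=1.0):
    """Composite loss = data misfit + weighted physics residual."""
    data_loss = torch.mean((model(x_data) - y_data) ** 2)      # fit the labeled data
    phys_loss = torch.mean(residual_fn(model, x_colloc) ** 2)  # penalize violations of the physics
    return data_loss + lam * phys_loss
```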

Machine learning algorithms feature either shallow or deep architectures. The performance of machine learning methods is heavily dependent on the choice of data representation (or features) [54]. In the early stage, machine learning worked with shallow architectures, for example, hidden Markov models, maximum entropy models, conditional random fields, and the perceptron or an ANN with a single hidden layer [55]. These all contain only a few nonlinear feature transformations, resulting in a limited ability to extract features from raw data and requiring engineering expertise for their design [56]. In recent years, deep learning (DL) with deep architectures, represented by various deep neural networks (DNNs), has become a hot subfield of machine learning. Deep learning shows remarkable power in discovering intricate structures in high-dimensional data by transforming raw data into more abstract and ultimately more useful representations through multiple simple but nonlinear models [56].

Artificial neural networks

Here, we provide more information about ANNs because of their notable impact on fiber laser research. An ANN is a mathematical model that imitates the structure and function of biological neural networks and is usually used to estimate or approximate functions [38]. ANNs consist of three types of layers: input, hidden, and output. Each layer consists of many processing elements, known as neurons or nodes, each of which has a bias (also called a threshold) b and an activation function f that is usually nonlinear (such as softmax, ReLU, and sigmoid). According to the McCulloch-Pitts (MP) model [57], when node j in the network has n inputs, and \(x_i\) (i = 1, 2, …, n) denotes the i-th input with interconnection weight \(w_{ij}\), the output of node j is \(y_j=f_j\left(\sum_{i=1}^n w_{ij}x_i-b_j\right)\), where \(b_j\) and \(f_j\) denote the bias and activation function of node j. Many such nodes are arranged in a hierarchical structure to form a network.
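
The node model above translates directly into a few lines of code; the sigmoid activation and the numerical values below are arbitrary illustrations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mp_node(x, w, b, f=sigmoid):
    """Output of node j: y_j = f_j(sum_i w_ij * x_i - b_j)."""
    return f(np.dot(w, x) - b)

x = np.array([0.5, -1.2, 0.3])   # n = 3 inputs x_i
w = np.array([0.8, 0.1, -0.4])   # interconnection weights w_ij
print(mp_node(x, w, b=0.2))      # output y_j of node j
```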

The architecture of an ANN can be characterized by its topological structure, i.e., the overall connectivity and the activation functions of its nodes. According to their topological connectivity, ANNs can be divided into feedforward and recurrent classes. The feedforward neural network is the most common type, with a unidirectional multilayer structure in which data flows from the input layer to the hidden layers and then to the output layer. The simplest feedforward neural network is the fully connected network (FCNN), in which each node in a layer is connected to all nodes in the previous layer. The recurrent neural network (RNN) was developed mainly to process sequence data, such as video and text, whose characteristic is that the current output is related to previous outputs. An RNN memorizes previous information and applies it to the calculation of the current output: the input of the hidden layer includes not only the output of the input layer but also the output of the hidden layer at the previous time step. Theoretically, RNNs can process sequence data of any length; in practice, to reduce complexity, it is often assumed that the current state is only related to the previous few states. Mainstream RNN variants are the long short-term memory (LSTM) and the gated recurrent unit (GRU) [58] (Fig. 2).
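
For concreteness, the following PyTorch sketch defines a small FCNN and an LSTM of the kind discussed above; the layer sizes, sequence length, and batch size are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# feedforward, fully connected network (FCNN): data flows strictly from input to output
fcnn = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# recurrent network (LSTM): the hidden state carries information from earlier steps
lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2, batch_first=True)

y = fcnn(torch.randn(8, 64))                  # (batch, features) -> (8, 10)
out, (h, c) = lstm(torch.randn(8, 100, 64))   # (batch, steps, features) -> out: (8, 100, 128)
```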

Fig. 2 Architectures of artificial neural networks

The training process of an ANN determines these weights with search operators. Optimization is the core of training, and most machine learning problems boil down to optimization problems [59]. In practice, a great variety of gradient descent algorithms, for example, the stochastic gradient descent (SGD) algorithm, Adam, AdaGrad, and RMSProp [60,61,62], combined with the backpropagation algorithm, are used to train ANNs. Backpropagation is essentially an application of the chain rule for derivatives [56]. In recent years, in addition to gradient descent algorithms, there has been great interest in combining learning with metaheuristic optimization algorithms, such as evolutionary algorithms [63,64,65] and simulated annealing [66].
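
A generic training loop of this kind is sketched below: `loss.backward()` performs backpropagation and the Adam optimizer applies the gradient-based weight update; the toy model and dataset are assumptions for illustration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 4)                  # toy training data
y = X.sum(dim=1, keepdim=True)           # toy labels

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)          # forward pass
    loss.backward()                      # backpropagation via the chain rule
    optimizer.step()                     # gradient-descent (Adam) update of the weights
```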

An ANN with a multilayer structure rather than a single hidden layer is expected to yield better learning ability. However, the weights of multilayer networks are difficult to optimize because of gradient diffusion (vanishing gradients), and this situation becomes more serious as the number of network layers increases. These problems once restricted the development of multilayer networks. In 2006, Geoffrey E. Hinton et al. proposed improved training methods for deep architectures, which is regarded as the beginning of deep learning [67]. Nowadays, the DNN, a feedforward network with more than one hidden layer [16], is still the mainstream deep learning framework. Popular deep networks include the restricted Boltzmann machine (RBM), the deep belief network (DBN), and the convolutional neural network (CNN). Studies that exploit supervised, unsupervised, and semi-supervised learning have developed various architectures such as the autoencoder (AE), generative adversarial network (GAN), variational autoencoder (VAE), and graph convolutional network (GCN) [68]. Besides, the deep Q-network (DQN), trained with a variant of Q-learning, is a representative algorithm in deep reinforcement learning [69].

Learning-enabled fiber laser

We first analyze typical problems in fiber lasers and then explain what machine learning can do for them.

The learning problems in the field of fiber lasers can be divided into identification (learning the input-output prediction model), estimation (learning how to characterize unmeasured parameters, such as reconstructed inputs, predicted theoretical outputs, and inferred evaluation metrics of outputs), design (learning how to obtain a target), and control (learning the control law). In practice, these problems are interrelated; for example, an identified prediction model can help solve estimation (including prediction, reconstruction, and evaluation), design, and control problems. For convenience of description, a general formulation of the data relationship is considered, y = Ax, where x and y are the input and corresponding output of the fiber laser system, and A is the forward operator or transfer function of the fiber or fiber laser setup, which describes the explicit relationship (e.g., physical principles and rules) or implicit relationship (without enough physical knowledge) between the input x and the output y. Sometimes, special terms are also considered, such as Δx, the disturbance of the input coming from the environment; n, the noise included in the output; and E(y), an evaluation function of the output y. Table 1 and Fig. 3 illustrate the typical problems in fiber laser systems.

Table 1 Typical problems in fiber laser systems (* denotes a specific value)
Fig. 3 Typical problems in fiber laser systems

Prediction 

Machine learning has demonstrated an outstanding system identification ability to reproduce physical models by identifying hidden structures and learning input-output functions based on data analysis, and it can even distill theories of dynamic processes, transforming observed data into predictive models [53]. For example, recurrent neural networks are influential in successful applications because of their ability to represent sequentially dependent data, such as forecasting the spatiotemporal dynamics of high-dimensional and reduced-order complex systems [70], modeling the large-scale structure and low-order statistics of turbulent convection [71], and inferring high-dimensional chaos [72]. In the fiber laser field, nonlinear dynamic systems described by nonlinear partial differential equations (PDEs), e.g., the nonlinear Schrödinger equation (NLSE), usually have no analytical solutions; numerical methods and related calculation strategies are therefore used to obtain numerical solutions, and there is strong interest in finding data-driven solutions through machine learning. In recent years, machine learning has shown power in predicting complex nonlinear evolution governed by the NLSE [73,74,75]. PINNs guided by specific theories can also be an effective analytical tool for solving PDEs from incomplete models and limited data [76].

Reconstruction and design (inverse problem)

The inverse problems in the fiber laser field can be divided into two categories. The first is the reconstruction problem: recovering x* from measurement data y*, where y* = Ax* + n; examples include pulse reconstruction from a speckle pattern generated by a multimode fiber and mode decomposition from measured intensity patterns. The noise n might be an obstacle to achieving a high-precision reconstruction. The second is the design and manipulation problem: given a specific design target y* (e.g., a gain profile), determine the required input of the fiber laser system x* (such as the input voltages, currents, powers, and wavelengths) or the laser system itself A* (e.g., a fiber with a specific structure), where y* = Ax* or y* = A*x. The noise n is usually ignored during the design process. Typical design problems include finding suitable geometric parameters during fiber structure design and shaping signals to produce target temporal and spatial characteristics. In some special cases, the target y* is too idealized to be achieved because of physical limits or restricted experimental conditions, and only a solution close to it can be found.

It should be noted that the forward operator A can be completely known, partly known, or unknown in different applications. When A is well known, some conventional methods can transform the inverse problem into an optimization problem and solve it with an iterative process; for each new y*, a similar optimization needs to be solved from scratch. However, this scheme is weak or fails when the forward operator A is complex (requiring a time-consuming calculation procedure), partly known, or even totally unknown. Machine learning is a powerful tool for solving inverse problems, simply relying on learning the inverse mapping A⁻¹ and then obtaining a solution x* = A⁻¹(y*) in a single step. Further, additional feedback and control can help to improve the accuracy of the result, and a well-trained model can accelerate this process by replacing the complex computation in A.
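
The single-step inversion described above can be sketched as follows: input-output pairs are generated with a known forward operator A (here a random matrix, purely for illustration), and a network is trained to map measurements y back to x.

```python
import torch
import torch.nn as nn

d_x, d_y = 16, 32
A = torch.randn(d_y, d_x)                          # assumed forward operator, y = A x
x = torch.randn(4096, d_x)
y = x @ A.T + 0.01 * torch.randn(4096, d_y)        # measurements with noise n

inverse_net = nn.Sequential(nn.Linear(d_y, 64), nn.ReLU(), nn.Linear(64, d_x))
opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(inverse_net(y), x)   # learn the inverse mapping
    loss.backward()
    opt.step()

x_star = inverse_net(y[:1])                        # one-step reconstruction of a new measurement
```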

Control

When there is a high requirement on control accuracy and speed because of dynamical environmental disturbances, a feedback loop and a corresponding control unit are required to follow the specific change. Learning and optimization are two primary means to achieve robustness. They usually involve computational processes incorporated within the system that trigger parametric updating and knowledge or model enhancement, so that performance improves progressively. Machine learning provides new insights for feedback and control [77, 78], particularly in dynamic, complex, and disturbance-sensitive systems, where conventional control algorithms show low control bandwidth and weak robustness. An exciting discovery in the published literature is that learning models can automatically reject instrumental or environmental noise. Some applications combine machine learning with traditional algorithms to enhance performance [79, 80].

Denoising

This part has a tight relationship with image and signal processing. Machine learning techniques can overcome data errors to some extent, for example removing bad points and blur from raw data [81, 82] and completing tasks even when the measurement device yields strong noise [83]. The denoising ability of machine learning is significant in many practical applications.

Other applications

Machine learning can also be used to reduce manual engineering in laboratory experimental operations that involve adjusting hardware, such as the alignment of laser beams [84].

Fiber and laser design

Different applications require laser output with specific characteristics in the time, space, and frequency domains. In design problems aiming at on-demand output, factors like the fiber structure, laser type, and experimental beam path usually come into consideration. This section reviews typical applications of machine learning techniques in fiber structure design and fiber amplifier design. Machine learning can complement iterative design methods based on physical principles and optimization algorithms, which must be repeated for each new design problem. Besides, nonlinear effects enable laser shaping in an optical fiber with many degrees of freedom, and a prediction model of nonlinear phenomena and laser propagation can help with shaping laser properties. The related content on property manipulation based on the study of nonlinear effects can be found in Section 4 (Fig. 4).

Fig. 4 Design and manipulation for targeted laser properties

Fiber design

Photonic crystal fiber (PCF) is an important new class of optical waveguide. Different from conventional fibers with two concentric regions (core and cladding) of different doping levels, a PCF has air holes periodically arranged in the cladding and running along the fiber's length, which makes the effective cladding index wavelength-dependent [85]. The optical properties of a PCF result from a series of structure parameters, such as the hole size, the hole spacing, and the number of air-hole rings. Therefore, PCF design relies on high-precision modeling of the relationship between these structure parameters and the optical properties. Conventional numerical methods like the finite element method, the block-iterative frequency-domain method, and the plane wave expansion method need to be run many times for a specific fiber design, which requires significant computing resources when dealing with complex fiber structures.

In 2019, Sunny Chugh et al. adopted an FCNN with supervised learning to model a solid-core PCF [86]. The PCF geometric parameters, including the diameter of the holes (d), the separation between the centers of two adjacent holes (pitch, Λ), the refractive index of the core (nc), the wavelength (λ), and the number of rings (Nr), are taken as the inputs of the ANN, and the optical properties, including the effective index (neff), effective mode area (Aeff), dispersion (D), and confinement loss (αc), calculated using Lumerical Mode Solutions, serve as the labels. Quantitative simulation analyses show that this method supports accurate and fast design of PCF structure parameters: the optical properties predicted by the trained ANN agree with their labels within an acceptable MSE. As for computation runtimes, Lumerical Mode Solutions takes a few minutes for each parameter set, while the ANN model only needs a few milliseconds (Fig. 5).
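
A minimal sketch of such a property-regression network is given below, assuming five scalar inputs and four scalar outputs as in ref. [86]; the hidden-layer widths are illustrative and do not reproduce the architecture of the paper.

```python
import torch.nn as nn

# inputs:  hole diameter d, pitch, core index n_c, wavelength, number of rings N_r
# outputs: n_eff, A_eff, dispersion D, confinement loss
pcf_model = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),
)
# trained with an MSE loss against labels computed by a mode solver
```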

Fig. 5 Photonic crystal fiber modeling with a fully connected neural network. Figure adapted with permission from ref. [86] (© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement)

Fiber amplifier design

Machine learning plays an even more important role in innovative fiber amplifier design, and the Raman amplifier (RA) is a typical research example [79, 87,88,89,90]. The RA is an attractive optical amplification scheme that offers gain across a broad range of wavelengths while maintaining low noise thanks to distributed amplification. Inverse design for Raman amplifiers focuses on selecting the pump powers and wavelengths that result in a targeted gain profile. The challenge of this problem lies in the highly complex interaction between the pumps and the Raman gain [87].

In 2019, Darko Zibar et al. demonstrated an ANN method for the highly accurate design of arbitrary Raman gain profiles, numerically in the C and C+L bands and experimentally in the C band [87]. The ANN resembles an autoencoder and consists of two feedforward networks. The first, the backward neural network NNbw, maps the Raman gain profile to the required pump power and wavelength configuration. The second, the forward neural network NNfw, represents the forward mapping from the pump powers and wavelengths to the Raman gain profile and, combined with a gradient descent algorithm, can fine-tune the predictions of NNbw.
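
The division of labor between the two networks can be sketched as follows: the backward network proposes a pump configuration for a target gain profile, and the frozen forward network is then used to refine that proposal by gradient descent on the pump parameters. The network sizes, the numbers of gain samples and pump parameters, and the optimizer settings below are assumptions for illustration.

```python
import torch
import torch.nn as nn

n_gain, n_pump = 40, 8          # gain-profile samples, pump power/wavelength parameters
nn_bw = nn.Sequential(nn.Linear(n_gain, 128), nn.ReLU(), nn.Linear(128, n_pump))
nn_fw = nn.Sequential(nn.Linear(n_pump, 128), nn.ReLU(), nn.Linear(128, n_gain))
# ... nn_bw and nn_fw are assumed to have been trained on (pump, gain) pairs ...
for p in nn_fw.parameters():
    p.requires_grad_(False)     # the forward model stays fixed during fine-tuning

def design_pumps(target_gain, steps=200, lr=1e-2):
    pumps = nn_bw(target_gain).detach().requires_grad_(True)      # initial guess from NN_bw
    opt = torch.optim.Adam([pumps], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(nn_fw(pumps), target_gain)  # fine-adjust via NN_fw
        loss.backward()
        opt.step()
    return pumps.detach()
```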

Prediction and control of nonlinear effect

Machine learning provides both physics-free and physics-informed approaches for modeling nonlinear fiber laser systems. On the one hand, with sufficient data representing system behavior and powerful computing hardware, machine learning techniques can find the relationships between system state variables (input, internal, and output variables), providing new avenues for exploring high-dimensional dynamical systems without solving complex mathematical and physical equations. On the other hand, incorporating physical principles into neural networks can help regularize training in small-data regimes. Further, the obtained model of the nonlinear effect can also be used to design and control the laser properties (Fig. 6).

Fig. 6 Prediction of laser properties governed by nonlinear effects

Pulse prediction of nonlinear dynamics

High nonlinearity in pulse evolution is an obstacle to establishing accurate numerical propagation simulations. Machine learning can provide an alternative solution by modeling the propagation and evolution of laser properties based on collected data from the nonlinear system.

In 2021, Lauri Salmela et al. used an RNN to achieve model-free prediction of complex nonlinear propagation in optical fibers governed by the NLSE [73]. The trained network was shown to work for higher-order soliton compression and ultra-broadband supercontinuum generation, predicting the temporal and spectral evolution of ultrashort pulses in highly nonlinear fiber solely from the input pulse intensity profile. The approach can also be generalized to other propagation scenarios with a wider range of input conditions and fiber systems, including multimode propagation.
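
In the spirit of this approach, the sketch below rolls out an LSTM that predicts the intensity profile at the next propagation step from the profiles at previous steps; the grid size, hidden size, and number of steps are illustrative, and the model here is untrained.

```python
import torch
import torch.nn as nn

n_grid = 256                                     # points in the temporal/spectral intensity profile

class EvolutionPredictor(nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(n_grid, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_grid)

    def forward(self, history):                  # history: (batch, steps so far, n_grid)
        out, _ = self.rnn(history)
        return self.head(out[:, -1])             # predicted profile at the next step

model = EvolutionPredictor()
history = torch.rand(1, 1, n_grid)               # start from the input pulse profile only
for _ in range(99):                              # autoregressive roll-out along the fiber
    nxt = model(history)
    history = torch.cat([history, nxt.unsqueeze(1)], dim=1)
```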

Hao Sui et al. demonstrated a compressed convolutional neural network as an inverse computation tool to predict the initial pulse distribution from a series of discrete power profiles at different propagation distances [75]. Two nonlinear dynamics, the pulse evolution in fiber optical parametric amplifier systems and the soliton-pair evolution in highly nonlinear fiber, were studied in simulations. The simulation results on the test datasets show a small deviation with fair stability, which indicates the potential application of this method in optimizing the initial pulse of fiber optic systems (Fig. 7).

Fig. 7 Initial pulse distribution prediction with a convolutional neural network. Figure adapted with permission from ref. [75] (© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement)

Xiaotian Jiang et al. presented a physics-informed neural network to solve the NLSE and characterize the pulse evolution of different input waveforms [91]. The network is trained with an initial pulse and its label (the corresponding NLSE solution). The NLSE solution predicted by the network is used to calculate its corresponding pulse via physical theory. The loss combines two terms: one is the loss between the initial pulse and the calculated pulse, and the other describes the difference between the NLSE solution from the network and the label. In this way, the predicted results of the network always satisfy the NLSE. The network can work with less computational complexity than a commonly used numerical method, the split-step Fourier method.
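
As an illustration of how an NLSE constraint can be embedded in such a loss, the sketch below evaluates the residual of a normalized NLSE, i u_z + (1/2) u_tt + |u|² u = 0, with automatic differentiation; the normalization and the network interface (a map from (z, t) to the real and imaginary parts of the field) are assumptions and do not reproduce the exact formulation of ref. [91].

```python
import torch

def nlse_residual(net, z, t):
    """Residual of i*u_z + 0.5*u_tt + |u|^2 * u for a network u(z, t) = (Re u, Im u)."""
    z = z.requires_grad_(True)
    t = t.requires_grad_(True)
    uv = net(torch.stack([z, t], dim=-1))
    u, v = uv[..., 0], uv[..., 1]                        # real and imaginary parts of the field

    grad = lambda f, x: torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    u_z, v_z = grad(u, z), grad(v, z)
    u_t, v_t = grad(u, t), grad(v, t)
    u_tt, v_tt = grad(u_t, t), grad(v_t, t)

    mod2 = u**2 + v**2
    f_re = -v_z + 0.5 * u_tt + mod2 * u                  # real part of the residual
    f_im =  u_z + 0.5 * v_tt + mod2 * v                  # imaginary part of the residual
    return f_re, f_im                                    # both should be driven towards zero
```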

Spatiotemporal nonlinearities prediction and control

In 2020, Uğur Teğin et al. studied spatiotemporal nonlinearities in multimode fibers for spectrum shaping. The results show that a multilayer neural network can learn nonlinear frequency conversion dynamics and serve to generate a target beam spectrum [92]. Two other highly nonlinear phenomena, spectral broadening based on cascaded stimulated Raman scattering and supercontinuum generation, were also considered. Later, in 2021, they extended the method of ref. [73] (see Pulse prediction of nonlinear dynamics) to predict spatiotemporal nonlinear propagation for an arbitrary number of modes in graded-index multimode fibers through an RNN [74] (Fig. 8).

Fig. 8 Spectrum shaping with a fully connected neural network. Figure adapted with permission from ref. [92] (© The Author(s) 2020, distributed under the terms of the Creative Commons Attribution license, http://creativecommons.org/licenses/by/4.0/)

Spatiotemporal nonlinearity prediction is also significant for generating a white-light continuum (WLC). An accurate model of the underlying spatiotemporal nonlinear optical process helps generate a broad and stable WLC, replacing the time-consuming empirical optimization of WLC properties (such as bandwidth, energy, and stability). In 2021, Carlo M. Valensise et al. adopted deep reinforcement learning to control the spatiotemporal dynamics for WLC generation [93]. The learning agent learns an effective control policy over three parameters (the energy of the pump pulse, the numerical aperture of the focused beam, and the position of the nonlinear plate with respect to the beam waist), achieving stable and broadband WLC generation experimentally (Fig. 9).

Fig. 9 White-light continuum generation with deep reinforcement learning. Figure adapted with permission from ref. [93] (© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement)

Pulse nonlinear shaping

Pulse shaping based on wave-chopping devices and pulse shaping based on nonlinear control in optical fibers are two efficient methods to tailor on-demand laser properties. Wave-chopping is a primary method for pulsed laser generation by extra-cavity modulation of a continuous-wave laser; it commonly relies on chopper devices such as electro-optic and acousto-optic modulators [94, 95] and enables flexible shaping of temporal properties to obtain arbitrary pulse shapes and durations. Limited by the response time of the driver, it is hard for extra-cavity modulated pulsed fiber lasers to reach ultrashort pulses, and the pulse durations are usually at the μs and ns levels [96,97,98,99,100]. The pulse power obtained from extra-cavity modulation is limited by the power handling capability of the modulation device. A multi-stage amplifier is required to obtain high pulse output, but this leads to pulse distortion because of the gain saturation effect [99]. A method to overcome this problem is to find a suitable modulation signal with an optimization algorithm to pre-compensate the distortion [97].

An accurate model of nonlinear pulse propagation is another route to pulse shaping [101, 102]. In 2020 and 2021, Sonia Boscolo et al. used artificial neural networks to model nonlinear pulse propagation in fibers with normal and anomalous dispersion [103, 104]. An FCNN is trained to learn the relationship between the temporal and spectral intensity profiles of the pulses and the fiber parameters. Further, the network can identify the initial pulse shape from the pulse shape at the fiber output.

Reconstruction and evaluation of laser properties

In laser property reconstruction and evaluation, recent research involving machine learning focuses on indirect methods, highlighting the advantages of low experimental cost and high immunity to instrumental and environmental noise. In detail, measurement images (intensity patterns such as pulse intensity, spectral intensity, and near-field beam intensity) are mapped to the required laser properties (such as the spectral amplitude, phase, and temporal duration of ultrashort pulses) for one-step inference rather than direct measurement. For example, a deep learning method has been explored to directly map the speckle pattern of a single-mode fiber followed by a disordered medium to the wavelength of a diode laser [105]. When phase detection is considered, there may be a phase ambiguity problem (e.g., multiple phases result in the same intensity pattern). To eliminate the ambiguity, a speckle pattern produced by a scattering device, e.g., a multimode fiber, can be used to break the degeneracy of the data and build a one-to-one correspondence with the required laser properties, as in a single-shot full-field pulse measurement technique enabled by deep learning [106] (Fig. 10).

Fig. 10 Reconstruction and evaluation of laser properties

Ultrashort pulses reconstruction

Ultrashort laser pulse reconstruction is a challenging topic in ultrafast science, with applications in ultrafast imaging, femtochemistry, coherent control, and high-harmonic spectroscopy [107]. Typically, the duration of an ultrashort pulse is below a picosecond and too short to be measured directly by photodiodes. Frequency-resolved optical gating (FROG) [108] and dispersion scan (d-scan) [109] are two widespread indirect methods. With a recovery algorithm, such as the principal component generalized projections algorithm [110] or the ptychographic reconstruction algorithm [111], the 2D trace can support the reconstruction of ultrafast pulses. In recent years, deep learning has been introduced into ultrafast pulse reconstruction. In 2018, Tom Zahavy et al. first applied DNN techniques to FROG to characterize the phase of ultrashort femtosecond optical pulses [112]. Later, more work on deep-learning reconstruction of the ultrashort pulse phase was demonstrated for attosecond pulses [113, 114].

One study in the fiber laser field demonstrated temporal duration characterization of mode-locked pulses using the dispersive Fourier transform trace [115]. The trained artificial neural network can predict the pulse duration with an average consistency of 95%. The proposed technique can be adapted to create a compact and low-cost feedback loop in complex laser systems.

Mode decomposition

The mode decomposition (MD) technique for multimode fibers, which aims to calculate the amplitude and phase of each eigenmode, is essential for analyzing the complete optical field and its beam properties. A challenge of complete mode decomposition is that different combinations of modal weights and phases may result in the same near-field intensity pattern [116]. In few-mode fibers, the main ambiguity comes from the modal phase.

In 2019, Yi An et al. used a CNN, modified from VGG-16, with supervised training to predict modal weights and relative phases from only the near-field intensity pattern for the first time [117]. Because of the phase ambiguity, the cosine of the relative phases, rather than the relative phases themselves, is adopted as part of the labels to ensure that the network converges under a one-to-one mapping. Because the sign of the relative modal phases must still be determined, recovering the relative phases from the predicted cosine values takes more time as the number of modes increases. Restricted by the capturing speed of the CCD (30 Hz), the real-time decomposition rate is experimentally limited to 29.9 Hz for the 3-mode and 6-mode cases if only the modal weights are needed [118, 119]. When predicting both modal weights and relative phases, the real-time decomposition rate is 29.9 Hz for the 3-mode case and 24 Hz for the 6-mode case. Later, this work was extended to modal analysis of Hermite–Gaussian beams emitted from solid-state lasers [120] (Fig. 11).
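
The label construction can be illustrated with a toy forward model: a near-field intensity pattern is synthesized from modal weights and relative phases, and the label stores the weights together with the cosines of the relative phases. The "modes" below are arbitrary Gaussian-type profiles used only for illustration, not the true eigenmodes of a fiber.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(grid, grid)
modes = np.stack([np.exp(-(X**2 + Y**2)),            # placeholder "mode" profiles
                  X * np.exp(-(X**2 + Y**2)),
                  Y * np.exp(-(X**2 + Y**2))])

def make_example():
    rho = rng.random(3)
    rho /= np.linalg.norm(rho)                        # modal weights (normalized)
    theta = rng.uniform(-np.pi, np.pi, 3)
    theta[0] = 0.0                                    # phases relative to the fundamental mode
    field = np.sum(rho[:, None, None] * modes * np.exp(1j * theta)[:, None, None], axis=0)
    intensity = np.abs(field) ** 2                    # network input: near-field intensity pattern
    label = np.concatenate([rho, np.cos(theta[1:])])  # label: weights + cos(relative phases)
    return intensity, label
```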

Fig. 11 Mode decomposition with a convolutional neural network modified from VGG-16. Figure adapted with permission from ref. [117] (© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement)

In 2020, Xiaojie Fan et al. handled the phase ambiguity of the modal coefficients using two labels: near-field and far-field images [121]. A combined loss that considers the reconstruction of the near-field and far-field images together is used to train a convolutional neural network. Simulation results show that the accuracy can be improved at the cost of additional labels in the dataset.

In 2021, with more powerful graphics processing units, Stefan Rothe et al. presented mode decomposition with another type of CNN, DenseNet, achieving 10 modes experimentally [122]. Like [117,118,119,120], they also used the cosine of the relative phases in the labels. The trained network can also work on datasets with unknown modes, implementing mode decomposition on a 10-mode subset of a 55-mode fiber.

Han Gao et al. used optimized datasets for network training to reduce the computational complexity [123]. Principal component analysis, a dimensionality reduction algorithm, was adopted to remove redundant information and noise from the near-field beam patterns. A 3-layer FCNN is trained to map the pre-processed near-field beam patterns to their labels (the modal weights and the cosine of the modal phases). In a 3-mode simulation, dataset optimization helps reduce the time of complete modal decomposition from 40 ms per frame to 5 ms per frame. In the 3-mode experimental test, the average correlation between the reconstructed and target images over 300 samples is 0.9224.
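
The dataset optimization step can be sketched as a standard PCA compression applied before the FCNN; the image size, the number of samples, and the 30 retained components below are arbitrary illustrations.

```python
import numpy as np
from sklearn.decomposition import PCA

patterns = np.random.rand(5000, 64 * 64)       # flattened near-field beam patterns
pca = PCA(n_components=30)                     # keep only the dominant components
features = pca.fit_transform(patterns)         # low-dimensional inputs for the 3-layer FCNN
print(features.shape, pca.explained_variance_ratio_.sum())
```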

Beam quality evaluation

The beam propagation factor M2 is an important parameter for assessing laser beam quality. The standard M2 measurement defined by the International Organization for Standardization is experimentally complex and relatively time-consuming. Improved techniques for fiber lasers include a single-shot scheme with a Fabry-Perot resonator [124], complex amplitude reconstruction methods using interferometers [125, 126] or two identical charge-coupled devices (CCDs) [127], and mode decomposition methods [128,129,130]. Although the relationship between the M2 factor and a single near-field pattern is implicit, deep learning methods can extract a straightforward mapping based on data analysis.

In 2019, Yi An et al. utilized a trained CNN to achieve M2 determination of fiber beams in about 5 ms with only one near-field beam pattern from the CCD, which is highly competitive for real-time measurement of time-varying beams [131]. This method also shows excellent robustness to imperfect beam patterns, such as noisy patterns and patterns from a CCD with vertical blooming [83] (Fig. 12).

Fig. 12 M2 evaluation with a convolutional neural network modified from VGG-16. Figure adapted with permission from ref. [131] (© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement)

Robust control for laser and laser system

Introducing a feedback mechanism into the architecture of a fiber laser or fiber laser system, to perform closed-loop control of external and internal factors through servo drive components, is a feasible solution for maintaining stable operation and state locking. Ongoing efforts in machine learning have advanced the control of complex and sensitive fiber lasers and fiber laser systems.

Mode locking in mode-locked laser

Mode-locked fiber lasers (MLFLs) based on nonlinear polarization rotation are mainstream commercial products, and their performance is extremely sensitive to perturbations inside or outside the cavity, thus requiring strict environmental control to maintain robust operation. Cavity sensitivity to birefringence has a significant impact on the mode-locking dynamics; however, quantitative modeling of the stochastic and sensitive birefringence remains unclear. Traversal and optimization algorithms have been widely studied for automatic mode-locking techniques [30, 132,133,134,135,136,137].

In 2014, a research group at the University of Washington achieved birefringence characterization of MLFLs based on machine-learning sparse representation in numerical simulations [138]. Further, by combining adaptive extremum-seeking controllers with the machine-learning-based birefringence classification, they proposed a self-tuning fiber laser in numerical simulations [139]. In 2018, they developed a self-tuning laser based on a deep-learning model predictive control (DL-MPC) algorithm [140]. The centerpiece of the DL-MPC algorithm is the model prediction module, a recurrent neural network that predicts future laser states. When the difference between the predicted and measured laser state exceeds a certain threshold, a VAE first infers the birefringence, and then a simple FCNN maps its result to the control inputs (the angles of a polarizer and three quarter-waveplates) to maintain mode-locking (Fig. 13).
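
The control logic described above can be summarized in a schematic loop; the state representation, the threshold, and the hooks `measure_state`, `rnn_predictor`, `vae_encoder`, `control_net`, and `apply_angles` are hypothetical placeholders for the trained models and hardware interfaces rather than the actual implementation of ref. [140].

```python
import numpy as np

def dl_mpc_step(history, measure_state, rnn_predictor, vae_encoder, control_net,
                apply_angles, threshold=0.05):
    predicted = rnn_predictor(history)                 # RNN forecast of the next laser state
    actual = measure_state()                           # measured laser state
    if np.linalg.norm(np.asarray(actual) - np.asarray(predicted)) > threshold:
        birefringence = vae_encoder(actual)            # infer the hidden cavity birefringence
        apply_angles(control_net(birefringence))       # re-set polarizer/waveplate angles
    history.append(actual)                             # extend the state history for the RNN
    return actual
```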

Fig. 13 Mode-locking with the deep-learning model predictive control algorithm. Figure adapted with permission from ref. [140] (© 2018 Optical Society of America, Open Access)

In 2021, Qiuquan Yan et al. demonstrated a deep-reinforcement learning algorithm with low latency (DELAY) for automatic mode-locked operation in a saturable absorber-based ultrafast fiber laser [141]. The DELAY algorithm has four deep neural networks: two for selecting the appropriate action (adjusting the input voltages of the electrical polarization controller) according to the laser state, and the other two for evaluating the effect of the executed actions. The experimental results show that the fastest recovery time of the algorithm after vibration is 0.472 s, and the average recovery time is 1.948 s (Fig. 14).

Fig. 14 Mode-locking with the low-latency deep-reinforcement learning algorithm. Figure adapted with permission from ref. [141] (© 2021 Chinese Laser Press, Open Access)

Phase locking in coherent laser combination

Fiber lasers have attracted research interest in many fields because of their compact structure, high efficiency, high portability, and good beam quality. As the power increases, the beam quality of a single fiber laser decreases due to physical limits [3,4,5]. Coherent beam combination (CBC) of multiple fiber lasers is a practical approach to breaking this power limitation [142, 143]. With phase synchronization between the sub-beams, coherent output can be realized, increasing the power of the entire output beam while maintaining beam quality and improving brightness.

Because of the mechanical and thermal perturbations encountered in practice, a phase control technique is required to ensure phase synchronization and stabilization of the sub-beams and thus maximize the combination efficiency. The active phase-locking method uses a phase-detection and feedback servo control system to compensate for dynamic phase noise and realize in-phase coherent output of each sub-beam. Phase detection in classic active phase-locking can be divided into two categories: direct and indirect detection. Direct methods yield high accuracy but require complex experimental structures, such as the heterodyne detection method [144, 145], the interferometric phase measurement method [146], the phase-intensity mapping method [147], and the pattern recognition method [148]. Indirect detection techniques use electrical modulation and demodulation to extract phase information, typically dithering techniques [149,150,151,152] and the stochastic parallel gradient descent (SPGD) algorithm [153].

A common question in CBC is how to combine a large number of sub-beams efficiently to achieve high output power. However, the control bandwidth of most classic active methods decreases as the number of sub-beams increases, so control bandwidth remains a challenging problem in large-scale CBC systems. Machine learning has recently been introduced to extend the classic control methods, with reinforcement learning and supervised learning being the two main approaches in applications.

In 2019, Henrik Tünnermann et al. demonstrated deep reinforcement learning for a Mach-Zehnder interferometer CBC setup and for tiled-aperture beam combining [77, 78]. The output of the critic network is regarded as a reward related to beam quality and decides the action of the control network. The control strategy is similar to that of an optimization scheme. Since taking an action requires time, the robustness still needs to be enhanced before practical application (Fig. 15).

Fig. 15 Tiled aperture beam combining with deep reinforcement learning. Figure adapted with permission from ref. [78] (© The Author(s) 2019, distributed under the terms of the Creative Commons Attribution license, http://creativecommons.org/licenses/by/4.0/)

In 2021, Maksym Shpakovych et al. proposed a quasi-reinforcement learning algorithm for an array of up to 100 laser beams [154]. The piston aberrations in this system were created by a spatial light modulator (SLM) rather than coming from the actual environment, so the influence of dynamic random noise in real systems was not considered.

In the same year, Xi Zhang et al. applied the Q-learning algorithm, an iterative reinforcement learning method, to CBC [155]. When the number of channels is low, it performs similarly to the SPGD algorithm, while its parameter tuning is more convenient than that of the SPGD algorithm.

A feasible supervised deep-learning based CBC scheme relies on a well-trained neural network that can inversely map an intensity profile to the phase of each sub-beam. During the phase control process, the phase errors are compensated using the phases predicted by the neural network. The premise of this method is a one-to-one correspondence between input and output; only then can the network converge and thus acquire phase prediction capability. In the symmetrically arranged beam array of a tiled-aperture system, the same far-field intensity distribution can correspond to different near-field phase distributions [156], so the far-field image cannot be directly paired with the phase as training data.
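
Once such a network is trained, the phase-locking loop itself is simple; the sketch below is a schematic of that loop, with `camera`, `phase_net`, and `modulators` standing in for the camera interface, the trained network, and the phase modulators (all hypothetical names).

```python
import numpy as np

def phase_locking_loop(camera, phase_net, modulators, n_iter=1000):
    for _ in range(n_iter):
        image = camera.grab()                       # intensity pattern of the combined beam
        phase_err = np.asarray(phase_net(image))    # predicted relative phase of each sub-beam
        modulators.apply(-phase_err)                # compensate the piston phase errors
```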

In 2019, Tianyue Hou et al. incorporated a CNN based on supervised learning into tiled-aperture CBC systems to learn the relationship between the intensity profile of the combined beam and the relative phases of the array elements for the first time [157]. In this way, the required phase for compensation can be obtained directly, which is quite different from the methods based on reinforcement learning [77, 78, 154, 155]. This work adopted non-focal-plane rather than focal-plane images to train the deep-learning model to avoid the data collision problem. Later, this method was extended to generate orbital angular momentum beams with different topological charges from a CBC system [80]. It is worth noting that structured light [158,159,160,161], one of the current research frontiers, can be generated from a beam array by similar methods (Fig. 16).

Fig. 16 Coherent beam combination with a deep convolutional neural network and non-focal-plane images. Figure adapted with permission from ref. [157] (© The Author(s) 2019, distributed under the terms of the Creative Commons Attribution license, http://creativecommons.org/licenses/by/4.0/)

Furthermore, to obtain a straightforward optical design that can adopt the focal-plane intensity image, in 2021 Qi Chang et al. considered breaking the degeneracy of the combined beam pattern in the focal plane with a diffuser [162]. A CNN maps the scattered intensity images in the focal plane to the phase errors of the fiber array. To some extent, the role of the diffuser is equivalent to applying both intensity and phase modulation.

Similarly, Renqi Liu et al. applied amplitude modulation to the sub-beams of a tiled-aperture coherent beam combination system [163]. In a two-beam coherent combination experiment, the system could simultaneously measure the sub-beam pointing and the phase difference with RMS accuracies of about 0.2 μrad and λ/250, respectively.

Another study adopted a four-layer FCNN to combine 81 channels with a two-dimensional 9×9-beam diffractive optical element (DOE) combiner [164]. Similar to [154], this system also works under idealized conditions. The network was trained to map the far-field interference pattern to the phases at the DOE. Since nearly identical interference patterns can come from different beam phases, the phase ambiguity is an obstacle to the convergence of the network. A core operation of this work is that the training data are produced from a limited phase perturbation range (less than 180 degrees), which is regarded as the unambiguous region. The trained network can quickly predict the phase for small yet frequent perturbations, which is the usual case. A feedback loop is introduced to pull the phase back into the trained region through a random walk if the phase falls outside that range. This work also discussed why limited-region training is adequate for the full phase perturbation range.

Discussions and prospects

Over the last decade, machine learning has dramatically boosted the development of fiber lasers, leading to new paradigms for advanced research and practical engineering. Even though many impressive results have been presented in the available studies, potential problems and challenges remain. For example, some work is based only on numerical data or numerical verification; further work is needed, such as uncovering governing models from experimental responses, validating learning models in laboratories, and online learning that accounts for environmental changes in real-time applications. Besides, the high burden of data collection, the expensive computational cost, and the poor interpretability might severely restrict the applicability of machine learning, especially deep learning methods with a black-box mechanism. Applications in industry that require high interpretability still prefer classical optimization algorithms. When should we choose a machine learning method under the trade-off between effectiveness and cost? What makes one machine learning method better than another? To what extent can we trust machine learning results and conclusions? The exploration of fundamental questions like these will drive machine learning research.

Opportunities and challenges lie ahead. Going forward, effective tools are expected to accelerate machine learning research. Mature open-source frameworks like TensorFlow [165] and PyTorch [166] have brought great convenience to the popularization of machine learning. However, standardized benchmarks for fiber and fiber laser research are rare. Further work could consider creating open datasets for specific topics that serve as standards for judging and comparing methods, which would benefit machine learning research in the related fields.

Novel machine learning techniques are emerging in an endless stream. Various new frameworks and new mathematics for scalable, robust, and rigorous next-generation learning machines are under development, which will continue to promote the development of lasers and achieve more brilliant results in the foreseeable future. For example, algorithm design by hand is a laborious process; to improve it, the concept of learning to optimize, which shows that algorithm design can be cast as a learning problem, deserves more attention [167, 168]. The related techniques might benefit the creation of autonomous fiber lasers capable of self-learning and intelligence, featuring great scalability and resistance to disruption.

Availability of data and materials

Data sharing is not applicable to this article as no new datasets were created in this review.

Abbreviations

DL: Deep learning
DNN: Deep neural network
RL: Reinforcement learning
PINN: Physics-informed neural network
ANN: Artificial neural network
FCNN: Fully connected network
RNN: Recurrent neural network
LSTM: Long short-term memory
GRU: Gated recurrent unit
RBM: Restricted Boltzmann machine
DBN: Deep belief network
CNN: Convolutional neural network
AE: Autoencoder
GAN: Generative adversarial network
VAE: Variational autoencoder
GCN: Graph convolutional network
DQN: Deep Q-network
SGD: Stochastic gradient descent
PDE: Partial differential equation
NLSE: Nonlinear Schrödinger equation
WLC: White-light continuum
PCF: Photonic crystal fiber
RA: Raman amplifier
FROG: Frequency-resolved optical gating
MD: Mode decomposition
CCD: Charge-coupled device
MLFL: Mode-locked fiber laser
CBC: Coherent beam combination

References

  1. Fermann ME, Hartl I. Ultrafast fiber laser technology. IEEE J Select Topics Quantum Electron. 2009;15(1):191–204. https://doi.org/10.1109/JSTQE.2008.2010246.

  2. Fermann ME, Hartl I. Ultrafast fibre lasers. Nat Photonics. 2013;7(11):868–74. https://doi.org/10.1038/nphoton.2013.280.

  3. Zervas MN, Codemard CA. High power fiber lasers: A review. IEEE J Select Topics Quantum Electron. 2014;20(5):219–41. https://doi.org/10.1109/JSTQE.2014.2321279.

  4. Jauregui C, Limpert J, Tünnermann A. High-power fibre lasers. Nat Photonics. 2013;7(11):861–7. https://doi.org/10.1038/nphoton.2013.273.

  5. Liu Z, et al. High-power coherent beam polarization combination of fiber lasers: progress and prospect [Invited]. J Opt Soc Am B. 2017;34(3):A7. https://doi.org/10.1364/josab.34.0000a7.

  6. Xu C, Wise FW. Recent advances in fibre lasers for nonlinear microscopy. Nat Photonics. 2013;7(11):875–82. https://doi.org/10.1038/nphoton.2013.284.

  7. Kapron FP, Keck DB. Pulse Transmission Through a Dielectric Optical Waveguide. Appl Opt. 1971;10(7):1519. https://doi.org/10.1364/ao.10.001519.

  8. Li T. Optical Fibers for Communications. Opt News. 1977;3(3):10–5. https://doi.org/10.1364/on.3.2.000010.

  9. Olsen FO, Hansen KS, Nielsen JS. Multibeam fiber laser cutting. J Laser Appl. 2009;21(3):133–8. https://doi.org/10.2351/1.3184436.

  10. Yang J, Tang Y, Xu J. Development and applications of gain-switched fiber lasers [Invited]. Photonics Res. 2013;1(1):52. https://doi.org/10.1364/prj.1.000052.

  11. Churkin DV, et al. Recent advances in fundamentals and applications of random fiber lasers. Adv Opt Photon. 2015;7(3):516. https://doi.org/10.1364/aop.7.000516.

  12. Fu S, et al. Review of recent progress on single-frequency fiber lasers. J Opt Soc Am B. 2017;34(3):A49. https://doi.org/10.1364/josab.34.000a49.

  13. Shang C, et al. Review on wavelength-tunable pulsed fiber lasers based on 2D materials. Opt Laser Technol. 2020;131:106375. https://doi.org/10.1016/j.optlastec.2020.106375.

  14. Dragic PD, Cavillon M, Ballato J. Materials for optical fiber lasers: A review. Appl Phys Rev. 2018;5(4). https://doi.org/10.1063/1.5048410.

  15. Samuel AL. Some studies in machine learning using the game of checkers. IBM J Res Dev. 2000;44(1–2):207–19. https://doi.org/10.1147/rd.441.0206.

  16. De Santana LMQ, et al. Deep Neural Networks for Acoustic Modeling in the Presence of Noise. IEEE Lat Am Trans. 2018;16(3):918–25. https://doi.org/10.1109/TLA.2018.8358674.

  17. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90. https://doi.org/10.1145/3065386.

    Article  Google Scholar 

  18. Jiao Z, et al. Machine learning and deep learning in chemical health and safety: A systematic review of techniques and applications. J Chem Health Saf. 2020;27(6):316–34. https://doi.org/10.1021/acs.chas.0c00075.

    Article  Google Scholar 

  19. Ongie G, et al. Deep Learning Techniques for Inverse Problems in Imaging. IEEE J Select Areas Inform Theory. 2020;1(1):39–56. https://doi.org/10.1109/jsait.2020.2991563.

    Article  Google Scholar 

  20. Barbastathis G, Ozcan A, Situ G. On the use of deep learning for computational imaging. Optica. 2019;6(8):921. https://doi.org/10.1364/optica.6.000921.

    Article  Google Scholar 

  21. Zhao R, Huang L, Wang Y. Recent advances in multi-dimensional metasurfaces holographic technologies. PhotoniX. 2020;1(1):1–24. https://doi.org/10.1186/s43074-020-00020-y.

    Article  Google Scholar 

  22. Zuo C, et al. Deep learning in optical metrology: a review. Light Sci Appl. 2022;11(1). https://doi.org/10.1038/s41377-022-00714-x.

  23. Musumeci F, et al. An Overview on Application of Machine Learning Techniques in Optical Networks. IEEE Commun Surv Tutorials. 2019;21(2):1383–408. https://doi.org/10.1109/COMST.2018.2880039.

    Article  Google Scholar 

  24. Wang D, et al. Data-driven Optical Fiber Channel Modeling: A Deep Learning Approach. J Lightwave Technol. 2020;38(17):4730–43. https://doi.org/10.1109/JLT.2020.2993271.

    Article  Google Scholar 

  25. Zhang Y, et al. Ultrafast and Accurate Temperature Extraction via Kernel Extreme Learning Machine for BOTDA Sensors. J Lightwave Technol. 2021;39(5):1537–43. https://doi.org/10.1109/JLT.2020.3035810.

    Article  Google Scholar 

  26. Ma W, et al. Deep learning for the design of photonic structures. Nat Photonics. 2021;15(2):77–90. https://doi.org/10.1038/s41566-020-0685-y.

    Article  Google Scholar 

  27. Wiecha PR, et al. Deep learning in nano-photonics: inverse design and beyond. Photonics Res. 2021;9(5):B182. https://doi.org/10.1364/prj.415960.

    Article  Google Scholar 

  28. Malkiel I, et al. Plasmonic nanostructure design and characterization via Deep Learning. Light Sci Appl. 2018;7(1). https://doi.org/10.1038/s41377-018-0060-7.

  29. Situ G, Westbrook P. AI boosts photonics and vice versa AI boosts photonics and vice versa: AIP Publishing, LLC; 2020. https://doi.org/10.1063/5.0017902.

    Book  Google Scholar 

  30. Woodward RI, Kelleher EJR. Genetic algorithm-based control of birefringent filtering for self-tuning, self-pulsing fiber lasers. Opt Lett. 2017;42(15):2952. https://doi.org/10.1364/ol.42.002952.

    Article  Google Scholar 

  31. Wu X, et al. Intelligent Breathing Soliton Generation in Ultrafast Fiber Lasers. Laser Photonics Rev. 2022;16(2):2100191. https://doi.org/10.1002/lpor.202100191.

    Article  Google Scholar 

  32. Nathan Kutz J, Fu X, Brunton S. Self-tuning fiber lasers: Machine learning applied to optical systems. Nonlinear Photonics. 2014;2014:1–2. https://doi.org/10.1364/np.2014.ntu4a.7.

    Article  Google Scholar 

  33. Mitchell TM. Machine Learning. New York: McGraw-Hill; 1997.

    MATH  Google Scholar 

  34. Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. In: 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings; 2016. p. 1–16.

    Google Scholar 

  35. Tamir JI, Yu SX, Lustig M. Unsupervised Deep Basis Pursuit: Learning inverse problems without ground-truth data; 2019. p. 1–5.

    Google Scholar 

  36. van Engelen JE, Hoos HH. A survey on semi-supervised learning. Mach Learn. 2020;109(2):373–440. https://doi.org/10.1007/s10994-019-05855-6.

    Article  MathSciNet  MATH  Google Scholar 

  37. Nilsson NJ. Introduction to Machine Learning. An early draft of a proposed textbook. Mach Learn. 2005;56(2):387–99 10.1.1.167.8023.

    Google Scholar 

  38. Shalev-Shwartz S, Ben-David S. Understanding Machine Learning, in Understanding Machine Learning: From Theory to Algorithms 9781107057. Cambridge: Cambridge University Press; 2014. https://doi.org/10.1017/CBO9781107298019.

    Book  MATH  Google Scholar 

  39. Qiu J, et al. A survey of machine learning for big data processing. Eurasip J Adv Signal Process. 2016;(1). https://doi.org/10.1186/s13634-016-0355-x.

  40. Martin E, et al. Semi-Supervised Learning. In: Encyclopedia of Machine Learning. Boston: Springer; 2011. p. 892–7. https://doi.org/10.1007/978-0-387-30164-8_749.

    Chapter  Google Scholar 

  41. Morales EF, Zaragoza JH. An introduction to reinforcement learning. In: Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions; 2011. p. 63–80. https://doi.org/10.4018/978-1-60960-165-2.ch004.

    Chapter  Google Scholar 

  42. Nousiainen J, et al. Adaptive optics control using model-based reinforcement learning. Opt Express. 2021;29(10):15327. https://doi.org/10.1364/oe.420270.

    Article  Google Scholar 

  43. Brereton RG, Lloyd GR. Support Vector Machines for classification and regression. Analyst. 2010;135(2):230–67. https://doi.org/10.1039/b918972f.

    Article  Google Scholar 

  44. Bo D, et al. Structural Deep Clustering Network. In: The Web Conference 2020 - Proceedings of the World Wide Web Conference, WWW 2020; 2020. p. 1400–10. https://doi.org/10.1145/3366423.3380214.

    Chapter  Google Scholar 

  45. Min E, et al. A Survey of Clustering with Deep Learning: From the Perspective of Network Architecture. IEEE Access. 2018;6(July):39501–14. https://doi.org/10.1109/ACCESS.2018.2855437.

    Article  Google Scholar 

  46. LeCun Y, et al. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989;1(4):541–51. https://doi.org/10.1162/neco.1989.1.4.541.

    Article  Google Scholar 

  47. Lecun Y, et al. Gradient-based learning applied to document recognition. Proc IEEE. 1998;86(11):2278–324. https://doi.org/10.1109/5.726791.

    Article  Google Scholar 

  48. Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. In: 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings; 2017. p. 1–14.

    Google Scholar 

  49. Solomatine D, See LM, Abrahart RJ. Data-Driven Modelling: Concepts, Approaches and Experiences. Pract Hydroinf. 2008:17–30. https://doi.org/10.1007/978-3-540-79881-1_2.

  50. Karniadakis GE, et al. Physics-informed machine learning. Nat Rev Physics. 2021;3(6):422–40. https://doi.org/10.1038/s42254-021-00314-5.

    Article  MathSciNet  Google Scholar 

  51. Raissi M, Perdikaris P, Karniadakis GE. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys. 2019;378(October):686–707. https://doi.org/10.1016/j.jcp.2018.10.045.

    Article  MathSciNet  MATH  Google Scholar 

  52. Raissi M. Deep hidden physics models: Deep learning of nonlinear partial differential equations. J Mach Learn Res. 2018;19:1–24.

    MathSciNet  MATH  Google Scholar 

  53. Brunton SL, et al. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc Natl Acad Sci U S A. 2016;113(15):3932–7. https://doi.org/10.1073/pnas.1517384113.

    Article  MathSciNet  MATH  Google Scholar 

  54. Bengio Y, Courville A, Vincent P. Representation Learning: A Review and New Perspectives. IEEE Trans Pattern Anal Mach Intell. 2013;35(8):1798–828. https://doi.org/10.1109/TPAMI.2013.50.

    Article  Google Scholar 

  55. Yu D, et al. Deep learning and its applications to signal and information processing. IEEE Signal Process Mag. 2011;28(1):145–50. https://doi.org/10.1109/MSP.2010.939038.

    Article  Google Scholar 

  56. Lecun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44. https://doi.org/10.1038/nature14539.

    Article  Google Scholar 

  57. McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115–33. https://doi.org/10.1007/BF02478259.

    Article  MathSciNet  MATH  Google Scholar 

  58. Salehinejad H, et al. Recent Advances in Recurrent Neural Networks; 2017. p. 1–21.

    Google Scholar 

  59. Bennett KP, Parrado-Hernández E. The interplay of optimization and machine learning research. J Mach Learn Res. 2006;7:1265–81. https://doi.org/10.5555/1248547.

    Article  MathSciNet  MATH  Google Scholar 

  60. Zhang J, et al. Why gradient clipping accelerates training: A theoretical justification for adaptivity; 2019. p. 1–21.

    Google Scholar 

  61. Wilson AC, et al. The marginal value of adaptive gradient methods in machine learning. Adv Neural Inf Proces Syst. 2017;(Nips):4149–59. http://arxiv.org/abs/1705.08292.

  62. Ruder S. An overview of gradient descent optimization algorithms. In: arXiv preprint arXiv:160904747; 2016. p. 1–14. http://arxiv.org/abs/1609.04747.

  63. Yao X. Evolving artificial neural networks. Proc IEEE. 1999;87(9):1423–47. https://doi.org/10.1109/5.784219.

    Article  Google Scholar 

  64. F. P. Such et al., “Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning” (2017).

    Google Scholar 

  65. Conti E, et al. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. Adv Neural Inf Proces Syst. 2018;(NeurIPS):5027–38. http://arxiv.org/abs/1712.06560.

  66. Rere LMR, Fanany MI, Arymurthy AM. Simulated Annealing Algorithm for Deep Learning. Procedia Comput Sci. 2015;72:137–44. https://doi.org/10.1016/j.procs.2015.12.114.

    Article  Google Scholar 

  67. Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science. 2006;313(5786):504–7. https://doi.org/10.1126/science.1127647.

    Article  MathSciNet  MATH  Google Scholar 

  68. Wang H, Czerminski R, Jamieson AC. Neural Networks and Deep Learning. In: The Machine Age of Customer Insight; 2021. p. 91–101. https://doi.org/10.1108/978-1-83909-694-520211010.

    Chapter  Google Scholar 

  69. Mnih V, et al. Playing Atari with Deep Reinforcement Learning. In: Deep Reinforcement Learning; 2013. p. 135–60.

    Google Scholar 

  70. Vlachas PR, et al. Backpropagation algorithms and Reservoir Computing in Recurrent Neural Networks for the forecasting of complex spatiotemporal dynamics. Neural Netw. 2020;126:191–217. https://doi.org/10.1016/j.neunet.2020.02.016.

    Article  Google Scholar 

  71. Pandey S, Schumacher J. Reservoir computing model of two-dimensional turbulent convection. Phys Rev Fluids. 2020;5(11):113506. https://doi.org/10.1103/PhysRevFluids.5.113506.

    Article  Google Scholar 

  72. Vlachas PR, et al. Data-Driven Forecasting of High-Dimensional Chaotic Systems with Long Short-Term Memory Networks. (arXiv:1802.07486v4 [physics.comp-ph] UPDATED). Phys Today. 2018. https://doi.org/10.1098/rspa.2017.0844.

  73. Salmela L, et al. Predicting ultrafast nonlinear dynamics in fibre optics with a recurrent neural network. Nat Machine Intell. 2021;3(4):344–54. https://doi.org/10.1038/s42256-021-00297-z.

    Article  Google Scholar 

  74. Teğin U, et al. Reusability report: Predicting spatiotemporal nonlinear dynamics in multimode fibre optics with a recurrent neural network. Nat Machine Intell. 2021;3(5):387–91. https://doi.org/10.1038/s42256-021-00347-6.

    Article  Google Scholar 

  75. Sui H, et al. Deep learning based pulse prediction of nonlinear dynamics in fiber optics. Opt Express. 2021;29(26):44080. https://doi.org/10.1364/oe.443279.

    Article  Google Scholar 

  76. Lim J, Psaltis D. MaxwellNet: Physics-driven deep neural network training based on Maxwell’s equations. APL Photonics. 2022;7(1):011301. https://doi.org/10.1063/5.0071616.

    Article  Google Scholar 

  77. Tünnermann H, Shirakawa A. Deep reinforcement learning for coherent beam combining applications. Opt Express. 2019;27(17):24223. https://doi.org/10.1364/oe.27.024223.

    Article  Google Scholar 

  78. Tünnermann H, Shirakawa A. Deep reinforcement learning for tiled aperture beam combining in a simulated environment. JPhys Photonics. 2021;3(1). https://doi.org/10.1088/2515-7647/abcd83.

  79. Chen J, Jiang H. Optimal Design of Gain-Flattened Raman Fiber Amplifiers Using a Hybrid Approach Combining Randomized Neural Networks and Differential Evolution Algorithm. IEEE Photonics J. 2018;10(2). https://doi.org/10.1109/JPHOT.2018.2817843.

  80. Hou T, et al. Deep-learning-assisted, two-stage phase control method for high-power mode-programmable orbital angular momentum beam generation. Photonics Res. 2020;8(5):715. https://doi.org/10.1364/prj.388551.

    Article  Google Scholar 

  81. Vincent P, et al. Stacked denoising autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. J Mach Learn Res. 2010;11:3371–408.

    MathSciNet  MATH  Google Scholar 

  82. Vincent P, et al. Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th International Conference on Machine Learning, vol. 311. New York: ACM Press; 2008. p. 1096–103. https://doi.org/10.1145/1390156.1390294.

    Chapter  Google Scholar 

  83. An Y, et al. Suppressing the Influence of CCD Vertical Blooming on M2 Determination through Deep Learning. In: 2019 18th International Conference on Optical Communications and Networks, ICOCN 2019(1); 2019. p. 2–4. https://doi.org/10.1109/ICOCN.2019.8934887.

    Chapter  Google Scholar 

  84. Mathew RS, et al. The Raspberry Pi auto-aligner: Machine learning for automated alignment of laser beams. Rev Sci Instrum. 2021;92(1). https://doi.org/10.1063/5.0032588.

  85. Arismar Cerqueira S. Recent progress and novel applications of photonic crystal fibers. Rep Prog Phys. 2010;73(2):024401. https://doi.org/10.1088/0034-4885/73/2/024401.

    Article  Google Scholar 

  86. Chugh S, et al. Machine learning approach for computing optical properties of a photonic crystal fiber. Opt Express. 2019;27(25):36414. https://doi.org/10.1364/oe.27.036414.

    Article  Google Scholar 

  87. Zibar D, et al. Inverse System Design Using Machine Learning: The Raman Amplifier Case. J Lightwave Technol. 2020;38(4):736–53. https://doi.org/10.1109/JLT.2019.2952179.

    Article  Google Scholar 

  88. Zhou J, et al. Robust, compact, and flexible neural model for a fiber Raman amplifier. J Lightwave Technol. 2006;24(6):2362–7. https://doi.org/10.1109/JLT.2006.874602.

    Article  Google Scholar 

  89. Singh S, Kaler RS. Performance optimization of EDFA-Raman hybrid optical amplifier using genetic algorithm. Opt Laser Technol. 2015;68:89–95. https://doi.org/10.1016/j.optlastec.2014.10.011.

    Article  Google Scholar 

  90. M. Ionescu, A. Ghazisaeidi, and J. Renaudier, “Machine Learning Assisted Hybrid EDFA-Raman Amplifier Design for C+L Bands,” 2020 European Conference on Optical Communications, ECOC 2020(1), 2020–2022. 2020. https://doi.org/10.1109/ECOC48923.2020.9333241.

  91. Jiang X, et al. Solving the nonlinear Schrödinger equation in optical fibers using physics-informed neural network. In: Optics InfoBase Conference Papers: OSA; 2021. p. 3–5. https://doi.org/10.1364/ofc.2021.m3h.8.

    Chapter  Google Scholar 

  92. Teǧin U, et al. Controlling spatiotemporal nonlinearities in multimode fibers with deep neural networks. APL Photonics. 2020;5(3):030804. https://doi.org/10.1063/1.5138131.

    Article  Google Scholar 

  93. Valensise CM, et al. Deep reinforcement learning control of white-light continuum generation. Optica. 2021;8(2):239. https://doi.org/10.1364/OPTICA.414634.

    Article  Google Scholar 

  94. Su R, et al. Active coherent beam combining of a five-element, 800 W nanosecond fiber amplifier array. Opt Lett. 2012;37(19):3978. https://doi.org/10.1364/ol.37.003978.

    Article  Google Scholar 

  95. Su R, et al. Active coherent beam combination of two high-power single-frequency nanosecond fiber amplifiers. Opt Lett. 2012;37(4):497. https://doi.org/10.1364/ol.37.000497.

    Article  Google Scholar 

  96. Vu KT, et al. Adaptive pulse shape control in a diode-seeded nanosecond fiber MOPA system. Opt Express. 2006;14(23):10996. https://doi.org/10.1364/oe.14.010996.

    Article  Google Scholar 

  97. Malinowski A, et al. High power pulsed fiber MOPA system incorporating electro-optic modulator based adaptive pulse shaping. Opt Express. 2009;17(23):20927. https://doi.org/10.1364/oe.17.020927.

    Article  Google Scholar 

  98. Malinowski A, et al. High peak power, high-energy, high-average power pulsed fibre laser system with versatile pulse duration and shape. Optics InfoBase Conf Pap. 2013;38(22):4686. https://doi.org/10.1364/ol.38.004686.

    Article  Google Scholar 

  99. Schimpf DN, et al. Compensation of pulse-distortion in saturated laser amplifiers. Opt Express. 2008;16(22):17637. https://doi.org/10.1364/oe.16.017637.

    Article  Google Scholar 

  100. Shi H, et al. High-power diode-seeded thulium-doped fiber MOPA incorporating active pulse shaping. Appl Phys B Lasers Opt. 2016;122(10). https://doi.org/10.1007/s00340-016-6543-4.

  101. Kutuzyan AA, et al. Dispersive regime of spectral compression. Quantum Electron. 2008;38(4):383–7. https://doi.org/10.1070/qe2008v038n04abeh013737.

    Article  Google Scholar 

  102. Finot C, et al. Parabolic pulse generation and applications. In: 2nd IEEE LEOS Winter Topicals, WTM 2009 45(11); 2009. p. 110–1. https://doi.org/10.1109/LEOSWT.2009.4771681.

    Chapter  Google Scholar 

  103. Boscolo S, Finot C. Artificial neural networks for nonlinear pulse shaping in optical fibers. Opt Laser Technol. 2020;131(February):106439. https://doi.org/10.1016/j.optlastec.2020.106439.

    Article  Google Scholar 

  104. Boscolo S, Dudley JM, Finot C. Modelling self-similar parabolic pulses in optical fibres with a neural network. Results in Optics. 2021;3(November 2020):100066. https://doi.org/10.1016/j.rio.2021.100066.

    Article  Google Scholar 

  105. Gupta RK, et al. Deep Learning Enabled Laser Speckle Wavemeter with a High Dynamic Range. Laser Photonics Rev. 2020;14(9):1–19. https://doi.org/10.1002/lpor.202000120.

    Article  Google Scholar 

  106. Xiong W, et al. Deep learning of ultrafast pulses with a multimode fiber. APL Photonics. 2020;5(9). https://doi.org/10.1063/5.0007037.

  107. Genty G, et al. Machine learning and applications in ultrafast photonics. Nat Photonics. 2021;15(2):91–101. https://doi.org/10.1038/s41566-020-00716-4.

    Article  Google Scholar 

  108. Bendory T, Beinert R, Eldar YC. Fourier phase retrieval: Uniqueness and algorithms. Appl Numer Harmon Anal. 2017;(9783319698014):55–91. https://doi.org/10.1007/978-3-319-69802-1_2.

  109. Escoto E, et al. Advanced phase retrieval for dispersion scan: a comparative study. J Opt Soc Am B. 2018;35(1):8. https://doi.org/10.1364/josab.35.000008.

    Article  Google Scholar 

  110. Kane DJ. Principal components generalized projections: a review [Invited]. J Opt Soc Am B. 2008;25(6):A120. https://doi.org/10.1364/josab.25.00a120.

    Article  Google Scholar 

  111. Sidorenko P, et al. Ptychographic reconstruction algorithm for FROG: Supreme robustness and super-resolution. In: 2016 Conference on Lasers and Electro-Optics, CLEO 2016 3(12); 2016. https://doi.org/10.1364/cleo_si.2016.stu4i.3.

    Chapter  Google Scholar 

  112. Zahavy T, et al. Deep learning reconstruction of ultrashort pulses. Optica. 2018;5(5):666. https://doi.org/10.1364/OPTICA.5.000666.

    Article  Google Scholar 

  113. Zhu Z, et al. Attosecond pulse retrieval from noisy streaking traces with conditional variational generative network. Sci Rep. 2020;10(1):1–7. https://doi.org/10.1038/s41598-020-62291-6.

    Article  Google Scholar 

  114. White J, Chang Z. Attosecond streaking phase retrieval with neural network. Opt Express. 2019;27(4):4799. https://doi.org/10.1364/oe.27.004799.

    Article  Google Scholar 

  115. Kokhanovskiy A, et al. Machine learning-based pulse characterization in figure-eight mode-locked lasers. Opt Lett. 2019;44(13):3410. https://doi.org/10.1364/ol.44.003410.

    Article  Google Scholar 

  116. Bruning R, et al. Comparative analysis of numerical methods for the mode analysis of laser beams. Appl Opt. 2013;52(32):7769–77. https://doi.org/10.1364/AO.52.007769.

    Article  Google Scholar 

  117. An Y, et al. Learning to decompose the modes in few-mode fibers with deep convolutional neural network. Opt Express. 2019;27(7):10127. https://doi.org/10.1364/oe.27.010127.

    Article  Google Scholar 

  118. An Y, et al. Numerical mode decomposition for multimode fiber: From multi-variable optimization to deep learning. Opt Fiber Technol. 2019;52(June):101960. https://doi.org/10.1016/j.yofte.2019.101960.

    Article  Google Scholar 

  119. An Y, et al. Deep Learning-Based Real-Time Mode Decomposition for Multimode Fibers. IEEE J Select Topics Quantum Electron. 2020;26(4):1–6. https://doi.org/10.1109/JSTQE.2020.2969511.

    Article  Google Scholar 

  120. An Y, et al. Fast modal analysis for Hermite–Gaussian beams via deep learning. Appl Opt. 2020;59(7):1954. https://doi.org/10.1364/ao.377189.

    Article  Google Scholar 

  121. Fan X, et al. Mitigating ambiguity by deep-learning-based modal decomposition method. Opt Commun. 2020;471(February):125845. https://doi.org/10.1016/j.optcom.2020.125845.

    Article  Google Scholar 

  122. Rothe S, et al. Intensity-Only Mode Decomposition on Multimode Fibers Using a Densely Connected Convolutional Network. J Lightwave Technol. 2021;39(6):1672–9. https://doi.org/10.1109/JLT.2020.3041374.

    Article  Google Scholar 

  123. Gao H, et al. Rapid Mode Decomposition of Few-Mode Fiber by Artificial Neural Network. J Lightwave Technol. 2021;39(19):6294–300. https://doi.org/10.1109/JLT.2021.3097501.

    Article  Google Scholar 

  124. Scaggs M, Haas G. Real time laser beam analysis system for high power lasers. In: Laser Resonators and Beam Control XIII 7913; 2011. p. 791306. https://doi.org/10.1117/12.871369.

    Chapter  Google Scholar 

  125. Du Y, Fu Y, Zheng L. Complex amplitude reconstruction for dynamic beam quality M^2 factor measurement with self-referencing interferometer wavefront sensor. Appl Opt. 2016;55(36):10180. https://doi.org/10.1364/ao.55.010180.

    Article  Google Scholar 

  126. Han Z-G, et al. Determination of the laser beam quality factor (M^2) by stitching quadriwave lateral shearing interferograms with different exposures. Appl Opt. 2017;56(27):7596. https://doi.org/10.1364/ao.56.007596.

    Article  Google Scholar 

  127. Pan S, et al. Real-time complex amplitude reconstruction method for beam quality M^2 factor measurement. Opt Express. 2017;25(17):20142. https://doi.org/10.1364/oe.25.020142.

    Article  Google Scholar 

  128. Yoda H, Polynkin P, Mansuripur M. Beam quality factor of higher order modes in a step-index fiber. J Lightwave Technol. 2006;24(3):1350–5. https://doi.org/10.1109/JLT.2005.863337.

    Article  Google Scholar 

  129. Huang L, et al. Real-time mode decomposition for few-mode fiber based on numerical method. Opt Express. 2015;23(4):4620. https://doi.org/10.1364/oe.23.004620.

    Article  Google Scholar 

  130. Flamm D, et al. Fast M2 measurement for fiber beams based on modal analysis. Appl Opt. 2012;51(7):987–93. https://doi.org/10.1364/AO.51.000987.

    Article  Google Scholar 

  131. An Y, et al. Deep learning enabled superfast and accurate M 2 evaluation for fiber beams. Opt Express. 2019;27(13):18683. https://doi.org/10.1364/OE.27.018683.

    Article  Google Scholar 

  132. Pu G, et al. Automatic mode-locking fiber lasers: progress and perspectives. Sci China Inf Sci. 2020;63(6):1–24. https://doi.org/10.1007/s11432-020-2883-0.

    Article  Google Scholar 

  133. Pu G, et al. Intelligent programmable mode-locked fiber laser with a human-like algorithm. Optica. 2019;6(3):362. https://doi.org/10.1364/optica.6.000362.

    Article  Google Scholar 

  134. Brunton SL, Fu X, Kutz JN. Extremum-seeking control of a mode-locked laser. IEEE J Quantum Electron. 2013;49(10):852–61. https://doi.org/10.1109/JQE.2013.2280181.

    Article  Google Scholar 

  135. Andral U, et al. Toward an autosetting mode-locked fiber laser cavity. J Opt Soc Am B. 2016;33(5):825. https://doi.org/10.1364/josab.33.000825.

    Article  Google Scholar 

  136. Woodward RI, Kelleher EJR. Towards ‘smart lasers’: Self-optimisation of an ultrafast pulse source using a genetic algorithm. Sci Rep. 2016;6(November):1–9. https://doi.org/10.1038/srep37616.

    Article  Google Scholar 

  137. Andra U, et al. Fiber laser mode locked through an evolutionary algorithm. In: Proceedings 2015 European Conference on Lasers and Electro-Optics - European Quantum Electronics Conference, CLEO/Europe-EQEC 2015 2(April); 2015. p. 2–6. https://doi.org/10.1364/optica.2.000275.

    Chapter  Google Scholar 

  138. Fu X, Brunton SL, Nathan Kutz J. Classification of birefringence in mode-locked fiber lasers using machine learning and sparse representation. Opt Express. 2014;22(7):8585. https://doi.org/10.1364/oe.22.008585.

    Article  Google Scholar 

  139. Brunton SL, Fu X, Kutz JN. Self-Tuning Fiber Lasers. IEEE J Select Topics Quantum Electron. 2014;20(5):464–71. https://doi.org/10.1109/JSTQE.2014.2336538.

    Article  Google Scholar 

  140. Baumeister T, Brunton SL, Nathan Kutz J. Deep learning and model predictive control for self-tuning mode-locked lasers. J Opt Soc Am B. 2018;35(3):617. https://doi.org/10.1364/josab.35.000617.

    Article  Google Scholar 

  141. Yan Q, et al. Low-latency deep-reinforcement learning algorithm for ultrafast fiber lasers. Photonics Res. 2021;9(8):1493. https://doi.org/10.1364/prj.428117.

    Article  Google Scholar 

  142. Su R, et al. High Power Narrow-Linewidth Nanosecond Coherent Beam Combination. Ieee J Select Topics Quantum Electron. 2014;20(5):IEEE.

  143. Chang H, et al. First experimental demonstration of coherent beam combining of more than 100 beams. Photonics Res. 2020;8(12):1943. https://doi.org/10.1364/prj.409788.

    Article  Google Scholar 

  144. Goodno GD, et al. Active phase and polarization locking of a 14 kW fiber amplifier. Opt Lett. 2010;35(10):1542. https://doi.org/10.1364/ol.35.001542.

    Article  Google Scholar 

  145. Goodno GD, et al. Brightness-scaling potential of actively phase-locked solid-state laser arrays. IEEE J Select Topics Quantum Electron. 2007;13(3):460–71. https://doi.org/10.1109/JSTQE.2007.896618.

    Article  Google Scholar 

  146. Fsaifes I, et al. Coherent Beam combining of 37 femtosecond fiber amplifiers. In: Optics InfoBase Conference Papers Part F140-(14); 2019. p. 20152. https://doi.org/10.1364/oe.394031.

    Chapter  Google Scholar 

  147. Kabeya D, et al. Efficient phase-locking of 37 fiber amplifiers by phase-intensity mapping in an optimization loop. Opt Express. 2017;25(12):13816. https://doi.org/10.1364/oe.25.013816.

    Article  Google Scholar 

  148. Du Q, et al. Deterministic stabilization of eight-way 2D diffractive beam combining using pattern recognition. Opt Lett. 2019;44(18):4554. https://doi.org/10.1364/ol.44.004554.

    Article  Google Scholar 

  149. Ahn HK, Kong HJ. Cascaded multi-dithering theory for coherent beam combining of multiplexed beam elements. Opt Express. 2015;23(9):12407. https://doi.org/10.1364/oe.23.012407.

    Article  Google Scholar 

  150. Ahn HK, Kong HJ. Feasibility of cascaded multi-dithering technique for coherent addition of a large number of beam elements. Appl Opt. 2016;55(15):4101. https://doi.org/10.1364/ao.55.004101.

    Article  Google Scholar 

  151. Ma Y, et al. Coherent beam combination with single frequency dithering technique. Opt Lett. 2010;35(9):1308. https://doi.org/10.1364/ol.35.001308.

    Article  Google Scholar 

  152. Jiang M, et al. Coherent beam combining of fiber lasers using a CDMA-based single-frequency dithering technique. Appl Opt. 2017;56(15):4255. https://doi.org/10.1364/ao.56.004255.

    Article  Google Scholar 

  153. Ma P, et al. 7.1 kW coherent beam combining system based on a seven-channel fiber amplifier array. Opt Laser Technol. 2021;140(October 2020):107016. https://doi.org/10.1016/j.optlastec.2021.107016.

    Article  Google Scholar 

  154. Shpakovych M, et al. Experimental phase control of a 100 laser beam array with quasi-reinforcement learning of a neural network in an error reduction loop. Opt Express. 2021;29(8):12307. https://doi.org/10.1364/oe.419232.

    Article  Google Scholar 

  155. Zhang X, et al. Coherent beam combination based on Q-learning algorithm. Opt Commun. 2021;490(February):126930. https://doi.org/10.1016/j.optcom.2021.126930.

    Article  Google Scholar 

  156. Hou T, et al. High-power vortex beam generation enabled by a phased beam array fed at the nonfocal-plane. Opt Express. 2019;27(4):4046. https://doi.org/10.1364/oe.27.004046.

    Article  Google Scholar 

  157. Hou T, et al. Deep-learning-based phase control method for tiled aperture coherent beam combining systems. High Power Laser Sci Eng. 2019;7:e59. https://doi.org/10.1017/hpl.2019.46.

    Article  Google Scholar 

  158. Chen J, Wan C, Zhan Q. Engineering photonic angular momentum with structured light: a review. Adv Photonics. 2021;3(06):1–15. https://doi.org/10.1117/1.ap.3.6.064001.

    Article  Google Scholar 

  159. Qiao Z, et al. Multi-vortex laser enabling spatial and temporal encoding. PhotoniX. 2020;1(1):13. https://doi.org/10.1186/s43074-020-00013-x.

    Article  Google Scholar 

  160. Chen Y, Cai Y. Optical coherence structure: A novel tool for light manipulation. Sci China Technol Sci. 2021. https://doi.org/10.1007/s11431-021-1966-6.

  161. Forbes A, de Oliveira M, Dennis MR. Structured light. Nat Photonics. 2021;15(4):253–62. https://doi.org/10.1038/s41566-021-00780-4.

    Article  Google Scholar 

  162. Chang Q, et al. Phase-locking System in Fiber Laser Array through Deep Learning with Diffusers. In: 2020 Asia Communications and Photonics Conference, ACP 2020 and International Conference on Information Photonics and Optical Communications, IPOC 2020 - Proceedings; 2020. p. 7–9. https://doi.org/10.1364/acpc.2020.m4a.96.

    Chapter  Google Scholar 

  163. Liu R, et al. Coherent beam combination far-field measuring method based on amplitude modulation and deep learning. Chin Opt Lett. 2020;18(4):041402. https://doi.org/10.3788/col202018.041402.

    Article  Google Scholar 

  164. Wang D, et al. Stabilization of the 81-channel coherent beam combination using machine learning. Opt Express. 2021;29(4):5694. https://doi.org/10.1364/oe.414985.

    Article  Google Scholar 

  165. Abadi M, et al. TensorFlow: A system for large-scale machine learning. In: Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2016; 2016. p. 265–83.

    Google Scholar 

  166. Imambi S, Prakash KB, Kanagachidambaresan GR. PyTorch. 2021:87–104. https://doi.org/10.1007/978-3-030-57077-4_10.

  167. Li K, Malik J. Learning to optimize. In: 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings; 2017.

    Google Scholar 

  168. Andrychowicz M, et al. Learning to learn by gradient descent by gradient descent. In: Advances in Neural Information Processing Systems(Nips); 2016. p. 3988–96.

    Google Scholar 


Acknowledgments

This work is supported by Projects for National Excellent Young Talents and Hunan Provincial Innovation Construct Project (No. 2019RS3017).

Funding

Natural Science Foundation of Hunan province, China (Grant No. 2019JJ10005).

Author information


Contributions

Methodology, Pu Zhou; writing—original draft preparation, Min Jiang; writing—review and editing, Hanshuo Wu, Yi An, Tianyue Hou, Qi Chang, Liangjin Huang, Jun Li, Rongtao Su, and Pu Zhou; supervision, Pu Zhou. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Rongtao Su or Pu Zhou.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing financial interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Jiang, M., Wu, H., An, Y. et al. Fiber laser development enabled by machine learning: review and prospect. PhotoniX 3, 16 (2022). https://doi.org/10.1186/s43074-022-00055-3


Keywords