Adaptive optical microscopy via virtual-imaging-assisted wavefront sensing for high-resolution tissue imaging

Abstract

Adaptive optics (AO) is a powerful tool for optical microscopy to counteract the effects of optical aberrations and improve the imaging performance in biological tissues. The diversity of sample characteristics entails the use of different AO schemes to measure the underlying aberrations. Here, we present an indirect wavefront sensing method leveraging a virtual imaging scheme and a structural-similarity-based shift measurement algorithm to enable aberration measurement using intrinsic structures even with temporally varying signals. We achieved high-resolution two-photon imaging in a variety of biological samples, including fixed biological tissues and living animals, after aberration correction. We present AO-incorporated subtractive imaging to show that our method can be readily integrated with resolution enhancement techniques to obtain higher resolution in biological tissues. The robustness of our method to signal variation is demonstrated by both simulations and aberration measurement on neurons exhibiting spontaneous activity in a living larval zebrafish.

Introduction

Adaptive optics (AO) is widely employed in optical microscopy to achieve ideal spatial resolution in biological tissues [1, 2]. Correction of sample-induced aberrations by AO restores not only the resolution by recovering the aberrated wavefront but also the signal intensity by focusing the light more efficiently. Measuring the aberrated wavefront is a necessary step in AO, and measurement approaches are generally categorized as direct or indirect wavefront sensing [3, 4]. Direct wavefront sensing methods use a dedicated wavefront sensor, such as the Shack-Hartmann wavefront sensor, to measure the distorted wavefront originating from a guide star inside biological tissues [5,6,7], and these methods are usually applied to transparent or weakly scattering samples. In contrast, indirect wavefront sensing methods determine the aberrated wavefront from a sequence of images or the variation in the signal intensity instead of a wavefront sensor and thus have a simpler hardware implementation and are preferable for scattering samples.

Among indirect methods, modal wavefront sensing methods use a quality metric, such as brightness, of the recorded image as a clue to retrieve the aberrated wavefront. By intentionally distorting the wavefront with a series of orthogonal aberration modes, such as Zernike polynomials, the amplitudes of these aberration modes of the aberrated wavefront can be calculated according to the variation in the resulting image metric [8,9,10]. In addition to modal wavefront sensing, the aberrated wavefront can also be obtained in a zonal manner, in which the entire aberrated wavefront is divided into segments based on pupil segmentation and reconstructed from the gradient of each constituent wavefront segment [11]. A wavefront segment of the aberrated wavefront at the back pupil of a microscope objective corresponds to a focusing beamlet that is deflected from the ideal focus of the objective, and the gradient of this wavefront segment can be calculated from the lateral displacement between the beamlet focus and the ideal focus. By acquiring an image under full-pupil illumination as a reference image, the lateral displacement of the beamlet focus can be reflected as the shift in the image acquired with that beamlet (shifted image) relative to the reference image [11]. Therefore, the gradient of a wavefront segment can be calculated from the shift between the shifted image and the reference image. However, for a wide variety of samples, since the beamlet focus is substantially elongated due to the greatly reduced effective numerical aperture (NA), the shifted image captures sample structures beyond the excitation volume of full-pupil illumination. As these additional structures dominate the shifted image, deriving the underlying shift between the shifted image and the selected reference is very difficult. To enable aberration measurement via image shift measurement in these samples, one can introduce fluorescent beads into the biological samples; the aberrated wavefront can then be measured using the sparsely distributed exogenous beads rather than the intrinsic biological structures [12, 13]. Instead of image shift measurement, the gradient of each wavefront segment can also be obtained by applying a set of gradients to this wavefront segment and retrieving the gradient that leads to the maximum fluorescence intensity [14]. This approach is not limited by the sample structure, and the gradient measurement for different wavefront segments can be parallelized by using a high-speed wavefront modulator, such as a deformable mirror (DM), to implement a frequency multiplexing scheme [15]. However, since the gradient of a wavefront segment is determined from the maximum signal intensity during wavefront modulations, the signal must remain constant apart from the intended modulations.

Here, we report an indirect wavefront sensing method that leverages a virtual imaging scheme to measure aberrations from the image shifts of intrinsic structures. To derive the lateral displacement of each beamlet focus from the resulting shifted image, the virtual imaging scheme computationally constructs an ideal beamlet focus, which corresponds to the same wavefront segment as the actual beamlet and is free from aberrations, based on the focusing model of the objective. The sample structures are virtually imaged via computations using the constructed ideal focus according to the imaging model. Since the ideal beamlet has the same effective NA as the actual beamlet, most of the content in the shifted image corresponds to a shifted counterpart in the virtually imaged reference image, which is crucial for measurement of the underlying shift. The underlying shift is then measured with a structural-similarity-based shift measurement algorithm and used to derive the corrective wavefront. We implement this method in a two-photon fluorescence microscope with a commonly used liquid crystal spatial light modulator (SLM) as a wavefront shaper and demonstrate the effectiveness of our method in a variety of samples, from dense fluorescent beads to fixed biological tissues and living animals. Since SLMs are also widely used in resolution enhancement techniques [16,17,18,19,20,21], our method can be directly integrated with these techniques to obtain higher resolution in biological tissues without any hardware modifications, and we present AO-incorporated subtractive imaging as an example. The robustness of our method to signal variation is further demonstrated by simulations and aberration measurement on neurons exhibiting spontaneous activity in a living larval zebrafish.

Results and discussion

Principle of wavefront measurement via virtual imaging

To measure the aberrated wavefront induced by biological tissues, the wavefront at the back pupil of the microscope objective is split into an array of segments (Methods). The gradient of each wavefront segment is measured first to obtain the entire aberrated wavefront. As shown in Fig. 1a, according to the focusing model of the objective, if a wavefront segment is tilted with gradient \(\overset{\rightharpoonup }{g}\), then the focus of the corresponding beamlet will be laterally displaced relative to the ideal focus of the objective along vector \(\overrightarrow{d}\), which can be calculated as \(\overrightarrow{d}= f\lambda \overset{\rightharpoonup }{g}/\left(2\pi \right)\), where f is the focal length of the objective and λ is the wavelength. In point-scanning-based microscopy, where the image is formed by scanning the focus across the sample, the lateral displacement of the focus results in the same displacement of the field of view (FOV), which is consequently presented as a shift of the resulting image. Therefore, by acquiring an image with the beamlet corresponding to a wavefront segment (shifted image), the gradient of this wavefront segment can be estimated from the shift \(\overset{\rightharpoonup }{s}\) between the shifted image and a reference image acquired with the corresponding ideal beamlet, which is free from aberrations, as \(\overset{\rightharpoonup }{g}=c\overset{\rightharpoonup }{s}\), where c = 2πl/(fλ) and l is the pixel size of the image.
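To make this conversion concrete, the snippet below turns a measured image shift in pixels into a wavefront-segment gradient via g = 2πls/(fλ). This is a minimal Python/NumPy sketch (the paper's own pipeline is implemented in MATLAB); the focal length, pixel size and shift values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def shift_to_gradient(shift_px, pixel_size, focal_length, wavelength):
    """Convert a measured image shift (in pixels) into the gradient of the
    corresponding wavefront segment via g = 2*pi*l*s / (f*lambda)."""
    s = np.asarray(shift_px, dtype=float)
    return 2.0 * np.pi * pixel_size * s / (focal_length * wavelength)

# Illustrative (assumed) values: 0.5 um image pixels, f = 7.2 mm (a 25x objective
# with a 180 mm tube lens), 800 nm excitation, and a measured shift of (3, -2) pixels.
g = shift_to_gradient((3, -2), pixel_size=0.5e-6,
                      focal_length=7.2e-3, wavelength=800e-9)
# The corresponding focal displacement is d = f * wavelength * g / (2 * pi),
# i.e. simply pixel_size * shift, here (1.5, -1.0) um.
```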

Fig. 1

Schematic of virtual-imaging-assisted wavefront measurement. a Principle of image-based wavefront gradient measurement. b Measuring the gradient of a wavefront segment in biological tissues via virtual imaging. (I) Three-dimensional (3D) biological structures. (II) Shifted image acquired under zonal illumination. (III) Image stack acquired under full-pupil illumination, which is also the sample for virtual imaging. (IV) Reference image constructed by virtual imaging. (V) Image shift measurement between the test image and the reference image. c Corrective wavefront determination

A schematic of measuring the gradient of a representative wavefront segment in biological tissues is depicted in Fig. 1b. A plane with the feature of interest, such as a neuron, is chosen as the central imaging plane for aberration measurement (e.g., the z = 0 plane in Fig. 1b-I). By illuminating a small zone at the back pupil of the objective corresponding to the wavefront segment and laterally scanning the resulting beamlet, which is deflected from the ideal focus by the sample-induced aberration, across the sample, the structures excited at the beamlet focus are recorded in the shifted image (Fig. 1b-II). Ideally, the gradient of the wavefront segment could be derived from the shift between the shifted image and a reference image acquired with the corresponding ideal beamlet. As the ideal beamlet is not available in practice, the conventional approach [11] acquires an image under full-pupil illumination as the reference image. Based on this reference image, the underlying shift can be measured from the shifted image when the sample structures are sparsely distributed (Fig. S1a). However, since the beamlet focus is greatly elongated due to the low effective NA, the shifted image can record structures beyond the excitation volume of full-pupil illumination. For a variety of samples, these additional structures are dominant in the shifted image, so the underlying shift cannot be derived from the selected reference image acquired under full-pupil illumination (Fig. S1b).

Therefore, a reference image containing structures similar to those in the shifted image is essential for accurate shift measurement. To this end, we leverage a computational scheme, termed virtual imaging (Methods), to construct the reference image. Prior to virtual imaging, the sample structures surrounding the central imaging plane are imaged under full-pupil illumination and recorded as a three-dimensional stack of images (Fig. 1b-III). Next, the ideal beamlet focus, which corresponds to the same wavefront segment but without aberration, is calculated based on the focusing model of the objective and used to virtually image the recorded structures through computations. The computed image is then taken as the reference image for shift measurement (Fig. 1b-IV). Because the effective NA of the ideal beamlet is the same as that of the actual beamlet, most of the content in the shifted image has a shifted counterpart in the constructed reference image, which provides the basis for accurate shift measurement. After the reference image construction, we use a structural-similarity-based shift measurement algorithm (Methods) to obtain the shift between the shifted image and the reference image (Fig. 1b-V). This algorithm places the shifted image at every position within the reference image and measures the structural similarity between the shifted image and the overlapped patch in the reference image each time. The measured results can be viewed as a structural similarity map, from which the underlying shift can be estimated as the position with the largest similarity. The gradient of the wavefront segment can thus be obtained from the calculated shift.

With the gradient of each wavefront segment of the aberrated wavefront measured, we can obtain the corrective wavefront by applying wavefront reconstruction algorithms (Methods) used for the Shack-Hartmann wavefront sensor [22, 23]. As shown in Fig. 1c, two corrective wavefronts are reconstructed by zonal and modal reconstruction algorithms. To identify the better correction, each of these corrective wavefronts is applied in turn for aberration correction. The corrective wavefront that gives the higher signal intensity under the same input power for the objective is then determined as the final corrective wavefront.

Wavefront measurement with dense fluorescent beads

We implemented our method in a homebuilt two-photon microscope (Methods), employing an SLM to achieve zonal illumination and wavefront correction. To verify the ability of the proposed method to measure an aberrated wavefront with densely distributed structures, we performed AO correction for a dense aggregate of 2 μm fluorescent beads embedded in agarose with an applied artificial aberration. To simulate the aberrations induced by biological tissues, we generated the artificial aberrations from the experimentally measured aberrations in biological tissues by adding perturbations to the Zernike coefficients.

We first captured a sequence of shifted images for different wavefront segments under zonal illumination (Fig. 2a). Compared with the image acquired under full-pupil illumination (indicated by the cyan square in Fig. 2a), the shifted images are dominated by additional structures beyond the excitation volume of full-pupil illumination. To derive the underlying shifts of these images, we acquired an image volume of the beads surrounding the original imaging plane (Fig. 2b) and constructed reference images based on the acquired volume through the virtual imaging scheme. For each shifted and reference image pair, the corresponding similarity map was measured, and the underlying shift was estimated from the position with the largest similarity (Fig. 2c). As shown in Fig. 2d, the image segment that corresponds to the estimated shift of the reference image matches the shifted image, indicating that the underlying shift was successfully measured. Before AO correction, the beads in the image were dim and elongated in the axial direction (Fig. 2e (No AO)). When the AO correction was performed with the conventional method, which uses the image acquired under full-pupil illumination as the reference, the correction led to a deteriorated imaging performance, as evidenced by severe image distortion and a decrease in the peak signal intensity to roughly one-third of its uncorrected value (Fig. 2e (Conv)). In contrast, after applying our method (the corrective wavefront is shown in Fig. 2f), both the signal intensity and shape of the beads were almost restored to the original state when no artificial aberration was applied (Fig. 2e (AO, Init)). According to the intensity profiles shown in Fig. 2g, AO correction using our method reduced the full width at half maximum of the bead from 4.2 μm to 2.6 μm and increased the signal intensity by nearly 3-fold. The imaging performance after AO correction approaches that when no artificial aberration was applied, which verifies that our method can be applied to densely distributed structures.

Fig. 2

Virtual-imaging-assisted AO enables aberration correction for dense 2 μm fluorescent beads. a Shifted images acquired under zonal illumination and image acquired under full-pupil illumination. The entire wavefront is split into an array of segments (red dashed squares), and the acquired shifted images are arranged by the corresponding wavefront segment. The image marked by the cyan square was acquired under full-pupil illumination with the same FOV as the shifted images, and it was used as the reference image in the conventional method. b Maximum intensity projection of the captured image volume, which serves as the sample in virtual imaging. The yellow dashed box in the lateral view marks the imaging FOV during zonal illumination. The yellow dashed line in the axial view indicates the objective focal plane during the aberration measurement process. c Maps of structural similarity with respect to the shift. In each map, the shift with the largest similarity is marked by the red dot. d Shifted image, reference image and similarity map for a representative wavefront segment marked by the yellow asterisks in (a) and (c). The red box indicates the patch of the reference image exhibiting the largest similarity with the shifted image. e Maximum intensity projection of the image volume without AO correction (No AO), with AO correction using the conventional method (Conv) and our method (AO), and under the initial condition where no artificial aberration is applied (Init). The insets in the images without AO correction and with AO correction using the conventional method are digitally enhanced 3-fold and 8-fold, respectively, to increase the visibility. f Corrective wavefront. The dashed squares indicate the layout of the wavefront segments, where each wavefront segment corresponds to a unit square. An independent mask approach was used in the measurement process. g Signal profiles along the cyan, brown, magenta and orange lines in e. Scale bars, 5 μm

Aberration correction in biological tissues

To test the effectiveness of our method in biological tissues, we imaged neuronal structures expressing the red fluorescent protein mRuby3 in a fixed mouse brain tissue section. We measured the aberration with the neuronal structures (Fig. 3a, b) axially centered 250 μm below the tissue surface and observed substantial improvements in both image brightness and contrast after AO correction (Fig. 3c). As shown in Fig. 3d, AO correction increased the fluorescence intensity of a soma (the pink line in Fig. 3c) and a dendrite (the cyan line in Fig. 3c) by approximately 2.5-fold and 4-fold, respectively, and allowed more neuronal structures to be observed. As the axial resolution is more sensitive to the presence of aberrations, the resolution improvement brought by AO correction can be readily recognized from the axial views in Fig. 3c, e, where more dendrites can be clearly distinguished after AO correction. With the signal gain and the resolution improvement provided by AO correction, dendritic spines that are unresolvable without AO correction (the yellow arrows in Fig. 3f) can also be identified (Fig. 3f).

Fig. 3

AO enables high-resolution imaging in brain tissue sections. a Maximum intensity projection of the neuronal structures with which the aberration was measured. The structures are centered 250 μm below the tissue surface. b Corrective wavefront. The dashed squares indicate the layout of the wavefront segments, where each wavefront segment corresponds to a unit square. An independent mask approach was used in the measurement process. c Maximum intensity projection of the neuronal structures 240–260 μm (XY) and 230–270 μm (XZ) below the tissue surface. The dashed yellow boxes indicate the positions of the structures in (a). d Intensity profiles along the magenta and cyan lines in (c). e, f Zoomed-in views of the blue and brown boxes in (c). The image without AO correction in the brown box is enhanced with a 3x digital gain to increase the visibility. The yellow arrowheads in (f) indicate the spines. Scale bars, 10 μm in (a), 20 μm in (c), 5 μm in (e) and 2 μm in (f)

To explore the capability of our method for living scattering samples, we applied our method to imaging the tumor microenvironment in a living mouse with enhanced green fluorescent protein (EGFP) expressed throughout the entire body, excluding erythrocytes and hair. mCerulean-B16 tumor cells were injected between the fascia and dermis of the rear skin of the EGFP mouse, and the tumor region was imaged through a skin-fold window chamber implanted on the back of the EGFP mouse. Prior to imaging, elastin fibers of the arterial vessels were labeled by Alexa Fluor 633 via tail vein injection so that the tumor cells, host-derived EGFP-expressing cells and arterial vessels could be distinguished by three different fluorescence spectra (Fig. 4a).

Fig. 4

AO improves three-color intravital imaging of a tumor microenvironment. a The tumor microenvironment was imaged through a dorsal skin-fold window chamber. The tumor cells, host cells and arterial vessels were labeled with mCerulean, EGFP and Alexa Fluor 633, respectively. b Maximum intensity projection of the microvessels centered 92 μm below the tissue surface, with which the aberration was measured. c Maximum intensity projection (XY) of the image volume 76–81 μm below the tissue surface, and image of the axial section (XZ) indicated by the red arrowheads without and with AO correction. The yellow dashed boxes indicate the positions of the microvessels used for aberration measurement. d Intensity profiles along the pink, orange and cyan lines in (c). e Zoomed-in views of the white boxes in (c). The images are enhanced with a 2x digital gain to increase the visibility. The yellow arrowhead indicates a microvessel. f Zoomed-in views of the orange boxes in (c). g Corrective wavefront. The dashed squares indicate the layout of the wavefront segments, where each wavefront segment corresponds to a unit square. A 2 × 2 with 1 × 1 stepped overlapping mask approach was used in the measurement process. Scale bars, 5 μm in (b), 10 μm in (c), 4 μm in (e), and 2 μm in (f)

We measured the aberration with the microvessels axially centered 92 μm below the tissue surface (Fig. 4b). As shown in Fig. 4c, enhanced fluorescence signals were observed after AO correction in all three channels, where the signal intensities of a host cell (pink line), a tumor cell (orange line) and a microvessel (cyan line) were enhanced approximately 1.7-fold, 2-fold and 3-fold, respectively (Fig. 4d). The signal enhancement would be helpful for better distinguishing cell types and quantifying the number of cells. With AO correction, the resulting signal gain and resolution improvement allowed more microvessels to be visualized (Fig. 4e) and individual microvessels to be resolved from a blurred cluster (Fig. 4f). Thus, AO correction enables observation of the detailed network structure of microvessels, which would benefit the monitoring of the morphological development of the tumor microvessel network.

AO-incorporated subtractive imaging

Resolution enhancement techniques have been proposed to pursue higher resolution in biological tissues, and many of them employ SLMs to implement the desired wavefront modulations [16,17,18,19,20,21]. Since only a single SLM is required in our AO correction method, our method can be readily incorporated with these techniques without extra hardware complexity. Here, as an example, we demonstrate the application of our method with subtractive imaging [21, 24, 25], which enhances the resolution and contrast of the conventional imaging modality by image subtraction. In brief, subtractive imaging employs a conventional focal spot and a donut-shaped focal spot, which can be shaped through wavefront modulation, to probe the sample (Methods). By leveraging the different feature sizes for the focal spots, an image with enhanced resolution and contrast can be constructed via weighted subtraction between images acquired with the different focal spots.

We imaged a Pinus pollen grain (Pine Mature Pollen, Carolina Biological Supply Company), as shown in Fig. 5a. Before applying subtractive imaging, AO correction was first conducted on this sample with the intrinsic structures shown in Fig. 5b. With the resolution improvement and signal gain brought by AO, the gap between the exine and body of the grain (the yellow arrow in Fig. 5c) can be clearly distinguished, and porous structures inside the grain can be resolved at higher contrast (Fig. 5d). Next, we acquired the subtracted images (Fig. 5e) of the same lateral section as in Fig. 5a through subtractive imaging with and without AO correction (the wavefront for focus shaping was superimposed with the corrective wavefront in Fig. 5f when AO was applied). To differentiate the features revealed by different image contrasts, the images in Fig. 5e, g, and h are normalized to their respective peak intensities. As shown in Fig. 5g, more pores can be clearly identified in the body of the grain in the subtracted image with AO correction. Although subtractive imaging is proposed to boost the resolving ability of conventional imaging, the quality of the subtracted image can be even worse in the presence of aberrations. As shown in Fig. 5h, without AO correction, more artifacts can be found in the subtracted image than in the conventional image, which makes the blurred boundary indistinguishable from the artifact-corrupted background in the subtracted image. In contrast, while the boundary can be clearly discerned in the conventional image with AO correction, the exine separated from the body of the grain (the cyan arrow in Fig. 5h) can be further resolved at the boundary by subtractive imaging. The results demonstrate the effectiveness of our method for subtractive imaging and suggest that AO correction is necessary to achieve the desired resolution and contrast enhancement.

Fig. 5

AO-incorporated subtractive imaging in pine mature pollen. a Lateral and axial sections of a Pinus pollen grain without and with AO correction. The red and black arrowheads indicate the positions of the axial section and the lateral section with respect to each other. The yellow dashed boxes indicate the positions of the structures used for aberration measurement. b Maximum intensity projection of the structures with which the aberration was measured. c Zoomed-in views of the gray boxes in (a). The yellow arrow indicates the gap between the exine and body of the pollen. d Intensity profiles along the green and blue lines in (a). e Subtracted images of the same lateral section as in (a) without and with AO correction. f Corrective wavefront. The dashed squares indicate the layout of the wavefront segments, where each wavefront segment corresponds to a unit square. An independent mask approach was used in the measurement process. g, h Zoomed-in views of the blue and white boxes in (a) and (e). The cyan arrow indicates the exine of the pollen. Images in (e), (g) and (h) are normalized to their respective peak intensities to differentiate the details revealed by the local image contrast. Scale bars, 10 μm in (a), (b), and (e) and 2 μm in (c), (g), and (h)

Aberration correction for structures with temporally varying signals

Instead of retrieving aberrations from the variation in the signal intensity, which is susceptible to signal fluctuations, our approach relies on structure comparison and can be applied to samples with temporally varying signals. To demonstrate this capability, we numerically simulated the AO correction for a sample that consists of two axially separated hollow spheres with identical brightness (Fig. 6a) and applied an artificial aberration that was generated in the same way as described in the beads experiment. According to Fig. 6b (No AO), the image of the hollow spheres was severely distorted by the aberration, and neither of the spheres could be discerned. When the aberration was measured with constant signals, isolated spheres were clearly identified and the peak intensity was increased 8-fold after AO correction (Fig. 6b (AO CS)). To simulate signal fluctuations, we multiplied the contributions from different spheres in each acquired image frame by respective factors, which were randomly generated from 0.5 to 2, during each aberration measurement trial. In a representative aberration measurement trial with fluctuating signals, for which the modulation sequences (the multiplicative factors applied to each image to produce the signal fluctuations) are shown in Fig. 6c, we observed an improvement of the image quality similar to that obtained with constant signals, while the peak intensity was increased 7.8-fold (Fig. 6b (AO FS)). As shown in Fig. 6d, the features of the hollow spheres recovered in both cases closely matched. In ten trials of AO correction with fluctuating signals, the peak intensity of the image was increased 7.7-fold on average, which, along with the above results, suggests the robustness of our method to signal fluctuations.

Fig. 6

Aberration correction for a hollow sphere sample with fluctuating signals. a Axial section (XZ) of the hollow sphere sample. The sample consists of two hollow spheres (outer diameter 6 μm, inner diameter 5 μm) whose centers are separated by 12 μm in the axial direction (Z). b Axial section of the image before AO correction (No AO), after AO correction with constant signals (AO CS), and after AO correction with fluctuating signals (AO FS) in a representative trial. The image without AO correction is digitally enhanced 8-fold to increase the visibility. The insets show the applied aberrated wavefront (No AO) and the measured aberrated wavefront (AO). c Modulation sequences to produce the fluctuating signals during aberration measurement in a representative trial. d Intensity profiles along the solid and dashed lines in (b). Scale bars, 4 μm

We then applied our method to imaging neurons in the hypothalamus of a living larval zebrafish from the transgenic line Ki(th:Gal4-VP16);Tg(UAS:GCaMP6s) with temporally varying fluorescence signals resulting from spontaneous neuronal activity in the brain. As shown in Fig. 7a, the neuronal structures at imaging depths from 262 to 267 μm below the skin were obscured by the aberration. With the neuronal structures axially centered 274 μm below the skin (Fig. 7b), we measured the sample-induced aberration, in which we could identify astigmatism probably stemming from the cylinder-like surface of the zebrafish larva (Fig. 7c). Correction of the sample-induced aberration led to images with higher resolution and sharper fluorescence signals (Fig. 7d, e) and allowed finer neuronal processes to be better resolved. Next, we recorded the spontaneous activity of the neurons at an imaging depth of 279 μm below the skin. To allow a fair comparison, image frames without and with AO correction were acquired in an interleaved manner. The maximum intensity projections over time of the neuronal structures during calcium imaging without and with AO correction are shown in Fig. 7f. Although bright neurons and fibers could be observed both without and with AO correction, faint structures were only detectable with AO correction. We further monitored the fluorescence signal variation for a few faint features (marked by cyan arrows in Fig. 7f) during the imaging session and observed more calcium transients after AO correction (Fig. 7g), which suggests the benefit of AO correction for detecting the calcium activity of faint targets.

Fig. 7

AO improves in vivo structural and functional imaging in the hypothalamus of a zebrafish larva. a Maximum intensity projection of the neuronal structures 262–267 μm below the skin without and with AO correction. b Maximum intensity projection of the neuronal structures axially centered 274 μm below the skin, which were used for aberration measurement. c Corrective wavefront. The dashed squares indicate the layout of the wavefront segments, where each wavefront segment corresponds to a unit square. A 2 × 2 with 1 × 1 stepped overlapping mask approach was used in the measurement process. d Zoomed-in views of the orange boxes. e Intensity profiles along the pink lines in (d). f Maximum intensity projection over time of a plane 279 μm below the skin without and with AO correction. g Fluorescence signals during calcium imaging at four regions of interest (ROIs) without and with AO correction. The ROIs are marked by cyan arrows in (f). Scale bars, 10 μm in (a), (b), and (f) and 5 μm in (d)

Conclusions

In summary, we have developed an indirect wavefront sensing method, which uses a virtual imaging scheme and a structural-similarity-based shift measurement to derive aberrations from microscopy images. The presented method can be viewed as a variant of pupil-segmentation-based methods, which have been employed to achieve high-resolution tissue imaging in numerous applications [11,12,13, 15, 26, 27]. For the conventional approach with single-segment illumination [11], the intrinsic structures of biological samples, which are usually not sparsely distributed, present challenges for aberration measurement. In these scenarios, reference beads must be introduced into the samples to assist aberration measurement [12, 13], which may not be desirable since this not only complicates the preparation process but also perturbs the native state of the biological samples. Leveraging a virtual imaging scheme, our method is capable of measuring aberrations with more diverse intrinsic structures, and its effectiveness has been demonstrated in both fixed biological tissues and living animals. The full-pupil illumination approach is not limited by the sample structure and can be efficiently implemented via a frequency multiplexing strategy using a high-speed DM [15, 26]. However, this approach requires the fluorescence signal to be stable during wavefront gradient measurement. Employing a structural-similarity-based shift measurement method, our method can be applied to structures exhibiting fluctuating signals, and the robustness of our method to signal variation has been tested by simulations and aberration measurements on spontaneously firing neurons. Since our method only requires a commonly used SLM, which is also widely adopted in resolution enhancement techniques, as a wavefront shaper, it can be directly incorporated with these techniques to image biological tissues with higher resolution and contrast, as shown in AO-incorporated subtractive imaging. Therefore, our method is a powerful complement to existing pupil-segmentation-based methods and could benefit studies in various fields, such as neurobiology and oncology, by enabling high-resolution imaging within thick tissues.

Methods

System configuration

The homebuilt two-photon microscope is depicted in Fig. S2. The two-photon excitation source is a tunable mode-locked titanium sapphire femtosecond laser (Mai Tai DeepSee, Spectra-Physics), with its power controlled by a half-wave plate mounted on a motorized rotation stage (PRM1Z8, Thorlabs) and a polarizing beam splitter cube. The laser (920 nm for zebrafish imaging and 800 nm for the remaining experiments) is expanded to overfill the display panel of a reflective phase-only liquid crystal SLM (X13138–07, Hamamatsu). A second half-wave plate orientates the polarization of the laser for effective wavefront modulation. A pair of lenses (AC254–250-B and AC254–100-B, Thorlabs) relays the modulated light to a scanning unit. The scanning unit is composed of a pair of galvanometers (6-mm aperture, 6215H, Cambridge Technology) conjugated by a pair of lenses (AC508–080-B and AC508–080-B, Thorlabs). A field stop is placed at the intermediate image plane between the SLM and the scanning unit to block undesired diffraction orders. The scanned beam is further relayed by a scan lens (SL50-2P2, Thorlabs) and a tube lens (TTL200MP, Thorlabs) to the back focal plane of a microscope objective (XLPLN25XWMP2, NA 1.05, 25 ×, Olympus). The SLM, galvanometers and back focal plane of the microscope objective are mutually conjugated by the three pairs of lenses. A quarter-wave plate is used to change the polarization of the incident laser into circular polarization. The microscope objective is mounted on an objective scanner (ND72Z2LAQ, Physik Instrumente) for three-dimensional imaging. Along the detection path, two-photon excited fluorescence is separated from the excitation laser and split into two spectral components by dichroic mirrors (FF665-Di02 and FF560-Di01, Semrock) and collected by photomultiplier tubes (H7422–40, Hamamatsu). Bandpass filters (FF02–613/73, FF02–525/40 and FF02–447/60, Semrock, for red, green and blue fluorescence, respectively) are used to purify the collected spectral components. A custom LabVIEW program is used for hardware control and image acquisition. By applying a certain linear phase ramp on the selected subregion of the SLM display, only the light coming from this subregion can pass through the field stop so that the corresponding zonal illumination can be achieved. The computational tasks are implemented with MATLAB scripts and executed on a Dell Precision T7920 workstation (processor: Intel Xeon Gold 5118, graphics card: Nvidia GeForce RTX2080 Ti). The system aberration is corrected before imaging experiments, and images labeled ‘No AO’ are acquired with system correction.

Wavefront segmentation

We use mask approaches described in [11] to segment the back pupil of the objective and correspondingly the wavefront therein. The pupil is divided into N square subregions (the peripheral nonsquare subregions are not used in our experiments). To measure the gradient of each wavefront segment, each corresponding subregion or mask can be illuminated one at a time, which corresponds to the independent mask approach. Since a single subregion of the pupil may not deliver sufficient signal for aberration measurement, a mask corresponding to contiguous subregions can be simultaneously illuminated, where the gradient being measured can be seen as the average of the gradients of the corresponding constituent wavefront segments. In this case, overlapped masks are used for aberration measurement, which corresponds to the overlapping mask approach. In two of our experiments, we used a 2 × 2 with 1 × 1 stepped overlapping mask approach, in which each mask consists of 2 × 2 subregions and is displaced from its neighboring masks by 1 subregion both horizontally and vertically.
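To illustrate how the two mask layouts relate, the following sketch enumerates masks on a square subregion grid; the grid size used here is hypothetical and the peripheral nonsquare subregions are simply ignored, as in the experiments.

```python
import numpy as np

def pupil_masks(n_side=6, block=2, step=1):
    """Sketch of the mask layouts: the pupil is divided into n_side x n_side
    square subregions (a hypothetical grid size).  block=1, step=1 reproduces
    the independent mask approach; block=2, step=1 gives the '2 x 2 with
    1 x 1 stepped' overlapping masks.  Returns boolean (n_side, n_side) arrays."""
    masks = []
    for top in range(0, n_side - block + 1, step):
        for left in range(0, n_side - block + 1, step):
            m = np.zeros((n_side, n_side), dtype=bool)
            m[top:top + block, left:left + block] = True
            masks.append(m)
    return masks

# A 6 x 6 grid yields 36 independent masks or 25 overlapping 2 x 2 masks.
```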

Virtual imaging

According to the layout of the wavefront segments, the point spread functions (PSFs) required for virtual imaging are obtained prior to imaging experiments. For an input vectorial field, \({\overset{\rightharpoonup }{E}}_i\), at the back pupil plane of the objective, the focus field, \({\overset{\rightharpoonup }{E}}_o\), can be obtained by vectorial focus field calculations [28]. For zonal illumination, the amplitude of \({\overset{\rightharpoonup }{E}}_i\) is set as 0 beyond the corresponding illuminating aperture. The PSF of two-photon imaging can then be calculated as \({\left\Vert {\overset{\rightharpoonup }{E}}_o\right\Vert}^4\).
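As an illustration of this step, the sketch below computes an approximate two-photon PSF for a zonally illuminated pupil using a scalar Debye/Fourier approximation. The paper itself uses full vectorial focus-field calculations [28], so this is only an illustrative stand-in, and the objective and wavelength parameters below are assumptions.

```python
import numpy as np

def two_photon_psf(zonal_mask, na=1.05, wavelength=0.8, n_med=1.33,
                   z_planes=(0.0,), pad=4):
    """Scalar Debye/Fourier approximation of the two-photon PSF for a zonally
    illuminated pupil (units: micrometres).  `zonal_mask` is a boolean array
    over the normalized pupil selecting the illuminated subregion.  Returns a
    (len(z_planes), N, N) intensity stack; the focal-plane pixel spacing is
    roughly wavelength / (2 * na * pad)."""
    n = zonal_mask.shape[0]
    u = np.linspace(-1.0, 1.0, n)                 # normalized pupil coordinates
    uu, vv = np.meshgrid(u, u)
    rho = np.hypot(uu, vv)
    aperture = (rho <= 1.0) & zonal_mask          # illuminated sub-aperture
    sin_t = np.clip(na * rho / n_med, 0.0, 1.0)   # ray angle at each pupil point
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    k = 2.0 * np.pi * n_med / wavelength
    big = n * pad
    stack = []
    for z in z_planes:
        pupil_field = aperture * np.exp(1j * k * z * cos_t)   # defocus phase
        padded = np.zeros((big, big), dtype=complex)
        padded[:n, :n] = pupil_field
        e_focal = np.fft.fftshift(np.fft.fft2(padded))        # focal-plane field
        stack.append(np.abs(e_focal) ** 4)                    # two-photon PSF ~ |E|^4
    psf = np.array(stack)
    return psf / psf.max()
```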

The volume of the sample structures to be acquired for virtual imaging can be estimated from the excitation volume of each ideal beamlet focus, which we approximate as the volume with at least \(1/{e}^2\) of the maximum intensity of the corresponding calculated PSF. Assuming that the effective NA under zonal illumination is \({\mathrm{NA}}_e\), the height of the sample volume can be estimated as \(H=1.064\lambda /\left(n-\sqrt{n^2-{\mathrm{NA}}_e^2}\right)\) [29], where λ is the wavelength and n is the refractive index of the immersion medium of the objective. Then, the width of the image volume can be estimated as \(L=\tan \left(\theta \right)H+l\), where θ is the maximum convergence angle and l is the width of the FOV of shifted images, assuming that the FOV is a square. In practice, we acquire a three-dimensional stack of images of the sample structures within the volume axially centered at the central imaging plane, with a width of L′ and a height of H′, where l < L′ ≤ L and 0 ≤ H′ ≤ H. L′ and H′ are adjusted according to the structure distribution. For example, when the structures are sparsely distributed, H′ can be set to 0, which means that only a single image is acquired to construct the reference image and can be seen as the reference image choice in the conventional approach [11]. As the structure distribution becomes denser, larger L′ and H′ are required. The imaging parameters used in our experiments are listed in Table S1.
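Plugging representative numbers into these expressions gives a feel for the required stack size. The effective NA, the FOV width and the use of the full objective NA for θ below are assumptions for illustration, not values reported in the paper.

```python
import numpy as np

# Worked estimate of the stack size with illustrative (assumed) numbers:
wavelength = 0.8     # um, excitation wavelength
n_med = 1.33         # refractive index of the water immersion medium
na_eff = 0.30        # assumed effective NA of a single zonal beamlet
na_full = 1.05       # full objective NA
l_fov = 40.0         # um, assumed width l of the shifted-image FOV

H = 1.064 * wavelength / (n_med - np.sqrt(n_med**2 - na_eff**2))  # axial extent, ~25 um
theta = np.arcsin(na_full / n_med)   # maximum convergence angle (taken from the full NA, an assumption)
L = np.tan(theta) * H + l_fov        # lateral width of the stack, ~72 um
print(f"H = {H:.1f} um, L = {L:.1f} um")
```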

With the acquired image stack as the sample and the calculated PSFs, each reference image can then be constructed from the convolution between the sample and the corresponding PSF. If the sampling intervals of the reference image and the shifted image are different, then linear interpolation is applied to the reference image to match the sampling interval of the shifted image.
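A minimal sketch of this virtual-imaging step is given below: for a laterally scanned focus, the recorded image is the depth-wise sum of each sample plane convolved with the PSF slice at that defocus, so the reference image can be assembled as follows (matching plane spacing between the stack and the PSF is assumed).

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_reference_image(stack, psf_stack):
    """Construct the reference image by virtual imaging.
    stack:     (Nz, Ny, Nx) image volume acquired under full-pupil illumination.
    psf_stack: (Nz, Ky, Kx) two-photon PSF of the ideal beamlet, sampled on the
               same z planes as `stack`.
    The reference image is the sum over depth of each sample plane convolved
    with the PSF slice at that defocus."""
    ref = np.zeros(stack.shape[1:], dtype=float)
    for sample_plane, psf_plane in zip(stack, psf_stack):
        ref += fftconvolve(sample_plane, psf_plane, mode="same")
    return ref
```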

Image shift measurement

Given a reference image F and a shifted image G, the goal is to find the shift vector \(\overset{\rightharpoonup }{s}\) for G such that the similarity between the underlying structures in G and their counterparts in F is the largest with respect to \(\overset{\rightharpoonup }{s}\). Considering that the intensity of the same underlying structures could change due to signal variation and deviation of the calculated PSFs from the real ones, the similarity measure should be robust to intensity variation. Inspired by the structural similarity index (SSIM), which is widely used for image quality assessment [30], we adopt the structure comparison term of the SSIM, which is robust to intensity variation, to measure the structural similarity for shift measurement. Since the intensity variation could be spatially variant, local statistics of images are used to calculate the structural similarity, and the structural similarity between image a and image b, both consisting of n × n pixels, is thus given as

$$Sim\left(a,b\right)=\frac{1}{n^2}\ {\sum}_{\overset{\rightharpoonup }{p}}\frac{\sigma \left({a}_{\overset{\rightharpoonup }{p}},{b}_{\overset{\rightharpoonup }{p}}\right)}{\sqrt{\sigma \left({a}_{\overset{\rightharpoonup }{p}},{a}_{\overset{\rightharpoonup }{p}}\right)\sigma \left({b}_{\overset{\rightharpoonup }{p}},{b}_{\overset{\rightharpoonup }{p}}\right)}}$$

where \({a}_{\overset{\rightharpoonup }{p}}\) and \({b}_{\overset{\rightharpoonup }{p}}\) are the patches, both consisting of 11 × 11 pixels, centered at position \(\overset{\rightharpoonup }{p}\) in a and b, respectively, and

$$\sigma \left(x,y\right)={\sum}_{\overset{\rightharpoonup }{r}}w\left(\overset{\rightharpoonup }{r}\right)\left[x\left(\overset{\rightharpoonup }{r}\right)-{\Sigma}_{\overset{\rightharpoonup }{t}}w\left(\overset{\rightharpoonup }{t}\right)x\left(\overset{\rightharpoonup }{t}\right)\right]\left[y\left(\overset{\rightharpoonup }{r}\right)-{\Sigma}_{\overset{\rightharpoonup }{t}}w\left(\overset{\rightharpoonup }{t}\right)y\left(\overset{\rightharpoonup }{t}\right)\right]$$

where w is an 11 × 11 Gaussian smoothing kernel with a standard deviation of 1.5 pixels that is normalized to unit sum.

The image shift \(\overset{\rightharpoonup }{s}\) is obtained in two steps. In the first step, both F and G are downsampled by a factor of k (k = 8) in both dimensions by averaging, yielding Fk and Gk. By searching all the patches of the same size as Gk in Fk for the maximum similarity, a coarse estimate of \(\overset{\rightharpoonup }{s}\) is obtained as \({\overset{\rightharpoonup }{s}}_k\). In the second step, we generate the corresponding fine estimates \(\left\{\overset{\rightharpoonup }{s}^{\prime}\right\}\) from \({\overset{\rightharpoonup }{s}}_k\) by

$${\overset{\rightharpoonup }{s}}^{\prime }={\overset{\rightharpoonup }{s}}_k+\overset{\rightharpoonup }{e},-k/2\le {e}_x,{e}_y<k/2.$$

For each fine shift \(\overset{\rightharpoonup }{s^{\prime }}\), we obtain the corresponding patch of the same size as G in F, namely, F′, and the similarity is calculated between \({F}_k^{\prime }\), which is obtained similarly to Fk, and Gk. The fine shift that gives the maximum similarity is eventually determined as \(\overset{\rightharpoonup }{s}\).
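The following is a simplified, single-scale sketch of the similarity measure and the shift search. Local statistics are computed here with a Gaussian filter of σ = 1.5 pixels rather than an explicit 11 × 11 window, and the coarse-to-fine (k = 8) acceleration is omitted, so this brute-force version is slower than, but conceptually equivalent to, the procedure described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structural_similarity(a, b, sigma=1.5):
    """Mean structure-comparison term of the SSIM: average over all pixels of
    sigma_ab / sqrt(sigma_aa * sigma_bb), with Gaussian-weighted local statistics
    (an approximation of the explicit 11 x 11 window used in the paper)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    mu_a = gaussian_filter(a, sigma)
    mu_b = gaussian_filter(b, sigma)
    cov_ab = gaussian_filter(a * b, sigma) - mu_a * mu_b
    var_a = gaussian_filter(a * a, sigma) - mu_a ** 2
    var_b = gaussian_filter(b * b, sigma) - mu_b ** 2
    eps = 1e-12   # regularization against flat patches (an assumption)
    return np.mean(cov_ab / np.sqrt(np.clip(var_a, 0, None) * np.clip(var_b, 0, None) + eps))

def find_shift(reference, shifted):
    """Brute-force search for the placement of `shifted` within `reference`
    that maximizes the structural similarity.  Returns the (row, col) offset of
    the best-matching patch and the similarity value; converting this offset to
    a shift relative to the centred position is left to the caller."""
    h, w = shifted.shape
    H, W = reference.shape
    best_val, best_shift = -np.inf, (0, 0)
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            patch = reference[dy:dy + h, dx:dx + w]
            val = structural_similarity(patch, shifted)
            if val > best_val:
                best_val, best_shift = val, (dy, dx)
    return best_shift, best_val
```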

Wavefront reconstruction

Based on the measured gradients of all wavefront segments, a modal reconstruction algorithm and a zonal reconstruction algorithm, which are similar to those described in [22, 23], are each used to reconstruct the corrective wavefront. The reconstructed wavefront that gives a higher signal intensity with the same input laser power for the objective is then chosen as the final corrective wavefront. Next, we present the wavefront reconstruction algorithms. Let N denote the number of segments, Σi denote the region of the i-th segment, \({\overset{\rightharpoonup }{o}}_i=\left({o}_i^x,{o}_i^y\right)\) denote the coordinates of the center of Σi, and \({\overset{\rightharpoonup }{g}}_i=\left({g}_i^x,{g}_i^y\right)\) denote the measured gradient of the i-th segment.

Modal reconstruction algorithm

In modal reconstruction, the wavefront φ is approximated by the sum of a series of Zernike polynomials such that \(\varphi ={\sum}_{i=1}^M{c}_i{Z}_i\), where M is the number of Zernike polynomials, Zi is the i-th Zernike polynomial, and ci is the corresponding coefficient. The wavefront gradient can be written as

$$\left[\begin{array}{c}\frac{\partial \varphi }{\partial x}\\ {}\frac{\partial \varphi }{\partial y}\end{array}\right]=\left[\begin{array}{c}{\sum}_{i=1}^M{c}_i\frac{\partial {Z}_i}{\partial x}\\ {}{\sum}_{i=1}^M{c}_i\frac{\partial {Z}_i}{\partial y}\end{array}\right]$$

Let Ax and Ay denote N × M matrices composed of the average gradients of each Zernike polynomial on each segment such that \({A}_{ij}^x=\frac{\iint_{\overset{\rightharpoonup }{r}\in {\Sigma}_i}\frac{\partial {Z}_j}{\partial x}\left(\overset{\rightharpoonup }{r}\right)d\Sigma}{\mid {\Sigma}_i\mid }\) and \({A}_{ij}^y=\frac{\iint_{\overset{\rightharpoonup }{r}\in {\Sigma}_i}\frac{\partial {Z}_j}{\partial y}\left(\overset{\rightharpoonup }{r}\right)d\Sigma}{\mid {\Sigma}_i\mid }\), where |Σi| is the area of the i-th segment, and let Bx and By denote column vectors composed of the measured gradients such that \({B}_i^x={g}_i^x\) and \({B}_i^y={g}_i^y\). Then, the coefficients {ci} can be estimated as the least-squares solution of the following linear equation:

$$\left[\begin{array}{c}{A}^x\\ {}{A}^y\end{array}\right]X=\left[\begin{array}{c}{B}^x\\ {}{B}^y\end{array}\right]$$
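Given precomputed segment-averaged Zernike gradients, the least-squares step itself is a one-liner; a minimal sketch (assuming Ax and Ay are supplied as N × M arrays as defined above):

```python
import numpy as np

def modal_reconstruct(Ax, Ay, gx, gy):
    """Least-squares Zernike coefficients from per-segment gradients.
    Ax, Ay: (N_segments, M_modes) segment-averaged x/y gradients of each mode
            (precomputed from the mode derivatives and the segment layout).
    gx, gy: (N_segments,) measured gradients.  Returns the M coefficients."""
    A = np.vstack([Ax, Ay])
    B = np.concatenate([gx, gy])
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coeffs
```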

Zonal reconstruction algorithm

In zonal reconstruction, the entire wavefront φ is expressed in a segment-wise manner. Let φi denote the representation of the i-th wavefront segment; then, \({\varphi}_i\left(\overset{\rightharpoonup }{r}\right)={\overset{\rightharpoonup }{g}}_i\bullet \left(\overset{\rightharpoonup }{r}-{\overset{\rightharpoonup }{o}}_i\right)+{p}_i\), where \(\overset{\rightharpoonup }{r}=\left(x,y\right)\) is the normalized pupil coordinates, ‘ ∙ ’ denotes the scalar product operation, and pi is a constant to be determined. According to [22], we have

$${p}_i=\frac{\sum_{j\in E(i)}\left[{p}_j-\frac{1}{2}\left({\overset{\rightharpoonup }{o}}_j-{\overset{\rightharpoonup }{o}}_i\right)\bullet \left({\overset{\rightharpoonup }{g}}_j+{\overset{\rightharpoonup }{g}}_i\right)\right]}{\left|E(i)\right|}$$

where E(i) denotes the set of segments that are adjacent to the i-th segment. Then, {pi} can be obtained via an iterative approach or by solving a matrix equation.
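A sketch of the iterative route, written as a Gauss-Seidel-style relaxation of the update above; the iteration count and the final removal of the arbitrary overall piston are assumptions of this illustration.

```python
import numpy as np

def zonal_pistons(centers, gradients, neighbors, n_iter=200):
    """Relaxation solve for the per-segment pistons p_i.
    centers:   (N, 2) segment-centre coordinates o_i (normalized pupil units).
    gradients: (N, 2) measured gradients g_i.
    neighbors: list of index lists, neighbors[i] = E(i)."""
    o = np.asarray(centers, dtype=float)
    g = np.asarray(gradients, dtype=float)
    p = np.zeros(len(o))
    for _ in range(n_iter):
        for i in range(len(o)):
            acc = 0.0
            for j in neighbors[i]:
                acc += p[j] - 0.5 * np.dot(o[j] - o[i], g[j] + g[i])
            p[i] = acc / len(neighbors[i])
    p -= p.mean()   # remove the arbitrary overall piston
    return p        # the wavefront over segment i is then g_i . (r - o_i) + p_i
```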

When an overlapping mask is used, the wavefront of an overlapping region is set as the average of the covering wavefront segments.

Wavefront correction considerations

The overall aberration measurement time (Tm) consists of two parts: image acquisition (Ta) and aberration calculation (Tc). The image acquisition time Ta(N, M) = T2d(N) + T3d(M), where T2d(N) is the time for acquiring N shifted images, and T3d(M) is the time for acquiring M images in 3D imaging. The aberration calculation time for N wavefront segments is Tc(N), which accounts for the construction of reference images, calculation of image shifts and reconstruction of the corrective wavefront. For typical parameters used in our experiments (N = 37, M = 41), the images were acquired at a frame rate of approximately 2–3 Hz and the computational overhead was less than 0.7 seconds for each wavefront segment, which results in an overall measurement time of less than 70 seconds. The obtained corrective wavefront is then applied for aberration correction in the subsequent imaging experiment. As long as the optical aberration is stable in the biological tissues, the corrective wavefront will remain effective. In our experiments, we found that the corrective wavefronts remained effective over the whole experimental duration.
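As a rough consistency check of these figures (assuming, for illustration only, that both the shifted images and the image stack are acquired at about 2.5 Hz, the midpoint of the quoted range):

$${T}_a\approx \frac{N+M}{2.5\ \mathrm{Hz}}=\frac{37+41}{2.5\ \mathrm{Hz}}\approx 31\ \mathrm{s},\qquad {T}_c\lesssim 37\times 0.7\ \mathrm{s}\approx 26\ \mathrm{s},\qquad {T}_m={T}_a+{T}_c\lesssim 57\ \mathrm{s}<70\ \mathrm{s}$$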

Subtractive imaging

An implementation of subtractive imaging similar to [21] is adopted here. A conventional Gaussian focal spot and a donut-shaped focal spot, which is shaped through a 0–2π vortex phase modulation, are employed to probe the sample. A subtracted image with enhanced resolution and contrast is then constructed by

$${I}_s={I}_c-\gamma {I}_d$$

where Ic denotes the normalized conventional image acquired with the Gaussian focal spot, Id denotes the normalized image acquired with the donut-shaped focal spot, Is denotes the subtracted image, and γ denotes the weighting factor. Negative values resulting from the subtraction operation are set to zero. In our experiment, γ is set to 0.6, and Ic and Id are smoothed by a Gaussian kernel with a standard deviation of 1.5 pixels to suppress noise. When AO is applied, the modulation pattern for the desired focus shaping is superimposed with the wavefront correction pattern.
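A minimal sketch of this subtraction step (the order of normalization and smoothing is an assumption of this illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtracted_image(conventional, donut, gamma=0.6, sigma=1.5):
    """Weighted subtraction Is = Ic - gamma * Id.  Both images are normalized
    to their peaks, lightly smoothed to suppress noise, and negative values
    after subtraction are clipped to zero."""
    ic = gaussian_filter(conventional / np.max(conventional), sigma)
    idn = gaussian_filter(donut / np.max(donut), sigma)
    return np.clip(ic - gamma * idn, 0.0, None)
```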

Sample preparation

Fluorescent bead preparation

A suspension of 2 μm yellow–green carboxylate-modified microspheres (F8827, Thermo Fisher) was mixed with 1% low-melting-temperature agarose, dispersed by means of a bath sonicator and then mounted onto the bottom well of a glass bottom dish (35 mm dish with a 15 mm bottom well). The bottom well of the dish was fully filled by the sample and sealed with a coverslip.

Brain slice preparation

Mice expressing mRuby3 via virus injection were transcardially perfused with 1X phosphate buffered saline (PBS) followed by 4% paraformaldehyde (PFA). The brains were postfixed with 4% PFA overnight and then washed five times with PBS. Sections of 400 μm thickness were cut on a Leica VT1200S vibrating microtome, and selected sections were mounted onto glass slides and embedded with antifade mounting medium under coverslips.

Mouse preparation

EGFP female mice (age 9–12 weeks), which expressed EGFP throughout the entire body, excluding erythrocytes and hair, were prepared for establishing intravital tumor microenvironment imaging models. All of the mice were bred and maintained in a specific pathogen-free barrier facility at the Animal Center of Wuhan National Laboratory for Optoelectronics. A skin-fold window chamber (APJ Trading Co., Inc., Ventura, CA) was implanted onto the back of the mouse, as previously described [31]. One day later, a total of \(1\times {10}^6\) tumor cells (resuspended in 25 mL PBS), in which mCerulean-B16 tumor cells and B16 tumor cells were mixed at a ratio of 7:3, were injected between the fascia and dermis of the rear skin. The monoclonal cell line mCerulean-B16, which was constructed by our group, ubiquitously expresses the mCerulean fluorescent protein in B16 cells. The entire surgical process was conducted under sterile conditions to avoid infection. To relieve pain associated with surgery and inflammation, the mice received Tolfedine via intraperitoneal injection (16.25 mg/kg, Vétoquinol, Québec, Canada) immediately and within 24 hours after implantation. The mouse was allowed to recover for 24 hours, then anesthetized using isoflurane (1% in oxygen) and maintained at 37 °C for imaging. To label the elastin fibers of the arterial vessels, 100 μL of 0.25 mM Alexa Fluor 633 hydrazide (A30634, Life Technologies) in saline [32] was injected into the EGFP mouse via the tail vein 3 hours before imaging.

Zebrafish preparation

Zebrafish larvae from the transgenic line Ki(th:Gal4-VP16);Tg(UAS:GCaMP6s) in a nacre background were generously provided by the Du laboratory (Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China). At 5 days post fertilization, zebrafish larvae were paralyzed by short immersion in α-bungarotoxin (1 mg/ml, Tocris) and dorsally mounted onto the bottom well of a glass bottom dish (35 mm dish with a 15 mm bottom well) with 1.5% low-melting-temperature agarose.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author on reasonable request.

Abbreviations

AO: Adaptive optics
NA: Numerical aperture
DM: Deformable mirror
SLM: Spatial light modulator
FOV: Field of view
EGFP: Enhanced green fluorescent protein
SSIM: Structural similarity index
PBS: Phosphate buffered saline
PFA: Paraformaldehyde

References

  1. Kubby JA. Adaptive Optics for Biological Imaging. Boca Raton: CRC Press; 2013.

  2. Ahn C, Hwang B, Nam K, Jin H, Woo T, Park J-H. Overcoming the penetration depth limit in optical microscopy: adaptive optics and wavefront shaping. J Innovative Optical Health Sci. 2019;12(04):1930002.

  3. Booth MJ. Adaptive optical microscopy: the ongoing quest for a perfect image. Light Sci Appl. 2014;3(4):e165.

  4. Ji N. Adaptive optical fluorescence microscopy. Nat Methods. 2017;14(4):374–80.

  5. Wang K, Sun W, Richie CT, Harvey BK, Betzig E, Ji N. Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue. Nat Commun. 2015;6(1):7276.

  6. Tao X, Fernandez B, Azucena O, Fu M, Garcia D, Zuo Y, et al. Adaptive optics confocal microscopy using direct wavefront sensing. Opt Lett. 2011;36(7):1062–4.

  7. Azucena O, Crest J, Kotadia S, Sullivan W, Tao X, Reinig M, et al. Adaptive optics wide-field microscopy using direct wavefront sensing. Opt Lett. 2011;36(6):825–7.

  8. Débarre D, Botcherby EJ, Watanabe T, Srinivas S, Booth MJ, Wilson T. Image-based adaptive optics for two-photon microscopy. Opt Lett. 2009;34(16):2495–7.

  9. Débarre D, Booth MJ, Wilson T. Image based adaptive optics through optimisation of low spatial frequencies. Opt Express. 2007;15(13):8176–90.

  10. Booth MJ. Wavefront sensorless adaptive optics for large aberrations. Opt Lett. 2007;32(1):5–7.

  11. Ji N, Milkie DE, Betzig E. Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues. Nat Methods. 2010;7(2):141–7.

  12. Chen W, Natan RG, Yang Y, Chou S-W, Zhang Q, Isacoff EY, et al. In vivo volumetric imaging of calcium and glutamate activity at synapses with high spatiotemporal resolution. Nat Commun. 2021;12(1):6630.

  13. Ji N, Sato TR, Betzig E. Characterization and adaptive optical correction of aberrations during in vivo imaging in the mouse cortex. Proc Natl Acad Sci. 2012;109(1):22.

  14. Milkie DE, Betzig E, Ji N. Pupil-segmentation-based adaptive optical microscopy with full-pupil illumination. Opt Lett. 2011;36(21):4206–8.

  15. Wang C, Liu R, Milkie DE, Sun W, Tan Z, Kerlin A, et al. Multiplexed aberration measurement for deep tissue imaging in vivo. Nat Methods. 2014;11(10):1037–40.

  16. Maurer C, Jesacher A, Bernet S, Ritsch-Marte M. What spatial light modulators can do for optical microscopy. Laser Photonics Rev. 2011;5(1):81–101.

  17. Gould TJ, Burke D, Bewersdorf J, Booth MJ. Adaptive optics enables 3D STED microscopy in aberrating specimens. Opt Express. 2012;20(19):20998–1009.

  18. Patton BR, Burke D, Owald D, Gould TJ, Bewersdorf J, Booth MJ. Three-dimensional STED microscopy of aberrating tissue using dual adaptive optics. Opt Express. 2016;24(8):8862–76.

  19. Wang P, Slipchenko MN, Mitchell J, Yang C, Potma EO, Xu X, et al. Far-field imaging of non-fluorescent species with subdiffraction resolution. Nat Photonics. 2013;7(6):449–53.

  20. Zhao G, Rong Z, Kuang C, Zheng C, Liu X. 3D fluorescence emission difference microscopy based on spatial light modulator. J Innov Optical Health Sci. 2016;9(03):1641003.

  21. Tian N, Fu L, Gu M. Resolution and contrast enhancement of subtractive second harmonic generation microscopy with a circularly polarized vortex beam. Sci Rep. 2015;5(1):13580.

  22. Southwell WH. Wave-front estimation from wave-front slope measurements. J Opt Soc Am. 1980;70(8):998–1006.

  23. Panagopoulou SI, Neal DP. Zonal matrix iterative method for wavefront reconstruction from gradient measurements. J Refract Surg. 2005;21(5):S563–S9.

  24. Kuang C, Li S, Liu W, Hao X, Gu Z, Wang Y, et al. Breaking the diffraction barrier using fluorescence emission difference microscopy. Sci Rep. 2013;3(1):1441.

  25. Dehez H, Piché M, De Koninck Y. Resolution and contrast enhancement in laser scanning microscopy using dark beam imaging. Opt Express. 2013;21(13):15912–25.

  26. Rodríguez C, Chen A, Rivera JA, Mohr MA, Liang Y, Natan RG, et al. An adaptive optics module for deep tissue multiphoton imaging in vivo. Nat Methods. 2021;18(10):1259–64.

  27. Wang C, Ji N. Pupil-segmentation-based adaptive optical correction of a high-numerical-aperture gradient refractive index lens for two-photon fluorescence endoscopy. Opt Lett. 2012;37(11):2001–3.

  28. Leutenegger M, Rao R, Leitgeb RA, Lasser T. Fast focus field calculations. Opt Express. 2006;14(23):11277–91.

  29. Zipfel WR, Williams RM, Webb WW. Nonlinear magic: multiphoton microscopy in the biosciences. Nat Biotechnol. 2003;21(11):1369–77.

  30. Zhou W, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13(4):600–12.

  31. Qi S, Li H, Lu L, Qi Z, Liu L, Chen L, et al. Long-term intravital imaging of the multicolor-coded tumor microenvironment during combination immunotherapy. eLife. 2016;5:e14756.

  32. Shen Z, Lu Z, Chhatbar PY, O'Herron P, Kara P. An artery-specific fluorescent dye for studying neurovascular coupling. Nat Methods. 2012;9(3):273–6.

Acknowledgements

We thank the Du laboratory (Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China) for providing the zebrafish larvae.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61890952).

Author information

Contributions

L.F. conceived and oversaw the project. Z.Z. devised the AO scheme and developed the program for AO and instrument control. Z.Z. and J.H. built the optical setup. X.L., X.G., Z.C., Z.J., Z.Z. and J.H. prepared the samples. Z.Z. and J.H. conducted the imaging experiments. Z.Z. and L.F. wrote the manuscript. All authors contributed to the data analysis and revision of the manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Ling Fu.

Ethics declarations

Ethics approval and consent to participate

All animal experiments were approved by the Animal Experimentation Ethics Committee of Huazhong University of Science and Technology (HUST, Wuhan, China) and performed in accordance with its animal experiment guidelines.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Zhou, Z., Huang, J., Li, X. et al. Adaptive optical microscopy via virtual-imaging-assisted wavefront sensing for high-resolution tissue imaging. PhotoniX 3, 13 (2022). https://doi.org/10.1186/s43074-022-00060-6
