
Digital staining in optical microscopy using deep learning - a review

Abstract

Until recently, conventional biochemical staining had the undisputed status as the well-established benchmark for most biomedical problems related to clinical diagnostics, fundamental research and biotechnology. Despite this role as gold standard, staining protocols face several challenges, such as the need for extensive, manual processing of samples, substantial time delays, altered tissue homeostasis, a limited choice of contrast agents, 2D imaging instead of 3D tomography and many more. Label-free optical technologies, on the other hand, do not rely on exogenous and artificial markers, but instead exploit intrinsic optical contrast mechanisms, whose specificity is typically less obvious to the human observer. Over the past few years, digital staining has emerged as a promising concept that uses modern deep learning to translate optical contrast into the established biochemical contrast of actual stainings. In this review article, we provide an in-depth analysis of the current state-of-the-art in this field, suggest methods of good practice, identify pitfalls and challenges, and postulate promising advances towards potential future implementations and applications.

Biomedical sciences heavily rely on numerous biochemical staining protocols to achieve specific cell identification or tissue classification. Chemical binding between target molecules and engineered molecular markers, loaded with artificial contrast agents, can create artificial yet specific contrast for optical microscopy (see Fig. 1A & B). Due to their molecular specificity, these staining protocols are the established benchmark for most biomedical problems related to clinical diagnostics, fundamental research, and biotechnology. Beyond their undisputed success, these labeling-intensive protocols still require extensive processing of the samples, which can cause substantial time delays, affect tissue homeostasis, limit the choice of available contrast agents and often only allow 2D imaging of tissue slices or cell cultures instead of 3D tomography. The most common histological stain of hematoxylin and eosin (H&E), for instance, is based on tissue embedding and fixation (Formalin-Fixed Paraffin-Embedded - FFPE), manual slicing into thin sections (typically 3-7 µm) and staining, before imaging under conventional microscopes or whole slide scanners can be pursued.

Fig. 1

Basic principle of Digital staining. a Conventional staining of 3D tissue samples requires a time-demanding and cumbersome procedure of biopsy acquisition, formalin-fixed paraffin-embedding (FFPE), manual sectioning, dehydration and artificial staining. These prepared tissue slices are then imaged by optical microscopes and the obtained images are quantified (e.g., via histopathology scoring by experienced experts). b Staining of cell cultures is conventionally based on antibody reactions with immuno-fluorescence (IF) stains. This process does not require embedding, sectioning and dehydration, and can even be compatible with live cell imaging. However, the image quantification is still specific to the applied staining (e.g., nuclei staining for segmentation of nuclei). c Label-free optical technologies exploit the natural contrast of biomedical samples, without relying on artificial stainings. Although this omits the need for extensive sample preparation, the quantification is bound to the specific type of optical contrast that was used (e.g., dry mass approximation in quantitative phase imaging). d Digital staining (DS) can combine the advantages of an experimentally more practical imaging technique with the high specificity of a thorough but cumbersome staining approach. Thus, DS can be used to digitally enhance label-free optical microscopy (e.g., generation of IF images based on white light microscopy) or to perform stain-to-stain translation (e.g., generation of specific IHC staining based on already available H&E stainings). A detailed literature overview of commonly used input-target image pairings and respective example images is displayed in Fig. 2

Label-free optical technologies (see Fig. 1C), on the other hand, exploit natural contrast mechanisms, instead of relying on the limited choice of exogenous markers in the above mentioned staining procedures. Simple white-light microscopy, for instance, relies on amplitude differences based on scattering and absorption properties of cells and tissues; optical phase microscopy measures phase contrast based on refractive index (RI) differences, birefringence, or orientation; while other imaging modalities use the intensity or lifetime of natural autofluorescence (AF). Although these label-free contrast mechanisms can actually carry highly-relevant information related to factors like density and thickness, mass, redox-ratio and many more, their specificity as direct biomarkers is typically less obvious to the human observer. Over the past decades, machine learning (ML) or artificial intelligence (AI) demonstrated vast success in optical microscopy, e.g., in automated detection of diseases [1], 3D image segmentation [2] or simultaneous optimization of microscopy and software components [3]. In conventional pathology, AI models are often used to perform classification or segmentation of histology images from diseased and healthy tissues. As common in most supervised ML, training of these models requires large datasets with reliable ground truth labels. These labels are commonly generated manually by experts, e.g., for the automated segmentation of background, cell boundaries, and cell compartments by convolutional neural networks (CNN) [4]. Especially with the rise of the U-Net architecture [5] for image segmentation, cell segmentation could be solved more effectively. Nevertheless, the conventional procedures of histological staining and manual annotation are still rather time-consuming, and the need for reliable ground-truth data often acts as a bottleneck for throughput in digital pathology.

Over the past decade, ML researchers have developed several techniques for image-to-image translation. Upon training and validation, these generative models allow transfer from one image domain to another, e.g., from maps to satellite images [6], from horses to zebras [7] or for style transfer in art [8]. Recently, alternative image-to-image training strategies, such as Normalizing Flows [9, 10] and Denoising Diffusion Probabilistic Models [11, 12], have also gained significant popularity. Digital staining (DS) is an emerging concept in the field of computational microscopy that can digitally augment microscopy images by transferring the contrast of input images into a target domain (see Fig. 1D). Implementations of digital staining are most often based on machine learning algorithms that are trained on pairs of input and target images. In a nutshell, these ML models learn to link characteristic features in structure and contrast from one input domain (most often a label-free image) with those of the target domain (most often images from staining with well known molecular specificity). Thereby, digital staining very elegantly bypasses two obstacles: (i) during the development and training of computational models, digital staining omits the need for manual annotation of ground truth data, by obtaining the ground truth annotations from specific stainings, and (ii) upon deployment, inference with a trained model can circumvent the time-consuming and tedious procedure of actual sample preparation, including sectioning and staining.

Despite the vast potential of this technique, the growing number of new digital staining pipelines and a widening range of applications, thorough review articles on this topic are rare and only touch side aspects of digital staining. A 2022 review on GANs in ophthalmology [13] mentioned some DS techniques in the specific use case of transforming fundus photographs to angiography images. Jiang et al. provide a concise review on deep learning (DL) in cytology, including classification, segmentation, object detection and stain normalization of microscopy images [14], but without covering digital staining as such. In a similar fashion, Wu et al. touch on the topic of style transfer in microscopy in their 2021 review on computational histopathology [15], however only in the context of color normalization. In 2018, Jo et al. mentioned ‘image enhancement via style transfer’ as a promising development for the specific technique of quantitative phase imaging (QPI) [16], but without generally reviewing the entire field of digital staining. Rivenson et al. published a 2020 review article on virtual staining for histopathology [17]. However, it only targeted digital staining of FFPE sections and did not include the immense increase in publications in this field over the past three to four years (see Fig. 5A). The latest reviews from 2022 and 2023 summarized the concept of translating input images into target images in the sole context of histological tissue sections [18, 19], but without an in-depth analysis of other digital staining applications, including (live) cell staining.

Basic principle and key examples

Successful implementation of digital staining essentially relies on four key parts:

  • the use of input images that carry a sufficient amount of information to allow the translation into the target domain (see chapter on Input domains). This input domain usually relates to the use of a label-free technique, but it is not limited to that.

  • the use of target images with reliable ground-truth information that can be linked to the features in the input domain (see chapter on Target domain). These target images usually use the biochemical specificity of molecular stains as ground-truth, but are not restricted to those.

  • the use of appropriate computational models that can translate input images to target images (see chapter on Computational models). Most often, this image-to-image regression problem is solved by machine learning algorithms (specifically U-Net or GAN architectures), but earlier implementations also relied on linear mathematical formulas to translate color spaces.

  • a procedure to accurately generate paired input and target images (see chapter on Generation of paired images). The exact registration of input pixels and target pixels of the same structures might seem trivial but is essential to enable the model to perform accurate image-to-image regression. While a few recent implementations use unsupervised learning with unpaired images for training, all implementations at least require paired input and target images for a truthful validation of the predictions, as discussed below.

Thus, digital staining can be viewed as a holistic convergence of biology, optical microscopy and computational modeling. Successful implementations rely on an understanding of the entire workflow, which starts from a reasonably posed biological problem, involves input images that carry a sufficient amount of information with respect to that biological problem, as well as target images that can be linked to the information from the input domain, and ends with a computational model that is able to accurately translate input images into target images. Furthermore, the practical workflow to generate pairs of input and target images, as well as the choice of quantitative metrics for training and validation, are important considerations for digital staining.

Depending on the mode of operation and the preference of the authors, the concept of DS is also termed ‘virtual staining’, ‘in silico staining’, ‘pseudo-H&E staining’ or ‘virtual fluorescence’.

The earliest, and still one of the most common, implementations of DS translate label-free images of tissue sections into target images of well-known and widely accepted histological stainings. This is often based on two subsequent tissue sections, where one is imaged with label-free modalities as input, while the consecutive section is used for a conventional histology stain as target. This digital H&E staining has been demonstrated extensively and for a multitude of different organ samples based on label-free autofluorescence [20].

In 2018, Christiansen et al. demonstrated the next stage for digital staining from live cell cultures with different IF dyes in a shared optical path, using phase microscopy and a U-Net model [21]. The use of fluorescently labelled antibodies for digital staining in live cells opened the door for many new biomedical experiments, like an extension into 3D digital staining [22], the use of digital staining to promote prior-informed cell segmentation [23], digital staining of two different cell cycle markers for mitosis stage classification [24] or a detailed evaluation of virtual labeling of mitochondria in living cells [25].

Besides the conceptual advancements of digital staining, the field was undoubtedly fueled by the introduction of more powerful ML models for image-to-image regression, such as U-Net [5], generative adversarial networks (GAN) [26] and cycle-consistent GANs (CycleGANs) [7]. As these models became generally more available and were applied to digital staining, e.g., with the use of ‘Pix2Pix’ for digital staining in 2018 [27] or the stainGAN, which was initially used for stain normalization [28], the number of publications in this field increased exponentially over the past 3-4 years (see Fig. 5A).

Input domains: label-free contrast mechanisms in optical microscopy as “optical specificity”

As mentioned above, label-free contrast mechanisms often carry highly-relevant information that can be linked to functional and/or morphological features like density, thickness or mass, the redox-ratio of a cell cycle, surface topography, the presence or absence of certain molecules and many more. Whether this information is sufficient for a given digital staining task is among the first and most important considerations when implementing a digital staining model, as discussed in Trends & methods of good practice below.

In this chapter, all reviewed publications are categorized according to the contrast mechanism of the input domain. Label-free microscopy techniques are commonly used to generate input images, while elaborate staining procedures of known biochemical specificity are usually used as target images. The two label-free techniques of optical phase contrast and wide-field / white light illumination are the most commonly used techniques to generate input images, with 19% and 16% of our reviewed literature, respectively. Other notable label-free input imaging methods include autofluorescence (AF), nonlinear techniques, or photoacoustic imaging. Several studies employ combinations of different contrasts. On the one hand, this can be implemented in one single setup, e.g., in Fourier ptychographic microscopy (FPM) as a combination of amplitude and phase contrast [29], in dark field reflectance and autofluorescence (DRUM) [30] or in complementary nonlinear techniques [31,32,33]. On the other hand, some papers present the use of different imaging systems for combined data input, e.g., wide-field and phase contrast [21, 22, 34]. Furthermore, there are also several implementations of stain-to-stain translation, where inputs from one stain are digitally transferred to a different target stain.

While H&E is the most wide-spread stain used for digital staining, the single most popular combination is the use of phase contrast microscopy as input to predict multiple IF stains, as displayed in Fig. 2. Since most of these IF stains target the membrane (DiI stain), nuclei (DAPI or Hoechst) or cytoskeleton (microtubule, MAP stains), phase imaging techniques are an ideal match, as their optical phase contrast is highest for the very same cellular structures (membranes, nuclei and cytoskeleton).

Fig. 2

Pairings of input and target contrast. a Target image contrast is plotted against the input contrast, the number of publications in each combination is color-coded. Selected examples in (b) are indicated by numbers in (a): (B1) a translation of autofluorescence images from tissue slides to H&E images by Rivenson et al. [20], re-use permitted and licensed by Springer Nature. (B2) translation of phase contrast images of human neuron cells to specific fluorescence images (DAPI, anti-MAP2 and anti-neurofilament). Data available at https://github.com/google/in-silico-labeling from Ref. [21], re-use was permitted and licensed by Elsevier and Copyright Clearance Center. (B3) translation of bright field images of cells to multiple fluorescence stains by Ounkomol et al. [22]. Images are publicly available at https://downloads.allencell.org/publication-data/label-free-prediction/index.html, re-use was permitted and licensed by Springer Nature. (B4) translation of bright field images of living cells to genetically encoded mitochondria markers by Somani et al. [25]. Images are publicly available at https://doi.org/10.18710/11LLTW [35], re-use licensed under CC0 1.0. (B5) stain-to-stain translation of H&E images into cytokeratin stain by Hong et al. [36]. Images are publicly available at https://github.com/YiyuHong/ck_virtual_staining_paper, re-use licensed under CC BY 4.0. (B6) stain-to-stain translation of IHC images into different IHC images by Ghahremani et al. [37]. Images are publicly available at https://zenodo.org/record/4751737#.YV379XVKhH4, re-use permitted and licensed by Springer Nature. IHC = immuno-histochemical stain, IF = immuno-fluorescence stain, WF = wide field (white light illumination), AF = autofluorescence, PAM = photo-acoustic microscopy, IR = infra-red. An extended version of the detailed literature analysis can be found in the Supplementary material of this manuscript

The most important label-free optical techniques are briefly presented in this chapter, while biochemical staining methods which are usually used as target images, are presented in the following chapter Target domain: biochemical stains as ground-truth.

Wide-field (WF) microscopy

Perhaps the most basic type of optical microscope is the standard wide-field microscope, known since the beginnings of optical microscopy. Wide-field (WF) microscopy is an imaging technique in which the whole sample is illuminated at once. The basic design consists of a light source that illuminates an extended area, typically of a thin sample, which scatters and transmits a fraction of the illumination into a lens or collection of lenses that image the light onto an arrayed detector. Depending on the particular illumination conditions, discussed below, wide-field microscopy has also been referred to as bright-field microscopy and white-light microscopy, among others.

In its most basic form, wide-field microscopy offers qualitative contrast derived from the spatially-varying complex transmittance of the sample:

$$t(x,y) = \exp \left( j 2\pi \, n(x,y) \, \Delta z(x,y) / \lambda \right)$$
(1)

where \(n(x,y)\) is the spatially-varying complex refractive index of the sample, \(\Delta z(x,y)\) is the sample thickness, and \(\lambda\) is the wavelength of the illumination. The real part of the refractive index imparts a phase shift on the incident light that is often difficult to observe in thin samples using standard bright-field illumination, i.e., illumination whose angular range falls within the numerical aperture (NA) of the objective lens. However, off-axis illumination in the bright-field regime or in the dark-field regime (i.e., with illumination angles higher than the cutoff imposed by the NA of the objective) can highlight certain features that may be used for virtual staining, such as cell or organelle boundaries. The imaginary part of the refractive index corresponds to absorption induced by the sample. As such, wide-field microscopy can be useful for imaging certain types of cells that contain strongly absorbing molecules at certain wavelengths, such as red blood cells and melanocytes. The wavelength dependence of the absorption is often useful for distinguishing certain types of molecules, which can be achieved with a wide-field microscope by sweeping the illumination wavelength or by using white-light illumination with a multi- or hyperspectral camera.
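To make Eq. (1) concrete, the short sketch below evaluates the transmittance of a single pixel of a thin, weakly absorbing sample and the resulting bright-field intensity; all numerical values (refractive index, thickness, wavelength) are illustrative assumptions, not measurements.

```python
import numpy as np

# Minimal sketch of Eq. (1) for one pixel; all parameter values are assumed.
wavelength = 0.5e-6                 # illumination wavelength [m]
n_real = 1.38                       # real part of the refractive index (assumed)
n_imag = 1e-4                       # imaginary part -> absorption (assumed)
dz = 4e-6                           # sample thickness [m], a typical section

n = n_real + 1j * n_imag
t = np.exp(1j * 2 * np.pi * n * dz / wavelength)   # complex transmittance, Eq. (1)

phase_shift = np.angle(t)           # phase imparted on the incident light (wrapped)
intensity = np.abs(t) ** 2          # what a standard bright-field camera records

print(f"phase shift: {phase_shift:.2f} rad, transmitted intensity: {intensity:.4f}")
```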

For thicker samples, a simple model based on the complex transmittance map (Eq. 1) is insufficient. Such samples may exhibit higher attenuation contrast due to multiple scattering and absorption, which can be quantified by an attenuation coefficient, \(\mu _t = \mu _a + \mu _s\), that subsumes both the absorption coefficient, \(\mu _a\), and the scattering coefficient, \(\mu _s\), though a standard wide-field microscope generally cannot distinguish the effects of the two sources. While such coefficients are gross or bulk metrics of biological samples (i.e., having an opaque relationship with the 3D structure of the sample), they can still offer useful sources of contrast for virtual staining [22].

Phase sensitive methods

Phase contrast (PC) is an important endogenous contrast mechanism of label-free samples. Small changes in the refractive index and thickness of cells result in detectable changes in the optical phase. Generally, phase contrast microscopy attenuates the background light and compensates for the phase shift of the scattered light. This way, the scattered light interferes with the background light more constructively, which enhances the image contrast [38]. Phase microscopy techniques are quite diverse in their exact implementation. They range from the use of phase rings or spatial light modulators to interferometric setups or active illumination control, and most often include computational phase reconstruction.

Phase contrast microscopy and differential interference contrast (DIC) microscopy are still two of the most commonly used phase imaging modalities that reveal structures of semi-transparent cells that are invisible to the previously discussed wide-field microscopy. Due to the substantial development of PC and DIC in the last half-century and the increasing demand for monitoring in vitro cells, those two modalities are now commonly available in commercial microscope solutions. Therefore, a large and diverse set of PC and DIC studies has been conducted at multiple sites for predicting fluorescence labels, including nuclei and dendrites for human motor neuron cells, as well as nuclei and membranes for human breast cancer line cells [21]. Further, DIC-based virtual staining has been proposed in hematology to replace the laborious and inconsistent H&E stain of blood smears [39]. In this case, as DIC only preserves the edges of phase images, the images tend to lack the detail required for accurate predictions of inner-cellular structures. To mitigate this issue, Tomczak et al. [39] proposed to add an auxiliary task of nucleus and cytoplasm segmentation in addition to the prime domain transformation task (i.e., to predict H&E stain from DIC images), which forces the encoder to be aware of the shape of structures. Compared to transformation networks trained with the prime domain transformation task alone, such a multi-task learning method can improve performance on digitally staining leukocytes from hematology slides imaged with DIC.
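The essence of this multi-task strategy can be written as a weighted sum of the translation loss and the auxiliary segmentation loss. The PyTorch sketch below is a schematic formulation under our own assumptions; the specific loss terms and the weight `lambda_seg` are not taken from Ref. [39].

```python
import torch.nn.functional as F

def multi_task_loss(pred_stain, target_stain, pred_mask, target_mask, lambda_seg=0.5):
    """Schematic multi-task objective: domain transformation (DIC -> H&E) plus an
    auxiliary nucleus/cytoplasm segmentation task sharing the same encoder."""
    loss_translation = F.l1_loss(pred_stain, target_stain)       # prime task
    loss_segmentation = F.cross_entropy(pred_mask, target_mask)  # auxiliary shape-awareness
    return loss_translation + lambda_seg * loss_segmentation
```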

Another imaging technique based on the RI of the sample is optical coherence tomography (OCT) [40]. Modern point-scan OCT is typically implemented in the frequency domain with a Michelson or Mach-Zehnder interferometer, using wavelength-swept light sources or broadband (low-coherence) sources, such as superluminescent diodes, for illumination. In analogy to ultrasound imaging, OCT uses an optical “pulse-echo” time-of-flight method to create tomographic line-scan images along an optical ray, which can penetrate up to a few millimeters inside human tissue. Scanning mirrors can then be used to move the optical beam transversely across the sample and create a volumetric 3D image of a tissue sample. While the lateral resolution of OCT depends on the NA, its axial resolution is inversely proportional to the bandwidth of the source [41]. Since its invention in the early 1990s, OCT has become one of the most successful optical methods in the medical industry [41]. Due to this commercial success, OCT devices are now available off-the-shelf. For instance, Lin et al. use a multi-modal OCT system (AcuSolutions Inc, Taiwan) that can create registered images from both optical coherence microscopy and fluorescence microscopy [42]. The images from the two modalities were merged and false-colored to create pseudo-H&E images. Extensive in-depth comparisons between pseudo-H&E images and frozen-section H&E images from various biopsy specimens showed that the proposed digital staining method can provide H&E images that describe cellular-level morphology around two times faster than the frozen-section method [42]. In addition, another study showed that digital staining can also be achieved from in vivo OCT measurements [43], where tomographic images of the optic nerve heads were acquired from 10 healthy subjects using a standard OCT eye scanner (Heidelberg Engineering Inc, Germany). Four different tissue types were identified based on pixel-intensity histograms and digitally stained in a way that connective and neural tissues of the optic nerve heads can be easily visualized [43].

While the well-established techniques of phase-contrast microscopy and DIC provide qualitative phase contrast by converting phase differences into intensity differences, quantitative phase imaging (QPI) can provide intrinsic quantification of the optical path length difference, which is a function of refractive index (RI) and sample thickness [44]. Thus, QPI offers a degree of specificity in the imaging signal without requiring any sample preparation. Due to its ability to map the physical refractive index of the sample, digital staining based on QPI has been widely explored recently with various computational microscopy implementations [38]. The QPI concept was gradually extended towards 3D imaging, which resulted in the invention of gradient light interference microscopy (GLIM) in 2017 [45]. GLIM uses data post-processing to filter out-of-focus components for 3D imaging. In 2020, this technique was further augmented by computational specificity (phase imaging with computational specificity - PICS) to digitally stain 3D GLIM images using a U-Net implementation [46].

FPM is a computational microscopy technique that enables wide-field and high-resolution QPI without any interferometry or mechanical scanning [47]. Usually, a low-magnification objective lens is used for a wide field-of-view, and an LED array is utilized to vary the illumination angle. In FPM, multiple measurements are captured by varying illumination angles, and each measurement represents a different spatial frequency band of the sample. Phase information is then recovered via phase retrieval algorithms that utilize the overlap in spatial frequency coverage as a constraint. FPM was already used for digital staining of mouse kidney slides stained with antibody conjugates, based on monochromatic phase images reconstructed with Fourier ptychography [29]. An FPM-like setup using the same active illumination by an LED array was also used to digitally stain cell membranes and nuclei in two different cell cultures [48], although actual FP reconstruction was not applied in that case.
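The idea behind FPM can be summarized in its forward model: each tilted illumination shifts the object spectrum, which the objective pupil then low-pass filters before intensity detection; phase retrieval inverts this model over many overlapping measurements. The sketch below implements only this idealized forward model with assumed parameters, not an actual reconstruction.

```python
import numpy as np

def fpm_measurement(obj, pupil, kx, ky):
    """One low-resolution FPM intensity image for an illumination angle that
    shifts the object spectrum by (kx, ky) pixels (idealized forward model)."""
    spectrum = np.fft.fftshift(np.fft.fft2(obj))        # object spectrum
    shifted = np.roll(spectrum, (ky, kx), axis=(0, 1))  # tilted illumination = spectrum shift
    low_pass = shifted * pupil                          # objective NA acts as a low-pass pupil
    return np.abs(np.fft.ifft2(np.fft.ifftshift(low_pass))) ** 2  # camera records intensity

# Illustrative usage: phase-only object and a circular pupil (assumed NA cutoff).
N = 256
yy, xx = np.mgrid[-N//2:N//2, -N//2:N//2]
pupil = (xx**2 + yy**2 < (N//8)**2).astype(float)
obj = np.exp(1j * 0.5 * np.random.rand(N, N))           # assumed phase-only sample
img = fpm_measurement(obj, pupil, kx=10, ky=0)          # one of many angled measurements
```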

Autofluorescence (AF)

There are several naturally occurring molecules that emit fluorescence upon excitation by UV or blue light. This process of autofluorescence is often exploited for label-free fluorescence imaging. The most common autofluorescent molecules are listed in Table 1. After excitation and internal energy conversion (Stokes shift), these molecules emit standard fluorescence. Intensities, lifetimes, as well as ratios of different autofluorescent molecules can be specific to certain cell types and/or functional states [49]. Thus, AF is a reasonable candidate for digital staining. Similar to whole slide imaging (WSI) with white light illumination, some articles use whole slide scanners with UV excitation of these natural fluorophores to perform WSI based on AF contrast [50].

Table 1 Most common fluorophores for natural autofluorescence, according to [49]

Nonlinear techniques

Optical, nonlinear label-free contrast mechanisms described here include multiphoton microscopy (based on nonlinear AF and second harmonic generation - SHG) and Coherent Anti-Stokes Raman Scattering (CARS).

Although the non-linear excitation process in multiphoton microscopy (MPM) is slightly different from the single-photon AF described above, most molecules displayed in Table 1 can also be excited with a corresponding two- or three-photon excitation. Compared to conventional fluorescence, which uses blue or UV light of around 400 nm, MPM uses longer wavelengths, typically in the range of 780-850 nm (two-photon process) or 1,100-1,300 nm (three-photon process). This avoids the strong scattering and absorption of biological tissues in the UV range, while not yet being affected by the immense attenuation from absorption in water towards the far infra-red region. Therefore, MPM enables deeper tissue imaging than single-photon microscopy. Additionally, the signal generation is naturally limited to the confined focal volume, which obviates the need for a pinhole in the detection path. Most commonly, the native fluorophores NADH and flavins are used for label-free MPM [51]. Similar to single-photon autofluorescence, this signal was shown to be specific for certain cell types and/or functional states [52, 53], making it a useful input contrast for digital staining.

Furthermore, MPM naturally enables higher harmonic generation (second harmonic generation - SHG or third harmonic generation - THG) as an additional contrast mechanism for imaging. SHG and THG are based on the electrical field component of the incident light and the polarization properties of the sample. This electrical field induces a directional polarization within the sample, which in turn induces the emission of a secondary wave at higher frequency. In contrast to fluorescence, SHG and THG do not experience a Stokes shift. This signal is very specific to structures within the sample that have the respective non-linear susceptibility properties (i.e., \(\chi ^{(2)} \ne 0\) for SHG or \(\chi ^{(3)} \ne 0\) for THG). SHG, for instance, is specific for structures that lack inversion symmetry (\(\chi ^{(2)} \ne 0\)), such as the biological molecules collagen, myosin and tubulin [54].

A multi-modal microscopy system, including coherent anti-Stokes Raman scattering (CARS) at 2,850 cm\(^{-1}\), SHG in the forward direction and two-photon AF in the backward direction [55], was used to demonstrate a computational transformation from images with label-free multi-modal contrast to images with an artificial H&E contrast [33]. This translation was later updated by using GAN models [32].

Photoacoustic microscopy (PAM)

Photoacoustic imaging is based on the photoacoustic effect [56] and detects sound propagation upon laser excitation of the most prominent absorbers in tissue [57]. Thus, PAM promises high molecular specificity to molecules that have a high absorption coefficient, such as hemoglobin, water, melanin and collagen [57]. As ultrasonic scattering is typically weaker in tissue than optical scattering, photoacoustic microscopy can produce absorption images at greater depths than traditional microscopy techniques, which makes it suitable for a variety of in vivo studies [58]. Digital staining of PAM images was demonstrated for FFPE brain sections [59, 60] or skin sections [61, 62].

Target domain: biochemical stains as ground-truth

While the previous chapter on Input domains discusses label-free optical imaging techniques that are mostly used as input images for digital staining, this chapter presents a similar analysis for artificial staining methods that are usually used as target images for digital staining. Here, we have grouped the typical staining methods into three categories: histological H&E staining, immuno-histochemical staining (IHC) and immuno-fluorescence staining (IF).

Histological staining

In standard histopathology, tissue samples are most often analyzed with respect to their morphological appearance. Due to the low contrast of thin tissue sections under conventional light microscopes, histopathology relies on artificial staining to evaluate tissue morphology. The combination of hematoxylin and eosin staining (H&E) is the most widely used in histopathology and serves as gold standard for most medical diagnosis of tissues. Generally, tissue biopsies are first extracted, using techniques such as strip biopsy [63], endoscopic pincer grasping instruments [64] or ligating devices [65]. These samples are then fixed, embedded and sectioned. Typical fixation media are based on formaldehyde, while some techniques use zinc or alcohol/acetone, sometimes with the addition of picric acid, mercuric chloride or sodium acetate [66]. There are two main approaches for tissue embedding: embedding in paraffin [67] or snap freezing in optimal cutting temperature compound [68]. Each of these procedures comes with certain procedural requirements and different time durations. The most common technique for fixation and embedding is the use of formaldehyde for fixation and paraffin for embedding, leading to the gold standard for tissue preparation of Formalin-Fixed Paraffin-Embedding (FFPE).

Depending on the type of embedding, the samples are then sectioned by cryotomes or microtomes into thin slices, typically between 3 and 10 µm. Finally, these sections are mounted on glass slides and stained. In the case of H&E staining, following Cardiff et al. [69], paraffin tissue sections are first cleared of paraffin in baths of xylene (three changes for 2 min per change), then hydrated by ethanol baths (three changes of 100% ethanol for 2 min per change, transfer to 95% ethanol for 2 min, transfer to 70% ethanol for 2 min) and rinsed in running tap water (2 min) [69].

Afterwards, the tissue sections are stained in hematoxylin solution (3 min), washed again in running tap water (5 min) and then stained with eosin (2 min) [69]. The samples are dehydrated (dipping in 95% ethanol, transfer to 95% ethanol for 2 min, two transfers to 100% ethanol for 2 min per change) and cleared in three changes of xylene (2 min per change) [69]. Thereby, hematoxylin stains cell nuclei and eosin stains extracellular matrix and cytoplasm. Finally, the stained tissue sections are sealed and preserved between a glass slide and a coverslip [69]. Thus, the staining protocol alone already accounts for at least 90 min, and the entire procedure from biopsy acquisition to microscopic images of the stained tissue sections can easily last multiple days or even weeks, when considering queuing times in the common laboratory workflow. In the current state-of-the-art of digital staining, H&E staining is the most common target stain for digital staining, as displayed in Fig. 2.

Immuno-histochemical staining (IHC)

Compared to the purely morphological approach of H&E staining in histopathology, the concept of immuno-histochemical staining (IHC) allows more specific antigen detection. Thereby, IHC goes beyond morphological analysis and fills the gap between classic histopathology (see section on Histological staining) and the molecular specificity of immuno-fluorescence staining (see section on Immuno-fluorescence staining (IF)) [66]. Similar to histopathology, IHC stains are usually applied to fixed tissue sections. In contrast to H&E, however, IHC stains are based on specific antibodies [66]. IHC can either use direct staining, where a primary antibody directly leads to a colored histochemical reaction, or indirect staining, where the primary antibody is combined with a secondary antibody. In the latter case, the primary antibody binds to the target epitope and the secondary antibody is loaded with a chromogen and binds to that primary antibody. Common examples of IHC include cell stainings, such as anti-CD3 or anti-CD20, or picro sirius red staining for collagen [70]. Similar to the above mentioned histology stainings, IHC stains are most commonly used on FFPE tissue sections. IHC stains have regularly been used for digital staining, for instance by using a human cancer marker (Ki-67 antigen) [31], Jones’ stain [20, 71,72,73], Masson’s trichrome [20, 34, 71,72,73,74,75,76,77], picro sirius red [78, 79], orcein [78], Verhoeff-van Gieson (EVG) stains [79] or periodic acid-Schiff (PAS) stain [34, 76, 80].

Immuno-fluorescence staining (IF)

The third category is the use of fluorescence markers for staining. This can either be achieved by a directly fluorescent stain or primary antibody (for instance, the DNA-binding DAPI stain) [81], or by using the established combination of primary antibodies against specific epitopes and a fluorescent secondary antibody. In the latter case, the primary antibodies are sometimes similar to those used in IHC. Although the boundary between IHC and IF staining can sometimes be blurry, we deliberately make this distinction, since IF can also be used with unfixed samples, like living cell cultures. Due to the toxicity of the fixation process and the physical sectioning of the samples, this is challenging or even infeasible using histology or IHC stainings. Therefore, IF enables a series of new applications for digital staining.

In combination with a shared optical system to generate paired input and target images (see chapter on Generation of paired images), IF is the best viable option to perform digital staining for living cells in culture. The most common examples of IF techniques in digital staining include stains for cell membranes [21, 22, 46, 48, 82], DAPI or Hoechst stains for nuclei [21, 22, 46, 48, 50, 82,83,84,85,86,87,88,89,90], Rhodamine B isothiocyanate for viruses [91], axon markers (tau stain [83], anti-neurofilament stain [21]), anti-MAP2 for dendrites [21, 50, 83], live and dead cell markers (NucBlue as “live” reagent and NucGreen as “dead” reagent or PI as dead cell marker) [21, 50, 84, 87, 92], actin markers [22, 84, 86, 88, 93], mitochondria (MitoTracker Red) [94], anti-Tuj1 for neurons [21], endosome markers [84], Golgi apparatus markers [84], proliferation markers [84], myelin markers in brain [86] and markers for the G1 and S stages of the cell cycle [24].

In addition to these exogenous molecular markers, fluorescence stains can also be encoded by genetic modification of the target organism to achieve expression of fluorescence markers in target components, e.g., in mitochondria [25]. Furthermore, IF stains are also being used in a multiplexed fashion (see Fig. 4E) for multiplexed immunofluorescence (mpIF) [37, 95].

Biochemical specificity of target stains

In digital staining, it is often overlooked that the biochemical binding specificity represents the fundamental uncertainty that defines the upper limit of trustworthiness of any digital staining model. Although most stains mentioned above are commonly used as ‘gold-standard’, they are actually not always standardized with respect to their biochemical binding specificity. In the case of H&E or IHC stains, the appearance of stained samples severely depends on the type of stain solution, the exact staining protocol and the quality or age of dyes. This is especially the case for histology and IHC stainings, but also applies to many IF stains, like the common fluorescent DNA-stain DAPI. In these cases, a standardized specificity value (commonly stated in %) is not available.

For IF stains, on the other hand, antibody manufacturers occasionally state reference measurements for specificity. However, it is still challenging to standardize the actual biochemical specificity values across different studies, as they are severely affected by the precise biochemical conditions of the experiment and the environment, including pH value, different behavior in medium vs. in cells, ligand-buffer interaction, temperature or competing binding partners, to name a few. As displayed in Table 2, the stated specificity values can range between 66% and almost 100% for different target molecules. Moreover, this binding specificity can even vary for the same molecule, for instance if different antibodies target different binding sites (see the example of anti-tau antibodies in Table 2).

Table 2 Examples of primary antibodies for immuno-fluorescence staining with reported binding specificity and featured examples for digital staining (DS). MAP2 = Microtubule Associated Protein 2, HEK = human embryonic kidney, isoform specificity = “no detectable non-specific binding”

For most histological applications, this is completely acceptable, as long as the stain quality enables pathologists to count cells, determine diseased tissue and make a diagnosis. In the case of IF stains, careful calibration measurements can still enable quantitative analysis under standardized conditions. However, it is essential to consider the limited specificity of any target image instead of treating it as actual ground-truth and to regard digital staining as a model prediction that is fundamentally based on these limitations.

Computational models to transfer input images to target domain

As already mentioned, the development of image-to-image regression models like U-Net [5], GANs [26] or cycle-consistent GANs (CycleGANs) [7] fueled the field of digital staining over the past years. Together, these models make up more than 60% of all articles reviewed here. A short overview of the basic principle of these models is displayed in Fig. 3 and elaborated upon below.

Fig. 3

Computational models for Digital staining. a The general supervised machine learning workflow for most digital staining implementations. Please refer to the main text for some examples that use an unsupervised workflow. b The most commonly used models: besides earlier implementations of color-coding with a linear contrast translation equation f(k) or feature engineering and classical ML, almost all modern digital staining implementations use deep learning with either CNN or GAN architectures (I = Input image, T = Target image, G = Generator, \(I_g\) = generated image, D = Discriminator)

Pre-processing

Before image data can be used to train a digital staining model, several pre-processing steps are often required. Unless a common optical path is used (see chapter on Generation of paired images), digital staining usually requires image registration to ensure optimal pixel overlay between input and target images. As discussed in the chapter on Caution & pitfalls, this can lead to several challenges, for instance with sectioning artifacts when using consecutive sections to generate paired images. A detailed explanation of an image registration workflow for digital staining can be found in the work of Bai et al. [104], who used a combination of speeded up robust features (SURF) points, correlation-based elastic registration algorithms, trained registration models and pyramid elastic image registration algorithms. The generation of image patches is an additional, very common pre-processing step. Especially when images are acquired with large-FOV whole-slide imaging (WSI) systems (see section on Wide-field (WF) microscopy), slide images are usually cropped into 2,000-20,000 image patches of 256×256 or 512×512 pixels each before training a digital staining model.
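As a simple illustration of these two pre-processing steps, the sketch below estimates a rigid translation via phase cross-correlation (scikit-image) as a crude stand-in for the far more elaborate multi-stage registration of Bai et al. [104], and then crops matching patch pairs; patch size and stride are assumptions.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_and_patch(input_img, target_img, patch=256, stride=256):
    """Rigidly align the (stained) target image to the label-free input image,
    then crop both into matching patches for training (simplified sketch)."""
    offset, _, _ = phase_cross_correlation(input_img, target_img)  # (dy, dx) shift
    target_aligned = nd_shift(target_img, offset)                  # crude rigid correction
    pairs = []
    h, w = input_img.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            pairs.append((input_img[y:y+patch, x:x+patch],
                          target_aligned[y:y+patch, x:x+patch]))
    return pairs
```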

Linear color-coding methods for stain transformation

Training data-driven machine learning models is the current method of choice for transferring style and color from input into target images. However, especially earlier studies also used simpler mathematical equations for color transfer, which worked reasonably well but were often not verified quantitatively on a separate validation data set [42, 74, 105,106,107,108,109,110,111,112,113]. Most of them follow a simple color-coding method, i.e., a linear mathematical model based on Gareau et al. [105], which was also applied to the previously mentioned OCT images [42]. Although most of these linear color coding methods were applied in earlier implementations of DS, they were still used as recently as 2022 [30].
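For illustration, such a Beer-Lambert-style color coding in the spirit of Gareau et al. [105] can be sketched as follows; the per-channel color constants below are illustrative assumptions, not the published values.

```python
import numpy as np

# Two normalized channels (e.g., a nuclear and a cytoplasmic signal) attenuate
# white light with hematoxylin- and eosin-like colors (constants are assumed).
K_HEMATOXYLIN = np.array([0.86, 1.00, 0.30])   # assumed RGB absorption of hematoxylin
K_EOSIN = np.array([0.05, 1.00, 0.54])         # assumed RGB absorption of eosin

def pseudo_he(nuclear, cytoplasm):
    """Map two channels (HxW arrays, values in [0,1]) to a pseudo-H&E RGB image."""
    return (np.exp(-nuclear[..., None] * K_HEMATOXYLIN) *
            np.exp(-cytoplasm[..., None] * K_EOSIN))    # white light * exp(-k * signal)
```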

Feature engineering and classical machine learning

In the next phase of digital staining models, researchers quantified engineered image features and exploited them in classical machine learning models. For instance, k-nearest neighbors [114], spectral angle map (SAM), nearest neighbor (NN), nearest mean classifier (NearMean) [115], random forests [31, 116] or partial least squares regression (PLS) [33] were used for digital staining problems. Although these approaches require more human-supervised feature extraction and prior knowledge, they can perform very robustly and often generalize well across different data sets from the same sample under different imaging systems. On the other hand, they are often challenging to transfer to other samples and can be more labor-intensive than deep learning methods.
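A minimal sketch of this approach is a pixel-wise random forest regression on hand-crafted features; the features chosen here (local mean, local variance, gradient magnitude) are generic assumptions rather than the features actually used in Refs. [31, 116].

```python
import numpy as np
from scipy.ndimage import uniform_filter, sobel
from sklearn.ensemble import RandomForestRegressor

def pixel_features(img):
    """Stack simple hand-crafted per-pixel features (assumed, for illustration)."""
    return np.stack([img,
                     uniform_filter(img, 5),                                  # local mean
                     uniform_filter(img**2, 5) - uniform_filter(img, 5)**2,   # local variance
                     np.hypot(sobel(img, 0), sobel(img, 1))],                 # gradient magnitude
                    axis=-1)

def train_pixelwise_rf(input_img, target_img):
    """Regress the target stain intensity pixel by pixel from the features."""
    X = pixel_features(input_img).reshape(-1, 4)
    y = target_img.ravel()
    return RandomForestRegressor(n_estimators=50).fit(X, y)
```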

Deep learning

Deep neural networks (DNNs) refer to neural networks with multiple layers, allowing for the extraction of increasingly abstract features from input data. While early models in the 1940s and 1950s were limited in their ability to learn from data and to scale to larger and more complex problems, the development of backpropagation in the 1980s sparked renewed interest in DL. However, computational limitations prevented training of neural networks with many layers, and progress in DL was slow. The emergence of faster and more powerful processors, along with the availability of large amounts of labeled data, led to a resurgence of interest in DL in the early 2000s.

Convolutional neural networks (CNNs)

Researchers began to develop more sophisticated NN architectures, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), that could learn from complex and high-dimensional data, such as images, leading to image recognition using a so-called deep convolutional neural network (DCNN) [117]. The employed convolutional layers are particularly well-suited for image data, as they use a set of learnable filters to convolve over the image, detecting various features such as edges, corners, or textures. Since then, DL has become one of the most active areas of research in artificial intelligence (AI).

DL has been used for many machine learning tasks on images, including classification, regression, and segmentation. The most popular DL architecture for image segmentation is the U-Net, a fully convolutional neural network that was introduced in 2015 by Ronneberger et al. [5]. The U-Net consists of an encoder network and a decoder network. The encoder network consists of several convolutional and pooling layers that decrease the spatial dimension of the input images while simultaneously increasing the number of feature maps. The decoder network is made up of convolutional and upsampling layers that restore the spatial dimensions of the resulting segmentation map and simultaneously decrease the number of feature maps. The U-Net utilizes skip connections to combine low-level features from the contracting path with high-level features from the expanding path, preserving spatial resolution in the output.
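The following compact PyTorch sketch illustrates this encoder-decoder structure with skip connections; the depth and channel widths are illustrative assumptions (U-Nets used for digital staining are typically deeper).

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-Net sketch: the encoder halves the spatial size while doubling
    the feature maps, the decoder reverses this, and skip connections concatenate
    encoder features into the decoder (channel widths are assumptions)."""
    def __init__(self, c_in=1, c_out=3):
        super().__init__()
        self.enc1, self.enc2 = block(c_in, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, c_out, 1)   # e.g., 3 channels for an RGB stain

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)
```

For digital staining, the same architecture is simply trained as an image-to-image regressor, e.g., with an L1 or SSIM-based loss instead of a segmentation loss.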

CNNs are one of the two most often used types of predictive models, besides generative models of the GAN family (see below). In particular, the U-Net is the most popular CNN architecture used for digital staining [21, 77, 79, 83, 85, 86, 89,90,91,92, 98, 118, 119].

Generative models

Generative adversarial networks (GANs) have revolutionized the field of DL by enabling the generation of realistic data samples. The first GAN was proposed by Ian Goodfellow in 2014 [26] and consisted of one generator and one discriminator. The generator produces fake data samples, while the discriminator tries to distinguish between real and fake data samples. The training process involves the two networks playing a min-max game, with the generator trying to fool the discriminator into classifying its fake samples as real, while the discriminator tries to correctly classify the samples. While GANs work most of the time, there is no guarantee that the generator will produce images that actually look like the input dataset. To address this issue, researchers have proposed various modifications to the GAN architecture, such as the conditional GAN, which conditions the generator on class labels or input images, and the CycleGAN, which consists of two generators and two discriminators [7]. Within this GAN family, the conditional GAN architecture of ‘Pix2Pix’ is the most commonly used for digital staining [25, 61, 62, 79, 82, 120,121,122,123,124].
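A schematic ‘Pix2Pix’-style training step can be sketched as follows; the generator G and the pair-discriminator D are assumed to be defined elsewhere (e.g., a U-Net and a patch discriminator), and the weight of 100 for the L1 term follows the commonly used default.

```python
import torch
import torch.nn.functional as F

def gan_training_step(G, D, opt_G, opt_D, x, y, lambda_l1=100.0):
    """One conditional-GAN step (schematic): D separates real (x, y) pairs from
    generated (x, G(x)) pairs; G tries to fool D while an L1 term keeps its
    output close to the target stain."""
    fake = G(x)

    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    opt_D.zero_grad()
    real_pred = D(x, y)
    fake_pred = D(x, fake.detach())
    loss_D = (F.binary_cross_entropy_with_logits(real_pred, torch.ones_like(real_pred)) +
              F.binary_cross_entropy_with_logits(fake_pred, torch.zeros_like(fake_pred))) / 2
    loss_D.backward()
    opt_D.step()

    # Generator update: fool D and match the target stain with L1.
    opt_G.zero_grad()
    pred = D(x, fake)
    loss_G = (F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred)) +
              lambda_l1 * F.l1_loss(fake, y))
    loss_G.backward()
    opt_G.step()
    return loss_G.item(), loss_D.item()
```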

The CycleGAN has gained popularity in recent years due to its ability to translate between different modalities without the need for paired datasets with labels. Instead, the CycleGAN uses a cycle consistency loss to ensure that the generated output is consistent with the input data [7]. This has enabled researchers to apply the CycleGAN to a wide range of tasks, such as predicting H&E stain from photoacoustic microscopy images [125] and improving periodic-acid-Schiff-stained renal tissue for whole slide image segmentation [126]. In addition, translations between different stains have been proposed, such as transferring between Papanicolaou and Giemsa stains [127]. Moreover, CycleGAN approaches have also been used to predict color bright-field images and antibody-conjugate-stained mouse kidney slides from monochromatic phase images reconstructed with Fourier ptychography [29]. In general, CycleGAN approaches have proven to be versatile and flexible, allowing for translation between various modalities and making it easier to acquire the data required for medical diagnostics and research. One recent development in the field is the introduction of saliency maps, which have been used to improve the performance of unsupervised models for image transformation tasks. For example, an unsupervised model named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM) uses a saliency constraint to learn the mapping between different histology stains [50].

One of the major challenges with GANs is the problem of “hallucination” [128]. Hallucination occurs when the generator produces synthetic data that do not correspond to the input data distribution. In other words, the discriminator is still fooled by synthetic data that either show realistic-looking artifacts (e.g., a digitally stained cell where there is no actual cell in that region) or delete features (e.g., cells) that are actually present in the real data. This problem can arise when the training data are limited or when the input data are highly variable. Hallucination can be problematic, particularly in the medical domain. It is difficult to detect when a GAN is hallucinating, as the synthetic data may look plausible to the human eye. To mitigate the problem, researchers have proposed various techniques, such as incorporating regularization terms in the GAN loss function [129], using pre-training of the generator [26], or using multiple discriminators [130]. However, the problem of hallucination remains a challenging issue in GAN training, particularly when working with limited or highly variable data, as discussed below.

Loss functions

For deep learning-based digital staining, the selection of the loss function is one of the most important aspects of neural network design. Similar to other DL applications, the most commonly used loss functions are the mean absolute error (MAE) or L1 loss, the mean squared error (MSE), which takes the L2-norm penalty, and cross-entropy. The innovations in the fields of convolutional networks, the U-Net and generative models were accompanied by a series of new quantitative metrics for image similarity, such as the Wasserstein loss [131], the structural similarity index (SSIM) [132] or multi-scale SSIM (MS-SSIM) [133]. In addition to MSE and MAE, these metrics are used frequently for training and/or performance evaluation in digital staining. However, employing a single loss function may lead to performance degradation: MAE keeps brightness and color unchanged, but assumes that the influence of noise and the local characteristics of the image are independent [82], while MSE tends to generate blurry results [134]. SSIM became popular since it tends to produce results that are closer to the human visual system in terms of brightness, contrast, structure, and resolution [93]. Additionally, SSIM can detect high-level structural errors [50]. However, we note that the MS-SSIM loss can lead to brightness changes and color deviations [82]. Therefore, there is a range of customized metrics designed for specific tasks, as well as combinations of multiple basic loss functions [50, 94, 135, 136]. In Table 3, we summarize formal definitions and featured references for the most common loss functions.
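For performance evaluation, these metrics are typically computed per image on a held-out test set; a minimal sketch (assuming grayscale images normalized to [0,1]) is shown below.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_prediction(pred, target):
    """Report the metrics most commonly used in digital staining papers."""
    return {"MAE": float(np.mean(np.abs(pred - target))),       # L1
            "MSE": float(np.mean((pred - target) ** 2)),        # L2
            "SSIM": float(ssim(pred, target, data_range=1.0))}  # structural similarity
```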

Table 3 Selected loss functions used for digital staining (DS) with featured references, not including adversarial losses. O(x,y) represents the output image, \(\hat{O}(x,y)\) represents the target image, \(\mu\) represents the average value, \(\sigma\) represents the standard deviation, and \(c_1\) and \(c_2\) are stabilization constants used to prevent division by a weak denominator

The invention of GANs and their wide use for digital staining, as discussed above, require more complex adversarial loss metrics that are composed of a generator loss and a discriminator loss. CycleGAN models [7] typically contain two terms: the adversarial loss, to quantify the style match between target and generated images, and a cycle consistency loss \(L_{cyc}(G,F)\), which prevents the learned mappings G and F from contradicting each other. Additional losses are also often incorporated on top of these basic terms, e.g., for regularization purposes.
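Schematically, the generator objective of such a CycleGAN then combines both terms; the sketch below shows only one translation direction, uses the least-squares adversarial variant, and the weight lambda_cyc = 10 follows the commonly used default (assumptions, not a specific published implementation).

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G, F_inv, D_target, x, lambda_cyc=10.0):
    """Adversarial style match plus cycle consistency for the mapping G,
    with F_inv as the learned inverse mapping back to the input domain."""
    fake_y = G(x)                                  # input -> target domain
    pred = D_target(fake_y)
    adv = F.mse_loss(pred, torch.ones_like(pred))  # least-squares adversarial term
    cyc = F.l1_loss(F_inv(fake_y), x)              # cycle consistency: F(G(x)) should recover x
    return adv + lambda_cyc * cyc
```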

Generation of paired images

Digital staining relies on paired images. Although some GAN-based techniques use unpaired data sets for unsupervised training, the majority of articles reviewed here still relies on paired images for supervised learning. Moreover, due to the “hallucination-gap” mentioned above, we postulate that paired images are a necessity at least for a trustworthy validation and performance evaluation of a given digital staining model.

The generation of these paired input and target images is an important consideration in the practical implementation of digital staining. With the exception of earlier studies that used mathematical equations for color / style transfer [42, 74, 105,106,107,108,109,110,111,112,113] and most recent techniques that use semi-supervised or un-supervised ML models [23, 29, 32, 50, 125, 137], most approaches use paired images of input and target space for training. At the very least, a truthful validation of the output of trained digital staining models still requires paired images, even for conventional linear color translation or for modern unsupervised learning. Therefore, the process of sample preparation, staining protocol and sequence of imaging is also important for digital staining. Here, we have identified five main procedures, as displayed in Fig. 4:

  • cutting consecutive tissue sections from a block of FFPE tissue, and imaging each at a different device (e.g., one for label-free input and one for actual staining as target image).

  • the sample is first stained and then imaged consecutively by two different techniques (e.g., one for input and one for target imaging)

  • the unstained sample is first imaged for the (label-free) input domain and is then stained for the target image domain

  • the choice of input contrast and target contrast allows for spectral separation between input and target images in the same shared optical path.

  • multiplex-staining or de- and re-staining. Here, the same sample is imaged multiple times with multiple different staining techniques. A previous set of stains is chemically removed or bleached, before the next set is applied.

Fig. 4

Generation of image pairs for the training of digital staining models. A-E Schematic workflow of the five different procedures. The table shows positive features (green), neutral features (orange) and negative features (red)

The unique advantages and disadvantages of these techniques are summarized in the table in Fig. 4. Please note that these limitations are only relevant for the generation of image pairs, and, therefore, for the development/training and the verification of single digital staining models. While early studies relied mostly on working with consecutive sections (Fig. 4A), imaging of the same tissue section is actually the preferred method of choice to remove sectioning artifacts between input and target. Ideally, staining of the target contrast is performed after input imaging, although a few niche applications used a workflow where the sample was first stained (Fig. 4B). Whenever different imaging platforms are used sequentially (Fig. 4A-C), image registration is still essential (see section on Pre-processing). In contrast to that, techniques with a shared optical path can almost entirely omit the need for image registration, while also enabling digital staining of cell cultures without tissue sectioning artifacts, e.g., for continuous digital staining of processes like cell growth and cell-to-cell interaction (Fig. 4D). Multiplexing of the staining protocol by using de-staining and re-staining protocols (Fig. 4E) can maximize the number of stainings obtained from a given sample (tissue section or cell culture).

Applications

In this section, we categorized digital staining publications according to the type of sample and the field of application. As already mentioned in the section on Basic principle and key examples, there are currently two main modes of operation. On the one hand, the use of fixed tissue sections is most common, relying either on FFPE sections [20, 23, 27, 30,31,32,33,34, 36, 37, 42, 50, 59,60,61,62, 71,72,73, 75,76,77,78,79,80, 85, 86, 88, 95, 98, 101,102,103,104,105,106, 108,109,110, 112, 114,115,116, 119, 121, 123, 134, 137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154] or on frozen tissue sections [32, 107, 125, 135, 155]. The second main field is digital staining of cell cultures [21, 22, 24, 25, 39, 46, 48, 82,83,84, 87, 89, 90, 92,93,94, 111, 113, 118, 127, 136, 156, 157], based on either fixed or in vitro cell samples [22, 24, 89, 90]. Additional, but smaller fields of application include the use of fresh, un-preserved tissue samples [120, 158] and even some in vivo studies, e.g., on the skin [122], in ophthalmological imaging of the retina [159] or during endoscopic imaging [160].

The immediate goal of digital histopathology staining of tissue sections (sometimes termed ‘pseudo-H&E’ staining) is to facilitate a wider use of label-free optical technologies by physicians and biomedical researchers, as it allows analysis of label-free images by a pathologist in the well-known and accepted histology image domain [17, 33]. Furthermore, it could allow the use of routine image analysis protocols that have been developed for conventional stainings, e.g., for surgical margin analysis [60] or white blood cell identification in blood smears [124]. Digital staining of tissue sections is often used for pathological evaluation of disease scores [104].

Compared to virtual histology staining of tissue sections and blood smears, digital staining of cell cultures offers entirely new research applications that could otherwise not be investigated. IHC stainings are unfeasible here, especially if cells need to be kept alive. Even fluorescent antibody stains can interfere with biological processes if their molecular size is large. A common application for digital staining of cell cultures is the distinction between live and dead cells using label-free imaging and digital staining [21, 50, 84, 87, 92]. Digital staining is also frequently applied to neurons [21, 50, 83, 86], where functional information from living cells is especially interesting and where actual staining can be particularly challenging. The combination of label-free imaging and digital staining allowed the simultaneous use of an AI-based nucleus finding algorithm and an additional tracking algorithm, which was not possible with the traditional method, as fluorescent tracking can affect cell behavior [89]. This concept of combining digital staining with object detection, e.g., for nucleus detection, is also used in other applications [154]. As already mentioned, digital staining of phase microscopy images enabled the investigation of cell growth and cell division, where the translation model was trained on samples concurrently stained for the G1 and the S stage of the cell cycle [24]. The overlap of both signals could then indicate the G2 or M stage [24]. The concept was even extended to infer not only the staining procedure, but also the 3D optical sectioning capability of confocal fluorescence microscopy based on non-confocal 3D quantitative phase images [98]. The approach was generalized for different fluorescence channels, different cell types and different magnifications [98].

Trends & methods of good practice

Pillar and Ozcan identified several key advantages of virtual staining over actual staining [18], such as reduced staining time, minimal manual labor, minimal stain variability, less hazardous waste from tissue fixatives, preservatives and staining dyes, no disruption of the actual tissue sample, no restrictions on using multiple stains on a single slide, the possibility of stain-to-stain transformation, and a reduced chance of technical failures [18]. Most of these advantages also generally apply to digital staining, as it is discussed here. An important addition to the field is the use of digital staining for cell cultures (both fixed and live cells), as discussed above. In this case, digital staining offers additional advantages, such as the identification of functional stages (e.g., growth phase) without the biochemical binding of actual antibody stains, which could otherwise interfere with biological homeostasis and affect motion, growth or other aspects of relevance.

As digital staining was refined over the years, we can identify certain trends in this field (see Fig. 5 and supplementary material Figs. S1 and S2). While the first techniques mostly used linear color translation for pseudo-H&E staining, computational tools became more powerful and the applications more diverse over time. Nowadays, DL models, like the U-Net (since 2015) or GAN models (since 2016), are the most common models used for digital staining. At the same time, the range of applications has expanded significantly, from histological tissue sections to cell cultures (since 2018), multiplexed cell imaging (since 2018), and the advanced examples mentioned above, like live cell growth imaging [24] or inference of 3D confocal fluorescence from non-confocal phase images [161]. Similarly, the applied input imaging technologies have diversified over time. Label-free modalities, like phase contrast (20/105 articles), wide-field (17/105 articles) and single-photon autofluorescence (12/105 articles) microscopy, are still the most frequently used, making good use of digital staining to add computational specificity to these label-free technologies. However, digital staining is also used for stain-to-stain translations, e.g., with artificial stains (H&E with 10/105 or IF with 12/105 articles) as inputs instead of targets.

Fig. 5
figure 5

Historical trends in the field of digital staining. a The total number of publications in the field. b All reviewed articles as parallel and linked categories. The year of each publication is color-coded. An interactive version of this plot is available as Supplementary HTML file

Based on these ongoing trends and the current state-of-the-art, we suggest the following methods of good scientific practice when developing digital staining. Since each of these topics is an entire field of research in itself, we will only briefly address their relevance to the field of digital staining.

  • General feasibility: As with most ML problems, one should first consider whether the information content in the data, i.e., the input domain, is believed to be sufficient for the given task. More specifically, a good first question is whether the general information in the input images is correlated with that in the target domain. For instance, it might seem unfeasible to digitally stain cell nuclei (target) from images that only contain fluorescence of a membrane marker as input, if no additional information is used. On the other hand, it would seem quite feasible to perform DS of nuclei and membrane markers based on phase contrast images, as the contrast in optical phase is high for both nuclei and membranes. If a paired data set is already available, we suggest testing the general feasibility first by developing a model for simpler tasks, such as patch classification, object detection, or semantic segmentation (a minimal patch-classification sketch follows after this list).

  • Report uncertainties: One of the main shortcomings of the current state-of-the-art in digital staining is that fundamental uncertainties in input and target data are usually not reported. As presented in this review, however, DS is a holistic approach that involves the entire pipeline of biology, imaging and ML. Simply reporting a performance metric of the ML task is therefore insufficient, as such metrics assume a perfect ground truth. However, target data from actual biochemical staining are always affected by the specificity of the molecular marker as a fundamental uncertainty in the ‘ground truth’ (see section on Biochemical specificity of target stains). Similarly, imaging of inputs and targets is subject to the specific contrast mechanism, resolution and SNR of the respective imaging technology. Thus, we propose that digital staining should always be embedded in the context of input and target uncertainties of the actual stain, as well as the SNR of the imaging process, to allow a fair evaluation of its performance.

  • Generalizability: There is a generalization gap [162] in DL and digital pathology, which also applies to digital staining. DS can often be very hardware-specific and prone to over-fitting. Therefore, it is essential to discuss generalizability. Ideally, one should take a hardware-agnostic approach when testing a DS pipeline. It is recommended to validate and test a model across different imaging systems and/or different tissue types to evaluate whether it generalizes well. This can further be extended to evaluate generalizability across different experimenters, different staining methods or different data sets. See references [21, 22, 25, 34, 37] for good examples.

  • Choice of the right loss function: After the selection of input and target technologies (which might be predefined for a given problem), the choice of the loss function is important. Different loss functions emphasize different aspects of the image-to-image regression task, e.g., high-level structural similarity (SSIM), pixel-level fidelity (peak signal-to-noise ratio, PSNR), absolute errors in brightness and color (mean absolute error, MAE), or custom loss functions (see section on Loss functions for more details; a sketch reporting such metrics side by side follows after this list).

  • Image inspection and decision visualization: Besides the mere reporting of loss curves and performance metrics, it is indispensable to visually inspect and report the actual target and prediction images. Although the above-mentioned loss functions are suited for training and quantitative performance comparison, some can be ill-suited to detect hallucinations [128], artifacts or other localized prediction errors in the images. Moreover, decision visualization, like occlusion maps, Shapley values or perturbation studies, can inform the researcher about features that are particularly important to the learning process (a simple occlusion-map sketch follows after this list). This can not only support debugging during the development of DS, but can also offer valuable scientific feedback, e.g., to understand which parts of an input image are particularly relevant to predict a certain target.

  • Interpretability: Similar to the point above, ML models can often lack interpretability, which prevents identification of biases and can thereby reduce generalizability. Interpretability is especially relevant for digital staining to prevent hallucinations caused by overfitting. A good rule of thumb is that simpler models with fewer parameters are more interpretable. Furthermore, it is preferable to create more interpretable models from the beginning instead of relying on post-hoc explanations of complicated models [163].

  • Availability of code & data: Whenever possible, it is recommended to make code and data available to other researchers, according to the FAIR principles (Findability, Accessibility, Interoperability, and Reuse of digital assets). This enhances the trustworthiness and transparency of the general scientific procedure and further enables other researchers to test new approaches, especially since good data sets of paired images might be a bottleneck for many ML researchers. Positive examples where code and data were made public are [21, 22, 25, 34, 50, 93, 94, 104, 135, 136, 151].

  • Multi-modal imaging: Combinations of different contrast mechanisms provide richer information content and can be more robust. Examples include FPM as a natural combination of amplitude and phase [29], dark-field reflectance and autofluorescence (DRUM) [30], or complementary nonlinear techniques, like CARS, SHG and two-photon AF [31,32,33].
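As a minimal sketch of the feasibility check suggested in the first point above (file names, patch sizes and labels are hypothetical), one can test whether a very simple classifier can already predict stain-positivity from the label-free input; accuracy well above chance suggests the input domain carries stain-relevant information:

```python
# Feasibility sketch: can a simple classifier predict stain-positive patches
# from label-free inputs? File names and array shapes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

patches = np.load("input_patches.npy")   # (N, 64, 64) label-free patches
labels = np.load("stain_positive.npy")   # (N,) 1 if the paired target is stained

X = patches.reshape(len(patches), -1)    # flatten each patch to a feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Patch classification accuracy: {clf.score(X_te, y_te):.2f}")
```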
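Relating to the choice of loss and evaluation functions, a minimal sketch (assuming target and prediction are 2D float arrays scaled to [0, 1]) could report several complementary metrics side by side rather than a single number:

```python
# Minimal sketch: reporting complementary image-to-image metrics together.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def report_metrics(target, prediction):
    return {
        "SSIM": structural_similarity(target, prediction, data_range=1.0),
        "PSNR": peak_signal_noise_ratio(target, prediction, data_range=1.0),
        "MAE": float(np.mean(np.abs(target - prediction))),
    }

# Synthetic example data, for illustration only.
rng = np.random.default_rng(0)
target = rng.random((256, 256))
prediction = np.clip(target + 0.05 * rng.standard_normal(target.shape), 0, 1)
print(report_metrics(target, prediction))
```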
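Finally, as a minimal sketch of the occlusion maps mentioned under decision visualization (here, `model` is a placeholder for any trained digital staining model mapping a 2D input image to a predicted stain image), graying out one patch at a time and recording how much the prediction error grows highlights which input regions the model relies on:

```python
# Occlusion-map sketch: `model` is a hypothetical trained mapper from an
# (H, W) input image to an (H, W) predicted stain image.
import numpy as np

def occlusion_map(model, image, target, patch=32):
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    base_err = np.mean(np.abs(model(image) - target))   # unoccluded baseline
    for i in range(0, (h // patch) * patch, patch):
        for j in range(0, (w // patch) * patch, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # gray out patch
            err = np.mean(np.abs(model(occluded) - target))
            heat[i // patch, j // patch] = err - base_err      # error increase
    return heat  # large values mark regions important for the prediction
```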

Caution & pitfalls

As an antithesis to the methods of good practice discussed above, we identify certain pitfalls that can reduce the overall success of digital staining (i.e., prediction performance, robustness, validity, computation time, and required number of examples). Generally, we consider the error analysis of digital staining to be not fully developed yet. While most articles in the field do a very good job of reporting a growing collection of ML performance metrics, a holistic error analysis of the entire process is not yet part of the state-of-the-art. We postulate that error analysis for digital staining should include modeling uncertainties (ML performance metrics and training curves, but also errors from pre-processing, e.g., image registration) and biological uncertainties (binding specificity, purity of cell cultures, contamination, bleaching of fluorescence), as well as optical uncertainties (contrast, resolution, SNR). Moreover, high uncertainties in input contrast and target ‘ground truth’ will remain undetected if data from the same general population (e.g., the same target stain and same imaging system) are used for the validation of predictions and for the overall performance evaluation.

Conclusion & future perspectives for digital staining

In this final section, we present potential future trends and challenges, as well as our view on the broader impact of digital staining on clinical diagnostics, research, and biotechnology.

Generally, medical diagnostics in remote and resource-limited settings would greatly profit from a low-cost, stainless approach like digital staining. When applied to simple and robust systems, like portable white-light or phase contrast microscopes, this could enable reasonable diagnostic yield from inexpensive hardware. On the other hand, label-free technologies, like MPM, CARS, PAM, FPM and others, are growing fields of research in high-income countries, and yet digital staining is currently still under-investigated for these emerging techniques. Thus, we foresee further implementations of digital staining for these more advanced optical contrast mechanisms. Furthermore, we believe that the input and target images will rely more on multiple different stains and/or mpIF, which was shown to have a higher accuracy in diagnostic prediction as compared to single stainings [164].

In the branch of ML models that are used for digital staining, several innovations can be imagined. For one, multi-task learning is an emerging concept that is being used for ML in optical microscopy. As it was already realized for digital staining with auxiliary tasks [39], it will likely become more relevant for this field in the future. The concept of multi-domain image translation, i.e., training a single model to learn mappings among multiple domains, has already been implemented in a number of publications [22, 72, 73, 82, 102]. In a similar fashion, physics-informed learning and the integration of prior information or simulation data into the learning process are interesting concepts in modern ML research. Since these are well-suited to increase robustness and generalizability, they could address several challenges in modern digital staining, such as hallucinations. This concept has not really found its way into digital staining yet, except for publications that employed simulations to improve the training process or that modeled the microscope’s point spread function in the learning of an adversarial neural network [25]. The trend in the development of new ML models, from classical ML via DL to GAN models, is likely to continue and to produce entirely new concepts for ML models. One potential candidate is adversarial diffusion models, which are already used to translate between MRI and CT data [165], a very similar problem to digital staining in optical microscopy.

The continuation of current trends as well as potential innovations in the field will very likely result in a series of exciting new applications for digital staining. Although digital staining of histology sections has been shown to facilitate easier, faster and potentially more accurate clinical diagnosis in several research publications, full FDA approval as a medical product will remain challenging, due to extensive documentation requirements and current technical limitations. We believe that this technique is currently more interesting for cell cultures, as discussed in this review. Since this use-case does not involve sensitive patient data or critical decisions on clinical diagnosis, commercialization in the biotechnology sector might be more feasible. The technique of 3D fluorescent labeling based on phase microscopy has already been patented [118]. Following this trend, digital staining could potentially be applied to organoids, which have gained a lot of popularity in recent years.

In the long-term future, however, clinical applications of digital staining need not be limited to tissue sections but could become a vital tool for clinical in vivo imaging. Currently, DL is already used to improve image quality in endomicroscopes [166], and endoscopic or endomicroscopic implementations are already available for many imaging technologies and optical contrast mechanisms mentioned in this review. This next step of digital staining, however, needs to be accompanied by the design of more robust, generalizable and interpretable models, as discussed above. This point was also identified by Jiang et al., who mentioned the problems of variable clinical factors regarding imaging microscopes, staining techniques, patch extraction and selection, and stated that “To address this issue, designing more robust architectures can make the model less dependent on data quality in digital medicine” [14].

Supplementary materials & analysis of literature

Methods for selecting literature

Literature review

For the literature data base of this review, 108 articles published between 2005 and January 2023 were reviewed and categorized. We considered peer-reviewed articles of more than two pages in length, not including short conference abstracts or un-reviewed preprints. Articles were considered as digital staining if an image-to-image regression for microscopy images in different contrast domains was carried out. Articles that performed conventional segmentation tasks or stain normalization (i.e., transfer from one domain to the same domain) were not considered. The date of acceptance was used as time stamp; if that was not available, the date of publication was used. The full data base of all reviewed articles that were used for the figures in this paper is available as Supplementary material. The keyword search on Google Scholar contained the following keywords and possible permutations thereof: virtual fluorescence, virtual staining, in silico label, computational specificity, computational stain, digital stain, in silico stain, pseudo H&E.

Visualizations

Literature data in Figs. 2A, 5 and in supplementary Figs. S1 and S2 were handled using the pandas library and plotted in Python. Figure 5B was generated using plotly.express.parallel_categories. An interactive version of this plot is available as a Supplementary file.
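As a minimal sketch of this plotting step (the CSV file name and column names are illustrative placeholders, not the actual schema of the supplementary data base):

```python
# Sketch of the parallel-categories plot for the literature data base.
# File name and column names are illustrative placeholders.
import pandas as pd
import plotly.express as px

df = pd.read_csv("digital_staining_literature.csv")
fig = px.parallel_categories(
    df,
    dimensions=["input_modality", "ml_model", "target_stain", "sample_type"],
    color="year",  # color-code each publication by its year
)
fig.write_html("figure5B_interactive.html")
```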

Availability of data and materials

The full data base of all reviewed articles and their respective categorizations is available as Supplementary material.

Code availability

The code to plot the data will be made available upon reasonable request.

References

  1. Aresta G, Araújo T, Kwok S, Chennamsetty SS, Safwan M, Alex V, et al. BACH: Grand challenge on breast cancer histology images. Med Image Anal. 2019;56:122–39.

  2. Haft-Javaherian M, Fang L, Muse V, Schaffer CB, Nishimura N, Sabuncu MR. Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models. PLoS ONE. 2019;14(3):e0213539.

  3. Muthumbi A, Chaware A, Kim K, Zhou KC, Konda PC, Chen R, et al. Learned sensing: jointly optimized microscope hardware for accurate image classification. Biomed Opt Express. 2019;10(12):6351–69.

  4. Chen T, Chefd’Hotel C. Deep learning based automatic immune cell detection for immunohistochemistry images. In: International workshop on machine learning in medical imaging. Springer; 2014. p. 17–24.

  5. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. Springer; 2015. p. 234–241.

  6. Liu MY, Breuel T, Kautz J. Unsupervised image-to-image translation networks. Adv Neural Inf Process Syst. 2017;30. https://proceedings.neurips.cc/paper_files/paper/2017/hash/dc6a6489640ca02b0d42dabeb8e46bb7-Abstract.html.

  7. Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision. Computer Vision Foundation; 2017. p. 2223–32.

  8. Zhang Y, Tang F, Dong W, Huang H, Ma C, Lee TY, et al. Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning. arXiv preprint arXiv:2205.09542. 2022.

  9. Kingma DP, Dhariwal P. Glow: Generative flow with invertible 1x1 convolutions. Adv Neural Inf Process Syst. 2018;31.

  10. Kobyzev I, Prince SJ, Brubaker MA. Normalizing flows: An introduction and review of current methods. IEEE Trans Pattern Anal Mach Intell. 2020;43(11):3964–79.

  11. Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. Adv Neural Inf Process Syst. 2020;33:6840–51.

  12. Saharia C, Chan W, Chang H, Lee C, Ho J, Salimans T, et al. Palette: Image-to-image diffusion models. In: ACM SIGGRAPH 2022 Conference Proceedings. Association for Computing Machinery; 2022. p. 1–10.

  13. You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis. 2022;9(1):1–19.

  14. Jiang H, Zhou Y, Lin Y, Chan RC, Liu J, Chen H. Deep Learning for Computational Cytology: A Survey. arXiv preprint arXiv:2202.05126. 2022.

  15. Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, et al. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers. 2022;14(5):1199.

  16. Jo Y, Cho H, Lee SY, Choi G, Kim G, Min HS, et al. Quantitative phase imaging and artificial intelligence: a review. IEEE J Sel Top Quantum Electron. 2018;25(1):1–14.

  17. Rivenson Y, de Haan K, Wallace WD, Ozcan A. Emerging advances to transform histopathology using virtual staining. BME Front. 2020:9647163.

  18. Pillar N, Ozcan A. Virtual tissue staining in pathology using machine learning. Expert Rev Mol Diagn. 2022;22(11):987–9. https://doi.org/10.1080/14737159.2022.2153040.

  19. Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. Light Sci Appl. 2023;12(1):57.

  20. Rivenson Y, Wang HD, Wei ZS, de Haan K, Zhang YB, Wu YC, et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng. 2019;3(6):466–77. https://doi.org/10.1038/s41551-019-0362-y.

  21. Christiansen EM, Yang SJ, Ando DM, Javaherian A, Skibinski G, Lipnick S, et al. In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images. Cell. 2018;173(3):792. https://doi.org/10.1016/j.cell.2018.03.040.

  22. Ounkomol C, Seshamani S, Maleckar MM, Collman F, Johnson GR. Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy. Nat Methods. 2018;15(11):917. https://doi.org/10.1038/s41592-018-0111-2.

  23. Gupta L, Klinkhammer BM, Boor P, Merhof D, Gadermayr M. GAN-Based Image Enrichment in Digital Pathology Boosts Segmentation Accuracy. Med Image Comput Comput Assist Interv Miccai 2019, Pt I. 2019;11764:631–639. https://doi.org/10.1007/978-3-030-32239-7_70.

  24. He YR, He S, Kandel ME, Lee YJ, Hu C, Sobh N, et al. Cell cycle stage classification using phase imaging with computational specificity. ACS Photon. 2022;9(4):1264–73.

  25. Somani A, Sekh AA, Opstad IS, Birgisdottir ÅB, Myrmel T, Ahluwalia BS, et al. Virtual labeling of mitochondria in living cells using correlative imaging and physics-guided deep learning. Biomed Opt Express. 2022;13(10):5495–516.

  26. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Adv Neural Inf Process Syst. 2014;27. https://proceedings.neurips.cc/paper_files/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html.

  27. Rana A, Yauney G, Lowe A, Shah P. Computational Histological Staining and Destaining of Prostate Core Biopsy RGB Images with Generative Adversarial Neural Networks. In: 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). 2018. p. 828–834. https://doi.org/10.1109/Icmla.2018.00133.

  28. Shaban MT, Baur C, Navab N, Albarqouni S. StainGAN: Stain style transfer for digital histological images. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE; 2019. p. 953–956.

  29. Wang RH, Song PM, Jiang SW, Yan CG, Zhu JK, Guo CF, et al. Virtual brightfield and fluorescence staining for Fourier ptychography via unsupervised deep learning. Opt Lett. 2020;45(19):5405–8. https://doi.org/10.1364/Ol.400244.

  30. Ye S, Zou J, Huang C, Xiang F, Wen Z, Wang N, et al. Rapid and label-free histological imaging of unprocessed surgical tissues via Dark-field Reflectance Ultraviolet Microscopy. iScience. 2022;105849.

  31. Petersen D, Mavarani L, Niedieker D, Freier E, Tannapfel A, Kotting C, et al. Virtual staining of colon cancer tissue by label-free Raman micro-spectroscopy. Analyst. 2017;142(8):1207–15. https://doi.org/10.1039/c6an02072k.

  32. Pradhan P, Meyer T, Vieth M, Stallmach A, Waldner M, Schmitt M, et al. Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning. Biomed Opt Express. 2021;12(4):2280–98. https://doi.org/10.1364/Boe.415962.

  33. Bocklitz TW, Salah FS, Vogler N, Heuke S, Chernavskaia O, Schmidt C, et al. Pseudo-HE images derived from CARS/TPEF/SHG multimodal imaging in combination with Raman-spectroscopy as a pathological screening tool. BMC Cancer. 2016;16. https://doi.org/10.1186/S12885-016-2520-X.

  34. de Haan K, Zhang Y, Zuckerman JE, Liu T, Sisk AE, Diaz MF, et al. Deep learning-based transformation of H&E stained tissues into special stains. Nat Commun. 2021;12(1):1–13.

  35. Opstad I. Data set: Fluorescence microscopy videos of mitochondria in H9c2 cardiomyoblasts. DataverseNO. 2023. https://doi.org/10.18710/11LLTW.

  36. Hong Y, Heo YJ, Kim B, Lee D, Ahn S, Ha SY, et al. Deep learning-based virtual cytokeratin staining of gastric carcinomas to measure tumor-stroma ratio. Sci Rep. 2021;11(1). https://doi.org/10.1038/S41598-021-98857-1.

  37. Ghahremani P, Li Y, Kaufman A, Vanguri R, Greenwald N, Angelo M, et al. Deep learning-inferred multiplex immunofluorescence for immunohistochemical image quantification. Nat Mach Intell. 2022;4(4):401–12.

  38. Park Y, Depeursinge C, Popescu G. Quantitative phase imaging in biomedicine. Nat Photonics. 2018;12(10):578–89.

  39. Tomczak A, Ilic S, Marquardt G, Engel T, Forster F, Navab N, et al. Multi-Task Multi-Domain Learning for Digital Staining and Classification of Leukocytes. IEEE Trans Med Imaging. 2021;40(10):2897–910. https://doi.org/10.1109/Tmi.2020.3046334.

  40. Zhou KC, Qian R, Dhalla AH, Farsiu S, Izatt JA. Unified k-space theory of optical coherence tomography. Adv Opt Photonics. 2021;13(2):462–514.

  41. Drexler W, Fujimoto JG, et al. Optical coherence tomography: technology and applications. vol. 2. Springer; 2015.

  42. Lin SE, Jheng DY, Hsu KY, Liu YR, Huang WH, Lee HC, et al. Rapid pseudo-H&E imaging using a fluorescence-inbuilt optical coherence microscopic imaging system. Biomed Opt Express. 2021;12(8):5139–58. https://doi.org/10.1364/Boe.431586.

  43. Mari JM, Aung T, Cheng CY, Strouthidis NG, Girard MJA. A Digital Staining Algorithm for Optical Coherence Tomography Images of the Optic Nerve Head. Transl Vis Sci Technol. 2017;6(1). https://doi.org/10.1167/Tvst.6.1.8.

  44. Wang Z, Millet L, Mir M, Ding H, Unarunotai S, Rogers J, et al. Spatial light interference microscopy (SLIM). Opt Express. 2011;19(2):1016–26.

  45. Nguyen TH, Kandel ME, Rubessa M, Wheeler MB, Popescu G. Gradient light interference microscopy for 3D imaging of unlabeled specimens. Nat Commun. 2017;8(1):1–9.

  46. Kandel ME, He YR, Lee YJ, Chen THY, Sullivan KM, Aydin O, et al. Phase Imaging with Computational Specificity (PICS) for measuring dry mass changes in sub-cellular compartments. Nat Commun. 2020;11(1):1–10.

  47. Konda PC, Loetgering L, Zhou KC, Xu S, Harvey AR, Horstmeyer R. Fourier ptychography: current applications and future promises. Opt Express. 2020;28(7):9603–30.

  48. Cooke CL, Kong F, Chaware A, Zhou KC, Kim K, Xu R, et al. Physics-enhanced machine learning for virtual fluorescence microscopy. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. Computer Vision Foundation; 2021. p. 3803–3813.

  49. Croce AC, Bottiroli G. Autofluorescence spectroscopy and imaging: a tool for biomedical research and diagnosis. Eur J Histochem EJH. 2014;58(4):2461.

  50. Li XY, Zhang GX, Qiao H, Bao F, Deng Y, Wu JM, et al. Unsupervised content-preserving transformation for optical microscopy. Light-Sci Appl. 2021;10(1). https://doi.org/10.1038/S41377-021-00484-Y.

  51. Zipfel WR, Williams RM, Webb WW. Nonlinear Magic: Multiphoton Microscopy in the Biosciences. Nat Biotechnol. 2003;21(11):1368–76. https://doi.org/10.1038/nbt899.

  52. Lemire S, Thoma OM, Kreiss L, Völkl S, Friedrich O, Neurath MF, et al. Natural NADH and FAD Autofluorescence as Label-Free Biomarkers for Discriminating Subtypes and Functional States of Immune Cells. Int J Mol Sci. 2022;23(4):2338.

  53. Gehlsen U, Szaszak M, Gebert A, Koop N, Huettmann G, Steven P. Non-Invasive Multi-Dimensional Two-Photon Microscopy enables optical fingerprinting (TPOF) of immune cells. J Biophotonics. 2015;8(6):466–79.

  54. Schürmann S, Weber C, Fink RH, Vogel M. Myosin Rods are a Source of Second Harmonic Generation Signals in Skeletal Muscle. In: Proceedings Volume 6442, Multiphoton Microscopy in the Biomedical Sciences VII; 2007. p. 64421U. https://doi.org/10.1117/12.700917.

  55. Heuke S, Vogler N, Meyer T, Akimov D, Kluschke F, Röwert-Huber HJ, et al. Multimodal Mapping of Human Skin. Br J Dermatol. 2013;169(4):794–803.

  56. Rosencwaig A. Photoacoustics and Photoacoustic Spectroscopy. vol. 57. Wiley; 1980. ISBN 10: 0894644505.

  57. Xu M, Wang LV. Photoacoustic Imaging in Biomedicine. Rev Sci Instrum. 2006;77(4):041101.

  58. Yao J, Wang LV. Photoacoustic microscopy. Laser Photonics Rev. 2013;7(5):758–78.

  59. Kang L, Li XF, Zhang Y, Wong TTW. Deep learning enables ultraviolet photoacoustic microscopy based histological imaging with near real-time virtual staining. Photoacoustics. 2022;25. https://doi.org/10.1016/J.Pacs.2021.100308.

  60. Li X, Kang L, Lo CT, Tsang VT, Wong TT. High-Speed Ultraviolet Photoacoustic Microscopy for Histological Imaging with Virtual-Staining assisted by Deep Learning. J Vis Exp (JoVE). 2022;(182).

  61. Boktor M, Ecclestone B, Pekar V, Dinakaran D, Mackey JR, Fieguth P, et al. Deep-Learning-Based Virtual H&E Staining Using Total-Absorption Photoacoustic Remote Sensing (TA-PARS). In: Sci Rep. 2022;12:10296. https://doi.org/10.1038/s41598-022-14042-y.

  62. Boktor M, Ecclestone BR, Pekar V, Dinakaran D, Mackey JR, Fieguth P, et al. Virtual histological staining of label-free total absorption photoacoustic remote sensing (TA-PARS). Sci Rep. 2022;12(1):1–12.

  63. Karita M, Tada M, Okita K, Kodama T. Endoscopic Therapy for Early Colon Cancer: the Strip Biopsy Resection Technique. Gastrointest Endosc. 1991;37(2):128–32.

  64. Stefanchik D. Endoscopic Tissue Resection Device. Google Patents; 2010. US Patent 7,780,691.

  65. Akiyama M, Ota M, Nakajima H, Yamagata K, Munakata A. Endoscopic Mucosal Resection of Gastric Neoplasms Using a Ligating Device. Gastrointest Endosc. 1997;45(2):182–6.

  66. Ramos-Vara J, Miller M. When tissue antigens and antibodies get along: revisiting the technical aspects of immunohistochemistry–the red, brown, and blue technique. Vet Pathol. 2014;51(1):42–87.

  67. Wright DK, Manos MM. Sample Preparation from Paraffin-Embedded Tissues. PCR Protocol Guide Methods Appl. 1990;19:153–9.

  68. Mager S, Oomen MH, Morente MM, Ratcliffe C, Knox K, Kerr DJ, et al. Standard Operating Procedure for the Collection of Fresh Frozen Tissue Samples. Eur J Cancer. 2007;43(5):828–34.

  69. Cardiff RD, Miller CH, Munn RJ. Manual Hematoxylin and Eosin Staining of Mouse Tissue Sections. Cold Spring Harb Protoc. 2014;2014(6):073411.

  70. Whittaker P, Kloner R, Boughner D, Pickering J. Quantitative Assessment of Myocardial Collagen with Picrosirius Red Staining and Circularly Polarized Light. Basic Res Cardiol. 1994;89(5):397–410.

  71. Rivenson Y, Liu TR, Wei ZS, Zhang Y, de Haan K, Ozcan A. PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning. Light-Sci Appl. 2019;8. https://doi.org/10.1038/s41377-019-0129-y.

  72. Zhang YJ, de Haan K, Rivenson Y, Li JX, Delis A, Ozcan A. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue. Light-Sci Appl. 2020;9(1). https://doi.org/10.1038/s41377-020-0315-y.

  73. Zhang Y, de Haan K, Li J, Rivenson Y, Ozcan A. Neural network-based multiplexed and micro-structured virtual staining of unlabeled tissue. In: Conference on Lasers and Electro-Optics, Technical Digest Series (Optica Publishing Group, 2022), paper ATh2I.2.

  74. Bautista PA, Yagi Y. Digital simulation of staining in histopathology multispectral images: enhancement and linear transformation of spectral transmittance. J Biomed Opt. 2012;17(5). https://doi.org/10.1117/1.Jbo.17.5.056013.

  75. Mayerich D, Walsh MJ, Kadjacsy-Balla A, Ray PS, Hewitt SM, Bhargava R. Stain-less staining for computed histopathology. Technology. 2015;3(01):27–31.

  76. Gadermayr M, Appel V, Klinkhammer BM, Boor P, Merhof D. Which way round? A study on the performance of stain-translation for segmenting arbitrarily dyed histological images. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. Lecture Notes in Computer Science. vol. 11071. Cham: Springer; 2018. p. 165–173. https://doi.org/10.1007/978-3-030-00934-2_19.

  77. Fujitani M, Mochizuki Y, Iizuka S, Simo-Serra E, Kobayashi H, Iwamoto C, et al. Re-staining pathology images by FCNN. In: 2019 16th International Conference on Machine Vision Applications (MVA). IEEE; 2019. p. 1–6.

  78. Li D, Hui H, Zhang YQ, Tong W, Tian F, Yang X, et al. Deep Learning for Virtual Histological Staining of Bright-Field Microscopic Images of Unlabeled Carotid Artery Tissue. Mol Imaging Biol. 2020;22(5):1301–9. https://doi.org/10.1007/s11307-020-01508-6.

  79. Zhang GH, Ning B, Hui H, Yu TF, Yang X, Zhang HX, et al. Image-to-Images Translation for Multiple Virtual Histological Staining of Unlabeled Human Carotid Atherosclerotic Tissue. Mol Imaging Biol. 2022;24(1):31–41. https://doi.org/10.1007/s11307-021-01641-w.

  80. Yang X, Bai B, Zhang Y, Li Y, de Haan K, Liu T, et al. Virtual stain transfer in histology via cascaded deep neural networks. ACS Photonics. 2022;9(9):3134–43. https://doi.org/10.1021/acsphotonics.2c00932.

  81. Otto F. DAPI Staining of Fixed Cells for High-Resolution Flow Cytometry of Nuclear DNA. In: Methods in Cell Biology. vol. 33. Elsevier; 1990. p. 105–10. https://doi.org/10.1016/s0091-679x(08)60516-6

  82. Jiang Z, Li B, Tran TN, Jiang J, Liu X, Ta D. Fluo-Fluo translation based on deep learning. Chinese Opt Lett. 2022;20(3):031701.

  83. Kandel ME, Kim E, Lee YJ, Tracy G, Chung HJ, Popescu G. Multiscale Assay of Unlabeled Neurite Dynamics Using Phase Imaging with Computational Specificity. Acs Sensors. 2021;6(5):1864–74. https://doi.org/10.1021/acssensors.1c00100.

  84. Cheng SY, Fu SP, Kim YM, Song WY, Li YZ, Xue YJ, et al. Single-cell cytometry via multiplexed fluorescence prediction by label-free reflectance microscopy. Sci Adv. 2021;7(3). https://doi.org/10.1126/sciadv.abe0431.

  85. Yuan E, Matusiak M, Sirinukunwattana K, Varma S, Kidzinski L, West R. Self-Organizing Maps for Cellular In Silico Staining and Cell Substate Classification. Front Immunol. 2021;12. https://doi.org/10.3389/Fimmu.2021.765923.

  86. Guo SM, Yeh LH, Folkesson J, Ivanov IE, Krishnan AP, Keefe MG, et al. Revealing architectural order with quantitative label-free imaging and deep learning. Elife. 2020;9. https://doi.org/10.7554/eLife.55502.

  87. Liu Y, Yuan H, Wang ZY, Ji SW. Global Pixel Transformers for Virtual Staining of Microscopy Images. IEEE Trans Med Imaging. 2020;39(6):2256–66. https://doi.org/10.1109/Tmi.2020.2968504.

  88. Burlingame EA, Margolin AA, Gray JW, Chang YH. SHIFT: speedy histopathological-to-immunofluorescent translation of whole slide images using conditional generative adversarial networks. Med Imaging 2018 Digit Pathol. 2018;10581. https://doi.org/10.1117/12.2293249.

  89. Gu S, Lee RM, Benson Z, Ling C, Vitolo MI, Martin SS, et al. Label-free cell tracking enables collective motion phenotyping in epithelial monolayers. iScience. 2022;25(7):104678. https://doi.org/10.1016/j.isci.2022.104678.

  90. Ling C, Majurski M, Halter M, Stinson J, Plant A, Chalfoun J. Analyzing u-net robustness for single cell nucleus segmentation from phase contrast images. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Computer Vision Foundation; 2020. p. 966–67.

  91. Goswami N, He YCR, Deng YH, Oh C, Sobh N, Valera E, et al. Label-free SARS-CoV-2 detection and classification using phase imaging with computational specificity. Light-Sci Appl. 2021;10(1). https://doi.org/10.1038/s41377-021-00620-8.

  92. Hu CF, He SH, Lee YJ, He YC, Kong EM, Li H, et al. Live-dead assay on unlabeled cells using phase imaging with computational specificity. Nat Commun. 2022;13(1). https://doi.org/10.1038/s41467-022-28214-x.

  93. Kolln LS, Salem O, Valli J, Hansen CG, McConnell G. Label2label: training a neural network to selectively restore cellular structures in fluorescence microscopy. J Cell Sci. 2022;135(3). https://doi.org/10.1242/jcs.258994.

  94. Jo Y, Cho H, Park WS, Kim G, Ryu D, Kim YS, et al. Label-free multiplexed microtomography of endogenous subcellular dynamics using generalizable deep learning. Nat Cell Biol. 2021;23(12):1329. https://doi.org/10.1038/s41556-021-00802-x.

  95. Hermsen M, Volk V, Bräsen JH, Geijs DJ, Gwinner W, Kers J, et al. Quantitative assessment of inflammatory infiltrates in kidney transplant biopsies using multiplex tyramide signal amplification and deep learning. Lab Investig. 2021;101(8):970–82.

  96. Tondeleir D, Lambrechts A, Müller M, Jonckheere V, Doll T, Vandamme D, et al. Cells lacking β-actin are genetically reprogrammed and maintain conditional migratory capacity. Mol Cell Proteomics. 2012;11(8):255–71.

  97. Anti-MAP2 antibody Data sheet [EPR19691] ab183830. Accessed Oct 2023. https://www.abcam.com/products/primary-antibodies/map2-antibody-epr19691-ab183830.html.

  98. Chen X, Kandel ME, Shenghua H, et al. Artificial confocal microscopy for deep label-free imaging. Nat Photonics. 2022. https://doi.org/10.1038/s41566-022-01140-6.

  99. Li D, Cho YK. High specificity of widely used phospho-tau antibodies validated using a quantitative whole-cell based assay. J Neurochem. 2020;152(1):122–35.

  100. Anti-Ki67 Antibody [Ki-67] data sheet (PE) (A86642). Antibodies.com 2023. Accessed Oct 2023. https://www.antibodies.com/de/ki67-antibody-ki-67-pe-a86642.

  101. Xu ZD, Li X, Zhu XH, Chen LY, He YH, Chen YP. Effective Immunohistochemistry Pathology Microscopy Image Generation Using CycleGAN. Front Mol Biosci. 2020;7. https://doi.org/10.3389/Fmolb.2020.571180.

  102. Zhang R, Cao Y, Li Y, Liu Z, Wang J, He J, et al. MVFStain: Multiple virtual functional stain histopathology images generation based on specific domain mapping. Med Image Anal. 2022;80:102520.

  103. Liu Y, Li X, Zheng A, Zhu X, Liu S, Hu M, et al. Predict Ki-67 positive cells in H&E-stained images using deep learning independently from IHC-stained images. Front Mol Biosci. 2020;7:183.

  104. Bai B, Wang H, Li Y, de Haan K, Colonnese F, Wan Y, et al. Label-free virtual HER2 immunohistochemical staining of breast tissue using deep learning. BME Front. 2022;2022:9786242. https://doi.org/10.34133/2022/9786242.

  105. Gareau DS. Feasibility of digitally stained multimodal confocal mosaics to simulate histopathology. J Biomed Opt. 2009;14(3). https://doi.org/10.1117/1.3149853.

  106. Bini J, Spain J, Nehal K, Hazelwood V, DiMarzio C, Rajadhyaksha M. Confocal mosaicing microscopy of basal cell carcinomas ex vivo: progress in digital staining to simulate histology-like appearance. Adv Biomed Clin Diagn Syst Ix. 2011;7890. https://doi.org/10.1117/12.873601.

  107. Bini J, Spain J, Nehal K, Hazelwood V, DiMarzio C, Rajadhyaksha M. Confocal mosaicing microscopy of human skin ex vivo: spectral analysis for digital staining to simulate histology-like appearance. J Biomed Opt. 2011;16(7). https://doi.org/10.1117/1.3596742.

  108. Amrania H, Antonacci G, Chan CH, Drummond L, Otto WR, Wright NA, et al. Digistain: a digital staining instrument for histopathology. Opt Express. 2012;20(7):7290–9. https://doi.org/10.1364/Oe.20.007290.

  109. Giacomelli MG, Husvogt L, Vardeh H, Faulkner-Jones BE, Hornegger J, Connolly JL, et al. Virtual Hematoxylin and Eosin Transillumination Microscopy Using Epi-Fluorescence Imaging. PLoS ONE. 2016;11(8). https://doi.org/10.1371/journal.pone.0159337.

  110. Elfer KN, Sholl AB, Wang M, Tulman DB, Mandava H, Lee BR, et al. DRAQ5 and Eosin (‘D &E’) as an Analog to Hematoxylin and Eosin for Rapid Fluorescence Histology of Fresh Tissues. PLoS ONE. 2016;11(10). https://doi.org/10.1371/journal.pone.0165530.

  111. Fan X, Tang ZY, Healy JJ, O’Dwyer K, Hennelly BM. Label-free Rheinberg staining of cells using digital holographic microscopy and spatial light interference microscopy. Adv Opt Imaging Technol Ii. 2019;11186. https://doi.org/10.1117/12.2538670.

  112. Fereidouni F, Todd A, Li YH, Chang CW, Luong K, Rosenberg A, et al. Dual-mode emission and transmission microscopy for virtual histochemistry using hematoxylin- and eosin-stained tissue sections. Biomed Opt Express. 2019;10(12):6516–30. https://doi.org/10.1364/Boe.10.006516.

  113. Fan X, Healy JJ, O’Dwyer K, Hennelly BM. Label-free color staining of quantitative phase images. Opt Lasers Eng. 2020;129. https://doi.org/10.1016/j.optlaseng.2020.106049.

  114. Bautista PA, Abe T, Yamaguchi M, Yagi Y, Ohyama N. Digital staining of unstained pathological tissue samples through spectral transmittance classification. Opt Rev. 2005;12(1):7–14. https://doi.org/10.1007/s10043-005-0007-0.

  115. Bautista PA, Abe T, Yamaguchi M, Yagi Y, Ohyama N. Digital Staining of Pathological Images: Dye amount correction for improved classification performance. Med Imaging 2007 Comput-Aided Diagn Pts 1 2. 2007;6514. https://doi.org/10.1117/12.710446.

  116. Hanselmann M, Kothe U, Kirchner M, Renard BY, Amstalden ER, Glunde K, et al. Toward Digital Staining using Imaging Mass Spectrometry and Random Forests. J Proteome Res. 2009;8(7):3558–67. https://doi.org/10.1021/pr900253y.

  117. Hinton G, LeCun Y, Bengio Y. Deep learning. Nature. 2015;521(7553):436–44.

  118. Park Y, Park W, Jo Y, Min H, Cho H. Method and Apparatus for Generating 3D Fluorescent Label Image of Label-Free using 3D Refractive Index Tomography and Deep Learning. European Patent Application. 2021. Patent number: 11450062, Filed: March 19, 2020 Date of Patent: September 20, 2022.

  119. Stenman S, Bychkov D, Kücükel H, Linder N, Haglund C, Arola J, et al. Antibody supervised training of a deep learning based algorithm for leukocyte segmentation in papillary thyroid carcinoma. IEEE J Biomed Health Inf. 2020;25(2):422–8.

  120. Zhuge H, Summa B, Hamm J, Brown JQ. Deep learning 2D and 3D optical sectioning microscopy using cross-modality Pix2Pix cGAN image translation. Biomed Opt Express. 2021;12(12):7526–43. https://doi.org/10.1364/Boe.439894.

  121. Segerer FJ, Nekolla K, Rognoni L, Kapil A, Schick M, Angell H, et al. Novel Deep Learning Approach to Derive Cytokeratin Expression and Epithelium Segmentation from DAPI. In: Medical Imaging with Deep Learning. CoRR; 2022. https://doi.org/10.48550/arXiv.2208.08284.

  122. Li J, Garfinkel J, Zhang X, Wu D, Zhang Y, De Haan K, et al. Biopsy-free in vivo virtual histology of skin using deep learning. Light Sci Appl. 2021;10(1):1–22.

  123. Shi L, Wong IH, Lo CT, Wong TT. One-side Virtual Histological Staining Model for Complex Human Samples. In: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Ioannina, Greece; 2022. p. 1–4.https://doi.org/10.1109/BHI56158.2022.9926959.

  124. Fanous MJ, He S, Sengupta S, Tangella K, Sobh N, Anastasio MA, et al. White blood cell detection, classification and analysis using phase imaging with computational specificity (PICS). Sci Rep. 2022;12(1):1–10.

  125. Cao R, Nelson SD, Davis S, Liang Y, Luo Y, Zhang Y, et al. Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy. Nat Biomed Eng. 2023;7(2):124–34. https://doi.org/10.1038/s41551-022-00940-z.

  126. de Bel T, Hermsen M, Kers J, van der Laak J, Litjens G. Stain-Transforming Cycle-Consistent Generative Adversarial Networks for Improved Segmentation of Renal Histopathology. In: Proceedings of The 2nd International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research. 2019;102:151–163. Available from: https://proceedings.mlr.press/v102/de-bel19a.html.

  127. Teramoto A, Yamada A, Tsukamoto T, Kiriyama Y, Sakurai E, Shiogama K, et al. Mutual stain conversion between Giemsa and Papanicolaou in cytological images using cycle generative adversarial network. Heliyon. 2021;7(2):e06331.

  128. Cohen JP, Luck M, Honari S. Distribution matching losses can hallucinate features in medical image translation. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. Lecture Notes in Computer Science. vol. 11070. Cham: Springer. 2018. p. 529–36. https://doi.org/10.1007/978-3-030-00928-1_60.

  129. Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training gans. Adv Neural Inf Process Syst. 2016;29. https://proceedings.neurips.cc/paper_files/paper/2016/hash/8a3363abe792db2d8761d6403605aeb7-Abstract.html.

  130. Durugkar I, Gemp I, Mahadevan S. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673. 2016.

  131. Frogner C, Zhang C, Mobahi H, Araya M, Poggio TA. Learning with a Wasserstein loss. Adv Neural Inf Process Syst. 2015;28. https://proceedings.neurips.cc/paper/2015/hash/a9eb812238f753132652ae09963a05e9-Abstract.html.

  132. Brunet D, Vrscay ER, Wang Z. On the mathematical properties of the structural similarity index. IEEE Trans Image Process. 2011;21(4):1488–99.

  133. Wang Z, Simoncelli EP, Bovik AC. Multiscale structural similarity for image quality assessment. In: The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. Vol. 2. Pacific Grove; 2003. p. 1398-1402. https://doi.org/10.1109/ACSSC.2003.1292216.

  134. Borhani N, Bower AJ, Boppart SA, Psaltis D. Digital staining through the application of deep neural networks to multi-modal multi-photon microscopy. Biomed Opt Express. 2019;10(3):1339–50.

  135. Bayramoglu N, Kaakinen M, Eklund L, Heikkila J. Towards Virtual H&E Staining of Hyperspectral Lung Histology Images Using Conditional Generative Adversarial Networks. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017). 2017. p. 64–71. https://doi.org/10.1109/Iccvw.2017.15.

  136. Nygate YN, Levi M, Mirsky SK, Turko NA, Rubin M, Barnea I, et al. Holographic virtual staining of individual biological cells. Proc Natl Acad Sci USA. 2020;117(17):9223–31. https://doi.org/10.1073/pnas.1919569117.

  137. Trullo R, Bui QA, Tang Q, Olfati-Saber R. Image Translation Based Nuclei Segmentation for Immunohistochemistry Images. In: Mukhopadhyay A, Oksuz I, Engelhardt S, Zhu D, Yuan Y. (eds) Deep Generative Models. DGM4MICCAI 2022. Lecture Notes in Computer Science, vol 13609. Cham: Springer. https://doi.org/10.1007/978-3-031-18576-2_9.

  138. Bautista PA, Abe T, Yamaguchi M, Yagi Y, Ohyama N. Digital staining of pathological tissue specimens using spectral transmittance. Med Imaging 2005 Image Process Pt 1-3. 2005;5747:1892–1903. https://doi.org/10.1117/12.595016.

  139. Bautista PA, Abe T, Yamaguchi M, Yagi Y, Ohyama N. Digital staining for multispectral images of pathological tissue specimens based on combined classification of spectral transmittance. Comput Med Imaging Graph. 2005;29(8):649–57. https://doi.org/10.1016/j.compmedimag.2005.09.003.

  140. Bautista PA, Yagi Y. Digital Staining for Histopathology Multispectral Images by the Combined Application of Spectral Enhancement and Spectral Transformation. In: 2011 Annual International Conference of the Ieee Engineering in Medicine and Biology Society (Embc); 2011. p. 8013–8016. https://doi.org/10.1109/iembs.2011.6091976.

  141. Lotfollahi M, Daeinejad D, Berisha S, Mayerich D. Digital Staining of High-Resolution FTIR Spectroscopic Images. Appl Spectrosc. 2019;73(5):556–64. https://doi.org/10.1177/0003702818819857.

  142. Bulten W, Bándi P, Hoven J, Loo Rvd, Lotz J, Weiss N, et al. Epithelium segmentation using deep learning in H&E-stained prostate specimens with immunohistochemistry as reference standard. Sci Rep. 2019;9(1):1–10.

  143. Lotfollahi M, Berisha S, Daeinejad D, Mayerich D. Digital Staining of High-Definition Fourier Transform Infrared (FT-IR) Images Using Deep Learning. Appl Spectrosc. 2019;73(5):556–64. https://doi.org/10.1177/0003702818819857.

  144. Perez-Anker J, Malvehy J, Moreno-Ramirez D. Ex Vivo Confocal Microscopy Using Fusion Mode and Digital Staining: Changing Paradigms in Histological Diagnosis. Actas Dermo-Sifiliograficas. 2020;111(3):236–42. https://doi.org/10.1016/j.ad.2019.05.005.

  145. Schuurmann M, Stecher MM, Paasch U, Simon JC, Grunewald S. Evaluation of digital staining forex vivoconfocal laser scanning microscopy. J Eur Acad Dermatol Venereol. 2020;34(7):1496–9. https://doi.org/10.1111/jdv.16085.

  146. Mercan C, Mooij GCAM, Tellez D, Lotz J, Weiss N, van Gerven M, et al. Virtual Staining for Mitosis Detection in Breast Histopathology. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City; 2020. p. 1770–4. https://doi.org/10.1109/ISBI45749.2020.9098409.

  147. Jackson CR, Sriharan A, Vaickus LJ. A machine learning algorithm for simulating immunohistochemistry: development of SOX10 virtual IHC and evaluation on primarily melanocytic neoplasms. Mod Pathol. 2020;33(9):1638–48.

  148. Oszutowska-Mazurek D, Parafiniuk M, Mazurek P. Virtual UV Fluorescence Microscopy from Hematoxylin and Eosin Staining of Liver Images Using Deep Learning Convolutional Neural Network. Appl Sci-Basel. 2020;10(21). https://doi.org/10.3390/App10217815.

  149. Picon A, Medela A, Sánchez-Peralta LF, Cicchi R, Bilbao R, Alfieri D, et al. Autofluorescence image reconstruction and virtual staining for in-vivo optical biopsying. IEEE Access. 2021;9:32081–93.

  150. Fredman G, Christensen RL, Ortner VK, Haedersdal M. Visualization of energy-based device-induced thermal tissue alterations using bimodal ex-vivo confocal microscopy with digital staining: A proof-of-concept study. Skin Res Technol. 2022;28:564–70. https://doi.org/10.1111/srt.13155.

  151. Meng XY, Li X, Wang X. A Computationally Virtual Histological Staining Method to Ovarian Cancer Tissue by Deep Generative Adversarial Networks. Comput Math Methods Med. 2021;2021. https://doi.org/10.1155/2021/4244157.

  152. Vladimirova G, Ruini C, Kapp F, Kendziora B, Ergün EZ, Bağcı IS, et al. Ex vivo confocal laser scanning microscopy: A diagnostic technique for easy real-time evaluation of benign and malignant skin tumours. J Biophoton. 2022;15(6):e202100372.

  153. Soltani S, Ojaghi A, Qiao H, Kaza N, Li X, Dai Q, et al. Prostate cancer histopathology using label-free multispectral deep-UV microscopy quantifies phenotypes of tumor aggressiveness and enables multiple diagnostic virtual stains. Sci Rep. 2022;12(1):1–17.

  154. Liu K, Li B, Wu W, May C, Chang O, Knezevich S, et al. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. In: IEEE Winter Conf Appl Comput Vis. 2023;2023:1918–1927. https://doi.org/10.1109/wacv56688.2023.00196.

  155. Ruini C, Vladimirova G, Kendziora B, Salzer S, Ergun E, Sattler E, et al. Ex-vivo fluorescence confocal microscopy with digital staining for characterizing basal cell carcinoma on frozen sections: A comparison with histology. J Biophotonics. 2021;14(8). https://doi.org/10.1002/jbio.202100094.

  156. Kaza N, Ojaghi A, Costa PC, Robles FE. Deep learning based virtual staining of label-free ultraviolet (UV) microscopy images for hematological analysis. In: Label-free Biomedical Imaging and Sensing (LBIS) 2021. vol. 11655. Proceedings of the SPIE; 2021. p. 116550C. https://doi.org/10.1117/12.2576429.

  157. Kaza N, Ojaghi A, Robles FE. Automated virtual staining, segmentation and classification of deep ultraviolet (UV) microscopy images for hematological analysis. In: Biophotonics Congress: Biomedical Optics 2022 (Translational, Microscopy, OCT, OTS, BRAIN), Technical Digest Series (Optica Publishing Group, 2022), paper MW4A.5. https://doi.org/10.1364/MICROSCOPY.2022.MW4A.5.

  158. Ortner VK, Sahu A, Cordova M, Kose K, Aleissa S, Alessi-Fox C, et al. Exploring the utility of Deep Red Anthraquinone 5 for digital staining of ex vivo confocal micrographs of optically sectioned skin. J Biophotonics. 2021;14(4). https://doi.org/10.1002/jbio.202000207.

  159. Tavakkoli A, Kamran SA, Hossain KF, Zuckerbrod SL. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci Rep. 2020;10(1):1–15.

    Article  Google Scholar 

  160. Sasajima K, Kudo SE, Inoue H, Takeuchi T, Kashida H, Hidaka E, et al. Real-time in vivo virtual histology of colorectal lesions when using the endocytoscopy system. Gastrointest Endosc. 2006;63(7):1010–7. https://doi.org/10.1016/j.gie.2006.01.021.

    Article  Google Scholar 

  161. Cetin O, Chen M, Ziegler P, Wild P, Koeppl H. Deep learning-based restaining of histopathological images. In: 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE Computer Society; 2022. p. 1467–1474.

  162. Wagner SJ, Matek C, Shetab Boushehri S, Boxberg M, Lamm L, Sadafi A, et al. Make deep learning algorithms in computational pathology more reproducible and reusable. Nat Med. 2022;28(9):1744–6.

    Article  Google Scholar 

  163. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1(5):206–15. https://doi.org/10.1038/s42256-019-0048-x.

    Article  Google Scholar 

  164. Lu S, Stein JE, Rimm DL, Wang DW, Bell JM, Johnson DB, et al. Comparison of biomarker modalities for predicting response to PD-1/PD-L1 checkpoint blockade: a systematic review and meta-analysis. JAMA Oncol. 2019;5(8):1195–204.

    Article  Google Scholar 

  165. Özbey M, Dar SU, Bedel HA, Dalmaz O, Özturk Ş, Güngör A, et al. Unsupervised Medical Image Translation with Adversarial Diffusion Models. arXiv preprint arXiv:2207.08208. 2022.

  166. Guan H, Li D, Park Hc, Li A, Yue Y, Gau YA, et al. Deep-learning two-photon fiberscopy for video-rate brain imaging in freely-behaving mice. Nat Commun. 2022;13(1):1–9.

Download references

Funding

This project has received funding from the European Union’s Horizon Europe research and innovation programme under the Marie Skłodowska-Curie Actions (grant agreement 101103200, ‘MICS’, to L.K.). K.C.Z. was supported in part by Schmidt Science Fellows, in partnership with the Rhodes Trust. K.C.L. was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI21C0977060102002), and by the Commercialization Promotion Agency for R&D Outcomes (COMPA), funded by the Ministry of Science and ICT (MSIT) (1711198540). This material is based upon work supported in part by the Air Force Office of Scientific Research under award number FA9550-21-1-0401, the National Science Foundation under Grant 2238845, and a Hartwell Foundation Individual Biomedical Researcher Award.

Author information


Contributions

LK conceptualized the work, performed the literature review, categorized all reviewed publications, generated the figures, and wrote the manuscript with input from all co-authors. XL, KL, LB and OF provided input on biochemical staining as target images and on multiplexed digital staining. AM, SX, AC and KK provided input on DL and GAN models. SJ and GZ provided input on loss functions. SX, KCL and SAL provided input on phase contrast imaging. KZ provided input on wide-field imaging. SJ, KL, MA and RH provided input on methods of good practice, pitfalls and potential trends.

Corresponding author

Correspondence to Lucas Kreiss.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors consent to the publication of the final version of this manuscript.

Competing interests

The authors declare no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Figure S1. Historical trends sorted by digital staining category: the number of publications in each category over time. Figure S2. All combinations of the different categories as a heatmap with the color-coded number of publications; an extended version of Fig. 2A.

Additional file 2:

Supplementary materials.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Kreiss, L., Jiang, S., Li, X. et al. Digital staining in optical microscopy using deep learning - a review. PhotoniX 4, 34 (2023). https://doi.org/10.1186/s43074-023-00113-4

