Detection of surface defects and subsurface defects of polished optics with multisensor image fusion

Abstract

Surface defects (SDs) and subsurface defects (SSDs) are key factors that decrease the laser damage threshold of optics. Because they are spatially stacked, accurately detecting and distinguishing them is a major challenge. Herein, a detection method for SDs and SSDs based on multisensor image fusion is proposed. The optics is illuminated by a laser under dark-field conditions, and the defects are excited to generate scattering and fluorescence light, which is received by two image sensors in a wide-field microscope. With modified algorithms for image registration and feature-level fusion, different types of defects are identified and extracted from the scattering and fluorescence images. Experiments show that the two imaging modes can be realized simultaneously by multisensor imaging, and HF etching verifies that SDs and SSDs of polished optics can be accurately distinguished. This method provides a more targeted reference for the evaluation and control of defects in optics, and exhibits potential for application in material surface research.

Introduction

Cutting, grinding and polishing are commonly used contact processing methods for optics, and they can cause surface defects (SDs) such as pits, scratches, and micro-cracks. These defects are not only distributed on the surface, but may extend from several micrometers to hundreds of micrometers below the surface, becoming subsurface defects (SSDs). Such defects degrade the mechanical stability of optics in extreme environments, such as space telescopes [1] and deep-ultraviolet detectors [2]. In high-power solid-state laser devices [3, 4], even small defects can decrease the laser damage threshold, which is one of the key factors limiting the increase of energy density [5, 6]. It is therefore necessary to detect and evaluate SDs and SSDs accurately, and to reduce them during processing or subsequent processing.

The detection of SDs is relatively mature at present. For example, methods using atomic force microscopy or electron microscopy offer high resolution and are suitable for detection in small sampling areas. Efficient and fast detection of SDs on large optics can be realized by a digital evaluation system with a wide-field scattering microscope [7, 8]. But SSDs are buried under the surface and are difficult to detect directly with classic SD detection systems, so specific detection methods for SSDs have been developed. Destructive methods, such as acid etching [9] and dimpling [10], expose SSDs through physical or chemical means, but cause irreversible damage to the optics. Non-destructive methods based on optical imaging include total internal reflection microscopy (TIRM) [11], confocal laser scanning microscopy (CLSM) [12, 13], and optical coherence tomography (OCT) [14]. They illuminate the detection area in different ways, receive the optical signal modulated by SSDs, and do not cause damage. CLSM with fluorescence imaging has been widely used for SSD detection in recent years [15, 16]. The optics are doped with tiny amounts of fluorescent materials during grinding and polishing, which can generate fluorescence under laser excitation. These materials may come from the cooling fluid used in polishing [15], or may be artificially doped quantum dots [17]. They are buried in the pits, scratches and other mechanical damage in the subsurface layer. It is worth noting that stronger fluorescence can be excited when quantum dots are used as the fluorescent material, but quantum dots are also contaminants for precision optics and are difficult to remove. In addition, studies have shown that SSDs with fluorescence characteristics are closely related to the laser damage of optics [18, 19].

The surface of polished optics is relatively smooth, and most defects are removed, but some residues remain, randomly distributed on the surface and in the subsurface. Since SDs and SSDs may be stacked in space, quickly detecting and accurately distinguishing them by non-destructive means is a major challenge. CLSM with fluorescence imaging offers high resolution and three-dimensional (3D) reconstruction, but the system is complex and its field of view is quite small, so it is usually used for small-range detection of hundreds of micrometers. Large-range detection can be realized by wide-field scattering imaging, but SDs and SSDs then cannot be distinguished. Herein, a multisensor image fusion detection method is proposed, which combines wide-field scattering and fluorescence imaging. The sample is illuminated by a laser, and a microscope system is placed perpendicular to the sample. Scattering and fluorescence light are split by the system and received by two image sensors at the same time. The multisensor images are processed by spatial registration and feature-level fusion, realizing the identification and extraction of SDs and SSDs.

Results and discussion

Multisensor imaging system

Polishing is the last processing step for most optics, so SDs and SSDs are greatly affected by it. The surface and subsurface structure of a polished optic is shown in Fig. 1a. From top to bottom, there are the redeposition layer (also called the hydrolyzed layer, Beilby layer, or polishing layer), the subsurface layer, the deformed layer and the bulk [20]. According to the chemical action theory and the thermal surface flow theory, the redeposition layer is produced by the hydrolysis of the polishing slurry and the material surface. Contaminants such as cerium and iron remain in this layer, whose thickness is about tens to hundreds of nanometers. According to the mechanical grinding theory, polishing particles enter the redeposition layer and then slide on the material, producing polishing dots and polishing scratches. Together with a small number of cracks remaining from grinding, these constitute mechanical SSDs. Under the pulling force of polishing, contaminants are quickly buried in these SSDs, which become fully or partially covered by the redeposition layer [21, 22]. SSDs that extend below the surface and are only partially covered are called extended SSDs in this paper, while those that are fully covered are called covered SSDs.

Fig. 1
figure 1

Principle and layout of multisensor imaging system. a Surface and subsurface structure of polished optics. b Scattering imaging principle of defects. c Fluorescence imaging principle of defects. d Different types of defects characterized by scattering image and fluorescence image. e Layout of the multisensor detection system

The principle of dark-field scattering imaging for defect detection is shown in Fig. 1b. The sample surface is illuminated obliquely by the incident light at a certain angle; when there are no defects on the surface, the light is directly reflected. If there are defects, scattering light is generated and received by the microscopy system (not shown in the figure), so a dark-field image with bright defects is obtained. Since the incident light penetrates transmissive optics, covered SSDs and bulk defects [23] are also illuminated and generate scattering light, but its strength is relatively weak. Fluorescence imaging is similar to scattering imaging. As shown in Fig. 1c, the surface is illuminated by the excitation light, and fluorescence is generated by contaminants with strong light absorption. These contaminants are spread over the surface and subsurface. Those buried in SDs are removed by cleaning, while those buried in the redeposition layer and in SSDs are retained and become the main source of fluorescence. Deep mechanical SSDs provide places for large numbers of contaminants to gather, so their fluorescence is strong and concentrated. The redeposition layer is very shallow, with few and evenly distributed contaminants, so its fluorescence is weak, appearing as a uniform background.

As shown in Fig. 1d, SDs and extended SSDs are the main types of defects in the scattering image, and they are hard to distinguish from each other. Extended SSDs and covered SSDs are the main types of defects in the fluorescence image, and they are also hard to distinguish. The mechanisms of laser-induced damage differ for different types of defects, so distinguishing them is of great significance for damage research. The optical field is modulated by SDs, inducing local field enhancement. In addition, SDs cause fracture-induced subbandgap absorption and ultimately induce damage [24]. Covered SSDs provide many attachment points for contaminants (such as Fe, Cu, Ce), and the thermal absorption of these contaminants causes localized high temperature. The higher the area proportion of covered SSDs, the lower the laser damage threshold of the optics [25]. Extended SSDs exhibit both fracture-induced subbandgap absorption and thermal absorption. From the perspective of defect detection and removal, SDs and extended SSDs are easy to detect and remove in subsequent processing, while covered SSDs are not easy to find and remove. The defect information collected by one imaging method is very limited, so multisensor imaging that includes both scattering and fluorescence modes is proposed. Scattering and fluorescence light are collected by two independent sensors simultaneously to improve efficiency. Different types of defects are identified from the multisensor images, so they can be reduced in a targeted manner in subsequent processing.

The sample used in the experiment is an optical window of polished fused silica with a size of 100 × 100 mm and a thickness of 5 mm. The fluorescent materials come from the water-soluble oil coolants used during diamond grinding [15]. Since such coolants are widely used in the processing of most optics, no additional fluorescent materials are added. Before multisensor imaging, the sample is ultrasonically cleaned (with Micro-90 cleaner in this paper) to remove contaminants adhering to its surfaces. A detection system designed on the principle of multisensor imaging is shown in Fig. 1e. Two image sensors in the microscope take scattering and fluorescence images simultaneously, and a single laser serves as the light source for both imaging modes. When the sample is illuminated by an ultraviolet laser, visible fluorescence can be generated from the SSDs [19, 25]. Therefore, an ultraviolet quasi-continuous laser is used as the excitation light in the system. The Gaussian laser beam is modulated into a uniform flat-top beam by a shaper, so the energy density across the illuminated area is essentially uniform, covering the field of view of the imaging system. Reflected and transmitted light is absorbed by beam traps to reduce stray light. Scattering and fluorescence light generated by defects is received by an objective (4 ×, NA 0.13) and, after being split by a 409 nm dichroic mirror (transmission wavelength: 415-850 nm), enters separate tube lenses (f = 150 mm) and image sensors. The sensors are an ultraviolet-enhanced CCD and an electron-enhanced CCD (pixel size: 13.3 μm, resolution: 1024 × 1024). Both sensors are located at positions conjugate to the object plane, and take the scattering and fluorescence images of the same imaging area. Because the fluorescence is very weak, a watt-level 355 nm laser (short wavelength with higher photon energy) is used as the excitation source, and the highly sensitive electron-enhanced CCD is used as the fluorescence sensor, ensuring that the fluorescence can be excited and detected. The focusing and scanning control system drives the 3D movement of the sample with an XYZ stage, and adjusts the posture of the sample to keep it in focus.

Image processing includes preprocessing, registration and fusion. After being read into the computer, the scattering and fluorescence images are first preprocessed, including denoising, background homogenization, and distortion correction. The defects are thus highlighted against the background, which benefits the subsequent processing. The two images are then registered and fused to obtain images characterizing SDs, extended SSDs and covered SSDs respectively.

Image registration and fusion

Image registration is the process of matching multiple images of the same scene so that their features correspond. Images with overlapping regions may be taken at different times, under different conditions, or by different sensors. Even though the two imaging channels share the same objective lens and use identical tube lenses and image sensors, the positions of the image planes differ slightly because of the difference in detection wavelength. In addition, there are inevitably differences in the positions of the imaging devices in the optical path, especially the image sensors. These factors cause differences in the imaging range of the target.

As shown in Fig. 2a.i and 2a.ii, the imaging results of polishing scratches and polishing dots are represented by straight lines and rectangles. These defects are extended SSDs, which exist in both the scattering and fluorescence images. Due to the differences between the two imaging channels, the same defects differ in position, size and rotation angle between the two images. The unregistered and registered superimposition images are shown in Fig. 2a.iii and 2a.iv (defects in the scattering image and the fluorescence image are shown in red and green respectively). If the two images are not registered, the same defect does not overlap in the superimposition image, and in the subsequent fusion it is easy to misjudge the spatial location of such defects. Therefore, image registration becomes one of the key steps before image fusion.

Fig. 2
figure 2

Image registration and fusion. a Diagrams of image registration. i. scattering image; ii. fluorescence image; iii. unregistered superposition image; iv. registered superposition image. b Flow chart of image registration. c Flow chart of image fusion

The flow chart of image registration is shown in Fig. 2b. First, feature points are selected in the scattering and fluorescence images; then the parameters of the transformation matrix are estimated based on an affine transformation; finally, the image is resampled and interpolated to complete the registration. Once the system is set up, the multisensor imaging system no longer changes, so the affine transformation matrix is also fixed. Therefore, after the matrix has been estimated once, its parameters can be reused directly to complete image registration.
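Since the matrix is fixed once estimated, the resampling step reduces to an inverse mapping of output pixels. A minimal NumPy sketch is given below (the function name `warp_to_fixed`, the nearest-neighbour interpolation, and the toy image are our illustrative assumptions, not the system's actual implementation):

```python
import numpy as np

def warp_to_fixed(moving_img, M, out_shape):
    """Resample the moving image into the fixed image's frame.

    The model is fixed = M @ moving (homogeneous pixel coordinates),
    so each output pixel is mapped back through M^-1 and the nearest
    source pixel is copied (nearest-neighbour interpolation).
    """
    Minv = np.linalg.inv(M)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]                      # output pixel grid
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Minv @ coords                              # back to moving frame
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < moving_img.shape[1]) \
          & (sy >= 0) & (sy < moving_img.shape[0])
    out = np.zeros(out_shape, dtype=moving_img.dtype)
    out.reshape(-1)[valid] = moving_img[sy[valid], sx[valid]]
    return out

# Toy example: a single bright "defect" pixel, M = pure translation
# (tx = 4, ty = 1), so the defect moves from (row 2, col 3) to (row 3, col 7).
moving = np.zeros((6, 10), dtype=np.uint8)
moving[2, 3] = 255
M = np.array([[1.0, 0.0, 4.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
registered = warp_to_fixed(moving, M, (6, 10))
```

In practice, bilinear interpolation would normally be preferred over nearest-neighbour for gray-value fidelity; the structure of the inverse mapping is the same.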

Image fusion is a multi-level image-processing technique that uses the temporal or spatial correlation and complementarity of two (or more) images to obtain more accurate fused images. In the multisensor image fusion detection system, the images taken by the two sensors are based on different imaging principles and have different physical meanings. The information in the scattering and fluorescence images is both complementary and redundant, and the purpose of image fusion is to obtain images that characterize the different types of defects. According to the characteristics of the defect images, feature-level fusion is used in this paper; the flow chart is shown in Fig. 2c. First, the contours of all defects in the scattering and fluorescence images are extracted, and the coordinates of the contour points are recorded. Then the coordinates are used for feature matching, and the defects are divided into three types. Finally, the defects are classified and extracted, yielding images that characterize SDs, extended SSDs and covered SSDs respectively.

Imaging and identification of typical defects

The imaging results of SDs and covered SSDs are shown in Fig. 3a-3f; these are superimposition images of the scattering and fluorescence images. The red areas in Fig. 3a-3b are surface scratches. They exist only in the scattering image, indicating that they are SDs. The green areas in Fig. 3c-3d are polishing dots, and those in Fig. 3e-3f are polishing scratches. These defects are all covered SSDs, which lie under the redeposition layer, cannot generate obvious scattering light, and exist only in the fluorescence image. The sample was etched with HF acid to verify the effectiveness of multisensor imaging for SSD detection. As shown in Fig. 3g-3h, there are an SD (a shallow surface scratch) and a covered SSD (a polishing scratch) in the superposition image before etching. The shallow SD is completely removed by a 3 μm-deep etch, so it disappears under the bright-field microscope. The covered SSD is fully exposed after etching and can be directly observed in the bright-field image.

Fig. 3
figure 3

Imaging results and etching verification of SDs and covered SSDs. a, b Superposition images containing SDs (surface scratches in red areas). c, d, e, f Superposition images containing covered SSDs (polishing dots and scratches in green areas). g, h Superposition image before etching and the corresponding bright-field image after etching; an SD (shallow surface scratch) disappears after etching, and a covered SSD (polishing scratch) is exposed after etching

The imaging result of an extended SSD is shown in Fig. 4a. This defect exists in both the scattering and fluorescence images, and the overlapping area is displayed in yellow (red in the scattering image, green in the fluorescence image; the overlapping area becomes yellow after superimposition). To obtain an image that characterizes SDs, the extended SSD needs to be removed from the scattering image. The removal result after pixel-level processing is shown in Fig. 4b: the defect in the fluorescence image is first dilated, and the dilated area is then subtracted from the scattering image. As the result shows, the defect is not completely removed; a "doughnut" remains, which is physically meaningless. Although the "doughnut" can be removed by adjusting the dilation parameters, it is difficult to set parameters suitable for defects of different sizes. The removal result after feature-level image fusion is shown in Fig. 4c: the extended SSD is completely removed, and the remaining defects in the figure are SDs.

Fig. 4
figure 4

Imaging and removal results of extended SSDs. a Superposition image. b SDs image (the extended SSD is not completely removed). c SDs image (the extended SSD is completely removed)

The signal strengths of SSDs in the scattering and fluorescence images are now discussed in detail. A covered SSD at site 1 is shown in Fig. 5a, with locally enlarged scattering and fluorescence images in Fig. 5b-5c; an extended SSD at site 2 is shown in Fig. 5d, with locally enlarged images in Fig. 5e-5f. Both images were captured at the same laser power with the same exposure and gain, so the strengths of the scattering and fluorescence signals are reflected by the gray values. We select a sampling line on each of the two defects and plot their grayscale curves. In the scattering image shown in Fig. 5g, the peak gray value of the covered SSD is about 2800, very close to the background, while the peak gray value of the extended SSD reaches 65,535 (16-bit image), which is overexposed and clearly different from the background. Although covered SSDs can generate scattering signals, their strength is relatively weak, and the scattering images are dominated by extended SSDs and SDs. In the fluorescence image shown in Fig. 5h, the peak gray values of the covered SSD and the extended SSD are around 2000 and 1000 respectively, both clearly different from the background. Therefore, covered SSDs and extended SSDs dominate the fluorescence images. It is difficult to capture and distinguish all types of defects with a single imaging method, which demonstrates the advantage of multisensor image fusion.
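The grayscale curves in Fig. 5g-5h are line profiles through the defects. One simple way to extract such a profile, shown here as an illustrative NumPy sketch (the function name `line_profile` and the sampling density are our assumptions), is to sample the image at evenly spaced points along the segment:

```python
import numpy as np

def line_profile(img, p0, p1, n=100):
    """Gray values at n evenly spaced points on the segment p0 -> p1.

    p0 and p1 are (row, col) endpoints; nearest-pixel sampling is used,
    which is adequate for comparing peak values against the background.
    """
    rows = np.rint(np.linspace(p0[0], p1[0], n)).astype(int)
    cols = np.rint(np.linspace(p0[1], p1[1], n)).astype(int)
    return img[rows, cols]

# Toy example on a 5x5 gradient image.
img = np.arange(25).reshape(5, 5)
horizontal = line_profile(img, (0, 0), (0, 4), n=5)   # first row
diagonal = line_profile(img, (0, 0), (4, 4), n=5)     # main diagonal
```

The peak of such a profile, compared against the surrounding background level, gives the signal-strength contrast discussed above.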

Fig. 5
figure 5

Signal strength comparison of SSDs in scattering and fluorescence images. a Covered SSD in superposition image. b c Locally enlarged scattering and fluorescence images (contrast stretched for display). d Extended SSD in superposition image. e f Locally enlarged scattering and fluorescence images (contrast stretched for display). g Comparison of the gray values of the two defects in the scattering image (the sampling lines connect the red arrows in Fig. 5b and Fig. 5e). h Comparison of the gray values of the two defects in the fluorescence image (the sampling lines connect the green arrows in Fig. 5c and Fig. 5f)

The superimposition images taken at three sites are shown in Fig. 6a-6c. There are extended SSDs in all three images, so image fusion is needed to extract the different types of defects. In Fig. 6a, a scratch exists in both the scattering and fluorescence images; after it is removed from the scattering image, an SDs image is obtained, as shown in Fig. 6d. In Fig. 6b, another such scratch exists in both images; after it is extracted, an extended SSDs image is obtained, as shown in Fig. 6e. In Fig. 6c, two polishing dots exist in both images; after they are removed from the fluorescence image, a covered SSDs image is obtained, as shown in Fig. 6f. After the defects are classified and extracted by image fusion, the scratches and pits in the images can be identified and the sizes of the defects calculated with calibration [26], finally realizing the quantitative evaluation of the various defects.

Fig. 6
figure 6

Extraction results of three types of defects. a, b, c Superposition images of three sites. d Surface defects image (the extended SSD in Fig. 6a is removed). e Extended SSDs image (the extended SSD in Fig. 6b is retained). f Covered SSDs image (the extended SSD in Fig. 6c is removed)

Methods

This section briefly presents the algorithms for image registration and fusion, beginning with preprocessing. Whether in a scattering image or a fluorescence image, the pixels where a defect is located have a gray level different from the surrounding background, appearing as a bright spot on a dark background. However, the noise generated during image acquisition and the uneven background caused by the illumination adversely affect the extraction of defects. In a previous paper, we introduced preprocessing algorithms such as the Top-Hat transform, gray-scale conversion and median filtering [26].
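As an illustration of this preprocessing stage, the sketch below implements a white Top-Hat transform and a median filter in plain NumPy (a simplified stand-in for the cited algorithms [26]; the kernel sizes and function names are our assumptions). The Top-Hat subtracts the morphological opening, suppressing the slowly varying illumination background while keeping small bright defects:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _window_filter(img, k, reducer):
    """Apply a k x k sliding-window reducer (min/max/median), edge-padded."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k))
    return reducer(windows, axis=(-2, -1))

def median_denoise(img, k=3):
    """Median filter: removes isolated noise pixels."""
    return _window_filter(img, k, np.median)

def white_tophat(img, k=5):
    """img minus its morphological opening (erosion then dilation):
    bright features smaller than the k x k window survive, while the
    background is suppressed toward zero."""
    opened = _window_filter(_window_filter(img, k, np.min), k, np.max)
    return img - opened

# Toy example: one bright defect pixel on a flat background of 10.
img = np.full((9, 9), 10.0)
img[4, 4] = 100.0
th = white_tophat(img, k=3)
```

After such filtering, defects stand out as near-zero-background bright spots, ready for binarization and contour extraction.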

The core step of image registration, estimation of the affine transformation matrix, is introduced next. For two images of the same target to be registered, if the distance between a pair of feature points in the two images is small enough, the points are considered to correspond to the same position on the target object. This feature-based registration calculates the spatial transformation model between the images from the positional relationships of feature point pairs. The analysis and processing of the entire image is thereby reduced to a few points, which greatly reduces the amount of calculation. In the multisensor image fusion detection system, polishing dots appear as bright spots on a dark background in both the scattering and fluorescence images, and their shapes are generally regular with obvious closed-area characteristics, making them suitable choices for feature points. The fluorescence and scattering images are used as the fixed and moving images respectively, and the feature point coordinate sets of the fixed image \(\left\{\left({x}_{i},{y}_{i}\right)\right\}\) and the moving image \(\left\{\left({{x}^{\prime}}_{i},{{y}^{\prime}}_{i}\right)\right\}\) are established (i indexes the selected feature points). As shown in Fig. 7a, three polishing dots that exist in both the fluorescence and scattering images are selected as feature points. The centroid coordinates of the polishing dots are used as the feature point coordinates, as shown in Fig. 7b.

Fig. 7
figure 7

Algorithm schematic diagrams for image registration and fusion. a Diagrams of feature point selection from the two images to be registered (scattering image on the left, fluorescence image on the right). b Diagram of the affine transformation model. c Simulated images of three types of defects (scattering image on the left, fluorescence image on the right). d Diagrams of contour overlap judgment: i. coarse judgment; ii. fine judgment. e Results of image fusion: i. superposition image; ii. Type I defects (SDs); iii. Type II defects (covered SSDs); iv. Type III defects (extended SSDs)

After the feature points are determined, the spatial transformation model parameters between the fixed and moving images are calculated by affine transformation. The model of affine transformation can be expressed as:

$$\left[\begin{array}{c}x\\ y\\ 1\end{array}\right]=\left[\begin{array}{ccc}{a}_{1}& {a}_{2}& {t}_{x}\\ {a}_{3}& {a}_{4}& {t}_{y}\\ 0& 0& 1\end{array}\right]\left[\begin{array}{c}{x}^{\prime}\\ {y}^{\prime}\\ 1\end{array}\right]=\mathbf{M}\left[\begin{array}{c}{x}^{\prime}\\ {y}^{\prime}\\ 1\end{array}\right]$$
(1)

where \(\left(x,y\right)\) and (x′, y′) are the pixel coordinates of a feature point pair in the fixed image and the moving image respectively; a1, a2, a3, a4 are the transformation parameters for scale, rotation, flip and shear; and tx, ty are the translation parameters. Substituting the coordinates {(xi, yi)} and \(\left\{\left({x}_{i}^{\prime},{y}_{i}^{\prime}\right)\right\}\) into formula (1), the affine transformation matrix M can be calculated. Since there are 6 unknowns in formula (1), at least 3 pairs of non-collinear feature points are needed. The moving image is resampled and interpolated according to the affine transformation matrix, and its coordinate system is mapped to the coordinate system of the fixed image.
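The least-squares solution of formula (1) can be sketched in a few lines of NumPy (an illustrative implementation under our own naming, not the system's code). Each feature point pair contributes two linear equations in the six unknowns a1…a4, tx, ty:

```python
import numpy as np

def estimate_affine(moving_pts, fixed_pts):
    """Least-squares estimate of M in fixed = M @ moving, formula (1).

    moving_pts, fixed_pts: (n, 2) arrays of matched (x, y) coordinates;
    at least 3 non-collinear pairs are required for the 6 unknowns.
    """
    mv = np.asarray(moving_pts, dtype=float)
    fx = np.asarray(fixed_pts, dtype=float)
    n = len(mv)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = mv        # x-equation: a1*x' + a2*y' + tx = x
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = mv        # y-equation: a3*x' + a4*y' + ty = y
    A[1::2, 5] = 1.0
    b = fx.reshape(-1)       # interleaved [x1, y1, x2, y2, ...]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]],
                     [0.0, 0.0, 1.0]])

# Synthetic check: recover a known rotation + translation.
theta = 0.1
M_true = np.array([[np.cos(theta), -np.sin(theta),  5.0],
                   [np.sin(theta),  np.cos(theta), -2.0],
                   [0.0,            0.0,            1.0]])
moving = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 3.0]])
fixed = (M_true @ np.column_stack([moving, np.ones(4)]).T).T[:, :2]
M_est = estimate_affine(moving, fixed)
```

Using more than the minimum three pairs lets the least-squares fit average out centroid-localization noise in the feature points.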

The images are binarized after registration, and then processed by a modified feature-level fusion algorithm. The algorithm consists of three steps: a. contour feature extraction; b. feature matching and classification; c. defect extraction. They are introduced in detail below.

  a.

    Contour feature extraction

    In feature-level image fusion, the contour features in the image are extracted first, and only a feature space is retained, reducing memory and time consumption. Traversal searching is used to extract the contour features of the defects, with the following steps:

    i.

      Search the image from top to bottom and from left to right, take the first white pixel found as the first contour point of the first defect, and record its coordinates as (x1, y1).

    ii.

      Take the first contour point as the center and search clockwise in its 8-neighborhood, starting from (x1, y1), for the second contour point, denoted (x2, y2). The criterion for a contour point is: if all four 4-neighbors of a point are white, it is not a contour point; otherwise it is a contour point.

    iii.

      Take the second contour point as the center and repeat step ii until the search returns to (x1, y1), which means that all contour points of the first defect have been traversed; these contour points are denoted as the set F1={(x1, y1), (x2, y2), (x3, y3)…}.

    iv.

      Repeat the above steps to establish the contour point coordinate set of every defect in the image.

      The defects can be numbered while their contour point coordinates are obtained. If there are m defects in total in the fluorescence image, the contour point coordinate set of the i-th defect is denoted Fi (i=1,2,…,m); if there are n defects in total in the scattering image, the contour point coordinate set of the j-th defect is denoted Sj (j=1,2,…,n).

  b.

    Feature matching and classification

    The commonly used feature matching method is template matching: if the morphological similarity of features in two images exceeds a threshold, the two features are considered to belong to the same target. However, the defects here are mainly linear scratches and dot-like polishing dots, so there are many similar features in each image. Moreover, the scattering and fluorescence images are formed by different imaging mechanisms, and even the same defect may show different morphology in the two images. Therefore, template matching is not suitable for matching defect features, and the spatial information of the defects is used for feature matching in this paper. As shown in Fig. 7c, if the contour of a defect in the scattering image does not overlap with any contour in the fluorescence image, the defect belongs to Type I (an SD, which exists only in the scattering image). If the contour of a defect in the fluorescence image does not overlap with any contour in the scattering image, the defect belongs to Type II (a covered SSD, which exists only in the fluorescence image). If the contours of a defect overlap in the two images, the defect belongs to Type III (an extended SSD, which exists in both the scattering and fluorescence images).

    The key to feature matching is accurately determining whether contours in the two images overlap. The criterion used in this paper is: if two points on the contour of one defect are located within the area enclosed by the contour of another defect, the two defects overlap spatially. For example, take a point (xF, yF) on the first defect F1 and determine whether it lies within the irregular area enclosed by the first defect S1. As shown in Fig. 7d.i, a coarse judgment is carried out first. The maximum and minimum values of the abscissa and ordinate in S1 are denoted \({x}_{max}\), \({x}_{min}\), \({y}_{max}\), \({y}_{min}\). From these four values, a circumscribed rectangle of S1 can be constructed. If (xF, yF) is outside this rectangle, that is:

    $$\left({x}_{F}>{x}_{\mathrm{max}}\right)\bigvee \left({x}_{F}<{x}_{\mathrm{min}}\right)\bigvee \left({y}_{F}>{y}_{\mathrm{max}}\right)\bigvee \left({y}_{F}<{y}_{\mathrm{min}}\right)$$

    then (xF, yF) must be outside the irregular area of S1, and the coarse judgment is complete. Conversely, if (xF, yF) is within the circumscribed rectangle, the ray casting algorithm [27] is used for fine judgment. As shown in Fig. 7d.ii, a horizontal straight line through (xF, yF) is drawn, and the number of times the line intersects the contour of S1 is counted. If the numbers of intersection points on the left and right sides of the point are both odd, the point is inside the contour; otherwise it is outside. In Fig. 7d.ii the numbers of intersections on the left and right sides are 3 and 1 respectively, so (xF, yF) is judged to be inside the irregular area enclosed by S1.

    Implementations of the fine judgment are provided in references [27, 28]. Compared with point-by-point comparison over the connected domain, this feature-level matching traverses only the points on the contour, reducing the time required for feature matching, and the two rounds of judgment greatly reduce the number of tests. After all the points in F1 have been traversed, if two of them lie within the irregular area enclosed by S1, the defects F1 and S1 overlap. By comparing all the features in Fi and Sj, all overlapping defects can be identified and marked; these belong to Type III. After the marked defects are removed from Sj, the remaining ones belong to Type I; after the marked defects are removed from Fi, the remaining ones belong to Type II.

  3. c.

    Defects extraction

After the classification of all defects is completed, the final step of image fusion is to extract the defects and obtain fused images with accurate physical meaning. The image in Fig. 7e.i is the superposition of the scattering and fluorescence images in Fig. 7c. It retains the information captured by the two sensors to the greatest extent, with Type I, Type II and Type III defects rendered in red, green and yellow, respectively. Figures 7e.ii–7e.iv are fusion images after defect extraction. No single image covers all the information captured by the two sensors, but each characterizes one type of defect and therefore has a more precise physical meaning.
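The rendering step with the colour convention above can be sketched as follows. This is a minimal illustration, assuming each defect class has already been rasterized into a boolean mask (a hypothetical representation; the actual pipeline works from the extracted contours). Passing a single class and zeroing the others yields the per-type images of Fig. 7e.ii–7e.iv.

```python
import numpy as np

def fuse(shape, type1_mask, type2_mask, type3_mask):
    """Render classified defect masks into one RGB fusion image:
    Type I red, Type II green, Type III yellow (as in Fig. 7e.i).
    The masks are boolean arrays of the given image shape and are
    disjoint by construction of the classification."""
    img = np.zeros(shape + (3,), dtype=np.uint8)
    img[type1_mask] = (255, 0, 0)     # Type I: scattering only
    img[type2_mask] = (0, 255, 0)     # Type II: fluorescence only
    img[type3_mask] = (255, 255, 0)   # Type III: overlapped
    return img
```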

Conclusion

SDs can be detected by scattering imaging, but the detection results are disturbed by SSDs; SSDs can be detected by fluorescence imaging, but it is difficult to distinguish extended SSDs from covered SSDs. Based on the scattering and fluorescence imaging principles of polished optics, a multisensor image fusion detection method for SDs and SSDs is proposed. Two image sensors perform wide-field imaging in the two modes simultaneously, offering a large imaging range and high detection efficiency. The scattering and fluorescence images are processed by registration and fusion algorithms, and after contour extraction, feature matching and classification, three types of defects are extracted. The method provides a rich reference for the quality evaluation of optical surface processing, which helps improve the processing technology and reduce the various defects in a more targeted manner.

Availability of data and materials

The calculation and experiment data that support the findings of this study are available from the corresponding author on reasonable request.

Abbreviations

SDs:

Surface defects

SSDs:

Subsurface defects

TIRM:

Total internal reflection microscope

CLSM:

Confocal laser scanning microscope

OCT:

Optical coherence tomography

3D:

Three-dimensional

CCD:

Charge coupled device

HF:

Hydrogen fluoride

References

  1. Serjeant S, Elvis M, Tinetti G. The future of astronomy with small satellites. Nat Astron. 2020;4:1031–8.

  2. Li Y, Zheng W, Huang F. All-silicon photovoltaic detectors with deep ultraviolet selectivity. PhotoniX. 2020;1:15.

  3. Betti R, Hurricane OA. Inertial-confinement fusion with lasers. Nat Phys. 2016;12:435–48.

  4. Zhang F, Cai HB, Zhou WM, Dai ZS, Shan LQ, Xu H, et al. Enhanced energy coupling for indirect-drive fast-ignition fusion targets. Nat Phys. 2020;16:810–4.

  5. Demos SG, Staggs M, Kozlowski MR. Investigation of processes leading to damage growth in optical materials for large-aperture lasers. Appl Opt. 2002;41:3628–33.

  6. Neauport J, Lamaignere L, Bercegol H, Pilon F, Birolleau JC. Polishing-induced contamination of fused silica optics and laser induced damage density at 351 nm. Opt Express. 2005;13:10163–71.

  7. Liu D, Wang S, Cao P, Li L, Cheng Z, Gao X, et al. Dark-field microscopic image stitching method for surface defects evaluation of large fine optics. Opt Express. 2013;21:5974–87.

  8. Zhang Y, Yang Y, Li C, Wu F, Chai H, Yan K, et al. Defects evaluation system for spherical optical surfaces based on microscopic scattering dark-field imaging method. Appl Opt. 2016;55:6162.

  9. Neauport J, Ambard C, Cormont P, Darbois N, Destribats J, Luitot C, et al. Subsurface damage measurement of ground fused silica parts by HF etching techniques. Opt Express. 2009;17:20448–56.

  10. Zhou Y, Funkenbusch PD, Quesnel DJ, Golini D, Lindquist A. Effect of etching and imaging mode on the measurement of subsurface damage in microground optical glasses. J Am Ceram Soc. 1994;77:3277–80.

  11. Kozlowski MR. Application of total internal reflection microscopy for laser damage studies on fused silica. Proc SPIE - The Int Soc Opt Eng. 1998;3244:282–95.

  12. Wang C, Tian A, Wang H, Li B, Jiang Z. Optical subsurface damage evaluation using LSCT. Proc SPIE - Int Soc Opt Eng. 2009;7522:75226K-1.

  13. Liu J, Liu J, Liu C, Wang Y. 3D dark-field confocal microscopy for subsurface defects detection. Opt Lett. 2020;45:660–3.

  14. Sergeeva M, Khrenikov K, Hellmuth T, Boerret R. Sub surface damage measurements based on short coherent interferometry. J Eur Opt Soc-Rapid. 2010;5:138–138.

  15. Neauport J, Cormont P, Legros P, Ambard C, Destribats J. Imaging subsurface damage of grinded fused silica optics by confocal fluorescence microscopy. Opt Express. 2009;17:3543.

  16. Sun H, Wang S, Bai J, Zhang J, Huang J, Zhou X, et al. Confocal laser scanning and 3D reconstruction methods for the subsurface damage of polished optics. Opt Lasers Eng. 2021;136:106315.

  17. Williams WB, Mullany BA, Parker WC, Moyer PJ, Randles MH. Using quantum dots to tag subsurface damage in lapped and polished glass samples. Appl Opt. 2009;48:5155–63.

  18. Liu H, Huang J, Wang F, Zhou X, Jiang X, Wu W, et al. Photoluminescence defects on subsurface layer of fused silica and its effects on laser damage performance. Proc SPIE – Int Soc Opt Eng. 2015;9255:92553V-1.

  19. Fournier J, Neauport J, Grua P, Fargin E, Jubera V, Talaga D, et al. Green luminescence in silica glass: A possible indicator of subsurface fracture. Appl Phys Lett. 2012;100:56.

  20. Suratwala TI, Miller PE, Bude JD, Steele WA, Shen N, Monticelli MV, et al. HF-based etching processes for improving laser damage resistance of fused silica optical surfaces. J Am Ceram Soc. 2011;94:416–28.

  21. Cook LM. Chemical processes in glass polishing. J Non-Cryst Solids. 1990;120:152–71.

  22. Rodolphe C, Jérôme N, Philippe L, Daniel T, Thomas C, Philippe C, et al. Using STED and ELSM confocal microscopy for a better knowledge of fused silica polished glass interface. Opt Express. 2013;21:29769–79.

  23. Wang S, Sun H, Liu A, Zhang J, Cheng X, Huang J, et al. Automatic evaluation system for bulk defects in optics. Opt Lasers Eng. 2021;137:106380.

  24. Miller PE, Bude JD, Suratwala TI, Shen N, Laurence TA. Fracture-induced subbandgap absorption as a precursor to optical damage on fused silica surfaces. Opt Lett. 2010;35:2702–4.

  25. Liu H, Wang F, Geng F, Zhou X, Huang J, Ye X, et al. Nondestructive detection of optics subsurface defects by fluorescence image technique. Opt Precis Eng. 2020;28:50–9.

  26. Liu D, Yang YY, Wang L, Zhuo YM, Lu CH, Yang LM, et al. Microscopic scattering imaging measurement and digital evaluation system of defects for fine optical surface. Opt Commun. 2007;278:240–6.

  27. Heckbert PS, editor. Graphics Gems IV. San Diego: Academic Press; 1994.

  28. Milgram M. Does a point lie inside a polygon? J Comput Phys. 1989;84:134–44.


Acknowledgements

The authors would like to express their great appreciation to Mr. An Lu and Mr. Menghui Huang for their helps in programming.

Funding

This work was supported by the National Key Research and Development Program of China (2016YFC1400900); National Natural Science Foundation of China (NSFC) (41775023); Excellent Young Scientist Program of Zhejiang Provincial Natural Science Foundation of China (LR19D050001); Fundamental Research Funds for the Central Universities (2019FZJD011); State Key Laboratory of Modern Optical Instrumentation Innovation Program (MOI2018ZD01).

Author information

Authors and Affiliations

Authors

Contributions

Huanyu Sun: Conceptualization, Investigation, Writing original draft. Shiling Wang: Conceptualization, Methodology. Xiaobo Hu: Software, Formal analysis. Hongjie Liu: Methodology. Xiaoyan Zhou: Resources. Jin Huang: Resources. Xinglei Cheng: Investigation. Feng Sun: Software. Yubo Liu: Software. Dong Liu: Conceptualization, Methodology, Supervision, Writing—review & editing. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Dong Liu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Cite this article

Sun, H., Wang, S., Hu, X. et al. Detection of surface defects and subsurface defects of polished optics with multisensor image fusion. PhotoniX 3, 6 (2022). https://doi.org/10.1186/s43074-022-00051-7


Keywords

  • Polished optics
  • Surface defects
  • Subsurface defects
  • Multisensor
  • Image fusion