A virtual journey within a fluorescent sample: 3D digital refocusing of fluorescence images using deep learning

Remember that time-lapse fluorescence movie where the sample drifted out of focus during imaging, wasting a valuable experiment? There is now a deep learning-based solution.

By Aydogan Ozcan, Yichen Wu and Yair Rivenson

Fluorescence microscopy is an indispensable tool in the life sciences and biology, owing to its ability to obtain highly specific, high-contrast information from living or fixed samples. To acquire 3D fluorescence information, a specimen is usually scanned mechanically across a volume, and various imaging methods have been proposed to capture transient biological phenomena at high speed and with reduced photo-toxicity. Most of these existing approaches, however, require adding customized optical components to the microscope hardware, which increases the alignment complexity and cost of the imaging system.

Unlike fluorescence microscopy, coherent imaging systems such as holography possess a unique 3D wave propagation framework that enables digital refocusing of a snapshot image onto an arbitrary sample plane. Inspired by holography, we explored the possibility of a similar framework in fluorescence microscopy and asked whether the 3D information of a specimen can be inferred from a single 2D fluorescence image.

This exploration led us to the development of Deep-Z [1], a deep neural network-based framework that can digitally refocus a 2D fluorescence image onto user-defined 3D surfaces. In this framework, the input 2D fluorescence image is appended to a user-defined digital propagation matrix (DPM) that represents, pixel by pixel, the axial distance of a target surface from the plane of the input image. Through a one-time training process that uses experimental image data, the neural network learns to interpret the pixel values of each DPM as axial refocusing distances, and rapidly outputs a digitally refocused fluorescence image of the 3D surface defined by the corresponding DPM (Fig. 1).
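To make this input format concrete, here is a minimal PyTorch-style sketch; the single convolutional layer standing in for the trained network, the image size and the micron values are all hypothetical placeholders, not the actual Deep-Z model of [1]:

```python
import torch
import torch.nn as nn

# Placeholder standing in for the trained Deep-Z generator of [1]; the real
# model is a GAN-trained convolutional network, so this single layer only
# serves to make the input/output tensor shapes concrete.
deep_z_net = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=3, padding=1)

# A single 2D fluorescence image: (batch, channel, height, width).
input_image = torch.rand(1, 1, 512, 512)

# Uniform DPM: every pixel requests the same axial refocusing distance,
# e.g. +4 um above the plane of the input image.
dpm = torch.full_like(input_image, 4.0)

# Append the DPM to the image as a second channel and run inference.
network_input = torch.cat([input_image, dpm], dim=1)  # (1, 2, 512, 512)
refocused = deep_z_net(network_input)                 # virtually refocused image
```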

Deep-Z can be especially useful for capturing 3D transient phenomena within live organisms, while also reducing the photon dose on the sample. Using Deep-Z, we imaged the neuronal activity of C. elegans worms in 3D from a time sequence of fluorescence images acquired at a single focal plane, digitally increasing the depth-of-field by 20-fold without any additional imaging hardware or any sacrifice in resolution or imaging speed [1]. Since the axial scanning is performed virtually, Deep-Z can reduce sample photo-bleaching and is also likely to reduce photo-toxicity.
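Because each virtual plane is inferred independently from the same snapshot, a whole focal stack can be generated by simply sweeping the uniform DPM value. Continuing the hypothetical sketch above (the axial range and step size below are arbitrary):

```python
import torch
import torch.nn as nn

deep_z_net = nn.Conv2d(2, 1, kernel_size=3, padding=1)  # placeholder generator
input_image = torch.rand(1, 1, 512, 512)                # single-plane frame

# Virtually scan from -10 um to +10 um in 0.5 um steps: each uniform DPM
# refocuses the SAME snapshot onto a different virtual plane, so no
# mechanical z-scan (and no extra photon dose on the sample) is needed.
planes = []
with torch.no_grad():
    for dz in torch.arange(-10.0, 10.5, 0.5):
        dpm = torch.full_like(input_image, dz.item())
        planes.append(deep_z_net(torch.cat([input_image, dpm], dim=1)))
focal_stack = torch.cat(planes, dim=0)  # (41, 1, 512, 512) virtual z-stack
```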

A very interesting feature of Deep-Z is that it can work with spatially non-uniform DPMs, even though the network is trained using only uniform DPMs. Non-uniform DPMs can be used to virtually refocus an input image onto an arbitrary 3D surface within the sample, including, e.g., curved or tilted surfaces (see Fig. 2). At first sight this might look like another exotic mathematical feature of Deep-Z; in practice, however, non-uniform DPMs can be used to digitally correct for sample drift, tilt, curvature or other aberrations, and might therefore prove extremely useful for longitudinal imaging of live samples, digitally recovering information that would normally be lost when, for example, the sample drifts out of focus.
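In terms of implementation, a non-uniform DPM is simply a per-pixel map of target distances. For instance, a tilted target plane corresponds to a linear ramp; the sketch below follows the same conventions as above, and the specific tilt values are arbitrary:

```python
import torch

# Tilted target surface: the requested refocusing distance varies linearly
# across the field of view, from -5 um at the left edge to +5 um at the
# right edge, so each pixel is virtually refocused by a different amount.
height, width = 512, 512
ramp = torch.linspace(-5.0, 5.0, width)              # microns, varying along x
dpm_tilted = ramp.expand(height, width)[None, None]  # (1, 1, height, width)

# dpm_tilted is appended to the input image exactly as in the uniform case.
```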

Following the idea of deep learning-enabled cross-modality image transformations introduced earlier [2,3], we further used Deep-Z to perform cross-modality virtual refocusing of fluorescence images. In this case, the network was trained with gold-standard label images acquired by a different modality, e.g., a confocal microscope. The trained network then digitally refocuses wide-field microscopy images (input) onto other planes within the sample, while matching the corresponding images that would be acquired by a confocal microscope at those planes, thereby performing cross-modality virtual refocusing of the input fluorescence images.

In addition to working with the intrinsic/native point-spread-function (PSF) of a fluorescence microscope, Deep-Z can also learn the spatial features of an engineered PSF, such as the double-helix PSF reported in [1] (see Fig. 2g for examples). This capability would be highly useful for localization-based super-resolution microscopy and can potentially enable virtual refocusing over an even larger depth range and/or with better axial resolution.

Overall, Deep-Z introduces a powerful methodology for encoding desired physical parameters, such as a DPM, as inputs to deep neural networks in order to achieve new inference functionalities, and we believe that the underlying principles of Deep-Z can be broadly applied to digital refocusing of images captured by other microscopy modalities, including, e.g., brightfield microscopy and light-sheet microscopy.

References

[1] Y. Wu, Y. Rivenson, H. Wang, Y. Luo, E. Ben-David, L. A. Bentolila, C. Pritz, and A. Ozcan, “Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning,” Nature Methods (2019). https://www.nature.com/articles/s41592-019-0622-5

[2] H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nature Methods, DOI: 10.1038/s41592-018-0239-0 (2019). https://www.nature.com/articles/s41592-018-0239-0

[3] Y. Rivenson, H. Wang, Z. Wei, K. de Haan, Y. Zhang, Y. Wu, H. Günaydın, J. E. Zuckerman, T. Chong, A. E. Sisk, L. M. Westbrook, W. D. Wallace, and A. Ozcan, “Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning,” Nature Biomedical Engineering, DOI: 10.1038/s41551-019-0362-y (2019). https://www.nature.com/articles/s41551-019-0362-y

Figures

Fig. 1. Virtual refocusing of fluorescence images of a C. elegans nematode using Deep-Z. By appending a DPM to a single fluorescence image (input) and passing it through a trained Deep-Z network, refocused images at different planes can be virtually obtained. The digitally propagated images provide a very good match to the corresponding ground-truth images (mechanically scanned), acquired using a scanning fluorescence microscope. Refer to [1] for further details.


Fig. 2. (a) Deep-Z-based digital refocusing onto a user-defined 3D surface using a non-uniform DPM: measurement of a tilted fluorescent sample (300 nm beads). (b) The corresponding DPM for this tilted plane. (c) Measured raw fluorescence image; the left and right parts are out of focus in different directions due to the sample tilt. (d) The Deep-Z output rapidly brings all the regions into correct focus. (e,f) Lateral full-width-at-half-maximum (FWHM) values of the nano-beads shown in (c,d), respectively, clearly demonstrating the success of Deep-Z. (g) Deep-Z-based digital refocusing of objects imaged through a double-helix PSF. A single fluorescence image containing 300 nm fluorescent beads was captured at the reference plane (z = 0 µm) using a double-helix PSF, appended to different DPMs and passed through a trained Deep-Z network to digitally propagate the input image to a series of planes, matching the mechanically scanned images acquired at the corresponding planes. Refer to [1] and its Supplementary Information for further details.
