When I was first introduced to the concept of resolution, the exposition was quite basic and straightforward. Due to the wave nature of light, the image of an imaginary infinitely sharp point source will appear blurred on the camera. The resolution is then defined as the smallest distance at which two points can still be distinguished as two entities by an observer.
While very intuitive and broad, this definition is actually quite loose in practice. The ambiguity is reflected in the fact that, for a given optical system at a given wavelength λ and numerical aperture (NA), two distinct values for the resolution coexist, proposed by Ernst Abbe in 1873 and Lord Rayleigh in 1879. The so-called Rayleigh criterion, 0.61 λ/NA, uses the position of the first zero of the point spread function. In contrast, Abbe realized that if one places a grating in the object plane, an incident beam of light will be diffracted in multiple directions, all equally spaced by an angle proportional to the spatial frequency of the grating. The resolving power of a microscope is then given by the highest diffraction order that can be collected by the objective lens and made to interfere at the image plane. The resolution of a coherent imaging system is therefore λ/(NA+NA_ill), where NA_ill is the numerical aperture of the illumination. In the case of fluorescence microscopy, the incoherent nature of the source simplifies the formula to 0.5 λ/NA. While providing an indication of the maximum theoretical resolution of a wide-field microscope, these formulae overlook the notion of signal-to-noise ratio, as well as the fact that fluorescence light is not quasi-monochromatic: the real point spread function (PSF) is the sum of many monochromatic PSFs with varying intensities. In short, the resolution depends on many factors, and this is without mentioning super-resolution, where the photo-physical properties of the fluorophore sometimes become even more important than the optical system itself.
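As a quick numerical illustration, here is how the two criteria compare; the wavelength and NA below are illustrative values for a typical high-NA fluorescence setup, not taken from any specific system:

```python
# Comparing the Rayleigh and Abbe resolution criteria.
# Example values (assumptions, not from a specific instrument):
wavelength = 520e-9  # emission wavelength in meters
na = 1.4             # numerical aperture of the objective

rayleigh = 0.61 * wavelength / na        # first zero of the Airy pattern
abbe_incoherent = 0.5 * wavelength / na  # incoherent (fluorescence) limit

print(f"Rayleigh criterion:      {rayleigh * 1e9:.1f} nm")
print(f"Abbe limit (incoherent): {abbe_incoherent * 1e9:.1f} nm")
```

The two criteria differ by more than 20% for the same optics, which is the ambiguity the text describes.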
One of the most successful attempts to extract the resolution directly from the measured image was independently proposed by Van Heel and Saxton in 1982, in the context of electron microscopy, where the image resolution is a function of the kinetic energy of the probing electrons. The principle is very simple: take two images of the object of interest. The two images should show the same content, but the noise will be different. Correlate the two images in several frequency bands, then find the frequency at which the correlation drops below a certain threshold, since a low correlation value means that the frequency band contains more noise than signal. This approach has the merit of integrating the notion of signal-to-noise ratio, but it has two major flaws. First, it requires two images, which sounds easy but can be very challenging, as great care needs to be taken not to introduce unwanted correlations that might bias the estimate. Second, the method requires a threshold to be able to predict a resolution. Several thresholds have been proposed, and they will all predict a different resolution, similar in a certain sense to the formulae of Abbe and Rayleigh.
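The steps above can be sketched as a minimal Fourier ring correlation on a synthetic image pair. This is only an illustration of the principle, not the original Van Heel–Saxton implementation: the ring binning, the synthetic object and the noise levels are all assumptions chosen for the demo.

```python
import numpy as np

def fourier_ring_correlation(img1, img2, n_rings=32):
    """Correlate two noisy images of the same object in concentric
    Fourier rings (illustrative sketch, equal-width ring binning)."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2)
    r_max = min(ny, nx) // 2
    frc = np.zeros(n_rings)
    for i in range(n_rings):
        ring = (r >= i * r_max / n_rings) & (r < (i + 1) * r_max / n_rings)
        num = np.abs(np.sum(f1[ring] * np.conj(f2[ring])))
        den = np.sqrt(np.sum(np.abs(f1[ring])**2) * np.sum(np.abs(f2[ring])**2))
        frc[i] = num / den if den > 0 else 0.0
    return frc

# Two noisy realizations of the same synthetic striped object:
rng = np.random.default_rng(0)
obj = np.zeros((128, 128))
obj[32:96:8, 32:96] = 100.0
img_a = obj + rng.normal(0, 5, obj.shape)
img_b = obj + rng.normal(0, 5, obj.shape)
curve = fourier_ring_correlation(img_a, img_b)
# The correlation is high at low frequencies and drops toward the noise
# floor; the resolution is read off where the curve crosses a chosen
# threshold -- which is exactly the threshold ambiguity discussed above.
```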
Another option for estimating the resolution consists in manually selecting a small structure and showing, through a few line profiles, that it can be resolved. This has the inconvenience of being tedious and biased in the structure selection, but it also highlights the fact that the information we are looking for is indeed present in a single image.
How to distinguish signal from noise?
The idea for the method presented along with this blog came while reading a paper about image coregistration, where the authors insisted on the advantages of phase correlation over standard correlation. And that is the key: the Fourier normalization required for the phase correlation has a completely different effect on the noise and the signal, which is exactly what we need to be able to tell them apart. The Fourier normalization equalizes the contribution of all Fourier coefficients and, by gradually removing the high frequencies, we can see how the transformed image decorrelates (hence the name) with the original image and how a local maximum emerges. One of the most striking facts was the smoothness of the generated curve. The same evening, a prototype algorithm was written and simulations were running to better understand the underlying mathematics and evaluate the potential of the method. The simulations showed that the estimate corresponds to about 110% of the Abbe limit, which makes perfect sense since the Abbe limit is the noise-free theoretical limit.
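To make the idea concrete, here is a toy sketch under my own assumptions, not the released algorithm: correlate the spectrum of a single image with its phase-only (Fourier-normalized) copy under a low-pass mask of increasing radius, and look for the peak of the resulting curve. The mask spacing, the synthetic test image and the simplified correlation metric are all illustrative.

```python
import numpy as np

def decorrelation_curve(img, n_radii=50):
    """Toy sketch of the decorrelation idea: since f * conj(f/|f|) = |f|,
    the masked cross-term between the spectrum and its phase-only copy
    reduces to a sum of |f| over the mask."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    absf = np.abs(f)
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2, x - nx / 2) / (min(ny, nx) / 2)  # 1.0 = Nyquist
    total_power = np.sum(absf ** 2)
    radii = np.linspace(0.05, 1.0, n_radii)
    d = np.zeros(n_radii)
    for i, rad in enumerate(radii):
        mask = r < rad
        n = mask.sum()
        d[i] = absf[mask].sum() / np.sqrt(total_power * n) if n else 0.0
    return radii, d

# Synthetic test: band-limited "image" (cutoff at 40% of Nyquist) plus noise.
rng = np.random.default_rng(1)
ny = nx = 128
spec = np.fft.fftshift(np.fft.fft2(rng.normal(size=(ny, nx))))
y, x = np.indices((ny, nx))
rr = np.hypot(y - ny / 2, x - nx / 2) / (ny / 2)
signal = np.real(np.fft.ifft2(np.fft.ifftshift(spec * (rr < 0.4))))
img = 50 * signal / signal.std() + rng.normal(0, 5, (ny, nx))
radii, d = decorrelation_curve(img)
peak = radii[np.argmax(d)]  # local maximum -> estimated cutoff frequency
```

The curve rises while the mask still adds signal-carrying frequencies and falls once it only adds noise, so its local maximum tracks the cutoff frequency of the image, all from a single acquisition and without a threshold.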
I also take this opportunity to thank again all the researchers who kindly shared their data with us, an important step in demonstrating the broad applicability of the algorithm. The development of the released tools will continue, with processing speed improvements and sectorial, local and of course axial resolution on the menu. If enough people use the algorithm and report the estimated resolution for various experimental conditions, it may become a simple and useful tool for comparing experiments, in addition to being an optimization tool.
To conclude this blog, I believe it is important to remember that the resolution estimated by the method is not an absolute magic number. It is one of many performance indicators and should only be considered along with control experiments and a full understanding of each step, from sample preparation to image post-processing, required to get the final image.
Written by Adrien Descloux