Transform image processing methods are methods that work in domains of image transforms, such as the Discrete Fourier, Discrete Cosine, Wavelet, and the like. They have proved to be very efficient in image compression, image restoration, image resampling, and geometrical transformations, and can be traced back to the early 1970s. The paper reviews these methods, with emphasis on their comparison and relationships, from the very first steps of transform image compression methods to adaptive and local adaptive filters for image restoration and up to “compressive sensing” methods that have gained popularity in recent years. References are made both to the first publications of the corresponding results and to more recent and more easily available ones. The review has a tutorial character and purpose.
It will not be an exaggeration to assert that digital image processing came into being with the introduction, in 1965 by Cooley and Tukey, of the Fast Fourier Transform (FFT) algorithm.
The second wave in this process was inspired by the introduction into communication engineering and digital image processing, in the 1970s, of the Walsh-Hadamard transform and the Haar transform [
The third large wave of activities in transforms for signal and image processing was caused by the introduction, in the 1980s, of a family of transforms that was coined the name “wavelets”.
Presently, fast transforms with FFT-type fast algorithms and wavelet transforms constitute the basic instrumentation tools of digital image processing.
The main distinctive feature of transforms that makes them so efficient in digital image processing is their energy compaction capability.
The situation is totally different for image representation in the transform domain. For orthogonal transforms that feature good energy compaction capability, the lion’s share of the total image “energy” (the sum of squared transform coefficients) is concentrated in a small fraction of transform coefficients, whose indices are, as a rule, known in advance for the given type of images or can be easily detected. It is this feature of transforms that is called their energy compaction capability.
The optimal transform with the best energy compaction capability can be defined for groups of images of the same type or for ensembles of images that are the subjects of processing. Customarily, the “band-limited” image approximation accuracy is evaluated in terms of the root mean square approximation error (RMSE) over the image ensemble. By virtue of this, the optimal transform that secures the least band-limited approximation error is defined by the Karhunen-Loève (Hotelling) transform.
However, being optimal in terms of the energy compaction capability, the Karhunen-Loève and Hotelling transforms have, generally, high computational complexity: the per-pixel number of operations required for their computation is of the order of the image size. This is why for practical needs only fast transforms are considered, which feature fast transform algorithms with computational complexity of the order of the logarithm of the image size. A register of the most relevant fast transforms, their main characteristic features, and application areas is presented in Table
Register of relevant fast transforms, their main characteristic features, and application areas.

Transform | Relevance to imaging optics | Main characteristic features | Main application areas
Discrete Fourier Transform | Represents optical Fourier Transform | Cyclic shift invariance; vulnerable to boundary effects | (i) Analysis of periodicities
Discrete Cosine Transform | Represents optical Fourier Transform | Cyclic shift invariance (with double cycle) |
Discrete Fresnel Transforms | Represent optical Fresnel Transform | Computable through DFT/DCT | Numerical reconstruction of holograms
Walsh-Hadamard Transform | No direct relevance | Binary basis functions | (i) Image compression (marginal)
Haar Transform and other Discrete Wavelet Transforms | Subband decomposition | Binary basis functions | (i) Signal/image wideband noise denoising
Selected fast transforms most relevant for digital image processing and their computational complexity (in 1D denotations; operations per signal sample).

Name | “Global”: applied to entire signal of N samples | “Local”: applied in moving window of W samples
Discrete Cosine Transform (DCT) | Real number operations | Real number operations
Discrete Fourier Transform (canonic DFT) | Complex number operations | Complex number operations
Discrete Fourier Transform (General Shifted Scaled DFT) | Complex number operations |
Discrete Fresnel Transform (canonic DFrT) | Complex number operations | n/a
Discrete Fresnel Transform (convolutional DFrT) | Complex number operations |
Walsh-Hadamard Transform (Hadamard Transform) | Addition operations only | n/a
Walsh-Hadamard Transform (Walsh Transform) | Addition operations only |
Haar Transform (the simplest discrete wavelet transform) | Addition operations only | n/a
Different fast transforms have different energy compaction capabilities for different types of images. Figure
Illustration of the energy compaction capability of the Discrete Fourier, Discrete Cosine, Walsh, and Haar transforms for a test image shown in the upper left corner. The rest of the images show band-limited approximations of the test image in the domain of the corresponding transforms for an approximation error variance equal to 5% of the image variance. Graphs on the plot in the bottom right corner present the energy contained in fractions of transform coefficients for the different transforms.
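The energy compaction effect is easy to reproduce numerically. The following sketch (ours, not taken from the experiments above; the Gaussian test signal and the 10% coefficient fraction are illustrative assumptions) measures the share of total energy carried by the 10% largest DCT coefficients of a smooth test signal and contrasts it with the share carried by the 10% largest raw samples:

```python
import numpy as np

def dct_ortho(x):
    # orthonormal DCT-II matrix; energy-preserving, so Parseval's relation holds
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    T = np.cos(np.pi * k * (m + 0.5) / n) * np.sqrt(2.0 / n)
    T[0, :] /= np.sqrt(2.0)
    return T @ x

def energy_in_top(values, fraction):
    # share of total energy carried by the given fraction of largest-magnitude entries
    e = np.sort(values ** 2)[::-1]
    kept = max(1, int(round(fraction * e.size)))
    return e[:kept].sum() / e.sum()

t = np.linspace(0.0, 1.0, 256)
x = np.exp(-((t - 0.5) / 0.2) ** 2)           # smooth test signal

frac_dct = energy_in_top(dct_ortho(x), 0.10)  # top 10% of DCT coefficients
frac_raw = energy_in_top(x, 0.10)             # top 10% of raw samples
assert frac_dct > 0.99 and frac_raw < 0.9
```

For this smooth signal, one-tenth of the DCT coefficients carry practically all of the signal energy, while the same fraction of raw samples does not.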
Experience shows that, in most applications, DCT is advantageous with respect to other transforms in terms of the energy compaction capability. This property of DCT has a simple and intuitive explanation.
The DCT of a discrete signal is essentially, to the accuracy of an unimportant exponential shift factor, the DFT of the same signal extended outside its border to double length by means of its mirrored (in the order of samples) copy [
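This relationship can be verified directly. In the sketch below (our illustration; the unnormalized DCT-II convention is an assumption), the DCT of a signal is computed as the DFT of its mirror-extended copy with the exponential shift factor removed, and checked against a direct evaluation:

```python
import numpy as np

def dct2_via_fft(x):
    """Unnormalized DCT-II computed as the DFT of the signal extended to
    double length by its mirrored copy, per the identity discussed above."""
    n = len(x)
    y = np.concatenate([x, x[::-1]])   # mirror extension, length 2N
    Y = np.fft.fft(y)
    k = np.arange(n)
    # removing the half-sample phase factor leaves (twice) the DCT-II
    return np.real(np.exp(-1j * np.pi * k / (2 * n)) * Y[:n]) / 2.0

def dct2_direct(x):
    """Reference: direct O(N^2) evaluation of the same unnormalized DCT-II."""
    n = len(x)
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    return np.cos(np.pi * k * (m + 0.5) / n) @ x

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(dct2_via_fft(x), dct2_direct(x))
```

The mirror extension removes the artificial discontinuity between signal ends, which is precisely why the DCT spectrum of typical signals decays faster than their DFT spectrum.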
It is noteworthy to mention that introducing DCT in [
In the family of orthogonal transforms, DFT and DCT occupy a special place. DFT and DCT are two discrete representations of the Integral Fourier Transform, which plays a fundamental role in signal and image processing as a mathematical model of signal transformations and of wave propagation in imaging systems [
To conclude this section, let us come back to the common feature of fast transforms: the availability, for all of these transforms, of fast algorithms of the FFT type. Initially, fast transforms were developed only for signals with a number of samples equal to an integer power of two; these are the fastest algorithms. At present, this limitation has been overcome, and fast transform algorithms exist for an arbitrary number of signal samples. Note that 2D and 3D transforms for image and video applications are built as separable; that is, they work separately in each dimension. The transform separability is the core idea of fast transform algorithms: all of them are based on representing signal and transform spectrum sample indices as multidimensional numbers, that is, numbers represented by multiple digits.
The data on the computational complexity of the fast transform algorithms are collected in Table
In some applications it is advisable to apply transforms locally in a running window rather than globally to entire image frames. For such applications, the Discrete Cosine and Discrete Fourier Transforms have an additional very useful feature: computing these transforms in a running window can be carried out recursively. Signal transform coefficients at each window position can be found by a quite simple modification of the coefficients found at the previous window position, with a per-pixel computational complexity proportional to the window size rather than to the product of the window size and its logarithm, which is the computational complexity of the fast transforms [
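The recursive update is easiest to state for the DFT: when the window slides by one sample, the previous spectrum is corrected by the difference of the incoming and outgoing samples and re-phased. A minimal 1D sketch (ours, assuming a window-local DFT definition):

```python
import numpy as np

def running_dft(x, w):
    """Recursive running-window DFT: O(W) update per window position
    instead of O(W log W) for recomputing an FFT at each position."""
    k = np.arange(w)
    twiddle = np.exp(2j * np.pi * k / w)
    spec = np.fft.fft(x[:w])            # spectrum of the first window
    out = [spec.copy()]
    for n in range(len(x) - w):
        # drop the outgoing sample, add the incoming one, re-phase
        spec = (spec - x[n] + x[n + w]) * twiddle
        out.append(spec.copy())
    return np.array(out)

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
w = 8
rec = running_dft(x, w)
direct = np.array([np.fft.fft(x[n:n + w]) for n in range(len(x) - w + 1)])
assert np.allclose(rec, direct)
```

An analogous recursive update exists for the DCT; only the re-phasing step changes.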
The rest of the paper is arranged as follows. In Section
Ideally, image digitization, that is, converting continuous signals from image sensors into digital signals, should be carried out in such a way as to provide as compact an image digital representation as possible, provided that the quality of image reproduction satisfies given requirements. Due to technical limitations, image digitization is most frequently performed in two steps: discretization, that is, converting the image sensor signal into a set of real numbers, and scalar quantization, that is, rounding off those numbers to a set of fixed quantized values [
Image discretization can, in general, be treated as obtaining coefficients of image signal expansion over a set of discretization basis functions. In order to make the set of these representation coefficients as compact as possible, one should choose discretization basis functions that secure the least number of signal expansion coefficients sufficient for image reconstruction with a given required quality. One can call this general method of signal discretization “compressive discretization”.
There are at least two examples of practical implementation of the principle of general discretization by means of measuring transform domain image coefficients:
However, by virtue of historical reasons and of technical tradition, image discretization in imaging engineering is most frequently implemented as image sampling at the nodes of a uniform rectangular sampling lattice using
The theoretical foundation of image sampling is the sampling theorem.
In reality, no continuous signal is band-limited, and the image sampling interval is defined not through specifying, in one way or another, the image bandwidth, but directly by the requirement to reproduce sufficiently well the smallest objects and the borders of larger objects present in images. The image sampling interval selected in this way
Since small objects and object borders usually occupy a relatively small fraction of the image area, vast portions of images are oversampled, that is, sampled with redundantly small sampling intervals. Hence, substantial compression of the image sampled representation is possible. It can be implemented by applying to the image primary, redundant sampled representation a discrete analog of the general compressive discretization, that is, by means of image expansion over a properly chosen set of discrete optimal basis functions and limitation of the number of expansion coefficients. This is exactly what is done in all transform methods of image compression.
Needs of image compression were the primary motivation of digital image processing. Having started in the 1950s with various predictive coding methods (a set of good representative publications can be found in [
The principle of image transform coding is illustrated by the flow diagrams sketched in Figure
Flow diagrams of image transform coding and reconstruction.
According to these diagrams, a set of image pixels is first subjected to a fast orthogonal transform. Then low-intensity transform coefficients are discarded, which substantially reduces the volume of data and is the main source of image compression. Note that discarding certain image transform coefficients means replacing images by their band-limited, in terms of the selected transform, approximations. The remaining coefficients are subjected, one by one, to optimal nonuniform scalar quantization that minimizes the average number of transform coefficient quantization levels. Finally, the quantized transform coefficients are entropy encoded to minimize the average number of bits per coefficient. For image reconstruction from the compressed bit stream, the stream must be subjected to the corresponding inverse transformations.
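A toy 1D version of the coefficient-discarding step of this coding chain can be sketched as follows (our illustration; the optimal nonuniform quantizer and the entropy coder are omitted for brevity, and the block size and number of kept coefficients are illustrative assumptions):

```python
import numpy as np

def block_dct_codec(x, block=8, keep=3):
    """Blockwise orthonormal DCT; in each block, all but the 'keep'
    largest-magnitude coefficients are discarded before reconstruction."""
    k = np.arange(block)[:, None]
    m = np.arange(block)[None, :]
    T = np.cos(np.pi * k * (m + 0.5) / block) * np.sqrt(2.0 / block)
    T[0, :] /= np.sqrt(2.0)
    y = np.empty_like(x)
    for i in range(0, len(x), block):
        c = T @ x[i:i + block]
        c[np.argsort(np.abs(c))[:-keep]] = 0.0   # discard low-intensity coefficients
        y[i:i + block] = T.T @ c                 # inverse orthonormal DCT
    return y

t = np.linspace(0.0, 1.0, 64)
x = np.sin(2 * np.pi * t) + 0.3 * t              # smooth test signal
y = block_dct_codec(x)
rmse = np.sqrt(np.mean((x - y) ** 2))
assert rmse < 0.1 * np.std(x)   # 3 of 8 coefficients per block suffice here
```

For smooth signals, keeping three of eight coefficients per block already yields a reconstruction error well below the signal variation, which is the band-limited approximation effect described above.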
In the 1970s, the main activities of researchers were aimed at inventing, in addition to the Discrete Fourier, Walsh-Hadamard, and Haar Transforms known at the time, new transforms that have guaranteed fast transform algorithms and improved energy compaction capability [
This transform invention activity gradually faded after the introduction, in a short note, of the Discrete Cosine Transform [
Images usually contain many objects, and their global spectra are a mixture of the object spectra, whereas spectra of individual image fragments or blocks are much more specific, which enables much easier separation of “important” (most intense) from “unimportant” (least intense) spectral components. This is vividly illustrated in Figure
Comparison of global and local (blockwise) image DCT spectra.
Blockwise image DCT compression proved to be so successful that, by 1992, it was put at the base of the image and video coding standards “JPEG” and “MPEG” [
Though the standard does not fix the size of blocks, blocks of 8 × 8 pixels are most commonly used.
Very soon after the image transform compression methods emerged, it was recognized that transforms represent a very useful tool for image restoration from distortions of image signals in imaging systems and for image enhancement [
Flow diagram of transform domain filtering.
For the implementation of modifications of image transform coefficients, two options are usually considered:
(i) modification of the absolute values of transform coefficients by a nonlinear transfer function; usually it is a function that compresses the dynamic range of transform coefficients, which redistributes the coefficients’ intensities in favor of less intensive coefficients and results in contrast enhancement of image small details and edges;
(ii) multiplication of transform coefficients by scalar weight coefficients; this processing is called transform domain scalar filtering.
For defining the optimal scalar filter coefficients, a Wiener-Kolmogorov [
Three modifications of the filters based on this principle are (i) the empirical Wiener filter, (ii) the signal spectrum preservation filter, and (iii) the rejecting filter.
As one can see from these equations, all these filters eliminate image spectral components that are less intensive than those of the noise, and the remaining components are corrected by the
In addition, the empirical Wiener filter modifies the image spectrum through deamplification of image spectral components according to the level of noise in them; the spectrum preservation filter modifies the magnitude of the image spectrum as well, by making it equal to the square root of the image power spectrum empirical estimate, obtained as the difference between the power spectrum of the noisy image and that of the noise. The rejecting filter does not modify the remaining, that is, not rejected, spectral components at all. In some publications, image denoising using the spectrum preservation filter is called “
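Under the usual white-noise assumption, the empirical Wiener and rejecting filters can be sketched in the DFT domain as follows (our illustration; the test signal, the noise level, and the per-coefficient noise power estimate n·σ² are assumptions for the demo, not values from the reviewed experiments):

```python
import numpy as np

def wiener_dft(y, sigma):
    """Empirical Wiener filter: deamplify each DFT coefficient
    according to the estimated share of noise in it."""
    n = len(y)
    Y = np.fft.fft(y)
    noise_power = n * sigma ** 2     # expected |noise DFT coefficient|^2
    gain = np.maximum(0.0, 1.0 - noise_power / np.maximum(np.abs(Y) ** 2, 1e-12))
    return np.real(np.fft.ifft(gain * Y))

def rejecting_dft(y, sigma):
    """Rejecting filter: zero out components below the noise level,
    leave the remaining components unmodified."""
    n = len(y)
    Y = np.fft.fft(y)
    keep = np.abs(Y) ** 2 > n * sigma ** 2
    return np.real(np.fft.ifft(np.where(keep, Y, 0.0)))

rng = np.random.default_rng(2)
t = np.arange(256)
x = np.cos(2 * np.pi * 3 * t / 256) + 0.5 * np.cos(2 * np.pi * 17 * t / 256)
sigma = 0.3
y = x + sigma * rng.standard_normal(256)
for f in (wiener_dft, rejecting_dft):
    assert np.mean((f(y, sigma) - x) ** 2) < np.mean((y - x) ** 2)
```

Both filters reduce the mean squared error for this signal with a few intensive spectral components, which is exactly the regime where global transform domain filtering is effective.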
Versions of these filters are filters that combine image denoising-deblurring and image enhancement by means of
The described filters proved to be very efficient in denoising so-called narrow-band noise.
Image cleaning from moiré noise.
Cleaning a Mars satellite image from banding noise by means of separable (row-wise and column-wise) high-pass rejecting filtering. Fiducial marks seen in the left image were extracted from the image before the filtering [
However, for image cleaning from wideband or “white” noise, the above transform domain filtering applied globally to entire image frames is not efficient. In fact, it can even worsen the image visual quality. As one can see from an illustrative example shown in Figure
Denoising of a piecewise test image using empirical Wiener filter applied globally to the entire image.
The reason for this inefficiency of the global transform domain filtering is the same as for the inefficiency of the global transform compression, which was discussed in Section
It is instructive to note that it is not an accident that the evolution of human vision came up with a similar solution. It is well known that, when viewing an image, the human eye’s optical axis permanently hops chaotically over the field of view and that human visual acuity is very nonuniform over the field of view. The field of view of a human is about 30°, and the resolving power of human vision is about 1′. However, such a relatively high resolving power is concentrated only within a small fraction of the field of view, of about 2° in size (see, for instance, [
The most straightforward way to implement local filtering is to do it in a hopping window, just as human vision does, and this is exactly the way of processing implemented in the transform image coding methods. However, “hopping window” processing, being very attractive from the computational complexity point of view, suffers from “blocking effects”, that is, artifacts at the block borders.
Thus, local adaptive linear filters work in the transform domain in a sliding window and, at each position of the window, modify, according to the type of the filter defined by (
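A minimal 1D sketch of such a local adaptive filter (ours; it uses a rejecting-type hard threshold at three noise standard deviations, an assumed setting, and keeps only the window central sample at each position):

```python
import numpy as np

def sliding_dct_denoise(y, w, sigma):
    """Sliding-window DCT-domain rejecting filter: at each window
    position, only the filtered central sample is kept as output."""
    n = len(y)
    k = np.arange(w)[:, None]
    m = np.arange(w)[None, :]
    T = np.cos(np.pi * k * (m + 0.5) / w) * np.sqrt(2.0 / w)
    T[0, :] /= np.sqrt(2.0)                  # orthonormal DCT matrix
    half = w // 2
    ypad = np.pad(y, half, mode='reflect')
    out = np.empty(n)
    thr = 3.0 * sigma                        # threshold vs. the noise level
    for i in range(n):
        c = T @ ypad[i:i + w]
        c[1:] *= np.abs(c[1:]) > thr         # reject sub-noise components, keep dc
        out[i] = (T.T @ c)[half]             # central sample of the filtered window
    return out

rng = np.random.default_rng(3)
x = np.repeat([0.0, 1.0, 0.2, 0.8], 64)      # piecewise-constant test signal
sigma = 0.1
y = x + sigma * rng.standard_normal(x.size)
assert np.mean((sliding_dct_denoise(y, 16, sigma) - x) ** 2) < np.mean((y - x) ** 2)
```

Because the local window spectra of this piecewise signal are far more compact than its global spectrum, the local filter suppresses wideband noise where a global filter would fail.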
Denoising and deblurring of a satellite image by means of local adaptive filtering. Top row: raw image and its fragments magnified for better viewing (marked by arrows); bottom row: resulting image and its corresponding magnified fragments.
Enhancement of an astronomical image by means of
Local adaptive filters can work with color and multicomponent images and videos as well. In this case, filtering is carried out in the corresponding multidimensional transform domains, for instance, domains of 3D (two spatial and one color component coordinates) spectra of color images or 3D spectra of a sequence of video frames (two spatial and time coordinates). In the latter case, the filter 3D window scans the video frame sequence in both spatial and time coordinates. As one can see from Figures
2D and 3D local adaptive filtering of a simulated video sequence: (a) one of noisy frames of a test video sequence (image size
3D local adaptive empirical Wiener filtering for denoising and deblurring of a real thermal video sequence: (a) a frame of the initial video sequence; (b) a frame of a restored video sequence. Filter window is (
Certain further improvement of the image denoising capability can be achieved if, for each image fragment at a given position of the sliding window, similar (according to some criterion) image fragments over the whole image frame are found and included in the stack of fragments to which the 3D transform domain filtering is applied. This approach in its simplest form was coined the name “
Obviously, the image restoration efficiency of the sliding window transform domain filtering will be higher if the window size is selected adaptively at each window position. To this end, filtering can, for instance, be carried out in windows of multiple sizes and, at each window position, the best filtering result should be taken as the signal estimate at this position using methods of statistical tests, for instance, “
In conclusion of this section, note that DFT and DCT spectra of image fragments in a sliding window form the so-called “
In the 1990s, a specific family of transform domain denoising filters emerged: the so-called wavelet shrinkage filters.
Basis functions of wavelet transforms (wavelets) are formed by means of a combination of two methods of building transform basis functions from a single prototype “mother” function: shifting and scaling.
These filters did show a good denoising capability. However, a comprehensive comparison of their denoising capability with that of sliding window DCT domain filters, reported in [
The capability of wavelets to represent images at different scales can be exploited for improving the image denoising performance of both families of filters and for overcoming the above-mentioned main drawback of sliding window DCT domain filters, the fixed window size, which might not be optimal for different image fragments. This can be achieved by incorporating sliding window DCT domain (SWTD) filters into the wavelet filtering structure through replacing soft/hard thresholding of image wavelet decomposition components at different scales by their filtering with SWTD filters working in the window of the smallest size
Obviously, sliding window transform domain and wavelet processing are just different implementations of scalar linear filtering in the transform domain. This motivates their unified interpretation in order to gain a deeper insight into their similarities and dissimilarities. The common base for such a unified interpretation is the notion of signal subband decomposition [
It is curious to note that this “logarithmic coarse—uniform fine” subband arrangement resembles very much the arrangements of tones and semitones in music. In Bach’s equal tempered scale, octaves are arranged in a logarithmic scale and 12 semitones are equally spaced within octaves [
As it was indicated in the introductory section, the Discrete Fourier and Discrete Cosine Transforms occupy a unique position among orthogonal transforms. They are two versions of the discrete representation of the integral Fourier Transform, the fundamental transform for describing physical reality. Among the applications specific to the DFT and DCT are signal and image spectral analysis and analysis of periodicities, fast signal and image convolution and correlation, and image resampling and building “continuous” image models [
Image resampling is a key operation in solving many digital image processing tasks. It assumes reconstruction of an approximation of the original non-sampled image by means of interpolation of the available image samples to obtain samples “in-between” the available ones. The most feasible is interpolation by means of digital filtering implemented as digital convolution. A number of convolutional interpolation methods are known, beginning from the simplest and least accurate nearest neighbor and linear (bilinear, in the 2D case) interpolations to more accurate cubic (bicubic, in the 2D case) and higher order spline methods [
There exists a discrete signal interpolation method that is capable, given a finite set of signal samples, of securing virtually error-free interpolation of sampled signals, that is, a method that does not add to the signal any distortions additional to those caused by the signal sampling. This method is the discrete sinc-interpolation.
The Fast Fourier Transform and the fast Discrete Cosine Transform are the ideal computational means for implementing convolutional interpolation by the discrete sinc-function. Of these two transforms, the FFT has a substantial drawback: it implements cyclic convolution with a period equal to the number of signal samples.
Several methods of implementation of DFT/DCT based discrete sinc-interpolation exist. The most straightforward one is DFT/DCT spectrum zero padding: the interpolated signal is generated by applying the inverse DFT (or, correspondingly, DCT) transform to the zero-padded spectrum. Padding DFT/DCT spectra of signals of
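For even signal lengths, the only subtlety of spectrum zero padding is splitting the coefficient at the Nyquist frequency between the two band edges. A sketch of the DFT version (ours; the test signal and upsampling factor are illustrative assumptions):

```python
import numpy as np

def sinc_interp_zeropad(x, factor):
    """Discrete sinc-interpolation by DFT spectrum zero padding: the
    spectrum is padded to 'factor' times the length and inverse
    transformed; the Nyquist coefficient is split evenly between the
    two band edges to keep the result real."""
    n = len(x)                       # assumed even here
    X = np.fft.fft(x)
    m = n * factor
    h = n // 2
    Xp = np.zeros(m, dtype=complex)
    Xp[:h] = X[:h]                   # nonnegative frequencies
    Xp[m - h + 1:] = X[h + 1:]       # negative frequencies
    Xp[h] = 0.5 * X[h]               # split the Nyquist coefficient
    Xp[m - h] = 0.5 * X[h]
    return np.real(np.fft.ifft(Xp)) * factor

n, factor = 32, 4
t = np.arange(n)
x = np.sin(2 * np.pi * 3 * t / n + 0.4)    # band-limited test signal
xt = sinc_interp_zeropad(x, factor)
tt = np.arange(n * factor) / factor        # 4x denser sampling grid
assert np.allclose(xt, np.sin(2 * np.pi * 3 * tt / n + 0.4))
```

For a band-limited input, the interpolated samples coincide with the underlying continuous signal to machine precision, which is the “virtually error-free” property discussed above.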
The same discrete sinc interpolated
The method is based on the shift property of the DFT: a shift of a signal corresponds to multiplication of its DFT spectrum by a linear phase factor.
Image subsampling using the perfect shifting filter can be employed for creating “continuous” image models for subsequent arbitrary image resampling with any given accuracy [
Besides creating “continuous” image models, the perfect shifting filter is very well suited for image shearing in the three-step method for fast image rotation by means of three subsequent (horizontal/vertical/horizontal) image shearings [
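The perfect shifting filter itself reduces to multiplying the DFT spectrum by a linear phase factor. A 1D sketch (ours; the special handling of the Nyquist coefficient for even-length signals is the usual convention for keeping the output real):

```python
import numpy as np

def dft_shift(x, delta):
    """Perfect shifting filter: multiply the DFT spectrum by a linear
    phase factor to resample the signal at positions t - delta."""
    n = len(x)
    k = np.fft.fftfreq(n) * n                 # signed frequency indices
    phase = np.exp(-2j * np.pi * k * delta / n)
    if n % 2 == 0:
        # real-valued response at the Nyquist frequency
        phase[n // 2] = np.cos(2 * np.pi * (n // 2) * delta / n)
    return np.real(np.fft.ifft(np.fft.fft(x) * phase))

n = 64
t = np.arange(n)
x = np.cos(2 * np.pi * 5 * t / n)
delta = 0.3                                   # arbitrary subpixel shift
assert np.allclose(dft_shift(x, delta), np.cos(2 * np.pi * 5 * (t - delta) / n))
```

For band-limited signals the shifted samples are exact for any fractional shift, which is why repeated shearings in the three-step rotation scheme do not accumulate interpolation error.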
The perfect interpolation capability of the discrete sinc-interpolation was demonstrated in a comprehensive comparison of different interpolation methods in experiments with multiple 360° rotations reported in [
Discrete sinc-interpolation versus other interpolation methods: results of multiple image rotations.
In some applications, “elastic” or space-variant image resampling is required, when shifts of pixel positions are specified individually for each image pixel. In these cases, the perfect shifting filter can be applied to image fragments in a sliding window for evaluating the interpolated value of the window central pixel at each window position. Typical application examples are imitation of image retrieval through turbulent media (for illustration see [
Working in the DCT transform domain, “elastic” resampling in a sliding window can be easily combined with the above-described local adaptive denoising and deblurring [
A certain limitation of the perfect shifting filter in creating “continuous” image models is its capability of subsampling images only at a rate expressed by an integer or a rational number. In some cases, this might be inconvenient, for instance, when the required resampling rate is a value between one and two, say 1.1, 1.2, or alike. For such cases, there exists a third method of signal resampling with discrete sinc-interpolation. It is based on the general Shifted Scaled DFT, which includes arbitrary analog shift and scale parameters. Using the Shifted Scaled (ShSc) DFT, one can apply to the image DFT spectrum an inverse ShScDFT with the desired scale parameter and obtain a correspondingly scaled discrete sinc interpolated image [
Discrete sinc-interpolation versus bilinear and bicubic interpolations in image iterative zoom-in/zoom-out with the scale parameter
The Shifted Scaled DFT can be presented as a convolution and, as such, can be computed using Fast Fourier or Fast Cosine Transforms [
Among the image processing tasks which involve “continuous” image models are also signal differentiation and integration, the fundamental tasks of numerical mathematics that date back to such classics of mathematics as Newton and Leibniz. One can find standard numerical differentiation and integration recipes in numerous reference books, for instance, [
The exact solution for the discrete representation of signal differentiation and integration provided by the sampling theory states that, given the signal sampling interval and the signal sampling and reconstruction devices, the discrete frequency responses (in the DFT domain) of digital filters for perfect differentiation and integration should be, correspondingly, proportional and inversely proportional to the frequency index [
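For the differentiation case, this prescription reads: multiply each DFT coefficient by a factor proportional to its signed frequency index (our sketch; zeroing the response at the Nyquist frequency for even-length signals is the usual convention):

```python
import numpy as np

def dft_derivative(x):
    """Perfect differentiation filter: frequency response proportional
    to the signed frequency index, as dictated by the sampling theory."""
    n = len(x)
    k = np.fft.fftfreq(n) * n        # signed integer frequency indices
    H = 2j * np.pi * k / n           # d/dt for a unit sampling interval
    if n % 2 == 0:
        H[n // 2] = 0.0              # zero response at the Nyquist frequency
    return np.real(np.fft.ifft(np.fft.fft(x) * H))

n = 128
t = np.arange(n)
x = np.sin(2 * np.pi * 4 * t / n)
dx = dft_derivative(x)
# exact derivative of the band-limited test signal
assert np.allclose(dx, (2 * np.pi * 4 / n) * np.cos(2 * np.pi * 4 * t / n))
```

The perfect integrator is obtained analogously, with a response inversely proportional to the frequency index (and the dc component treated separately).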
A comprehensive comparison of the accuracy of standard numerical differentiation and integration algorithms with the perfect DCT-based differentiation and integration was reported in [
Comparison of results of 100 iterative alternated differentiations and integrations of a test rectangular signal using the DCT-based algorithms (left) and standard numerical algorithms (right, differentiation filter with point spread function [−0.5, 0, 0.5] and trapezoidal rule integration algorithm).
The computational efficiency of the DFT/DCT based interpolation error free discrete sinc-interpolation algorithms is rooted in the use of the fast Fourier and fast DCT transforms. Perhaps the best concluding remark for this discussion of the discrete sinc-interpolation DFT/DCT domain methods would be mentioning that a version of what we now call the Fast Fourier Transform algorithm was invented more than 200 years ago by Gauss just for the purpose of facilitating numerical interpolation of sampled data of astronomical observations [
As we mentioned at the beginning of Section
There are many applications where, contrary to the common practice of uniform sampling, sampled data are collected in an irregular fashion. Because image display devices as well as computer software for processing sampled data assume a regular uniform rectangular sampling lattice, in all these cases one needs to convert irregularly sampled images into regularly sampled ones.
Generally, the corresponding regular sampling grid may contain more samples than are available, because the coordinates of positions of the available samples might be known with a “subpixel” accuracy, that is, with an accuracy (in units of image size) better than
The general framework for recovery of discrete signals from a given set of their arbitrarily taken samples can be formulated as an approximation task under the assumption that continuous signals are represented in computers by their band-limited approximations.
The above-described discrete sinc-interpolation methods provide band-limited, in terms of signal Fourier spectra, approximations of regularly sampled signals. One can also think of signal band-limited approximation in terms of their spectra in other transforms. This approach is based on the Discrete Sampling Theorem.
The meaning of the Discrete Sampling Theorem is very simple. Given
The discrete sampling theorem is applicable to signals of any dimensionality, though the formulation of the signal band-limitedness depends on the signal dimensionality. For 2D images and such transforms as the Discrete Fourier, Discrete Cosine, and Walsh Transforms, the simplest is compact “low-pass” band-limitedness by a rectangle or by a circle sector.
It was shown in ([
As it was indicated, the choice of the transform must be made on the basis of the transform energy compaction capability for each particular class of images to which the image to be recovered is believed to belong. The type of the band limitation must also be based on a priori knowledge regarding the class of images at hand. The number of samples to be recovered is a matter of a priori belief of how many samples of a regular uniform sampling lattice would be sufficient to represent the images for the end user.
Implementation of signal recovery/approximation from sparse nonuniformly sampled data according to the discrete sampling theorem requires matrix inversion, which is, generally, a very computationally demanding procedure. In applications in which one can be satisfied with image reconstruction of a certain limited accuracy, one can apply a simple iterative reconstruction algorithm of the Gerchberg-Papoulis [
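One iteration of a Gerchberg-Papoulis type algorithm alternates two projections: imposing the band limitation in the transform domain and restoring the known samples in the signal domain. A 1D DFT-domain sketch (ours; the bandwidth, sampling density, and iteration count are illustrative assumptions):

```python
import numpy as np

def gp_recover(y, known, bw, n_iter=1000):
    """Gerchberg-Papoulis type iterations: alternately impose the band
    limitation in the DFT domain and restore the known samples."""
    n = len(y)
    x = np.where(known, y, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[bw + 1 : n - bw] = 0.0      # band limitation: keep |k| <= bw
        x = np.real(np.fft.ifft(X))
        x[known] = y[known]           # consistency with available samples
    return x

rng = np.random.default_rng(4)
n = 64
t = np.arange(n)
x = sum(np.cos(2 * np.pi * k * t / n + k) for k in range(5))  # band-limited, bw = 4
known = np.zeros(n, dtype=bool)
known[rng.choice(n, 48, replace=False)] = True                # 48 of 64 samples given
rec = gp_recover(np.where(known, x, 0.0), known, bw=4)
assert np.sqrt(np.mean((rec - x) ** 2)) < 0.05 * np.std(x)
```

Since the number of available samples here well exceeds the number of nonzero spectral coefficients, the iterations converge to an accurate band-limited reconstruction of the missing samples.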
Among the reported applications, one can mention superresolution from multiple video frames of turbulent video and superresolution in computed tomography, which makes use of the redundancy of object slice images, in which usually a substantial part of the area is empty space surrounding the object [
The described discrete sampling theorem based methods for image recovery from sparse samples by means of their band-limited approximation in a certain transform domain require explicit formulation of the desired band limitation in the selected transform domain. While for 1D signals this formulation is quite simple and requires, for the most frequently used low-pass band limitation, specification of only one parameter, the signal bandwidth, in the 2D case the formulation of the signal band limitation requires specification of a 2D shape of the signal band-limited spectrum. The simplest shapes, rectangles and circle sectors, may only approximate, with a certain redundancy, the real areas occupied by the most intensive image spectral coefficients of particular images. Figure
Spectral binary mask (shown white with the dc component in the left upper corner) that indicates components of DCT spectrum of the image in Figure
In cases when the exact character of the spectrum band limitation is not known, image recovery from sparse samples can be achieved using the “compressive sensing” approach introduced in [
The compressive sensing approach assumes obtaining a band-limited, in a certain selected transform domain, approximation of images as well, but it does not require explicit formulation of the image band limitation and achieves image recovery from an incomplete set of samples by means of minimization of the L1 norm of the image spectrum in the selected transform.
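Compressive sensing recovery is commonly posed as finding the spectrum with minimal L1 norm among those consistent with the available samples; a simple solver is iterative soft thresholding (ISTA). The sketch below (ours; ISTA is only one of several possible solvers, and the sparse DCT spectrum, sample count, and regularization weight are illustrative assumptions) recovers a signal with a sparse DCT spectrum from half of its samples:

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II matrix: signal = T.T @ spectrum
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    T = np.cos(np.pi * k * (m + 0.5) / n) * np.sqrt(2.0 / n)
    T[0, :] /= np.sqrt(2.0)
    return T

def ista_recover(y, sample_idx, n, lam=0.005, n_iter=4000):
    """Approximately L1-minimal DCT spectrum consistent with the
    available samples, found by iterative soft thresholding (ISTA)."""
    A = dct_matrix(n).T[sample_idx, :]    # available samples of the DCT synthesis
    c = np.zeros(n)
    for _ in range(n_iter):
        c = c + A.T @ (y - A @ c)                          # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft thresholding
    return dct_matrix(n).T @ c            # synthesize the recovered signal

rng = np.random.default_rng(5)
n = 64
spec = np.zeros(n)
spec[[3, 11, 27]] = [1.0, -0.8, 0.6]      # sparse spectrum, positions unknown to solver
x = dct_matrix(n).T @ spec
idx = np.sort(rng.choice(n, 32, replace=False))
rec = ista_recover(x[idx], idx, n)
assert np.sqrt(np.mean((rec - x) ** 2)) < 0.1 * np.std(x)
```

Note that, in contrast to the discrete sampling theorem based recovery, no index set of nonzero spectral coefficients is supplied: the sparsity-promoting L1 minimization finds it itself.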
However, there is a price one should pay for the uncertainty regarding the band limitation: the number
Signal band-limitedness plays an important role in dealing with both continuous signals and the discrete (sampled) signals representing them. It is well known that continuous signals cannot be both strictly band-limited and have strictly bounded support. In fact, continuous signals neither are band-limited nor have strictly bounded support. They can only be more or less densely concentrated in the signal and spectral domains. This property is mathematically formulated in the form of the “uncertainty principle”.
In distinction to that, sampled signals that represent continuous signals can be sharply bounded in both signal and spectral domains. This is quite obvious for some signal spectral representations, such as Haar signal spectra: Haar basis functions are examples of sampled functions sharply bounded in both the signal and the Haar spectral domains. But it turns out that there also exist signals that are sharply bounded in both the signal domain and their spectral domains of the DFT or DCT, which are discrete representations of the integral Fourier transform. An example of a space-limited/band-limited image is shown in Figure
Space-limited image “C. Shannon” (left) and its band-limited DFT spectrum (right; shown centered at the signal dc component).
In a similar way one can generate space-limited and band-limited analogs of the discrete sinc-function, that is, functions which form band-limited shift bases within the given space limits. In [
Examples of a “sinclet” (red plots) and, for comparison, of the discrete sinc-function for the same band limitation (blue plots) in three different positions within an interval of 103 samples out of 512 (boxes (a)–(c)). Plots of their DFT spectra are shown in boxes (d)–(f).
The following relationship between signal bounds in signal and DFT spectral domains can be derived:
Two fundamental features of fast transforms, their energy compaction capability and the fast algorithms for their computation, make them a perfect tool for image compression, restoration, reconstruction, and resampling. Of all fast transforms, the Discrete Fourier Transform and the Discrete Cosine Transform are the most important: they are two complementary discrete representations of the integral Fourier transform, one of the most fundamental mathematical models for describing physical reality, and, in addition, they enable fast digital convolution. Of these two transforms, the DCT is preferable in most applications thanks to its ability to very substantially reduce processing artifacts associated with image discontinuities at the borders.
A modern tendency in imaging engineering is computational imaging. Computer processing of sensor data enables substantial price reduction and sometimes even complete removal of imaging optics and similar imaging hardware. It also gives birth to numerous new imaging techniques in astrophysics, biology, industrial engineering, remote sensing, and other applications. No doubt, this area promises many new achievements in the coming years, and it is certain that fast transforms will remain an indispensable tool in this process.
Image samples.
Image spectral coefficients.
Pixel and, correspondingly, spectral coefficient vertical and horizontal indices.
Various constants.
Image sampling intervals.
Sampling interval in the transform domain.
Total number of image samples.
Wavelength of coherent radiation.
Distance between object plane and hologram.
DCT: Discrete Cosine Transform.
DFT: Discrete Fourier Transform.
DFrT: Discrete Fresnel Transform.
PSNR: Peak Signal-to-Noise Ratio (ratio of signal dynamic range to noise standard deviation).
RMSE: Root Mean Square Error.
SNR: Signal-to-Noise Ratio (ratio of signal and noise variances).
The author declares that there is no conflict of interests regarding the publication of this paper.