{{citations needed|date=January 2014}}
The '''optical transfer function''' ('''OTF''') is the [[Transfer Function|transfer function]] of an optical system such as a [[camera]], [[microscope]], [[human eye]], or [[Image projector|projector]]. It is used by optical engineers and scientists to describe how the optics project light from the object or scene onto a photographic film, [[Image sensor|detector array]], [[retina]], or screen. The function specifies the [[Translation (physics)|translation]] and contrast reduction of a periodic [[sine]] pattern after passing through the lens system, as a function of its periodicity and orientation. Formally, the optical transfer function is defined as the [[Fourier transform]] of the [[point spread function]], or [[impulse response]] of the optics, i.e. the image of a point source. When this image does not change shape upon lateral translation of the point source, the optical transfer function can be used to study the projection of arbitrary objects or scenes onto the detector or film. While figures of merit such as contrast, sensitivity, and resolution give an intuitive indication of performance, the optical transfer function provides a comprehensive and well-defined characterization of optical systems.
 
[[File:Illustration of the optical transfer function and its relation to image quality..svg|thumb|right|660px|Illustration of the optical transfer function and its relation to image quality. The optical transfer function of a well-focused (a) and an out-of-focus (d) optical imaging system without aberrations. As the optical transfer function of these systems is real and non-negative, the optical transfer function is equal to the modulation transfer function by definition. Images of a point source and spoke target are shown in (b,e) and (c,f), respectively.]]
 
== Definition and related concepts ==
Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a [[Complex number|complex-valued]] function. The projection of a specific periodic pattern is represented by a complex number with absolute value and [[complex argument]] proportional to the relative contrast and translation of the projected pattern, respectively.
 
[[File:Definitions PSF OTF MTF PhTF.svg|right|thumb|660px|Various closely related characterizations of an optical system exhibiting coma, a typical aberration that occurs off-axis. (a) The point-spread function (PSF) is the image of a point source. (b) The image of a line is referred to as the line-spread function, in this case a vertical line. The line-spread function is directly proportional to the vertical integration of the point-spread image. The optical-transfer function (OTF) is defined as the Fourier transform of the point-spread function and is thus generally a two-dimensional complex function. Typically only a one-dimensional slice is shown (c), corresponding to the Fourier transform of the line-spread function. The thick green line indicates the real part of the function, and the thin red line the imaginary part. Often only the absolute value of the complex function is shown; this allows visualization of the two-dimensional function (d); however, more commonly only the one-dimensional function is shown (e). The latter is typically normalized at the spatial frequency zero and referred to as the modulation transfer function (MTF). For completeness, the complex argument is sometimes provided as the phase transfer function (PhTF), shown in panel (f).]]
 
Often the contrast reduction is of most interest and the pattern translation can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the '''modulation transfer function''' ('''MTF'''). On the other hand, when the pattern translation is also important, the [[complex argument]] of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the '''phase transfer function''' ('''PhTF'''). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions:
:<math>\mathrm{OTF}(\nu)=\mathrm{MTF}(\nu)e^{i\,\mathrm{PhTF}(\nu)}</math>
where
:<math>\mathrm{MTF}(\nu) = \left\vert \mathrm{OTF}(\nu) \right\vert </math>
:<math>\mathrm{PhTF}(\nu) = \mathrm{arg}(\mathrm{OTF}(\nu)), </math>
and <math>\mathrm{arg}(\cdot)</math> represents the complex argument function, while <math>\nu</math> is the spatial frequency of the periodic pattern. In general <math>\nu</math> is a vector with a spatial frequency for each dimension, i.e. it indicates also the direction of the periodic pattern.
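 
As a numerical illustration, this decomposition can be computed directly from sampled values of the complex OTF; the values below are arbitrary and serve only to show the relationship between the three functions.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical complex OTF samples at a few spatial frequencies (arbitrary illustrative values).
otf = np.array([1.0 + 0.0j, 0.6 - 0.2j, 0.1 + 0.3j, -0.05 + 0.0j])

mtf = np.abs(otf)     # modulation transfer function: relative contrast
phtf = np.angle(otf)  # phase transfer function: pattern translation, in radians

# The complex OTF is recovered as MTF * exp(i * PhTF).
assert np.allclose(otf, mtf * np.exp(1j * phtf))
</syntaxhighlight>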
 
The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. By consequence, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope.
 
True to the definition of [[Transfer Function|transfer function]], <math>\mathrm{OTF}(0)=\mathrm{MTF}(0)</math> should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence <math>\mathrm{MTF}(0)\equiv 1</math>.
 
Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. For example, the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors.
 
Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the '''Contrast Transfer Function (CTF)'''.<ref name=CTF >{{cite web |title=Contrast Transfer Function|url=http://www.microscopyu.com/articles/optics/mtfintro.html|accessdate=16 November 2013}}</ref>
 
== Examples ==
 
=== The OTF of an ideal lens system ===
A perfect lens system will provide a high contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, [[Optical aberration|non-aberrated]], [[F-number|f/4]] optical imaging system used at the visible wavelength of 500&nbsp;nm would have the optical transfer function depicted in the right hand figure.
 
<div style="float:right;margin-left:1em;">
{{Auto images
|total_width=360
|width1=800|height1=600|image1=1D diffraction limited optical transfer function..svg|caption1=The one-dimensional optical transfer function of a diffraction limited imaging system is identical to its modulation transfer function.|alt1=One-dimensional optical transfer function of a diffraction limited imaging system is identical to its modulation transfer function.
|width2=600|height2=600|image2=Spoke target imaged by a diffraction limited imaging system..svg|caption2=Spoke target imaged by a diffraction limited imaging system.|alt2=Spoke target imaged by a diffraction limited imaging system.
|footer=Transfer function and example image of an ideal, optical-aberration-free (diffraction-limited) imaging system.
}}
</div>
 
It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter; in other words the optical resolution of the image projection is 1/500{{sup|th}} of a millimeter, or 2 micrometers. Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved, disc. Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between these units is typically a matter of a multiplication or division. E.g. a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200.
 
The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, in particular by their separation distance. As explained by the [[Nyquist-Shannon Theorem]], to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, [[aliasing]] may lead to a further reduction of the image fidelity.
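 
These numbers can be verified with a short calculation, assuming the incoherent diffraction-limited cut-off frequency <math>1/(\lambda N)</math> for wavelength <math>\lambda</math> and f-number <math>N</math> (here 500&nbsp;nm and f/4, as in this example):
<syntaxhighlight lang="python">
# Back-of-the-envelope check of the numbers used in this example.
# Assumes the incoherent diffraction-limited cut-off frequency 1/(lambda * N).
wavelength_mm = 500e-6   # 500 nm, expressed in millimetres
f_number = 4.0

cutoff = 1.0 / (wavelength_mm * f_number)  # 500 cycles per millimetre
period_um = 1e3 / cutoff                   # 2 micrometre period (the optical resolution)
nyquist_pitch_um = period_um / 2.0         # 1 micrometre pixel separation to sample it
print(cutoff, period_um, nyquist_pitch_um)
</syntaxhighlight>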
 
=== The OTF of an imperfect lens system ===
An imperfect, [[Optical Aberrations|aberrated]], imaging system could possess the optical transfer function depicted in the following figure.
 
<div style="float:left;margin-right:1em;">
{{Auto images
|total_width=560
|width1=800|height1=600|image1=OTF of an optical system with spherical aberration..svg|caption1=The real part of the optical transfer function of an aberrated, imperfect, imaging system.|alt1=The optical transfer function of an aberrated, imperfect, imaging system.
|width2=800|height2=600|image2=MTF of an optical system with spherical aberration..svg|caption2=The modulation transfer function of an aberrated, imperfect, imaging system.|alt2=The modulation transfer function of an aberrated, imperfect, imaging system.
|width3=600|height3=600|image3=Spoke target imaged with spherical aberration.svg|caption3=The image of a spoke target as imaged by an aberrated optical system.|alt3=The image of a spoke target as imaged by an aberrated optical system.
|footer=Transfer function and example image of an f/4 optical imaging system at 500&nbsp;nm with spherical aberration with standard Zernike coefficient of 0.25.
}}
</div>
 
As with the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and ''vice versa''; this is referred to as contrast inversion, which is directly related to the sign reversal in the real part of the optical transfer function, and presents itself as a shift by half a period for some periodic patterns.
 
While it could be argued that the resolution of both the ideal and the imperfect system is 2&nbsp;μm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function.
 
=== The OTF of an optical system with a non-rotational symmetric aberration ===
[[File:Trefoil aberration PSF OTF and example image.svg|right|thumb|600px|When viewed through an optical system with trefoil aberration, the image of a point object will look like a three-pointed star (a). As the point-spread function is not rotationally symmetric, only a two-dimensional optical transfer function can describe it well (b). The height of the surface plot indicates the absolute value and the hue indicates the complex argument of the function. A spoke target imaged by such an imaging device is shown by the simulation in (c).]]
 
Optical systems, and in particular [[optical aberrations]], are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer functions or modulation transfer functions are thus generally two-dimensional functions. The figure on the right shows the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotationally-symmetric aberration.
 
Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotationally-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates the phase. It can be seen that, while for the rotationally symmetric aberrations the phase is either 0 or π and thus the transfer function is real valued, for the non-rotationally symmetric aberration the transfer function has an imaginary component and the phase varies continuously.
 
=== Practical Example - High Definition Video System ===
While [[optical resolution]], as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language.
 
Taking the example of a current high definition (HD) video system, with 1920 by 1080 pixels, the [[Nyquist rate|Nyquist]] theorem states that it should be possible, in a perfect system, to resolve fully (with true black to white transitions) a total of 1920 alternating black and white lines, otherwise referred to as a spatial frequency of 1920/2=960 line pairs per picture width, or 960 cycles per picture width (definitions in terms of cycles per unit angle or per mm are also possible, but generally less clear when dealing with cameras and more appropriate to telescopes etc.). In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard definition television picture derived from a film scanner that uses [[oversampling]], as described later, may appear sharper than a high definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed: the former has full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall.
 
== Calculation ==
Most [[Optical lens design|optical design software]] has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as [[GNU Octave]] or [[Matlab]], and in some specific cases even analytically. The optical transfer function can be calculated following one of two approaches:<ref name=Goodman2005>{{cite book |first=Joseph|last=Goodman|year=2005|title=Introduction to Fourier Optics|edition=3rd|publisher=Roberts & Co Publishers|isbn=0-9747077-2-4}}</ref>
# as the Fourier transform of the incoherent [[point spread function]], or
# as the auto-correlation of the [[pupil function]] of the optical system
Mathematically both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach.
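 
The equivalence can be illustrated with a short numerical sketch for an aberration-free circular pupil; the grid size and pupil radius below are arbitrary choices.
<syntaxhighlight lang="python">
import numpy as np

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x**2 + y**2 <= (n // 8)**2).astype(float)  # binary circular aperture

# 1) Fourier transform of the incoherent point spread function.
psf = np.abs(np.fft.ifft2(np.fft.ifftshift(pupil)))**2
otf_psf = np.fft.fft2(psf)
otf_psf /= otf_psf[0, 0]                            # normalize so that OTF(0) = 1

# 2) Auto-correlation of the pupil function (computed here via the Wiener-Khinchin relation).
autocorr = np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2)
otf_pupil = autocorr / autocorr[0, 0]

assert np.allclose(otf_psf, otf_pupil)              # both routes give the same OTF
</syntaxhighlight>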
 
=== Example ===
 
==== Ideal lens system with circular aperture ====
 
===== Auto-correlation of the pupil function =====
Since the optical transfer function is the [[Fourier transform]] of the [[point spread function]], and the point spread function is the squared absolute value of the inverse Fourier transform of the [[pupil function]], the optical transfer function can also be calculated directly from the pupil function. From the [[convolution theorem]] it can be seen that the optical transfer function is in fact the [[autocorrelation|auto-correlation]] of the pupil function.
 
The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of <math>2\nu</math>, where <math>\nu</math> is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for <math>\nu = 0</math>, so the resulting area should be divided by <math>\pi</math>.
 
The intersecting area can be calculated as the sum of the areas of two identical [[circular segment]]s: <math> \theta/2 - \sin(\theta)/2</math>, where <math>\theta</math> is the circle segment angle. By substituting <math> \nu = \cos(\theta/2) </math>, and using the equalities <math> \sin(\theta)/2 = \sin(\theta /2)\cos(\theta /2) </math> and <math> 1 = \nu^2 + \sin(\arccos(\nu))^2 </math>, this can be rewritten as <math>\arccos(\nu) - \nu \sqrt{1 - \nu^2} </math>. Hence the normalized optical transfer function is given by:
 
<math>\mathrm{OTF}(\nu) = \frac{2}{\pi} \left(\arccos(\nu)-\nu \sqrt{1-\nu^2}\right)</math>.
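 
This closed-form expression is easily evaluated numerically; the short sketch below confirms that it equals one at zero spatial frequency and falls to zero at the normalized cut-off frequency.
<syntaxhighlight lang="python">
import numpy as np

def otf_circular(nu):
    """Diffraction-limited OTF of a circular aperture; nu is normalized to the cut-off (0 to 1)."""
    nu = np.clip(np.asarray(nu, dtype=float), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

nu = np.linspace(0.0, 1.0, 6)
print(otf_circular(nu))  # starts at 1.0 and decreases to 0.0 at the cut-off
</syntaxhighlight>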
 
=== Numerical evaluation ===
The one-dimensional optical transfer function can be calculated as the [[discrete Fourier transform]] of the line spread function. This data is graphed against the [[spatial frequency]] data. In this case, a sixth order polynomial is fitted to the '''MTF vs. spatial frequency''' curve to show the trend. The spatial frequency at which the MTF has dropped to 50% (the 50% cut-off frequency) is then read off the curve. From this data, the approximate position of best focus of the '''unit under test''' is determined.
 
[[Image:MTF example graph.jpg|thumb|right|310px|The '''MTF''' data versus spatial frequency is smoothed by fitting a sixth order polynomial to it, producing a smooth curve. The 50% cut-off frequency is determined and the corresponding '''spatial frequency''' is found, yielding the approximate position of '''best focus'''.]]
 
The MTF is defined as the Fourier transform of the line spread function (LSF). In general, this Fourier transform cannot be determined analytically from measured data:
 
:<math>\text{MTF} = \mathcal{F} \left[ \text{LSF}\right] \qquad \qquad \text{MTF}= \int \text{LSF}(x)\, e^{-i 2 \pi\, x s}\, dx</math>
 
Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform <math>\mathcal{DFT}</math>.<ref>Chapra, S.C.; Canale, R.P. (2006). ''Numerical Methods for Engineers'' (5th ed.). New York, NY: McGraw-Hill.</ref>
 
:<math>\text{MTF} = \mathcal{DFT}[\text{LSF}] = Y_k = \sum_{n=0}^{N-1} y_n e^{-ik \frac{2 \pi}{N} n} \qquad  k\in  [0, N-1] </math>
 
where
* <math>Y_k\,</math> = the <math>k^{th}</math> value of the MTF
* <math>N\,</math> = number of data points
* <math>n\,</math> = index into the LSF data
* <math>k\,</math> = spatial-frequency index of the MTF
* <math>y_n\,</math> = value of the LSF at the <math>n^{th}\,</math> pixel position
* <math>i=\sqrt{-1}</math>
 
: <math> e^{\pm ia} = \cos(a) \, \pm \, i \sin(a) </math>
: <math>\text{MTF}= \mathcal{DFT}[\text{LSF}] = Y_k = \sum_{n=0}^{N-1} y_n \left[\cos\left(k\frac{2 \pi}{N} n\right) - i\sin\left(k \frac{2 \pi}{N} n\right)\right] \qquad k\in[0,N-1]</math>
 
The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph.
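 
A minimal sketch of this procedure is given below; the Gaussian line-spread function and the pixel pitch are assumptions made purely for illustration.
<syntaxhighlight lang="python">
import numpy as np

pixel_pitch_mm = 0.001                       # assumed 1 micrometre sampling
x = (np.arange(256) - 128) * pixel_pitch_mm
lsf = np.exp(-0.5 * (x / 0.002)**2)          # synthetic Gaussian line-spread function

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                # normalize so that MTF(0) = 1
freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles per millimetre

# Spatial frequency at which the MTF first drops below 50%
# (the procedure above fits a polynomial to the curve first; this sketch reads it off directly).
f50 = freq[np.argmax(mtf < 0.5)]
print(f50)
</syntaxhighlight>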
 
== Measurement ==
The optical transfer function is not only useful for the design of optical systems, it is also valuable for characterizing manufactured systems.
 
=== Starting from the point spread function ===
The optical transfer function is defined as the [[Fourier transform]] of the [[Impulse response|impulse-response]] of the optical system, also called the [[point spread function]]. The optical transfer function is thus readily obtained by first acquiring the image of an incoherent point source, and applying the two-dimensional [[Fast Fourier transform|discrete Fourier transform]] to the sampled image. Such a point-source can, for example, be a bright light behind a screen with a pin hole, a fluorescent or metallic [[microsphere]], or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatially varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source.
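 
Assuming a sampled point-spread-function image with the point source approximately centered and a dark background, the computation can be sketched as follows; the background subtraction and the centering are assumptions of this sketch.
<syntaxhighlight lang="python">
import numpy as np

def otf_from_psf(psf_image):
    """Estimate the OTF as the normalized 2-D discrete Fourier transform of a PSF image."""
    psf = np.asarray(psf_image, dtype=float)
    psf = np.clip(psf - np.median(psf), 0.0, None)  # crude background subtraction
    otf = np.fft.fft2(np.fft.ifftshift(psf))        # ifftshift assumes the point is centered
    return otf / otf[0, 0]                          # normalize so that OTF(0) = 1
</syntaxhighlight>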
 
=== Using extended test objects for spatially invariant optics ===
When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional [[Fast Fourier transform|discrete Fourier transform]] of the image and divided by that of the extended object. Typically either a line or a black-white edge is used.
 
==== The line-spread function ====
The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; as a consequence, the optical transfer function can only be determined for a single dimension using a single '''line-spread function''' (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles.
 
The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target, or it can be derived from the edge spread function, discussed in the next subsection.
 
==== The edge-spread function ====
The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the '''edge spread function''' (ESF).<ref>Holst, G.C. (1998). ''Testing and Evaluation of Infrared Imaging Systems'' (2nd ed.). Florida: JCD Publishing, Washington: SPIE.</ref><ref name=ElectroOpticalTestLab>Electro Optical Industries, Inc. (2005). EO TestLab Methodology. In Education/Ref. http://www.electro-optical.com/html/toplevel/educationref.asp.</ref> However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function; repeated measurements are thus necessary if the optical system cannot be assumed to be rotationally symmetric.
 
[[Image:MTF knife-edge target.jpg|thumb|right|215px|In evaluating the '''ESF''', an operator defines a box area equivalent to 10%{{citation needed|date=August 2013}} of the total frame area of a '''knife-edge test target''' back-illuminated by a '''blackbody'''. The area is defined to encompass the edge of the target image.]]
 
As shown in the right hand figure, an operator defines a box area encompassing the edge of a '''knife-edge test target''' image back-illuminated by a [[blackbody]]. The box area is defined to be approximately 10%{{citation needed|date=August 2013}} of the total frame area. The image [[pixel]] data is translated into a two-dimensional array ([[pixel|pixel intensity]] and [[pixel|pixel position]]). The amplitude (pixel intensity) of each [[line (video)|line]] within the array is [[Normalization (statistics)|normalized]] and averaged. This yields the edge spread function.
 
: <math>\text{ESF} = \frac{X - \mu}{\sigma} \qquad \qquad \sigma\, = \sqrt{\frac{\sum_{i=0}^{n-1} (x_i-\mu\,)^2}{n}} \qquad \qquad \mu\, = \frac{\sum_{i=0}^{n-1} x_i}{n} </math>
where
* ESF = the output array of normalized pixel intensity data
* <math>X\,</math> = the input array of pixel intensity data
* <math>x_i\,</math> = the i<sup>th</sup> element of <math>X\,</math>
* <math>\mu\,</math> = the average value of the pixel intensity data
* <math>\sigma\,</math> = the standard deviation of the pixel intensity data
* <math>n\,</math> = number of pixels used in average
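 
A sketch of this line-by-line normalization and averaging is given below; it assumes a region of interest in which the edge runs vertically, so that each row is one scan across the edge.
<syntaxhighlight lang="python">
import numpy as np

def edge_spread_function(roi):
    """Normalize each line of a knife-edge region of interest, then average the lines."""
    rows = np.asarray(roi, dtype=float)
    rows = (rows - rows.mean(axis=1, keepdims=True)) / rows.std(axis=1, keepdims=True)
    return rows.mean(axis=0)  # the averaged, normalized profile is the ESF
</syntaxhighlight>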
 
The line spread function is identical to the [[derivative|first derivative]] of the edge spread function,<ref name=Mazzetta2007>Mazzetta, J.A.; Scopatz, S.D. (2007). Automated Testing of Ultraviolet, Visible, and Infrared Sensors Using Shared Optics. ''Infrared Imaging Systems: Design Analysis, Modeling, and Testing XVIII'', Vol. 6543, pp. 654313-1–654313-14.</ref> which is differentiated using [[numerical analysis|numerical methods]]. When it is more practical to measure the edge spread function, one can determine the line spread function as follows:
 
: <math>\text{LSF} = \frac{d}{dx} \text{ESF}(x)</math>
Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the [[finite difference]]:
:<math> \text{LSF} = \frac{d}{dx}\text{ESF}(x) \approx \frac{\Delta \text{ESF}}{\Delta x}</math>
:<math> \text{LSF} \approx \frac{\text{ESF}_{i+1} - \text{ESF}_{i-1}}{2(x_{i+1} - x_i)}</math>
 
where:
* <math>i\,</math> = the index <math>i = 1,2,\dots,n-1</math>
* <math>x_i\,</math> = position of the <math>i^{th}\,</math> pixel
* <math>\text{ESF}_i\,</math> = ESF of the <math>i^{th}\,</math> pixel
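 
The finite-difference step can be sketched as follows for a uniformly sampled edge spread function:
<syntaxhighlight lang="python">
import numpy as np

def lsf_from_esf(esf, x):
    """Central finite difference of the ESF, as in the formula above (uniform sampling assumed)."""
    esf = np.asarray(esf, dtype=float)
    x = np.asarray(x, dtype=float)
    return (esf[2:] - esf[:-2]) / (2.0 * (x[2:] - x[1:-1]))  # (ESF_{i+1} - ESF_{i-1}) / (2 dx)
</syntaxhighlight>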
 
==== Using a grid of black and white lines ====
Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines), not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF).
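 
The size of this effect can be checked numerically: the fundamental of a unit-amplitude square wave has amplitude <math>4/\pi</math>, roughly 27% larger than the square wave itself.
<syntaxhighlight lang="python">
import numpy as np

n = 4096
t = np.arange(n) / n
square = np.sign(np.sin(2 * np.pi * t))  # one cycle of a unit-amplitude square wave

fundamental = 2 * np.abs(np.fft.rfft(square)[1]) / n
print(fundamental, 4 / np.pi)            # both approximately 1.27
</syntaxhighlight>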
 
==Factors affecting MTF in typical camera systems==
In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that do appear are 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a '[[phase plate]]' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce [[aliasing]] by eliminating spatial frequencies above the Nyquist rate of the display.
 
== Digital correction of the optical transfer function ==
Often the contrast decreases significantly before the Nyquist frequency of the display is reached. Digital image [[deconvolution]] is the procedure that reverses the contrast reduction to yield a sharper image. Errors in the recording due to noise or aliasing will also be amplified. The technique is thus only effective on high quality, high signal to noise ratio, recordings.
 
=== Oversampling and downconversion to maintain the optical transfer function ===
The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than [[Sampling (signal processing)|samples]] in the final image, and 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying spot scanners, and later [[charge-coupled device|CCD]] line scanners, were developed which sampled more pixels than were needed and then 'downconverted', which is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by [[convolution]] with a two-dimensional sin(x)/x [[weighting]] function which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as [[Photoshop]].
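 
A one-dimensional sketch of such a windowed sin(''x'')/''x'' low-pass filter followed by decimation is given below; real systems apply a two-dimensional equivalent and use more elaborate approximations of it.
<syntaxhighlight lang="python">
import numpy as np

def downconvert(signal, factor, taps=129):
    """Low-pass filter with a windowed sin(x)/x kernel, then keep every `factor`-th sample."""
    n = np.arange(taps) - (taps - 1) / 2
    cutoff = 0.5 / factor                          # new Nyquist rate, in cycles per input sample
    kernel = 2 * cutoff * np.sinc(2 * cutoff * n)  # ideal low-pass impulse response
    kernel *= np.hamming(taps)                     # window to make the filter finite
    kernel /= kernel.sum()
    filtered = np.convolve(signal, kernel, mode='same')
    return filtered[::factor]
</syntaxhighlight>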
 
Just as standard definition video with a flat MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has at least twice{{Citation needed|date=August 2013}} as many pixels, and then digitally filtering. With movies now being shot in [[4K resolution|4k]] and even 8k video for the cinema, using cameras like the [[Red Digital Cinema Camera Company|Red]], we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true (unless a perfect optical spatial filter can be devised), and the same problem exists of course with stills cameras, where a better image can be expected when, say, a 10-megapixel image is converted to a 5-megapixel image, than could ever be obtained from even the best 5-megapixel camera. Because of this problem of maintaining a flat MTF, broadcasters like the [[BBC]] did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).
 
Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'Modulation Transfer Function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency.
 
Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35mm film, but has become an insurmountable limitation with the very small format sensors used in digital cameras and especially video cameras. First generation HD consumer camcorders used 1/4&nbsp;inch sensors, for which apertures smaller than about f/4 begin to limit resolution. Even professional video cameras mostly use 2/3&nbsp;inch sensors, prohibiting the use of apertures around f/16 that would have been considered normal for film formats. Certain cameras (such as the [[Pentax K10D]]) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range.<ref>[http://www.b2bvideosource.com/mm5/merchant.mvc?Screen=CAMERA_TERMINOLOGY&Store_Code=BVS B2BVideoSource.com: Camera Terminology<!-- Bot generated title -->]</ref>
 
=== Trend to large-format DSLRs and improved MTF potential ===
There has recently been a shift towards the use of large image format [[digital single lens reflex camera]]s driven by the need for low-light sensitivity and narrow [[depth of field]] effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. In practice, such cameras currently fail in this respect, and they do not have the processing power to do what is required. The [[Canon EOS 5D Mark II]] is believed to use only every third line, and hence suffers bad aliasing, as its [[Optical low-pass filter|optical filter]] is optimized for stills use. The [[Panasonic Lumix DMC-GH2]] may do some processing across pixels, producing very sharp images, but with some aliasing. Nevertheless, such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing.
 
== Limitations ==
In general, the [[point spread function]], the image of a point source, also depends on factors such as the [[wavelength]] ([[Visible spectrum|color]]) and [[Field of view|field]] angle (lateral point source position). When such variation is sufficiently gradual, the optical system could be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly upon lateral translation, the optical transfer function does not describe the optical system accurately.
 
==See also==
* [[Bokeh]]
* [[Modulation transfer function (infrared imaging)]]
* [[Optical resolution]]
* [[Signal to noise ratio (image processing)]]
* [[Signal transfer function]]
* [[Strehl ratio]]
* [[Transfer function]]
* [[Wavefront coding]]
* [[Minimum resolvable contrast]]
* [[Minimum resolvable temperature difference]]
 
==References==
{{Reflist}}
 
== External links ==
* [http://spie.org/ SPIE] international society advancing light based research.
* [http://www.electro-optical.com/ Electro Optical Industries] infrared and visible light spectrum test and calibration equipment.
* [http://www.trioptics.com/knowledgebase/mtf.php MTF] expressing the imaging quality.
* [http://www.quickmtf.com Quick MTF] software for measuring resolution, including ESF, LSF and MTF.
* [http://spie.org/x34298.xml MTF on Optipedia] Modulation transfer function written by Glenn D. Boreman on Optipedia.
* [http://www.optikos.com/resource_files/pdfs/how_to_measure_mtf.pdf How to Measure MTF] How to Measure MTF and other Properties of Lenses written by Optikos Corporation.
 
[[Category:Optics]]

