{{Use dmy dates|date=June 2013}}
The '''Allan variance''' ('''AVAR'''), also known as '''two-sample variance''', is a measure of frequency stability in [[clock]]s, [[oscillator]]s and [[amplifier]]s. It is named after [[David W. Allan]]. It is expressed mathematically as


:<math>\sigma_y^2(\tau). \, </math>
 
The '''Allan deviation''' ('''ADEV''') is the square root of Allan variance. It is also known as ''sigma-tau'', and is expressed mathematically as
 
:<math>\sigma_y(\tau).\,</math>
 
The ''M-sample variance'' is a measure of frequency stability using M samples, time T between measures and observation time <math>\tau</math>. ''M''-sample variance is expressed as
 
:<math>\sigma_y^2(M, T, \tau).\,</math>
 
The ''Allan variance'' is intended to estimate stability due to noise processes and not that of systematic errors or imperfections such as frequency drift or temperature effects. The Allan variance and Allan deviation describe frequency stability, i.e. the stability in frequency rather than in phase. See also the section entitled "[[Allan variance#Interpretation of value|Interpretation of value]]" below.
 
There are also different adaptations or alterations of ''Allan variance'', notably the [[modified Allan variance]] MAVAR or MVAR, the [[total variance]], and the [[Hadamard variance]]. There also exist time stability variants such as [[time deviation]] TDEV or [[time deviation|time variance]] TVAR. Allan variance and its variants have proven useful outside the scope of [[timekeeping]] and provide a set of improved statistical tools to use whenever the noise processes are not unconditionally stable but their derivative is.
 
The general ''M''-sample variance remains important since it allows [[dead time]] in measurements, and bias functions allow conversion into Allan variance values. Nevertheless, for most applications the special case of 2-sample, or "Allan variance" with <math>T = \tau</math>, is of greatest interest.
 
==Background==
When investigating the stability of [[crystal oscillator]]s and [[atomic clock]]s it was found that they did not have a [[phase noise]] consisting only of [[white noise]], but also of white frequency noise and [[flicker noise|flicker frequency noise]]. These noise forms become a challenge for traditional statistical tools such as [[standard deviation]] as the estimator will not converge. The noise is thus said to be divergent. Early efforts in analysing the stability included both theoretical analysis and practical measurements.<ref name=Cutler1966>{{Citation |last1=Cutler |first1=L. S. |last2=Searle |first2=C. L. |url=http://wwwusers.ts.infn.it/~milotti/Didattica/Segnali/Cutler&Searle_1966.pdf |title=Some Aspects of the Theory and Measurements of Frequency Fluctuations in Frequency Standards |journal=Proceedings of IEEE |volume=54 |number=2 |date=February 1966 |pages=136–154}}</ref><ref name=Leeson1966>{{Citation |last=Leeson |first=D. B |title=A simple Model of Feedback Oscillator Noise Spectrum |url=http://ccnet.stanford.edu/cgi-bin/course.cgi?cc=ee246&action=handout_download&handout_id=ID113350669026291 |pages=329–330 |journal=Proceedings of IEEE |volume=54 |number=2 |date=February 1966 |accessdate=20 September 2012}}</ref>
 
An important side consequence of having these types of noise was that, since the various methods of measurement did not agree with each other, the key aspect of repeatability of a measurement could not be achieved. This limited the ability to compare sources and to derive meaningful specifications to require from suppliers. Essentially all forms of scientific and commercial uses were then limited to dedicated measurements, which hopefully would capture the need of that application.
 
To address these problems, David Allan introduced the M-sample variance and (indirectly) the two-sample variance.<ref name=Allan1966/> While the two-sample variance did not completely allow all types of noise to be distinguished, it provided a means to meaningfully separate many noise-forms for time-series of phase or frequency measurements between two or more oscillators. Allan provided a method to convert between any M-sample variance to any N-sample variance via the common 2-sample variance, thus making all M-sample variances comparable. The conversion mechanism also proved that M-sample variance does not converge for large M, thus making them less useful. IEEE later identified the 2-sample variance as the preferred measure.<ref name=IEEE1139>{{cite journal | doi = 10.1109/IEEESTD.1999.90575 | title=Definitions of physical quantities for fundamental frequency and time metrology &ndash; Random Instabilities | journal=IEEE Std 1139-1999}}</ref>
 
An early concern was related to time and frequency measurement instruments which had a [[dead time]] between measurements. Such a series of measurements did not form a continuous observation of the signal and thus introduced a [[systematic bias]] into the measurement. Great care was spent in estimating these biases. The introduction of zero dead time counters removed the need, but the bias analysis tools have proved useful.
 
Another early aspect of concern was related to how the [[Bandwidth (signal processing)|bandwidth]] of the measurement instrument would influence the measurement, such that it needed to be noted. It was later found that by algorithmically changing the observation <math>\tau</math>, only low <math>\tau</math> values would be affected while higher values would be unaffected. The change of <math>\tau</math> is done by letting it be an integer multiple <math>n</math> of the measurement timebase <math>\tau_0</math>.
 
:<math>\tau = n\,\tau_0 </math>
 
The physics of [[crystal oscillator]]s was analyzed by D. B. Leeson<ref name=Leeson1966/> and the result is now referred to as [[Leeson's equation]]. The feedback in the [[oscillator]] will make the [[white noise]] and [[flicker noise]] of the feedback amplifier and crystal become the [[power-law noise]]s of <math>f^{-2}</math> white frequency noise and <math>f^{-3}</math> flicker frequency noise respectively. These noise forms have the effect that the [[standard variance]] estimator does not converge when processing time error samples. This mechanism of the feedback oscillators was unknown when the work on oscillator stability started, but was presented by Leeson at the same time as the statistical tools were made available by [[David W. Allan]]. For a more thorough presentation on the [[Leeson effect]], see modern phase noise literature.<ref name=Rubiola2009>{{Citation |last=Rubiola |first=Enrico |title=Phase Noise and Frequency Stability in Oscillators |publisher=Cambridge university press |isbn=0-521-88677-5 |year=2008}}</ref>
 
==Interpretation of value==
Allan variance is defined as one half of the [[time]] average of the squares of the differences between successive readings of the [[frequency deviation]] sampled over the sampling period. The Allan variance depends on the time period used between samples: it is therefore a function of the sample period, commonly denoted as τ, as well as of the distribution being measured, and is displayed as a graph rather than a single number. A low Allan variance is a characteristic of a clock with good stability over the measured period.
 
Allan deviation is widely used for plots (conveniently in [[Log-log graph|log-log]] format) and presentation of numbers. It is preferred as it gives the relative amplitude stability, allowing ease of comparison with other sources of errors.
 
An Allan deviation of 1.3×10<sup>&minus;9</sup> at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative [[root mean square]] (RMS) value of 1.3×10<sup>&minus;9</sup>. For a 10-MHz clock, this would be equivalent to 13 mHz RMS movement. If the phase stability of an oscillator is needed then the [[time deviation]] variants should be consulted and used.
 
One may convert the Allan variance and other time-domain variances into frequency-domain measures of time (phase) and frequency stability.  The following link shows these relationships and how to perform these conversions:
http://www.allanstime.com/Publications/DWA/Conversion_from_Allan_variance_to_Spectral_Densities.pdf
 
==Definitions==
 
===<math>M</math>-sample variance===
 
The <math>M</math>-sample variance is defined<ref name=Allan1966>Allan, D  [http://tf.boulder.nist.gov/general/pdf/7.pdf ''Statistics of Atomic Frequency Standards''], pages 221–230. Proceedings of IEEE, Vol. 54, No 2, February 1966.</ref> (here in a modernized notation form) as
 
:<math>\sigma_y^2(M, T, \tau) = \frac{1}{M-1}\left\{\sum_{i=0}^{M-1}\left[\frac{x(iT+\tau )-x(iT)}{\tau}\right]^2 - \frac{1}{M}\left[\sum_{i=0}^{M-1}\frac{x(iT+\tau)-x(iT)}{\tau}\right]^2\right\}</math>
 
or with [[Allan variance#Average fractional frequency|average fractional frequency]] time series
 
:<math>\sigma_y^2(M, T, \tau) = \frac{1}{M-1}\left\{\sum_{i=0}^{M-1}\bar{y}_i^2 - \frac{1}{M}\left[\sum_{i=0}^{M-1}\bar{y}_i\right]^2\right\}</math>
 
where <math>M</math> is the number of frequency samples used in variance, <math>T</math> is the time between each frequency sample and <math>\tau</math> is the time-length of each frequency estimate.
 
An important aspect is that the <math>M</math>-sample variance can model counter dead-time by letting the time <math>T</math> be different from the observation time&nbsp;<math>\tau</math>.
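
As an illustrative sketch (not drawn from the references above), the fractional-frequency form of the ''M''-sample variance can be computed directly from its definition. The Python snippet below assumes a NumPy array holding the ''M'' average fractional frequency estimates; the helper name is hypothetical.

<syntaxhighlight lang="python">
import numpy as np

def m_sample_variance(y_bar):
    """M-sample variance from M average fractional frequency estimates.

    Implements sigma_y^2(M, T, tau) = 1/(M-1) * (sum(y_i^2) - (1/M) * (sum(y_i))^2),
    i.e. the ordinary sample variance of the frequency estimates.
    """
    y = np.asarray(y_bar, dtype=float)
    M = y.size
    if M < 2:
        raise ValueError("at least two frequency samples are required")
    return (np.sum(y**2) - np.sum(y)**2 / M) / (M - 1)
</syntaxhighlight>

Note that the dead time enters only through how the frequency estimates themselves were taken (the spacing ''T'' versus the averaging time ''τ''), not through the formula itself.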
 
===Allan variance===
The Allan variance is defined as
 
:<math>\sigma_y^2(\tau) = \langle\sigma_y^2(2, \tau, \tau)\rangle</math>
 
which is conveniently expressed as
 
:<math>\sigma_y^2(\tau) = \frac{1}{2}\langle(\bar{y}_{n+1}-\bar{y}_n)^2\rangle = \frac{1}{2\tau^2}\langle(x_{n+2}-2x_{n+1}+x_n)^2\rangle</math>
 
where <math>\tau</math> is the observation period, <math>\bar{y}_n</math> is the ''n''th [[allan variance#Fractional frequency|fractional frequency]] average over the observation time <math>\tau</math>.
 
The samples are taken with no dead-time between them, which is achieved by letting
 
:<math>T = \tau \, </math>
 
===Allan deviation===
Just as with [[standard deviation]] and [[variance]], the Allan deviation is defined as the square root of the Allan variance.
 
:<math>\sigma_y(\tau) = \sqrt{\sigma_y^2(\tau)} \, </math>
 
==Supporting definitions==
 
===Oscillator model===
 
The oscillator being analysed is assumed to follow the basic model of
 
: <math>V(t) = V_0 \sin (\Phi(t)) \, </math>
 
The oscillator is assumed to have a nominal frequency ''v''<sub>''n''</sub>, given as the nominal number of cycles per second (hertz, Hz), corresponding to the nominal angular frequency <math>\omega_n</math> as related by
 
: <math>\omega_n = 2\pi v_n \, </math>
 
Removing the nominal phase ramp, the total phase can be separated into:
 
: <math>\Phi(t) = \omega_nt + \phi(t) = 2\pi v_nt + \phi(t) \, </math>
 
===Time error===
The time error function ''x''(''t'') is the difference between the expected nominal time and the actual time
 
: <math>x(t) = \frac{\phi(t)}{2\pi v_n} = \frac{\Phi(t)}{2\pi v_n} - t = T(t) - t </math>
 
For measured values a time error series TE(''t'') is defined from the reference time function ''T''<sub>REF</sub>(''t'') as
 
: <math>TE(t) = T(t) - T_\text{REF}(t). \, </math>
 
===Frequency function===
The frequency function ''v''(''t'') is the frequency over time defined as
 
: <math>v(t) = \frac{1}{2\pi} \frac{d\Phi(t)}{dt}</math>
 
===Fractional frequency===
The fractional frequency ''y''(''t'') is the normalized delta from the nominal frequency ''v''<sub>''n''</sub>, thus
 
:<math>y(t) = \frac{v(t)-v_n}{v_n} = \frac{v(t)}{v_n}-1</math>
 
===Average fractional frequency===
The average fractional frequency is defined as
 
:<math>\bar{y}(t, \tau) = \frac{1}{\tau}\int\limits_0^\tau y(t+t_v) \, dt_v</math>
 
where the average is taken over the observation time ''&tau;'' and ''y''(''t'') is the fractional frequency error at time ''t''.
 
Since ''y''(''t'') is the derivative of ''x''(''t'') we can without loss of generality rewrite it as
 
:<math>\bar{y}(t, \tau) = \frac{x(t+\tau)-x(t)}{\tau}</math>
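
As a small sketch (not part of the original definitions), the average fractional frequency series can be obtained from sampled time-error data by a first difference, assuming zero dead time so that adjacent samples are spaced by ''τ'':

<syntaxhighlight lang="python">
import numpy as np

def average_fractional_frequency(x, tau):
    """Average fractional frequency from zero-dead-time time-error samples.

    x   -- array of time-error samples x_i = x(i*tau), in seconds
    tau -- observation time between samples, in seconds
    Returns the series y_bar_i = (x_{i+1} - x_i) / tau.
    """
    x = np.asarray(x, dtype=float)
    return np.diff(x) / tau
</syntaxhighlight>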
 
==Estimators==
The definition is based on the statistical [[expected value]], integrating over infinite time. Real-world situations do not allow for such time-series, in which case a statistical [[estimator]] needs to be used in its place. A number of different estimators will be presented and discussed.
 
===Conventions===
*The number of frequency samples in a fractional frequency series is denoted with ''M''.
*The number of time error samples in a time error series is denoted with ''N''.
The relation between the number of fractional frequency samples and the number of time error samples is fixed by the relationship
: <math>N = M + 1 \, </math>
 
*For [[allan variance#Time error|time error]] sample series, ''x''<sub>''i''</sub> denotes the ''i''th sample of the continuous time function ''x''(''t'') as given by
 
:<math>x_i = x(iT) \, </math>
 
where ''T'' is the time between measurements. For Allan variance, the time being used has ''T'' set to the observation time ''τ''.
 
For the [[allan variance#Time error|time error]] sample series, let ''N'' denote the number of samples (''x''<sub>0</sub>&nbsp;...&nbsp;''x''<sub>''N''−1</sub>) in the series. The traditional convention uses index 1 through&nbsp;''N''.
 
*For [[allan variance#Average fractional frequency|average fractional frequency]] sample series, <math>\bar{y}_i</math> denotes the ''i''th sample of the average continuous fractional frequency function ''y''(''t'') as given by
 
:<math>\bar{y}_i = \bar{y}(Ti, \tau) \, </math>
 
which gives
 
:<math>\bar{y}_i = \frac{1}{\tau}\int\limits_0^\tau y(iT + t_v) \, dt_v = \frac{x(iT+\tau)-x(iT)}{\tau}</math>
 
For the Allan variance assumption of ''T'' being ''τ'' it becomes
 
:<math>\bar{y}_i = \frac{x_{i+1}-x_i}{\tau}. </math>
 
For the [[allan variance#Average fractional frequency|average fractional frequency]] sample series, let ''M'' denote the number of samples (<math>\bar{y}_0 \ldots \bar{y}_{M-1}</math>) in the series. The traditional convention uses index 1 through&nbsp;''M''.
 
As a shorthand, the [[allan variance#Average fractional frequency|average fractional frequency]] is often written without the average bar over it. This is however formally incorrect, as the [[allan variance#Fractional frequency|fractional frequency]] and the [[allan variance#Average fractional frequency|average fractional frequency]] are two different functions. A measurement instrument able to produce frequency estimates with no dead-time will actually deliver a frequency average time series, which only needs to be converted into [[allan variance#Average fractional frequency|average fractional frequency]] and may then be used directly.
 
*It is further a convention to let ''&tau;'' denote the nominal time-difference between adjacent phase or frequency samples. A time series taken for one time-difference ''&tau;''<sub>0</sub> can be used to generate Allan variance for any ''&tau;'' being an integer multiple of ''&tau;''<sub>0</sub> in which case ''&tau;''&nbsp;=&nbsp;''n&tau;''<sub>0</sub> is being used, and n becomes a variable for the estimator.
 
*The time between measurements is denoted with ''T'', which is the sum of observation time ''τ'' and dead-time.
 
===Fixed &tau; estimators===
A first simple estimator would be to directly translate the definition into
 
:<math>\sigma_y^2(\tau, M) = \text{AVAR}(\tau, M) = \frac{1}{2(M-1)} \sum_{i=0}^{M-2}(\bar{y}_{i+1}-\bar{y}_i)^2</math>
 
or for the time series
 
:<math>\sigma_y^2(\tau, N) = \text{AVAR}(\tau, N) = \frac{1}{2\tau^2(N-2)} \sum_{i=0}^{N-3}(x_{i+2}-2x_{i+1}+x_i)^2</math>
 
These formulas however only provide the calculation for the ''&tau;''&nbsp;=&nbsp;''&tau;''<sub>0</sub> case.  To calculate for a different value of ''&tau;'', a new time-series needs to be provided.
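
A minimal Python sketch of these fixed-τ estimators (an illustration, not code from the references), assuming NumPy arrays of average fractional frequency samples or of zero-dead-time time-error samples taken at spacing ''τ''<sub>0</sub>:

<syntaxhighlight lang="python">
import numpy as np

def avar_from_frequency(y_bar):
    """Fixed-tau Allan variance from M average fractional frequency samples:
    1/(2(M-1)) * sum_i (y_{i+1} - y_i)^2."""
    d = np.diff(np.asarray(y_bar, dtype=float))    # M-1 first differences
    return 0.5 * np.mean(d**2)

def avar_from_phase(x, tau0):
    """Fixed-tau Allan variance from N time-error samples:
    1/(2 tau0^2 (N-2)) * sum_i (x_{i+2} - 2 x_{i+1} + x_i)^2."""
    d2 = np.diff(np.asarray(x, dtype=float), n=2)  # N-2 second differences
    return np.mean(d2**2) / (2.0 * tau0**2)
</syntaxhighlight>

Both return the variance for τ&nbsp;=&nbsp;''τ''<sub>0</sub> only, in line with the remark above.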
 
===Non-overlapped variable &tau; estimators===
If taking the time-series and skipping past ''n''&nbsp;−&nbsp;1 samples, a new (shorter) time-series would occur with ''nτ''<sub>0</sub> as the time between the adjacent samples, for which the Allan variance could be calculated with the simple estimators. These could be modified to introduce the new variable ''n'' such that no new time-series would have to be generated, but rather the original time series could be reused for various values of ''n''. The estimators become
 
:<math>\sigma_y^2(n\tau_0, M) = \text{AVAR}(n\tau_0, M) = \frac{1}{2n(M-1)} \sum_{i=0}^{\frac{M-1}{n}-1}(\bar{y}_{ni+n}-\bar{y}_{ni})^2</math>
 
with <math>n \le M - 1</math>,
 
and for the time series
 
:<math>\sigma_y^2(n\tau_0, N) = \text{AVAR}(n\tau_0, N) = \frac{1}{2n^2\tau_0^2(\frac{N-1}{n}-1)} \sum_{i=0}^{\frac{N-1}{n}-2}(x_{ni+2n}-2x_{ni+n}+x_{ni})^2</math>
 
with <math>n \le \frac{N-1}{2}</math>.
 
These estimators have a significant drawback in that they drop a significant amount of sample data, as only 1/''n'' of the available samples are being used.
 
===Overlapped variable &tau; estimators===
A technique presented by J.J. Snyder<ref name=Snyder1981>Snyder, J. J.: ''An ultra-high resolution frequency meter'', pages 464–469, Frequency Control Symposium #35, 1981</ref> provided an improved tool, as measurements were overlapped into ''n'' overlapping series formed out of the original series. The overlapping Allan variance estimator was introduced in.<ref name=Howe1981/> This can be shown to be equivalent to averaging the time or normalized frequency samples in blocks of ''n'' samples prior to processing. The resulting estimators become
 
:<math>\sigma_y^2(n\tau_0, M) = \text{AVAR}(n\tau_0, M) = \frac{1}{2n^2(M-2n+1)} \sum_{j=0}^{M-2n} \left( \sum_{i=j}^{j+n-1}\bar{y}_{i+n}-\bar{y}_i \right)^2 </math>
 
or for the time series
 
:<math>\sigma_y^2(n\tau_0, N) = \text{AVAR}(n\tau_0, N) = \frac{1}{2n^2\tau_0^2(N-2n)} \sum_{i=0}^{N-2n-1}(x_{i+2n}-2x_{i+n}+x_i)^2</math>
 
The overlapping estimators have far superior performance over the non-overlapping estimators as ''n'' rises and the time-series is of moderate length. The overlapped estimators have been accepted as the preferred Allan variance estimators in IEEE,<ref name=IEEE1139/> ITU-T<ref name=itutg810>ITU-T Rec. G.810: [http://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC-G.810-199608-I!!PDF-E&type=items ''Definitions and terminology for synchronization and networks''], ITU-T Rec. G.810 (08/96)</ref> and ETSI<ref name=ETSIEN3004610101>ETSI EN 300 462-1-1: [http://www.etsi.org/deliver/etsi_en/300400_300499/3004620701/01.01.01_20/en_3004620701v010101c.pdf ''Definitions and terminology for synchronisation networks''], ETSI EN 300 462-1-1 V1.1.1 (1998–05)</ref> standards for comparable measurements such as needed for telecommunication qualification.
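
A sketch of the overlapping estimator in the time-error form above (illustrative only; the function names are assumptions, not taken from the cited standards):

<syntaxhighlight lang="python">
import numpy as np

def oavar_from_phase(x, tau0, n):
    """Overlapping Allan variance at tau = n*tau0 from N time-error samples:
    1/(2 n^2 tau0^2 (N-2n)) * sum_{i=0}^{N-2n-1} (x_{i+2n} - 2 x_{i+n} + x_i)^2."""
    x = np.asarray(x, dtype=float)
    N = x.size
    if N < 2 * n + 1:
        raise ValueError("time-error series too short for this n")
    d = x[2 * n:] - 2.0 * x[n:-n] + x[:-2 * n]     # N - 2n overlapping terms
    return np.sum(d**2) / (2.0 * n**2 * tau0**2 * (N - 2 * n))

def oadev_from_phase(x, tau0, n):
    """Overlapping Allan deviation at tau = n*tau0."""
    return np.sqrt(oavar_from_phase(x, tau0, n))
</syntaxhighlight>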
 
===Modified Allan variance===
In order to address the inability to separate white phase modulation from flicker phase modulation using traditional Allan variance estimators, an algorithmic filtering was introduced that reduces the bandwidth by ''n''. This filtering provides a modification to the definition and estimators, and is now identified as a separate class of variance called [[modified Allan variance]]. The modified Allan variance measure is a frequency stability measure, just as the Allan variance.
 
===Time stability estimators===
The Allan variance and Allan deviation provide the frequency stability variance and deviation. The time stability variants can be obtained by using frequency-to-time scaling, from the modified (Mod.) Allan variance to [[time deviation|time variance]]
 
:<math>\sigma_x^2(\tau) = \frac{\tau^2}{3}Mod.\sigma_y^2(\tau)</math>
 
and similarly for Allan deviation to [[time deviation]]
 
:<math>\sigma_x(\tau) = \frac{\tau}{\sqrt{3}}Mod.\sigma_y(\tau).</math>
 
===Other estimators===
Further developments have produced improved estimation methods for the same stability measure, the variance/deviation of frequency, but these are known by separate names such as the [[Hadamard variance]], [[modified Hadamard variance]], the [[total variance]], [[modified total variance]] and the [[Theo variance]]. These distinguish themselves in better use of statistics for improved confidence bounds or ability to handle linear frequency drift.
 
==Confidence intervals and equivalent degrees of freedom==
Statistical estimators will calculate an estimated value from the sample series used. The estimates may deviate from the true value, and the range of values which for some probability will contain the true value is referred to as the [[confidence interval]]. The confidence interval depends on the number of observations in the sample series, the dominant noise type, and the estimator being used. The width also depends on the statistical certainty required, i.e. the probability that the true value lies within the stated range of values. For variable-τ estimators, the ''&tau;''<sub>0</sub> multiple ''n'' is also a variable.
 
===Confidence interval===
The [[confidence interval]] can be established using [[chi-squared distribution]] by using the [[Variance#Distribution of the sample variance|distribution of the sample variance]]:<ref name=IEEE1139/><ref name=Howe1981>D.A. Howe, D.W. Allan and J.A. Barnes: [http://tf.boulder.nist.gov/general/pdf/554.pdf ''Properties of signal sources and measurement methods''], pages 464–469, Frequency Control Symposium #35, 1981</ref>
 
:<math>\chi^2 = \frac{(d.f.)s^2}{\sigma^2}</math>
 
where ''s''<sup>2</sup> is the sample variance of our estimate, ''σ''<sup>2</sup> is the true variance value, ''d.f.'' is the degrees of freedom for the estimator and ''χ''<sup>2</sup> is the chi-squared value for a certain probability. For a 90% probability, covering the range from the 5% to the 95% points on the probability curve, the upper and lower limits can be found using the inequality:
 
:<math>\chi^2(0.05) \le \frac{(d.f.)s^2}{\sigma^2} \le \chi^2(0.95)</math>
 
which after rearrangement for the true variance becomes:
 
:<math>\frac{(d.f.)s^2}{\chi^2(0.95)} \le \sigma^2 \le \frac{(d.f.)s^2}{\chi^2(0.05)}</math>
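
As a sketch of this calculation (assuming SciPy's chi-squared distribution; not taken from the cited references), the bounds can be evaluated numerically:

<syntaxhighlight lang="python">
from scipy.stats import chi2

def avar_confidence_interval(s2, edf, confidence=0.90):
    """Two-sided confidence interval for the true Allan variance.

    s2         -- estimated (sample) Allan variance
    edf        -- equivalent degrees of freedom for the estimator and noise type
    confidence -- e.g. 0.90 for the 5 %..95 % bounds used above
    Returns (lower, upper) bounds on the true variance sigma^2.
    """
    p_lo = (1.0 - confidence) / 2.0            # e.g. 0.05
    p_hi = 1.0 - p_lo                          # e.g. 0.95
    lower = edf * s2 / chi2.ppf(p_hi, edf)     # (d.f.) s^2 / chi^2(0.95)
    upper = edf * s2 / chi2.ppf(p_lo, edf)     # (d.f.) s^2 / chi^2(0.05)
    return lower, upper
</syntaxhighlight>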
 
===Effective degrees of freedom===
The [[Degrees of freedom (statistics)|degrees of freedom]] represents the number of free variables capable of contributing to the estimate. Depending on the estimator and noise type, the effective degrees of freedom varies. Estimator formulas depending on ''N'' and ''n'' have been empirically found<ref name=Howe1981/> to be:
 
{| border="1" cellpadding="5" cellspacing="0" align="center"
|+ '''Allan variance degrees of freedom'''
|-
|Noise type
|degrees of freedom
|-
|white phase modulation (WPM)
|<math>d.f. \cong \frac{(N+1)(N-2n)}{2(N-n)}</math>
|-
|flicker phase modulation (FPM)
|<math>d.f. \cong \exp\left(\sqrt{\ln \frac{N-1}{2n} \ln \frac{(2n+1)(N-1)}{4}}\right)</math>
|-
|white frequency modulation (WFM)
|<math>d.f. \cong \left[ \frac{3(N-1)}{2n} - \frac{2(N-2)}{N}\right]\frac{4n^2}{4n^2+5}</math>
|-
|flicker frequency modulation (FFM)
|<math>d.f. \cong \begin{cases}\frac{2(N-2)}{2.3N-4.9} & n = 1 \\ \frac{5N^2}{4n(N+3n)}& n \ge 2\end{cases}</math>
|-
|random walk frequency modulation (RWFM)
|<math>d.f. \cong \frac{N-2}{n}\frac{(N-1)^2-3n(N-1)+4n^2}{(N-3)^2}</math>
|}
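
As an illustration (not from the cited paper itself), two of the tabulated formulas translate directly into code; the remaining noise types follow the same pattern:

<syntaxhighlight lang="python">
def edf_white_pm(N, n):
    """Equivalent degrees of freedom for white phase modulation (see table above)."""
    return (N + 1) * (N - 2 * n) / (2.0 * (N - n))

def edf_white_fm(N, n):
    """Equivalent degrees of freedom for white frequency modulation (see table above)."""
    return (3.0 * (N - 1) / (2.0 * n) - 2.0 * (N - 2) / N) * 4.0 * n**2 / (4.0 * n**2 + 5.0)
</syntaxhighlight>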
 
==Power-law noise==
The Allan variance will treat various [[power-law noise]] types differently, conveniently allowing them to be identified and their strength estimated. As a convention, the measurement system bandwidth (high corner frequency) is denoted ''f''<sub>''H''</sub>.
{| border="1" cellpadding="5" cellspacing="0" align="center"
|+ '''Allan variance power-law response'''
|-
|Power-law noise type
|Phase noise slope
|Frequency noise slope
|Power coefficient
|Phase noise
|Allan variance
|Allan deviation
|-
|white phase modulation (WPM)
|<math>f^0=1</math>
|<math>f^2</math>
|<math>h_2</math>
|<math>S_x(f) = \frac{1}{(2\pi)^2}h_2</math>
|<math>\sigma_y^2(\tau) = \frac{3 f_H}{4\pi^2\tau^2}h_2</math>
|<math>\sigma_y(\tau) = \frac{\sqrt{3 f_H}}{2\pi\tau}\sqrt{h_2}</math>
|-
|flicker phase modulation (FPM)
|<math>f^{-1}</math>
|<math>f^1=f</math>
|<math>h_1</math>
|<math>S_x(f) = \frac{1}{(2\pi)^2f}h_1</math>
|<math>\sigma_y^2(\tau) = \frac{3[\gamma+\ln(2\pi f_H\tau)]-\ln 2}{4\pi^2\tau^2}h_1</math>
|<math>\sigma_y(\tau) = \frac{\sqrt{3[\gamma+\ln(2\pi f_H\tau)]-\ln 2}}{2\pi\tau}\sqrt{h_1}</math>
|-
|white frequency modulation (WFM)
|<math>f^{-2}</math>
|<math>f^0=1</math>
|<math>h_0</math>
|<math>S_x(f) = \frac{1}{(2\pi)^2f^2}h_0</math>
|<math>\sigma_y^2(\tau) = \frac{1}{2\tau}h_0</math>
|<math>\sigma_y(\tau) = \frac{1}{\sqrt{2\tau}}\sqrt{h_0}</math>
|-
|flicker frequency modulation (FFM)
|<math>f^{-3}</math>
|<math>f^{-1}</math>
|<math>h_{-1}</math>
|<math>S_x(f) = \frac{1}{(2\pi)^2f^3}h_{-1}</math>
|<math>\sigma_y^2(\tau) = 2\ln(2)h_{-1}</math>
|<math>\sigma_y(\tau) = \sqrt{2\ln(2)}\sqrt{h_{-1}}</math>
|-
|random walk frequency modulation (RWFM)
|<math>f^{-4}</math>
|<math>f^{-2}</math>
|<math>h_{-2}</math>
|<math>S_x(f) = \frac{1}{(2\pi)^2f^4}h_{-2}</math>
|<math>\sigma_y^2(\tau) = \frac{2\pi^2\tau}{3}h_{-2}</math>
|<math>\sigma_y(\tau) = \frac{\pi\sqrt{2\tau}}{\sqrt{3}}\sqrt{h_{-2}}</math>
|-
|}
 
These results are as found in<ref name=NBSTN394>J.A. Barnes, A.R. Chi, L.S. Cutler, D.J. Healey, D.B. Leeson, T.E. McGunigal, J.A. Mullen, W.L. Smith, R. Sydnor, R.F.C. Vessot, and G.M.R. Winkler: [http://tf.boulder.nist.gov/general/pdf/264.pdf ''Characterization of Frequency Stability''], NBS Technical Note 394, 1970</ref><ref>J.A. Barnes, A.R. Chi, L.S. Cutler, D.J. Healey, D.B. Leeson, T.E. McGunigal, J.A. Mullen, Jr., W.L. Smith, R.L. Sydnor, R.F.C. Vessot, and G.M.R. Winkler: [http://tf.boulder.nist.gov/general/pdf/118.pdf ''Characterization of Frequency Stability''], IEEE Transactions on Instruments and Measurements 20, pp. 105&ndash;120, 1971</ref> and in modern forms.<ref name=Bregni2002>Bregni, Stefano: [http://books.google.com/books?id=APEBaL4WHNoC&printsec=frontcover ''Synchronisation of digital telecommunication networks''], Wiley 2002, ISBN 0-471-61550-1</ref><ref name=NISTSP1065>NIST SP 1065: [http://tf.nist.gov/timefreq/general/pdf/2220.pdf ''Handbook of Frequency Stability Analysis'']</ref>
 
The Allan variance is unable to distinguish between WPM and FPM, but is able to resolve the other power-law noise types. In order to distinguish WPM and FPM, the [[modified Allan variance]] needs to be employed.
 
The above formulas assume that
 
:<math>\tau \gg \frac{1}{2\pi f_H}</math>
 
and thus that the bandwidth corresponding to the observation time is much lower than the instrument's bandwidth. When this condition is not met, all noise forms depend on the instrument's bandwidth.
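
A rough numerical check (not part of the referenced material) is to simulate white frequency modulation and compare the estimated Allan deviation with the tabulated value. The sketch below assumes the overlapping estimator sketched earlier and that the fractional frequency samples are independent Gaussian values with standard deviation σ at spacing ''τ''<sub>0</sub>, corresponding to ''h''<sub>0</sub>&nbsp;=&nbsp;2σ²''τ''<sub>0</sub>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
tau0 = 1.0                        # sampling interval, seconds
sigma = 1e-11                     # std of each fractional frequency sample
h0 = 2.0 * sigma**2 * tau0        # assumed white-FM power coefficient

# White frequency noise, integrated into time-error samples x_{i+1} = x_i + y_i*tau0.
y = rng.normal(0.0, sigma, size=100_000)
x = np.concatenate(([0.0], np.cumsum(y) * tau0))

for n in (1, 10, 100):
    tau = n * tau0
    measured = oadev_from_phase(x, tau0, n)   # overlapping estimator from earlier sketch
    expected = np.sqrt(h0 / (2.0 * tau))      # table value for white FM
    print(f"tau = {tau:7.1f} s   measured {measured:.3e}   expected {expected:.3e}")
</syntaxhighlight>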
 
===&alpha;-&mu; mapping===
The detailed mapping of a phase modulation of the form
 
:<math>S_x(f) = \frac{1}{4\pi^2}h_{\alpha}f^{\alpha-2} = \frac{1}{4\pi^2}h_{\alpha}f^{\beta}</math>
 
where
 
:<math>\beta \equiv \alpha - 2</math>
 
or frequency modulation of the form
 
:<math>S_y(f) = h_{\alpha}f^{\alpha}</math>
 
into the Allan variance of the form
 
:<math>\sigma_y^2(\tau) = K_{\alpha}h_{\alpha}\tau^{\mu}</math>
 
can be significantly simplified by providing a mapping between α and μ. A mapping between α and ''K''<sub>α</sub> is also presented for convenience:
 
{| border="1" cellpadding="5" cellspacing="0" align="center"
|+ '''Allan variance α-μ mapping'''
|-
|α
|β
|μ
|''K''<sub>α</sub>
|-
| -2
| -4
| 1
|<math>\frac{2\pi^2}{3}</math>
|-
| -1
| -3
| 0
|<math>2\ln{2}</math>
|-
| 0
| -2
| -1
|<math>\frac{1}{2}</math>
|-
| 1
| -1
| -2
|<math>\frac{3[\gamma+\ln(2\pi f_H\tau)]-\ln 2}{4\pi^2}</math>
|-
| 2
| 0
| -2
|<math>\frac{3f_H}{4\pi^2}</math>
|-
|}
 
The mapping is taken from.<ref name=IEEE1139/>
 
===General conversion from phase noise===
A signal with spectral phase noise <math>S_\phi</math> with units rad<sup>2</sup>/Hz can be converted to Allan variance by:<ref name=NISTSP1065/>
 
:<math>\sigma^2_y(\tau) = \frac{2}{\nu_0^2} \int^{f_b}_0 S_\phi(f) \frac{\sin^4(\pi \tau f)}{(\pi \tau)^2} df</math>
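
A sketch of this conversion by direct numerical integration (an illustration under the stated formula; the function name and the simple trapezoidal integration are assumptions, not from the cited handbook):

<syntaxhighlight lang="python">
import numpy as np

def avar_from_phase_noise(S_phi, f_b, nu0, tau, num=200_000):
    """Allan variance from a one-sided phase-noise PSD S_phi(f) in rad^2/Hz.

    Numerically integrates
        sigma_y^2(tau) = (2 / nu0^2) * int_0^{f_b} S_phi(f) sin^4(pi*tau*f) / (pi*tau)^2 df.

    S_phi -- callable returning the PSD at frequency f (Hz)
    f_b   -- upper integration limit (measurement bandwidth), Hz
    nu0   -- nominal carrier frequency, Hz
    tau   -- observation interval, seconds
    """
    f = np.linspace(1e-6, f_b, num)            # start just above 0 to avoid f = 0
    g = S_phi(f) * np.sin(np.pi * tau * f)**4 / (np.pi * tau)**2
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(f))   # trapezoidal rule
    return 2.0 / nu0**2 * integral
</syntaxhighlight>

For example, a flat (white PM) phase-noise floor could be evaluated as <code>avar_from_phase_noise(lambda f: 1e-24, 1e5, 10e6, 1.0)</code>; the numbers here are purely hypothetical.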
 
==Linear response==
While Allan variance is intended to be used to distinguish noise forms, it will depend on some but not all linear responses to time. They are given in the table:
 
{| border="1" cellpadding="5" cellspacing="0" align="center"
|+ '''Allan variance linear response'''
|-
! Linear effect
! time response
! frequency response
! Allan variance
! Allan deviation
|-
| phase offset
| <math>x_0</math>
| <math>0</math>
| <math>0</math>
| <math>0</math>
|-
| frequency offset
| <math>y_0t</math>
| <math>y_0</math>
| <math>0</math>
| <math>0</math>
|-
| linear drift
| <math>\frac{Dt^2}{2}</math>
| <math>Dt</math>
| <math>\frac{D^2\tau^2}{2}</math>
| <math>\frac{D\tau}{\sqrt{2}}</math>
|-
|}
 
Thus, linear drift will contribute to the output result. When measuring a real system, the linear drift or other drift mechanism may need to be estimated and removed from the time-series prior to calculating the Allan variance.<ref name=Bregni2002/>
 
==Time and frequency filter properties==
In analysing the properties of Allan variance and friends, it has proven useful to consider the filter properties on the normalized frequency. Starting from the definition of the Allan variance,
 
:<math>\sigma_y^2(\tau) = \frac{1}{2}\langle(\bar{y}_{i+1}-\bar{y}_i)^2\rangle</math>
 
where
 
:<math>\bar{y}_i = \frac{1}{\tau} \int\limits_0^\tau y(i\tau+t) \, dt.</math>
 
Replacing the time series of <math>y_i</math> with its power spectral density <math>S_y(f)</math>, the Allan variance can be expressed in the frequency domain as
 
:<math>\sigma_y^2(\tau) = \int_0^\infty S_y(f)\frac{2\sin^4\pi\tau f}{(\pi \tau f)^2} \, df</math>
 
Thus the transfer function for Allan variance is
 
:<math>\left\vert H_A(f)\right\vert^2 = \frac{2\sin^4\pi \tau f}{(\pi \tau f)^2}. </math>
 
==Bias functions==
The ''M''-sample variance, and the defined special case Allan variance, will experience [[systematic bias]] depending on the number of samples ''M'' and on the relationship between ''T'' and ''τ''. In order to address these biases, the bias functions ''B''<sub>1</sub> and ''B''<sub>2</sub> have been defined<ref name=NBSTN375>Barnes, J.A.: [http://tf.boulder.nist.gov/general/pdf/11.pdf ''Tables of Bias Functions, ''B''<sub>1</sub> and ''B''<sub>2</sub>, for Variances Based On Finite Samples of Processes with Power Law Spectral Densities''], NBS Technical Note 375, 1969</ref> and allow for conversion between different ''M'' and ''T'' values.
 
These bias functions are not sufficient for handling the bias resulting from concatenating ''M'' samples to the ''Mτ''<sub>0</sub> observation time over the ''MT''<sub>0</sub>, with the dead-time distributed among the ''M'' measurement blocks rather than at the end of the measurement. This rendered the need for the ''B''<sub>3</sub> bias function.<ref name=NISTTN1318/>
 
The bias functions are evaluated for a particular µ value, so the α-µ mapping needs to be done for the dominant noise form as found using [[noise identification]]. Alternatively, as proposed in<ref name=Allan1966/> and elaborated in,<ref name=NBSTN375/> the µ value of the dominant noise form may be inferred from the measurements using the bias functions.
 
===B<sub>1</sub> bias function===
The ''B''<sub>1</sub> bias function relates the ''M''-sample variance with the 2-sample variance (Allan variance), keeping the time between measurements ''T'' and the time for each measurement ''τ'' constant. It is defined<ref name=NBSTN375/> as
 
:<math>B_1 (N, r, \mu ) = \frac{ \left \langle\sigma_y^2(N, T, \tau ) \right \rangle}{ \left \langle\sigma_y^2(2, T, \tau ) \right\rangle}</math>
 
where
 
:<math>r = \frac{T}{\tau}.</math>
 
The bias function becomes after analysis
 
:<math>B_1(N, r, \mu) = \frac{1 + \sum_{n=1}^{N-1} \frac{N-n}{N(N-1)}\left [ 2\left (rn\right )^{\mu+2} - \left (rn+1\right )^{\mu+2} -\left |rn-1\right |^{\mu+2}\right ]}{1 + \frac{1}{2}\left [ 2r^{\mu+2} - \left (r+1\right )^{\mu+2}-\left |r-1\right |^{\mu+2}\right ]}.</math>
 
===B<sub>2</sub> bias function===
The ''B''<sub>2</sub> bias function relates the 2-sample variance for sample time ''T'' with the 2-sample variance (Allan variance), keeping the number of samples ''N''&nbsp;=&nbsp;2 and the observation time ''τ'' constant, and is defined<ref name=NBSTN375/>
 
:<math>B_2 (r, \mu ) = \frac{ \left \langle\sigma_y^2(2, T, \tau ) \right \rangle}{ \left \langle\sigma_y^2(2, \tau, \tau ) \right\rangle}</math>
 
where
 
:<math>r = \frac{T}{\tau}.</math>
 
The bias function becomes after analysis
 
:<math>B_2(r, \mu) = \frac{1 + \frac{1}{2}\left [ 2r^{\mu+2} - \left (r+1\right )^{\mu+2}-\left |r-1\right |^{\mu+2}\right ]}{2\left ( 1-2^{\mu}\right )}. </math>
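
An illustrative Python transcription of the ''B''<sub>1</sub> and ''B''<sub>2</sub> formulas above (the helper names are assumptions, not from the cited technical note):

<syntaxhighlight lang="python">
def _f(A, mu):
    """Helper term 2A^(mu+2) - (A+1)^(mu+2) - |A-1|^(mu+2) shared by the bias functions."""
    return 2.0 * A**(mu + 2) - (A + 1.0)**(mu + 2) - abs(A - 1.0)**(mu + 2)

def bias_b1(N, r, mu):
    """B1: N-sample variance relative to the 2-sample variance, with r = T/tau."""
    num = 1.0 + sum((N - n) / (N * (N - 1.0)) * _f(r * n, mu) for n in range(1, N))
    den = 1.0 + 0.5 * _f(r, mu)
    return num / den

def bias_b2(r, mu):
    """B2: 2-sample variance with dead time (r = T/tau) relative to the Allan variance."""
    return (1.0 + 0.5 * _f(r, mu)) / (2.0 * (1.0 - 2.0**mu))
</syntaxhighlight>

A quick sanity check is that <code>bias_b1(2, r, mu)</code> and <code>bias_b2(1, mu)</code> both evaluate to 1, as expected when no conversion is needed.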
 
===''B''<sub>3</sub> bias function===
The ''B''<sub>3</sub> bias function relates the 2-sample variance for sample time ''MT''<sub>0</sub> and observation time ''Mτ''<sub>0</sub> with the 2-sample variance (Allan variance) and is defined<ref name=NISTTN1318>J.A. Barnes and D.W. Allan: [http://tf.boulder.nist.gov/general/pdf/878.pdf ''Variances Based on Data with Dead Time Between the Measurements''], NIST Technical Note 1318, 1990</ref> as
 
:<math>B_3 (N, M, r, \mu) = \frac{\left\langle\sigma_y^2(N, M, T, \tau)\right\rangle}{\left\langle\sigma_y^2(N, T, \tau)\right\rangle}</math>
 
where
 
:<math>T = M T_0 \, </math>
 
:<math>\tau = M \tau_0. \, </math>
 
The ''B''<sub>3</sub> bias function is useful to adjust non-overlapping and overlapping variable ''τ'' estimator values based on dead-time measurements of observation time ''τ''<sub>0</sub> and time between observations ''T''<sub>0</sub> to normal dead-time estimates.
 
The bias function becomes after analysis (for the ''N''&nbsp;=&nbsp;2 case)
 
:<math>B_3(2, M, r, \mu) = \frac{2M + MF(Mr) - \sum_{n=1}^{M-1} (M-n)\left [ 2F(nr) - F((M+n)r) + F((M-n)r)\right ]}{M^{\mu+2} \left [ F(r) + 2\right ]}</math>
 
where
 
:<math>F(A) = 2A^{\mu+2} - (A+1)^{\mu+2} - |A-1|^{\mu+2}. \, </math>
 
===&tau; bias function===
While formally not formulated, it has been indirectly inferred as a consequence of the α-µ mapping. When comparing two Allan variance measures for different τ values, assuming the same dominant noise form (i.e. the same µ coefficient), a bias can be defined as
 
:<math>B_\tau (\tau_1, \tau_2, \mu ) = \frac{ \left \langle\sigma_y^2(2, \tau_2, \tau_2 ) \right \rangle}{ \left \langle\sigma_y^2(2, \tau_1, \tau_1 ) \right\rangle}. \, </math>
 
The bias function becomes after analysis
 
:<math>B_\tau (\tau_1, \tau_2, \mu ) = \left ( \frac{\tau_2}{\tau_1} \right)^\mu.</math>
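
A one-line sketch of this τ bias (illustrative; the function name is an assumption):

<syntaxhighlight lang="python">
def scale_avar(avar_tau1, tau1, tau2, mu):
    """Scale an Allan variance from tau1 to tau2, assuming a single dominant
    power-law noise type with exponent mu (the tau-bias function above)."""
    return avar_tau1 * (tau2 / tau1)**mu
</syntaxhighlight>

For white frequency modulation, for example, µ&nbsp;=&nbsp;−1, so doubling τ halves the Allan variance.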
 
===Conversion between values===
In order to convert from one set of measurements to another, the ''B''<sub>1</sub>, ''B''<sub>2</sub> and τ bias functions can be assembled. First the ''B''<sub>1</sub> function converts the (''N''<sub>1</sub>,&nbsp;''T''<sub>1</sub>,&nbsp;''τ''<sub>1</sub>) value into (2,&nbsp;''T''<sub>1</sub>,&nbsp;''τ''<sub>1</sub>), from which the ''B''<sub>2</sub> function converts it into a (2,&nbsp;''τ''<sub>1</sub>,&nbsp;''τ''<sub>1</sub>) value, thus the Allan variance at&nbsp;''τ''<sub>1</sub>. The Allan variance measure can be converted using the τ bias function from ''τ''<sub>1</sub> to ''τ''<sub>2</sub>, from which it can then be converted to the (2,&nbsp;''T''<sub>2</sub>,&nbsp;''τ''<sub>2</sub>) value using ''B''<sub>2</sub> and finally, using ''B''<sub>1</sub>, to the (''N''<sub>2</sub>,&nbsp;''T''<sub>2</sub>,&nbsp;''τ''<sub>2</sub>) variance. The complete conversion becomes
 
:<math>\left \langle \sigma_y^2(N_2, T_2, \tau_2) \right \rangle = \left ( \frac{\tau_2}{\tau_1} \right )^\mu \left [ \frac{B_1(N_2, r_2, \mu)B_2(r_2, \mu)}{B_1(N_1, r_1, \mu)B_2(r_1, \mu)} \right ] \left \langle \sigma_y^2(N_1, T_1, \tau_1) \right \rangle</math>
 
where
 
:<math>r_1 = \frac{T_1}{\tau_1}</math>

:<math>r_2 = \frac{T_2}{\tau_2}</math>
 
Similarly, for concatenated measurements using M sections, the logical extension becomes
 
:<math>\left \langle \sigma_y^2(N_2, M_2, T_2, \tau_2) \right \rangle = \left ( \frac{\tau_2}{\tau_1} \right )^\mu \left [ \frac{B_3(N_2, M_2, r_2, \mu)B_1(N_2, r_2, \mu)B_2(r_2, \mu)}{B_3(N_1, M_1, r_1, \mu)B_1(N_1, r_1, \mu)B_2(r_1, \mu)} \right ] \left \langle \sigma_y^2(N_1, M_1, T_1, \tau_1) \right \rangle.</math>
 
==Measurement issues==
When making measurements to calculate Allan variance or Allan deviation, a number of issues may cause the measurements to degenerate. Covered here are the effects specific to Allan variance, where results would be biased.
 
===Measurement bandwidth limits===
A measurement system is expected to have a bandwidth at or below that of the Nyquist rate, as described by the [[Nyquist–Shannon sampling theorem]]. As can be seen in the power-law noise formulas, the white and flicker noise modulations both depend on the upper corner frequency <math>f_H</math> (these systems are assumed to be low-pass filtered only). Considering the frequency filter property, it can be clearly seen that low-frequency noise has greater impact on the result. For relatively flat phase modulation noise types (e.g. WPM and FPM), the filtering has relevance, whereas for noise types with greater slope the upper frequency limit becomes of less importance, assuming that the measurement system bandwidth is wide relative to <math>\tau</math> as given by
 
:<math>\tau \gg \frac{1}{2\pi f_H}.</math>
 
When this assumption is not met, the effective bandwidth <math>f_H</math> needs to be noted alongside the measurement. The interested reader should consult NBS TN394.<ref name=NBSTN394/>
 
If however one adjusts the bandwidth of the estimator by using integer multiples of the sample time <math>n\tau_0</math>, then the system bandwidth impact can be reduced to insignificant levels. For telecommunication needs, such methods have been required in order to ensure comparability of measurements and to allow some freedom for vendors to do different implementations. This is done, for instance, in ITU-T Rec. G.813<ref name=ITUTG813>ITU-T Rec. G.813: [http://www.itu.int/rec/T-REC-G.813/recommendation.asp?lang=en&parent=T-REC-G.813-200303-I ''Timing characteristics of SDH equipment slave clock (SEC)''], ITU-T Rec. G.813 (03/2003)</ref> for the TDEV measurement.
 
It can be recommended that the first <math>\tau_0</math> multiples be ignored, such that the majority of the detected noise is well within the passband of the measurement system's bandwidth.
 
Further developments on the Allan variance were performed to let the hardware bandwidth be reduced by software means. This development of a software bandwidth allowed addressing the remaining noise, and the method is now referred to as [[modified Allan variance]]. This bandwidth reduction technique should not be confused with the enhanced variant of [[modified Allan variance]], which also changes a smoothing filter bandwidth.
 
===Dead time in measurements===
Many time and frequency measurement instruments have the stages of arming time, time-base time and processing time, and may then re-trigger the arming. The arming time runs from the moment the arming is triggered to when the start event occurs on the start channel. The time-base then ensures that a minimum amount of time passes before an event on the stop channel is accepted as the stop event. The number of events and the time elapsed between the start event and stop event are recorded and presented during the processing time. While the processing occurs (also known as the dwell time), the instrument is usually unable to do another measurement. After the processing has occurred, an instrument in continuous mode triggers the arm circuit again. The time between the stop event and the following start event becomes [[dead time]], during which the signal is not being observed. Such dead time introduces systematic measurement biases, which need to be compensated for in order to get proper results. For such measurement systems, the time ''T'' denotes the time between adjacent start events (and thus measurements), while <math>\tau</math> denotes the time-base length, i.e. the nominal length between the start and stop event of any measurement.
 
Dead time effects on measurements have such an impact on the produced result that much study of the field has been done in order to quantify their properties properly. The introduction of zero dead-time counters removed the need for this analysis. A zero dead-time counter has the property that the stop event of one measurement is also used as the start event of the following measurement. Such counters create a series of event and time timestamp pairs, one for each channel, spaced by the time-base. Such measurements have also proved useful in other forms of time-series analysis.
 
Measurements performed with dead time can be corrected using the bias functions ''B''<sub>1</sub>, ''B''<sub>2</sub> and ''B''<sub>3</sub>. Thus, dead time as such does not prohibit access to the Allan variance, but it makes it more problematic. The dead time must be known such that the time between samples ''T'' can be established.
 
===Measurement length and effective use of samples===
When studying the effect that the length ''N'' of the sample series and the variable-τ parameter ''n'' have on the [[Allan variance#Confidence interval|confidence intervals]], the confidence intervals may become very large, since the [[Allan variance#Effective degree of freedom|effective degrees of freedom]] may become small for some combinations of ''N'' and ''n'' for the dominant noise form (for that τ).
 
The effect may be that the estimated value may be much smaller or much greater than the real value, which may lead to false conclusions of the result.
 
It is recommended that the confidence interval is plotted along with the data, such that the reader of the plot is able to be aware of the statistical uncertainty of the values.
 
It is recommended that the length of the sample sequence, i.e. the number of samples ''N'' is kept high to ensure that confidence interval is small over the τ-range of interest.
 
It is recommended that the τ-range, as swept by the ''&tau;''<sub>0</sub> multiplier ''n'', is limited in the upper end relative to ''N'', such that the reader of the plot is not confused by highly unstable estimator values.
 
It is recommended that estimators providing better degrees of freedom values be used in place of the Allan variance estimators, or to complement them where they outperform the Allan variance estimators. Among those, the [[Total variance]] and [[Theo variance]] estimators should be considered.
 
===Dominant noise type===
A large number of conversion constants, bias corrections and confidence intervals depend on the dominant noise type. For proper interpretation, the dominant noise type for the particular τ of interest must be identified through noise identification. Failing to identify the dominant noise type will produce biased values. Some of these biases may be several orders of magnitude, so they may be of large significance.
 
===Linear drift===
Systematic effects on the signal are only partly cancelled. Phase and frequency offsets are cancelled, but linear drift or other higher-degree forms of polynomial phase curves will not be cancelled and thus form a measurement limitation. Curve fitting and removal of the systematic offset could be employed. Often removal of linear drift can be sufficient. Use of linear drift estimators such as the [[Hadamard variance]] could also be employed. A linear drift removal could be employed using a moment-based estimator.
 
===Measurement instrument estimator bias===
Traditional instruments provided only the measurement of single events or event pairs. The introduction of the improved statistical tool of overlapping measurements by J.J. Snyder<ref name=Snyder1981/> allowed much improved resolution in frequency readouts, breaking the traditional digits/time-base balance. While such methods are useful for their intended purpose, using such smoothed measurements for Allan variance calculations would give a false impression of high resolution,<ref name=Rubiola2005>{{Cite journal|url=http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf|doi=10.1063/1.1898203|title=On the measurement of frequency and of its sample variance with high-resolution counters|year=2005|last1=Rubiola|first1=Enrico|journal=Review of Scientific Instruments|volume=76|pages=054703|issue=5|arxiv = physics/0411227 |bibcode = 2005RScI...76e4703R }}</ref><ref name=Rubiola2005ifcs>Rubiola, Enrico: [http://www.femto-st.fr/~rubiola/pdf-articles/conference/2005-ifcs-counters.pdf ''On the measurement of frequency and of its sample variance with high-resolution counters''], Proc. Joint IEEE International Frequency Control Symposium and Precise Time and Time Interval Systems and Applications Meeting pp. 46–49, Vancouver, Canada, 29–31 August 2005.</ref><ref name=Rubiola2008cntpres>Rubiola, Enrico: [http://www.femto-st.fr/~rubiola/pdf-slides/2008T-femto-counters.pdf ''High-resolution frequency counters (extended version, 53 slides)''], seminar given at the FEMTO-ST Institute, at the Université Henri Poincaré, and at the Jet Propulsion Laboratory, NASA-Caltech.</ref> but for longer τ the effect is gradually removed, and the lower-τ region of the measurement has biased values. This bias provides lower values than it should, so it is an overoptimistic bias (assuming that low numbers are what one wishes), reducing the usability of the measurement rather than improving it. Such smart algorithms can usually be disabled or otherwise circumvented by using time-stamp mode, which is much preferred if available.
 
==Practical measurements==
While several approaches to measurement of Allan variance can be devised, a simple example may illustrate how measurements can be performed.
 
===Measurement===
All measurements of Allan variance will in effect be the comparison of two different clocks. Consider a reference clock and a device under test (DUT), both having a common nominal frequency of 10&nbsp;MHz. A time-interval counter is used to measure the time between the rising edge of the reference (channel A) and the rising edge of the device under test.
 
In order to provide evenly spaced measurements, the reference clock is divided down to form the measurement rate, triggering the time-interval counter (ARM input). This rate can be 1&nbsp;Hz (using the [[Pulse per second|1 PPS]] output of a reference clock), but other rates like 10&nbsp;Hz and 100&nbsp;Hz can also be used. The speed at which the time-interval counter can complete the measurement, output the result and prepare itself for the next arm will limit the trigger frequency.
 
A computer is then useful to record the series of time-differences being observed.
 
===Post-processing===
The recorded time-series require post-processing to unwrap the wrapped phase, such that a continuous phase error is provided. If necessary, logging and measurement mistakes should also be fixed. Drift estimation and drift removal should be performed; the drift mechanism needs to be identified and understood for the sources. Drift limitations in measurements can be severe, so it is necessary to let the oscillators stabilize by being powered on long enough before the measurement.
 
The Allan variance can then be calculated using the estimators given, and for practical purposes the overlapping estimator should be used due to its superior use of data over the non-overlapping estimator. Other estimators, such as the total or Theo variance estimators, could also be used if bias corrections are applied such that they provide Allan-variance-compatible results.
 
To form the classical plots, the Allan deviation (square root of Allan variance) is plotted in log-log format against the observation interval τ.
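
A plotting sketch (illustrative only), assuming a post-processed time-error array ''x'' at interval ''τ''<sub>0</sub> and the overlapping deviation estimator sketched earlier:

<syntaxhighlight lang="python">
import numpy as np
import matplotlib.pyplot as plt

# x: post-processed time-error series (seconds); tau0: measurement interval (seconds).
# Sweep n over a logarithmic grid, keeping n well below N for reasonable confidence.
ns = np.unique(np.logspace(0, np.log10(max((len(x) - 1) // 4, 1)), 30).astype(int))
taus = ns * tau0
adevs = [oadev_from_phase(x, tau0, n) for n in ns]

plt.loglog(taus, adevs, "o-")
plt.xlabel("Observation interval tau (s)")
plt.ylabel("Allan deviation")
plt.grid(True, which="both")
plt.show()
</syntaxhighlight>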
 
===Equipment and software===
The time-interval counter is typically an off-the-shelf, commercially available counter. Limiting factors involve single-shot resolution, trigger jitter, speed of measurements and stability of the reference clock. The computer collection and post-processing can be done using existing commercial or public domain software. Highly advanced solutions exist which provide measurement and computation in one box.
 
==Research history==
The field of frequency stability has been studied for a long time; however, it was found during the 1960s that coherent definitions were lacking. The NASA-IEEE Symposium on Short-Term Stability in 1964 was followed by the IEEE Proceedings publishing a special issue on frequency stability in February 1966.
 
The NASA-IEEE Symposium on Short-Term Stability in November 1964<ref name=NASA1964>NASA: [http://hdl.handle.net/2060/19660001092] ''Short-Term Frequency Stability'', NASA-IEEE symposium on Short Term Frequency Stability Goddard Space Flight Center 23–24 November 1964, NASA Special Publication 80</ref> brought together many fields and uses of short- and long-term stability, with papers from many different contributors. The articles and panel discussions are interesting in that they concur on the existence of the frequency flicker noise and the wish for achieving a common definition for short- and long-term stability (even if the conference name only reflects the short-term stability intention).
 
The IEEE proceedings on Frequency Stability 1966 included a number of important papers including those of David Allan,<ref name=Allan1966/> James A. Barnes,<ref name=Barnes1966>Barnes, J. A.: [http://tf.boulder.nist.gov/general/pdf/6.pdf ''Atomic Timekeeping and the Statistics of Precision Signal Generators''], IEEE Proceedings on Frequency Stability, Vol 54 No 2, pages 207&ndash;220, 1966</ref> L. S. Cutler and C. L. Searle<ref name=Cutler1966/> and D. B. Leeson.<ref name=Leeson1966/> These papers helped shape the field.
 
The classical ''M''-sample variance of frequency was analysed by David Allan in<ref name=Allan1966/> along with an initial bias function. This paper tackles the issues of dead-time between measurements and analyses the case of ''M'' frequency samples (called ''N'' in the paper) and variance estimators. It provides the now standard ''α'' to ''µ'' mapping. It clearly builds on James Barnes' work, as detailed in his article<ref name=Barnes1966>Barnes, J. A.: [http://tf.boulder.nist.gov/general/pdf/6.pdf ''Atomic Timekeeping and the Statistics of Precision Signal Generators''], IEEE Proceedings on Frequency Stability, Vol 54 No 2, pages 207&ndash;220, 1966</ref> in the same issue. The initial bias functions introduced assume no dead-time, but the formulas presented include dead-time calculations. The bias function assumes the use of the 2-sample variance as a base case, since any other variant of ''M'' may be chosen and values may be transferred via the 2-sample variance to any other variance of arbitrary ''M''. Thus, the 2-sample variance was only implicitly used and not clearly stated as the preference, even if the tools were provided. It however laid the foundation for using the 2-sample variance as the base case of comparison among other variants of the ''M''-sample variance. The 2-sample variance case is a special case of the ''M''-sample variance, which produces an average of the frequency derivative.
 
The work on bias functions was significantly extended by James Barnes in,<ref name=NBSTN375/> in which the modern ''B''<sub>1</sub> and ''B''<sub>2</sub> bias functions were introduced. Curiously enough, it refers to the ''M''-sample variance as "Allan variance" while referencing.<ref name=Allan1966/> With these modern bias functions, full conversion among ''M''-sample variance measures of varying ''M'', ''T'' and τ values could be achieved, by conversion through the 2-sample variance.
 
James Barnes and David Allan further extended the bias functions with the B<sub>3</sub> function in<ref name=NISTTN1318/> to handle the concatenated samples estimator bias. This was necessary to handle the new use of concatenated sample observations with dead time in between.
 
The IEEE Technical Committee on Frequency and Time, within the IEEE Group on Instrumentation & Measurements, provided a summary of the field in 1970, published as NBS Technical Note 394.<ref name=NBSTN394/> This paper could be considered the first in a line of more educational and practical papers aiding fellow engineers in grasping the field. In this paper, the 2-sample variance with ''T''&nbsp;=&nbsp;''τ'' is the recommended measurement, and it is referred to as Allan variance (now without the quotes). The choice of such parametrisation allows good handling of some noise forms and the obtaining of comparable measurements; it is essentially the least common denominator with the aid of the bias functions ''B''<sub>1</sub> and ''B''<sub>2</sub>.
 
An improved method for using sample statistics for frequency counters in frequency estimation or variance estimation was proposed by J.J. Snyder.<ref name=Snyder1981/> The trick to get more effective degrees of freedom out of the available dataset was to use overlapping observation periods. This provides a square-root-''n'' improvement. It was included in the overlapping Allan variance estimator introduced in.<ref name=Howe1981/> The variable-τ software processing was also included in.<ref name=Howe1981/> This development improved the classical Allan variance estimators and likewise provided a direct inspiration for the work on [[modified Allan variance]].
 
The confidence interval and degrees of freedom analysis, along with the established estimators, were presented in.<ref name=Howe1981/>
 
==Educational and practical resources==
The field of time and frequency and its use of Allan variance, [[Allan deviation]] and friends is a field involving many aspects, for which both the understanding of concepts and the practical measurements and post-processing require care and understanding. Thus, there is a realm of educational material stretching back some 40 years. Since these reflect the developments in the research of their time, they focus on teaching different aspects over time, in which case a survey of available resources may be a suitable way of finding the right resource.
 
The first meaningful summary is the NBS Technical Note 394 "Characterization of Frequency Stability".<ref name=NBSTN394/> This is the product of the Technical Committee on Frequency and Time of the IEEE Group on Instrumentation & Measurement. It gives the first overview of the field, stating the problems, defining the basic supporting definitions and getting into Allan variance, the bias functions ''B''<sub>1</sub> and ''B''<sub>2</sub>, the conversion of time-domain measures. This is useful as it is among the first references to tabulate the Allan variance for the five basic noise types.
 
A classical reference is the NBS Monograph 140<ref name=NBSMG140>Blair, B.E.: [http://tf.boulder.nist.gov/general/pdf/59.pdf ''Time and Frequency: Theory and Fundamentals''], NBS Monograph 140, May 1974</ref> from 1974, which in chapter 8 has "Statistics of Time and Frequency Data Analysis".<ref name=NBSMG140-8>David W. Allan, John H. Shoaf and Donald Halford: [http://tf.boulder.nist.gov/general/pdf/59.pdf ''Statistics of Time and Frequency Data Analysis''], NBS Monograph 140, pages 151&ndash;204, 1974</ref> This is the extended variant of NBS Technical Note 394 and adds material essentially on measurement techniques and practical processing of values.
 
An important addition is ''Properties of signal sources and measurement methods''.<ref name=Howe1981/> It covers the effective use of data, confidence intervals and effective degrees of freedom, likewise introducing the overlapping Allan variance estimator. It is highly recommended reading for those topics.
 
The IEEE standard 1139, ''Standard definitions of Physical Quantities for Fundamental Frequency and Time Metrology'',<ref name=IEEE1139/> is, beyond being a standard, also a comprehensive reference and educational resource.
 
A modern book aimed towards telecommunication is Stefano Bregni's "Synchronisation of Digital Telecommunication Networks".<ref name=Bregni2002/> This summarises not only the field but also much of his research in the field up to that point. It aims to include both classical measures and telecommunication-specific measures such as MTIE. It is a handy companion when looking at measurements related to telecommunication standards.
 
The NIST Special Publication 1065 "Handbook of Frequency Stability Analysis" by W.J. Riley<ref name=NISTSP1065/> is recommended reading for anyone wanting to pursue the field. It is rich in references and also covers a wide range of measures, biases and related functions that a modern analyst should have available. Further, it describes the overall processing needed for a modern tool.
 
==Uses==
Allan variance is used as a measure of frequency stability in a variety of precision oscillators, such as [[crystal oscillator]]s, [[atomic clock]]s and frequency-stabilized [[laser]]s over a period of a second or more. Short term stability (under a second) is typically expressed as [[phase noise]]. The Allan variance is also used to characterize the bias stability of [[gyroscopes]], including [[fiber optic gyroscope]]s and [[Microelectromechanical systems|MEMS]] gyroscopes.
 
==See also==
{{colbegin|2}}
*[[Variance]]
*[[Semivariance]]
*[[Variogram]]
*[[Metrology]]
*[[Network time protocol]]
*[[Precision Time Protocol]]
*[[Synchronization]]
{{colend}}
 
==References==
{{Reflist|2}}
 
==External links==
*[http://www.ieee-uffc.org/frequency_control/teaching.asp UFFC Frequency Control Teaching Resources]
*[http://www.tf.nist.gov/timefreq/general/publications.htm NIST Publication search tool]
*[http://www.allanstime.com/AllanVariance/ David W. Allan's Allan Variance Overview]
*[http://www.allanstime.com David W. Allan's official web site]
*[http://horology.jpl.nasa.gov/noiseinfo.html JPL Publications &ndash; Noise Analysis and Statistics]
*[http://www.wriley.com/ William Riley publications]
*[http://home.dei.polimi.it/bregni/public.htm Stefano Bregni publications]
*[http://rubiola.org/ Enrico Rubiola publications]
*[http://cran.r-project.org/web/packages/allanvar/index.html Allanvar: R package for sensor error characterization using the Allan Variance]
*[http://www.alamath.com/ Alavar windows software with reporting tools; Freeware ]
 
{{DEFAULTSORT:Allan Variance}}
[[Category:Clocks]]
[[Category:Signal processing metrics]]
[[Category:Measurement]]
