{{Cleanup|date=October 2009}}
[[Image:Acf.svg|thumb|right|A plot showing 100 random numbers with a "hidden" [[sine]] function, and an autocorrelation (correlogram) of the series on the bottom.]]

[[Image:Correlogram.png|thumb|Example for a correlogram]]

In the analysis of data, a '''correlogram''' is an image of correlation statistics. For example, in [[time series analysis]], a correlogram, also known as an '''autocorrelation plot''', is a plot of the sample [[autocorrelation]]s <math>r_h\,</math> versus <math>h\,</math> (the time lags).

If [[cross-correlation]] is used, the result is called a ''cross-correlogram''. The correlogram is a commonly used tool for checking [[randomness]] in a [[data set]]. This randomness is ascertained by computing autocorrelations for data values at varying time lags. If the data are random, such autocorrelations should be near zero for all time-lag separations. If the data are non-random, then one or more of the autocorrelations will be significantly non-zero.

In addition, correlograms are used in the [[model identification]] stage for [[Box–Jenkins]] [[autoregressive moving average]] [[time series]] models. Autocorrelations should be near-zero for randomness; if the analyst does not check for randomness, then the validity of many of the statistical conclusions becomes suspect. The correlogram is an excellent way of checking for such randomness.

Sometimes, '''corrgrams''', color-mapped matrices of correlation strengths in [[multivariate analysis]],<ref>{{cite journal |last=Friendly |first=Michael |date=19 August 2002 |title=Corrgrams: Exploratory displays for correlation matrices |url=http://euclid.psych.yorku.ca/datavis/papers/corrgram.pdf |journal=[[The American Statistician]] |publisher=[[Taylor & Francis]] |volume=56 |issue=4 |pages=316–324 |doi=10.1198/000313002533 |accessdate=19 January 2014}}</ref> are also called correlograms.<ref>{{cite web |url=http://cran.r-project.org/web/packages/corrgram/ |title=CRAN - Package corrgram |author=<!--Staff writer(s); no by-line.--> |date=29 August 2013 |website=cran.r-project.org |accessdate=19 January 2014}}</ref><ref>{{cite web |url=http://www.statmethods.net/advgraphs/correlograms.html |title=Quick-R: Correlograms |author=<!--Staff writer(s); no by-line.--> |website=statmethods.net |accessdate=19 January 2014}}</ref>

==Applications==
The correlogram can help provide answers to the following questions:
* Are the data random?
* Is an observation related to an adjacent observation?
* Is an observation related to an observation twice removed? (etc.)
* Is the observed time series [[white noise]]?
* Is the observed time series sinusoidal?
* Is the observed time series autoregressive?
* What is an appropriate model for the observed time series?
* Is the model <math>Y = \mathrm{constant} + \mathrm{error}</math> valid and sufficient?
* Is the formula <math>s_{\bar{Y}}=s/\sqrt{N}</math> valid?

==Importance==
Randomness (along with fixed model, fixed variation, and fixed distribution) is one of the four assumptions that typically underlie all measurement processes. The randomness assumption is critically important for the following three reasons:
* Most standard [[statistical test]]s depend on randomness. The validity of the test conclusions is directly linked to the validity of the randomness assumption.
* Many commonly used statistical formulae depend on the randomness assumption, the most common being the formula for the standard deviation of the sample mean:
:<math>s_{\bar{Y}}=s/\sqrt{N}</math>
:where ''s'' is the [[standard deviation]] of the data. Although heavily used, the results from using this formula are of no value unless the randomness assumption holds.
* For univariate data, the default model is
:<math>Y = \mathrm{constant} + \mathrm{error}</math>
:If the data are not random, this model is incorrect and invalid, and the estimates for the parameters (such as the constant) become nonsensical and invalid.
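As an illustrative sketch (Python with NumPy; the AR(1) example and all names here are my own, not from the article), one can check numerically how badly ''s''/√''N'' understates the variability of the sample mean when the data are not random:

```python
import numpy as np

# Hypothetical illustration: compare s/sqrt(N) against the actual spread
# of the sample mean when the data are NOT random (an AR(1) process).
rng = np.random.default_rng(1)
N, reps, phi = 200, 2000, 0.8

def ar1_series(n):
    """Generate one AR(1) series y_t = phi*y_{t-1} + e_t (non-random data)."""
    e = rng.normal(size=n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

means = np.array([ar1_series(N).mean() for _ in range(reps)])
actual_sd = means.std()                             # true spread of the sample mean
naive_sd = ar1_series(N).std(ddof=1) / np.sqrt(N)   # s / sqrt(N) on one sample
# For phi = 0.8 the true spread is roughly 3x the naive value
# (theory: the inflation factor is sqrt((1 + phi) / (1 - phi)) = 3).
print(actual_sd, naive_sd)
```

With positive autocorrelation the naive formula is too small by a large factor, which is exactly the sense in which its results are "of no value" when randomness fails.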

==Estimation of autocorrelations==
The autocorrelation coefficient at lag ''h'' is given by
:<math>r_h = c_h/c_0 \,</math>
where ''c<sub>h</sub>'' is the [[autocovariance function]]
:<math>c_h = \frac{1}{N}\sum_{t=1}^{N-h} \left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)</math>
and ''c<sub>0</sub>'' is the variance function
:<math>c_0 = \frac{1}{N}\sum_{t=1}^{N} \left(Y_t - \bar{Y}\right)^2</math>

The resulting value of ''r<sub>h</sub>'' will range between −1 and +1.
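A minimal NumPy sketch of this estimator (the function name is mine):

```python
import numpy as np

def acf(y, max_lag):
    """Sample autocorrelations r_h = c_h / c_0 for h = 0..max_lag,
    using the 1/N convention given above."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    c0 = np.sum(d * d) / n                                  # c_0, the variance
    c = [np.sum(d[: n - h] * d[h:]) / n for h in range(max_lag + 1)]
    return np.array(c) / c0                                 # r_h = c_h / c_0

y = np.sin(np.linspace(0.0, 6.0 * np.pi, 120))              # a sinusoidal series
r = acf(y, 40)
print(r[0])   # r_0 = c_0 / c_0 = 1 by construction
```

With the 1/''N'' divisor, the Cauchy–Schwarz inequality guarantees every ''r<sub>h</sub>'' stays in [−1, +1], consistent with the range stated above.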

===Alternate estimate===
Some sources may use the following formula for the autocovariance function:
:<math>c_h = \frac{1}{N-h}\sum_{t=1}^{N-h} \left(Y_t - \bar{Y}\right)\left(Y_{t+h} - \bar{Y}\right)</math>
Although this definition has less [[bias of an estimator|bias]], the (1/''N'') formulation has some desirable statistical properties and is the form most commonly used in the statistics literature. See pages 20 and 49–50 in Chatfield for details.
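The relationship between the two estimators is easy to verify directly (Python sketch; `autocov` is my own name):

```python
import numpy as np

def autocov(y, h, unbiased=False):
    """c_h with either the 1/N divisor (default, as in the main text)
    or the 1/(N-h) divisor from the alternate definition."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    s = np.sum(d[: n - h] * d[h:])
    return s / (n - h) if unbiased else s / n

rng = np.random.default_rng(2)
y = rng.normal(size=50)
# Same sum in the numerator, different divisor: the two estimates
# differ by exactly N / (N - h).
ratio = autocov(y, 10, unbiased=True) / autocov(y, 10)
print(ratio)  # 50/40 = 1.25 (up to floating-point rounding)
```

One practical consequence: with the 1/(''N''−''h'') divisor the implied ''r<sub>h</sub>'' is no longer guaranteed to lie in [−1, +1] at large lags, which is one reason the 1/''N'' form is preferred.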

==Statistical inference with correlograms==
In the same graph one can draw upper and lower bounds for autocorrelation with significance level <math>\alpha\,</math>:
:<math>B=\pm z_{1-\alpha/2} SE(r_h)\,</math> with <math>r_h\,</math> as the estimated autocorrelation at lag <math>h\,</math>.

If the autocorrelation is higher (lower) than this upper (lower) bound, the null hypothesis that there is no autocorrelation at and beyond a given lag is rejected at a significance level of <math>\alpha\,</math>. This test is approximate and assumes that the time series is [[Gaussian]].

In the above, ''z''<sub>1−α/2</sub> is the quantile of the [[normal distribution]]; SE is the standard error, which can be computed by [[M. S. Bartlett|Bartlett]]'s formula for MA(''ℓ'') processes:
:<math>SE(r_1)=\frac {1} {\sqrt{N}}</math>
:<math>SE(r_h)=\sqrt\frac{1+2\sum_{i=1}^{h-1} r^2_i}{N}</math> for <math>h>1.\,</math>

In the picture above we can reject the [[null hypothesis]] that there is no autocorrelation between adjacent time points (lag = 1). For the other lags one cannot reject the null hypothesis of no autocorrelation.
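Bartlett's formula can be sketched in a few lines (Python; the function name and example values are mine):

```python
import numpy as np

def bartlett_se(r, n):
    """Standard errors SE(r_h) for h = 1..len(r) via Bartlett's formula:
    SE(r_1) = 1/sqrt(N); SE(r_h) = sqrt((1 + 2*sum_{i<h} r_i^2)/N) for h > 1.
    r[i] holds the estimated autocorrelation at lag i + 1."""
    r = np.asarray(r, dtype=float)
    se = [1.0 / np.sqrt(n)]
    for h in range(2, len(r) + 1):
        se.append(np.sqrt((1.0 + 2.0 * np.sum(r[: h - 1] ** 2)) / n))
    return np.array(se)

# Hypothetical estimated autocorrelations for lags 1..4, with N = 100
r = np.array([0.55, 0.20, 0.05, 0.02])
bounds = 1.96 * bartlett_se(r, n=100)   # approximate 95% significance bounds
print(bounds[0])                        # 1.96 / sqrt(100) = 0.196
```

With these example numbers only the lag-1 autocorrelation (0.55) exceeds its bound, mirroring the reading of the picture described above: significant at lag 1, not at higher lags.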

Note that there are two distinct formulas for generating the confidence bands:

1. If the correlogram is being used to test for randomness (i.e., there is no [[time dependence]] in the data), the following formula is recommended:
:<math>\pm \frac{z_{1-\alpha/2}}{\sqrt{N}}</math>
where ''N'' is the [[sample size]], ''z'' is the [[quantile function]] of the [[standard normal distribution]] and α is the [[significance level]]. In this case, the confidence bands have fixed width that depends on the sample size.

2. Correlograms are also used in the model identification stage for fitting [[ARIMA]] models. In this case, a [[moving average model]] is assumed for the data and the following confidence bands should be generated:
:<math>\pm z_{1-\alpha/2}\sqrt{\frac{1}{N}\left(1+2\sum_{i=1}^{k} r_i^2\right)}</math>
where ''k'' is the lag and ''r<sub>i</sub>'' is the estimated autocorrelation at lag ''i''. In this case, the confidence bands increase in width as the lag increases.
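The two recipes can be contrasted numerically (Python sketch; I write the moving-average-case sum with the estimated autocorrelations, consistent with Bartlett's formula earlier in the article):

```python
import numpy as np

Z = 1.96  # z_{1-alpha/2} for alpha = 0.05

def fixed_band(n):
    """Case 1 (randomness test): constant-width band +/- z / sqrt(N)."""
    return Z / np.sqrt(n)

def ma_band(r, k, n):
    """Case 2 (ARIMA identification): band at lag k, widening with k;
    r[0..k-1] are the first k estimated autocorrelations."""
    r = np.asarray(r, dtype=float)
    return Z * np.sqrt((1.0 + 2.0 * np.sum(r[:k] ** 2)) / n)

r = [0.6, 0.3, 0.1]   # hypothetical estimated autocorrelations
n = 100
widths = [ma_band(r, k, n) for k in (1, 2, 3)]
print(fixed_band(n), widths)  # fixed band is constant; MA bands grow with k
```

The fixed band is the same at every lag, while the moving-average bands start wider and keep widening as each additional ''r<sub>i</sub>''² enters the sum.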

==Software==
Correlograms are available in most general-purpose statistical software programs. In [[R (programming language)|R]], the functions <code>acf</code> and <code>pacf</code> can be used to produce such plots.

==Related techniques==
* [[Partial autocorrelation plot]]
* [[Lag plot]]
* [[Spectral plot]]
* [[Seasonal subseries plot]]
* [[Scaled correlation]]

==References==
{{reflist}}

==Further reading==
*{{cite book |last1=Hanke |first1=John E. |last2=Reitsch |first2=Arthur G. |last3=Wichern |first3=Dean W. |year=2001 |title=Business Forecasting |edition=7th |location=Upper Saddle River, NJ |publisher=Prentice Hall}}
*{{cite book |last1=Box |first1=G. E. P. |last2=Jenkins |first2=G. |year=1976 |title=Time Series Analysis: Forecasting and Control |publisher=Holden-Day}}
*{{cite book |last=Chatfield |first=C. |year=1989 |title=The Analysis of Time Series: An Introduction |edition=4th |location=New York, NY |publisher=Chapman & Hall}}

==External links==
*[http://www.itl.nist.gov/div898/handbook/eda/section3/eda331.htm Autocorrelation Plot]

{{NIST-PD}}

{{Statistics|descriptive}}

[[Category:Statistical charts and diagrams]]
[[Category:Time series analysis]]