In [[statistics]], the '''delta method''' is a method for deriving an approximate [[probability distribution]] for a [[function (mathematics)|function]] of an [[Asymptotic distribution|asymptotically normal]] statistical [[estimator]] from knowledge of the limiting [[variance]] of that estimator.  More broadly, the delta method may be considered a fairly general [[central limit theorem]].
 
==Univariate delta method==
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms.  Roughly, if there is a [[sequence (mathematics)|sequence]] of random variables ''X''<sub>n</sub> satisfying
:<math>{\sqrt{n}[X_n-\theta]\,\xrightarrow{D}\,\mathcal{N}(0,\sigma^2)},</math>
where ''θ'' and ''σ''<sup>2</sup> are finite valued constants and <math>\xrightarrow{D}</math> denotes [[convergence in distribution]], then
:<math>{\sqrt{n}[g(X_n)-g(\theta)]\,\xrightarrow{D}\,\mathcal{N}(0,\sigma^2[g'(\theta)]^2)}</math>
for any function ''g'' satisfying the property that {{nowrap|''g′''(''θ'')}} exists and is non-zero.
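
The statement can be checked numerically. The following is a minimal Monte Carlo sketch (an illustration only, assuming the Python library NumPy): it takes ''X''<sub>''n''</sub> to be the mean of ''n'' Exponential(1) draws, so that ''θ'' = 1 and ''σ''<sup>2</sup> = 1, and uses ''g''(''x'') = ''x''<sup>2</sup>, for which the predicted limiting variance is ''σ''<sup>2</sup>[''g′''(''θ'')]<sup>2</sup> = 4.
<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo sketch of the univariate delta method (illustrative choices):
# X_n is the mean of n Exponential(1) draws, so theta = 1 and sigma^2 = 1.
# With g(x) = x**2 the delta method predicts
#   sqrt(n) * (g(X_n) - g(theta))  -->  N(0, sigma^2 * g'(theta)**2) = N(0, 4).
rng = np.random.default_rng(0)
n, reps = 1_000, 10_000
theta, sigma2 = 1.0, 1.0

x_bar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (x_bar**2 - theta**2)

print("empirical variance of sqrt(n)*(g(X_n) - g(theta)):", z.var())
print("delta-method prediction:", sigma2 * (2.0 * theta)**2)
</syntaxhighlight>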
 
==Proof in the univariate case==
Demonstration of this result is fairly straightforward under the assumption that ''g′'' is [[Continuous_function|continuous]] at ''θ''. To begin, we use the [[mean value theorem]]:
:<math>g(X_n)=g(\theta)+g'(\tilde{\theta})(X_n-\theta),</math>
where <math>\tilde{\theta}</math> lies between ''X''<sub>''n''</sub> and ''θ''.
Note that <math>\sqrt{n}[X_n-\theta]\,\xrightarrow{D}\,\mathcal{N}(0,\sigma^2)</math> implies <math>X_n\,\xrightarrow{P}\,\theta</math>, and since <math>\tilde{\theta}</math> lies between ''X''<sub>''n''</sub> and ''θ'', it follows that <math>\tilde{\theta} \,\xrightarrow{P}\,\theta</math>. Since ''g′'' is continuous at ''θ'', applying the [[continuous mapping theorem]] yields
:<math>g'(\tilde{\theta})\,\xrightarrow{P}\,g'(\theta),</math>
where <math>\xrightarrow{P}</math> denotes [[convergence in probability]].
 
Rearranging the terms and multiplying by <math>\sqrt{n}</math> gives
:<math>\sqrt{n}[g(X_n)-g(\theta)]=g'(\tilde{\theta})\sqrt{n}[X_n-\theta].</math>
Since
:<math>{\sqrt{n}[X_n-\theta] \xrightarrow{D} \mathcal{N}(0,\sigma^2)}</math>
by assumption, it follows immediately from [[Slutsky's theorem]] that
:<math>{\sqrt{n}[g(X_n)-g(\theta)] \xrightarrow{D} \mathcal{N}(0,\sigma^2[g'(\theta)]^2)}.</math>
This concludes the proof.
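
For a concrete instance of the mean value theorem step used above, one can take ''g''(''x'') = ''x''<sup>2</sup>, for which the decomposition can be written out exactly:
:<math>g(X_n)-g(\theta)=X_n^2-\theta^2=2\tilde{\theta}\,(X_n-\theta), \qquad \tilde{\theta}=\frac{X_n+\theta}{2},</math>
so that <math>\tilde{\theta}</math> is the midpoint of ''X''<sub>''n''</sub> and ''θ'' and <math>g'(\tilde{\theta})=2\tilde{\theta}\,\xrightarrow{P}\,2\theta=g'(\theta)</math>.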
 
==Motivation of multivariate delta method==
By definition, a [[consistency (statistics)|consistent]] [[estimator]] ''B'' [[convergence in probability|converges in probability]] to its true value ''β'', and often a [[central limit theorem]] can be applied to obtain [[Estimator#Asymptotic_normality|asymptotic normality]]:
 
:<math>
\sqrt{n}\left(B-\beta\right)\,\xrightarrow{D}\,N\left(0, \Sigma \right),
</math>
 
where ''n'' is the number of observations and Σ is a (symmetric positive semi-definite) covariance matrix. Suppose we want to estimate the variance of a function ''h'' of the estimator ''B''.  Keeping only the first two terms of the [[Taylor series]], and using vector notation for the [[gradient]], we can approximate ''h''(''B'') as
 
:<math>
h(B) \approx h(\beta) + \nabla h(\beta)^T \cdot (B-\beta)
</math>
 
which implies the variance of ''h(B)'' is approximately
 
:<math>
\begin{align}
\operatorname{Var}\left(h(B)\right) & \approx \operatorname{Var}\left(h(\beta) + \nabla h(\beta)^T \cdot (B-\beta)\right) \\
 
& = \operatorname{Var}\left(h(\beta) + \nabla h(\beta)^T \cdot B - \nabla h(\beta)^T \cdot \beta\right) \\
 
& = \operatorname{Var}\left(\nabla h(\beta)^T \cdot B\right) \\
 
& = \nabla h(\beta)^T \cdot \operatorname{Cov}(B) \cdot \nabla h(\beta) \\
 
& = \nabla h(\beta)^T \cdot (\Sigma/n) \cdot \nabla h(\beta)
\end{align}
</math>
 
One can use the [[mean value theorem]] (for real-valued functions of many variables) to see that this does not rely on taking a first-order approximation.
 
The delta method therefore implies that
 
:<math>
\sqrt{n}\left(h(B)-h(\beta)\right)\,\xrightarrow{D}\,N\left(0, \nabla h(\beta)^T \cdot \Sigma \cdot \nabla h(\beta) \right)
</math>
 
or in univariate terms,
 
:<math>
\sqrt{n}\left(h(B)-h(\beta)\right)\,\xrightarrow{D}\,N\left(0, \sigma^2 \cdot \left(h^\prime(\beta)\right)^2 \right).
</math>
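
As an illustration of the multivariate statement, the sketch below (assuming NumPy; the function ''h'' and the values of ''β'', Σ and ''n'' are arbitrary example choices) compares the delta-method variance <math>\nabla h(\beta)^T (\Sigma/n) \nabla h(\beta)</math> with the variance of ''h''(''B'') simulated directly from the limiting normal distribution of ''B''.
<syntaxhighlight lang="python">
import numpy as np

# Multivariate delta method sketch (arbitrary example values):
# B is simulated from N(beta, Sigma / n), its asymptotic distribution,
# and h(b) = b[0] / b[1] is a hypothetical function of interest.
rng = np.random.default_rng(1)
beta = np.array([2.0, 4.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
n = 2_000

# Gradient of h(b) = b[0] / b[1] at beta: (1 / beta[1], -beta[0] / beta[1]**2).
grad = np.array([1.0 / beta[1], -beta[0] / beta[1] ** 2])
delta_var = grad @ (Sigma / n) @ grad

# Monte Carlo comparison.
B = rng.multivariate_normal(beta, Sigma / n, size=100_000)
mc_var = (B[:, 0] / B[:, 1]).var()

print("delta-method variance:", delta_var)
print("simulated variance   :", mc_var)
</syntaxhighlight>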
 
==Example==
Suppose ''X<sub>n</sub>'' is [[binomial distribution|Binomial]] with parameters ''p'' and ''n''.  Since
:<math>{\sqrt{n} \left[ \frac{X_n}{n}-p \right]\,\xrightarrow{D}\,N(0,p (1-p))},</math>
we can apply the delta method with {{nowrap|''g''(''θ'') {{=}} log(''θ'')}} to see
:<math>{\sqrt{n} \left[ \log\left( \frac{X_n}{n}\right)-\log(p)\right] \,\xrightarrow{D}\,N(0,p (1-p) [1/p]^2)}.</math>
Hence, the variance of <math> \log \left( \frac{X_n}{n} \right) </math> is approximately
:<math> \frac{1-p}{p\,n}. \,\!</math>
Moreover, if <math>\hat p </math> and <math>\hat q</math> are estimates of different group rates from independent samples of sizes ''n'' and ''m'' respectively, then the logarithm of the estimated [[relative risk]] <math> \frac{\hat p}{\hat q} </math> is approximately normally distributed with variance that can be estimated by <math> \frac{1-\hat p}{\hat p \, n}+\frac{1-\hat q}{\hat q \, m} </math>.  This is useful for constructing a hypothesis test or a confidence interval for the relative risk.
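
The binomial approximation above can also be checked by simulation. The sketch below (assuming NumPy; ''n'' and ''p'' are arbitrary example values) compares the empirical variance of log(''X<sub>n</sub>''/''n'') with (1 − ''p'')/(''p'' ''n'').
<syntaxhighlight lang="python">
import numpy as np

# Simulation check of the binomial example (arbitrary n and p):
# the delta method gives Var(log(X_n / n)) ~ (1 - p) / (p * n).
rng = np.random.default_rng(2)
n, p, reps = 500, 0.3, 100_000

x = rng.binomial(n, p, size=reps)
log_prop = np.log(x / n)

print("empirical variance :", log_prop.var())
print("delta approximation:", (1 - p) / (p * n))
</syntaxhighlight>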
 
==Note==
The delta method is often used in a form that is essentially identical to that above, but without the assumption that ''X''<sub>n</sub> or ''B'' is asymptotically normal. Often the only context is that the variance is "small". The results then just give approximations to the means and covariances of the transformed quantities. For example, the formulae presented in Klein (1953, p. 258) are:
:<math>
\begin{align}
\operatorname{Var} \left( h_r \right) = & \sum_i
  \left( \frac{ \partial h_r }{ \partial B_i } \right)^2
  \operatorname{Var}\left( B_i \right) + \\
&  \sum_i \sum_{j \neq i}
  \left( \frac{ \partial h_r }{ \partial B_i } \right)
  \left( \frac{ \partial h_r }{ \partial B_j } \right)
  \operatorname{Cov}\left( B_i, B_j \right) \\
\operatorname{Cov}\left( h_r, h_s \right) = & \sum_i
  \left( \frac{ \partial h_r }{ \partial B_i } \right)
  \left( \frac{ \partial h_s }{ \partial B_i } \right)
  \operatorname{Var}\left( B_i \right) + \\
&  \sum_i \sum_{j \neq i}
  \left( \frac{ \partial h_r }{ \partial B_i } \right)
  \left( \frac{ \partial h_s }{ \partial B_j } \right)
  \operatorname{Cov}\left( B_i, B_j \right)
\end{align}
</math>
 
where ''h<sub>r</sub>'' is the ''r''th element of ''h''(''B'') and ''B<sub>i</sub>'' is the ''i''th element of ''B''.  The only difference is that Klein stated these as identities, whereas they are actually approximations.
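
In matrix notation the two formulas above reduce to the single quadratic form ''J'' Cov(''B'') ''J''<sup>T</sup>, where ''J'' is the Jacobian matrix with entries ∂''h<sub>r</sub>''/∂''B<sub>i</sub>'' evaluated at the estimate. A small sketch (assuming NumPy; the Jacobian and covariance values are hypothetical) is:
<syntaxhighlight lang="python">
import numpy as np

def delta_covariance(jacobian, cov_B):
    """Klein-style error propagation: Cov(h(B)) is approximately J Cov(B) J^T.

    jacobian[r, i] is the partial derivative of h_r with respect to B_i,
    evaluated at the estimate; cov_B is the estimated covariance of B.
    The double sums in the formulas above collapse to this matrix product.
    """
    jacobian = np.asarray(jacobian, dtype=float)
    cov_B = np.asarray(cov_B, dtype=float)
    return jacobian @ cov_B @ jacobian.T

# Hypothetical illustration: h(B) = (B_1 + B_2, B_1 * B_2) evaluated at B = (2, 3),
# so the Jacobian rows are (1, 1) and (3, 2).
J = np.array([[1.0, 1.0],
              [3.0, 2.0]])
cov_B = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
print(delta_covariance(J, cov_B))
# Diagonal entries approximate Var(h_r); off-diagonals approximate Cov(h_r, h_s).
</syntaxhighlight>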
 
==See also==
*[[Taylor expansions for the moments of functions of random variables]]
 
==References==
* Casella, G. and Berger, R. L. (2002), ''Statistical Inference'', 2nd ed.
* Cramér, H. (1946), ''Mathematical Methods of Statistics'', p. 353.
* Davison, A. C. (2003), ''Statistical Models'', pp. 33–35.
* Greene, W. H. (2003), ''Econometric Analysis'', 5th ed., pp. 913f.
* Klein, L. R. (1953), ''A Textbook of Econometrics'', p. 258.
* Oehlert, G. W. (1992), A Note on the Delta Method, ''The American Statistician'', Vol. 46, No. 1, pp. 27–29. http://www.jstor.org/stable/2684406
*[http://www.indiana.edu/~jslsoc/stata/ci_computations/spost_deltaci.pdf Lecture notes]
*[http://data.imf.au.dk/courses/advsimmethod/Fall05/notes/1209.pdf More lecture notes]
*[http://www.stata.com/support/faqs/stat/deltam.html Explanation from Stata software corporation]
 
[[Category:Econometrics]]
[[Category:Statistical approximations]]
[[Category:Articles containing proofs]]
 
