In [[statistics]], '''local asymptotic normality''' is a property of a sequence of [[statistical model]]s which allows this sequence to be [[asymptotic distribution|asymptotically approximated]] by a [[normal distribution|normal location model]], after an appropriate rescaling of the parameter. An important example in which local asymptotic normality holds is [[iid]] sampling from a [[regular parametric model]].
The notion of local asymptotic normality was introduced by {{harvtxt|Le Cam|1960}}.
== Definition ==

{{Technical|reason=notation should be explained - for example, what is θ here?|date=September 2010}}

A sequence of [[parametric statistical model]]s {{nowrap|{ ''P<sub>n,θ</sub>'': ''θ'' ∈ Θ }}} is said to be '''locally asymptotically normal (LAN)''' at a parameter value ''θ'' ∈ Θ if there exist [[matrix (mathematics)|matrices]] ''r<sub>n</sub>'' (a sequence of rescaling matrices, typically {{nowrap|''r<sub>n</sub>'' {{=}} √''n'' ''I''}} in the iid case) and ''I<sub>θ</sub>'', and random [[Coordinate vector|vector]]s Δ<sub>''n,θ''</sub> [[convergence in distribution|converging in distribution]] to {{nowrap|''N''(0, ''I<sub>θ</sub>'')}}, such that, for every converging sequence {{math|''h<sub>n</sub>'' → ''h''}},<ref name=V>{{harvtxt|van der Vaart|1998|pp=103–104}}</ref>
: <math>
\ln \frac{dP_{\!n,\theta+r_n^{-1}h_n}}{dP_{n,\theta}} = h'\Delta_{n,\theta} - \frac12 h'I_\theta\,h + o_{P_{n,\theta}}(1),
</math>
where the derivative here is a [[Radon–Nikodym theorem#Radon–Nikodym derivative|Radon–Nikodym derivative]], a formalised version of the [[likelihood ratio]], and where the ''o'' term (in [[big O in probability notation]]) is a remainder that converges to zero in probability under ''P<sub>n,θ</sub>''. In other words, the local log-likelihood ratio must [[convergence in distribution|converge in distribution]] to a normal random variable whose mean is equal to minus one half its variance:
: <math>
\ln \frac{dP_{\!n,\theta+r_n^{-1}h_n}}{dP_{n,\theta}}\ \ \xrightarrow{d}\ \ \mathcal{N}\Big( {-\tfrac12} h'I_\theta\,h,\ h'I_\theta\,h\Big).
</math>
The sequences of distributions <math>P_{\!n,\theta+r_n^{-1}h_n}</math> and <math>P_{n,\theta}</math> are [[Contiguity (probability theory)|contiguous]].<ref name=V/>
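For instance, in the normal location model, in which the observations are iid ''N''(''θ'', 1) and one takes {{nowrap|''r<sub>n</sub>'' {{=}} √''n''}} (the usual rescaling in the iid case, used here purely for illustration), the expansion holds exactly for every ''n'':

: <math>
\ln \frac{dP_{\!n,\theta+h/\sqrt{n}}}{dP_{n,\theta}}
= \sum_{i=1}^n \Big( \tfrac12 (x_i-\theta)^2 - \tfrac12 \big(x_i-\theta-h/\sqrt{n}\big)^2 \Big)
= h \cdot \frac{1}{\sqrt{n}} \sum_{i=1}^n (x_i-\theta) \;-\; \frac{h^2}{2},
</math>

with <math>\Delta_{n,\theta} = n^{-1/2}\sum_{i=1}^n (X_i-\theta) \sim N(0,1)</math>, ''I<sub>θ</sub>'' = 1, and a remainder term that is identically zero.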
=== Example ===

The most straightforward example of a LAN model is an iid model whose likelihood is twice continuously differentiable. Suppose {{nowrap|{ ''X''<sub>1</sub>, ''X''<sub>2</sub>, …, ''X<sub>n</sub>'' }}} is an iid sample, where each ''X<sub>i</sub>'' has density function {{nowrap|''f''(''x'', ''θ'')}}. The likelihood function of the model is equal to
: <math>
p_{n,\theta}(x_1,\ldots,x_n;\,\theta) = \prod_{i=1}^n f(x_i,\theta).
</math>
If ''f'' is twice continuously differentiable in ''θ'', then
: <math>\begin{align}
\ln p_{n,\theta+\delta\theta}
&\approx \ln p_{n,\theta} + \delta\theta'\frac{\partial \ln p_{n,\theta}}{\partial\theta} + \frac12 \delta\theta' \frac{\partial^2 \ln p_{n,\theta}}{\partial\theta\,\partial\theta'} \delta\theta \\
&= \ln p_{n,\theta} + \delta\theta' \sum_{i=1}^n\frac{\partial \ln f(x_i,\theta)}{\partial\theta} + \frac12 \delta\theta' \bigg[\sum_{i=1}^n\frac{\partial^2 \ln f(x_i,\theta)}{\partial\theta\,\partial\theta'} \bigg]\delta\theta .
\end{align}</math>
Plugging in {{nowrap|''δθ'' {{=}} ''h'' / √''n''}} gives
: <math>
\ln \frac{p_{n,\theta+h/\sqrt{n}}}{p_{n,\theta}} =
h' \Bigg(\frac{1}{\sqrt{n}} \sum_{i=1}^n\frac{\partial \ln f(x_i,\theta)}{\partial\theta}\Bigg) \;-\;
\frac12 h' \Bigg( \frac1n \sum_{i=1}^n - \frac{\partial^2 \ln f(x_i,\theta)}{\partial\theta\,\partial\theta'} \Bigg) h \;+\;
o_p(1).
</math>
By the [[central limit theorem]], the first term (in parentheses) converges in distribution to a normal random variable {{nowrap|Δ<sub>''θ''</sub> ~ ''N''(0, ''I<sub>θ</sub>'')}}, whereas by the [[law of large numbers]] the expression in the second parentheses converges in probability to ''I<sub>θ</sub>'', the [[Fisher information matrix]]:
: <math>
I_\theta = \mathrm{E}\bigg[{-\frac{\partial^2 \ln f(X_i,\theta)}{\partial\theta\,\partial\theta'}}\bigg] = \mathrm{E}\bigg[\bigg(\frac{\partial \ln f(X_i,\theta)}{\partial\theta}\bigg)\bigg(\frac{\partial \ln f(X_i,\theta)}{\partial\theta}\bigg)'\,\bigg].
</math>
Thus, the definition of local asymptotic normality is satisfied (with {{nowrap|''r<sub>n</sub>'' {{=}} √''n'' ''I''}}), and we have confirmed that a parametric model with iid observations and a twice continuously differentiable likelihood has the LAN property.
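The convergence in the definition can also be checked by simulation. The following sketch (the exponential model, the parameter values and the variable names are illustrative choices, not part of the derivation above) draws repeated samples from the iid Exponential(''θ'') model, for which {{nowrap|''I<sub>θ</sub>'' {{=}} 1/''θ''<sup>2</sup>}}, computes the log-likelihood ratio at the local alternative {{nowrap|''θ'' + ''h''/√''n''}}, and compares its empirical mean and variance with the LAN predictions {{nowrap|−½''h''<sup>2</sup>''I<sub>θ</sub>''}} and {{nowrap|''h''<sup>2</sup>''I<sub>θ</sub>''}}:

<syntaxhighlight lang="python">
import numpy as np

# Monte Carlo check of local asymptotic normality for the iid
# Exponential(theta) model with density f(x, theta) = theta * exp(-theta * x),
# whose Fisher information is I_theta = 1 / theta**2.
# All numerical values below are illustrative choices.
rng = np.random.default_rng(0)
theta, h, n, reps = 2.0, 1.5, 10_000, 5_000

llr = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=1.0 / theta, size=n)   # sample under P_{n,theta}
    theta_n = theta + h / np.sqrt(n)                  # local alternative
    # log-likelihood ratio  ln dP_{n,theta_n} / dP_{n,theta}
    llr[r] = n * np.log(theta_n / theta) - (theta_n - theta) * x.sum()

I_theta = 1.0 / theta**2
print("empirical mean:", llr.mean(), "  LAN prediction:", -0.5 * h**2 * I_theta)
print("empirical var :", llr.var(),  "  LAN prediction:", h**2 * I_theta)
</syntaxhighlight>

For large ''n'' the printed empirical moments should be close to the predicted values, in line with the limiting distribution ''N''(−½''h''<sup>2</sup>''I<sub>θ</sub>'', ''h''<sup>2</sup>''I<sub>θ</sub>'').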
== See also ==

* [[Asymptotic distribution]]
== Notes ==

{{Reflist|3}}
== References ==
{{Refbegin}}
* {{Cite book
  | last1 = Ibragimov | first1 = I.A.
  | last2 = Has’minskiĭ | first2 = R.Z.
  | title = Statistical estimation: asymptotic theory
  | year = 1981
  | publisher = Springer-Verlag
  | isbn = 0-387-90523-5
  | ref = harv
  }}
* {{Cite journal
  | last = Le Cam | first = L.
  | title = Locally asymptotically normal families of distributions
  | year = 1960
  | journal = University of California Publications in Statistics
  | volume = 3
  | pages = 37–98
  | ref = harv
  }}
* {{Cite book
  | last = van der Vaart | first = A.W.
  | title = Asymptotic statistics
  | year = 1998
  | publisher = Cambridge University Press
  | isbn = 978-0-521-78450-4
  | ref = harv
  }}
{{Refend}}
{{DEFAULTSORT:Local Asymptotic Normality}}
[[Category:Asymptotic statistical theory]]