{{Refimprove|date=May 2010}}
In [[statistics]], an '''efficient estimator''' is an [[estimator]] that estimates the quantity of interest in some “best possible” manner. The notion of “best possible” relies upon the choice of a particular [[loss function]] — the function which quantifies the relative degree of undesirability of estimation errors of different magnitudes. The most common choice of the loss function is [[quadratic loss function|quadratic]], resulting in the [[mean squared error]] criterion of optimality.<ref>{{harvtxt|Everitt|2002|p=128}}</ref>
 
== Finite-sample efficiency ==
Suppose {{nowrap|{ ''P<sub>θ</sub>'' {{!}} ''θ'' ∈ Θ }}} is a [[parametric model]] and {{nowrap|1=''X'' = (''X''<sub>1</sub>, …, ''X<sub>n</sub>'')}} is the data sampled from this model. Let {{nowrap|1=''T'' = ''T''(''X'')}} be an [[estimator]] for the parameter ''θ''. If this estimator is [[bias of an estimator|unbiased]] (that is, {{nowrap|1=E[&thinsp;''T''&thinsp;] = ''θ''}}), then the [[Cramér–Rao inequality]] states that the [[variance]] of this estimator is bounded from below:
: <math>
    \operatorname{Var}[\,T\,]\ \geq\ \mathcal{I}_\theta^{-1},
  </math>
where <math>\scriptstyle\mathcal{I}_\theta</math> is the [[Fisher information matrix]] of the model at point ''θ''. Generally, the variance measures the degree of dispersion of a random variable around its mean. Thus estimators with small variances are more concentrated: they estimate the parameter more precisely. We say that an estimator is a '''finite-sample efficient estimator''' (in the class of unbiased estimators) if it attains the lower bound in the Cramér–Rao inequality above, for all {{nowrap|''θ'' ∈ Θ}}. Efficient estimators are always [[minimum variance unbiased estimator]]s. However, the converse is false: there exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient.{{Citation needed|date=February 2012}}
 
Historically, finite-sample efficiency was an early optimality criterion. However this criterion has some limitations:
* Finite-sample efficient estimators are extremely rare. In fact, it was proved that efficient estimation is possible only in an [[exponential family]], and only for the natural parameters of that family.{{Citation needed|date=February 2012}}
* This notion of efficiency is restricted to the class of [[bias of an estimator|unbiased]] estimators. Since there are no good theoretical reasons to require that estimators be unbiased, this restriction is inconvenient. In fact, if we use [[mean squared error]] as a selection criterion, many biased estimators slightly outperform the “best” unbiased ones. For example, in [[multivariate statistics]] of dimension three or more, the mean-unbiased estimator, the [[sample mean]], is [[admissible procedure|inadmissible]]: regardless of the true parameter value, its performance is worse than that of, for example, the [[James–Stein estimator]].{{Citation needed|date=December 2011}}
* Finite-sample efficiency is based on the variance, as a criterion according to which the estimators are judged. A more general approach is to use [[loss function]]s other than quadratic ones, in which case the finite-sample efficiency can no longer be formulated.{{Citation needed|date=February 2012}}{{dubious|date=February 2012}}
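The inadmissibility of the sample mean in three or more dimensions can be checked numerically. The sketch below (a Monte Carlo illustration assuming NumPy; the dimension, variance, and true mean vector are arbitrary choices, not from the source) compares the mean squared error of the unbiased estimator — a single multivariate-normal observation — with that of the James–Stein shrinkage estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma2, trials = 10, 1.0, 20000   # dimension p >= 3, known variance (arbitrary choices)
theta = rng.normal(size=p)           # arbitrary true mean vector

# One observation X ~ N(theta, sigma2 * I) per trial.
X = rng.normal(loc=theta, scale=np.sqrt(sigma2), size=(trials, p))

# James-Stein estimator: shrink the observation toward the origin.
shrink = 1.0 - (p - 2) * sigma2 / np.sum(X**2, axis=1, keepdims=True)
js = shrink * X

mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))    # MSE of the unbiased estimator
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))    # MSE of James-Stein

print(mse_mle, mse_js)  # the James-Stein MSE comes out strictly smaller
```

The unbiased estimator's MSE is close to ''pσ''<sup>2</sup>, while the shrinkage estimator beats it for every choice of ''θ'' — which is exactly what inadmissibility means.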
 
=== Example ===
Among the models encountered in practice, efficient estimators exist for the mean ''μ'' of the [[normal distribution]] (but not for the variance ''σ''<sup>2</sup>), for the parameter ''λ'' of the [[Poisson distribution]], and for the probability ''p'' in the [[binomial distribution|binomial]] or [[multinomial distribution]].
 
Consider the model of a [[normal distribution]] with unknown mean but known variance: {{nowrap|1={ ''P<sub>θ</sub>'' = ''N''(''θ'', ''σ''<sup>2</sup>) {{!}} ''θ'' ∈ '''R''' }.}} The data consists of ''n'' [[iid]] observations from this model: {{nowrap|1=''X'' = (''x''<sub>1</sub>, …, ''x<sub>n</sub>'')}}. We estimate the parameter ''θ'' using the [[sample mean]] of all observations:
: <math>
    T(X) = \frac1n \sum_{i=1}^n x_i\ .
  </math>
This estimator has mean ''θ'' and variance of {{nowrap|''σ''<sup>2</sup>&thinsp;/&thinsp;''n''}}, which is equal to the reciprocal of the [[Fisher information]] from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution.
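This can be checked by simulation. The sketch below (assuming NumPy; the sample size, mean, and variance are arbitrary choices) compares the empirical variance of the sample mean with the Cramér–Rao bound ''σ''<sup>2</sup>&thinsp;/&thinsp;''n'':

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, sigma, trials = 25, 2.0, 3.0, 200000  # arbitrary choices

# Draw many samples of size n and compute the sample mean of each.
samples = rng.normal(loc=theta, scale=sigma, size=(trials, n))
T = samples.mean(axis=1)

emp_var = T.var()        # empirical variance of the estimator
crlb = sigma**2 / n      # Cramer-Rao lower bound: 1 / (n / sigma^2)

print(emp_var, crlb)  # the two agree up to Monte Carlo error
```

The empirical variance matches the bound, confirming that the sample mean is efficient in this model.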
 
==Relative efficiency==
If <math>T_1</math> and <math>T_2</math> are estimators for the parameter <math>\theta</math>, then <math>T_1</math> is said to '''[[dominating decision rule|dominate]]''' <math>T_2</math> if:
# its [[mean squared error]] (MSE) is smaller for at least some value of <math>\theta</math>, and
# the MSE does not exceed that of <math>T_2</math> for any value of <math>\theta</math>.
 
Formally, <math>T_1</math> dominates <math>T_2</math> if
:<math>
\mathrm{E}
\left[
(T_1 - \theta)^2
\right]
\leq
\mathrm{E}
\left[
(T_2-\theta)^2
\right]
</math>
 
holds for all <math>\theta</math>, with strict inequality holding somewhere.
 
The relative efficiency is defined as
 
:<math>
e(T_1,T_2)
=
\frac
{\mathrm{E} \left[ (T_2-\theta)^2 \right]}
{\mathrm{E} \left[ (T_1-\theta)^2 \right]}
</math>
 
Although <math>e</math> is in general a function of <math>\theta</math>, in many cases the dependence drops out; if this is so, <math>e</math> being greater than one would indicate that <math>T_1</math> is preferable, whatever the true value of <math>\theta</math>.
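A classical case in which the dependence on <math>\theta</math> drops out is the sample mean versus the sample median for normal data, where <math>e(\text{mean}, \text{median})</math> tends to π/2&nbsp;≈&nbsp;1.57 as the sample size grows. The sketch below estimates this ratio by Monte Carlo (assuming NumPy; the sample size and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta, trials = 101, 0.0, 100000  # arbitrary choices

samples = rng.normal(loc=theta, scale=1.0, size=(trials, n))
mse_mean = np.mean((samples.mean(axis=1) - theta) ** 2)
mse_median = np.mean((np.median(samples, axis=1) - theta) ** 2)

# e(mean, median) = MSE(median) / MSE(mean); for normal data this
# approaches pi/2 as n grows, so the mean is preferable.
e = mse_median / mse_mean
print(e)
```

Since the ratio exceeds one for every <math>\theta</math>, the sample mean is the preferable estimator under normality.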
 
==Asymptotic efficiency==
Some [[estimator]]s attain efficiency [[asymptotically]] and are thus called '''asymptotically efficient''' estimators.
This can be the case for some [[maximum likelihood]] estimators, or for any estimator whose variance attains equality in the Cramér–Rao bound asymptotically.
 
==See also==
* [[Bayes estimator]]
* [[Hodges’ estimator]]
* [[Efficiency (statistics)]]
 
==Notes==
{{Reflist|3}}
 
==References==
{{refbegin}}
*{{cite book
  | last = Everitt | first = B.S.
  | year = 2002
  | title = The Cambridge Dictionary of Statistics
  | publisher = New York, Cambridge University Press
  | edition = 2nd
  | isbn = 0-521-81099-X
  | ref = harv
  }}
{{refend}}
 
==Further reading==
* {{cite book
  | last1 = Lehmann | first1 = E.L. | authorlink = Erich Leo Lehmann
  | last2 = Casella | first2 = G.
  | title = Theory of Point Estimation
  | edition = 2nd
  | year = 1998
  | publisher = Springer
  | isbn = 0-387-98502-6
  }}
* {{cite book|title=Parametric Statistical Theory | last1=Pfanzagl | first1=Johann |authorlink=Johann Pfanzagl |last2=with the assistance of R. Hamböker |year=1994|publisher=Walter de Gruyter|location=Berlin|isbn=3-11-013863-8| mr=1291393 }}
 
{{DEFAULTSORT:Efficient Estimator}}
[[Category:Estimation theory]]
[[Category:Statistical theory]]
[[Category:Statistical terminology]]

Latest revision as of 07:19, 6 January 2015