{{About|the confidence distribution|confidence intervals|Confidence interval}}
In [[statistical inference]], the concept of a '''confidence distribution''' ('''CD''') has often been loosely referred to as a distribution function on the parameter space that can represent confidence intervals of all levels for a parameter of interest. Historically, it has typically been constructed by inverting the upper limits of lower sided confidence intervals of all levels, and it was also commonly associated with a fiducial<ref name = "Fisher1930"/> interpretation ([[fiducial distribution]]), although it is a purely frequentist concept.<ref name="cox1958" />  A confidence distribution is not a valid [[probability distribution]],<ref name="Cox2006a"/> but may still be a function useful for making inferences.<ref name="Xie2013r" />
 
In recent years, there has been a surge of renewed interest in confidence distributions.{{citation needed|date=June 2011}} In the more recent developments, the  concept of confidence distribution has emerged as a purely [[frequentist inference|frequentist]] concept, without any fiducial interpretation or reasoning. Conceptually, a confidence distribution is no different from a [[point estimator]] or an interval estimator ([[confidence interval]]), but it uses a sample-dependent distribution function on the parameter space (instead of a point or an interval) to estimate the parameter of interest.
 
A simple example of a confidence distribution that has been broadly used in statistical practice is a [[Bootstrapping (statistics)|bootstrap]] distribution.<ref name="Efron1998" /> The development and interpretation of a bootstrap distribution does not involve any fiducial reasoning; the same is true for the concept of a confidence distribution. But the notion of a confidence distribution is much broader than that of a bootstrap distribution. In particular, recent research suggests that it encompasses and unifies a wide range of examples, from regular parametric cases (including most examples of the classical development of Fisher's fiducial distribution) to bootstrap distributions, [[p-value]] functions,<ref name = "Fraser1991"/> normalized [[likelihood function]]s and, in some cases, Bayesian [[prior distribution|prior]]s and Bayesian [[posterior distribution|posteriors]].<ref name="Xie2011" />
 
Just as a Bayesian posterior distribution contains a wealth of information for any type of [[Bayesian inference]], a confidence distribution contains a wealth of information for constructing almost all types of frequentist inferences, including [[point estimate]]s, [[confidence interval]]s and p-values, among others. Some recent developments have highlighted the promising potential of the CD concept as an effective inferential tool.{{citation needed|date=June 2011}}
 
== History of the CD concept ==
 
Neyman (1937)<ref name="Neyman1937" /> introduced the idea of "confidence" in his seminal paper on confidence intervals, which clarified the frequentist repetition property. According to Fraser,<ref name = "Fraser2011"/> the seed (idea) of the confidence distribution can even be traced back to Bayes (1763)<ref name="Bayes1973" /> and Fisher (1930).<ref name="Fisher1930" /> Some researchers view the confidence distribution as "the Neymanian interpretation of Fisher's fiducial distribution",<ref name="Schweder2002" /> which was "furiously disputed by Fisher".<ref name="Zabell1992" /> It is also believed that these "unproductive disputes" and Fisher's "stubborn insistence"<ref name="Zabell1992" /> might be the reason that the concept of confidence distribution has long been misconstrued as a fiducial concept and has not been fully developed under the frequentist framework.<ref name="Xie2011" /><ref name="Singh2011" /> Indeed, the confidence distribution is a purely frequentist concept with a purely frequentist interpretation, although it also has ties to Bayesian inference concepts and the fiducial arguments.
 
== Definition ==
=== Classical definition ===
 
Classically, a confidence distribution is defined by inverting the upper limits of a series of lower sided confidence intervals.<ref name = "Efron1993"/>{{page needed|date=May 2011}}<ref name = "Cox2006a"/> In particular,
 
;'''Definition''' (classical definition): For every ''α'' in (0,&nbsp;1), let (−∞,&nbsp;''ξ''<sub>''n''</sub>(''α'')] be a 100α% lower-side confidence interval for ''θ'', where ''ξ''<sub>''n''</sub>(''α'') =&nbsp;''ξ''<sub>''n''</sub>(''X''<sub>n</sub>,α) is continuous and increasing in α for each sample ''X''<sub>''n''</sub>. Then, ''H''<sub>''n''</sub>(•)&nbsp;=&nbsp;''ξ''<sub>''n''</sub><sup>−1</sup>(•) is a confidence distribution for&nbsp;''θ''.
 
Efron stated that this distribution "assigns probability 0.05 to θ lying between the upper endpoints of the 0.90 and 0.95 confidence interval, etc." and "it has powerful intuitive appeal".<ref name="Efron1993" />
In the classical literature,{{citation needed|date=May 2011}} the confidence distribution function is interpreted as a distribution function of the parameter θ, which is impossible unless fiducial reasoning is involved since, in a frequentist setting, the parameters are fixed and nonrandom.
 
To interpret the CD function entirely from a frequentist viewpoint, and not to interpret it as a distribution function of a (fixed/nonrandom) parameter, is one of the major departures of the recent development relative to the classical approach. An advantage of treating the confidence distribution as a purely frequentist concept (similar to a point estimator) is that it is free from the restrictive, if not controversial, constraints set forth by Fisher on fiducial distributions.<ref name = "Xie2011"/><ref name = "Singh2011" />
 
=== Modern definition ===
 
The following definition applies.<ref name="Schweder2002" /><ref name = "Singh2001"/><ref name="Singh2005" /> In the definition, Θ is the parameter space of the unknown parameter of interest θ, and χ is the sample space corresponding to data ''X''<sub>''n''</sub>={''X''<sub>1</sub>, ..., ''X''<sub>''n''</sub>}.
 
;'''Definition''': A function ''H''<sub>''n''</sub>(•) = ''H''<sub>''n''</sub>(''X''<sub>''n''</sub>,&nbsp;•) on χ&nbsp;×&nbsp;Θ&nbsp;→&nbsp;[0,&nbsp;1] is called a confidence distribution (CD) for a parameter θ, if it follows two requirements:
:*(R1) For each given ''X''<sub>''n''</sub>&nbsp;∈&nbsp;''χ'', ''H''<sub>''n''</sub>(•)&nbsp;=&nbsp;''H''<sub>''n''</sub>(''X''<sub>''n''</sub>,&nbsp;•) is a continuous cumulative distribution function on Θ;
:*(R2) At the true parameter value ''θ''&nbsp;=&nbsp;''θ''<sub>0</sub>, ''H''<sub>''n''</sub>(''θ''<sub>0</sub>)&nbsp;≡&nbsp;''H''<sub>''n''</sub>(''X''<sub>''n''</sub>, ''θ''<sub>0</sub>), as a function of the sample ''X''<sub>''n''</sub>, follows the uniform distribution ''U''[0,&nbsp;1].
Also, the function ''H'' is an asymptotic CD (aCD), if the ''U''[0,&nbsp;1] requirement is true only asymptotically and the continuity requirement on ''H''<sub>''n''</sub>(•) is dropped.
 
In nontechnical terms, a confidence distribution is a function of both the parameter and the random sample, with two requirements. The first requirement (R1) simply requires that a CD should be a distribution on the parameter space. The second requirement (R2) sets a restriction on the function so that inferences (point estimators, confidence intervals and hypothesis testing, etc.) based on the confidence distribution have desired frequentist properties. This is similar to the restrictions in point estimation to ensure certain desired properties, such as unbiasedness, consistency, efficiency, etc.<ref name="Xie2011" /><ref name="Xie2009" />
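To make requirement (R2) concrete, the following minimal simulation sketch (not drawn from the cited references; the sample size, parameter values and SciPy-based helper are illustrative assumptions) checks numerically that, for the known-variance normal-mean CD used in Example 1 below, the CD evaluated at the true mean behaves like a ''U''[0,&nbsp;1] random variable across repeated samples.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Illustrative settings (assumptions, not taken from the article)
rng = np.random.default_rng(0)
mu0, sigma, n, n_reps = 1.5, 2.0, 25, 10_000

def H_phi(mu, xbar):
    """Known-variance CD for the normal mean: H_n(mu) = Phi(sqrt(n)(mu - xbar)/sigma)."""
    return stats.norm.cdf(np.sqrt(n) * (mu - xbar) / sigma)

# Evaluate the CD at the true parameter value for many independent samples
u = np.array([H_phi(mu0, rng.normal(mu0, sigma, size=n).mean())
              for _ in range(n_reps)])

# (R2) says these values should be (approximately) Uniform[0, 1]
print(stats.kstest(u, "uniform"))  # a large p-value is consistent with uniformity
</syntaxhighlight>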
 
A confidence distribution derived by inverting the upper limits of confidence intervals (the classical definition) also satisfies the requirements in the above definition, so the modern definition is consistent with the classical one.<ref name="Singh2005" />
 
Unlike classical fiducial inference, more than one confidence distribution may be available to estimate a parameter under any specific setting. Also unlike classical fiducial inference, optimality is not part of the requirement. Depending on the setting and the criterion used, sometimes there is a unique "best" (in terms of optimality) confidence distribution. But sometimes there is no optimal confidence distribution available or, in some extreme cases, we may not even be able to find a meaningful confidence distribution. This is no different from the practice of point estimation.
 
===Examples===
'''Example 1: Normal mean and variance'''
 
Suppose a [[normal distribution|normal]] sample ''X''<sub>''i''</sub>&nbsp;~&nbsp;''N''(''μ'',&nbsp;''σ''<sup>2</sup>), ''i''&nbsp;=&nbsp;1,&nbsp;2,&nbsp;...,&nbsp;''n'' is given.
 
'''(1) Variance ''σ''<sup>2</sup> is known'''
 
Both the functions <math>H_\Phi(\mu)</math> and <math>H_t(\mu)</math> given by
 
:<math>
    H_{\Phi}(\mu) = \Phi\left(\frac{\sqrt{n}(\mu-\bar{X})}{\sigma}\right) ,
    \quad\text{and}\quad
    H_{t}(\mu) = F_{t_{n-1}}\left(\frac{\sqrt{n}(\mu-\bar{X})}{s}\right) ,
</math>
 
satisfy the two requirements in the CD definition, and they are confidence distribution functions for&nbsp;''μ''.{{citation needed|date=June 2011}} Here, Φ is the cumulative distribution function of the standard normal distribution, <math> F_{t_{n-1}} </math> is the cumulative distribution function of the Student <math> t_{n-1} </math> distribution, and <math>\bar{X}</math> and <math>s^2</math> denote the sample mean and sample variance, respectively. Furthermore,
 
:<math> H_A(\mu) = \Phi\left(\frac{\sqrt{n}(\mu-\bar{X})}{s}\right)</math>
 
satisfies the definition of an asymptotic confidence distribution as ''n''&nbsp;→&nbsp;∞, so it is an asymptotic confidence distribution for&nbsp;''μ''. The use of <math>H_{\Phi}(\mu)</math> and <math>H_{A}(\mu)</math> is equivalent to stating that we use <math>N(\bar{X},\sigma^2/n)</math> and <math>N(\bar{X},s^2/n)</math>, respectively, as distribution estimators of <math>\mu</math>.
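As a small numerical sketch of these functions (the simulated data, sample size and grid of candidate values below are illustrative assumptions, not part of the cited sources), one can evaluate <math>H_\Phi</math>, <math>H_t</math> and <math>H_A</math> on a grid of values of ''μ'' with SciPy:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_true, sigma, n = 0.0, 1.0, 20                 # illustrative assumptions
x = rng.normal(mu_true, sigma, size=n)
xbar, s = x.mean(), x.std(ddof=1)                # sample mean and standard deviation

mu_grid = np.linspace(xbar - 1.5, xbar + 1.5, 7)

H_phi = stats.norm.cdf(np.sqrt(n) * (mu_grid - xbar) / sigma)  # uses the known sigma
H_t   = stats.t.cdf(np.sqrt(n) * (mu_grid - xbar) / s, df=n - 1)
H_A   = stats.norm.cdf(np.sqrt(n) * (mu_grid - xbar) / s)      # asymptotic CD

for m, a, b, c in zip(mu_grid, H_phi, H_t, H_A):
    print(f"mu={m:+.2f}  H_Phi={a:.3f}  H_t={b:.3f}  H_A={c:.3f}")
</syntaxhighlight>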
 
'''(2) Variance ''σ''<sup>2</sup> is unknown'''
 
For the parameter ''μ'', since <math>H_\Phi(\mu)</math> involves the unknown parameter ''σ'', it violates the two requirements in the CD definition and is no longer a "distribution estimator" or a confidence distribution for&nbsp;''μ''.{{cn|date=July 2012}} However, <math>H_{t}(\mu)</math> is still a CD for ''μ'' and <math>H_{A}(\mu)</math> is an aCD for&nbsp;''μ''.
 
For the parameter ''σ''<sup>2</sup>, the sample-dependent cumulative distribution function
 
:<math>H_{\chi^2}(\theta)=1-F_{\chi^2_{n-1}}\left(\frac{(n-1)s^2}{\theta}\right)</math>
 
is a confidence distribution function for σ<sup>2</sup>.{{cn|date=July 2012}} Here, <math> F_{\chi^2_{n-1}} </math> is the cumulative distribution function of the <math> \chi^2_{n-1} </math> distribution (the chi-squared distribution with ''n''&nbsp;&minus;&nbsp;1 degrees of freedom).
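A brief sketch (continuing the simulation conventions above; the data-generating values are illustrative assumptions) evaluates this CD for ''σ''<sup>2</sup> at a few candidate values, with ''s''<sup>2</sup> taken as the usual unbiased sample variance:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma2_true, n = 4.0, 30                         # illustrative assumptions
x = rng.normal(0.0, np.sqrt(sigma2_true), size=n)
s2 = x.var(ddof=1)                               # unbiased sample variance

def H_chi2(theta):
    """CD for sigma^2: 1 - F_{chi^2_{n-1}}((n-1) s^2 / theta)."""
    return 1.0 - stats.chi2.cdf((n - 1) * s2 / theta, df=n - 1)

for theta in [2.0, 3.0, 4.0, 5.0, 8.0]:
    print(f"theta={theta:.1f}  H_chi2={H_chi2(theta):.3f}")
</syntaxhighlight>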
 
In the case when the variance ''σ''<sup>2</sup> is known, <math>
    H_{\Phi}(\mu) = \Phi\left(\frac{\sqrt{n}(\mu-\bar{X})}{\sigma}\right) </math> is optimal in terms of producing the shortest confidence intervals at any given level. In the case when the variance ''σ''<sup>2</sup> is unknown, <math> H_{t}(\mu) = F_{t_{n-1}}\left(\frac{\sqrt{n}(\mu-\bar{X})}{s}\right) </math> is an optimal confidence distribution for ''μ''.
 
'''Example 2: Bivariate normal correlation'''
 
Let ρ denote the [[Pearson product-moment correlation coefficient|correlation coefficient]] of a [[multivariate normal distribution|bivariate normal]] population. It is well known that Fisher's ''z'', defined by the [[Fisher transformation]]:
 
:<math>z = {1 \over 2}\ln{1+r \over 1-r}</math>
 
has the [[asymptotic distribution|limiting distribution]] <math>N({1 \over 2}\ln{{1+\rho}\over{1-\rho}}, {1 \over n-3})</math> with a fast rate of convergence, where ''r'' is the sample correlation and ''n'' is the sample size.
 
The function
 
:<math>H_n(\rho) = 1 - \Phi\left(\sqrt{n-3} \left({1 \over 2}\ln{1+r \over 1-r} -{1 \over 2}\ln{{1+\rho}\over{1-\rho}} \right)\right)</math>
 
is an asymptotic confidence distribution for ρ.{{citation needed|date=June 2011}}
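The following sketch (the simulated bivariate sample and the grid of candidate values of ''ρ'' are illustrative assumptions) computes this asymptotic CD from a sample correlation ''r'' via the Fisher transformation:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho_true, n = 0.6, 100                           # illustrative assumptions
cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]        # sample correlation

def H_rho(rho):
    """Asymptotic CD for the correlation coefficient, based on Fisher's z."""
    z_r = 0.5 * np.log((1 + r) / (1 - r))
    z_rho = 0.5 * np.log((1 + rho) / (1 - rho))
    return 1.0 - stats.norm.cdf(np.sqrt(n - 3) * (z_r - z_rho))

for rho in [0.3, 0.5, 0.6, 0.7, 0.9]:
    print(f"rho={rho:.1f}  H_n={H_rho(rho):.3f}")
</syntaxhighlight>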
 
== Using CD to make inference ==
=== Confidence interval ===
 
[[File:CDinference1.png|right|thumb|400px]]
From the CD definition, it is evident that the intervals <math>(-\infty, H_n^{-1}(1-\alpha)]</math>, <math>[H_n^{-1}(\alpha), \infty)</math> and <math>[H_n^{-1}(\alpha/2), H_n^{-1}(1-\alpha/2)]</math> provide 100(1&nbsp;&minus;&nbsp;α)%-level confidence intervals of different kinds for θ, for any ''α''&nbsp;∈&nbsp;(0,&nbsp;1).  Also, <math>[H_n^{-1}(\alpha_1), H_n^{-1}(1-\alpha_2)]</math> is a level 100(1&nbsp;&minus;&nbsp;''α''<sub>1</sub>&nbsp;&minus;&nbsp;''α''<sub>2</sub>)% confidence interval for the parameter θ for any ''α''<sub>1</sub>&nbsp;>&nbsp;0, ''α''<sub>2</sub>&nbsp;>&nbsp;0 and ''α''<sub>1</sub>&nbsp;+&nbsp;''α''<sub>2</sub>&nbsp;<&nbsp;1. Here, <math> H_n^{-1}(\beta) </math> is the 100β% quantile of <math> H_n(\theta) </math>; that is, it solves the equation <math> H_n(\theta)=\beta </math> for θ. The same holds for an aCD, where the confidence level is achieved in the limit.
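As a sketch of how such intervals are read off a CD in practice (using the ''t''-based CD <math>H_t(\mu)</math> from Example 1; the data values below are illustrative assumptions), the quantile function has the closed form <math>H_n^{-1}(\beta)=\bar{X}+(s/\sqrt{n})\,F_{t_{n-1}}^{-1}(\beta)</math>, so confidence limits are simply CD quantiles:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

x = np.array([4.1, 5.3, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7])  # illustrative data (assumption)
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

def H_inv(beta):
    """Quantile function of H_t(mu) = F_{t_{n-1}}(sqrt(n)(mu - xbar)/s)."""
    return xbar + s / np.sqrt(n) * stats.t.ppf(beta, df=n - 1)

alpha = 0.05
print(f"Equal-tailed 95% CI for mu: [{H_inv(alpha/2):.3f}, {H_inv(1 - alpha/2):.3f}]")
print(f"One-sided 95% interval:     (-inf, {H_inv(1 - alpha):.3f}]")
</syntaxhighlight>

The equal-tailed interval produced this way coincides with the classical Student ''t'' interval, as expected from the form of <math>H_t</math>.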
 
=== Point estimation ===
 
Point estimators can also be constructed given a confidence distribution estimator for the parameter of interest. For example, given a CD ''H''<sub>''n''</sub>(''θ'') for a parameter ''θ'', natural choices of point estimator include the median ''M''<sub>''n''</sub>&nbsp;=&nbsp;''H''<sub>''n''</sub><sup>&minus;1</sup>(1/2), the mean <math>\bar{\theta}_n=\int_{-\infty}^\infty t \, dH_n(t)</math>, and the maximum point of the CD density
 
:<math>\widehat{\theta}_n=\arg\max_\theta h_n(\theta), \quad \text{where } h_n(\theta)=H'_n(\theta).</math>
 
Under some modest conditions, among other properties, one can prove that these point estimators are all consistent.<ref name = "Xie2011" /><ref name = "Singh2007" />
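A minimal numerical sketch (reusing the ''χ''<sup>2</sup>-based CD for ''σ''<sup>2</sup> from Example 1; the data values and search bounds are illustrative assumptions) computes the three estimators generically, obtaining the CD-median by root-finding, the CD-mean by numerical integration, and the CD-mode by maximizing the CD density:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats, optimize, integrate

x = np.array([2.3, 1.1, 3.4, 2.8, 0.9, 2.0, 3.1, 1.7, 2.5, 2.2])  # illustrative data
n, s2 = len(x), x.var(ddof=1)

def H(theta):
    """CD for sigma^2: 1 - F_{chi^2_{n-1}}((n-1) s^2 / theta)."""
    return 1.0 - stats.chi2.cdf((n - 1) * s2 / theta, df=n - 1)

def h(theta):
    """CD density h_n(theta) = H_n'(theta), obtained by the chain rule."""
    return stats.chi2.pdf((n - 1) * s2 / theta, df=n - 1) * (n - 1) * s2 / theta**2

cd_median = optimize.brentq(lambda t: H(t) - 0.5, 1e-6, 100.0)
cd_mean, _ = integrate.quad(lambda t: t * h(t), 0.0, np.inf)
cd_mode = optimize.minimize_scalar(lambda t: -h(t), bounds=(1e-6, 100.0),
                                   method="bounded").x

print(f"CD-median={cd_median:.3f}  CD-mean={cd_mean:.3f}  CD-mode={cd_mode:.3f}")
</syntaxhighlight>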
 
=== Hypothesis testing ===
 
One can derive a p-value for a test, either one-sided or two-sided, concerning the parameter&nbsp;''θ'', from its confidence distribution ''H''<sub>''n''</sub>(''θ'').<ref name="Xie2011" /><ref name="Singh2007" /> Denote by <math> p_s(C)=H_n(C) = \int_C \, d H_n(\theta) </math> the probability mass of a set ''C'' under the confidence distribution function. This ''p''<sub>''s''</sub>(''C'') is called the "support" in CD inference, and is also known as the "belief" in the fiducial literature.<ref name ="Kendall1974"/> We have
 
(1) For the one-sided test ''K''<sub>0</sub>: ''θ''&nbsp;∈&nbsp;''C'' vs. ''K''<sub>1</sub>: ''θ''&nbsp;∈&nbsp;''C''<sup>c</sup>, where ''C'' is of the form (&minus;∞,&nbsp;''b''] or [''b'',&nbsp;∞), one can show from the CD definition that sup<sub>''θ''&nbsp;∈&nbsp;''C''</sub>''P''<sub>''θ''</sub>(''p''<sub>''s''</sub>(''C'')&nbsp;≤&nbsp;''α'')&nbsp;=&nbsp;''α''. Thus, ''p''<sub>''s''</sub>(''C'')&nbsp;=&nbsp;''H''<sub>''n''</sub>(''C'') is the corresponding p-value of the test.
 
(2) For the singleton test ''K''<sub>0</sub>: ''θ''&nbsp;=&nbsp;''b'' vs. ''K''<sub>1</sub>: ''θ''&nbsp;≠&nbsp;''b'', one can show from the CD definition that ''P''<sub>{''θ''&nbsp;=&nbsp;''b''}</sub>(2&nbsp;min{''p''<sub>''s''</sub>(''C''<sub>lo</sub>),&nbsp;''p''<sub>''s''</sub>(''C''<sub>up</sub>)}&nbsp;≤&nbsp;''α'')&nbsp;=&nbsp;''α''. Thus, 2&nbsp;min{''p''<sub>''s''</sub>(''C''<sub>lo</sub>),&nbsp;''p''<sub>''s''</sub>(''C''<sub>up</sub>)} =&nbsp;2&nbsp;min{''H''<sub>''n''</sub>(''b''), 1&nbsp;&minus;&nbsp;''H''<sub>''n''</sub>(''b'')} is the corresponding p-value of the test. Here, ''C''<sub>lo</sub>&nbsp;=&nbsp;(&minus;∞,&nbsp;''b''] and ''C''<sub>up</sub>&nbsp;=&nbsp;[''b'',&nbsp;∞).
 
See Figure 1 of Xie and Singh (2013)<ref name = "Xie2011"/> for a graphical illustration of the CD inference.
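The sketch below (reusing the ''t''-based CD for ''μ''; the data and the null value ''b'' are illustrative assumptions) computes the one-sided support ''p''<sub>''s''</sub>(''C'')&nbsp;=&nbsp;''H''<sub>''n''</sub>(''b'') and the two-sided p-value 2&nbsp;min{''H''<sub>''n''</sub>(''b''), 1&nbsp;&minus;&nbsp;''H''<sub>''n''</sub>(''b'')}, and compares the latter with the ordinary one-sample ''t'' test, with which it coincides:

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

x = np.array([5.1, 4.6, 5.9, 5.3, 4.8, 5.5, 5.0, 5.4])  # illustrative data
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
b = 5.0                                                  # illustrative null value

def H(mu):
    """t-based CD for mu: F_{t_{n-1}}(sqrt(n)(mu - xbar)/s)."""
    return stats.t.cdf(np.sqrt(n) * (mu - xbar) / s, df=n - 1)

p_one_sided = H(b)                     # support of C = (-inf, b]: tests K0: mu <= b
p_two_sided = 2 * min(H(b), 1 - H(b))  # singleton test K0: mu = b

t_stat, p_ttest = stats.ttest_1samp(x, popmean=b)
print(f"CD one-sided p = {p_one_sided:.4f}")
print(f"CD two-sided p = {p_two_sided:.4f}  (ttest_1samp: {p_ttest:.4f})")
</syntaxhighlight>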
 
==References==
{{reflist| refs=
<ref name = "cox1958">Cox, D.R. (1958). "Some Problems Connected with Statistical Inference", "[[The Annals of Mathematical Statistics]]", "29" 357-372  (Section 4, Page 363)</ref>
<ref name="Cox2006a">[[David R. Cox|Cox, D. R.]] (2006). ''Principles of Statistical Inference'', CUP. ISBN 0-521-68567-2. (page 66)</ref>
<ref name="Bayes1973">Bayes, T. (1973). "[[An Essay towards solving a Problem in the Doctrine of Chances]]." ''Phil. Trans. Roy. Soc'', London '''53''' 370–418 '''54''' 296–325. Reprinted in ''[[Biometrika]]'' '''45''' (1958) 293–315.</ref>
<ref name="Efron1993">Efron, B. (1993). "Bayes and likelihood calculations from confidence intervals.'' ''[[Biometrika]]'', '''80''' 3–26.</ref>
<ref name="Efron1998">Efron, B. (1998). [http://www.jstor.org/pss/2290557 "R.A.Fisher in the 21st Century"] ''Statistical Science.'' '''13''' 95–122.</ref>
<ref name="Fisher1930">[[Ronald Fisher|Fisher, R.A.]] (1930). "Inverse probability." ''Proc. cambridge Pilos. Soc.'' '''26''', 528–535.</ref>
<ref name="Fraser1991">Fraser, D.A.S. (1991). [http://www.jstor.org/pss/2290557 "Statistical inference: Likelihood to significance."] ''[[Journal of the American Statistical Association]]'', '''86''', 258–265.</ref>
<ref name="Fraser2011">Fraser, D.A.S. (2011). [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1320066918 "Is Bayes posterior just quick and dirty confidence?"] ''Statistical Science'' '''26''', 299-316.</ref>
<ref name="Kendall1974">Kendall, M., & Stuart, A. (1974). ''The Advanced Theory of Statistics'', Volume ?. (Chapter 21). Wiley.</ref>
<ref name="Neyman1937">Neyman, J. (1937). "Outline of a theory of statistical estimation based on the classical theory of probability." ''Phil. Trans. Roy. Soc'' '''A237''' 333–380</ref>
<ref name="Schweder2002">Schweder, T. and Hjort, N.L. (2002). "Confidence and likelihood", ''Scandinavian Journal of Statistics.'' '''29''' 309–332. {{doi|10.1111/1467-9469.00285}}</ref>
<ref name="Singh2001">Singh, K. Xie, M. and Strawderman, W.E. (2001). "Confidence distributions—concept, theory and applications". Technical report, Dept. Statistics, Rutgers Univ. Revised 2004.</ref>
<ref name="Singh2005">Singh, K. Xie, M. and Strawderman, W.E. (2005). [http://www.jstor.org/pss/3448660 "Combining Information from Independent Sources Through Confidence Distribution"] ''[[Annals of Statistics]]'', '''33''', 159–183.</ref>
<ref name="Singh2007">Singh, K. Xie, M. and Strawderman, W.E. (2007). [http://www.jstor.org/pss/20461464 "Confidence Distribution (CD)-Distribution Estimator of a Parameter"], in ''Complex Datasets and Inverse Problems'' ''IMS Lecture Notes—Monograph Series'', '''54''',(R. Liu, et al. Eds) 132–150.</ref>
<ref name="Singh2011">Singh, K. and Xie, M. (2011). [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1320066920 "Discussions of “Is Bayes posterior just quick and dirty confidence?” by D.A.S. Fraser."] Statistical Science. Vol. 26, 319-321.</ref>
<ref name="Xie2009"> Xie, M., Liu, R., Daramuju, C.V., Olsan, W. (2012). [http://www.stat.rutgers.edu/home/mxie/RCPapers/expertopinions-final.pdf "Incorporating expert opinions with information from binomial clinical trials."] Annals of Applied Statistics. In press.</ref>
<ref name = "Xie2011">Xie, M. and Singh, K. (2013). [http://onlinelibrary.wiley.com/doi/10.1111/insr.12000/abstract "Confidence Distribution, the Frequentist Distribution Estimator of a Parameter – a Review (with discussion)"]. "International Statistical Review'', '''81''', 3-39.</ref>
<ref name = "Xie2013r">Xie, M.  (2013). [http://onlinelibrary.wiley.com/doi/10.1111/insr.12001/abstract "Rejoinder of Confidence Distribution, the Frequentist Distribution Estimator of a Parameter – a Review"]. "International Statistical Review'', '''81''', 68-77.</ref>
<ref name="Zabell1992">Zabell, S.L. (1992). "R.A.Fisher and fiducial argument", ''Stat. Sci.'', '''7''', 369–387</ref>
}}
<!--
uncited citations
 
<ref name="Fisher1937">[[Ronald Fisher|Fisher, R.A.]] (1973), ''Statistical Methods and Scientific Inference'', 3rd edition. Hafner Press, New York.</ref>
<ref name="Neyman1941">Neyman, J. (1941). "Fiducial argument and the theory of confidence intervals." ''Biometrika.'' '''32''' 128–150.</ref>
<ref name="Parzen2005">Parzen, E. (2005). "All Statistical Methods, Parameter Confidence Quantiles". ''Noether Award Lecture at the Joint Statistical Meeting.''</ref>
<ref name="Schweder2003">Schweder, T. and Hjort, N.L. (2003). "Frequentist analogues of priors and posteriors."  In B.P. Stigum, ''Econometrics and the philosophy of economics.'' Princeton University Press 285–317.</ref>
<ref name="Schweder2009">Schweder, T. and Hjort, N.L. (2008). ''Confidence, Likelihood and Probability.'' Cambridge University Press. ISBN 0-521-86160-8</ref>
<ref name = "Xie2011a">Xie, M., Singh, K. and Strawderman, W.E. (2011). [http://pubs.amstat.org/doi/abs/10.1198/jasa.2011.tm09803 "Confidence distributions and a unified framework for meta-analysis"]. ''[[Journal of the American Statistical Association]]'', 106, 320–333.</ref>
 
-->
 
==Bibliography==
{{refbegin}}
* Fisher, R A (1956). ''Statistical Methods and Scientific Inference''. New York: Hafner. ISBN 0-02-844740-9.
* Fisher, R. A. (1955). "Statistical methods and scientific induction" ''[[Journal of the Royal Statistical Society|J. Roy. Statist. Soc.]]'' Ser. B. 17, 69—78. (criticism of statistical theories of Jerzy Neyman and Abraham Wald from a fiducial perspective)
* Hannig, J. (2009). "On generalized fiducial inference". ''Statistica Sinica'', '''19''', 491–544.
*Lawless, F. and Fredette, M. (2005). "Frequentist prediction intervals and predictive distributions." ''Biometrika.'' '''92(3)''' 529–542.
* Lehmann, E.L. (1993). "The Fisher, Neyman–Pearson theories of testing hypotheses: one theory or two?" ''Journal of the American Statistical Association'' '''88''' 1242–1249.
* Neyman, Jerzy (1956). "Note on an Article by Sir Ronald Fisher". ''Journal of the Royal Statistical Society''. Series B (Methodological) 18 (2): 288–294. {{jstor|2983716}}. (reply to Fisher 1955, which diagnoses a fallacy of "fiducial inference")
* Schweder T., Sadykova D., Rugh D. and Koski W. (2010) "Population Estimates From Aerial Photographic Surveys of Naturally and Variably Marked Bowhead Whales" '' Journal of Agricultural Biological and Environmental Statistics'' 2010 15: 1–19
* Bityukov S., Krasnikov N., Nadarajah S. and Smirnova V. (2010) "Confidence distributions in statistical inference". AIP Conference Proceedings, '''1305''', 346-353.
* Singh, K. and Xie, M. (2012). [http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.imsc/1331731621 "CD-posterior --- combining prior and data through confidence distributions."] Contemporary Developments in Bayesian Analysis and Statistical Decision Theory: A Festschrift for William E. Strawderman. (D. Fourdrinier, et al., Eds.). IMS Collection, Volume 8, 200–214.
{{refend}}
 
[[Category:Statistical inference]]
[[Category:Parametric statistics]]
