In [[statistics]], a '''pivotal quantity''' or '''pivot''' is a function of observations and unobservable parameters whose [[probability distribution]] does not depend on the unknown [[parameter]]s <ref>Shao, J.: ''Mathematical Statistics'', Springer, New York, 2003, ISBN 978-0-387-95382-3 (Section 7.1)</ref> (also referred to as [[nuisance parameter]]s). Note that a pivot quantity need not be a [[statistic]]—the function and its ''value'' can depend on the parameters of the model, but its ''distribution'' must not. If it is a statistic, then it is known as an ''[[ancillary statistic]].''
 
More formally,<ref>Morris H. DeGroot, Mark J. Schervish: ''Probability and Statistics'' (4th Edition), Pearson, 2011 (page 489)</ref> let <math>X = (X_1,X_2,\ldots,X_n) </math> be a random sample from a distribution that depends on a parameter (or vector of parameters) <math> \theta </math>. Let <math> g(X,\theta) </math> be a random variable whose distribution is the same for all <math> \theta </math>. Then <math>g</math> is called a ''pivotal quantity'' (or simply a ''pivotal'').
 
Pivotal quantities are commonly used for [[Normalization (statistics)|normalization]] to allow data from different data sets to be compared. It is relatively easy to construct pivots for location and scale parameters: for the former we form differences so that location cancels, for the latter ratios so that scale cancels.
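The difference/ratio recipe can be checked by simulation. A minimal sketch in Python with NumPy; the particular families and parameter values below are illustrative assumptions, not part of the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Location family: for X ~ N(theta, 1), the difference X - theta
# has the same N(0, 1) distribution for every theta.
loc_pivots = {theta: rng.normal(theta, 1.0, n) - theta
              for theta in (-5.0, 0.0, 7.0)}

# Scale family: for X ~ Exponential(scale=theta), the ratio X / theta
# has a standard Exponential(1) distribution for every theta.
scale_pivots = {theta: rng.exponential(theta, n) / theta
                for theta in (0.1, 1.0, 42.0)}
```

Each entry in <code>loc_pivots</code> has empirical mean near 0 and standard deviation near 1, and each entry in <code>scale_pivots</code> has empirical mean near 1, regardless of the value of the parameter that generated it.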
 
Pivotal quantities are fundamental to the construction of [[test statistic]]s, as they allow the statistic to not depend on parameters – for example, [[Student's t-statistic]] is for a normal distribution with unknown variance (and mean). They also provide one method of constructing [[confidence interval]]s, and the use of pivotal quantities improves performance of the [[bootstrapping (statistics)|bootstrap]]. In the form of ancillary statistics, they can be used to construct frequentist [[prediction interval]]s (predictive confidence intervals).
 
== Examples ==
 
=== Normal distribution ===
{{see also|Prediction interval#Normal distribution}}
One of the simplest pivotal quantities is the [[z-score]]; given a normal distribution with mean <math>\mu</math> and variance <math>\sigma^2</math>, and an observation ''x,'' the z-score:
: <math> z = \frac{x - \mu}{\sigma},</math>
has distribution <math>N(0,1)</math> – a normal distribution with mean 0 and variance 1. Similarly, since the ''n''-sample sample mean has sampling distribution <math>N(\mu,\sigma^2/n),</math> the z-score of the mean
: <math> z = \frac{\overline{X} - \mu}{\sigma/\sqrt{n}}</math>
also has distribution <math>N(0,1).</math> Note that while these functions depend on the parameters – and thus one can only compute them if the parameters are known (they are not statistics) – the distribution is independent of the parameters.
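This parameter-independence of the distribution can be seen empirically. A simulation sketch with NumPy; the particular values of <math>\mu</math> and <math>\sigma</math> below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 25, 200_000

def z_of_mean(mu, sigma):
    # z-score of the n-sample mean: it depends on mu and sigma as
    # arguments, but its sampling distribution is N(0, 1) whatever
    # values they take.
    X = rng.normal(mu, sigma, size=(reps, n))
    return (X.mean(axis=1) - mu) / (sigma / np.sqrt(n))

z_a = z_of_mean(0.0, 1.0)
z_b = z_of_mean(100.0, 5.0)
```

Both <code>z_a</code> and <code>z_b</code> have empirical mean near 0 and standard deviation near 1, even though they come from very different normal populations.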
 
Given <math>n</math> independent, identically distributed (i.i.d.) observations <math>X = (X_1, X_2, \ldots, X_n) </math> from the [[normal distribution]] with unknown mean <math>\mu</math> and variance <math>\sigma^2</math>, a pivotal quantity can be obtained from the function:
:<math> g(x,X) = \sqrt{n}\frac{x - \overline{X}}{s} </math>
where
:<math> \overline{X} = \frac{1}{n}\sum_{i=1}^n{X_i} </math>
and
:<math> s^2 = \frac{1}{n-1}\sum_{i=1}^n{(X_i - \overline{X})^2} </math>
are unbiased estimates of <math>\mu</math> and <math>\sigma^2</math>, respectively. The function <math>g(x,X)</math> is the [[Student's t-statistic]] for a new value <math>x</math>, to be drawn from the same population as the already observed set of values <math>X</math>.
 
Using <math>x=\mu</math> the function <math>g(\mu,X)</math> becomes a pivotal quantity, which is also distributed by the [[Student's t-distribution]] with <math>\nu = n-1</math> degrees of freedom. As required, even though <math>\mu</math> appears as an argument to the function <math>g</math>, the distribution of <math>g(\mu,X)</math> does not depend on the parameters <math>\mu</math> or <math>\sigma</math> of the normal probability distribution that governs the observations <math>X_1,\ldots,X_n</math>.
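A quick check that <math>g(\mu,X)</math> is pivotal: its empirical variance should match that of Student's t-distribution with <math>n-1</math> degrees of freedom, namely <math>(n-1)/(n-3)</math>, for any choice of <math>\mu</math> and <math>\sigma</math>. A simulation sketch; the sample size and parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 10, 400_000

def t_pivot(mu, sigma):
    # g(mu, X) = sqrt(n) * (mu - Xbar) / s, which is distributed
    # t_{n-1} regardless of the values of mu and sigma.
    X = rng.normal(mu, sigma, size=(reps, n))
    xbar = X.mean(axis=1)
    s = X.std(axis=1, ddof=1)   # square root of the unbiased s^2
    return np.sqrt(n) * (mu - xbar) / s

t_a = t_pivot(0.0, 1.0)
t_b = t_pivot(-10.0, 0.3)
# Variance of t with nu = n - 1 = 9 degrees of freedom is nu/(nu-2) = 9/7.
```

Both samples have mean near 0 and variance near 9/7 despite the very different population parameters.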
 
This can be used to compute a [[prediction interval]] for the next observation <math>X_{n+1};</math> see [[Prediction interval#Normal distribution|Prediction interval: Normal distribution]].
 
=== Bivariate normal distribution ===
In more complicated cases, it is impossible to construct exact pivots. However, approximate pivots can still be constructed, and their use improves convergence to [[asymptotic normality]].
 
Suppose a sample of size <math>n</math> of vectors <math>(X_i,Y_i)'</math> is taken from a bivariate [[normal distribution]] with unknown [[correlation]] <math>\rho</math>.  
 
An estimator of <math>\rho</math> is the sample (Pearson, moment) correlation
:<math> r = \frac{\frac1{n-1} \sum_{i=1}^n (X_i - \overline{X})(Y_i - \overline{Y})}{s_X s_Y} </math>
where <math>s_X^2, s_Y^2</math> are [[sample variance]]s of <math>X</math> and <math>Y</math>. The sample statistic <math>r</math> has an asymptotically normal distribution:
:<math>\sqrt{n}\frac{r-\rho}{1-\rho^2} \Rightarrow N(0,1)</math>.
 
However, a [[variance-stabilizing transformation]]
:<math> z = \operatorname{tanh}^{-1} r = \frac12 \ln \frac{1+r}{1-r}</math>
known as [[Fisher transformation|Fisher's ''z'' transformation]] of the correlation coefficient, makes the distribution of <math>z</math> asymptotically independent of the unknown parameters:
:<math>\sqrt{n}(z-\zeta) \Rightarrow N(0,1)</math>
where <math>\zeta = \operatorname{tanh}^{-1} \rho</math> is the corresponding population parameter. For finite sample sizes <math>n</math>, the random variable <math>z</math> has a distribution closer to normal than that of <math>r</math>. An even closer approximation to the standard normal distribution is obtained by using a better approximation for the exact variance: the usual form is
:<math>\operatorname{Var}(z) \approx \frac1{n-3} .</math>
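A simulation sketch of this stabilization with NumPy; the sample size, the correlation values, and the use of the <math>1/(n-3)</math> variance approximation are the choices made here:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 50, 20_000

def fisher_pivot(rho):
    # sqrt(n - 3) * (arctanh(r) - arctanh(rho)) is approximately
    # N(0, 1) for any rho, using the variance approximation 1/(n - 3).
    cov = [[1.0, rho], [rho, 1.0]]
    X = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))
    Xc = X - X.mean(axis=1, keepdims=True)   # center each sample
    r = (Xc[..., 0] * Xc[..., 1]).sum(axis=1) / np.sqrt(
        (Xc[..., 0] ** 2).sum(axis=1) * (Xc[..., 1] ** 2).sum(axis=1))
    return np.sqrt(n - 3) * (np.arctanh(r) - np.arctanh(rho))

p_a = fisher_pivot(0.0)
p_b = fisher_pivot(0.5)
```

Both <code>p_a</code> and <code>p_b</code> are approximately standard normal, even though the raw correlation <math>r</math> has a variance that depends strongly on <math>\rho</math>.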
 
== Robustness ==
{{main|Robust statistics}}
From the point of view of [[robust statistics]], pivotal quantities are robust to changes in the parameters – indeed, independent of the parameters – but not in general robust to changes in the model, such as violations of the assumption of normality.
This is fundamental to the robust critique of non-robust statistics, often derived from pivotal quantities: such statistics may be robust within the family, but are not robust outside it.
 
== See also ==
* [[Normalization (statistics)]]
 
== References ==
{{Reflist}}
 
[[Category:Statistical theory]]
