'''Kuiper's test''' is used in [[statistics]] to [[statistical hypothesis test|test]] whether a given [[cumulative distribution function|distribution]], or family of distributions, is contradicted by evidence from a sample of data. It is named after the Dutch mathematician [[Nicolaas Kuiper]].
Kuiper's test<ref name=K1960>Kuiper (1960)</ref> is closely related to the better-known [[Kolmogorov–Smirnov test]] (or K-S test, as it is often called). As with the K-S test, the discrepancy statistics ''D''<sup>+</sup> and ''D''<sup>−</sup> represent the sizes of the largest positive and largest negative differences between the two [[cumulative distribution function]]s that are being compared. The trick with Kuiper's test is to use the quantity ''D''<sup>+</sup> + ''D''<sup>−</sup> as the test statistic. This small change makes Kuiper's test as sensitive in the tails as at the [[median]], and it also makes the test invariant under cyclic transformations of the independent variable. The [[Anderson–Darling test]] is another test that is equally sensitive at the tails and at the median, but it does not provide the cyclic invariance.

This invariance under cyclic transformations makes Kuiper's test invaluable when testing for [[seasonality|cyclic variations]] by time of year, day of the week or time of day, and more generally for testing the fit of, and differences between, [[circular distribution|circular probability distributions]].
==Definition==
The test statistic, ''V'', for Kuiper's test is defined as follows. Let ''F'' be the continuous [[cumulative distribution function]] which is to be tested as the [[null hypothesis]]. Denote by ''x<sub>i</sub>'' (''i'' = 1, ..., ''n'') the sample of data, arranged in increasing order, which are independent realisations of [[random variable]]s having ''F'' as their distribution function. Then define<ref name=PH1>Pearson & Hartley (1972) p 118</ref>
:<math>z_i = F(x_i),</math>

:<math>D^+ = \max_{1 \le i \le n} \left[ i/n - z_i \right],</math>

:<math>D^- = \max_{1 \le i \le n} \left[ z_i - (i-1)/n \right],</math>

and finally,

:<math>V = D^+ + D^-.</math>
Tables for the critical points of the test statistic are available,<ref>Pearson & Hartley (1972) Table 54</ref> and these include certain cases where the distribution being tested is not fully known, so that parameters of the family of distributions are [[estimation theory|estimated]].
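For illustration, ''V'' can be computed directly from the formulas above. The following is a minimal sketch in Python with NumPy; the function name <code>kuiper_statistic</code> and the data are purely illustrative, and significance would still have to be judged against the tabulated critical points.

<syntaxhighlight lang="python">
import numpy as np

def kuiper_statistic(x, cdf):
    """Kuiper's V for a sample x against a hypothesised continuous CDF."""
    z = np.sort(cdf(np.asarray(x, dtype=float)))   # z_i = F(x_(i)), in increasing order
    n = len(z)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - z)                     # D+ = max_i [ i/n - z_i ]
    d_minus = np.max(z - (i - 1) / n)              # D- = max_i [ z_i - (i-1)/n ]
    return d_plus + d_minus                        # V = D+ + D-

# Example: 100 pseudo-random values tested against the Uniform(0, 1) CDF, F(x) = x.
rng = np.random.default_rng(0)
v = kuiper_statistic(rng.uniform(size=100), lambda x: x)
print(v)
</syntaxhighlight>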
==Example==
We could test the hypothesis that computers fail more during some times of the year than others. To test this, we would collect the dates on which the test set of computers had failed and build an [[empirical distribution function]]. The [[null hypothesis]] is that the failures are [[Uniform distribution (continuous)|uniformly distributed]]. Kuiper's statistic does not change if we change the beginning of the year, and it does not require that we bin failures into months or the like.<ref name=K1960/><ref name=W1>Watson (1961)</ref> Another test statistic having this property is the Watson statistic,<ref name=PH1/><ref name=W1/> which is related to the [[Cramér–von Mises criterion|Cramér–von Mises test]].
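As a rough sketch of that procedure (reusing the <code>kuiper_statistic</code> function from the definition section; the failure dates below are hypothetical), the dates can be mapped to fractions of the year and compared against the uniform distribution:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical failure dates, recorded as day-of-year (1-365); purely illustrative.
failure_days = np.array([12, 45, 46, 88, 130, 131, 190, 250, 251, 300, 340, 364])

# Map each date onto [0, 1) so that the null hypothesis of uniformity over the
# year has CDF F(x) = x; where the year is taken to "start" does not change V.
fraction_of_year = failure_days / 365.0

v = kuiper_statistic(fraction_of_year, lambda x: x)
print(v)   # compare with tabulated critical points for n = 12
</syntaxhighlight>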
However, if failures occur mostly on weekends, many uniform-distribution tests such as the K-S test would miss this, since weekends are spread throughout the year. This inability to distinguish distributions with a [[comb]]-like shape from continuous uniform distributions is a key problem with all statistics based on a variant of the K-S test. Kuiper's test, applied to the event times modulo one week, is able to detect such a pattern.
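A sketch of the same idea on a weekly cycle (again with hypothetical data, and again reusing <code>kuiper_statistic</code>): wrapping the event times modulo one week concentrates weekend failures in one part of the unit interval, which inflates ''V''.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical failure times, in days since the start of the year (day 0 taken as a Monday).
failure_times = np.array([5.2, 6.1, 12.4, 13.7, 26.3, 27.5, 54.6, 55.1, 96.2, 97.8])

# Wrap onto a one-week cycle and rescale to [0, 1); uniformity on this circle is the null.
wrapped = (failure_times % 7.0) / 7.0

v = kuiper_statistic(wrapped, lambda x: x)
print(v)   # a large V flags the weekly (here, weekend) clustering
</syntaxhighlight>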
==Notes==
{{Reflist}}
==References==
*{{cite journal
 | last = Kuiper | first = N. H. | authorlink = Nicolaas Kuiper
 | year = 1960
 | title = Tests concerning random points on a circle
 | journal = Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, Series A
 | volume = 63
 | pages = 38–47
 }}
*[[Egon Pearson|Pearson, E.S.]], Hartley, H.O. (1972) ''Biometrika Tables for Statisticians, Volume 2'', CUP. ISBN 0-521-06937-8 (page 118 and Table 54)
*Watson, G.S. (1961) "Goodness-of-Fit Tests on a Circle", ''[[Biometrika]]'', 48 (1/2), 109–114 {{jstor|2333135}}
[[Category:Statistical tests]]
[[Category:Non-parametric statistics]]
[[Category:Directional statistics]]