{{For|US Army cryptologist [[William F. Friedman]]'s cryptanalytic test|Vigenère cipher#Friedman test}}
The '''Friedman test''' is a [[non-parametric statistics|non-parametric]] [[statistical test]] developed by the [[United States|U.S.]] economist [[Milton Friedman]]. Similar to the [[parametric statistics|parametric]] [[repeated measures]] [[ANOVA]], it is used to detect differences in treatments across multiple test attempts. The procedure involves [[ranking]] each row (or ''block'') together, then considering the values of ranks by columns. Applicable to [[complete block design]]s, it is thus a special case of the [[Durbin test]].

Classic examples of use are:
* ''n'' wine judges each rate ''k'' different wines. Are any wines ranked consistently higher or lower than the others?
* ''n'' wines are each rated by ''k'' different judges. Are the judges' ratings consistent with each other?
* ''n'' welders each use ''k'' welding torches, and the ensuing welds are rated on quality. Do any of the torches produce consistently better or worse welds?

The Friedman test is used for one-way repeated measures analysis of variance by ranks. In its use of ranks it is similar to the [[Kruskal-Wallis one-way analysis of variance]] by ranks.

The Friedman test is widely supported by many [[Comparison of statistical packages|statistical software packages]].
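
For example, in Python the test is available as the <code>friedmanchisquare</code> function in [[SciPy]]'s <code>scipy.stats</code> module. The sketch below is only a minimal illustration: the ratings are made-up numbers, and each argument holds one treatment's measurements, with the ''i''-th entries of all arguments coming from the same block:
<syntaxhighlight lang="python">
# Minimal illustration of the Friedman test with SciPy.
# Each argument is one treatment (column); the i-th entry of every
# argument comes from the same block (row), e.g. the same judge.
from scipy.stats import friedmanchisquare

# Hypothetical ratings: 6 judges (blocks) rate 3 wines (treatments).
wine_a = [8, 7, 9, 6, 8, 7]
wine_b = [6, 5, 7, 5, 6, 6]
wine_c = [7, 8, 8, 7, 9, 8]

statistic, p_value = friedmanchisquare(wine_a, wine_b, wine_c)
print(f"Q = {statistic:.3f}, p = {p_value:.4f}")
</syntaxhighlight>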

==Method==
#Given data <math>\{x_{ij}\}_{n\times k}</math>, that is, a [[Matrix (mathematics)|matrix]] with <math>n</math> rows (the ''blocks''), <math>k</math> columns (the ''treatments'') and a single observation at the intersection of each block and treatment, calculate the [[Rank statistics|ranks]] ''within'' each block. If there are tied values, assign to each tied value the average of the ranks that would have been assigned without ties. Replace the data with a new matrix <math>\{r_{ij}\}_{n \times k}</math> where the entry <math>r_{ij}</math> is the rank of <math>x_{ij}</math> within block <math>i</math>.
#Find the values:
#*<math>\bar{r}_{\cdot j} = \frac{1}{n} \sum_{i=1}^n {r_{ij}}</math>
#*<math>\bar{r} = \frac{1}{nk}\sum_{i=1}^n \sum_{j=1}^k r_{ij}</math>
#*<math>SS_t = n\sum_{j=1}^k (\bar{r}_{\cdot j} - \bar{r})^2</math>
#*<math>SS_e = \frac{1}{n(k-1)} \sum_{i=1}^n \sum_{j=1}^k (r_{ij} - \bar{r})^2</math>
#The test statistic is given by <math>Q = \frac{SS_t}{SS_e}</math>. Note that the value of Q as computed above does not need to be adjusted for tied values in the data.
#Finally, when n or k is large (i.e. n > 15 or k > 4), the [[probability distribution]] of Q can be approximated by that of a [[chi-squared distribution]]. In this case the [[p-value]] is given by <math>\mathbf{P}(\chi^2_{k-1} \ge Q)</math>. If n or k is small, the approximation to chi-square becomes poor and the p-value should be obtained from tables of Q specially prepared for the Friedman test. If the p-value is [[statistical significance|significant]], appropriate post-hoc [[multiple comparisons]] tests would be performed.
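
In Python, a minimal sketch of these steps (not a library implementation) might look as follows, assuming the data are held in an ''n'' × ''k'' array with blocks as rows and treatments as columns; average ranks for ties come from <code>scipy.stats.rankdata</code>, the p-value from the chi-squared approximation, and the helper name <code>friedman_q</code> is arbitrary:
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2, rankdata

def friedman_q(x):
    """Illustrative helper: Friedman statistic Q and approximate p-value.

    x is an n-by-k array: rows are blocks, columns are treatments.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape

    # Step 1: rank within each block (row); ties get average ranks.
    r = np.apply_along_axis(rankdata, 1, x)

    # Step 2: column mean ranks, grand mean rank, SS_t and SS_e.
    r_col = r.mean(axis=0)      # mean rank of each treatment
    r_bar = r.mean()            # grand mean rank, always (k + 1) / 2
    ss_t = n * np.sum((r_col - r_bar) ** 2)
    ss_e = np.sum((r - r_bar) ** 2) / (n * (k - 1))

    # Step 3: the test statistic (no tie adjustment needed).
    q = ss_t / ss_e

    # Step 4: chi-squared approximation with k - 1 degrees of freedom;
    # only adequate when n or k is reasonably large, as noted above.
    p_value = chi2.sf(q, k - 1)
    return q, p_value

# Made-up example for illustration: 6 blocks (judges), 3 treatments (wines).
ratings = [[8, 6, 7], [7, 5, 8], [9, 7, 8], [6, 5, 7], [8, 6, 9], [7, 6, 8]]
print(friedman_q(ratings))
</syntaxhighlight>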

==Related tests==
* When using this kind of design for a binary response, one instead uses [[Cochran's Q test]].
* [[Kendall's W]] is a normalization of the Friedman statistic between 0 and 1.
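
In the notation of the method above (ignoring tie corrections), Kendall's ''W'' for ''n'' judges ranking ''k'' items can be written in terms of the Friedman statistic as <math>W = \frac{Q}{n(k-1)}</math>, so that ''W'' = 0 corresponds to no agreement among the blocks and ''W'' = 1 to complete agreement.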

==Post hoc analysis==
[[Post-hoc analysis|Post-hoc tests]] were proposed by Schaich and Hamerle (1984)<ref>Schaich, E. & Hamerle, A. (1984). Verteilungsfreie statistische Prüfverfahren. Berlin: Springer. ISBN 3-540-13776-9.</ref> as well as Conover (1971, 1980)<ref>Conover, W. J. (1971, 1980). Practical nonparametric statistics. New York: Wiley. ISBN 0-471-16851-3.</ref> in order to decide which groups are significantly different from each other, based upon the mean rank differences of the groups. These procedures are detailed in Bortz, Lienert and Boehnke (2000, p. 275).<ref>Bortz, J., Lienert, G. & Boehnke, K. (2000). Verteilungsfreie Methoden in der Biostatistik. Berlin: Springer. ISBN 3-540-67590-6.</ref>

Not all statistical packages support post-hoc analysis for Friedman's test, but user-contributed code exists that provides these facilities (for example in SPSS [http://timo.gnambs.at/en/scripts/friedmanposthoc], and in R [http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/]).
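
As an illustration of the general idea (and not of the procedures cited above), one simple alternative follow-up is to compare each pair of treatments with a [[Wilcoxon signed-rank test]] and control the familywise error rate with a [[Bonferroni correction]]. A minimal Python sketch, in which the helper name <code>pairwise_wilcoxon</code> is arbitrary:
<syntaxhighlight lang="python">
# One simple post-hoc strategy after a significant Friedman test:
# pairwise Wilcoxon signed-rank tests with a Bonferroni correction.
# This is NOT the Conover or Schaich-Hamerle procedure cited in the text;
# the helper name below is arbitrary.
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_wilcoxon(columns, labels, alpha=0.05):
    """columns: equal-length sequences, one per treatment (same blocks)."""
    pairs = list(combinations(range(len(columns)), 2))
    adjusted_alpha = alpha / len(pairs)        # Bonferroni correction
    for i, j in pairs:
        _, p = wilcoxon(columns[i], columns[j])
        verdict = "differ" if p < adjusted_alpha else "no evidence of a difference"
        print(f"{labels[i]} vs {labels[j]}: p = {p:.4f} -> {verdict}")
</syntaxhighlight>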

== References ==
<references/>

=== Primary sources ===
*{{cite journal
| last = Friedman
| first = Milton
| authorlink = Milton Friedman
| date = December 1937
| title = The use of ranks to avoid the assumption of normality implicit in the analysis of variance
| journal = Journal of the American Statistical Association
| volume = 32
| issue = 200
| pages = 675–701
| doi = 10.2307/2279372
| jstor = 2279372
| publisher = American Statistical Association
}}
*{{cite journal
| last = Friedman
| first = Milton
| authorlink = Milton Friedman
| date = March 1939
| title = A correction: The use of ranks to avoid the assumption of normality implicit in the analysis of variance
| journal = Journal of the American Statistical Association
| volume = 34
| issue = 205
| pages = 109
| doi = 10.2307/2279169
| jstor = 2279169
| publisher = American Statistical Association
}}
*{{cite journal
| last = Friedman
| first = Milton
| authorlink = Milton Friedman
| date = March 1940
| title = A comparison of alternative tests of significance for the problem of ''m'' rankings
| journal = The Annals of Mathematical Statistics
| volume = 11
| issue = 1
| pages = 86–92
| doi = 10.1214/aoms/1177731944
| jstor = 2235971
}}

=== Secondary sources ===
*Kendall, M. G. ''Rank Correlation Methods.'' (1970, 4th ed.) London: Charles Griffin.
*Hollander, M., and Wolfe, D. A. ''Nonparametric Statistics.'' (1973). New York: J. Wiley.
*Siegel, Sidney, and Castellan, N. John Jr. ''Nonparametric Statistics for the Behavioral Sciences.'' (1988, 2nd ed.) New York: McGraw-Hill.

== External links ==
* [http://timo.gnambs.at/en/scripts/friedmanposthoc Post-hoc comparisons for Friedman test in SPSS]
* [http://www.r-statistics.com/2010/02/post-hoc-analysis-for-friedmans-test-r-code/ Post hoc analysis for Friedman’s Test in R]

{{Milton Friedman}}

{{DEFAULTSORT:Friedman Test}}
[[Category:Statistical tests]]
[[Category:Milton Friedman]]
[[Category:Non-parametric statistics]]