'''Fleiss' kappa''' (named after [[Joseph L. Fleiss]]) is a [[statistical measure]] for assessing the [[inter-rater reliability|reliability of agreement]] between a fixed number of raters when assigning [[categorical rating]]s to a number of items or classifying items. This contrasts with other kappas such as [[Cohen's kappa]], which only work when assessing the agreement between two raters. The measure calculates the degree of agreement in classification over that which would be expected by chance. There is no generally agreed-upon measure of significance, although guidelines have been given.
 
Fleiss' kappa can be used only with binary or [[Nominal data|nominal-scale]] ratings.  No version is available for ordered-categorical ratings.
 
==Introduction==
 
Fleiss' kappa is a generalisation of [[Scott's pi]] statistic,{{ref|Scott1955}} a [[statistical]] measure of [[inter-rater reliability]].{{ref|Fleiss1971}} It is also related to [[Cohen's kappa]] statistic. Whereas Scott's pi and Cohen's kappa work for only two raters, Fleiss' kappa works for any number of raters giving categorical ratings (see [[nominal data]]) to a fixed number of items. It can be interpreted as expressing the extent to which the observed amount of agreement among raters exceeds what would be expected if all raters made their ratings completely randomly. It is important to note that whereas Cohen's kappa assumes the same two raters have rated a set of items, Fleiss' kappa specifically assumes that although there are a fixed number of raters (e.g., three), different items are rated by different individuals (Fleiss, 1971, p. 378). That is, Item 1 is rated by Raters A, B, and C; but Item 2 could be rated by Raters D, E, and F.
 
Agreement can be thought of as follows: if a fixed number of people assign categorical ratings to a number of items, then the kappa gives a measure of how consistent the ratings are. The kappa, <math>\kappa\,</math>, can be defined as
 
<span style="float: right">(1)</span>
:<math>\kappa = \frac{\bar{P} - \bar{P_e}}{1 - \bar{P_e}}</math>
 
The factor <math>1 - \bar{P_e}</math> gives the degree of agreement that is attainable above chance, and, <math>\bar{P} - \bar{P_e}</math> gives the degree of agreement actually achieved above chance. If the raters are in complete agreement then <math>\kappa = 1~</math>. If there is no agreement among the raters (other than what would be expected by chance) then <math>\kappa \le 0</math>.
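As a quick numerical illustration of equation (1), the following minimal Python sketch (the function name <code>kappa_from_agreement</code> is only illustrative, not part of any standard library) shows how the two boundary cases behave:

<syntaxhighlight lang="python">
def kappa_from_agreement(p_bar, p_e_bar):
    """Equation (1): observed mean agreement corrected for chance agreement."""
    return (p_bar - p_e_bar) / (1 - p_e_bar)

print(kappa_from_agreement(1.0, 0.4))  # complete agreement: kappa = 1.0
print(kappa_from_agreement(0.4, 0.4))  # agreement no better than chance: kappa = 0.0
</syntaxhighlight>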
 
An example of the use of Fleiss' kappa is the following: fourteen psychiatrists are asked to look at ten patients, and each psychiatrist gives one of five possible diagnoses to every patient. Fleiss' kappa can be computed from the resulting [[Matrix (mathematics)|matrix]] of counts (see [[#Worked example|example below]]) to show the degree of agreement among the psychiatrists above the level of agreement expected by chance.
 
==Equations==
 
Let ''N'' be the total number of subjects, let ''n'' be the number of ratings per subject, and let ''k'' be the number of categories into which assignments are made. The subjects are indexed by ''i'' = 1, ... ''N'' and the categories are indexed by ''j'' = 1, ... ''k''. Let ''n''<sub>''ij''</sub> represent the number of raters who assigned the ''i''-th subject to the ''j''-th category.
 
First calculate ''p''<sub>j</sub>, the proportion of all assignments which were to the ''j''-th category:
 
<span style="float: right">(2)</span>
:<math>p_{j} = \frac{1}{N n} \sum_{i=1}^N n_{i j},\quad\quad 1 = \frac{1}{n} \sum_{j=1}^k n_{i j} </math>
 
Now calculate <math>P_{i}\,</math>, the extent to which raters agree for the ''i''-th subject (i.e., compute how many rater&ndash;rater pairs are in agreement, relative to the number of all possible rater&ndash;rater pairs):
 
<span style="float: right">(3)</span>
:<math>P_{i} = \frac{1}{n(n - 1)} \sum_{j=1}^k n_{i j} (n_{i j} - 1)</math>
 
::<math>      = \frac{1}{n(n - 1)} \sum_{j=1}^k (n_{i j}^2 - n_{i j}) </math>
 
::<math>      = \frac{1}{n(n - 1)} [(\sum_{j=1}^k n_{i j}^2) - (n)] </math>
 
Now compute <math>\bar{P}</math>, the mean of the <math>P_i\,</math>'s, and <math>\bar{P_e}</math>, both of which enter the formula for <math>\kappa\,</math>:
 
<span style="float: right">(4)</span>
:<math>\bar{P} = \frac{1}{N} \sum_{i=1}^N P_{i}</math>
 
::<math>      = \frac{1}{N n (n - 1)} (\sum_{i=1}^N \sum_{j=1}^k n_{i j}^2 - N n) </math>
 
<span style="float: right">(5)</span>
:<math>\bar{P_e} = \sum_{j=1}^k p_{j} ^2</math>
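Equations (1)&ndash;(5) translate directly into code. The following Python sketch (the function name and the use of a plain list-of-lists for the count matrix are illustrative choices, not a standard library routine) computes <math>\kappa\,</math> from an ''N''&nbsp;&times;&nbsp;''k'' matrix whose entry ''n''<sub>''ij''</sub> is the number of raters assigning subject ''i'' to category ''j'':

<syntaxhighlight lang="python">
def fleiss_kappa(counts):
    """Fleiss' kappa for an N x k matrix of counts, where counts[i][j] is the
    number of raters who assigned subject i to category j.  Every row is
    assumed to sum to the same number of ratings n."""
    N = len(counts)               # number of subjects
    k = len(counts[0])            # number of categories
    n = sum(counts[0])            # ratings per subject

    # Equation (2): p_j, the proportion of all assignments made to category j
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]

    # Equation (3): P_i, the agreement among raters on subject i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]

    P_bar = sum(P) / N                        # equation (4)
    P_e_bar = sum(pj * pj for pj in p)        # equation (5)

    return (P_bar - P_e_bar) / (1 - P_e_bar)  # equation (1)
</syntaxhighlight>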
 
==Worked example==
<div style="float: right">
{|class="wikitable"
!        || 1  || 2  || 3  || 4  || 5  ||  <math>P_i\,</math>
|-
|'''1''' || 0  || 0  || 0  || 0  || 14  || 1.000
|-
|'''2''' || 0  || 2  || 6  || 4  || 2  || 0.253
|-
|'''3''' || 0  || 0  || 3  || 5  || 6  || 0.308
|-
|'''4''' || 0  || 3  || 9  || 2  || 0  || 0.440
|-
|'''5''' || 2  || 2  || 8  || 1  || 1  || 0.330
|-
|'''6''' || 7  || 7  || 0  || 0  || 0  || 0.462
|-
|'''7''' || 3  || 2  || 6  || 3  || 0  || 0.242
|-
|'''8''' || 2  || 5  || 3  || 2  || 2  || 0.176
|-
|'''9''' || 6  || 5  || 2  || 1  || 0  || 0.286
|-
|'''10''' || 0  || 2  || 2  || 3  || 7  || 0.286
|-
|'''Total''' || 20 ||  28  ||  39  ||  21 ||  32
|-
| '''<math>p_j\,</math>''' || 0.143 ||0.200 ||0.279 || 0.150 || 0.229
|-
|+ '''Table of values for computing the worked example'''
|}
</div>
In the following example, fourteen raters (<math>n</math>) assign ten "subjects" (<math>N</math>) to a total of five categories (<math>k</math>). The categories are presented in the columns, while the subjects are presented in the rows. Each cell is filled with the number of raters who agreed that a certain subject belongs to a certain category.
===Data===
 
See table to the right.
 
<math>N</math> = 10, <math>n</math> = 14, <math>k</math> = 5
 
Sum of all cells = 140<br/>
Sum of <math>P_{i}\,</math> = 3.780
 
===Calculations===
 
For example, taking the first column,
 
:<math>p_1 = \frac{ 0+0+0+0+2+7+3+2+6+0 }{140} = 0.143</math>
 
And taking the second row,
 
:<math>P_2 = \frac{1}{14(14 - 1)} \left(0^2 + 2^2 + 6^2 + 4^2 + 2^2 - 14\right) = 0.253</math>
 
In order to calculate <math>\bar{P}</math>, we need to know the sum of <math>P_i</math>,
 
:<math>\sum_{i=1}^N P_{i}= 1.000 + 0.253 + \cdots + 0.286 + 0.286 = 3.780</math>
 
Over the whole sheet,
 
:<math>\bar{P} = \frac{1}{10} (3.780) = 0.378</math>
 
:<math>\bar{P}_{e} = 0.143^2 + 0.200^2 + 0.279^2 + 0.150^2 + 0.229^2 = 0.213</math>
 
:<math>\kappa = \frac{0.378 - 0.213}{1 - 0.213} = 0.210</math>
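Using the sketch given after the equations above, the same result can be reproduced directly from the table of values (the rows below are the counts from that table):

<syntaxhighlight lang="python">
counts = [
    [0, 0, 0, 0, 14],
    [0, 2, 6, 4, 2],
    [0, 0, 3, 5, 6],
    [0, 3, 9, 2, 0],
    [2, 2, 8, 1, 1],
    [7, 7, 0, 0, 0],
    [3, 2, 6, 3, 0],
    [2, 5, 3, 2, 2],
    [6, 5, 2, 1, 0],
    [0, 2, 2, 3, 7],
]

print(round(fleiss_kappa(counts), 3))  # 0.21, matching the hand calculation above (to rounding)
</syntaxhighlight>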
 
==Interpretation==
 
Landis and Koch (1977) gave the following table for interpreting <math>\kappa</math> values.{{ref|Landis1977}} This table is, however, ''by no means'' universally accepted. Landis and Koch supplied no evidence to support it, basing it instead on personal opinion. It has been noted that these guidelines may be more harmful than helpful,{{ref|Gwet2010}} as the number of categories and subjects will affect the magnitude of the value. The kappa will be higher when there are fewer categories.{{ref|Sim2005}}
 
{|class=wikitable
! <math>\kappa</math> !! Interpretation
|-
|align=center| < 0              || Poor agreement
|-
|align=center| 0.01 &ndash; 0.20  || Slight agreement
|-
|align=center| 0.21 &ndash; 0.40 || Fair agreement
|-
|align=center| 0.41 &ndash; 0.60 || Moderate agreement
|-
|align=center| 0.61 &ndash; 0.80 || Substantial agreement
|-
|align=center| 0.81 &ndash; 1.00 || Almost perfect agreement
|-
|}
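For illustration only, the bands in the table can be applied programmatically. The sketch below simply encodes Landis and Koch's labels and inherits all of the caveats noted above; the function name is hypothetical:

<syntaxhighlight lang="python">
def landis_koch_label(kappa):
    """Map a kappa value to the (contested) Landis and Koch interpretation."""
    bands = [
        (0.00, "Poor agreement"),
        (0.20, "Slight agreement"),
        (0.40, "Fair agreement"),
        (0.60, "Moderate agreement"),
        (0.80, "Substantial agreement"),
        (1.00, "Almost perfect agreement"),
    ]
    for upper, label in bands:
        if kappa <= upper:
            return label
    return "Almost perfect agreement"

print(landis_koch_label(0.210))  # "Fair agreement" for the worked example above
</syntaxhighlight>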
 
==See also==
{{Wikibooks|Algorithm implementation|Statistics/Fleiss' kappa|Fleiss' kappa}}
* [[Cohen's kappa]]
* [[Pearson product-moment correlation coefficient]]
 
==Notes==
 
# {{note|Fleiss1971}} Fleiss, J. L. (1971) pp. 378&ndash;382
# {{note|Scott1955}} Scott, W. (1955) pp. 321&ndash;325
# {{note|Landis1977}} Landis, J. R. and Koch, G. G. (1977) pp. 159&ndash;174
# {{note|Gwet2010}} [http://www.agreestat.com/book_excerpts.html Gwet, K. L. (2010, chapter 6)]
# {{note|Sim2005}} Sim, J. and Wright, C. C. (2005) pp. 257&ndash;268
 
==References==
 
* Fleiss, J. L. (1971) "Measuring nominal scale agreement among many raters." ''Psychological Bulletin'', Vol. 76, No. 5 pp. 378&ndash;382
* Gwet, K. (2001) ''Statistical Tables for Inter-Rater Agreement''. (Gaithersburg : StatAxis Publishing)
* Gwet, K. L. (2010) ''Handbook of Inter-Rater Reliability'' (2nd Edition). (Gaithersburg : Advanced Analytics, LLC) ISBN 978-0-9708062-2-2
* Landis, J. R. and Koch, G. G. (1977) "The measurement of observer agreement for categorical data" in ''Biometrics''. Vol. 33, pp. 159&ndash;174
* Scott, W. (1955). "Reliability of content analysis: The case of nominal scale coding." ''Public Opinion Quarterly'', Vol. 19, No. 3, pp. 321&ndash;325.
* Sim, J. and Wright, C. C. (2005) "The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements" in ''Physical Therapy''. Vol. 85, No. 3, pp. 257&ndash;268
 
==Further reading==
 
* Fleiss, J. L. and Cohen, J. (1973) "The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability" in ''Educational and Psychological Measurement'', Vol. 33 pp. 613&ndash;619
* Fleiss, J. L. (1981) ''Statistical methods for rates and proportions''. 2nd ed. (New York: John Wiley) pp. 38&ndash;46
* Gwet, K. L. (2008) "[http://www.agreestat.com/research_papers/bjmsp2008_interrater.pdf Computing inter-rater reliability and its variance in the presence of high agreement]", ''British Journal of Mathematical and Statistical Psychology'', Vol. 61, pp. 29&ndash;48
 
==External links==
* [http://dl.dropbox.com/u/27743223/201209-eacl2012-Kappa.pdf The Problem with Kappa]
* [http://www.john-uebersax.com/stat/kappa.htm Kappa: Pros and Cons] contains a good bibliography of articles about the coefficient.
* [http://justus.randolph.name/kappa Online Kappa Calculator] calculates a variation of Fleiss' kappa.
* [https://mlnl.net/jg/software/ira/ Online inter-rater agreement calculator] includes Fleiss' kappa.
 
[[Category:Categorical data]]
[[Category:Inter-rater reliability]]
 
{{good article}}
 
[[de:Cohens Kappa#Fleiss' Kappa]]
