In [[statistics]], '''Cronbach's <math>\alpha</math> (alpha)'''<ref name=Cronbach>{{cite journal |author=Cronbach LJ |year=1951 |title=Coefficient alpha and the internal structure of tests |journal=Psychometrika |volume=16 |issue=3 |pages=297–334}}</ref> is a coefficient of [[internal consistency]]. It is commonly used as an estimate of the [[Reliability (psychometrics)|reliability]] of a [[psychometric testing|psychometric test]] for a sample of examinees. It was first named alpha by [[Lee Cronbach]] in 1951, as he had intended to continue with further coefficients. The measure can be viewed as an extension of the [[Kuder–Richardson Formula 20]] (KR-20), which is an equivalent measure for [[Dichotomy|dichotomous]] items. Alpha is not [[robust statistics|robust]] against [[missing data]]. Several other Greek letters have been used by later researchers to designate other measures used in a similar context.<ref>{{cite journal |author=Revelle W, Zinbarg R |year=2009 |title=Coefficients Alpha, Beta, Omega, and the glb: Comments on Sijtsma |journal=Psychometrika |volume=74 |issue=1 |pages=145–154 |doi=10.1007/s11336-008-9102-z}}</ref> Somewhat related is the [[average variance extracted]] (AVE).
This article discusses the use of <math>\alpha</math> in psychology, but Cronbach's alpha statistic is widely used in the [[social sciences]], business, nursing, and other disciplines. The term ''item'' is used throughout this article, but items could be anything — questions, raters, indicators — of which one might ask to what extent they "measure the same thing." Items that are manipulated are commonly referred to as ''variables''.
==Definition==
Suppose that we measure a quantity which is a sum of <math>K</math> components (''K-items'' or ''[[testlet]]s''):
<math>X = Y_1 + Y_2 + \cdots + Y_K</math>. Cronbach's <math>\alpha</math> is defined as
:<math>
\alpha = {K \over K-1 } \left(1 - {\sum_{i=1}^K \sigma^2_{Y_i}\over \sigma^2_X}\right)
</math>
where <math>\sigma^2_X</math> is the [[variance]] of the observed total test scores, and <math>\sigma^2_{Y_i}</math> is the variance of component ''i'' for the current sample of persons.<ref>{{cite book |author=DeVellis RF |title=Scale Development |year=1991 |publisher=Sage Publications |pages=24–33}}</ref>
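As an illustration of the formula above, Cronbach's <math>\alpha</math> can be computed directly from a persons-by-items matrix of scores. The following is a minimal sketch in Python with [[NumPy]]; the function name and the example data are illustrative assumptions, not part of any standard library:
<syntaxhighlight lang="python">
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from a persons-by-items matrix of scores.

    Rows are examinees, columns are the K items (components Y_i).
    Implements alpha = K/(K-1) * (1 - sum of item variances / variance of total scores).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                                # number of items K
    item_variances = scores.var(axis=0, ddof=1)        # sigma^2_{Y_i} for each item
    total_variance = scores.sum(axis=1).var(ddof=1)    # sigma^2_X of the summed test score
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

# Illustrative data: five examinees answering three items
data = [[3, 4, 3],
        [5, 4, 4],
        [1, 2, 2],
        [4, 5, 4],
        [2, 3, 3]]
print(cronbach_alpha(data))   # roughly 0.92 for this toy data
</syntaxhighlight>
Sample (<math>n-1</math>) variances are used throughout the sketch; because the variances enter the formula only through their ratio, population (<math>n</math>) variances give the same value provided the choice is consistent.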
If the items are scored 0 and 1, a shortcut formula is<ref name=Cronbach2>{{cite book |author=Cronbach LJ |title=Essentials of Psychological Testing |year=1970 |pages=161 |publisher=Harper & Row }}</ref>
:<math>
\alpha = {K \over K-1 } \left(1 - {\sum_{i=1}^K P_{i}Q_{i}\over \sigma^2_X}\right)
</math>
where <math>P_i</math> is the proportion scoring 1 on item ''i'', and <math>Q_i = 1 - P_i</math>. This is the same as [[Kuder–Richardson_Formula_20|KR-20]].
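Under the same illustrative assumptions, the dichotomous shortcut can be sketched as follows; here the total-score variance is computed with <math>n</math> in the denominator so that it matches <math>P_iQ_i</math>, the population variance of a 0/1 item:
<syntaxhighlight lang="python">
import numpy as np

def kr20(scores):
    """KR-20 shortcut for items scored 0 or 1 (rows = examinees, columns = items)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    p = scores.mean(axis=0)                          # P_i: proportion scoring 1 on item i
    q = 1.0 - p                                      # Q_i
    total_variance = scores.sum(axis=1).var(ddof=0)  # sigma^2_X with n in the denominator
    return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_variance)
</syntaxhighlight>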
Alternatively, Cronbach's <math>\alpha</math> can be defined as
:<math>\alpha = {K \bar c \over (\bar v + (K-1) \bar c)}</math>
where <math>K</math> is as above, <math>\bar v</math> is the average variance of each component (item), and <math>\bar c</math> is the average of all [[covariance]]s between the components across the current sample of persons (that is, without including the variances of each component).
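A brief sketch of this covariance-based form, again an illustration rather than a reference implementation:
<syntaxhighlight lang="python">
import numpy as np

def cronbach_alpha_from_cov(scores):
    """Alpha from the average item variance and the average inter-item covariance."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    cov = np.cov(scores, rowvar=False)           # K x K covariance matrix of the items
    v_bar = np.diag(cov).mean()                  # average item variance
    c_bar = cov[np.triu_indices(k, 1)].mean()    # average off-diagonal covariance
    return (k * c_bar) / (v_bar + (k - 1) * c_bar)
</syntaxhighlight>
With the default <math>n-1</math> normalization of <code>numpy.cov</code>, this returns the same value as the variance-based sketch above.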
The ''standardized Cronbach's alpha'' can be defined as
:<math>\alpha_\text{standardized} = {K\bar r \over (1 + (K-1)\bar r)}</math>
where <math>K</math> is as above and <math>\bar r</math> is the mean of the <math>K(K-1)/2</math> non-redundant [[Correlation_and_dependence#Pearson.27s_product-moment_coefficient|correlation coefficient]]s (i.e., the mean of the [[triangular matrix|upper triangular]], or lower triangular, part of the correlation matrix).
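A corresponding sketch for the standardized form, under the same illustrative assumptions:
<syntaxhighlight lang="python">
import numpy as np

def standardized_alpha(scores):
    """Standardized alpha from the mean of the K(K-1)/2 distinct inter-item correlations."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    corr = np.corrcoef(scores, rowvar=False)      # K x K correlation matrix of the items
    r_bar = corr[np.triu_indices(k, 1)].mean()    # mean of the upper-triangular correlations
    return (k * r_bar) / (1.0 + (k - 1) * r_bar)
</syntaxhighlight>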
Cronbach's <math>\alpha</math> is related conceptually to the [[Spearman–Brown prediction formula]]. Both arise from the basic [[classical test theory]] result that the reliability of test scores can be expressed as the ratio of the true-score and total-score (error plus true score) variances:
:<math>\rho_{XX}= { \sigma^2_T \over \sigma_X^2 }</math>
The theoretical value of alpha varies from zero to 1, since it is the ratio of two variances. However, depending on the estimation procedure used, estimates of alpha can take on any value less than or equal to 1, including negative values, although only positive values make sense.<ref>Ritter, N. (2010). "Understanding a widely misunderstood statistic: Cronbach's alpha". Paper presented at ''Southwestern Educational Research Association (SERA) Conference'' 2010: New Orleans, LA (ED526237).</ref> Higher values of alpha are more desirable. Some professionals,<ref>{{cite book |author=Nunnally JC |title=Psychometric Theory, 2nd ed. |year=1978 |publisher=McGraw-Hill |location=New York}}</ref> as a [[rule of thumb]], require a reliability of 0.70 or higher (obtained on a substantial sample) before they will use an instrument. Obviously, this rule should be applied with caution when <math>\alpha</math> has been computed from items that systematically violate its assumptions.{{specify|date=July 2010}} Furthermore, the appropriate degree of reliability depends upon the use of the instrument. For example, an instrument designed to be used as part of a battery of tests may be intentionally designed to be as short as possible, and therefore somewhat less reliable. Other situations may require extremely precise measures with very high reliabilities. In the extreme case of a two-item test, the [[Spearman–Brown prediction formula]] is more appropriate than Cronbach's alpha.<ref>{{cite journal|first1=R.|last1=Eisinga|first2=M.|last2=Te Grotenhuis|first3=B.|last3=Pelzer|title=The reliability of a two-item scale: Pearson, Cronbach or Spearman-Brown? |journal= International Journal of Public Health|year=2013|volume=58|issue=4|pages=637–642|doi= 10.1007/s00038-012-0416-3}}</ref>
In practice, reported test reliabilities vary widely. In the case of psychometric tests, most fall within the range of 0.75 to 0.83, with at least one claiming a Cronbach's alpha above 0.90 (Nunnally 1978, pp. 245–246).
==Internal consistency==
{{main|Internal consistency}}
Cronbach's alpha will generally increase as the intercorrelations among test items increase, and is thus known as an [[internal consistency]] estimate of reliability of test scores. Because intercorrelations among test items are maximized when all items measure the same [[Construct (philosophy of science)|construct]], Cronbach's alpha is widely believed to indirectly indicate the degree to which a set of items measures a single unidimensional latent construct. However, the average intercorrelation among test items is affected by skew just like any other average. Thus, whereas the modal intercorrelation among test items will equal zero when the set of items measures several unrelated latent constructs, the average intercorrelation among test items will be greater than zero in this case. Indeed, several investigators have shown that alpha can take on quite high values even when the set of items measures several unrelated latent constructs.<ref name=Cortina>Cortina, J.M. (1993). "What is coefficient alpha? An examination of theory and applications". ''Journal of Applied Psychology'' 78, 98–104.</ref><ref name=Cronbach/><ref>{{cite journal |author=Green SB, Lissitz RW, Mulaik SA |year=1977 |title=Limitations of coefficient alpha as an index of test unidimensionality |journal=Educational and Psychological Measurement |volume=37 |pages=827–838}}</ref><ref>{{cite journal |author=Revelle W |year=1979 |title=Hierarchical cluster analysis and the internal structure of tests |journal=Multivariate Behavioral Research |volume=14 |pages=57–74 |doi=10.1207/s15327906mbr1401_4}}</ref><ref>{{cite journal |author=Schmitt N |year=1996 |title=Uses and abuses of coefficient alpha |journal=Psychological Assessment |volume=8 |pages=350–353}}</ref><ref>{{cite journal |author=Zinbarg R, Yovel I, Revelle W, McDonald R |year=2006 |title=Estimating generalizability to a universe of indicators that all have an attribute in common: A comparison of estimators for alpha |journal=Applied Psychological Measurement |volume=30 |pages=121–144}}</ref> As a result, alpha is most appropriately used when the items measure different substantive areas within a single construct. When the set of items measures more than one construct, coefficient omega<sub>hierarchical</sub> is more appropriate.<ref name=McDonald>{{cite book |author=McDonald RP |title=Test Theory: A Unified Treatment |year=1999 |pages=90–103 |publisher=Erlbaum |ISBN=0805830758}}</ref><ref name="Zinbarg&Revelle">{{cite journal |author=Zinbarg R, Revelle W, Yovel I, Li W |year=2005 |title=Cronbach's α, Revelle's β, and McDonald's ω<sub>H</sub>: Their relations with each other and two alternative conceptualizations of reliability |journal=Psychometrika |volume=70 |pages=123–133 |doi=10.1007/s11336-003-0974-7}}</ref>
Alpha treats any covariance among items as ''true-score'' variance, even if items covary for spurious reasons. For example, alpha can be artificially inflated by making scales which consist of superficial changes to the wording within a set of items or by analyzing speeded tests.
A commonly accepted{{Citation needed|reason=There have been several papers/commentaries about the limited uses of Cronbach's alpha in practice. The fact that alpha can often be inflated, or at other times at best an underestimate of reliability should be taken into account: if the inflation of alpha is a well-known fact, how can the table below be considered as commonly accepted?|date=September 2013}} rule of thumb for describing internal consistency using Cronbach's alpha is as follows;<ref>George, D., & Mallery, P. (2003). ''SPSS for Windows step by step: A simple guide and reference. 11.0 update'' (4th ed.). Boston: Allyn & Bacon.</ref><ref>Kline, P. (2000). ''The handbook of psychological testing'' (2nd ed.). London: Routledge, page 13.</ref> however, a greater number of items in the test can artificially inflate the value of alpha,<ref name=Cortina/> and a sample with a narrow range can deflate it, so this rule of thumb should be used with caution:
{| class="wikitable"
|-
! Cronbach's alpha !! Internal consistency
|-
| α ≥ 0.9 || Excellent (High-Stakes testing)
|-
| 0.7 ≤ α < 0.9 || Good (Low-Stakes testing)
|-
| 0.6 ≤ α < 0.7 || Acceptable
|-
| 0.5 ≤ α < 0.6 || Poor
|-
| α < 0.5 || Unacceptable
|}
==Generalizability theory==
Cronbach and others generalized some basic assumptions of classical test theory in their [[generalizability theory]]. If this theory is applied to test construction, then it is assumed that the items that constitute the test are a random sample from a larger universe of items. The expected score of a person in the universe is called the universe score, analogous to a true score. The generalizability is defined analogously as the variance of the universe scores divided by the variance of the observable scores, analogous to the concept of [[Reliability (statistics)|reliability]] in [[classical test theory]]. In this theory, Cronbach's alpha is an unbiased estimate of the generalizability. For this to be true, the assumptions of essential <math>\tau</math>-equivalence or parallelism are not needed. Consequently, Cronbach's alpha can be viewed as a measure of how well the sum score on the selected items captures the expected score in the entire domain, even if that domain is heterogeneous.
==Intra-class correlation==
Cronbach's alpha is said to be equal to the stepped-up consistency version of the [[intra-class correlation coefficient]], which is commonly used in observational studies. But this is only conditionally true. In terms of variance components, this condition holds, for item sampling, if and only if the item (rater, in the case of rating) variance component equals zero. If this variance component is negative, alpha will underestimate the stepped-up [[intra-class correlation coefficient]]; if this variance component is positive, alpha will overestimate this stepped-up [[intra-class correlation coefficient]].
==Factor analysis==
Cronbach's alpha also has a theoretical relation with [[factor analysis]]. As shown by Zinbarg, Revelle, Yovel and Li,<ref name="Zinbarg&Revelle"/> alpha may be expressed as a function of the parameters of the hierarchical factor analysis model, which allows for a general factor that is common to all of the items of a measure in addition to group factors that are common to some but not all of the items of a measure. From this perspective, alpha is quite complexly determined: it is sensitive not only to general factor saturation in a scale but also to group factor saturation and even to variance in the scale scores arising from variability in the factor loadings. Coefficient omega<sub>hierarchical</sub><ref name=McDonald/><ref name="Zinbarg&Revelle"/> has a much more straightforward interpretation as the proportion of observed variance in the scale scores that is due to the general factor common to all of the items comprising the scale.
==Notes==
{{Reflist}}
==Further reading==
* Allen, M.J., & Yen, W. M. (2002). ''Introduction to Measurement Theory.'' Long Grove, IL: Waveland Press.
* Bland J.M., Altman D.G. (1997). "[http://www.bmj.com/cgi/content/full/314/7080/572 Statistics notes: Cronbach's alpha]". ''BMJ'' 314:572.
* Cronbach, Lee J., and Richard J. Shavelson. (2004). "My Current Thoughts on Coefficient Alpha and Successor Procedures". ''Educational and Psychological Measurement'' 64, no. 3 (June 1): 391–418. {{doi|10.1177/0013164404266386}}.
{{DEFAULTSORT:Cronbach's Alpha}}
[[Category:Comparison of assessments]]
[[Category:Psychometrics]]
[[Category:Educational psychology research methods]]