{{redirect|Standardize|industrial and technical standards|Standardization}}
{{about||Fisher z-transformation in statistics|Fisher transformation|Z-values in ecology|Z-value|z-transformation to complex number domain|Z-transform|Z-factor in high-throughput screening|Z-factor|Z-score financial analysis tool|Altman Z-score}}
[[Image:Normal distribution and scales.gif|thumb|350px|right|Comparison of various grading methods in a normal distribution, including standard deviations, cumulative percentages, percentile equivalents, z-scores, T-scores, standard nine, and percent in [[stanine]].]]

In [[statistics]], the '''standard score''' is the (signed) number of [[standard deviation]]s an observation or [[data|datum]] is ''above'' the [[mean]]. Thus, a positive standard score represents a datum above the mean, while a negative standard score represents a datum below the mean. It is a [[dimensionless number|dimensionless quantity]] obtained by subtracting the [[population mean]] from an individual [[raw score]] and then dividing the difference by the [[statistical population|population]] [[standard deviation]]. This conversion process is called '''standardizing''' or '''normalizing''' (however, "normalizing" can refer to many types of ratios; see [[normalization (statistics)]] for more).

Standard scores are also called '''z-values''', '''''z''-scores''', '''normal scores''', and '''standardized variables'''; the use of "Z" stems from the fact that the [[normal distribution]] is also known as the "Z distribution". Standard scores are most frequently used to compare a sample to a [[standard normal deviate]] (standard normal distribution, with ''μ''&nbsp;=&nbsp;0 and ''σ''&nbsp;=&nbsp;1), though they can be defined without assumptions of normality.

The z-score is ''only'' defined if one knows the population parameters; if one has only a sample, then the analogous computation with the sample mean and sample standard deviation yields the [[Student's t-statistic]].

The standard score is not the same as the [[z-factor]] used in the analysis of [[high-throughput screening]] data, though the two are often conflated.
== Calculation from raw score ==

The standard score of a raw score ''x''<ref>Kreyszig 1979, p. 880, eq. (5)</ref> is

:<math>z = {x- \mu \over \sigma}</math>

where:
: ''μ'' is the [[mean]] of the population;
: ''σ'' is the [[standard deviation]] of the population.

The absolute value of ''z'' represents the distance between the raw score and the population mean in units of the standard deviation; ''z'' is negative when the raw score is below the mean and positive when it is above.
A key point is that calculating ''z'' requires the population mean and the population standard deviation, not the sample mean or sample standard deviation: it requires knowing the population parameters, not the statistics of a sample drawn from the population of interest. Knowing the true standard deviation of a population is often unrealistic except in cases such as [[standardized testing]], where the entire population is measured; where it is impossible to measure every member of a population, the standard deviation may instead be estimated from a random sample.

The z-score thus measures the distance of an actual datum from the population mean in units of ''σ'', and so provides an assessment of how far off-target a process is operating.
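As a concrete illustration of the formula above, the following sketch standardizes raw scores against known population parameters. The values of ''μ'' and ''σ'' are hypothetical, chosen only for the example:

```python
# Hypothetical population parameters (illustration only, not from the article).
mu = 100.0      # population mean
sigma = 15.0    # population standard deviation

def z_score(x, mu, sigma):
    """Signed number of standard deviations by which x lies above the mean."""
    return (x - mu) / sigma

print(z_score(130.0, mu, sigma))  # above the mean -> positive z (2.0)
print(z_score(85.0, mu, sigma))   # below the mean -> negative z (-1.0)
```

Note that `mu` and `sigma` here are assumed to be the true population parameters; substituting sample statistics would yield a t-statistic rather than a z-score.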
== Applications ==

The z-score is often used in the [[z-test]] in [[standardized testing]] – the analog of the [[Student's t-test]] for a population whose parameters are known rather than estimated. As it is very unusual to know the parameters of an entire population, the t-test is much more widely used.
{{anchor|prediction intervals}}
The standard score can also be used in the calculation of [[prediction interval]]s. A prediction interval [''L'',&nbsp;''U''], consisting of a lower endpoint ''L'' and an upper endpoint ''U'', is an interval such that a future observation ''X'' will lie in it with high probability <math>\gamma</math>, i.e.

:<math>P(L<X<U) =\gamma,</math>

for example <math>\gamma= 0.95</math> (95%). In terms of the standard score ''Z'' of ''X'', this gives<ref>Kreyszig 1979, p. 880, eq. (6)</ref>

:<math>P\left( \frac{L-\mu}{\sigma} < Z < \frac{U-\mu}{\sigma} \right) = \gamma.</math>

Determining the quantile ''z'' such that

:<math>P\left( -z < Z < z \right) = \gamma</math>

then yields

:<math>L=\mu-z\sigma,\ U=\mu+z\sigma.</math>
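The derivation above can be sketched in code. This is a minimal example using only the Python standard library, assuming ''X'' is normally distributed; the quantile ''z'' with P(−z < Z < z) = γ is the (1+γ)/2 quantile of the standard normal, and the population parameters are hypothetical:

```python
from statistics import NormalDist  # standard library, Python 3.8+

def prediction_interval(mu, sigma, gamma=0.95):
    """Two-sided interval [L, U] with P(L < X < U) = gamma for
    X ~ Normal(mu, sigma), via L = mu - z*sigma and U = mu + z*sigma."""
    # Quantile z such that P(-z < Z < z) = gamma for standard normal Z.
    z = NormalDist().inv_cdf((1.0 + gamma) / 2.0)
    return mu - z * sigma, mu + z * sigma

# Illustrative parameters (not from the article):
L, U = prediction_interval(mu=100.0, sigma=15.0, gamma=0.95)
print(round(L, 2), round(U, 2))  # roughly 70.6 and 129.4 (z ~ 1.96)
```

For γ = 0.95 the quantile is the familiar z ≈ 1.96, so the interval spans about ±1.96 standard deviations around the mean.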
== Standardizing in mathematical statistics ==

{{further2|[[Normalization (statistics)]]}}

In [[mathematical statistics]], a [[random variable]] ''X'' is '''standardized''' by subtracting its [[expected value]] <math>\operatorname{E}[X]</math> and dividing the difference by its [[standard deviation]] <math>\sigma(X) = \sqrt{\operatorname{Var}(X)}</math>:

:<math>Z = {X - \operatorname{E}[X] \over \sigma(X)}.</math>

If the random variable under consideration is the [[sample mean]] of a random sample <math>X_1,\dots, X_n</math> of ''X'',

:<math>\bar{X}={1 \over n} \sum_{i=1}^n X_i,</math>

then the standardized version is

:<math>Z = \frac{\bar{X}-\operatorname{E}[X]}{\sigma(X)/\sqrt{n}}.</math>

See [[normalization (statistics)]] for other forms of normalization.
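The standardized sample mean above can be checked empirically. This sketch, with illustrative population parameters of my own choosing, draws many samples and verifies that the standardized mean Z has mean ≈ 0 and variance ≈ 1:

```python
import random

random.seed(0)

# Illustrative population parameters (chosen for this sketch only).
mu, sigma, n = 50.0, 10.0, 25

def standardized_sample_mean(sample, mu, sigma):
    """Z = (x-bar - E[X]) / (sigma(X)/sqrt(n)): the sample mean measured in
    units of its own standard deviation sigma/sqrt(n)."""
    xbar = sum(sample) / len(sample)
    return (xbar - mu) / (sigma / len(sample) ** 0.5)

# Standardize the means of many independent samples.
zs = [standardized_sample_mean([random.gauss(mu, sigma) for _ in range(n)],
                               mu, sigma)
      for _ in range(2000)]
print(sum(zs) / len(zs))            # close to 0
print(sum(z * z for z in zs) / len(zs))  # close to 1
```

Dividing by σ/√n rather than σ is what makes the variance of Z equal to 1 regardless of the sample size n.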
== See also ==

* [[Standard normal deviate]]
* [[Z-test]]

== References ==

* Kreyszig, E. (1979). ''Applied Mathematics'' (4th ed.). Wiley Press.

{{reflist}}
== Further reading ==

* {{Cite book |last=Carroll |first=Susan Rovezzi |last2=Carroll |first2=David J. |title=Statistics Made Simple for School Leaders |url=http://books.google.com/?id=gccHkMDikb0C |accessdate=7 June 2009 |edition=illustrated |year=2002 |publisher=Rowman & Littlefield |isbn=978-0-8108-4322-6}}
* Richard J. Larsen and Morris L. Marx (2000). ''An Introduction to Mathematical Statistics and Its Applications'' (3rd ed.). ISBN 0-13-922303-7. p. 282.

== External links ==

* [http://staff.argyll.epsb.ca/jreed/math30p/statistics/standardCurve.htm Interactive Flash on the z-scores and the probabilities of the normal curve] by Jim Reed

{{Statistics}}

[[Category:Statistical terminology]]
[[Category:Statistical ratios]]
Revision as of 18:33, 4 January 2014