{{Regression bar}}
{{merge from|Deviation (statistics)|date=February 2013}}


In [[statistics]] and [[mathematical optimization|optimization]], '''statistical errors''' and '''residuals''' are two closely related and easily confused measures of the [[deviation (statistics)|deviation]] of an observed value of an element of a [[Sample (statistics)|statistical sample]] from its "theoretical value". The '''error''' (or '''disturbance''') of an observed value is the deviation of the observed value from the (unobservable) ''true'' function value, while the '''residual''' of an observed value is the difference between the observed value and the ''estimated'' function value.


The distinction is most important in [[regression analysis]], where it leads to the concept of [[studentized residual]]s.


==Introduction==
Suppose there is a series of observations from a [[univariate distribution]] and we want to estimate the [[mean]] of that distribution (the so-called [[location model (statistics)|location model]]). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.


A '''statistical error''' (or '''disturbance''') is the amount by which an observation differs from its [[expected value]], the latter being based on the whole [[statistical population|population]] from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the [[arithmetic mean|mean]] of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.


A '''residual''' (or fitting error), on the other hand, is an observable ''estimate'' of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of ''n'' people. The ''[[sample mean]]'' could serve as a good estimator of the ''population'' mean. Then we have:
* The difference between the height of each man in the sample and the unobservable ''population'' mean is a ''statistical error'', whereas
* The difference between the height of each man in the sample and the observable ''sample'' mean is a ''residual''.
 
 
Note that the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily ''not [[statistical independence|independent]]''. The statistical errors on the other hand are independent, and their sum within the random sample is [[almost surely]] not zero.
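
Both points can be seen numerically. The following is a minimal simulation sketch (using NumPy; the population parameters 1.75 and 0.07 are hypothetical values chosen to match the height example):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.75, 0.07, 10           # hypothetical population mean/sd of heights (meters)

sample = rng.normal(mu, sigma, size=n)  # n randomly chosen men
errors = sample - mu                    # statistical errors: deviations from the population mean
residuals = sample - sample.mean()      # residuals: deviations from the sample mean

print(errors.sum())     # almost surely nonzero
print(residuals.sum())  # zero, up to floating-point rounding
</syntaxhighlight>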
 
 
One can standardize statistical errors (especially of a [[normal distribution]]) in a [[z-score]] (or "standard score"), and standardize residuals in a [[t-statistic]], or more generally [[studentized residuals]].
 
 
===Example===
 
 
If we assume a [[normal distribution|normally distributed]] population with mean μ and [[standard deviation]] σ, and choose individuals independently, then we have
 
:<math>X_1, \dots, X_n\sim N(\mu,\sigma^2)\,</math>
 
and the [[arithmetic mean|sample mean]]
 
:<math>\overline{X}={X_1 + \cdots + X_n \over n}</math>
 
is a random variable distributed thus:
 
:<math>\overline{X}\sim N(\mu, \sigma^2/n).</math>
 
The ''statistical errors'' are then
 
:<math>\varepsilon_i=X_i-\mu,\,</math>
 
whereas the ''residuals'' are
 
:<math>\widehat{\varepsilon}_i=X_i-\overline{X}.</math>
 
(As is often done, the "[[Caret|hat]]" over the letter ε indicates an observable ''estimate'' of an unobservable quantity called ε.)
 
The sum of squares of the '''statistical errors''', divided by σ<sup>2</sup>, has a [[chi-squared distribution]] with ''n'' [[Degrees of freedom (statistics)|degrees of freedom]]:
 
:<math>\sum_{i=1}^n \left(X_i-\mu\right)^2/\sigma^2\sim\chi^2_n.</math>
 
This quantity, however, is not observable.  The sum of squares of the '''residuals''', on the other hand, is observable. The quotient of that sum by σ<sup>2</sup> has a chi-squared distribution with only ''n''&nbsp;&minus;&nbsp;1 degrees of freedom:
 
:<math>\sum_{i=1}^n \left(\,X_i-\overline{X}\,\right)^2/\sigma^2\sim\chi^2_{n-1}.</math>
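
These two distributions can be checked by simulation (a sketch, not part of the argument; recall that a chi-squared variable with ''k'' degrees of freedom has mean ''k''):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 1.0, 5, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
ss_errors = ((samples - mu) ** 2).sum(axis=1) / sigma**2
ss_residuals = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma**2

print(ss_errors.mean())     # about n     = 5
print(ss_residuals.mean())  # about n - 1 = 4
</syntaxhighlight>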
 
It is remarkable that the [[Squared deviations|sum of squares of the residuals]] and the sample mean can be shown to be independent of each other, using, e.g., [[Basu's theorem]]. That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the quotient
 
:<math>{\overline{X}_n - \mu \over S_n/\sqrt{n}},</math>

where <math>S_n^2=\frac{1}{n-1}\sum_{i=1}^n \left(X_i-\overline{X}_n\right)^2</math> is the sample variance.
 
The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation ''σ'', but ''σ'' appears in both the numerator and the denominator and cancels.  That is fortunate because it means that even though we do not know&nbsp;''σ'', we know the probability distribution of this quotient: it has a [[Student's t-distribution]] with ''n''&nbsp;&minus;&nbsp;1 degrees of freedom.  We can therefore use this quotient to find a [[confidence interval]] for&nbsp;''μ''.
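
For instance, a 95% confidence interval for ''μ'' might be computed as follows (a sketch using SciPy's Student's ''t'' quantile function; the six sample heights are invented for illustration):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

sample = np.array([1.68, 1.72, 1.75, 1.79, 1.81, 1.84])  # invented heights (meters)
n = sample.size
xbar = sample.mean()
s = sample.std(ddof=1)                 # sample standard deviation S_n (ddof=1 gives n - 1)

t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value, n - 1 degrees of freedom
half_width = t_crit * s / np.sqrt(n)
print(xbar - half_width, xbar + half_width)
</syntaxhighlight>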
 
==Regressions==
 
In [[regression analysis]], the distinction between ''errors'' and ''residuals'' is subtle and important, and leads to the concept of [[studentized residual]]s. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the ''fitted'' function are the residuals.
 
However, a terminological difference arises in the expression [[mean squared error]] (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed ''residuals'', and not of the unobservable ''errors''. If that sum of squares is divided by ''n'', the number of observations, the result is the mean of the squared residuals. Since this is a [[bias (statistics)|biased]]  estimate of the variance of the unobserved errors, the bias is removed by multiplying the mean of the squared residuals by ''n''&nbsp;/&nbsp;''df'' where ''df'' is the number of [[degrees of freedom (statistics)|degrees of freedom]] (''n'' minus the number of parameters being estimated). This latter formula serves as an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error.<ref>{{cite book |last1=Steel |first1=Robert G. D. |last2=Torrie |first2=James H. |title=Principles and Procedures of Statistics, with Special Reference to Biological Sciences |year=1960 |publisher=McGraw-Hill |page=288}}</ref>  
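
To make the correction concrete: for a fitted straight line there are two estimated parameters, so ''df''&nbsp;=&nbsp;''n''&nbsp;&minus;&nbsp;2. A minimal sketch with invented data:

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.3, 1.9, 3.2, 3.8, 5.3])    # invented observations

slope, intercept = np.polyfit(x, y, deg=1)      # least-squares line: 2 estimated parameters
residuals = y - (slope * x + intercept)

n, p = len(y), 2
mean_sq_residuals = (residuals ** 2).sum() / n  # biased estimate of the error variance
mse = (residuals ** 2).sum() / (n - p)          # unbiased: equals the mean times n / df
print(mean_sq_residuals, mse)
</syntaxhighlight>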
 
However, because of the behavior of the process of regression, the ''distributions'' of residuals at different data points (of the input variable) may vary ''even if'' the errors themselves are identically distributed. Concretely, in a [[linear regression]] where the errors are identically distributed, the variability of the residuals for inputs in the middle of the domain will be ''higher'' than the variability of the residuals at the ends of the domain: linear regressions fit endpoints better than the middle.
This is also reflected in the [[Influence function (statistics)|influence functions]] of various data points on the [[regression coefficient]]s: endpoints have more influence.
 
Thus, to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of the ''residuals'', which is called [[studentizing]]. This is particularly important in the case of detecting [[outliers]]: a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
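
A minimal sketch of (internally) studentized residuals for the straight-line case, using the hat matrix to obtain each point's leverage (the data are the invented values from the previous sketch):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.3, 1.9, 3.2, 3.8, 5.3])

X = np.column_stack([np.ones_like(x), x])   # design matrix with an intercept column
H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix; diagonal h_ii is the leverage of point i
leverage = np.diag(H)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
s2 = (residuals ** 2).sum() / (len(y) - 2)  # unbiased error-variance estimate

# Var(residual_i) = sigma^2 * (1 - h_ii): smaller at the high-leverage endpoints.
studentized = residuals / np.sqrt(s2 * (1 - leverage))
print(leverage)                             # largest at the endpoints
print(studentized)                          # residuals on a comparable scale
</syntaxhighlight>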
 
===Stochastic error===
The stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be [[gaussian|Gaussian]] (normal) in their distribution, because a stochastic error is most often the sum of many small random errors, and the distribution of a sum of many random errors is approximately normal, as shown by the [[Central Limit Theorem]].
A stochastic error term is added to a regression equation to account for all the variation in ''Y'' that cannot be explained by the included ''X''s. It is, in effect, an acknowledgment of our inability to model all the movements of the dependent variable.
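
A small simulation sketch of this central-limit behavior (the component errors are uniform here, an arbitrary non-Gaussian choice):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
# Each measurement's stochastic error is modeled as the sum of 50 small random errors.
components = rng.uniform(-0.01, 0.01, size=(100_000, 50))
stochastic_error = components.sum(axis=1)

# Although each component is uniform, the sums are approximately Gaussian:
# their histogram is bell-shaped, centered near 0.
print(stochastic_error.mean(), stochastic_error.std())
</syntaxhighlight>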
 
==Other uses of the word "error" in statistics==
 
The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value.  At least two other uses also occur in statistics, both referring to observable prediction errors:
 
[[Mean square error]] or '''mean squared error''' (abbreviated MSE) and [[Root mean square deviation | root mean square error]] (RMSE) refer to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated). 
 
'''Sum of squared errors''', typically abbreviated SSE or SS<sub>e</sub>, refers to the [[residual sum of squares]] (the sum of squared residuals) of a regression; this is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. Likewise, the '''sum of absolute errors''' (SAE) refers to the sum of the absolute values of the residuals, which is minimized in the [[least absolute deviations]] approach to regression.
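
For concreteness, SSE and SAE for a fitted line might be computed as follows (a sketch reusing the invented data above; note that the least-squares fit minimizes SSE, not SAE):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 1.3, 1.9, 3.2, 3.8, 5.3])
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)

sse = (residuals ** 2).sum()   # sum of squared residuals
sae = np.abs(residuals).sum()  # sum of absolute values of the residuals
print(sse, sae)
</syntaxhighlight>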
 
==See also==
{{Portal|Statistics}}
{{Colbegin}}
* [[Absolute deviation]]
* [[Consensus forecasts]]
* [[Deviation (statistics)]]
* [[Error detection and correction]]
* [[Explained sum of squares]]
* [[Innovation (signal processing)]]
* [[Innovations vector]]
* [[Lack-of-fit sum of squares]]
* [[Margin of error]]
* [[Mean absolute error]]
* [[Propagation of error]]
* [[Regression dilution]]
* [[Root mean square deviation]]
* [[Sampling error]]
* [[Studentized residual]]
* [[Type I and type II errors]]
{{Colend}}
 
==References==
{{reflist}}
*{{cite book |last1=Cook |first1=R. Dennis |last2=Weisberg |first2=Sanford |title=Residuals and Influence in Regression. |year=1982 |publisher=[[Chapman and Hall]] |location=New York |isbn=041224280X |url=http://www.stat.umn.edu/rir/ |edition=Repr. |accessdate=23 February 2013}}
*{{cite book |last=Weisberg |first=Sanford |title=Applied Linear Regression |year=1985 |publisher=Wiley |location=New York |isbn=9780471879572 |url=http://books.google.com/books?id=yRrvAAAAMAAJ&dq=editions:UMM1U2yvYVUC |edition=2nd |accessdate=23 February 2013}}
*{{springer|title=Errors, theory of|id=p/e036240}}
 
{{least squares and regression analysis|state=expanded}}
{{Statistics|correlation|state=collapsed}}
 
{{DEFAULTSORT:Errors And Residuals In Statistics}}
[[Category:Statistical deviation and dispersion]]
[[Category:Regression analysis]]
[[Category:Statistical theory]]
[[Category:Error]]
[[Category:Measurement]]
[[Category:Statistical terminology]]
 
[[fr:Résidu (statistiques)]]
[[id:Galat]]
[[it:Errore statistico]]
[[pt:Teoria dos erros]]
[[zh:误差]]
