In [[statistics]], [[Ronald Fisher|Fisher's]] '''scoring algorithm''' is a form of [[Newton's method]] used to solve [[maximum likelihood]] equations [[numerical analysis|numerically]].

==Sketch of derivation==

Let <math>Y_1,\ldots,Y_n</math> be [[random variable]]s, independent and identically distributed with twice differentiable [[Probability density function|p.d.f.]] <math>f(y; \theta)</math>; we wish to calculate the [[maximum likelihood estimator]] (M.L.E.) <math>\theta^*</math> of <math>\theta</math>. First, suppose we have a starting point for our algorithm <math>\theta_0</math>, and consider a [[Taylor series|Taylor expansion]] of the [[Score (statistics)|score function]], <math>V(\theta)</math>, about <math>\theta_0</math>:

: <math>V(\theta) \approx V(\theta_0) - \mathcal{J}(\theta_0)(\theta - \theta_0), \,</math>

where

: <math>\mathcal{J}(\theta_0) = - \sum_{i=1}^n \left. \nabla \nabla^{\top} \right|_{\theta=\theta_0} \log f(Y_i ; \theta)</math>

is the [[Observed information|observed information matrix]] at <math>\theta_0</math>. Now, setting <math>\theta = \theta^*</math>, using that <math>V(\theta^*) = 0</math> and rearranging gives us:

: <math>\theta^* \approx \theta_{0} + \mathcal{J}^{-1}(\theta_{0})V(\theta_{0}). \,</math>

We therefore use the algorithm

: <math>\theta_{m+1} = \theta_{m} + \mathcal{J}^{-1}(\theta_{m})V(\theta_{m}), \,</math>

and under certain regularity conditions, it can be shown that <math>\theta_m \rightarrow \theta^*</math>.
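
As a concrete illustration (a minimal sketch, not drawn from the article's references; the model choice and all names are illustrative), take the exponential density <math>f(y;\theta)=\theta e^{-\theta y}</math>, for which <math>V(\theta)=n/\theta-\sum_{i=1}^n y_i</math> and <math>\mathcal{J}(\theta)=n/\theta^2</math>:

<syntaxhighlight lang="python">
import numpy as np

def newton_scoring_exponential(y, theta0, tol=1e-10, max_iter=100):
    """Newton scoring for the rate theta of f(y; theta) = theta*exp(-theta*y).

    Score:                V(theta) = n/theta - sum(y)
    Observed information: J(theta) = n/theta**2
    Update:               theta <- theta + J(theta)**-1 * V(theta)
    """
    n, s = len(y), float(np.sum(y))
    theta = theta0
    for _ in range(max_iter):
        step = (n / theta - s) / (n / theta**2)  # J^{-1}(theta) V(theta)
        theta += step
        if abs(step) < tol:
            break
    return theta

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=1000)        # true rate theta = 1
print(newton_scoring_exponential(y, theta0=0.5))
</syntaxhighlight>

Here the iteration has the closed-form fixed point <math>\theta = n/\sum_i y_i = 1/\bar{y}</math>, the exact M.L.E., so the output can be checked directly.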

==Fisher scoring==

In practice, <math>\mathcal{J}(\theta)</math> is usually replaced by <math>\mathcal{I}(\theta)= \mathrm{E}[\mathcal{J}(\theta)]</math>, the [[Fisher information]], thus giving us the '''Fisher scoring algorithm''':

: <math>\theta_{m+1} = \theta_{m} + \mathcal{I}^{-1}(\theta_{m})V(\theta_{m})</math>.
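
As an illustration of the difference (again a minimal sketch; the Cauchy model and all names are illustrative, not drawn from the article's references), consider the [[Cauchy distribution|Cauchy]] location model <math>f(y;\theta)=1/\bigl(\pi(1+(y-\theta)^2)\bigr)</math>: the expected information <math>\mathcal{I}(\theta)=n/2</math> is constant, whereas the observed information <math>\mathcal{J}(\theta)</math> depends on the sample and can fail to be positive far from the optimum.

<syntaxhighlight lang="python">
import numpy as np

def fisher_scoring_cauchy(y, theta0, tol=1e-10, max_iter=200):
    """Fisher scoring for the location theta of a standard Cauchy density.

    Score:              V(theta) = sum(2*(y - theta) / (1 + (y - theta)**2))
    Fisher information: I(theta) = n/2   (constant, unlike J(theta))
    Update:             theta <- theta + I(theta)**-1 * V(theta)
    """
    n = len(y)
    theta = theta0
    for _ in range(max_iter):
        r = y - theta
        step = np.sum(2 * r / (1 + r**2)) / (n / 2)  # I^{-1}(theta) V(theta)
        theta += step
        if abs(step) < tol:
            break
    return theta

rng = np.random.default_rng(1)
y = rng.standard_cauchy(1000) + 3.0       # true location 3.0
print(fisher_scoring_cauchy(y, theta0=np.median(y)))
</syntaxhighlight>

Starting from the sample median keeps the iterates in the region where the quadratic approximation is good; replacing <math>n/2</math> with the observed information would recover the Newton iteration of the previous section.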

==See also==
| *[[Score (statistics)]]

==References==
* Jennrich, R. I. & Sampson, P. F. (1976). "Newton–Raphson and related algorithms for maximum likelihood variance component estimation". ''Technometrics'', '''18''', 11–17.

<references />
| [[Category:Estimation theory]]
| [[Category:Statistical algorithms]]