The '''Gauss–Newton algorithm''' is a method used to solve [[non-linear least squares]] problems. It can be seen as a modification of [[newton's method in optimization|Newton's method]] for finding a [[maxima and minima|minimum]] of a [[function (mathematics)|function]]. Unlike Newton's method, the Gauss–Newton algorithm can ''only'' be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.


Non-linear least squares problems arise for instance in [[non-linear regression]], where parameters in a model are sought such that the model is in good agreement with available observations.

The method is named after the mathematicians [[Carl Friedrich Gauss]] and [[Isaac Newton]].
 
== Description ==
Given ''m'' functions '''r''' = (''r''<sub>1</sub>, …, ''r''<sub>''m''</sub>) of ''n'' variables '''''&beta;'''''&nbsp;=&nbsp;(''&beta;''<sub>1</sub>, …, ''&beta;''<sub>''n''</sub>), with ''m''&nbsp;≥&nbsp;''n'', the Gauss–Newton algorithm [[iterative method|iteratively]] finds the minimum of the sum of squares<ref name=ab>Björck (1996)</ref>
 
:<math> S(\boldsymbol \beta)= \sum_{i=1}^m r_i^2(\boldsymbol \beta).</math>  
 
Starting with an initial guess <math>\boldsymbol \beta^{(0)}</math> for the minimum, the method proceeds by the iterations
 
:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{ J_r} ^\top \mathbf{r}(\boldsymbol \beta^{(s)}) </math>
 
where
:<math> \mathbf{J_r} = \frac{\partial r_i (\boldsymbol \beta^{(s)})}{\partial \beta_j}</math>
 
is the [[Jacobian matrix]] of '''r''' at <math>\boldsymbol \beta^{(s)}</math> and the symbol <math>^\top</math> denotes the [[matrix transpose]].
 
If ''m''&nbsp;=&nbsp;''n'', the iteration simplifies to
 
:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left( \mathbf{J_r} \right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)}) </math>
 
which is a direct generalization of [[Newton's method]] in one dimension.
 
In data fitting, where the goal is to find the parameters '''''&beta;''''' such that a given model function ''y''&nbsp;=&nbsp;''f''(''x'', '''''&beta;''''') best fits some data points (''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), the functions ''r''<sub>''i''</sub> are the [[residual (statistics)|residuals]]
 
: <math>r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).</math>
 
Then, the Gauss–Newton method can be expressed in terms of the Jacobian of the function ''f'' as
 
:<math> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_f}^\top \mathbf{J_f} \right)^{-1} \mathbf{ J_f} ^\top \mathbf{r}(\boldsymbol \beta^{(s)}). </math>
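
A minimal sketch of this iteration in Python (with NumPy) could look as follows; the functions <code>residuals</code> and <code>jacobian</code>, returning <math>\mathbf{r}(\boldsymbol\beta)</math> and <math>\mathbf{J_r}(\boldsymbol\beta)</math>, are placeholders to be supplied by the caller, and the fixed iteration count is purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

def gauss_newton(residuals, jacobian, beta0, n_iter=10):
    """Minimal Gauss-Newton iteration (illustrative sketch).

    residuals(beta) -> r, an (m,) vector of residual values
    jacobian(beta)  -> J, the (m, n) Jacobian of r with respect to beta
    """
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residuals(beta)
        J = jacobian(beta)
        # Normal equations of the linearized problem: (J^T J) delta = -J^T r
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        beta = beta + delta
    return beta
</syntaxhighlight>

Solving the normal equations directly, as above, is the simplest formulation; better-conditioned alternatives are discussed in the Notes section below.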
 
==Notes==
 
The assumption ''m''&nbsp;≥&nbsp;''n'' in the algorithm statement is necessary, as otherwise the matrix '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' is not invertible and the normal equations cannot be solved (at least not uniquely).
 
The Gauss–Newton algorithm can be derived by [[linear approximation|linearly approximating]] the vector of functions ''r''<sub>''i''</sub>. Using [[Taylor's theorem]], we can write at every iteration:
 
: <math>\mathbf{r}(\boldsymbol \beta)\approx \mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta</math>
 
with <math>\Delta=\boldsymbol \beta-\boldsymbol \beta^s.</math> The task of finding &Delta; minimizing the sum of squares of the right-hand side, i.e.,
: <math>\min_{\Delta}\|\mathbf{r}(\boldsymbol \beta^s)+\mathbf{J_r}(\boldsymbol \beta^s)\Delta\|_2^2</math>,
is a [[linear least squares (mathematics)|linear least squares]] problem, which can be solved explicitly, yielding the normal equations in the algorithm.
 
The normal equations are ''n'' simultaneous linear equations in the unknown increments, &Delta;. They may be solved in one step, using [[Cholesky decomposition]], or, better, the [[QR factorization]] of '''J'''<sub>'''r'''</sub>. For large systems, an [[iterative method]], such as the [[conjugate gradient]] method, may be more efficient. If there is a linear dependence between columns of '''J'''<sub>'''r'''</sub>, the iterations will fail, as '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' becomes singular.
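
As one possible illustration, the increment can be obtained without forming '''J<sub>r</sub>'''<sup>T</sup>'''J<sub>r</sub>''' at all, by solving the linearized least-squares problem with an orthogonal factorization. In the Python/NumPy sketch below, <code>J</code> and <code>r</code> stand for the Jacobian and residual vector at the current iterate; <code>np.linalg.lstsq</code> uses an SVD-based solve, which avoids squaring the condition number of the Jacobian.

<syntaxhighlight lang="python">
import numpy as np

def gn_increment(J, r):
    """Increment delta solving the linearized subproblem min || r + J @ delta ||_2."""
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return delta
</syntaxhighlight>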
 
==Example==
[[File:Gauss Newton illustration.png|thumb|right|280px|Calculated curve obtained with <math>\hat\beta_1=0.362</math> and <math>\hat\beta_2=0.556</math> (in blue) versus the observed data (in red).]]
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.  
 
In a biology experiment studying the relation between substrate concentration [''S''] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
:{|class="wikitable" style="text-align: center;"
|''i'' || 1 || 2 || 3 || 4 || 5 || 6 || 7
|-
| [S] || 0.038 || 0.194 || 0.425 || 0.626 || 1.253 || 2.500 || 3.740
|-
| rate || 0.050 || 0.127 || 0.094 || 0.2122 ||  0.2729 ||  0.2665 || 0.3317
|}
 
It is desired to find a curve (model function) of the form
 
:<math>\text{rate}=\frac{V_\text{max}[S]}{K_M+[S]}</math>
 
that best fits the data in the least squares sense, with the parameters <math>V_\text{max}</math> and <math>K_M</math> to be determined.  
 
Denote by <math>x_i</math> and <math>y_i</math> the value of <math>[S]</math> and the rate from the table, <math>i=1, \dots, 7.</math> Let <math>\beta_1=V_\text{max}</math> and <math>\beta_2=K_M.</math> We will find <math>\beta_1</math> and <math>\beta_2</math> such that the sum of squares of the residuals
: <math>r_i = y_i - \frac{\beta_1x_i}{\beta_2+x_i}</math> &nbsp; (<math>i=1,\dots, 7</math>)
is minimized.
 
The Jacobian <math>\mathbf{J_r}</math> of the vector of residuals <math>r_i</math> with respect to the unknowns <math>\beta_j</math> is a <math>7\times 2</math> matrix with the <math>i</math>-th row having the entries
 
:<math>\frac{\partial r_i}{\partial \beta_1}= -\frac{x_i}{\beta_2+x_i},\ \frac{\partial r_i}{\partial \beta_2}= \frac{\beta_1x_i}{\left(\beta_2+x_i\right)^2}.</math>
 
Starting with the initial estimates of <math>\beta_1</math>=0.9 and <math>\beta_2</math>=0.2, after five iterations of the Gauss–Newton algorithm the optimal values <math>\hat\beta_1=0.362</math> and <math>\hat\beta_2=0.556</math> are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters versus the observed data.
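
The computation in this example can be reproduced with a short script; the following Python/NumPy sketch uses the data from the table and the Jacobian entries given above (printed values may differ slightly in the last digit).

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])      # substrate concentration [S]
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])  # observed rate

beta = np.array([0.9, 0.2])  # initial estimates of (Vmax, KM)
for _ in range(5):
    r = y - beta[0] * x / (beta[1] + x)                      # residuals r_i
    J = np.column_stack([-x / (beta[1] + x),                 # dr_i / d beta_1
                         beta[0] * x / (beta[1] + x) ** 2])  # dr_i / d beta_2
    beta = beta + np.linalg.solve(J.T @ J, -J.T @ r)         # Gauss-Newton step

r = y - beta[0] * x / (beta[1] + x)
print(beta)   # approximately [0.362, 0.556]
print(r @ r)  # sum of squared residuals, approximately 0.00784
</syntaxhighlight>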
 
==Convergence properties==
 
It can be shown<ref>Björck (1996), p. 260</ref> that the increment &Delta; is a [[descent direction]] for ''S'', and, if the algorithm converges, then the limit is a [[stationary point]] of ''S''. However, convergence is not guaranteed, not even [[local convergence]] as in [[Newton's method in optimization|Newton's method]].  

The rate of convergence of the Gauss–Newton algorithm can approach [[rate of convergence|quadratic]].<ref>Björck (1996), pp. 341–342</ref> The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix <math>\mathbf{J_r^T J_r}</math> is [[ill-conditioned]]. For example, consider the problem with <math>m=2</math> equations and <math>n=1</math> variable, given by
:<math> \begin{align}
r_1(\beta) &= \beta + 1 \\
r_2(\beta) &= \lambda \beta^2 + \beta - 1.
\end{align} </math>
The optimum is at <math>\beta = 0</math>. If <math>\lambda = 0</math>, then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1, then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.<ref>Fletcher (1987), p. 113</ref>
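
This behaviour can be observed numerically. The sketch below (Python/NumPy) applies the Gauss–Newton update to this problem for two illustrative values of <math>\lambda</math>; the starting points are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

def gn_iterates(lam, beta0, n_iter=6):
    """Gauss-Newton iterates for r1 = beta + 1, r2 = lam*beta**2 + beta - 1."""
    beta, history = float(beta0), [float(beta0)]
    for _ in range(n_iter):
        r = np.array([beta + 1.0, lam * beta ** 2 + beta - 1.0])
        J = np.array([[1.0], [2.0 * lam * beta + 1.0]])    # 2x1 Jacobian
        beta += np.linalg.solve(J.T @ J, -J.T @ r).item()  # Gauss-Newton step
        history.append(beta)
    return history

print(gn_iterates(0.1, 0.5))  # errors shrink by roughly a factor of 0.1 per step
print(gn_iterates(2.0, 0.1))  # iterates move away from the stationary point at 0
</syntaxhighlight>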
 
== Derivation from Newton's method ==
 
In what follows, the Gauss–Newton algorithm will be derived from [[Newton's method in optimization|Newton's method]] for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm is at most quadratic.  
 
The recurrence relation for Newton's method for minimizing a function ''S'' of parameters, '''''&beta;''''', is
:<math> \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g \, </math>
where '''g''' denotes the [[gradient|gradient vector]] of ''S'' and '''H''' denotes the [[Hessian matrix]] of ''S''.
Since <math> S = \sum_{i=1}^m r_i^2</math>, the gradient is given by
:<math>g_j=2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial \beta_j}.</math>
Elements of the Hessian are calculated by differentiating the gradient elements, <math>g_j</math>, with respect to <math>\beta_k</math>
:<math>H_{jk}=2\sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}+r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).</math>
 
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
 
:<math>H_{jk}\approx 2\sum_{i=1}^m J_{ij}J_{ik}</math>
 
where <math>J_{ij}=\frac{\partial r_i}{\partial \beta_j}</math> are entries of the Jacobian '''J<sub>r</sub>'''. The gradient and the approximate Hessian can be written in matrix notation as
 
:<math>\mathbf g=2\mathbf{J_r}^\top \mathbf{r}, \quad \mathbf{H} \approx 2 \mathbf{J_r}^\top \mathbf{J_r}.\,</math>
 
These expressions are substituted into the recurrence relation above to obtain the operational equations
 
:<math> \boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)}+\Delta;\quad \Delta = -\left( \mathbf{J_r}^\top \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\top \mathbf{r}. </math>
 
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
:<math>\left|r_i\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j}\frac{\partial r_i}{\partial \beta_k}\right|</math>
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected.<ref>Nocedal and Wright (1999) {{Page needed|date=December 2010}}</ref>
#The function values <math>r_i</math> are small in magnitude, at least around the minimum.  
#The functions are only "mildly" non-linear, so that <math>\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}</math> is relatively small in magnitude (a numerical comparison of the two Hessians is sketched below).
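
The sketch below (Python/NumPy) compares the full and approximate Hessians for the one-variable example of the convergence section, where the only non-zero second derivative is <math>\partial^2 r_2/\partial\beta^2 = 2\lambda</math>; the chosen values of <math>\lambda</math> are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def hessians(lam, beta):
    """Full vs. Gauss-Newton Hessian of S for r1 = beta + 1, r2 = lam*beta**2 + beta - 1."""
    r2 = lam * beta ** 2 + beta - 1.0
    J = np.array([1.0, 2.0 * lam * beta + 1.0])   # first derivatives dr_i / dbeta
    h_gn = 2.0 * np.sum(J * J)                    # Gauss-Newton approximation 2 J^T J
    h_full = h_gn + 2.0 * r2 * (2.0 * lam)        # add the dropped term 2 * r_2 * d^2 r_2 / dbeta^2
    return h_gn, h_full

print(hessians(0.1, 0.0))  # (4.0, 3.6): the dropped term is small, the approximation is good
print(hessians(2.0, 0.0))  # (4.0, -4.0): the dropped term dominates and flips the curvature sign
</syntaxhighlight>

For <math>\lambda = 2</math> the dropped term dominates and even reverses the sign of the curvature at <math>\beta = 0</math>, consistent with the failure of local convergence noted in the convergence section.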
 
== Improved versions ==
 
With the Gauss–Newton method the sum of squares ''S'' may not decrease at every iteration. However, since &Delta; is a descent direction, unless <math>S(\boldsymbol \beta^s)</math> is a stationary point, it holds that <math>S(\boldsymbol \beta^s+\alpha\Delta) < S(\boldsymbol \beta^s)</math> for all sufficiently small <math>\alpha>0</math>. Thus, if divergence occurs, one solution is to employ a fraction, <math>\alpha</math>, of the increment vector &Delta; in the updating formula 
:<math> \boldsymbol \beta^{s+1} = \boldsymbol \beta^s+\alpha\ \Delta</math>.
In other words, the increment vector is too long, but it points "downhill", so going just a part of the way will decrease the objective function ''S''. An optimal value for <math>\alpha</math> can be found by using a [[line search]] algorithm, that is, the magnitude of <math>\alpha</math> is determined by finding the value that minimizes ''S'', usually using a [[line search|direct search method]] in the interval <math>0<\alpha<1</math>.
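
A common simple realization is a backtracking search that halves <math>\alpha</math> until the sum of squares decreases; the sketch below (Python/NumPy, with a user-supplied <code>residuals</code> function) is one such variant, not the exact line search described above.

<syntaxhighlight lang="python">
import numpy as np

def damped_step(residuals, beta, delta):
    """Return beta + alpha*delta, halving alpha until the sum of squares decreases."""
    S_old = np.sum(residuals(beta) ** 2)
    alpha = 1.0
    while np.sum(residuals(beta + alpha * delta) ** 2) >= S_old and alpha > 1e-8:
        alpha *= 0.5   # shorten the step along the descent direction
    return beta + alpha * delta
</syntaxhighlight>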
 
In cases where the direction of the shift vector is such that the optimal fraction, <math> \alpha </math>, is close to zero, an alternative method for handling divergence is the use of the [[Levenberg–Marquardt algorithm]], a [[trust region]] method.<ref name="ab"/> The normal equations are modified in such a way that the increment vector is rotated towards the direction of [[steepest descent]],
:<math>\left(\mathbf{J^TJ+\lambda D}\right)\Delta=-\mathbf{J}^T \mathbf{r}</math>,
where '''D''' is a positive diagonal matrix. Note that when '''D''' is the identity matrix and <math>\lambda\to+\infty</math>, then <math>\lambda\Delta\to -\mathbf{J}^T \mathbf{r}</math>; therefore the [[Direction (geometry, geography)|direction]] of &Delta; approaches the direction of the negative gradient <math>-\mathbf{J}^T \mathbf{r}</math>, that is, the direction of steepest descent.
 
The so-called Marquardt parameter, <math>\lambda</math>, may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time <math>\lambda</math> is changed. A more efficient strategy is as follows: when divergence occurs, increase the Marquardt parameter until there is a decrease in ''S''; then retain the value from one iteration to the next, but decrease it when possible, until a cut-off value is reached, at which point the Marquardt parameter can be set to zero and the minimization of ''S'' becomes a standard Gauss–Newton minimization.
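
A sketch of such a scheme is given below (Python/NumPy), with '''D''' taken as the identity matrix; the factors by which the Marquardt parameter is raised and lowered are illustrative choices, and the cut-off test described above is omitted for brevity.

<syntaxhighlight lang="python">
import numpy as np

def levenberg_marquardt(residuals, jacobian, beta0, lam=1e-3, n_iter=50):
    """Gauss-Newton with a Marquardt parameter; D is the identity, schedule is illustrative."""
    beta = np.asarray(beta0, dtype=float)
    S = np.sum(residuals(beta) ** 2)
    for _ in range(n_iter):
        r, J = residuals(beta), jacobian(beta)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), -J.T @ r)
        S_new = np.sum(residuals(beta + delta) ** 2)
        if S_new < S:
            beta, S = beta + delta, S_new
            lam *= 0.5     # S decreased: accept the step, move back towards Gauss-Newton
        else:
            lam *= 10.0    # divergence: reject the step, move towards steepest descent
    return beta
</syntaxhighlight>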
 
== Related algorithms ==
 
In a [[quasi-Newton method]], such as that due to [[Davidon-Fletcher-Powell (DFP) formula|Davidon, Fletcher and Powell]] or Broyden–Fletcher–Goldfarb–Shanno ([[BFGS method]]), an estimate of the full Hessian, <math>\frac{\partial^2 S}{\partial \beta_j \partial\beta_k}</math>, is built up numerically using first derivatives <math>\frac{\partial r_i}{\partial\beta_j}</math> only, so that after ''n'' refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to non-linear least-squares problems.
 
Another method for solving minimization problems using only first derivatives is [[gradient descent]]. However, this method does not take the second derivatives into account, even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
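
For comparison, general-purpose libraries typically offer both families of methods. The sketch below uses SciPy with the enzyme-kinetics data from the example above; the particular solver choices (<code>least_squares</code> with a Levenberg–Marquardt backend and <code>minimize</code> with BFGS) are illustrative.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import least_squares, minimize

x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

def residuals(beta):
    return y - beta[0] * x / (beta[1] + x)

# A dedicated least-squares solver works on the residual vector itself
# (method='lm' selects a Levenberg-Marquardt implementation).
fit_ls = least_squares(residuals, x0=[0.9, 0.2], method='lm')

# A quasi-Newton method (BFGS) minimizes the scalar sum of squares directly.
fit_qn = minimize(lambda beta: np.sum(residuals(beta) ** 2), x0=[0.9, 0.2], method='BFGS')

print(fit_ls.x, fit_qn.x)  # both approach (0.362, 0.556)
</syntaxhighlight>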
 
==Notes==
<references />
 
==References==
*{{cite book
| last      = Björck | first      = A.
| title      = Numerical methods for least squares problems
| publisher  = SIAM, Philadelphia  | year      = 1996  | isbn      = 0-89871-360-9 }}
* {{Cite book | last1=Fletcher | first1=Roger | title=Practical methods of optimization | publisher=[[John Wiley & Sons]] | location=New York | edition=2nd | isbn=978-0-471-91547-8 | year=1987 }}.
*{{cite book
| last1      = Nocedal  | first1     = Jorge  | last2      = Wright  | first2     = Stephen
| title      = Numerical optimization
| publisher  = Springer  | location   = New York  | year      = 1999  | isbn      = 0-387-98793-2 }}
 
{{Least Squares and Regression Analysis}}
 
{{Optimization algorithms}}
 
{{DEFAULTSORT:Gauss-Newton algorithm}}
[[Category:Optimization algorithms and methods]]
[[Category:Least squares]]
[[Category:Statistical algorithms]]
