BHHH algorithm

'''BHHH''' is an [[Optimization (mathematics)|optimization]] [[algorithm]] in [[numerical optimization]] similar to the [[Gauss–Newton algorithm]]. It is an [[Acronym and initialism|acronym]] of its four originators: Berndt, B. Hall, R. Hall, and [[Jerry Hausman]].
 
==Usage==
If a [[nonlinear]] model is fitted to the [[data]], one often needs to estimate [[coefficient]]s through [[Optimization (mathematics)|optimization]]. A number of optimization algorithms have the following general structure. Suppose that the function to be optimized is ''Q''(''β''). Then the algorithms are iterative, defining a sequence of approximations ''β<sub>k</sub>'' given by
:<math>\beta_{k+1}=\beta_{k}-\lambda_{k}A_{k}\frac{\partial Q}{\partial \beta}(\beta_{k}),</math>
 
where <math>\beta_{k}</math> is the parameter estimate at step ''k'', and <math>\lambda_{k}</math> is a parameter (called the step size) which partly determines the particular algorithm. For the BHHH algorithm, ''&lambda;<sub>k</sub>'' is determined by calculations within a given iterative step, involving a line search until a point ''&beta;''<sub>''k''+1</sub> is found satisfying certain criteria. In addition, for the BHHH algorithm, ''Q'' has the form
 
:<math>Q = \sum_{i=1}^{N} Q_i</math>
and ''A<sub>k</sub>'' is calculated using
:<math>A_{k}=\left[\sum_{i=1}^{N}\frac{\partial \ln Q_i}{\partial \beta}(\beta_{k})\frac{\partial \ln Q_i}{\partial \beta}(\beta_{k})'\right]^{-1} .</math>
In other cases, e.g. [[Newton–Raphson]], <math>A_{k}</math> can have other forms. The BHHH algorithm has the advantage that, if certain conditions apply, convergence of the iterative procedure is guaranteed.{{cn|date=March 2013}}
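
The update rule above can be sketched directly in code. The following is a minimal Python illustration of one BHHH iteration, assuming the caller supplies a hypothetical <code>score_fn</code> returning the ''N''&times;''p'' matrix of per-observation scores <math>\partial \ln Q_i/\partial \beta</math> and an <code>objective_fn</code> returning ''Q''(''β'') (treated as being minimized, matching the sign convention above); the backtracking rule for ''&lambda;<sub>k</sub>'' is just one possible instance of the "certain criteria" mentioned above, not a prescription from the original paper.

<syntaxhighlight lang="python">
import numpy as np

def bhhh_update(beta, score_fn, objective_fn, max_halvings=20):
    """One BHHH iteration (illustrative sketch, not from the 1974 paper).

    score_fn(beta)     -> (N, p) array whose rows are the per-observation
                          scores d ln Q_i / d beta evaluated at beta.
    objective_fn(beta) -> scalar Q(beta), assumed to be minimized here.
    """
    S = score_fn(beta)              # N x p matrix of scores
    g = S.sum(axis=0)               # gradient of Q at beta
    A = np.linalg.inv(S.T @ S)      # BHHH outer-product approximation A_k
    direction = A @ g
    # Backtracking line search for lambda_k: halve the step until the
    # objective decreases (one simple acceptance criterion).
    lam, q0 = 1.0, objective_fn(beta)
    for _ in range(max_halvings):
        candidate = beta - lam * direction
        if objective_fn(candidate) < q0:
            return candidate
        lam *= 0.5
    return beta                     # no improving step found
</syntaxhighlight>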
 
==Literature==
*Berndt, E., B. Hall, R. Hall, and J. Hausman (1974), [http://elsa.berkeley.edu/~bhhall/papers/BerndtHallHallHausman74.pdf “Estimation and Inference in Nonlinear Structural Models”], ''Annals of Economic and Social Measurement'', 3, 653&ndash;665.
*Luenberger, D. (1972), ''Introduction to Linear and Nonlinear Programming'', Addison-Wesley, Reading, Massachusetts.
*Gill, P., W. Murray, and M. Wright (1981), ''Practical Optimization'', Harcourt Brace and Company, London.
*Sokolov, S.N., and I.N. Silin (1962), “Determination of the coordinates of the minima of functionals by the linearization method”, Joint Institute for Nuclear Research preprint D-810, Dubna.
 
{{DEFAULTSORT:Bhhh Algorithm}}
[[Category:Econometrics]]
[[Category:Optimization algorithms and methods]]
