[[File:Ellipsoid 2.png|thumb|400px|An iteration of the ellipsoid method]]
In [[mathematical optimization]], the '''ellipsoid method''' is an [[iterative method]] for [[convex optimization|minimizing]] [[convex function]]s. When specialized to solving feasible [[linear optimization]] problems with rational data, the ellipsoid method is an  [[algorithm]] which finds an optimal solution in a finite number of steps.
 
The ellipsoid method generates a sequence of [[ellipsoid]]s whose volume uniformly decreases at every step, thus enclosing a minimizer of a [[convex function]].
 
==History==
 
The ellipsoid method has a long history.  As an [[iterative method]], a preliminary version was introduced by [[Naum Z. Shor]]. In 1972, an [[approximation algorithm]] for real [[convex optimization|convex minimization]] was studied by [[Arkadi Nemirovski]] and David B. Yudin (Judin). As an algorithm for solving [[linear programming]] problems with rational data, the ellipsoid algorithm was studied by [[Leonid Khachiyan]]: Khachiyan's achievement was to prove the [[Polynomial time|polynomial-time]] solvability of linear programs.
 
Following Khachiyan's work, the ellipsoid method was the only algorithm for solving linear programs whose running time had been proved to be polynomial, until the advent of [[Karmarkar's algorithm]]. However, [[interior-point method]]s and variants of the [[simplex algorithm]] are much faster than the ellipsoid method in practice; Karmarkar's algorithm is also faster in the worst case.
 
However, the ellipsoid algorithm allows [[computational complexity|complexity theorists]] to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in [[combinatorial optimization]] theory for many decades.<ref>M. Grötschel, [[László Lovász|L. Lovász]], [[Alexander Schrijver|A. Schrijver]]: ''Geometric Algorithms and Combinatorial Optimization'', Springer, 1988.</ref><ref>[[László Lovász|L. Lovász]]: ''An Algorithmic Theory of Numbers, Graphs, and Convexity'', CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986.</ref><ref>V. Chandru and M. R. Rao, Linear Programming, Chapter 31 in ''Algorithms and Theory of Computation Handbook'', edited by [[Mikhail Atallah|M. J. Atallah]], CRC Press 1999, 31-1 to 31-37.</ref><ref>V. Chandru and M. R. Rao, Integer Programming, Chapter 32 in ''Algorithms and Theory of Computation Handbook'', edited by M. J. Atallah, CRC Press 1999, 32-1 to 32-45.</ref> Only in the 21st century have interior-point algorithms with similar complexity properties appeared.{{Citation needed|date=April 2013}}
 
==Description==
{{main|Convex optimization}}
A convex minimization problem consists of a [[convex function]] <math>f_0(x): \mathbb{R}^n \to \mathbb{R}</math> to be minimized over the variable <math>x</math>, convex inequality constraints of the form <math>f_i(x) \leq 0</math>, where the functions <math>\ f_i</math> are convex, and linear equality constraints of the form <math>\ h_i(x) = 0</math>.  We are also given an initial [[ellipsoid]] <math>\mathcal{E}^{(0)} \subset \mathbb{R}^n</math> defined as
 
:<math>\mathcal{E}^{(0)} = \left \{z \in \mathbb{R}^n : (z - x_0)^T P_{(0)}^{-1} (z-x_0) \leq 1  \right \}</math>
 
containing a minimizer <math>\ x^*</math>, where <math>P_{(0)} \succ 0</math> and <math>x_0 \ </math> is the center of <math>\mathcal{E}^{(0)}</math>.  Finally, we require the existence of a [[cutting-plane]] oracle for the objective function <math>f_0 \ </math>. One example of a cutting-plane is given by a [[subgradient]] <math>g \ </math> of <math>f_0 \ </math>.
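For instance, if <math>f_0</math> is the piecewise-linear convex function <math>f_0(x) = \max_i (a_i^T x + b_i)</math>, then the slope <math>a_i</math> of any piece attaining the maximum at <math>x</math> is a subgradient there. A minimal sketch of such an oracle follows; the function name and the NumPy-based interface are illustrative choices, not taken from any particular library.

<syntaxhighlight lang="python">
import numpy as np

def subgradient_oracle(A, b, x):
    """Cutting-plane oracle for f0(x) = max_i (a_i^T x + b_i).

    Returns f0(x) and a subgradient g, so that
    f0(z) >= f0(x) + g^T (z - x) for every z.
    """
    values = A @ x + b          # a_i^T x + b_i for every piece
    i = int(np.argmax(values))  # a piece attaining the maximum
    return values[i], A[i]      # its slope a_i is a subgradient at x
</syntaxhighlight>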
 
==Unconstrained minimization==
At the <math>k \ </math>-th iteration of the algorithm, we have a point <math>x^{(k)}</math> at the center of an ellipsoid <math>\mathcal{E}^{(k)} = \left \{x \in \mathbb{R}^n : (x-x^{(k)})^T P_{(k)}^{-1} (x-x^{(k)}) \leq 1  \right \}</math>. We query the cutting-plane oracle to obtain a vector <math>g^{(k+1)} \in \mathbb{R}^n</math> such that <math>g^{(k+1)T} (x^* - x^{(k)} ) \leq 0</math>. We therefore conclude that
 
:<math>x^* \in \mathcal{E}^{(k)} \cap \{z: g^{(k+1)T} (z - x^{(k)} ) \leq 0 \}.</math>
 
We set <math>\mathcal{E}^{(k+1)}</math> to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute <math>x^{(k+1)}</math>.  The update is given by
 
:<math>x^{(k+1)} = x^{(k)} - \frac{1}{n+1} P_{(k)} \tilde{g}^{(k+1)}</math>
:<math>P_{(k+1)} = \frac{n^2}{n^2-1} \left(P_{(k)} - \frac{2}{n+1} P_{(k)} \tilde{g}^{(k+1)} \tilde{g}^{(k+1)T} P_{(k)} \right ) </math>
 
where <math>\tilde{g}^{(k+1)} = \left(1/\sqrt{g^{(k+1)T} P_{(k)} g^{(k+1)}}\right)g^{(k+1)}</math>.  The stopping criterion is given by the property that
 
:<math>\sqrt{g^{(k)T}P_{(k)}g^{(k)}} \leq \epsilon \Rightarrow f_0(x^{(k)}) - f_0(x^*) \leq \epsilon.</math>
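The update formulas and the stopping criterion translate directly into code. The following is a minimal sketch rather than a robust solver: it assumes <math>n \geq 2</math> (the volume factor <math>n^2/(n^2-1)</math> is undefined for <math>n = 1</math>), that the oracle returns the objective value together with a subgradient, and that the initial ellipsoid contains a minimizer.

<syntaxhighlight lang="python">
import numpy as np

def ellipsoid_minimize(oracle, x0, P0, eps=1e-6, max_iter=1000):
    """Minimize a convex function given a subgradient oracle.

    oracle(x) must return (f(x), g) with g a subgradient at x.
    (x0, P0) define the initial ellipsoid
        E0 = { z : (z - x0)^T P0^{-1} (z - x0) <= 1 },
    which is assumed to contain a minimizer x*.
    """
    x = np.asarray(x0, dtype=float)
    P = np.asarray(P0, dtype=float)
    n = x.size                            # requires n >= 2
    best_x, best_f = x, np.inf
    for _ in range(max_iter):
        fx, g = oracle(x)
        if fx < best_f:                   # track the best iterate seen
            best_x, best_f = x, fx
        gPg = float(g @ P @ g)
        if np.sqrt(gPg) <= eps:           # stopping criterion:
            break                         # f(x) - f(x*) <= eps
        gt = g / np.sqrt(gPg)             # normalized subgradient
        Pg = P @ gt
        x = x - Pg / (n + 1)              # center update
        # P update: (n^2/(n^2-1)) (P - (2/(n+1)) P gt gt^T P);
        # np.outer(Pg, Pg) equals P gt gt^T P since P is symmetric.
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return best_x, best_f
</syntaxhighlight>

Combined with the oracle sketched earlier, <code>ellipsoid_minimize(lambda x: subgradient_oracle(A, b, x), x0, P0)</code> minimizes the piecewise-linear example.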
 
{|border = "1"
|+ Sample sequence of iterates
|-
|[[File:Ellipsoid 1.png|thumb|center|250px|<math>k = 0</math>]]
|[[File:Ellipsoid 2.png|thumb|center|250px|<math>k = 1</math>]]
|[[File:Ellipsoid 3.png|thumb|center|250px|<math>k = 2</math>]]
|-
|[[File:Ellipsoid 4.png|thumb|center|250px|<math>k = 3</math>]]
|[[File:Ellipsoid 5.png|thumb|center|250px|<math>k = 4</math>]]
|[[File:Ellipsoid 6.png|thumb|center|250px|<math>k = 5</math>]]
|}
 
==Inequality-constrained minimization==
At the <math>k \ </math>-th iteration of the algorithm for constrained minimization, we have a point <math>x^{(k)}</math> at the center of an ellipsoid <math>\mathcal{E}^{(k)}</math> as before.  We must also maintain the value <math>f_{\rm{best}}^{(k)}</math>, recording the smallest objective value attained by a feasible iterate so far.  Depending on whether or not the point <math>x^{(k)}</math> is feasible, we perform one of two tasks:
 
*If <math>x^{(k)}</math> is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient <math>g_0 \ </math> that satisfies
 
:<math>g_0^T(x^*-x^{(k)} ) + f_0(x^{(k)}) - f_{\rm{best}}^{(k)} \leq 0</math>
 
*If <math>x^{(k)}</math> is infeasible and violates the <math>j</math>-th constraint, update the ellipsoid with a feasibility cut. Our feasibility cut may be a subgradient <math>g_j</math> of <math>f_j</math> which must satisfy
 
:<math>g_j^T(z-x^{(k)}) + f_j(x^{(k)})\leq 0</math>
 
for all feasible <math>z \ </math>.
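In code, each iteration therefore first searches for a violated constraint and otherwise falls back to an objective cut. A minimal sketch of this cut selection follows; the oracle interfaces mirror the hypothetical one used in the unconstrained sketch, and the ellipsoid update itself only needs the returned cut direction.

<syntaxhighlight lang="python">
def select_cut(x, f0_oracle, constraint_oracles, f_best):
    """Choose the cut vector for one constrained-minimization step.

    f0_oracle(x) -> (f0(x), g0); each element of constraint_oracles
    is an fj_oracle(x) -> (fj(x), gj).  Returns the cut direction g
    and the (possibly updated) best feasible objective value.
    """
    # Feasibility cut: if some constraint f_j(x) > 0 is violated, its
    # subgradient g_j cuts off x, since every feasible z satisfies
    # g_j^T (z - x) + f_j(x) <= 0.
    for fj_oracle in constraint_oracles:
        fj, gj = fj_oracle(x)
        if fj > 0:
            return gj, f_best
    # Objective cut: x is feasible, so record its objective value and
    # cut with a subgradient g0 of f0, as in the unconstrained case.
    f0, g0 = f0_oracle(x)
    return g0, min(f_best, f0)
</syntaxhighlight>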
 
===Application to linear programming===
 
Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point.  It turns out that any linear programming problem can be reduced to a linear feasibility problem (e.g. minimize the zero function subject to some linear inequality and equality constraints).  One way to do this is by combining the primal and dual linear programs together into one program, and adding the additional (linear) constraint that the value of the primal solution is [[weak duality|no worse than]] the value of the dual solution.  Another way is to treat the objective of the linear program as an additional constraint, and use binary search to find the optimum value.{{cn|date=November 2012}}
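For example, writing the primal program as <math>\max\{c^T x : Ax \leq b,\ x \geq 0\}</math> with dual <math>\min\{b^T y : A^T y \geq c,\ y \geq 0\}</math>, the combined feasibility system is

:<math>Ax \leq b, \qquad A^T y \geq c, \qquad c^T x \geq b^T y, \qquad x \geq 0, \qquad y \geq 0.</math>

Since [[weak duality]] forces <math>c^T x \leq b^T y</math> for every feasible pair, any solution of this system satisfies <math>c^T x = b^T y</math>, so <math>x</math> is optimal for the primal and <math>y</math> for the dual.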
 
==Performance==
The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is [[numerically stable]]. Beyond such low-dimensional special cases, however, even on "small"-sized problems it suffers from numerical instability and poor performance in practice.
 
However, the ellipsoid method is an important theoretical technique in [[combinatorial optimization]]. In [[computational complexity theory]], the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows. In the 21st century, interior-point algorithms with similar properties have appeared.{{Citation needed|date=April 2013}}
 
==Notes==
<references/>
 
==Further reading==
* Dimitris Alevras and Manfred W. Padberg, ''Linear Optimization and Extensions: Problems and Solutions'', Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.)
* V. Chandru and M. R. Rao, Linear Programming, Chapter 31 in ''Algorithms and Theory of Computation Handbook'', edited by M. J. Atallah, CRC Press 1999, 31-1 to 31-37.
* V. Chandru and M. R. Rao, Integer Programming, Chapter 32 in ''Algorithms and Theory of Computation Handbook'', edited by M. J. Atallah, CRC Press 1999, 32-1 to 32-45.
* [[George B. Dantzig]] and Mukund N. Thapa. 1997. ''Linear programming 1: Introduction''. Springer-Verlag.
* [[George B. Dantzig]] and Mukund N. Thapa. 2003. ''Linear Programming 2: Theory and Extensions''. Springer-Verlag.
*M. Grötschel, [[László Lovász|L. Lovász]], [[Alexander Schrijver|A. Schrijver]]: ''Geometric Algorithms and Combinatorial Optimization'', Springer, 1988.
<!-- * {{cite journal | author = [[A. K. Lenstra|Lenstra, A. K.]]; [[H. W. Lenstra, Jr.|Lenstra, H. W., Jr.]]; [[Lovász|Lovász, L.]] | title = Factoring polynomials with rational coefficients | journal = [[Mathematische Annalen]] | volume = 261 | year = 1982 | issue = 4 | pages = 515–534 | id = {{hdl|1887/3810}} | doi = 10.1007/BF01457454 | mr = 0682664 }} -->
*[[László Lovász|L. Lovász]]: ''An Algorithmic Theory of Numbers, Graphs, and Convexity'', CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986.
* Katta G. Murty, ''Linear Programming'', Wiley, 1983.
* M. Padberg, ''Linear Optimization and Extensions'', Second Edition, Springer-Verlag, 1999.
* [[Christos H. Papadimitriou]] and Kenneth Steiglitz, ''Combinatorial Optimization: Algorithms and Complexity'', Corrected republication with a new preface, Dover.
* [[Alexander Schrijver]], ''Theory of Linear and Integer Programming''. John Wiley & Sons, 1998, ISBN 0-471-98232-6.
 
==External links==
* [http://www.stanford.edu/class/ee364b/ EE364b], a Stanford course homepage
 
{{optimization algorithms|convex}}
 
[[Category:Combinatorial optimization]]
[[Category:Operations research]]
[[Category:Convex optimization]]
[[Category:Linear programming]]
