Arnoldi iteration

In [[Numerical analysis|numerical]] [[linear algebra]], the '''Arnoldi iteration''' is an [[eigenvalue algorithm]] and an important example of [[iterative method]]s. Arnoldi finds the [[eigenvalue]]s of general (possibly non-[[Hermitian matrix|Hermitian]]) [[Matrix (mathematics)|matrices]]; an analogous method for Hermitian matrices is the [[Lanczos algorithm|Lanczos iteration]].  The Arnoldi iteration was invented by [[W. E. Arnoldi]] in 1951.
 
The term ''iterative method'', used to describe Arnoldi, can be misleading. All general eigenvalue algorithms must be iterative, since the eigenvalues of a matrix larger than 4 &times; 4 are the roots of a polynomial of degree greater than four, for which no finite closed-form procedure exists; but this is not what is meant when Arnoldi is called an iterative method. Rather, Arnoldi belongs to a class of linear algebra [[algorithm]]s (based on the idea of [[Krylov subspace]]s) that give a partial result after a relatively small number of iterations. This is in contrast to so-called ''direct methods'', which must run to completion before yielding any useful result.
 
The Arnoldi iteration is a typical large sparse matrix algorithm: it does not access the entries of the matrix directly, but instead applies the matrix to vectors and draws its conclusions from the resulting images. This is the motivation for building the [[Krylov subspace]].
 
==Krylov subspaces and the power iteration==
 
An intuitive method for finding the largest (in absolute value) eigenvalue of a given ''m'' &times; ''m'' matrix <math>A</math> is the [[power iteration]]: starting from an initial [[random]] [[vector space|vector]] ''b'', compute ''Ab'', ''A''<sup>2</sup>''b'', ''A''<sup>3</sup>''b'', …, normalizing and storing the result in ''b'' at every step. This sequence converges to the [[eigenvector]] corresponding to the eigenvalue of largest absolute value, <math>\lambda_{1}</math>. However, much potentially useful computation is wasted by using only the final result, <math>A^{n-1}b</math>. This suggests that, instead, we form the so-called ''Krylov matrix'':
:<math>K_{n} = \begin{bmatrix}b & Ab & A^{2}b & \cdots & A^{n-1}b \end{bmatrix}.</math>
 
The columns of this matrix are not in general [[orthogonal]], but in principle we can extract an orthogonal [[basis (linear algebra)|basis]] from them via a method such as [[Gram–Schmidt process|Gram–Schmidt orthogonalization]]. The resulting vectors form a basis of the ''[[Krylov subspace]]'' <math>\mathcal{K}_{n}</math>. We may expect the vectors of this basis to give good approximations of the eigenvectors corresponding to the <math>n</math> largest eigenvalues, for the same reason that <math>A^{n-1}b</math> approximates the dominant eigenvector.
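To make this concrete, the following is a minimal sketch in Python with NumPy (an illustration, not part of the standard presentation; the function names and arguments are arbitrary choices) of the power iteration and of extracting an orthonormal Krylov basis via a QR factorization, which is equivalent to Gram–Schmidt in exact arithmetic:

<syntaxhighlight lang="python">
import numpy as np

def power_iteration(A, b, steps=100):
    """Repeatedly apply A and renormalize; generically converges to the
    eigenvector of the eigenvalue largest in absolute value."""
    for _ in range(steps):
        b = A @ b
        b = b / np.linalg.norm(b)
    return b

def krylov_basis(A, b, n):
    """Form the Krylov matrix K_n = [b, Ab, ..., A^(n-1) b] and extract an
    orthonormal basis of the Krylov subspace via QR."""
    K = np.empty((b.shape[0], n))
    K[:, 0] = b / np.linalg.norm(b)
    for j in range(1, n):
        K[:, j] = A @ K[:, j - 1]
    Q, _ = np.linalg.qr(K)
    return Q
</syntaxhighlight>

In floating-point arithmetic, however, the columns of the Krylov matrix rapidly align with the dominant eigenvector and become nearly linearly dependent, so extracting the basis this way becomes ill-conditioned as <math>n</math> grows.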
 
==The Arnoldi iteration==
 
The process described above is intuitive.  Unfortunately, it is also [[numerical stability|unstable]]. This is where the Arnoldi iteration enters.
 
The Arnoldi iteration uses the stabilized [[Gram–Schmidt process]] to produce a sequence of orthonormal vectors, ''q''<sub>1</sub>, ''q''<sub>2</sub>, ''q''<sub>3</sub>, …, called the ''Arnoldi vectors'', such that for every ''n'', the vectors ''q''<sub>1</sub>, …, ''q''<sub>''n''</sub> span the Krylov subspace <math>\mathcal{K}_n</math>. Explicitly, the algorithm is as follows:
 
* Start with an arbitrary vector ''q''<sub>1</sub> with norm 1.
* Repeat for ''k'' = 2, 3, …
** <math> q_k \leftarrow Aq_{k-1} \,</math>
** '''for''' ''j'' from 1 to ''k'' &minus; 1
*** <math> h_{j,k-1} \leftarrow q_j^* q_k \, </math>
*** <math> q_k \leftarrow q_k - h_{j,k-1} q_j \, </math>
** <math> h_{k,k-1} \leftarrow \|q_k\| \, </math>
** <math> q_k \leftarrow \frac{q_k}{h_{k,k-1}} \, </math>
 
The ''j''-loop projects out the component of <math>q_k</math> in the directions of <math>q_1,\dots,q_{k-1}</math>.  This ensures the orthogonality of all the generated vectors.
 
The algorithm breaks down when ''q''<sub>''k''</sub> is the zero vector. This happens when the Krylov subspace spanned by ''q''<sub>1</sub>, …, ''q''<sub>''k''&minus;1</sub> is invariant under ''A'', which is the case precisely when the minimal polynomial of the starting vector with respect to ''A'' has degree ''k'' &minus; 1. In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and [[GMRES]], the algorithm has converged at this point.
 
Every step of the ''k''-loop takes one matrix-vector product and approximately 4''mk'' floating point operations.
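In code, the iteration can be sketched as follows (a minimal Python/NumPy illustration rather than a reference implementation; the function name <code>arnoldi</code> and the breakdown tolerance <code>eps</code> are arbitrary choices). It returns the basis matrix and the extended Hessenberg matrix discussed in the next section:

<syntaxhighlight lang="python">
import numpy as np

def arnoldi(A, b, n, eps=1e-12):
    """Run n steps of the Arnoldi iteration with modified Gram-Schmidt.

    Returns Q (m-by-(n+1), orthonormal columns spanning the Krylov
    subspace) and H ((n+1)-by-n, upper Hessenberg) with
    A @ Q[:, :n] == Q @ H up to rounding error.
    """
    m = b.shape[0]
    Q = np.zeros((m, n + 1), dtype=complex)
    H = np.zeros((n + 1, n), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        v = A @ Q[:, k]                    # candidate for the next Arnoldi vector
        for j in range(k + 1):             # project out q_1, ..., q_{k+1} in turn
            H[j, k] = np.vdot(Q[:, j], v)  # h_{j,k} = q_j^* v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if abs(H[k + 1, k]) < eps:         # breakdown: the subspace is A-invariant
            return Q[:, :k + 1], H[:k + 1, :k + 1]
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H
</syntaxhighlight>

Note that <math>A</math> enters only through the product <code>A @ Q[:, k]</code>, so any routine that applies the matrix to a vector can be substituted for it, in keeping with the large sparse matrix setting described above.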
 
==Properties of the Arnoldi iteration==
 
Let ''Q''<sub>''n''</sub> denote the ''m''-by-''n'' matrix formed by the first ''n'' Arnoldi vectors ''q''<sub>1</sub>, ''q''<sub>2</sub>, …, ''q''<sub>''n''</sub>, and let ''H''<sub>''n''</sub> be the (upper [[Hessenberg matrix|Hessenberg]]) matrix formed by the numbers ''h''<sub>''j'',''k''</sub> computed by the algorithm:
:<math> H_n = \begin{bmatrix}
  h_{1,1} & h_{1,2} & h_{1,3} & \cdots  & h_{1,n} \\
  h_{2,1} & h_{2,2} & h_{2,3} & \cdots  & h_{2,n} \\
  0      & h_{3,2} & h_{3,3} & \cdots  & h_{3,n} \\
  \vdots  & \ddots  & \ddots  & \ddots  & \vdots  \\
  0      & \cdots  & 0    & h_{n,n-1} & h_{n,n}
\end{bmatrix}. </math>
We then have
:<math> H_n = Q_n^* A Q_n. \, </math>
This yields an alternative interpretation of the Arnoldi iteration as a (partial) orthogonal reduction of ''A'' to Hessenberg form. The matrix ''H''<sub>''n''</sub> can be viewed as the representation in the basis formed by the Arnoldi vectors of the orthogonal projection of ''A'' onto the Krylov subspace <math>\mathcal{K}_n</math>.
 
The matrix ''H''<sub>''n''</sub> can be characterized by the following optimality condition. The [[characteristic polynomial]] of ''H''<sub>''n''</sub> minimizes ||''p''(''A'')''q''<sub>1</sub>||<sub>2</sub> among all [[monic polynomial]]s of degree ''n''. This optimality problem has a unique solution if and only if the Arnoldi iteration does not break down.
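This minimization can be observed numerically. In the illustrative snippet below (which reuses the <code>arnoldi</code> sketch above; <code>np.poly</code> returns the coefficients of the characteristic polynomial of a square matrix), ||''p''(''A'')''q''<sub>1</sub>||<sub>2</sub> is evaluated for the characteristic polynomial of ''H''<sub>''n''</sub> by a Horner recurrence and compared with the competing monic polynomial ''p''(''x'') = ''x''<sup>''n''</sup>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
n = 8

Q, H = arnoldi(A, b, n)   # sketch from the previous section
Hn = H[:n, :n]            # the square Hessenberg matrix H_n
q1 = Q[:, 0]

c = np.poly(Hn)           # monic characteristic polynomial, highest degree first
r = c[0] * q1             # Horner recurrence for r = p(A) q_1
for coeff in c[1:]:
    r = A @ r + coeff * q1

s = q1.copy()             # competitor p(x) = x^n, i.e. s = A^n q_1
for _ in range(n):
    s = A @ s

# The characteristic polynomial of H_n attains the minimum of ||p(A) q_1||:
print(np.linalg.norm(r), "<=", np.linalg.norm(s))
</syntaxhighlight>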
 
The relation between the ''Q'' matrices in subsequent iterations is given by
:<math> A Q_n = Q_{n+1} \tilde{H}_n </math>
where
:<math> \tilde{H}_n = \begin{bmatrix}
  h_{1,1} & h_{1,2} & h_{1,3} & \cdots  & h_{1,n} \\
  h_{2,1} & h_{2,2} & h_{2,3} & \cdots  & h_{2,n} \\
  0      & h_{3,2} & h_{3,3} & \cdots  & h_{3,n} \\
  \vdots  & \ddots  & \ddots  & \ddots  & \vdots  \\
  \vdots  &        & 0      & h_{n,n-1} & h_{n,n} \\
  0      & \cdots  & \cdots  & 0      & h_{n+1,n}
\end{bmatrix} </math>
is an (''n''+1)-by-''n'' matrix formed by adding an extra row to ''H''<sub>''n''</sub>.
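Both identities are easy to check numerically; the following illustrative snippet (again reusing the <code>arnoldi</code> sketch, with arbitrary sizes) confirms them up to rounding error:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
n = 10

Q, H = arnoldi(A, b, n)   # Q is m-by-(n+1); H is the (n+1)-by-n matrix H-tilde
Qn = Q[:, :n]
Hn = H[:n, :n]

print(np.linalg.norm(A @ Qn - Q @ H))             # A Q_n = Q_{n+1} H-tilde_n
print(np.linalg.norm(Hn - Qn.conj().T @ A @ Qn))  # H_n = Q_n^* A Q_n
</syntaxhighlight>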
 
==Finding eigenvalues with the Arnoldi iteration==
 
The idea of the Arnoldi iteration as an [[eigenvalue algorithm]] is to compute the eigenvalues of the orthogonal projection of ''A'' onto the Krylov subspace. This projection is represented by ''H''<sub>''n''</sub>. The eigenvalues of ''H''<sub>''n''</sub> are called the ''Ritz eigenvalues''. Since ''H''<sub>''n''</sub> is a Hessenberg matrix of modest size, its eigenvalues can be computed efficiently, for instance with the [[QR algorithm]].
 
It is often observed in practice that some of the Ritz eigenvalues converge to eigenvalues of ''A''. Since ''H''<sub>''n''</sub> is ''n''-by-''n'', it has at most ''n'' eigenvalues, so not all eigenvalues of ''A'' can be approximated. Typically, the Ritz eigenvalues converge to the extreme eigenvalues of ''A''. This can be related to the characterization of ''H''<sub>''n''</sub> as the matrix whose characteristic polynomial minimizes ||''p''(''A'')''q''<sub>1</sub>|| in the following way: a good way to make ||''p''(''A'')''q''<sub>1</sub>|| small is to choose the polynomial ''p'' such that ''p''(''x'') is small whenever ''x'' is an eigenvalue of ''A''. Hence, the zeros of ''p'' (and thus the Ritz eigenvalues) will be close to the eigenvalues of ''A''.
 
However, the details are not fully understood yet. This is in contrast to the case where ''A'' is [[symmetric matrix|symmetric]]. In that situation, the Arnoldi iteration becomes the [[Lanczos algorithm|Lanczos iteration]], for which the theory is more complete.
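As a concrete illustration (again reusing the <code>arnoldi</code> sketch; the symmetric test matrix and the subspace dimension are arbitrary choices), the extreme Ritz values of a moderately sized Krylov subspace already track the extreme eigenvalues of ''A'':

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2             # symmetric, so the eigenvalues are real
b = rng.standard_normal(200)
n = 30

Q, H = arnoldi(A, b, n)
ritz = np.sort(np.linalg.eigvals(H[:n, :n]).real)  # Ritz eigenvalues of H_n
exact = np.sort(np.linalg.eigvalsh(A))

print(ritz[-3:])    # the largest Ritz values ...
print(exact[-3:])   # ... approximate the largest eigenvalues of A
</syntaxhighlight>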
 
==Implicitly restarted Arnoldi method (IRAM)==
Due to practical storage considerations, common implementations of Arnoldi methods typically restart after some number of iterations. One major innovation in restarting was due to Lehoucq and Sorensen, who proposed the Implicitly Restarted Arnoldi Method.<ref>{{cite web |author=R. B. Lehoucq and D. C. Sorensen |year=1996 |title=Deflation Techniques for an Implicitly Restarted Arnoldi Iteration |publisher=SIAM |doi=10.1137/S0895479895281484 }}</ref> They also implemented the algorithm in a freely available software package called [[ARPACK]].<ref>{{cite web |author=R. B. Lehoucq, D. C. Sorensen, and C. Yang |year=1998 |title=ARPACK Users Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods |publisher=SIAM |url=http://www.ec-securehost.com/SIAM/SE06.html }}</ref> This has spurred a number of other variations, including the Implicitly Restarted Lanczos method.<ref>{{cite web |author=D. Calvetti, L. Reichel, and D. C. Sorensen |year=1994 |title=An Implicitly Restarted Lanczos Method for Large Symmetric Eigenvalue Problems |publisher=ETNA |url=http://etna.mcs.kent.edu/vol.2.1994/pp1-21.dir/pp1-21.ps}}</ref><ref>{{cite web |author=E. Kokiopoulou, C. Bekas, and E. Gallopoulos |year=2003 |title=An Implicitly Restarted Lanczos Bidiagonalization Method for Computing Smallest Singular Triplets |publisher=SIAM |url=http://www.siam.org/meetings/la03/proceedings/LA03proc.pdf }}</ref><ref>{{cite web |author=Zhongxiao Jia |year=2002 |title=The refined harmonic Arnoldi method and an implicitly restarted refined algorithm for computing interior eigenpairs of large matrices |publisher=Appl. Numer. Math. |doi=10.1016/S0168-9274(01)00132-5 }}</ref> It has also influenced how other restarted methods are analyzed.<ref>{{cite web |author=Andreas Stathopoulos, Yousef Saad, and Kesheng Wu |year=1998 |title=Dynamic Thick Restarting of the Davidson and the Implicitly Restarted Arnoldi Methods |publisher=SIAM |doi=10.1137/S1064827596304162 }}</ref>
Theoretical results have shown that convergence improves with an increase in the Krylov subspace dimension ''n''. However, an a priori value of ''n'' that would lead to optimal convergence is not known. Recently, a dynamic switching strategy<ref>{{cite web |author=K. Dookhitram, R. Boojhawon, and M. Bhuruth |year=2009 |title=A New Method For Accelerating Arnoldi Algorithms For Large Scale Eigenproblems |publisher=Math. Comput. Simulat. |doi=10.1016/j.matcom.2009.07.009}}</ref> has been proposed which varies the dimension ''n'' before each restart and thus leads to an accelerated rate of convergence.
 
==See also==
 
The [[generalized minimal residual method]] (GMRES) is a method for solving ''Ax'' = ''b'' based on Arnoldi iteration.
 
==References==
* W. E. Arnoldi, "The principle of minimized iterations in the solution of the matrix eigenvalue problem," ''Quarterly of Applied Mathematics'', volume 9, pages 17–29, 1951.
* [[Yousef Saad]], ''Numerical Methods for Large Eigenvalue Problems'', Manchester University Press, 1992. ISBN 0-7190-3386-1.
* Lloyd N. Trefethen and David Bau, III, ''Numerical Linear Algebra'', Society for Industrial and Applied Mathematics, 1997. ISBN 0-89871-361-7.
* Jaschke, Leonhard: ''Preconditioned Arnoldi Methods for Systems of Nonlinear Equations''. (2004). ISBN 2-84976-001-3
* Implementation: Matlab comes with ARPACK built-in. Both stored and implicit matrices can be analyzed through the [http://www.mathworks.com/help/techdoc/ref/eigs.html eigs()] function.
 
<references/>
 
{{Numerical linear algebra}}
 
{{DEFAULTSORT:Arnoldi Iteration}}
[[Category:Numerical linear algebra]]
