Matrix splitting

In the [[mathematics|mathematical]] discipline of [[numerical linear algebra]], a  '''matrix splitting''' is an expression which represents a given [[matrix (mathematics)|matrix]] as a sum or difference of matrices.  Many [[iterative method]]s (e.g., for systems of  [[differential equation]]s) depend upon the direct solution of matrix equations involving matrices more general than [[tridiagonal matrix|tridiagonal matrices]].  These matrix equations can often be solved directly and efficiently when written as a matrix splitting.  The technique was devised by [[Richard S. Varga]] in 1960.<ref>{{harvtxt|Varga|1960}}</ref>
 
==Regular splittings==
We seek to solve the [[Matrix (mathematics)#Linear equations|matrix equation]]
 
: <math> \bold A \bold x = \bold k,  \quad (1) </math>
 
where '''A''' is a given ''n'' × ''n'' [[invertible matrix|non-singular]] matrix, and '''k''' is a given [[column vector]] with ''n'' components.  We split the matrix '''A''' into
 
:<math> \bold A = \bold B - \bold C, \quad (2) </math>
 
where '''B''' and '''C''' are ''n'' × ''n'' matrices.  For an arbitrary ''n'' × ''n'' matrix '''M''', we write '''M''' &ge; '''0''' if '''M''' has only nonnegative entries, and '''M''' &gt; '''0''' if '''M''' has only positive entries.  Similarly, we write '''M'''<sub>1</sub> &ge; '''M'''<sub>2</sub> if the matrix '''M'''<sub>1</sub> &minus; '''M'''<sub>2</sub> has only nonnegative entries.
 
Definition:  '''A''' = '''B''' &minus; '''C''' is a '''regular splitting of A''' if and only if '''B'''<sup>&minus;1</sup> &ge; '''0''' and '''C''' &ge; '''0'''.
 
We assume that matrix equations of the form
 
: <math> \bold B \bold x = \bold g,  \quad (3) </math>
 
where '''g''' is a given column vector, can be solved directly for the vector '''x'''. If (2) represents a regular splitting of '''A''', then the iterative method
 
: <math> \bold B \bold x^{(m+1)} = \bold C \bold x^{(m)} + \bold k, \quad m = 0, 1, 2, \ldots ,  \quad (4) </math>
 
where '''x'''<sup>(0)</sup> is an arbitrary vector, can be carried out.  Equivalently, we write (4) in the form
 
: <math> \bold x^{(m+1)} = \bold B^{-1} \bold C \bold x^{(m)} + \bold B^{-1} \bold k, \quad m = 0, 1, 2, \ldots  \quad (5) </math>
 
The matrix '''D''' = '''B'''<sup>&minus;1</sup>'''C''' has nonnegative entries if (2) represents a regular splitting of '''A'''.<ref>{{harvtxt|Varga|1960|pp=121–122}}</ref>
 
It can be shown that if '''A'''<sup>&minus;1</sup> &gt; '''0''', then <math>\rho (\bold D)</math> < 1, where <math>\rho (\bold D)</math> represents the [[spectral radius]] of '''D''', and thus '''D''' is a [[convergent matrix]].  As a consequence, the iterative method (5) is necessarily [[Jacobi method#Convergence|convergent]].<ref>{{harvtxt|Varga|1960|pp=122–123}}</ref><ref>{{harvtxt|Varga|1962|p=89}}</ref>
 
If, in addition, the splitting (2) is chosen so that the matrix '''B''' is a [[diagonal matrix]] (with the diagonal entries all non-zero, since '''B''' must be [[Invertible matrix|invertible]]), then '''B''' can be inverted in linear time (see [[Time complexity]]).
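
As a concrete illustration (a minimal sketch, not part of Varga's treatment; the helper name <code>splitting_iteration</code> is chosen here only for exposition), the iteration (4) can be written in a few lines of Python with NumPy:

<syntaxhighlight lang="python">
import numpy as np

def splitting_iteration(B, C, k, x0, steps):
    """Carry out B x^(m+1) = C x^(m) + k, i.e. iteration (4)."""
    x = x0
    for _ in range(steps):
        # Each step solves a system of the form (3); in practice B is
        # chosen (diagonal, triangular, ...) so that this solve is cheap.
        x = np.linalg.solve(B, C @ x + k)
    return x
</syntaxhighlight>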
 
==Matrix iterative methods==
Many iterative methods can be described in terms of a matrix splitting.  If the diagonal entries of the matrix '''A''' are all nonzero, and we express the matrix '''A''' as the matrix sum
 
:<math> \bold A = \bold D - \bold U - \bold L, \quad (6) </math>
 
where '''D''' is the diagonal part of '''A''', and '''U''' and '''L''' are strictly upper and strictly lower [[triangular matrix|triangular]] ''n'' × ''n'' matrices, respectively (so that &minus;'''U''' and &minus;'''L''' consist of the entries of '''A''' above and below the diagonal), then we have the following.
 
The [[Jacobi method]] can be represented in matrix form as a splitting
 
:<math> \bold x^{(m+1)} = \bold D^{-1}(\bold U + \bold L)\bold x^{(m)} + \bold D^{-1}\bold k. \quad (7) </math><ref>{{harvtxt|Burden|Faires|1993|p=408}}</ref><ref>{{harvtxt|Varga|1962|p=88}}</ref>
 
The [[Gauss–Seidel method]] can be represented in matrix form as a splitting
:<math> \bold x^{(m+1)} = (\bold D - \bold L)^{-1}\bold U \bold x^{(m)} + (\bold D - \bold L)^{-1}\bold k. \quad (8) </math><ref>{{harvtxt|Burden|Faires|1993|p=411}}</ref><ref>{{harvtxt|Varga|1962|p=88}}</ref>
 
The method of [[successive over-relaxation]] can be represented in matrix form as a splitting
:<math> \bold x^{(m+1)} = (\bold D - \omega \bold L)^{-1}[(1 - \omega) \bold D + \omega \bold U] \bold x^{(m)} + \omega (\bold D - \omega \bold L)^{-1}\bold k. \quad (9) </math><ref>{{harvtxt|Burden|Faires|1993|p=416}}</ref><ref>{{harvtxt|Varga|1962|p=88}}</ref>
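
The correspondence between the splitting (6) and the iteration matrices in (7)&ndash;(9) can be made concrete with a short Python/NumPy sketch (the function name below is illustrative only, and <code>numpy.linalg.solve</code> stands in for whatever direct solver is appropriate):

<syntaxhighlight lang="python">
import numpy as np

def jacobi_gs_sor_matrices(A, k, method="jacobi", omega=1.0):
    """Return (H, c) with x^(m+1) = H x^(m) + c for methods (7)-(9)."""
    D = np.diag(np.diag(A))   # diagonal part of A
    U = -np.triu(A, 1)        # negated strictly upper part, as in (6)
    L = -np.tril(A, -1)       # negated strictly lower part, as in (6)
    if method == "jacobi":            # equation (7)
        H = np.linalg.solve(D, U + L)
        c = np.linalg.solve(D, k)
    elif method == "gauss-seidel":    # equation (8)
        H = np.linalg.solve(D - L, U)
        c = np.linalg.solve(D - L, k)
    else:                             # SOR, equation (9)
        H = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
        c = omega * np.linalg.solve(D - omega * L, k)
    return H, c
</syntaxhighlight>

For the matrix '''A''' of the example below, the <code>"jacobi"</code> branch returns exactly the pair '''B'''<sup>&minus;1</sup>'''C''' and '''B'''<sup>&minus;1</sup>'''k''' computed in (11).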
 
==Example==
 
===Regular splitting===
In equation (1), let
:<math>\begin{align}
& \mathbf{A} = \begin{pmatrix}
6 & -2 & -3 \\
-1 & 4 & -2 \\
-3 & -1 & 5
\end{pmatrix}, \quad \mathbf{k} = \begin{pmatrix}
5 \\
-12 \\
10
\end{pmatrix}. \quad (10)
\end{align}</math>
Let us apply the splitting (7), which is used in the Jacobi method: we split '''A''' in such a way that '''B''' consists of ''all'' of the diagonal elements of '''A''', and '''C''' consists of ''all'' of the off-diagonal elements of '''A''', negated.  (Of course this is not the only useful way to split a matrix into two matrices.)  We have
:<math>\begin{align}
& \mathbf{B} = \begin{pmatrix}
6 & 0 & 0 \\
0 & 4 & 0 \\
0 & 0 & 5
\end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix}
0 & 2 & 3 \\
1 & 0 & 2 \\
3 & 1 & 0
\end{pmatrix}, \quad (11)
\end{align}</math>
:<math>\begin{align}
& \mathbf{A^{-1}} = \frac{1}{47} \begin{pmatrix}
18 & 13 & 16 \\
11 & 21 & 15 \\
13 & 12 & 22
\end{pmatrix}, \quad \mathbf{B^{-1}} = \begin{pmatrix}
\frac{1}{6} & 0 & 0 \\[4pt]
0 & \frac{1}{4} & 0 \\[4pt]
0 & 0 & \frac{1}{5}
\end{pmatrix},
\end{align}</math>
:<math>\begin{align}
\mathbf{D} = \mathbf{B^{-1}C} = \begin{pmatrix}
0 & \frac{1}{3} & \frac{1}{2} \\[4pt]
\frac{1}{4} & 0 & \frac{1}{2} \\[4pt]
\frac{3}{5} & \frac{1}{5} & 0
\end{pmatrix}, \quad \mathbf{B^{-1}k} = \begin{pmatrix}
\frac{5}{6} \\[4pt]
-3 \\[4pt]
2
\end{pmatrix}.
\end{align}</math>
Since '''B'''<sup>&minus;1</sup> &ge; '''0''' and '''C''' &ge; '''0''', the splitting (11) is a regular splitting.  Since '''A'''<sup>&minus;1</sup> &gt; '''0''', the spectral radius <math>\rho (\bold D)</math> < 1. (The approximate [[eigenvalues]] of '''D''' are <math>\lambda_i</math> &asymp; &minus;0.4599820, &minus;0.3397859, 0.7997679.)  Hence, the matrix '''D''' is convergent and the method (5) necessarily converges for the problem (10).  Note that the diagonal elements of '''A''' are all greater than zero, the off-diagonal elements of '''A''' are all less than zero, and '''A''' is [[strictly diagonally dominant]].<ref>{{harvtxt|Burden|Faires|1993|p=371}}</ref>
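
These claims can be checked numerically (a sketch assuming NumPy's eigenvalue routine; not part of the original presentation):

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[ 6., -2., -3.],
              [-1.,  4., -2.],
              [-3., -1.,  5.]])
B = np.diag(np.diag(A))    # diagonal part of A, as in (11)
C = B - A                  # negated off-diagonal part of A, as in (11)
D = np.linalg.solve(B, C)  # D = B^(-1) C, as above

eigenvalues = np.linalg.eigvals(D)
print(np.sort(eigenvalues))        # approx. -0.45998, -0.33979, 0.79977
print(np.max(np.abs(eigenvalues))) # spectral radius approx. 0.79977 < 1
</syntaxhighlight>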
 
The method (5) applied to the problem (10) then takes the form
: <math> \bold x^{(m+1)} =
\begin{pmatrix}
0 & \frac{1}{3} & \frac{1}{2} \\[4pt]
\frac{1}{4} & 0 & \frac{1}{2} \\[4pt]
\frac{3}{5} & \frac{1}{5} & 0
\end{pmatrix}
\bold x^{(m)} +
\begin{pmatrix}
\frac{5}{6} \\[4pt]
-3 \\[4pt]
2
\end{pmatrix},
\quad m = 0, 1, 2, \ldots  \quad (12) </math>
 
The exact solution of equation (10), to which the iteration (12) converges, is
:<math>\begin{align}
& \mathbf{x} = \begin{pmatrix}
2 \\
-1 \\
3
\end{pmatrix}. \quad (13)
\end{align}</math>
The first few iterates for equation (12) are listed in the table below, beginning with '''x'''<sup>(0)</sup> = (0.0, 0.0, 0.0)<sup>T</sup>.  The table shows that the method is converging to the solution (13), albeit rather slowly.
 
{| class="wikitable" border="1"
|-
! <math>x^{(m)}_1</math>
! <math>x^{(m)}_2</math>
! <math>x^{(m)}_3</math>
|-
| <math>0.0</math>
| <math>0.0</math>
| <math>0.0</math>
|-
| <math>0.83333</math>
| <math>-3.0000</math>
| <math>2.0000</math>
|-
| <math>0.83333</math>
| <math>-1.7917</math>
| <math>1.9000</math>
|-
| <math>1.1861</math>
| <math>-1.8417</math>
| <math>2.1417</math>
|-
| <math>1.2903</math>
| <math>-1.6326</math>
| <math>2.3433</math>
|-
| <math>1.4608</math>
| <math>-1.5058</math>
| <math>2.4477</math>
|-
| <math>1.5553</math>
| <math>-1.4110</math>
| <math>2.5753</math>
|-
| <math>1.6507</math>
| <math>-1.3235</math>
| <math>2.6510</math>
|-
| <math>1.7177</math>
| <math>-1.2618</math>
| <math>2.7257</math>
|-
| <math>1.7756</math>
| <math>-1.2077</math>
| <math>2.7783</math>
|-
| <math>1.8199</math>
| <math>-1.1670</math>
| <math>2.8238</math>
|}
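
The iterates in the table can be reproduced with a few lines of Python (a sketch, using the matrices already computed in (11)):

<syntaxhighlight lang="python">
import numpy as np

D = np.array([[0, 1/3, 1/2],
              [1/4, 0, 1/2],
              [3/5, 1/5, 0]])   # D = B^(-1) C from (11)
c = np.array([5/6, -3, 2])      # B^(-1) k from (11)

x = np.zeros(3)
for m in range(10):
    x = D @ x + c               # iteration (12)
    print(x)                    # rows of the table above
</syntaxhighlight>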
 
===Jacobi method===
As noted earlier, the Jacobi method (7) coincides with the specific regular splitting (11) demonstrated above.
 
===Gauss–Seidel method===
Since the diagonal entries of the matrix '''A''' in problem (10) are all nonzero, we can express the matrix '''A''' as the splitting (6), where
 
:<math>\begin{align}
& \mathbf{D} = \begin{pmatrix}
6 & 0 & 0 \\
0 & 4 & 0 \\
0 & 0 & 5
\end{pmatrix}, \quad \mathbf{U} = \begin{pmatrix}
0 & 2 & 3 \\
0 & 0 & 2 \\
0 & 0 & 0
\end{pmatrix}, \quad \mathbf{L} = \begin{pmatrix}
0 & 0 & 0 \\
1 & 0 & 0 \\
3 & 1 & 0
\end{pmatrix}. \quad (14)
\end{align}</math>
 
We then have
 
:<math>\begin{align}
& \mathbf{(D-L)^{-1}} = \frac{1}{120} \begin{pmatrix}
20 & 0 & 0 \\
5 & 30 & 0 \\
13 & 6 & 24
\end{pmatrix},
\end{align}</math>
 
:<math>\begin{align}
& \mathbf{(D-L)^{-1}U} = \frac{1}{120} \begin{pmatrix}
0 & 40 & 60 \\
0 & 10 & 75 \\
0 & 26 & 51
\end{pmatrix}, \quad \mathbf{(D-L)^{-1}k} = \frac{1}{120} \begin{pmatrix}
100 \\
-335 \\
233
\end{pmatrix}.
\end{align}</math>
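
These matrices can be verified numerically with a short NumPy sketch (multiplying by 120 to match the scaled form shown above):

<syntaxhighlight lang="python">
import numpy as np

D = np.diag([6., 4., 5.])
L = np.array([[0., 0., 0.], [1., 0., 0.], [3., 1., 0.]])
U = np.array([[0., 2., 3.], [0., 0., 2.], [0., 0., 0.]])
k = np.array([5., -12., 10.])

print(120 * np.linalg.inv(D - L))       # [[20,0,0],[5,30,0],[13,6,24]]
print(120 * np.linalg.solve(D - L, U))  # [[0,40,60],[0,10,75],[0,26,51]]
print(120 * np.linalg.solve(D - L, k))  # [100, -335, 233]
</syntaxhighlight>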
 
The Gauss–Seidel method (8) applied to the problem (10) takes the form
 
: <math> \bold x^{(m+1)} =
\frac{1}{120} \begin{pmatrix}
0 & 40 & 60 \\
0 & 10 & 75 \\
0 & 26 & 51
\end{pmatrix}
\bold x^{(m)} +
\frac{1}{120} \begin{pmatrix}
100 \\
-335 \\
233
\end{pmatrix},
\quad m = 0, 1, 2, \ldots  \quad (15) </math>
 
The first few iterates for equation (15) are listed in the table below, beginning with '''x'''<sup>(0)</sup> = (0.0, 0.0, 0.0)<sup>T</sup>.  The table shows that the method is converging to the solution (13), somewhat faster than the Jacobi method described above.
 
{| class="wikitable" border="1"
|-
! <math>x^{(m)}_1</math>
! <math>x^{(m)}_2</math>
! <math>x^{(m)}_3</math>
|-
| <math>0.0</math>
| <math>0.0</math>
| <math>0.0</math>
|-
| <math>0.8333</math>
| <math>-2.7917</math>
| <math>1.9417</math>
|-
| <math>0.8736</math>
| <math>-1.8107</math>
| <math>2.1620</math>
|-
| <math>1.3108</math>
| <math>-1.5913</math>
| <math>2.4682</math>
|-
| <math>1.5370</math>
| <math>-1.3817</math>
| <math>2.6459</math>
|-
| <math>1.6957</math>
| <math>-1.2531</math>
| <math>2.7668</math>
|-
| <math>1.7990</math>
| <math>-1.1668</math>
| <math>2.8461</math>
|-
| <math>1.8675</math>
| <math>-1.1101</math>
| <math>2.8985</math>
|-
| <math>1.9126</math>
| <math>-1.0726</math>
| <math>2.9330</math>
|-
| <math>1.9423</math>
| <math>-1.0479</math>
| <math>2.9558</math>
|-
| <math>1.9619</math>
| <math>-1.0316</math>
| <math>2.9708</math>
|}
 
===Successive over-relaxation method===
Let ''ω'' = 1.1.  Using the splitting (14) of the matrix '''A''' in problem (10) for the successive over-relaxation method, we have
 
:<math>\begin{align}
& \mathbf{(D-\omega L)^{-1}} = \frac{1}{12} \begin{pmatrix}
2 & 0 & 0 \\
0.55 & 3 & 0 \\
1.441 & 0.66 & 2.4
\end{pmatrix},
\end{align}</math>
 
:<math>\begin{align}
& \mathbf{(D-\omega L)^{-1}[(1-\omega )D+\omega U]} = \frac{1}{12} \begin{pmatrix}
-1.2 & 4.4 & 6.6 \\
-0.33 & 0.01 & 8.415 \\
-0.8646 & 2.9062 & 5.0073
\end{pmatrix},
\end{align}</math>
 
:<math>\begin{align}
& \mathbf{\omega (D-\omega L)^{-1}k} = \frac{1}{12} \begin{pmatrix}
11 \\
-36.575 \\
25.6135
\end{pmatrix}.
\end{align}</math>
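
As with the Gauss–Seidel matrices, these can be checked numerically (scaling by 12 to match the form shown above):

<syntaxhighlight lang="python">
import numpy as np

omega = 1.1
D = np.diag([6., 4., 5.])
L = np.array([[0., 0., 0.], [1., 0., 0.], [3., 1., 0.]])
U = np.array([[0., 2., 3.], [0., 0., 2.], [0., 0., 0.]])
k = np.array([5., -12., 10.])

M = D - omega * L
print(12 * np.linalg.inv(M))  # [[2,0,0],[0.55,3,0],[1.441,0.66,2.4]]
# The iteration matrix of (16), times 12:
print(12 * np.linalg.solve(M, (1 - omega) * D + omega * U))
print(12 * omega * np.linalg.solve(M, k))  # [11, -36.575, 25.6135]
</syntaxhighlight>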
 
The successive over-relaxation method (9) applied to the problem (10) takes the form
 
: <math> \bold x^{(m+1)} =
\frac{1}{12} \begin{pmatrix}
-1.2 & 4.4 & 6.6 \\
-0.33 & 0.01 & 8.415 \\
-0.8646 & 2.9062 & 5.0073
\end{pmatrix}
\bold x^{(m)} +
\frac{1}{12} \begin{pmatrix}
11 \\
-36.575 \\
25.6135
\end{pmatrix},
\quad m = 0, 1, 2, \ldots  \quad (16) </math>
 
The first few iterates for equation (16) are listed in the table below, beginning with '''x'''<sup>(0)</sup> = (0.0, 0.0, 0.0)<sup>T</sup>.  The table shows that the method is converging to the solution (13), slightly faster than the Gauss–Seidel method described above.
 
{| class="wikitable" border="1"
|-
! <math>x^{(m)}_1</math>
! <math>x^{(m)}_2</math>
! <math>x^{(m)}_3</math>
|-
| <math>0.0</math>
| <math>0.0</math>
| <math>0.0</math>
|-
| <math>0.9167</math>
| <math>-3.0479</math>
| <math>2.1345</math>
|-
| <math>0.8814</math>
| <math>-1.5788</math>
| <math>2.2209</math>
|-
| <math>1.4711</math>
| <math>-1.5161</math>
| <math>2.6153</math>
|-
| <math>1.6521</math>
| <math>-1.2557</math>
| <math>2.7526</math>
|-
| <math>1.8050</math>
| <math>-1.1641</math>
| <math>2.8599</math>
|-
| <math>1.8823</math>
| <math>-1.0930</math>
| <math>2.9158</math>
|-
| <math>1.9314</math>
| <math>-1.0559</math>
| <math>2.9508</math>
|-
| <math>1.9593</math>
| <math>-1.0327</math>
| <math>2.9709</math>
|-
| <math>1.9761</math>
| <math>-1.0185</math>
| <math>2.9829</math>
|-
| <math>1.9862</math>
| <math>-1.0113</math>
| <math>2.9901</math>
|}
 
==See also==
*[[Matrix decomposition]]
*[[M-matrix]]
*[[Stieltjes matrix]]
 
==Notes==
{{reflist}}
 
==References==
* {{citation | first1 = Richard L. | last1 = Burden | first2 = J. Douglas | last2 = Faires | year = 1993 | isbn = 0-534-93219-3 | title = Numerical Analysis | edition = 5th | publisher = [[Prindle, Weber and Schmidt]] | location = Boston }}.
 
* {{Cite book | first1 = Richard S. | last1 = Varga | chapter = Factorization and Normalized Iterative Methods | title = Boundary Problems in Differential Equations | editor1-last = Langer | editor1-first = Rudolph E. | publisher = [[University of Wisconsin Press]] | location = Madison | pages = 121&ndash;142 | year = 1960 | lccn = 60-60003}}
 
* {{citation | first1 = Richard S. | last1 = Varga | title = Matrix Iterative Analysis | publisher = [[Prentice-Hall]] | location = New Jersey | year = 1962 | lccn = 62-21277}}.
 
{{Numerical linear algebra}}
 
[[Category:Matrices]]
[[Category:Numerical linear algebra]]
[[Category:Relaxation (iterative methods)]]
