In [[numerical linear algebra]], the '''Gauss–Seidel method''', also known as the '''Liebmann method''' or the '''method of successive displacement''', is an [[iterative method]] used to solve a [[linear system of equations]]. It is named after the [[Germany|German]] [[mathematician]]s [[Carl Friedrich Gauss]] and [[Philipp Ludwig von Seidel]], and is similar to the [[Jacobi method]]. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either [[diagonally dominant matrix|diagonally dominant]], or [[Symmetric matrix|symmetric]] and [[Positive-definite matrix|positive definite]]. Gauss mentioned the method only in a private letter to his student [[Christian Ludwig Gerling|Gerling]] in 1823;<ref>{{harvnb|Gauss|1903|p=279}}; [http://gdz.sub.uni-goettingen.de/en/dms/loader/img/?PPN=PPN23601515X&DMDID=DMDLOG_0112&LOGID=LOG_0112&PHYSID=PHYS_0286 direct link].</ref> Seidel did not publish it until 1874.

== Description ==

The Gauss–Seidel method is an [[iterative method|iterative technique]] for solving a square system of ''n'' linear equations with unknown '''x''':

:<math>A\mathbf x = \mathbf b.</math>

It is defined by the iteration

:<math> L_* \mathbf{x}^{(k+1)} = \mathbf{b} - U \mathbf{x}^{(k)}, </math>

where the matrix ''A'' is decomposed into a [[triangular matrix|lower triangular]] component <math>L_*</math> and a [[triangular matrix#Strictly triangular matrix|strictly upper triangular]] component <math>U</math>, so that <math> A = L_* + U </math>.<ref>{{harvnb|Golub|Van Loan|1996|p=511}}.</ref>

In more detail, write out ''A'', '''x''' and '''b''' in their components:

:<math>A=\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_{1} \\ x_2 \\ \vdots \\ x_n \end{bmatrix} , \qquad \mathbf{b} = \begin{bmatrix} b_{1} \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.</math>

Then the decomposition of ''A'' into its lower triangular component and its strictly upper triangular component is given by:

:<math>A=L_*+U \qquad \text{where} \qquad L_* = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \quad U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\0 & 0 & \cdots & 0 \end{bmatrix}. </math>

The system of linear equations may be rewritten as:

:<math>L_* \mathbf{x} = \mathbf{b} - U \mathbf{x}. </math>

The Gauss–Seidel method now solves the left-hand side of this expression for '''x''', using the previous value of '''x''' on the right-hand side. Analytically, this may be written as:

:<math> \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}). </math>

However, by taking advantage of the triangular form of <math>L_*</math>, the elements of '''x'''<sup>(''k''+1)</sup> can be computed sequentially using [[forward substitution]]:

:<math> x^{(k+1)}_i = \frac{1}{a_{ii}} \left(b_i - \sum_{j<i}a_{ij}x^{(k+1)}_j - \sum_{j>i}a_{ij}x^{(k)}_j \right),\quad i=1,2,\ldots,n. </math><ref>{{harvnb|Golub|Van Loan|1996|loc=eqn (10.1.3)}}.</ref>

The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small [[Residual (numerical analysis)|residual]].

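The matrix form above translates almost directly into code. A minimal sketch in Python with NumPy and SciPy (the function name and the fixed iteration count are illustrative choices, not part of the method itself); <code>solve_triangular</code> performs exactly the forward substitution described above:

<source lang="python">
import numpy as np
from scipy.linalg import solve_triangular

def gauss_seidel_matrix_form(A, b, x0, iterations=25):
    """Iterate x_{k+1} = L*^{-1} (b - U x_k) for the splitting A = L* + U."""
    L = np.tril(A)   # lower triangular part of A, including the diagonal
    U = A - L        # strictly upper triangular remainder
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        # Forward substitution instead of explicitly inverting L*.
        x = solve_triangular(L, b - U @ x, lower=True)
    return x
</source>

The examples below walk through this same iteration by hand.
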
=== Discussion ===

The element-wise formula for the Gauss–Seidel method is extremely similar to that of the [[Jacobi method]].

The computation of ''x''<sub>''i''</sub><sup>(''k''+1)</sup> uses only the elements of '''x'''<sup>(''k''+1)</sup> that have already been computed, and only the elements of '''x'''<sup>(''k'')</sup> that have not yet been advanced to iteration ''k''+1. This means that, unlike in the Jacobi method, only one storage vector is required, as elements can be overwritten as they are computed, which can be advantageous for very large problems.

However, unlike in the Jacobi method, the computations for each element cannot be done in [[Parallel algorithm|parallel]]. Furthermore, the values at each iteration depend on the order of the original equations.

Gauss–Seidel is the same as [[Successive over-relaxation|SOR (successive over-relaxation)]] with <math>\omega=1</math>.

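To make the relationship concrete, the element-wise SOR update (in its standard form, where the relaxation factor <math>\omega</math> blends the old value with the Gauss–Seidel candidate) is

:<math> x^{(k+1)}_i = (1-\omega)\, x^{(k)}_i + \frac{\omega}{a_{ii}} \left(b_i - \sum_{j<i}a_{ij}x^{(k+1)}_j - \sum_{j>i}a_{ij}x^{(k)}_j \right), </math>

which for <math>\omega = 1</math> reduces exactly to the forward-substitution formula above.
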
==Convergence==

The convergence properties of the Gauss–Seidel method depend on the matrix ''A''. Namely, the procedure is known to converge if either:
* ''A'' is symmetric [[positive-definite matrix|positive-definite]],<ref>{{harvnb|Golub|Van Loan|1996|loc=Thm 10.1.2}}.</ref> or
* ''A'' is strictly or irreducibly [[diagonally dominant matrix|diagonally dominant]].

The Gauss–Seidel method sometimes converges even if these conditions are not satisfied.

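More precisely, writing the iteration in fixed-point form as <math>\mathbf{x}^{(k+1)} = T \mathbf{x}^{(k)} + C</math> with <math>T = -L_*^{-1} U</math> and <math>C = L_*^{-1} \mathbf{b}</math> (the form used in the examples below), a standard result for stationary iterative methods is that the iteration converges for every initial guess if and only if the [[spectral radius]] <math>\rho(T)</math> is less than 1; the two conditions above are sufficient conditions that guarantee this.
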
== Algorithm ==

Since elements can be overwritten as they are computed in this algorithm, only one storage vector is needed, and vector indexing is omitted. The algorithm goes as follows:

 Inputs: {{var|A}}, {{var|b}}
 Output: <math>\phi</math>
 
 Choose an initial guess <math>\phi</math> to the solution
 '''repeat''' until convergence
     '''for''' {{var|i}} '''from''' 1 '''until''' {{var|n}} '''do'''
         <math>\sigma \leftarrow 0</math>
         '''for''' {{var|j}} '''from''' 1 '''until''' {{var|n}} '''do'''
             '''if''' {{var|j}} ≠ {{var|i}} '''then'''
                 <math> \sigma \leftarrow \sigma + a_{ij} \phi_j </math>
             '''end if'''
         '''end''' ({{var|j}}-loop)
         <math> \phi_i \leftarrow \frac 1 {a_{ii}} (b_i - \sigma)</math>
     '''end''' ({{var|i}}-loop)
     check if convergence is reached
 '''end''' (repeat)

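A direct rendering of this pseudocode in Python (a sketch: the tolerance-based convergence check and the sweep limit are illustrative assumptions, since the pseudocode leaves the convergence test unspecified):

<source lang="python">
import numpy as np

def gauss_seidel_elementwise(A, b, tol=1e-10, max_sweeps=500):
    n = len(b)
    phi = np.zeros(n)  # initial guess
    for _ in range(max_sweeps):
        phi_old = phi.copy()
        for i in range(n):
            # sigma accumulates a_ij * phi_j over j != i, using phi values
            # that are overwritten in place as the sweep proceeds.
            sigma = sum(A[i, j] * phi[j] for j in range(n) if j != i)
            phi[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(phi - phi_old, np.inf) < tol:  # convergence check
            break
    return phi
</source>
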
==Examples==

===An example for the matrix version===

A linear system shown as <math>A \mathbf{x} = \mathbf{b}</math> is given by:

:<math> A= \begin{bmatrix} 16 & 3 \\ 7 & -11 \end{bmatrix} </math> and <math> \mathbf{b} = \begin{bmatrix} 11 \\ 13 \end{bmatrix}. </math>

We want to use the equation

:<math> \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}) </math>

in the form

:<math> \mathbf{x}^{(k+1)} = T \mathbf{x}^{(k)} + C, </math>

where:

:<math>T = - L_*^{-1} U</math> and <math>C = L_*^{-1} \mathbf{b}.</math>

We must decompose <math>A</math> into the sum of a lower triangular component <math>L_*</math> and a strictly upper triangular component <math>U</math>:

:<math> L_*= \begin{bmatrix} 16 & 0 \\ 7 & -11 \end{bmatrix} </math> and <math> U = \begin{bmatrix} 0 & 3 \\ 0 & 0 \end{bmatrix}. </math>

The inverse of <math>L_*</math> is:

:<math> L_*^{-1} = \begin{bmatrix} 16 & 0 \\ 7 & -11 \end{bmatrix}^{-1} = \begin{bmatrix} 0.0625 & 0.0000 \\ 0.0398 & -0.0909 \end{bmatrix}. </math>

Now we can find:

:<math> T = - \begin{bmatrix} 0.0625 & 0.0000 \\ 0.0398 & -0.0909 \end{bmatrix} \times \begin{bmatrix} 0 & 3 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix}, </math>

:<math> C = \begin{bmatrix} 0.0625 & 0.0000 \\ 0.0398 & -0.0909 \end{bmatrix} \times \begin{bmatrix} 11 \\ 13 \end{bmatrix} = \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix}. </math>

Now we have <math>T</math> and <math>C</math> and we can use them to obtain the vectors <math>\mathbf{x}</math> iteratively.

First of all, we have to choose <math>\mathbf{x}^{(0)}</math>: we can only guess. The better the guess, the quicker the algorithm will perform.

We suppose:

:<math> x^{(0)} = \begin{bmatrix} 1.0 \\ 1.0 \end{bmatrix}.</math>

We can then calculate:

:<math> x^{(1)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 1.0 \\ 1.0 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.5000 \\ -0.8636 \end{bmatrix}. </math>

:<math> x^{(2)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 0.5000 \\ -0.8636 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.8494 \\ -0.6413 \end{bmatrix}. </math>

:<math> x^{(3)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 0.8494 \\ -0.6413 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.8077 \\ -0.6678 \end{bmatrix}. </math>

:<math> x^{(4)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 0.8077 \\ -0.6678 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.8127 \\ -0.6646 \end{bmatrix}. </math>

:<math> x^{(5)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 0.8127 \\ -0.6646 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.8121 \\ -0.6650 \end{bmatrix}. </math>

:<math> x^{(6)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 0.8121 \\ -0.6650 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.8122 \\ -0.6650 \end{bmatrix}. </math>

:<math> x^{(7)} = \begin{bmatrix} 0.000 & -0.1875 \\ 0.000 & -0.1193 \end{bmatrix} \times \begin{bmatrix} 0.8122 \\ -0.6650 \end{bmatrix} + \begin{bmatrix} 0.6875 \\ -0.7443 \end{bmatrix} = \begin{bmatrix} 0.8122 \\ -0.6650 \end{bmatrix}. </math>

As expected, the algorithm converges to the exact solution:

:<math> \mathbf{x} = A^{-1} \mathbf{b} = \begin{bmatrix} 0.8122\\ -0.6650 \end{bmatrix}. </math>

In fact, the matrix ''A'' is strictly diagonally dominant (but not positive definite).

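A short NumPy transcription of this example (a sketch; the variable names are illustrative) reproduces <math>T</math>, <math>C</math> and the iteration:

<source lang="python">
import numpy as np

A = np.array([[16.0, 3.0], [7.0, -11.0]])
b = np.array([11.0, 13.0])

L = np.tril(A)              # L* = [[16, 0], [7, -11]]
U = A - L                   # U  = [[0, 3], [0, 0]]
T = -np.linalg.solve(L, U)  # T  = -L*^{-1} U = [[0, -0.1875], [0, -0.1193]]
C = np.linalg.solve(L, b)   # C  = L*^{-1} b  = [0.6875, -0.7443]

x = np.array([1.0, 1.0])    # x^(0)
for k in range(7):
    x = T @ x + C
print(x)                    # ~ [0.8122, -0.6650]
print(np.linalg.solve(A, b))  # exact solution, for comparison
</source>

Since <math>T</math> is triangular with diagonal entries 0 and −0.1193, its spectral radius is about 0.12, which is why the iterates settle so quickly.
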
===Another example for the matrix version===

Another linear system shown as <math>A \mathbf{x} = \mathbf{b}</math> is given by:

:<math> A= \begin{bmatrix} 2 & 3 \\ 5 & 7 \end{bmatrix} </math> and <math> \mathbf{b} = \begin{bmatrix} 11 \\ 13 \end{bmatrix}. </math>

We want to use the equation

:<math> \mathbf{x}^{(k+1)} = L_*^{-1} (\mathbf{b} - U \mathbf{x}^{(k)}) </math>

in the form

:<math> \mathbf{x}^{(k+1)} = T \mathbf{x}^{(k)} + C, </math>

where:

:<math>T = - L_*^{-1} U</math> and <math>C = L_*^{-1} \mathbf{b}.</math>

We must decompose <math>A</math> into the sum of a lower triangular component <math>L_*</math> and a strictly upper triangular component <math>U</math>:

:<math> L_*= \begin{bmatrix} 2 & 0 \\ 5 & 7 \end{bmatrix} </math> and <math> U = \begin{bmatrix} 0 & 3 \\ 0 & 0 \end{bmatrix}.</math>

The inverse of <math>L_*</math> is:

:<math> L_*^{-1} = \begin{bmatrix} 2 & 0 \\ 5 & 7 \end{bmatrix}^{-1} = \begin{bmatrix} 0.500 & 0.000 \\ -0.357 & 0.143 \end{bmatrix}. </math>

Now we can find:

:<math> T = - \begin{bmatrix} 0.500 & 0.000 \\ -0.357 & 0.143 \end{bmatrix} \times \begin{bmatrix} 0 & 3 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0.000 & -1.500 \\ 0.000 & 1.071 \end{bmatrix}, </math>

:<math> C = \begin{bmatrix} 0.500 & 0.000 \\ -0.357 & 0.143 \end{bmatrix} \times \begin{bmatrix} 11 \\ 13 \end{bmatrix} = \begin{bmatrix} 5.500 \\ -2.071 \end{bmatrix}. </math>

Now we have <math>T</math> and <math>C</math> and we can use them to obtain the vectors <math>\mathbf{x}</math> iteratively.

First of all, we have to choose <math>\mathbf{x}^{(0)}</math>: we can only guess. The better the guess, the quicker the algorithm will perform.

We suppose:

:<math> x^{(0)} = \begin{bmatrix} 1.1 \\ 2.3 \end{bmatrix}.</math>

We can then calculate:

:<math> x^{(1)} = \begin{bmatrix} 0.000 & -1.500 \\ 0.000 & 1.071 \end{bmatrix} \times \begin{bmatrix} 1.1 \\ 2.3 \end{bmatrix} + \begin{bmatrix} 5.500 \\ -2.071 \end{bmatrix} = \begin{bmatrix} 2.050 \\ 0.393 \end{bmatrix}. </math>

:<math> x^{(2)} = \begin{bmatrix} 0.000 & -1.500 \\ 0.000 & 1.071 \end{bmatrix} \times \begin{bmatrix} 2.050 \\ 0.393 \end{bmatrix} + \begin{bmatrix} 5.500 \\ -2.071 \end{bmatrix} = \begin{bmatrix} 4.911 \\ -1.651 \end{bmatrix}. </math>

:<math> x^{(3)} = \cdots. \, </math>

If we test for convergence, we find that the algorithm diverges. In fact, the matrix ''A'' is neither diagonally dominant nor positive definite. Convergence to the exact solution

:<math> \mathbf{x} = A^{-1} \mathbf{b} = \begin{bmatrix} -38\\ 29 \end{bmatrix} </math>

is therefore not guaranteed and, in this case, does not occur.

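The divergence can also be read off the iteration matrix directly, using the spectral-radius criterion noted above: here <math>T</math> is upper triangular, so its eigenvalues are its diagonal entries 0 and 1.071, giving <math>\rho(T) \approx 1.071 > 1</math>; the error component along the corresponding eigenvector grows by about 7% per iteration.
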
===An example for the equation version===

Suppose we are given ''n'' equations and a starting approximation '''x'''<sup>(0)</sup>. At each step, solve the first equation for <math>x_1</math> in terms of <math>x_2, x_3, \dots, x_n</math>; then solve the second equation for <math>x_2</math>, substituting the value of <math>x_1</math> just computed; and so on, always substituting the most recently computed values into each subsequent equation.

To make this clear, consider an example:

:<math>
\begin{align}
10x_1 - x_2 + 2x_3 & = 6, \\
-x_1 + 11x_2 - x_3 + 3x_4 & = 25, \\
2x_1 - x_2 + 10x_3 - x_4 & = -11, \\
3x_2 - x_3 + 8x_4 & = 15.
\end{align}
</math>

Solving for <math>x_1</math>, <math>x_2</math>, <math>x_3</math> and <math>x_4</math> gives:

:<math>
\begin{align}
x_1 & = x_2/10 - x_3/5 + 3/5, \\
x_2 & = x_1/11 + x_3/11 - 3x_4/11 + 25/11, \\
x_3 & = -x_1/5 + x_2/10 + x_4/10 - 11/10, \\
x_4 & = -3x_2/8 + x_3/8 + 15/8.
\end{align}
</math>

Suppose we choose (0, 0, 0, 0) as the initial approximation; then the first approximate solution is given by

:<math>
\begin{align}
x_1 & = 3/5 = 0.6, \\
x_2 & = (3/5)/11 + 25/11 = 3/55 + 25/11 = 2.3272, \\
x_3 & = -(3/5)/5 + (2.3272)/10 - 11/10 = -3/25 + 0.23272 - 1.1 = -0.9873, \\
x_4 & = -3(2.3272)/8 + (-0.9873)/8 + 15/8 = 0.8789.
\end{align}
</math>

Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximate solutions after four iterations.

{| class="wikitable"
|-
! <math>x_1</math>
! <math>x_2</math>
! <math>x_3</math>
! <math>x_4</math>
|-
| <math>0.6</math>
| <math>2.32727</math>
| <math>-0.987273</math>
| <math>0.878864</math>
|-
| <math>1.03018</math>
| <math>2.03694</math>
| <math>-1.01446</math>
| <math>0.984341</math>
|-
| <math>1.00659</math>
| <math>2.00356</math>
| <math>-1.00253</math>
| <math>0.998351</math>
|-
| <math>1.00086</math>
| <math>2.0003</math>
| <math>-1.00031</math>
| <math>0.99985</math>
|}

The exact solution of the system is (1, 2, −1, 1).

===An example using Python===

The following numerical procedure simply iterates through to produce the solution vector. Each row of <code>mat</code> stores one solved-for equation from the example above: the constant term first, followed by the coefficients of <math>x_1, \ldots, x_4</math> (the coefficient of the variable being solved for is 0).

<source lang="python">
# Initialize the iteration matrix: [constant, coeff. of x1, x2, x3, x4] per row.
mat = [[  3/5.0,   0.0,     1/10.0, -1/5.0,   0.0    ],
       [ 25/11.0,  1/11.0,  0.0,     1/11.0, -3/11.0 ],
       [-11/10.0, -1/5.0,   1/10.0,  0.0,     1/10.0 ],
       [ 15/8.0,   0.0,    -3/8.0,   1/8.0,   0.0    ]]

x = [0.0, 0.0, 0.0, 0.0]  # initial guess

for _ in range(6):
    # Update each component in place, so later rows use the new values.
    x[0] = mat[0][0] + mat[0][2]*x[1] + mat[0][3]*x[2] + mat[0][4]*x[3]
    x[1] = mat[1][0] + mat[1][1]*x[0] + mat[1][3]*x[2] + mat[1][4]*x[3]
    x[2] = mat[2][0] + mat[2][1]*x[0] + mat[2][2]*x[1] + mat[2][4]*x[3]
    x[3] = mat[3][0] + mat[3][1]*x[0] + mat[3][2]*x[1] + mat[3][3]*x[2]
    print('%f %f %f %f' % (x[0], x[1], x[2], x[3]))  # display the iterations
</source>

This produces the output:

<source lang="text">
0.600000 2.327273 -0.987273 0.878864
1.030182 2.036938 -1.014456 0.984341
1.006585 2.003555 -1.002527 0.998351
1.000861 2.000298 -1.000307 0.999850
1.000091 2.000021 -1.000031 0.999988
1.000008 2.000001 -1.000003 0.999999
</source>

===Program to solve an arbitrary number of equations using Matlab===

<source lang="matlab">
disp('Give the input to solve the set of equations AX=B')
a = input('Input the square matrix A : \n');
b = input('Input the column matrix B : \n');
m = length(a);
% z is a two-dimensional array in which each row corresponds to the values
% of X in a specific iteration and each column corresponds to the values of
% a specific element of X across iterations
c = 0; % running sum, reset after each use
e = 1; % 'e' represents the maximum error
d = 0; % running sum for the convergence factor
for u = 1:m
    x(u) = b(u,1)/a(u,u);
    z(1,u) = 0; % initializing the values for matrix X(x1;x2;...xm)
end
l = 2; % 'l' represents the iteration number
% loop for finding the convergence factor (C.F.)
for r = 1:m
    for s = 1:m
        if r ~= s
            p(r) = abs(a(r,s)/a(r,r)) + d; % p(r) is the C.F. for equation r
            d = p(r);
        end
    end
    d = 0;
end
if min(p) >= 1 % at least one equation must satisfy the condition p < 1
    fprintf('Roots will not converge for this set of equations')
else
    while (e >= 1e-4)
        j1 = 1; % for the first column we use only the old values of X
        for i1 = 2:m
            q(j1) = (a(j1,i1)/a(j1,j1))*z(l-1,i1) + c;
            c = q(j1);
        end
        c = 0;
        z(l,j1) = x(j1) - q(j1); % elements of z in iteration l
        x(j1) = z(l,j1);
        for u = 1:m
            x(u) = b(u,1)/a(u,u);
            z(1,u) = 0;
        end
        for j1 = 2:m-1 % for intermediate columns we use the updated values of X
            for i1 = 1:j1-1
                q(j1) = (a(j1,i1)/a(j1,j1))*z(l,i1) + c;
                c = q(j1);
            end
            for i1 = j1+1:m
                q(j1) = (a(j1,i1)/a(j1,j1))*z(l-1,i1) + c;
                c = q(j1);
            end
            c = 0;
            z(l,j1) = x(j1) - q(j1);
            x(j1) = z(l,j1);
            for u = 1:m
                x(u) = b(u,1)/a(u,u);
                z(1,u) = 0;
            end
        end
        j1 = m; % for the last column we use only the updated values of X
        for i1 = 1:m-1
            q(j1) = (a(j1,i1)/a(j1,j1))*z(l,i1) + c;
            c = q(j1);
        end
        c = 0;
        z(l,j1) = x(j1) - q(j1);
        for v = 1:m
            t(v) = abs(z(l,v) - z(l-1,v)); % error in each element of X
        end
        e = max(t); % maximum error over all elements of X
        l = l + 1; % iteration number gets updated
        for i = 1:m
            X(1,i) = z(l-1,i); % the final solution X
        end
    end
    % loop to show the iteration number along with the values of z
    for i = 1:l-1
        for j = 1:m
            w(i,j+1) = z(i,j);
        end
        w(i,1) = i;
    end
    disp('  It. no.      x1        x2        x3        x4   ')
    disp(w)
    disp('The final solution is ')
    disp(X)
    fprintf('The total number of iterations is %d', l-1)
end
</source>

Program output is:

<source lang="text">
Give the input to solve the set of equations AX=B
Input the square matrix A :
[10 -2 -1 -1;-2 10 -1 -1;-1 -1 10 -2;-1 -1 -2 10]
Input the column matrix B :
[3;15;27;-9]
  It. no.      x1        x2        x3        x4   
    1.0000         0         0         0         0
    2.0000    0.3000    1.5600    2.8860   -0.1368
    3.0000    0.8869    1.9523    2.9566   -0.0248
    4.0000    0.9836    1.9899    2.9924   -0.0042
    5.0000    0.9968    1.9982    2.9987   -0.0008
    6.0000    0.9994    1.9997    2.9998   -0.0001
    7.0000    0.9999    1.9999    3.0000   -0.0000
    8.0000    1.0000    2.0000    3.0000   -0.0000

The final solution is 
    1.0000    2.0000    3.0000   -0.0000

The total number of iterations is 8
</source>

==See also==
*[[Jacobi method]]
*[[Successive over-relaxation]]
*[[Iterative method#Linear systems|Iterative methods for linear systems]]
*[[Belief propagation#Gaussian belief propagation (GaBP)|Gaussian belief propagation]]
*[[Matrix splitting]]

==Notes==
{{reflist}}

==References==
* {{citation | first = Carl Friedrich | last = Gauss | authorlink = Carl Friedrich Gauss | title = Werke | publisher = Königlichen Gesellschaft der Wissenschaften | location = Göttingen | date = 1903 | volume = 9 | language = German}}.
* {{citation | first1=Gene H. | last1=Golub | author1-link=Gene H. Golub | first2=Charles F. | last2=Van Loan | author2-link=Charles F. Van Loan | year=1996 | title=Matrix Computations | edition=3rd | publisher=Johns Hopkins | place=Baltimore | isbn=978-0-8018-5414-9}}.
* {{MathWorld|urlname=Gauss-SeidelMethod|title=Gauss-Seidel Method|author=Black, Noel and Moore, Shirley}}
{{CFDWiki|name=Gauss-Seidel_method}}

==External links==
*{{springer|title=Seidel method|id=p/s083810}}
*[http://www.math-linux.com/spip.php?article48 Gauss–Seidel from www.math-linux.com]
*[http://math.fullerton.edu/mathews/n2003/GaussSeidelMod.html Module for Gauss–Seidel Iteration]
*[http://numericalmethods.eng.usf.edu/topics/gauss_seidel.html Gauss–Seidel] from the Holistic Numerical Methods Institute
*[http://www.webcitation.org/query?url=http://www.geocities.com/rsrirang2001/Mathematics/NumericalMethods/gsiedel/gsiedel.htm&date=2009-10-26+01:52:27 Gauss–Seidel iteration from www.geocities.com]
*[http://www.netlib.org/linalg/html_templates/node14.html#figgs The Gauss–Seidel Method]
*[http://arxiv.org/abs/0901.4192 Bickson]
*[http://matlabdb.mathematik.uni-stuttgart.de/gauss_seidel.m?MP_ID=406 Matlab code]
*[http://adrianboeing.blogspot.com/2010/02/solving-linear-systems.html C code example]

{{Numerical linear algebra}}

{{DEFAULTSORT:Gauss-Seidel Method}}
[[Category:Numerical linear algebra]]
[[Category:Articles with example pseudocode]]
[[Category:Relaxation (iterative methods)]]