9 min read • August 20, 2024
Iterative methods are powerful tools for solving large, sparse linear systems in numerical analysis. They start with an initial guess and refine the solution iteratively, making them efficient for problems where direct methods are impractical.
These methods, including Jacobi, Gauss-Seidel, and conjugate gradient, offer advantages in memory usage and computational cost. They're particularly useful in data science and statistics when dealing with massive datasets or complex models.
Given a linear system $Ax = b$, where $A$ is an $n \times n$ matrix and $x$ and $b$ are $n$-dimensional vectors, the Jacobi method starts with an initial guess $x^{(0)}$
At each iteration $k$, the method updates each component of the solution vector using the formula: $x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \neq i} a_{ij} x_j^{(k)}\right)$ for $i = 1, \dots, n$
The process is repeated until convergence is reached, i.e., when the difference between consecutive iterations falls below a specified tolerance: $\|x^{(k+1)} - x^{(k)}\| < \epsilon$
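A minimal NumPy sketch of this update rule (the function name `jacobi` and its defaults are illustrative, not from the original text):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=1000):
    """Solve Ax = b with the Jacobi method; A needs a nonzero diagonal."""
    x = x0.astype(float)
    D = np.diag(A)               # diagonal entries a_ii
    R = A - np.diagflat(D)       # off-diagonal part of A
    for k in range(max_iter):
        # x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii, all i at once
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1  # converged
        x = x_new
    return x, max_iter           # tolerance not reached
```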
Consider a small linear system whose coefficient matrix is strictly diagonally dominant, a standard sufficient condition for the Jacobi method to converge
Starting with an initial guess such as $x^{(0)} = (0, \dots, 0)^T$, the Jacobi method updates all components simultaneously from the values of the previous iterate
The process continues until the desired convergence is achieved, with the iterates approaching the exact solution $x^* = A^{-1}b$
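As a concrete illustration, the sketch above can be run on a small system; the 2×2 system below is assumed purely for demonstration, chosen so its exact solution $(1, 1)$ is easy to verify by hand:

```python
import numpy as np

# assumed 2x2 system for illustration; strictly diagonally dominant
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([5.0, 5.0])      # exact solution: x = (1, 1)

x, iters = jacobi(A, b, x0=np.zeros(2))
print(x, iters)               # x is approximately [1. 1.]
```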
Given a linear system $Ax = b$, where $A$ is an $n \times n$ matrix and $x$ and $b$ are $n$-dimensional vectors, the Gauss-Seidel method starts with an initial guess $x^{(0)}$
At each iteration $k$, the method updates each component of the solution vector using the formula: $x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}\right)$, so newly computed components are used immediately within the same sweep
The process is repeated until convergence is reached, i.e., when the difference between consecutive iterations falls below a specified tolerance: $\|x^{(k+1)} - x^{(k)}\| < \epsilon$
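A matching sketch of the Gauss-Seidel sweep, under the same illustrative naming as before:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=1000):
    """Solve Ax = b with Gauss-Seidel; new components are used immediately."""
    x = x0.astype(float)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds the k+1 values updated earlier in this sweep
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1
    return x, max_iter
```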
Consider the same linear system as in the Jacobi method example
Starting with the same initial guess $x^{(0)}$, the Gauss-Seidel method updates the solution one component at a time, immediately reusing each new value within the current sweep
The process continues until the desired convergence is achieved, with the iterates typically approaching the solution faster than the Jacobi method
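Continuing with the assumed 2×2 system, the two sketches can be compared directly; on diagonally dominant systems like this one, Gauss-Seidel generally needs fewer sweeps:

```python
# reuses jacobi, gauss_seidel, A, and b from the sketches above
import numpy as np

x_j,  iters_j  = jacobi(A, b, x0=np.zeros(2))
x_gs, iters_gs = gauss_seidel(A, b, x0=np.zeros(2))
print(f"Jacobi: {iters_j} iterations, Gauss-Seidel: {iters_gs} iterations")
```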
Given a linear system $Ax = b$, where $A$ is an $n \times n$ matrix and $x$ and $b$ are $n$-dimensional vectors, the SOR (successive over-relaxation) method starts with an initial guess $x^{(0)}$ and a relaxation parameter $\omega$
At each iteration $k$, the method updates each component of the solution vector using the formula: $x_i^{(k+1)} = (1 - \omega)\,x_i^{(k)} + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}\right)$, a weighted average of the previous value and the Gauss-Seidel update
The process is repeated until convergence is reached, i.e., when the difference between consecutive iterations falls below a specified tolerance: $\|x^{(k+1)} - x^{(k)}\| < \epsilon$
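A sketch of the SOR sweep under the same conventions; note that $\omega = 1$ recovers Gauss-Seidel exactly:

```python
import numpy as np

def sor(A, b, x0, omega, tol=1e-8, max_iter=1000):
    """Solve Ax = b with SOR; omega=1 reduces to Gauss-Seidel."""
    x = x0.astype(float)
    n = len(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / A[i, i]               # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1
    return x, max_iter
```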
The convergence of the SOR method depends on the properties of the matrix $A$ and the choice of the relaxation parameter $\omega$; a necessary condition for convergence is $0 < \omega < 2$
If $A$ is symmetric positive definite, SOR converges for every $\omega$ in $(0, 2)$; for strictly diagonally dominant matrices, convergence is guaranteed for $0 < \omega \le 1$
For certain matrix classes covered by the classical theory (e.g., consistently ordered matrices), the optimal value of $\omega$ that minimizes the spectral radius of the SOR iteration matrix and maximizes the convergence rate is given by: $\omega_{\text{opt}} = \frac{2}{1 + \sqrt{1 - \rho(B_J)^2}}$
where $\rho(B_J)$ is the spectral radius of the Jacobi iteration matrix $B_J = I - D^{-1}A$, with $D$ the diagonal part of $A$
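Where this formula applies, $\omega_{\text{opt}}$ can be estimated numerically from the Jacobi iteration matrix; a minimal sketch (the helper name `optimal_omega` is assumed):

```python
import numpy as np

def optimal_omega(A):
    """Estimate omega_opt from the spectral radius of the Jacobi matrix.
    Only meaningful when the classical formula applies and rho < 1."""
    D_inv = np.diag(1.0 / np.diag(A))
    B_J = np.eye(len(A)) - D_inv @ A              # Jacobi iteration matrix
    rho = np.max(np.abs(np.linalg.eigvals(B_J)))  # spectral radius
    return 2.0 / (1.0 + np.sqrt(1.0 - rho**2))
```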
Consider the same linear system as in the previous examples
Starting with an initial guess $x^{(0)}$ and a relaxation parameter $\omega$ (typically $1 < \omega < 2$ for over-relaxation), the SOR method applies the weighted update component by component
The process continues until the desired convergence is achieved, with the iterates approaching the solution faster than the Gauss-Seidel method when $\omega$ is chosen appropriately
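Continuing the running example, whose tridiagonal (hence consistently ordered) matrix falls under the classical theory, the estimated $\omega_{\text{opt}}$ can be plugged straight into the SOR sketch:

```python
# reuses sor, optimal_omega, A, and b from the sketches above
import numpy as np

omega = optimal_omega(A)          # roughly 1.05 for this 2x2 system
x_sor, iters_sor = sor(A, b, x0=np.zeros(2), omega=omega)
print(f"omega_opt = {omega:.3f}, SOR iterations: {iters_sor}")
```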
Consider a linear system $Ax = b$ whose matrix $A$ is symmetric positive definite, the condition required by the conjugate gradient (CG) method
Starting with an initial guess $x^{(0)}$, the CG method proceeds as follows: set the initial residual $r^{(0)} = b - Ax^{(0)}$ and take it as the first search direction, $p^{(0)} = r^{(0)}$
At each iteration $k$, compute the step length $\alpha_k = \frac{r^{(k)T} r^{(k)}}{p^{(k)T} A p^{(k)}}$, update the solution $x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}$ and the residual $r^{(k+1)} = r^{(k)} - \alpha_k A p^{(k)}$, then form the next $A$-conjugate direction $p^{(k+1)} = r^{(k+1)} + \beta_k p^{(k)}$ with $\beta_k = \frac{r^{(k+1)T} r^{(k+1)}}{r^{(k)T} r^{(k)}}$
In exact arithmetic CG reaches the exact solution in at most $n$ iterations; in practice it is stopped once the residual norm $\|r^{(k)}\|$ falls below a specified tolerance
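A minimal sketch of these recurrences (the function name and stopping rule are illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=None):
    """CG for symmetric positive definite A; a sketch, not production code."""
    x = x0.astype(float)
    r = b - A @ x                      # initial residual r^(0)
    p = r.copy()                       # first search direction p^(0) = r^(0)
    rs_old = r @ r
    if max_iter is None:
        max_iter = len(b)              # at most n steps in exact arithmetic
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length alpha_k
        x += alpha * p                 # x^(k+1)
        r -= alpha * Ap                # r^(k+1)
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # stop once the residual is small
            return x, k + 1
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x, max_iter

# e.g. conjugate_gradient(A_spd, b, np.zeros(len(b))) for an SPD matrix A_spd
```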