Solving Systems of Linear Equations

Yaşar Yücel Yeşilbağ
5 min read · Apr 16, 2021

There are many methods for solving systems of linear equations. Let’s briefly go over some of them with their pros and cons, and define an algorithm for choosing among them.

This article will not dive deep into the methods’ details, since there are already plenty of sources covering the theoretical and practical aspects; only short illustrative sketches are included.

(Image created by jcomp — www.freepik.com)

Some Definitions

[1] In mathematics, a system of linear equations is a collection of linear equations involving the same set of variables. A solution to such a system is an assignment of values to the variables such that all the equations are simultaneously satisfied.

A general system of m linear equations with n unknowns can be written as below. Note that, in order to have a single unique solution, m must equal n, and the equations must be independent (none can be derived from the others).
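\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m
\end{aligned}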

This system can also be written in matrix form as below, where A is the coefficient matrix and b is the constant matrix.
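A\mathbf{x} = \mathbf{b}, \qquad
A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}, \quad
\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \quad
\mathbf{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}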

[2] A triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero, upper triangular if all the entries below the main diagonal are zero.
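For example, in the 3×3 case:

L = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix}, \qquad
U = \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}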

A matrix equation in the form Lx=b or Ux=b is easy to solve by a process called forward substitution or backward substitution, respectively. In forward substitution, one first computes x1 from the first equation, substitutes it into the next equation to solve for x2, and repeats through to xn; backward substitution works in the opposite order, starting from xn.
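A minimal sketch of both substitutions in Python (illustrative, not optimized):

import numpy as np

def forward_substitution(L, b):
    # Solve Lx = b for lower triangular L, one row at a time from the top.
    x = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def backward_substitution(U, b):
    # Solve Ux = b for upper triangular U, one row at a time from the bottom.
    n = len(b)
    x = np.zeros_like(b, dtype=float)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x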

Gaussian Elimination

[3] This method first augments the coefficient matrix with the constant matrix. It then uses a sequence of elementary row operations to modify the augmented matrix until it becomes triangular (Gaussian elimination) or the identity (Gauss–Jordan elimination).
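A minimal sketch of the idea with NumPy (illustrative only; in practice a library routine such as numpy.linalg.solve should be preferred):

import numpy as np

def gauss_solve(A, b):
    # Build the augmented matrix [A | b] and eliminate in place.
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))              # partial pivoting for stability
        M[[k, p]] = M[[p, k]]
        M[k+1:] -= np.outer(M[k+1:, k] / M[k, k], M[k])  # zero out entries below the pivot
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                       # backward substitution
        x[i] = (M[i, -1] - M[i, i+1:n] @ x[i+1:]) / M[i, i]
    return x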

Pros: It’s fast; the solution set is obtained by performing Gaussian elimination followed by substitution, or by performing Gauss–Jordan elimination alone. Since the operations are performed in place, no additional memory is needed.

Cons: If many solutions are needed with different constant matrices but the same coefficient matrix, the whole process must be repeated from scratch each time. [6]

Matrix Solution

[1] This method requires calculating the inverse of the coefficient matrix. This inverse can be obtained by Gauss–Jordan elimination, as shown below. Then the solution is calculated by x=A⁻¹b.
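[\,A \mid I\,] \ \xrightarrow{\ \text{elementary row operations}\ }\ [\,I \mid A^{-1}\,]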

Pros: Once the inverse matrix is calculated, many solutions with different constant matrices can be calculated very quickly by just a matrix product.

Cons: Since another matrix of the same size gets involved, twice the memory is required.
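A minimal sketch of the reuse pattern with NumPy (the matrices are illustrative):

import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
A_inv = np.linalg.inv(A)                  # pay the inversion cost once
for b in (np.array([9.0, 8.0]), np.array([4.0, 3.0])):
    x = A_inv @ b                         # each new solution is just a matrix product
    print(x)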

LU Decomposition

[4] This method decomposes the coefficient matrix into the product of a lower triangular matrix and an upper triangular matrix, A=LU.

So, Ax=b becomes LUx=b. Let Ux=y. The method first solves Ly=b for y by forward substitution and then solves Ux=y for x by backward substitution.
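With SciPy this pattern looks like the following minimal sketch (the matrices are illustrative):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)             # factor A = PLU once (with row pivoting)
for b in (np.array([10.0, 12.0]), np.array([1.0, 0.0])):
    x = lu_solve((lu, piv), b)     # reuse the factors: forward + backward substitution
    print(x)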

Pros: Once the L and U matrices are calculated, many solutions with different constant matrices can be calculated quickly by forward and backward substitutions.

Cons: Each solve requires two substitutions, which in practice is slower than the single matrix product of the matrix solution method, although computing the LU factors themselves is cheaper than computing a full inverse [7].

Cholesky Decomposition

[5] This method decomposes a symmetric positive-definite coefficient matrix into the product of a lower triangular matrix and its (conjugate) transpose, A=LL*.

So, Ax=b becomes LL*x=b. Let L*x=y. The method first solves Ly=b for y by forward substitution and then solves L*x=y for x by backward substitution.

A variant of this method is the LDL decomposition, A=LDL*, which has the advantage of avoiding square root extraction.

So, Ax=b becomes LDL*x=b. Let DL*x=y. The method first solves Ly=b for y by forward substitution and then solves DL*x=y for x by backward substitution. Note that this reduces to L*x=D⁻¹y, where D⁻¹ is trivial to apply: inverting a diagonal matrix just means taking the reciprocal of each diagonal entry.
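A minimal sketch with SciPy’s Cholesky helpers (the matrix below is an illustrative symmetric positive-definite example):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 2.0], [2.0, 3.0]])   # symmetric positive definite
b = np.array([6.0, 5.0])
c, low = cho_factor(A)                   # factor A = LL* once
x = cho_solve((c, low), b)               # forward + backward substitution per b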

Pros: The decomposition generates only one triangular matrix, and the access pattern allows L to be built in place within the coefficient matrix, so the memory requirement is very low. Fewer matrix elements have to be calculated (roughly half as many as in LU decomposition), so it’s more efficient, and its numerical stability is higher.

Cons: Requires the coefficient matrix to be symmetric and positive definite. Since its formula contains square roots, it’s a bit processor intensive, though this can be overcome by using the LDL variant, which eliminates square root calculations completely.

Method Choosing Algorithm

if( coefficient matrix is symmetric positive definite ){
    choose Cholesky Decomposition
}else if( many solutions with the same coefficient matrix ){
    if( memory efficiency is preferred ) choose LU Decomposition
    if( processor efficiency is preferred ) choose Matrix Solution
}else{
    choose Gaussian Elimination
}
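The same decision flow as a runnable Python sketch; the flags many_solutions and prefer_memory are assumptions standing in for the informal conditions above:

import numpy as np

def choose_method(A, many_solutions=False, prefer_memory=True):
    if np.allclose(A, A.T):               # symmetric (assumed positive definite)
        return "Cholesky Decomposition"
    if many_solutions:
        return "LU Decomposition" if prefer_memory else "Matrix Solution"
    return "Gaussian Elimination"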

References

[1]: https://en.wikipedia.org/wiki/System_of_linear_equations
[2]: https://en.wikipedia.org/wiki/Triangular_matrix
[3]: https://en.wikipedia.org/wiki/Gaussian_elimination
[4]: https://en.wikipedia.org/wiki/LU_decomposition
[5]: https://en.wikipedia.org/wiki/Cholesky_decomposition
[6]: https://math.stackexchange.com/questions/266355/necessity-advantage-of-lu-decomposition-over-gaussian-elimination
[7]: https://math.stackexchange.com/questions/2010045/what-is-the-advantage-of-lu-factorization
