GAUSSIAN ELIMINATION

Gaussian elimination (also known as row reduction) is an algorithm for solving systems of linear equations. It is usually understood as a sequence of operations performed on the corresponding matrix of coefficients. The method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix.

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: 1) swapping two rows, 2) multiplying a row by a non-zero number, 3) adding a multiple of one row to another row. Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the left-most non-zero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where multiple elementary operations might be done at each step), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.

\left[{\begin{array}{rrr|r}1&3&1&9\\1&1&-1&1\\3&11&5&35\end{array}}\right]\to \left[{\begin{array}{rrr|r}1&3&1&9\\0&-2&-2&-8\\0&2&2&8\end{array}}\right]\to \left[{\begin{array}{rrr|r}1&3&1&9\\0&-2&-2&-8\\0&0&0&0\end{array}}\right]\to \left[{\begin{array}{rrr|r}1&0&-2&-3\\0&1&1&4\\0&0&0&0\end{array}}\right]
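The reduced row echelon form at the end of this sequence can be reproduced with a computer algebra system. The following minimal sketch assumes the SymPy library is available and applies its rref method to the first matrix of the sequence.

```python
# Minimal check of the example above; assumes SymPy is installed.
from sympy import Matrix

A = Matrix([
    [1,  3,  1,  9],
    [1,  1, -1,  1],
    [3, 11,  5, 35],
])

rref_matrix, pivot_columns = A.rref()  # reduced row echelon form and pivot columns
print(rref_matrix)    # Matrix([[1, 0, -2, -3], [0, 1, 1, 4], [0, 0, 0, 0]])
print(pivot_columns)  # (0, 1)
```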

 

Row operations 

There are three types of elementary row operations which may be performed on the rows of a matrix:

Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.

If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one’s goal is to solve a system of linear equations, then using these row operations could make the problem easier.
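As an illustration, the three types of operations can be written as short functions acting on a matrix stored as a list of rows; the function names and the use of Python are assumptions made for this sketch, not part of the standard presentation.

```python
from fractions import Fraction

def swap_rows(M, i, j):
    """Type 1: swap rows i and j (in place)."""
    M[i], M[j] = M[j], M[i]

def scale_row(M, i, c):
    """Type 2: multiply row i by a nonzero scalar c (in place)."""
    assert c != 0
    M[i] = [c * x for x in M[i]]

def add_row_multiple(M, i, j, c):
    """Type 3: add c times row j to row i (in place)."""
    M[i] = [x + c * y for x, y in zip(M[i], M[j])]

# Example: the first elimination step of the opening example, L2 + (-1)*L1 -> L2.
M = [[Fraction(v) for v in row]
     for row in [[1, 3, 1, 9], [1, 1, -1, 1], [3, 11, 5, 35]]]
add_row_multiple(M, 1, 0, -1)
print(M[1])  # [Fraction(0, 1), Fraction(-2, 1), Fraction(-2, 1), Fraction(-8, 1)]
```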

Echelon form 

For each row in a matrix, if the row does not consist only of zeros, then the left-most non-zero entry is called the leading coefficient (or pivot) of that row. If two leading coefficients are in the same column, then a row operation of type 3 can be used to make one of those coefficients zero. Then, by using the row-swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in row echelon form. The lower left part of the matrix then contains only zeros, and all of the zero rows are below the non-zero rows. The word “echelon” is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom.

For example, the following matrix is in row echelon form, and its leading coefficients are shown in red.

 \left[{\begin{array}{cccc}0&\color {red}{\mathbf {2} }&1&-1\\0&0&\color {red}{\mathbf {3} }&1\\0&0&0&0\end{array}}\right]

It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column) is to the right of the leading coefficient of the first row (in the second column).

A matrix is said to be in reduced row echelon form if, furthermore, all of the leading coefficients are equal to 1, and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3).
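These conditions can be checked mechanically. The sketch below, which assumes a matrix stored as a list of rows and uses a hypothetical helper name, tests whether the leading coefficients move strictly to the right from one non-zero row to the next and whether all zero rows are at the bottom, i.e. whether the matrix is in row echelon form.

```python
def is_row_echelon(M):
    """Return True if M (a list of rows) is in row echelon form."""
    previous_pivot = -1
    seen_zero_row = False
    for row in M:
        nonzero_cols = [k for k, x in enumerate(row) if x != 0]
        if not nonzero_cols:          # a zero row; all later rows must also be zero
            seen_zero_row = True
            continue
        if seen_zero_row:             # non-zero row below a zero row
            return False
        if nonzero_cols[0] <= previous_pivot:  # pivot not strictly to the right
            return False
        previous_pivot = nonzero_cols[0]
    return True

# The matrix from the example above is in row echelon form.
print(is_row_echelon([[0, 2, 1, -1], [0, 0, 3, 1], [0, 0, 0, 0]]))  # True
```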

Example of the algorithm 

Suppose the goal is to find and describe the set of solutions to the following system of linear equations:

 {\begin{alignedat}{7}2x&&\;+\;&&y&&\;-\;&&z&&\;=\;&&8&\qquad (L_{1})\\-3x&&\;-\;&&y&&\;+\;&&2z&&\;=\;&&-11&\qquad (L_{2})\\-2x&&\;+\;&&y&&\;+\;&&2z&&\;=\;&&-3&\qquad (L_{3})\end{alignedat}}

Below, the row reduction process is applied simultaneously to the system of equations and to its associated augmented matrix. In practice, one does not usually deal with the system in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate x from all equations below L_{1}, and then eliminate y from all equations below L_{2}. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

Original system and its augmented matrix:

{\begin{alignedat}{7}2x&&\;+\;&&y&&\;-\;&&z&&\;=\;&&8&\\-3x&&\;-\;&&y&&\;+\;&&2z&&\;=\;&&-11&\\-2x&&\;+\;&&y&&\;+\;&&2z&&\;=\;&&-3&\end{alignedat}}

\left[{\begin{array}{ccc|c}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{array}}\right]

The row operations L_{2}+{\frac {3}{2}}L_{1}\rightarrow L_{2} and L_{3}+L_{1}\rightarrow L_{3} give:

{\begin{alignedat}{7}2x&&\;+&&y&&\;-&&\;z&&\;=\;&&8&\\&&&&{\frac {1}{2}}y&&\;+&&\;{\frac {1}{2}}z&&\;=\;&&1&\\&&&&2y&&\;+&&\;z&&\;=\;&&5&\end{alignedat}}

\left[{\begin{array}{ccc|c}2&1&-1&8\\0&1/2&1/2&1\\0&2&1&5\end{array}}\right]

The row operation L_{3}-4L_{2}\rightarrow L_{3} gives:

{\begin{alignedat}{7}2x&&\;+&&y\;&&-&&\;z\;&&=\;&&8&\\&&&&{\frac {1}{2}}y\;&&+&&\;{\frac {1}{2}}z\;&&=\;&&1&\\&&&&&&&&\;-z\;&&\;=\;&&1&\end{alignedat}}

\left[{\begin{array}{ccc|c}2&1&-1&8\\0&1/2&1/2&1\\0&0&-1&1\end{array}}\right]

The matrix is now in echelon form (also called triangular form).

The row operations L_{2}+{\frac {1}{2}}L_{3}\rightarrow L_{2} and L_{1}-L_{3}\rightarrow L_{1} give:

{\begin{alignedat}{7}2x&&\;+&&y\;&&&&\;\;&&=\;&&7&\\&&&&{\frac {1}{2}}y\;&&&&\;\;&&=\;&&{\frac {3}{2}}&\\&&&&&&&&\;-z\;&&\;=\;&&1&\end{alignedat}}

\left[{\begin{array}{ccc|c}2&1&0&7\\0&1/2&0&3/2\\0&0&-1&1\end{array}}\right]

The row operations 2L_{2}\rightarrow L_{2} and -L_{3}\rightarrow L_{3} give:

{\begin{alignedat}{7}2x&&\;+&&y\;&&&&\;\;&&=\;&&7&\\&&&&y\;&&&&\;\;&&=\;&&3&\\&&&&&&&&\;z\;&&\;=\;&&-1&\end{alignedat}}

\left[{\begin{array}{ccc|c}2&1&0&7\\0&1&0&3\\0&0&1&-1\end{array}}\right]

Finally, the row operations L_{1}-L_{2}\rightarrow L_{1} and then {\frac {1}{2}}L_{1}\rightarrow L_{1} give:

{\begin{alignedat}{7}x&&\;&&\;&&&&\;\;&&=\;&&2&\\&&&&y\;&&&&\;\;&&=\;&&3&\\&&&&&&&&\;z\;&&\;=\;&&-1&\end{alignedat}}

\left[{\begin{array}{ccc|c}1&0&0&2\\0&1&0&3\\0&0&1&-1\end{array}}\right]

At each step, the row operations shown are the ones that have just been performed. In the first step, x is eliminated from L_{2} by adding {\frac {3}{2}}L_{1} to L_{2}. Next, x is eliminated from L_{3} by adding L_{1} to L_{3}. These row operations are labelled above as

 L_{2}+{\frac {3}{2}}L_{1}\rightarrow L_{2}
 L_{3}+L_{1}\rightarrow L_{3}.

Once y is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve for the variables in reverse order, a process known as back-substitution. One sees that the solution is z = -1, y = 3, and x = 2. So there is a unique solution to the original system of equations.
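The whole procedure can be collected into one routine. The sketch below is a plain Python illustration with a hypothetical function name: it performs forward elimination on the augmented matrix with partial pivoting (swapping in the row whose pivot has the largest absolute value, a standard refinement not used in the hand computation above), followed by back-substitution. Applied to the system of this example, it reproduces x = 2, y = 3, z = -1.

```python
from fractions import Fraction

def gaussian_elimination(aug):
    """Solve a square system given its augmented matrix [A | b]
    by forward elimination with partial pivoting, then back-substitution."""
    n = len(aug)
    M = [[Fraction(x) for x in row] for row in aug]

    # Forward elimination: create zeros below each pivot (type 1 and type 3 operations).
    for col in range(n):
        pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[pivot_row][col] == 0:
            raise ValueError("matrix is singular")
        M[col], M[pivot_row] = M[pivot_row], M[col]              # swap rows
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]

    # Back-substitution: solve for the unknowns in reverse order.
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(gaussian_elimination([[ 2,  1, -1,   8],
                            [-3, -1,  2, -11],
                            [-2,  1,  2,  -3]]))
# [Fraction(2, 1), Fraction(3, 1), Fraction(-1, 1)]
```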