Simultaneous Linear Equations
We have a number of different ways to solve a given set of simultaneous linear equations such as:
3x + 4y + 2z = 7
2x −  y + 5z = 12
 x       − z = 2
Which is the best approach to find the solution? There is no single exact answer; it depends upon several factors.
The problem above is a set of 3 simultaneous linear equations in 3 unknowns. The coefficients of the variables can be arranged in a 3 by 3 table called a matrix of coefficients. The values on the right side of the equations are the constants.
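In a program, the matrix of coefficients and the column of constants for the example above might be held as plain lists (a minimal sketch; the names `A` and `b` are just conventional choices, not from the text):

```python
# Matrix of coefficients and column of constants for the example system.
A = [[3, 4, 2],
     [2, -1, 5],
     [1, 0, -1]]   # the third equation has a zero coefficient for y
b = [7, 12, 2]

print(len(A), "equations in", len(A[0]), "unknowns")   # -> 3 equations in 3 unknowns
```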
First, what are some of the ways we can solve a system of n simultaneous linear equations in n unknowns?
Substitution: Using one of the equations, solve for one variable in terms of the other(s) and substitute in the remaining equations—this reduces the system to n−1 equations in the remaining n−1 unknowns. Repeat until one variable can be solved for an exact value. Then work backwards, solving for the other variables.
Addition (or subtraction): We multiply one of the equations by a constant such that when we add it to another equation, the coefficient of one of the variables is zero. Repeat for the other equations. In this way we "eliminate" one variable and end up with n−1 equations in the remaining n−1 variables. Repeat until one variable is solved. Then substitute backwards until values are determined for all variables.
Gaussian Elimination: Write the augmented matrix using the coefficients of the variables and an additional column containing the constants. By applying the three rules for Gaussian Elimination, change the original table of coefficients into an "identity matrix". The solution is contained in the column that originally held the constants.
Inverse Matrix: Write a "super" augmented matrix with the original coefficients and an additional "identity matrix". Perform the same steps of Gaussian Elimination to change the original table of coefficients to the "identity matrix". The original "identity matrix" is changed into the inverse matrix of the coefficients. Matrix multiplication of this inverse matrix and the column of constants produces the solution.
Determinants: Each variable becomes the ratio of two n by n determinants. The denominator is the determinant of the matrix of the coefficients. The numerator is the determinant of the matrix formed by substituting the constant column for the coefficients of that particular variable. Thus we need to evaluate n+1 determinants, each n by n.
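The Gaussian Elimination steps can be sketched in Python for the example system; this is a minimal illustration using exact fractions (the function name and pivoting details are my own choices, and it assumes the system has a unique solution):

```python
from fractions import Fraction

def gaussian_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination on the augmented matrix.
    Assumes the system has a unique solution."""
    n = len(A)
    # Build the n-by-(n+1) augmented matrix with exact arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero entry in this column and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Subtract multiples of the pivot row to zero out the column elsewhere.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The column that originally held the constants now holds the solution.
    return [M[i][n] for i in range(n)]

A = [[3, 4, 2], [2, -1, 5], [1, 0, -1]]
b = [7, 12, 2]
print(gaussian_solve(A, b))   # -> [Fraction(3, 1), Fraction(-1, 1), Fraction(1, 1)]
```

So the example system has the solution x = 3, y = −1, z = 1.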
The first observation is that there is generally no quick and easy way to solve a large system of n simultaneous linear equations in n unknowns. The amount of arithmetic involved for any method grows rapidly. The primary question in determining the best method is: which method is the most straightforward to apply without making errors?
First, let us compare Gaussian Elimination with the Inverse Matrix method. These two involve the same steps.
However, Gaussian Elimination works on an n by n+1 table; whereas finding the Inverse Matrix involves an n by 2n table.
Therefore, finding the Inverse Matrix involves about twice as many arithmetic operations.
Further, after the Inverse Matrix is computed, there still remains a matrix multiplication of the inverse matrix and the constant column to get a solution.
On the surface, it would appear that perhaps we should never compute the Inverse Matrix. This is indeed the case if we have only one set of the simultaneous linear equations to solve. However, if there are a number of sets to solve, all with the same set of coefficients but different constants, then the Inverse Matrix may be faster. Once we have the inverse matrix, then we can relatively quickly perform a matrix multiplication to solve each set.
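The trade-off can be sketched as follows: compute the inverse once by Gauss-Jordan elimination on the n by 2n "super" augmented matrix, then solve each new constant column with just a matrix multiplication (a minimal illustration; the helper names and the extra constant columns are my own):

```python
from fractions import Fraction

def invert(A):
    """Invert A by Gauss-Jordan elimination on the n-by-2n matrix [A | I]."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of M is now the inverse matrix.
    return [row[n:] for row in M]

def mat_vec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[3, 4, 2], [2, -1, 5], [1, 0, -1]]
A_inv = invert(A)                          # the expensive step, done once...
for b in ([7, 12, 2], [1, 0, 0], [0, 3, 6]):
    print(mat_vec(A_inv, b))               # ...then each constant column is cheap
```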
As a rule of thumb, if there are three or more sets of equations, then the Inverse Matrix approach may result in the least work.
This suggestion holds even in comparison with the other methods: find the inverse matrix when dealing with multiple sets of equations with the same coefficients and different constants.
The most effort is performed in obtaining the inverse matrix.
It is well worth the small additional effort to multiply the inverse matrix by the original matrix as a check on correctness (the product should be the identity matrix) before proceeding.
Substitution tends to be the most straightforward when there are two equations and is typically the first method studied. It requires only ordinary algebra. If one of the coefficients is 1, then it is easy to solve for that variable without introducing fractions. On the other hand, it can become messy if fractions are introduced or the system is larger than 2 by 2.
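As a small illustration with a made-up 2 by 2 system (x + 2y = 5 and 3x − y = 1, chosen here only for demonstration), the substitution steps translate directly into arithmetic:

```python
from fractions import Fraction

# From the first equation, x = 5 - 2y; substituting into the second:
# 3*(5 - 2y) - y = 1  =>  15 - 7y = 1  =>  y = 14/7 = 2, then x = 5 - 2*2 = 1.
y = Fraction(15 - 1, 7)   # exact arithmetic, in case the division were not even
x = 5 - 2 * y
print(x, y)   # -> 1 2
```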
The Addition method to make one of the coefficients zero is great if you can readily see what to do and it can be done without introducing fractions.
Note, this is essentially the same as the steps for Gaussian Elimination, except that instead of writing and manipulating only the coefficients and constants, you need to write each equation with the explicit variables included.
The method does not inherently have the orderliness of the Gaussian Elimination process, since each step is generally chosen to be whatever is easiest at that particular step. This method can be applied efficiently to the 2 by 2 situation if "it jumps out" how to make one coefficient zero quickly.
The last method is to use Determinants. Writing out and evaluating the three 2 by 2 determinants is easy and quick (one determinant of the coefficients for the denominator and two determinants, one for each variable, as numerators).
Therefore, the case of two equations in 2 unknowns can be relatively quickly solved using any of these methods. Use the one you feel most comfortable with.
For two or three equations, determinants can be one of the easiest ways. The 2 by 2 determinant can be evaluated quickly by the basic definition and the 3 by 3 determinant by a shortcut method.
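For the running 3 by 3 example, the Determinant approach (Cramer's rule) can be sketched as follows, evaluating each 3 by 3 determinant by cofactor expansion along the first row (the helper names are my own):

```python
from fractions import Fraction

def det3(M):
    """Evaluate a 3-by-3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(M, j, col):
    """Return a copy of M with column j replaced by the constants."""
    return [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(M)]

A = [[3, 4, 2], [2, -1, 5], [1, 0, -1]]
b = [7, 12, 2]
D = det3(A)   # determinant of the coefficient matrix: the common denominator

# Each variable is the ratio of a numerator determinant to D.
solution = [Fraction(det3(replace_col(A, j, b)), D) for j in range(3)]
print(D, solution)   # -> 33 [Fraction(3, 1), Fraction(-1, 1), Fraction(1, 1)]
```

Note that all four determinants (33, 99, −33, 33) are integers; the fractions appear only in the final divisions.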
For larger systems, the work involved in the Determinant approach tends to grow quickly.
However, the determinant rules—factoring a common factor out of a row or column, and adding a multiple of one row or column to another—can be applied to produce several zero elements without difficulty or mess, and the resulting determinant may then be reduced quickly to the 3 by 3 or 2 by 2 situation.
Indeed, for 4 by 4 or larger problems, only consider using determinants if you will actively use these rules to create a row or column with many zeros.
One advantage of the determinant method is that, for integer coefficients and constants, the arithmetic does not involve fractions except for a final division.
As a summary, suggested methods in order for the various situations are:
2 by 2            Substitution
3 by 3            Determinants
4 by 4            Gaussian Elimination (Inverse Matrix for multiple constant sets)
5 by 5 or larger  Gaussian Elimination (Inverse Matrix for multiple constant sets)
Of course, the suggested method may change depending upon the exact set of coefficients. If one or more coefficients are 1, then Substitution may "move up" the list of methods. If several coefficients are zero, then Determinants may become the preferred method.
As a final note, keep in mind that checking the results by substituting the solution back into the original equations is relatively quick and easy—always finish the problem with such a check!
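For the example system, whose solution works out to (x, y, z) = (3, −1, 1), such a check is just three substitutions:

```python
# Substitute the solution back into each original equation.
x, y, z = 3, -1, 1
print(3*x + 4*y + 2*z)   # -> 7
print(2*x - y + 5*z)     # -> 12
print(x - z)             # -> 2
```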