1.5 Find an Elementary Matrix Solutions
Elementary Matrix
An elementary matrix E is a square matrix that generates an elementary row operation on a matrix A (which need not be square) under the multiplication EA.
From: Linear Algebra (Third Edition), 2014
Fred E. Szabo PhD, in The Linear Algebra Survival Guide, 2015
Elementary Matrix
An elementary matrix is a matrix obtained from an identity matrix by applying one elementary row operation to the identity matrix.
Illustration
- An elementary matrix resulting from the interchange of two rows
A = IdentityMatrix[3]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
A = {A[[1]], A[[3]], A[[2]]}
{{1, 0, 0}, {0, 0, 1}, {0, 1, 0}}
- An elementary matrix resulting from multiplication of a row by a nonzero constant
A = IdentityMatrix[3]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
A[[2]] = 3 A[[2]];
A
{{1, 0, 0}, {0, 3, 0}, {0, 0, 1}}
- An elementary matrix resulting from the addition of a multiple of a row to another row
A = IdentityMatrix[3]
{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}
A[[3]] = A[[3]] - 4 A[[1]];
A
{{1, 0, 0}, {0, 1, 0}, {-4, 0, 1}}
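As a quick check (the 3-by-3 test matrix below is an arbitrary choice, not part of the original illustration), multiplying the last elementary matrix into a matrix performs the corresponding row operation on it:
elem = {{1, 0, 0}, {0, 1, 0}, {-4, 0, 1}};
test = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
elem.test
{{1, 2, 3}, {4, 5, 6}, {3, 0, -3}}
The third row of the product is the third row of the test matrix minus 4 times its first row, as expected.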
URL: https://www.sciencedirect.com/science/article/pii/B9780124095205500126
Additional Applications
Stephen Andrilli, David Hecker, in Elementary Linear Algebra (Fourth Edition), 2010
Exercises for Section 8.6
- 1. For each elementary matrix below, determine its corresponding row operation. Also, use the inverse operation to find the inverse of the given matrix.
- ⋆(a)
- ⋆(b)
- (c)
- (d)
- ⋆(e)
- (f)
- 2. Express each of the following as a product of elementary matrices (if possible), in the manner of Example 5:
- ⋆(a)
- (b)
- ⋆(c)
- 3. Let A and B be m × n matrices. Prove that A and B are row equivalent if and only if B = PA for some nonsingular m × m matrix P.
- 4. Prove that if U is an upper triangular matrix with all main diagonal entries nonzero, then U^(-1) exists and is upper triangular. (Hint: Show that the method for calculating the inverse of a matrix does not produce a row of zeroes on the left side of the augmented matrix. Also, show that for each row reduction step, the corresponding elementary matrix is upper triangular. Conclude that U^(-1) is a product of upper triangular matrices and is therefore upper triangular (see Exercise 18(b) in Section 1.5).)
- 5. If E is an elementary matrix, show that E^T is also an elementary matrix. What is the relationship between the row operation corresponding to E and the row operation corresponding to E^T?
- 6. Let F be an elementary n × n matrix. Show that the product AF^T is the matrix obtained by performing a "column" operation on A analogous to one of the three types of row operations. (Hint: What is (AF^T)^T?)
- ►7. Prove Corollary 8.9.
- 8. Consider the homogeneous system AX = O, where A is an n × n matrix. Show that this system has a nontrivial solution if and only if A cannot be expressed as a product of elementary n × n matrices.
- 9. Let A and B be m × n and n × p matrices, respectively, and let E be an m × m elementary matrix.
- (a) Prove that rank(EA) = rank(A).
- (b) Show that if A has k rows of all zeroes, then rank(A) ≤ m − k.
- (c) Show that if A is in reduced row echelon form, then rank(AB) ≤ rank(A). (Use part (b).)
- (d) Use parts (a) and (c) to prove that for a general matrix A, rank(AB) ≤ rank(A).
- (e) Compare this exercise with Exercise 18 in Section 2.3.
- ⋆10. True or False:
- (a) Every elementary matrix is square.
- (b) If A and B are row equivalent matrices, then there must be an elementary matrix E such that B = EA.
- (c) If E_1, …, E_k are n × n elementary matrices, then the inverse of E_1 E_2 ⋯ E_k is E_k ⋯ E_2 E_1.
- (d) If A is a nonsingular matrix, then A^(-1) can be expressed as a product of elementary matrices.
- (e) If R is a row operation, E is its corresponding m × m matrix, and A is any m × n matrix, then the reverse row operation R^(-1) has the property R^(-1)(A) = E^(-1)A.
URL: https://www.sciencedirect.com/science/article/pii/B9780123747518000196
Volume 2
A. de Juan, ... R. Tauler, in Comprehensive Chemometrics, 2009
2.19.2.2 Resolution Based on Elementary Matrix Transformations (Gentle)
The resolution method based on elementary matrix transformations proposed by Manne and Grande,34 hereafter called Gentle, works within the space of the raw data. Its central idea is that the true solutions C and S^T can be obtained by matrix transformation from any pair of matrices, C_o and S_o^T, that optimally fits the raw data matrix X. This starting point agrees with the model in Equation (2), where the link between the initial estimates C_o and S_o^T and the desired chemically meaningful solutions, C and S^T, happens via a transformation matrix R.
Thus, this method starts with an initial set of real spectra from the original matrix, S_o^T, obtained with the use of a pure variable selection method.13 This matrix S_o^T contains good approximate spectra, which are linear combinations of the true spectra sought. Subsequently, an initial C_o matrix is calculated by least squares from X and S_o^T. The pair of matrices C_o and S_o^T obtained in this way reproduces the original data set X with an optimal fit. The matrix C_o is then transformed into the chemically meaningful matrix C via a series of elementary matrix transformation steps, each of which affects a small part of the data. Thus, the series of successive transformations can be expressed as
(12) C_1 = C_o R_o,  C_2 = C_1 R_1,  …,  C = C_n R_n
Or, in a more compact way,
(13) C = C_o R_o R_1 ⋯ R_n
To preserve the fit of the original data matrix X, the spectra matrix S_o^T is successively transformed as
(14) S^T = R_n^(-1) ⋯ R_1^(-1) R_o^(-1) S_o^T
If R is defined as R = R_o R_1 ⋯ R_n, we can easily recall the expression in Equation (2). The particularity of this approach lies in the fact that the successive transformation matrices R_o, R_1, …, R_n are elementary matrices. To understand the way these R_i matrices act on small parts of the concentration profiles and spectra, we reformulate the bilinear CR model in a profilewise way. Let us assume that we have a two-component system. The related CR model, expressed as a sum of the contributions of each dyad of profiles, can be written as
(15) X = c_1 s_1^T + c_2 s_2^T
Adding and subtracting the term k c_2 s_1^T in the above expression and reorganizing the terms, we obtain the following two equations:
(16) X = c_1 s_1^T + k c_2 s_1^T + c_2 s_2^T − k c_2 s_1^T
(17) X = (c_1 + k c_2) s_1^T + c_2 (s_2^T − k s_1^T)
Figure 3 shows the relationship between the original bilinear model in Equation (15) and the transformed one in Equation (17) in matrix form.
Figures 3(a) and 3(c) show the original model in Equation (15) and the model in Equation (17), respectively. The link between the two is the transformation matrix R shown in Figure 3(b). This transformation matrix has ones along the diagonal and only one nonnull off-diagonal element. Matrices with this structure are called elementary matrices.35
The elementary matrix in Figure 3(b) is represented by the algebraic notation E_21(k), where the subindex indicates the position of the nonnull off-diagonal element in the matrix. The inverse of the elementary matrix E_21(k) is E_21(−k). Right multiplication of C by the elementary matrix E_21(k) modifies the first column of C by addition of k times the second profile, that is, c_1 → c_1 + k c_2, and left multiplication of S^T by E_21(−k) modifies the second row of S^T by addition of −k times the first profile, that is, s_2 → s_2 − k s_1.
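A quick numerical check of this pairwise action (the profile matrices below are arbitrary illustrations, not data from the chapter) confirms that the transformed pair reproduces the same X, since E_21(k) E_21(−k) = I:
c = {{1, 0}, {2, 1}, {1, 2}};    (* arbitrary 3 × 2 concentration profiles *)
st = {{1, 2, 0}, {0, 1, 1}};     (* arbitrary 2 × 3 spectra *)
e21 = {{1, 0}, {1/2, 1}};        (* E_21(k) with k = 1/2 *)
e21inv = {{1, 0}, {-1/2, 1}};    (* E_21(-k) *)
(c.e21).(e21inv.st) == c.st
True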
Without loss of generality, for a system with n components, an elementary matrix transformation using matrix E_ji(k) would lead to the replacements c_i → c_i + k c_j and s_j → s_j − k s_i. To represent a concentration profile c_i as a linear combination of all of the others in a data set, the product of the elementary matrices ∏_(j≠i) E_ji(k_j) will lead to the replacement c_i → c_i + Σ_(j≠i) k_j c_j. As a consequence, all spectra with j ≠ i would be modified accordingly, as s_j → s_j − k_j s_i.
Figure 3 and the explanations above have helped to visualize the two main ideas linked to elementary matrix transformations. First, the transformed concentration profiles and spectra are linear combinations of the starting profiles. Second, an elementary matrix transformation affects concentration profiles and spectra pairwise; hence the need for successive elementary matrix transformations to obtain a global modification of C and S^T. The elementary matrix transformations presented are the ones used by the Gentle method to transform the initial concentration and spectra estimates into the real solutions sought in the resolution approach. The strategy used in practice and the conditions that the data set should fulfill to be analyzed are explained below in detail.
The application of the resolution method based on elementary matrix transformations will be shown for the chromatographic system (HPLC-DAD) with four components shown in Figure 1. Once the number of components in the system is known, the steps given below should be followed:
- (a) Generation of initial spectral estimates, S_o^T, from the original data matrix X, by pure variable selection methods and normalization to constant area.
- (b) Calculation of C_o by least squares according to the bilinear resolution model, as C_o = X S_o (S_o^T S_o)^(-1).
- (c) Organization of concentration profiles according to elution order in C_o.
- (d) Application of constraints to the elution profiles via elementary matrix transformations (nonnegativity in the elution profiles and preservation of the correct elution windows).
- (e) Modification of the pure spectra with the inverses of the elementary matrices used in step (d) to provide an optimal reproduction of the original data set.
Step (d) is the core of the algorithm and highlights the fact that the application of constraints in Gentle differs from that in other iterative approaches. Since elementary matrix transformations produce linear combinations of existing profiles, it is logical that the shape of a given concentration profile c_i should be modified mainly by combination with its closest left (c_(i−1)) and/or right (c_(i+1)) neighboring profiles. Therefore, the elementary matrix transformations to eliminate the negative parts of a given concentration profile c_i follow the sequence given below:
- i. Selection of the profile c_i^(m) with the lowest minimum (largest negative contribution), where m is the number of the iterative transformation step.
- ii. See whether c_i^(m) needs to pass through a transformation. This will happen if
- (a) The negative minimum is lower than a preset negative threshold value.
- (b) One of the neighboring profiles, c_(i−1)^(m) or c_(i+1)^(m), is the largest profile above the minimum.
- (c) The largest profile is larger than the preset positive threshold value (the absolute value of the threshold is usually 1% of the absolute maximum in C_o).
- iii. If all conditions in step (ii) are fulfilled, take the largest neighboring profile, for example, c_(i−1)^(m), and replace c_i^(m) by c_i^(m+1) = c_i^(m) + k c_(i−1)^(m), where k is chosen so that the lowest minimum in the profile c_i^(m+1) equals the preset negative threshold value. This is equivalent to performing C_(m+1) = C_m E_(i,i−1)(k).
- iv. Transform the connected spectral profiles accordingly, that is, s_(i−1)^(m+1) = s_(i−1)^(m) − k s_i^(m). This is the elementary matrix transformation S^T_(m+1) = E_(i,i−1)(−k) S^T_m. (For a better understanding of steps (iii) and (iv), see the matrix C_o and the concentration profiles and related spectra after the first elementary transformation given in Figures 4(a) and 4(b).)
- v. Normalize the spectra in S^T_(m+1), rescale the profiles in C_(m+1) accordingly, and go back to step (ii) until none of the concentration profiles fulfills the conditions to be transformed.
Once the concentration profiles are positive, it is possible to check whether the inverse transformations keep the related spectra positive. If the factor k is modified in a gradual manner, keeping the value slightly smaller than the exact optimum, this check can often be avoided.
This is the procedure followed to apply the nonnegativity constraint. After this step, the final solution is obtained by modifying the concentration profiles so that they match their predetermined elution windows, that is, by minimizing the concentration profile elements outside the preset elution window. The initial setting of these windows is done in an automatic way after the nonnegativity correction, taking the values around the maximum that exceed the preset positive threshold. Once this is done, each concentration profile is modified by a product of elementary matrix transformations that will give
(18) c_i^(m+1) = c_i^(m) + Σ_(j≠i) k_j c_j^(m)
where the k_j elements are calculated by linear regression, minimizing the deviation from zero of the elements of c_i^(m) outside the determined elution window. This calculation slightly modifies the initial elution windows and, in order to obtain more reliable solutions, the procedure is repeated a few times, modifying the definition of the elution windows according to the results of the last iterative cycle. Figure 4(c) shows the final profiles recovered for the data set of Figure 1 by elementary matrix transformations after suitable introduction of all needed constraints.
Quality checks of the results can be done by examining the meaningfulness of the resolved profiles; for example, lack of unimodality could be considered a serious fault and would likely be related to a wrong estimation of the number of compounds. In addition, the presence of nonnegative elements in the R matrix is indicative of a successful result. An independent validation of the resolved spectra can be achieved by comparing the spectral shapes recovered with those obtained by subwindow factor analysis (SFA),36 a resolution method mainly focused on the resolution of pure spectra (see the chapter on noniterative curve resolution methods, Chapter 2.18, for further explanation of SFA). Although rotational ambiguity can still exist, if the two methods give the same solution, the results are very likely correct.
An important point is that Gentle is a suitable method for analyzing process data, where concentration profiles evolve sequentially and elution windows can be easily determined. Data sets with less structured concentration profiles, such as environmental data or spectroscopic images, cannot be easily solved using this approach.
URL: https://www.sciencedirect.com/science/article/pii/B9780444527011000508
Matrices
Richard Bronson, ... John T. Saccoman, in Linear Algebra (Third Edition), 2014
Important Terms
- augmented matrix
- block diagonal matrix
- coefficient matrix
- cofactor
- column matrix
- component
- consistent equations
- derived set
- determinant
- diagonal element
- diagonal matrix
- dimension
- directed line segment
- element
- elementary matrix
- elementary row operations
- equivalent directed line segments
- expansion by cofactor
- Gaussian elimination
- homogeneous equations
- identity matrix
- inconsistent equations
- inverse
- invertible matrix
- linear equation
- lower triangular matrix
- LU decomposition
- main diagonal
- mathematical induction
- matrix
- nonhomogeneous equations
- nonsingular matrix
- n-tuple
- order
- partitioned matrix
- pivot
- pivotal condensation
- power of a matrix
- row matrix
- row-reduced form
- scalar
- singular matrix
- skew-symmetric matrix
- square
- submatrix
- symmetric matrix
- transpose
- trivial solution
- upper triangular matrix
- zero matrix
URL: https://www.sciencedirect.com/science/article/pii/B9780123914200000019
The inverse
Richard Bronson, Gabriel B. Costa, in Matrix Methods (Fourth Edition), 2021
3.1 Introduction
Definition 1
An inverse of an n × n matrix A is an n × n matrix B having the property that
(1) AB = BA = I
Here, B is called an inverse of A and is usually denoted by A^(-1). If a square matrix A has an inverse, it is said to be invertible or nonsingular. If A does not possess an inverse, it is said to be singular. Note that inverses are only defined for square matrices. In particular, the identity matrix is invertible and is its own inverse because I·I = I.
Example 1
Determine whether
are inverses for
Solution B is an inverse if and only if AB = BA = I; C is an inverse if and only if AC = CA = I. Here,
while
Thus, B is not an inverse for A, but C is. We may write A^(-1) = C. ■
Definition 1 is a test for checking whether a given matrix is an inverse of another given matrix. In the Final Comments to this chapter we prove that if AB = I for two square matrices of the same order, then A and B commute and BA = I. Thus, we can reduce the checking procedure by half: a matrix B is an inverse for a square matrix A if either AB = I or BA = I; each equality automatically guarantees the other for square matrices. We will show in Section 3.4 that an inverse is unique. If a square matrix has an inverse, it has only one.
Definition 1 does not provide a method for finding inverses. We develop such a procedure in the next section. Still, inverses for some matrices can be found directly.
The inverse for a diagonal matrix D having only nonzero elements on its main diagonal is also a diagonal matrix whose diagonal elements are the reciprocals of the corresponding diagonal elements of D. That is, if
then,
It is easy to show that if any diagonal element in a diagonal matrix is zero, then that matrix is singular. (See Problem 57.)
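For instance, a minimal Mathematica sketch with an arbitrary diagonal matrix:
d = DiagonalMatrix[{2, 5, 10}];
Inverse[d] == DiagonalMatrix[{1/2, 1/5, 1/10}]
True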
An elementary matrix E is a square matrix that generates an elementary row operation on a matrix A (which need not be square) under the multiplication EA. Elementary matrices are constructed by applying the desired elementary row operation to an identity matrix of appropriate order. The appropriate order for both I and E is a square matrix having as many columns as there are rows in A; then, the multiplication EA is defined. Because identity matrices contain many zeros, the process for constructing elementary matrices can be simplified still further. After all, nothing is accomplished by interchanging the positions of zeros, multiplying zeros by nonzero constants, or adding zeros to zeros.
- (i) To construct an elementary matrix that interchanges the ith row with the jth row, begin with an identity matrix of the appropriate order. First interchange the unity element in the (i, i) position with the zero in the (j, i) position, and then interchange the unity element in the (j, j) position with the zero in the (i, j) position.
- (ii) To construct an elementary matrix that multiplies the ith row of a matrix by the nonzero scalar k, replace the unity element in the (i, i) position of the identity matrix of appropriate order with the scalar k.
- (iii) To construct an elementary matrix that adds to the jth row of a matrix k times the ith row, replace the zero element in the (j, i) position of the identity matrix of appropriate order with the scalar k.
Example 2
Find elementary matrices that, when multiplied on the right by any 4 × 3 matrix A, will (a) interchange the second and fourth rows of A, (b) multiply the third row of A by 3, and (c) add to the fourth row of A −5 times its second row.
Solution
(a) (b) (c) ■
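The displayed solution matrices are not reproduced in this excerpt, but they can be regenerated in Mathematica by following rules (i)-(iii) above (the symbol names are ours):
n = 4;
swap24 = IdentityMatrix[n]; swap24[[{2, 4}]] = swap24[[{4, 2}]];   (* (a) interchange rows 2 and 4 *)
scale3 = IdentityMatrix[n]; scale3[[3, 3]] = 3;                    (* (b) multiply row 3 by 3 *)
add42 = IdentityMatrix[n]; add42[[4, 2]] = -5;                     (* (c) add -5 times row 2 to row 4 *)
MatrixForm /@ {swap24, scale3, add42}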
Example 3
Find elementary matrices that, when multiplied on the right by any 3 × 5 matrix A, will (a) interchange the first and second rows of A, (b) multiply the third row of A by −0.5, and (c) add to the third row of A −1 times its second row.
Solution
(a) (b) (c) ■
The inverse of an elementary matrix that interchanges two rows is the matrix itself; it is its own inverse. The inverse of an elementary matrix that multiplies one row by a nonzero scalar k is obtained by replacing k by 1/k. The inverse of an elementary matrix that adds to one row a constant k times another row is obtained by replacing the scalar k by −k.
Example 4
Compute the inverses of the elementary matrices found in Example 2.
Solution
(a) (b) (c) ■
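Again, the displayed matrices are omitted in this excerpt; using the matrices constructed in the sketch after Example 2, the three inverse rules can be checked directly:
Inverse[swap24] == swap24                              (* its own inverse *)
True
Inverse[scale3] == ReplacePart[scale3, {3, 3} -> 1/3]  (* k replaced by 1/k *)
True
Inverse[add42] == ReplacePart[add42, {4, 2} -> 5]      (* k replaced by -k *)
True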
Example 5
Compute the inverses of the elementary matrices found in Example 3.
Solution
(a) (b) (c)
Finally, if A can be partitioned into the block diagonal form,
then A is invertible if and only if each of the diagonal blocks A_1, A_2, …, A_n is invertible and
Example 6
Find the inverse of
Solution Set
then, A is in the block diagonal form
Here A_1 is a diagonal matrix with nonzero diagonal elements, A_2 is an elementary matrix that adds to the second row four times the first row, and A_3 is an elementary matrix that interchanges the second and third rows; thus,
and
URL: https://www.sciencedirect.com/science/article/pii/B9780128184196000034
Linear Transformations from a Geometric Viewpoint
J. Douglas Carroll, Paul E. Green, in Mathematical Tools for Applied Multivariate Analysis, 1997
4.7.1 Elementary Operations
Elementary operations play an essential role in the solution of sets of simultaneous equations. Illustratively taking the case of the rows of a transformation matrix, there are three basic operations–called elementary row operations–that can be used to transform one matrix into another. We may
- 1. interchange any two rows;
- 2. multiply any row by a nonzero scalar;
- 3. add to any given row a scalar multiple of some other row.
If we change some matrix A into another matrix B by the use of elementary row operations, we say that B is row equivalent to A.
Elementary row operations involve the multiplication of A by special kinds of matrices that effect the above transformations. However, we could just as easily talk about elementary column operations–the same kinds as those shown above–that are applied to the columns of A. A matrix so transformed would be called column equivalent to the original matrix. To simplify our discussion, we illustrate the ideas via elementary row operations. The reader should bear in mind, however, that the same approach is applicable to the columns of A.
To illustrate how elementary row operations can be applied to the general problem of solving a set of simultaneous equations, let us again consider the two equations described earlier:
Suppose we first multiply all members of the first equation by and then add the result to those of the second equation:
Next, let us multiply the second equation by . If so, we obtain
All these sets of equations are row equivalent in the sense that all three sets have the same solution:
and each can be transformed to either of the others via elementary row operations. However, it is clear that the solution to the third set of equations is most apparent and easily found.
While elementary row operations are useful in the general task of solving sets of simultaneous equations, this is not their only desirable feature. A second, and major, attraction is the fact that elementary operations (row or column), as applied to a matrix A, do not change its rank. Moreover, as will be shown, elementary operations transform the given matrix in such a way as to make its rank easy to determine by inspection. As it turns out, all three sets of equations above have the same rank (rank 2) since they are all equivalent in terms of elementary row operations.
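A minimal Mathematica illustration of rank invariance (the matrix and the operation are arbitrary choices):
a = {{1, 2, 3}, {2, 4, 6}, {1, 0, 1}};
e = {{1, 0, 0}, {-2, 1, 0}, {0, 0, 1}};   (* add -2 times row 1 to row 2 *)
{MatrixRank[a], MatrixRank[e.a]}
{2, 2}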
Elementary row operations are performed by a special set of square, nonsingular matrices called elementary matrices. An elementary matrix is a nonsingular matrix that can be obtained from the identity matrix by an elementary row operation. For example, if we wanted to interchange two rows of a matrix, we could do so by means of the permutation matrix
For example, if we have the point x in two dimensions, the premultiplication of x by the permutation matrix above would yield:11
We note that the coordinates of x have, indeed, been permuted.
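In Mathematica terms (the coordinates below are an assumed example, since the original point is not reproduced in this excerpt):
perm = {{0, 1}, {1, 0}};
x = {3, 5};
perm.x
{5, 3}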
As mentioned above, elementary matrices are nonsingular. In the 2 × 2 matrix case, the set of elementary matrices consists of the following:
- I. Permutation
- II. Stretches
- III. Shears
Continuing with the numerical example involving , we have
Stretches
Shears
But, there is nothing stopping us from applying, in some prescribed order, a series of premultiplications by elementary matrices. Furthermore, the product of a set of nonsingular matrices will itself be nonsingular.
The geometric character of permutations, stretches, and shears has already been illustrated in Section 4.5. Here we are interested in two major applications of elementary row operations and the matrices that represent them:
- 1. determining the rank of a matrix, and
- 2. finding the inverse of a matrix, when such an inverse exists.
Each application is described in turn.
URL: https://www.sciencedirect.com/science/article/pii/B9780121609542500056
Numerical Analysis
John N. Shoosmith, in Encyclopedia of Physical Science and Technology (Third Edition), 2003
V.C Nonsymmetric Matrices
The problem of finding the eigenvalues of a nonsymmetric matrix is inherently less stable than the symmetric case; furthermore, some or all of the eigenvalues and eigenvectors may be complex.
The first stage is to "balance" the matrix by applying similarity transformations in order to make the maximum element in corresponding rows and columns approximately equal. Next, the matrix is reduced to Hessenberg form by applying stabilized elementary similarity transformations. An upper Hessenberg matrix has the form
The elementary matrices are similar in form to those used in the LU factorization (Gaussian elimination) process, except that the pivotal element is taken to be one element below the diagonal. As in LU factorization for general matrices, the rows are interchanged to maximize the pivotal element; however, a difference is that for each elementary matrix multiplying A on the left, its inverse is multiplied on the right, and each row interchange is accompanied by the corresponding column interchange, in order to preserve the eigenvalues.
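The key invariance can be sketched in Mathematica (the matrix and the elementary factor are arbitrary illustrations): multiplying by an elementary matrix on the left and by its inverse on the right is a similarity transformation, so the characteristic polynomial, and hence the eigenvalues, are unchanged.
a = {{4, 1, 2}, {3, 5, 1}, {2, 2, 6}};
m = {{1, 0, 0}, {-3/4, 1, 0}, {0, 0, 1}};   (* an elementary similarity factor *)
Expand[CharacteristicPolynomial[a, t] - CharacteristicPolynomial[m.a.Inverse[m], t]] == 0
True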
Since real nonsymmetric matrices may have complex conjugate pairs of eigenvalues, an application of the QR algorithm with origin shift as described for symmetric matrices would involve computing with complex numbers. To avoid this, the double-shift QR algorithm is usually applied to the Hessenberg matrix H^(0) obtained in the first stage. The kth iteration of this method is
The shifts σ_(k,1) and σ_(k,2) are taken to be the eigenvalues of the lowest 2 × 2 submatrix of H^(k); however, when convergence of this submatrix is ascertained, the method is applied to the remaining upper submatrix in the same way. Eventually the sequence H^(k), k = 0, 1, 2, …, will converge to a triangular matrix, except that there will be 2 × 2 blocks on the diagonal for each pair of complex eigenvalues.
As before, the transformations may be accumulated if all of the eigenvectors are required. If the eigenvector associated with a specific eigenvalue is required, inverse iteration may be used (albeit with complex arithmetic if the eigenvalue is complex).
URL: https://www.sciencedirect.com/science/article/pii/B0122274105005056
SOLUTION OF EQUATIONS
M.V.K. Chari, S.J. Salon, in Numerical Methods in Electromagnetism, 2000
The LDL^T Decomposition
If A is symmetric, then the decomposition can be made in the more convenient form A = LDL^T, where L is unit lower triangular and D is a diagonal matrix. We see this as follows: we have shown that A can be expressed as A = LDU′. In this case A = A^T = (LDU′)^T = U′^T D^T L^T. Here U′^T is unit lower triangular, D^T is diagonal, and L^T is upper triangular. As U′^T D^T L^T is a decomposition of A and this decomposition is unique, we deduce that U′ = L^T. If A is positive definite, then the elements of D are positive.
We will now show that Gauss elimination, triangular factorization, and LU decomposition involve the same operations. We will use the following example [125]:
(11.38)
In matrix notation
(11.39)
Each step of the Gaussian elimination procedure is what we have defined as an elementary row operation. Each corresponds to the premultiplication of A by an elementary matrix. We can therefore express the Gaussian elimination as follows:
(11.40)
This procedure will transform the original matrix into an identity matrix. Consider the last two matrices. Premultiplying the original matrix by the elementary matrix whose first row is [1/2, 0, 0] divides the first row by 2, thus putting a 1 in the (1,1) place. The second row, [−1, 1, 0], subtracts the first row from the second, putting a 0 in the (2,1) position. The third row, [−3/2, 0, 1], puts a zero in the (3,1) place. The reader can confirm that the process results in the identity matrix. Note that all of the elementary matrices are triangular, that is,
(11.41)
Because the product of upper (lower) triangular matrices is an upper (lower) triangular matrix, we can write
(11.42)
Here the subscript π is used to indicate a product of matrices. It is also true that the inverse of an upper (lower) triangular matrix is an upper (lower) triangular matrix. Therefore
(11.43)
The solution by Gauss elimination involves two stages: (1) forward elimination (premultiplication by L_π) brings the system into an upper triangular form (U = L_π A), and (2) back-substitution (premultiplication by U_π) transforms the system into an identity matrix (I = U_π L_π A).
Returning to Ax = y,
(11.44)
Therefore the two methods are equivalent.
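Since the worked system (11.38) is not reproduced in this excerpt, here is a small assumed 3 × 3 example in Mathematica showing forward elimination as a product of lower triangular elementary matrices:
a = {{2, 1, 1}, {1, 3, 1}, {3/2, 1, 2}};    (* assumed matrix, not the text's example *)
e1 = {{1/2, 0, 0}, {0, 1, 0}, {0, 0, 1}};   (* divide row 1 by 2 *)
e2 = {{1, 0, 0}, {-1, 1, 0}, {0, 0, 1}};    (* subtract row 1 from row 2 *)
e3 = {{1, 0, 0}, {0, 1, 0}, {-3/2, 0, 1}};  (* subtract 3/2 times row 1 from row 3 *)
lpi = e3.e2.e1;                             (* product of lower triangular matrices, itself lower triangular *)
lpi.a
{{1, 1/2, 1/2}, {0, 5/2, 1/2}, {0, 1/4, 5/4}}
The result has zeros below the diagonal in the first column; repeating the process on the remaining columns yields the upper triangular U = L_π A.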
URL: https://www.sciencedirect.com/science/article/pii/B9780126157604500122
Fred E. Szabo PhD, in The Linear Algebra Survival Guide, 2015
Illustration
- An invertible 2-by-2 matrix
MatrixForm[A1 = {{1, 2}, {3, 4}}]
MatrixForm[A2 = Inverse[A1]]
A1.A2 == IdentityMatrix[2] == A2.A1
True
- Invertible 2-by-2 elementary matrices
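The displayed matrices are not reproduced in this excerpt; one representative of each of the three types (our choice of entries) is:
e1 = {{0, 1}, {1, 0}};    (* interchange the two rows *)
e2 = {{3, 0}, {0, 1}};    (* multiply the first row by 3 *)
e3 = {{1, 0}, {-2, 1}};   (* add -2 times the first row to the second *)
Map[Inverse, {e1, e2, e3}]
{{{0, 1}, {1, 0}}, {{1/3, 0}, {0, 1}}, {{1, 0}, {2, 1}}}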
- Invertible 3-by-3 matrices
Adding 9 times the second row to the first row
MatrixForm[A2 = E1.A]
Adding 4 times the second row to the third row
MatrixForm[A3 = E2.A2]
Interchanging the first and second rows
MatrixForm[A4 = E3.A3]
Multiplying the first row by −1
MatrixForm[A5 = E4.A4]
Dividing the second row by 30
MatrixForm[A6 = E5.A5]
Subtracting 12 times the second row from the third row
MatrixForm[A7 = E6.A6]
Adding 3 times the second row to the first row
MatrixForm[A8 = E7.A7]
Multiplying the third row by 5/43
MatrixForm[A9 = E8.A8]
Subtracting 1/30 times the third row from the second row
MatrixForm[A10 = E9.A9]
Subtracting 1/10 times the third row from the second row
MatrixForm[A11 = E10.A10]
The product of the ten elementary matrices used to convert the matrix A to the 3-by-3 identity matrix is the inverse B of the matrix A.
B = E10.E9.E8.E7.E6.E5.E4.E3.E2.E1 == Inverse[A]
True
URL: https://www.sciencedirect.com/science/article/pii/B9780124095205500163
Review of Linear Algebra (Optional)
Bernard Kolman, Robert E. Beck, in Elementary Linear Programming with Applications (Second Edition), 1995
0.3 THE INVERSE OF A MATRIX
In this section we restrict our attention to square matrices.
DEFINITION.
An n × n matrix A is called nonsingular or invertible if there exists an n × n matrix B such that
AB = BA = I_n
The matrix B is called the inverse of A. If no such matrix B exists, then A is called singular or noninvertible. If the inverse of A exists, we shall write it as A^(-1). Thus
AA^(-1) = A^(-1)A = I_n
EXAMPLE 1.
Let
Since
AB = BA = I_2
it follows that B is the inverse of A or B = A −1 and that A is nonsingular.
EXAMPLE 2.
Let
Does A have an inverse? If it does, let such an inverse be denoted by
Then
Thus,
which means that we must have
x + 3z = 1
2x + 6z = 0
Since this linear system has no solution, we conclude that A has no inverse.
In practice, if the matrix B is proposed as the inverse of A, we need only check that AB = I_n because it can be shown that if AB = I_n, then also BA = I_n.
THEOREM 0.3. (PROPERTIES OF INVERSE).
- (a) If A is nonsingular, then A^(-1) is nonsingular and (A^(-1))^(-1) = A.
- (b) If A and B are nonsingular, then AB is nonsingular and (AB)^(-1) = B^(-1)A^(-1) (note that the order changes).
- (c) If A is nonsingular, then A^T is nonsingular and (A^T)^(-1) = (A^(-1))^T.
We now develop some results that will yield a practical method for computing the inverse of a matrix. The method is based on the properties of elementary matrices, which we discuss next.
DEFINITION.
An n × n elementary matrix of type I, type II, or type III is a matrix obtained from the identity matrix I_n by performing a single elementary row operation of type I, type II, or type III, respectively.
EXAMPLE 3.
Matrices E_1, E_2, and E_3 as defined below are elementary matrices.
THEOREM 0.4.
Let A be an m × n matrix, and let the matrix B be obtained from A by performing an elementary row operation on A. Let E be the elementary matrix obtained by performing the same elementary row operation on I_m as that performed on A. Then B = EA.
EXAMPLE 4.
Let
If we add −2 times row 1 to row 2, we obtain
Performing the same elementary row operation on I_3, we obtain the elementary matrix
It is now easy to verify that B = EA.
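Since the matrices of Example 4 are not reproduced in this excerpt, the verification can be run in Mathematica with an assumed A:
a = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};     (* assumed; the book's A is not shown here *)
b = a; b[[2]] -= 2 a[[1]];                 (* add -2 times row 1 to row 2 *)
e = {{1, 0, 0}, {-2, 1, 0}, {0, 0, 1}};    (* same operation applied to the identity *)
b == e.a
True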
The following theorem follows easily from the definition of row equivalence of matrices and Theorem 0.4.
THEOREM 0.5.
The m × n matrices A and B are row equivalent if and only if
B = E_k E_(k−1) ⋯ E_2 E_1 A,
where E_1, E_2, …, E_k are elementary matrices.
THEOREM 0.6.
An elementary matrix is nonsingular and its inverse is an elementary matrix of the same type.
THEOREM 0.7.
An n × n matrix A is nonsingular if and only if A is a product of elementary matrices.
COROLLARY 0.1.
An n × n matrix A is nonsingular if and only if A is row equivalent to I_n.
Suppose now that A is nonsingular. Then, by Corollary 0.1 and Theorem 0.5, there exist elementary matrices E_1, E_2, …, E_k such that
I_n = E_k E_(k−1) ⋯ E_2 E_1 A.
Procedure for Computing A^(-1)
We now have an effective algorithm for computing A^(-1). We use elementary row operations to transform A to I_n; the product of the elementary matrices E_k E_(k−1) ⋯ E_2 E_1 gives A^(-1). The algorithm can be efficiently organized as follows. Form the n × 2n matrix [A | I_n] and perform elementary row operations to transform this matrix to [I_n | A^(-1)]. Every elementary row operation that is performed on a row of A is also performed on the corresponding row of I_n.
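A compact Mathematica sketch of this procedure (the matrix is an arbitrary invertible example):
a = {{1, 1, 1}, {0, 2, 3}, {5, 5, 1}};
aug = Join[a, IdentityMatrix[3], 2];    (* form [A | I_3] *)
inv = RowReduce[aug][[All, 4 ;; 6]];    (* right half of [I_3 | A^(-1)] *)
inv == Inverse[a]
True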
EXAMPLE 5.
Let
Then
Thus,
Hence,
It can be shown that if the matrix C in reduced row echelon form obtained from A has a row of zeros, then A is singular. Moreover, once a matrix obtained from A by this process has a row of zeros, the final matrix C in reduced row echelon form will also have a row of zeros. Thus, we can stop our calculations as soon as we get a matrix from A with a row of zeros and conclude that A^(-1) does not exist.
EXAMPLE 6.
Let
To find A^(-1) we proceed as above:
Since we have a matrix with a row of zeros under A, we stop and conclude that A is singular.
The method just presented for calculating A^(-1) has the nice feature that we do not have to know in advance whether or not A^(-1) exists. That is, we set out to find A^(-1), and we either find it or determine that it does not exist.
Suppose now that A is an n × n matrix, and consider the linear system
(1) Ax = b
If A is nonsingular, then A^(-1) exists. Multiplying both sides of (1) by A^(-1) on the left, we have
A^(-1)Ax = A^(-1)b
Thus, if A^(-1) is known, then the solution to (1) can be simply calculated as x = A^(-1)b.
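A one-line Mathematica check (arbitrary nonsingular A and right-hand side b):
a = {{1, 1, 1}, {0, 2, 3}, {5, 5, 1}};
b = {6, 11, 17};
Inverse[a].b == LinearSolve[a, b]
True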
URL: https://www.sciencedirect.com/science/article/pii/B9780124179103500036
Source: https://www.sciencedirect.com/topics/mathematics/elementary-matrix