Chapter 5
Eigenvalues and Eigenvectors
Linear maps from one vector space to another vector space were the objects of study in Chapter 3.
Now we begin our investigation of operators, which are linear maps from a vector space to itself.
Their study constitutes the most important part of linear algebra.
To learn about an operator, we might try restricting it to a smaller subspace.
Asking for that restriction to be an operator will lead us to the notion of invariant subspaces.
Each one-dimensional invariant subspace arises from a vector that the operator maps into a scalar multiple of the vector.
This path will lead us to eigenvectors and eigenvalues.
We will then prove one of the most important results in linear algebra: every operator on a finite-dimensional nonzero complex vector space has an eigenvalue.
This result will allow us to show that for each operator on a finite-dimensional complex vector space, there is a basis of the vector space with respect to which the matrix of the operator has at least almost half its entries equal to 0 .
standing assumptions for this chapter
• F denotes R or C.
• V denotes a vector space over F.
[Photo: Hans-Peter Postel, CC BY] Statue of Leonardo of Pisa (1170–1250, approximate dates), also known as Fibonacci.
Exercise 21 in Section 5D shows how linear algebra can be used to find the explicit formula for the Fibonacci sequence shown on the front cover.
5A Invariant Subspaces
Eigenvalues
5.1 definition: operator
A linear map from a vector space to itself is called an operator.
Recall that we defined the notation ℒ(V) to mean ℒ(V, V).
Suppose T ∈ ℒ(V). If m ≥ 2 and V = V₁ ⊕ ⋯ ⊕ Vₘ, where each Vₖ is a nonzero subspace of V, then to understand the behavior of T we only need to understand the behavior of each T|_{Vₖ}; here T|_{Vₖ} denotes the restriction of T to the smaller domain Vₖ.
Dealing with T|_{Vₖ} should be easier than dealing with T because Vₖ is a smaller vector space than V.
However, if we intend to apply tools useful in the study of operators (such as taking powers), then we have a problem: T|_{Vₖ} may not map Vₖ into itself; in other words, T|_{Vₖ} may not be an operator on Vₖ.
Thus we are led to consider only decompositions of V of the form above in which T maps each Vₖ into itself. Hence we now give a name to subspaces of V that get mapped into themselves by T.
5.2 definition: invariant subspace
Suppose T ∈ ℒ(V). A subspace U of V is called invariant under T if Tu ∈ U for every u ∈ U.
Thus U is invariant under T if T|_U is an operator on U.
5.3 example: subspace invariant under differentiation operator
Suppose that T ∈ ℒ(𝒫(R)) is defined by Tp = p′. Then 𝒫₄(R), which is a subspace of 𝒫(R), is invariant under T because if p ∈ 𝒫(R) has degree at most 4, then p′ also has degree at most 4.
5.4 example: four invariant subspaces, not necessarily all different
If T ∈ ℒ(V), then the following subspaces of V are all invariant under T.
{0}      The subspace {0} is invariant under T because if u ∈ {0}, then u = 0 and hence Tu = 0 ∈ {0}.
V        The subspace V is invariant under T because if u ∈ V, then Tu ∈ V.
null T   The subspace null T is invariant under T because if u ∈ null T, then Tu = 0, and hence Tu ∈ null T.
range T  The subspace range T is invariant under T because if u ∈ range T, then Tu ∈ range T.
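The short NumPy sketch below is not part of the original text; it is a numerical check of the last two items of Example 5.4 for a specific matrix acting on R⁴. The matrix T and the helper contained_in are my own choices, introduced only for illustration.

```python
# Numerically verify that T(null T) ⊆ null T and T(range T) ⊆ range T
# for a concrete (deliberately non-invertible) operator on R^4.
import numpy as np

T = np.array([[1., 2., 0., 0.],
              [0., 0., 0., 0.],
              [3., 6., 0., 0.],
              [0., 0., 0., 5.]])

def contained_in(span_basis, vectors, tol=1e-10):
    """Check that each column of `vectors` lies in the column span of `span_basis`."""
    if span_basis.shape[1] == 0:
        return np.allclose(vectors, 0, atol=tol)
    coeffs, *_ = np.linalg.lstsq(span_basis, vectors, rcond=None)
    return np.allclose(span_basis @ coeffs, vectors, atol=tol)

# Orthonormal bases for range T and null T via the SVD.
U, s, Vt = np.linalg.svd(T)
rank = int(np.sum(s > 1e-10))
range_T = U[:, :rank]     # basis of range T
null_T = Vt[rank:].T      # basis of null T

print(contained_in(range_T, T @ range_T))   # True: T maps range T into range T
print(contained_in(null_T, T @ null_T))     # True: T maps null T into null T
```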
Must an operator T ∈ ℒ(V) have any invariant subspaces other than {0} and V?
Later we will see that this question has an affirmative answer if V is finite-dimensional and dim V > 1 (for F = C) or dim V > 2 (for F = R); see 5.19 and Exercise 29 in Section 5B.
The previous example noted that null T and range T are invariant under T. However, these subspaces do not necessarily provide easy answers to the question above about the existence of invariant subspaces other than {0} and V, because null T may equal {0} and range T may equal V (this happens when T is invertible).
We will return later to a deeper study of invariant subspaces.
Now we turn to an investigation of the simplest possible nontrivial invariant subspaces: invariant subspaces of dimension one.
Take any v ∈ V with v ≠ 0 and let U equal the set of all scalar multiples of v:
U = {λv : λ ∈ F} = span(v).
Then U is a one-dimensional subspace of V (and every one-dimensional subspace of V is of this form for an appropriate choice of v).
If U is invariant under an operator T ∈ ℒ(V), then Tv ∈ U, and hence there is a scalar λ ∈ F such that Tv = λv. Conversely, if Tv = λv for some λ ∈ F, then span(v) is a one-dimensional subspace of V invariant under T.
The equation
Tv = λv,
which we have just seen is intimately connected with one-dimensional invariant subspaces, is important enough that the scalars λ and vectors v satisfying it are given special names.
5.5 definition: eigenvalue
Suppose T ∈ ℒ(V). A number λ ∈ F is called an eigenvalue of T if there exists v ∈ V such that v ≠ 0 and Tv = λv.
The word eigenvalue is half-German, half-English. The German prefix eigen means "own" in the sense of characterizing an intrinsic property.
In the definition above, we require that v ≠ 0 because every scalar λ ∈ F satisfies T0 = λ0.
The comments above show that V has a one-dimensional subspace invariant under T if and only if T has an eigenvalue.
5.6 example: eigenvalue
Define an operator T ∈ ℒ(F³) by
T(x, y, z) = (7x + 3z, 3x + 6y + 9z, −6y)
for (x, y, z) ∈ F³. Then T(3, 1, −1) = (18, 6, −6) = 6(3, 1, −1). Thus 6 is an eigenvalue of T.
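The following small check (mine, not the book's) redoes the computation of Example 5.6 numerically, assuming NumPy; the matrix is the matrix of T with respect to the standard basis of F³.

```python
# Verify that (3, 1, -1) is an eigenvector of T with eigenvalue 6.
import numpy as np

T = np.array([[7., 0., 3.],
              [3., 6., 9.],
              [0., -6., 0.]])
v = np.array([3., 1., -1.])

print(T @ v)                        # [18.  6. -6.]  ==  6 * v
print(np.allclose(T @ v, 6 * v))    # True, so 6 is an eigenvalue of T
print(np.linalg.eigvals(T))         # 6 appears among the computed eigenvalues
```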
The equivalences in the next result, along with many deep results in linear algebra, are valid only in the context of finite-dimensional vector spaces.
5.7 equivalent conditions to be an eigenvalue
Suppose V is finite-dimensional, T ∈ ℒ(V), and λ ∈ F.
Then the following are equivalent.
(a) λ is an eigenvalue of T.
(b) T − λI is not injective.
(c) T − λI is not surjective.
(d) T − λI is not invertible.
Reminder: I ∈ ℒ(V) is the identity operator. Thus Iv = v for all v ∈ V.
Proof  Conditions (a) and (b) are equivalent because the equation Tv = λv is equivalent to the equation (T − λI)v = 0. Conditions (b), (c), and (d) are equivalent by 3.65.
5.8 definition: eigenvector
Suppose T ∈ ℒ(V) and λ ∈ F is an eigenvalue of T. A vector v ∈ V is called an eigenvector of T corresponding to λ if v ≠ 0 and Tv = λv.
In other words, a nonzero vector v ∈ V is an eigenvector of an operator T ∈ ℒ(V) if and only if Tv is a scalar multiple of v.
Because Tv = λv if and only if (T − λI)v = 0, a vector v ∈ V with v ≠ 0 is an eigenvector of T corresponding to λ if and only if v ∈ null(T − λI).
5.9 example: eigenvalues and eigenvectors
Suppose T ∈ ℒ(F²) is defined by T(w, z) = (−z, w).
(a) First consider the case F = R. Then T is a counterclockwise rotation by 90° about the origin in R². An operator has an eigenvalue if and only if there exists a nonzero vector in its domain that gets sent by the operator to a scalar multiple of itself. A 90° counterclockwise rotation of a nonzero vector in R² cannot equal a scalar multiple of itself. Conclusion: if F = R, then T has no eigenvalues (and thus has no eigenvectors).
(b) Now consider the case F = C. To find eigenvalues of T, we must find the scalars λ such that T(w, z) = λ(w, z) has some solution other than w = z = 0. The equation T(w, z) = λ(w, z) is equivalent to the simultaneous equations
5.10   −z = λw,   w = λz.
Substituting the value for w given by the second equation into the first equation gives
−z = λ²z.
Now z cannot equal 0 [otherwise 5.10 implies that w = 0; we are looking for solutions to 5.10 such that (w, z) is not the 0 vector], so the equation above leads to the equation
−1 = λ².
The solutions to this equation are λ = i and λ = −i. You can verify that i and −i are eigenvalues of T. Indeed, the eigenvectors corresponding to the eigenvalue i are the vectors of the form (w, −wi), with w ∈ C and w ≠ 0. Furthermore, the eigenvectors corresponding to the eigenvalue −i are the vectors of the form (w, wi), with w ∈ C and w ≠ 0.
In the next proof, we again use the equivalence
Tv = λv ⟺ (T − λI)v = 0.
5.11 linearly independent eigenvectors
Suppose T ∈ ℒ(V). Then every list of eigenvectors of T corresponding to distinct eigenvalues of T is linearly independent.
Proof  Suppose the desired result is false. Then there exists a smallest positive integer m such that there exists a linearly dependent list v₁, …, vₘ of eigenvectors of T corresponding to distinct eigenvalues λ₁, …, λₘ of T (note that m ≥ 2 because an eigenvector is, by definition, nonzero). Thus there exist a₁, …, aₘ ∈ F, none of which are 0 (because of the minimality of m), such that
a₁v₁ + ⋯ + aₘvₘ = 0.
Apply T − λₘI to both sides of the equation above, getting
a₁(λ₁ − λₘ)v₁ + ⋯ + aₘ₋₁(λₘ₋₁ − λₘ)vₘ₋₁ = 0.
Because the eigenvalues λ₁, …, λₘ are distinct, none of the coefficients above equal 0. Thus v₁, …, vₘ₋₁ is a linearly dependent list of m − 1 eigenvectors of T corresponding to distinct eigenvalues, contradicting the minimality of m. This contradiction completes the proof.
The result above leads to a short proof of the result below, which puts an upper bound on the number of distinct eigenvalues that an operator can have.
5.12 operator cannot have more eigenvalues than dimension of vector space
Suppose V is finite-dimensional. Then each operator on V has at most dim V distinct eigenvalues.
Proof  Let T ∈ ℒ(V). Suppose λ₁, …, λₘ are distinct eigenvalues of T. Let v₁, …, vₘ be corresponding eigenvectors. Then 5.11 implies that the list v₁, …, vₘ is linearly independent. Thus m ≤ dim V (see 2.22), as desired.
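Before moving on, here is a short numerical illustration (not from the book) of Example 5.9 and of 5.11/5.12, assuming NumPy. NumPy computes eigenvalues over C, so the rotation matrix shows the eigenvalues ±i even though it has no real eigenvalues.

```python
import numpy as np

T = np.array([[0., -1.],
              [1., 0.]])            # T(w, z) = (-z, w) in the standard basis

vals, vecs = np.linalg.eig(T)
print(vals)                         # approximately [i, -i]; none of them is real

# 5.11/5.12: eigenvectors for distinct eigenvalues are linearly independent,
# so an operator on a 2-dimensional space has at most 2 distinct eigenvalues.
print(np.linalg.matrix_rank(vecs))  # 2: the two eigenvectors are independent
```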
Polynomials Applied to Operators
The main reason that a richer theory exists for operators (which map a vector space into itself) than for more general linear maps is that operators can be raised to powers. In this subsection we define that notion and the concept of applying a polynomial to an operator. This concept will be the key tool that we use in the next section when we prove that every operator on a nonzero finite-dimensional complex vector space has an eigenvalue.
If T is an operator, then TT makes sense (see 3.7) and is also an operator on the same vector space as T. We usually write T² instead of TT. More generally, we have the following definition of Tᵐ.
5.13 notation: Tᵐ
Suppose T ∈ ℒ(V) and m is a positive integer.
• Tᵐ ∈ ℒ(V) is defined by Tᵐ = T⋯T (with T appearing m times).
• T⁰ is defined to be the identity operator I on V.
• If T is invertible with inverse T⁻¹, then T⁻ᵐ ∈ ℒ(V) is defined by T⁻ᵐ = (T⁻¹)ᵐ.
You should verify that if T is an operator, then
TᵐTⁿ = Tᵐ⁺ⁿ  and  (Tᵐ)ⁿ = Tᵐⁿ,
where m and n are arbitrary integers if T is invertible and are nonnegative integers if T is not invertible.
Having defined powers of an operator, we can now define what it means to apply a polynomial to an operator.
5.14 notation: p(T)
Suppose T ∈ ℒ(V) and p ∈ 𝒫(F) is a polynomial given by
p(z) = a₀ + a₁z + a₂z² + ⋯ + aₘzᵐ
for all z ∈ F. Then p(T) is the operator on V defined by
p(T) = a₀I + a₁T + a₂T² + ⋯ + aₘTᵐ.
This is a new use of the symbol p because we are applying it to operators, not just elements of F.
The idea here is that to evaluate p(T), we simply replace z with T in the expression defining p. Note that the constant term a₀ in p(z) becomes the operator a₀I (which is a reasonable choice because a₀ = a₀z⁰ and thus we should replace a₀ with a₀T⁰, which equals a₀I).
5.15 example: a polynomial applied to the differentiation operator
Suppose D ∈ ℒ(𝒫(R)) is the differentiation operator defined by Dq = q′ and p is the polynomial defined by p(x) = 7 − 3x + 5x². Then p(D) = 7I − 3D + 5D². Thus
(p(D))q = 7q − 3q′ + 5q″
for every q ∈ 𝒫(R).
If we fix an operator T ∈ ℒ(V), then the function from 𝒫(F) to ℒ(V) given by p ↦ p(T) is linear, as you should verify.
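The sketch below (mine, not the book's) shows one way to evaluate p(T) when T is given as a matrix, assuming NumPy; Horner's rule keeps the work down to repeated matrix products. The coefficients, the matrix, and the helper name are illustrative choices.

```python
import numpy as np

def poly_of_operator(coeffs, T):
    """coeffs = [a0, a1, ..., am]; returns a0*I + a1*T + ... + am*T^m."""
    n = T.shape[0]
    result = np.zeros_like(T, dtype=float)
    for a in reversed(coeffs):          # Horner: (((am)T + a_{m-1})T + ...) + a0*I
        result = result @ T + a * np.eye(n)
    return result

T = np.array([[2., 1.],
              [0., 3.]])
p = [7., -3., 5.]                       # p(z) = 7 - 3z + 5z^2, as in Example 5.15
print(poly_of_operator(p, T))

# The map p -> p(T) is linear, and products behave as 5.17 below asserts:
q = [1., 2.]                            # q(z) = 1 + 2z
pq = np.polynomial.polynomial.polymul(p, q)   # coefficients of the product pq
print(np.allclose(poly_of_operator(pq, T),
                  poly_of_operator(p, T) @ poly_of_operator(q, T)))   # True
```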
5.16 definition: product of polynomials
If p, q ∈ 𝒫(F), then pq ∈ 𝒫(F) is the polynomial defined by
(pq)(z) = p(z)q(z)
for all z ∈ F.
The order does not matter in taking products of polynomials of a single operator, as shown by (b) in the next result.
5.17 multiplicative properties
Suppose p, q ∈ 𝒫(F) and T ∈ ℒ(V). Then
(a) (pq)(T) = p(T)q(T);
(b) p(T)q(T) = q(T)p(T).
Informal proof: When a product of polynomials is expanded using the distributive property, it does not matter whether the symbol is z or T.
Proof
(a) Suppose p(z) = ∑_{j=0}^{m} aⱼzʲ and q(z) = ∑_{k=0}^{n} bₖzᵏ for all z ∈ F. Then
(pq)(z) = ∑_{j=0}^{m} ∑_{k=0}^{n} aⱼbₖ z^{j+k}.
Thus
(pq)(T) = ∑_{j=0}^{m} ∑_{k=0}^{n} aⱼbₖ T^{j+k} = (∑_{j=0}^{m} aⱼTʲ)(∑_{k=0}^{n} bₖTᵏ) = p(T)q(T).
(b) Using (a) twice, we have p(T)q(T) = (pq)(T) = (qp)(T) = q(T)p(T).
We observed earlier that if T ∈ ℒ(V), then the subspaces null T and range T are invariant under T (see 5.4). Now we show that the null space and the range of every polynomial of T are also invariant under T.
5.18 null space and range of p(T) are invariant under T
Suppose T ∈ ℒ(V) and p ∈ 𝒫(F). Then null p(T) and range p(T) are invariant under T.
Proof  Suppose u ∈ null p(T). Then p(T)u = 0. Thus
(p(T))(Tu) = T(p(T)u) = T(0) = 0.
Hence Tu ∈ null p(T). Thus null p(T) is invariant under T, as desired.
Suppose u ∈ range p(T). Then there exists v ∈ V such that u = p(T)v. Thus
Tu = T(p(T)v) = p(T)(Tv).
Hence Tu ∈ range p(T). Thus range p(T) is invariant under T, as desired.

Exercises 5A
1  Suppose T ∈ ℒ(V) and U is a subspace of V.
   (a) Prove that if U ⊆ null T, then U is invariant under T.
   (b) Prove that if range T ⊆ U, then U is invariant under T.
2  Suppose that T ∈ ℒ(V) and V₁, …, Vₘ are subspaces of V invariant under T. Prove that V₁ + ⋯ + Vₘ is invariant under T.
3  Suppose T ∈ ℒ(V). Prove that the intersection of every collection of subspaces of V invariant under T is invariant under T.
4  Prove or give a counterexample: If V is finite-dimensional and U is a subspace of V that is invariant under every operator on V, then U = {0} or U = V.
5  Suppose T ∈ ℒ(R²) is defined by T(x, y) = (−3y, x). Find the eigenvalues of T.
6  Define T ∈ ℒ(F²) by T(w, z) = (z, w). Find all eigenvalues and eigenvectors of T.
7  Define T ∈ ℒ(F³) by T(z₁, z₂, z₃) = (2z₂, 0, 5z₃). Find all eigenvalues and eigenvectors of T.
8  Suppose P ∈ ℒ(V) is such that P² = P. Prove that if λ is an eigenvalue of P, then λ = 0 or λ = 1.
140 Chapter 5 Eigenvalues and Eigenvectors 9 Define ๐ โถ ๐ซ (๐) โ ๐ซ (๐) by ๐๐ = ๐ โฒ . Find all eigenvalues and eigenvectors of ๐ . 10 Define ๐ โ โ ( ๐ซ 4 (๐)) by (๐๐)(๐ฅ) = ๐ฅ๐ โฒ (๐ฅ) for all ๐ฅ โ ๐ . Find all eigenval- ues and eigenvectors of ๐ . 11 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ผ โ ๐
. Prove that there ex- ists ๐ฟ > 0 such that ๐โ๐๐ผ is invertible for all ๐ โ ๐
such that 0 < |๐ผ โ ๐| < ๐ฟ . 12 Suppose ๐ = ๐ โ ๐ , where ๐ and ๐ are nonzero subspaces of ๐ . Define ๐ โ โ (๐) by ๐(๐ข + ๐ค) = ๐ข for each ๐ข โ ๐ and each ๐ค โ ๐ . Find all eigenvalues and eigenvectors of ๐ . 13 Suppose ๐ โ โ (๐) . Suppose ๐ โ โ (๐) is invertible. (a) Prove that ๐ and ๐ โ1 ๐๐ have the same eigenvalues. (b) What is the relationship between the eigenvectors of ๐ and the eigenvec- tors of ๐ โ1 ๐๐ ? 14 Give an example of an operator on ๐ 4 that has no (real) eigenvalues. 15 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ โ ๐
. Show that ๐ is an eigenvalue of ๐ if and only if ๐ is an eigenvalue of the dual operator ๐ โฒ โ โ (๐ โฒ ) . 16 Suppose ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ and ๐ โ โ (๐) . Prove that if ๐ is an eigenvalue of ๐ , then |๐| โค ๐ max {โฃ โณ (๐) ๐ , ๐ โฃ โถ 1 โค ๐ , ๐ โค ๐} , where โณ (๐) ๐ , ๐ denotes the entry in row ๐ , column ๐ of the matrix of ๐ with respect to the basis ๐ฃ 1 , โฆ , ๐ฃ ๐ . See Exercise 19 in Section 6A for a different bound on |๐| . 17 Suppose ๐
= ๐ , ๐ โ โ (๐) , and ๐ โ ๐ . Prove that ๐ is an eigenvalue of ๐ if and only if ๐ is an eigenvalue of the complexification ๐ ๐ . See Exercise 33 in Section 3B for the definition of ๐ ๐ . 18 Suppose ๐
= ๐ , ๐ โ โ (๐) , and ๐ โ ๐ . Prove that ๐ is an eigenvalue of the complexification ๐ ๐ if and only if ๐ is an eigenvalue of ๐ ๐ . 19 Show that the forward shift operator ๐ โ โ (๐
โ ) defined by ๐(๐ง 1 , ๐ง 2 , โฆ ) = (0 , ๐ง 1 , ๐ง 2 , โฆ ) has no eigenvalues. 20 Define the backward shift operator ๐ โ โ (๐
โ ) by ๐(๐ง 1 , ๐ง 2 , ๐ง 3 , โฆ ) = (๐ง 2 , ๐ง 3 , โฆ ). (a) Show that every element of ๐
is an eigenvalue of ๐ . (b) Find all eigenvectors of ๐ .
Section 5A Invariant Subspaces โ 100 โ 2 21 Suppose ๐ โ โ (๐) is invertible. (a) Suppose ๐ โ ๐
with ๐ โ 0 . Prove that ๐ is an eigenvalue of ๐ if and only if 1๐ is an eigenvalue of ๐ โ1 . (b) Prove that ๐ and ๐ โ1 have the same eigenvectors. 22 Suppose ๐ โ โ (๐) and there exist nonzero vectors ๐ข and ๐ค in ๐ such that ๐๐ข = 3๐ค and ๐๐ค = 3๐ข. Prove that 3 or โ3 is an eigenvalue of ๐ . 23 Suppose ๐ is finite-dimensional and ๐ , ๐ โ โ (๐) . Prove that ๐๐ and ๐๐ have the same eigenvalues. 24 Suppose ๐ด is an ๐ -by- ๐ matrix with entries in ๐
. Define ๐ โ โ (๐
๐ ) by ๐๐ฅ = ๐ด๐ฅ , where elements of ๐
๐ are thought of as ๐ -by- 1 column vectors. (a) Suppose the sum of the entries in each row of ๐ด equals 1 . Prove that 1 is an eigenvalue of ๐ . (b) Suppose the sum of the entries in each column of ๐ด equals 1 . Prove that 1 is an eigenvalue of ๐ . 25 Suppose ๐ โ โ (๐) and ๐ข , ๐ค are eigenvectors of ๐ such that ๐ข + ๐ค is also an eigenvector of ๐ . Prove that ๐ข and ๐ค are eigenvectors of ๐ corresponding to the same eigenvalue. 26 Suppose ๐ โ โ (๐) is such that every nonzero vector in ๐ is an eigenvector of ๐ . Prove that ๐ is a scalar multiple of the identity operator. 27 Suppose that ๐ is finite-dimensional and ๐ โ {1 , โฆ , dim ๐ โ 1} . Suppose ๐ โ โ (๐) is such that every subspace of ๐ of dimension ๐ is invariant under ๐ . Prove that ๐ is a scalar multiple of the identity operator. 28 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that ๐ has at most 1 + dim range ๐ distinct eigenvalues. 29 Suppose ๐ โ โ (๐ 3 ) and โ4 , 5 , and โ 7 are eigenvalues of ๐ . Prove that there exists ๐ฅ โ ๐ 3 such that ๐๐ฅ โ 9๐ฅ = (โ4 , 5 , โ 7) . 30 Suppose ๐ โ โ (๐) and (๐ โ 2๐ผ)(๐ โ 3๐ผ)(๐ โ 4๐ผ) = 0 . Suppose ๐ is an eigenvalue of ๐ . Prove that ๐ = 2 or ๐ = 3 or ๐ = 4 . 31 Give an example of ๐ โ โ (๐ 2 ) such that ๐ 4 = โ๐ผ .
32 Suppose ๐ โ โ (๐) has no eigenvalues and ๐ 4 = ๐ผ . Prove that ๐ 2 = โ๐ผ .
33 Suppose ๐ โ โ (๐) and ๐ is a positive integer. (a) Prove that ๐ is injective if and only if ๐ ๐ is injective. (b) Prove that ๐ is surjective if and only if ๐ ๐ is surjective.
34
Suppose ๐ is finite-dimensional and ๐ฃ 1 , โฆ , ๐ฃ ๐ โ ๐ . Prove that the list ๐ฃ 1 , โฆ , ๐ฃ ๐ is linearly independent if and only if there exists ๐ โ โ (๐) such that ๐ฃ 1 , โฆ , ๐ฃ ๐ are eigenvectors of ๐ corresponding to distinct eigenvalues. 35 Suppose that ๐ 1 , โฆ , ๐ ๐ is a list of distinct real numbers. Prove that the list ๐ ๐ 1 ๐ฅ , โฆ , ๐ ๐ ๐ ๐ฅ is linearly independent in the vector space of real-valued functions on ๐ . Hint: Let ๐ = span (๐ ๐ 1 ๐ฅ , โฆ , ๐ ๐ ๐ ๐ฅ ) , and define an operator ๐ท โ โ (๐) by ๐ท ๐ = ๐ โฒ . Find eigenvalues and eigenvectors of ๐ท . 36 Suppose that ๐ 1 , โฆ , ๐ ๐ is a list of distinct positive numbers. Prove that the list cos (๐ 1 ๐ฅ) , โฆ , cos (๐ ๐ ๐ฅ) is linearly independent in the vector space of real-valued functions on ๐ . 37 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Define ๐ โ โ ( โ (๐)) by ๐ (๐) = ๐๐ for each ๐ โ โ (๐) . Prove that the set of eigenvalues of ๐ equals the set of eigenvalues of ๐ . 38 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ is a subspace of ๐ invariant under ๐ . The quotient operator ๐/๐ โ โ (๐/๐) is defined by (๐/๐)(๐ฃ + ๐) = ๐๐ฃ + ๐ for each ๐ฃ โ ๐ . (a) Show that the definition of ๐/๐ makes sense (which requires using the condition that ๐ is invariant under ๐ ) and show that ๐/๐ is an operator on ๐/๐ . (b) Show that each eigenvalue of ๐/๐ is an eigenvalue of ๐ . 39 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that ๐ has an eigen- value if and only if there exists a subspace of ๐ of dimension dim ๐ โ 1 that is invariant under ๐ . 40 Suppose ๐ , ๐ โ โ (๐) and ๐ is invertible. Suppose ๐ โ ๐ซ (๐
) is a polynomial. Prove that ๐(๐๐๐ โ1 ) = ๐๐(๐)๐ โ1 . 41 Suppose ๐ โ โ (๐) and ๐ is a subspace of ๐ invariant under ๐ . Prove that ๐ is invariant under ๐(๐) for every polynomial ๐ โ ๐ซ (๐
) . 42 Define ๐ โ โ (๐
๐ ) by ๐(๐ฅ 1 , ๐ฅ 2 , ๐ฅ 3 , โฆ , ๐ฅ ๐ ) = (๐ฅ 1 , 2๐ฅ 2 , 3๐ฅ 3 , โฆ , ๐๐ฅ ๐ ) . (a) Find all eigenvalues and eigenvectors of ๐ . (b) Find all subspaces of ๐
๐ that are invariant under ๐ . 43 Suppose that ๐ is finite-dimensional, dim ๐ > 1 , and ๐ โ โ (๐) . Prove that {๐(๐) โถ ๐ โ ๐ซ (๐
)} โ โ (๐) .
5B The Minimal Polynomial
Existence of Eigenvalues on Complex Vector Spaces
Now we come to one of the central results about operators on finite-dimensional complex vector spaces.
5.19 existence of eigenvalues
Every operator on a finite-dimensional nonzero complex vector space has an eigenvalue.
Proof  Suppose V is a finite-dimensional complex vector space of dimension n > 0 and T ∈ ℒ(V). Choose v ∈ V with v ≠ 0. Then
v, Tv, T²v, …, Tⁿv
is not linearly independent, because V has dimension n and this list has length n + 1. Hence some linear combination (with not all the coefficients equal to 0) of the vectors above equals 0. Thus there exists a nonconstant polynomial p of smallest degree such that
p(T)v = 0.
By the first version of the fundamental theorem of algebra (see 4.12), there exists λ ∈ C such that p(λ) = 0. Hence there exists a polynomial q ∈ 𝒫(C) such that
p(z) = (z − λ)q(z)
for every z ∈ C (see 4.6). This implies (using 5.17) that
0 = p(T)v = (T − λI)(q(T)v).
Because q has smaller degree than p, we know that q(T)v ≠ 0. Thus the equation above implies that λ is an eigenvalue of T with eigenvector q(T)v.
The proof above makes crucial use of the fundamental theorem of algebra. The comment following Exercise 16 helps explain why the fundamental theorem of algebra is so tightly connected to the result above.
The hypothesis in the result above that F = C cannot be replaced with the hypothesis that F = R, as shown by Example 5.9. The next example shows that the finite-dimensional hypothesis in the result above also cannot be deleted.
5.20 example: an operator on a complex vector space with no eigenvalues
Define T ∈ ℒ(𝒫(C)) by (Tp)(z) = zp(z).
If p ∈ 𝒫(C) is a nonzero polynomial, then the degree of Tp is one more than the degree of p, and thus Tp cannot equal a scalar multiple of p. Hence T has no eigenvalues. Because 𝒫(C) is infinite-dimensional, this example does not contradict the result above.
Eigenvalues and the Minimal Polynomial
In this subsection we introduce an important polynomial associated with each operator. We begin with the following definition.
5.21 definition: monic polynomial
A monic polynomial is a polynomial whose highest-degree coefficient equals 1.
For example, the polynomial 2 + 9z² + z⁷ is a monic polynomial of degree 7.
5.22 existence, uniqueness, and degree of minimal polynomial
Suppose V is finite-dimensional and T ∈ ℒ(V).
Then there is a unique monic polynomial p ∈ 𝒫(F) of smallest degree such that p(T) = 0.
Furthermore, deg p ≤ dim V.
Proof  If dim V = 0, then I is the zero operator on V and thus we take p to be the constant polynomial 1.
Now use induction on dim V. Thus assume that dim V > 0 and that the desired result is true for all operators on all vector spaces of smaller dimension. Let u ∈ V be such that u ≠ 0. The list u, Tu, …, T^{dim V}u has length 1 + dim V and thus is linearly dependent.
By the linear dependence lemma (2.19), there is a smallest positive integer m ≤ dim V such that Tᵐu is a linear combination of u, Tu, …, Tᵐ⁻¹u.
Thus there exist scalars c₀, c₁, …, cₘ₋₁ ∈ F such that
5.23   c₀u + c₁Tu + ⋯ + cₘ₋₁Tᵐ⁻¹u + Tᵐu = 0.
Define a monic polynomial q ∈ 𝒫ₘ(F) by
q(z) = c₀ + c₁z + ⋯ + cₘ₋₁zᵐ⁻¹ + zᵐ.
Then 5.23 implies that q(T)u = 0.
If k is a nonnegative integer, then
q(T)(Tᵏu) = Tᵏ(q(T)u) = Tᵏ(0) = 0.
The linear dependence lemma (2.19) shows that u, Tu, …, Tᵐ⁻¹u is linearly independent. Thus the equation above implies that dim null q(T) ≥ m. Hence
dim range q(T) = dim V − dim null q(T) ≤ dim V − m.
Because range q(T) is invariant under T (by 5.18), we can apply our induction hypothesis to the operator T|_{range q(T)} on the vector space range q(T). Thus there is a monic polynomial s ∈ 𝒫(F) with
deg s ≤ dim V − m  and  s(T|_{range q(T)}) = 0.
Hence for all v ∈ V we have
((sq)(T))(v) = s(T)(q(T)v) = 0
because q(T)v ∈ range q(T) and s(T)|_{range q(T)} = s(T|_{range q(T)}) = 0. Thus sq is a monic polynomial such that deg sq ≤ dim V and (sq)(T) = 0.
The paragraph above shows that there is a monic polynomial of degree at most dim V that when applied to T gives the 0 operator. Thus there is a monic polynomial of smallest degree with this property, completing the existence part of this result.
Let p ∈ 𝒫(F) be a monic polynomial of smallest degree such that p(T) = 0. To prove the uniqueness part of the result, suppose r ∈ 𝒫(F) is a monic polynomial of the same degree as p and r(T) = 0. Then (p − r)(T) = 0 and also deg(p − r) < deg p. If p − r were not equal to 0, then we could divide p − r by the coefficient of the highest-order term in p − r to get a monic polynomial (of smaller degree than p) that when applied to T gives the 0 operator. Thus p − r = 0, as desired.
The previous result justifies the following definition.
5.24 definition: minimal polynomial
Suppose V is finite-dimensional and T ∈ ℒ(V). Then the minimal polynomial of T is the unique monic polynomial p ∈ 𝒫(F) of smallest degree such that p(T) = 0.
To compute the minimal polynomial of an operator T ∈ ℒ(V), we need to find the smallest positive integer m such that the equation
c₀I + c₁T + ⋯ + cₘ₋₁Tᵐ⁻¹ = −Tᵐ
has a solution c₀, c₁, …, cₘ₋₁ ∈ F. If we pick a basis of V and replace T in the equation above with the matrix of T, then the equation above can be thought of as a system of (dim V)² linear equations in the m unknowns c₀, c₁, …, cₘ₋₁ ∈ F. Gaussian elimination or another fast method of solving systems of linear equations can tell us whether a solution exists, testing successive values m = 1, 2, … until a solution exists. By 5.22, a solution exists for some smallest positive integer m ≤ dim V. The minimal polynomial of T is then c₀ + c₁z + ⋯ + cₘ₋₁zᵐ⁻¹ + zᵐ.
Even faster (usually), pick v ∈ V and consider the equation
5.25   c₀v + c₁Tv + ⋯ + c_{dim V−1}T^{dim V−1}v = −T^{dim V}v.
Use a basis of V to convert the equation above to a system of dim V linear equations in dim V unknowns c₀, c₁, …, c_{dim V−1}. If this system of equations has a unique solution c₀, c₁, …, c_{dim V−1} (as happens most of the time), then the scalars c₀, c₁, …, c_{dim V−1}, 1 are the coefficients of the minimal polynomial of T (because 5.22 states that the degree of the minimal polynomial is at most dim V).
These estimates are based on testing millions of random matrices. Consider operators on R⁴ (thought of as 4-by-4 matrices with respect to the standard basis), and take v = (1, 0, 0, 0) in the paragraph above. The faster method described above works on over 99.8% of the 4-by-4 matrices with integer entries in the interval [−10, 10] and on over 99.999% of the 4-by-4 matrices with integer entries in [−100, 100].
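Here is a rough Python sketch (not part of the text) of the "usually faster" method around 5.25, assuming NumPy: take v = e₁, solve the dim V linear equations for c₀, …, c_{dim V − 1}, and then check whether the resulting monic polynomial really annihilates T. The function name and tolerance are my own choices.

```python
import numpy as np

def minimal_polynomial_guess(T, tol=1e-9):
    """Try 5.25 with v = e1; return [c0, ..., c_{n-1}, 1] if the candidate
    annihilates T, and None otherwise (then the slower method is needed)."""
    n = T.shape[0]
    v = np.zeros(n); v[0] = 1.0                     # v = e1
    powers = [v]
    for _ in range(n):
        powers.append(T @ powers[-1])               # v, Tv, ..., T^n v
    K = np.column_stack(powers[:n])                 # columns v, Tv, ..., T^{n-1} v
    c, *_ = np.linalg.lstsq(K, -powers[n], rcond=None)
    coeffs = np.append(c, 1.0)                      # monic: c0 + c1 z + ... + z^n
    p_of_T = sum(a * np.linalg.matrix_power(T, k) for k, a in enumerate(coeffs))
    return coeffs if np.allclose(p_of_T, 0, atol=tol) else None

# The matrix of Example 5.26 below; the output is [3, -6, 0, 0, 0, 1],
# i.e. the coefficients of 3 - 6z + z^5.
T = np.array([[0., 0., 0., 0., -3.],
              [1., 0., 0., 0., 6.],
              [0., 1., 0., 0., 0.],
              [0., 0., 1., 0., 0.],
              [0., 0., 0., 1., 0.]])
print(minimal_polynomial_guess(T))
```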
The next example illustrates the faster procedure discussed above.
5.26 example: minimal polynomial of an operator on F⁵
Suppose T ∈ ℒ(F⁵) and
ℳ(T) =
⎛ 0 0 0 0 −3 ⎞
⎜ 1 0 0 0  6 ⎟
⎜ 0 1 0 0  0 ⎟
⎜ 0 0 1 0  0 ⎟
⎝ 0 0 0 1  0 ⎠
with respect to the standard basis e₁, e₂, e₃, e₄, e₅. Taking v = e₁ for 5.25, we have
Te₁ = e₂,
T²e₁ = T(Te₁) = Te₂ = e₃,
T³e₁ = T(T²e₁) = Te₃ = e₄,
T⁴e₁ = T(T³e₁) = Te₄ = e₅,
T⁵e₁ = T(T⁴e₁) = Te₅ = −3e₁ + 6e₂.
Thus
3e₁ − 6Te₁ = −T⁵e₁.
The list e₁, Te₁, T²e₁, T³e₁, T⁴e₁, which equals the list e₁, e₂, e₃, e₄, e₅, is linearly independent, so no other linear combination of this list equals −T⁵e₁. Hence the minimal polynomial of T is
3 − 6z + z⁵.
Recall that by definition, eigenvalues of operators on V and zeros of polynomials in 𝒫(F) must be elements of F. In particular, if F = R, then eigenvalues and zeros must be real numbers.
5.27 eigenvalues are the zeros of the minimal polynomial
Suppose V is finite-dimensional and T ∈ ℒ(V).
(a) The zeros of the minimal polynomial of T are the eigenvalues of T.
(b) If V is a complex vector space, then the minimal polynomial of T has the form
(z − λ₁)⋯(z − λₘ),
where λ₁, …, λₘ is a list of all eigenvalues of T, possibly with repetitions.
Proof  Let p be the minimal polynomial of T.
(a) First suppose λ ∈ F is a zero of p. Then p can be written in the form
p(z) = (z − λ)q(z),
where q is a monic polynomial with coefficients in F (see 4.6). Because p(T) = 0, we have
0 = (T − λI)(q(T)v)
for all v ∈ V. Because deg q = (deg p) − 1 and p is the minimal polynomial of T, there exists at least one vector v ∈ V such that q(T)v ≠ 0. The equation above thus implies that λ is an eigenvalue of T, as desired.
To prove that every eigenvalue of T is a zero of p, now suppose λ ∈ F is an eigenvalue of T. Thus there exists v ∈ V with v ≠ 0 such that Tv = λv. Repeated applications of T to both sides of this equation show that Tᵏv = λᵏv for every nonnegative integer k. Thus
p(T)v = p(λ)v.
Because p is the minimal polynomial of T, we have p(T)v = 0. Hence the equation above implies that p(λ) = 0. Thus λ is a zero of p, as desired.
(b) To get the desired result, use (a) and the second version of the fundamental theorem of algebra (see 4.13).
A nonzero polynomial has at most as many distinct zeros as its degree (see 4.8). Thus (a) of the previous result, along with the result that the minimal polynomial of an operator on V has degree at most dim V, gives an alternative proof of 5.12, which states that an operator on V has at most dim V distinct eigenvalues.
Every monic polynomial is the minimal polynomial of some operator, as shown by Exercise 16, which generalizes Example 5.26. Thus 5.27(a) shows that finding exact expressions for the eigenvalues of an operator is equivalent to the problem of finding exact expressions for the zeros of a polynomial (and thus is not possible for some operators).
5.28 example: An operator whose eigenvalues cannot be found exactly
Let T ∈ ℒ(C⁵) be the operator defined by
T(z₁, z₂, z₃, z₄, z₅) = (−3z₅, z₁ + 6z₅, z₂, z₃, z₄).
The matrix of T with respect to the standard basis of C⁵ is the 5-by-5 matrix in Example 5.26. As we showed in that example, the minimal polynomial of T is the polynomial
3 − 6z + z⁵.
No zero of the polynomial above can be expressed using rational numbers, roots of rational numbers, and the usual rules of arithmetic (a proof of this would take us considerably beyond linear algebra). Because the zeros of the polynomial above are the eigenvalues of T [by 5.27(a)], we cannot find an exact expression for any eigenvalue of T in any familiar form.
Numeric techniques, which we will not discuss here, show that the zeros of the polynomial above, and thus the eigenvalues of T, are approximately the following five complex numbers:
−1.67,  0.51,  1.40,  −0.12 + 1.59i,  −0.12 − 1.59i.
Note that the two nonreal zeros of this polynomial are complex conjugates of each other, as we expect for a polynomial with real coefficients (see 4.14).
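As a quick check (mine, not the book's), NumPy's root finder applied to the minimal polynomial of Example 5.28 reproduces the approximate eigenvalues quoted above.

```python
import numpy as np

# np.roots takes coefficients in decreasing powers: z^5 + 0z^4 + 0z^3 + 0z^2 - 6z + 3.
print(np.roots([1, 0, 0, 0, -6, 3]))
# approximately -1.67, -0.12+1.59j, -0.12-1.59j, 1.40, 0.51 (in some order)
```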
The next result completely characterizes the polynomials that when applied to an operator give the 0 operator.
5.29 q(T) = 0 ⟺ q is a polynomial multiple of the minimal polynomial
Suppose V is finite-dimensional, T ∈ ℒ(V), and q ∈ 𝒫(F). Then q(T) = 0 if and only if q is a polynomial multiple of the minimal polynomial of T.
Proof  Let p denote the minimal polynomial of T.
First suppose q(T) = 0. By the division algorithm for polynomials (4.9), there exist polynomials s, r ∈ 𝒫(F) such that
5.30   q = ps + r  and  deg r < deg p.
We have
0 = q(T) = p(T)s(T) + r(T) = r(T).
The equation above implies that r = 0 (otherwise, dividing r by its highest-degree coefficient would produce a monic polynomial that when applied to T gives 0; this polynomial would have a smaller degree than the minimal polynomial, which would be a contradiction). Thus 5.30 becomes the equation q = ps.
Hence q is a polynomial multiple of p, as desired.
To prove the other direction, now suppose q is a polynomial multiple of p. Thus there exists a polynomial s ∈ 𝒫(F) such that q = ps. We have
q(T) = p(T)s(T) = 0 s(T) = 0,
as desired.
The next result is a nice consequence of the result above.
5.31 minimal polynomial of a restriction operator
Suppose V is finite-dimensional, T ∈ ℒ(V), and U is a subspace of V that is invariant under T. Then the minimal polynomial of T is a polynomial multiple of the minimal polynomial of T|_U.
Proof  Suppose p is the minimal polynomial of T. Thus p(T)v = 0 for all v ∈ V. In particular,
p(T)u = 0 for all u ∈ U.
Thus p(T|_U) = 0. Now 5.29, applied to the operator T|_U in place of T, implies that p is a polynomial multiple of the minimal polynomial of T|_U.
See Exercise 25 for a result about quotient operators that is analogous to the result above.
The next result shows that the constant term of the minimal polynomial of an operator determines whether the operator is invertible.
5.32 T not invertible ⟺ constant term of minimal polynomial of T is 0
Suppose V is finite-dimensional and T ∈ ℒ(V). Then T is not invertible if and only if the constant term of the minimal polynomial of T is 0.
Proof  Suppose T ∈ ℒ(V) and p is the minimal polynomial of T. Then
T is not invertible ⟺ 0 is an eigenvalue of T ⟺ 0 is a zero of p ⟺ the constant term of p is 0,
where the first equivalence holds by 5.7, the second equivalence holds by 5.27(a), and the last equivalence holds because the constant term of p equals p(0).
Eigenvalues on Odd-Dimensional Real Vector Spaces
The next result will be the key tool that we use to show that every operator on an odd-dimensional real vector space has an eigenvalue.
5.33 even-dimensional null space
Suppose F = R and V is finite-dimensional. Suppose also that T ∈ ℒ(V) and b, c ∈ R with b² < 4c.
Then dim null(T² + bT + cI) is an even number.
Proof
Recall that null(T² + bT + cI) is invariant under T (by 5.18).
By replacing V with null(T² + bT + cI) and replacing T with T restricted to null(T² + bT + cI), we can assume that T² + bT + cI = 0; we now need to prove that dim V is even.
Suppose λ ∈ R and v ∈ V are such that Tv = λv.
Then
0 = (T² + bT + cI)v = (λ² + bλ + c)v = ((λ + b/2)² + c − b²/4)v.
The term in large parentheses above is a positive number. Thus the equation above implies that v = 0.
Hence we have shown that T has no eigenvectors.
Let U be a subspace of V that is invariant under T and has the largest dimension among all subspaces of V that are invariant under T and have even dimension.
If U = V, then we are done; otherwise assume there exists w ∈ V such that w ∉ U.
Let W = span(w, Tw).
Then W is invariant under T because T(Tw) = −bTw − cw ∈ W.
Furthermore, dim W = 2 because otherwise w would be an eigenvector of T. Now
dim(U + W) = dim U + dim W − dim(U ∩ W) = dim U + 2,
where U ∩ W = {0} because otherwise U ∩ W would be a one-dimensional subspace of V that is invariant under T (impossible because T has no eigenvectors). Because U + W is invariant under T, the equation above shows that there exists a subspace of V invariant under T of even dimension larger than dim U. Thus the assumption that U ≠ V was incorrect. Hence V has even dimension.
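The following numerical spot check of 5.33 is my own sketch, not part of the text, and assumes NumPy. The 2-by-2 block C below satisfies C² + C + 2I = 0, so with b = 1 and c = 2 (note b² = 1 < 8 = 4c) the null space of T² + bT + cI is guaranteed to be nontrivial; 5.33 says its dimension is always even.

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[0., -2.],
              [1., -1.]])                             # satisfies C^2 + C + 2I = 0
R = rng.integers(-3, 4, size=(4, 4)).astype(float)    # arbitrary second block
T = np.block([[C, np.zeros((2, 4))],
              [np.zeros((4, 2)), R]])                 # an operator on R^6

b, c = 1.0, 2.0
A = T @ T + b * T + c * np.eye(6)
dim_null = 6 - np.linalg.matrix_rank(A)
print(dim_null, dim_null % 2 == 0)   # prints an even number (typically 2 here,
                                     # coming from the 2-by-2 block)
```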
The next result states that on odd-dimensional vector spaces, every operator has an eigenvalue. We already know this result for finite-dimensional complex vector spaces (without the odd hypothesis). Thus in the proof below, we will assume that F = R.
5.34 operators on odd-dimensional vector spaces have eigenvalues
Every operator on an odd-dimensional vector space has an eigenvalue.
Proof  Suppose F = R and V is finite-dimensional. Let n = dim V, and suppose n is an odd number. Let T ∈ ℒ(V).
We will use induction on n in steps of size two to show that T has an eigenvalue.
To get started, note that the desired result holds if dim V = 1 because then every nonzero vector in V is an eigenvector of T.
Now suppose that n ≥ 3 and the desired result holds for all operators on all odd-dimensional vector spaces of dimension less than n. Let p denote the minimal polynomial of T.
If p is a polynomial multiple of x − λ for some λ ∈ R, then λ is an eigenvalue of T [by 5.27(a)] and we are done.
Thus we can assume that there exist b, c ∈ R such that b² < 4c and p is a polynomial multiple of x² + bx + c (see 4.16).
There exists a monic polynomial q ∈ 𝒫(R) such that
p(x) = q(x)(x² + bx + c)
for all x ∈ R.
Now
0 = p(T) = (q(T))(T² + bT + cI),
which means that q(T) equals 0 on range(T² + bT + cI).
Because deg q < deg p and p is the minimal polynomial of T, this implies that range(T² + bT + cI) ≠ V.
The fundamental theorem of linear maps (3.21) tells us that
dim V = dim null(T² + bT + cI) + dim range(T² + bT + cI).
Because dim V is odd (by hypothesis) and dim null(T² + bT + cI) is even (by 5.33), the equation above shows that dim range(T² + bT + cI) is odd.
Hence range(T² + bT + cI) is a subspace of V that is invariant under T (by 5.18) and that has odd dimension less than dim V.
Our induction hypothesis now implies that T restricted to range(T² + bT + cI) has an eigenvalue, which means that T has an eigenvalue.
See Exercise 23 in Section 8B and Exercise 10 in Section 9C for alternative proofs of the result above.
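The quick check below is not from the book: over many random real operators on the odd-dimensional space R⁵, at least one computed eigenvalue is always real, as 5.34 predicts (complex eigenvalues of a real matrix come in conjugate pairs). It assumes NumPy; the sample size and tolerance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    T = rng.standard_normal((5, 5))
    vals = np.linalg.eigvals(T)
    assert np.any(np.abs(vals.imag) < 1e-8)    # at least one real eigenvalue
print("every sampled operator on R^5 had a real eigenvalue")
```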
Exercises 5B
1 Suppose ๐ โ โ (๐) . Prove that 9 is an eigenvalue of ๐ 2 if and only if 3 or โ3 is an eigenvalue of ๐ .
2 Suppose ๐ is a complex vector space and ๐ โ โ (๐) has no eigenvalues. Prove that every subspace of ๐ invariant under ๐ is either {0} or infinite- dimensional.
3 Suppose ๐ is a positive integer and ๐ โ โ (๐
๐ ) is defined by ๐(๐ฅ 1 , โฆ , ๐ฅ ๐ ) = (๐ฅ 1 + โฏ + ๐ฅ ๐ , โฆ , ๐ฅ 1 + โฏ + ๐ฅ ๐ ).
(a) Find all eigenvalues and eigenvectors of ๐ .
(b) Find the minimal polynomial of ๐ . The matrix of ๐ with respect to the standard basis of ๐
๐ consists of all 1 โs.
4 Suppose ๐
= ๐ , ๐ โ โ (๐) , ๐ โ ๐ซ (๐) , and ๐ผ โ ๐ . Prove that ๐ผ is an eigenvalue of ๐(๐) if and only if ๐ผ = ๐(๐) for some eigenvalue ๐ of ๐ .
5 Give an example of an operator on ๐ 2 that shows the result in Exercise 4 does not hold if ๐ is replaced with ๐ .
6 Suppose ๐ โ โ (๐
2 ) is defined by ๐(๐ค , ๐ง) = (โ๐ง , ๐ค) . Find the minimal polynomial of ๐ .
7 (a) Give an example of ๐ , ๐ โ โ (๐
2 ) such that the minimal polynomial of ๐๐ does not equal the minimal polynomial of ๐๐ .
(b) Suppose ๐ is finite-dimensional and ๐ , ๐ โ โ (๐) . Prove that if at least one of ๐ , ๐ is invertible, then the minimal polynomial of ๐๐ equals the minimal polynomial of ๐๐ . Hint: Show that if ๐ is invertible and ๐ โ ๐ซ (๐
) , then ๐(๐๐) = ๐ โ1 ๐(๐๐)๐ .
8 Suppose ๐ โ โ (๐ 2 ) is the operator of counterclockwise rotation by 1 โ . Find the minimal polynomial of ๐ . Because dim ๐ 2 = 2 , the degree of the minimal polynomial of ๐ is at most 2 . Thus the minimal polynomial of ๐ is not the tempting polynomial ๐ฅ 180 + 1 , even though ๐ 180 = โ๐ผ .
9 Suppose ๐ โ โ (๐) is such that with respect to some basis of ๐ , all entries of the matrix of ๐ are rational numbers. Explain why all coefficients of the minimal polynomial of ๐ are rational numbers.
10 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ฃ โ ๐ . Prove that span (๐ฃ , ๐๐ฃ , โฆ , ๐ ๐ ๐ฃ) = span (๐ฃ , ๐๐ฃ , โฆ , ๐ dim ๐โ1 ๐ฃ) for all integers ๐ โฅ dim ๐ โ 1 .
11 Suppose ๐ is a two-dimensional vector space, ๐ โ โ (๐) , and the matrix of ๐ with respect to some basis of ๐ is ( ๐ ๐ ๐ ๐ ) .
(a) Show that ๐ 2 โ (๐ + ๐)๐ + (๐๐ โ ๐๐)๐ผ = 0 .
(b) Show that the minimal polynomial of ๐ equals โง{ โจ{ โฉ ๐ง โ ๐ if ๐ = ๐ = 0 and ๐ = ๐ , ๐ง 2 โ (๐ + ๐)๐ง + (๐๐ โ ๐๐) otherwise .
12 Define ๐ โ โ (๐
๐ ) by ๐(๐ฅ 1 , ๐ฅ 2 , ๐ฅ 3 , โฆ , ๐ฅ ๐ ) = (๐ฅ 1 , 2๐ฅ 2 , 3๐ฅ 3 , โฆ , ๐๐ฅ ๐ ) . Find the minimal polynomial of ๐ .
13 Suppose ๐ โ โ (๐) and ๐ โ ๐ซ (๐
) . Prove that there exists a unique ๐ โ ๐ซ (๐
) such that ๐(๐) = ๐(๐) and deg ๐ is less than the degree of the minimal polynomial of ๐ .
14 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) has minimal polynomial 4 + 5๐ง โ 6๐ง 2 โ 7๐ง 3 + 2๐ง 4 + ๐ง 5 . Find the minimal polynomial of ๐ โ1 .
15 Suppose ๐ is a finite-dimensional complex vector space with dim ๐ > 0 and ๐ โ โ (๐) . Define ๐ โถ ๐ โ ๐ by ๐ (๐) = dim range (๐ โ ๐๐ผ). Prove that ๐ is not a continuous function.
16 Suppose ๐ 0 , โฆ , ๐ ๐โ1 โ ๐
. Let ๐ be the operator on ๐
๐ whose matrix (with respect to the standard basis) is โโโโโโโโโโโโโโโ 0 โ๐ 0 1 0 โ๐ 1 1 โฑ โ๐ 2 โฑ โฎ 0 โ๐ ๐โ2 1 โ๐ ๐โ1 โโโโโโโโโโโโโโโ . Here all entries of the matrix are 0 except for all 1 โs on the line under the diagonal and the entries in the last column (some of which might also be 0 ). Show that the minimal polynomial of ๐ is the polynomial ๐ 0 + ๐ 1 ๐ง + โฏ + ๐ ๐โ1 ๐ง ๐โ1 + ๐ง ๐ .
The matrix above is called the companion matrix of the polynomial above. This exercise shows that every monic polynomial is the minimal polynomial of some operator. Hence a formula or an algorithm that could produce exact eigenvalues for each operator on each ๐
๐ could then produce exact zeros for each polynomial [ by 5.27 ( a ) ] . Thus there is no such formula or algorithm. However, efficient numeric methods exist for obtaining very good approximations for the eigenvalues of an operator.
17 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ is the minimal polynomial of ๐ . Suppose ๐ โ ๐
. Show that the minimal polynomial of ๐ โ ๐๐ผ is the polynomial ๐ defined by ๐(๐ง) = ๐(๐ง + ๐) .
18 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ is the minimal polynomial of ๐ . Suppose ๐ โ ๐
\{0} . Show that the minimal polynomial of ๐๐ is the polynomial ๐ defined by ๐(๐ง) = ๐ deg ๐ ๐( ๐ง ๐ ) .
19 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Let โฐ be the subspace of โ (๐) defined by โฐ = {๐(๐) โถ ๐ โ ๐ซ (๐
)}.
Prove that dim โฐ equals the degree of the minimal polynomial of ๐ .
20 Suppose ๐ โ โ (๐
4 ) is such that the eigenvalues of ๐ are 3 , 5 , 8 . Prove that (๐ โ 3๐ผ) 2 (๐ โ 5๐ผ) 2 (๐ โ 8๐ผ) 2 = 0 .
21 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that the minimal polynomial of ๐ has degree at most 1 + dim range ๐ . If dim range ๐ < dim ๐ โ 1 , then this exercise gives a better upper bound than 5.22 for the degree of the minimal polynomial of ๐ .
22 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that ๐ is invertible if and only if ๐ผ โ span (๐ , ๐ 2 , โฆ , ๐ dim ๐ ) .
23 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Let ๐ = dim ๐ . Prove that if ๐ฃ โ ๐ , then span (๐ฃ , ๐๐ฃ , โฆ , ๐ ๐โ1 ๐ฃ) is invariant under ๐ .
24 Suppose ๐ is a finite-dimensional complex vector space. Suppose ๐ โ โ (๐) is such that 5 and 6 are eigenvalues of ๐ and that ๐ has no other eigenvalues. Prove that (๐ โ 5๐ผ) dim ๐โ1 (๐ โ 6๐ผ) dim ๐โ1 = 0 .
25 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ is a subspace of ๐ that is invariant under ๐ .
(a) Prove that the minimal polynomial of ๐ is a polynomial multiple of the minimal polynomial of the quotient operator ๐/๐ .
(b) Prove that (minimal polynomial of ๐| ๐ ) ร (minimal polynomial of ๐/๐ ) is a polynomial multiple of the minimal polynomial of ๐ . The quotient operator ๐/๐ was defined in Exercise 38 in Section 5A.
26 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ is a subspace of ๐ that is invariant under ๐ . Prove that the set of eigenvalues of ๐ equals the union of the set of eigenvalues of ๐| ๐ and the set of eigenvalues of ๐/๐ .
27 Suppose ๐
= ๐ , ๐ is finite-dimensional, and ๐ โ โ (๐) . Prove that the minimal polynomial of ๐ ๐ equals the minimal polynomial of ๐ . The complexification ๐ ๐ was defined in Exercise 33 of Section 3B.
28 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that the minimal polynomial of ๐ โฒ โ โ (๐ โฒ ) equals the minimal polynomial of ๐ . The dual map ๐ โฒ was defined in Section 3F.
29 Show that every operator on a finite-dimensional vector space of dimension at least two has an invariant subspace of dimension two. Exercise 6 in Section 5C will give an improvement of this result when ๐
= ๐ .
5C Upper-Triangular Matrices
In Chapter 3 we defined the matrix of a linear map from a finite-dimensional vector space to another finite-dimensional vector space. That matrix depends on a choice of basis of each of the two vector spaces. Now that we are studying operators, which map a vector space to itself, the emphasis is on using only one basis.
5.35 definition: matrix of an operator, ℳ(T)
Suppose T ∈ ℒ(V). The matrix of T with respect to a basis v₁, …, vₙ of V is the n-by-n matrix
ℳ(T) =
⎛ A₁,₁ ⋯ A₁,ₙ ⎞
⎜  ⋮         ⋮  ⎟
⎝ Aₙ,₁ ⋯ Aₙ,ₙ ⎠
whose entries Aⱼ,ₖ are defined by
Tvₖ = A₁,ₖv₁ + ⋯ + Aₙ,ₖvₙ.
The notation ℳ(T, (v₁, …, vₙ)) is used if the basis is not clear from the context.
Operators have square matrices (meaning that the number of rows equals the number of columns), rather than the more general rectangular matrices that we considered earlier for linear maps. The k-th column of the matrix ℳ(T) is formed from the coefficients used to write Tvₖ as a linear combination of the basis v₁, …, vₙ.
If T is an operator on Fⁿ and no basis is specified, assume that the basis in question is the standard one (where the k-th basis vector is 1 in the k-th slot and 0 in all other slots). You can then think of the k-th column of ℳ(T) as T applied to the k-th basis vector, where we identify n-by-1 column vectors with elements of Fⁿ.

5.36 example: matrix of an operator with respect to standard basis
Define T ∈ ℒ(F³) by T(x, y, z) = (2x + y, 5y + 3z, 8z). Then the matrix of T with respect to the standard basis of F³ is
ℳ(T) =
⎛ 2 1 0 ⎞
⎜ 0 5 3 ⎟
⎝ 0 0 8 ⎠,
as you should verify.
A central goal of linear algebra is to show that given an operator T on a finite-dimensional vector space V, there exists a basis of V with respect to which T has a reasonably simple matrix. To make this vague formulation a bit more precise, we might try to choose a basis of V such that ℳ(T) has many 0's.
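The sketch below (mine, not the book's) builds ℳ(T) for V = Fⁿ with the standard basis exactly as in 5.35: column k of ℳ(T) is T applied to the k-th standard basis vector. It assumes NumPy, and the helper name is an illustrative choice.

```python
import numpy as np

def matrix_of_operator(T_func, n):
    """Build M(T) for an operator on F^n given as a function on vectors."""
    cols = []
    for k in range(n):
        e_k = np.zeros(n); e_k[k] = 1.0
        cols.append(T_func(e_k))        # T applied to the k-th basis vector
    return np.column_stack(cols)

# The operator of Example 5.36: T(x, y, z) = (2x + y, 5y + 3z, 8z).
T_func = lambda v: np.array([2*v[0] + v[1], 5*v[1] + 3*v[2], 8*v[2]])
print(matrix_of_operator(T_func, 3))
# [[2. 1. 0.]
#  [0. 5. 3.]
#  [0. 0. 8.]]
```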
If V is a finite-dimensional complex vector space, then we already know enough to show that there is a basis of V with respect to which the matrix of T has 0's everywhere in the first column, except possibly the first entry. In other words, there is a basis of V with respect to which the matrix of T looks like
⎛ λ ∗ ⋯ ∗ ⎞
⎜ 0         ⎟
⎜ ⋮         ⎟
⎝ 0         ⎠;
here ∗ denotes the entries in all columns other than the first column. To prove this, let λ be an eigenvalue of T (one exists by 5.19) and let v be a corresponding eigenvector. Extend v to a basis of V. Then the matrix of T with respect to this basis has the form above.
Soon we will see that we can choose a basis of V with respect to which the matrix of T has even more 0's.

5.37 definition: diagonal of a matrix
The diagonal of a square matrix consists of the entries on the line from the upper left corner to the bottom right corner.
For example, the diagonal of the matrix
ℳ(T) =
⎛ 2 1 0 ⎞
⎜ 0 5 3 ⎟
⎝ 0 0 8 ⎠
from Example 5.36 consists of the entries 2, 5, 8.

5.38 definition: upper-triangular matrix
A square matrix is called upper triangular if all entries below the diagonal are 0.
For example, the 3-by-3 matrix above is upper triangular.
Typically we represent an upper-triangular matrix in the form
⎛ λ₁     ∗ ⎞
⎜    ⋱     ⎟
⎝ 0     λₙ ⎠;
the 0 in the matrix above indicates that all entries below the diagonal in this n-by-n matrix equal 0. We often use ∗ to denote matrix entries that we do not know or that are irrelevant to the questions being discussed. Upper-triangular matrices can be considered reasonably simple: if n is large, then at least almost half the entries in an n-by-n upper-triangular matrix are 0.
The next result provides a useful connection between upper-triangular matrices and invariant subspaces.

5.39 conditions for upper-triangular matrix
Suppose T ∈ ℒ(V) and v₁, …, vₙ is a basis of V. Then the following are equivalent.
(a) The matrix of T with respect to v₁, …, vₙ is upper triangular.
(b) span(v₁, …, vₖ) is invariant under T for each k = 1, …, n.
(c) Tvₖ ∈ span(v₁, …, vₖ) for each k = 1, …, n.
Proof  First suppose (a) holds. To prove that (b) holds, suppose k ∈ {1, …, n}. If j ∈ {1, …, k}, then
Tvⱼ ∈ span(v₁, …, vⱼ)
because the matrix of T with respect to v₁, …, vₙ is upper triangular. Because span(v₁, …, vⱼ) ⊆ span(v₁, …, vₖ) if j ≤ k, we see that Tvⱼ ∈ span(v₁, …, vₖ) for each j ∈ {1, …, k}. Thus span(v₁, …, vₖ) is invariant under T, completing the proof that (a) implies (b).
Now suppose (b) holds, so span(v₁, …, vₖ) is invariant under T for each k = 1, …, n. In particular, Tvₖ ∈ span(v₁, …, vₖ) for each k = 1, …, n. Thus (b) implies (c).
Now suppose (c) holds, so Tvₖ ∈ span(v₁, …, vₖ) for each k = 1, …, n. This means that when writing each Tvₖ as a linear combination of the basis vectors v₁, …, vₙ, we need to use only the vectors v₁, …, vₖ. Hence all entries under the diagonal of ℳ(T) are 0. Thus ℳ(T) is an upper-triangular matrix, completing the proof that (c) implies (a).
We have shown that (a) ⟹ (b) ⟹ (c) ⟹ (a), which shows that (a), (b), and (c) are equivalent.
The next result tells us that if T ∈ ℒ(V) and with respect to some basis of V we have
ℳ(T) =
⎛ λ₁     ∗ ⎞
⎜    ⋱     ⎟
⎝ 0     λₙ ⎠,
then T satisfies a simple equation depending on λ₁, …, λₙ.

5.40 equation satisfied by operator with upper-triangular matrix
Suppose T ∈ ℒ(V) and V has a basis with respect to which T has an upper-triangular matrix with diagonal entries λ₁, …, λₙ. Then
(T − λ₁I)⋯(T − λₙI) = 0.
Proof  Let v₁, …, vₙ denote a basis of V with respect to which T has an upper-triangular matrix with diagonal entries λ₁, …, λₙ.
Then Tv₁ = λ₁v₁, which means that (T − λ₁I)v₁ = 0, which implies that (T − λ₁I)⋯(T − λₘI)v₁ = 0 for m = 1, …, n (using the commutativity of each T − λₖI with each T − λⱼI).
Note that (T − λ₂I)v₂ ∈ span(v₁). Thus (T − λ₁I)(T − λ₂I)v₂ = 0 (by the previous paragraph), which implies that (T − λ₁I)⋯(T − λₘI)v₂ = 0 for m = 2, …, n (using the commutativity of each T − λₖI with each T − λⱼI).
Note that (T − λ₃I)v₃ ∈ span(v₁, v₂). Thus by the previous paragraph, (T − λ₁I)(T − λ₂I)(T − λ₃I)v₃ = 0, which implies that (T − λ₁I)⋯(T − λₘI)v₃ = 0 for m = 3, …, n (using the commutativity of each T − λₖI with each T − λⱼI).
Continuing this pattern, we see that (T − λ₁I)⋯(T − λₘI)vₘ = 0 for each m = 1, …, n. Thus (T − λ₁I)⋯(T − λₙI) is the 0 operator because it is 0 on each vector in a basis of V.
Unfortunately no method exists for exactly computing the eigenvalues of an operator from its matrix. However, if we are fortunate enough to find a basis with respect to which the matrix of the operator is upper triangular, then the problem of computing the eigenvalues becomes trivial, as the next result shows.

5.41 determination of eigenvalues from upper-triangular matrix
Suppose T ∈ ℒ(V) has an upper-triangular matrix with respect to some basis of V. Then the eigenvalues of T are precisely the entries on the diagonal of that upper-triangular matrix.
Proof  Suppose v₁, …, vₙ is a basis of V with respect to which T has an upper-triangular matrix
ℳ(T) =
⎛ λ₁     ∗ ⎞
⎜    ⋱     ⎟
⎝ 0     λₙ ⎠.
Because Tv₁ = λ₁v₁, we see that λ₁ is an eigenvalue of T.
Suppose k ∈ {2, …, n}. Then (T − λₖI)vₖ ∈ span(v₁, …, vₖ₋₁). Thus T − λₖI maps span(v₁, …, vₖ) into span(v₁, …, vₖ₋₁). Because
dim span(v₁, …, vₖ) = k  and  dim span(v₁, …, vₖ₋₁) = k − 1,
this implies that T − λₖI restricted to span(v₁, …, vₖ) is not injective (by 3.22). Thus there exists v ∈ span(v₁, …, vₖ) such that v ≠ 0 and (T − λₖI)v = 0. Thus λₖ is an eigenvalue of T. Hence we have shown that every entry on the diagonal of ℳ(T) is an eigenvalue of T.
To prove T has no other eigenvalues, let q be the polynomial defined by q(z) = (z − λ₁)⋯(z − λₙ). Then q(T) = 0 (by 5.40). Hence q is a polynomial multiple of the minimal polynomial of T (by 5.29). Thus every zero of the minimal polynomial of T is a zero of q. Because the zeros of the minimal polynomial of T are the eigenvalues of T (by 5.27), this implies that every eigenvalue of T is a zero of q. Hence the eigenvalues of T are all contained in the list λ₁, …, λₙ.
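A short numerical check of 5.40 (mine, not the book's), assuming NumPy: for an upper-triangular matrix with diagonal entries λ₁, …, λₙ, the product (T − λ₁I)⋯(T − λₙI) is the zero operator. The matrix used is the one from Example 5.36.

```python
import numpy as np

T = np.array([[2., 1., 0.],
              [0., 5., 3.],
              [0., 0., 8.]])
n = T.shape[0]
P = np.eye(n)
for lam in np.diag(T):          # the diagonal entries 2, 5, 8 are the eigenvalues (5.41)
    P = P @ (T - lam * np.eye(n))
print(np.allclose(P, 0))        # True
```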
5.42 example: eigenvalues via an upper-triangular matrix
Define T ∈ ℒ(F³) by T(x, y, z) = (2x + y, 5y + 3z, 8z). The matrix of T with respect to the standard basis is
ℳ(T) =
⎛ 2 1 0 ⎞
⎜ 0 5 3 ⎟
⎝ 0 0 8 ⎠.
Now 5.41 implies that the eigenvalues of T are 2, 5, and 8.
The next example illustrates 5.44: an operator has an upper-triangular matrix with respect to some basis if and only if the minimal polynomial of the operator is the product of polynomials of degree 1.

5.43 example: whether T has an upper-triangular matrix can depend on F
Define T ∈ ℒ(F⁴) by
T(z₁, z₂, z₃, z₄) = (−z₂, z₁, 2z₁ + 3z₃, z₃ + 3z₄).
Thus with respect to the standard basis of F⁴, the matrix of T is
⎛ 0 −1 0 0 ⎞
⎜ 1  0 0 0 ⎟
⎜ 2  0 3 0 ⎟
⎝ 0  0 1 3 ⎠.
You can ask a computer to verify that the minimal polynomial of T is the polynomial p defined by
p(z) = 9 − 6z + 10z² − 6z³ + z⁴.
First consider the case F = R. Then the polynomial p factors as
p(z) = (z² + 1)(z − 3)(z − 3),
with no further factorization of z² + 1 as the product of two polynomials of degree 1 with real coefficients. Thus 5.44 states that there does not exist a basis of R⁴ with respect to which T has an upper-triangular matrix.
Now consider the case F = C. Then the polynomial p factors as
p(z) = (z − i)(z + i)(z − 3)(z − 3),
where all factors above have the form z − λₖ. Thus 5.44 states that there is a basis of C⁴ with respect to which T has an upper-triangular matrix. Indeed, you can verify that with respect to the basis
(4 − 3i, −3 − 4i, −3 + i, 1),  (4 + 3i, −3 + 4i, −3 − i, 1),  (0, 0, 0, 1),  (0, 0, 1, 0)
of C⁴, the operator T has the upper-triangular matrix
⎛ i  0 0 0 ⎞
⎜ 0 −i 0 0 ⎟
⎜ 0  0 3 1 ⎟
⎝ 0  0 0 3 ⎠.
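Not from the book: numerically, the Schur decomposition is the standard way to produce an upper-triangular matrix for an operator over C (5.47 below guarantees that such a basis exists; Schur even produces an orthonormal one). Applied to the matrix of Example 5.43, the diagonal of the triangular factor shows the eigenvalues i, −i, 3, 3 in some order. This sketch assumes SciPy is available.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0., -1., 0., 0.],
              [1., 0., 0., 0.],
              [2., 0., 3., 0.],
              [0., 0., 1., 3.]])
U, Q = schur(A, output='complex')           # A = Q U Q*, with U upper triangular
print(np.round(np.diag(U), 6))              # the eigenvalues of T on the diagonal
print(np.allclose(Q @ U @ Q.conj().T, A))   # True
```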
5.44 necessary and sufficient condition to have an upper-triangular matrix
Suppose V is finite-dimensional and T ∈ ℒ(V). Then T has an upper-triangular matrix with respect to some basis of V if and only if the minimal polynomial of T equals (z − λ₁)⋯(z − λₘ) for some λ₁, …, λₘ ∈ F.
Proof  First suppose T has an upper-triangular matrix with respect to some basis of V. Let α₁, …, αₙ denote the diagonal entries of that matrix. Define a polynomial q ∈ 𝒫(F) by
q(z) = (z − α₁)⋯(z − αₙ).
Then q(T) = 0, by 5.40. Hence q is a polynomial multiple of the minimal polynomial of T, by 5.29. Thus the minimal polynomial of T equals (z − λ₁)⋯(z − λₘ) for some λ₁, …, λₘ ∈ F with {λ₁, …, λₘ} ⊆ {α₁, …, αₙ}.
To prove the implication in the other direction, now suppose the minimal polynomial of T equals (z − λ₁)⋯(z − λₘ) for some λ₁, …, λₘ ∈ F. We will use induction on m. To get started, if m = 1 then z − λ₁ is the minimal polynomial of T, which implies that T = λ₁I, which implies that the matrix of T (with respect to any basis of V) is upper triangular.
Now suppose m > 1 and the desired result holds for all smaller positive integers. Let
U = range(T − λₘI).
Then U is invariant under T [this is a special case of 5.18 with p(z) = z − λₘ]. Thus T|_U is an operator on U.
If u ∈ U, then u = (T − λₘI)v for some v ∈ V and
(T − λ₁I)⋯(T − λₘ₋₁I)u = (T − λ₁I)⋯(T − λₘI)v = 0.
Hence (z − λ₁)⋯(z − λₘ₋₁) is a polynomial multiple of the minimal polynomial of T|_U, by 5.29. Thus the minimal polynomial of T|_U is the product of at most m − 1 terms of the form z − λₖ.
By our induction hypothesis, there is a basis u₁, …, u_M of U with respect to which T|_U has an upper-triangular matrix. Thus for each j ∈ {1, …, M}, we have (using 5.39)
5.45   Tuⱼ = (T|_U)(uⱼ) ∈ span(u₁, …, uⱼ).
Extend u₁, …, u_M to a basis u₁, …, u_M, v₁, …, v_N of V. For each k ∈ {1, …, N}, we have
Tvₖ = (T − λₘI)vₖ + λₘvₖ.
The definition of U shows that (T − λₘI)vₖ ∈ U = span(u₁, …, u_M). Thus the equation above shows that
5.46   Tvₖ ∈ span(u₁, …, u_M, v₁, …, vₖ).
From 5.45 and 5.46, we conclude (using 5.39) that T has an upper-triangular matrix with respect to the basis u₁, …, u_M, v₁, …, v_N of V, as desired.
160 Chapter 5 Eigenvalues and Eigenvectors The set of numbers {๐ 1 , โฆ , ๐ ๐ } from the previous result equals the set of eigenvalues of ๐ (because the set of zeros of the minimal polynomial of ๐ equals the set of eigenvalues of ๐ , by 5.27), although the list ๐ 1 , โฆ , ๐ ๐ in the previous result may contain repetitions. In Chapter 8 we will improve even the wonderful result below; see 8.37 and 8.46. 5.47 if ๐
= ๐ , then every operator on ๐ has an upper-triangular matrix Suppose ๐ is a finite-dimensional complex vector space and ๐ โ โ (๐) . Then ๐ has an upper-triangular matrix with respect to some basis of ๐ . Proof The desired result follows immediately from 5.44 and the second version of the fundamental theorem of algebra (see 4.13). For an extension of the result above to two operators ๐ and ๐ such that ๐๐ = ๐๐ , see 5.80. Also, for an extension to more than two operators, see Exercise 9(b) in Section 5E. Caution: If an operator ๐ โ โ (๐) has a upper-triangular matrix with respect to some basis ๐ฃ 1 , โฆ , ๐ฃ ๐ of ๐ , then the eigenvalues of ๐ are exactly the entries on the diagonal of โณ (๐) , as shown by 5.41, and furthermore ๐ฃ 1 is an eigenvector of ๐ . However, ๐ฃ 2 , โฆ , ๐ฃ ๐ need not be eigenvectors of ๐ . Indeed, a basis vector ๐ฃ ๐ is an eigenvector of ๐ if and only if all entries in the ๐ th column of the matrix of ๐ are 0 , except possibly the ๐ th entry. The row echelon form of the matrix of an operator does not give us a list of the eigenvalues of the operator. In contrast, an upper-triangular matrix with respect to some basis gives us a list of all the eigenvalues of the op- erator. However, there is no method for computing exactly such an upper- triangular matrix, even though 5.47 guarantees its existence if ๐
= ๐ . You may recall from a previous course that every matrix of numbers can be changed to a matrix in what is called row echelon form. If one begins with a square matrix, the matrix in row echelon form will be an upper-triangular matrix. Do not confuse this upper-triangular ma- trix with the upper-triangular matrix of an operator with respect to some basis whose existence is proclaimed by 5.47 (if ๐
= ๐ )โthere is no connection between these upper-triangular matrices. Exercises 5C 1 Prove or give a counterexample: If ๐ โ โ (๐) and ๐ 2 has an upper-triangular matrix with respect to some basis of ๐ , then ๐ has an upper-triangular matrix with respect to some basis of ๐ .
Section 5C Upper-Triangular Matrices 161 2 Suppose ๐ด and ๐ต are upper-triangular matrices of the same size, with ๐ผ 1 , โฆ , ๐ผ ๐ on the diagonal of ๐ด and ๐ฝ 1 , โฆ , ๐ฝ ๐ on the diagonal of ๐ต . (a) Show that ๐ด + ๐ต is an upper-triangular matrix with ๐ผ 1 + ๐ฝ 1 , โฆ , ๐ผ ๐ + ๐ฝ ๐ on the diagonal. (b) Show that ๐ด๐ต is an upper-triangular matrix with ๐ผ 1 ๐ฝ 1 , โฆ , ๐ผ ๐ ๐ฝ ๐ on the diagonal. The results in this exercise are used in the proof of 5.81. 3 Suppose ๐ โ โ (๐) is invertible and ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ with respect to which the matrix of ๐ is upper triangular, with ๐ 1 , โฆ , ๐ ๐ on the diagonal. Show that the matrix of ๐ โ1 is also upper triangular with respect to the basis ๐ฃ 1 , โฆ , ๐ฃ ๐ , with 1 ๐ 1 , โฆ , 1 ๐ ๐ on the diagonal. 4 Give an example of an operator whose matrix with respect to some basis contains only 0 โs on the diagonal, but the operator is invertible. This exercise and the exercise below show that 5.41 fails without the hypoth- esis that an upper-triangular matrix is under consideration. 5 Give an example of an operator whose matrix with respect to some basis contains only nonzero numbers on the diagonal, but the operator is not invertible. 6 Suppose ๐
= ๐ , ๐ is finite-dimensional, and ๐ โ โ (๐) . Prove that if ๐ โ {1 , โฆ , dim ๐} , then ๐ has a ๐ -dimensional subspace invariant under ๐ . 7 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and ๐ฃ โ ๐ . (a) Prove that there exists a unique monic polynomial ๐ ๐ฃ of smallest degree such that ๐ ๐ฃ (๐)๐ฃ = 0 . (b) Prove that the minimal polynomial of ๐ is a polynomial multiple of ๐ ๐ฃ . 8 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) , and there exists a nonzero vector ๐ฃ โ ๐ such that ๐ 2 ๐ฃ + 2๐๐ฃ = โ2๐ฃ . (a) Prove that if ๐
= ๐ , then there does not exist a basis of ๐ with respect to which ๐ has an upper-triangular matrix. (b) Prove that if ๐
= ๐ and ๐ด is an upper-triangular matrix that equals the matrix of ๐ with respect to some basis of ๐ , then โ1 + ๐ or โ1 โ ๐ appears on the diagonal of ๐ด . 9 Suppose ๐ต is a square matrix with complex entries. Prove that there exists an invertible square matrix ๐ด with complex entries such that ๐ด โ1 ๐ต๐ด is an upper-triangular matrix.
162 Chapter 5 Eigenvalues and Eigenvectors 10 Suppose ๐ โ โ (๐) and ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ . Show that the following are equivalent. (a) The matrix of ๐ with respect to ๐ฃ 1 , โฆ , ๐ฃ ๐ is lower triangular. (b) span (๐ฃ ๐ , โฆ , ๐ฃ ๐ ) is invariant under ๐ for each ๐ = 1 , โฆ , ๐ . (c) ๐๐ฃ ๐ โ span (๐ฃ ๐ , โฆ , ๐ฃ ๐ ) for each ๐ = 1 , โฆ , ๐ . A square matrix is called lower triangular if all entries above the diagonal are 0 . 11 Suppose ๐
= ๐ and ๐ is finite-dimensional. Prove that if ๐ โ โ (๐) , then there exists a basis of ๐ with respect to which ๐ has a lower-triangular matrix. 12 Suppose ๐ is finite-dimensional, ๐ โ โ (๐) has an upper-triangular matrix with respect to some basis of ๐ , and ๐ is a subspace of ๐ that is invariant under ๐ . (a) Prove that ๐| ๐ has an upper-triangular matrix with respect to some basis of ๐ . (b) Prove that the quotient operator ๐/๐ has an upper-triangular matrix with respect to some basis of ๐/๐ . The quotient operator ๐/๐ was defined in Exercise 38 in Section 5A. 13 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Suppose there exists a subspace ๐ of ๐ that is invariant under ๐ such that ๐| ๐ has an upper- triangular matrix with respect to some basis of ๐ and also ๐/๐ has an upper-triangular matrix with respect to some basis of ๐/๐ . Prove that ๐ has an upper-triangular matrix with respect to some basis of ๐ . 14 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that ๐ has an upper- triangular matrix with respect to some basis of ๐ if and only if the dual operator ๐ โฒ has an upper-triangular matrix with respect to some basis of the dual space ๐ โฒ .
Section 5D Diagonalizable Operators 163 5D Diagonalizable Operators Diagonal Matrices 5.48 definition: diagonal matrix A diagonal matrix is a square matrix that is 0 everywhere except possibly on the diagonal. 5.49 example: diagonal matrix โโโโโ 8 0 0 0 5 0 0 0 5 โโโโโ is a diagonal matrix. Every diagonal matrix is upper tri- angular. Diagonal matrices typically have many more 0 โs than most upper- triangular matrices of the same size. If an operator has a diagonal matrix with respect to some basis, then the en- tries on the diagonal are precisely the eigenvalues of the operator; this follows from 5.41 (or find an easier direct proof for diagonal matrices). 5.50 definition: diagonalizable An operator on ๐ is called diagonalizable if the operator has a diagonal matrix with respect to some basis of ๐ . 5.51 example: diagonalization may require a different basis Define ๐ โ โ (๐ 2 ) by ๐(๐ฅ , ๐ฆ) = (41๐ฅ + 7๐ฆ , โ20๐ฅ + 74๐ฆ). The matrix of ๐ with respect to the standard basis of ๐ 2 is ( 41 7 โ20 74 ) , which is not a diagonal matrix. However, ๐ is diagonalizable. Specifically, the matrix of ๐ with respect to the basis (1 , 4) , (7 , 5) is ( 69 0 0 46 ) because ๐(1 , 4) = (69 , 276) = 69(1 , 4) and ๐(7 , 5) = (322 , 230) = 46(7 , 5) .
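Readers who like to confirm such computations numerically can do so in a few lines. The following sketch is illustrative only and assumes NumPy; it recovers the eigenvalues 69 and 46 and confirms that the basis (1, 4), (7, 5) diagonalizes the matrix.

# Numerical check of Example 5.51 (NumPy assumed).
import numpy as np

A = np.array([[41.0, 7.0],
              [-20.0, 74.0]])          # matrix of T in the standard basis

print(np.linalg.eigvals(A))            # 69 and 46 (in some order)

B = np.array([[1.0, 7.0],              # columns are the basis vectors (1, 4) and (7, 5)
              [4.0, 5.0]])
print(np.round(np.linalg.inv(B) @ A @ B, 10))    # diag(69, 46), up to rounding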
164 Chapter 5 Eigenvalues and Eigenvectors For ๐ โ ๐
, we will find it convenient to have a name and a notation for the set of vectors that an operator ๐ maps to ๐ times the vector. 5.52 definition: eigenspace, ๐ธ(๐ , ๐) Suppose ๐ โ โ (๐) and ๐ โ ๐
. The eigenspace of ๐ corresponding to ๐ is the subspace ๐ธ(๐ , ๐) of ๐ defined by ๐ธ(๐ , ๐) = null (๐ โ ๐๐ผ) = {๐ฃ โ ๐ โถ ๐๐ฃ = ๐๐ฃ}. Hence ๐ธ(๐ , ๐) is the set of all eigenvectors of ๐ corresponding to ๐ , along with the 0 vector. For ๐ โ โ (๐) and ๐ โ ๐
, the set ๐ธ(๐ , ๐) is a subspace of ๐ because the null space of each linear map on ๐ is a subspace of ๐ . The definitions imply that ๐ is an eigenvalue of ๐ if and only if ๐ธ(๐ , ๐) โ {0} . 5.53 example: eigenspaces of an operator Suppose the matrix of an operator ๐ โ โ (๐) with respect to a basis ๐ฃ 1 , ๐ฃ 2 , ๐ฃ 3 of ๐ is the matrix in Example 5.49. Then ๐ธ(8 , ๐) = span (๐ฃ 1 ) , ๐ธ(5 , ๐) = span (๐ฃ 2 , ๐ฃ 3 ). If ๐ is an eigenvalue of an operator ๐ โ โ (๐) , then ๐ restricted to ๐ธ(๐ , ๐) is just the operator of multiplication by ๐ . 5.54 sum of eigenspaces is a direct sum Suppose ๐ โ โ (๐) and ๐ 1 , โฆ , ๐ ๐ are distinct eigenvalues of ๐ . Then ๐ธ(๐ 1 , ๐) + โฏ + ๐ธ(๐ ๐ , ๐) is a direct sum. Furthermore, if ๐ is finite-dimensional, then dim ๐ธ(๐ 1 , ๐) + โฏ + dim ๐ธ(๐ ๐ , ๐) โค dim ๐. Proof To show that ๐ธ(๐ 1 , ๐) + โฏ + ๐ธ(๐ ๐ , ๐) is a direct sum, suppose ๐ฃ 1 + โฏ + ๐ฃ ๐ = 0 , where each ๐ฃ ๐ is in ๐ธ(๐ ๐ , ๐) . Because eigenvectors corresponding to distinct eigenvalues are linearly independent (by 5.11), this implies that each ๐ฃ ๐ equals 0 . Thus ๐ธ(๐ 1 , ๐) + โฏ + ๐ธ(๐ ๐ , ๐) is a direct sum (by 1.45), as desired. Now suppose ๐ is finite-dimensional. Then dim ๐ธ(๐ 1 , ๐) + โฏ + dim ๐ธ(๐ ๐ , ๐) = dim (๐ธ(๐ 1 , ๐) โ โฏ โ ๐ธ(๐ ๐ , ๐)) โค dim ๐ , where the first line follows from 3.94 and the second line follows from 2.37.
Section 5D Diagonalizable Operators 165 Conditions for Diagonalizability The following characterizations of diagonalizable operators will be useful. 5.55 conditions equivalent to diagonalizability Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Let ๐ 1 , โฆ , ๐ ๐ denote the distinct eigenvalues of ๐ . Then the following are equivalent. (a) ๐ is diagonalizable. (b) ๐ has a basis consisting of eigenvectors of ๐ . (c) ๐ = ๐ธ(๐ 1 , ๐) โ โฏ โ ๐ธ(๐ ๐ , ๐) . (d) dim ๐ = dim ๐ธ(๐ 1 , ๐) + โฏ + dim ๐ธ(๐ ๐ , ๐) . Proof An operator ๐ โ โ (๐) has a diagonal matrix โโโโโ ๐ 1 0 โฑ 0 ๐ ๐ โโโโโ with respect to a basis ๐ฃ 1 , โฆ , ๐ฃ ๐ of ๐ if and only if ๐๐ฃ ๐ = ๐ ๐ ๐ฃ ๐ for each ๐ . Thus (a) and (b) are equivalent. Suppose (b) holds; thus ๐ has a basis consisting of eigenvectors of ๐ . Hence every vector in ๐ is a linear combination of eigenvectors of ๐ , which implies that ๐ = ๐ธ(๐ 1 , ๐) + โฏ + ๐ธ(๐ ๐ , ๐). Now 5.54 shows that (c) holds, proving that (b) implies (c). That (c) implies (d) follows immediately from 3.94. Finally, suppose (d) holds; thus 5.56 dim ๐ = dim ๐ธ(๐ 1 , ๐) + โฏ + dim ๐ธ(๐ ๐ , ๐). Choose a basis of each ๐ธ(๐ ๐ , ๐) ; put all these bases together to form a list ๐ฃ 1 , โฆ , ๐ฃ ๐ of eigenvectors of ๐ , where ๐ = dim ๐ (by 5.56). To show that this list is linearly independent, suppose ๐ 1 ๐ฃ 1 + โฏ + ๐ ๐ ๐ฃ ๐ = 0 , where ๐ 1 , โฆ , ๐ ๐ โ ๐
. For each ๐ = 1 , โฆ , ๐ , let ๐ข ๐ denote the sum of all the terms ๐ ๐ ๐ฃ ๐ such that ๐ฃ ๐ โ ๐ธ(๐ ๐ , ๐) . Thus each ๐ข ๐ is in ๐ธ(๐ ๐ , ๐) , and ๐ข 1 + โฏ + ๐ข ๐ = 0. Because eigenvectors corresponding to distinct eigenvalues are linearly indepen- dent (see 5.11), this implies that each ๐ข ๐ equals 0 . Because each ๐ข ๐ is a sum of terms ๐ ๐ ๐ฃ ๐ , where the ๐ฃ ๐ โs were chosen to be a basis of ๐ธ(๐ ๐ , ๐) , this implies that all ๐ ๐ โs equal 0 . Thus ๐ฃ 1 , โฆ , ๐ฃ ๐ is linearly independent and hence is a basis of ๐ (by 2.38). Thus (d) implies (b), completing the proof. For additional conditions equivalent to diagonalizability, see 5.62, Exercises 5 and 15 in this section, Exercise 24 in Section 7B, and Exercise 15 in Section 8A.
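Condition (d) of 5.55 is also the easiest one to test with exact arithmetic: compute dim E(λ, T) = dim V − rank(T − λI) for each eigenvalue λ and compare the total with dim V. The sketch below is an illustration assuming SymPy; the helper name dims_of_eigenspaces and the second test matrix are our own choices, not the text's.

# Testing condition 5.55(d) with exact arithmetic (SymPy assumed).
from sympy import Matrix, eye

def dims_of_eigenspaces(A):
    """Return {eigenvalue: dim E(eigenvalue, A)}, using dim null = n - rank."""
    n = A.shape[0]
    return {lam: n - (A - lam * eye(n)).rank() for lam in A.eigenvals()}

A = Matrix([[41, 7], [-20, 74]])   # the diagonalizable operator of Example 5.51
B = Matrix([[5, 1], [0, 5]])       # not diagonalizable: only eigenvalue 5, eigenspace of dimension 1

for M in (A, B):
    dims = dims_of_eigenspaces(M)
    print(dims, sum(dims.values()) == M.shape[0])
# {69: 1, 46: 1} True
# {5: 1} False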
166 Chapter 5 Eigenvalues and Eigenvectors As we know, every operator on a finite-dimensional complex vector space has an eigenvalue. However, not every operator on a finite-dimensional complex vector space has enough eigenvectors to be diagonalizable, as shown by the next example. 5.57 example: an operator that is not diagonalizable Define an operator ๐ โ โ (๐
³) by T(a, b, c) = (b, c, 0). The matrix of T with respect to the standard basis of 𝐅³ is

\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},

which is an upper-triangular matrix but is not a diagonal matrix. As you should verify, 0 is the only eigenvalue of T and furthermore

E(0, T) = {(a, 0, 0) ∈ 𝐅³ : a ∈ 𝐅}.

Hence conditions (b), (c), and (d) of 5.55 fail (of course, because these conditions are equivalent, it is sufficient to check that only one of them fails). Thus condition (a) of 5.55 also fails. Hence T is not diagonalizable, regardless of whether 𝐅 = 𝐑 or 𝐅 = 𝐂.

The next result shows that if an operator has as many distinct eigenvalues as the dimension of its domain, then the operator is diagonalizable.

5.58 enough eigenvalues implies diagonalizability

Suppose V is finite-dimensional and T ∈ ℒ(V) has dim V distinct eigenvalues. Then T is diagonalizable.

Proof Suppose T has distinct eigenvalues λ_1, …, λ_{dim V}. For each k, let v_k ∈ V be an eigenvector corresponding to the eigenvalue λ_k. Because eigenvectors corresponding to distinct eigenvalues are linearly independent (see 5.11), v_1, …, v_{dim V} is linearly independent. A linearly independent list of dim V vectors in V is a basis of V (see 2.38); thus v_1, …, v_{dim V} is a basis of V. With respect to this basis consisting of eigenvectors, T has a diagonal matrix.

In later chapters we will find additional conditions that imply that certain operators are diagonalizable. For example, see the real spectral theorem (7.29) and the complex spectral theorem (7.31).

The result above gives a sufficient condition for an operator to be diagonalizable. However, this condition is not necessary. For example, the operator T on 𝐅³ defined by T(x, y, z) = (6x, 6y, 7z) has only two eigenvalues (6 and 7) and dim 𝐅³ = 3, but T is diagonalizable (by the standard basis of 𝐅³).
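The two operators just discussed can also be handed to a computer algebra system, which reaches the same conclusions. This sketch assumes SymPy; its is_diagonalizable method is a convenience we use for checking, not something the text relies on.

# Checking Example 5.57 and the remark after 5.58 (SymPy assumed).
from sympy import Matrix, diag

S = Matrix([[0, 1, 0],          # matrix of T(a, b, c) = (b, c, 0) from Example 5.57
            [0, 0, 1],
            [0, 0, 0]])
print(S.eigenvals())            # {0: 3}: the only eigenvalue is 0
print(S.is_diagonalizable())    # False

D = diag(6, 6, 7)               # (x, y, z) -> (6x, 6y, 7z): only two distinct eigenvalues
print(D.is_diagonalizable())    # True, so the hypothesis of 5.58 is not necessary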
Section 5D Diagonalizable Operators 167 For a spectacular application of these techniques, see Exercise 21, which shows how to use diagonalization to find an exact formula for the ๐ th term of the Fibonacci sequence. The next example illustrates the im- portance of diagonalization, which can be used to compute high powers of an operator, taking advantage of the equa- tion ๐ ๐ ๐ฃ = ๐ ๐ ๐ฃ if ๐ฃ is an eigenvector of ๐ with eigenvalue ๐ . 5.59 example: using diagonalization to compute ๐ 100 Define ๐ โ โ (๐
³) by T(x, y, z) = (2x + y, 5y + 3z, 8z). With respect to the standard basis, the matrix of T is

\begin{pmatrix} 2 & 1 & 0 \\ 0 & 5 & 3 \\ 0 & 0 & 8 \end{pmatrix}.

The matrix above is an upper-triangular matrix but it is not a diagonal matrix. By 5.41, the eigenvalues of T are 2, 5, and 8. Because T is an operator on a vector space of dimension three and T has three distinct eigenvalues, 5.58 assures us that there exists a basis of 𝐅³ with respect to which T has a diagonal matrix.

To find this basis, we only have to find an eigenvector for each eigenvalue. In other words, we have to find a nonzero solution to the equation T(x, y, z) = λ(x, y, z) for λ = 2, then for λ = 5, and then for λ = 8. Solving these simple equations shows that for λ = 2 we have an eigenvector (1, 0, 0), for λ = 5 we have an eigenvector (1, 3, 0), and for λ = 8 we have an eigenvector (1, 6, 6). Thus (1, 0, 0), (1, 3, 0), (1, 6, 6) is a basis of 𝐅³ consisting of eigenvectors of T, and with respect to this basis the matrix of T is the diagonal matrix

\begin{pmatrix} 2 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 8 \end{pmatrix}.

To compute T^100(0, 0, 1), for example, write (0, 0, 1) as a linear combination of our basis of eigenvectors:

(0, 0, 1) = (1/6)(1, 0, 0) − (1/3)(1, 3, 0) + (1/6)(1, 6, 6).

Now apply T^100 to both sides of the equation above, getting

T^100(0, 0, 1) = (1/6)(T^100(1, 0, 0)) − (1/3)(T^100(1, 3, 0)) + (1/6)(T^100(1, 6, 6))
             = (1/6)(2^100(1, 0, 0) − 2 · 5^100(1, 3, 0) + 8^100(1, 6, 6))
             = (1/6)(2^100 − 2 · 5^100 + 8^100, 6 · 8^100 − 6 · 5^100, 6 · 8^100).
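The arithmetic in Example 5.59 can be confirmed exactly with a computer algebra system. The following sketch assumes SymPy; it diagonalizes the matrix using the eigenvector basis found above and compares the result with a direct computation of T^100(0, 0, 1).

# Exact check of Example 5.59 (SymPy assumed).
from sympy import Matrix, Rational

A = Matrix([[2, 1, 0],
            [0, 5, 3],
            [0, 0, 8]])                  # matrix of T in the standard basis

P = Matrix([[1, 1, 1],                   # columns are the eigenvectors (1,0,0), (1,3,0), (1,6,6)
            [0, 3, 6],
            [0, 0, 6]])
D = P.inv() * A * P
print(D.is_diagonal())                   # True; D = diag(2, 5, 8)

e3 = Matrix([0, 0, 1])
direct = A**100 * e3                     # brute-force matrix power
via_diagonalization = P * D**100 * P.inv() * e3
print(direct == via_diagonalization)                              # True
print(direct[0] == Rational(2**100 - 2*5**100 + 8**100, 6))       # True: matches the formula above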
168 Chapter 5 Eigenvalues and Eigenvectors We saw earlier that an operator ๐ on a finite-dimensional vector space ๐ has an upper-triangular matrix with respect to some basis of ๐ if and only if the minimal polynomial of ๐ equals (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐ ) for some ๐ 1 , โฆ , ๐ ๐ โ ๐
(see 5.44). As we previously noted (see 5.47), this condition is always satisfied if ๐
= ๐ . Our next result 5.62 states that an operator ๐ โ โ (๐) has a diagonal matrix with respect to some basis of ๐ if and only if the minimal polynomial of ๐ equals (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐ ) for some distinct ๐ 1 , โฆ , ๐ ๐ โ ๐
. Before formally stating this result, we give two examples of using it. 5.60 example: diagonalizable, but with no known exact eigenvalues Define ๐ โ โ (๐ 5 ) by ๐(๐ง 1 , ๐ง 2 , ๐ง 3 , ๐ง 4 , ๐ง 5 ) = (โ3๐ง 5 , ๐ง 1 + 6๐ง 5 , ๐ง 2 , ๐ง 3 , ๐ง 4 ). The matrix of ๐ is shown in Example 5.26, where we showed that the minimal polynomial of ๐ is 3 โ 6๐ง + ๐ง 5 . As mentioned in Example 5.28, no exact expression is known for any of the zeros of this polynomial, but numeric techniques show that the zeros of this polynomial are approximately โ1.67 , 0.51 , 1.40 , โ0.12 + 1.59๐ , โ0.12 โ 1.59๐ . The software that produces these approximations is accurate to more than three digits. Thus these approximations are good enough to show that the five numbers above are distinct. The minimal polynomial of ๐ equals the fifth degree monic polynomial with these zeros. Now 5.62 shows that ๐ is diagonalizable. 5.61 example: showing that an operator is not diagonalizable Define ๐ โ โ (๐
³) by T(z_1, z_2, z_3) = (6z_1 + 3z_2 + 4z_3, 6z_2 + 2z_3, 7z_3). The matrix of T with respect to the standard basis of 𝐅³ is

\begin{pmatrix} 6 & 3 & 4 \\ 0 & 6 & 2 \\ 0 & 0 & 7 \end{pmatrix}.

The matrix above is an upper-triangular matrix but is not a diagonal matrix. Might T have a diagonal matrix with respect to some other basis of 𝐅³?

To answer this question, we will find the minimal polynomial of T. First note that the eigenvalues of T are the diagonal entries of the matrix above (by 5.41). Thus the zeros of the minimal polynomial of T are 6, 7 [by 5.27(a)]. The diagonal of the matrix above tells us that (T − 6I)²(T − 7I) = 0 (by 5.40). The minimal polynomial of T has degree at most 3 (by 5.22). Putting all this together, we see that the minimal polynomial of T is either (z − 6)(z − 7) or (z − 6)²(z − 7). A simple computation shows that (T − 6I)(T − 7I) ≠ 0. Thus the minimal polynomial of T is (z − 6)²(z − 7). Now 5.62 shows that T is not diagonalizable.
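The simple computation mentioned at the end of Example 5.61 takes only a few lines on a computer. The sketch below assumes SymPy; the final is_diagonalizable call is an extra cross-check of ours.

# Carrying out the computations in Example 5.61 (SymPy assumed).
from sympy import Matrix, eye, zeros

A = Matrix([[6, 3, 4],
            [0, 6, 2],
            [0, 0, 7]])

print((A - 6*eye(3)) * (A - 7*eye(3)) == zeros(3, 3))          # False
print((A - 6*eye(3))**2 * (A - 7*eye(3)) == zeros(3, 3))       # True
# So the minimal polynomial is (z - 6)^2 (z - 7), which has a repeated zero;
# by 5.62 the operator is not diagonalizable, and SymPy agrees:
print(A.is_diagonalizable())                                   # False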
Section 5D Diagonalizable Operators 169 5.62 necessary and sufficient condition for diagonalizability Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Then ๐ is diagonalizable if and only if the minimal polynomial of ๐ equals (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐ ) for some list of distinct numbers ๐ 1 , โฆ , ๐ ๐ โ ๐
. Proof First suppose ๐ is diagonalizable. Thus there is a basis ๐ฃ 1 , โฆ , ๐ฃ ๐ of ๐ consisting of eigenvectors of ๐ . Let ๐ 1 , โฆ , ๐ ๐ be the distinct eigenvalues of ๐ . Then for each ๐ฃ ๐ , there exists ๐ ๐ with (๐ โ ๐ ๐ ๐ผ)๐ฃ ๐ = 0 . Thus (๐ โ ๐ 1 ๐ผ)โฏ(๐ โ ๐ ๐ ๐ผ)๐ฃ ๐ = 0 , which implies that the minimal polynomial of ๐ equals (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐ ) . To prove the implication in the other direction, now suppose the minimal polynomial of ๐ equals (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐ ) for some list of distinct numbers ๐ 1 , โฆ , ๐ ๐ โ ๐
. Thus 5.63 (๐ โ ๐ 1 ๐ผ)โฏ(๐ โ ๐ ๐ ๐ผ) = 0. We will prove that ๐ is diagonalizable by induction on ๐ . To get started, suppose ๐ = 1 . Then ๐ โ ๐ 1 ๐ผ = 0 , which means that ๐ is a scalar multiple of the identity operator, which implies that ๐ is diagonalizable. Now suppose that ๐ > 1 and the desired result holds for all smaller values of ๐ . The subspace range (๐ โ ๐ ๐ ๐ผ) is invariant under ๐ [ this is a special case of 5.18 with ๐(๐ง) = ๐ง โ ๐ ๐ ] . Thus ๐ restricted to range (๐ โ ๐ ๐ ๐ผ) is an operator on range (๐ โ ๐ ๐ ๐ผ) . If ๐ข โ range (๐ โ ๐ ๐ ๐ผ) , then ๐ข = (๐ โ ๐ ๐ ๐ผ)๐ฃ for some ๐ฃ โ ๐ , and 5.63 implies 5.64 (๐ โ ๐ 1 ๐ผ)โฏ(๐ โ ๐ ๐โ1 ๐ผ)๐ข = (๐ โ ๐ 1 ๐ผ)โฏ(๐ โ ๐ ๐ ๐ผ)๐ฃ = 0. Hence (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐โ1 ) is a polynomial multiple of the minimal polynomial of ๐ restricted to range (๐ โ ๐ ๐ ๐ผ) [by 5.29]. Thus by our induction hypothesis, there is a basis of range (๐ โ ๐ ๐ ๐ผ) consisting of eigenvectors of ๐ . Suppose that ๐ข โ range (๐ โ ๐ ๐ ๐ผ) โฉ null (๐ โ ๐ ๐ ๐ผ) . Then ๐๐ข = ๐ ๐ ๐ข . Now 5.64 implies that 0 = (๐ โ ๐ 1 ๐ผ)โฏ(๐ โ ๐ ๐โ1 ๐ผ)๐ข = (๐ ๐ โ ๐ 1 )โฏ(๐ ๐ โ ๐ ๐โ1 )๐ข. Because ๐ 1 , โฆ , ๐ ๐ are distinct, the equation above implies that ๐ข = 0 . Hence range (๐ โ ๐ ๐ ๐ผ) โฉ null (๐ โ ๐ ๐ ๐ผ) = {0} . Thus range (๐โ๐ ๐ ๐ผ) + null (๐โ๐ ๐ ๐ผ) is a direct sum (by 1.46) whose dimension is dim ๐ (by 3.94 and 3.21). Hence range (๐ โ ๐ ๐ ๐ผ) โ null (๐ โ ๐ ๐ ๐ผ) = ๐ . Every vector in null (๐ โ ๐ ๐ ๐ผ) is an eigenvector of ๐ with eigenvalue ๐ ๐ . Earlier in this proof we saw that there is a basis of range (๐ โ ๐ ๐ ๐ผ) consisting of eigenvectors of ๐ . Adjoining to that basis a basis of null (๐ โ ๐ ๐ ๐ผ) gives a basis of ๐ consisting of eigenvectors of ๐ . The matrix of ๐ with respect to this basis is a diagonal matrix, as desired.
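Combined with the Euclidean algorithm for polynomials, 5.62 can decide diagonalizability without computing any eigenvalues, because a polynomial has distinct zeros exactly when it has no zero in common with its derivative (compare Exercise 15 in this section). Here is a minimal sketch, assuming SymPy, applied to the minimal polynomial from Example 5.60.

# Deciding diagonalizability of the operator in Example 5.60 without finding
# any zeros of its minimal polynomial (SymPy assumed).
from sympy import symbols, gcd, diff

z = symbols('z')
p = z**5 - 6*z + 3          # minimal polynomial from Example 5.60

print(gcd(p, diff(p, z)))   # 1, so p and p' have no common zero: the zeros of p are distinct
# By 5.62, the operator in Example 5.60 is therefore diagonalizable over C.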
170 Chapter 5 Eigenvalues and Eigenvectors No formula exists for the zeros of polynomials of degree 5 or greater. However, the previous result can be used to determine whether an operator on a complex vector space is diagonalizable without even finding approximations of the zeros of the minimal polynomialโsee Exercise 15. The next result will be a key tool when we prove a result about the simultaneous diagonalization of two operators; see 5.76. Note how the use of a characterization of diagonalizable operators in terms of the minimal polynomial (see 5.62) leads to a short proof of the next result. 5.65 restriction of diagonalizable operator to invariant subspace Suppose ๐ โ โ (๐) is diagonalizable and ๐ is a subspace of ๐ that is invariant under ๐ . Then ๐| ๐ is a diagonalizable operator on ๐ . Proof Because the operator ๐ is diagonalizable, the minimal polynomial of ๐ equals (๐ง โ ๐ 1 )โฏ(๐ง โ ๐ ๐ ) for some list of distinct numbers ๐ 1 , โฆ , ๐ ๐ โ ๐
(by 5.62). The minimal polynomial of ๐ is a polynomial multiple of the minimal polynomial of ๐| ๐ (by 5.31). Hence the minimal polynomial of ๐| ๐ has the form required by 5.62, which shows that ๐| ๐ is diagonalizable. Gershgorin Disk Theorem 5.66 definition: Gershgorin disks Suppose ๐ โ โ (๐) and ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ . Let ๐ด denote the matrix of ๐ with respect to this basis. A Gershgorin disk of ๐ with respect to the basis ๐ฃ 1 , โฆ , ๐ฃ ๐ is a set of the form {๐ง โ ๐
: |z − A_{j,j}| ≤ \sum_{k=1, k≠j}^{n} |A_{j,k}| },

where j ∈ {1, …, n}.

Because there are n choices for j in the definition above, T has n Gershgorin disks. If 𝐅 = 𝐂, then for each j ∈ {1, …, n}, the corresponding Gershgorin disk is a closed disk in 𝐂 centered at A_{j,j}, which is the jth entry on the diagonal of A. The radius of this closed disk is the sum of the absolute values of the entries in row j of A, excluding the diagonal entry. If 𝐅 = 𝐑, then the Gershgorin disks are closed intervals in 𝐑.

In the special case that the square matrix A above is a diagonal matrix, each Gershgorin disk consists of a single point that is a diagonal entry of A (and each eigenvalue of T is one of those points, as required by the next result). One consequence of our next result is that if the nondiagonal entries of A are small, then each eigenvalue of T is near a diagonal entry of A.
Section 5D Diagonalizable Operators 171 5.67 Gershgorin disk theorem Suppose ๐ โ โ (๐) and ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ . Then each eigenvalue of ๐ is contained in some Gershgorin disk of ๐ with respect to the basis ๐ฃ 1 , โฆ , ๐ฃ ๐ . Proof Suppose ๐ โ ๐
is an eigenvalue of T. Let w ∈ V be a corresponding eigenvector. There exist c_1, …, c_n ∈ 𝐅 such that

5.68    w = c_1 v_1 + ⋯ + c_n v_n.

Let A denote the matrix of T with respect to the basis v_1, …, v_n. Applying T to both sides of the equation above gives

5.69    λw = \sum_{k=1}^{n} c_k T v_k

5.70       = \sum_{k=1}^{n} c_k \sum_{j=1}^{n} A_{j,k} v_j = \sum_{j=1}^{n} \Big( \sum_{k=1}^{n} A_{j,k} c_k \Big) v_j.

Let j ∈ {1, …, n} be such that |c_j| = max{|c_1|, …, |c_n|}. Using 5.68, we see that the coefficient of v_j on the left side of 5.69 equals λc_j, which must equal the coefficient of v_j on the right side of 5.70. In other words,

λc_j = \sum_{k=1}^{n} A_{j,k} c_k.

Subtract A_{j,j} c_j from each side of the equation above and then divide both sides by c_j to get

|λ − A_{j,j}| = \Big| \sum_{k=1, k≠j}^{n} A_{j,k} \frac{c_k}{c_j} \Big| ≤ \sum_{k=1, k≠j}^{n} |A_{j,k}|.

Thus λ is in the jth Gershgorin disk with respect to the basis v_1, …, v_n.

The Gershgorin disk theorem is named for Semyon Aronovich Gershgorin, who published this result in 1931. Exercise 22 gives a nice application of the Gershgorin disk theorem. Exercise 23 states that the radius of each Gershgorin disk could be changed to the sum of the absolute values of corresponding column entries (instead of row entries), excluding the diagonal entry, and the theorem above would still hold.
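The theorem is easy to visualize numerically: compute the disks from the rows of a matrix and check that every eigenvalue lands in at least one of them. The sketch below is illustrative only and assumes NumPy; the matrix A is an arbitrary example with small off-diagonal entries.

# Numerical illustration of the Gershgorin disk theorem (NumPy assumed).
import numpy as np

A = np.array([[3.0, 0.2, 0.1],
              [0.0, 5.0, 0.4],
              [0.3, 0.1, 8.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)    # row sums of absolute values, minus the diagonal entry

for lam in np.linalg.eigvals(A):
    in_some_disk = np.any(np.abs(lam - centers) <= radii)
    print(lam, in_some_disk)                       # True for every eigenvalue, as 5.67 guarantees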
172 Chapter 5 Eigenvalues and Eigenvectors Exercises 5D 1 Suppose ๐ is a finite-dimensional complex vector space and ๐ โ โ (๐) . (a) Prove that if ๐ 4 = ๐ผ , then ๐ is diagonalizable. (b) Prove that if ๐ 4 = ๐ , then ๐ is diagonalizable. (c) Give an example of an operator ๐ โ โ (๐ 2 ) such that ๐ 4 = ๐ 2 and ๐ is not diagonalizable. 2 Suppose ๐ โ โ (๐) has a diagonal matrix ๐ด with respect to some basis of ๐ . Prove that if ๐ โ ๐
, then ๐ appears on the diagonal of ๐ด precisely dim ๐ธ(๐ , ๐) times. 3 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that if the operator ๐ is diagonalizable, then ๐ = null ๐ โ range ๐ . 4 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that the following are equivalent. (a) ๐ = null ๐ โ range ๐ . (b) ๐ = null ๐ + range ๐ . (c) null ๐ โฉ range ๐ = {0} . 5 Suppose ๐ is a finite-dimensional complex vector space and ๐ โ โ (๐) . Prove that ๐ is diagonalizable if and only if ๐ = null (๐ โ ๐๐ผ) โ range (๐ โ ๐๐ผ) for every ๐ โ ๐ . 6 Suppose ๐ โ โ (๐
5 ) and dim ๐ธ(8 , ๐) = 4 . Prove that ๐ โ 2๐ผ or ๐ โ 6๐ผ is invertible. 7 Suppose ๐ โ โ (๐) is invertible. Prove that ๐ธ(๐ , ๐) = ๐ธ( 1๐ , ๐ โ1 ) for every ๐ โ ๐
with ๐ โ 0 . 8 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Let ๐ 1 , โฆ , ๐ ๐ denote the distinct nonzero eigenvalues of ๐ . Prove that dim ๐ธ(๐ 1 , ๐) + โฏ + dim ๐ธ(๐ ๐ , ๐) โค dim range ๐. 9 Suppose ๐
, ๐ โ โ (๐
3 ) each have 2 , 6 , 7 as eigenvalues. Prove that there exists an invertible operator ๐ โ โ (๐
3 ) such that ๐
= ๐ โ1 ๐๐ . 10 Find ๐
, ๐ โ โ (๐
4 ) such that ๐
and ๐ each have 2 , 6 , 7 as eigenvalues, ๐
and ๐ have no other eigenvalues, and there does not exist an invertible operator ๐ โ โ (๐
4 ) such that ๐
= ๐ โ1 ๐๐ .
Section 5D Diagonalizable Operators 173 11 Find ๐ โ โ (๐ 3 ) such that 6 and 7 are eigenvalues of ๐ and such that ๐ does not have a diagonal matrix with respect to any basis of ๐ 3 . 12 Suppose ๐ โ โ (๐ 3 ) is such that 6 and 7 are eigenvalues of ๐ . Furthermore, suppose ๐ does not have a diagonal matrix with respect to any basis of ๐ 3 . Prove that there exists (๐ง 1 , ๐ง 2 , ๐ง 3 ) โ ๐ 3 such that ๐(๐ง 1 , ๐ง 2 , ๐ง 3 ) = (6 + 8๐ง 1 , 7 + 8๐ง 2 , 13 + 8๐ง 3 ). 13 Suppose ๐ด is a diagonal matrix with distinct entries on the diagonal and ๐ต is a matrix of the same size as ๐ด . Show that ๐ด๐ต = ๐ต๐ด if and only if ๐ต is a diagonal matrix. 14 (a) Give an example of a finite-dimensional complex vector space and an operator ๐ on that vector space such that ๐ 2 is diagonalizable but ๐ is not diagonalizable. (b) Suppose ๐
= ๐ , ๐ is a positive integer, and ๐ โ โ (๐) is invertible. Prove that ๐ is diagonalizable if and only if ๐ ๐ is diagonalizable. 15 Suppose ๐ is a finite-dimensional complex vector space, ๐ โ โ (๐) , and ๐ is the minimal polynomial of ๐ . Prove that the following are equivalent. (a) ๐ is diagonalizable. (b) There does not exist ๐ โ ๐ such that ๐ is a polynomial multiple of (๐ง โ ๐) 2 . (c) ๐ and its derivative ๐ โฒ have no zeros in common. (d) The greatest common divisor of ๐ and ๐ โฒ is the constant polynomial 1 . The greatest common divisor of ๐ and ๐ โฒ is the monic polynomial ๐ of largest degree such that ๐ and ๐ โฒ are both polynomial multiples of ๐ . The Euclidean algorithm for polynomials ( look it up ) can quickly determine the greatest common divisor of two polynomials, without requiring any information about the zeros of the polynomials. Thus the equivalence of ( a ) and ( d ) above shows that we can determine whether ๐ is diagonalizable without knowing anything about the zeros of ๐ . 16 Suppose that ๐ โ โ (๐) is diagonalizable. Let ๐ 1 , โฆ , ๐ ๐ denote the distinct eigenvalues of ๐ . Prove that a subspace ๐ of ๐ is invariant under ๐ if and only if there exist subspaces ๐ 1 , โฆ , ๐ ๐ of ๐ such that ๐ ๐ โ ๐ธ(๐ ๐ , ๐) for each ๐ and ๐ = ๐ 1 โ โฏ โ ๐ ๐ . 17 Suppose ๐ is finite-dimensional. Prove that โ (๐) has a basis consisting of diagonalizable operators. 18 Suppose that ๐ โ โ (๐) is diagonalizable and ๐ is a subspace of ๐ that is invariant under ๐ . Prove that the quotient operator ๐/๐ is a diagonalizable operator on ๐/๐ . The quotient operator ๐/๐ was defined in Exercise 38 in Section 5A.
174 Chapter 5 Eigenvalues and Eigenvectors 19 Prove or give a counterexample: If ๐ โ โ (๐) and there exists a subspace ๐ of ๐ that is invariant under ๐ such that ๐| ๐ and ๐/๐ are both diagonalizable, then ๐ is diagonalizable. See Exercise 13 in Section 5C for an analogous statement about upper- triangular matrices. 20 Suppose ๐ is finite-dimensional and ๐ โ โ (๐) . Prove that ๐ is diagonaliz- able if and only if the dual operator ๐ โฒ is diagonalizable. 21 The Fibonacci sequence ๐น 0 , ๐น 1 , ๐น 2 , โฆ is defined by ๐น 0 = 0 , ๐น 1 = 1 , and ๐น ๐ = ๐น ๐โ2 + ๐น ๐โ1 for ๐ โฅ 2. Define ๐ โ โ (๐ 2 ) by ๐(๐ฅ , ๐ฆ) = (๐ฆ , ๐ฅ + ๐ฆ) . (a) Show that ๐ ๐ (0 , 1) = (๐น ๐ , ๐น ๐ + 1 ) for each nonnegative integer ๐ . (b) Find the eigenvalues of ๐ . (c) Find a basis of ๐ 2 consisting of eigenvectors of ๐ . (d) Use the solution to (c) to compute ๐ ๐ (0 , 1) . Conclude that ๐น ๐ = 1 โ 5[(1 + โ 5 2 ) ๐ โ (1 โ โ 5 2 ) ๐ ] for each nonnegative integer ๐ . (e) Use (d) to conclude that if ๐ is a nonnegative integer, then the Fibonacci number ๐น ๐ is the integer that is closest to 1 โ 5( 1 + โ 5 2 ) ๐ . Each ๐น ๐ is a nonnegative integer, even though the right side of the formula in ( d ) does not look like an integer. The number 1 + โ 5 2 is called the golden ratio . 22 Suppose ๐ โ โ (๐) and ๐ด is an ๐ -by- ๐ matrix that is the matrix of ๐ with respect to some basis of ๐ . Prove that if |๐ด ๐ , ๐ | > ๐ โ ๐=1๐โ ๐ |๐ด ๐ , ๐ | for each ๐ โ {1 , โฆ , ๐} , then ๐ is invertible. This exercise states that if the diagonal entries of the matrix of ๐ are large compared to the nondiagonal entries, then ๐ is invertible. 23 Suppose the definition of the Gershgorin disks is changed so that the radius of the ๐ th disk is the sum of the absolute values of the entries in column (instead of row) ๐ of ๐ด , excluding the diagonal entry. Show that the Gershgorin disk theorem (5.67) still holds with this changed definition.
Section 5E Commuting Operators 175 5E Commuting Operators 5.71 definition: commute โข Two operators ๐ and ๐ on the same vector space commute if ๐๐ = ๐๐ . โข Two square matrices ๐ด and ๐ต of the same size commute if ๐ด๐ต = ๐ต๐ด . For example, if ๐ is an operator and ๐ , ๐ โ ๐ซ (๐
) , then ๐(๐) and ๐(๐) commute [ see 5.17(b) ] . As another example, if ๐ผ is the identity operator on ๐ , then ๐ผ commutes with every operator on ๐ . 5.72 example: partial differentiation operators commute Suppose ๐ is a nonnegative integer. Let ๐ซ ๐ (๐ 2 ) denote the real vector space of polynomials (with real coefficients) in two real variables and of degree at most ๐ , with the usual operations of addition and scalar multiplication of real-valued functions. Thus the elements of ๐ซ ๐ (๐ 2 ) are functions ๐ on ๐ 2 of the form 5.73 ๐ = โ ๐ + ๐โค๐ ๐ ๐ , ๐ ๐ฅ ๐ ๐ฆ ๐ , where the indices ๐ and ๐ take on all nonnegative integer values such that ๐ + ๐ โค ๐ , each ๐ ๐ , ๐ is in ๐ , and ๐ฅ ๐ ๐ฆ ๐ denotes the function on ๐ 2 defined by (๐ฅ , ๐ฆ) โฆ ๐ฅ ๐ ๐ฆ ๐ . Define operators ๐ท ๐ฅ , ๐ท ๐ฆ โ โ ( ๐ซ ๐ (๐ 2 )) by ๐ท ๐ฅ ๐ = ๐๐ ๐๐ฅ = โ ๐ + ๐โค๐ ๐๐ ๐ , ๐ ๐ฅ ๐โ1 ๐ฆ ๐ and ๐ท ๐ฆ ๐ = ๐๐ ๐๐ฆ = โ ๐ + ๐โค๐ ๐๐ ๐ , ๐ ๐ฅ ๐ ๐ฆ ๐โ1 , where ๐ is as in 5.73. The operators ๐ท ๐ฅ and ๐ท ๐ฆ are called partial differentiation operators because each of these operators differentiates with respect to one of the variables while pretending that the other variable is a constant. The operators ๐ท ๐ฅ and ๐ท ๐ฆ commute because if ๐ is as in 5.73, then (๐ท ๐ฅ ๐ท ๐ฆ )๐ = โ ๐ + ๐โค๐ ๐๐๐ ๐ , ๐ ๐ฅ ๐โ1 ๐ฆ ๐โ1 = (๐ท ๐ฆ ๐ท ๐ฅ )๐. The equation ๐ท ๐ฅ ๐ท ๐ฆ = ๐ท ๐ฆ ๐ท ๐ฅ on ๐ซ ๐ (๐ 2 ) illustrates a more general result that the order of partial differentiation does not matter for nice functions. All 214,358,881 ( which equals 11 8 ) pairs of the 2 -by- 2 matrices under con- sideration were checked by a computer to discover that only 674,609 of these pairs of matrices commute. Commuting matrices are unusual. For example, there are 214,358,881 pairs of 2 -by- 2 matrices all of whose entries are integers in the interval [โ5 , 5] . Only about 0.3% of these pairs of matrices commute.
176 Chapter 5 Eigenvalues and Eigenvectors The next result shows that two operators commute if and only if their matrices (with respect to the same basis) commute. 5.74 commuting operators correspond to commuting matrices Suppose ๐ , ๐ โ โ (๐) and ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ . Then ๐ and ๐ commute if and only if โณ (๐ , (๐ฃ 1 , โฆ , ๐ฃ ๐ )) and โณ (๐ , (๐ฃ 1 , โฆ , ๐ฃ ๐ )) commute. Proof We have ๐ and ๐ commute โบ ๐๐ = ๐๐ โบ โณ (๐๐) = โณ (๐๐) โบ โณ (๐) โณ (๐) = โณ (๐) โณ (๐) โบ โณ (๐) and โณ (๐) commute , as desired. The next result shows that if two operators commute, then every eigenspace for one operator is invariant under the other operator. This result, which we will use several times, is one of the main reasons why a pair of commuting operators behaves better than a pair of operators that does not commute. 5.75 eigenspace is invariant under commuting operator Suppose ๐ , ๐ โ โ (๐) commute and ๐ โ ๐
. Then ๐ธ(๐ , ๐) is invariant under ๐ . Proof Suppose ๐ฃ โ ๐ธ(๐ , ๐) . Then ๐(๐๐ฃ) = (๐๐)๐ฃ = (๐๐)๐ฃ = ๐(๐๐ฃ) = ๐(๐๐ฃ) = ๐๐๐ฃ. The equation above shows that ๐๐ฃ โ ๐ธ(๐ , ๐) . Thus ๐ธ(๐ , ๐) is invariant under ๐ . Suppose we have two operators, each of which is diagonalizable. If we want to do computations involving both operators (for example, involving their sum), then we want the two operators to be diagonalizable by the same basis, which according to the next result is possible when the two operators commute. 5.76 simultaneous diagonalizablity โบ commutativity Two diagonalizable operators on the same vector space have diagonal matrices with respect to the same basis if and only if the two operators commute. Proof First suppose ๐ , ๐ โ โ (๐) have diagonal matrices with respect to the same basis. The product of two diagonal matrices of the same size is the diagonal matrix obtained by multiplying the corresponding elements of the two diagonals. Thus any two diagonal matrices of the same size commute. Thus ๐ and ๐ commute, by 5.74.
Section 5E Commuting Operators 177 To prove the implication in the other direction, now suppose that ๐ , ๐ โ โ (๐) are diagonalizable operators that commute. Let ๐ 1 , โฆ , ๐ ๐ denote the distinct eigenvalues of ๐ . Because ๐ is diagonalizable, 5.55(c) shows that 5.77 ๐ = ๐ธ(๐ 1 , ๐) โ โฏ โ ๐ธ(๐ ๐ , ๐). For each ๐ = 1 , โฆ , ๐ , the subspace ๐ธ(๐ ๐ , ๐) is invariant under ๐ (by 5.75). Because ๐ is diagonalizable, 5.65 implies that ๐| ๐ธ(๐ ๐ , ๐) is diagonalizable for each ๐ . Hence for each ๐ = 1 , โฆ , ๐ , there is a basis of ๐ธ(๐ ๐ , ๐) consisting of eigenvectors of ๐ . Putting these bases together gives a basis of ๐ (because of 5.77), with each vector in this basis being an eigenvector of both ๐ and ๐ . Thus ๐ and ๐ both have diagonal matrices with respect to this basis, as desired. See Exercise 2 for an extension of the result above to more than two operators. Suppose ๐ is a finite-dimensional nonzero complex vector space. Then every operator on ๐ has an eigenvector (see 5.19). The next result shows that if two operators on ๐ commute, then there is a vector in ๐ that is an eigenvector for both operators (but the two commuting operators might not have a common eigenvalue). For an extension of the next result to more than two operators, see Exercise 9(a). 5.78 common eigenvector for commuting operators Every pair of commuting operators on a finite-dimensional nonzero complex vector space has a common eigenvector. Proof Suppose ๐ is a finite-dimensional nonzero complex vector space and ๐ , ๐ โ โ (๐) commute. Let ๐ be an eigenvalue of ๐ (5.19 tells us that ๐ does indeed have an eigenvalue). Thus ๐ธ(๐ , ๐) โ {0} . Also, ๐ธ(๐ , ๐) is invariant under ๐ (by 5.75). Thus ๐| ๐ธ(๐ , ๐) has an eigenvector (again using 5.19), which is an eigenvector for both ๐ and ๐ , completing the proof. 5.79 example: common eigenvector for partial differentiation operators Let ๐ซ ๐ (๐ 2 ) be as in Example 5.72 and let ๐ท ๐ฅ , ๐ท ๐ฆ โ โ ( ๐ซ ๐ (๐ 2 )) be the commuting partial differentiation operators in that example. As you can verify, 0 is the only eigenvalue of each of these operators. Also ๐ธ(0 , ๐ท ๐ฅ ) = { ๐ โ ๐=0 ๐ ๐ ๐ฆ ๐ โถ ๐ 0 , โฆ , ๐ ๐ โ ๐} , ๐ธ(0 , ๐ท ๐ฆ ) = { ๐ โ ๐=0 ๐ ๐ ๐ฅ ๐ โถ ๐ 0 , โฆ , ๐ ๐ โ ๐}. The intersection of these two eigenspaces is the set of common eigenvectors of the two operators. Because ๐ธ(0 , ๐ท ๐ฅ ) โฉ ๐ธ(0 , ๐ท ๐ฆ ) is the set of constant functions, we see that ๐ท ๐ฅ and ๐ท ๐ฆ indeed have a common eigenvector, as promised by 5.78.
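A finite-dimensional illustration of 5.78 may also help. The sketch below assumes SymPy and uses two commuting operators on 𝐂² of our own choosing; because the first matrix happens to have two distinct eigenvalues, each of its eigenspaces is one-dimensional and invariant under the second operator (by 5.75), so every eigenvector of the first operator is automatically a common eigenvector.

# A common eigenvector of two commuting operators (SymPy assumed).
from sympy import Matrix

S = Matrix([[2, 1],
            [0, 3]])
T = Matrix([[5, 1],
            [0, 6]])
print(S*T == T*S)                     # True: S and T commute

for lam, multiplicity, vectors in S.eigenvects():
    for v in vectors:
        # T*v is a scalar multiple of v, so v is an eigenvector of both S and T.
        print(lam, list(v), list(T*v))
# prints 2 [1, 0] [5, 0] and 3 [1, 1] [6, 6] (possibly in the other order)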
178 Chapter 5 Eigenvalues and Eigenvectors The next result extends 5.47 (the existence of a basis that gives an upper- triangular matrix) to two commuting operators. 5.80 commuting operators are simultaneously upper triangularizable Suppose ๐ is a finite-dimensional complex vector space and ๐ , ๐ are commuting operators on ๐ . Then there is a basis of ๐ with respect to which both ๐ and ๐ have upper-triangular matrices. Proof Let ๐ = dim ๐ . We will use induction on ๐ . The desired result holds if ๐ = 1 because all 1 -by- 1 matrices are upper triangular. Now suppose ๐ > 1 and the desired result holds for all complex vector spaces whose dimension is ๐ โ 1 . Let ๐ฃ 1 be any common eigenvector of ๐ and ๐ (using 5.78). Hence ๐๐ฃ 1 โ span (๐ฃ 1 ) and ๐๐ฃ 1 โ span (๐ฃ 1 ) . Let ๐ be a subspace of ๐ such that ๐ = span (๐ฃ 1 ) โ ๐ ; see 2.33 for the existence of ๐ . Define a linear map ๐ โถ ๐ โ ๐ by ๐(๐๐ฃ 1 + ๐ค) = ๐ค for each ๐ โ ๐ and each ๐ค โ ๐ . Define ฬ๐ , ฬ๐ โ โ (๐) by ฬ๐๐ค = ๐(๐๐ค) and ฬ๐๐ค = ๐(๐๐ค) for each ๐ค โ ๐ . To apply our induction hypothesis to ฬ๐ and ฬ๐ , we must first show that these two operators on ๐ commute. To do this, suppose ๐ค โ ๐ . Then there exists ๐ โ ๐ such that ( ฬ๐ ฬ๐)๐ค = ฬ๐(๐(๐๐ค)) = ฬ๐(๐๐ค โ ๐๐ฃ 1 ) = ๐(๐(๐๐ค โ ๐๐ฃ 1 )) = ๐((๐๐)๐ค) , where the last equality holds because ๐ฃ 1 is an eigenvector of ๐ and ๐๐ฃ 1 = 0 . Similarly, ( ฬ๐ ฬ๐)๐ค = ๐((๐๐)๐ค). Because the operators ๐ and ๐ commute, the last two displayed equations show that ( ฬ๐ ฬ๐)๐ค = ( ฬ๐ ฬ๐)๐ค . Hence ฬ๐ and ฬ๐ commute. Thus we can use our induction hypothesis to state that there exists a basis ๐ฃ 2 , โฆ , ๐ฃ ๐ of ๐ such that ฬ๐ and ฬ๐ both have upper-triangular matrices with respect to this basis. The list ๐ฃ 1 , โฆ , ๐ฃ ๐ is a basis of ๐ . If ๐ โ {2 , โฆ , ๐} , then there exist ๐ ๐ , ๐ ๐ โ ๐ such that ๐๐ฃ ๐ = ๐ ๐ ๐ฃ 1 + ฬ๐๐ฃ ๐ and ๐๐ฃ ๐ = ๐ ๐ ๐ฃ 1 + ฬ๐๐ฃ ๐ . Because ฬ๐ and ฬ๐ have upper-triangular matrices with respect to ๐ฃ 2 , โฆ , ๐ฃ ๐ , we know that ฬ๐๐ฃ ๐ โ span (๐ฃ 2 , โฆ , ๐ฃ ๐ ) and ฬ๐๐ฃ ๐ โ span (๐ฃ 2 , โฆ , ๐ฃ ๐ ) . Hence the equations above imply that ๐๐ฃ ๐ โ span (๐ฃ 1 , โฆ , ๐ฃ ๐ ) and ๐๐ฃ ๐ โ span (๐ฃ 1 , โฆ , ๐ฃ ๐ ). Thus ๐ and ๐ have upper-triangular matrices with respect to ๐ฃ 1 , โฆ , ๐ฃ ๐ , as desired. Exercise 9(b) extends the result above to more than two operators.
Section 5E Commuting Operators 179 In general, it is not possible to determine the eigenvalues of the sum or product of two operators from the eigenvalues of the two operators. However, the next result shows that something nice happens when the two operators commute. 5.81 eigenvalues of sum and product of commuting operators Suppose ๐ is a finite-dimensional complex vector space and ๐ , ๐ are commut- ing operators on ๐ . Then โข every eigenvalue of ๐ + ๐ is an eigenvalue of ๐ plus an eigenvalue of ๐ , โข every eigenvalue of ๐๐ is an eigenvalue of ๐ times an eigenvalue of ๐ . Proof There is a basis of ๐ with respect to which both ๐ and ๐ have upper- triangular matrices (by 5.80). With respect to that basis, โณ (๐ + ๐) = โณ (๐) + โณ (๐) and โณ (๐๐) = โณ (๐) โณ (๐) , as stated in 3.35 and 3.43. The definition of matrix addition shows that each entry on the diagonal of โณ (๐ + ๐) equals the sum of the corresponding entries on the diagonals of โณ (๐) and โณ (๐) . Similarly, because โณ (๐) and โณ (๐) are upper-triangular matrices, the definition of matrix multiplication shows that each entry on the diagonal of โณ (๐๐) equals the product of the corresponding entries on the diagonals of โณ (๐) and โณ (๐) . Furthermore, โณ (๐ + ๐) and โณ (๐๐) are upper-triangular matrices (see Exercise 2 in Section 5C). Every entry on the diagonal of โณ (๐) is an eigenvalue of ๐ , and every entry on the diagonal of โณ (๐) is an eigenvalue of ๐ (by 5.41). Every eigenvalue of ๐ + ๐ is on the diagonal of โณ (๐ + ๐) , and every eigenvalue of ๐๐ is on the diagonal of โณ (๐๐) (these assertions follow from 5.41). Putting all this together, we conclude that every eigenvalue of ๐ + ๐ is an eigenvalue of ๐ plus an eigenvalue of ๐ , and every eigenvalue of ๐๐ is an eigenvalue of ๐ times an eigenvalue of ๐ . Exercises 5E 1 Give an example of two commuting operators ๐ , ๐ on ๐
4 such that there is a subspace of ๐
4 that is invariant under ๐ but not under ๐ and there is a subspace of ๐
4 that is invariant under ๐ but not under ๐ . 2 Suppose โฐ is a subset of โ (๐) and every element of โฐ is diagonalizable. Prove that there exists a basis of ๐ with respect to which every element of โฐ has a diagonal matrix if and only if every pair of elements of โฐ commutes. This exercise extends 5.76, which considers the case in which โฐ contains only two elements. For this exercise, โฐ may contain any number of elements, and โฐ may even be an infinite set.
180 Chapter 5 Eigenvalues and Eigenvectors 3 Suppose ๐ , ๐ โ โ (๐) are such that ๐๐ = ๐๐ . Suppose ๐ โ ๐ซ (๐
) . (a) Prove that null ๐(๐) is invariant under ๐ . (b) Prove that range ๐(๐) is invariant under ๐ . See 5.18 for the special case ๐ = ๐ . 4 Prove or give a counterexample: If ๐ด is a diagonal matrix and ๐ต is an upper-triangular matrix of the same size as ๐ด , then ๐ด and ๐ต commute. 5 Prove that a pair of operators on a finite-dimensional vector space commute if and only if their dual operators commute. See 3.118 for the definition of the dual of an operator. 6 Suppose ๐ is a finite-dimensional complex vector space and ๐ , ๐ โ โ (๐) commute. Prove that there exist ๐ผ , ๐ โ ๐ such that range (๐ โ ๐ผ๐ผ) + range (๐ โ ๐๐ผ) โ ๐. 7 Suppose ๐ is a complex vector space, ๐ โ โ (๐) is diagonalizable, and ๐ โ โ (๐) commutes with ๐ . Prove that there is a basis of ๐ such that ๐ has a diagonal matrix with respect to this basis and ๐ has an upper-triangular matrix with respect to this basis. 8 Suppose ๐ = 3 in Example 5.72 and ๐ท ๐ฅ , ๐ท ๐ฆ are the commuting partial differentiation operators on ๐ซ 3 (๐ 2 ) from that example. Find a basis of ๐ซ 3 (๐ 2 ) with respect to which ๐ท ๐ฅ and ๐ท ๐ฆ each have an upper-triangular matrix. 9 Suppose ๐ is a finite-dimensional nonzero complex vector space. Suppose that โฐ โ โ (๐) is such that ๐ and ๐ commute for all ๐ , ๐ โ โฐ . (a) Prove that there is a vector in ๐ that is an eigenvector for every element of โฐ . (b) Prove that there is a basis of ๐ with respect to which every element of โฐ has an upper-triangular matrix. This exercise extends 5.78 and 5.80, which consider the case in which โฐ contains only two elements. For this exercise, โฐ may contain any number of elements, and โฐ may even be an infinite set. 10 Give an example of two commuting operators ๐ , ๐ on a finite-dimensional real vector space such that ๐ + ๐ has a eigenvalue that does not equal an eigenvalue of ๐ plus an eigenvalue of ๐ and ๐๐ has a eigenvalue that does not equal an eigenvalue of ๐ times an eigenvalue of ๐ . This exercise shows that 5.81 does not hold on real vector spaces.