All right, let's continue with Eigenvalues.
We are in Chapter 5, which is essentially about finding nice forms for linear operators. This is closely connected to something you already studied in Math 54, a first course in linear algebra, under the name of diagonalization.
The idea is that in many cases you can find a basis of your vector space with respect to which the matrix of the operator becomes diagonal.
This is a very favorable situation in which you can learn a lot about the structure of the operator and so on.
Now in general it doesn't quite work and we will see various reasons for that.
But in any case, that leads us to the question of what good forms of operators we can construct.
Let me recall the setup, which we already started to discuss just before the midterm. The setup is the following.
We take a finite-dimensional vector space V over a field F, and we consider a linear map T from V to itself.
You see, this is a special case of the general situation of a linear map.
Because in general, a linear map maps from a vector space V to another vector space, which we usually call W.
But this is the special situation where the map goes back to V itself. For instance, you can iterate this operator; you can apply it several times. If instead you have a map acting from V to W, with V not equal to W, then the composition of T with itself doesn't make sense.
The equation for an eigenvector, which we'll talk about in a moment, also doesn't make sense: if T goes from V to W, then Tv is going to be in W, while λv is in V. We cannot possibly say that they're equal, because they live in two different vector spaces.
This is the reason why in this chapter we will restrict ourselves to linear maps acting from the vector space to itself. We have a special name for it.
We call it a linear operator or just an operator because you think about it as operating on the vector space.
We'll look at some examples today which illustrate this term "operator."
Okay, so then we introduced the notion of an eigenvalue.
V is a vector space over a field F. An element λ of this field is called an eigenvalue of T, and let me emphasize, of T, if there exists a vector v in V satisfying two conditions, and this is important.
The first condition is that v is nonzero, that is, v is not the zero element of the vector space.
The second condition is that Tv is equal to λv. This makes sense because the operator acts from V to V.
Both sides represent vectors in V, and therefore it makes perfect sense to say that they are equal, or to demand that they be equal, or to ask whether there exists such a vector so that the left-hand side equals the right-hand side.
The first condition is often overlooked, but it is essential, because if your vector is zero, it satisfies this equation for every λ. The zero vector satisfies this equation for every λ: on the left you're going to have zero, since every linear operator maps the zero vector to the zero vector, and on the right you'll have λ times zero, which is also zero. So the equation is satisfied trivially by the zero vector for every λ. That's why we have to exclude it.
It's not very interesting to consider this equation if v is equal to zero.
When we speak about eigenvalues, we only speak about nonzero vectors satisfying this equation. In this case, v is called an eigenvector of T. So that is the basic setup.
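Just to make the definition concrete, here is a small numerical sketch (my own illustration; the matrix and vector are arbitrary choices, not from the lecture): we check both conditions for a particular operator on R², and also see why the zero vector has to be excluded.

```python
import numpy as np

# A small operator T on R^2, written as a matrix in the standard basis.
# (This particular matrix is just an illustrative choice.)
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v = np.array([1.0, 0.0])   # a nonzero vector (condition 1)
lam = 2.0

# Condition 2: Tv = lambda * v
print(T @ v)                         # [2. 0.]
print(lam * v)                       # [2. 0.]
print(np.allclose(T @ v, lam * v))   # True: 2 is an eigenvalue with eigenvector v

# The zero vector satisfies Tv = lambda * v for every lambda, which is why it is excluded.
zero = np.zeros(2)
print(np.allclose(T @ zero, 7.0 * zero))  # True for any lambda, e.g. 7
```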
Now the question is: under what conditions do linear operators have eigenvectors and eigenvalues?
First, I will give you a positive result about the existence of eigenvalues and eigenvectors.
And then I will give a counterexample in a different context.
The positive result is the following theorem, which applies when F is the field of complex numbers.
Remember, just before the midterm we talked about how special the field of complex numbers is: every non-constant polynomial over the field of complex numbers has a zero. That is essential in the proof of this theorem.
So here is Theorem 1. If F is the field of complex numbers, and I want to emphasize that, then every linear operator T on a finite-dimensional vector space V over C has an eigenvalue, and hence an eigenvector as well. So this is a good notion in the case of complex numbers.
How do we prove this? It is a very nice proof. First of all, I just want to remind you that if T goes from V to W with W different from V, the composition of T with itself doesn't make sense, but if W equals V, then the composition does make sense: you go from V to V and then again from V to V, and you can actually apply T as many times as you want. You can apply it m times, where m is any positive integer, and it makes sense.
We have already introduced a notation for this in Section 5A. We call T composed with T simply T squared, T^2.
And we call the m-fold composition T^m. In other words, if you encounter an expression like T^m, it means that you apply the operator several times, as many times as the power indicates.
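In matrix terms, composing an operator with itself amounts to multiplying its matrix by itself. Here is a tiny sketch (the matrix is an arbitrary illustrative choice) confirming that T^m, computed as a matrix power, agrees with applying T to a vector m times in a row.

```python
import numpy as np

T = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # an arbitrary operator on R^2, for illustration

# T^2 is the composition T∘T, i.e. the matrix product T @ T.
print(np.allclose(T @ T, np.linalg.matrix_power(T, 2)))  # True

# More generally, T^m is the m-fold composition.
m = 5
Tm = np.linalg.matrix_power(T, m)
v = np.array([1.0, 2.0])
w = v.copy()
for _ in range(m):      # apply T to v, m times in a row
    w = T @ w
print(np.allclose(Tm @ v, w))  # True
```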
Here's what we're going to do. Just pick any nonzero vector v, okay, nonzero, and then start applying T to it.
I can think of v itself as T^0 applied to v. Here T^0 is the identity; yes, I should mention that sometimes we write T^0, meaning just the identity operator.
The identity makes sense: it means you don't apply anything, so v stays v for every v, and that's the identity operator. Okay, so we start with v, then we take Tv, T^2 v, and so on up to T^n v. Now, how many vectors did we get?
Here we let n be the dimension of V, which makes sense because we are assuming V is finite-dimensional; n is a non-negative integer.
But now we've got n + 1 vectors here. Therefore, we know that this list is linearly dependent, right? Any linearly independent subset has at most as many elements as the dimension, so a list of n + 1 vectors in V must be linearly dependent.
How can we exploit this? It follows that there is a linear dependence relation. But what does linearly dependent mean? It means that we can write a_0 v + a_1 Tv + ... + a_{n-1} T^{n-1} v + a_n T^n v = 0, with the coefficients not all zero. That's the equation of linear dependence among these vectors.
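This pigeonhole step can be checked numerically. Here is a small sketch (a random matrix, purely for illustration): the n + 1 vectors v, Tv, ..., T^n v span a space of dimension at most n, so they must be linearly dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n))   # an arbitrary operator on an n-dimensional space
v = rng.standard_normal(n)        # any nonzero vector

# Build the n+1 vectors v, Tv, T^2 v, ..., T^n v as columns of a matrix.
cols = [v]
for _ in range(n):
    cols.append(T @ cols[-1])
K = np.column_stack(cols)         # n x (n+1) matrix

# n+1 vectors in an n-dimensional space are always linearly dependent:
print(K.shape)                        # (4, 5)
print(np.linalg.matrix_rank(K) <= n)  # True: rank at most n, but n+1 columns
```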
But in fact, we can do more. We proved previously that there actually exists an m between 1 and n such that the vector T^m v can be expressed as a linear combination of the preceding ones.
A general property of linear dependence is that there is some linear combination which is equal to zero.
But we can actually make it more precise if you have an ordering on your list, as we do now, by increasing powers of T. In that situation there is a smallest value m for which the m-th element can be expressed as a combination of the preceding ones: the (m-1)-st, the (m-2)-nd, and so on. That is, there exists an m such that T^m v can be expressed as a linear combination like so.
Let's write T^m v = -a_0 v - a_1 Tv - ... - a_{m-1} T^{m-1} v; I put minus signs so that when we move everything to one side, the signs agree with the formula below. Instead of writing a dependence among all n + 1 vectors, we are saying that one of them can be expressed in terms of the preceding ones, while T^k v cannot be so expressed if k is less than m. In other words, m is the smallest index such that T^m v is a linear combination of the preceding vectors; for any smaller index you cannot do this. That is a direct consequence of the fact that the list is linearly dependent. But if that's the case, I can rewrite it like this: a_0 v + a_1 Tv + ... + a_{m-1} T^{m-1} v + T^m v = 0. At first glance it looks like I have not gained anything, but I have, because here I know for sure that the top term T^m v has coefficient one. There is a subtle difference between saying that one of the vectors is a linear combination of the others and saying that they are linearly dependent, but early in the course we proved that the two are connected in exactly this way: if you have an ordered linearly dependent list, then there is a smallest m such that the m-th vector can be expressed in terms of the preceding ones. That's what I am writing here, and it will become important that this leading coefficient is one. The point is that this expression looks like a polynomial, a polynomial in one variable.
First of all, if a polynomial has this form, where the leading coefficient is one, then we call it a monic polynomial. But even in general, if you have a polynomial p(z), you can substitute our operator T in place of the variable; the result is p(T). Now, in principle the constant term a_0 is a number, and we cannot just leave it as a number, because the whole expression has to be an operator. The convention that saves us is that the constant term corresponds to the zeroth power of T, which is the identity operator. So a_0 gets multiplied by the identity operator, then we add a_1 T, and so on, up to a_{m-1} T^{m-1} plus T^m. Now, what does this represent? Each term is a bona fide operator on V: we know that T is a well-defined operator, therefore T^m is a well-defined operator, it is just the composition of T with itself m times. But if T^m is a well-defined operator, then any scalar multiple of it is also a well-defined operator. Finally, once we have all of these, their sum is also a well-defined operator on V. What am I using here? First, that T acts from V to V, so T^m makes sense as the iterated composition of the operator with itself. Second, that the space L(V), which I recall denotes the set of all operators acting on V, is actually a vector space; because it's a vector space, any linear combination of well-defined operators is also a well-defined operator. Now suddenly I have a nicer way to rewrite the equation. Namely, the dependence equation, let's call it (*), can be written as p(T) applied to v equals 0; call this (**). Here v is the vector we started with. Remember, it feels like it was a long time ago, but a few minutes ago we started out by picking a nonzero vector v. Then we saw that there is a linear dependence, which we expressed as equation (*) for some m that is at most n. Now we have a neat way to express this property of v: there is a polynomial p(z), whose coefficients we found from the linear dependence relation, such that if you substitute T into it you get a new operator, and if you apply that operator to v, you get 0, meaning the zero vector of V. That is interesting: suddenly every nonzero vector in V gives rise to a polynomial. Moreover, from this discussion, from the fact that m is as small as possible, there is no polynomial of degree less than m that does the same job. This one has degree m and has leading coefficient one, so it is a monic polynomial, and there is no monic polynomial of degree less than m such that, when you substitute T and apply the result to v, you get zero. So p is really special: it's not as if there are other choices; it is determined by the property that this equation is satisfied and that it has the smallest possible degree. Because if there were such a polynomial of smaller degree, then this equation would hold for some number smaller than m, but m was picked precisely to be the smallest possible.
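Here is a small sketch of what substituting an operator into a polynomial means in matrix terms (the matrix and coefficients are arbitrary illustrative choices): the constant term gets multiplied by the identity, and each power of the variable becomes a power of the matrix.

```python
import numpy as np

def poly_at_operator(coeffs, T):
    """Evaluate p(T) = coeffs[0]*I + coeffs[1]*T + ... + coeffs[m]*T^m
    for a square matrix T (coefficients listed from the constant term up)."""
    n = T.shape[0]
    result = np.zeros_like(T, dtype=float)
    power = np.eye(n)              # T^0 is the identity operator
    for a in coeffs:
        result += a * power
        power = power @ T          # move on to the next power of T
    return result

T = np.array([[0.0, 1.0],
              [2.0, 3.0]])         # an arbitrary operator, for illustration
p = [1.0, -3.0, 1.0]               # p(z) = 1 - 3z + z^2, a monic polynomial

print(poly_at_operator(p, T))      # a well-defined operator (2x2 matrix) on V
```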
So how do I derive from this that T has an eigenvector? Here is a very cute argument. Remember, we stated, and this is where it is crucial that we are over the field of complex numbers, that any polynomial over C of degree greater than or equal to one has a zero in C. That is to say, there exists some λ in C such that p(λ) = 0. And we showed that this is equivalent to p(z) being divisible by z - λ. So over the complex numbers, every polynomial has a linear factor like this. In fact, we know more, because we can use this in an inductive argument to show that p can be factored entirely into linear factors (z - λ_1)(z - λ_2)..., but for now one factor will suffice. Look, now we can rewrite equation (**): it is equivalent to (T - λI) q(T) v = 0. Because remember, if you have the constant λ in the polynomial and you substitute T, you have to replace the constant by λ times the identity operator. But now I can interpret this as follows: the operator is a product of two operators, and product in this context means composition. If I apply a composition of two operators to v, it means I first apply the one on the right. I get some new vector, which I will call v', so v' = q(T)v. Then v' satisfies the equation (T - λI)v' = 0. If I open the brackets, I get Tv' - λv' = 0, and taking the second term to the other side, Tv' = λv', which is exactly the equation we encountered in the definition of eigenvalues and eigenvectors. You see where I'm going with this? The vector v satisfies p(T)v = 0, where p is a monic polynomial of degree m. Because we are over the field of complex numbers, we can split this polynomial as a product of a linear factor and another polynomial q of degree m - 1; degrees add up when you multiply, p has degree m, the linear factor has degree one, therefore q has degree m - 1. The linear factor gives us the equation Tv' = λv', which is exactly the second property of an eigenvalue, except I'm using the notation v' because I already used v. But the construction also implies that v' is nonzero. Why? Because I know that there is no polynomial of degree smaller than m which annihilates v, and q has degree m - 1, so q(T)v cannot be zero; if it were, it would contradict the fact that m is the smallest possible degree of such a polynomial. You see, both conditions are satisfied: this is condition one, v' is nonzero, and this is condition two, Tv' = λv'. That means λ is indeed an eigenvalue, and the theorem is proved.
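The entire construction in this proof can be traced numerically. The following sketch (a random complex matrix and NumPy's root finder; the names here are my own, not from the lecture) finds the smallest m, builds the monic polynomial annihilating v, picks a root λ, and checks that v' = q(T)v is a nonzero eigenvector.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # operator on C^4
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # a nonzero vector

# Step 1: find the smallest m with T^m v in the span of v, Tv, ..., T^{m-1} v.
cols = [v]
while True:
    w = T @ cols[-1]
    A = np.column_stack(cols)
    a, *_ = np.linalg.lstsq(A, w, rcond=None)
    if np.allclose(A @ a, w):        # T^m v is a combination of the preceding vectors
        break
    cols.append(w)
m = len(cols)

# Step 2: monic annihilating polynomial p(z) = z^m - a_{m-1} z^{m-1} - ... - a_0.
p = np.concatenate(([1.0], -a[::-1]))          # coefficients, highest degree first

# Step 3: over C, p has a root lambda; factor p(z) = (z - lambda) q(z).
lam = np.roots(p)[0]
q, _ = np.polydiv(p, np.array([1.0, -lam]))

# Step 4: v' = q(T) v is a nonzero eigenvector with eigenvalue lambda.
vprime = sum(c * np.linalg.matrix_power(T, m - 1 - k) @ v for k, c in enumerate(q))
print(np.allclose(T @ vprime, lam * vprime))   # True: T v' = lambda v'
print(np.linalg.norm(vprime) > 1e-9)           # True: v' is nonzero
```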
Okay, so that was kind of a warm-up. Now I want to explain why the two hypotheses in this theorem are important. The first is that the field is the field of complex numbers; the second is that V is a finite-dimensional vector space over the complex numbers. I want to show you that these two conditions are essential: if we relax either one of them, the statement of the theorem is no longer true. Let me first consider the case when the vector space is infinite-dimensional. Theorem 1 is not true if T acts on an infinite-dimensional vector space. The counterexample invokes an operator we have looked at before. I take as V the set of infinite sequences (a_1, a_2, ...) of complex numbers, where each a_i is a complex number and i runs over the natural numbers. This is an infinite-dimensional vector space; it doesn't have a finite basis. To get a finite basis you would have to truncate it, letting i go from 1 to n, and then it would be our familiar vector space F^n. This is F^∞, if you will; it is a limit of the n-dimensional spaces as n goes to infinity. Now I claim that I can construct an operator on it which does not have eigenvalues. Namely, it is the shift operator: it sends a sequence (a_1, a_2, ...) to the sequence (0, a_1, a_2, ...), whose first component is zero and in which everything gets shifted. I claim that it has no eigenvalues. Why? Suppose that it does: suppose there exist λ and a vector v in this space such that the two conditions are satisfied. The second condition means that v, which is some sequence (a_1, a_2, a_3, ...), a generic element of this vector space, satisfies Tv = λv. If I apply T, the left-hand side is (0, a_1, a_2, ...), and the right-hand side is (λa_1, λa_2, λa_3, ...). Now I start comparing: two vectors are equal if and only if all of their components are equal, which means this translates into a system of infinitely many equations. The first of them is 0 = λa_1, because the first component of the left side is zero and the first component of the right side is λ times a_1. The second equation is a_1 = λa_2, and so on; in general, a_i = λa_{i+1}. Now, there are two possibilities. If λ is nonzero, I can invert it, and then I see that a_1 = 0, hence a_2 = 0, and so on: all of them are zero, so v is actually the zero vector. The first condition, remember, I keep insisting that the first condition is essential; and here it turns out that v has to be the zero vector, therefore it is not really an eigenvector. You see why: if λ is not zero, then the first equation forces a_1 to be zero, but if a_1 is zero then a_2 is zero, and so on; all of them are zero and therefore v is the zero vector. That doesn't work. But you say, okay, what if λ is equal to zero? Then I cannot invert it. However, if λ is equal to zero, then the whole right-hand side is zero, and therefore the left-hand side is equal to zero. The left-hand side is (0, a_1, a_2, ...), which contains all of the a_i. It being the zero vector, which means (0, 0, 0, ...), forces again all the a_i to be zero, and again v equals zero. This shows that this operator on this infinite-dimensional vector space has no eigenvalues, and therefore no eigenvectors. Is that clear? Please ask me if it's not clear. And by the way, notice where the proof of Theorem 1 used the finite-dimensionality of V: we said that the list v, Tv, ..., T^n v is linearly dependent. How did we know that? Because we knew that the dimension is finite; we denoted it by n, and a list of n + 1 vectors must then be linearly dependent. If our space is infinite-dimensional, we cannot make this statement. In fact, you may wonder what happens to this list in the counterexample.
For example, let me take the vector v = (1, 0, 0, ...). Then Tv = (0, 1, 0, ...): the one migrates to the second position. T^2 v has the one in the third position, and so on. All of these vectors are linearly independent, and therefore the argument collapses; it doesn't work. You see, this is where finite-dimensionality is essential: in this example, all of v, Tv, T^2 v, ... are linearly independent. Okay, that's one counterexample. Now I want to explain what happens if F is not the field of complex numbers. In our course we mostly consider the field of real numbers and the field of complex numbers, so in this context "not complex numbers" means real numbers. What happens if you have a finite-dimensional vector space over the real numbers? Now I'm relaxing the other condition, the condition that the field is the complex numbers. Does every operator on a real vector space always have an eigenvalue, the way it does over the complex numbers? Here is a simple example which shows that this is not the case; so the first statement was that Theorem 1 fails in infinite dimensions, and the second statement is that Theorem 1 fails if F is the real numbers. Here is the example. Take V = R^2, which we will think of as this blackboard, in which I have chosen two coordinate axes, or equivalently two basis vectors. I consider T to be rotation by some angle which is greater than zero and less than 2π, but not equal to π. Then I claim that there is no vector here which goes to a multiple of itself. The multiples of a generic vector on the plane all lie on the line through it, going one way or the other. But if I rotate, the only way I can get something proportional to the original vector is if I don't rotate at all, so my angle is zero, or if I rotate by 180 degrees; if it's 180 degrees, I get -v, which is proportional with coefficient λ = -1. But if I take a generic angle, anything other than 0 or 180 degrees, then for sure the rotated vector is not proportional to the original. So what's going on here? The conclusion is: no eigenvectors, no eigenvalues. On the other hand, it turns out that the part of the proof in which we obtained the polynomial p(z) does work in this case: for a finite-dimensional vector space over the real numbers, we can find a polynomial p(z) of degree less than or equal to n, now with real coefficients, such that p(T)v = 0; this is the general finite-dimensional statement. But the problem is that p(z) may not have zeros in the real numbers, because we know very well that there are polynomials with real coefficients which do not have real zeros. They always have zeros in the complex numbers, because you can treat a real polynomial as a complex polynomial: after all, its coefficients are real numbers, and every real number is for sure a complex number as well. If you treat the polynomial as a complex polynomial, it has complex zeros.
But when we defined the notion of eigenvalue, we demanded that λ be an element of the field over which the vector space is defined; in this setting it would have to be a real number. So where does the argument break down? It breaks down because we cannot write p(z) as a product (z - λ) q(z) with λ in R, which is exactly what we would need in order to claim that there is a real eigenvalue. The simplest example of this is the polynomial z^2 + 1. Take p(z) = z^2 + 1. We can factor it over the complex numbers: its two zeros are i and -i, where i stands for a complex number whose square is negative one; it is not a real number. Yet the polynomial has real coefficients, so it is a bona fide real polynomial; but its zeros are not in R, they are complex numbers. Now let me demonstrate how this polynomial appears in this context. Consider rotation by 90 degrees; by the way, when I say rotation, I mean counterclockwise, so rotation by 90 degrees, which is π/2. What is the matrix which represents this rotation with respect to the standard basis e_1, e_2? T of e_1: you rotate e_1 by 90 degrees and you get e_2. And T of e_2 is -e_1, because you keep going around. So what is the matrix? Its first column should be Te_1 represented in our basis, which is (0, 1); its second column is Te_2 = -e_1, which is (-1, 0). That's the matrix of this operator relative to this basis, and with respect to this basis, whatever statement we make about the operator is true for the matrix and vice versa. The equations we have been writing, substituting operators into polynomials, we can equally write by substituting matrices into polynomials. Let's call this matrix A. I claim that this matrix satisfies the equation p(A) = 0; that is, p(A) is the zero matrix. Why? p(A) is A squared plus the identity; there is no minus sign, it's just what is written on the blackboard, A^2 + I. What is A^2? It means multiply this matrix by itself, and you get the matrix with -1 and -1 on the diagonal, minus the identity. The identity has 1 and 1 on the diagonal, so of course if you take the sum, you get the zero matrix. Indeed, if you substitute the matrix corresponding to rotation by 90 degrees counterclockwise into this polynomial, you get zero. How interesting. In fact, it means that every vector satisfies this property: p(A) applied to v is zero for all v, because the polynomial evaluated at A is already identically zero as a matrix, and therefore as an operator. But the problem is that this polynomial, viewed as a real polynomial, which is how we are trying to view it, does not have real zeros; it has zeros only over the complex numbers. Therefore we cannot claim that there is an eigenvalue. But this suggests that, if we don't want to stay just in the realm of complex numbers but want to consider more general fields, we may be interested in finding out to what extent this argument actually works. Because, you see, here we got something much stronger than what we obtained in the course of the proof of Theorem 1.
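Here is a short check of this computation (NumPy, just to confirm the arithmetic): for the rotation-by-90° matrix, A² + I is the zero matrix, and the eigenvalues computed over C are ±i, so there is no real eigenvalue.

```python
import numpy as np

# Rotation by 90 degrees counterclockwise, in the standard basis of R^2.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# p(z) = z^2 + 1, evaluated at the matrix A: the constant term becomes the identity.
P = A @ A + np.eye(2)
print(P)                      # the zero matrix, so p(A) v = 0 for every vector v

# The zeros of p are +i and -i; they are the (complex) eigenvalues of A,
# so A has no real eigenvalue, matching the geometric argument.
print(np.linalg.eigvals(A))   # [0.+1.j  0.-1.j]
```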
Let me remind you that in the proof of Theorem 1, we picked a specific vector v, and then we proved that this specific v satisfies an equation saying that some polynomial applied to v gives zero. But what we have discovered in this specific example is a much stronger statement: there is a polynomial such that if you substitute T into it, it annihilates every vector. In other words, if you substitute the operator into it, you actually get the zero operator. That is a much stronger statement than saying that there is one vector which it annihilates. You see what I mean? Yes, in this discussion p(z) is z^2 + 1; I am being a little cavalier, because I am using the notation p(z) within this specific example, it is not the generic one. Here I start with a specific polynomial: I give an example of a polynomial with real coefficients which does not have real roots, or real zeros, and then I explain that that polynomial actually arises naturally from rotation by 90 degrees. In fact, if you substitute the matrix of rotation by 90 degrees into this polynomial, you get the zero matrix, which is a strengthening of the equation we encountered in the proof of Theorem 1. What I am going to show now is that this is true always: for any finite-dimensional vector space over any field, it could be the real numbers, it could be any field, there exists a unique monic polynomial of smallest degree such that p(T) is actually the zero operator. This is a much stronger statement than Theorem 1 in two ways. First of all, it applies to any field, not just the complex numbers. Second of all, we get an equation involving every vector, not just one vector. In other words, that is a truly interesting statement. The existence of an eigenvalue, a posteriori, appears to depend more on the specifics of the field. There are fields, called algebraically closed, in which every polynomial with coefficients in that field has a zero, or root, in that field, such as the field of complex numbers. But for some fields, like the field of real numbers, that's not the case; they are not algebraically closed. We shouldn't fault them for that; instead, let's find out what holds even for non-algebraically-closed fields. What holds is that there exists a polynomial p such that if you substitute T into it, you get zero. Then, if this polynomial has a zero in your field, you're in luck: you get eigenvalues. But the existence of p is already a very strong statement. Okay, that's our next project; ask me if something is unclear so far. This p is called the minimal polynomial, a very important concept. This is, by the way, one place where the textbook we're using, by Axler, is essentially different from other textbooks: he introduces this notion of minimal polynomial first, and focuses on the minimal polynomial more so than on eigenvectors and eigenvalues, without using determinants. The more traditional approach, which you may have followed in Math 54, is to define what's called the characteristic polynomial, which is defined using the determinant. The determinant of a matrix is an interesting formula, but there are different opinions about it; the way it's usually introduced is very ad hoc, and students are often surprised, wondering who came up with it.
It creates a certain tension, and as I understand it, the point of Axler's book was to write a book without using determinants, and this is how it's done. In some sense we are doing things in a slightly unconventional way; of course, everything is equivalent ultimately, but it's slightly unconventional, and I like it, because from this discussion I hope I was able to get across the point that the existence of eigenvalues is oftentimes a property of the field, not a property of linear algebra: it is a property of the field that it does not have solutions for all polynomial equations. Therefore it's not clear whether we should make such a big deal of eigenvalues; instead, let's look at what is true over every field. Of course, we will talk about eigenvalues anyway. But the existence of eigenvalues has two parts: one is universal, the existence of the minimal polynomial; the second part is specific to algebraically closed fields, namely that every polynomial has a zero. And that second part is in some sense a minor issue, because you can always complexify the vector space: you can turn a vector space over R into a vector space over C, and then suddenly eigenvalues appear. So for instance, in this example, the rotation operator actually has eigenvalues, which are i and -i, the roots of this polynomial; it's just that they are not real numbers. All right, so let's prove this next result, Theorem 2. Suppose that V is a finite-dimensional vector space over F, where F is an arbitrary field, in particular R or C and so on. I want to warn you that in the book, even though the statement of this theorem says arbitrary field, the proof inadvertently refers to elements of F as complex numbers; it's basically a typo, don't be alarmed by it. The theorem is really true for every field. And T is a linear operator on V. Then there exists a unique monic polynomial p of smallest degree such that p(T) is equal to zero. Now, what is this zero? If you substitute an operator into a polynomial, you get an operator; this zero is the zero in the space of operators, which we call L(V). Moreover, the degree of p is at most the dimension of V. By the way, this reminds me that on our exam last week we actually had an example of such a polynomial. Let me put it as a remark. If you look at the problem, I want to say problem six but I'm not 100% sure, there was an equation T^2 = 0. Here's your polynomial: this is p(T) with p(z) = z^2. The problem was to show that T^2 = 0 is equivalent to the image of T, which we don't call the image, we call the range, being contained in the null space of T. That was the problem. But I want to say that here is an example of an expression you get by substituting T into a polynomial; in this case, the polynomial is z^2. Here is an example: T is represented by the matrix with rows (0, 1) and (0, 0). Its null space is spanned by the first basis vector, and its range is also spanned by the first basis vector; in this case the range equals the null space, and it is a one-dimensional subspace. And I claim that T satisfies T^2 = 0, because if you multiply this matrix by itself, you get the zero matrix. On the other hand, there is no polynomial of degree one which works: a monic polynomial of degree one looks like z + a_0, and substituting T gives T + a_0 I, which has a_0 on the diagonal and a 1 in the corner, so it is definitely not zero.
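Here is a sketch of how one might compute a minimal polynomial numerically (a helper of my own, assuming the degree is at most the dimension, as Theorem 2 asserts): find the smallest m for which I, T, ..., T^m are linearly dependent and read off the monic combination. On the matrix with rows (0, 1) and (0, 0) it returns z², and on the rotation matrix from before it returns z² + 1.

```python
import numpy as np

def minimal_polynomial(T, tol=1e-10):
    """Return coefficients [a_0, a_1, ..., a_{m-1}, 1] of the monic minimal
    polynomial of T, found as the smallest m with I, T, ..., T^m dependent."""
    n = T.shape[0]
    powers = [np.eye(n).ravel()]                 # T^0 = I, flattened into a vector
    for m in range(1, n + 1):
        powers.append(np.linalg.matrix_power(T, m).ravel())
        A = np.column_stack(powers[:-1])         # columns: I, T, ..., T^{m-1}
        b = powers[-1]                           # T^m
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        if np.linalg.norm(A @ a - b) < tol:      # T^m is a combination of lower powers
            return np.concatenate((-a, [1.0]))   # p(z) = z^m - a_{m-1}z^{m-1} - ... - a_0
    raise RuntimeError("no annihilating polynomial of degree <= n found")

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(minimal_polynomial(N))     # constant and linear coefficients 0, i.e. p(z) = z^2

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # rotation by 90 degrees
print(minimal_polynomial(A))     # [1. 0. 1.], i.e. p(z) = z^2 + 1
```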
So there is no monic polynomial of degree one such that substituting T for z gives zero, but there is a monic polynomial of degree two, namely z^2, which satisfies this property. That's the minimal polynomial here; this is what we will call the minimal polynomial of T, and it is one of the first examples to keep in mind. Okay, let's prove the theorem. The proof will be by induction on the dimension, which I'll call n, the dimension of V. The first case to consider is n = 0, the zero-dimensional space, a degenerate case. But that's okay; it often happens that the base of the induction is a degenerate case, and then things become more complicated afterwards. Why is the statement true in this case? In this case there is not much room. The claim is that there is a monic polynomial of degree at most the dimension, and the dimension is zero, so it should be a polynomial of degree zero. A polynomial of degree zero is a constant, and there is only one monic polynomial of degree zero, namely the constant polynomial 1. We are claiming that if we substitute T into this polynomial, we get the zero operator. But substituting T into the constant polynomial 1 means taking the identity operator. And in a zero-dimensional space the identity operator is the zero operator, because there is only one element, namely zero; the identity operator is also the zero operator. It is just because the space is so degenerate. So in the base case the statement holds trivially: the space is so small that the identity operator has a second job, it is also the zero operator; this only happens in dimension zero. Okay. Now suppose we have proved the theorem for all operators on spaces whose dimension is less than n, and let's prove it in the case dim V = n. We start in the same way as in the proof of Theorem 1; by the way, n is greater than zero, because we have already taken care of the case n = 0. Pick a nonzero vector v. Then we have shown that there exists a monic polynomial annihilating v. Now, a notational point: we are trying to prove that there is something which we will call p(z), and we already used the notation p(z) in the proof of Theorem 1 for the polynomial which annihilates v; but in the course of this proof that polynomial is not going to be the final answer, so we have to use a different name for it. I don't want to use q either, because we have used q as well. So: there exists a monic polynomial r(z), of some degree m which is less than or equal to n, such that if you substitute T into it and apply the result to v, you get zero: r(T)v = 0. You see where we are now. We are trying to prove that there exists a polynomial p such that p(T) is completely zero; you see the difference between that statement and this one. The statement p(T) = 0 means that p(T) applied to v is the zero vector for all vectors v in V; the equation is equivalent to the statement that this polynomial, applied to any vector, gives the zero vector. But so far we have only proved the existence of some monic polynomial r such that r(T) applied to our specific v, the vector we picked, is zero. That's not enough; it looks like there is a big gap between the two. How can we go from knowing that some vector is annihilated by some polynomial
to saying that all vectors are annihilated, not necessarily by this polynomial, but by some other polynomial? That is our concern next: how do we move from an equation that concerns a single vector we have picked to a statement that applies to all vectors? This is a very nice argument. I want to recall that m is the smallest possible degree; that was part of our construction: there is no polynomial of degree less than m satisfying the same equation. Let's call that equation (*). Okay, let's see how we can get some mileage out of it, so that we can apply things to more general vectors; of course, at some point we'll have to use the inductive assumption as well. The last statement implies that if you take the span of v, Tv, T^2 v, ..., T^{m-1} v, then it has dimension m; in other words, these vectors are linearly independent. This is a crucial point, because remember, I emphasized that when we did the construction in the earlier proof, we included T^m v and said that m is the smallest index for which that vector can be expressed as a linear combination of the preceding ones. But that means that none of the earlier vectors can be expressed as a linear combination of the ones preceding it, because m is the smallest such index; therefore they are linearly independent. So not only do we know that there is a polynomial of degree m which annihilates our vector when we substitute T; we also know that if we remove T^m v and stop at T^{m-1} v, the remaining vectors are linearly independent. But now observe the following: if you apply r(T) not to v but to Tv, the result is also zero. Why? Because these operators commute: r(T) is a combination of terms a_i T^i with i going from 0 to m, and composing with T gives T times the same sum, for the simple reason that T^i times T is T^{i+1}, which is also T times T^i. I have mentioned on a number of occasions that the algebra of operators is non-commutative: if you have two operators T_1 and T_2, then in general T_1 T_2 is not equal to T_2 T_1. But powers of a single operator do commute with each other, for the simple reason that whether you multiply by T on the left or on the right, you get the same thing, T^{i+1}; it doesn't matter in which order you compose. Therefore you can pull the T all the way to the left, and then v gets hit by r(T); r(T) kills v, so you get T applied to zero, and that's zero, by (*), because r(T)v is the zero vector. I should say, I forgot to mention, the same argument also shows that r(T) T^j v is zero for every j; you pull the T's to the left one by one, if you will, but we will only be interested in the vectors on the top line. So each of these vectors, v, Tv, ..., T^{m-1} v, is in the null space of r(T); each of them is annihilated by r(T). The vector v is annihilated by the construction, and Tv, T^2 v, and so on by the previous argument; therefore the span is annihilated too, since every linear combination is annihilated. But we know that the span has dimension exactly m. From this we derive that the dimension of the null space of r(T) is greater than or equal to m.
This implies, by the fundamental theorem of linear maps, that the dimension of the range of r(T) is less than or equal to n - m. In particular, it is strictly less than n, because m is positive. Here m is the degree of this polynomial, and m cannot be zero, because in that case the polynomial would have to be the constant 1, and then r(T)v would be v, which is a nonzero vector, so the equation r(T)v = 0 could not be satisfied. So the equation implies that m is positive, and since m is positive, the dimension of the range of r(T) is strictly less than n. Now we want to use our inductive assumption, but there is one more thing we have to check: that the range is invariant under T. This is where the notion of T-invariance comes in handy. Remember we have the notion of the range. r(T) is a bona fide operator, think of T^2 or T^2 plus the identity, the two explicit examples we have considered; it is an operator on V, and it has a range. What is the range? It is the set of all vectors u such that u equals r(T) applied to some w. I claim that the range is invariant under T, which means that if u is in the range, then Tu is also in the range. Why? Because Tu = T r(T) w, and, again using the fact that T commutes with r(T), we can write this as r(T)(Tw). Therefore there is a w' = Tw in V such that Tu = r(T)w', so Tu is also in the range. That's the definition of invariance: whatever vector you take in the subspace, if you apply T to it, if you rotate it by T, so to speak, loosely speaking, you still end up in the subspace. Therefore T, which starts life as an operator on V, also gives rise to a well-defined operator on this subspace. Remember, my standard example is the three-dimensional space of this classroom and the stage: the stage is a plane containing zero, there is a perpendicular line going through zero, and the operator is rotation about that axis. Then the stage rotates, but every vector in it goes to a vector within the stage. That's a typical example of an invariant subspace, and the range of r(T) is like that: it is preserved by T, every vector in it goes to another vector in it. That means we can consider the restriction of the operator T to this subspace, and the subspace, crucially, has dimension less than n. So we can apply our inductive assumption to T acting on the range. What it gives us is the existence of a polynomial as stipulated in the theorem, because by the inductive assumption we have already proved the statement for all spaces of dimension less than n. So we conclude that there is a monic polynomial, which we'll call s(z), of degree less than or equal to n - m, such that s(T) is equal to zero, not on V, but on the range; in other words, s(T) applied to u is zero for all u in the range. But now we're essentially done. I cannot claim that s(T) kills every vector in V, but I can claim that s(T) applied to r(T)v is zero for every v in V, because r(T)v lies in the range: I take any v, I apply r(T) to it, the r(T) we found on the other blackboard, and it takes me into the range of r(T); that's just the definition of the range.
But I know that every vector in the range satisfies the equation that s(T) applied to it is zero. The result is that s(T) r(T) v = 0 for all v. Now, what kind of polynomial is this product? First of all, it is monic. We can write p(z) = s(z) r(z), and then we get the equation p(T)v = 0 for all v. Moreover, p is monic because it is a product of two monic polynomials, and its degree is the degree of s plus the degree of r, which is at most (n - m) + m = n, as we wanted: at most the dimension of V. This completes the proof of existence: there exists a monic polynomial of degree at most n which satisfies this equation. It remains to show that it is unique. Since I'm running out of time, let me just say it here; I have 30 seconds. Uniqueness is very easy, because among all such polynomials we pick one of smallest degree, and we want to prove that the monic polynomial of smallest degree is unique. Suppose you have two monic polynomials of the same smallest degree which both do the job, both have this property; then take the difference between them. You get a polynomial of smaller degree which still annihilates every vector, and that would contradict the minimality of the degree. Anyway, I'm out of time, so I suggest you read this part in the book; I'm pretty much done. This polynomial is called the minimal polynomial, and next time we will exploit it to understand better the structure of linear operators. [Student:] I was added to the discussion section but not to the lecture, so I cannot see my midterm exam. [Instructor:] I don't understand how that can be; send me an email. Are you on the roster of this class? Are you enrolled? [Student:] Enrolled.