All right, so we are going to continue what we discussed last time. I want to remind you of the main definition that came up in the last lecture, the definition of the generalized eigenspace. The setup, as before, is the following: we have a finite-dimensional vector space V over the field of complex numbers and an operator T acting on it. Now, given λ in ℂ, we define a subspace G(λ, T) of V. It consists of all vectors v in V such that (T − λI)^k v = 0 for some power k. Oh, I forgot to say: k is a natural number, and ℕ will stand for the set of positive integers, the natural numbers. Okay? This G(λ, T) is called the generalized eigenspace corresponding to λ. It is a generalization of the notion of an eigenspace: I remind you that previously we defined another subspace E(λ, T), also a subspace of V, which consists of those vectors in V which satisfy this equation for k equal to 1. For E(λ, T) the condition is (T − λI)^1 v = 0, that is, (T − λI) applied to v equals zero. This is called the eigenspace corresponding to λ. Naturally, if a vector satisfies this equation, it satisfies the condition above — for k equal to 1 it satisfies it. Therefore E(λ, T) is a subspace of G(λ, T), and this justifies the terminology "generalized eigenspace." What is less obvious, but also true, is the following.

Lemma 1. G(λ, T) is non-zero if and only if E(λ, T) is non-zero.

One direction is automatic, because one space is contained in the other: if E(λ, T) is non-zero, then G(λ, T) must be non-zero, since it is bigger or equal. That is the implication from right to left. But let us also prove the opposite direction. Suppose G(λ, T) is non-zero. When I say "non-zero" — to be more pedantic, we should not just write "not equal to 0," because 0 by itself is a number or a vector, and here we are talking about a vector space. Strictly speaking, we should put curly brackets, to indicate the set consisting of the single element zero, with the obvious operations of addition and scalar multiplication. I was being a little imprecise. So: if G(λ, T) is non-zero, there exists a vector v in it which is non-zero — a subspace being non-zero means exactly that there is a non-zero vector in it — and that vector has to satisfy the defining equation: (T − λI)^k v = 0 for some k. But we don't know what k is. It could be 3, 4, 5 billion — we don't know; it depends on the situation. But for sure we can compile a list of vectors. We know v is non-zero. Then we write (T − λI)v, then (T − λI)²v, and so on, up to (T − λI)^{k−1}v, and finally (T − λI)^k v, which we know is zero. So we have a finite list of vectors. We know the first element is non-zero and the last is zero, so somewhere in the middle it turns from non-zero to zero: there exists a j between 0 and k − 1 such that (T − λI)^j v is non-zero but (T − λI)^{j+1} v is zero — it has to switch somewhere. Let's call this non-zero vector w = (T − λI)^j v. But then (T − λI)w is (T − λI)^{j+1} v, because we increase the power by one, and that is zero. So w is non-zero, with (T − λI)w = 0 — and that's the definition of an eigenvector.
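A minimal numerical sketch of this switching argument, assuming NumPy; the 2×2 Jordan block, the value λ = 2, and the starting vector are an invented example, not the one from the board:

```python
import numpy as np

# Hypothetical example: a 2x2 Jordan block with the single eigenvalue lam = 2.
T = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
A = T - lam * np.eye(2)

# v is a generalized eigenvector: (T - 2I)^2 v = 0, but (T - 2I) v != 0.
v = np.array([0.0, 1.0])

# Walk down the list v, Av, A^2 v, ... and stop just before it turns to zero.
w = v
while not np.allclose(A @ w, 0.0):
    w = A @ w

# w is now an honest eigenvector: non-zero, with (T - lam I) w = 0.
print(w, np.allclose(T @ w, lam * w))   # expect: [1. 0.] True
```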
So, by the argument on that blackboard, we find that if there is such a vector v, then there exists another vector w such that w is non-zero but (T − λI)w = 0. That means that E(λ, T) is non-zero — E(λ, T) is not the zero vector space. Okay? Is that clear? This is not immediately obvious, but the argument is simple, and it is a very common pattern: as we gain more and more proficiency with proofs in this course, you will see that this type of argument happens very often — find a minimal value for which something holds, and oftentimes use it to get a contradiction. We will have at least one example of this later today; it is a very elementary application of the same principle.

A corollary of Lemma 1: G(λ, T) is non-zero if and only if λ is an eigenvalue of T. A priori it could have happened that there are no eigenvectors and yet there are non-zero generalized eigenvectors; this argument shows that whenever there is a generalized eigenvector, there is an eigenvector as well. And "λ is an eigenvalue of T," as we also know, is equivalent to "λ is a root of the minimal polynomial." That is to say, these subspaces are non-trivial — not equal to zero — precisely for the eigenvalues of T. Okay, so that's the first result. Now, what we really want to prove today is the following theorem.

Theorem 1. V is the direct sum of the G(λ_i, T), where λ_1, …, λ_m is the set of all eigenvalues of T, without repetitions — we list them all exactly once.

In light of the corollary, we are not missing anything, because there are no other non-trivial generalized eigenspaces besides the ones associated to eigenvalues. But the theorem is not obvious. There are two aspects to it. First, we have to show that the sum of these subspaces is a direct sum, which is not always the case: you could have two planes in three-dimensional space which intersect along a line, and that is not a direct sum. Second, we have to show that the sum is actually equal to V. That is to say, every vector in V can be written uniquely as a linear combination of generalized eigenvectors, one for each eigenvalue. That is our goal. Let's call it Theorem 1, because we will have at least one other theorem. How to prove it? Here we will use the theorem which we proved last time, which I will call Theorem 2, even though Theorem 2 chronologically comes before Theorem 1 — it was already proved last time, but in my exposition today this one comes first, so I call it Theorem 1. Theorem 2 was proved at the end of the last lecture. What does it say? Let n be the dimension of V. It says that V is the direct sum of the null space of T^n and the range of T^n: V = null(T^n) ⊕ range(T^n). I proved that at the end of the last lecture; it crucially used our discussion of how null spaces of powers of T stabilize. Our previous lecture laid the groundwork for what is going to happen today. It's a really powerful result, you see. If you were to replace G by E in Theorem 1, that would correspond to the case of diagonalizable operators: in that case, each generalized eigenspace is actually an honest-to-goodness eigenspace. But as we discussed, not every operator is diagonalizable. What we're doing this week is fulfilling the promise of describing the general case. What happens in general — what is the substitute for the theorem about diagonalization? In the case of diagonalization, V is a direct sum of eigenspaces; here, you have a direct sum of generalized eigenspaces.
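Since Theorem 2 is the engine of today's argument, here is a minimal numerical sanity check of the decomposition V = null(T^n) ⊕ range(T^n); the 3×3 matrix is an invented example:

```python
import numpy as np

# Hypothetical T: a nilpotent 2x2 block together with the eigenvalue 3.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
n = T.shape[0]
Tn = np.linalg.matrix_power(T, n)

# Read off bases of null(T^n) and range(T^n) from the SVD of T^n.
U, s, Vt = np.linalg.svd(Tn)
tol = 1e-10
null_basis = Vt[s <= tol]          # rows spanning null(T^n)
range_basis = U[:, s > tol].T      # rows spanning range(T^n)

# The dimensions add up to n, and together the two bases form a basis of V,
# so the sum is direct and fills the whole space.
combined = np.vstack([null_basis, range_basis])
print(null_basis.shape[0] + range_basis.shape[0] == n)   # True
print(np.linalg.matrix_rank(combined) == n)              # True
```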
And then, in the first lecture after the break, we will give a more precise description of the operator: we will show that it can be represented by a matrix which has a block structure, with every block being a Jordan block. But for now, this is our goal. Okay. Last lecture included a sequence of technical lemmas about null spaces of powers of T. Now we're going to reap the benefits of those lemmas, because their end result was that we can always decompose our vector space V as a direct sum of the null space and range of a specific power. This is a very powerful result. But to make it really powerful, we need one more ingredient, because I also need to relate this null space to the generalized eigenspace. And that's the following lemma. How many lemmas do we have so far? Just one. Okay, so this is Lemma 2. First, let me fix the terminology: if a vector v satisfies the equation (T − λI)^k v = 0 for some k in ℕ and v is non-zero, we call it a generalized eigenvector corresponding to λ — similarly to how we call a non-zero solution of (T − λI)v = 0 an honest-to-goodness eigenvector.

Lemma 2. v is a generalized eigenvector corresponding to some λ, in this sense, if and only if (T − λI)^n v = 0, where n is, again, the dimension of our vector space.

You see how nice: from this it follows that the null space of (T − λI)^n is precisely the generalized eigenspace corresponding to λ. Theorem 2 is stated for the null space of T^n, but if we replace T by T − λI, the first summand will be the generalized eigenspace for that λ. That's where this is going — but let me explain the proof. There are two implications, and the implication in one direction is obvious from the definition. Of course, we are assuming V is non-zero — if V is the zero vector space, there is nothing to do; there is only one operator, and it sends zero to itself. Implicit in all of this is that the dimension of V is positive, otherwise there is no discussion; therefore this n is a positive number. Now, if the equation (T − λI)^n v = 0 is satisfied, where n is the dimension of the vector space, then the defining equation is satisfied for a specific k, namely k = n. Fine — that proves one direction. But in fact we can also prove the other way, and for that we use the stabilization of the null spaces that we discussed last time. Remember, this follows from one of the lemmas — actually, let me check, so I don't confuse you unnecessarily... yes, it was a lemma, Lemma 3 of last time. What did it say? It was stated as a statement about null spaces of powers of an arbitrary operator, so we can take the liberty of substituting whatever operator we want. Applied to T − λI, it says: the null space of (T − λI)^n is equal to the null space of (T − λI)^{n+i} for all i.
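That stabilization is easy to watch numerically. A minimal sketch, with a hypothetical 3×3 matrix: the nullity of (T − λI)^k grows for small k and then freezes from k = n = dim V on.

```python
import numpy as np

# Hypothetical example: eigenvalue 2 with a 2x2 Jordan block, plus eigenvalue 5.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]
A = T - 2.0 * np.eye(n)

# dim null(A^k) can grow while k < n, but is constant for every k >= n.
for k in range(1, n + 4):
    Ak = np.linalg.matrix_power(A, k)
    print(k, n - np.linalg.matrix_rank(Ak))   # nullity: 1, 2, 2, 2, 2, 2
```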
Now, suppose you have a generalized eigenvector v. According to the definition, it satisfies (T − λI)^k v = 0 for some k, which is a natural number. There are two possibilities. If k is less than or equal to n, we are done, because then we simply apply (T − λI)^{n−k} to this equation: applying (T − λI)^{n−k}, you get (T − λI)^n v = 0, which is exactly the equation we want. In this case we are done. The only possibility that remains is that k is greater than n. Then we just use the stabilization: v is in the null space of (T − λI)^{n+i} for some i, but that is equal to the null space of (T − λI)^n. So again, (T − λI)^n v = 0. Okay. Is that clear? If it's not clear, please ask me. It's a very nice argument: the seemingly technical and esoteric discussion about null spaces of powers of various operators comes in handy, because it shows us that the null space cannot grow after n, where n is the dimension. That means any equation of this kind can be reduced to the case k = n, regardless of whether k is greater than n or less than or equal to n. This is very nice because, you see, there is an element of uncertainty, of indeterminacy, in the definition: it says the subspace consists of all vectors annihilated by some power k. But which power? We don't know — maybe you need to apply the operator ten times the dimension of the vector space, or twice the dimension plus one. We don't know. But from the discussion in our last lecture, we know that's unnecessary. It allows us to pin it down — to describe the space as just the set of vectors satisfying this single equation with k equal to n. And that's the corollary of this lemma.

Corollary. G(λ, T) is equal to the null space of (T − λI)^n, where n is the dimension of V.

Because we just showed that v is a generalized eigenvector corresponding to λ if and only if this equation is satisfied. And what does it mean that this equation is satisfied? It means that v is in the null space of this operator. Instead of the whole range of natural numbers k, we have reduced everything to a single equation. That's very nice, as you will see in the next result. The next result is almost Theorem 1 — I will point out in a moment what extra is needed to get from it to Theorem 1. I will call it Theorem 3.

Theorem 3. Again under our standing assumptions — V is a non-zero finite-dimensional vector space over the field of complex numbers and T is an operator acting on it — there exists a basis of V consisting of generalized eigenvectors of T.

That is a step, surely; Theorem 1 is a stronger statement. Why? Because suppose we already know Theorem 1. We can derive Theorem 3 from it easily: from Theorem 1, we know that V is a direct sum of subspaces. Therefore, if we choose a basis in each of those subspaces and take the union of those bases, we will have a basis of the entire space. But each subspace consists of generalized eigenvectors (and zero), so any basis we choose in any of the subspaces will consist of generalized eigenvectors, and then the union will also consist of generalized eigenvectors. This shows you that Theorem 1 implies Theorem 3. But our approach is that we are actually going to use Theorem 3 as a springboard to get to Theorem 1. We will have to prove a couple more lemmas to get there, but I just want to show you that it's actually very close. And Theorem 3 we will prove using Theorem 2, which was at the end of the last lecture.
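Before the proof, here is a small sketch of what this corollary buys us computationally: the generalized eigenspace is a single null space, with no search over k. Same hypothetical matrix as above; the eigenspace at 2 is one-dimensional, while the generalized eigenspace is two-dimensional.

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],    # hypothetical: Jordan block at 2, plus eigenvalue 5
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

lam = 2.0
A = T - lam * np.eye(n)
print(nullity(A))                              # dim E(2, T)  -> 1
print(nullity(np.linalg.matrix_power(A, n)))   # dim G(2, T)  -> 2, one equation, k = n
```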
Okay, so here is the proof. Please ask me if the statement is not clear, or if anything in the previous discussion is not clear. Don't be shy — just raise your hand and I'll answer your question. We will argue by induction on n. If n is equal to 1, the statement is clear. Why? Because in this case V is isomorphic to a one-dimensional vector space, and under any such isomorphism T corresponds to a linear map from ℂ to ℂ. Any linear map from ℂ to ℂ sends x to a·x, where a is a complex number. Therefore any non-zero x is an eigenvector with eigenvalue a, hence a generalized eigenvector, and it gives a basis. It's obvious. Now suppose we have proved it — meaning the statement of Theorem 3 — for all vector spaces of dimension smaller than n; let us prove it for V of dimension equal to n. That's usually how we run inductive arguments. Here is the crucial moment: we will use the direct sum decomposition. First of all, we know that T has at least one eigenvalue; denote it by λ. Why does the operator have an eigenvalue? Because the ground field is the field of complex numbers. We have proved that for every linear operator acting on a finite-dimensional, non-zero vector space, there exists a unique monic polynomial of smallest degree such that if we substitute T into it, we get zero — the minimal polynomial. We have also proved that the eigenvalues of T are the zeros of this polynomial. Since the ground field is the field of complex numbers, every polynomial of degree greater than or equal to one can be written as a product of linear factors, and a linear factor has the form z − λ for some λ. Therefore such a λ is a zero of this polynomial, a root of this polynomial, and hence there exists at least one eigenvalue. What is crucial here is that the ground field is the field of complex numbers. We have seen examples of operators acting on real vector spaces — vector spaces over the field of real numbers — which do not have eigenvalues. The typical example is rotation of a plane: unless it is rotation by 180 degrees or by zero degrees (rotation by zero degrees being just the identity operator), rotation by any other angle θ does not have any eigenvectors. But this is solely because the field of real numbers is not algebraically closed: the corresponding minimal polynomial is of degree two, and we know there are quadratic polynomials over the reals which cannot be written as a product of linear factors with real coefficients. Okay — so here we use crucially that the ground field is the field of complex numbers, and from this we conclude that there is at least one eigenvalue. Let's call it λ.
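The rotation example is easy to see in coordinates. A minimal sketch (the angle 60° is an arbitrary choice): the characteristic polynomial z² − 2cos(θ)z + 1 has no real roots for θ ≠ 0°, 180°, while over ℂ the eigenvalues e^{±iθ} show up immediately.

```python
import numpy as np

theta = np.pi / 3                    # rotation by 60 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals = np.linalg.eigvals(R)          # numpy works over C, so it finds the eigenvalues
print(vals)                          # 0.5 +- 0.866j: not real, so no real eigenvectors
print(np.allclose(np.sort_complex(vals),
                  np.sort_complex([np.exp(1j * theta), np.exp(-1j * theta)])))   # True
```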
Then, by Theorem 2, V can be written as the null space of (T − λI)^n — I will use yellow to emphasize it — plus the range of (T − λI)^n. Now, in the statement of Theorem 2 we considered the null space of T^n plus the range of T^n. But the T in that statement is an arbitrary linear operator: you can take T − λI instead of T, it doesn't matter, it's still a linear operator. So don't be fooled by the notation — the equation was written for T, but T can be arbitrary, and with T − λI in place of T the same statement is true. That's what we're doing: substituting T − λI for T in that equation, we get the direct sum decomposition V = null((T − λI)^n) ⊕ range((T − λI)^n). Again, n is the dimension of our vector space in this equation. Okay, now we use Lemma 2 — or rather the corollary of Lemma 2 — which shows that the first summand is nothing but G(λ, T). Because we have just shown exactly that: G(λ, T) is the null space of (T − λI)^n, n the dimension of V. And that's the first term of the direct sum. How nice, you see: if you have a direct sum like this, you can construct a basis of V by choosing a basis of the first summand and a basis of the second summand and just taking the union. And we know that every basis of the first summand is going to consist of generalized eigenvectors — so we're halfway there; we have part of the basis that we need, because what we are trying to prove is that there exists a basis of V consisting of generalized eigenvectors, and there is a basis of the first summand which consists of generalized eigenvectors. Let's move here — I want to keep the main notation in view; this we don't need anymore. Okay. So: the null space of (T − λI)^n, which is G(λ, T) by the corollary, has a basis of generalized eigenvectors, say v_1 up to v_p, because it is the generalized eigenspace corresponding to λ. And since λ is an actual eigenvalue of T — that's how we chose it; that's why we had the little discussion about why every operator has at least one eigenvalue; we didn't choose λ randomly, we chose λ to be one of the eigenvalues, and we are certain there is at least one to choose from — the eigenspace E(λ, T) is non-zero. Therefore the generalized eigenspace is also non-zero, because, if anything, it's bigger — greater than or equal to E(λ, T), as we just discussed. So its dimension is positive; that is the number p I introduced here (let's not use k — let's use a letter we haven't used before): p > 0. But then, because the dimension of V is the sum of the dimensions of the two summands, it follows that the dimension of the second summand — the range of (T − λI)^n — is strictly less than n. If the first summand has dimension greater than zero, the second summand must have dimension smaller than n; otherwise together they cannot give you n. So we are going to use our inductive hypothesis here. Our inductive hypothesis is that we have already proved that every vector space of dimension less than n has a basis of generalized eigenvectors of any operator acting on it. This is how we will proceed: we already have part of the sought-after basis, coming from the first summand and consisting of generalized eigenvectors, as we want. What we need now is to show that the second subspace also has a basis of generalized eigenvectors, and that's where we'll use the inductive assumption, by virtue of the fact that its dimension is strictly less than n. (I'm starting to think that maybe I should erase this — is that okay if I erase it, or do you guys want to keep it? No opinion? I just want to stay localized, not jump all over the blackboard; if I need it, I'll write it again.) Now here is the thing. We use another trick, which we have already used before: we observe that the second summand, the range of (T − λI)^n, is T-invariant. Remember this crucial notion of an invariant subspace: it's a subspace which T preserves in its totality — take any vector from it, apply T, and T will move it around but will not take it out of the subspace. Why is that? Simply because, by definition, this range consists of all vectors v in V such that v can be written as (T − λI)^n w for some w. But if you have such a v, then Tv can also be written as (T − λI)^n applied to something — namely Tw. You see, for the simple reason that if you have something which is a polynomial in T, we can always push T through it: they commute. This is something we have discussed a number of times. Two general operators do not commute — you cannot write ST = TS — but if S is a polynomial in T, then ST = TS, simply because, for example, T^r times T is just T^{r+1}, and you can break it up either way: T · T^r = T^r · T. Likewise, sums of commuting operators commute, and so on. Therefore, this range is invariant.
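Here is a minimal numerical check of this invariance step, with the same hypothetical matrix as before: T maps each basis vector of U = range((T − λI)^n) back into U.

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],       # hypothetical example, as above
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]
lam = 2.0
An = np.linalg.matrix_power(T - lam * np.eye(n), n)

# Orthonormal basis of U = range((T - lam I)^n), read off the SVD.
Usvd, s, _ = np.linalg.svd(An)
B = Usvd[:, s > 1e-10]               # columns span U

for u in B.T:                        # check that T u stays inside U
    rank_aug = np.linalg.matrix_rank(np.column_stack([B, T @ u]))
    print(rank_aug == B.shape[1])    # True: adjoining T u does not enlarge the span
```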
So we can restrict. Let's call this subspace U — a shorter notation. Then we can talk about T restricted to U, and that's going to be an operator from U to U. Again, U is just the range of (T − λI)^n, and we have learned that its dimension is less than n. So we can apply the inductive hypothesis to U, and we find that U has a basis of generalized eigenvectors of T restricted to U. But if you have a generalized eigenvector of T restricted to U, it is certainly a generalized eigenvector of T itself, because T acts on a vector in U exactly as the restriction acts on it — that's the whole point of the restriction: as long as you are focusing on the subspace, the action of the original operator is the same as the action of its restriction. (And I have just enough space, so I don't have to erase the statement of the theorem to continue.) So we get a basis w_1, …, w_q of U consisting of generalized eigenvectors of T. But we also have a basis of the first summand, the null space, consisting of generalized eigenvectors of T. We take the union: v_1, …, v_p, w_1, …, w_q is what we want — namely, a basis of V consisting of generalized eigenvectors. Any questions? It's a really nice argument, and now we can really appreciate why the theorem about the sum decomposition is so important: it is just such a powerful tool, allowing us to reduce the dimension in one shot like this. Okay. What more do we need? If there are no other questions, let me proceed toward proving the original statement. (I suppose I don't need this anymore.) So what is missing? Remember Theorem 1, which is our prize, our goal: V is a direct sum of the G(λ_i, T), where λ_1, …, λ_m is the set of eigenvalues of T, each of them taken exactly once, i from 1 to m. That's what we want. We need to show two more things, so that together with Theorem 3 we will get Theorem 1. The first one is a lemma — and how many lemmas have we had? We had two, right? So this is Lemma 3: a vector cannot be a generalized eigenvector with two different eigenvalues. You see, that is something that was left open. We have defined generalized eigenvectors, and the definition is a lot more fuzzy than the definition of an eigenvector. Before I state it properly, I want to make a remark: a vector v cannot be an eigenvector, just like this, with two different eigenvalues, say λ and μ. Because if v is an eigenvector with eigenvalue λ, then Tv = λv; but if it is also an eigenvector with eigenvalue μ, then Tv = μv. That means (λ − μ)v = 0, and since v is non-zero, λ − μ = 0 — that is, λ = μ.
That's something we have to appreciate. That's why, when we talked about diagonalization: in the case of a diagonalizable operator, V can be written as a direct sum of eigenspaces corresponding to different eigenvalues — they do not intersect. If they did, it would not be a direct sum. Lemma 3 is exactly the same statement, but for generalized eigenvectors: a generalized eigenvector cannot be a generalized eigenvector with respect to two different eigenvalues. Let me state it like this.

Lemma 3. Each generalized eigenvector v in V corresponds to a unique eigenvalue.

Now remember: the definition of a generalized eigenvector, just like the definition of an eigenvector, includes the property that it has to be non-zero. We do not consider the zero vector an eigenvector — even though the zero vector belongs to the eigenspace, it is not an eigenvector. In other words, every vector in the eigenspace is either an eigenvector or zero. Likewise for generalized eigenvectors: generalized eigenvectors are precisely the non-zero vectors in the generalized eigenspace corresponding to a specific eigenvalue. So, suppose the opposite — suppose there are two different eigenvalues. Let me set this up carefully. Suppose v is a generalized eigenvector corresponding to some λ. By Lemma 2, this means that v is non-zero and (T − λI)^n v = 0. Suppose v is also a generalized eigenvector corresponding to some α which is not equal to λ. Okay, so here's what we're going to do. From this second property — and here I am referencing the argument I gave at the very beginning — there exists a smallest number m such that (T − αI)^{m−1} v is non-zero but (T − αI)^m v = 0. You see, this is what we discussed at the very beginning, when I showed you that if there exists a generalized eigenvector corresponding to λ, then there exists an eigenvector corresponding to λ: when you keep applying T − λI, at some point the result switches from non-zero to zero. I believe I denoted the switching exponent by j there; here, instead of j, we call it m − 1 — I'm trying to be consistent with the notation of the book. (For λ, we wrote the equation with exponent n, where n is, as always, the dimension of V — this is what we proved characterizes generalized eigenvectors corresponding to λ. Minus α here — yes, thank you.) Okay, great. This m, by the way, is not just any natural number: since we know that the n-th power of T − αI always kills a generalized eigenvector, m has to be less than or equal to n. The largest it can be is n; it could be n − 1, n − 2, and so on. Okay. Now here's the trick — yes, I can erase this now. Here's the trick: we are going to use the binomial formula. We are going to write T − λI as (T − αI) + (α − λ)I. That is legitimate, because we subtracted αI and we added it back. So (T − λI)^n can be written as this expression raised to the n-th power. Let's call the first operator A = T − αI, and the second B = (α − λ)I. Note that they commute — in other words, it doesn't matter in which order we apply them — so we can use the binomial formula.
I'm guessing you're probably familiar with it: it tells us what (a + b)^n is. Typically, the binomial formula is used when a and b are two numbers. In that case, (a + b)^n can be written as a sum of binomial coefficients times powers:

(a + b)^n = Σ_{k=0}^{n} C(n, k) · b^k · a^{n−k},

where C(n, k) — the standard notation is "n choose k" — is called the binomial coefficient, and it is equal to n! / (k! (n − k)!). This formula is called the binomial formula; k goes from 0 to n. For example, (a + b)² = a² + 2ab + b². You will see why it will be more convenient for me to write the power of b first and then the power of a. Typically, we consider this formula in the case when a and b are two numbers, and complex numbers definitely commute. Commutativity is important: if you open the brackets, even for the square, you will see that it's actually not a² + 2ab + b²; it is a² + ab + ba + b², because a multiplies b and then b multiplies a. If a and b do not commute — that is to say, ab ≠ ba — then these are not equal. For instance, if you substitute two arbitrary matrices of the same size for a and b, then a + b makes sense and its square makes sense, but it will not be equal to a² + 2ab + b²; it will be equal to a² + ab + ba + b². But we are precisely in the sweet spot, the nice situation where our operators do commute, because one of them is a multiple of the identity, and the identity commutes with everything. Therefore it is legitimate to apply this formula here. For the square, the coefficients are 1, 2, 1; in general they are given by this formula with factorials. It actually doesn't matter for us what these numbers are — what matters is that they are non-zero, all of them, for all k from 0 to n.
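This is worth seeing concretely. A minimal sketch with invented random matrices: the naive expansion fails for generic non-commuting A and B, but the full binomial formula holds once B is a multiple of the identity — exactly the situation in the proof, where B = (α − λ)I.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Generic matrices: (A+B)^2 = A^2 + AB + BA + B^2, NOT A^2 + 2AB + B^2.
lhs = np.linalg.matrix_power(A + B, 2)
print(np.allclose(lhs, A @ A + 2 * A @ B + B @ B))        # False (almost surely)
print(np.allclose(lhs, A @ A + A @ B + B @ A + B @ B))    # True

# If B is a multiple of the identity, it commutes with everything,
# and the full binomial formula (A+B)^n = sum_k C(n,k) B^k A^(n-k) holds.
B = 1.7 * np.eye(3)
npow = 4
rhs = sum(comb(npow, k)
          * np.linalg.matrix_power(B, k) @ np.linalg.matrix_power(A, npow - k)
          for k in range(npow + 1))
print(np.allclose(np.linalg.matrix_power(A + B, npow), rhs))   # True
```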
Now let's apply this formula to our A and B. We are going to get

(T − λI)^n = Σ_{k=0}^{n} C(n, k) · (α − λ)^{n−k} · (T − αI)^k.

You see where we are going with this? We're doing a sleight of hand: we pass from λ to α, at the cost of getting a sum of different powers, each multiplied by a power of α − λ. But this doesn't matter, because we have assumed these are two different values — α is not equal to λ. What matters is the coefficient: (α − λ)^{n−k} is actually a number, because B acts just by scalar multiplication, so I don't need to keep it as an operator; only T − αI keeps a power here, which is k. Let's apply this to our vector v. On the one hand, the result is zero, because (T − λI)^n v = 0 — that's the property of being a generalized eigenvector with respect to λ. On the other hand, it's equal to the sum, where we now apply each operator in the sum to v. Now here's what I want to do: I want to also apply (T − αI)^{m−1} to both sides. If I apply anything to zero, I get zero. But look at the right-hand side: it starts with the term k = 0, then there is k = 1, and so on. If I apply (T − αI)^{m−1} to the first term — in this case the binomial coefficient is equal to 1 — I get (α − λ)^n (T − αI)^{m−1} v: for k = 0 the power of T − αI in the sum is zero, it is nonexistent, so there is only the number (α − λ)^n, and then I hit it with (T − αI)^{m−1}. (No, I don't need to multiply anything further into the exponent — you're right, thank you; I got confused for a second. We just keep (α − λ)^n as it is.) The next term, where k = 1, already has the first power of T − αI; when I hit it with (T − αI)^{m−1}, the total will be (T − αI)^m — the power grows by one — and the m-th power kills v. So that term is some number times (T − αI)^m v, which is zero. The term after that has the (m + 1)-st power — still zero. All of these terms disappear, and what remains is (α − λ)^n (T − αI)^{m−1} v. Now, this is non-zero: (T − αI)^{m−1} v is non-zero by our assumption — that is exactly how we constructed m — and (α − λ)^n is non-zero because α ≠ λ. This is a contradiction, because we got that zero equals something non-zero. Okay? If you think about it, it's actually very similar to the argument at the beginning. For eigenvectors we are lucky that we don't need to take powers, but the point is that you can always turn a generalized eigenvector into an eigenvector by applying T − λI enough times. Okay, so that's the first of these lemmas. Now we know for sure: if you have a generalized eigenvector, there is no question which eigenvalue it corresponds to — it corresponds to one and only one eigenvalue, if it is indeed a generalized eigenvector. There is one more lemma, and then the pathway to proving Theorem 1 will be clear. Lemma 4, which is an analogue of something we proved for eigenvectors: we proved that eigenvectors corresponding to distinct eigenvalues are linearly independent, and that's what we're going to prove for generalized ones as well.

Lemma 4. Any list of generalized eigenvectors corresponding to distinct eigenvalues is linearly independent.

The way we proceed is very similar to how we proved it for eigenvectors. We're proving it by contradiction — I should have said: suppose this is not true. Then there exists a list of generalized eigenvectors, corresponding to distinct eigenvalues, which is linearly dependent. But if so, we can find such a list with the fewest number of elements; call this number m. What does it mean? m is the smallest number such that there exist m generalized eigenvectors v_1, …, v_m, corresponding to distinct eigenvalues λ_1, …, λ_m, which satisfy a relation a_1 v_1 + ⋯ + a_m v_m = 0 where all the a_i are non-zero. Because if one of the a_i were zero, we would just exclude that term from the list, and we would have m − 1 elements satisfying such a relation. So we have a smallest possible relation in which all coefficients are non-zero and all the vectors are generalized eigenvectors corresponding to distinct eigenvalues. That is what would happen if the statement did not hold. We have to arrive at a contradiction.
I think you can already see where this is going: we're going to show that if this is so, then there exists a relation with fewer elements. How to do it? We have to apply something that kills the last term. How can we kill the last term? Well, the last vector is a generalized eigenvector, so apply (T − λ_m I)^n to both sides of this equation — n, of course, is again the dimension of V. Since the last vector v_m is a generalized eigenvector corresponding to λ_m, it will be killed: (T − λ_m I)^n kills v_m, so it kills the last term. All we need to do is see that the rest of it will look just like this equation, but with fewer terms; that would be the contradiction. For that, we have to analyze what (T − λ_m I)^n v_i is, for i less than m. Now remember, v_i is supposed to be a generalized eigenvector corresponding to λ_i; it is in the generalized eigenspace corresponding to λ_i. But this subspace is invariant. Let's slow down for a second to see that. Recall that G(λ_i, T) consists of all vectors annihilated by (T − λ_i I)^k for some k; we can actually use the more advanced description, that it is just the null space of (T − λ_i I)^n. If you have a vector in here, it is annihilated by (T − λ_i I)^n. But then Tv will also be annihilated, because we can move T through this power — they commute. This translates to the statement that the null space, and therefore the generalized eigenspace, is T-invariant. So (T − λ_m I)^n v_i is also going to be in G(λ_i, T); let's call it ṽ_i. I claim that ṽ_i is non-zero. Because if it were zero, v_i would be a generalized eigenvector for both λ_i and λ_m, and we have just shown that this is impossible, since λ_i and λ_m are different — λ_i is distinct from λ_m, i being less than m. So each ṽ_i is non-zero. That means that if we apply (T − λ_m I)^n to the relation, what are we getting? We are getting a_1 ṽ_1 + ⋯ + a_{m−1} ṽ_{m−1} = 0 — the last term disappears, because it gets killed by this operator. That's a contradiction, because we have found a relation with fewer elements: m − 1 non-zero generalized eigenvectors corresponding to distinct eigenvalues, with all coefficients non-zero. Therefore the lemma holds: any list of generalized eigenvectors corresponding to distinct eigenvalues is linearly independent. Okay, great. So now, finally, we can tackle our main theorem. Back to Theorem 1: V is equal to the direct sum of the G(λ_i, T), i from 1 to m, where λ_1, …, λ_m are the distinct eigenvalues of T, all of them, taken exactly once. There are two steps. First, show that the sum G(λ_1, T) + ⋯ + G(λ_m, T) is in fact a direct sum. Now, what does "direct sum" mean? It means that if you have vectors v_i in G(λ_i, T) such that v_1 + ⋯ + v_m = 0, then all the v_i are equal to zero. Right? That's what it means to be a direct sum. But we have just proved this — it follows immediately from Lemma 4: if some of the v_i were non-zero, those non-zero ones would be generalized eigenvectors corresponding to distinct eigenvalues satisfying a linear dependence, which Lemma 4 forbids. So we have the direct sum of the G(λ_i, T) — each is a subspace, we can take the sum, and we have just shown that the sum is direct — and it is a subspace of V.
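Here is a minimal numerical check of Lemma 4 and of the direct-sum statement, on an invented example: conjugating the block matrix from before by a random invertible S hides the eigenspaces from the coordinate axes, yet stacking bases of G(2, T) and G(5, T) still gives a linearly independent list spanning all of V.

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
S = rng.standard_normal((3, 3))
T = S @ J @ np.linalg.inv(S)          # same eigenvalues 2, 2, 5, in disguise
n = T.shape[0]

def gen_eigenspace_basis(lam, tol=1e-8):
    # Basis of G(lam, T) = null((T - lam I)^n), read off the SVD.
    An = np.linalg.matrix_power(T - lam * np.eye(n), n)
    _, s, Vt = np.linalg.svd(An)
    return Vt[s <= tol * s.max()]

stacked = np.vstack([gen_eigenspace_basis(2.0), gen_eigenspace_basis(5.0)])
print(stacked.shape[0], np.linalg.matrix_rank(stacked))   # 3 3: independent, spans V
```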
Now the only thing left is to show that this sum is the whole thing. Why is it all of V? Because, by Theorem 3, V has a basis consisting of generalized eigenvectors, and each generalized eigenvector lives in one of those subspaces. Therefore V is included in the sum. Concretely, by Theorem 3 the basis is going to look like this: v_{1,1}, …, v_{1,k_1}, then v_{2,1}, …, v_{2,k_2}, and so on, up to v_{m,1}, …, v_{m,k_m}, where the first group consists of linearly independent vectors of G(λ_1, T), the next are in G(λ_2, T), and the last in G(λ_m, T). Therefore, if the sum were a subspace smaller than V, it wouldn't be possible, because this shows that every vector of V can be written as a linear combination of vectors from the sum. That means the inclusion is actually an equality. All right — so that proves the theorem. What else do we need to know? There is a term that was introduced for the dimension of G(λ, T), which is called multiplicity. So there are multiplicities d_i, where d_i, the dimension of G(λ_i, T), is called the multiplicity of the eigenvalue λ_i. From the theorem we get the equation: the sum of the d_i, i from 1 to m, is equal to n, the dimension of V. Then we introduce the polynomial q(z), which is the product of the factors (z − λ_i)^{d_i}. Guess what: this is what you have seen before under the name of the characteristic polynomial. For now we are introducing it in a roundabout way compared to the usual one — namely, as a product of powers of z − λ_i, with exponents the dimensions of the generalized eigenspaces. But after the break I will introduce, in a conceptual way, the notion of the determinant, following the material of chapter nine, and then we will have a conceptual definition of the characteristic polynomial in terms of the determinant, and we will ascertain that this definition is equivalent to the one using the determinant. That's what will happen in maybe the first three lectures after the break; after that, we will also establish the Jordan canonical form. I'm out of time, guys. Have a good break. I'll see you in April.
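A last minimal sketch for these notes (same invented matrix as earlier), tying the bookkeeping together: the multiplicities d_i = dim G(λ_i, T) add up to n, and q(z) = Π (z − λ_i)^{d_i} agrees with the characteristic polynomial computed the usual way.

```python
import numpy as np

T = np.array([[2.0, 1.0, 0.0],     # hypothetical: d = 2 at lambda = 2, d = 1 at lambda = 5
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
n = T.shape[0]

def multiplicity(lam):
    An = np.linalg.matrix_power(T - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(An)   # dim G(lam, T)

eigenvalues = [2.0, 5.0]
d = [multiplicity(lam) for lam in eigenvalues]
print(d, sum(d) == n)                      # [2, 1] True: multiplicities sum to dim V

# q(z) = (z - 2)^2 (z - 5), built from the roots with their multiplicities.
roots = [lam for lam, di in zip(eigenvalues, d) for _ in range(di)]
print(np.allclose(np.poly(roots), np.poly(T)))   # True: matches the char. polynomial
```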