All right. Last week we talked about the minimal polynomial. We saw that for every finite-dimensional vector space V over a field F, and every linear operator T on V, there exists this minimal polynomial p(z) with the following properties: first of all, it is monic; if you substitute the operator for z, you get the zero operator, that is, the zero element of the vector space of all operators acting on V; and finally, it has the smallest possible degree among polynomials with these properties. We showed that it exists and is unique, and that its degree is less than or equal to the dimension, right? We further showed that the set of eigenvalues of T is in one-to-one correspondence with the set of zeros, or roots, of this polynomial p(z). The advantage of this approach is that we could prove the existence of this polynomial without using any tools such as a determinant function. At the end of last lecture I contrasted this approach with the more traditional one, which instead uses the so-called characteristic polynomial, constructed using the determinant. The approach of the textbook that we are using, by Sheldon Axler, is to avoid determinants until we have developed enough theory to introduce them in a proper, conceptual way, and not just as some God-given function. That's the advantage. The disadvantage is that, a priori, there is no algorithm for calculating the minimal polynomial, unlike the characteristic polynomial. Still, we had one class of operators for which we can find the minimal polynomial explicitly, which we discussed last week. Without loss of generality, we can consider the case of an operator acting on the space F^n. As we discussed before, a general vector space of dimension n over F is isomorphic to F^n, but not canonically; in any case, we can choose a basis in our vector space and identify it with F^n. Then the operator becomes the operator of multiplication by an n-by-n matrix. We will discuss various classes of matrices representing our operators. One class we discussed last week: matrices of this form, where you have ones just below the diagonal; zeros on the diagonal except at the very last entry, which is -a_{n-1}; the entries -a_0, -a_1, -a_2, and so on, down the last column; and zeros everywhere else. For instance, in the case n = 3, written out explicitly, the last column is -a_0, -a_1, -a_2. In this case, we saw that the minimal polynomial is a_0 + a_1 z + ... + a_{n-1} z^{n-1} + z^n. It is a polynomial of degree n, so in this case the degree of the minimal polynomial is actually equal to the dimension of our space. I explained in detail why this is the case. Now I want to add to this a little bit: what does it actually mean that this operator has this matrix representation? As always, the representation arises once we choose a basis. What kind of basis is it? The answer, which is clear from this formula, is that it is a basis of the following form: you have v_1, then you have v_2, which is equal to T v_1.
Then you have v_3, which is T^2 v_1, and so on, up to v_n, which is T^{n-1} v_1. In other words, the vectors are v_1, T v_1, ..., T^{n-1} v_1. Notice that there are exactly n of them, because we start with v_1 itself, which you can think of as T^0 v_1 if you will, since T^0 is the identity operator. A set of n vectors in an n-dimensional space is a basis if and only if it is linearly independent, if and only if it is spanning: because the number of vectors equals the dimension, each of these properties implies the other, and each is equivalent to being a basis. But what I want to focus on is the spanning property: the defining property of a basis in which our operator has this matrix form is that these vectors span our vector space. (Yes, that's right: v_2 is the first power of T applied to v_1, v_3 is the second power, and v_n is the (n-1)-st power.) Now, in an absolutely general situation, if you take a vector v and start applying T to it, so you take v, then T v, then T^2 v, and so on, the resulting set is not necessarily going to span your vector space, let alone be a basis. It's easy to come up with examples where it does not. For instance, suppose v is in the null space of T. If the vector space has dimension greater than one, then this set, call it (*), is neither spanning nor linearly independent, so it is not a basis, right? So it is a very nice property which is not always true: it may hold for some vectors, but not for other vectors. But in fact, one can show that it is a generic property: if you take a generic operator T and a generic vector v in the vector space V, this will be the case. And if it is the case, then the matrix of the operator relative to this spanning set, which is automatically a basis because it has exactly n elements, will look like the matrix above. Once it looks like this, by the argument that I gave last week, we find that the minimal polynomial has coefficients that can be read off precisely from the entries in the last column (up to sign). But what does it mean? It means that this vector conspires with the operator to generate the whole vector space. Such a vector is called cyclic; this is the situation of a cyclic vector with respect to the operator T. You see, the first major idea in linear algebra was that we can reduce things, or express things, in terms of what we call a basis, so that we don't need information about every single vector in our vector space; for most purposes it is quite sufficient to have a basis. For instance, a low-dimensional vector space, of dimension two or three over an infinite field like the real numbers, has infinitely many vectors; even R, which is just the real line, has as many vectors as there are real numbers. But we can reduce the study of the vector space R^n to a study involving just the n basis vectors: we can represent every vector as a column, every operator by a matrix, and so on. That's the first reduction, so to speak: the fact that we have the operations of addition and scalar multiplication enables us to use a finite set to generate an infinite set. But now there is an additional piece of data: it's not just a vector space like F^n, or a general n-dimensional vector space over a field F, but we are also given an operator. Then we can reduce further, to a single vector, provided that it is a cyclic vector.
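To make the companion-matrix discussion above concrete, here is a minimal numerical sketch (my own numbers, not from the lecture; it assumes NumPy is available). It builds the 3-by-3 matrix of this form, checks that e_1, A e_1, A^2 e_1 is a basis, so e_1 is a cyclic vector, and verifies that a_0 I + a_1 A + a_2 A^2 + A^3 is the zero operator.

```python
import numpy as np

# Illustrative coefficients a0, a1, a2 -- arbitrary choices for this sketch.
a0, a1, a2 = 2.0, -3.0, 5.0

# Companion-type matrix from the lecture: ones below the diagonal,
# -a0, -a1, -a2 in the last column, zeros elsewhere.
A = np.array([[0.0, 0.0, -a0],
              [1.0, 0.0, -a1],
              [0.0, 1.0, -a2]])

v = np.array([1.0, 0.0, 0.0])                      # v1 = e1 is a cyclic vector here
basis = np.column_stack([v, A @ v, A @ A @ v])     # v, Tv, T^2 v
print(np.linalg.matrix_rank(basis))                # 3: these vectors form a basis

# p(A) = a0*I + a1*A + a2*A^2 + A^3 should be the zero operator.
p_of_A = (a0 * np.eye(3) + a1 * A
          + a2 * np.linalg.matrix_power(A, 2)
          + np.linalg.matrix_power(A, 3))
print(np.allclose(p_of_A, 0))                      # True
```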
If you find a vector with this property, then in some sense this vector holds the key to the whole vector space: you are using this vector and your operator to generate the whole space, because you can generate the remaining vectors of a basis by applying powers of the operator. It's a very nice situation, where you can generate everything from a single vector by using an operator. That's the situation of a cyclic vector. This is a very powerful idea, and it has many applications; unfortunately, I don't have time to talk about it in more detail, but I wanted to explain the phenomenon we are considering. If you have an operator acting on a vector space and you have a cyclic vector, then you are well equipped to obtain the minimal polynomial of the operator, and therefore, in particular, you can find all the eigenvalues of the operator. We are not using the determinant, and therefore not using the characteristic polynomial, but a totally different tool: an attempt to find a cyclic vector. That's the first class of operators for which we can calculate the minimal polynomial explicitly. Today we'll talk about another class of operators: operators which can be represented by upper triangular matrices. We will call them upper triangular matrices; sometimes we'll just say triangular. What does it mean? It means that there is a basis v_1, ..., v_n of V, let's call it beta, such that the matrix of our operator relative to this basis has the following form: some entries on the diagonal, some entries above the diagonal, which could be zero or nonzero, but everything below the diagonal is zero. We will call such matrices triangular matrices. You could also call them upper triangular matrices, because by triangular you could also mean a lower triangular matrix, one in which the roles of the entries above and below the diagonal are exchanged: zeros above the diagonal and possibly nonzero numbers below the diagonal. But by default, if we say triangular, we will mean upper triangular; if we want to talk about lower triangular matrices, we will say lower triangular. This is how we will distinguish between the two. Note that this is a very different form from the one we just discussed, because there you have ones below the diagonal, which is certainly not upper triangular, right? In this case, what will happen, as we will see, is that the eigenvalues are just the diagonal entries. So this is another case where we can find the eigenvalues. Ultimately, what we want to find is not just the minimal polynomial: we want to find all the eigenvalues, and ideally we would like to find a basis of eigenvectors, if it exists. If it doesn't exist, then we'll see what else we can do. The logic is like this: we want to understand eigenvalues. That's why we like this construction of the minimal polynomial, because the minimal polynomial is precisely the polynomial which contains the information about the eigenvalues: they are its zeros, right? For an operator of the first kind, we can find the minimal polynomial and therefore the eigenvalues. But here is another class of operators for which we can also read off the eigenvalues: they turn out to be the diagonal entries. This is what we're going to show. Okay, so let's discuss this situation.
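Before we dive in, here is a quick numerical illustration of that claim (my own example, assuming NumPy): for an upper triangular matrix, a numerical eigenvalue routine returns exactly the diagonal entries.

```python
import numpy as np

# An upper triangular matrix with diagonal entries 2, 5, 2 (arbitrary choices).
T = np.array([[2.0, 1.0,  4.0],
              [0.0, 5.0, -1.0],
              [0.0, 0.0,  2.0]])

print(np.sort(np.linalg.eigvals(T)))   # [2. 2. 5.] -- just the diagonal entries
```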
First of all, what does it mean that an operator on a finite-dimensional vector space admits such a representation? This will be the subject of the first result, which is very simple. I'll call it a Lemma: the matrix of T with respect to beta is upper triangular, where beta is again a basis of the form v_1, v_2, ..., v_n, if and only if the following two equivalent statements hold: first, the span of v_1, ..., v_k is invariant under T; second, T v_k is in the span of v_1, ..., v_k. This is for every k from 1 to n. These are the conditions; that's what the triangular form means. Let's discuss why. Let me look at the matrix more closely: say its entries are a_11, a_12, a_22, a_13, a_23, a_33, and so on, with zeros below the diagonal. We know what the matrix means: the first column represents T v_1, written in terms of our basis. The fact that there are zeros everywhere except in the first component means that T v_1 is proportional to v_1. That corresponds to the case k = 1 on that board: first of all, T v_1 is in the span of v_1. But it also means that the span of v_1, which is the one-dimensional subspace generated by v_1, is invariant under T: every vector of the subspace goes to another vector of the subspace. Now let's look at the next column, which is T v_2 written in the basis beta. What do we find? We find that T v_2 is a linear combination of v_1 and v_2: you have a_12 v_1 + a_22 v_2. That's the case k = 2: indeed, we see that T v_2 is a linear combination of v_1 and v_2. We already know that T v_1 is a multiple of v_1, and that means that the span of v_1, v_2 is invariant: the invariance of this subspace is equivalent to the statement that T v_1 and T v_2 are linear combinations of v_1 and v_2, which is the case, as we can see. You continue like this: the next column is T v_3, written in terms of beta, and the fact that the matrix is upper triangular, so that the entries below the third one are zero all the way down, means that it is a linear combination of v_1, v_2, and v_3. It continues exactly like this for every k. Okay, that's a sketch of the proof, so that's what it means. Instead of the cyclic property, where you can get everything from one vector, here, on the contrary, when you apply the operator to the k-th basis element, what you get is a linear combination of v_k and the preceding basis vectors. That is the property that is expressed here. Okay. Next we have the following theorem, relating the diagonal entries of an upper triangular matrix to the minimal polynomial. I will keep the earlier statement on the board to remind ourselves why we are doing this: we are interested in the minimal polynomial because its zeros will give us the eigenvalues. The following theorem shows us the first step in establishing a link between an upper triangular matrix representation and the minimal polynomial. Theorem: suppose that the matrix of T with respect to the basis beta is upper triangular with diagonal entries λ_1, ..., λ_n; it's an n-by-n triangular matrix, so it has n diagonal entries, right? Then the claim is that if we take (T - λ_1 I) ... (T - λ_n I), we get zero. In other words, the corresponding product of linear factors, the polynomial (z - λ_1) ... (z - λ_n), gives us zero if we substitute T for z. And that means it may not be the minimal polynomial, but it contains the minimal polynomial as a factor. Right?
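Before the proof, here is a quick numerical sanity check of the statement (my own example, assuming NumPy): take a random upper triangular matrix and multiply out the factors A - λ_k I over its diagonal entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = np.triu(rng.standard_normal((n, n)))   # a random upper triangular matrix
lambdas = np.diag(A)                        # its diagonal entries lambda_1, ..., lambda_n

product = np.eye(n)
for lam in lambdas:
    product = product @ (A - lam * np.eye(n))

print(np.allclose(product, 0))              # True: (A - lambda_1 I)...(A - lambda_n I) = 0
```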
In other words, it is divisible by the minimal polynomial. This theorem does not give us a formula for the minimal polynomial, but it shows us that the minimal polynomial is a product of linear factors, all of the form z - λ; there is nothing else. I'll keep this on the board. So let's prove it. Let's start with v_1. (By the way, in the notation of the lemma, the diagonal entries a_11, a_22, a_33, ... are now λ_1, λ_2, λ_3, and so on.) We have just seen that T v_1 = λ_1 v_1, because the matrix is upper triangular. T v_1 = λ_1 v_1 is equivalent to saying that (T - λ_1 I) v_1 = 0. Right. But if so, then let's apply (T - λ_1 I) first, and then the rest of the factors, to v_1. We can use the property that we have used before: the factors T - λ_i I and T - λ_j I commute. I recall that we say A and B commute if AB = BA. This is a property that we intuitively come to expect because of our extensive experience with numbers, like real numbers, complex numbers, or integers, where it is satisfied. But matrices, or more generally linear operators on a vector space, do not satisfy this property in general. As we have discussed a number of times, the algebra of matrices is non-commutative: in general, given two matrices A and B, we don't have this equation, because the left-hand side and the right-hand side are different. We'll talk more about commuting and non-commuting operators on Thursday, by the way; it is a very important property. But for sure, if we call one of these factors A and another one B, then they do commute: when you multiply them out, you get a combination of T^2, T, and I, and if you multiply them in the opposite order, you get exactly the same expression. Because they commute, we can replace the product in one order by the product in another order. You can think of it as pushing this factor through the next one, and then through the next one, and so on, until you have pushed it all the way to the right: you push the factor T - λ_1 I past all the others. Eventually you end up with the product of the factors T - λ_i I, where i goes from 2 to n, and it's okay to write it simply as a product, because it doesn't matter in which order, followed by T - λ_1 I. Then this hits v_1; but v_1 is annihilated by that factor, so you get zero, and then whatever you apply afterwards, you still get zero. It's clear, I hope. In other words, v_1 is annihilated by this operator. Let's call the whole product p̃(T), "p tilde of T". We find that p̃(T) applied to v_1 is 0, right? But v_1 is a very special case, because it is actually an eigenvector. What about v_2? Let's try to do the same with v_2. Consider (T - λ_2 I) v_2, that is, T v_2 minus λ_2 times v_2. From the lemma, we know that this belongs to the span of v_1 and v_2, right? But more than that: when we write out T v_2, the coefficient in front of v_2 is exactly λ_2. So when we subtract the term λ_2 v_2, what's left is just a multiple of v_1. In general, T v_k is in the span of v_1, ..., v_k; but what we have just found is that T v_2 - λ_2 v_2, that is, T v_2 corrected by that multiple of v_2, is actually in the span of the preceding vectors, here just the span of v_1. Okay.
But if so, then if I apply (T - λ_1 I)(T - λ_2 I) to v_2, I will get zero. Because when I apply this, where is my yellow chalk, oh, here: (T - λ_2 I) v_2 is a multiple of v_1, namely a_12 v_1 in that notation, and now we are hitting it with T - λ_1 I, which, as we have just shown, sends it to zero, right? So whereas v_1 is killed by the single linear factor T - λ_1 I, v_2 is killed by the product of the first two linear factors of this product. And now you see the pattern. Do you have a question? Oh, go ahead. Yes, that's right: we are proving the theorem by showing that this operator p̃(T) is equal to zero on our vector space, which implies that the minimal polynomial has to be a divisor of this one, because the minimal polynomial is the polynomial of smallest degree which has this property. This product may not be the minimal polynomial, but it contains the minimal polynomial as a factor. I spent quite a bit of time last Thursday explaining the difference between the two; if you were not here, watch the video of my lecture. Our approach in this course is different, unusual. The conventional approach is to introduce the characteristic polynomial, and the eigenvalues are found from the characteristic polynomial; instead, we introduce the minimal polynomial. Oh, in this case it is the characteristic polynomial? Yes, that's right, and the minimal polynomial is always a factor of it. Yeah, that's right. But we are trying not to use the characteristic polynomial. At the end of the semester, we will introduce the determinants properly, so that they don't appear as something ad hoc but as part of a coherent conceptual explanation. Then we will talk again about characteristic polynomials and compare the characteristic polynomial with the minimal polynomial more precisely. For now we are speaking only in terms of minimal polynomials. The one thing we love about the minimal polynomial is that it gives us the eigenvalues, just like the characteristic polynomial does. Okay? Therefore, in this discussion, our logic is that we want to find what the minimal polynomial is, and that will give us the information about what the eigenvalues are. You have a question again? Because we subtracted this term, you see, we effectively killed it: what we are doing is killing the diagonal entries. By passing from T to T - λ_1 I, or T - λ_2 I, and so on, we are eliminating one of the diagonal entries. Okay, and that means that the statement becomes stronger: (T - λ_k I) v_k is in the span of v_1, ..., v_{k-1}, and not merely in the span of v_1, ..., v_k. Okay? Anyway, you see now that v_2 is annihilated not by a single factor but by the product of these two factors. That's how we do it in general. We continue: suppose we have already shown, for indices up to k - 1, that (T - λ_1 I) ... (T - λ_{k-1} I) v_{k-1} = 0. We want to show that if you take the product and include one more factor, with λ_k, then v_k satisfies the corresponding equation. Okay? For that, we proceed in the same vein: we simply apply the extra factor, T - λ_k I, to v_k first. In the same way as before, we see that this removes the diagonal entry: what we get is in the span of v_1 up to v_{k-1}, because the term λ_k v_k that we have subtracted is exactly the diagonal term, the same as what we did for k = 2. Therefore, if you now include the remaining factors and apply the whole product to v_k, you will get zero. Why? Because (T - λ_k I) v_k is in the span of v_1 up to v_{k-1}, and v_{k-1} is annihilated precisely by the product of the previous factors. But what about v_{k-2}?
v_{k-2} is annihilated by the product of factors up to T - λ_{k-2} I, so here we are overkilling it: for v_{k-2}, even fewer factors are enough, and for sure it will also be annihilated by the whole product. Likewise v_{k-3}: it is already annihilated by the first k - 3 factors, and so on. The upshot of this is that the product (T - λ_1 I) ... (T - λ_k I) annihilates v_i for all i from 1 to k. That's what we have shown, by using the assumption: proving it for the first, then the first implies the second, the second implies the third, the third implies the fourth, and so on. So it's an inductive argument; it's not a full-blown induction, because we are not talking about all possible values, but only a finite range of values from 1 to n. I hope it's clear what this implies: if I denote by p̃ the product of all of them, then we find at the end of the day that p̃(T) applied to v_i is equal to zero for all i from 1 to n. But that's equivalent to p̃(T) being the zero operator, because the set {v_1, ..., v_n} is a basis. Right, and that completes the proof. Since we know that the minimal polynomial is precisely the polynomial of smallest degree among those which annihilate T, it follows that this polynomial p̃ is divisible by p. We have not yet gotten the minimal polynomial, but almost. Okay. Then from this we can now describe the eigenvalues; that's the next theorem. In other words, we now know that p(z) is a product of some of the factors z - λ_i, where i belongs to a subset S of the set {1, ..., n}, right? There are two possibilities. One possibility is that some of these numbers λ_i coincide; λ_i could be equal to λ_{i+1}, for instance, in which case two factors in this product are equal. Then the minimal polynomial could have a smaller multiplicity for a given λ: instead of (z - λ)^2, it could contain just z - λ. In that case the two polynomials still have the same zeros; it is just that the multiplicities are different. So that's one possibility. But the second possibility is that some of the λ_i do not occur at all among the zeros of the minimal polynomial; in that case, you are actually removing one of the zeros. The next theorem shows that in fact that's not the case. Maybe I'll put it this way: the set of eigenvalues of T is exactly the set of zeros of p̃(z), which is {λ_1, ..., λ_n}. In other words, the subset S includes each of those λ's at least once; it could be that the multiplicity is different, so only the first type of phenomenon that I described is possible. There could be some repetitions among the diagonal entries, but if you take them as a set, without repetitions, then that is exactly the set of eigenvalues of T. Or equivalently: we know that p(z), the minimal polynomial, has as its zeros precisely the eigenvalues; now we are saying that this is also the set of zeros of p̃, which we obtained by taking the product of these linear factors. Perhaps at this point it's useful to give you an example, just so you can orient yourself as to what's going on. It's closely connected to what I explained last time, to the example I gave at the end of last lecture. Let's consider the two-by-two case: the dimension of V is two; let's say V is R^2.
There are several possibilities. You have a triangular matrix which looks like this: λ_1 and λ_2 on the diagonal, zero below, and some entry, call it a_12, above. There are three cases. Case one: λ_1 is not equal to λ_2. In this case, we will see that both are eigenvalues, and the minimal polynomial is (z - λ_1)(z - λ_2). In this case there is actually an eigenbasis. In this eigenbasis, v_1 is one of the vectors: v_1 is special because it is the only basis vector whose column has zeros everywhere except in the first position, so when you apply your operator to v_1, you get a multiple of v_1. On the other hand, T v_2 = a_12 v_1 + λ_2 v_2 is a linear combination of v_1 and v_2, so if the starred entry a_12 is nonzero, v_2 is not an eigenvector. But it turns out that in this case you can find a linear combination of v_1 and v_2 which is an eigenvector with eigenvalue λ_2. That's the point. To find it, you have to exploit this condition that λ_1 is not equal to λ_2: when you write the formula, you have to divide by λ_2 - λ_1. Maybe I should write the second vector; I'll call it v_2'. What is v_2' going to be? It is going to be v_2, corrected by a multiple of v_1. If you apply T to v_2, you get λ_2 v_2, which is good, plus a_12 v_1, which we have to get rid of. How to get rid of it? Let's put some coefficient here: set v_2' = v_2 + α v_1 and see what happens. Applying T, you get T v_2' = T v_2 + α T v_1 = a_12 v_1 + λ_2 v_2 + α λ_1 v_1. You want this to be equal to λ_2 v_2', which is λ_2 v_2 + λ_2 α v_1. The v_2 terms already match; comparing the coefficients of v_1, we need a_12 + α λ_1 = α λ_2, that is, α (λ_2 - λ_1) = a_12, which means that α has to be a_12 divided by λ_2 - λ_1. You see? And we can divide by λ_2 - λ_1 precisely when they are not equal, which is what I assumed: λ_1 is not equal to λ_2. So no matter what a_12 is, you can adjust v_2 by adding to it a multiple of v_1 so that it becomes an eigenvector with eigenvalue, as expected, λ_2. But if λ_1 is equal to λ_2, you may or may not be able to do it. It's clear what happens. Let me write case two here: λ_1 is equal to λ_2 and a_12 is equal to zero. In this case the matrix is λ_1, λ_1 on the diagonal and 0 above; the troublesome term disappears, and we don't need to make any adjustment: T v_2 = λ_2 v_2 on the nose. Because a_12 is zero, both v_1 and v_2 are eigenvectors with the same eigenvalue, and {v_1, v_2} is an eigenbasis. And then there is a third case.
So then the question is: what is the minimal polynomial here? It is just z - λ_1, because T - λ_1 I is already equal to zero: T is exactly λ_1 times the identity matrix, so if you take T - λ_1 I, you get the zero matrix. You see, z - λ_1 already satisfies the requisite property of a minimal polynomial; we don't need to square it. This is a case where the minimal polynomial has degree one. But the p̃ that I introduced, this polynomial, is obtained by just blindly multiplying the linear factors corresponding to all the diagonal entries; in this case, p̃(z) = (z - λ_1)^2. In other words, p̃ and p have the same zero, namely λ_1, which is the only zero of either, but the multiplicities are different: in the minimal polynomial, z - λ_1 has multiplicity one, that is, it appears in degree one, whereas in p̃, which we obtain just from the diagonal entries, it appears as (z - λ_1)^2. Okay, ask me if something is unclear. Finally, we have the third case, where λ_1 is equal to λ_2, but a_12 is nonzero: you have λ_1, λ_1 on the diagonal and, for example, a_12 = 1. It could be any nonzero number, but let's say it's equal to one. You still have the same diagonal entries, and the matrix is still upper triangular, but the difference is that now it is not diagonal. In this case, there is only one eigenvector. Well, when I say one eigenvector, it should be taken with a little grain of salt: every nonzero multiple of v_1 is an eigenvector, since T v_1 = λ_1 v_1. So there exists an eigenvector with eigenvalue λ_1, but the space of such eigenvectors is one-dimensional. There is nothing you can do to adjust v_2 to make it into an eigenvector, because the calculation we did shows that we can only make sense of that expression, dividing by λ_2 - λ_1, which is a legitimate formula only when we divide by a nonzero number, in the situation where λ_1 is not equal to λ_2. And if λ_1 is equal to λ_2 with a_12 = 0, we don't need to do the calculation at all, because the troublesome term just disappears. Those are the two possibilities for being able to construct a second linearly independent eigenvector, and in the present case neither possibility can be realized: there is no eigenbasis. This is a special case of the situation of what's called a Jordan basis. In this case, we can find a basis which is not an eigenbasis, but which has the following property: (T - λ_1 I) v_1 = 0 and (T - λ_1 I)^2 v_2 = 0; in fact, (T - λ_1 I) v_2 = v_1, that is, if you apply the operator once to v_2, you get v_1. You see what happens: T - λ_1 I annihilates v_1, but it does not annihilate v_2; rather, it sends v_2 to v_1. Do you see that? Let me explain this in more detail. I feel that it is really useful to go through these exercises, because once you understand how things work, say in the two-by-two or three-by-three case, it is much easier to follow all these abstract arguments; whereas if I'm just giving you abstract arguments and you don't have a model in mind of what is going on, it's very difficult to follow. That's why I prefer to spend more time on this: from here to understanding the theorems is a very short path, but without it, it's a very long and tortured path.
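Here is the same observation in coordinates, as a quick sketch with my own numbers (λ = 3, a_12 = 1, assuming NumPy); it just checks what was said above: B = T - λI sends v_1 to zero and v_2 to v_1, and B^2 is the zero matrix.

```python
import numpy as np

lam = 3.0                                # an arbitrary choice of lambda
A = np.array([[lam, 1.0],
              [0.0, lam]])               # case three: equal diagonal entries, a_12 = 1
B = A - lam * np.eye(2)                  # B = T - lambda*I

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
print(B @ v1)                            # [0. 0.]  -- v1 is an honest eigenvector
print(B @ v2)                            # [1. 0.]  -- B sends v2 to v1, not to zero
print(np.allclose(B @ B, 0))             # True    -- so (z - lambda)^2 annihilates A
```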
Now, you see, T is given by this matrix; let's write λ instead of λ_1 to simplify. Then what is T - λI? It is the matrix with zeros on the diagonal and a one in the upper corner. Previously, when that corner entry was zero, this would give us the zero matrix; but now this term remains. What is special about this matrix? Let's call it B. Applied to v_1 it gives zero, but applied to v_2 it gives v_1; that's what we see by reading off the first and the second columns. So v_2 is not an eigenvector with eigenvalue λ. Because, and this is the eigenvector property, we like to write the eigenvector equation as T v = λ v, but this is equivalent to (T - λI) v = 0, because we simply move the term to the left-hand side. The eigenvector equation is that the vector is annihilated by T - λI. v_1 satisfies this equation, but v_2 does not. However, it fails to satisfy it in a very controlled fashion: when we apply B = T - λI to v_2, we don't get zero, but we get the previous vector, which is an eigenvector. What does it mean? Here is your v_2 and here is your v_1. If you apply B to v_1, you get zero; if you apply B to v_2, you get v_1. But that means that if you apply B twice, you get zero. Therefore, v_2 satisfies the equation B^2 v_2 = 0. Such vectors are called generalized eigenvectors. Compare the two equations: (T - λI) v = 0 is the eigenvector equation; (T - λI)^2 v = 0 is a generalized eigenvector equation. The difference is that in the first we have the first power of the operator B, which I remind you is T - λI, and in the second, the second power. If a vector is annihilated by some power of T - λI, then it is called a generalized eigenvector. Here we see that in this situation there is no basis of eigenvectors, but there is a basis of generalized eigenvectors: v_1 and v_2. In general, one can prove, and hopefully we will cover this later in the semester, that at least for vector spaces over the complex numbers, every operator has a basis of generalized eigenvectors. In the two-by-two case, this is a complete description of what's going on. Now, what does it mean for the minimal polynomial in this case? In the previous case, when there was a zero in the corner, the minimal polynomial had degree one, because T - λI was already zero: if you have zero there, you get zero. But now T - λI is nonzero, while B^2 is zero; in other words, (T - λI)^2 = 0. The minimal polynomial in this case is (z - λ)^2, and that is exactly p̃: in this case, p and p̃ are the same, where p̃ is what we get as the product of the linear factors from the diagonal entries. What happens if we take three-by-three matrices? It's very similar, but there are more possibilities. Again, the vanilla case is when you have an upper triangular matrix and all the diagonal entries are distinct, three distinct numbers. This is like our case one for two-by-two. It turns out that in this case there is again an eigenbasis: it includes the vector v_1; then you can construct a combination of v_2 and v_1 which is an eigenvector; and finally a combination of v_3, v_2, and v_1 which is an eigenvector. In the calculation, differences λ_i - λ_j will occur, which is why distinctness is an important condition. So an eigenbasis exists, and it looks like this: v_1; then v_2', which is v_2 plus a suitable multiple of v_1; and v_3', which is v_3 plus a suitable combination of v_1 and v_2.
In this case, the minimal polynomial is just the product of the factors z - λ_i for i from 1 to 3, and it is equal to p̃(z). Then there are other cases. For example, two of the three diagonal entries are equal, but the third one is different. Then you are back to analyzing the previous case, because the matrix has a block form: say you have λ_1, λ_1, and then λ_3, with zero below. This depends on whether the off-diagonal element in that block is zero or nonzero, and you analyze it as in cases two and three, provided that λ_3 is different. Likewise, if λ_2 is equal to λ_3 but λ_1 is different, and so on. So there are these cases as well. Finally, there is the most interesting case, a new phenomenon which did not occur before: all of the diagonal entries are the same. It turns out that in this case, by changing the basis, we can always bring the matrix to the form where you have λ on the diagonal and ones just above the diagonal. This is called the Jordan canonical form. What happens here is that if we take T - λI, which is obtained by removing the diagonal entries, the result is this operator B, the analogue of the B which we discussed in the two-by-two case. In the case of two-by-two, the square of this matrix is zero; but in the case of three-by-three, it is the cube which gives you the zero matrix. In this case, you get a situation similar to the previous one, but now it's a chain of three vectors: you have v_3, you have v_2, you have v_1. v_1 is an honest-to-goodness eigenvector: when you apply B to it, you get zero. If you apply B to v_2, you get v_1; if you apply B to v_3, you get v_2. Now, which is the smallest power of B which is zero applied to everything? It's the third power, because the first power kills v_1, the second power kills v_2, and the third power kills v_3. Indeed, if you raise this matrix to the third power, you get the zero matrix. Do you have a question? No? Okay. You see what I'm saying: in this case, the minimal polynomial is going to be (z - λ)^3. The minimal polynomial knows about this phenomenon: the multiplicity it carries is the length of the longest cycle of this type, v_3, v_2, v_1, or chain, whatever you want to call it. Okay? So this is just to orient you with a few examples, so that these things become more concrete and not so abstract; otherwise there are just some formulas and we are manipulating formulas. These are good examples to illustrate this general story. Okay, now let's go back to the theorem. What are we trying to do? We have shown that, in general, p̃, which is the product of the linear factors, is divisible by p(z). But, as I explained, it is not yet clear whether they share the same zeros, that is, whether the set of zeros of p is exactly the set λ_1, ..., λ_n. That's what we're trying to prove now. In one direction, it's easy; it's obvious, right? If you have an eigenvalue of T, then it is a zero of p(z), the minimal polynomial; that's what we proved last week, one of the main conclusions from last week, that the eigenvalues are precisely the zeros of p(z). But since p divides p̃, it follows that every zero of p(z) is a zero of p̃(z). Therefore, every eigenvalue is a zero of p̃. I remind you that p̃(z) is just the product of the linear factors corresponding to the diagonal entries of our triangular matrix. This way we have proved that every eigenvalue is a zero of p̃, right? That's clear.
But much less clear is why, if you have a zero of p̃, it will be an eigenvalue, because, a priori, a zero of p̃ might not be a zero of p, right? We know that p(z) is a product of some of these factors, possibly missing some of the λ_i. Okay. Now we're going to show that that's not possible. So let's prove it in this direction: we want to show that every zero of p̃(z), and again, this reminds you what p̃ is, is an eigenvalue of T. Here is a nice argument. You take T - λ_k I. Again, I've given you enough examples, so I hope that this discussion will immediately evoke in your mind a matrix like this one; well, in that particular case the two λ's were the same, but in general, what's going to happen is that you have a matrix with λ_1, λ_2, up to λ_n on the diagonal, and zeros everywhere below. What are you doing by subtracting λ_k I? You are subtracting the matrix which has λ_k everywhere on the diagonal and zeros everywhere else. It will modify the diagonal entries by subtracting λ_k from each of them, right? In particular, the k-th entry will become zero, because at the k-th place on the diagonal you had λ_k, and now you are subtracting λ_k; let me write it like this, λ_k - λ_k, which is zero. This new matrix will still be upper triangular, and the entries above the diagonal will be the same, because λ_k I has zeros there. All the numbers on the diagonal except the k-th one will be modified by subtracting λ_k, and the k-th one will actually become zero. But if it becomes zero, let's zoom in on the k-th column. Previously, the k-th column had zeros below the k-th entry, possibly something nonzero above, and λ_k on the diagonal. That's what we meant by saying that if you apply T to v_k, you get a linear combination of v_1 up to v_k. But if you also do this extra trick and subtract the diagonal matrix, that entry becomes zero, and suddenly (T - λ_k I) v_k is a linear combination of v_1 up to v_{k-1}: the operator T - λ_k I maps the span of v_1, ..., v_k into the span of v_1, ..., v_{k-1}. We have made sure that v_k will not appear in the image, by subtracting λ_k, that is, by making that entry zero. Okay? That means the dimension drops: the dimension of the first span is k, the dimension of the second is k - 1; remember, these are elements of the basis. That means that the null space of T - λ_k I is nonzero, by the fundamental theorem of linear maps: the range of the restriction has dimension at most k - 1, therefore the dimension of the null space is at least one. Right? But that means that there exists a nonzero vector v_k' in the span of v_1, ..., v_k such that (T - λ_k I) v_k' = 0; it is in the null space. And being in the null space, as we discussed, is exactly the eigenvector condition: T v_k' = λ_k v_k'. On that blackboard, I actually wrote an explicit formula for v_2', right? Okay, so this completes the proof: it shows indeed that every zero of p̃ is an eigenvalue of T. Okay, now the next result. So let's say the previous one was Theorem 2, and this is Theorem 3. Now we can also state a converse. It involves a representation as a triangular matrix with diagonal entries λ_1, ..., λ_m, possibly with repetitions; let me formulate it correctly: the claim is that T can be represented by an upper triangular matrix
if and only if the minimal polynomial is a product of linear factors, (z - λ_1) ... (z - λ_m). Here m could be less than n; it doesn't have to be n, the dimension. Let's just say m is less than or equal to n. So in this case, you see, the fact that the minimal polynomial can be written as a product of linear factors is equivalent to the statement that T has a representation as an upper triangular matrix. You see that the logic is a little bit different from what we are used to with the characteristic polynomial. There, we try to get diagonalization: diagonalization means that you want to represent your operator by a diagonal matrix. As my example for two-by-two matrices already shows, that is not always possible. But even if you cannot put it in diagonal form, you can put it in Jordan form, and the Jordan form is still triangular. Our logic here is to settle on this intermediate case, between a completely general matrix and a diagonal one.
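To close, here is one more illustration of the last few statements (my own example, assuming NumPy): an upper triangular matrix with diagonal entries 3, 3, 5, for which p̃(z) = (z - 3)^2 (z - 5) while the minimal polynomial is only (z - 3)(z - 5): same zeros, smaller multiplicity, and still a product of linear factors, consistent with the triangular representation.

```python
import numpy as np

A = np.array([[3.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
I = np.eye(3)

print(np.allclose(A - 3*I, 0))                             # False: degree 1 is not enough
print(np.allclose((A - 3*I) @ (A - 5*I), 0))               # True:  (z-3)(z-5) already annihilates A
print(np.allclose((A - 3*I) @ (A - 3*I) @ (A - 5*I), 0))   # True:  p-tilde(A) = 0 as well
```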