All right, let's begin. So last time, for any finite-dimensional vector space $V$ over a field $F$, we defined the vector spaces $V^{(1)}, V^{(2)}$, and so on, and for each of them a subspace, which we call $V^{(m)}_{\mathrm{alt}}$. An element of the space $V^{(m)}$ is what's called an $m$-linear map on $V$: it is a rule which assigns to any $m$-tuple of elements of $V$ an element of our ground field $F$. So the collection $(v_1, \dots, v_m)$ of elements of our vector space goes to what we call $\rho(v_1, \dots, v_m)$, which is an element of $F$, and this has to be linear with respect to each argument. And $\rho$ belongs to the subspace $V^{(m)}_{\mathrm{alt}}$ if it is an alternating $m$-linear functional, a property which means that if you switch two of the arguments (you have some $v_i$ and $v_j$ somewhere; you fix the other vectors and switch those two), then the corresponding value differs from the original value by a sign:
$$\rho(v_1, \dots, v_j, \dots, v_i, \dots, v_m) = -\rho(v_1, \dots, v_i, \dots, v_j, \dots, v_m).$$
Incidentally, and I'll put this as a remark, this property is equivalent to the property that the value is zero whenever two of the arguments are equal to each other: if in the $i$-th and $j$-th positions you have the same vector, with $i \neq j$, then the value is zero. So the property that switching two arguments gives minus the value is equivalent to the property that the value on every $m$-tuple where two vectors coincide is equal to zero, and this is useful: sometimes it's easier to work with the first property, sometimes with the second.

Also, last time we saw that if we choose a basis $e_1, \dots, e_n$ of $V$ (we always reserve the letter $n$ for the dimension of the vector space, so $n = \dim V$), then any $\rho$ in $V^{(m)}$ is determined by its values on the $m$-tuples drawn just from the basis, that is, by the values $\rho(e_{j_1}, \dots, e_{j_m})$, where each $j_i$ is an element of the set $\{1, 2, \dots, n\}$. This implies that the dimension of $V^{(m)}$ is equal to $n^m$. That's for the entire space $V^{(m)}$. But we are actually more interested in the alternating subspace, because we're going to use it to define the determinant. In that case, if $\rho$ actually belongs to the alternating subspace, then it is determined by fewer $m$-tuples: not arbitrary collections $e_{j_1}, \dots, e_{j_m}$, where the only condition is that each $j_i$ belongs to $\{1, 2, \dots, n\}$, but only those collections for which the numbers are increasing, that is, the values $\rho(e_{j_1}, \dots, e_{j_m})$ with $j_1 < j_2 < \dots < j_m$. The reason for that is very simple. First of all, since $\rho$ is an element of $V^{(m)}$, we already know that it is determined by its values on arbitrary $m$-tuples. But if an $m$-tuple is not of this form, with increasing indices, we can always apply a permutation of the indices to get to an increasing sequence: any sequence $j_1, \dots, j_m$ with distinct entries can be obtained by permuting some increasing collection $i_1 < i_2 < \dots < i_m$.

For example, suppose that $n = 3$ and $m = 3$. What are the possible sequences $j_1, j_2, j_3$ which label the values determining $\rho$? There is $123$, then there is $132$, then there is $213$, $231$, $312$, and $321$, right? So there are six of them. (Actually, sorry, let me be careful: in total there are more, namely $3 \times 3 \times 3 = 27$ sequences, because repetitions are allowed; the six I listed are the ones with distinct entries.) Any sequence with distinct $j$'s can be permuted to make it an increasing sequence.
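As a quick sanity check on this counting, here is a minimal Python sketch (my own illustration, not from the lecture) enumerating the tuples for $m = n = 3$:

```python
# Minimal sketch: count the (j1, j2, j3) tuples for m = n = 3.
from itertools import product

triples = list(product(range(1, 4), repeat=3))
distinct = [t for t in triples if len(set(t)) == 3]
increasing = [t for t in distinct if t[0] < t[1] < t[2]]

print(len(triples))    # 27 tuples in total
print(distinct)        # the six orderings of (1, 2, 3)
print(increasing)      # [(1, 2, 3)], the only increasing one
```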
And then we just write $\rho(e_{j_1}, \dots, e_{j_m}) = \pm\,\rho(e_{i_1}, \dots, e_{i_m})$, with the sign depending on how many switches we have to make to transform the sequence $j_1, \dots, j_m$ into the increasing sequence $i_1, \dots, i_m$. This still leaves out the sequences for which two or more of the $j_i$ coincide, but those we don't care about, because we know that the corresponding value is zero by the property above: if $j_l = j_r$ for some $l \neq r$, then $\rho(e_{j_1}, \dots, e_{j_m}) = 0$, so we don't need them. Therefore it's enough to consider only those sequences in which there is no repetition.

If you count all sequences in our example, you have three possibilities for $j_1$, three for $j_2$, and three for $j_3$, so there are $3 \times 3 \times 3 = 27$. Out of them, there are some sequences in which two or more elements repeat. But let's list the ones where they don't repeat; those are the ones I wrote. I start with $1$, and then I have $2$ and $3$ left, which I can put in either order: $123$ and $132$. Then I also have $213$ and $231$, and then $312$ and $321$, right? I think those are all the sequences $j_1, j_2, j_3$ where the entries are not equal to each other. And now, the one in which they are arranged in the correct order, in other words increasing, is $123$, the first one, and the rest of them are obtained by permutations of this sequence. For instance, to get $132$ we have to switch $3$ and $2$, right? And therefore, according to the alternating condition, the value on that triple will be equal to minus the value on $(e_1, e_2, e_3)$: once we know the value on this one, we know the value on that one. What about $213$? For that one we have to switch $2$ and $1$; again we get a minus sign. For $231$, we switch $2$ and $1$ to get $132$, and then we have to switch $3$ and $2$: minus times minus, two switches are necessary, so we get a plus. For $312$ you have to push $3$ through $1$ and $2$, again two switches, so again a plus, and for $321$ it's going to be a minus. This is an illustration of how all the sequences $j_1, \dots, j_m$ with distinct entries can be transformed into the only sequence, in this case $123$, for which the numbers are increasing. Then we see clearly that we only need one value, because all the rest of them are determined: they're going to be plus or minus the value at $(e_1, e_2, e_3)$. And in general it works in the same way. So, therefore, we need much less information to pin down $\rho$ if we know that $\rho$ belongs to the alternating subspace and not the entire space, right?

So this is what I explained last time, but I was a little bit cavalier. Let me call this statement (1): what it implies is that
$$\dim V^{(m)}_{\mathrm{alt}} \leq \#\{\text{increasing sequences } j_1 < j_2 < \dots < j_m \text{ with each } j_i \in \{1, \dots, n\}\}.$$
I was a bit cavalier last time and I wrote that the dimension is equal to this number, because I said that $\rho$ is determined by the values on such tuples, for which $j_1, j_2$, and so on are increasing. But in fact, a priori there could be some more relations between those values, and one has to investigate more precisely why there are no other relations.
But if there were other relations, the dimension would drop; so for sure the dimension is less than or equal to this number, where each $j_i$ is in the set $\{1, \dots, n\}$. In fact, this number of increasing sequences, as I explained, is the binomial coefficient $\binom{n}{m} = \frac{n!}{m!\,(n-m)!}$. One can prove that we actually have equality here, but we're not going to be concerned with that for general $m$; we will now prove it for $m = n$. Of course, in this statement you have to assume that $m \leq n$, because we also know that if you take $V^{(m)}_{\mathrm{alt}}$ with $m > n$, this space is actually equal to zero (I showed that last time), so it is only interesting to consider $m \leq n$. So we will prove it now for $m = n$, because that's exactly the space we want. Now, in this case, the number $\binom{n}{n}$ is actually equal to one, so this result implies that $\dim V^{(n)}_{\mathrm{alt}} \leq 1$. And the only thing that we need to prove now is that there exists a nonzero element $\rho$ in this space: if you have a vector space of dimension less than or equal to one, it is nonzero if and only if there exists a nonzero vector in it. We're going to exhibit a nonzero vector, and this vector will look like the determinant; the formula which defines it will look like the determinant. This is what will lead us, ultimately, to the definition of the determinant of any linear operator.

Okay. Now, there is one more thing here before I get to this. By the way, this board is an example of the situation where both $n$ and $m$ are equal to three: I have listed all the possible triples of distinct numbers from $1$ to $3$, and I have shown how all of them can be related, by permuting them, to the increasing sequence, the only increasing sequence. There is one more aspect here, which is also very interesting, and it has to do with the concept of a permutation. So, in fact, we have a notion of the permutation group: fix a collection of $n$ objects; then a permutation of them is defined. Actually, this is one of the concepts which is discussed in the abstract algebra course, Math 113; if any of you are taking it now, or have taken it, you will recognize this concept. A permutation can be written as a table recording where each element goes, and there are two equivalent ways of thinking about it. For example, in our case, let's take the last sequence, $321$. You can think that $1$ goes to $3$, $2$ goes to $2$, and $3$ goes to $1$; or you can think that the third element goes to the first position, the first goes to the third, and the second stays put. These are different ways of thinking about the same thing. But in any case, this is a concept which expresses the idea that you have $n$ objects which are ordered in a particular way, but you can then move them around to create another sequence of those objects, which need not be the same as the original one. You can see that there are exactly $n!$ possibilities. And then (bless you) every time you have such a situation, where you're applying some movement to a collection of objects, or you can think of this collection as a single object with $n$ pieces, you can compose such operations, such movements: you can make one permutation followed by another permutation, and the result will be a new permutation.
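To make composition concrete, here is a tiny sketch (my own illustration; permutations are encoded 0-based, as tuples where position $i$ holds the image of $i$, matching the first viewpoint above):

```python
# Sketch: composing two permutations of {0, ..., n-1}.
def compose(p, q):
    """Return the permutation 'first apply q, then apply p'."""
    return tuple(p[q[i]] for i in range(len(q)))

p = (2, 1, 0)  # the sequence 3, 2, 1 in the lecture's 1-based labels
q = (1, 0, 2)  # switch the first two elements
print(compose(p, q))  # (1, 2, 0): again a permutation
```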
And that is the cornerstone of the concept of a group. (Bless you. I guess the allergy season has started; yeah, I'm feeling it too.) So in fact, that's why I wrote that it's the permutation group; it is sometimes also called the symmetric group, and denoted $S_n$. So there is a way in which you can multiply those permutations, or compose them with each other, and you get this very nice structure, which is one of the central objects in all of mathematics, one could say. Here we encounter it in this discussion of alternating multilinear maps on an $n$-dimensional vector space. But since our time is limited, I'm not going to spend too much time on this. I will say that for us, a permutation will be encoded by a sequence: you just think of the sequences where each number has its natural position (for example, $3$ has natural position three), but in the permutation it may appear, say, in the first position, and likewise for the others. So permutations will just be encoded by the sequences $j_1, j_2, \dots, j_n$ for us. And we will use not $S_n$, which is the more traditional notation, but the notation from the book, $\operatorname{perm} n$, for the set of permutations of $\{1, \dots, n\}$.

There is an interesting combinatorial question as to how to calculate the sign of a permutation; these are the signs which are exhibited in the special case on the board. In other words, how many switches do you have to make to transform your given sequence into the standard one? That's what I mean by the sign, $\operatorname{sign}(j_1, j_2, \dots, j_n)$. And in the book this is discussed quite well. It turns out that there is a way to define it without actually providing a particular method of switching. So here, because the sequences are small, it's actually easy to explain in each case how to obtain the standard one from each of them by applying some pairwise switches; for a general permutation, that seems like it could be a difficult problem. But there is a definition of the sign which does not appeal to this procedure, and in the book it is proved that it agrees with the sign that you get by making switches. (First of all, it's easy to prove that every permutation can be obtained as a composition of pairwise switches; in fact, even fewer are needed: every permutation can be obtained from switches of neighboring elements.) And the formula is this: the sign is $(-1)^N$, where $N$ is the number of pairs of elements $k$ and $l$ between $1$ and $n$ such that $k > l$ but $k$ appears before $l$ in the sequence $j_1, j_2, \dots, j_n$. So you see, you have to look at all the pairs of numbers where the earlier element is larger than the later element. For example, in $321$: $3$ appears before $2$, but $3$ is bigger than $2$; that's one. $3$ is bigger than $1$ but appears before $1$; and $2$ is bigger than $1$ but appears before it. There are three instances, so this number $N$ is three, and the sign is $(-1)^3 = -1$, which is also what we obtained by applying switches. Here, on the other hand, in $312$, only two pairs are out of order: $3$ is before $1$ and $3$ is before $2$ (so maybe I'll mark them like this). There are two instances, so this number $N$ is two.
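Here is that definition as code, a hypothetical helper (not from the book) that computes the sign by counting out-of-order pairs:

```python
# Sketch: sign of a permutation via the inversion count N.
def sign(perm):
    """(-1)**N, where N counts pairs appearing out of order in perm."""
    n_inversions = sum(
        1
        for a in range(len(perm))
        for b in range(a + 1, len(perm))
        if perm[a] > perm[b]
    )
    return (-1) ** n_inversions

print(sign((3, 2, 1)))  # N = 3, so -1
print(sign((3, 1, 2)))  # N = 2, so +1
print(sign((1, 3, 2)))  # N = 1, so -1
```

Only the relative order of the entries matters here, so the same helper works whether a sequence is written with 1-based or 0-based labels.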
So the sign of $312$ is $(-1)^2 = +1$, a plus; and the remaining sequences work the same way, for example $231$ also has $N = 2$, since its out-of-order pairs are $(2,1)$ and $(3,1)$. In other words, there is a nice combinatorial formula for the sign, which you can compute just by looking at which of the pairs are out of order, as illustrated by this example. That's how you know.

Now our immediate task is to show that there exists a nonzero element. (I made a mistake on the board here earlier; I should have written it like this.) So we want to show that this space is actually one-dimensional. We already know that its dimension is less than or equal to one, and therefore it remains to show that there exists a nonzero element. So we're going to explicitly construct a particular alternating $n$-linear form on $V$, where $V$ is $n$-dimensional, and show that it is nonzero. So that will be our first theorem; I think that was Theorem 1: $\dim V^{(n)}_{\mathrm{alt}} = 1$ if $n = \dim V$.

Okay. So we need to construct some nonzero element in this space, and we will construct it as follows. We will choose a basis $e_1, \dots, e_n$ of $V$. And then it's a very nice proof: it uses the dual space and a dual basis, which gives very nice control, a tool for constructing linear maps. So recall that there exists a dual basis, which we'll call $\varphi_1, \dots, \varphi_n$, in the dual space $V'$. So what is the property of the dual basis? I recall that $\varphi_i(e_j) = \delta_{ij}$, the Kronecker delta symbol, which means that $\varphi_i(e_i) = 1$ and $\varphi_i(e_j) = 0$ otherwise, right? So that's the definition of the dual basis. And now, this implies that for any $v \in V$, which you can write as a sum $v = \sum_j a_j e_j$, the functional $\varphi_i$ just picks out the coefficient: $\varphi_i(v) = a_i$. Because when you substitute this summation into $\varphi_i(v)$, you open the brackets and you see that you have to calculate $\varphi_i(e_j)$ for each $j$; the only term which is nonzero is the one which corresponds to $j = i$, and in this case the coefficient pops out, and that's the coefficient $a_i$. In other words, this linear functional is nothing but the $i$-th coordinate of the vector, if you will: when we write $v$ as a column vector with respect to our basis, this column vector consists of those coefficients, and $\varphi_i$ looks at this column vector and picks out the $i$-th entry. That's what it is.

Now let us define $\rho \in V^{(n)}_{\mathrm{alt}}$ by the following formula. As we already discussed, in principle what we need is just to know the value on a particular tuple, but we will actually give a general formula for an arbitrary $n$-tuple of vectors. Remember, here $m = n$; that's the case we're considering:
$$\rho(v_1, v_2, \dots, v_n) = \sum_{(j_1, \dots, j_n) \in \operatorname{perm} n} \operatorname{sign}(j_1, \dots, j_n)\; \varphi_{j_1}(v_1)\, \varphi_{j_2}(v_2) \cdots \varphi_{j_n}(v_n).$$
The sum is over all permutations (that's the notion I discussed on that board), and for each one you put the sign of this permutation, defined, for example, as $(-1)^N$, where $N$ is the number of pairs which are out of order in the sequence $j_1, j_2$, and so on. Then, after that, look at this: you take $\varphi_{j_1}$ of $v_1$, $\varphi_{j_2}$ of $v_2$, and so on, up to $\varphi_{j_n}$ of $v_n$. That's the formula. So you have your linear functionals $\varphi_i$, and for a given permutation $j_1, \dots, j_n$, you take $\varphi_{j_1}(v_1)$, $\varphi_{j_2}(v_2)$, and so on, and you put the sign in front. I claim that with this definition, we actually obtain an element of the alternating space. So first of all, we claim that $\rho$ is an element of $V^{(n)}_{\mathrm{alt}}$. Indeed, what do we need to check?
We need to check that it is linear with respect to each argument $v_i$. But that's obvious, because everything is expressed in terms of linear functionals evaluated on the $v_i$. In each term, if you keep track of $v_i$, which is sitting somewhere here in the middle, there is only one factor on the right-hand side which has to do with $v_i$, and that's $\varphi_{j_i}(v_i)$. But this is a linear functional: if you put $v_i + w$ in its place, the value will be equal to the sum of the values, and likewise for a scalar multiple. If this is so for that factor, and that's the only way in which $v_i$ appears in the formula, then this is true also for $\rho$: $\rho$ will be linear with respect to $v_i$ for every $i$. That's the first property.

The second property is that it is alternating. As I explained earlier, this is equivalent to showing that if you have two vectors in some positions $i$ and $j$ which are the same, then the value is equal to zero. That's what we need to show, and of course this will come out of the signs: there will be cancellations. You see, let's check, for example, that if the first two arguments are equal, $v_1 = v_2$, then you get zero. Now, for every permutation $j_1, j_2, \dots, j_n$, you can produce another permutation in which you just switch $j_1$ and $j_2$: you get $j_2, j_1, \dots, j_n$. This way, you can break all permutations into groups of two, where in each group you have two permutations which differ only in the first two elements. But now let's look at the signs. It's easy to see that the sign of one is equal to minus the sign of the other. Because the point is that the question is whether $j_1$ and $j_2$ are in order or out of order: if they are in order in one, they are out of order in the other, so there is a difference of exactly one in the number $N$. As for the rest of the pairs, nothing is going to change if we just permute the first two, because all the other elements are staying put. So it's clear that the number $N$ for one permutation is equal to the number $N$ for the other plus or minus one, and since the sign is defined as $(-1)^N$, the signs differ: the sign of one is minus the sign of the other. Therefore, since we broke all the permutations into groups consisting of two elements, the summation can also be broken into a sum over those pairs; in each pair you have two terms, and they cancel each other. Because, when you switch, the two vectors are the same: $\varphi_{j_1}(v_1)\varphi_{j_2}(v_2)$ is the same as $\varphi_{j_2}(v_1)\varphi_{j_1}(v_2)$, precisely because we are assuming that $v_1 = v_2$. So if $v_1 = v_2$, the terms corresponding to these two permutations cancel each other, and we get zero.

Now repeat the same argument if you have the same elements appearing in some positions $k$ and $l$. It works exactly the same; well, it's a little bit more involved, but really it is the same, because then you have some elements before and some after, and you'll notice that nothing changes as far as the ordering of the elements before and the elements after is concerned. (To be precise, switching $j_k$ and $j_l$ also affects the pairs involving the elements in between, but those change two at a time, so the total count $N$ changes by an odd number and the sign still flips.) Okay, I hope it's clear. This shows that we have indeed an element of this space, and what remains is to show that it's nonzero. This element is in fact nonzero, and then we're done: we will know that this space is one-dimensional.
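Here is a numerical sketch of this construction, under the assumption that vectors are given by their coordinate lists in the basis $e_1, \dots, e_n$ (so that $\varphi_j$ just reads off the $j$-th coordinate); it reuses the `sign` helper from the earlier sketch, and its checks preview what we prove next:

```python
# Sketch: rho(v1, ..., vn) = sum over permutations of
#   sign(j1, ..., jn) * phi_{j1}(v1) * ... * phi_{jn}(vn),
# with phi_j(v) read off as the j-th coordinate of v.
from itertools import permutations

def rho(vectors):
    n = len(vectors)
    total = 0
    for perm in permutations(range(n)):
        term = sign(perm)            # the helper defined above
        for i in range(n):
            term *= vectors[i][perm[i]]  # phi_{j_i}(v_i), 0-based
        total += term
    return total

e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # the basis vectors themselves
print(rho(e))                            # 1 (shown below in the proof)
print(rho([e[1], e[0], e[2]]))           # -1: a switch flips the sign
v, w = [2, 5, 7], [1, 1, 4]
print(rho([v, v, w]))                    # 0: a repeated argument
lhs = rho([[a + b for a, b in zip(v, w)], e[1], e[2]])
rhs = rho([v, e[1], e[2]]) + rho([w, e[1], e[2]])
print(lhs == rhs)                        # True: linearity in the first slot
```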
But for that, it's enough to show that the value of $\rho$ on one particular $n$-tuple is nonzero, and of course, as the particular tuple we'll take our basis. Then, you see, in this summation there will be the term which corresponds to the identity permutation, $(j_1, \dots, j_n) = (1, 2, \dots, n)$. In this case we will have $\varphi_1(e_1)\,\varphi_2(e_2)\cdots\varphi_n(e_n)$, because that's what we are getting: $j_1 = 1$, so you get $\varphi_1$ of $v_1$; $j_2 = 2$, so $\varphi_2$ of $v_2$; and so on. But $v_1, v_2$, and so on are $e_1, e_2, \dots$ In fact, all of these factors are equal to one: you have a product of ones. How do I know? Because of the formula above: it implies that $\varphi_i(e_i) = 1$, and that's exactly what I have here, a product of $\varphi_i(e_i)$ for all $i$ from one to $n$. So I get one times one times $\cdots$ times one, and this is equal to one. But then any other term, for any other permutation, has $j_i \neq i$ for some $i$, so it will have a factor $\varphi_{j_i}(e_i)$; and if $j_i \neq i$, then this is equal to zero, and it's enough to have just one factor zero for the whole product to be zero: the whole term vanishes. So out of all the $n!$ terms, corresponding to the different permutations, there will be only one nonzero term, and this term is equal to one. We find that $\rho(e_1, \dots, e_n) = 1$, so it's nonzero, and we're done: we have indeed proved Theorem 1, that the dimension of $V^{(n)}_{\mathrm{alt}}$, where $n$ is the dimension of $V$, is equal to one.

Now we can run the program which I suggested last time. Remember, I explained that this is a functorial construction, and it will enable us to convert any operator $T$ acting on $V$ into an operator acting on this one-dimensional space. So now we are ready to define the determinant. Okay? So, following the idea from the last lecture: we have this construction, which gives us, for every vector space $V$, another vector space, namely $V^{(n)}_{\mathrm{alt}}$, where again $n$ is the dimension of $V$, and this is one-dimensional. If you have any operator $T$ from $V$ to $V$, then, as I explained, you will have an operator here, which I denoted last time by $T^{(n)}$: every operator $T$ will cast a shadow, so to speak, an operator on a one-dimensional space. But a linear operator on a one-dimensional space is always just multiplication by a number. So this will assign a number to $T$, and this assignment will actually respect multiplication, so we will recover the properties of the determinant. But let me do this slowly.

So, definition. Suppose that you have a linear operator $T$ acting on $V$, and choose an element $\rho$ of $V^{(n)}_{\mathrm{alt}}$. Define a new element, which we'll call $\rho_T \in V^{(n)}_{\mathrm{alt}}$, by the following formula: the value of the new one on $(v_1, v_2, \dots, v_n)$ is, by definition, the value of the original one, not on $v_1, \dots, v_n$, but rather on $Tv_1, Tv_2, \dots, Tv_n$:
$$\rho_T(v_1, v_2, \dots, v_n) = \rho(Tv_1, Tv_2, \dots, Tv_n).$$
Okay, so that's the idea I explained last time. Now we are applying this idea, because we have established that $V^{(n)}_{\mathrm{alt}}$ is indeed a one-dimensional space assigned to $V$. Now, check that $\rho_T$ is in $V^{(n)}_{\mathrm{alt}}$. But it's very easy. You have to check linearity, and it follows from the fact that $T$ is a linear operator: if somewhere you have $T(v + w)$, we use linearity to write it as $Tv + Tw$, and then you can pull it out through $\rho$.
That shows you're not going to spoil linearity if you apply a linear operator first: all the equations will be preserved. So linearity is obvious. But the alternating property is also obvious, because if you exchange two of the $v_i$'s, that results in the exchange of the corresponding $Tv_i$'s, and here we know that $\rho$ is alternating, so the value changes sign. Therefore $\rho_T$ is also alternating. Okay, so this construction assigns to any element of this one-dimensional vector space another element of the same one-dimensional vector space; this assignment, $\rho \mapsto \rho_T$, is what I called $T^{(n)}$ last time. So now: $\rho$ and $\rho_T$ are two vectors in the one-dimensional space $V^{(n)}_{\mathrm{alt}}$, and that means they are proportional, right? This implies that $\rho_T$ is equal to some number (I'll call it, for now, just some number $d_T$) times $\rho$, where this number is an element of our ground field $F$. They have to be proportional. That's the determinant: this number $d_T$ is called the determinant of $T$ (finally, all this hard work brings results) and is denoted $\det T$.

Okay. Now, on the other hand, we have already seen determinants before, so we want to check that this is indeed the formula that we had before. Since I have very little time, I'll just formulate the statement, and after that I will explain the properties of the determinant. So first, an explicit formula. (Actually, I already did this last time in the case $n = 2$.) If we choose a basis $e_1, \dots, e_n$ of $V$, let's call it $\beta$, then we have the matrix of $T$ relative to $\beta$, which we'll call $A$; this is an $n \times n$ matrix. Then we can express the determinant of $T$ by the usual formula, in terms of the entries of this matrix. Namely:
$$\det T = \sum_{(j_1, \dots, j_n) \in \operatorname{perm} n} \operatorname{sign}(j_1, \dots, j_n)\; A_{j_1,1}\, A_{j_2,2} \cdots A_{j_n,n},$$
where the matrix is $A = (A_{ij})$, with $i$ and $j$ going from one to $n$. So it's a sum of terms, each term being a product of $n$ entries of the matrix. The terms are labeled by permutations, that is to say, sequences $j_1, \dots, j_n$; for each such sequence we have a particular collection of entries of the matrix, we take their product, and we put a sign in front. In the case of a two-by-two matrix, we get the formula that we discussed last time.

In general, this formula has $n!$ terms, and there is a shortcut, which I'm sure you're familiar with from Math 54 or an equivalent first course in linear algebra, called the expansion along the first row, which gives you a practical way to calculate this determinant by induction, reducing it to determinants of matrices of smaller size. It turns out (and this is just a combinatorial statement, which follows very quickly from this formula) that you can simply isolate the summation over $j_1$, and then you have the summation over the remaining $j$'s, which are not equal to $j_1$. Actually, what I said is not quite correct: with the formula written this way, this simple trick of splitting the summation gives the expansion along the first column, not the first row. So let's do the first column, since that's what matches.
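Before working out the expansion, here is this $n!$-term formula as a short sketch, again reusing the `sign` helper from before (the matrix is a list of rows, with 0-based indices):

```python
# Sketch: det via the n!-term permutation formula,
#   sum over perms of sign * A[j1][0] * A[j2][1] * ... * A[jn][n-1].
from itertools import permutations
from math import prod

def det_leibniz(A):
    n = len(A)
    return sum(
        sign(perm) * prod(A[perm[k]][k] for k in range(n))
        for perm in permutations(range(n))
    )

print(det_leibniz([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2, the 2x2 formula
```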
(You could also switch the roles of the two indices, writing $A_{1,j_1} \cdots A_{n,j_n}$, and you would get an equivalent formula, which gives the expansion along the first row; but since I wrote it this way, let's do it with respect to the first column.) Here is your matrix; let's consider the case $n = 4$. In the first column you have $A_{1,1}$, $A_{2,1}$, $A_{3,1}$, $A_{4,1}$, and so on. The idea is that you have to take each of them with alternating signs, the first sign being plus: it's going to be the sum over $i$ from one to $n$ of $(-1)^{i+1}$ times $A_{i,1}$ times a smaller determinant. (These entries all have second index equal to one because they sit in the first column.) Then, for each of those entries, here's what you do: you remove it, and then you remove the row and the column which contain it. It's always going to be the first column, because all of them are in the first column, and you remove the particular row, the $i$-th one. What's left is a matrix of smaller size, which I will call $\widehat{A}_{i,1}$; it's an $(n-1) \times (n-1)$ matrix. And you just take the determinant of that one. So:
$$\det A = \sum_{i=1}^{n} (-1)^{i+1} A_{i,1} \det \widehat{A}_{i,1}.$$
So, for instance, if I take the entry $A_{2,1}$, I have to remove the second row and the first column. Now, what's left doesn't quite look like a matrix, but you just compactify it, collapse the top row onto the remaining rows, and you get a three-by-three matrix, and you take its determinant. And so on; in this case you'll have four terms. This is useful because it allows you to calculate the determinant of the original matrix, which is $n \times n$, in terms of determinants of smaller matrices, which are $(n-1) \times (n-1)$, and then you can calculate those by the same procedure, reducing to $(n-2) \times (n-2)$, and so on. For small sizes, for instance two-by-two, we know the formula I wrote last time; it's easy, it has two terms. And using this, it's very easy to calculate the determinant of any three-by-three matrix.

Okay, so this is familiar. But now, how do you prove this? The proof is straightforward, just from the definition; more or less everything falls out from this formula. Since I have little time and some important things to tell you in addition to this, I'll just leave it for you to read how to derive this formula. It is easy; see the book.
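Here is the procedure as a recursive sketch (my own illustration, checked against the permutation formula above):

```python
# Sketch: expansion along the first column,
#   det A = sum_i (-1)**(i+1) * A_{i,1} * det(A-hat_{i,1})   (1-based i).
def det_expand(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):  # 0-based row index, so the sign here is (-1)**i
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det_expand(minor)
    return total

M = [[2, 0, 1],
     [1, 3, 2],
     [0, 1, 4]]
print(det_expand(M), det_leibniz(M))  # both give 21
```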
What I feel is more essential is to emphasize how we get the various properties of the determinant, which is now defined in this abstract way, as the coefficient of proportionality between your original form $\rho$ and the new form $\rho_T$: we want to see that the numbers we get this way indeed satisfy the properties that we expect. So, Lemma (maybe Lemma 3; I always lose track of the numbers): for any pair of operators $S$ and $T$ from $V$ to $V$, we have $\det(ST) = \det S \cdot \det T$. Okay, so we have to work with the definition. Let me incorporate the new notation into the formula: $\rho_T = \det T \cdot \rho$. (Earlier I used $d_T$, but now I replace it by the traditional notation.) That's the formula we work with; $\rho_T$, I remind you, is given by applying $\rho$ to the $Tv_i$'s. By the way, from this formula we already know what the determinant of the identity is: if $T$ is the identity, then you're doing nothing, and the right-hand side is exactly $\rho(v_1, v_2, \dots, v_n)$. So this formula shows that if $T = I$, then the determinant is one, simply because $\rho_I$ is just $\rho$.

But now, to prove the multiplicative property, that the determinant of the composition of two operators is equal to the product of the determinants: that's actually very easy, because you are simply applying the procedure in sequence, first applying $T$ and then applying $S$. What do we have? We have, by definition,
$$\rho_{ST}(v_1, \dots, v_n) = \rho(STv_1, \dots, STv_n).$$
But this is the same as $\rho_S(Tv_1, \dots, Tv_n)$: you can strip away $S$ everywhere and put it as a subscript, according to the definition over there. And $\rho_S$, by definition, is $\det S$ times $\rho$ (I'm applying the formula in the case when the operator is $S$), so we are back to $\rho$, but evaluated on $(Tv_1, \dots, Tv_n)$. And that is $\rho_T(v_1, \dots, v_n)$, which is $\det T$ times $\rho(v_1, \dots, v_n)$. So finally we get
$$\rho_{ST}(v_1, \dots, v_n) = \det S \cdot \det T \cdot \rho(v_1, \dots, v_n).$$
So here we were stripping away the operators one by one. But we could also recall the definition without separating them, just looking at it in the case when the subscript is $ST$: by definition, $\rho_{ST} = \det(ST) \cdot \rho$. And so the result is that $\det(ST) \cdot \rho = \det S \cdot \det T \cdot \rho$. We know that there exists a nonzero element: we can take, for example, the $\rho$ that we constructed using a basis, and evaluate on a tuple for which the value is a nonzero number. Therefore you can divide by it, and then you get the equation that we wanted. You see, it's actually tautological. What's beautiful about this construction, what I called last time a functorial construction, is that whatever relations the operators satisfy on the original space, when you compress them into operators on the one-dimensional space, which are given by multiplication by the determinant, the same relations will be satisfied. So in particular, this relation is satisfied.

Okay. So what does it lead to in particular? It leads us to the following result: the determinant is nonzero if and only if the operator is invertible. So that's the next lemma: $T$ is invertible if and only if $\det T \neq 0$. You see how powerful this is, because $T$ is an operator from $V$ to $V$, so it's a complicated object: we could have a very large dimension, the operators don't commute with each other, and so on, and this notion of being invertible, of having an inverse, is very subtle there. But $\det T$ is an element of our field, over which the vector space is defined, and an invertible element of the field is just a nonzero element, right? Because one of the axioms of a field is that every element which is nonzero has an inverse: the invertible elements of the field are precisely the nonzero elements. So we get a very simple criterion for invertibility, by simply calculating the determinant: that tells you whether the original operator is invertible or not.

One direction is obvious now from the previous lemma. If $T$ is invertible, then there exists the inverse operator, and $T T^{-1}$ is the identity operator, whose determinant is one. So we can write $\det(T T^{-1}) = \det I = 1$; but by the lemma, $\det(T T^{-1}) = \det T \cdot \det T^{-1}$, so we find that $1 = \det T \cdot \det T^{-1}$. And this is in $F$: if $1 = a \cdot b$, then $a \neq 0$ and $b \neq 0$. Both factors are nonzero, and this proves one direction.
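By the way, here is a quick numerical check of the multiplicative property on a small example of my own, using `det_leibniz` from the earlier sketch:

```python
# Sketch: check det(ST) = det(S) * det(T) on a small example.
def matmul(S, T):
    n = len(S)
    return [[sum(S[i][k] * T[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = [[2, 1], [0, 3]]
T = [[1, 4], [2, 5]]
print(det_leibniz(matmul(S, T)))        # -18
print(det_leibniz(S) * det_leibniz(T))  # 6 * (-3) = -18
```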
Moreover, we find that, in fact, $\det(T^{-1})$ is just the inverse of $\det T$. So that was the easier direction; the opposite direction is a little bit more work. Let me explain that. So now we want to prove the opposite: suppose the determinant of $T$ is nonzero, and let's show that $T$ is invertible. Remember, for an operator from $V$ to $V$, we know that invertible is equivalent to the null space consisting of just the zero vector: because then $T$ is injective, and also, by the fundamental theorem of linear maps, it is surjective, and injective plus surjective is equivalent to invertible. We discussed this in great detail early in the course. So what we need to show is that for every $v \neq 0$, $Tv$ is also nonzero. Let's do that. If $v$ is nonzero, we can complete $v$ to a basis $e_1, \dots, e_n$ of $V$, where the first one, $e_1$, is $v$ itself, the one we started with. Okay. And then take the $\rho$ which we constructed before, the one built from this basis. Remember, for every basis we constructed a particular alternating form which has the property that $\rho(e_1, e_2, \dots, e_n)$ is nonzero; actually, it was one. Apply this construction to the basis whose first element is the vector we started with. But then
$$\rho(Te_1, \dots, Te_n) = \rho_T(e_1, \dots, e_n) = \det T \cdot \rho(e_1, e_2, \dots, e_n)$$
by the definition of the determinant, right? And we have assumed that $\det T$ is nonzero, and $\rho(e_1, \dots, e_n)$ is actually equal to one, so this is equal to $\det T$, which is nonzero. You see: we evaluate $\rho$ on a particular tuple, whose first entry is $Te_1$, and the value is nonzero. But if that element were the zero vector, the value would be equal to zero.

Okay, so let me actually explain this, because somehow I did not mention it earlier, but it is a very important property of multilinear maps. Let me pause for a moment and state it as a remark; it doesn't even rise to the level of a lemma, because it's so obvious, but I want to state it so that we see it clearly. Take any of those spaces $V^{(m)}$, not necessarily the special one which we're considering now. If you evaluate an $m$-linear form on any tuple which contains the zero vector, then its value is zero: $\rho(v_1, \dots, 0, \dots, v_m) = 0$. Why? Because you can always write the zero vector as $0 \cdot w$ for some vector $w$, whatever $w$ you like: we know that for every vector, if you multiply it by the scalar zero, you get the zero vector. Then, by linearity in that argument, you can pull the scalar zero out, and you get zero times $\rho$ evaluated on something; it doesn't matter what this something is, the result is just zero. Therefore, if one of the arguments is zero, there is no choice: the value of any multilinear form on this tuple is equal to zero. And here we have found a particular tuple, whose first member is $Te_1$, which is $Tv$, on which the value is not zero.
Because, remember, $e_1$ was the initial vector from which we completed the basis; it is our original vector. So we found a particular tuple, with $Te_1$ as its first entry, for which the value is not zero; therefore all of the entries are not zero, and in particular this one: $Tv = Te_1 \neq 0$. (Of course, we used that the determinant is nonzero.) And so that shows this direction. Okay?

Now, this has a corollary, which is that the determinant of the matrix of an operator does not depend on the choice of basis. If you have an operator $T$ and you assign to it the matrix $\mathcal{M}(T, \beta)$, and also the matrix $\mathcal{M}(T, \gamma)$, where $\beta$ and $\gamma$ are two bases of $V$, say the first is $A$ and the second is $B$, then we know that $A = Q B Q^{-1}$, where $Q$ is the change-of-basis matrix. But then the determinant of $A$ is the determinant of $Q$ times the determinant of $B$ times the determinant of $Q^{-1}$, and $\det Q^{-1}$ is the inverse of $\det Q$, according to what we have done. So these two cancel each other, and you see that $\det A = \det B$. This way, you don't worry about which matrix representation of an operator you take. Conceptually, it is easier to define the determinant of an operator, because you operate (no pun intended) with this functorial construction; but for practical purposes, we want to represent an operator by a matrix and then use the explicit formula. Then you might start wondering: what if I choose a different basis, maybe I would get a different result? But this shows you that there's nothing to worry about, because these two things cancel out: $Q$ and $Q^{-1}$ have reciprocal determinants.

Okay. So finally, we can define the characteristic polynomial, which was the purpose of this whole exercise. You see? Characteristic polynomial: $T$ is an operator from $V$ to $V$; then we define $q_T(z)$ as the following polynomial:
$$q_T(z) = \det(zI - T).$$
From the definition, you find that it actually starts with $z^n$, so it's a monic polynomial of degree $n$, which is the dimension of our vector space. What is so special about this polynomial? It's the eigenvalues: $\lambda$ is an eigenvalue of $T$ if and only if $\lambda$ is a root of this polynomial, $q_T(\lambda) = 0$. This is how you were taught originally, in the first linear algebra course. Now we have recovered this, but we recovered it with a bonus: we actually know where this formula comes from. How to prove it? Well, that's easy, because $\lambda$ is an eigenvalue of $T$ if and only if the operator $\lambda I - T$ is not invertible, because in that case the null space of $\lambda I - T$ is not zero: an eigenvector is precisely a nonzero vector in that null space. (By the way, there are two schools of notation: sometimes people write $\lambda I - T$ and sometimes $T - \lambda I$. Of course, a vector is in the null space of one if and only if it's in the null space of the other; it doesn't matter, up to a sign. You can define the characteristic polynomial as the determinant of $zI - T$ or of $T - zI$, but if you do the latter, the leading coefficient will be $(-1)^n$, which is slightly inconvenient. In the book it is defined this way, and I'm doing it this way also.) So: the null space is not zero if and only if $\lambda$ is an eigenvalue, but that means $\lambda I - T$ is not invertible, and we have just proved, in Lemma 4, that this means $\det(\lambda I - T) = 0$, which is equivalent to saying that $q_T(\lambda) = 0$. That's how you prove it.
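As a small worked example (my own, using `det_leibniz` from before): for a $2 \times 2$ matrix, $q_T(z) = \det(zI - T) = z^2 - (\operatorname{trace} T)\,z + \det T$, and its roots are exactly the eigenvalues.

```python
# Sketch: the characteristic polynomial of a 2x2 matrix and its roots.
T = [[2, 1], [1, 2]]                               # eigenvalues 1 and 3
trace = T[0][0] + T[1][1]
q = lambda z: z * z - trace * z + det_leibniz(T)   # det(zI - T) for 2x2

print(q(1), q(3))  # 0 0: both are roots, hence eigenvalues
print(q(2))        # -1, nonzero: 2 is not an eigenvalue
```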
And then, in fact, we can see more in the complex case: if we work over the field of complex numbers, $T$ has a Jordan form, and the Jordan form is an upper triangular matrix, right? You have maybe several blocks with $\lambda_1$, several blocks with $\lambda_2$, and so on. But actually, for any upper triangular matrix, the determinant is just the product of the diagonal entries. From this, we immediately conclude that the characteristic polynomial of an operator is just the product
$$q_T(z) = \prod_i (z - \lambda_i)^{d_i},$$
where $d_i$ is the sum of the sizes of all Jordan blocks with eigenvalue $\lambda_i$. Right? Because that's how many times $\lambda_i$ will appear: for each Jordan block, there are as many $\lambda_i$'s on the diagonal as the size of the block, and so you take the sum of the sizes over all the blocks in which $\lambda_i$ appears on the diagonal of this matrix.

And I want to contrast that with the minimal polynomial. First of all, we have the theorem due to Cayley and Hamilton, which says that if we substitute $T$ into the characteristic polynomial, you get zero: $q_T(T) = 0$. This is easy to see if you play with the Jordan blocks; you can read it in the book. This implies that the minimal polynomial $p_T(z)$ divides $q_T(z)$. So $p_T(z)$ has the same factors $(z - \lambda_i)$, but the powers are different in general; let's call them $s_i$. As we discussed, $s_i$ is the largest size of a Jordan block with eigenvalue $\lambda_i$ in the Jordan form of the operator.
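As a last sketch (again with `det_leibniz` from earlier): in the permutation formula for an upper triangular matrix, every permutation other than the identity picks up a factor from below the diagonal, which is zero, so only the diagonal product survives.

```python
# Sketch: det of an upper triangular matrix = product of diagonal entries.
U = [[2, 7, 1],
     [0, 2, 5],
     [0, 0, 3]]
print(det_leibniz(U))  # 2 * 2 * 3 = 12
# Its characteristic polynomial is (z - 2)**2 * (z - 3): the diagonal
# entry 2 appears twice and 3 appears once, as in the product formula above.
```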