All right, so we are finally getting close to defining determinants. Remember, our goal is to introduce determinants in a conceptual way, so that it's not a formula given to us by decree, but we actually understand where it comes from. And we are very close: if not today, then on Thursday we will do it. The key to doing this is to define spaces of multilinear forms. We already started doing this last week when we defined bilinear forms, and even earlier we defined linear functionals, which you could call one-linear forms. Remember, we have a finite-dimensional vector space V over a field F. There's a lot of excitement here; I hope you are as excited about determinants as I am. Earlier in this course, we defined the dual space. The dual space consists of linear functionals, that is to say, linear maps from V to F. To fit our general terminology, we will call them one-forms: "one" because there is only one argument. Last time we defined bilinear forms, or two-forms. These are functions not from V to F, but from V × V, the Cartesian product of V with itself, to F. Now there are two arguments. In the first case there is one argument: we write φ(v), where v is an element of the vector space, and that's the only argument we have. But now we can draw one vector and another vector, and they become the two arguments for this bilinear form. In ρ(v, w), both v and w are elements of V, and there are two arguments: v is the first argument and w is the second. Of course, it's nothing new to us to consider functions of several variables. In multivariable calculus, we consider functions of two and three variables, and sometimes even more, where each argument takes values in the field of real numbers. We don't call it the field of real numbers in calculus, we call it the set of real numbers, but it's the same object. The only difference is that here each argument is not a real number but a vector in V.
The second one as well is a vector in V; we replace ℝ by V. Otherwise, it's the same idea: a function of two variables. However, in linear algebra we are not interested in general functions. We are only interested in functions which satisfy the property of linearity, which means that the value on the sum of two vectors is equal to the sum of the values on each of them, and the value on a scalar multiple is equal to the multiple of the value. However, here we have two arguments. What to do? Should we consider linearity in the first argument or the second? Of course, the best is to be in the best of all worlds and require linearity in both arguments. That's the definition of a bilinear form. If we did not impose a linearity condition in each of the two arguments, it would be just a general function from this set to this set; but V is a vector space, so we exploit the structure of the vector space, addition and scalar multiplication, to restrict the class of functions we're interested in. This way we get something much smaller, much more manageable, and much more useful for our applications. That's what we defined last time. The next idea is that, so far, we only talked about each of these objects separately, individually. But we can combine them into sets, and those sets have natural structures of vector spaces over F as well. This we already know: if you take all possible linear functionals, they form what we call the dual space. The new idea here is that if you have not one but two functionals, you can take their sum, and if you have a scalar, you can multiply any functional by the scalar. This way you get two operations on the set of all linear functionals, which make it into a vector space. That's what we call the dual space. Likewise, last time we looked at the set of all bilinear forms and realized that if you have two such bilinear forms, ρ₁ and ρ₂, you can take ρ₁ + ρ₂; there's a natural definition of ρ₁ + ρ₂.
Its value on (v, w) is just the sum of the values: (ρ₁ + ρ₂)(v, w) = ρ₁(v, w) + ρ₂(v, w). Likewise for scalar multiplication. This is also a vector space, which we could denote like this: V′ is V^(1), and the space of bilinear forms is V^(2). That's a vector space — in fact, a space of dimension n², where n denotes the dimension of V. The dual space, by the way, has dimension n, the same as the dimension of V itself. Obviously, if I ask you how we can generalize this, you will say, and you will be correct: let's consider three-forms, or four-forms, and define them in a similar way. That is our next step. Then I'll explain how this will lead us to determinants, and we will follow that path. Okay, but first let's give the definition. Take an m from {1, 2, 3, …} and define an m-linear form (m-linear functional, or m-form for short) to be a function ρ from V × V × ⋯ × V (m times) to the ground field F over which V is defined. That is, for any ordered collection of vectors v₁, v₂, …, v_m — ordered, so we know which one is the first, which one is the second, and which one is the m-th — we have what we call ρ(v₁, …, v_m), which is an element of F. In other words, every function is a rule which assigns to an element of the first set an element of the second set. In this case, the first set is V × ⋯ × V, m times, and the second set is F. It's a rule which assigns to every ordered collection of m elements of V — m vectors, which we call v₁ to v_m — a certain number, where "number" means an element of the field over which everything is defined. We denote it ρ(v₁, …, v_m). Now there are m arguments, whereas before we had two arguments or one; it's a very natural generalization. But, as in both cases that we cited before, we have to impose a linearity condition. An m-linear form is a function like this which satisfies linearity in every argument — the linearity condition for each of the m arguments.
That means: fix v₁, …, v_{k−1} and v_{k+1}, …, v_m. That is to say, you fix all of the arguments except one — all of them except the k-th. [Question from the audience: why n squared?] Yes, n squared, because, as I explained, every bilinear form is determined by its values on pairs of basis vectors. But let me postpone this: the general dimension will be n to the m, and n² is a special case; since I will have to explain n to the m anyway, this will follow. Okay. We fix all of them except the k-th argument and substitute: effectively, ρ now becomes a function of only one argument, because all the rest of them are fixed. We get a function from one copy of V to F, which sends v to ρ(v₁, …, v_{k−1}, v, v_{k+1}, …, v_m) — v in the k-th position, the rest of them fixed. Effectively, it becomes a function of one v: a function from V to F. And we want it to be a linear functional, as simple as that — a function which has to be linear. That is to say, let's call this function φ; then we require φ(u + v) = φ(u) + φ(v) and φ(cu) = cφ(u), and this for every k, for every argument. Okay, so that's the idea. Clearly this specializes to what we called bilinear forms last time: those are exactly the objects for which we take two arguments, m equal to two. If m equals one, we recover the notion of a linear functional. Now the question becomes: what happens if we take all of them? Take the set of all m-forms. We should say "forms on V," to emphasize what they are forms on: for every vector space, there is a space of one-forms on it, a space of two-forms, of three-forms, and so on. Of course, if you change V, you get a different space; it's not "the space of forms," it's a space of forms corresponding to a specific vector space. The m-forms on V form a vector space.
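To have the condition in one place, here is the lecture's definition restated in symbols (φ in the spoken version is the one-argument function obtained by freezing the other slots):

```latex
% Multilinearity of an m-form \rho : V \times \cdots \times V \to F:
% for every k = 1, \dots, m, all u, v, v_i \in V, and all c \in F,
\begin{aligned}
\rho(v_1, \dots, v_{k-1},\, u + v,\, v_{k+1}, \dots, v_m)
  &= \rho(v_1, \dots, v_{k-1},\, u,\, v_{k+1}, \dots, v_m)
   + \rho(v_1, \dots, v_{k-1},\, v,\, v_{k+1}, \dots, v_m), \\
\rho(v_1, \dots, v_{k-1},\, c\,u,\, v_{k+1}, \dots, v_m)
  &= c\,\rho(v_1, \dots, v_{k-1},\, u,\, v_{k+1}, \dots, v_m).
\end{aligned}
```

For m = 1 this is just linearity of a functional, and for m = 2 it is the bilinearity condition from last lecture.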
I will say: with respect to the obvious operations of addition and scalar multiplication — there is nothing to it. If you have two such creatures, two such gadgets, ρ₁ and ρ₂, you define ρ₁ + ρ₂ as the function whose value on a given collection v₁, …, v_m is ρ₁ evaluated on this collection plus ρ₂ evaluated on this collection. Likewise for scalar multiples. It is a vector space, and the next natural question is: what is its dimension? Now, I have a claim: if m is one, we know the dimension is n, and for m = 2 it is n² — although that has just been questioned, so I guess I'll have to give an argument for it. I claim that for any m, the dimension is going to be n to the power m; it grows exponentially if n is not equal to one. Last time I gave the argument why the dimension for V^(2) is equal to n². The argument was obtained simply by choosing a basis of V and seeing that to define a bilinear form is the same as to define its values on all possible pairs of basis elements. Since there are n² pairs of basis elements, you need to specify n² values, and that's how you get the statement that the dimension is n². Likewise, here we have to define the values on m-tuples of basis elements. Let me explain this more precisely. Choose a basis; let's call it e₁, …, e_n. The lemma I want to state is: ρ in V^(m) is uniquely determined by its values on very special vectors. A general ρ — this is probably worthwhile to mention — can be viewed as a black box which has m inputs, namely the vectors v₁, v₂, …, v_m. We draw the inputs from our vector space V; we have to choose m of them, because it's a function of m arguments. It takes them in and it spits out an element of F. But because of the linearity property, it is sufficient — necessary and sufficient — to know the values specifically when each of these v_i is a basis vector. That's the point.
That's what I was going to write here: ρ is determined by its values on basis vectors. We have to come up with a nice notation. Let's write ρ(e_{k₁}, e_{k₂}, …, e_{k_m}). Let's not confuse the indices: n is the dimension of V, which is why the basis runs e₁, …, e_n; but the arguments here are m vectors, so the first one is some e_{k₁}, one of those basis vectors, the second is e_{k₂}, also one of those, and so on. Obviously, if you already have ρ, you know what this value is. So we can denote it a_{k₁…k_m}; this is an element of F. If you already have ρ, then certainly you have all these numbers for any sequence k₁, k₂, …, k_m — in other words, each k_i is in the set from 1 to n. But conversely, if you already know all of these numbers, this gives us the value at any v₁, v₂, …, v_m, for the simple reason that each v_i can be written as a linear combination of the e_j, where j goes from 1 to n. Each v_i can be written as a linear combination, and then we use linearity to express the value ρ(v₁, …, v_m) as a linear combination of the values on the basis vectors. That's how you can always reduce it to a sum of values of this type, with some coefficients, indexed by tuples of numbers from 1 to n. That shows that, conversely, if you know all of these numbers, you know ρ for every collection. But what happens if we take the sum of two forms, each represented by numbers like this? This collection encodes ρ: you can think of it as the necessary and sufficient numerical information you need to encode this object. It is very much like encoding a vector as a column of numbers using a basis. In this case, there are n^m numbers. The only difference is what labels this collection: not numbers from 1 to n, but m-tuples of numbers from 1 to n. That's all. If you have two forms on V, one encoded by numbers a with some indices, the other encoded by numbers b, and you take the sum, the numbers will just add up componentwise.
You see, that's why it's clear that the dimension of the space is equal to n to the m: because that is how many numbers we get in this collection. If you plug in basis vectors indexed by k₁, k₂, …, k_m, you get a_{k₁k₂…k_m}; there are n^m such tuples, hence n^m numbers, each an element of F — numbers, or scalars, if you will. These numbers add up when we add two forms, and they get multiplied by a scalar when we multiply the form by a scalar. That means that the vector space V^(m) is isomorphic to F to the power n^m, and the dimension of V^(m) is equal to n^m. Okay, good. Any questions? This grows very fast; by itself it's not going to give us the determinant. Now I want to explain the basic strategy for how we want to get the determinant. What is the idea of the determinant? We want to use m-forms to introduce the concept of the determinant, by which I mean the following procedure. We want to define a function from L(V) — which, I remind you, is our notation for the vector space of all operators acting on a given V — to F. We want to assign to every operator T from V to V, which is an element of L(V), a number. That's the idea. If V has dimension one, then T is essentially a number. But if V has dimension greater than one, T is not a number; at best, we can represent it by n² numbers. Yet we want to assign to it a single number — and not in an arbitrary way, but so that if you take a composition T₁ ∘ T₂, it goes to the product of the corresponding numbers: the determinant of T₁ ∘ T₂ is equal to the product of the determinants. Moreover, the identity operator goes to 1. If you combine the two, you see that if the operator is invertible, then the determinant of the inverse has to be the inverse of the determinant of T. This shows, in particular, that the determinant is nonzero whenever T is invertible. That gives us a way to compress T into something which is just a number. Think about it.
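As a concrete illustration of the encoding just described — a sketch with made-up coefficients, not anything from the lecture — here is how an m-form on F^n, stored as its n^m values on basis tuples, is evaluated on arbitrary vectors by expanding each argument in the basis:

```python
from itertools import product

def make_form(coeffs, n, m):
    """Build an m-linear form on F^n from its values on basis m-tuples.

    coeffs maps each index tuple (k_1, ..., k_m) to the value
    rho(e_{k_1}, ..., e_{k_m}); vectors are lists of n coordinates.
    """
    def rho(*vectors):
        assert len(vectors) == m
        total = 0
        # Multilinearity: expand every argument in the basis and sum
        # products of coordinates against the basis values.
        for ks in product(range(n), repeat=m):
            term = coeffs[ks]
            for v, k in zip(vectors, ks):
                term *= v[k]
            total += term
        return total
    return rho

# A 3-form on F^2 needs 2**3 = 8 numbers, matching dim = n**m.
coeffs = {ks: 1 + sum(ks) for ks in product(range(2), repeat=3)}
rho = make_form(coeffs, n=2, m=3)

u, v, w = [1, 2], [3, 4], [5, 6]
print(rho(u, v, w))
```

Linearity in each argument holds by construction, and the dictionary of n^m coefficients is exactly the "necessary and sufficient numerical information" from the lemma.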
You are trying to convert something which is multi-dimensional into something which is one-dimensional. How to do that? Here's an idea: we would like a procedure. Look, here we have a procedure which enables us to pass from V, which is our n-dimensional space, to what we call V^(m), with round brackets, which has a different dimension. Unfortunately, that dimension grows. But what if we could modify the construction in such a way that we would get something which is one-dimensional? Then an operator T from V to V would give us an operator on that one-dimensional space. That's the idea. Let me write this down, because I started explaining it in words and I see that it gets a little complicated. So observe the following: T gives rise to an operator, which we'll call T^(m), from V^(m) to V^(m). This is a crucial point. Think about what we have just done: it is a procedure which assigns to one vector space another vector space. You start with V, you get another vector space called V^(m). For example, for m equal to one, it is just the dual space. But remember, for the dual space, if you have T from V to V, you obtain the transpose T′ from V′ to V′. In other words, you not only assign to every vector space another vector space; you assign to every operator on the original space an operator on the new space. At least we've seen it for m equal to one. I claim that we can do the same for any m, in the following way. I have to find a way to transform ρ into another m-form, which I will call T^(m)(ρ). How to do it? The idea is very simple. Given T from V to V, I define a new element of V^(m): T^(m) applied to ρ. What does it mean to define T^(m)(ρ)? It means to define it as an m-form. But what is an m-form? An m-form is something that, for every collection v₁, v₂, …, v_m, produces a number. I have ρ(v₁, …, v_m); how do I get a new number?
Knowing ρ on any m-tuple, if you think about it, the only natural way to get a new number is to hit each of these vectors with T — to apply T to each of those vectors. That's what we can do, because T converts a vector to another vector. We define the value of the new form on v₁, …, v_m as the value of the old one on Tv₁, Tv₂, …, Tv_m. We transform each of the arguments by the same T and then evaluate ρ on the result. In other words, from this perspective, you start out with ρ and define T^(m)(ρ) by substituting T on each of the inputs before they enter the box. ρ is a box which has m inputs and one output, which is a number; in between each of those inputs and the vectors themselves, we insert the operator T — we take the composition of these boxes. That's what this new operator does. If we take this whole thing and think of it as a single black box, that's the black box of T^(m)(ρ). It's easy to check that if ρ is linear in each argument, then this new form is also linear in each argument, because T is a linear operator and therefore does not spoil linearity. You see now how interesting this is: you start with an operator acting from an n-dimensional space to an n-dimensional space, and you obtain, canonically, without any choices — we didn't have to choose a basis or anything like that — an operator acting on a totally different vector space. At least from the outset it's a different vector space; in particular, it has a different dimension. This is an idea which in mathematics is called functoriality. Today in mathematics, we not only focus on set theory — which, remember, I talked about as a foundational theory of all of mathematics, developed a little over 100 years ago.
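A minimal sketch of this construction in coordinates (the particular form and matrix are hypothetical): given an operator T on F² and a bilinear form ρ, the new form T^(2)(ρ) simply feeds T of each argument into ρ.

```python
def apply(T, v):
    """Matrix-vector product; T is a list of rows."""
    return [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]

def pullback(T, rho):
    """The induced map on 2-forms: (T^(2) rho)(v, w) = rho(Tv, Tw)."""
    return lambda v, w: rho(apply(T, v), apply(T, w))

# An arbitrary bilinear form on F^2 and an arbitrary operator.
rho = lambda v, w: v[0] * w[0] + 2 * v[0] * w[1] - v[1] * w[0]
T = [[1, 2],
     [3, 4]]

new_rho = pullback(T, rho)
v, w = [1, 0], [0, 1]
print(new_rho(v, w))  # by definition this equals rho(Tv, Tw)
```

Note that no basis was needed to define `pullback` itself; the basis enters only when we choose to represent T by a matrix.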
About 50 years ago, due to the work of the great French mathematician Alexander Grothendieck, and many others such as Eilenberg and Mac Lane, a new paradigm emerged: the paradigm of category theory. A category is an enhancement of a set, where we not only have elements but also maps between them. More properly, they are called objects and morphisms. In the case of vector spaces, you have the category of vector spaces: the objects are vector spaces and the morphisms are linear maps. Then you're interested in procedures which preserve the structure — which not only convert an object to another object, but also, if you have a map or morphism between two objects and you apply the procedure, give you a map between the resulting objects. This is what we have done here; it's an example of a functorial procedure. Okay. But this still doesn't help us, because we want a number. To get a number, we have to get a linear operator on a one-dimensional vector space. We have to modify the construction so that instead of this guy, whose dimension grows wildly, we get something which is one-dimensional. Why would that solve it for us? Suppose U is a one-dimensional vector space over F. Then a linear operator, let's call it S, from U to U is the same thing as a number, that is to say, an element of F. That's simply because every vector u goes to λu for some λ: every linear operator on a one-dimensional space is just rescaling. You don't need to choose a basis. You could choose a basis; the basis vector would get multiplied by this number. But any other basis vector is a multiple of the one you have chosen, so it gets multiplied by the same number. For higher-dimensional spaces, we cannot even say that. We cannot really say that linear operators on, say, a two-dimensional space are the same as two-by-two matrices.
Yes, it's true that they are isomorphic to 2×2 matrices, but there are many different isomorphisms; they depend on the choice of a basis. Because the basis consists of two vectors, if you change the basis, it doesn't mean that each basis vector just gets multiplied by some number — there could be some recombination. But the one-dimensional case is precisely the sweet spot where operators are, in fact, just numbers, because the matrix is one-by-one, so to speak: every vector gets multiplied by the same number. There is no difference between the notion of a linear operator on a one-dimensional vector space and the notion of a number. If you want to get a number, it's the same as getting a linear operator on a one-dimensional vector space. Therefore, the idea is to modify this construction in such a way that instead of V^(m), you produce out of a vector space, canonically, a one-dimensional vector space. Then any operator T will give an operator on that one-dimensional space, and will therefore give you a number. And that's the determinant. This is how we're going to do it. The question is how to modify this construction, and the clue is given by what we did at the end of the last lecture: for bilinear forms, I defined symmetric and alternating bilinear forms. So let's revisit this notion. For m equal to two, we defined two subspaces of V^(2): the symmetric bilinear forms and the alternating forms, inside V^(2). In fact, V^(2) is a direct sum of these two. I want to remind you: the alternating subspace can be defined in two different ways. One: it consists of those ρ in V^(2) for which ρ(u, u) = 0 for all u — if the argument repeats, we get zero. But alternatively, and more suggestively, explaining the terminology, it consists of those bilinear forms whose values acquire a minus sign if we switch the two arguments. This is a beautiful thing, you realize: if you have two arguments, you get a symmetry to play with.
Because if you have two arguments, you can switch them — as I emphasized, when we say two arguments, we have to remember that they are ordered: we know which one is the first and which one is the second. If we switch them, a priori we're going to get a totally different value. In multivariable calculus, you would consider a function f(x, y); let's say the function is x + y². If you switch the arguments, you get f(y, x) = y + x². A priori, the two functions are completely different. But some functions are symmetric. For instance, if you have x² + y² and you switch x and y, you get the same thing — that's a symmetric function. But if you have x² − y², you get y² − x², that is, −(x² − y²). That's an alternating, or antisymmetric, function: its value at (y, x) is equal to minus its value at (x, y). What's nice is that every function of two arguments can actually be written as a symmetric part plus an antisymmetric part. It's a simple check, taking half the sum and half the difference, which I did at the very end of last lecture. That's what we're doing here: describing antisymmetric, or alternating, forms, for which the value at (u, v) is minus the value at (v, u). You see. But now, what does this mean from the point of view of the encoding of forms by collections of numbers? For two-forms it's going to be especially easy to see. Consider ρ, and let's suppose also that V is two-dimensional with a basis e₁, e₂. In this case, the numbers can be arranged into a 2×2 matrix, which consists of the values of ρ on (e₁, e₁), (e₁, e₂), (e₂, e₁), and (e₂, e₂). Now, I know there is a little confusion here; a question may arise: how come elements of V^(2) are represented by 2×2 matrices — or by n×n matrices in general — which is also how we represent linear operators? Operators are also represented by matrices. I will postpone the answer to this question until later.
It's not a coincidence, but the objects are not exactly the same. The difference between them is roughly like the difference between V and V′: very close, but slightly different. Let's not worry about this for now. (I'm about to sneeze, but okay.) So: four numbers. Those numbers are the values on (e₁, e₁), (e₁, e₂), and so on. But suppose now that ρ is alternating. Then the diagonal values disappear, according to the first definition. These are two equivalent definitions: every alternating form satisfies both this and that, and in fact each of these conditions is enough — it implies the other. The first definition says that if two arguments are the same, you get zero. So ρ(e₁, e₁) and ρ(e₂, e₂) carry no information: they are zero. And as soon as we know that ρ is in V^(2)_alt, we know that ρ(e₂, e₁) is equal to ρ(e₁, e₂) with a minus sign. Effectively, there is only one parameter — guess what — namely ρ(e₁, e₂). Which means that the dimension of this subspace V^(2)_alt is exactly one. V^(2) itself, without "alt" — all bilinear forms on the two-dimensional space — has dimension four, because 2² is 4. As I mentioned, the symmetric and alternating forms give you two direct summands, a direct sum decomposition of V^(2): the symmetric part is three-dimensional and the alternating part is one-dimensional. [To a question:] Yes — by the first condition, when the two arguments are the same, the value has to be zero. But it's also clear from the second condition, because ρ(u, u) = −ρ(u, u), and a number which is equal to minus itself must be zero, right? So for an alternating form the diagonal values are always zero, and ρ(e₂, e₁) is equal to minus ρ(e₁, e₂); effectively, only one parameter remains instead of four. For symmetric ones, the diagonal may be nonzero: ρ(e₁, e₁) is one independent parameter, ρ(e₂, e₂) is another, and ρ(e₁, e₂) is equal to ρ(e₂, e₁). So there are three parameters; one plus three is four — it checks out.
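The "half the sum, half the difference" decomposition can be checked directly on the matrix encoding a bilinear form. A sketch with arbitrary numbers (using exact fractions so the halving stays exact):

```python
from fractions import Fraction

def transpose(B):
    return [list(row) for row in zip(*B)]

def combine(B, C, s):
    """Entrywise (B + s*C) / 2."""
    n = len(B)
    return [[Fraction(B[i][j] + s * C[i][j], 2) for j in range(n)]
            for i in range(n)]

# Matrix of an arbitrary bilinear form on a 2-dimensional space:
# B[i][j] = rho(e_{i+1}, e_{j+1}).
B = [[5, 7],
     [1, 3]]

sym = combine(B, transpose(B), +1)   # symmetric part: 3 free parameters
alt = combine(B, transpose(B), -1)   # alternating part: 1 free parameter

# The two parts recover B, the alternating diagonal vanishes, and the
# alternating off-diagonal entries differ only by a sign.
assert all(sym[i][j] + alt[i][j] == B[i][j] for i in range(2) for j in range(2))
assert alt[0][0] == alt[1][1] == 0
assert alt[1][0] == -alt[0][1]
```

The parameter counts match the dimensions in the lecture: 3 for the symmetric summand, 1 for the alternating one, 3 + 1 = 4 = 2².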
The symmetric ones form a three-dimensional subspace; the alternating ones form a one-dimensional subspace; together they give the four-dimensional space, which is what we knew anyway: V^(2) has dimension four. But now, you see, we lucked out — we have found exactly what we were looking for: a canonical construction of a one-dimensional space from a two-dimensional vector space. This is a special case, because here we are considering only two-dimensional vector spaces. If the vector space were not two-dimensional, the dimension of the alternating forms would not be one; only the diagonal entries would vanish — for dimension three, for instance, the dimension is three. I'll get back to that case. I just want to emphasize that we are considering the special case when the vector space has dimension two. In this case, we have been able to assign to any vector space of that dimension, namely dimension two, a canonical one-dimensional vector space. Therefore, this program should go through: any operator on the two-dimensional space will give rise to an operator on a one-dimensional space, and therefore to a number. And guess what: that's going to be precisely the determinant of the 2×2 matrix representing this operator. Okay? So, in fact, it's worthwhile to work out this example, so that you see we are on firm ground here. Suppose that the dimension of V is two. You have an operator T; it gives rise to an operator which I will call T^(2alt), from V^(2)_alt to V^(2)_alt, namely (T^(2alt)ρ)(v₁, v₂) = ρ(Tv₁, Tv₂). But because this space is one-dimensional, we can interpret this operator as a number, and that's going to be our determinant. Let's calculate what it is. Suppose that we have already chosen a basis e₁, e₂, because we use this basis to identify alternating forms with numbers, by simply taking the value of the alternating form on (e₁, e₂). We know that this is sufficient — necessary and sufficient.
Once you know this value, you know everything, because you already know ρ(e₁, e₁) is zero, ρ(e₂, e₂) is zero, and ρ(e₂, e₁) is minus ρ(e₁, e₂) — and then you know ρ on every pair of vectors. Now, on the other hand, T is an operator from V to V, and we use our basis to convert T to a matrix. I don't want to use a capital A, because I used a's for the encoding of forms, so I'm going to use entries t₁₁, t₁₂, t₂₁, t₂₂. Okay, so now I have to carry out this program. Suppose ρ is an element of V^(2)_alt, determined by its value on (e₁, e₂), and I have my operator T, determined by this matrix with entries t₁₁, t₁₂, t₂₁, t₂₂. I want to calculate the operator T^(2alt): what happens when I apply it to ρ? I know that T^(2alt)(ρ) is going to be alternating as well, so I only need to find its value on (e₁, e₂). By definition, (T^(2alt)ρ)(e₁, e₂) is ρ(Te₁, Te₂). This formula is written for every pair (e_i, e_j), but I know that I only need (e₁, e₂), because the rest are determined: on (e₁, e₁) it's zero, on (e₂, e₂) it's zero, and on (e₂, e₁) it's minus the value on (e₁, e₂). So I need to calculate one thing. With the matrix as above, Te₁ is t₁₁e₁ + t₂₁e₂ and Te₂ is t₁₂e₁ + t₂₂e₂ — that's our usual matrix representation. So this is ρ(t₁₁e₁ + t₂₁e₂, t₁₂e₁ + t₂₂e₂). Now, rather than tediously writing out the sum of four terms, let's accelerate: remember that the form is alternating, so every term with a repeated basis vector — ρ(e₁, e₁) or ρ(e₂, e₂) — vanishes. Only the two cross terms survive: in one, the first argument is e₁ and the second is e₂, giving t₁₁t₂₂ ρ(e₁, e₂); in the other, the first is e₂ and the second is e₁, giving t₂₁t₁₂ ρ(e₂, e₁). The other terms vanish because they involve ρ(e₁, e₁) or ρ(e₂, e₂). Since ρ is linear in each argument, we can pull out the coefficients. The result is t₁₁t₂₂ ρ(e₁, e₂) + t₂₁t₁₂ ρ(e₂, e₁); but ρ(e₂, e₁) is minus ρ(e₁, e₂).
Then both terms are proportional to ρ(e₁, e₂), except in the second I have to put a minus sign. Lo and behold, you recover the familiar formula: (t₁₁t₂₂ − t₂₁t₁₂) multiplied by ρ(e₁, e₂). That coefficient is the determinant of T. You see how these things combine, simply because of the alternating property. First of all, each surviving term is a product of entries which live in different rows and different columns — simply because if two basis vectors repeat, as in (e₁, e₁) or (e₂, e₂), the value is zero. That's why only these two terms survive. Then the alternating property puts the proper sign on each term, and eventually everything becomes proportional to ρ(e₁, e₂), with coefficient of proportionality exactly the determinant as we know it for 2×2 matrices. I claim that exactly the same thing works in general. Any questions, by the way, about this? [A question.] Yes — I made a small typo: I should have written that there is a procedure assigning to every vector space V another vector space, called V^(2)_alt, which is the space of alternating bilinear forms on V. Then, given an operator T from V to V, I can easily define an operator from V^(2)_alt to V^(2)_alt, which I call T^(2alt); and V^(2)_alt is, in this case, a one-dimensional space. Therefore this operator gives rise to a number — if you wanted to represent it by a matrix, it would be a 1×1 matrix, therefore a single number, which is what we wanted. The point is that this procedure is functorial: whatever relations you have between operators at the level of V will be preserved. So I will get precisely what I want: the product will go to the product, and the identity will go to one. You see, the idea is to be able to convert an n-dimensional vector space to a one-dimensional space in a canonical way which does not use any basis. And we did it here, at least for two-dimensional vector spaces.
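The calculation just done can be replayed mechanically (a sketch; the matrix entries are made up): take the alternating form on F² normalized by ρ(e₁, e₂) = 1, pull it back by T, and the value on (e₁, e₂) comes out to exactly t₁₁t₂₂ − t₁₂t₂₁.

```python
def rho(v, w):
    """The alternating bilinear form on F^2 with rho(e1, e2) = 1."""
    return v[0] * w[1] - v[1] * w[0]

def apply(T, v):
    """Matrix-vector product for a 2x2 matrix T (list of rows)."""
    return [sum(T[i][j] * v[j] for j in range(2)) for i in range(2)]

T = [[2, 5],
     [3, 4]]

e1, e2 = [1, 0], [0, 1]
# (T^(2alt) rho)(e1, e2) = rho(T e1, T e2): the scaling factor on the
# one-dimensional space of alternating forms.
scale = rho(apply(T, e1), apply(T, e2))
print(scale)  # t11*t22 - t12*t21
```

The scaling factor does not depend on which nonzero alternating form we normalize by, since the space of such forms is one-dimensional.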
We have found a canonical one-dimensional vector space assigned to every two-dimensional vector space, namely the space of alternating bilinear forms on it. Then we run this machine. The machine is very simple: given any ρ in the space we have just defined, define a new one with the help of the operator T as follows: the value of the new form on, say, (v₁, v₂) is the value of the old one on (Tv₁, Tv₂) — the arguments transformed by T. That's the definition; it's a general construction. I first explained how to do it with V^(m) instead of V^(2)_alt. The problem with V^(m) was that its dimension grows; it doesn't give me one. But in the case when V is two-dimensional, there is a piece of V^(2), naturally isolated by a symmetry condition — an antisymmetry condition, more precisely — which is on the nose one-dimensional. The idea of converting elements of this new space into new elements with the help of T is always the same: you just apply T to every argument. That's it. You see, we have just seen, by explicit calculation in the case of a two-dimensional space, that the new alternating bilinear form is equal to the old one times the determinant of the matrix of the operator, which is the product of the diagonal entries minus the product of the off-diagonal entries. This combination appeared naturally from the antisymmetry property, you see. This is how we are going to proceed in general. In general, suppose now V has dimension n, which is not necessarily equal to two but could be greater — the general case, where the dimension of V is an arbitrary number like 2, 3, 4, and so on. In the case n equal to one, there's nothing to do: an operator on a one-dimensional space is already a number, so the determinant of T is just T itself. We are applying this to spaces of dimension greater than one. We will define a subspace. You see, so far we have defined V^(m) for any space and for every m; now we take m to be equal to n, the dimension of V, which is what we did here.
Because here we considered n equal to two: for two-dimensional vector spaces we considered the two-forms. For an n-dimensional vector space we'll consider n-forms. But inside the space of all n-forms, which is very large — it has dimension n to the n — we'll consider alternating forms, which I will define in a moment. And I will show that this subspace is always one-dimensional, even though the space of all n-forms has dimension that grows exponentially. If you match the number of arguments to the dimension and consider only alternating forms, the alternating condition is so severe that it shrinks the space to a one-dimensional space. This is the subspace of alternating n-forms. The whole space has dimension n to the n, but this subspace has dimension one. Then every operator from V to V will give rise to an operator, which I'll call T alt, from this one-dimensional space to itself. Therefore it will give rise to a number, and that's the determinant. Then when we calculate its value from the matrix of the operator, a similar calculation will give us the formula that we are familiar with, which is a sum of n factorial terms. That will give us an explanation of where this formula came from. It comes from a beautiful, functorial construction converting vector spaces of dimension n to vector spaces of dimension one. That's the meaning of the determinant: it's what happens to an operator when you collapse your vector space onto a one-dimensional vector space by this construction — the operator collapses onto a number, and that number is the determinant. Okay? So let me define this subspace. That's the first step, right? I have to define the subspace, and I have to show that it's one-dimensional. Then for sure I will have this function; it has no choice, I'll have it. Then of course there will be an exercise, so to speak — like here I did for two-by-two — to calculate what it is in terms of the matrix coefficients. And it's just a purely combinatorial formula, which we'll see. It is very simple.
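As a preview of the "sum of n factorial terms" the lecture promises, here is a sketch of that combinatorial formula (the Leibniz expansion). The lecture has not derived it yet, so take this as an assumption-laden illustration: each of the n! terms is a product of entries from distinct rows and distinct columns, with a sign coming from the alternating property.

```python
from itertools import permutations
from math import prod

def sign(perm):
    # Sign of a permutation: (-1) to the number of inversions.
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(T):
    # Sum over all n! permutations; each term is a product of entries
    # taken from distinct rows and distinct columns, with a sign.
    n = len(T)
    return sum(sign(p) * prod(T[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# For n = 2 this reduces to t11*t22 - t12*t21:
assert det([[3, 1], [2, 5]]) == 3 * 5 - 1 * 2
```

For n = 2 there are 2! = 2 terms, exactly the two that survived in the calculation above; for n = 3 there are six, and in general n factorial.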
The crucial point is to define this space and to prove that it's one-dimensional. So that's what we're going to do. Let's keep this. Okay, let me continue here. All right. Definition. Even though for the determinant we will only be interested in m equal to n, I will define it for every m, and then we'll specialize. The space of alternating m-forms is the set of those ρ in the space of m-forms — at first it's a subset; we will show it's a subspace — such that for any v1, ..., vm, the value ρ(v1, ..., vi, ..., vj, ..., vm), where i is less than j, equals minus ρ(v1, ..., vj, ..., vi, ..., vm), with all the other arguments the same. So we can take any i and j with i less than j. In other words, we switch two arguments, and the condition is that the two values are related by a sign. You see, if m is equal to two, then we get back our old definition, because in that case there are only two arguments, and the condition is that if you switch them, you get minus the original value. Here we are extending this notion to every pair i, j with i less than j. That's the definition. Now, the first lemma here is that if any two of the arguments are equal, the value is zero. So if you have the same vector v in the i-th position and in the j-th position — I didn't write it here, but it should be obvious what I mean — then if ρ is alternating, substituting two equal values will give you zero. The value, if two of the arguments coincide, will be zero, okay? That's actually the same proof as for m equal to two, but since I don't recall if I did it or not, I might as well do it. Here's how it works — actually, it's obvious.
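The defining swap condition can be checked mechanically on sample vectors. This is my own small sketch (the helper name `is_alternating_on` is hypothetical): it tests, for every pair of argument positions, that switching the two arguments flips the sign of the value.

```python
# Sketch: checking the defining condition of an alternating form on
# a given tuple of sample vectors.

def is_alternating_on(rho, vectors):
    # For every pair i < j of argument positions, check that
    # rho(..., vi, ..., vj, ...) == -rho(..., vj, ..., vi, ...).
    m = len(vectors)
    for i in range(m):
        for j in range(i + 1, m):
            swapped = list(vectors)
            swapped[i], swapped[j] = swapped[j], swapped[i]
            if rho(*vectors) != -rho(*swapped):
                return False
    return True

alt = lambda v, w: v[0] * w[1] - v[1] * w[0]    # alternating
sym = lambda v, w: v[0] * w[0]                  # not alternating
assert is_alternating_on(alt, [[1, 2], [3, 4]])
assert not is_alternating_on(sym, [[1, 0], [1, 0]])
```

Of course this only checks the condition on one tuple of vectors; the definition requires it for all of them, which is why it is a condition on the form and not on particular arguments.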
Sorry — what I was going to say is, I was thinking of deriving this property from the alternating condition, but with our definition it is obvious, right? If you switch two equal vectors, you get the same thing back, but the condition says the value should change by a sign. So the value is equal to minus itself, and therefore it is zero. I guess it's obvious, right? Okay. Now what this implies — a corollary — is that if the vectors v1, ..., vm are linearly dependent, then ρ(v1, ..., vm) is equal to zero. Okay, let's prove this. Suppose they are linearly dependent. We know that if you have vectors which are linearly dependent, then one of them can be written as a linear combination of the others. Say vj can be written as a linear combination of the others: a1 v1 plus ... plus a_{j-1} v_{j-1} plus a_{j+1} v_{j+1} plus ... plus a_m v_m. Okay. So now I substitute this into ρ. Look what happens if I substitute this expression for vj into ρ(v1, ..., vj, ..., vm). Then, by multilinearity, the first term is going to be a1 times ρ(v1, ...), with v1 appearing in the j-th position — that comes from the first term. Multilinearity tells me that if I evaluate on a linear combination, I get a linear combination of the values. The first term in this linear combination is when I substitute v1 for vj, with the coefficient a1 in front. Then the second term will be when I put a2 — but remember, v2 already appears in the second position, so I'll have v2 in the j-th position as well. Here I'm showing what appears in the j-th position, right? And I think you already see the trend. The first term is zero, because v1 appears twice — in the first position and in the j-th position — by this lemma. By the way, is it lemma one, or did we have a lemma already? It doesn't matter; let's call it lemma one, since we started doing the alternating forms with it.
This is zero because the same vector appears in two different arguments — that comes from the lemma. The second term is a2 times ρ with v2 in the j-th position. Again, these two are the same: v2 legitimately appears in the second position, and now there is an intruder v2 also in the j-th position. So this is also zero, for the same reason. All of the terms are zero, because each of them produces ρ of some vectors where the same vector appears in two different arguments. That shows you that if the vectors are linearly dependent, ρ is equal to zero on these vectors. But this immediately implies the following: the space of alternating m-forms is zero if m is greater than n, where n is the dimension of our vector space. Proof: suppose ρ is an element of this subspace, and let's evaluate ρ on some vectors v1, ..., vm. Since m is greater than the dimension of the vector space, this collection of vectors is linearly dependent — every collection of more than n vectors in an n-dimensional vector space is linearly dependent. So by the corollary, the value of ρ on this collection is zero. Since this is so for every collection v1, ..., vm, we find that ρ itself is zero. There are no nonzero elements in here. This is comforting, because whereas the dimension of the entire space of m-forms grows exponentially, now we see that after n steps the alternating part becomes zero. There is a chance that it will actually grow and then come down. In fact, we can write down the formula for the dimension of this space. So we need to know what the dimension of the space of alternating m-forms is for m between one and n, because once m is n plus one or more, it's zero, right? In fact, the dimension of this space, for m between one and n, is equal to the binomial coefficient. Are you guys familiar with binomial coefficients? You must have seen them somewhere. But anyway, it's n factorial divided by m factorial times (n minus m) factorial.
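The corollary — an alternating form vanishes on any linearly dependent collection — can be seen concretely. A sketch of my own (the triple-product form `rho3` below is the standard alternating three-form on R^3, not something written in the lecture):

```python
# Sketch: an alternating form vanishes on linearly dependent vectors.

def rho3(u, v, w):
    # The standard alternating 3-form on R^3 (the scalar triple product).
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

u, v = [1, 2, 3], [4, 5, 6]
dep = [2 * u[i] - 3 * v[i] for i in range(3)]   # a linear combination of u and v

# Expanding rho3(u, v, dep) by multilinearity gives terms where the same
# vector occupies two argument slots, and each such term is zero:
assert rho3(u, v, dep) == 0
```

The mechanism is exactly the one in the proof: substituting the linear combination and expanding by multilinearity leaves only terms with a repeated argument, and each of those vanishes by the lemma.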
This is the number of ways to choose m objects from a set of n objects — let's say the set of numbers 1 to n. In fact, the proper way to state it is: the dimension is equal to the number of ways to choose m elements from the set of numbers from one to n. Then you can check that this number is given by this ratio of factorials. Let me give you an example. First of all, from this it follows in particular what the dimension of the space of alternating n-forms is: as a special case, we take m equal to n. Then it is the number of ways to choose n numbers from a set of n numbers. How many ways? There is only one. Which you can also see from the formula: it's n factorial divided by n factorial times zero factorial; zero factorial is one, so n factorial divided by n factorial is one, which is what we wanted. Our goal is to prove that the dimension of this subspace of alternating n-forms, where n is the dimension of V, is one. But in fact we will prove more: that the dimension of the space of alternating m-forms is the number of ways to choose m elements from the set of numbers from one to n. If m is greater than n, we already proved that it is zero — and indeed there are zero ways. That's a very interesting construction, which produces for you some finite-dimensional vector spaces out of your vector space V, whose dimensions grow and then come down. It grows up until about n over two, and then it starts coming down — the question is which factor dominates, and there is a symmetry when you switch m and n minus m. It starts with n, then it grows, then it comes down; at the n-th step you get one, and then it becomes zero. That's the construction of these alternating multilinear forms. But let me give an example just so you get some feeling for this. Let's take V which is three-dimensional. Okay, so n is equal to three, and let's look at the alternating two-forms inside the space of all two-forms. We just explained that the space of two-forms has dimension n squared, and every element of it is completely determined by the collection of numbers ρ(vi, vj).
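The claimed dimension formula and the grow-then-shrink pattern are easy to tabulate. A quick sketch (using Python's built-in binomial coefficient; the example n = 4 is my choice, not the lecture's):

```python
import math

# Claimed dimension of the space of alternating m-forms on an
# n-dimensional space: the binomial coefficient C(n, m).
n = 4
dims = [math.comb(n, m) for m in range(1, n + 2)]

# Grows, peaks near n/2, comes back down to 1 at m = n, then 0 beyond:
assert dims == [4, 6, 4, 1, 0]
```

Note that `math.comb(n, m)` returns 0 when m exceeds n, matching the vanishing result we just proved, and C(n, n) = 1 is the case that gives the determinant.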
That's for a general two-form, right? In principle there are nine possibilities: you have ρ(v1, v1), ρ(v1, v2), ρ(v1, v3), ρ(v2, v1), and so on. Let's be clear: previously we considered the case when V is two-dimensional, and then every ρ was represented by a two-by-two matrix of values. Now it's three-dimensional. We choose a basis v1, v2, v3 and evaluate ρ on each pair of those guys — there are nine of them. But here's the thing. Let's see what happens when we impose the condition of ρ being alternating. A priori, if ρ were not alternating, we would have nine parameters. But now, we already know that if the arguments repeat, the value is zero, so those parameters disappear. It's very similar to what we did for two-by-two: we remove the diagonal entries. And the remaining ones match up — each of the lower ones can be obtained by switching one of the upper ones; it's a transposition, so this one is equal to minus that one. Effectively this is not a free parameter, because once we know this guy, if ρ is alternating, we also know that one: its value is going to be minus the value of this guy. Likewise this is minus this; this disappears and this disappears. Three of them are left. Now, I claim that I can interpret these three as ways to choose two elements from one to three. Because what are the parameters — bless you — ρ(v1, v2): this corresponds to choosing one and two from one to three. Remember, here n is equal to three. In general, we have to take the set from one to n and consider all possible ways of picking m elements; here m is two. How many ways are there to pick two elements from one to three? There are three ways: {1, 2}, {1, 3}, and {2, 3}. They correspond precisely to ρ(v1, v2), ρ(v1, v3), and ρ(v2, v3), right? We have found three independent parameters which determine an alternating ρ in this case.
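The bookkeeping in this example can be made explicit. A sketch of my own: enumerate all nine index pairs, discard the repeats (value forced to zero) and the descending pairs (determined by the ascending ones via a sign), and see that exactly the three two-element subsets remain.

```python
from itertools import combinations, product

# n = 3, m = 2: nine basis values rho(e_i, e_j) in principle.
pairs = list(product([1, 2, 3], repeat=2))
assert len(pairs) == 9

# Repeated indices are forced to zero; descending pairs are minus the
# ascending ones. The free parameters are the pairs with i < j:
free = [(i, j) for (i, j) in pairs if i < j]
assert free == [(1, 2), (1, 3), (2, 3)]

# These are exactly the 2-element subsets of {1, 2, 3}:
assert free == list(combinations([1, 2, 3], 2))
```

So of the nine values, three vanish, three are redundant, and three survive — matching C(3, 2) = 3.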
And we see that they precisely correspond to the ways of choosing two elements from one to three. In general, if you think about it, it's exactly the same, because we describe alternating multilinear forms by the values of ρ where each of the arguments is one of the basis vectors. If any two of them coincide, we get zero, so we only consider the case when they're all different. Then we can reorder them so that the indices increase, at the cost of putting a sign here. If they don't increase — say 2 comes before 1 — we can always express this in terms of ρ for the increasing sequence, at the cost of a sign. But a sign means that we can express one in terms of the other, so that parameter disappears. What's left are precisely, in general, the values ρ(v_{i1}, ..., v_{im}) where i1 < i2 < ... < im. But such a sequence is exactly a way to pick m numbers from the numbers from one to n. That's it. You still need to show that those values are linearly independent — and this is the key point: there is no relation between these three guys coming just from the alternating property, because the alternating property only gives us a relation when we switch arguments. That's roughly speaking — it's not a complete proof; you need a little bit more to show that these values are independent. In particular, you see that when m is equal to n, there is only one increasing sequence, namely 1, 2, ..., n. All right, we'll continue on Thursday, and we will finally define the determinant in general.
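The general counting argument — free parameters are indexed by strictly increasing sequences — can be checked in the same spirit. A sketch under my own choice of n = 5, m = 3:

```python
from itertools import combinations
import math

# Free parameters of an alternating m-form: values rho(e_{i1}, ..., e_{im})
# with i1 < i2 < ... < im, i.e. strictly increasing index sequences.
n, m = 5, 3
increasing = list(combinations(range(1, n + 1), m))

# Their count is the binomial coefficient C(n, m):
assert len(increasing) == math.comb(n, m)

# And when m = n there is exactly one such sequence, 1 < 2 < ... < n,
# which is why the space of alternating n-forms is one-dimensional:
assert list(combinations(range(1, n + 1), n)) == [tuple(range(1, n + 1))]
```

The m = n case at the end is the whole point for the determinant: one free parameter means a one-dimensional space, so any operator acts on it by a single number.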