All right, so let's continue our discussion of invertible linear maps, also known as isomorphisms. Last time we talked about the following situation: you have two vector spaces V and W over the same field F, and you have a linear map T between them. This map is called an isomorphism between V and W if it is invertible. We also saw that the property of being invertible is equivalent to the property of being bijective, which in turn is equivalent to being both injective and surjective. So we have three different ways to characterize isomorphisms: these are the invertible maps, these are the bijective maps, and these are the maps which are both injective and surjective. Now, invertible means that there exists another map S going in the opposite direction, from W to V, such that the composition ST, which is going to be a map from V to V, is the identity map on V, and the composition in the opposite order, TS, which is a map from W to W, is the identity on W. We also saw last time that in general it is not enough to impose just one of these conditions. We saw an example of linear maps S and T such that ST is equal to the identity but TS is not, where both S and T were maps from the same vector space to itself; in that example the maps were clearly not bijective. So if we want an equivalence between the condition that something is bijective and the condition that it is invertible, we had better impose both conditions in general. That example lived in the infinite-dimensional vector space of infinite sequences of elements of F, and there one of the two conditions was satisfied while the other was not. This puts to rest the question as to whether we really need both of them. However, in that example V was infinite dimensional: it could not be spanned by finitely many vectors. What I'm going to show now is that if V and W are in fact finite dimensional, then it is sufficient to impose just one condition; in fact, the two conditions are equivalent to each other, and the situation simplifies dramatically.

So suppose now that V and W are finite dimensional. In fact, last time we proved that in this case, if there exists an isomorphism between V and W, then the dimensions of V and W must be the same. Incidentally, if there exists such an isomorphism from V to W, we also say that V and W are isomorphic to each other. All right, suppose we are in this situation: we have two finite-dimensional vector spaces which are isomorphic, that is to say, there exists an isomorphism T from V to W. Then we know from last time that the dimensions have to be the same. Now I'm going to prove that in this case the first of the two equalities, ST = I_V, is equivalent to the second, TS = I_W.
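Here is a minimal Python sketch of that infinite-dimensional example, assuming the two maps were the usual shift operators on the space of infinite sequences (the lecture does not name them explicitly); sequences are modeled as functions from an index to a value, and only the first few terms are printed.

```python
# A small sketch of the standard counterexample on the space of infinite
# sequences.  The lecture does not name the maps, so this ASSUMES they were
# the usual shift operators; a sequence is modeled as a Python function from
# an index n = 0, 1, 2, ... to a number.

def T(a):
    """Right shift: (a0, a1, a2, ...) -> (0, a0, a1, ...)."""
    return lambda n: 0 if n == 0 else a(n - 1)

def S(a):
    """Left shift: (a0, a1, a2, ...) -> (a1, a2, a3, ...)."""
    return lambda n: a(n + 1)

a = lambda n: n + 1                  # the sequence (1, 2, 3, 4, ...)
first = lambda seq: [seq(n) for n in range(5)]

print(first(S(T(a))))                # [1, 2, 3, 4, 5]: S o T acts as the identity
print(first(T(S(a))))                # [0, 2, 3, 4, 5]: T o S is not the identity
```

The left shift undoes the right shift, but not the other way around, which is exactly how one composition can be the identity while the other is not on an infinite-dimensional space.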
Let's prove it in one direction. Suppose the composition ST is the identity. Then I claim the null space of T consists of just the zero element. Let v be an element of the null space, so T(v) = 0. Then, you see, the beauty of it is this: you have the composition ST, where T is on the right, and you apply it to v. Since T(v) is zero, S will also kill T(v), because S(0) = 0; the value of any linear map at zero is zero. But ST is equal to the identity map by our assumption, so (ST)(v) = v, and the identity map takes every vector to itself. We conclude that v is equal to zero. That means that if v is in the null space, it has to be equal to zero, which shows that the null space of T indeed consists of just the zero element. Now, by the fundamental theorem of linear maps, which says that the dimension of V is equal to the dimension of the null space plus the dimension of the range, and the dimension of the null space is now zero, we find that the dimension of the range is equal to the dimension of V. But the range is inside W, and we know that the dimension of V is equal to the dimension of W. Therefore the range of T coincides with W: T is surjective. Now, we have shown that the null space is zero, which means T is injective, and we have shown that it is surjective, by using the fundamental theorem and exploiting the fact that the two dimensions are the same, which we know is a necessary condition (and in fact a necessary and sufficient condition, for finite-dimensional spaces) for there to be an isomorphism between them. We conclude that just one equation, ST = I_V, implies that T is bijective, because we have shown that it is both injective and surjective, and that means it's a one-to-one correspondence. But we have also shown that being bijective is equivalent to being invertible. Right? And there we have both conditions: ST is the identity and TS is the identity. So we get the second condition as well: since T is bijective and the composition ST is the identity, S must be the inverse map of T, and for an invertible map it doesn't matter whether you go this way and back or that way and back, you get the identity map either way.
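As a quick numerical illustration of the fundamental theorem of linear maps that drives this argument (the example is not from the lecture), here is a random 3-by-4 matrix viewed as a map from R^4 to R^3; its rank plays the role of dim range T and its nullity plays the role of dim null T.

```python
# The proof leans on the fundamental theorem of linear maps:
#     dim V = dim null T + dim range T.
# A quick numerical illustration for a map T : R^4 -> R^3 given by a random
# 3x4 matrix: the rank is dim range T, the nullity is dim V minus the rank.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))       # matrix of some linear map T : R^4 -> R^3

rank = np.linalg.matrix_rank(A)       # dim range T
nullity = A.shape[1] - rank           # dim null T
print(rank, nullity, rank + nullity)  # rank + nullity == 4 == dim of the source
```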
I find it an ingenious proof, because it uses the equivalence of different points of view on invertible, bijective, injective and surjective maps, and it uses the full force of the fundamental theorem of linear maps, which enables us to compute the dimension of the range of a map from the dimension of the null space. You see, it's very powerful. It's a nice illustration of how all of these notions come together in this proof, to enable us to see that out of these two conditions, which we know for sure are necessary in general (because we've seen an example where one of these conditions alone is not enough to allow us to conclude that the map is bijective), each one is equivalent to the other under the special circumstance that both maps are linear maps between finite-dimensional vector spaces.

Now, this proves the implication in one direction. But of course we can just switch V and W and switch S and T, and if we do so, this implication becomes that implication. Therefore, by proving it in one direction, we automatically prove it in the other direction. We simply relabel: what we called T we now call S and vice versa, and likewise for V and W, and we get the proof of the statement in the opposite direction. Okay? So this is very nice. I will show you in a moment a very concrete application of this, a statement about n-by-n matrices which is very hard to prove directly, but which follows from this argument in a conceptual way. To me, this is one of the first illustrations of the power of linear algebra, where you argue by using the concepts rather than the numerical representations of those concepts. You argue in terms of vector spaces rather than just tuples of numbers, the way we approached linear algebra before, and you argue in terms of linear maps and not matrices. In that framework, certain things become clear once you define them clearly, and once you see clearly the connections between them. And then you can use those insights to come up with very concrete statements about numbers, which would be much more difficult to derive directly.

Before I get to this, I have to talk about this numerical representation. I have to remind you that we have a powerful tool: the representation of vectors in a finite-dimensional vector space by columns of numbers, and the representation of linear maps by rectangular arrays of numbers called matrices. I will put both of these under the umbrella of numerical representations. What does it mean? First of all, suppose V is a finite-dimensional vector space over some field F, and the dimension of V is some number n. Then, if you choose a basis of V (as we know, all bases have the same number of elements, and that's exactly the dimension), say β is a basis x1, ..., xn of V, we know that there is an isomorphism, discussed last time, between V and F^n: the isomorphism which sends a given vector v to the column vector of the coefficients a1, ..., an which appear in the representation of v in this basis. In this basis, v has a unique representation v = a1 x1 + ... + an xn, and I record the coefficients of this representation. These are elements of F; I assign to v this collection of elements of F, this collection of numbers. I call them numbers because in our examples F is either the field of real numbers or the field of complex numbers. I call this a numerical representation of a vector. A vector itself is something abstract: it lives in its own space, and it doesn't know what its coordinates are.
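Here is a minimal sketch of this numerical representation for V = R^2 with two made-up bases: the coordinate column of a vector relative to a basis is obtained by solving a linear system whose coefficient matrix has the basis vectors as its columns.

```python
# A minimal sketch of phi_beta for V = R^2: to find the coordinate column of a
# vector v relative to a basis, put the basis vectors as the columns of a
# matrix and solve a linear system.  The particular vectors are made up.
import numpy as np

v = np.array([3.0, 1.0])

beta       = np.column_stack(([1.0, 0.0], [0.0, 1.0]))   # the standard basis
beta_other = np.column_stack(([1.0, 1.0], [1.0, -1.0]))  # another basis

print(np.linalg.solve(beta, v))        # [3. 1.]  coordinates relative to beta
print(np.linalg.solve(beta_other, v))  # [2. 1.]  same vector, different column
```

The same vector gets two different coordinate columns, depending on which basis was chosen.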
By the way, in the modules I put a link to a video I did for the YouTube channel called Numberphile, which many of you have probably heard of or maybe have seen. I did this video in 2015, my God, nine years ago. At the time I was teaching this class from a different book, but it was the same class, and in that video I was making this point about the difference between the numerical representations of vectors and the vectors themselves. I alluded to some of it at the end of my last lecture: a numerical representation is not the same as the vector itself, because to have a numerical representation you have to choose a basis, like here. But there are many different bases; in general there is no canonical basis, and therefore there are many different representations. A similar sentiment is expressed by the statements "the map is not the territory" or "the menu is not the same as the meal." The column of numbers is the menu, or the map; the vector is the meal, or the territory, something which is living, an object in some vector space. What could that space be? It could be the space of solutions of a system of linear equations, as I talked about last time, or of a system of differential equations. Or it could be a Hilbert space representing the evolution of some quantum system. Or it could be a vector space which arises in the study of transformers in some large language model, or something like this. In general, those spaces do not have a canonical basis. The vector v starts its life there; it's not yet formatted, it doesn't yet have a coordinate system. To get to a numerical representation from a vector, you have to pick a coordinate system, you have to pick a basis, and that's where you exercise your choice, your free will. It is now very fashionable to argue about free will, and this is a very simple example of it. I don't know who has free will, but I know that somebody has to choose a basis; otherwise you cannot do it. I don't know who. Maybe God chooses it, maybe the Chancellor of UC Berkeley or the Chair of the Math department, but somebody has to make this choice. Otherwise you can't go from a vector to its numerical representation, because there are many choices, and different choices give you different results. You can't argue with that. Okay, anyway, watch that video if you like. It's not obligatory by any means, not required, not going to be tested or anything, but I thought you might find it amusing. Anyway, that's what I mean by numerical representation, with all the good and all the subtleties that come with it. Okay, good. That's one aspect of it.

The second aspect is the representation of a linear map, in this case by a matrix. In the first case we had one vector space and a basis in it, which we had to choose. Now suppose we have two vector spaces, both finite dimensional over the same field: the dimension of V is n, as before, and the dimension of W is m. We choose bases, β of V and γ of W. Then to a linear transformation, a linear map T from V to W, we assign an m-by-n matrix, a rectangular array of numbers with m rows and n columns. The j-th column of this matrix is going to be the following: you take the j-th vector x_j of the basis β, you apply T to it, you get a vector in W, and you write it relative to γ; you get a column with m entries. That's the j-th column of this matrix. Okay, so the notation goes like this: M(T)_β^γ. The lower index refers to the basis in the initial space, the source; the upper index refers to the basis in the target of this map. That's our notation, and it is almost the same as in the book. In the book they don't put the β and γ, but I like to, so we remember which bases we chose.

But look at this assignment (actually, I want to keep this, I'll need it): it is itself a linear map. You see, this is the cool thing about linear algebra: when you look at almost any procedure we do, it itself represents a linear map, or gives rise to linear maps. This assignment, going from T to M(T)_β^γ with β and γ fixed, is a map out of the vector space L(V, W) of all linear maps from V to W. You always have this impulse in mathematics: once you have introduced some notion, like the notion of linear maps from V to W, there is always an impulse to say, let's look at the collection of all of them; what is it like, what is it as an object? And we have seen before that this is itself a vector space. It's a secondary vector space, a next-level vector space: you already have the vector space V and the vector space W, and you consider the linear maps between them, but on top of that, you also have the vector space of all linear maps from V to W. On the other hand, over here we have m-by-n matrices; the space of these is denoted F^{m,n} in the book, the vector space of all matrices of this size. Look, this construction gives a map from this vector space, L(V, W), to that vector space, F^{m,n}, and of course it is itself a linear map. You can also show, and this is a simple lemma which I'll just leave for you, that this map is an isomorphism. In fact, it sets up a bijection between these two vector spaces: all linear maps from V to W and all m-by-n matrices.
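To make the column-by-column rule concrete, here is a small sketch using the differentiation map D from P_2(R) to P_1(R) with the monomial bases; this particular example is my own choice, not one from the lecture.

```python
# A small sketch of the rule "the j-th column of M(T) is [T(x_j)]_gamma", using
# the differentiation map D : P_2(R) -> P_1(R) as the linear map.  Polynomials
# are encoded by coefficient lists in the monomial bases beta = (1, x, x^2) of
# P_2 and gamma = (1, x) of P_1.
import numpy as np

def D(p):
    """Differentiate a polynomial given by coefficients [a0, a1, a2]."""
    a0, a1, a2 = p
    return np.array([a1, 2 * a2])     # the derivative is a1 + 2*a2*x

beta = [np.array([1, 0, 0]),          # the polynomial 1
        np.array([0, 1, 0]),          # the polynomial x
        np.array([0, 0, 1])]          # the polynomial x^2

# Assemble M(D): apply D to each basis vector of beta; the result is already
# written in gamma coordinates and becomes a column of the matrix.
M = np.column_stack([D(x_j) for x_j in beta])
print(M)    # [[0 1 0]
            #  [0 0 2]]   a 2-by-3 matrix: m = dim W = 2 rows, n = dim V = 3 columns
```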
With a caveat: this isomorphism depends crucially on the choice of a basis in V and the choice of a basis in W. But it's important to understand that this is an isomorphism. This result goes in parallel with the previous one, that φ_β is an isomorphism: once you choose a basis of V, you get an isomorphism between your vector space and F^n, where n is the dimension. Here, you get an isomorphism between the vector space L(V, W) of linear maps from V to W and the vector space of all m-by-n matrices. These are two parallel results. This is what I call numerical representation: in the first case, numerical representation of vectors; in the second case, numerical representation of linear maps. Ask me if it's not clear.

Okay. Now, what does the property of being invertible translate to under this isomorphism? Suppose you have T in L(V, W) which is an isomorphism. First of all, this implies that the dimensions of V and W are equal; otherwise you can't have an isomorphism. That means that n is equal to m, so we are actually considering n-by-n matrices: the matrix M(T)_β^γ is an n-by-n matrix, a square matrix. Okay. Now, if T is an isomorphism, it means that it is invertible. Then there exists a map S in the opposite direction such that ST is the identity on V and TS is the identity on W. And we have just proved that these two properties are actually equivalent to each other, because we are in a finite-dimensional situation. What does this mean for the corresponding matrices? You have the matrix of T, which is A = M(T)_β^γ. Then you have the matrix of S, which is M(S)_γ^β: now we are going from W to V, so we put the basis γ at the bottom and β at the top, according to our notation. Let's call this one B; just to simplify my notation, I'll write A and B.

Now, what does this property imply? Composition of linear maps on this side corresponds to multiplication of matrices on that side. If I say ST = I_V, it means that BA = M(I_V)_β^β. So what is this matrix? You have the identity transformation from V to V, right? Every vector goes to itself. It's the laziest transformation, it doesn't do anything; it just keeps everything as it is. What is the matrix of this transformation? We said that to every linear map we can associate a matrix; what's this one? Well, let's look at the rule. What is the j-th column? We have to take the image of x_j under this transformation and write its components relative to β. But the identity applied to anything is that same thing; therefore the image is the same as x_j. So the j-th column of this matrix is [x_j]_β, the column which has 1 in the j-th place and 0 everywhere else. That's true for every j, so what you end up with is the identity matrix: the matrix which has 1s on the diagonal and 0s everywhere else. This is a test: if you don't see right away that this matrix M(I_V)_β is the identity matrix, it means that you have to work harder in this course; it means you are not yet seeing what's going on. You have to absolutely see that the coordinate vector of x_j itself is just this column, because that's what the coefficients are: when the vector v is x_j, the coefficient a_j is 1 and a_i is 0 for i not equal to j. If you put together columns like this, you get a matrix which has 1s on the diagonal and 0s off the diagonal. We will call this matrix the identity matrix and denote it I_n, where n is the size; the identity matrix of size n. The upshot of this is that the equation ST = I_V leads us to BA = I_n.
Likewise, the other equation, TS = I_W, leads us to AB = I_n. But we have proved that these two properties are equivalent, and because this assignment is an isomorphism (assigning to a linear map an n-by-n matrix is an isomorphism between the space of linear maps and the space of n-by-n matrices), whatever statement we make about these linear maps is equivalent to the corresponding statement about the corresponding matrices. So as a consequence of the theorem that we proved, we have now proved that if you have two n-by-n matrices A and B, then AB equal to the identity is equivalent to BA equal to the identity. That's the statement I mentioned, a numerical corollary of this conceptual statement.

You see, let me slow down and repeat, because it's important. It's a great illustration of why we're spending time on this stuff. Because you could say, "I have taken Math 54, so I already know how to multiply matrices and stuff like that." But try to prove the following theorem, which is actually a corollary of this result. (Corollary: sometimes we call our statements lemmas, theorems, and so on; a corollary means something that is an immediate consequence of something we have just proved.) Corollary: suppose A and B are two n-by-n matrices over some field F. Then AB = I_n, the n-by-n identity matrix (that's this matrix right here, which has 1s on the diagonal and 0s everywhere else), is equivalent to BA = I_n. Now try to prove this directly. Let's suppose we erase all of this. You know what an n-by-n matrix is; everybody knows, it's very easy, just an array of numbers. Say here's a 4-by-4 matrix with some arbitrary coefficients. Suppose you have matrices like this, and you know how to multiply matrices: every row multiplied by every column, and so on, and you get an n-by-n matrix. Suppose it happens that AB is the identity. That means n-squared equations: if you multiply the i-th row of A by the i-th column of B, you get 1, and if you multiply the i-th row by the j-th column, you get 0 whenever i is not equal to j. Now take the opposite order and try to prove that BA is the identity. You see, it's completely different, because here you were multiplying rows of A by columns of B, and there you'll be multiplying rows of B by columns of A. Now, there are ways to do it directly; it's not impossible to do it just in the framework of matrices. Yes, it's possible, but it's a very long and tedious procedure to show that in the opposite order you also get the identity. But now we get it for free.
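Here is a quick numerical illustration of the corollary for one made-up 4-by-4 example (a sanity check, of course, not a proof): B is computed using only the equations AB = I, and BA = I then comes out automatically.

```python
# A numerical spot check of the corollary: find B by solving only the n^2
# equations "A B = I", column by column, and observe that B A = I as well.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))        # a random 4x4 matrix (invertible with probability 1)
B = np.linalg.solve(A, np.eye(4))      # B is defined by the condition A @ B = I

print(np.allclose(A @ B, np.eye(4)))   # True, by construction
print(np.allclose(B @ A, np.eye(4)))   # True: the "free" conclusion
```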
You see, and I really wanted to draw your attention to this, to focus on how powerful these tools are and how many things we have used for it. Number one, we translated this numerical equation about matrices into something much less tangible, the abstract notions of linear maps and so on: two linear maps whose composition is the identity, and so forth. Okay, after that, we spent some time showing that in this context of linear maps, in the case of finite-dimensional vector spaces (and of course we are in the finite-dimensional case, because these are n-by-n matrices, and n is the dimension), these two properties are equivalent to each other. But how do we prove it? We don't prove it by calculating rows multiplied by columns and so on; we prove it in a completely different way. We examine the null space of T and the range of T by using this equation. We find that this equation implies that the null space is zero. We find that the fundamental theorem of linear algebra, the fundamental theorem of linear maps, implies that the range is the entire W. We derive from that that T is both injective and surjective, and we say that this means it's bijective. We then use some work, which I did last time, showing that bijective maps are the same as invertible maps, invertible meaning that both conditions are satisfied. And that's how we show that the second condition is satisfied. And then we translate back into matrices. You see, we prove it in a totally different realm, the realm of abstract vector spaces and linear maps. We use the full power of notions like basis and dimension, null space and range, the fundamental theorem of linear maps, and so on. We do that argument in the abstract realm, and then we bring the result back to our numerical realm, and we get something completely unexpected and non-trivial: guess what, if you take the product in the opposite order, you will also get the identity matrix.

Now, I hope you realize that for general matrices A and B it is certainly not true that AB is equal to BA. For most matrices, if you take just a random pair, AB will not be equal to BA; that's why the algebra of matrices is called non-commutative. You see, it's doubly surprising: first of all, in this case BA is exactly equal to AB, for no apparent reason, and moreover both are equal to the identity. It's a very special case; it's not the generic situation. Any questions? That's the idea, and the deepest results of mathematics are like that, where you get something seemingly out of thin air. It almost looks like we were just arguing and pouring water from one container to another without much benefit, but actually here is a very tangible benefit: we learn something about specific, concrete numerical objects, n-by-n matrices.

All right, now the next question I want to talk about is: what happens to numerical representations, both the numerical representation of a vector and the numerical representation of a linear map, when we change the basis? Okay, I've said enough about how the result depends on the basis; now it's time to actually give a formula for how they change. If we change the basis, the first question is: how does this column change if we go from the basis β to some other basis, let's call it β'? This is question number one, which refers to the numerical representation of vectors. Suppose again you have this vector space V, which has dimension n. Now we have two different bases: β, which is x1, ..., xn as before, and β', which is x1', ..., xn'. Okay, so you have two maps now. Remember, I used the notation φ, but I should put an index here: you have φ_β and φ_β', both from the same V to F^n. Here φ_β records, for a given vector v in V, the coefficients a1, ..., an in the decomposition of v as a linear combination of x1 up to xn. The second one records the decomposition of the same v, but relative to the different basis: instead of x1, ..., xn, you have x1' up to xn'. It's a whole different collection of coefficients; it definitely depends on the choice of a basis. Just look here: suppose you are in the plane, and this is your basis x1, x2, because it's dimension two. Suppose v is... well, just x1 is not a good example; let's say v is x1 plus x2, so the coefficient a1 is 1 and the coefficient a2 is 1.
But suppose I take a different basis in which v itself is the first basis vector: this is x1' and this is x2'. Then v is x1' plus zero times x2'; the coefficients are 1 and 0. That means that φ_β takes v to the column vector (1, 1), because the coefficients are 1 and 1, but φ_β' takes it to (1, 0), because that's what we have here. Obviously, this numerical representation changes if you change the basis; this is a very simple illustration of that. In fact, if you think about it, there is more; it's much worse, in some sense, than you might expect: any nonzero column of numbers can represent any given nonzero vector for a particular choice of basis. In other words, it's like asking, what is your address? Your address, the coordinates a1, ..., an, is relative to a particular coordinate grid; it's like streets and avenues. But imagine that they do the urban planning differently and set up a different collection of avenues: suddenly your address changes. Actually, your address can be anything, depending on how the urban planning is set up. That's the difference between the numerical representation and the object itself: your house is still there, but the coordinate system has changed, and therefore it gets a different address.

Okay. Now, how do we find the second address of this vector, which is φ_β'(v), if we know φ_β(v)? What information do we need to be able to translate this vector of coordinates into that vector of coordinates? What we need, as it turns out, is what's called, for lack of a better expression, the change of coordinates matrix, or change of basis matrix. It's actually a matrix too: you deal with matrices by using matrices. You introduce this matrix Q, which is called the change of basis matrix; we call it the change of basis matrix from β' to β. Okay? What is it? Here's what you do: you take the elements of the second basis, β', and you write them relative to the original basis β. Let me emphasize this: β' consists of x1', ..., xn'. You take each of those and write its coordinates relative to β. Okay, so it's an n-by-n matrix: the first column is [x1']_β, the second column is [x2']_β, and the last column is [xn']_β. Each column has length n, and we have n of them. The result is an n-by-n matrix, obviously, right?

How is this relevant? Lemma: the following lemma expresses the coordinate vector relative to one basis in terms of the coordinate vector relative to the other. Namely, for any v, [v]_β is equal to Q times [v]_β'. The one thing, and it's easy to get lost in this discussion, is which one is which. This is how I memorize it for myself: I take the vectors of the primed basis, β', and write them in terms of β. If I do that, that's the matrix which has the property that if I hit the coordinate representation relative to β' with it, I will get the coordinate representation relative to β. But it's easy to forget which way it goes, so I suggest not memorizing it, but knowing how to recover it. You recover it very easily by observing that it's enough to prove it when v is one of the basis vectors. Right? This is a linear formula: if you prove it for the members of a basis, it will be true for any vector, because any vector is a linear combination of them. On the left-hand side you will have a linear combination, and on the right-hand side the same linear combination. Therefore, if you already know it for basis vectors, you know it for everything, right?
So it's enough to check that [x_j']_β = Q [x_j']_β' for all j from 1 to n, because then you know it for all vectors. What happens if I put in x_j'? Then here I get [x_j']_β, and here I get Q times [x_j']_β'. But as we just discussed, if you take the j-th vector of a given basis, its numerical representation relative to that basis is just the column (0, ..., 0, 1, 0, ..., 0): the column with a single 1 in the j-th position and zeros everywhere else. And if you hit that column with Q, what are you going to get? You get precisely the j-th column of Q. But that's exactly what it should be, because Q is constructed from precisely these columns, you see. That's why you get the equality, and that shows you that this is true for every vector v. This is a way to remember it: instead of memorizing, just look at what happens on the right-hand side when you take the simplest vector, which is of this form; then you know you get the j-th column, and you ask what the j-th column should be for this equation to be true, and then you recover the formula. That's how you reconstruct it without memorizing. Ask me if it's not clear.

Yes, which one? Okay. Is it clear that this vector x_j' is represented by this column? Right, because you write this vector as a linear combination of x1', x2', and so on: the coefficient in front of x_j' is 1, and the coefficients of all the rest are zero. That's the equation I'm talking about. If you write x_j' in this form, all coefficients will be zero except the j-th coefficient, which will be 1. Therefore, the corresponding column vector is like this: almost all entries are zero, and there is only one nonzero entry, which is in the j-th position. Now, if you take this column and multiply the matrix, an n-by-n matrix, by it, what are you going to get? You get the j-th column of that matrix. That's something you have to see once and for all: multiply a matrix by this column and you will get the j-th column of the matrix. You can check it after class, but for now just trust me on this. Okay. Then you get on the right-hand side the j-th column of Q. That tells you that for this equation to be true, the j-th column of Q should be [x_j']_β: if you want to get the representation of any vector relative to β, and in particular, in this situation, the representation of x_j' relative to β, then on the right-hand side you are going to have the j-th column of Q. So that means that for this to work, Q has to have exactly this form. Okay. Any other questions?
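Here is a minimal sketch of this lemma for two made-up bases of R^2: the columns of Q are the coordinate columns of the vectors of β' relative to β, and then [v]_β = Q [v]_β' for an arbitrary vector v.

```python
# A minimal sketch of the change-of-basis lemma [v]_beta = Q [v]_beta' for two
# made-up bases of R^2.  The j-th column of Q is the coordinate column of the
# j-th vector of beta' relative to beta.
import numpy as np

B      = np.column_stack(([1.0, 1.0], [0.0, 2.0]))    # columns = basis beta
Bprime = np.column_stack(([1.0, 3.0], [2.0, 0.0]))    # columns = basis beta'

coords = lambda basis, v: np.linalg.solve(basis, v)   # phi_basis(v)

# Build Q column by column: Q[:, j] = [x_j']_beta.
Q = np.column_stack([coords(B, Bprime[:, j]) for j in range(2)])

v = np.array([5.0, -1.0])                             # an arbitrary vector
print(np.allclose(coords(B, v), Q @ coords(Bprime, v)))   # True
```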
Next: matrices. What about linear maps? In general, we have numerical representations of linear maps going between arbitrary vector spaces V and W; the two spaces don't have to be the same. We choose a basis in each, and one can write a general formula. However, the formula which is most useful is the one for the case where V is equal to W, so for linear maps we'll restrict ourselves to the special case when V is equal to W. Then, in principle, you still have a choice: you have a map from V to V, and you could choose a basis β for V viewed as the source and another basis for V viewed as the target. But we will consider the special case where β is equal to γ. We are going to write the matrix as M(T)_β^β; in fact, for shorthand, we'll just write M(T)_β. If it is understood that the map is from V to itself, we choose only one basis, and we use it for both incarnations, both roles of V, as the source and as the target of the map. So our typical notation would be M(T)_β^β, but we'll just write M(T)_β.

The question is how to connect the two matrices. Let's call the formula for coordinate vectors Lemma 1; the question now is how to relate M(T)_β' and M(T)_β. It's going to be analogous to that formula, but for the numerical representation of linear maps, whereas that was the numerical representation of vectors. The answer involves the fact that, first of all, Q is invertible. Maybe, for that lemma to make sense, I have to come up with an intermediate statement first, so I'll call the final formula Lemma 3 and the intermediate statement Lemma 2. Okay? Lemma 2 is this. Remember, Q is given like this: you take the vectors of the primed basis β' and consider their components relative to β. But we can do the opposite: take the basis elements of β and consider their components relative to β'. That gives another matrix: [x1]_β' is the first column, [x2]_β' is the second column, and so on up to the last column, [xn]_β'. The statement is that this is the inverse matrix of Q; that is to say, Q times this matrix is the identity matrix, or, as I explained earlier in the corollary that I wrote, the two statements are equivalent: you can write it this way or that way. Remember also that last time we proved that the inverse is unique. In fact, I was there using both AB equal to the identity and BA equal to the identity, but we can now say, in that earlier corollary, that B is A inverse and A is B inverse, because the inverse is unique: if you have a matrix A and a matrix B such that AB is the identity, then B is unique; there cannot be two different matrices satisfying this property. Therefore we can call it A inverse, and then you see that the inverse of the inverse is the original matrix. Okay? So this other matrix is Q inverse: not only is this matrix Q invertible, in the sense that it has an inverse, but its inverse can be constructed in exactly the same way, by simply switching β and β'.

Now, how to prove this? Again, you can try all kinds of things, but every time you have an equality of matrices, consider the possibility of simply applying both sides to column vectors like e_j, the column with a single 1 in the j-th position. If you know that both sides applied to each such column are the same, then the two matrices are the same, for the simple reason that multiplying a matrix by such a column gives you the corresponding column of that matrix: every column of this matrix equals every column of that matrix, and then you're done, because it means the two matrices are equal. What else is there other than columns? If you know that every column of this one is equal to every column of that one, they are equal. So define Q inverse by this formula, define Q by this formula, then look at this equation, apply it to a column vector e_j, and see that the result is what it should be; and likewise for the other equation. I'll leave it for you to verify. Now that we know what Q is and what Q inverse is, both of them defined in a very similar fashion by simply expressing the basis vectors of one basis as linear combinations of the basis vectors of the other basis, finally, the big reveal, Lemma 3, is that we need to use both of them to convert the matrix representing our linear map in terms of β into the matrix representing it relative to β': namely, M(T)_β' = Q^{-1} M(T)_β Q.
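Here is a minimal sketch of this last formula for a made-up operator on R^2, taking β to be the standard basis so that M(T)_β is just the usual matrix of T; the matrix of T relative to β', computed column by column from the definition, agrees with Q^{-1} M(T)_β Q.

```python
# A minimal sketch of M(T)_beta' = Q^{-1} M(T)_beta Q for a made-up operator T
# on R^2.  Here beta is the standard basis, so M(T)_beta is the usual matrix A
# of T, and the columns of Q are simply the vectors of beta'.
import numpy as np

A      = np.array([[2.0, 1.0],
                   [0.0, 3.0]])                       # M(T)_beta, beta standard
Bprime = np.column_stack(([1.0, 1.0], [1.0, -1.0]))   # columns = basis beta'
Q      = Bprime                                       # [x_j']_beta = x_j' itself

# Direct computation: the j-th column of M(T)_beta' is [T(x_j')]_beta'.
M_direct = np.column_stack(
    [np.linalg.solve(Bprime, A @ Bprime[:, j]) for j in range(2)])

M_formula = np.linalg.inv(Q) @ A @ Q
print(np.allclose(M_direct, M_formula))               # True
```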
Again, I will leave the proof to you. To prove it, simply apply both sides to a column vector of this form, e_j; then interpret the product in terms of the action of the corresponding linear map on the corresponding vector, and see that the results on the left and on the right are the same. Any questions? This gives you an effective way to translate. The first thing to understand is that every vector space of dimension n is isomorphic to F^n; we discussed this last time. The second thing to understand is that there are many ways to establish such an isomorphism. The third thing to understand is that a choice of an isomorphism like this is equivalent to a choice of a basis. And the fourth thing to understand is how the numerical representations of vectors and linear maps change if you pass from one basis to another. This is what's covered on this blackboard.

All right, that completes our discussion of invertible maps and how they relate to the corresponding matrices. Now we start the next topic, which is the topic of dual spaces. We're going to skip section 3E in the book; from the book, we're going to go straight from 3D to 3F. Obviously, sections that we skip are not going to be on tests, on quizzes or exams. Speaking of which, our midterm exam is coming up in two weeks. It will be on the material that we study up to and including next Thursday, a week from today. On Tuesday, I will give you more information about it; I will post a practice exam on the course site so you can see what kind of things to expect, and I'll say a few more words about it then. It will be in class on Thursday, February 29, two weeks from today. What else? There will be a review lecture on Tuesday the 27th; that Tuesday lecture will be entirely devoted to the review, and you will have all the office hours, of all the GSIs and myself, available for discussing the material in preparation for the midterm exam.

Okay, let's talk about dual spaces. What is the dual space? This is an important topic, actually another conceptual topic, which at first may look a little bit arcane, but it does have very important applications indeed. What is the dual space? Remember that we have defined this special vector space L(V, W): given any two vector spaces over the same field, we have the vector space of linear maps from V to W, a vector space over the same field, called L(V, W). We are going to consider the special case where W is the simplest nonzero vector space over F. The simplest such vector space obviously has dimension one, and all one-dimensional vector spaces are isomorphic to each other and to the vector space F^1. In this case, a column consists of simply one element; it is just an element of F, so F^1 is actually just F itself. It's not surprising, of course, that every field is a vector space over itself, because it has all the requisite operations that you need: you have addition and scalar multiplication, and in this case the scalars you are multiplying by are elements of the space itself. In this case, the corresponding space of linear maps L(V, F) is called the dual space of V, denoted V'. Sometimes, in some textbooks, you will see the notation V*, but in the textbook we're using it is V', so I'll follow this notation. This is the first perspective on it: it's just a special case of a space of linear maps. Put that way, it doesn't sound particularly exciting; why look at this case and not, for example, the case of L(V, F^2) or L(V, F^3)? But actually there is more. It's called dual for a reason.
Duality doesn't just mean that you go from an object of one kind to an object of another kind. It also means that you can go back. As we will discuss next week, if V is finite dimensional, then you can apply this procedure twice: you go from V to V', and then from V' to (V')'. I make a curly arrow, meaning that it's not a map but a procedure. You start with V, you construct V' by this formula, just taking L(V, F), and then you do it again: you take the dual of the dual. Okay? Then it turns out that the result is actually going to be V itself, canonically, without any choices, without a choice of basis or anything like that. No choices. I spent a big part of last lecture and of today's lecture emphasizing that sometimes you have to make a choice, for instance when you go from vectors to their numerical representations, and likewise from linear maps to their matrix representations. But sometimes there are things which are canonical, and this is one of the beautiful examples of that. Canonical constructions are prized, they are special, because when you have something that is canonical, that does not depend on any choices, it means that it represents a certain fundamental structure that is most likely useful for something. And in this case, it definitely is. So the double dual is V itself, provided V is finite dimensional, which is very interesting. In that sense, you could say V' is dual to V, but the original V is also dual to V'; that's another way to say that the double dual of V is V itself. But we'll discuss this in more detail next week; this is just a preview, to give you an idea of why this is interesting.

What I want to talk about today is, first of all, a couple of examples. Second, I'm going to show the links between V and V': for instance, you can go from a basis of V to a basis of V', and you can go from linear maps from V to W to linear maps from W' to V'. First things first: examples. The first example is V = F^n, just column vectors with coefficients in F. In this case, V' can also be thought of as n-tuples of elements of F, but it is useful to arrange them as rows rather than columns, for the following reason. Before I explain this, let me just say in general what an element of V' is; I should have said this before the example, but let me do it now. An element of V', call it φ, is by definition a linear map from V to F, because we defined V' as the set of all such linear maps: the vector space consisting of all linear maps from V to F, with F regarded as a vector space over itself. What this means, and I remind you what a linear map is, is that the sum goes to the sum and a scalar multiple goes to the scalar multiple; that's what a linear map is. Such a map has a special name: it is called a linear functional. Such maps are called linear functionals. For example, if V is F^n, then V' consists of all linear maps from F^n to F. For instance, if F is R and, let's say, n is equal to 2, then such a map is a map from R^2 to R. Now, we have studied such maps under the umbrella of multivariable calculus, right? In multivariable calculus we study functions of two or three variables. In fact, we could consider both cases, but in calculus we consider a very large class of functions: continuous functions, differentiable functions, trigonometric functions, polynomial functions, and so on, likewise in two or three variables. What distinguishes our interest here is that we are only interested in linear maps.
A linear map like this, if I choose coordinates x and y, is always a map of the form ax + by, a homogeneous degree-one polynomial; only such maps. Not surprisingly, the dimension of this space is going to be the dimension of V, in this case dimension two. It's a very small piece of the space of functions that we consider in calculus. Likewise, in the three-dimensional case it will be ax + by + cz, where a, b and c are some real numbers. So there is a two-dimensional space of linear functionals in the first case, and a three-dimensional space of linear functionals in the second case. In general, when V is F^n, we can identify: there is an isomorphism between V' and the space of row vectors. Under this isomorphism, a row vector corresponds to the linear functional whose value on the column vector (a1, ..., an) is the product of this row and the column; remember, a row and a column can be multiplied according to the usual rules. And it turns out that all linear functionals on F^n have this form; these two cases, when F is R and n is equal to two or three, are special cases of it.

Maybe these examples are too simple. More interesting examples arise as follows. Let's take V to be the space of polynomials of some degree, let's say over R, just to be a little bit more specific: polynomials of degree m or less over the real numbers, one of our staple examples. Here is an example of a linear functional. We are interested in linear functionals, that is to say, linear maps from V to R, because we are now over R specifically. Here is an example: we can define φ_1 in the following way. Its value on a polynomial p(x) is going to be the value of p at 1; or it could be the value at any real number a, giving φ_a. For example, if p(x) is 3x^2 + 2 and, let's say, a is 2, then φ_2(p) is obtained by simply substituting 2 for x, which is going to be three times four plus two, which is 14. The value of φ_2 on this polynomial is equal to 14. And likewise for arbitrary p and arbitrary a: evaluating polynomials at a specific number a can be viewed as a linear functional on the space of polynomials. Another possibility is to first take a derivative, or a second derivative, or some more complicated expression involving derivatives, and then evaluate. A third possibility is to take an integral. In all of these cases, verify that the required properties hold: the value on the sum of two elements of the vector space is equal to the sum of the respective values, and the value on a scalar multiple is equal to the scalar multiple of the value. Here's another example: let's call it φ_{1,2}, defined by φ_{1,2}(p(x)) = the integral from 1 to 2 of p(x) dx. Likewise, you can have arbitrary a and b instead of 1 and 2. Integration is also a linear operation from the point of view of the polynomials that you are integrating; and you could actually do this in a more general vector space, of all integrable functions on the real line.
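Here is a small sketch of these two kinds of linear functionals on P_2(R), with polynomials encoded by their coefficient lists; the spot checks of additivity and homogeneity use made-up inputs.

```python
# A small sketch of two linear functionals on P_2(R) (polynomials of degree
# <= 2), with a polynomial encoded by its coefficient list [a0, a1, a2]:
# evaluation at a point and integration over an interval.
import numpy as np

def ev(a, p):
    """phi_a(p) = p(a)."""
    p0, p1, p2 = p
    return p0 + p1 * a + p2 * a ** 2

def integ(a, b, p):
    """phi_{a,b}(p) = integral of p from a to b."""
    p0, p1, p2 = p
    F = lambda x: p0 * x + p1 * x ** 2 / 2 + p2 * x ** 3 / 3   # an antiderivative
    return F(b) - F(a)

p = np.array([2.0, 0.0, 3.0])        # p(x) = 3x^2 + 2, the example above
q = np.array([1.0, -1.0, 0.0])       # q(x) = 1 - x

print(ev(2, p))                                            # 14.0, as above
print(np.isclose(ev(2, p + q), ev(2, p) + ev(2, q)))       # True: additivity
print(np.isclose(integ(1, 2, 5 * q), 5 * integ(1, 2, q)))  # True: homogeneity
```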
Okay, those are the examples. Next, very briefly, I want to explain one thing: if V is finite dimensional, then V' is also finite dimensional, and it has the same dimension. That's simply because V' is L(V, F), and in general the dimension of L(V, W) is the product of the dimensions of V and W; in our case the dimension of F is one, so the dimension of V' equals the dimension of V. All right? They have the same dimension, and therefore any basis of V has the same number of elements as any basis of V'. But there is more: there is a matching of bases. There is a certain correspondence of bases: given a basis β of V, there is a canonical basis β' of V'. There is an unfortunate clash of notation here; I have previously used β' for another basis of V, and now it is going to be a basis of V', but don't let that confuse you. It is defined as follows. Let β have elements x1, ..., xn; then β' will have elements φ1, ..., φn. The crucial property is that φ_i is defined by the equations: its value on x_j is equal to 1 if i is equal to j, and 0 otherwise. I'll leave it for you to check, number one, that this is a basis; then, in particular, it has the same number of elements, which is clear. Unfortunately, I'm out of time, so I have to stop here. Naturally, we haven't covered all of that section yet; I will indicate that in the homework and reading assignment.
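As a postscript (not part of the lecture), here is a minimal sketch of the dual basis condition for a made-up basis of R^2, using the earlier identification of linear functionals on F^n with row vectors: the dual basis functionals turn out to be the rows of the inverse of the matrix whose columns are the basis vectors.

```python
# A minimal sketch of the dual basis condition phi_i(x_j) = 1 if i = j and 0
# otherwise, for a made-up basis of R^2.  Identifying linear functionals on R^n
# with row vectors, the dual basis is given by the rows of B^{-1}, where the
# columns of B are the basis vectors x_1, ..., x_n.
import numpy as np

B   = np.column_stack(([1.0, 1.0], [1.0, -1.0]))   # columns = basis x_1, x_2
phi = np.linalg.inv(B)                             # row i = the functional phi_i

# phi_i(x_j) is the row-times-column product phi[i] @ B[:, j]; collecting all
# of these values gives the matrix phi @ B, which should be the identity.
print(np.allclose(phi @ B, np.eye(2)))             # True
```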