Today we'll talk about something that should be familiar from Math 54 or an equivalent class, a first course in linear algebra: representing linear maps by matrices. How does this work? I will explain it in a slightly different way from the book, and I will comment on the difference between the two. So, you have a finite-dimensional vector space V over a field F. Therefore it has a finite basis. Let's choose a basis; the number of elements, of course, is the dimension of the vector space, and it does not depend on the choice of basis. Now, we know that if you have a basis, then every vector in V has a unique representation as a linear combination of elements of this basis: v = a_1 v_1 + ... + a_n v_n, where each a_i is an element of F. Sometimes we will use a shorthand and write a summation sign, sum over i from 1 to n of a_i v_i. Because this representation is unique, once we fix this basis we have a one-to-one correspondence, one-to-one in the sense I explained last time; the other term is bijection, between V and F^n. That is to say, to every v we assign the numbers a_1, ..., a_n. It is customary to arrange them as a column of numbers; we will see in a moment why that is convenient. To every vector you assign this collection by writing v as a combination of v_1, ..., v_n; we have fixed this basis v_1, ..., v_n. Conversely, if you have such a collection, you can write this linear combination and you get yourself an element of V. This sets up a one-to-one correspondence: for every v there is one specific column of numbers, and vice versa. I will use the notation [v]_beta for it, where beta references this basis. Remember, here a basis is viewed as a list of elements of V. Which means that it's not just a subset of n elements, but an ordered subset: the first, the second, the third, and so on.
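The correspondence v ↔ [v]_beta can be sketched in code. This is a minimal illustration, not from the lecture: the basis vectors b1, b2 and the sample vector v are hypothetical choices in R^2, and the coordinates are found with Cramer's rule for the 2x2 case.

```python
# Minimal sketch of the bijection v <-> [v]_beta in R^2.
# b1, b2, v below are hypothetical illustrative choices.

def coords(v, b1, b2):
    """Return (a1, a2) with v = a1*b1 + a2*b2, via Cramer's rule (2x2 case)."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    a1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    a2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (a1, a2)

def from_coords(a, b1, b2):
    """Inverse direction: rebuild v from its column of coordinates."""
    return (a[0] * b1[0] + a[1] * b2[0], a[0] * b1[1] + a[1] * b2[1])

b1, b2 = (1.0, 1.0), (0.0, 2.0)      # an ordered basis beta of R^2
v = (3.0, 5.0)
a = coords(v, b1, b2)                # the column [v]_beta, here (3.0, 1.0)
assert from_coords(a, b1, b2) == v   # round trip: the assignment is a bijection
```

The round trip is the whole point: each vector determines exactly one column of coordinates, and each column determines exactly one vector.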
This is essential because when you form a column, you know that there is a first entry, a second entry, and so on as you go from top to bottom. To have this assignment indeed give you a bijection, you really need to talk about an ordered collection v_1 up to v_n. All right, now this representation is pegged to a specific basis beta. It's an interesting question how it changes if we pass from one basis to another; we will probably talk about that next week, since we won't have time to discuss it today. For now, simply imagine that you have fixed the basis and then you have this representation. That is the first observation. The second observation is that we can do something similar for linear maps. Now suppose that V and W are two vector spaces over the same field. Suppose you choose a basis beta, v_1, ..., v_n, of V, and a basis gamma, w_1, ..., w_m, of W. (Yeah, this blackboard kind of looks like a war zone, it's starting to annoy me; let's just cover it. I don't know what happened here, and I don't want to know. Okay, let's go back to this.) We're assuming that both V and W are finite-dimensional: n is the dimension of V and m is the dimension of W. Then you can represent every vector of V as an n-column, and every vector of W as an m-column. Think about it as a numerical representation. This is all the rage today with large language models and so on: the idea that you can represent things by vectors, in the sense of these columns, as collections of numbers. In large language models you want to represent what are called tokens, which are more or less syllables, or some parts of words, by collections of numbers. And then you try to find a way to represent them so that they get clustered: the distance between them will be smaller if they occur more frequently together in text.
In other words, this is the idea of numerical representation of things which arise naturally, like language, like sentences. A sentence a priori is something that we listen to, something that corresponds to some vibration, some wave, right? But we try to break it down and represent it by numbers. This is the whole idea of computation: to what extent can we represent various processes that happen in the world, or within ourselves, in terms of collections of numbers? There are many limitations to that, but it is also a very powerful idea. What we're talking about is a rudimentary stage of this idea, where the abstract notion of a vector space, which a priori is given to us as something that has nothing to do with numbers (for instance, we talked about vectors on the plane, and there are many other examples of that nature), is broken down, so to speak, or represented in a more palatable form as a collection of numbers. That's what this is all about: these are numerical representations of all the objects, all the ideas that we have talked about up to now. Vectors get represented by a column of numbers if they live in a finite-dimensional vector space and we have chosen a basis. Now, if you have a linear transformation, you also can have a numerical representation of this linear transformation, by a matrix. Nothing to do with the Matrix of the movie. Maybe it does; we'll find out. You have a linear map T from V to W. Remember how Morpheus asks: do you know what the Matrix is? Would you like me to show you? Okay, so would you like me to show you what a matrix of a linear map is? It is obtained as follows. It is all the data that you need to know to reconstruct this linear map once you fix the basis in V and the basis in W. What do we need to know to describe this linear map? It is necessary and sufficient to know exactly where each of the basis elements goes under T. That is to say, T(v_i) is going to be an element of W, because T is a map from V to W.
If we know where each v_i goes, for i from 1 to n, then we know where every vector of V goes. Why? Because every element of V can be written in this form: we write v = a_1 v_1 + ... + a_n v_n. Then we know by linearity that T(v) = a_1 T(v_1) + ... + a_n T(v_n). So you see, if we already know what T(v_1), T(v_2), ..., T(v_n) are, then we know what T(v) is for every v, simply because every v can be written as a combination of the basis, and if you have a linear combination like that, then applying T to it gives you the same linear combination of T(v_1), ..., T(v_n). Okay? So we agree that this is all we need to know: T(v_i) for all i from 1 to n. But these are in turn elements of W, and in W we have chosen a basis gamma. Therefore T(v_1), for example, can be written as a linear combination of the basis elements we have chosen, gamma. Let's write it. But now keep in mind that since we're calculating T(v_1), we need two indices: one index records to which v_i we applied T, and let me call the other one j, to be consistent with what's coming next. So we'll write T(v_1) = A_{11} w_1 + A_{21} w_2 + ... + A_{m1} w_m. Hold on to this. I'm choosing capital letters, but it's the same idea as before; it's just that now we want to represent not just a single vector but a bunch of vectors T(v_1), T(v_2), and so on. T(v_2) will be A_{12} w_1 + A_{22} w_2 + ... + A_{m2} w_m, and this index 2 is the second index, recording the v_i to which we apply T. Here, for example, 1 appears as the second index throughout; there it is 2 throughout. The first index corresponds to which w appears. I hope it's clear. The last one will be T(v_n) = A_{1n} w_1 + A_{2n} w_2 + ... + A_{mn} w_m. You see, within each equation all the coefficients play the same role: they share the same second index. Okay? Now, since I said that it's enough to know these vectors, then surely it's enough to know all of these numbers: to know T(v_1) means to know these numbers.
We are back to the situation of representing vectors by columns, but now in W: we're representing T(v_1) as a linear combination of the basis, which is called gamma, right? So those numbers which appear as coefficients are the bona fide representatives of T(v_1), because W is now in one-to-one correspondence with F^m, where [T(v_1)]_gamma is the column vector with those entries A_{11}, A_{21}, ..., A_{m1}. All right? In general, [T(v_j)]_gamma is the column A_{1j}, A_{2j}, ..., A_{mj}. Therefore, whereas to know a vector in an n-dimensional vector space V it is sufficient to know a collection of n numbers once you have chosen a basis, to know a linear map from an n-dimensional vector space to an m-dimensional vector space, once you've chosen a basis in the first one and a basis in the second one, it's enough to know how many numbers? m times n, right? This completely determines everything, because once we know those numbers, we know what T(v_1), ..., T(v_n) are, and then we know T(v) for every v. Now the question becomes how to package these numbers in a nice way: as a rectangular array of numbers. The small deficiency here, or a potential for small confusion, is that when I write an expression, for instance when I write v as a combination of the v_i's, it goes in one row, but we represent it as a column. Likewise, what appears as rows of coefficients on that blackboard, we will represent as columns. The matrix, which I'll call M(T), with indices to indicate relative to which bases it is taken, will be the following. I take [T(v_1)]_gamma; this is going to be a column of m numbers. These are precisely the coefficients appearing in the first equation. Then I take [T(v_2)]_gamma, which is the numbers from the second equation, but I arrange them as a column again, and I put the columns together. What do I get? I get a rectangular array, where the first column is those guys A_{11}, A_{21}, ..., A_{m1}.
Now, instead of writing out the second one, let me write the j-th one. The j-th column is A_{1j}, A_{2j}, ..., A_{mj}. But these numbers you can also think of as the equation for T(v_j), written in one line. There is a slight discrepancy between our desire to represent things as vertical arrays and the fact that we are used to writing things in one row. So there is a switch between rows and columns. You might ask, why are we doing this? You will see in a moment. Basically, no matter how we do it, there will be a discrepancy somewhere; we choose to have the discrepancy at this level. It's a tradition, if you will. The last column is A_{1n}, A_{2n}, ..., A_{mn}. So the matrix consists of exactly those m times n numbers, but they are arranged, if you will, by transposing: each row of coefficients becomes a column. Okay? First of all, what should be clear by now is that these numbers uniquely define what this linear transformation is. This matrix A, if you will, is the matrix associated to T, a linear transformation from vector space V to vector space W. I'm claiming that there is a very economical way to represent it: just an array of m-by-n numbers. These numbers are obtained in this way, by simply representing each of the vectors T(v_1), ..., T(v_j), ..., T(v_n) relative to the basis gamma. All you need to represent a linear map from an n-dimensional vector space to an m-dimensional vector space is a rectangular array of m-by-n numbers, which is very nice. The caveat is that it presumes a choice of basis. In general, there is a question of which basis we should choose, because a general vector space does not have a canonical basis; we will come to this later. But once somebody has chosen a basis for you, or you have chosen it yourself, and there's no ambiguity about which basis of V we're talking about, and likewise you have chosen a basis of W as well, then all you need to know is a collection of m times n numbers.
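The column-by-column construction just described can be sketched in code. The map T below is a hypothetical example, not from the lecture; beta and gamma are taken to be the standard bases of R^2 and R^3, so that [T(v_j)]_gamma is just T(v_j) itself.

```python
# Sketch: the matrix of a linear map is built column by column from T(v_j).
# T is a hypothetical sample map R^2 -> R^3; beta, gamma are standard bases.

def T(v):
    """A sample linear map T(x, y) = (x + y, 2*x, 3*y)."""
    x, y = v
    return (x + y, 2 * x, 3 * y)

# Images of the basis vectors of the source...
columns = [T((1, 0)), T((0, 1))]

# ...become the columns of the m-by-n matrix (here m = 3, n = 2):
M = [[columns[j][i] for j in range(2)] for i in range(3)]

assert M == [[1, 1],
             [2, 0],
             [0, 3]]
```

Note the transposition the lecture mentions: each image T(v_j) is naturally written in one line, but it is stored as the j-th column of M.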
Now suppose I have that. Then there is actually a very simple algorithmic procedure for finding what T(v) is for a given vector v. We want to find T(v) for a general element v, and we already know what T(v_1), T(v_2), ..., T(v_n) are. In what sense do we know them? We know them as columns: we said that in this matrix, column number j contains precisely the coordinates of T(v_j) relative to the basis gamma, you see. Now suppose that I have v, which is given by the column vector with entries a_1, ..., a_n; that is, v = a_1 v_1 + ... + a_n v_n. Then T(v) is going to be a_1 T(v_1) + ... + a_j T(v_j) + ... + a_n T(v_n); I'm writing a generic term here. Actually, I already explained this, but I'll write it one more time. But now suppose I want to find the column vector which represents T(v), because we are sold, I hope, on the idea that we can represent vectors by columns. The column has size equal to the dimension: v was represented by a column of n numbers, but T(v) is going to be represented by a column of m numbers, because it now lives in the space W, which has dimension m. If I want to do that, I simply need to take the linear combination of the columns for each of these T(v_j)'s with the coefficients a_j. This is called matrix multiplication. The rule is that we take this matrix, where, remember, this is the first column, this is the j-th column, this is the last column; that's the matrix which I call M(T), taken from beta to gamma. Then I multiply it by the column of my original vector, a_1, ..., a_n. If I have a rectangular array of numbers, which we call a matrix, which has n columns, then I define the product of this matrix and the column which has n entries by this formula: namely, I take the linear combination of the columns with the weights, or coefficients, which are stipulated. You see, it works: there are n columns and there are n coefficients.
I can take a_1 times the first column, plus ... plus a_j times the j-th column, plus ... plus a_n times the last one, which gives exactly this expression. Each column has m entries, so the sum of these is going to be a column with m entries, right? This column will be precisely the representation of T(v) relative to the basis gamma. Now, usually matrix multiplication is described in a slightly different way, by saying that you have to multiply each row by this column. But it's equivalent: the alternative description is to multiply each row of the matrix by the column. Now what do I mean by multiply? Let's look at the i-th row of this matrix I'm talking about. What's it going to look like? You see, the first index stipulates the number of the row: for instance, along the second row it's 2, along the first row it's 1. The i-th row is going to be A_{i1}, A_{i2}, ..., A_{ij}, ..., A_{in}. Again, I think it's better not to do row 2 but just a generic one. Now I multiply it by this column, a_1, ..., a_j, ..., a_n. I have cut out the i-th row of this matrix that I have built, and I want to explain what it means to multiply this row by my column. It simply means: multiply the first entry by the first, the second by the second, and so on, and take the sum. So you're going to get A_{i1} a_1 + ... + A_{ij} a_j + ... + A_{in} a_n. That's what I mean by multiplying a row by a column. They have to have the same size, the same length: this row has length n, this column has length n; that's why this formula makes sense. As a shorthand, I could write the sum over j from 1 to n of A_{ij} a_j. This is going to be the i-th entry of [T(v)]_gamma: [T(v)]_gamma is obtained by multiplying this matrix by this column vector. It is going to be a column vector, but now of size m. This is the first entry, this is the i-th entry; I've spelled out what the i-th entry is.
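Both descriptions of the matrix-times-column product can be coded side by side. This is a small sketch; the matrix M and column a are arbitrary illustrative numbers, not from the lecture.

```python
# Two equivalent ways to multiply an m-by-n matrix M by an n-column a.
# M and a are arbitrary illustrative numbers.

def by_columns(M, a):
    """Linear combination of the columns of M with weights a_1, ..., a_n."""
    m, n = len(M), len(a)
    out = [0] * m
    for j in range(n):          # for each column j...
        for i in range(m):      # ...add a_j times that column
            out[i] += a[j] * M[i][j]
    return out

def by_rows(M, a):
    """Traditional rule: the i-th entry is the i-th row times the column."""
    return [sum(M[i][j] * a[j] for j in range(len(a))) for i in range(len(M))]

M = [[1, 2, 3],
     [4, 5, 6]]
a = [1, 0, 2]
assert by_columns(M, a) == by_rows(M, a) == [7, 16]
```

The two procedures add up exactly the same products A_{ij} a_j, just grouped differently: by_columns groups them column by column, by_rows groups them row by row.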
The upshot of this is that there are two ways to think about multiplication of a matrix and a column vector. The traditional way is to say you multiply every row by this column; I explained what it means to multiply a row by a column. But this is not the most illuminating way to think about it. A much more illuminating way, in my opinion, is to say that what you are doing is taking the linear combination of the columns with the coefficients provided by this column: you take a_1 times the first column, plus a_2 times the second column, plus a_j times the j-th column, and so on. It takes a couple of minutes of staring at this formula to see that these two procedures give the same result. Multiplying rows by columns and taking linear combinations of columns with these weights is exactly the same thing. If you don't see it right away, don't be alarmed; think about it later, you will see it. It's important to see both. The row-by-column rule is how it used to be defined in Math 54 and so on: row by column, row by column. Now we have a more conceptual way of seeing things, representing things in terms of bases and so on: a better way to think about the multiplication of a matrix and a column vector is that you are taking a linear combination of its columns. Then it becomes clear why the result is [T(v)]_gamma, because of this equation. Any questions? All right, let's do a couple of examples; maybe this will become more clear. Okay, my favorite vector space is the blackboard, as you know. I'm going to start with that. It's what I call a vector space of geometric origin, because to define it I don't need anything, except I need an infinite blackboard. Okay, you see how hard it is to do that; you can imagine it, an extension of this blackboard to infinity in all directions. Okay? I also need a point. Once I have a point, then I have a notion of a vector: it's just a directed interval starting at this point and ending somewhere else.
And these will be the elements of my vector space. They don't need a basis, they don't need anything. You need the rule of addition and the rule of scalar multiplication, and we know them: the parallelogram rule for addition, and multiplying vectors by scalars as we know. In this case, the field F is the field of real numbers. Okay? Now, I don't need the formatting. Imagine you have a blank sheet of paper versus a sheet of paper with formatting on it. This is a blank sheet of paper, a blank blackboard. But even on this blank blackboard there's a notion of a linear map which everybody understands, which is rotation by some angle: rotation by some angle theta, in the counterclockwise direction, about this point, right? What does it mean? Any vector you start with, you simply rotate it by that angle. Let's call the map R_theta: it sends v to its rotation by theta, counterclockwise. Theta obviously goes from 0 to 360 degrees; if you are sophisticated, you will say from 0 to 2 pi. It's equivalent, different ways of measuring things. Okay, now I would like to represent this by a matrix. Yeah, why not? I just explained that in principle we can represent any linear transformation, any linear map from one vector space to another, as long as they are finite-dimensional, by a matrix whose entries are elements of our field, which in this case is the field of real numbers. My linear transformation is going to be from V to V; in other words, W is equal to V. Both are two-dimensional. Therefore, I have to produce a two-by-two matrix for you which represents this very natural geometric move: just rotation by the angle theta. Think of theta as 30 degrees: every vector you rotate by 30 degrees. But here is a caveat. I cannot do that. I cannot do that until I actually choose a basis. In order to represent a linear transformation from V to W, I have to choose a basis in V and a basis in W.
In principle, here I could choose a different basis in V viewed as the source of the map and in V viewed as the target. But it's customary to choose the same one. In a situation where a linear map goes from a vector space to itself, we usually do not try to pick two different bases; we pick just one basis for both. Gamma will be equal to beta, in the notation I used; I guess I erased it. Now I'm going to choose a basis of two vectors, which I will call v_1 and v_2, and I will choose them in such a way that the angle between them is 90 degrees. We're talking about angles, so angles make sense here; 90 degrees makes sense. I also make sure that they have the same length. In other words, v_2 is obtained by rotating v_1 by 90 degrees counterclockwise. Okay, now I need to produce this matrix. And note that it's not just one problem; I'm doing a whole continuum of problems, labeled by theta. [A student asks how we know the rotation is linear.] Yes, well, this is just common sense, you know what an angle is, it's just rotating things. Oh, how do we know that it's linear? Okay, well, let's leave it as an exercise. First of all, that's a good point: I'm assuming something which in principle needs to be proved. So let's not slow down for this; let's assume it, but you're right, it's something that needs to be done first. Assume that it's linear, which can be easily proved. It's basically the parallelogram rule: you have vectors v_1, v_2 and the diagonal of the parallelogram; when you rotate both v_1 and v_2, the diagonal rotates by the same angle. It comes from basic Euclidean geometry. But you're right: if we want to be pedantic, we have to say that here we are in the formal system of Euclidean geometry, and we have to use the axioms of Euclidean geometry to show that this is a linear transformation. Absolutely correct. Which is what enables us to fit it into the formal system of linear algebra. Absolutely. Okay, I tried to cut some corners. All right. As we discussed.
Where did I put it? I have to write v_1 and v_2. Actually, let me call the map R_theta, so that we remember that it's not a single linear transformation, a rotation by some fixed angle; it's a family of rotations, with theta going from 0 to 360 degrees, or from 0 to 2 pi. The matrix of R_theta, that's the notation I'm using, relative to this basis beta in both the source and the target, is going to have columns [R_theta(v_1)]_beta and [R_theta(v_2)]_beta. I have to compute these; each is going to have two components. I put them together, and I get a two-by-two matrix. What is R_theta(v_1)? Let me actually draw it in a more familiar way, by extending these vectors to axes, if you will: an x_1-axis and an x_2-axis. Let's say for simplicity that the angle theta is between 0 and 90 degrees. It's not necessary to assume that, it just makes things a little easier, and the answer won't depend on it. Let me make it a little bigger. This is v_1, and this is its rotation, with the angle theta between them. I need to write R_theta(v_1) as a linear combination of v_1 and v_2. From trigonometry, we know that it's going to be this length times v_1 plus this length times v_2, and these lengths are cosine theta and sine theta. Does everybody agree with that? Cosine theta is this distance, sine theta is that distance. So R_theta(v_1) = a v_1 + b v_2 = cos(theta) v_1 + sin(theta) v_2. All right? a is cos(theta) and b is sin(theta); that's essentially the definition of cosine and sine. That means that in this matrix, the first column is going to consist of these two numbers: cos(theta) on top, sin(theta) at the bottom, right? Remember, whatever appears as the row of coefficients in the first equation becomes the first column of the matrix. Next, I need to calculate where v_2 goes. It rotates like this; this is the angle theta again. This time R_theta(v_2) = c v_1 + d v_2, where d is actually cos(theta), and c is minus sin(theta), if you see what I mean.
Because the distance is sin(theta), but this point has a negative coordinate along v_1. So R_theta(v_2) = -sin(theta) v_1 + cos(theta) v_2, and my second column is going to be -sin(theta) on top and cos(theta) at the bottom. Now, it's customary to put brackets around it; this matrix does not just live by itself, so we put brackets on the left and on the right. That's the matrix: first row cos(theta), -sin(theta); second row sin(theta), cos(theta). Now I can find where any vector goes under the rotation, just from this matrix and the knowledge of v_1 and v_2. If I have a general vector v, it will be a linear combination of v_1 and v_2: let's write v = y_1 v_1 + y_2 v_2, where y_1, y_2 are real numbers; every vector can be written this way. Then R_theta(v) = R_theta(y_1 v_1 + y_2 v_2) = y_1 R_theta(v_1) + y_2 R_theta(v_2), by linearity. Therefore, if I write R_theta(v) in terms of its coordinates, it's going to be y_1 times [R_theta(v_1)]_beta plus y_2 times [R_theta(v_2)]_beta, represented in my basis (except gamma is beta here, as I said). That is, y_1 times the column (cos theta, sin theta) plus y_2 times the column (-sin theta, cos theta). The answer is y_1 cos(theta) - y_2 sin(theta) in the first entry, and y_1 sin(theta) + y_2 cos(theta) in the second. In other words, if I want to know what R_theta(v_1) is, I already know it; I have already calculated it, and it's just the first column of my matrix. If I want to know what R_theta(v_2) is, I already know it; it's the second column of my matrix. But what if I want to know R_theta of some general vector, which is y_1 v_1 + y_2 v_2? Simply take y_1 times the first column plus y_2 times the second. That's what I wrote; that's the answer. Alternatively, you can write it as the product of the two-by-two matrix and the column vector with two entries. Let me say that again: v is y_1 v_1 + y_2 v_2, which means that v has representation relative to our basis as the column vector (y_1, y_2). And [R_theta(v)]_beta
(I mean to drop the theta on this blackboard, to simplify) is going to be the product of this matrix, cos(theta), -sin(theta); sin(theta), cos(theta), with this column vector. As I explained, the product of a matrix and a vector can be calculated in two different ways. One way, which I have already used, is just taking y_1 times the first column plus y_2 times the second column, and then you get the answer. But usually people like to multiply row by column; let's do that too. Multiply the first row by this column: what are we going to get? cos(theta) times y_1 minus sin(theta) times y_2, which is exactly this. Well, of course I can reverse the order, because these are numbers: y_1 cos(theta) is cos(theta) times y_1, and y_2 sin(theta) is sin(theta) times y_2. This is just to show you that these two alternative ways of calculating the product give the same result. That was the first row times the column; now let's do the second row times this column: sin(theta) times y_1 plus cos(theta) times y_2. Okay? Same result as here. Any questions? This is how you represent things in a numeric way: a vector by a column of its two real numbers, a linear map from V to V by a two-by-two matrix with four entries. That's the matrix. There is a very nice application of this, which is the trigonometric formulas for cosine and sine of the sum of two angles, which we're now going to get without any effort. But before I explain that, I have to talk about composition of linear maps. This is something that actually was in the book, in the reading assignment for last week, but it's worthwhile to revisit it. There are, honestly, three natural operations on linear maps. The first two are obvious.
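The rotation example can be checked numerically. A minimal sketch, using the standard basis of R^2 so that the coordinates coincide with the usual Cartesian ones; the row-by-column rule is used for the product.

```python
import math

# Sketch of the rotation example: the matrix of R_theta in an orthonormal
# basis, applied to a coordinate column (y1, y2) by the row-by-column rule.

def rotation_matrix(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

def apply(M, y):
    """Multiply each row of the 2x2 matrix M by the column y."""
    return [M[0][0] * y[0] + M[0][1] * y[1],
            M[1][0] * y[0] + M[1][1] * y[1]]

theta = math.radians(30)
R = rotation_matrix(theta)
v = apply(R, [1.0, 0.0])                    # rotating v_1 itself...
assert math.isclose(v[0], math.cos(theta))  # ...returns the first column:
assert math.isclose(v[1], math.sin(theta))  # (cos theta, sin theta)
```

Applying the matrix to the coordinate column of v_1 returns the first column of the matrix, exactly as the lecture observes: the j-th column is [R_theta(v_j)]_beta.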
If you have two vector spaces V and W and two linear maps T_1 and T_2 going in the same direction, from V to W, you can take their sum and scalar multiples. Define T_1 + T_2, going from V to W, by the formula (T_1 + T_2)(v) = T_1(v) + T_2(v). If you have a scalar lambda, then we can define lambda T_1, from V to W, by the formula (lambda T_1)(v) = lambda T_1(v). So you have two operations on the set of linear maps from V to W. If we denote, as before, by L(V, W) the set of all linear maps from V to W, where V and W are fixed vector spaces over the same field, then the addition and scalar multiplication that I just defined, addition by this formula and scalar multiplication by that formula, make it into a vector space. In other words, this set of all linear maps from V to W, together with these two operations, is a vector space. [A student asks about the definition.] Yes, okay, you're right: here I'm defining lambda T_1. I already know what T_1 is; how do I multiply it by lambda? I said sometimes I will write a colon-equals, which means I'm defining the left-hand side in terms of the right-hand side. Likewise here, I'm defining the left-hand side in terms of the right-hand side. The set of linear maps is given to us initially just as a set: there's a notion of linear map, a map from V to W satisfying some properties. But after that, on this set of linear maps, denoted L(V, W), we can define two operations: given two linear maps from V to W, we define a new linear map called T_1 + T_2; and given any linear map T_1 and a scalar lambda in our field, I should call it F, because I said that these are vector spaces over F, we define lambda T_1. This is going to be over the same F: if F is R, then lambda is real, but it could for example be the complex numbers. Okay, I have now defined two operations on the set of linear maps. You see, it's like the next level: initially, linear maps themselves are defined on vector spaces, between one vector space and another, and satisfy some properties. But now I zoom out, and I look at the set of all possible linear maps from a given V to a given W.
I define operations on that set, which I also call addition and scalar multiplication, and then I verify that with these operations this set is a vector space. This is all in the previous section, the one from last week; it's explained there in detail. These are the first two operations. Remember I said there are three; we are getting to the third in a moment. But first, I want to explain what these two operations mean from the point of view of matrices. Because we have now decided that to every linear map we can associate an m-by-n matrix, if it's a linear map from an n-dimensional space to an m-dimensional space. The upshot is that you verify that the corresponding operations on matrices are just addition and scalar multiplication of matrices. What's the addition of two matrices? It's obvious: just term by term. What's scalar multiplication? Also obvious: term by term. Let me formalize that as a lemma. Suppose that beta is a basis of V and gamma is a basis of W. To T_1 I can associate the matrix M(T_1), from beta to gamma, which is going to be an m-by-n matrix, meaning that the number of elements of gamma is m and the number of elements of beta is n; it's associated in the same way as I did before. Likewise, T_2 also has a matrix, also m by n. The question is, what does T_1 + T_2 correspond to? The obvious answer is that it's the sum of these matrices. Now, what do I mean by a sum? The most obvious thing. You have two matrices of the same size; a typical element of the first is (A_1)_{ij}, and of the second, (A_2)_{ij}. It sits at the intersection of the i-th row and j-th column, and likewise for the other one. When I take the sum, I get a new matrix of the same size, where the entry at the intersection of the i-th row and j-th column is just the sum of the corresponding entries of the two: (A_1)_{ij} + (A_2)_{ij}. The most obvious way: if you have two matrices of the same size, just take the entrywise sum.
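The term-by-term rules just described can be sketched directly; the sample matrices A and B below are arbitrary illustrative numbers.

```python
# Entrywise operations on matrices, mirroring (T1 + T2)(v) = T1(v) + T2(v)
# and (lambda*T)(v) = lambda*T(v). A and B are arbitrary illustrative numbers.

def mat_add(A, B):
    """Sum of two matrices of the same size, entry by entry."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def mat_scale(lam, A):
    """Multiply every entry by the same scalar lambda."""
    return [[lam * A[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
assert mat_add(A, B) == [[11, 22], [33, 44]]
assert mat_scale(2, A) == [[2, 4], [6, 8]]
```

These are exactly the vector-space operations on L(V, W), read off the matrices: adding maps adds matrices entry by entry, and scaling a map scales every entry.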
By the way, it's exactly the same as how we do it for vectors. I did not talk about it, but it goes without saying that if I have a vector v = a_1 v_1 + ... + a_n v_n, represented by this column, and another vector v', represented by a_1', a_2', ..., a_n', then the representation of v + v' is just the sum of the two columns, and by the sum of two columns I mean adding up the corresponding components. Same for matrices; that's easy. Likewise, the matrix of lambda T_1 will be lambda multiplied by the matrix of T_1, and by that we mean multiplying every entry by the same number lambda. These are the first two of the three natural operations; they're pretty obvious. There is a third operation which is less obvious, and in some sense more interesting, which is what I want to concentrate on. The third operation is composition. In this case, the third operation is not going to be an operation defined on two linear maps from V to W. It will be defined on maps which act between different spaces: the first one will be from V to W, and the second one from W to some other space U. In other words, what we require is that the target of one of them and the source of the other coincide. The first can come from anywhere, and the second can go anywhere, but these middle spaces have to coincide. Then we can take the composition. You see, this is a notion that comes directly from set theory: even before we talk about vector spaces and linear maps, in the category, or in the formal system, of set theory, we have the notion of composition. Composition means that we apply the maps back to back. How do we apply them? We first go from V to W by T_1; then we go from W to U by T_2, and the result goes from the first space to the last one. You see, composition means that you compose things one after another, or apply them back to back. Now, there is something weird that's going to happen right now, because the notation for this is not T_1 T_2, but T_2 T_1. This composition is T_2 T_1.
In other words, even though we first apply T1 and then apply T2, we record it as T2 T1 (a moment ago I said it incorrectly), for the following reason: we always think about applying a map to something, and you apply T1 first. There is a tension between the fact that we write things from left to right but apply maps from right to left, because we apply a linear map, or any map, to an element of the initial space. The initial space V is on the right. We first apply T1, then we apply T2 to the result, which lies in W, and that is exactly the source of the second map. For this reason we write the composition as T2 T1 rather than T1 T2, even though we first apply T1 and then T2. Okay? This way you obtain an operation, but this operation is not defined on a single set like L(V, W) before; it goes from the Cartesian product L(V, W) x L(W, U) to L(V, U): T1 is in the first factor, T2 in the second, and the result is called T2 T1, the composition. Sometimes people put a little circle in between to indicate the composition, but sometimes we will just drop it. Okay? So then the question arises again: what does it do to matrices? Suppose we have a basis beta in V and a basis gamma in W, and suppose you also choose a basis delta in U. Okay? Then to T1 you associate the matrix M(T1) from beta to gamma, and to T2 the matrix M(T2) from gamma to delta. To the composition T2 T1 you associate a matrix from beta to delta. Now, what are the sizes? Say V again has dimension n, W has dimension m, and U has dimension k. Then M(T1) has m rows and n columns, M(T2) has k rows and m columns, and M(T2 T1) has k rows and n columns. This is where we have to remember in which order to do it: the number of columns of the matrix on the left must match the number of rows of the matrix on the right, and in this order the sizes do match.
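The right-to-left convention can be seen in a two-line sketch (the example maps T1 and T2 are hypothetical, chosen only to show that the order matters):

```python
def compose(T2, T1):
    """The composition T2 T1: apply T1 first, then T2.
    T2 is written on the left but acts last."""
    return lambda v: T2(T1(v))

# Hypothetical example maps on numbers: T1 doubles, T2 adds 1
T1 = lambda x: 2 * x
T2 = lambda x: x + 1
print(compose(T2, T1)(3))  # T2(T1(3)) = 2*3 + 1 = 7
print(compose(T1, T2)(3))  # T1(T2(3)) = 2*(3+1) = 8, so order matters
```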
In this way we have the notion of a product of two matrices. I am sure most of you remember it; actually, I am curious how many of you remember how to multiply matrices. Yes? Okay, good. So I came to the right place, and you came to the right place. You multiply matrices by multiplying each row by each column. To be able to do that, the length of a row of the left matrix, which is m here, has to equal the length of a column of the right matrix, which is also m. Those have to match, but the other two sizes, k and n, can be arbitrary. Okay? So I claim that this is exactly what you need to do to get the matrix of the composition: you multiply the matrices of the two maps in the correct order, the second map on the left and the first on the right. If the maps can actually be composed, the sizes will match in the sense I just explained. Then, multiplying row by column, row by column, you get a new matrix of size k by n. That is how you find the matrix of the composition. Now we have control, shall we say, over how to compute various things. Not only can we associate a numerical representation to every vector and every linear map, we can also represent the three natural operations on linear maps in terms of operations on matrices: addition, scalar multiplication, and now composition, which corresponds to the product of matrices. Now here is an application of this. Remember how we found the matrix which represents rotation by theta? Suppose I want to rotate by theta1 and then by theta2. Common sense tells us the result should be rotation by theta1 + theta2. Therefore, the product of the matrices corresponding to theta1 and theta2 should be the matrix for theta1 + theta2. In this case, it actually does not matter in which order we do it.
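The row-by-column rule can be written out directly as a short sketch in plain Python (the function name is my own):

```python
def mat_mul(A, B):
    """Product of a k x m matrix A with an m x n matrix B.
    Entry (i, j) is row i of A dotted with column j of B."""
    k, m, n = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "row length of A must equal column length of B"
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(n)]
            for i in range(k)]

# A 2x3 matrix times a 3x2 matrix gives a 2x2 matrix: k and n are
# arbitrary, only the inner size m = 3 has to match.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 0],
     [0, 1],
     [1, 1]]
print(mat_mul(A, B))  # [[4, 5], [10, 11]]
```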
Let me verify this. To simplify the notation I will drop the beta, because otherwise it will take forever (beta is the standard basis): the claim is that M(theta2) times M(theta1) equals M(theta1 + theta2). Let me see if I have enough space; I am done with this board, so let me do it on this one. The matrix for theta2 is: cosine theta2, minus sine theta2 in the first row; sine theta2, cosine theta2 in the second row. And I have to multiply it by the similar matrix for theta1. Okay? Now, what is the rule? I have to multiply this row by this column to get the upper left entry: cosine theta1 cosine theta2 minus sine theta1 sine theta2 (I will switch the order of the factors so it looks a little nicer; if I make a mistake, please tell me). Now I want the second row and the first column: sine theta1 cosine theta2 plus cosine theta1 sine theta2. The upper right entry is obviously going to be the negation of this one, and the lower right is the same as the upper left; I want to save some time here, so I will just write the result. But now, you see, this has to be equal to the matrix of rotation by theta1 + theta2: cosine of (theta1 + theta2), minus sine of (theta1 + theta2) in the first row; sine of (theta1 + theta2), cosine of (theta1 + theta2) in the second. You see, we have found a formula for the cosine of theta1 + theta2, which is studied in trigonometry: cosine of (theta1 + theta2) equals cosine theta1 cosine theta2 minus sine theta1 sine theta2. We got it out of nothing. Well, not exactly nothing: it is the idea that linear maps can be represented by matrices, and that under this representation the composition of two linear maps corresponds to the product of the matrices. It gives us the trigonometric formula for the cosine of the sum of two angles, as well as for the sine, which is the lower left entry: sine of (theta1 + theta2) equals sine theta1 cosine theta2 plus cosine theta1 sine theta2. You see, this is a very powerful tool. Questions? Let's do one more example, which is cool.
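The identity M(theta2) M(theta1) = M(theta1 + theta2) can be checked numerically with a small sketch (angle values are arbitrary; the tolerance accounts for floating-point rounding):

```python
import math

def rot(theta):
    """Matrix of rotation by theta in the standard basis of R^2."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul2(A, B):
    """Product of two 2x2 matrices, row by column."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.3, 0.5
P = mul2(rot(t2), rot(t1))   # rotate by t1 first, then by t2
R = rot(t1 + t2)             # rotation by the sum of the angles
assert all(abs(P[i][j] - R[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Reading off the (0, 0) and (1, 0) entries of the product gives exactly the addition formulas for cosine and sine.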
Remember how we discussed the fact that the field of complex numbers can be viewed as a vector space over the field of real numbers? V is going to be C, viewed as a vector space over the real numbers. It is two-dimensional: it has a basis beta which consists of the two numbers 1 and i, where i squared is negative one, the proverbial square root of negative one. Now I define a linear transformation T from V to V by the formula T(v) = z v, where v is a complex number and z is a fixed complex number. The set of complex numbers is a vector space over the real numbers; that is the first property. But the second property is that it actually is a field. In other words, it also has an operation of multiplication, which most vector spaces do not have, right? In this case it is an exception to the general rule: it has its own multiplication, the multiplication of complex numbers. It is very easy to see that this formula defines a linear map. I am going to skip the check because I only have five minutes left, but it is very easy to see that it is a linear map of vector spaces over R. (It is actually a linear map of vector spaces over C, because C is a field, but what we need here is R-linearity.) Because it is a linear map, we should be able to represent it by a two by two matrix with real entries, because we can view C as a two-dimensional vector space over R with the basis 1, i. So the question is: what is the two by two matrix with real entries which represents multiplication by a complex number? Every complex number has a real part and an imaginary part, right? In the same way as before, we build this matrix relative to this basis by applying the linear map to each of the two basis elements. Let's call them v1 = 1 and v2 = i. T(v1) is z times v1, but v1 is 1, so it is z times 1.
Which is z, which is x times 1 plus y times i, so the corresponding column is x and y, right? Because we have to take the two coefficients in front of the basis vectors, and the basis vectors are 1 and i. Multiplying 1 by z, we get x times 1 plus y times i, so we get the column x, y. T(v2) is z times v2, which is z times i, which is (x + yi) times i, that is, x times i plus y times i squared. But i squared is negative one, so it is negative y times 1 plus x times i; therefore the corresponding column is negative y and x. Therefore, we find that the matrix of T is obtained by putting these two columns next to each other: first column x, y; second column negative y, x. See how interesting: the complex number z, which is x + yi, gets represented by a two by two matrix in such a way that the sum of complex numbers corresponds to the addition of these matrices, and multiplication of a complex number by a real number corresponds to multiplication of this matrix by that real number. But moreover, multiplication of complex numbers, which is the most mysterious structure here, corresponds to the product of the corresponding matrices. You see, for how many centuries did people think that there is something absolutely strange about complex numbers, like the square root of negative one? How can you imagine that? Guess what: it is just a two by two matrix of this form. That's it. Once you know what two by two matrices are, there is nothing mysterious about complex numbers anymore; they sit inside the four-dimensional space of two by two real matrices. Why four-dimensional? Because a general two by two matrix has an arbitrary entry in each of the four positions. But suppose you impose the condition that the lower right entry equals the upper left, and the upper right equals minus the lower left. Then you have a two-dimensional subspace in the space of two by two matrices. Guess what: that is exactly the field of complex numbers, in which the multiplication of complex numbers is now represented simply by the product of matrices. Let me show you, for example, the mysterious formula i times i equals negative one.
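The whole representation can be verified with a short sketch, using Python's built-in complex numbers (the function names are my own):

```python
def as_matrix(z):
    """Represent z = x + y*i by the 2x2 real matrix with columns
    (x, y) and (-y, x), obtained from T(1) = z and T(i) = z*i."""
    x, y = z.real, z.imag
    return [[x, -y], [y, x]]

def mul2(A, B):
    """Product of two 2x2 matrices, row by column."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, 1 - 4j
# Multiplication of complex numbers matches the product of their matrices.
assert as_matrix(z * w) == mul2(as_matrix(z), as_matrix(w))
```

The entries here are integers, so the comparison is exact; with general floats one would compare up to a small tolerance.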
How is i times i equals negative one realized? This is what the great mathematician Gerolamo Cardano called "mental torture": trying to come to terms with it. Let's look at this mental torture from our perspective. The number i corresponds to the matrix in which x is zero and y is one: first column 0, 1; second column negative one, 0. Then i times i corresponds to the product of this matrix with itself, and when you multiply it out, you see that it is the matrix with negative one in the upper left, negative one in the lower right, and zeros elsewhere. That is negative one times the identity matrix, which is exactly the number negative one. All right, I'm out of time. So we'll continue next week.
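The last computation is a one-liner to check (a minimal sketch; `I_MAT` is my own name for the matrix representing i):

```python
I_MAT = [[0, -1],
         [1, 0]]   # the matrix representing the number i (x = 0, y = 1)

def mul2(A, B):
    """Product of two 2x2 matrices, row by column."""
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

# i times i gives minus the identity matrix, i.e. the number -1.
print(mul2(I_MAT, I_MAT))  # [[-1, 0], [0, -1]]
```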