Now, let me remind you what we did last time. We have a vector space V over a field F, and we assume throughout that V is finite dimensional, which means it is spanned by a finite set of vectors. We have two notions. One is a linearly independent subset — the book calls it a list of elements; I will call it a subset of V. The second notion is a subset which spans V; we call it a spanning subset. In other textbooks this is sometimes called a generating subset — generating and spanning are synonyms. We proved an important result: the number of elements in any linearly independent subset of V is always less than or equal to the number of elements in any spanning subset. This is what allowed us to define the notion of dimension. First we defined the notion of a basis: a basis of V is a subset that satisfies both properties — it is linearly independent and it spans V. Then, using this inequality in two different directions, we proved the theorem that any two bases have the same number of elements. (I'm writing "number of elements" in shorthand.) This allows us to define the dimension of V over F: it is the number of elements of any basis of V. We looked at various examples last time. I want to emphasize one thing, which I will frame as a remark. It is essential in some cases to specify over which field we are viewing V, because a given vector space can sometimes be viewed as a vector space over different fields. Our prime examples are the fields of real numbers R and complex numbers C. The field of real numbers is a subfield of the field of complex numbers, which means that any vector space over C is automatically a vector space over R. But the notions of linear independence and of spanning are different over the two fields. For instance, every field is a vector space over itself, because it has all the requisite operations. So C is a vector space over C; and because R is a subfield of C, it is also a vector space over R. We can write an arbitrary element of C — a complex number — as the sum of its real and imaginary parts: z = x + iy, where i is a square root of negative one. This shows that the set {1, i} is a basis of C over R, because this formula, with x and y in R, shows that every complex number is a linear combination of these two numbers with real coefficients. That means the dimension of C over R is two: 1 and i are two linearly independent elements. They are not proportional to each other over the real numbers, because i is a square root of negative one, and we know that no real number squares to a negative real number. But if you look at C as a vector space over itself, then the dimension is one, because in this case a basis consists of just one element, the number 1. You see, if you consider C over the complex numbers, you are allowed to do scalar multiplication by complex numbers: you can easily switch from 1 to i by simply multiplying by i, and in fact you get an arbitrary complex number z by multiplying 1 by z. That shows that {1} is a basis of C over C. So you have to be careful when you talk about dimension: you have to specify dimension over which field, because in principle you may have two fields in play.
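To restate the remark as a displayed formula (just a summary of what was said, using the decomposition z = x + iy):

```latex
z = x \cdot 1 + y \cdot i \ \ (x, y \in \mathbf{R}) \;\Longrightarrow\; \dim_{\mathbf{R}} \mathbf{C} = 2,
\qquad
z = z \cdot 1 \ \ (z \in \mathbf{C}) \;\Longrightarrow\; \dim_{\mathbf{C}} \mathbf{C} = 1.
```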
This is not the only instance of that type of relationship; there are other pairs of fields where one is a subfield of the other. In such cases the dimension will differ depending on whether we consider a given vector space over one field or the other. A generalization of this example is C^n, whose elements we think of as column vectors (z_1, ..., z_n), where each z_i is a complex number. A basis of C^n over C consists of the column vectors which have 1 in a particular position and 0 in all other positions. There are n of them, so the dimension of C^n over C is n. But the same set C^n can also be viewed as a vector space over R, and then the basis doubles in size. Namely, it is not enough to take these vectors: you have to take the vectors which have either 1 or i in a given position and 0 elsewhere. Only then can you claim that you have a basis, because you are only allowed to multiply by real numbers: to get an arbitrary complex number in, say, the first position, you have to take a combination (real number) times 1 plus (real number) times i in that position. So the number of elements doubles, and we find that the dimension of C^n over R is actually 2n. This is a good illustration of the general notion of dimension, which is why I wanted to mention it. Okay, now what else can we do with all this? First I want to talk about sets with these two properties: linearly independent subsets and spanning subsets. The idea is that a spanning subset is a little bit of an overkill: it is always going to be at least as big as a basis, and it can always be reduced to a basis by removing some elements. Whereas a linearly independent subset may not be enough. A spanning subset gives you everything, but there may be some redundancy, and to remove the redundancy you have to remove some elements. A linearly independent subset is something with no redundancy, but it may not be enough, so you have to throw in more things. This is expressed by the following theorem, which I already mentioned last time: every finite spanning subset of a finite-dimensional vector space V — we are always under the assumption, throughout this lecture, that V is finite dimensional — can be reduced to a basis. That is to say, by removing finitely many elements from it, we obtain a basis. Let me go over the proof, because it shows you how to argue this type of statement. It is in the book and is explained pretty well there. It is similar in spirit to the argument we used to prove the inequality: it is a procedure with a certain number of steps, and by the end of the procedure we obtain the statement we want. So let's say our subset is v_1, ..., v_n, and suppose that it spans V. Do you have a question? [Question: does "reduce" mean that there exists a subset such that, after removing it, we are left with a basis?] Yes — you will see it in the proof; the proof is constructive, and we will be removing things. Okay. Step one: call the initial set B_0 = (v_1, ..., v_n). We look at B_0, and we look at the first element, v_1.
There are two possibilities. If v_1 is not zero, don't do anything to the set: B_1 = B_0, and proceed to the next step. If v_1 is zero, then remove it, and set B_1 = (v_2, v_3, ..., v_n). Okay, that's the first step. Now the second step: we look at the second vector, v_2. If v_2 is a linear combination of the preceding ones — which in this case just means that v_2 is proportional to v_1, but I want to phrase it in a way that will be easy to generalize — then we remove it. So if v_2 is not a linear combination, do nothing: B_2 = B_1. If v_2 is a linear combination of v_1, then remove v_2: B_2 = B_1 \ {v_2}. (The backslash is the notation for removing something.) Now we proceed like this. Step three: we look at v_3 and ask whether v_3 is a linear combination of v_1, v_2, and depending on whether it is or not, we proceed or remove v_3. Then step k: at this point we have already constructed a set B_{k-1} after k - 1 steps. If v_k is not a linear combination of the preceding ones, set B_k = B_{k-1}; otherwise remove it: B_k = B_{k-1} \ {v_k}. So it's very simple. This should be understood as a proof in the same sense in which I explained the proof of the inequality: the proof consists of n sentences — one for each step, because the initial set has n elements and we have to go through all of them — and the steps are all tied together into one proof. After that, what are we left with? We are left with a set which I call B_n: after the first step B_1, after the second step B_2, and so on, and after n steps B_n. What are the properties of B_n? First of all, I claim that it still spans V: we have removed only those vectors which are linear combinations of the other vectors, and if we do that, the property of being a spanning set does not change. So it still spans. But I also claim that B_n is linearly independent. Here I use a lemma which I wrote down last time. This follows from the lemma: if (u_1, ..., u_m) is a linearly dependent list, then there exists some k from 1 to m such that u_k is a linear combination of the preceding ones, u_1, ..., u_{k-1}. It is actually such an important lemma that let me repeat the proof. I sketched it last time, but maybe too quickly. It really is instructive, insightful: it shows you a very important trick that we constantly use in algebra — not only linear algebra, but algebra in general. The issue is that linear dependence is not initially defined as the statement that one of the vectors is a linear combination of the others; it is the statement that a certain equation is satisfied. If the list is linearly dependent, it means that there exist coefficients a_1, ..., a_m, not all equal to zero, such that a_1 u_1 + ... + a_m u_m = 0. There is at least one a_i which is nonzero in this equation. In principle this does not directly show you that one of the vectors is a combination of the others, so you have to do something to get to that result. Here is how you do it. The condition is that not all of the coefficients are zero. So let's take the largest index, out of the available indices 1, 2, ..., m labeling the vectors, for which the coefficient is nonzero. If not all of them are zero, then there is at least one which is nonzero, but there could be several. However, the coefficients are ordered, because they are labeled 1, 2, ..., m.
Out of all the nonzero coefficients — and we know there is at least one — there is one with the largest index; call that index k. In other words, the equation actually reads a_1 u_1 + ... + a_k u_k + a_{k+1} u_{k+1} + ... + a_m u_m = 0. But we said that k is the largest index such that a_k is nonzero, which means a_{k+1} = 0, a_{k+2} = 0, all the way to a_m = 0. So all of those terms disappear, and we are left with the equation a_1 u_1 + ... + a_k u_k = 0, where we now know, by our assumption, that the coefficient a_k is nonzero. If it is nonzero, we can divide by it. Do you have a question? [A student points out a slip in the index.] Yes, you're right — I misspoke and wrote it incorrectly; it should be k. You see how you find that pivotal coefficient? In principle you could take any nonzero coefficient and divide by it. But taking the one with the largest index gives you a nicer structure, because then you know for sure that u_k is not just a linear combination of the others, but a linear combination of the preceding ones on the list. It really gives you good control over the notion of linear dependence. Now we can take the other terms to the other side and divide by a_k: we get u_k = -a_k^{-1}(a_1 u_1 + ... + a_{k-1} u_{k-1}). (In the statement of the lemma I denoted the index one way and here another; it doesn't matter, it's just a question of notation.) So that is an important fact: in a linearly dependent list you will always find one of the elements which is a linear combination of the preceding ones. This works because our lists are always ordered — they are not just some random collections of things; we order them first, second, third, and so on. This gives us a certain leverage in organizing our equations. Now let's go back to the theorem, which says that from every spanning subset we can remove elements to get a basis. We did these steps and we obtained B_n, which on the one hand spans V: because we only removed redundant vectors — vectors which are linear combinations of the others — this does not violate the property that the set spans the vector space. On the other hand, since we have removed all the redundancies, it follows from the lemma that the resulting set is linearly independent. Because if it were linearly dependent, it would mean that one of its vectors is a linear combination of the preceding ones, and that contradicts our procedure: in our procedure we systematically removed all such vectors, so there are none left. That is only possible if the set is actually linearly independent. So we have found a subset B_n of the original set — obviously a subset, because we obtained it by removing some elements, or possibly none: maybe the original set was already a basis and we did not have to remove anything — and it is a basis, because it satisfies both conditions. Remember, the definition of a basis requires both conditions. So we have a subset of the original set which is a basis. Colloquially, we say that we have reduced the spanning set to a basis; in other words, we removed some elements to obtain a basis. That's what happens with spanning subsets.
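Here is a minimal computational sketch of the reduction procedure from Theorem 1, in Python with numpy. The rank test used to decide whether v_k is a linear combination of the vectors kept so far, and the name reduce_to_basis, are my own illustration choices — the lecture itself works purely symbolically.

```python
import numpy as np

def reduce_to_basis(vectors, tol=1e-10):
    """Theorem 1 as a single pass: walk through the list and drop every
    vector that is a linear combination of the vectors kept before it
    (for the first vector this just means dropping it if it is zero)."""
    kept = []
    for v in vectors:
        candidate = np.column_stack(kept + [v])
        # v is NOT a combination of the kept vectors iff appending it raises the rank.
        if np.linalg.matrix_rank(candidate, tol=tol) == len(kept) + 1:
            kept.append(v)
    return kept

# Example: a redundant spanning list of R^2.
spanning = [np.array([1.0, 0.0]),
            np.array([2.0, 0.0]),   # proportional to the first -> removed
            np.array([0.0, 1.0]),
            np.array([1.0, 1.0])]   # combination of the kept ones -> removed

basis = reduce_to_basis(spanning)
print(len(basis))   # 2 = dim R^2; the kept vectors form a basis
```

The single pass through the list mirrors the n steps of the proof: step k looks only at v_k and at the vectors that survived steps 1 through k - 1.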
This theorem has a nice corollary. In formal systems you have a notion of a theorem: something that you can derive from the axioms. But in practice mathematicians introduce a gradation between different statements: the ones considered most important are called theorems, the more technical ones are called lemmas, and there is also the notion of a corollary, which is something that follows closely from a specific theorem you have just proved. It's a standard way to label things. So, Corollary 1, the consequence of this theorem. First of all, it says that any spanning subset of an n-dimensional vector space has m elements, where m is greater than or equal to n. And if m is equal to n, this set is a basis. Now, how do we know that m is greater than or equal to n? That's the inequality we proved last time: a basis is linearly independent, so any spanning subset has to have at least as many elements as a basis, and the number of elements of a basis is the dimension. What is the new element here — no pun intended? The new aspect of this statement is that if you have a spanning subset which has exactly the right number of elements — that is, the number equal to the dimension — then it is a basis. Why? Because we know from Theorem 1 that we can always reduce a spanning subset to a basis. In other words, a spanning subset contains a basis, and maybe something else. But if it is not itself a basis, that means you have to remove at least one element, and then the number of elements goes down below n. And yet we know that every basis has the same number of elements. So this gives you a way to prove that something is a basis. The naive, or more straightforward, way to prove that something is a basis is to check both properties: that the set is spanning and that it is linearly independent. But now we see that if you already know a priori what the dimension is, then you can prove that something is a basis by checking only one property and then making sure that you have exactly the right number of elements. Namely, if you have a subset which is spanning and it has the right number of elements — meaning the number of elements equals the dimension — then it is a basis. You don't have to check linear independence; it follows automatically. Any questions? So that's the story of spanning subsets: the ones which give you everything but are potentially redundant, so we have to get rid of the redundancy if we want a basis. That's taken care of by Theorem 1. Next will be linearly independent subsets. A linearly independent subset is, on the contrary, an approximation from below: something which is not redundant but may not give you everything, so you have to throw in more things. Okay, Theorem 2: every linearly independent subset of a finite-dimensional vector space can be extended to a basis. (I've already said that we assume throughout this lecture that the vector space is finite dimensional, but let me repeat it one more time.) "Extended" means that you add more elements to it. This is actually much easier to prove. First, denote our linearly independent set by u_1, ..., u_k. Then, since V is finite dimensional, it has a finite spanning subset; let's call it w_1, ..., w_p.
Then just take the union — join all of them. It's going to be an overkill, obviously, but what are we going to get? We're going to get a set which is certainly a spanning set: if you have a spanning set and you add something to it, it's still a spanning set. In this case w_1, ..., w_p is a spanning set and we are adjoining some more vectors, so the union is still a spanning set. So we are now in the framework of Theorem 1: we have got ourselves a spanning set. [A student asks about "list" versus "set".] Yes — I have mentioned several times that these are synonymous here; more precisely, what we are using are ordered sets, and maybe that's why the author uses "list". You see, in this argument we are actually using the enumeration. You have a point: traditionally, when we talk about sets, we don't use an ordering. The more proper notion for what we are discussing is an ordered set, because we have a set with a particular ordering of its elements. The notion of a list is equivalent to the notion of an ordered set, rather than to the notion of a set. And it is much more convenient to work with ordered sets. For instance, in our standard example of a vector space, when we talk about column vectors, you have an ordering, because you go from top down, or from left to right. There is an ordering in the natural basis of F^n — which is what I wrote earlier in the case of C^n, the vectors with 1 in one of the positions — it acquires a natural ordering from this. If I were to permute them, it would create difficulties in the notation. So I'm actually glad you brought this up, because it is a subtle but important point: what is natural in this discussion of vectors and linear dependence is not really sets of vectors, but ordered sets. And this justifies the terminology of the book. The author doesn't quite discuss this, but he uses the term "list", and I found it a little annoying until now; but now I realize it is good that he uses it, because this way he does not confuse the two notions. You see, a proper notion of a set is something like the set of students in this classroom, the people sitting in this room: it is not ordered by anything. Once you start looking at some attributes, you can order it, alphabetically by last name or whatever; but a priori it is an unordered set. And an unordered set is difficult to work with, because we are used to listing things; we are not going to just throw things on the board like darts. Linear algebra just becomes much more coherent when you choose an ordering, and you see how in these arguments we really go from the first element to the second and so on. So the ordering really matters in this argument as well. In particular, it matters because we have now acquired a set which is a spanning set, and it is ordered in this way: the first k elements come from the linearly independent subset we started with, and the last elements are the ones that span. Now we apply to it the machinery of Theorem 1 — the procedure of that theorem. In this procedure, at each step we either do nothing or remove one of the vectors, and we start from the first. Because the first k vectors are linearly independent, none of them will be removed. So they will stay, and then some of the w's will be removed. But that means that we're adding something to the first k vectors that we already had.
The result is a basis. That means that we obtain a basis by adjoining some elements to u_1, ..., u_k. All right, is that clear? [Question: would the argument work without ordering?] Yes, it would work, but you would have to preface it by saying "let's order them" and then apply this argument. The ordering is not something spurious; it just allows us not to preface every proof with "let's order these elements", because they already come ordered. That's the idea. To summarize the proof: take the union and apply Theorem 1. Now, this theorem also has a corollary, similar to Corollary 1. There, the number of elements is always greater than or equal to the dimension, and if they are equal, the set is a basis. Now it's going to be the opposite. Corollary 2: a linearly independent subset — an ordered subset; allow me to use "subset" in the sense of ordered subset unless specified otherwise, by default, just as a shorthand, because I'm used to the word "subset" and I'm afraid that if I try to switch to "lists" now I will make mistakes; so "subset" means "(ordered) subset" — a linearly independent subset of an m-dimensional vector space has n elements, with n less than or equal to m. And if n is equal to m, it is a basis. This is proved in the same way. That n is less than or equal to m we know because of the basic inequality: every linearly independent set has at most as many elements as any spanning set, and therefore at most as many elements as any basis. That's clear; this was proved before. The new part says that if you know what the dimension of the vector space is — call it m — and you have a linearly independent subset with that number of elements, then it is necessarily a basis. Why? Because, by Theorem 2, a basis can be obtained by extending this linearly independent subset, which means we may have to throw in something. But the set already has m elements, so if you throw in or add some vectors, it will have more elements: you would have a basis with more than m elements. And that's impossible, because every basis has the same number of elements. You see how just one property, together with knowing the dimension, is enough to demonstrate that something is a basis. Okay. One more corollary, which is actually something we might take for granted, but which needs to be proved: every finite-dimensional vector space has a basis. It's not obvious, right? Because so far a finite-dimensional vector space has been defined as one that contains a finite spanning subset. So, Corollary 3 — or maybe I should call it Corollary 1′, to emphasize that it comes from Theorem 1: every finite-dimensional vector space has a basis. Proof: take any finite spanning subset — one exists because the space is finite dimensional — then by Theorem 1 it contains a basis. So every finite-dimensional vector space has a basis. Then, as we discussed, there are many different bases; the only thing they all have in common is the number of elements, and that is the dimension.
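In the same spirit as the earlier sketch, here is a minimal procedure for Theorem 2 (and hence Corollary 2), again in numpy. The name extend_to_basis and the rank test are my own illustration choices under the assumption that we work in R^n; the lecture's argument is the symbolic version of exactly this: append a spanning list and run the Theorem 1 reduction.

```python
import numpy as np

def extend_to_basis(independent, spanning, tol=1e-10):
    """Theorem 2 as a procedure: append a spanning list to the independent
    list, then run the Theorem 1 reduction -- drop every vector that is a
    linear combination of the vectors kept before it.  Because the first
    vectors are linearly independent, none of them get dropped."""
    kept = []
    for v in list(independent) + list(spanning):
        candidate = np.column_stack(kept + [v])
        if np.linalg.matrix_rank(candidate, tol=tol) == len(kept) + 1:
            kept.append(v)   # v is not a combination of the kept vectors
    return kept

# Example in R^3: extend one independent vector using the standard spanning list.
u = [np.array([1.0, 1.0, 0.0])]
e = [np.array([1.0, 0.0, 0.0]),
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]

basis = extend_to_basis(u, e)
print(len(basis))   # 3 = dim R^3
print(basis[0])     # [1. 1. 0.] -- the original independent vector is still first
```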
Okay, our next task is to understand how the notion of dimension interacts with the notion of a subspace. So far we have learned the notion of a vector space and the notion of a subspace, and now we have learned the notions of linearly independent and spanning sets, as well as the notions of a basis and of dimension. It is natural to ask: how do they interact, how do they fit together? The first statement is that if you have a subspace, its dimension is always less than or equal to the dimension of the bigger space. So let V be a vector space and U a subspace. First of all, there is a theorem, which I'll call Theorem 3 since I have already numbered the first two: if V is finite dimensional, then so is U. Intuitively it's clear, right? Dimension is a measure of size, so if the ambient, bigger space is finite dimensional, surely a subspace has to be finite dimensional. But yes, we have to prove it, because it's not obvious — it doesn't follow from the axioms right away. I will skip the proof — I have to get to other things — but you can read it in the book, where it is explained very clearly: every subspace of a finite-dimensional vector space is also finite dimensional. Because of that, we can now compare the dimensions: both U and V are finite dimensional, so both have dimensions, the number of vectors in any basis. Theorem 4 says that the dimension of U is less than or equal to the dimension of V. How to see that? It's actually very easy, because it follows from our basic inequality. Choose a basis of V, say v_1, ..., v_n; this is a spanning subset of V. Choose a basis of U, say u_1, ..., u_m; that's a basis of U, but we will look at it as a linearly independent subset of V. If you have a linearly independent subset of U, it is also linearly independent in V, because linear independence is a property of the vectors themselves: it doesn't matter whether we regard the vectors as part of U or as part of V. So on the one hand we have a linearly independent subset of V with m elements, and on the other hand a spanning subset of V with n elements; therefore m is less than or equal to n. That's the inequality we wanted to prove: the dimension of a subspace is less than or equal to the dimension of the ambient, bigger space. But there is more. The next question is: what if they have the same dimension — could it be that they are not equal? The answer is no: if two vector spaces have the same dimension and one is contained in the other, they are equal. There is no gap; somehow there cannot be any gap between two vector spaces, one included in the other and of the same dimension. Theorem 5: suppose V is a finite-dimensional vector space, U is a subspace of V — which by Theorem 3 is also finite dimensional — and the dimension of U is equal to the dimension of V; then U equals V. Here we are going to use the theorem about extending. Take a basis of U, say u_1, ..., u_n. It is linearly independent by the definition of a basis — let me write it more carefully: it is linearly independent in U, and therefore also in V. So we have a linearly independent subset of V, and by Theorem 2 it can be extended to a basis of V: we can obtain a basis of V by adjoining some elements. But if we actually needed to adjoin elements, that would increase the size: it would mean that the basis of V has more than n elements. And our condition is that the dimensions coincide: the dimension of V is n. Therefore u_1, ..., u_n is already a basis of V, and if it is already a basis of V, it means that every vector of V is a linear combination of vectors of U. We know that u_1, ..., u_n is either already a basis of V, or that we can obtain a basis of V by adjoining more elements.
But the second possibility cannot happen, because of the assumption that the dimension of V is the same as the dimension of U. If we actually had to adjoin elements to get a basis, it would mean that the dimension of V is greater than n, which is the dimension of U. Since the two dimensions are the same by the assumption of this theorem, it follows that u_1, ..., u_n is already a basis — a basis of both U and V. But then every vector of U is a linear combination of u_1, ..., u_n, and every vector of V is a linear combination of u_1, ..., u_n. Therefore they are the same: U and V coincide. This is nice because it shows you that vector spaces increase in quanta — one could say dimension is quantized — which means that they can only increase in certain increments. [A student points out a slip: it should say "since the dimension of V is equal to the dimension of U".] Yes, here — right, thank you. So vector spaces cannot increase continuously; the size of a vector space cannot increase continuously, it can only increase by one dimension at a time. You start with the smallest vector space over a given field, the space consisting of one element only, zero. This vector space is zero dimensional; we stipulate that it has a basis which is the empty set — this is to tie things together. The next one is a one-dimensional vector space: the dimension is one, it cannot be one half. And if you have two one-dimensional vector spaces with one containing the other, they are the same. So dimension gives you a very good measure on vector spaces. There could be two different one-dimensional vector spaces which are not equal — but then neither is contained in the other. If you have two one-dimensional vector spaces and one is contained in the other, it means you have two lines and they actually coincide: you cannot have a "subline" of a line which is not equal to the whole thing. You see, this is important, because think about the alternative situation. Suppose that we did not ask for scalar multiplication by elements of a field like the field of real numbers, but only asked for a set on which you have multiplication by integers. Then, instead of the whole real line, you could take just the integers. This is a one-dimensional analog in the case when the field of real numbers is replaced by the set of integers. What is the difference between the set of integers and the set of real numbers? Both have an operation of addition which satisfies the usual axioms, and both have an operation of multiplication. But the difference is that the set of real numbers is a field, which means that every nonzero element has a multiplicative inverse: 5 has the inverse 1/5. The integers do not: only 1 and -1 have inverses; the number 2 does not have an inverse within the integers. So the integers form a subset of the line which is preserved by multiplication by integers — but then the half-integers, for instance, are also preserved by multiplication by integers, also sit inside the line, and are not equal to it. What prevents this situation from happening for vector spaces is the fact that we have multiplication not just by a ring, like the set of integers, but by a field: we can always undo multiplication by a nonzero number, because we can also multiply by its inverse. All right. Now suppose that we have a subspace which is not equal to the given vector space. We can still use this argument to show that we can always extend a basis of the subspace. So U is a subspace, and suppose that the dimension of U is strictly less than the dimension of V.
Let's say the dimension of U is m and the dimension of V is n, with m strictly less than n. We can choose a basis u_1, ..., u_m of U, and again, using the fact that this is a linearly independent subset not only of U but also of V, by Theorem 2 it can be extended to a basis of V. That is to say, there exist some elements w_1, ..., w_{n-m} such that if you adjoin them to u_1, ..., u_m, you get a basis of V. A simple illustration of this: remember how we discussed the three-dimensional vector space R^3, how it can be thought of as the space of this classroom. As a subspace we could take, for example, the plane which contains this notebook. The point is that you can choose a basis of that plane — two vectors going along the notebook — and you can always find a third vector which is not contained in the plane, which is transversal to it, so that by adjoining it to the first two you get a basis of the three-dimensional space. The simplest case would be two perpendicular vectors going along the edges of this notebook, and then a vertical one. That makes it very clear what this theorem is about. So far this is just an argument, but it can be framed in the following way: every subspace has a complement — another subspace such that the first and the second together give rise to V as a direct sum. This gives the following theorem, which is number six: in the same setting as before, given a subspace U inside V, there exists another subspace W in V such that the direct sum U ⊕ W is V. Think about the example: V is the three-dimensional space, U is a plane, W is a line. In this case the sum is the entire three-dimensional space, and they intersect only in zero — which, as we have shown earlier, is equivalent to V being the direct sum. To prove this, we use an earlier theorem saying that V = U ⊕ W is equivalent to two properties: U + W = V, and the intersection of U and W is just the zero element. Then define W following this procedure. Namely, as I just explained, a basis of U can be extended to a basis of V by adding the vectors w_1, ..., w_{n-m}; define W as the span of w_1, ..., w_{n-m}. Then the proof is very straightforward: with this definition of W, the fact that U + W is the whole of V is more or less just the statement that the combined set is a spanning set, and the fact that it is linearly independent gives you that the intersection is zero. I'll leave the details for you.
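A small numerical sketch of Theorem 6, assuming for concreteness that V = R^n and that we extend by vectors from the standard spanning list; the name complement_basis is my own, and this is just the extend-then-span recipe described above, not anything beyond it.

```python
import numpy as np

def complement_basis(U_basis, n, tol=1e-10):
    """Theorem 6 as a procedure (a sketch, assuming V = R^n): extend a basis
    of the subspace U by standard vectors; the added vectors w_1, ..., w_{n-m}
    span a complement W with U + W = V and U intersect W = {0}."""
    kept = list(U_basis)
    added = []
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        candidate = np.column_stack(kept + [e_j])
        if np.linalg.matrix_rank(candidate, tol=tol) == len(kept) + 1:
            kept.append(e_j)   # e_j is not in the span of what we have so far
            added.append(e_j)
    return added               # a basis of a complement W

# Example: U is the plane spanned by (1,1,0) and (0,1,1) inside R^3.
U = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
W = complement_basis(U, 3)
print(len(W))   # 1: the complement is a line, and R^3 is the direct sum of U and W
```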
That's more or less what we need to know about bases and subspaces and how they interact with each other. There is one more theorem, about the dimension of a sum of two subspaces, which I'll leave for you to read as the last result of Section 2. But now it's time to move on to the next topic, which is the topic of linear maps. Okay, any questions so far? Up to now we have looked at vector spaces one by one: we considered a particular vector space and various structures in it — for instance, sets of linearly independent vectors, sets of spanning vectors, bases, and so on. We have also considered subspaces. But there is a basic idea in mathematics that you get an interesting theory if you consider certain objects and also consider the interactions between them. These interactions are, in general, called morphisms, and the point of view I am referring to is called category theory. Category theory is a theory in which you are interested not only in objects by themselves, but also in maps between them — maps which make sense in the context in which these objects are defined. Here is what I mean more precisely. The most basic, vanilla case is the case of sets — set theory — where our objects are just sets without any particular structure, just collections of things. Already at this level there is a notion of a map: given two sets, we have a notion of a map, or a function, from one set to another. It is defined as a rule which assigns to every element s of the first set some element of the second set, which we call f(s). That is the proper notion of a morphism in the category of sets. Set theory would be incomplete if we just looked at sets one by one; it becomes a much more interesting subject when we look at sets and also at maps between them. The question for us is: what should be the analogous notion in the theory of vector spaces? It shouldn't be an arbitrary map, because a vector space is not just an average set. A vector space is a set with a particular structure — actually two structures, addition and scalar multiplication. So it is natural to consider only those maps which are compatible with these structures. That leads us to the notion of a linear map: in the theory of vector spaces we consider what are called linear maps, which in some other textbooks are called linear transformations. What is a linear map? Suppose you have two vector spaces V and W over the same field F. It is essential that they are over the same field: for example, they could be two real vector spaces or two complex vector spaces, but not a vector space over the reals and a vector space over the complex numbers. Then we have the following definition. A linear map T from V to W is a map of sets — remember, a vector space is defined as a set, but a set with an operation of addition and an operation of scalar multiplication by elements of the field over which it is defined — a map of sets, or a function, from V to W which is compatible with these operations. That means it satisfies two properties. First: T(u + v) = T(u) + T(v). Here u and v are two elements of V. If you have two elements of V, you can take their sum, because V is a vector space, and then apply the map; that's one thing you can do. Or you can apply the map to each of them and then take the sum in W. The condition we impose is that the results are the same. That's what I mean by this map being compatible with the operation of addition: you can do the addition before applying the map or after, and the result is the same. Number two: the same for scalar multiplication. If you have a vector v and a scalar λ, you can do the scalar multiplication in V and then apply T, or first apply T and then multiply by λ: T(λv) = λT(v). This should hold for every v in V and every λ in F. Those are the two properties. We will also introduce a notation: denote the set of all linear maps between two given vector spaces by L(V, W). Oftentimes — not always, but often — we will consider the case when the two spaces are the same, V = W; then we are really considering maps from V to itself, and in this case we will simply write L(V), which is equivalent to L(V, V).
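As a quick illustration of the two defining properties, here is a small numpy sketch that spot-checks them on random inputs. The helper name is_linear_on_samples is my own, and of course passing a finite random test is only evidence of linearity, not a proof.

```python
import numpy as np

def is_linear_on_samples(T, dim, trials=100, tol=1e-9):
    """Numerically spot-check the two defining properties of a map
    T: R^dim -> R^m on random vectors and scalars."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        u, v = rng.normal(size=dim), rng.normal(size=dim)
        lam = rng.normal()
        if not np.allclose(T(u + v), T(u) + T(v), atol=tol):    # additivity
            return False
        if not np.allclose(T(lam * v), lam * T(v), atol=tol):   # homogeneity
            return False
    return True

A = np.array([[2.0, 0.0], [1.0, 3.0]])
print(is_linear_on_samples(lambda v: A @ v, 2))    # True: matrix multiplication is linear
print(is_linear_on_samples(lambda v: v + 1.0, 2))  # False: a shift fails additivity
```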
What do these properties mean? Consider the simplest case: V and W are one dimensional over the reals — in fact, take both to be R itself, viewed as a vector space over itself. Then we are considering maps from R to R. Let's use the notation f, because that is what we are more familiar with from calculus: in single-variable calculus we talk about functions from R to R. A natural question arises: which of the functions we have studied in calculus, in single-variable calculus, are actually linear maps? There are so many different functions: polynomial functions, trigonometric functions, exponential functions, and so on. Let's restrict ourselves to a large class of functions, the polynomial functions. What are the conditions now? The first condition means that f(x + y) = f(x) + f(y); the second is that f(λx) = λf(x). So these are the two conditions, and we ask which polynomials satisfy them. The polynomials form an infinite-dimensional vector space, as we discussed — or at least it is discussed in the book. Out of this whole infinite-dimensional space of polynomials of all degrees, there is only a one-dimensional subspace — namely, the polynomials of degree one without constant term — which satisfies these properties. Just to show you how restrictive this property is, I claim that if a polynomial f satisfies these properties, then it must be of the form f(x) = ax: a polynomial of degree one without constant term. How to see that? A general polynomial looks like f(x) = a_n x^n + ... + a_1 x + a_0. If I write f(λx), which is the left-hand side of the second condition, then the first term gets multiplied by λ^n, the next by λ^{n-1}, and so on; the only term which gets multiplied by exactly λ is the term a_1 x. If we require f(λx) to equal λf(x) for all λ, this forces all the other coefficients to vanish: a_0 = 0 and a_k = 0 for k greater than or equal to 2, so only the term a_1 x survives. On the other hand, if f is given by this formula, f(x) = ax, then the first condition is also trivially satisfied. So the linear functions — the dilations f(x) = ax, polynomials of degree one without constant term — are the only polynomials that are linear maps from R to R. Now, there are a few basic properties of linear maps, but I'll just show you one, and you can read the rest in the corresponding section of the book, because it is very straightforward. Here is one cute one. Suppose you have a linear map T from V to W. You have a zero element in V, because every vector space has a zero element; I'll call it 0_V, to emphasize that it is the zero element of V. There is also a zero element in W, which I will denote 0_W. The statement is: if T is a linear map, then T(0_V) = 0_W. Every linear map takes the zero element to the zero element. Which makes sense, because, as I said, we want maps which preserve the structures; in the definition we only talk about addition and scalar multiplication, but zero is also very much woven into the structure of a vector space, so it is nice to know this. For the proof we just use property one. Take 0_V + 0_V. On the one hand, this sum in V is just 0_V, so T(0_V + 0_V) = T(0_V). On the other hand, by linearity — property one — it is T(0_V) + T(0_V). So T(0_V) = T(0_V) + T(0_V). Now add the negative of T(0_V) to both sides: on the left nothing is left except 0_W, and on the right we are left with T(0_V). So T(0_V) = 0_W, which proves the statement. All right, I believe we are out of time, so to be continued next week.