All right, last time I introduced the notion of a subspace of a vector space V, and we started discussing what we can do with subspaces. In the case of subsets, we can take unions and intersections, and they will still be subsets. In the case of subspaces, if V1 and V2 are subspaces of a vector space V defined over some field, then the intersection V1 ∩ V2 is still a subspace; that is actually one of the homework exercises. However, as I showed last time, the union is usually not a subspace. The example I gave last time was the union of the two one-dimensional subspaces corresponding to the two coordinate axes in the two-dimensional vector space: the union gives you a cross, and it is not preserved by the operation of addition. Therefore we have to come up with a different notion than union in order to stay within the world of subspaces. That notion is the sum of two subspaces, written V1 + V2. In fact, I will give a more general definition of a sum of not just two, but any finite number of subspaces.

Definition. Let V be a vector space (sometimes I will abbreviate "vector space", and instead of writing "over a field F" each time I will use shorthand), and let V1, ..., Vm be subspaces of V. We define V1 + V2 + ... + Vm to be, by definition, the subset of V consisting of all vectors of the form v1 + v2 + ... + vm, where vi is in Vi. In other words, we take all possible sums of elements of those subspaces; that is the sum.

Now, what does this mean in our basic example? You take sums v1 + v2, where v1 is an element of the first subspace and v2 is an element of the second subspace, the subspaces shown on the picture. From the picture it is clear that v1 has to be a vector along the horizontal axis and v2 a vector along the vertical axis, and by the parallelogram rule the sum v1 + v2 is the hypotenuse of the right triangle you draw this way. Now, if you vary v1 and v2 (v1 can be arbitrarily long, going in either direction along the horizontal line, and likewise v2 along the vertical line), it is clear that the vectors obtained this way cover the entire plane. From this it is clear that V1 + V2 is the entire space, which we call R². More precisely, R² was defined algebraically as the set of column vectors with coordinates x1, x2 being real numbers, with respect to the usual operations of addition and scalar multiplication over R. But by choosing the coordinate axes, we have identified R² with the vector space consisting of all vectors on the plane; this I explained before. This is a geometric realization of R², which makes it clearer how the sum of these two subspaces becomes the entire thing. You can also see it algebraically: V1 consists of all column vectors of the form (x1, 0), where x1 is a real number, and V2 consists of all column vectors whose first coordinate is zero and whose second coordinate is an arbitrary real number x2.
Clearly, if you take all possible sums, you will get all possible pairs (x1, x2), and that's exactly R². That's another way to see it, more algebraic than geometric. Both arguments clearly show what the sum is. So that's the proper notion which replaces the notion of union in the category of vector spaces. Next, notice that in this way we generate new elements out of the subspaces. Think of it as generating, because you're not just taking a union. Union is a naive notion of set theory, where you simply adjoin things from one subset to things from another subset. Here, you're generating things, because you have an operation of addition. A vector space differs from a plain set in that it carries two operations: the operation of addition, which to any two elements of V assigns their sum, another element of V, and the operation of scalar multiplication, multiplication by a scalar, an element of your field, in this case the field of real numbers. Therefore, when you talk about any notion in this world, the world of vector spaces, you have to make sure that it interacts well with these two operations. In this particular instance, it means that it's not enough to just adjoin elements of V1 to elements of V2, as we have seen; we exploit the addition operation to generate more elements, by taking all possible sums of elements of V1 and V2.

Now you might ask: why are we not also using scalar multiplication? We could, but scalar multiplication keeps each Vi within the confines of its own subspace. Scalar multiplication is not something that applies to two elements of V; it applies to a single element of V. If your element, say v1, is in the subspace capital V1, then a scalar multiple of it is still an element of V1. So we are not losing generality by not referencing scalar multiples. In fact, the union itself was already preserved by scalar multiplication: if all we cared about was multiplying vectors by scalars, the union would be closed under that operation.

Remember, on Tuesday we had three conditions that have to be satisfied for a subset to be a subspace: it has to contain the zero element, it has to be closed under addition, and it has to be closed under scalar multiplication. The union actually satisfies two of the three properties: it does include the zero element, and it is preserved by scalar multiplication. What fails for the union is the condition that it be closed under the operation of addition. That's why what we need to do is take all possible sums of elements from V1, V2, ..., Vm, to get something which is a subspace. This is a new subspace, and this is how we can generate. You can verify, as I just explained, that it satisfies all three conditions, so it is a subspace by the theorem from last time. This is good news: we now have a way to produce new subspaces from given subspaces.

But then we notice an interesting phenomenon, which is that in general there is a certain redundancy in this operation. Bear with me. In our example there is no redundancy: you have two subspaces and you produce these elements which are the sums, and if you think about it, you can reverse this process.
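In fact, reversing the process is exactly the coordinate decomposition. Here is a minimal numpy sketch (my own illustration, not from the lecture) that splits an arbitrary vector of R² into its components along the two coordinate-axis subspaces and checks that their sum recovers it.

```python
import numpy as np

# V1 = {(x1, 0)} and V2 = {(0, x2)} are the two coordinate-axis subspaces of R^2.
v = np.array([3.0, -2.0])          # an arbitrary vector in R^2

v1 = np.array([v[0], 0.0])         # projection onto the horizontal axis (in V1)
v2 = np.array([0.0, v[1]])         # projection onto the vertical axis (in V2)

# Every vector of R^2 arises this way, so V1 + V2 = R^2.
assert np.allclose(v1 + v2, v)
print(v, "=", v1, "+", v2)
```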
Indeed, instead of producing a vector on the plane as a sum of v1 and v2, you could start with a general vector on the plane and ask how it can be represented as a sum of an element of V1 and an element of V2. And in fact you find that v1 and v2 are uniquely determined by this vector: namely, v1 is just the projection of the vector onto the horizontal axis, and v2 is its projection onto the vertical axis. There is no redundancy. The subspaces are utilized, somehow, to full capacity, and there is no ambiguity in how to represent a general vector of the sum as an actual sum of two elements.

But that's not always the case. For example, suppose I choose a third subspace, which I will call V3, which is neither of the two coordinate axes. Remember, last time I classified all subspaces of R²: we saw that every subspace is either zero, or a line going through the origin, or the entire R². Let's take a third one of this form, a line. Then we can form V1 + V2 + V3. Let me call this vector v3; we see that it's equal to v1 + v2. In principle, the definition does not impose any conditions on the subspaces: you can take any number of subspaces, and you still have this definition. But there is a difference between the previous example and the example where you take the sum V1 + V2 + V3, where V3 is this skew line. Now, obviously we're not going to get anything other than R², because already from the sum of the first two we get R², and if you throw in more vectors, well, you are still in R², because we don't have a bigger space. Everything here is happening inside V: the sum cannot be bigger than V; it's still going to be a subspace of V. Therefore the result is clearly the same. But there is a difference between the two cases, because now there is a redundancy. What do I mean by redundancy? Now a particular vector in our R² can be represented in different ways as a sum v1 + v2 + v3. For example, the vector v3 can be written in two different ways: on the one hand, it is a vector inside V3 itself; on the other hand, you have the expression v3 = v1 + v2, a vector of V1 plus a vector of V2. It's intuitively clear why: V3 already belongs to the sum V1 + V2, and therefore you have this redundancy.

It's natural to ask how to deal with this, and how to single out the situation of the first example, where the subspaces are transversal to each other. That's the sweet spot: they generate the entire thing (the sum of V1 and V2 is the whole vector space R²), but they are not intruding on each other; there is no overlap. That leads us to the notion of direct sum. That's the next definition, which is a continuation of the previous one; I use the same notation. Definition: the sum V1 + V2 + ... + Vm, like on that first blackboard, is called a direct sum if each of its elements can be written uniquely (uniquely, that's the key) as v1 + v2 + ... + vm with vi in Vi. In other words, there is no overlap. In this case, our V1 + V2 + V3 is not a direct sum, as this equation shows; but the example where you have just V1 and V2 is a direct sum. If it is a direct sum, then we use slightly different notation to indicate this fact: instead of just a plus, we put a plus with a circle around it, ⊕.
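Here is a small numpy illustration of the redundancy (my own, under the assumption that V3 is the line spanned by (1, 1)): the same vector admits two different decompositions as a sum of elements of V1, V2, and V3.

```python
import numpy as np

# V1 = span{(1,0)}, V2 = span{(0,1)}, and V3 = span{(1,1)}: a third line
# through the origin (the choice (1,1) is just for illustration).
v3 = np.array([1.0, 1.0])          # a vector lying on the line V3

# Decomposition 1: take it entirely from V3.
dec1 = (np.zeros(2), np.zeros(2), v3)
# Decomposition 2: take it as v1 + v2 with nothing from V3.
dec2 = (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.zeros(2))

# Both triples sum to the same vector, so V1 + V2 + V3 is not a direct sum.
assert np.allclose(sum(dec1), sum(dec2))
print(sum(dec1), sum(dec2))
```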
When you see that something is equal to the sum V1 + V2 + ... + Vm, it simply means that that subspace consists of all elements of this form. But if you see that some subspace is a direct sum, it means that it consists of all such elements and there is an additional statement implicit in the notation, namely the uniqueness property: every element can be written uniquely as a sum of elements of those subspaces.

To see one more example of this, let's expand our horizons and go from R² to R³. The convenience of R² is that I can draw things on the blackboard, because it's two-dimensional. If you want to talk about R³, it's the space of this classroom. So usually what we do is draw projections of it. For instance, the analog of a line on the plane would be a line in three-dimensional space. Now, just as in the two-dimensional case, to really talk about this space as a vector space you have to choose a zero element in it, because the vectors, the intervals, have to start somewhere. That's why I always chose this point on the plane, and I even chose the coordinate system. Likewise, in principle we should choose a coordinate system here: imagine three coordinate axes, the way we usually do it in Math 53, multivariable calculus, and so on. But in any case, we have to choose the origin; let's imagine it somewhere here. Then the analog of a one-dimensional subspace in R³ would be a line which goes through this origin, something like this. It could go at any angle, an arbitrary line; the condition is that it has to go through the origin. We saw last time, in R², that lines which don't go through the origin do not correspond to subspaces: they are not closed under the operation of addition, they are not closed under the operation of scalar multiplication, and they do not contain the zero element. It's a trifecta, but in a bad way: none of the conditions is satisfied. On the other hand, a line which goes through the origin satisfies all three conditions, and that is a subspace. Now, there is more: there are also two-dimensional subspaces, which are planes going through the origin. These are subspaces too, and there are many of them.

What is the analog of our example? Well, there are several analogs. For example, I could take two lines going through the origin. What will be the result? Assuming the two lines are distinct, the sum is going to be a plane: there is a unique plane which contains these two lines, and if you think about it, that plane is the subspace which is the sum of the two. In this case it's a direct sum, because, in the same way as on that picture, you can verify that every vector on this plane has a unique representation as a vector here plus a vector here.

Then the next example: my V1 is a line, my V2 is a plane. Now there are two possibilities. One is that they are transversal to each other; the other is that the plane contains the line. The plane containing the line is a case of redundancy: in this case the sum is going to be just this plane, because the line doesn't give you anything new. Vectors on this line are already vectors in the plane, and if I take sums, we are still not going to get anything new. That's the case when it's not a direct sum.
But if they are transversal to each other, then you can actually obtain every vector in R³ as a sum of a vector from this plane and a vector on this line. The most obvious example is when they are perpendicular to each other: if one of them is the z-axis and the other is the xy-plane, then it is clear just from the coordinate representation that every vector can be written as a sum of a vertical vector, which goes along the z-axis, and a horizontal vector in the xy-plane. But similarly, even if the line is somewhat skewed, as long as it does not collapse into the plane, the same argument works: every vector in three-dimensional space will be a sum of a vector going along this line and a vector contained in this plane. That's also an example of a direct sum.

Notice one interesting phenomenon. I have not yet properly defined what the notion of dimension is; we are gradually coming to that notion. What is the dimension of a vector space? Intuitively, we understand that a line is one-dimensional, a plane is two-dimensional, and the space of this classroom is three-dimensional. Look how interesting: in the sweet-spot example where you have two lines, the sum of these two lines, which is in fact a direct sum, is the entire plane. You see that the dimension of the entire thing, which is two, is equal to one plus one, the dimensions of the two pieces. Likewise, here you have three-dimensional space: this is one-dimensional, this is two-dimensional, we have a situation of direct sum, and again three is one plus two. That's going to be the case in general; we are slowly starting to understand from these examples what dimension is.

But now let me give you one more example, where both subspaces are two-dimensional. It's like two notebooks, both of them passing through the origin, so let's say the origin is somewhere here. Both of them are legitimate subspaces, and both of them are two-dimensional. What is the sum? I'm assuming they don't coincide (that's not interesting); they're different, with a nonzero angle between them. The sum is going to be the entire space R³. In fact, we already get all of R³ even if we take just the first plane together with one line inside the second plane. And in this case it's not a direct sum, which you can see right away from the fact that two plus two is four, but the ambient space is three-dimensional, so there is a redundancy, a one-dimensional redundancy. You can also see it clearly from the fact that the intersection of the two planes is a line, right? The two planes intersect along a line. If you have a vector along that line, on one hand it belongs to the first plane, and on the other hand it belongs to the second plane. Therefore it can be written in two different ways as a sum v1 + v2: first as an element of V1 plus zero, and second as zero plus an element of V2. Therefore it's not a direct sum.

From this you see that if you have two subspaces, the sum is a direct sum precisely when there is no nontrivial intersection between them. Why do I say nontrivial? Because of course they always intersect at zero: both of them contain zero, since that's one of the conditions for being a subspace. So we are not talking about an empty intersection, but about a trivial intersection, trivial meaning only zero and nothing else.
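The dimension bookkeeping here can be checked numerically. A hedged numpy sketch (my own example planes, not the lecturer's): matrix rank gives dim(U + W), and the standard count dim U + dim W − dim(U + W) gives the dimension of the intersection, confirming the one-dimensional overlap of two planes in R³.

```python
import numpy as np

# Two planes through the origin in R^3, given by spanning vectors (columns).
# These particular planes are my own choice for illustration.
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])          # the xy-plane
W = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])          # the xz-plane

dim_U = np.linalg.matrix_rank(U)                       # 2
dim_W = np.linalg.matrix_rank(W)                       # 2
dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))     # dim(U + W) = 3

# dim(U ∩ W) = dim U + dim W - dim(U + W) = 2 + 2 - 3 = 1: the intersection
# is a line, so U + W is not a direct sum.
print(dim_U, dim_W, dim_sum, dim_U + dim_W - dim_sum)
```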
This intuitive argument can be made precise in the following lemma. Suppose U and W are two subspaces of V; imagine the ones I just described: two planes, or a plane and a line, or two lines in R³. Lemma: U + W is a direct sum if and only if U ∩ W consists of just the zero element, which is the smallest possible intersection of two subspaces.

Let me quickly prove it. The first thing to see immediately is the "if and only if": there are actually two statements that we need to prove. First, suppose it is a direct sum; let us show that the intersection is zero. Pick an element v in the intersection (let me continue here). We want to show that this element is necessarily zero. Let me write two equations. First, v + (−v) = 0. Now, the zero here is the zero vector of V; on Tuesday I used a subscript, 0_V, to indicate which zero element I'm talking about. Well, in this case it's not really ambiguous, because we are assuming that U and W are subspaces of V, which means they all share the same zero element. But just in case, because there is also, floating around somewhere, the zero element of the field over which everything is defined, let's keep the subscript, just to make sure. Now, this equation comes from the axioms of a vector space: the existence of an additive inverse. Every element has an additive inverse, and in fact we also know that this additive inverse is unique. This equation is true for every vector; now we are applying it to a vector in the intersection. On the other hand, we also have 0 = 0 + 0. So we get two expressions of the same element, namely zero, in the form u + w, where u is in U and w is in W. Why? Because the vector v is in the intersection of the two, which means it belongs to both U and W. So v + (−v) has the perfect form of an element of U plus an element of W: we are viewing v as an element of U, and −v as an element of W. Why is −v an element of W? Because W is a subspace: v is an element of W, so −v is also an element of W. Here I'm writing zero as a sum of an element of U and an element of W: I'm treating v, the first summand, as an element of U, and I'm treating −v as an element of W. And in 0 = 0 + 0, I can treat the first zero as an element of U and the second zero as an element of W. You see, I'm getting two different expressions. But we have assumed that this is a direct sum, which means that such an expression for every vector is unique, which means that these two expressions actually have to be the same. They can only be the same if v is equal to zero: uniqueness of the representation of v as such a sum, which is part of the definition of the direct sum, means that v is actually equal to 0. That's the only way these two expressions are the same. Okay, that proves it in one direction.

Next, the other direction. I might need this board, so let me do the opposite one this way. Now we suppose that U ∩ W = {0}, as small as possible, and let us show that U + W is a direct sum.
To show that U + W is a direct sum means showing uniqueness: if u1 + w1 = u2 + w2, where u1, u2 are in U and w1, w2 are in W, then necessarily u1 = u2 and w1 = w2. That's what uniqueness means. How do we show this? We subtract u2 on both sides, and we subtract w1 on both sides. I'm doing it this way, in a somewhat pedantic way, because I'm following the axioms; but of course it is what we colloquially call taking something from the left-hand side to the right-hand side with the opposite sign. When you subtract u2 from both sides, it cancels on the right-hand side but brings −u2 to the left-hand side. Likewise, when we add −w1 on both sides, w1 is already present on the left-hand side, so it gets canceled there, and we just subtract it on the other side. It's exactly what we are used to doing with numbers; there's nothing fancy about it, but I'm doing it now to emphasize the link to the axioms. In the future I'll just say "take it to the right-hand side", and I'm not going to emphasize it each time, but at the beginning I want to emphasize it. If we do that, what happens? These cancel out (of course, we use commutativity and associativity freely), and we get u1 − u2 = w2 − w1 (let me just use yellow). The left-hand side is in U, because u1 and u2 are in U, so u1 − u2 is in U; whereas the right-hand side is in W. Since they are equal, we have got an element in the intersection. But we assumed that the intersection is zero. That means both of them are zero: u1 − u2 = 0 and w2 − w1 = 0, which implies what we wanted, the uniqueness: u1 = u2 and w1 = w2.

Any questions? Which direction, this or that? The middle step. Okay: this is already here, because W is a subspace, which means it is itself a vector space, and therefore it has additive inverses. We are looking at a vector v in the intersection, the one we want to prove is equal to zero, and the question is where its negative lives. The easiest way to see where −v is, is to recall that −v equals (−1) times v. Say we are viewing v as an element of W; then W is preserved by scalar multiplication, so (−1) times v is there too, which means −v is there. This way I don't have to appeal to any further axioms, because you might object: what if it's different from the vector which is the negative of v in V? But that's not possible, because −v is in fact equal to (−1) times v, so it's uniquely determined, in V and in any of its subspaces. There is no different additive inverse for a subspace; it's the same one that is the additive inverse in the ambient vector space. You're welcome. Any other questions? Okay, great.

What have we done? We now understand subspaces: how they interact, so to speak, and how to generate new subspaces from existing subspaces. And we have a notion by which we can control redundancy. For two subspaces, we have related this redundancy to the intersection: directness is equivalent to the intersection being trivial, so to speak, the smallest possible redundancy. It also means that if the sum is not a direct sum, then the intersection has to be bigger than zero; it has to be at least one-dimensional, as we have seen in our examples.
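As a computational companion to the lemma (my own sketch, not part of the lecture), one can test directness numerically: if the columns of B_U and B_W are bases of U and W, then U + W is direct exactly when U ∩ W = {0}, that is, when the combined columns remain linearly independent.

```python
import numpy as np

def is_direct_sum(B_U: np.ndarray, B_W: np.ndarray) -> bool:
    """B_U, B_W: matrices whose columns form bases of U and W.
    U + W is direct iff U ∩ W = {0}, i.e. iff stacking the bases
    side by side keeps all columns linearly independent."""
    combined = np.hstack([B_U, B_W])
    return np.linalg.matrix_rank(combined) == B_U.shape[1] + B_W.shape[1]

# A line and a transversal plane in R^3: direct.
line = np.array([[0.0], [0.0], [1.0]])                   # z-axis
plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # xy-plane
print(is_direct_sum(line, plane))    # True

# Two distinct planes in R^3: never direct (they share a line).
plane2 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # xz-plane
print(is_direct_sum(plane, plane2))  # False
```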
The next step is to become a bit more focused on sums, but in the simplest possible case. What are the simplest possible subspaces? Let me erase this. I'm going back to the original definition of the sum, but I want to specialize this definition to the simplest case. The simplest case is when Vi consists of all multiples of a particular vector: you have a particular vector in your vector space, which I will call small vi, and I take all its scalar multiples, multiplying by an arbitrary scalar a from my field, so Vi = {a·vi : a in F}. This way I get a whole collection of elements of our vector space. It is virtually trivial to check that this is a subspace, again by using the theorem that we proved last time: it clearly contains the zero element, which you get by setting a = 0; it is preserved by scalar multiplication; and it is preserved by the operation of addition.

Here are examples. Suppose I choose v1 to be like this. If I take all scalar multiples of v1, I get precisely all vectors along this line: the subspace capital V1 is just this line, the set of scalar multiples of this vector. Likewise, V2 is obtained in this way from the vector v2, and capital V3 is obtained in this way from this vector v3. These are lines, or analogs of lines, in a general vector space. Here we talked about lines on the plane, where our space V is R²; I have also talked about lines in three-dimensional space, the space of this classroom. But in fact, one can introduce the notion of a line in any vector space in this way, by simply taking multiples of a particular vector. Now, there is an implicit assumption in this discussion: if I want this to be a one-dimensional subspace, the vector vi should be nonzero, because if vi is zero, then I will not get anything other than zero. These are the simplest subspaces, really: you get them from one element.

The question is, what does the definition of a sum give us in this particular case? What will the elements be if I have m such subspaces, each of this form? Let's just look over there. When I wrote v1, v2, and so on, I meant elements of the subspaces, an element vi in each capital Vi. But in my notation here, the elements are scalar multiples: the general element of Vi is ai·vi. So that expression is going to be a1·v1 + a2·v2 + ... + am·vm. My notation on this blackboard is slightly different from the notation on that blackboard: what is small vi on that blackboard is actually ai·vi on this one. That's why that sum is actually this sum on this blackboard. Now the parameters are not vectors but scalar coefficients. This leads us to consider all possible expressions of this kind, and we have a special term for them: a linear combination of v1, ..., vm. This is not new; it was certainly considered in a first course on linear algebra, like Math 54. But now I'm explaining it in the context of this definition of a sum of subspaces: it is really the special case when the subspaces are one-dimensional. So we get the notion of linear combination, and the sum of these one-dimensional subspaces is really the totality of all linear combinations of the vectors v1, ..., vm. That gives us the definition of the sum in this case, the span. Suppose we are given some vectors v1, v2, ..., vm in a given vector space over some field.
Then we have the notion of a linear combination, which is an expression of this kind, where each coefficient in front of v1, v2, et cetera, is an element of our field, the field over which everything is defined. But then we also look at the totality of all of them. Then the span (careful: "span", not "spa") of v1, ..., vm is the subspace which consists of all of these expressions, all linear combinations a1·v1 + ... + am·vm. It's called the span, and we write it like this: "span", and then we put brackets and list the elements, span(v1, ..., vm). Sometimes you write v1, v2 and then dots; it's up to you. Sometimes people write it as a sum, the sum of ai·vi for i from 1 to m. There are various ways to shorten it. The book uses this notation, so most of the time I will use this notation, I suppose. I hope it's clear, as I said earlier, that this is the sum V1 + ... + Vm in the special case when each Vi is a line of this nature; or, to be precise, each Vi is either a line or zero, depending on whether vi is nonzero or zero. There is no condition in this definition that all of these vectors are nonzero. Of course, the most interesting case is when they are all nonzero; but just to keep the generality open, we will allow in this definition some or all of them to be zero as well. That's also a possibility, and it's not going to add anything: if vi is zero, then all of its multiples are zero, so that summand will not add anything; it will add zero.

All right. Now the question is: what is this subspace, actually? There is a simple lemma addressing that. Before stating it, let me fix the notation. Sometimes people also put curly brackets here, but curly brackets are used in the meaning of set notation; in fact, I realize it would be better to write the set with a separator bar, and let me slow down just for a second to explain this notation. This is notation which comes from set theory, from the foundations; it is not notation that comes from the theory of vector spaces as a formal system. We have already encountered it, first of all, on those blackboards, but let me explain it one more time. It denotes the set of all elements of a given form. The syntax of this notation is that there is a separator: whatever is on the left of the separator shows you the form of the elements that we are talking about, and we are talking about the set consisting of such elements. In this case, v1, v2, et cetera, are given, and we are considering all expressions of this kind. But the expressions are not fully determined yet: v1 is defined, v2 is defined, the dots are clear, but a1, ..., am are not defined by this expression. Therefore there is a second part, after the separator, which describes where those undetermined quantities belong. In this case it explains that each of them is an element of the field F. So these are all possible expressions of this kind, where v1, ..., vm reference the elements we have chosen, and a1, ..., am are arbitrary elements of F. This is now a well-defined subset of V, and that's what we call the span: span(v1, ..., vm) = {a1·v1 + ... + am·vm | a1, ..., am in F}. Since we introduced the round brackets for the span, we should stick to that notation. (There is a small difference: some textbooks actually use curly brackets in this notation as well. But let's be consistent, in the beginning, with the letter of the law, so to speak; not only the letter of the law, but all the symbols of the law, brackets included.)
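To connect this to computation: membership in a span can be tested with a rank comparison. A minimal numpy sketch (mine, with made-up vectors), using the fact that v lies in span(v1, ..., vm) exactly when appending v as a column does not increase the rank.

```python
import numpy as np

def in_span(vectors: np.ndarray, v: np.ndarray) -> bool:
    """vectors: matrix whose columns are v1, ..., vm; v: candidate vector.
    v lies in span(v1, ..., vm) iff adjoining v does not raise the rank."""
    A = np.column_stack([vectors, v])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(vectors)

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])                     # span = the xy-plane in R^3
print(in_span(V, np.array([2.0, -3.0, 0.0])))  # True
print(in_span(V, np.array([0.0, 0.0, 1.0])))   # False
```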
Lemma: this span is the smallest subspace of V that contains all the vectors v1, v2, ..., vm. A small remark: if we want to be pedantic, then I have to explain what I mean by "smallest". But if you just think about it, you immediately see what it means. Smallest means that if you have any subspace which contains these vectors, then it will also contain the span. We know the span is a subspace, and the lemma just says that any other subspace which contains the vectors also contains this subspace; anything else will be bigger, possibly containing other elements. "Smaller" here is in the sense of set theory, where it is certainly clear what a smallest subset means: out of all subsets satisfying a certain property, it is the one which is always included in the others. So the word "smallest" here is not difficult to make sense of. At the beginning of this course, I slow down at moments like this and give an explanation; but eventually our statements of theorems will become closer to natural English than to a formal language. If you were to program it on a computer, you could not just say "smallest": the computer would return an error, because it doesn't know what "smallest" is. You would have to program it properly, which is what I just did. But we will skip those things, because of our understanding of natural language.

I'm not going to give a full proof, just an indication; it is essentially obvious from the theorem which I gave last time. What are the criteria for a subspace? It has to contain zero, it has to be preserved under addition, and preserved under scalar multiplication. If a subspace contains all of these elements, then, being preserved under scalar multiplication, it must contain all scalar multiples of each of them; and being preserved under addition, it must also contain every linear combination. That's it: it must contain all of them. Any subspace which contains the vectors contains all of these linear combinations; therefore any such subspace contains the span; therefore the span is the smallest such subspace. Actually, I have just given the proof, I just didn't write it down. If this was too fast, you can read it in the book; it is proved in the textbook.

Again, this is about generating things. You see, in discussing these linear combinations and spans, we are talking about sums of subspaces in a very special case, when each of the summands is a one-dimensional subspace. Well, a summand could also be zero, but without loss of generality we can assume all the vectors are nonzero, since a zero vector adds nothing; you might as well just stick to the ones which are nonzero, and then the summands are genuine one-dimensional subspaces, which are lines. But what about redundancy? What replaces the notion of direct sum versus non-direct sum in this special case? This is where we get to the notion of linear dependence and independence. Linear dependence and independence is what replaces the notion of direct sum in this scenario, in this particular setup. Let me explain; let me erase this. Definition: the vectors v1, ..., vm (again, the same situation: you have a vector space, and you have vectors in the vector space) are called linearly dependent
if there exist scalars a1, ..., am in F, not all of them zero, such that a1·v1 + ... + am·vm = 0. In other words, we can look at this equation (call it (*)): when is this linear combination equal to zero? If all the ai are equal to zero, then clearly the equation is satisfied, because each of the summands is zero; zero plus zero plus zero, m times, gives you zero. The question is whether there are other solutions for the coefficients ai. If there exist solutions which are not all zero, then we say that these vectors are linearly dependent.

Here is an example. Take the vectors v1, v2, v3 from before. We know that v3 is equal to v1 + v2, right? We can rewrite this by taking v3 to the right-hand side: the equation can be written as v1 + v2 − v3 = 0, and this expression is a special case of an expression like (*). I'm now considering the case m = 3 of the definition: there are three vectors. I can write this as 1 times v1, plus 1 times v2, plus (−1) times v3. You see, this will be my a1, this will be my a2, this will be my a3: a1 = 1, a2 = 1, a3 = −1. We have got ourselves a solution of this equation where not all of the ai are equal to zero; in fact, all of them are nonzero: 1, 1, and −1. So this is a case of linear dependence. Intuitively it is clear that it is exactly connected to the notion of redundancy, because you can express one of the vectors as a combination of the others. Another example: if you have three vectors in R², they will always be linearly dependent, because there is not enough room for them to play. But in R³ you can have three vectors, for instance like the generators of a cone, where you cannot express any one of them as a combination of the other two; as we'll see later, that condition is equivalent to linear independence.

Linear independence is closely connected to this; it is the opposite of the redundant case, the analog of the direct sum. That's the second part of the definition. What does "opposite" mean here? The vectors are called linearly independent if the equation (*) can only be satisfied by all ai equal to zero; in other words, if the only solution of this equation is the trivial one. When I say solution, I mean a solution for a1, ..., am, because v1, ..., vm are given; they are given to us, they are not variables in this equation. The variables are a1, ..., am, which are elements of our field. We say that v1, ..., vm are linearly independent if this equation implies that all the coefficients ai are equal to zero. That's linear independence.

You see, now we have two notions. One has to do with what we can generate from a given set of vectors: that's the notion of the span. The other is the notion of redundancy or non-redundancy: that's the notion of linear dependence or independence. There is a tension between the two, which we are now going to explore.
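Numerically, testing linear (in)dependence amounts to asking whether the homogeneous equation has a nontrivial solution. A hedged numpy sketch (my own) using the rank: the columns are dependent exactly when the rank is less than the number of vectors.

```python
import numpy as np

def linearly_independent(vectors: np.ndarray) -> bool:
    """vectors: matrix whose columns are v1, ..., vm.
    They are independent iff a1*v1 + ... + am*vm = 0 forces all ai = 0,
    i.e. iff the matrix has full column rank."""
    return np.linalg.matrix_rank(vectors) == vectors.shape[1]

# The lecture's example: v3 = v1 + v2 makes the list dependent.
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v3 = v1 + v2
print(linearly_independent(np.column_stack([v1, v2])))      # True
print(linearly_independent(np.column_stack([v1, v2, v3])))  # False
```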
Given a list v1, ..., vm of vectors in V, we can talk about two properties. The first notion: it is a spanning list. That's the term which is used in the book; I personally would call it a spanning set, but the author calls it a spanning list, and I want to be close to what's in the book, so let's call it a spanning list. In other words, we look at v1, ..., vm as a list of elements of V, and it is a spanning list if the subspace that we get, the smallest subspace containing them, is V itself; that is, if they generate the whole of V by those linear combinations. That's one notion. The second notion: the list is linearly independent. That's just part of the definition we gave, but let me repeat: the equation a1·v1 + ... + am·vm = 0 implies that all the ai are zero.

I hope it's clear from the previous discussion that there is a sweet spot: when you have a list which satisfies both properties. In that case, the list is called a basis. This is where we are going: we are going to define basis, probably not today but on Tuesday, and using bases we will finally introduce the notion of dimension. Here is a preview of what's going to come, and the motivation for what I will do in the rest of this lecture. The preview is that we will have a definition: v1, ..., vm is called a basis of V if it satisfies both 1 and 2, that is, it is both a spanning list and linearly independent. Then we will show (there will be a theorem) a statement about the case when V has a finite basis. In general, you can have a vector space, like the vector space of functions from R to R, which does not have a finite basis. But in this course we will mostly be studying vector spaces which do have a finite basis; a finite basis meaning that there exists some natural number m such that there is a list v1, ..., vm which is a basis, which satisfies these two properties. Now the question is: could there be two bases with different numbers of elements? The theorem will say no: the number of elements in a basis is fixed by the vector space. It's an invariant, as mathematicians call it; it doesn't depend on the choice of basis. That number is called the dimension, you see. So: if V has a finite basis, then all bases have the same number of elements, and that number is called the dimension of V. For instance, if you are on the plane, then any basis will have two elements, like these two; you can take any two vectors which are not proportional to each other, and this will be a basis. In R³, a basis will be three vectors, which look like this. That's where we are going. It's a really important statement, because then we will understand what a basis is and what the dimension is. I kept saying "dimension, dimension", but as an intuitive notion; we need a rigorous definition of dimension. The path to a rigorous definition goes through the notion of a basis, which in turn goes through the two notions of a spanning list and a linearly independent list.

The crucial statement needed to prove this theorem, which we will do on Tuesday, is the following. Maybe I should say here that there is a notion of a finite-dimensional vector space, which we can actually define before defining dimension: a finite-dimensional vector space is a vector space which has a finite spanning list. In other words, we can define the notion of a finite-dimensional space without giving the definition of dimension. So: let V be a finite-dimensional vector space over some field, which means that there exists a finite spanning list; let's call it w1, ..., wn. And suppose v1, ..., vm is a linearly independent list in V. You see, now we have two numbers, n and m. We are assuming that there exists a finite spanning list w1, ..., wn: you can span V with n vectors. Let's choose one such list. We don't know yet that this number n is fixed.
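Pausing on the preview for a moment: here is a hedged numpy check (my own sketch, not the book's) that a list of n vectors in R^n is a basis exactly when the matrix with those columns has full rank, and that two quite different bases of R² both have two elements, consistent with the dimension theorem.

```python
import numpy as np

def is_basis_of_Rn(vectors: np.ndarray) -> bool:
    """vectors: n x m matrix whose columns are the candidate basis.
    A basis of R^n must be spanning (rank n) and independent (rank m),
    which forces m = n and full rank."""
    n, m = vectors.shape
    return m == n and np.linalg.matrix_rank(vectors) == n

# Two different bases of R^2; any two non-proportional vectors work.
standard = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
skewed = np.array([[1.0, 1.0],
                   [0.0, 1.0]])
print(is_basis_of_Rn(standard), is_basis_of_Rn(skewed))  # True True
```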
Indeed, if you already have a spanning list, you can throw in more elements and it will still be spanning, because when you talk about spanning, you don't care about redundancies. In other words, if you have a spanning list, you can add more elements to it and it will still be spanning; adding elements is not going to spoil it, though removing elements may spoil it. So let's choose one such spanning list. Suppose that there is another list, v1, ..., vm (a possibly different number of elements), which is linearly independent. One of the lists is spanning, the other one is linearly independent. Then what we can say for sure is that m is less than or equal to n. That's interesting, and this is what will imply the result about bases: a basis satisfies both properties, so comparing two bases you get two inequalities, m ≤ n and n ≤ m, and that's how you conclude they are equal. This is a very nice technical result which implies what we want.

Let me see if I have enough time to prove it; if not, I will finish on Tuesday. It's a very cute statement. You see, it makes sense, because these are two opposite properties. As I said, a spanning list is something where, if you already span, you can throw in more elements and still span. For a linearly independent list it's the other way around: if you already have a linearly independent list, you can remove elements from it and it stays linearly independent. And it turns out that one number always dominates the other; the sweet spot is when they are equal, when the list has both properties, when this inequality turns into an equality.

The proof is very nice. It's written in the book, so even if I don't finish it today, you will have a chance to read it; but at least I'll give some indication of the proof. What we're going to do is play a game: we put these two lists side by side. Actually, I want to use the same notation as the book, so as not to add any confusion: in the book, the linearly independent vectors are called u1, ..., um. Not a big deal, but to be consistent with the textbook, let us stick to its notation. So put the two lists side by side: u1, ..., um and w1, ..., wn. The game will be the following. At the first step, we move the first element of the independent list over to the spanning list. When we do that, it will certainly still be a spanning list; but we will show that, in fact, from this spanning list, which now has n + 1 elements, we can remove one of the w's so as to still preserve the property of being a spanning list. In this way we move one of the elements from the linearly independent list over here; but because we also remove one of the w's, we keep the number of elements here at n, and we keep the property that they span. At the second step, u1 is gone from the left, and what remains is still linearly independent. Then we take u2 and move it over, and we will show that again we can remove one of the remaining w's so that the list still stays spanning, with n elements. You see what will happen in this game: at each step we remove one of the w's, so there have to be enough w's to remove; there have to be at least m of them for this process to work. The argument is a little bit weird at first sight, but if you think about it, you will see how these things come together into a proof. So let me start by explaining.
The process consists of m steps. Let me describe the first step: we add u1 to the list of w's, so we get u1, w1, w2, ..., wn. Now, because w1, ..., wn already spans V (how do we use that?), every vector in V can be written as a linear combination of these vectors. In particular, u1 can be written as a linear combination of them: say u1 = b1·w1 + ... + bn·wn. But we know that u1 is nonzero. Why? Because u1 is part of a list which is linearly independent, and you cannot have a zero element in a linearly independent list, because then the equation we talked about would have a nontrivial solution: if one of the vectors is zero, you can multiply it by a nonzero scalar and still get zero. So in a linearly independent list every element is nonzero; in particular, u1 is nonzero, because it belongs to a linearly independent list by our assumption. Since u1 is nonzero, at least one of the bi must be nonzero: if all the bi were zero, the right-hand side would be zero, and that would contradict the fact that u1 is nonzero. Say bk is nonzero (it doesn't matter which one). Then we can turn this around and express wk in terms of the rest of them: I take everything else to the other side and divide by bk, which I may do because bk is nonzero, and I get wk = (1/bk)·(u1 − b1·w1 − ... − bn·wn), where we obviously skip the term bk·wk. You see, we turn things around and find that one of the w's is expressed in terms of the rest. But then it means that wk is redundant: wk is already expressed through those other vectors, which means they already span, so we can remove wk. If we remove wk, we get the list u1, w1, w2, ..., ŵk, ..., wn, where the hat over wk indicates that we omit it. This will still be a spanning list, because wk doesn't help us: wk is itself a combination of the other ones. That's the first step of the game: I have been able to successfully trade the first element of the independent list for one of the w's without violating the properties. Then you will see that, similarly, you can do it with the rest of the u's, and eventually we will be able to migrate all of the u's over here. But for that to happen, n has to be at least m. That's roughly the proof; please read it in the book. I will make a few more comments on Tuesday, and then we will move on to basis and dimension.

Tomorrow is your first quiz. Just to say: it's going to be closed book, and it's going to be at the beginning of your sections. Please don't be late, because there is very little time; there are 15 minutes for the quiz, so don't be late.
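As a closing aside (my own sketch, not the textbook's): the first exchange step of the proof can be mirrored numerically. The function below writes u1 as a linear combination of the current spanning vectors, picks an index with a nonzero coefficient, and swaps u1 in for that vector. In the full argument, at later steps the nonzero coefficient must be chosen among the remaining w's, which the linear independence of the u's guarantees is possible; the sketch shows only the first step.

```python
import numpy as np

def exchange_step(u: np.ndarray, spanning: list) -> list:
    """One step of the exchange argument: write u as a linear combination
    of the current spanning vectors, pick a vector whose coefficient is
    nonzero, and replace it by u. The result still spans."""
    W = np.column_stack(spanning)
    # Coefficients b with W @ b = u (least squares; exact since the list spans).
    b, *_ = np.linalg.lstsq(W, u, rcond=None)
    k = int(np.argmax(np.abs(b)))       # some index with b_k != 0
    return [u] + [w for i, w in enumerate(spanning) if i != k]

# A (redundant) spanning list of R^2 with n = 3 vectors.
spanning = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
u1 = np.array([2.0, 1.0])               # first vector of an independent list
new_list = exchange_step(u1, spanning)
print(np.linalg.matrix_rank(np.column_stack(new_list)))  # still 2: spans R^2
```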