All right, let's continue with linear algebra. What have we done so far? I have given the definition of a vector space and proved a couple of theorems in the formal system of vector spaces. Then there were more theorems proved in your reading assignment, the first two sections of the book, and then there were some homework exercises. Now the real story begins. As always, you have to settle on the axioms, and after that, you try to see how far the axioms can lead you. What is the first step? The first step is to talk about inclusion of vector spaces. This leads us to the notion of subspace. I think a good way to think about it is to recall the hierarchy of formal systems that I talked about last week. The basic one is a formal system of set theory. I mentioned that the prevalent one used by mathematicians today is called ZFC, an abbreviation of the names of the mathematicians who introduced it and of the axiom of choice. But then we built other formal systems on top of it. One of the formal systems which is relevant to us is the formal system of fields. Then the formal system of vector spaces is built on top of that, because the definition of a vector space presumes the choice of a field, right? We have a notion of a vector space over a given field, not just an abstract vector space; it always references a specific field. Now, in all of these formal systems there is a notion of something that is included in something else. In the most basic one, there is a notion of a subset. If you have a set, there is a notion of a subset, which is a set containing some, but not necessarily all, of its elements. Right? It's obvious: out of the set of all students sitting in this classroom, those who are in the middle section, or in this section, or in that section, form subsets. This is a basic notion of set theory which we need. Once we have subsets, we can talk about intersections and unions: if we have two subsets, we can take their intersection and their union. Already we are starting to develop a theory: once we codify, so to speak, this notion of a "sub" of something, of something that sits inside, we create a hierarchy of the objects of our theory. That's the situation in set theory. Set theory is the most basic one, because a set is just a collection of its elements; it does not have any particular structure, such as the structures that we introduce in the case of fields and vector spaces. Next, we look at fields. What is a field? A field is a set first and foremost, but there is more: there are two operations, addition and multiplication, and axioms which they satisfy. Again, you can try to introduce a proper notion of a "sub" in this context. Here we already recognize something interesting, because it would be silly to consider arbitrary subsets of a given field without any additional requirements; that would be appropriate at the more basic level of set theory. But now that we have these additional structures, it makes more sense to consider only those subsets which are compatible with the structure of the field. And that means that you consider a subset, but not an arbitrary one; rather, one which is compatible with these two operations. So that if you take two elements from it and take their sum, the result is still an element of it; if you take their product, it is still an element of it. Then the subset gets equipped with these two operations.
With respect to those two operations, it satisfies the axioms of a field. That would be the appropriate notion of "sub" here. A basic example is relevant to us in this course, because our favorite fields, and actually the only fields that we are going to consider, at least in the book, are two: the field of real numbers and the field of complex numbers. The field of real numbers is a subfield of the field of complex numbers in the sense I just described. The field of complex numbers consists of elements of the form a + bi, where a and b are real numbers. It has a subset of elements of the form a + 0·i; in other words, b is zero. You only have one parameter, a, and that's a real number. Therefore, you can identify this subset with the set R. In this way you realize the set R inside the set C, and you find that this subset is preserved by the operations of addition and multiplication: the sum of two real numbers is real, and the product of two real numbers is real. Moreover, the zero element of C is actually of this form, because it's 0 + 0·i, so it belongs to R. The element one is also of this form, 1 + 0·i. The point of view I am advocating right now is this: I start out with the field of complex numbers, described in this way, with its operations of addition and multiplication. Then I have a subset R of elements of this form. I look at the operations of addition and multiplication and verify that the subset is preserved by them, in the sense I just described. Now you could say that R inherits the operations, obtained by restriction to R. It turns out that the operations that you get on R from the complex numbers satisfy the axioms of a field. So R is a subfield. Any questions? Our interest is to define a similar notion of "sub" for vector spaces. And when I say "sub," I don't mean substitute; I mean it in the sense of subset and subfield: something that is included. In the case of vector spaces, that is the notion of a subspace. Let me go straight to the definition. Let V be a vector space over a field F, and let U be a subset of V. A subset U of V is called a subspace of V if it is preserved by the operations of addition and scalar multiplication and is a vector space with respect to them. Let me spell it out. It means that if v and w are in U, then v + w is also in U; that's what we mean by saying that U is preserved by the operation of addition. Also, for any scalar λ in our field and any v in U, λv is in U; that's what we mean by saying that U is preserved by scalar multiplication. As a result, we get operations on U. More precisely, you have an operation of addition on U, obtained from the addition on V, and you have an operation of scalar multiplication on U. Once we stipulate that U is preserved by addition and scalar multiplication, U inherits the operations of addition and scalar multiplication from V, and the condition is that, with respect to those operations, U is a vector space. So U is a subspace if it is preserved by addition and scalar multiplication and U is a vector space over F with respect to these operations; that is to say, U with these operations satisfies the axioms of a vector space over F. Importantly, F here is the same as the original field over which V was defined. In other words, this notion of subspace always presumes that both are defined over the same field.
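Before going on, let me restate the definition compactly in symbols. This is just the same content as above gathered in one place, with U standing for the subset and F for the field.

```latex
\textbf{Definition.} Let $V$ be a vector space over a field $F$ and let $U \subseteq V$.
Then $U$ is a \emph{subspace} of $V$ if
(i)  $v + w \in U$ for all $v, w \in U$,
(ii) $\lambda v \in U$ for all $\lambda \in F$ and all $v \in U$,
and $U$, equipped with the addition and scalar multiplication inherited from $V$,
satisfies the axioms of a vector space over the same field $F$.
```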
This is important: you don't want to say that a vector space over one field is a subspace of a vector space over another field; that does not satisfy the rules of the game. Okay, that's the definition. Next we look at examples. Our basic example is R^2, which we can think of as the space of column vectors with entries x_1 and x_2, which are real numbers. By the way, in the book vectors are often represented as rows rather than columns. This is done for purely typographical purposes, because it requires much less space; you save paper this way. But a more proper way is to use the column notation. I already addressed this point before. Obviously, the two notations are equivalent to each other: you simply transpose the column to a row for note-keeping purposes. When we start talking about matrices and so on, the difference between rows and columns will become more pronounced, but for now it is just a method of recording the coordinates. Okay. On the one hand, R^2 can be represented algebraically as pairs of real numbers. On the other hand, we can represent it geometrically by choosing coordinate axes on the plane and representing a vector v of this form as a pointed interval starting from the point (0, 0) and ending at the point with coordinates x_1 and x_2. What are the subspaces of this space? For instance, we can take a subspace U which, by definition — sometimes I will use this notation where I put a colon before the equality sign; this is to emphasize that I am defining something, I'm defining the left-hand side by the right-hand side, because sometimes equality means that you have two expressions and they are equal, but in this case I'm defining the left-hand side by what is written on the right-hand side, and that's what the colon indicates — is the subset of R^2 consisting of all elements with an arbitrary number in the first component and zero in the second. You see, if you have two elements of this form, their sum is also of this form, because addition in this space is component-wise and zero plus zero is zero; so this subset is preserved by the operation of addition. Because λ times zero is also zero for every element λ of the field — in this case the field of real numbers — it is also preserved by the operation of scalar multiplication. Then we still need to check that U, with the two operations it acquires in this way, addition and scalar multiplication, satisfies the axioms of a vector space. But that is clear, because there is a one-to-one correspondence between U and the set of real numbers, and these operations are really just the operations of addition and multiplication of real numbers, which we know satisfy the axioms of a vector space. In a moment, we will devise a more convenient criterion for verifying whether a given subset of a vector space is a subspace. But in this case it is obvious. Okay, that's one subspace. But that's not the only possibility. To indicate other possibilities, let's rewrite this in the following way. Let me drop the subscript, since there is only one parameter in this formulation, and write it like this: the vector (x, 0) is the same as the scalar x multiplied by the vector (1, 0). This suggests an analogous definition. Define another subset which consists of all vectors proportional to a given vector. You could say the vectors in U are all proportional to (1, 0), all multiples of (1, 0). Let's replace (1, 0) by a more general vector (y_1, y_2) and take all vectors proportional to it. Here y_1 and y_2 are fixed, and x runs over the real numbers.
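In symbols, the two subsets just described are as follows; the name U_{y_1, y_2} anticipates the notation I will use in a moment.

```latex
U = \{\, (x, 0) : x \in \mathbb{R} \,\} = \{\, x \cdot (1, 0) : x \in \mathbb{R} \,\},
\qquad
U_{y_1, y_2} = \{\, x \cdot (y_1, y_2) : x \in \mathbb{R} \,\}.
```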
Now, my first subspace was just the real line: geometrically, all the vectors which end at a point on this coordinate axis are precisely the ones which are proportional to the vector (1, 0), because the vector (1, 0) is really just this vector, right? Likewise, I plot the vector with coordinates (y_1, y_2) which appears in this definition. Then all the vectors which are proportional to it are going to land on this line; this line is the unique line which contains this vector — provided this vector is non-zero. Let's assume that this vector is non-zero, which means that at least one of its components is non-zero. If it is the zero vector, then we can't get very far from this point: all the multiples of this vector will still be the zero vector, which is also a subspace. I'll get to that in a moment, but I'm more interested in non-trivial subspaces right now. You see that every non-zero vector in R^2 defines a subspace, similar to the first coordinate axis that I looked at first: namely, all vectors which are proportional to this vector. Why does it work? Because if you have two multiples, x_1 times (y_1, y_2) and x_2 times (y_1, y_2), then by the rules of addition of vectors their sum is the same as (x_1 + x_2) times (y_1, y_2). In other words, you have two elements of this subspace — maybe let's call it U_{y_1, y_2}, to emphasize that this subspace depends on the choice of y_1 and y_2. This one belongs to U_{y_1, y_2}, this one belongs, and then we see that their sum also belongs, by definition, since it is the multiple with scalar x_1 + x_2. This shows you that this subset is indeed invariant, preserved by the operation of addition, which geometrically we can see clearly from the parallelogram rule, because we know geometrically how to add two vectors: if two vectors go along the same line, their sum will go along the same line. Likewise for scalar multiples. Let's contrast that. I have shown you something that is a subspace; now I want to show you a subset which is not a subspace. There are fewer subspaces than general subsets. In fact, one can easily see that the only subspaces of R^2 are: the subset consisting of just the zero element (0, 0), that's one subspace; all the subspaces of this form, which correspond to lines passing through the origin in the plane; and the entire R^2. Needless to say, V is its own subspace, because it's a subset which obviously is preserved by the operations and satisfies the axioms. The most obvious subspace is V itself. There is also another obvious subspace, which is the smallest possible one, consisting of just the zero element. Then there are the one-dimensional subspaces corresponding to lines, which are all of this form. In other words, a subspace is a very special kind of subset. Obviously, if you take any finite subset other than the set consisting of just the zero element, it's not going to be a subspace; you will see clearly that it will not be preserved by the operations of addition or scalar multiplication. But I want to emphasize one other property here. For instance, if you look at the original subspace that I defined here, it has zero in the second component. What if I replace that zero by one? Call it U′: I consider all elements of the form (x, 1). Okay? Will this be a subspace? It is a subset, obviously, and it looks very similar to the previous one, but there is a crucial difference. For the original U, if you have two elements of that form, their sum is also of that form. Why? Because zero plus zero is zero.
But if you have two elements of this new form, say (x_1, 1) and (x_2, 1), their sum is not going to be of this form, because the second components also add up: one plus one is not one, it's two. The first components add up, but the second components also add up. That's the difference. The sum is not in U′, therefore U′ is not a subspace. The distinction between zero and non-zero second components here is reflected in the fact that the only lines which correspond to subspaces of R^2 are the ones which pass through the origin. If a line doesn't go through the origin, it is not going to be preserved by addition and scalar multiplication, okay? In this particular case it's very easy to verify everything by hand, but we need a more practical criterion which we could apply in a more general situation for verifying whether a given subset of a vector space is a subspace or not. This is expressed in the following theorem. By V and U I will mean the same thing as before: let V be a vector space over F, and U a subset of V. Then U is a subspace if and only if the following three conditions are satisfied. First of all, the zero element. To clarify: because V is a vector space by assumption — we started out with V being a vector space over F — it comes with a special element 0 which has the property that x + 0 = x for all x. Last time I proved that this element is unique in any vector space. The first condition is about this element 0. To clarify which zero I'm talking about — there are too many zeros in the story, right? Because there is a zero in the field as well, and so on, so we have to be careful — that's why I want to emphasize, by coding it in yellow, that I'm talking specifically about the zero element of V. Condition A: this element has to belong to U. That's the first condition. Second: for any u and v in U, u + v is in U. But that is just the property that U is preserved by the operation of addition, which was already in the definition; it is part of the definition itself. Then the third property is also part of the definition: U is preserved by scalar multiplication; that is to say, for any λ in your field and any u in U, λu is in U. Okay. This requires several comments. First of all, how is this statement different from the definition? They share a big chunk, which is this: two of the conditions are conditions in the definition, and they are also conditions of the theorem. But the definition also requires that all the axioms of a vector space are satisfied, right? If we were to go by the definition itself, then for a given subset U, to determine whether it is a subspace, we would have to verify, number one, that it's closed under, or preserved by, addition; number two, that it's preserved by scalar multiplication; and after that, we would have to verify all six axioms. That's a lot of work. Now we trade that last part, the verification of all the axioms, for one little thing: just verify that the zero element of V actually belongs to U. Which, by the way, you can see clearly is the case in my basic example: all the subspaces of R^2 that I have produced go through the zero point, which is the zero element. Second, these are two different things. The two properties which now appear as properties B and C simply tell us that U acquires its own operations of addition and scalar multiplication. After that, the definition says: forget about V, focus on U and those operations that you got on U.
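Compactly, with 0_V denoting the zero element of V, the theorem reads:

```latex
\textbf{Theorem.} Let $V$ be a vector space over $F$ and let $U \subseteq V$.
Then $U$ is a subspace of $V$ if and only if
\text{(A)}\ 0_V \in U, \qquad
\text{(B)}\ u + v \in U \ \text{for all}\ u, v \in U, \qquad
\text{(C)}\ \lambda u \in U \ \text{for all}\ \lambda \in F \ \text{and all}\ u \in U.
```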
If we went by the definition, for instance, you would have to verify that U contains an element which satisfies the axiom of the zero: that there is an element such that if you add it to any element x of U, then you get back x. A priori, it's not clear that this element of U has to be the same as the zero of V. So you see, it's not like we're just taking one axiom out of six; we're actually taking a slightly different statement altogether. I'm not sure if I answered your question; maybe you meant something else. What do you mean by taking a limit? Oh, we are not allowed to take any limits. This is a very important point; I see what you mean. Two things. First of all, we don't need to take a limit: we can simply multiply by zero. We don't need to take 1/x and let x go to infinity; we actually have a given element zero, so we don't need to produce it by taking 1/x and passing to the limit asymptotically. But since you brought this up, I also want to point out one aspect of this theory. The notion of a limit is a notion of calculus, or more generally of analysis. It's not something that is native to the world of algebra or linear algebra. When we talk about limits, it presumes that you know distances between things: which things are close to each other, which things are far away. In linear algebra, the way we consider it now, we are not introducing any such notions. Now, it's a coincidence that we are looking at spaces over the field of real numbers or the field of complex numbers, where such a notion exists; but it's not native to this story. There is a generalization of this theory where there is also a proper notion of distance, and those are called topological vector spaces, and then there is the notion of Hilbert space and so on. That is a much larger theory, functional analysis, which in a way marries linear algebra and analysis. But right now we are inside linear algebra proper, where we do not talk about things like limits and distances. You will see, in the later part of the course, when we talk about inner product spaces, that we will talk about lengths of vectors and so on. But for now, that has not yet been introduced. However, for your question, we don't really need it, because we can actually multiply by zero, as will appear in a moment. That was the first comment, that the theorem is a simplification. Now there was another question: are subspaces allowed to be finite sets, finite subsets, other than just the zero? The question is, if I understand you correctly: suppose you have a vector space over the real numbers or the complex numbers, because that's what we're considering right now; is it possible for such a vector space to have a subspace which is finite as a subset? Right, so we know one example of a subspace which is finite, namely the set consisting of the single element zero, which, as I mentioned last time, is the simplest possible vector space. It is a subspace of every vector space, because every vector space, by Axiom Three I think, includes the element zero. We can just take the subset consisting of that one element. It is indeed preserved under addition and scalar multiplication, and it is a subspace. It is finite: it consists of one element. The question then is, can you have two elements, or three, or four — finitely many, but more than one? The answer is no, because if it's more than one, then — let me write this down; this will be a continuation of this.
So U′ above was the first example of a non-subspace, a subset which is not a subspace. Here is another example of a non-subspace. Suppose that U is a finite subset of, let's say, R^2 — let's just look at this example, since we are considering it, but the same argument will work for any vector space of positive dimension over the real numbers — a finite subset with more than one element. If I have a set and I put vertical lines around it like this, that denotes the number of its elements, which we also call the cardinality. Now, if the subset were infinite, the cardinality would not be a number, it would be something else; but since it's a finite set, in this case it is just the number of elements. Suppose you have a finite subset which has more than one element; then it is not a subspace. Why? Just think about it graphically first, geometrically. Since there is more than one element, at least one of these vectors is going to be a non-zero vector — there may be a zero vector, but if there is one more element, it has to be non-zero. It goes like this; let's call it v. So v is an element of U, but according to the condition that U is preserved by scalar multiplication, every multiple λv of this vector has to be in U as well, where λ is a real number. Now, I claim that if you have two real numbers λ_1 and λ_2 which are not equal to each other, then the multiples λ_1 v and λ_2 v are not equal to each other: they give us different vectors. This is easy to see: if λ_1 v = λ_2 v, then, adding −λ_2 v to both sides, you get zero on one side and (λ_1 − λ_2) v on the other, so a non-zero number times the non-zero vector v would be zero, and that's impossible. I leave it for you to think about geometrically why all the multiples are distinct. Obviously, twice this vector is not the same as this vector, three times this vector is not the same, and so on; two times this vector is not equal to three times this vector, and more generally λ_1 times this vector is not equal to λ_2 times this vector if λ_1 is not equal to λ_2. But that means that this element brings with it a whole family of other elements. It's like you invite somebody to a party and then they show up with a hundred other people — except here, v shows up with infinitely many guests, because the field of real numbers is infinite; v never comes alone. The condition we require, that the subset is preserved by scalar multiplication, implies that if v belongs to your subspace, all of its multiples by real numbers also belong. There are infinitely many of them, and our set is finite; it cannot accommodate all of them. That's how you know that this finite set cannot be a subspace: it cannot contain all the real multiples of this one vector, because the set of real numbers is infinite. That's the answer: the zero subspace is the only finite subspace. This may be generalized as follows. Suppose your field is infinite; it doesn't even have to be the field of real numbers or complex numbers — both are infinite, but there are many other infinite fields, for instance the field of rational numbers. If you have a vector space over an infinite field, the only subspace which is finite as a set is the zero subspace, the subset consisting of zero. A subspace cannot have more than one element and still be finite: if it has more than one element, it has to be infinite, like a whole line — and a line contains infinitely many vectors. But let me go back to the criterion. It is a big improvement over the definition.
That's the value of this; it is a non-trivial statement. For instance, if I wrote a "theorem" which said that a subset is a subspace if and only if condition B, condition C, and the six axioms hold for these operations, that would be equivalent to the definition. It would be no improvement, so there would be no need to call it a theorem. The reason we call this one a theorem is that it is not simply a restatement of the definition. Here comes the second point. This statement is an example of something we will often meet in this course, and which we often meet in mathematics: it's an example of what's called an "if and only if" statement. An "if and only if" statement is actually a combination of two separate statements, and both of them have to be proved. If you only prove one of them, you have not proved the entire theorem. Okay? So it's a particular way of speaking, a kind of shorthand for stating two different theorems; it compresses them into one statement. But in fact, from the purely formal point of view, this is a combination, a union of two statements. So what are those statements? Let me explain: "if and only if" can always be reformulated as two implications between two statements. In this case, what are the statements? The first statement is that U is a subspace. The second statement is that A, B, C — the three conditions — are true. The theorem can be formulated as the assertion that these two implications hold: the first implies the second and the second implies the first, which oftentimes we abbreviate with a double-headed arrow. But this still means that we have to prove both directions, okay? Indeed, when I say "a subset is a subspace if and only if", if I removed "and only if" and just left "if", that would be one implication: the statement that once A, B, C hold, then U is a subspace. The "only if" part is the other implication: if U is a subspace, then A, B, C must hold. Okay, I hope it's clear; if not, just think about it later and you'll figure it out. But this is a very typical construction of a theorem, and the proof in such a case always involves two parts. You cannot cover one by the other; they are two independent statements. Let's start by proving it in this direction: we assume that U is a subspace, and we have to prove A, B, C. Now, B and C are included in the definition, so B and C clearly hold: if U is a subspace, then B and C hold because they are part of the definition. It remains to show that A holds, that the element zero belongs to our subspace. To prove A, observe that U is non-empty. We discussed last time, and it was actually part of the homework, that the empty set cannot be a vector space, because one of the axioms of a vector space is the existence of a particular element satisfying certain properties. Even putting aside those properties, the axiom stipulates the existence of a particular element, but the empty set has no elements; there is no place to accommodate this element zero. Therefore the empty set is not a vector space. If U is a subspace, it is a vector space — that's part of the definition — which means it is non-empty. But if it's non-empty, there exists some element u in it. Now, this u is also an element of V, because U is a subset of V. Multiply it by the scalar zero: 0·u must belong to U. Why? Because u belongs to U, and zero plays the role of λ, so to speak.
And we said that if λ is in F and u is in U, then the product λu is in U. All right? So zero times this element has to be in U. Now, 0·u is the zero element, which I denoted in yellow. Actually, now that I think about it, to make it a little clearer, I want to put a subscript V, to emphasize which zero I'm talking about, lest you get confused. Let's explicitly include the subscript V in the notation 0_V for the zero element of our original vector space. Condition A says that if U is a subspace, this special element of V must belong to it, and I have now demonstrated that: I said U is non-empty, so there is at least one element; take that element and multiply it by zero; the result must belong to your subset. But we know that multiplying any vector in V by the scalar zero gives you the zero vector, so 0_V must be an element of U. Okay, in the book there is a slightly different argument, which I encourage you to read as well, but I like this one because it shows you right away that the property of being preserved by scalar multiplication immediately implies that you have to have the zero. In some sense, I think that's what you were suggesting: if you had a "subspace" which doesn't pass through zero, it would contradict this property, because we know that it has to contain zero — multiplying any vector anywhere by the scalar zero brings it to the origin, right? You can think of going out asymptotically, off to infinity, and this kind of brings you back to the origin. I think there was a good idea in what you were suggesting, but in fact it's even simpler, because we actually have the element zero available, so we just multiply by it, without any asymptotic arguments. Okay? But it's too soon to declare victory, because we have only proved it in one direction, right? So now we have to prove it in the other direction. Which means what? It means that if A, B, C are satisfied — suppose these conditions are satisfied — then U is a subspace. So what does that mean? It means that you have a subset U which is preserved by addition and scalar multiplication and contains the element zero. Then there is no other way but to actually go through the list of axioms of a vector space and verify them. What we find, however, is that almost all of them are automatically satisfied, because we are speaking not about some new operations on U that came out of the blue, but about operations inherited from the operations on V, which satisfy all the axioms. For instance, commutativity is satisfied by any pair of vectors in V. If you have two vectors in U, they are surely vectors in V as well, because U is a subset of V. If the property of commutativity is satisfied for all vectors in V, surely it will be satisfied for vectors which are in U, since U is included in V. Likewise associativity, likewise the multiplicative identity, likewise the distributive properties. There are two axioms left. One is the additive identity, that there exists an element zero; but we know that from A. So all the axioms follow automatically, except the axiom of the additive inverse. What does it say? It says that for any u in U, there exists an element u′ in U such that u + u′ = 0. But because u is also an element of V, we know what that additive inverse has to be, and it is unique. So what we need to show is the following; let me put it here.
We need to show that for every element u in U, its additive inverse −u — which we know is unique from the point of view of the vector space V, and which is an element of V — is actually in U. But this is because we know that −u is equal to (−1)·u, where −1 is an element of our field. That is one of the statements proved in section 1 of the book; I did not prove it, but we rely on it. Since u belongs to U, (−1)·u also belongs to U, because U is preserved by scalar multiplication — by multiplication by any number, including −1. Therefore −u is also in U, and this completes the proof. All right. Now, what is the advantage of this theorem? The advantage is that we can cut through the list of axioms. We don't have to go: axiom one, commutativity — satisfied; axiom two — satisfied; and so on. All we need to check is that the subset is preserved by the operations of addition and scalar multiplication and contains the zero element. By the way, terminologically, I like to use the words "preserved by" an operation: U is preserved by the operation of addition, U is preserved by the operation of scalar multiplication. But an equivalent expression is "closed under". I actually don't remember which one is used in the book, but don't be alarmed if you see "closed under"; the two terms are equivalent. Okay, so let me go back to my example. Remember, in the very first example I gave earlier, I said this subset of R^2 is a subspace, whereas this other one is not. I used an argument showing that it's not closed under, or preserved by, the operation of addition. But now we can dispose of that case even faster by observing that the set does not contain the zero element. Whereas for the first set, you still have to show that it is preserved by addition and scalar multiplication: checking the zero element only establishes part A, and you still need parts B and C. But if you check A and it's not satisfied, you're done: it's not a subspace. You see that the theorem allows you to dispose of many subsets right away, showing that they are not subspaces. Now let me show you another subspace, a generalization of this example, which is very useful and really very important in applications: solutions of systems of linear equations. I take V to be R^n, and I will represent its elements as columns with coordinates x_i, which are real numbers. Consider the following system of linear equations:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n = b_m

There are m equations like this. The coefficients a_{ij} are fixed real numbers. The first index indicates the number of the equation — first equation, second equation, up to the m-th equation — and the second index indicates which variable, which coordinate or component, it multiplies. (By the way, that mark is not part of the subscript, it's just a dent on the blackboard: it's x_1, not x_10; let me shift it a little.) The a_{ij} are fixed real numbers, and the b_i are also fixed real numbers. Consider the subset U which is the set of all solutions of these equations. You have m equations in n variables. Every solution is just a collection of real numbers x_1, ..., x_n satisfying these equations, so it can be viewed as an element of R^n. The question is: under what conditions is this subset a subspace? Using the theorem, we can immediately rule out most of them.
Namely, if at least one of these b_i is non-zero, then the zero vector is not a solution. You see, if you substitute all x_i equal to zero into the equations, the left-hand sides will all be equal to zero, because each left-hand side is a combination of multiples of the x_i; if I substitute all x_i equal to zero, then the left-hand side is zero for each of the equations. So (0, 0, ..., 0) is a solution of the system only if all the b_i are zero. You may recall that such a system, where the right-hand sides are all equal to zero, is called a homogeneous linear system. The zero element of R^n is the column vector in which all components are equal to zero, and this element belongs to the subset of all solutions if and only if all the b_i are zero. So if at least one b_i is non-zero, you rule it out: it's not a subspace. Now let's suppose that all the b_i are zero. Then the first condition is satisfied: the zero element belongs to the set of solutions. You still need to prove that for any two solutions, the sum is a solution, and that any scalar multiple of a solution is a solution, which is obvious, because you just add up the equations for one solution and for the other; I'll leave it for you to check. Let's label this subset by the a_{ij} and the b_i. The conclusion is that this subset is a subspace if — and, as I just explained, only if — all the b_i are zero. Okay? Now maybe one more example, which I think is important. It has to do with an example of a vector space which is given in the book, but which I didn't have time to introduce in my last lecture; I will do it now. This is an important example of a vector space from section 1B of the book. It is a vector space which is denoted F^S, where S is a set and F is a field. We introduce this notation for the set of all functions f from S to F, and we introduce operations of addition and scalar multiplication on it pointwise. Namely, if f is a function and g is a function, we define the sum of these functions by the following rule: its value at an element s of the set is — again I'm using the notation where the formula defines the left-hand side in terms of the right-hand side, and that's what the colon means — the sum of the value of f at the element s and the value of g at s. That defines addition of functions. Likewise, we define the scalar multiple λf of a function f, where λ is in F, by simply multiplying the value of f at each element by λ. Then we verify that this is a vector space. This you really have to do; you have to give an argument, there is no shortcut. But the argument is not so complicated, because everything is done pointwise. Pointwise means that we are actually doing the operations at each point, at each element of our set, and at each point we are doing the operation in our field, and our field satisfies the corresponding axioms: commutativity, associativity, and the distributive laws. The only things we have to produce are a zero element and the additive inverse, the negative. But it's clear what the zero is: it is the function which takes the value zero in the field at every element of S. And it's clear what the negative is: it's the function whose value is the negative of the value of the original function at every point. So the verification is not difficult. It is based on the fact that we are taking functions with values in a field.
If we were taking functions with values not in a field but in some other set which doesn't have these structures, we would not be able to derive the axioms of a vector space. But because the codomain of the function — where the function takes its values, the target of the function, if you will — is assumed to be a field, the axioms are almost automatically satisfied due to the properties of the field. This is a very important example of a vector space, and we will be returning to it. Let me point out that R^2, or R^n, is a special case of this example. Namely, if S is a finite set with n elements, then we can set up a bijection — I will use this notation for a bijection — between S and the set of positive integers from 1 to n, because you can just enumerate the elements of S: 1, 2, 3, 4, and so on up to n. That sets up a bijection between S and the set {1, ..., n}. Here you talk about functions, so you need to specify the value of your function at every element of S; but since we have effectively identified S with the set of numbers from 1 to n, a function is effectively just the list of its values at 1, 2, 3, ..., n, which is just a column vector with n components. So F^n is a special case of F^S, and in particular R^n appears when F is equal to R; this is the standard vector space of a first course in linear algebra. You can think about it as n-tuples of real numbers, or you can think about it geometrically — say, in the case of R^2, this corresponds to vectors in the plane, and so on. So this vector space is a special case of this much more general example. What is so special about this case? This is precisely the case when S is a finite set. But the construction is much more general: we can take S to be an infinite set. For example — and here I'm coming to one more application of this theorem — let's suppose that S is R and F is also R. The set S is infinite now, therefore it does not fit in the previous example. What are we talking about? R^R is a weird-looking notation for something that you have been studying in calculus: functions of one real variable. In first-year calculus, or the first semester of calculus, we study functions of one real variable, and a function of one real variable is precisely an element of this vector space. But these are all functions — because, remember, I told you that in algebra we don't usually care about analytic properties. Now suppose somebody comes to this class from, like, Math 1A, and they are listening to all this. They say: okay, that's very interesting, I see an overlap between what we're doing here and what we're doing in first-year calculus; in both cases, functions of one real variable show up. But what appears naturally here is the collection of all functions. It is huge, right? Because you can just randomly assign values; there is no notion of continuity or anything. And then a person from Math 1A, from single-variable calculus, says: I only care about continuous functions, I don't care about discontinuous functions. How do they fit into this story? They form a subset. Let's consider the subset, which we will denote like so, of all functions from R to R with the additional property of being continuous. Okay? Then the natural question is: is this subset a subspace? If it is, then we would know that it is also a bona fide vector space — that in fact there is an overlap between single-variable calculus and linear algebra, in that the set of all continuous functions is actually a vector space.
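As a small aside of mine (not from the lecture), here is a sketch of the pointwise operations on F^S in code, for the case S = F = R, with functions represented as ordinary Python callables; the helper names are made up for illustration.

```python
# A minimal sketch of the pointwise vector-space operations on R^R:
# functions from R to R, added and scaled pointwise.

def add(f, g):
    """Pointwise sum: (f + g)(s) = f(s) + g(s)."""
    return lambda s: f(s) + g(s)

def scale(lam, f):
    """Pointwise scalar multiple: (lam * f)(s) = lam * f(s)."""
    return lambda s: lam * f(s)

def zero(s):
    """The zero element: the function that takes the value 0 everywhere."""
    return 0.0

# Spot-check the additive-identity and additive-inverse axioms at a few points.
f = lambda s: s ** 2 + 1.0
for s in (-2.0, 0.0, 3.5):
    assert add(f, zero)(s) == f(s)
    assert add(f, scale(-1.0, f))(s) == 0.0  # f + (-1)*f is the zero function
print("pointwise spot-checks passed")
```

Back to the question raised just above: is the subset of continuous functions a subspace of this space?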
Okay, that's a useful statement. How would we go about showing it? It would not be so easy to do in a direct way, by using the definition, because then we'd have to go through the axioms — and how does continuity fit with the various axioms? But now we have a shortcut: we have a much nicer criterion, which reduces this verification to checking the three properties. I claim that this is a subspace. Why? First, the zero element 0_V belongs to this U, because 0_V is the zero function, the function which takes the value zero everywhere, and clearly it's a continuous function, because it doesn't change value at all. It just so happens that the zero element of this huge vector space is a continuous function; it belongs to our subset of continuous functions. So property A is verified. B: suppose that f and g are continuous; then we have to show that f + g is continuous, but this is clear from the definition of continuity — anybody who has studied single-variable calculus can prove this, and in doing so they effectively verify condition B of our theorem. Finally, we have to show that for any real number λ, if f is a continuous function, then its multiple λf is also continuous. That's also easy to verify once you know the notion of continuity. Even though the notion of continuity, as I said earlier, is not part of linear algebra, it is set up in such a way that these two properties are satisfied, and the zero function is continuous. Therefore we can conclude that the set of continuous functions is a subspace of the space of all functions, and in particular that it is itself a vector space in its own right. Okay? Now, the same argument also works for differentiable functions, for the same reason: the sum of two differentiable functions is differentiable, a scalar multiple of a differentiable function is differentiable, and the zero function is differentiable. This shows you the effectiveness of this criterion. Okay, what is next? Remember, at the very beginning of this lecture I talked about the "sub" notions, so to speak: you have subsets in set theory, you have subfields in field theory, you have subspaces in the theory of vector spaces. Now, in the first case, if you have two subsets, you can take their intersection and you get a new subset; it's clear that you can also take the union. In other words, it's not just that you have subsets: there is a certain hierarchy of subsets, and you can do something with them, you can create new subsets by taking intersections, unions, and so on. It's natural to ask whether something like this can also be done in the theory of vector spaces. For example, suppose you have two subspaces: U_1 is a subspace of V, and U_2 is a subspace of V. Because they are subspaces, they are also subsets — a subspace is a subset with some additional properties, so for sure a subspace is a subset. Therefore the notion of intersection is unambiguous: you can just take U_1 ∩ U_2, which is a well-defined subset because each of them is a subset. So the question is: is it a subspace? In fact, the answer is yes. This is one of the homework exercises. I call it a lemma; that's more a matter of style, but I feel that "theorem" is a big word reserved for a substantial statement, and when it's a technical statement, maybe not so hard, I will often use the word "lemma" — theorem light. The statement is: U_1 ∩ U_2 is also a subspace.
I leave it for you to prove, because it's in the homework. But next comes the union. That's where things get more subtle. Same question: is it a subspace? To answer questions like this, it always makes sense to look at some examples; usually concrete examples are the easiest to grasp. So let's suppose that V is R^2. Suppose that U_1 is this line, the first coordinate axis, and U_2 is this other line, the second coordinate axis. Both are subspaces, because, as we discussed earlier, every line which passes through the origin is a subspace. The union is this coordinate cross: it consists of all vectors which lie either on this line or on that one, but not those which go in between; it is literally a cross. Is it a subspace? Well, you can start by looking at the criterion. Condition A is satisfied, because zero does belong to it, but that's not enough. The third condition is also satisfied, but the second condition — preserved by the operation of addition — is not. Any vector along U_1 belongs to the union, and every vector along U_2 belongs to it. But if you take their sum, it is going to be, by the parallelogram rule, something like this — for instance, (1, 0) plus (0, 1) is (1, 1) — and this does not belong to U_1 ∪ U_2. So the union turns out not to be a good notion in linear algebra, a union of subspaces. What replaces it is the notion of a sum. This is what we'll talk about next time, because my time is up. Okay, so we'll continue on Thursday.
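As a closing illustration — my own sketch, not something from the lecture — here is the union counterexample checked in code: the union of the two coordinate axes contains zero and is closed under scalar multiplication, but it fails closure under addition.

```python
# The union of the two coordinate axes in R^2 is not a subspace:
# it is not closed under addition.

def on_axis_1(v):
    """U1: the first coordinate axis, vectors of the form (x, 0)."""
    return v[1] == 0

def on_axis_2(v):
    """U2: the second coordinate axis, vectors of the form (0, y)."""
    return v[0] == 0

def in_union(v):
    return on_axis_1(v) or on_axis_2(v)

u = (1.0, 0.0)                   # lies on U1
w = (0.0, 1.0)                   # lies on U2
s = (u[0] + w[0], u[1] + w[1])   # their sum, by the parallelogram rule

assert in_union(u) and in_union(w)
assert not in_union(s)           # (1.0, 1.0) lies on neither axis
print(s, "is not in the union, so U1 ∪ U2 is not a subspace")
```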