At the end of the last lecture I formulated a result comparing linearly independent sets and spanning sets. Last time we introduced two different notions: one is a linearly independent set, the other is a spanning set. Here we are considering a vector space V over some field F, the field of real numbers or of complex numbers. By the way, in the book the term that is used is not "set" but "list"; the author likes to talk about lists of vectors. I will often use the term "set". They are interchangeable here, synonymous with each other. Okay?

The theorem which I stated and started to prove last time is this. We start with a vector space V, and we assume that this vector space is finite-dimensional. What does that mean? It means that there is a finite spanning set. In other words, there exists a finite set of vectors in V such that every other vector can be written as a linear combination of those. Suppose that in a finite-dimensional vector space you have a spanning set w1, w2, ..., wn and a linearly independent set u1, u2, ..., um. Then the number of elements in the linearly independent set is less than or equal to the number of elements in the spanning set: m <= n. This is a crucial result, because we will use it to introduce the notion of a basis and to prove that the number of elements in a basis of a given vector space does not depend on the choice of basis.

The proof is very interesting because it is an example of a proof that is not done in one shot; it is a sequence of steps. The idea is that we are going to move vectors from the first set to the second set. Let me put them together; I like to represent them as cells in a diagram, u1, u2, ..., um on the left and w1, w2, ..., wn on the right, like Lego pieces on the board. We are going to move these pieces.

We start with step one: we move u1 and adjoin it to the right-hand list. We know that u1 is nonzero, because if it were zero, the left-hand set would not be linearly independent: you would have the equation a1 u1 = 0 with a1 nonzero, which would contradict linear independence. So u1 is nonzero; but we also know that the w's already spanned everything. If we add u1, we have one more element, so the new list u1, w1, ..., wn is going to be linearly dependent. Because of that, one of its elements can be expressed in terms of the preceding ones. That element cannot be u1, because there is nothing before u1; it would mean that u1 is zero, and we know it is not. So there is some wi which can be written as a linear combination of u1, w1, and so on up to w(i-1). But if so, we can remove it, and the resulting set will still be a spanning set. That is because, if the whole list spans the vector space, every vector can be written as a combination of all of its elements; and this wi is in turn written in terms of the preceding ones, so we can replace it by those.
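Since the whole proof manipulates these two notions, a small numeric sanity check may help. This is only a sketch, assuming numpy; the matrix-rank criterion and the helper names are mine, not the book's.

```python
import numpy as np

def is_independent(vectors):
    # A list is linearly independent iff the matrix with these vectors
    # as columns has rank equal to the number of vectors.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

def spans(vectors, dim):
    # A list spans R^dim iff the column matrix has rank dim.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == dim

w = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(spans(w, 2))                    # True: w is a spanning set of R^2

# Adjoining any further vector to a spanning set creates linear
# dependence, exactly the observation used in step one above.
u1 = np.array([2.0, 3.0])
print(is_independent([u1] + w))       # False: 3 vectors in a 2-dim space
```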
Any such linear combination can then be rewritten entirely in terms of all the elements except this wi, so we do not lose the spanning property by removing it.

That brings us to the second step. At the second step, the analogous diagram looks like this. On the left we now have u2, u3, ..., um; the vector u1 we already removed from the left. Now remember, the left-hand list is linearly independent, and linear independence has this property: if you remove something, the list stays linearly independent. If you add something it may become dependent, but if you remove something it stays independent, because you are not going to get any new solutions to the equation; if there were no nontrivial solutions before, there are none after you remove a vector. So the set of u's stays linearly independent throughout this procedure, because on the left we only remove and never add anything. On the right, by contrast, it is one in and one out, but in such a way that the spanning property is not violated, as I just explained. The right-hand list now looks like this: u1 has migrated here, then come w1 up to w(i-1), then I skip wi, then w(i+1), and so on up to wn. That is the position at the beginning of step two.

What do we do at step two? We take u2 and migrate it to the right, removing it from the left. At step one we added one and removed one, so the number of elements on the right was still n; now it becomes n + 1. But this was already a spanning set, and if we add one more element to a spanning set, the result is linearly dependent. It is exactly the same argument as before. Now, because the list is linearly dependent, one of its elements must be a linear combination of the preceding ones, and I claim that there is still at least one w in the list, maybe several. Why? Because if there were no w's at this point, the list would consist only of u1 and u2. But u1 and u2 are linearly independent by our assumption, so that list would not be linearly dependent. For the whole list to be linearly dependent it needs some contribution from the w's, so there is at least one w left. It is important to ascertain this, because we are about to remove one of the w's, and we must make sure that we have something to remove. And the list really is linearly dependent, because it is obtained by adjoining an extra vector to something which was already a spanning set. So there is at least one w, and therefore, by the same argument as before, there is some w(ij) which is a linear combination of the preceding ones, a combination of u2, u1, and the w's before it. Then we remove it, and the list still stays a spanning set. You see, it is very similar: we remove one w, somewhere in the middle, and move on.

That brings us to step three, and to the general pattern. What we need to do is perform this exactly m times, where m is the number of elements on the left, until we have exhausted that set. Each time we move one element, one by one, from the left to the right, and after we move it, we remove one of the w's. So let us suppose that we have already performed k steps, and ask what step k + 1 looks like. On the left we will have the u's with the first k of them removed: the list starts with u(k+1), then u(k+2), and so on.
This, by the way, requires k to be less than m: we need something left on the left so that there is something to do. We will stop when we have moved all of the u's to the right. Think of this as a computer program, a recursive program: we stipulate what the first step is, and we stipulate what to do once we have performed k steps, where k < m, with m the original number of u's. So that is what we have on the left. What do we have on the right? First of all, all the u's that we have already moved in the previous steps. The first one is uk, then u(k-1), because I am moving them and attaching them on the left: uk, u(k-1), and so on down to u1. After that, there are some w's left. We have removed exactly k of them, one per step, and we do not know which ones, so I label the remaining ones w(i1), w(i2), and so on. The total number of remaining w's is n - k, so the right-hand list has k + (n - k) = n elements in all.

Most importantly, the way we set things up, each step preserves the spanning property. You have seen what happened at step one and at step two; we are now going to ascertain that it stays spanning after each step, which ties everything together so that we can run this program for m steps. So assume that after k steps the right-hand list, with its k u's and n - k w's, n elements in total, is still a spanning set. This is the position at the beginning of the step; once I move a vector, I will mark it yellow on the board.

Now I perform step number k + 1. Remember: at step one I moved u1, at step two I moved u2; at step k + 1 I move u(k+1) and attach it on the right. Suddenly the right-hand list has n + 1 elements: there are n - k of the w's and k of the u's, plus the new one. Just as before, I know that this list is going to be linearly dependent. It is the same argument: before I adjoined this vector, the list was spanning, so every vector is a linear combination of its elements; if you then throw in one more, the enlarged list is certainly linearly dependent; in particular, the vector you threw in can be written as a linear combination of the others. If a list is linearly dependent, one of its elements has to be a linear combination of the preceding ones. And we know for sure that the u's in this list are linearly independent, because they form a subset of the original set, which was assumed to be linearly independent; so none of the u's can be written in terms of the previous ones in a nontrivial way. Therefore this linear dependence means that one of the w's has to be a linear combination of the preceding ones. In particular, there is at least one w left. I wrote an exclamation point to emphasize it: at least one, though obviously there could be several.
So one of those w's, some w(ij), is a linear combination of the preceding ones, where "preceding" means "to the left in this diagram". Therefore we can remove it. When we remove it, we do not sacrifice the property of being a spanning set: that vector adds nothing, because it is itself a linear combination of the others, so the remaining list on the right is still spanning. You see, the input of the step is what I drew here, and from the previous step we know it is a spanning set. That is what makes the program run one more step, which consists of moving one u over and finding one of the w's which is a linear combination of the preceding ones. There must be at least one such w, because the enlarged list must be linearly dependent (it came from adjoining a vector to a spanning set), while the u's alone are linearly independent. Then we remove that w, and the result is again a spanning set, which now starts with k + 1 of the u's instead of k. Now we see that the program is consistent, if you will, from the point of view of a formal system.

Let me make the picture a little clearer. I will use yellow. On the left, the u's are linearly independent; on the right, the list is spanning, and after I adjoin one more vector it becomes linearly dependent. The picture I draw combines what was there before the beginning of the step and what happened during the step; the yellow square marks the vector that has just migrated. All of these vectors span, because the list was spanning; if I throw in one more element, I get a linearly dependent list. Now, in a linearly dependent list there is one element which is a linear combination of the previous ones. That element cannot be the first one, because that would mean the first vector is zero, and it is not zero, since it is part of a linearly independent set. Therefore it is one of the w's; call it wi. So wi is a linear combination of the preceding ones, meaning the elements in positions 1 up to i - 1. Therefore, if I remove wi, the spanning property survives: wi brings nothing new, because it is itself a combination of the other vectors, in fact of the preceding ones. The list was already spanning; wi is a linear combination; if I remove it, the list is still spanning. That is the completion of step one: we lost one element on the left, and on the right it was one in, one out; most importantly, we have preserved the property that the right-hand list is a spanning set.

You have a question? Yes: the point is not only that there is such an element, but that there is a first one. Good, I am glad you mentioned it; let me explain, and let me put it aside as a lemma. It is in the book, but I did not prove it last time, so let me at least state it. Suppose you have a list v1, v2, ..., vk which is linearly dependent. Then there exists an i such that vi is a linear combination of the preceding ones; that is to say, vi = a1 v1 + ... + a(i-1) v(i-1). Now suppose in addition that the list is a spanning set, which is what we have here before we remove the one that is a linear combination. Spanning means that every vector can be written as a combination of all the vi's; but then substitute this formula for vi, and you see that every vector is a linear combination of v1, v2, ..., vk with vi skipped. So removing vi does not violate the spanning property. Does that make sense?
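The lemma the question points to, that a linearly dependent list has a first vector which is a linear combination of the preceding ones, can be phrased as a little search. A hedged sketch, assuming numpy; the rank test and the function name are my own.

```python
import numpy as np

def first_dependent_index(vectors, tol=1e-10):
    # Smallest i with vectors[i] in the span of vectors[:i]
    # (so index 0 qualifies only if that vector is zero);
    # returns None if the list is linearly independent.
    for i, v in enumerate(vectors):
        if i == 0:
            if np.linalg.norm(v) < tol:
                return 0
            continue
        A = np.column_stack(vectors[:i])
        # v lies in span(vectors[:i]) iff adjoining it as an extra
        # column does not raise the rank.
        if np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A):
            return i
    return None

vs = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
print(first_dependent_index(vs))      # 1, since vs[1] = 2 * vs[0]
```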
Okay. At the end of step one, we end up with a spanning set which still has n elements. Likewise at step two, and so on: at each step we remove one element on the left and adjoin it on the right. From the previous step we know that, before we did that, the list was spanning, so after the move we end up with a linearly dependent list. But this linearly dependent list has its first k + 1 elements linearly independent, because they came from the u's; therefore none of them can be written as a linear combination of the preceding ones. It has to be one of the w's that is written as a linear combination of the preceding ones, and in particular there is at least one w left; otherwise we would contradict what we learned from the previous step. Take the first one that works: it could be that the second element is a linear combination of the first; if not, then maybe the next one, or the next; one of them is the first that is a linear combination of the preceding ones. That is what I called w(ij). It is a linear combination of the preceding ones; remove it. When you do that, the remaining list has n elements, and it is still a spanning set, by the same argument I just gave.

What happens at the end of the day, when we have performed m steps? Why m? Because we are given a list of m elements, and at each step we move one of them, one by one, so there will be exactly m steps. Let me back off one step: say we have done m - 1 steps, and we are about to move the last vector, at step number m. At the beginning of this step, we have one vector, um, hanging on the left. On the right we have a list which includes all the u's except this one, u(m-1), u(m-2), down to u1, and there are still some w's left, say w(j1) up to w(j(n-m+1)). The point is, we know this is a spanning set. Now, when we move the last u over, what happens? There has to be at least one w still left, by the previous argument. Bear with me: this is a spanning set; if I throw in one more vector, it is going to become linearly dependent; but if there were no w's left, the list would consist of the u's alone, which are linearly independent. That would be a contradiction: a set which is both linearly independent and linearly dependent is impossible. Therefore there is at least one w left.

But that shows that n is at least m. Because at the previous step, how many w's are there? n - (m - 1), that is, n - m + 1, and this number must be at least one; which means n - m is at least zero, which means n is greater than or equal to m.

It is a very curious argument. What kind of argument is it, if we look at it from the point of view of formal systems? For this proof to be valid, it has to satisfy the general structure of proofs that we discussed: a proof is a list of sentences in our formal language such that each of them is an axiom, or something that was already proved before, or something obtained from the previous sentences in the list by using the rules of inference. This argument can be organized as such a list of sentences, but it is quite long: it is going to be at least m sentences, and in fact, for each step there will be several sentences.
If we want to be pedantic and really break it down to all the rules of inference, the argument will have maybe 3m or 4m sentences. But the most important thing is that it is finite. We are doing this argument for a particular value of m; if m is 1,000, the proof will have several thousand sentences, but it can be written down. It is very similar to what is called an inductive proof, but this case is actually weaker, a little simpler than a proof by induction. A proof by induction would be like this: you have a way to pass from step number k to step number k + 1, and you claim something that is true for every k, not just for k from 1 to m, but for every k. There is a difference between a finite set from 1 to m, even if m is large, and the set of all natural numbers. The inductive step is one of the rules of inference; it enables you to prove a statement of the form "for every k running over the set of natural numbers, such and such holds". It is a nontrivial rule of inference, because you are claiming something which cannot, even in principle, be checked by a finite computer program; you are saying that because a finite program can in principle reach each individual k, the statement is true simultaneously for all k. That is a real, nontrivial addition to the rules of inference. But we do not need it at this point, because here we are proving something for a specific value of m, and for a specific value it can be taken care of by a finite program, by a finite proof. It is induction light, if you will: a lighter version of induction, where you do not have to go all the way to infinity, but only up to a certain number. The idea built into this argument is very similar to the idea of induction, but it does not use the full power of induction, because it can in fact be formalized as a finite proof.

Yes, but why is there a w left? Because we know what happened at the previous step. Here we are at the beginning of step m. There are m - 1 u's on the right, and therefore there must be n - m + 1 w's, because after each step the right-hand list has n elements: one in, one out. We also know that at every step, whatever we did, we did not violate the property of being a spanning set. Therefore this list, before the addition of the last u, is spanning, and when we throw in one more element, it becomes linearly dependent. Now, if there were no w's left, you would have a contradiction: on the one hand, the list has to be linearly dependent, by what I just said; on the other hand, we know it is linearly independent, because that is the initial condition on the u's. In other words, for there to be no contradiction, there must be at least one w left. That means n - m + 1, which is the number of remaining w's, is at least one, which is equivalent to saying that n is greater than or equal to m. That is what we wanted to prove; this is the end of the proof.

All right. Now, this is a very powerful statement, because it applies in any vector space. Fix a vector space; whatever linearly independent set of m elements you take in that vector space, and whatever spanning set of n elements you take, it will always be the case that m <= n. A powerful statement; that is why the proof is not your average computation, it is rather sophisticated.
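Since the proof is, as just discussed, essentially a finite program, here is a hedged sketch of it as an actual program: the exchange procedure run on a small example, reusing first_dependent_index from the sketch above (run that cell first). The example vectors are my own.

```python
import numpy as np

def exchange(us, ws):
    # us: a linearly independent list; ws: a spanning list.
    # Move the u's in one at a time; each insertion makes the list
    # dependent, and the first vector expressible through its
    # predecessors is necessarily one of the w's, so delete it.
    right = list(ws)
    for k, u in enumerate(us):
        right.insert(k, u)               # adjoin u after the earlier u's
        j = first_dependent_index(right)
        assert j is not None and j > k   # a w must still be available
        del right[j]                     # removal preserves spanning
    return right

us = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
ws = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]),
      np.array([0.0, 0.0, 1.0])]
# The loop runs to completion, which is only possible because m <= n;
# the result is again a spanning list of n = 3 vectors.
print(len(exchange(us, ws)))             # 3
```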
So how are we going to use it? I already gave you an idea last time. We are going to discuss a kind of sweet spot: a set which is both linearly independent and spanning. Such a set is called a basis. Okay? So let me give the definition. Given a vector space V over F, a set v1, ..., vn of vectors in V is called a basis of V if it is a spanning set and it is linearly independent.

Now, why is this interesting? Two reasons. First of all, if you have a basis like this, it gives you a convenient way to express all vectors in terms of the vectors of the basis. Namely, this is the statement of the following lemma: v1, ..., vn is a basis if and only if every vector v in V can be written uniquely as a linear combination v = a1 v1 + ... + an vn, where the coefficients ai are elements of the field over which your vector space is defined. In other words, first of all, every vector can be written as a linear combination; that is the spanning property. But also uniquely: the numbers ai are completely determined by the vector v. Once v is fixed, these numbers are fixed; v cannot be written in two different ways in this form.

All right, let's prove this quickly. It is an example of an if-and-only-if statement. These are really two statements, and as we discussed, we have to give two proofs, of two implications in two different directions.

First, this way: suppose that v1, ..., vn is a basis; let us show that indeed every vector can be written uniquely like this. First of all, if it is a basis, then it is spanning, so every vector can indeed be written as a linear combination: v = a1 v1 + ... + an vn. Let me call this equation (*). Now we need to show that this expression is unique. Suppose there are two ways to write it: v = a1 v1 + ... + an vn, and also v = a1' v1 + ... + an' vn. Then the first expression is equal to the second, which means, by taking everything to one side and using distributivity, that (a1 - a1') v1 + ... + (an - an') vn = 0. We see that some linear combination of these vectors is equal to zero. But remember, being a basis does not just mean being a spanning set; the set is also linearly independent. Linear independence means that this equation can only be satisfied if every coefficient is zero. So, from the linear independence of the vi's, all of these numbers, a1 - a1', a2 - a2', and so on, are equal to zero; in one shot, ai - ai' = 0 for all i. But this means ai = ai', and we are done, because it means the expression really is unique: there cannot be two different ways of writing v as a linear combination. This proves the lemma in one direction. Ask me if something is not clear.

Now let's prove it in the opposite direction. Suppose that every v can be written uniquely as such a combination; we want to show that v1, ..., vn is a basis. Well, this already implies that v1, ..., vn is a spanning set, because every vector can be written as a linear combination of them. That is the first condition. Second condition: let's prove linear independence, because being a basis means that it satisfies two conditions.
It is a spanning set and it is linearly independent. We already got the spanning property for free, because we are told that every vector can be written as such a combination; now we are going to exploit the word "uniquely". "Can be written" is what gives us the spanning property; "uniquely" is going to give us linear independence. What does linear independence mean? It means that if a1 v1 + ... + an vn = 0, then all ai = 0.

How to see that? It is very easy, because this is an instance of the same formula. Think of the equation a1 v1 + ... + an vn = 0 as the formula (*) in the special case where v is the zero vector: it is an expression of the vector 0 as a linear combination of the vi's. But the vector 0 has another such expression: take all coefficients to be zero. That works, because zero times anything is zero, and summing zeros gives zero; so 0 = 0 v1 + 0 v2 + ... + 0 vn, for sure. And now we have got ourselves two expressions of this very special vector 0: first as the combination with coefficients a1, ..., an, and second as the combination with coefficients 0, 0, ..., 0. But we assumed that the expression is unique for every vector, including the zero vector, which means the coefficients in the two expressions have to coincide. It means that all the ai are equal to zero, for all i. That is exactly what we wanted to prove, because linear independence is precisely the statement that, given this equation, the only solution is all ai = 0. We obtained it from the uniqueness property.

Okay. You see what this means: what this gives us is suddenly a nice record-keeping device. We have a way to keep track of vectors in our vector space by means of these linear combinations. This bookkeeping device is very important, because in general you can have some abstract vector space V. For instance, it could be the vector space of solutions of a very large system of linear equations. We may not know how to solve the system, and we may have no good description of the solutions, but we know that it is a vector space, because, as we discussed, if you have a system of homogeneous linear equations, where the right-hand side is zero, the solutions form a subspace of the bigger vector space where all the variables live. Or it could be the vector space of solutions of some system of linear differential equations. Again, some complicated abstract vector space. How do you describe elements of such an abstract vector space? It is not clear. But suppose somebody gave us a basis.

Suppose we found, or constructed, a basis of V; call it v1, v2, ..., vn. Then, according to this lemma, every vector in V can be written uniquely in this way. Not only can it be written, but it can be written uniquely, which means that the coefficients a1, ..., an are completely determined by v: if you know them, you know v, and conversely, if you know v, you know them. This means that you can convert the information about v, which is abstract information about some vector living in some abstract vector space, into the much more tangible information of a collection of n real or complex numbers, depending on whether your field is R or C. You see, that is a very powerful thing. That is what I mean by bookkeeping: we are no longer trying to describe some abstract object; we convert it into a collection of numbers, which we usually arrange in a column (a1, ..., an).
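Here is a minimal numeric illustration of this bookkeeping, assuming numpy; the basis below is my own example. With the basis vectors as the columns of a matrix B, the coordinates of v are the unique solution of B a = v, and uniqueness is exactly the invertibility of B.

```python
import numpy as np

B = np.column_stack([[1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 0.0, 1.0]])   # columns: a basis v1, v2, v3 of R^3
v = np.array([2.0, 3.0, 1.0])            # an "abstract" vector to encode

a = np.linalg.solve(B, v)                # its unique coordinates in this basis
print(a)
print(np.allclose(B @ a, v))             # True: v = a1*v1 + a2*v2 + a3*v3
```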
The entries, as always, are real or complex numbers. You see, this sets up a one-to-one correspondence. How interesting. What is going on here? In the first course of linear algebra that you have taken, like Math 54, that was actually the main example of a vector space: the set of all n-tuples of real numbers. Now we realize that actually every vector space can be converted to it, in some sense identified with it. The precise statement is that the set of such n-tuples forms a vector space in its own right, which we call R^n. In the framework of a beginner's course in linear algebra, that is all we knew: we only viewed vectors as collections of numbers. But now we are taking a more abstract approach, where we do not demand that our vector space be of that form. However, the existence of a basis allows us to identify any finite-dimensional vector space with a vector space of that kind. This is what is called an isomorphism, something we will talk about most likely next week, when we discuss linear transformations and, in particular, isomorphisms between vector spaces. This is a preview of that. For now, what I want you to see is that a basis gives us a very nice bookkeeping device: it enables us to convert vectors into collections of numbers, and those numbers can be called the coordinates of the vector.

To drive this point home even more strongly, let's look at a low-dimensional example, just the example of a plane. When I say "an abstract vector space", here is an example of what I mean. What is a plane? Take this blackboard, an idealized blackboard, so it is completely flat, and extend it to infinity in all directions. That is a plane, a vertical plane. By itself it is not yet a vector space; we need a marked point in it. Once we have a marked point, we construct a vector space as follows. Its elements are the pointed intervals which always start at this fixed point and go to some other point. That is the set of elements of this vector space.

Now I also need to describe the two operations, addition and scalar multiplication. Addition is given by the parallelogram rule; familiar, right? Let me draw it here so that I have enough space. Scalar multiplication: this is going to be a vector space over the real numbers, so I need to explain how to multiply a vector by a real number. If the number is positive, I simply construct a vector pointing in the same direction whose length is multiplied by that number. If it is negative, the new vector goes in the opposite direction, and the length is multiplied by the absolute value. These are the two operations. After that, you can go through the axioms of a vector space and verify that this set is indeed a vector space with respect to these operations. I will call this vector space "the plane". Okay?

Now, we are used to thinking that this is exactly R^2, but it is not quite so, because to identify it with R^2 I have to choose a basis. Usually we think that somehow there is a coordinate grid which is given. But that is not really true. Look at this blackboard. This blackboard was here before we came to class; it does not know anything about coordinate axes. We pretend that it does, because we say that one direction is horizontal and another is vertical. But if you tilt your head a little bit, the notions of horizontal and vertical change.
Or think of the classroom: clearly there is nothing canonical about these directions. The notion of a basis gives us the correct perspective on the relationship between this geometric vector space, which is what I would call an abstract vector space, something that comes out of nature, so to speak (idealized nature, because an infinite blackboard is an idealized object which we do not really find in nature, but which we can imagine), and R^2. On this infinite plane there is indeed a collection of vectors, and there are two operations. It is a vector space without any choice of a coordinate grid. The choice comes from a choice of a basis, because on this plane I could choose a basis. This basis would consist of two vectors. For example, I could take this vector and this vector, and call them v1 and v2. Once I choose these vectors, which is equivalent to saying that I have chosen coordinate axes, I can make a coordinate grid by drawing lines parallel to the first one and to the second one. I purposely drew them in such a way that they do not look perpendicular to each other, to emphasize the point that they do not need to be. The only thing we require is that they form a basis.

Now, what does it mean for two vectors to form a basis? It means, first of all, that they are linearly independent, which for two vectors means that they are not proportional to each other. If you look at the equation of linear dependence for two vectors, a1 v1 + a2 v2 = 0, and take one of the terms to the other side, you get a1 v1 = -a2 v2, which means that they are proportional. So being linearly dependent, in the case of two vectors, simply means being proportional to each other; this includes the case when one of them is zero, because then you can say it equals zero times the other one. Geometrically, proportional means that they go along the same line; clearly these two do not. Being linearly independent, for two vectors, means that they are transversal to each other, not proportional. That is the first property of a basis.

The second property of a basis is that every vector can be written as a linear combination of these two. That is what creates the coordinates, because then we can write every vector v as a1 v1 + a2 v2, and to v we can assign the pair (a1, a2). This gives an identification between this vector space and R^2: every vector of geometric origin, a pointed interval on a blackboard without any preset coordinate grid, once we make a choice of a basis, corresponds to a pair of real numbers, an element of R^2.

You see, this is a slightly different perspective on the spaces R^2, or R^n in general. They do have a special role in linear algebra, but not because every two-dimensional vector space is R^2, or every finite-dimensional one is R^n; rather, because every finite-dimensional vector space can be identified with R^n once we choose a basis. This will become clearer when we talk about transformations and isomorphisms, but I think it is a good idea to see examples of this now.
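For the two-vector criterion just described, "not proportional" is a one-line determinant test. A hedged numpy sketch; the tolerance and the function name are mine.

```python
import numpy as np

def is_basis_of_plane(v1, v2):
    # Two vectors form a basis of R^2 iff they are not proportional,
    # i.e. iff the 2x2 determinant of the matrix [v1 v2] is nonzero.
    return abs(np.linalg.det(np.column_stack([v1, v2]))) > 1e-12

print(is_basis_of_plane([1.0, 0.0], [1.0, 1.0]))   # True: transversal
print(is_basis_of_plane([1.0, 2.0], [2.0, 4.0]))   # False: proportional
```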
Maybe one more thing related to this: what changes when we change the basis? The same vector will acquire different coordinates. Imagine that the coordinates are the address of the vector once you superimpose a particular system of streets and avenues. Now suppose that in your city they completely redo the streets and avenues, so that every house gets a different address. That is what happens when you switch from one basis to another. This ties together with a bunch of homework exercises from last week, which is why I want to mention it.

Just one example. Let's go back to more familiar territory, where we choose two basis vectors which are perpendicular to each other. That is how we usually draw things in Math 54: that is our x-axis, and that is our y-axis. This is the vector (1, 0), and this is the vector (0, 1), relative to this coordinate system; let's call them e1 and e2, each of length one. A general vector is going to be a linear combination of the two, a multiple of e1 plus a multiple of e2, where the intercepts of the perpendiculars are, say, x and y. All right: that is how we use e1, e2, the basis, to assign to every vector a pair of real numbers, x and y.

But what if I choose a different basis? The most obvious choice is to take vectors along the diagonal and the anti-diagonal, the axes rotated by 45 degrees. My new basis vector will be this one, which has coordinates (1, 1) relative to the old basis, and the second one will have coordinates (-1, 1). Okay? That is v1, and that is v2.

Now, by the same token, I can write the same vector v both ways. What does it really mean that v has coordinates x and y? It means that v = x e1 + y e2. That is exactly the linear combination we have been talking about, except that in the general discussion I labeled the coefficients a1 up to an; now n = 2 and I am using x and y, where x plays the role of a1 and y plays the role of a2. But now I have these two other vectors, and I can write the same vector v as a v1 + b v2. Then the question is: what are a and b?

The point is that you can expand: a times v1, where you express v1 in terms of the original basis, which is (1, 1), plus b times v2 expressed in terms of the original basis, which I write as (1, -1); and this is supposed to equal (x, y). So we get one vector equation, which is really two equations, for the first component and the second component: a + b = x and a - b = y. Then the question, for example, could be: suppose we know the coordinates (x, y) relative to our standard basis, and we want to understand the coordinates relative to the new basis. You simply solve this system of equations for a and b, given x and y. The solution is the following. If you take the sum of the two equations, you get 2a on the left-hand side and x + y on the right-hand side, so a is the half-sum, a = (x + y)/2. Likewise, if you subtract the second equation from the first, the a's cancel and you get 2b = x - y, so b = (x - y)/2.
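Sign bookkeeping like this is easy to get wrong at the board, and a generic 2x2 solve avoids the issue; a hedged sketch, assuming numpy. Note that with v2 = (-1, 1), as introduced above, the solver returns b = (y - x)/2, whereas the board computation used (1, -1) for v2; that discrepancy is about to be pointed out in class.

```python
import numpy as np

V = np.column_stack([[1.0, 1.0], [-1.0, 1.0]])   # columns: v1 = (1,1), v2 = (-1,1)
x, y = 3.0, 1.0                                  # standard coordinates of v

a, b = np.linalg.solve(V, [x, y])                # solve a*v1 + b*v2 = (x, y)
print(a, b)          # a = (x + y)/2 = 2.0,  b = (y - x)/2 = -1.0
```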
The upshot of all this discussion, and one of the main differences from the first course of linear algebra that we were exposed to before, is the realization that a given vector space has more than one basis. Therefore, we are not stuck with R^n, with just one collection of real numbers; we understand that we get those collections when we choose a particular basis in our vector space, and there are many choices. Some choices can be useful for one particular problem, and other choices for other problems. It gives us more flexibility. And coupled with the ability to translate from one basis to another, which we will probably do next week, it creates a much more versatile theory.

Okay, that is what I wanted to say about these choices. But actually, strictly speaking, I have not yet shown you that a basis always exists. You see, I said a basis is a set which is linearly independent and spanning. But how do we know that such sets exist? In this case it is clear, because it is such a simple example. But what if it is a very large vector space? Why should bases exist? Maybe they do not. There is a very simple trick to show that they do, and this is where we use the notion of finite dimension. So I recall that a vector space V is called finite-dimensional if it contains a finite spanning set. Okay?

Yes? Say it again; which example? Ah, I see: comparing the second components, you get a - b = y, but the second basis vector was (-1, 1). Right. Okay, good catch, you are right. I can redo the calculation, or I can choose a different second basis vector which satisfies these equations; so let that be my v2, the one with coordinates (1, -1). Thank you for pointing this out; I did not write it correctly. Now v2 is (1, -1), and that fixes it. Very good. One more basis thrown in, even if it is closely related to the previous one.

Okay. So we have to start somewhere, and we have to impose the condition that V is finite-dimensional. Not every vector space is finite-dimensional. For instance, in the book there is the example of the vector space P(R), the set of all polynomials in one variable over R. This is not finite-dimensional: it does not have a finite spanning set, unless you bound the degree; the polynomials of degree up to n do form a finite-dimensional space. But if you do not impose a condition on the degree, you cannot find a subset which is finite and spans the whole space. So this space is not finite-dimensional, and therefore we call it infinite-dimensional. Not every vector space is finite-dimensional; however, the main focus of this course is on finite-dimensional vector spaces, and we are going to assume that our vector space is finite-dimensional.

So suppose V is finite-dimensional. Then there exists a finite spanning subset; I will use the same notation as in the theorem we proved at the beginning. Suppose it has n elements, and denote them w1, ..., wn. This is a spanning set, and I claim that I can manufacture a basis out of it, by removing things if necessary.

First of all, there are two possibilities. It may already be linearly independent, in which case it is already a basis and we have found a basis, right? Otherwise (and remember, we discussed that there can be linearly independent subsets with a smaller number of elements) suppose it is linearly dependent. Then we know that there exists some i between 1 and n such that wi is a linear combination of the preceding ones: wi = a1 w1 + ... + a(i-1) w(i-1). All right; then we are in a situation similar to the one we dealt with when proving the theorem. We have a spanning set, and we know that one of the vectors is a linear combination of the others. If we remove it, the remaining set is still spanning, because spanning means that every vector is a linear combination of these w's, but one of them is itself a linear combination of the others: we just substitute that expression and see that every vector is a combination of all of them except that one. This pruning is easy to phrase as a little program; see the sketch below.
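A hedged sketch of the pruning, assuming numpy and reusing first_dependent_index from the earlier sketch; the redundant spanning list is my own example.

```python
import numpy as np

def prune_to_basis(spanning):
    # Keep deleting a vector that is a linear combination of its
    # predecessors; each deletion preserves the spanning property,
    # so the loop ends at a linearly independent spanning list: a basis.
    vectors = list(spanning)
    while True:
        i = first_dependent_index(vectors)
        if i is None:
            return vectors
        del vectors[i]

ws = [np.array([1.0, 0.0]), np.array([2.0, 0.0]),
      np.array([1.0, 1.0]), np.array([0.0, 3.0])]   # spans R^2, redundantly
print(len(prune_to_basis(ws)))                       # 2: a basis remains
```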
So we are left with the list w1, ..., wn with wi omitted. It has one fewer element; the number of elements is n - 1, and it is still spanning. Then we ask again: is it linearly independent? If it is, we have constructed a basis. If it is not, one of the vectors is again a linear combination of the preceding ones, and we can remove it.

Eventually we get to the bottom. Here I want to assume that my vector space is bigger than just the zero space. Remember that there is the somewhat weird example of a vector space which consists of just one element, the zero element. It is different from the other cases; let's assume that the vector space we are dealing with is not that one, and we will deal with that case separately. If a vector space is nonzero (let me repeat: if a vector space is nonzero), then it has a nonzero vector, and therefore any spanning set must contain at least one element. So as we keep removing vectors, the process cannot continue indefinitely. Either we reach a single vector, and then that vector has to be spanning, and it is linearly independent because it is nonzero, right? Or we stop somewhere in the middle. In any case, after finitely many steps, we reach a linearly independent spanning subset, which is a basis.

This is how you show that a basis exists. But it shows more: it also shows that a basis produced this way has at most as many elements as the spanning set you started from. If you already have a spanning set, to get to a basis you simply start removing redundant vectors, right? This ties in nicely with the theorem: the theorem says that linearly independent sets always have at most as many elements as any spanning set. Here we come to a basis from above, so to speak: we start with a spanning set and start removing things. Likewise, we could start with a linearly independent subset and start throwing in more vectors to get to a basis; I will just leave it at that.

But in the remaining time, I want to get to the jewel of the whole theory, which is the concept of dimension. That is the following theorem. Suppose V is a finite-dimensional vector space, and B1, which is v1, ..., vm, and B2, which is, say, w1, ..., wn, are two bases of V. Then m = n. In other words, any two bases in a given vector space have the same number of elements.

I claim that this is an immediate consequence of the theorem that we proved earlier, which I will rephrase as follows: if S1 is a linearly independent set in V and S2 is a spanning set, then the number of elements of S1 is less than or equal to the number of elements of S2. But now we have B1 and B2, and they are of both kinds. You see where this is going: I am going to get both inequalities, and therefore equality. First, treat B1 as S1, because a basis is certainly linearly independent (that is one of its properties), and treat B2 as S2, because a basis is spanning. From this we get that the number of elements of B1 is less than or equal to the number of elements of B2. But you can also reverse the roles. A basis is exactly the sweet spot: it satisfies both conditions, you see.
So we can treat B1 as S1, which means linearly independent, but we can also treat it as S2, which means spanning. Now we reverse the roles: B1 we treat as S2, and B2 as S1, which gives that the number of elements of B2 is less than or equal to the number of elements of B1. Together, the two inequalities mean that the number of elements of B1 is equal to the number of elements of B2. How interesting: there are many different bases in a given vector space, unless the vector space is just the zero space, right? But there is something we can say for sure: they all have the same number of elements, and that number is called the dimension.

So it is not surprising that on the plane we saw different examples of bases, and all of them had two elements; intuitively, this agrees with what we understand about the dimension of the plane. The plane is indeed two-dimensional, right? Okay, we are out of time, so we will continue on Thursday. This is dimension.