All right, so today we'll talk about invertible linear maps and isomorphisms. It's a very important topic which we'll be using throughout this course. My presentation will be slightly different from the presentation in the textbook, in section 3.D I believe. The difference is that I want to emphasize properties of maps which do not belong to linear algebra but belong to set theory. Remember the idea of a hierarchy of formal systems: at the base is set theory. In the formal system of set theory there are notions of sets, maps between them, and so on, and various properties of those maps. Then on top of it we build the formal system of linear algebra, where we consider not arbitrary sets but sets which are vector spaces: they have operations of addition and scalar multiplication satisfying the axioms of a vector space, and so on.

The notion of invertibility is a notion that comes from set theory. It lives at the fundamental, basic level, not at the level of linear algebra. You see? I think it's important to have this perspective and to understand that this notion exists already for plain sets, without any structure of vector space.

In fact, we already spoke about some properties of maps of sets. If you have two sets, we have the notion of a map from S1 to S2, which is simply a rule that assigns to every element of S1 a particular element of S2. Then we talked about properties of maps. We talked about injective maps: those are the maps which send different elements of S1 to different elements of S2. We talked about surjective maps: those are the maps with the property that for every element of S2, there is an element of S1 which maps to it. Then we have the notion of a **bijective** map, which is **injective** and **surjective** simultaneously. Those are the ones which establish a one-to-one correspondence between the two sets: for every element of S1 there is an element of S2 and vice versa, matched up by the map. In particular, if S1 and S2 are finite sets, each with a finite number of elements, then the map can only be bijective if they have the same number of elements.

Okay. Now there is one more property that we should discuss, which is actually equivalent to the property of a map being bijective. We will say that a map f from S1 to S2 is **invertible** if there is another map in the opposite direction such that the following holds. You remember, we talked about compositions of maps in general: you have sets S1, S2, S3, a map f from S1 to S2, and a map g from S2 to S3. Then you have a composition, which is denoted in a slightly weird way, because even though f is the first one, we put it on the right: we write g ∘ f. This is called the composition of the two maps; we talked about this. Sometimes we put a circle to indicate that we're taking a composition, and I will put a circle just to emphasize that fact. This composite map goes from the first set to the last, right? This is all general theory of maps of sets; in set theory, you see, there is no linear algebra yet. These are all basic notions of set theory.

However, now we specialize to a particular case. What is special in this case is that the second map actually goes backwards: we are considering the case when S3 is the original set S1, so the composition goes from S1 to itself.
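To keep the definitions in front of us, here they are in symbols; this is just a restatement of what we said, nothing new:

```latex
% f : S_1 \to S_2 a map of sets.
\begin{align*}
f \text{ is injective}  &\iff \text{for all } x, y \in S_1:\ f(x) = f(y) \implies x = y,\\
f \text{ is surjective} &\iff \text{for every } z \in S_2 \text{ there is } x \in S_1 \text{ with } f(x) = z,\\
f \text{ is bijective}  &\iff f \text{ is injective and surjective},\\
(g \circ f)(x) &= g(f(x)) \quad \text{for } f : S_1 \to S_2,\ g : S_2 \to S_3.
\end{align*}
```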
Now, if you study maps from a set to itself, you find that there is a very special map, which is called the identity. There is an identity map, which we'll call I_{S1}, which sends every element x to itself. Note that this doesn't make sense if you have two genuinely different sets S1 and S2, because then there's no way to say that a particular x is an element of both S1 and S2. But if you have a map from S1 to S1, you can ask whether every element of S1 goes to itself under this map. The map for which it does is called the identity map.

The property that we impose when we talk about invertible maps is this: for a given map f from S1 to S2, there exists a map g going in the opposite direction such that the composition g ∘ f is equal to the identity on S1. Okay. But now we can also reverse the roles of S1 and S2. We can start with S2 and apply g, which indeed is a map from S2 to S1, and then apply f. Then we also have a composition, but now it's f ∘ g, because it's g that comes first and therefore goes on the right. For S2 we also have an identity map, which is called I_{S2}, which sends every element of S2 to itself. The second condition is that f ∘ g is that identity, you see.

Now let's see what this means in the special case when our sets have finitely many elements. Specifically, let's say each of them has four elements. This is your S1, this is your S2, okay? Then we indicate the map by arrows which show where each particular element of S1 goes in S2. All right. In this case we see that the map is indeed bijective: any two different elements go to two different elements, so it is injective, and every element on the right receives an arrow, so it is surjective. In fact, our next lemma will be that a map is invertible if and only if it is bijective; that's why I'm using this particular example to illustrate the notion of being invertible.

In this case, what is the inverse? It is the map obtained by simply reversing the arrows. And we can do that without violating the basic rule of a map. The basic rule of a map is that from every point on the left there is one and only one arrow. You cannot have a situation where two arrows emanate from a particular point on the left, because that would mean we are not quite sure what to assign to this element: it's either this or that. That's no map; it's not well defined. However, precisely in the situation we are in, if we reverse the arrows we still get a well-defined map, because into each element on the right there comes one and only one arrow. That's exactly what being injective and surjective means. Therefore, by reversing the arrows we obtain a well-defined map g. And what happens if we take the composition, first apply f and then apply g? Obviously, we just go back and forth: we return to the same element we started with, and we get the identity. The diagram illustrates precisely this situation in the special case when S1 and S2 are two sets of four elements: you have a map f, you have a map g going in the opposite direction, and these properties are satisfied, right?

This suggests that perhaps the two notions are equivalent, and in fact that is the case. **Lemma 1.** A map f from S1 to S2 is invertible if and only if it is bijective. I'll leave the proof to you as an exercise. It basically involves unpacking the two notions and seeing that being invertible implies the map being bijective, and vice versa.
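Here is this four-element picture as a minimal sketch in code; the element names are my own, hypothetical choice:

```python
# A bijective map between two four-element sets, written as a dictionary.
f = {"a": 1, "b": 2, "c": 3, "d": 4}   # S1 = {a, b, c, d}, S2 = {1, 2, 3, 4}

# "Reversing the arrows": well defined exactly because f is bijective,
# i.e. each element of S2 receives one and only one arrow.
g = {value: key for key, value in f.items()}

# Both compositions are the identity.
assert all(g[f[x]] == x for x in f)   # g ∘ f = identity on S1
assert all(f[g[y]] == y for y in g)   # f ∘ g = identity on S2
```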
This is a perfect illustration of the situation. As I said, being able to reverse the arrows like this, so that you can go back to the original element that you started with, implies that the map is both injective and surjective. Conversely, the map being injective and surjective means that it is a one-to-one correspondence; and if it's a one-to-one correspondence, it's one-to-one this way and it's also one-to-one that way, right? These are very basic notions, and I don't want to spend too much time on them, because we have a lot more sophisticated material to cover today. But I strongly suggest that you don't take it on faith; actually write it down, that invertibility implies being bijective, and vice versa.

There is one more statement, which I will prove, and that is a statement of uniqueness. A priori, it could be that you have a map which is invertible, and here in the definition we said it's invertible if there is a map which has this property. We are not asking in the definition that such a map be unique; a priori, there could be two different maps g and g' which have this property. The diagram suggests that that's unlikely, because it seems that everything is determined by f: you're simply reversing the arrows. What freedom could there be?

That is the statement of this lemma. **Lemma 2.** If g and g' both satisfy the conditions in the definition of an invertible map, then they are equal. In other words, the inverse is unique; it is uniquely defined by these properties. As I said, intuitively it's clear from the diagram. But you could say: okay, it's a diagram in a particular, very special case, where you have, first of all, finite sets, and moreover we are illustrating it with sets of four elements. It feels like a slightly incomplete argument. So here is a more rigorous and very satisfying argument, which is actually similar in spirit to what we have used before, for instance when talking about the uniqueness of the zero element in a vector space, or the uniqueness of the negative element, the additive inverse. It's a cute sequence of equalities which establishes this fact, and I wanted to present it.

Here's how we argue. First of all, you have the map g from S2 to S1. We can precompose it with the identity map: g = g ∘ I_{S2}. Precompose, because we first apply the identity and then apply g. But you see, precomposing with the identity map is not going to change anything: the identity map is like a unit element, it does nothing, it just takes every element to itself. So inserting it changes nothing; that's why I wrote g = g ∘ I_{S2}. Now we use the fact that, according to the definition, I_{S2} is equal to f ∘ g'. We do it for g', because we are assuming that both g and g' satisfy the conditions, and the conditions are written here, number one and number two; we're using condition number two, but for g'. So g = g ∘ (f ∘ g').

Next we use the fact that composition is associative, and that again is a property of compositions, just from set theory. In general you would have maps between not necessarily these sets but some S1, S2, S3, and then one more, and you take the triple composition. If you look at the definition of a triple composition, you will see that it doesn't matter whether you compose the first two maps first and then compose with the third, or the other way around: in all cases you are just following the arrows. That's why g ∘ (f ∘ g') is the same as (g ∘ f) ∘ g'.
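Here is the whole chain of equalities at a glance, summarizing the argument in symbols:

```latex
% Uniqueness of the inverse. Suppose g and g' both invert f : S_1 \to S_2,
% i.e. g \circ f = I_{S_1} and f \circ g = I_{S_2}, and similarly for g'. Then:
\begin{align*}
g &= g \circ I_{S_2}       && \text{the identity does nothing}\\
  &= g \circ (f \circ g')  && \text{condition (2), for } g'\\
  &= (g \circ f) \circ g'  && \text{associativity of composition}\\
  &= I_{S_1} \circ g'      && \text{condition (1), for } g\\
  &= g'.
\end{align*}
```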
I just put the brackets in a different way. Let me maybe say it once more. If you have three maps like this, there is a well-defined operation of just taking an element, applying this arrow, then this arrow, then this arrow: a triple composition. You can view it as composing the first map with the composition of the last two, but it's also the composition of the first two maps composed with the last one. In both cases it's just following your nose, following the arrows. That's why the result is the same; there's nothing complicated here.

But now we use the fact that g ∘ f, according to the first property, but now for g, is the identity on S1. So we are back in a situation where we're composing our map with an identity map. There we were precomposing with it; here, so to speak, we are postcomposing. But that doesn't change anything either, so we get just g'. The sequence of these equalities gives you an equality between g and g'. That is the proof, and so it is true in general: if f is an invertible map, its inverse is unique. Any questions so far? Okay.

This was general stuff about set theory. What have we learned? We have learned that the notion of a map being bijective is equivalent to the notion of the map being invertible. Look at the definition of bijective: it's something innate to the map between the two sets. We are saying bijective is the same as injective and surjective, and that is defined within the properties of the map itself, right? It's about what happens to elements: whether two different elements go to different elements, et cetera. Whereas invertibility has a slightly different flavor: it postulates the existence of another map, in the opposite direction. You see, it's a non-trivial fact that the two notions are actually equivalent to each other. It is a very basic fact of set theory, which we then exploit in all other areas of mathematics. Because, as I said, set theory can be viewed as the foundation of mathematics, in the sense that all the mathematics we study today can be cast in the language, or formal language, more precisely, of set theory. For instance, a vector space is a set with such and such operations satisfying such and such axioms. A field is a set with such and such operations satisfying such and such axioms. And so on.

Now, it's very essential to realize that in this proof we used both properties: here, we were able to replace the composition in one direction by I_{S2}, and there, we were able to replace the composition in the other direction by I_{S1}. This is actually a very important point: in general, both are essential. However, we will see that sometimes one of them implies the other.

Okay, now we specialize. We specialize to the case when both sets are vector spaces, equipped with structures of addition and scalar multiplication satisfying the axioms of a vector space. As is customary, we will denote them not S1 and S2 but V and W, and we'll call the map between them T. Now, of course, since we are now within the formal system of linear algebra, we are not going to consider just an arbitrary random map from V to W, but a map which is compatible with the structures of vector spaces on V and W. By the way, it goes without saying that they should be defined over the same field for this notion to make sense. So this is a linear map, which we have already discussed. And we have seen already that the notions of injective, surjective and bijective maps carry over to this situation.
And now we can use Lemma 1 to say that T is bijective if and only if T is invertible, in the sense of the above definition. By the way, since the inverse is unique according to Lemma 2, we will denote it from now on as T⁻¹. Remember, we did something similar when we talked about additive inverse elements in a vector space. The axiom only claimed that every vector, every element of a vector space, has an additive inverse; it was not claimed that it is unique, but we were able to prove that by combining it with the other axioms. Once we did, we were justified in using the notation minus v for the inverse element, where v was the original one: because we were able to establish that this element is uniquely determined by v, we could use notation that is pegged to v. Likewise here: since the inverse is unique for a given T, if it exists, of course, we are justified in denoting it T⁻¹. In other words, if T from V to W is a bijection, then it has an inverse, which, following this notational scheme, we will denote T⁻¹; and vice versa, because we established that the two properties of a map, being bijective and being invertible, are equivalent to each other.

However, what is not obvious yet is whether T⁻¹, given that T is a linear map, is also a linear map. So far we know that it exists as a map of sets from W to V. A priori, it could be that T is linear and invertible, but the inverse may or may not be a linear map. We need to establish that it actually is a linear map. This way we see that we don't get out of the category of vector spaces by considering the inverse. That's not difficult to do; that's Lemma 3. **Lemma 3.** If T is an invertible linear map, then T⁻¹ is also a linear map.

So what do we need to show? To make it a little more clear: suppose you have an invertible linear map T from V to W. Then its inverse, which is a map from W to V, is also a linear map, which means that for any x and y in W, T⁻¹(x + y) = T⁻¹(x) + T⁻¹(y), and T⁻¹(λx) = λT⁻¹(x). Let's call the first one star and the second one two-star.

Let's show the first one. Take x and y in W. T is bijective, because it's invertible, and invertible and bijective are equivalent. Being bijective, it must in particular be surjective, all right? Which means that there exist a and b in V such that our original map T from V to W sends a to x and b to y: x = T(a) and y = T(b). Let's substitute these expressions in here. We get T⁻¹(x + y) = T⁻¹(T(a) + T(b)). But T is certainly a linear map, so T(a) + T(b) is equal to T(a + b), since T is linear by assumption. Therefore this is equal to T⁻¹(T(a + b)). But applying T and then T⁻¹ to a + b is the same as applying the composition T⁻¹ ∘ T to a + b, and this composition is the identity on V: first apply T, then apply T⁻¹, and it's the identity on V, because T is invertible. This is property one in the definition of an invertible map. So this is equal to the identity applied to a + b. And the identity of a + b is just a + b, because the identity doesn't do anything, by definition.
Now, if x = T(a), it means that a = T⁻¹(x), reversing the arrows, and likewise b = T⁻¹(y). So this is equal to T⁻¹(x) + T⁻¹(y), which is what we wanted to prove, right? You see, it's more or less obvious, but I wrote it out in detail: the sum goes to the sum. Likewise for two-star; the proof is analogous. We conclude that if you have a map of vector spaces which is linear and invertible, then its inverse is also linear. That closes the circle.

Now we have a basic, good understanding of invertibility in linear algebra. First of all, this notion comes from set theory. In set theory there is a notion of a bijective map, and we have shown, or at least stated without proof, that this notion is equivalent to the notion of invertibility; that's number one. Number two, the inverse map is unique, uniquely defined by its properties. That shows us what this is all about in the language of sets, in the most general situation of arbitrary maps between sets. Then we specialize to linear algebra, to the case where our map is actually a map between vector spaces, and this map is a linear map. Then we apply what we've learned, by realizing that in this context a linear map is bijective if and only if it is invertible; and if it is invertible and linear, then its inverse is also linear. That gives a good understanding of what's going on. So first of all the inverse is unique, because it's unique in general; and second, it's linear. By considering inverse maps, we don't have to get out of the framework, the context, of linear algebra: the inverse will automatically also be a linear map.

Now, I like this logic, this presentation, more than the presentation in the book, because in the book the author does not talk about invertibility in the context of set theory. He defines the notion of an invertible map only for linear maps: he starts out with a linear map, and then he says that this map is invertible if there exists another linear map in the opposite direction such that these properties are satisfied. This leaves open the question of whether it's possible for a linear map to be invertible as a map of sets, but with an inverse that is not a linear map. Now this question is put to rest: first of all, we defined invertible maps in greater generality, but then, when we specialized to linear maps, we saw that the inverse is automatically also a linear map. Therefore the definition that is given in the book is actually equivalent to the definition which we have given here. All right. Any questions?

Now, there is a special term for such a map. The term is isomorphism. **An invertible linear map between two vector spaces is called an isomorphism.** That's the first part. The second part is that two vector spaces are called **isomorphic** if there exists an isomorphism between them, from V to W. And note that in this case there is also an isomorphism in the opposite direction. In fact, what is clear is that if you take the inverse of the inverse, you get back the original map; this is obvious from the definition. You start out with T, and you say it's invertible if there exists a map in the opposite direction such that these two properties are satisfied. Suppose then that T⁻¹ exists. Then I claim that T⁻¹ itself is invertible, because T satisfies the two conditions with the roles of the first property and the second property switched. Since the inverse is unique, this shows that the inverse of T⁻¹ is T itself.
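In symbols, the two computations we just made, side by side:

```latex
% Additivity of T^{-1}: for x = T(a) and y = T(b) in W,
\begin{align*}
T^{-1}(x + y) &= T^{-1}\bigl(T(a) + T(b)\bigr) \\
              &= T^{-1}\bigl(T(a + b)\bigr)        && \text{since } T \text{ is linear} \\
              &= (T^{-1} \circ T)(a + b) = a + b   && \text{property (1)} \\
              &= T^{-1}(x) + T^{-1}(y).
\end{align*}
% Inverse of the inverse: the two conditions
%   T^{-1} \circ T = I_V  and  T \circ T^{-1} = I_W,
% read with the roles of the maps switched, say exactly that T inverts T^{-1}:
(T^{-1})^{-1} = T.
```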
This is true in the greatest generality, when we just consider arbitrary maps between sets; so in particular it's true for linear maps. The notation is suggestive: if you take the inverse of the inverse, you get back the original map. So if you have two vector spaces which are isomorphic, this is equivalent to there being an isomorphism from V to W, or from W to V. The two properties are equivalent to each other, because if one map is an isomorphism, then it has an inverse, which is a map in the opposite direction that is also an isomorphism, and vice versa. Okay. Now, let's talk about isomorphisms in a little more detail.

First of all, I want to consider the special case when these two vector spaces are both finite-dimensional: both V and W are finite-dimensional. Then we have a theorem. **Theorem 1.** V and W are isomorphic if and only if they have the same dimension. You see, dimension turns out to be a criterion. First, if two finite-dimensional vector spaces are isomorphic, they must have the same dimension. Second, if two vector spaces have the same dimension, over the same field obviously, then they are isomorphic; in other words, there exists an isomorphism between them.

Okay, let's prove it. This is an if and only if statement, so we need to prove two statements. I will use the following notation: the symbol ≅ will mean "is isomorphic to". We can say either "V is isomorphic to W" or "V and W are isomorphic to each other"; these are two equivalent formulations, and we'll write V ≅ W.

First, we assume V ≅ W and we need to show that the dimensions are the same, right? That's one direction. Okay, but what does it mean that they are isomorphic? Isomorphic means that there exists an isomorphism between them. But an isomorphism is an invertible linear map, which means it is a bijective map, which means it is both injective and surjective. So there is a map T from V to W which is an isomorphism; that's what V and W being isomorphic means. But then this map must be bijective, as we discussed, which means it's both injective and surjective. Now, the first property, as we discussed last week, implies that the null space of T consists of just the zero element of V. Surjectivity implies that the range of T is the entire W.

But now let's apply the fundamental theorem of linear maps, which gives us an equality of dimensions. What does it look like? It looks like this: the dimension of V is equal to the dimension of the null space plus the dimension of the range. But the first term is zero, because the null space consists of just the zero element, by virtue of the map being injective. And the second term is the dimension of W, because the range is all of W. We see that the two dimensions are the same.

This proves the theorem in one direction: if V and W are isomorphic, they necessarily have the same dimension. Of course, here both are assumed finite-dimensional, which is what I said we are considering; otherwise the notion of dimension doesn't apply. We cannot write "dim V equals something" if V is not finite-dimensional. Could dim V be infinity? In what sense infinity? There are many ways in which a space can be infinite-dimensional. To be able to speak about dimension, we have to restrict ourselves to finite-dimensional vector spaces. Let me recall the notion of a finite-dimensional vector space: it is one which can be spanned by finitely many elements. Then it has a basis, and the number of elements in any basis is its dimension. Okay?
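So, in one line, the computation in this direction:

```latex
% T : V \to W an isomorphism. The fundamental theorem of linear maps gives
\dim V = \dim \operatorname{null} T + \dim \operatorname{range} T
       = 0 + \dim W                 % null T = \{0\} by injectivity,
       = \dim W.                    % range T = W by surjectivity
```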
This proves it in this direction. Now let's suppose, on the other hand, that V and W have the same dimension; that's the opposite direction. Let's just write dim V = dim W = n. Then choose a basis v1, ..., vn of V and a basis w1, ..., wn of W. Now define a linear map T from V to W by the following formula. Use the fact that every element v of V can be written as a linear combination of v1, ..., vn uniquely; that is exactly the fundamental property of a basis. So without loss of generality we can simply focus on elements of this form, because every element can be written in this form, and uniquely so. We need to describe where T takes a1 v1 + ... + an vn, and, as you probably already guessed, we simply send it to the analogous linear combination, but in terms of the w's: a1 w1 + ... + an wn. This is well defined, because the representation is unique, and it is linear, obviously: if you have two vectors of this form, the sum will have as coefficients the sums of the coefficients, and then it will go to the sum of the images; and likewise for a scalar multiple.

Also, we can define the inverse by simply reversing this arrow. T⁻¹ is going to go the other way: now we argue from the point of view of W, and we say every element of W can be written as a linear combination of its basis elements with some coefficients, and we define T⁻¹ as the linear map which sends a1 w1 + ... + an wn to a1 v1 + ... + an vn. Then clearly, as maps of sets, they are inverse to each other: if you apply T and then follow with T⁻¹, you get back your vector, and likewise, if you start in W, apply T⁻¹ and then apply T, you get back that vector. So T is an isomorphism. Questions?

Here we come to a subtle point, which concerns the definition of two vector spaces being isomorphic. Let's go back to this definition, okay? It is similar to the definition of invertibility: I wrote "there is", but I could have written "there exists" as well. The two sound very similar. When we talked about invertible maps, we defined an invertible map as one for which there exists an inverse, and after that we showed that the inverse is unique. This begs the question: what happens with the definition of an isomorphism? At first you might think, okay, again the professor is pulling our leg, he says "there exists" but then he'll prove that it is unique as well, right? But actually it's the opposite situation: it's almost never unique. This is a very interesting aspect of linear algebra: two vector spaces that are isomorphic are isomorphic to each other in many different ways. You can already surmise this from this proof. The theorem says any two vector spaces of the same dimension are isomorphic to each other. But when we tried to construct an isomorphism, we had to choose a basis in V and a basis in W. And we know that there are many different bases; we discussed this, for instance, on the plane: any two vectors which are not proportional to each other, and are not zero, will give you a basis. Obviously there are many different choices. If so, then it becomes clear that there are many different invertible maps between two vector spaces of the same dimension. Two vector spaces that are isomorphic are isomorphic to each other in many different ways, and it is essential to illustrate this point; let me emphasize it with a special case in a moment.
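Before that special case, here is the construction from this proof as a minimal numerical sketch, my own illustration with hypothetical bases of R^2 as the columns of two matrices: T reads off the coordinates of a vector in the chosen basis of V and rebuilds a vector with the same coordinates in the chosen basis of W. Different choices of bases give different isomorphisms.

```python
import numpy as np

# Hypothetical bases: the columns of B_V form a basis of V = R^2,
# the columns of B_W form a basis of W = R^2.
B_V = np.array([[1.0, 1.0],
                [0.0, 1.0]])
B_W = np.array([[2.0, 0.0],
                [1.0, 1.0]])

def T(v):
    """Send a1*v1 + a2*v2 to a1*w1 + a2*w2."""
    a = np.linalg.solve(B_V, v)   # coordinates of v in the basis v1, v2
    return B_W @ a                # the same coordinates, in the basis w1, w2

def T_inv(w):
    """The inverse map: the same recipe with the roles of the bases reversed."""
    a = np.linalg.solve(B_W, w)
    return B_V @ a

v = np.array([3.0, -1.0])
assert np.allclose(T_inv(T(v)), v)   # T^{-1} ∘ T = identity on V
```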
In fact, the only case when the isomorphism is unique is when both vector spaces are the zero vector space. The zero vector space has a unique basis, the empty set; it's not very interesting, right? Yes, the empty set is unique, and there is a unique isomorphism between zero vector spaces. There is only one zero-dimensional vector space, and that's the vector space with one element; of course there is a unique bijection between it and itself. But if, for example, you have two one-dimensional vector spaces, you can already rescale any isomorphism by a non-zero scalar. So you see that if your field has more than two elements, the isomorphism is not unique even in the one-dimensional case.

To illustrate this, let's consider a special case, the space that is most familiar to us: F^n. So V is F^n, which, as I said, consists of all n-tuples of elements of F, and W is any vector space over F such that the dimension of W is also n. Then here is another theorem; how many theorems do we have so far? Just one, so this will be Theorem 2. Obviously F^n has dimension n, and W has dimension n by assumption, so they are isomorphic according to Theorem 1. But now I would like to describe more precisely the set of all isomorphisms. I just explained that almost always there is more than one isomorphism, if one exists at all. So it's interesting to ask: can we parameterize isomorphisms in some way? How do we get a handle on the possibilities that exist?

**Theorem 2** describes the isomorphisms in this special case: an isomorphism from F^n to W is "the same as" a basis of W. The formulation is slightly loose; what do I mean by it? I mean that every isomorphism between F^n and W gives rise to a specific basis of W, and conversely, a specific basis of W gives rise to an isomorphism. So the data of an isomorphism and the data of a basis are in one-to-one correspondence, in bijection. Okay? A choice of a basis is the same as an isomorphism with the standard vector space.

We can call F^n the standard n-dimensional vector space. What do I mean by standard? By virtue of its very definition, it comes with a canonical basis: a basis which is obvious to all of us who look at it, there is no disagreement. There is a most obvious basis, which consists of the columns that have 1 in one particular position and 0 everywhere else. It's simply because F^n is defined this way: out of all n-tuples, you have the most obvious ones, where you put 1 in the i-th position and 0 everywhere else. That's clearly a basis; that's how we know F^n has dimension n. That's why I call it standard.

The point is that F^n comes with a canonical basis. The word canonical I have to make more precise. What I mean by canonical is something which does not depend on any choices. It's something that all of us will look at and say, yeah, that's the one. It's something objective; it's not something that depends on additional information. You don't need any additional information to come up with the most natural basis here. In such a situation I will call the object, which is objective, no pun intended, canonical. It's something that we all share, something that comes up naturally. F^n has a canonical basis, namely (1, 0, ..., 0), (0, 1, 0, ..., 0), and so on, up to (0, ..., 0, 1). You would be hard pressed to disagree with this, honestly. If anyone disagrees, please raise your hand now, or forever hold your breath. If I put a 2 somewhere, that would not be the most obvious choice, because it is 1 that is a canonical element of a field.
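In symbols, this canonical basis, which we are about to call β₀:

```latex
% The canonical basis \beta_0 = (e_1, \dots, e_n) of F^n:
e_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad
e_2 = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad
\dots, \quad
e_n = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}.
```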
Every field has two canonical elements: 0, the additive identity, and 1, the multiplicative identity. That's all we're using here, plus the definition of the space itself. Okay, so let me call this basis β₀. Now, if f is an isomorphism between F^n and V, then the image of β₀, that is, (f(e1), ..., f(en)), let's call these x1, ..., xn, is a basis of V, right? That's easy to see because f is an isomorphism; I'll just leave it for you to check. You use the fact that, since it's an isomorphism, its null space is zero and its range is the whole thing. This shows you that every vector in V can be written uniquely as a linear combination of these guys.

Conversely, if β = (v1, ..., vn) is a basis of V, define a linear map f_β from F^n to V by sending the i-th element of the canonical basis to vi. As we saw in the proof of Theorem 1, this map will be an isomorphism: if you have two vector spaces of the same dimension, and you have two bases, one in each, then the linear map which sends the i-th element of the first basis to the i-th element of the second basis, for all i, is an isomorphism. So this is an isomorphism. We see that we can go back and forth between bases and isomorphisms; basically, the idea is that β₀ under this map goes to β. This is how we establish a link between a choice of a basis in a vector space V and an isomorphism between V and the standard vector space F^n.

Now I should explain in what way a general vector space is different from F^n. Because for F^n I made a bold statement: for all of us it's clear that there is a canonical basis, a most obvious, most natural basis, which is what I called β₀. But this raises the question: maybe every vector space has a canonical basis in that same sense? Let me give you a very simple example which shows you that this is not the case; that, on the contrary, almost all vector spaces have many different bases, and none of them is better than the others. They are all created equal, or most of them are created equal. Without additional information, for a general vector space, you can't really pinpoint a most natural, most reasonable basis. This is a very important issue, actually, and I'll comment on it. But first, let me give an example to convince you.

Take R^3, one of the standard vector spaces. Obviously, in the case when F is R and n is equal to 3, it has the basis (1,0,0), (0,1,0), (0,0,1), which is what I call canonical. Now let me consider a subspace U in R^3. It will consist of those vectors (a1, a2, a3) which satisfy a1 + a2 + a3 = 0, okay? Now, this fits the pattern that we discussed last week, and we can see easily that this is two-dimensional: dim U = 2, right? Because we can describe it as the null space of the linear map from R^3 to R which takes (a1, a2, a3) to the sum a1 + a2 + a3. This map is surjective, because if you take (1,0,0), for instance, you get a non-zero element, and therefore the map is surjective onto R. Then by the fundamental theorem, you find that the null space has dimension two. That's why I wrote that the dimension is two.

Now suppose I ask you to construct a basis of this U. You know that it's two-dimensional, so from the previous discussion you know what you need: you need to find two vectors which are not proportional to each other and which satisfy this property. Here's how I would do it: I would take (1, -1, 0) and (0, 1, -1). First of all, both of them satisfy the property.
And they're obviously not proportional to each other, because this one has a zero in the last position and that one has a zero in the first position; so they're clearly a basis. But somebody else could come up with this: (1, -1, 0) and (1, 0, -1). What I want to argue is that there is no objective criterion, at least I don't see one, which without any additional input, without any additional information, would say that this choice is preferred to that one, that this one is better. They're both minimal in the sense that they only use the elements 0, 1 and -1. But even in this very rudimentary example, you see that there are two choices, and some of us will vote for the first, some for the second. The rationale for voting for one or the other depends on, say, introducing some lexicographic order. The advantage of the second choice is that both vectors have 1 in the first position; in the first choice, the pattern gets shifted by one each time, the beginning of a trend. If you try to do the same for R^n and you write down the equation a1 + a2 + ... + an = 0, this choice is easy to generalize: you take (1, -1, 0, ..., 0), (0, 1, -1, 0, ..., 0), and so on. But you could also put the 1 in the first position each time. It's not clear which is better. This is what I mean by saying there is no objective criterion. In this case there is at least a minimalist preference: these bases are obviously nicer than ones which would include, say, (2, -1, -1). And okay, you could say two bases on an equal footing is not such a big deal.

But now suppose you generalize this example. Instead of considering one equation in a three-dimensional vector space, with a very nice regular structure, a1 + a2 + a3 = 0, where yes, there are some nice bases, imagine that your vector space is defined as a subspace of some F^n where n is a very large number. It is defined as the collection of column vectors (a1, ..., an) which satisfy a system of linear equations, a system, not one equation. Let's say the dimension n is 1,000,000 and there are 1,000 equations. Yes, you know what the dimension is going to be if these equations are independent from each other, in other words, if the corresponding map is surjective: by the fundamental theorem you will find the dimension, and it will be 999,000. But finding a specific basis? There are many choices. I hope it's clear.

Another example of a similar nature is the space of solutions of a system of linear differential equations. In this case you can show that the space of solutions is a vector space, because if you have two solutions, the sum is again a solution, and a scalar multiple is again a solution. But it doesn't come with a basis: there is no natural collection of linearly independent solutions which also span this space. This is the typical situation: in general, a vector space does not come equipped with a canonical basis. There's one exception, and that's F^n. That's why this subtlety was not noticed before: in the previous linear algebra class you have taken, all the discussion was focused on F^n, where there is a canonical basis. This creates the impression that every vector space comes with an innate, natural basis. What I've been trying to explain is that that's not the case.
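To make this concrete, here is a small sketch, my own illustration, checking that both candidate bases really are bases of U = {(a1, a2, a3) : a1 + a2 + a3 = 0}, so the mathematics itself does not single one out:

```python
import numpy as np

# Two competing bases of U = {(a1, a2, a3) : a1 + a2 + a3 = 0} in R^3.
basis_1 = [np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, -1.0])]
basis_2 = [np.array([1.0, -1.0, 0.0]), np.array([1.0, 0.0, -1.0])]

for basis in (basis_1, basis_2):
    # Each vector lies in U: its coordinates sum to zero.
    assert all(abs(v.sum()) < 1e-12 for v in basis)
    # The two vectors are linearly independent: the stacked 2x3 matrix has rank 2.
    assert np.linalg.matrix_rank(np.vstack(basis)) == 2
    # Since dim U = 2, two independent vectors in U automatically span U.
```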
This makes the notion of isomorphism even more important, because you see that the choice of a basis is very closely linked to the choice of an isomorphism. Specifically, an isomorphism between the standard vector space and a given vector space is the same as the choice of a basis in that vector space.

Why is this important? Because there have been two major applications of linear algebra in the last hundred years, I would say. One is quantum theory; the other, more recently, is machine learning and AI, neural networks, and so on. In the case of quantum theory, what we learned is that we describe the state of a quantum system not in terms of the positions and momenta of its constituents, but as a vector in an infinite-dimensional vector space over the field of complex numbers, called the Hilbert space. We also learned that for a general vector, called a wave function, when we do a measurement, we cannot predict with certainty what the value will be. In other words, a general vector in the Hilbert space does not have a well-defined value for every observable, such as coordinate, momentum, energy, spin, angular momentum, and so on. Only the so-called eigenvectors of certain operators acting on this Hilbert space have well-defined values. Those eigenvectors form a basis: for every observable there is its own eigenbasis. You see, if you set up an experiment in which you are measuring the coordinate of a particle, say an electron, you will be projecting its wave function onto an element of one particular basis. If you're measuring the momentum, you'll be projecting onto a vector in another basis. Therefore, you see that the choice of a basis is actually equivalent to a particular setup of your experimental protocol: what are you actually measuring? Depending on what you are measuring, you're going to change the system in different ways. Most people are aware of the notion of the collapse of the wave function, and of the idea that the result of a measurement changes the system. Most people know about that; but fewer people are aware that this can happen in many different ways, depending on how you choose a basis in your Hilbert space. The result of the measurement will be a change in the state of the system; those changes will be different for different bases, and they are irreversible. In other words, yes, it does matter that there are different bases. If there were only one basis, in some sense the uncertainty of quantum theory would not be so strong. Part of the uncertainty lies in the fact that the observer has a choice of which experiment to set up, and that is equivalent to the choice of a basis. Okay. Closer to the end of the course, when we talk about self-adjoint operators on vector spaces with an inner product, I will explain this in more detail; that's when we'll talk about these eigenbases and so on, and things will become more clear. But this is just to give you a sense of why I'm emphasizing this.

A good analogy would be comparing a formatted page, a page which has vertical and horizontal lines, to a blank page. A general vector space is like a blank page: it does not yet have formatting, and there are many different ways to format it. Of course, if the page is finite like this one, you could say that you want lines which are parallel to the edges; but imagine it goes to infinity in all directions, so that there is no notion of parallel to the edges, and you have many different families of parallel lines which could format it. Most vector spaces are like a blank page, without formatting.
Formatting introduces a choice: that's the choice of a basis, which is also the choice of an isomorphism with the standard vector space. Any questions about this? All right, moving on.

Continuing with the finite-dimensional story: I didn't mention the second application. That is the one where you represent tokens in natural language, for example, by vectors in a vector space of large dimension over the real numbers. Now, I wonder in what way the choice of basis there can also make a difference. I know less about this; maybe one of you will figure it out.

Suppose that V and W are finite-dimensional vector spaces over the same field, of the same dimension. Then a linear map T from V to W is an isomorphism if and only if it is injective, if and only if it is surjective. This is similar to what we discussed before: for instance, if you want to check that a set of vectors is a basis, and you know that it has the right number of elements, the dimension of the vector space, then it's sufficient to check that it's linearly independent; the spanning property will follow automatically, right? Likewise, we have discussed the fact that if you have two vector spaces of the same dimension and a map between them which is injective, the map must be bijective. This follows from, or at least is a close relative of, the statement which we proved last time: if the dimension of V and the dimension of W are the same, and you have a map from V to W which is injective, then it is automatically surjective, hence bijective. Injectivity implies surjectivity, and vice versa. In our case, if we start out with an isomorphism, the mere fact that it exists already implies that the two dimensions are the same, right? So we are in the realm of that statement from last week: if you have two spaces of the same dimension, there is no room for an injective map not to be surjective. Basically, just by looking at the fundamental theorem, which connects the dimensions of the null space and the range, you can go back and forth between injectivity and surjectivity. Let me leave it at that for now.

I want to give you one more statement before we stop. Remember, earlier I emphasized the fact that in the most general situation you need both properties: you take the compositions of the two maps in the two different orders, and both have to be equal to the identity, otherwise it doesn't work. I want to show you an example of a map that satisfies only one of these two properties but not the other, and to show you that in this case it is not a bijection. In this example I will actually have a linear map from a vector space to itself, and this vector space will be infinite-dimensional. This is a good moment to introduce an example of an infinite-dimensional vector space. This vector space will consist of infinite sequences (a1, a2, a3, ...), or, if you want, (ai) with i ranging over the natural numbers, where each ai is a real number. If I truncate such a sequence at some number n, I get the same thing as R^n, a finite-dimensional space of dimension n. But I also have the option of considering infinite sequences of real numbers a1, a2, and so on. Addition and scalar multiplication are defined in the same way as in the case of F^n, component-wise, okay? It is a vector space with respect to these operations, but it is infinite-dimensional.
It is not spanned by finitely many elements, which is obvious: no matter how many sequences like this you take, they will not cast a net wide enough to get all infinite sequences. So it is an infinite-dimensional space.

Now I define a map T as follows: it takes the sequence (a1, a2, a3, ...) to (0, a1, a2, ...). I shift everything by one and put a zero in the first position. And I define another map S which shifts in the opposite direction: S takes (a1, a2, a3, ...) to (a2, a3, ...). You see, you have this infinite sequence; think of it as a tape. You can move it one way and put a zero at the beginning, or you can move it the other way.

Now let's investigate the compositions. If I apply T first and then S, it means that I shift this way and then shift back: that is the identity, S ∘ T = I. But if you take the opposite order, T ∘ S, and apply it to (a1, a2, a3, ...), first S shifts one way, dropping a1, and then T shifts back, putting a zero in front; the result is (0, a2, a3, ...). It's not the identity. And clearly T is not a one-to-one correspondence, because under this map you are mapping onto only a subset: its range is not the whole space. You see? So for infinite-dimensional vector spaces, both properties are essential. But at the beginning of the next lecture, I will explain that in the finite-dimensional case, only one of them is enough; the second follows automatically.
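Here are these two shift maps as a quick sketch in code, my own illustration, with a finite list standing in for the infinite tape (the implicit tail is zero):

```python
# The sequence (a1, a2, ...) is represented by the list of its first few terms.

def T(a):
    """Shift right: (a1, a2, ...) -> (0, a1, a2, ...)."""
    return [0] + a

def S(a):
    """Shift left: (a1, a2, ...) -> (a2, a3, ...)."""
    return a[1:]

a = [1, 2, 3]
assert S(T(a)) == a            # S ∘ T is the identity...
assert T(S(a)) == [0, 2, 3]    # ...but T ∘ S kills the first entry: not the identity.
```

All right, that's enough for today. We'll continue on Thursday.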