So, at the end of the last lecture I introduced the notion of linear maps. Today we will talk about various properties of linear maps and the subspaces associated to linear maps. And we will also prove an important result about the dimensions of those subspaces, which in this book is called the Fundamental Theorem of Linear Maps.
Suppose you have two vector spaces over the same field. It is essential in this discussion that when we talk about linear maps, we consider vector spaces defined over the same field. I mentioned that already last time.
Then you have the notion of a linear map from one to the other. It is just a map in the sense of set theory from the set V to the set W, with the additional property that it is compatible with the vector space operations; on both of them there are two operations, addition and scalar multiplication by elements of the field. Okay?
Let's suppose we have such a linear map. Then we are going to define two important subspaces associated to it.
One of them will be in V and the other one will be in W.
Let's start with V. The definition is the following. Given such a map, we define the null space of T, which we will denote null T: it is the subset of V consisting of those elements which T maps to zero,

null T = { v ∈ V | T(v) = 0_W }.
Let's unpack that.
I already explained before that every time I use this notation with curly brackets, it has two parts.
On the left, we specify which set we are talking about; this defines a subset of V. The first half of the notation says that we are looking at elements of capital V, which we denote by the lowercase letter v.
After that we put a vertical line, or, as in the book, a colon, which is also fine.
Then we describe those elements by a condition.
In this case, the condition picks out those elements v which satisfy this equation, T(v) = 0_W, so that the linear map sends them to this element.
Now, this is the zero element of W, the vector space. One of the axioms of a vector space is that it contains a special element called the zero element, or neutral element, whatever we want to call it.
I will be using a subscript to indicate which zero element we're talking about, because in this discussion we start out with two vector spaces, V and W. A linear map is not something that's defined for a single vector space; it is defined in general for two vector spaces, because you go from one to another. Now, it may happen that they are equal, but there are still two slots, or more precisely an input and an output.
I will be distinguishing the two zero elements by the subscripts: 0_W and 0_V. That's the null space. It is defined initially as a subset of V.
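(An aside, not from the lecture: if a linear map T: R^3 → R^2 is written as a matrix, a computer algebra system can compute the null space directly. The matrix below is a made-up example.)

```python
import sympy as sp

# A made-up linear map T : R^3 -> R^2, written as a 2x3 matrix
T = sp.Matrix([[1, 2, 3],
               [0, 1, 1]])

# A basis of null T = { v in R^3 : T v = 0 }: a single vector here,
# so the null space is a line inside R^3
print(T.nullspace())   # [Matrix([[-1], [-1], [1]])]
```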
But of course, the whole point is that in the theory of vector spaces, we are only interested in those subsets which are compatible with the structures, addition and scalar multiplication. Those are the subspaces.
Naturally, since I'm introducing it, you should expect that it actually is a subspace, and that's the case. I'm going to call it Lemma 1: null T is a subspace of V. Let's prove this.
Proof: Luckily, we have a simple criterion to determine whether a given subset of a vector space is a subspace.
There are three conditions that we need to verify.
The first one is that the subset contains the zero element of V. If you look at the definition, this means that T sends the zero element of V to the zero element of W. We proved this last time.
The first condition is satisfied.
The second condition is that if v and w are two elements of the null space, then their sum is also in the null space. Well, let's find out. For that we need to calculate T(v + w). Now, by the first property of linear maps, this is equal to T(v) + T(w). Next: because v and w belong to the null space, T(v) = 0 and T(w) = 0. The result is 0 + 0, which is 0. This shows that if v and w are in the null space, then v + w is also in the null space, which is what we wanted to check.
Finally, we need to show that it's compatible with scalar multiplication.
For any v in the null space and for any scalar lambda, the scalar multiple lambda v is also in the null space. All right? It's a similar argument; I'll let you finish it (it is spelled out just below). We check all three conditions, we see that they're satisfied, and therefore this is really a subspace.
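(For the record, here is the scalar check that the lecture leaves to you, using the second property of linear maps: for v in null T and a scalar lambda,

```latex
T(\lambda v) = \lambda \, T(v) = \lambda \cdot 0_W = 0_W ,
```

so lambda v is again in the null space.)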
Let's look at a couple of examples. For instance, let's take V to be the plane, the way we always draw it. I choose a basis here, let's say v1 and v2. Then we have two coordinate axes. We're considering vectors on the plane in the sense we have discussed multiple times: once we choose a basis, we can identify the plane with R^2. Elements can be represented as column vectors with entries x and y, where x and y are real numbers.
We know what some subspaces of this space are. There is the zero subspace {0}, which is always a subspace, and we also know that any vector space is a subspace of itself; those are two trivial, obvious examples. Then in between there are one-dimensional subspaces, which are lines through the origin. These lines can each be described by a single equation in x and y.
For instance, you can consider the bisector line. Okay? All vectors which belong to this line have the form (x, x), or maybe let's call it (z, z), so that the first and second coordinates are the same. This means that the line can be defined by the equation x = y, or x - y = 0; it might be easier to write x - y = 0. If we impose this equation on x and y, we're considering all the vectors which go along this line. They form this subspace.
But now we can interpret this as a null space: this line is the null space of a linear transformation T. What is this T? It's a linear transformation from R^2 to R which sends the vector (x, y) to x - y. You see, what we considered previously in this context was already the null space of a natural linear map. This linear map takes the coordinates x and y of the vector and maps them to some combination of them, some linear expression, in this case x - y. Then what is the null space? It consists of those (x, y) for which this value is equal to zero, and that's exactly this equation.
This shows you that null spaces naturally appear when we study systems of linear equations.
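(An aside: the same example in sympy. The bisector line is recovered as the null space of the 1×2 matrix representing T(x, y) = x - y.)

```python
import sympy as sp

# The lecture's map T : R^2 -> R, T(x, y) = x - y
T = sp.Matrix([[1, -1]])

# The null space is spanned by (1, 1): exactly the line x = y
print(T.nullspace())   # [Matrix([[1], [1]])]
```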
You see, the null space of T is this line because the equation is homogeneous: there is a zero on the right-hand side. We discussed the fact that if we put a one, say, or some other nonzero number on the right-hand side, the solution set is not going to be a subspace, right? Because, for one thing, it does not contain the zero element: for the zero element both x and y are zero, and their difference cannot possibly be a nonzero number. From the point of view of this linear transformation, the equation x - y = 1 describes the set of all vectors v in V (in this case, R^2) such that the value T(v) is one.
What's the difference?
The difference is that in the definition of the null space, we set it equal to zero. That was essential when we were verifying the conditions, because the first condition we verified is that the set should contain the zero element. And sure enough, the zero element of V gets mapped to the zero element of W by a linear map. But if we change the condition and put some nonzero element here, and take the v's which satisfy that property with some nonzero element (in this example, the set of v such that T(v) = 1), that would actually be this line, moved. Actually, if it's one, the line will cross at one here and at negative one here, so that x - y = 1. You see, it misses the zero. That's why we shouldn't even try: if we modify the definition of the null space by substituting some nonzero element for zero, there's no chance we will get a subspace. Therefore, it's not relevant to our study of vector spaces. It's an interesting object too, by the way; mathematicians call it an affine subspace: a translate of a subspace, translated, say, by this vector, down. But it is not a subspace.
I hope this clarifies the relationship between linear equations, homogeneous and inhomogeneous, on the one hand, and subspaces and null spaces on the other.
Okay, now let's do another example. Actually, in the homework (2A), one of the exercises considered a subspace which, if I remember correctly, was something like this; was it degree four or three? Anyway, let's just say it is the subspace of all polynomials p(x) such that the second derivative at some number, let's say six, is equal to zero. The question was to find a basis of this subspace.
Now we can also interpret this as the null space of a linear map. This linear map is going to go from P4 to R. It takes a polynomial p(x) to this value: the second derivative at some point, let's say six.
That could be any other number, right?
Why is this a null space? Well, first of all, the map defined this way is linear. The derivative is linear: the derivative of a sum is the sum of the derivatives, and the derivative of a scalar multiple is the multiple of the derivative. Therefore the second derivative also enjoys the same properties. Evaluation at a particular point is also linear, because if you have two functions, or two polynomial functions for example, and you evaluate their sum at the point 6 or whatever, you just get the sum of the values of each of them, right? This shows you that this map is linear; you can actually think of it as a composition of three maps:
- derivative
- followed by another derivative
- followed by evaluation at six.
Each of the three is linear, and a composition of two linear maps is linear; that's part of last week's material. That's how we know that this is a linear map. The question is, okay, what is the null space? The null space is exactly this subspace U, because U consists by definition of those polynomials whose second derivative at six is zero, and that corresponds precisely to imposing the condition that this linear map takes the element to zero. The same applies to many other examples in the homework from last week.
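(An aside, if you want to experiment: sympy makes it easy to check the linearity of "differentiate twice, then evaluate at 6" on sample polynomials of your choosing.)

```python
import sympy as sp

x = sp.symbols('x')

# T(p) = p''(6): second derivative, then evaluation at 6
T = lambda p: sp.diff(p, x, 2).subs(x, 6)

p, q = x**3 + 1, 2*x**2 - x        # two sample polynomials (arbitrary choices)
print(T(p + q) == T(p) + T(q))     # True: additivity
print(T(7 * p) == 7 * T(p))        # True: homogeneity
```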
Now, in this case the subspace is going to be nontrivial, in the sense that it's not zero and it's not equal to the whole space. In fact, in this example you can see a preview of the theorem that we're going to discuss in a moment, which is about the dimensions of the null space and of the next subspace I will introduce, called the range. Here the range is one-dimensional, the whole space is five-dimensional, and the null space has dimension less by one, which means four in this case.
Here is an example that is the opposite, where you actually have nothing in the null space. The null space doesn't have to be nontrivial; it will definitely contain zero, but maybe nothing else. Remember, the smallest vector space that can exist is not empty: the zero subspace contains the zero element. But it may well be that there is nothing else.
For instance, let T be a map on the space of polynomials of all degrees over F, where F could be the real numbers or the complex numbers. And I want to define the map which sends a polynomial p(x) to x times p(x). Okay? It's linear, because if you multiply a sum by x you get the sum of the products; it just follows from the distributive law.
But what is the null space in this case?
It consists of those polynomials with the property that if you multiply them by x, you get zero. But x is nonzero. If a product of two polynomials is equal to zero, then one of the two must be zero. And x is a nonzero function: yes, it takes the value zero at zero, but it takes nonzero values at other points on the real line. Therefore the null space is just {0}, containing only the zero polynomial. In this case, the null space is as small as possible. Any questions? Okay.
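(An aside: a small symbolic check. Take a generic polynomial of some fixed degree, degree three is an arbitrary choice, multiply by x, and ask which coefficients make the product the zero polynomial. Only the zero polynomial works.)

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a0:4')                  # coefficients of a generic cubic
p = sum(a[k] * x**k for k in range(4))

# All coefficients of x*p(x) must vanish for x*p to be the zero polynomial
coeffs = sp.Poly(x * p, x).coeffs()
print(sp.solve(coeffs, a))              # {a0: 0, a1: 0, a2: 0, a3: 0}
```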
Remember, I said we would define two subspaces associated to a given linear transformation. This was the first one.
Now I'm going to define a second one, that's going to be a subspace in W.
Definition. Again, we are given a linear map from V to W, and we're going to define a subset first; it will be defined as a subset of W, and then we'll show it's a subspace. The range of T, with notation range T, is the subset of W which consists of those elements which can be represented in the form w = T(v) for some v in V.
Sometimes people use the word image. In fact, maybe this is a good point to emphasize that the origin of both of these notions lies in set theory, not really in linear algebra.
Both actually make sense from the point of view of set theory. In set theory, if you have two sets S1 and S2 and a map F between them, then you can always look at the image of the map. You can consider the subset of S2 of those elements which can be represented in the form F(y) for some y in S1, right? That notion is borrowed from set theory: it's all the things you can reach by means of the map. The map doesn't have to be a linear map for this. For the null space, on the other hand, you need to pick something in the second set, because the null space describes those things that map to that particular element.
In the case of vector spaces, we're lucky because there is always a special element, just by virtue of the axioms of a vector space, there is a special element, the zero element.
For a general set, there is no natural element; they are all created equal. In some sense, an abstract set does not differentiate between its elements; they are all on an equal footing. But if we choose some z in S2, then we can consider the set of all those elements y in S1 such that F(y) is that special element. The notation for it is F^{-1}(z), and it's called the preimage. The image, by contrast, is sometimes written F(S1); this set here is called the preimage.
Okay, so this is just to indicate the origins of these notions.
Okay, so let's go back to the case of a linear map and the notion of the range. Just like in the case of the null space, we have a lemma (Lemma 2) that the range is actually a subspace of W.
Again, we have to check three conditions. I'll just go very quickly. First of all, we have to check that 0W is in the range.
But of course it is, because we know that the zero element of V is mapped to 0_W by any linear map from V to W.
And then next, addition:
suppose w1 and w2 are in the range; then the sum is in the range. But of course: being in the range means that w1 = T(v1) for some v1 and w2 = T(v2) for some v2. But then w1 + w2 = T(v1 + v2) by the first property of linear maps, so it is in the range. Likewise for scalar multiplication: if w = T(v) is in the range and lambda is in F, then lambda w = T(lambda v) is in the range as well.
It is indeed a subspace. Okay.
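(An aside: computationally, for a map given by a matrix, the range is the column space, and sympy produces a basis for it. Same made-up matrix as before.)

```python
import sympy as sp

# The made-up T : R^3 -> R^2 again
T = sp.Matrix([[1, 2, 3],
               [0, 1, 1]])

# A basis of range T inside R^2: two vectors here, so range T is all of R^2
print(T.columnspace())   # [Matrix([[1], [0]]), Matrix([[2], [1]])]
```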
Now we indeed have two subspaces associated to a given linear map between two vector spaces. One is a subspace of V, called the null space, and the other is a subspace of W, called the range.
Now, since I have linked these notions to the notions of image and preimage from set theory (just as a summary: the null space sits inside V, and the range sits inside W), it is useful to consider the related properties of maps in set theory.
Now, in set theory there is a notion of injective map and there is a notion of surjective map.
Let me remind you this is in set theory.
In set theory you have two sets S1 and S2 and a map f from S1 to S2. The map is called injective; sometimes it's called one-to-one, but I don't like 'one-to-one' because it creates a clash of terminology, since there is also the notion of a one-to-one correspondence, which is what is also called a bijective map. I'll get to that in a moment. I prefer 'injective', because a one-to-one map is not the same as a one-to-one correspondence: a one-to-one correspondence is one-to-one plus one more property. But I mention it just so you know the terminology that is sometimes used. So, the map is called injective if the following property holds: if f(y1) = f(y2) for some y1 and y2 in the first set S1, then they have to be the same, y1 = y2.
We can draw this as follows.
The simplest examples are maps between finite sets. In the case of finite sets, you can represent each set by a collection of dots. For instance, let's say here you have three dots and here you have four. The map can be represented by arrows, because what is a map?
A map is a rule which assigns to every element of the first set, which is a particular dot, an element of the second set. Let's say this guy goes here. It can only go to one place.
A given dot on the left cannot go to two different dots on the right. It goes to a single one.
But two different dots could go to the same one, right? And then this one, say, goes here.
This is not injective, right? Because there are two elements, y1 and y2, which both go to the same element. If y2 went somewhere else instead, to here or to here, then it would be injective, right? That's what it means: in an injective map, the arrows all go in parallel, so to speak.
Different elements do not go to the same place.
There is also a notion of surjective map. Surjective or onto.
Onto means that everybody has a partner: everyone in the second set has somebody who comes to them. For instance, with dots and arrows like this, everybody here has a partner, which is not always the case. This example is neither injective nor surjective; this one is surjective but not injective, and so on. You can come up with your own examples. The sweet spot is a map that is both injective and surjective. In this case, you really have what's called a one-to-one correspondence between the two sets, because every element in each set has one and only one partner in the other.
That's what it means, we get a matching between the two sets.
A map that is injective and surjective is called bijective, or a one-to-one correspondence. In this case, for each element on the left there is one element on the right, and for each element on the right there is an element on the left. For example, like this; they could cross each other. Wait, what am I doing? Not like this; I meant something like this: this goes here, this goes here, this goes here. So in this case there is actually an inverse map, because every arrow connects a particular pair. Yes, that's right, yes. Sorry about that. Thank you. Okay.
So going back to this, if you have a one to one correspondence, this is also equivalent to this map being invertible.
To be invertible means that there exists a map, which we call f^{-1}, such that if you go back and forth, you get the identity map: every element goes back to itself. If you think about it, the necessary and sufficient condition for a map to be invertible is that it is both injective and surjective. Okay? If S1 and S2 are finite sets, then there exists a bijection, a one-to-one correspondence, between them if and only if they have the same number of elements, because such a map effectively matches up the elements one by one. For infinite sets it's more subtle, but for finite sets the possibility of having a bijective map between two sets is equivalent to the sets having the same number of elements.
It doesn't mean that every map between them is a bijection, obviously not. I could map this guy to here, and even though the sets have the same number of elements, I will not have a matching between them, because two guys here will go to the same guy here, and one of the guys here will have no partner at all. Okay.
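(An aside: if the dots-and-arrows pictures are easier to read as code, a map between finite sets is just a dictionary, and the two properties become one-line checks. The sets and the map here are made up.)

```python
# A finite map f : S1 -> S2 as a dict
f = {'y1': 'a', 'y2': 'a', 'y3': 'b'}   # y1 and y2 share an image
S2 = {'a', 'b', 'c'}

injective = len(set(f.values())) == len(f)   # no two arrows land on the same dot
surjective = set(f.values()) == S2           # every dot in S2 gets hit
print(injective, surjective)                 # False False
```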
Can we now use this information, combining it with the notions of null space and range? It turns out that yes: there is a simple criterion to check whether a linear map is injective or surjective by looking at the null space and the range, respectively.
Okay, let's first talk about injectivity.
Here we have Lemma 3. I call them lemmas because they're simple to prove; I feel they don't necessarily rise to the level of a theorem, but that's in the eye of the beholder. If you'd like to call them theorems, you're welcome to do so. Lemma 3: if you have a linear map T between two vector spaces V and W, then, looking at it as a map of sets, we can ask whether it is injective or not; and it is injective if and only if the null space consists of only the zero element.
As always, "if and only if" is a combination of two statements, going in both directions, so we have to give two proofs. Suppose first that T is injective; let's show that the null space consists of a single element. Well, what does injective mean? Injective means that if two elements go to the same w in W, then they have to be equal: if T(v1) = T(v2), then v1 = v2.
In particular, suppose that v1 is an element of the null space, any element of the null space. Take as v2 the element 0_V, the zero element of V, which we know for sure is in the null space, because every linear map sends the zero element of V to the zero element of W. Then indeed T(v1) = T(v2) = 0, and since we have assumed that T is injective, this implies that v1 = 0_V, which is what we want to prove: every element of the null space is equal to zero.
That's the proof in this direction. Okay, now let's prove it in the opposite direction. So, trying to see... unfortunately, I forgot my own chalk, my thick chalk, and all I can find here is this one, so I hope you guys can see. By the way, is it easy to see or not? Okay. I'll make sure I bring my own next time, but bear with me for today. Maybe come closer, or ask me a question if something is unclear.
Okay, to go back means: suppose that the null space consists of a single element, zero; we want to show injectivity. That means we want to show the property that T(v1) = T(v2) implies v1 = v2. So suppose T(v1) = T(v2). By subtracting T(v2) from both sides, we get T(v1) - T(v2) = 0. But then T(v1 - v2) = 0. Here we use the first property of linear maps: T of a difference of vectors is the difference of the images.
You see how important it is that we are considering not just some random map from V to W, but a linear map: it satisfies this property.
This is the first property of linear maps, right?
But you see, that means that v1 - v2 is in the null space, because T(v1 - v2) is zero. And we have assumed that the only element in the null space is zero, which means that v1 - v2 = 0, which means v1 = v2. This is how we show that if the null space consists of only the zero element, then the map is injective. Any questions? Yes?
No: the null space is in the first space; it is in V. It consists of those things which go to zero, right? So it is something that lives before you apply the map. The range, on the other hand, is in the second one; we'll get to that in a moment. Any other questions? All right.
So that takes care of this lemma, which explains the connection between injectivity and null spaces. If you know that the null space of your map consists of just the zero element, then you know that the map is injective, and vice versa: if you know that the null space is bigger than zero, then for sure the map is not injective.
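(An aside: the lemma in action on the earlier example. T(x, y) = x - y has a nonzero null space, and adding a null vector to any input gives a second input with the same image.)

```python
import sympy as sp

T = sp.Matrix([[1, -1]])      # the lecture's T(x, y) = x - y
u = T.nullspace()[0]          # (1, 1): the null space is nonzero

v1 = sp.Matrix([2, 0])
v2 = v1 + u                   # a different vector
print(T * v1, T * v2)         # Matrix([[2]]) Matrix([[2]]): T is not injective
```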
Okay, let's do the same now with the range. As you may have already guessed: the range is to surjectivity as the null space is to injectivity. This case is actually clearer, because the range is defined in exactly the same terms as the image of a general map. It doesn't even require proof: T from V to W is surjective if and only if range T is the entire W. In both cases we're talking about the image of the map (in linear algebra we call it the range), and surjectivity simply means that the image is equal to the whole target. So no proof is required, just the realization that the two notions are the same, are equivalent.
Okay, so now the question arises as to whether we can actually estimate or relate the dimensions.
So what is the most important invariant of a vector space? It's the dimension, provided that the space is finite-dimensional. All right; next we want to relate the dimension of V, the dimension of the null space, and the dimension of the range.
This is given by the following theorem. Now, this is a statement which deserves to be called a theorem; in the book it is called the Fundamental Theorem of Linear Maps, though that is perhaps a somewhat aggrandizing way to refer to it. Okay: suppose V is a finite-dimensional vector space over some field F, and T is a linear map from V to W, where W is also a vector space over F. Then the dimension of V is equal to the sum of the dimension of the null space and the dimension of the range:

dim V = dim null T + dim range T.
So it shows you that they complement each other, the null space and the range.
They cannot both be too small, and they cannot both be too large: the smaller the null space is, the larger the range is. And in the end they combine together; size-wise, they add up to the size of V itself, where V is the first of the two vector spaces, the one from which we map. Okay, let's prove this.
As always, the idea is that we have to remember what dimension is: the dimension is the number of elements in a basis. As we go along in this course, we are developing intuition for how to approach various statements, various theorems. In a way, there are no clear rules for how to do it; there is some element of play, some element of a guessing game, and so on. But there are patterns.
I'm sure that if you have been consistently doing the homework, you have already started seeing certain patterns in how things are proved. And it's very important to approach this consciously: these proofs are not random, not ad hoc; it's not as if each statement gets its own proof without following any system. It's also incorrect to think that there is always one particular system; we create it as we go.
But it's important to become aware of certain patterns, of certain approaches which we apply over and over again.
Okay, in this particular case, let's close the book and just start thinking: how would we prove it? Even the first step is not obvious; you have probably experienced that. Once you see the first step, oftentimes the rest is easy. But you have to guess how to start. Where do you begin? The problems that present the most difficulty are usually the ones where we don't even know where to start, even if we understand the statement.
But I want to use this theorem as an illustration, because in this case we do see the first step right away: the theorem is about dimensions. So what should you do? The first thing is to remember what dimension means and spell it out.
No brainer, but I think it's useful to be aware of it.
The first step is to say what a dimension is: the dimension is the number of elements in a basis. Okay?
There are three spaces in play here. The weird thing is that the null space is actually inside V, but the range is inside W. So it's hard to connect the range to V immediately, right? Because the two are defined as subspaces of two different vector spaces.
This is perhaps not the right place to start. But these two, V and the null space, are certainly connected, because one is a subspace of the other. That kind of gives us the second step: what if we could relate their bases? That relates V and the null space, which is a subspace of V. Now we start thinking: okay, we have a space and a smaller space inside it. How could we possibly relate a basis of a space and a basis of a subspace?
There are two possibilities. One is that we start with a basis of the smaller space, the subspace, and try to extend it to a basis of the entire space. Can we do that? Yes. Actually, we proved a theorem previously that every linearly independent subset can be extended to a basis. If you have a basis of a subspace, it is linearly independent not only in the subspace but in the entire space as well, so it can be extended to a basis. So we see that we can go from a basis of a subspace to a basis of the entire space. If, on the other hand, you start from a basis of V, then you are in trouble, because a basis of V cannot in general be reduced to a basis of a given subspace.
This is a very important point; I want to slow down here and emphasize it.
Here's what I'm talking about.
So let's go back to the simplest example, vectors on the plane, because it provides a good illustration. I start with this basis on the plane, and here's the subspace we talked about earlier: the bisector line of the coordinate cross.
Now, this is a subspace. It may well be a null space; in fact, I explained just a few minutes ago how it is the null space of a particular linear map. It is a subspace. But that doesn't mean you can obtain a basis of the subspace by removing one of the elements of a basis of the entire space. You see, because there are so many different bases, it is actually very unlikely that a generic basis of the ambient space, the bigger space, would contain within it a basis of the given subspace.
It would have to include a vector going along this line. But why should it? There are so many options. Think about it.
A basis of the plane is just any two vectors which are not proportional to each other. They don't have to be vertical or horizontal; there's no such thing, it depends on your point of view. Just any two nonzero vectors which are not proportional to each other. How likely is it that one of them will actually go along this line? The probability is zero. You see, that's what I mean.
We're trying to figure out how to prove this theorem.
The first step is to realize that we should talk about bases, right? Because dimension is about bases. The second step is to understand that most likely we should talk about a basis of V and a basis of a subspace of V. Now we become even more precise, and we say: the only thing we can do is go from smaller to larger. Now we have it mapped out, you see. So, step two: start with a basis of the null space, let's say u1, u2, et cetera, up to un; I want to keep the notation of the book.
Now, remember, here we are exploiting one of the assumptions: that V is a finite-dimensional vector space. We know that every subspace of a finite-dimensional vector space is also finite-dimensional; that's why the null space has a finite basis. If V were not finite-dimensional, it would not be clear how to proceed, and moreover the equation we are proving would not make sense: if we did not know a priori that each dimension is well defined as a number, it would not be clear what the equation means.
There's also the issue of what we mean by a basis of the zero vector space; here we have a bit of a quandary. So what is the hierarchy?
The smallest is zero.
The next vector space is a one-dimensional vector space, like a line; a line is one-dimensional. In fact, we know that a basis of a one-dimensional vector space consists of a single element; that's why its dimension is one, right?
From this point of view, if we wanted to include the zero vector space into the family, as a legitimate member of the family of vector spaces, we would have to say it's zero dimensional.
Which kind of makes sense: think of dimension as the number of degrees of freedom, the number of independent directions in which you can go. When I say direction, I don't count going left and going right separately; that's the same direction, just forward or backward along it. On a line, there is only one degree of freedom.
On the plane there are two degrees of freedom, and in three-dimensional space there are three degrees of freedom.
And by the same token, at a point, the zero element, you have no degrees of freedom; you are frozen. Therefore it should be zero-dimensional. But if it is zero-dimensional, it means that if it has a basis, the basis should be the empty set, a set which has no elements.
This is what we stipulate, even though it sounds weird: how can we think of the empty set as a basis?
A basis is supposed to be linearly independent, and that part is fine, because you can say the empty set is linearly independent: there is actually no equation to consider. But in what sense does it span? Spanning means that you can write a1 v1 + a2 v2 and so on; if there are no elements, what does that mean? This is where there is a leap of faith: we stipulate something which does not quite fit the general pattern. We stipulate that the empty set is the basis of the zero vector space; equivalently, the span of the empty list is declared to be {0}.
Now, the question was absolutely correct that this is one of the possibilities: the null space could actually be zero. In that case we still start with a basis, but it's not going to look like u1 and so on; it is going to be the empty set. Okay? So let us assume the null space is nonzero, and then we will consider the case where the null space equals zero separately. Okay? Here, then, we assume that the null space is nonzero, so the basis is nonempty. Here's a basis. We know that every finite-dimensional vector space which is nonzero has a basis with exactly as many elements as its dimension; the number of elements here is the dimension of the null space. Okay?
The next step is to extend it to a basis of the entire space V. We know it can be extended to a basis of the entire space; that is always true for any subspace: if you have a basis of a subspace, you can extend it.
Let me repeat why
this is a basis of the subspace, therefore it's linearly independent.
It's a linearly independent ordered list of elements of this subspace.
But being linearly independent in the subspace also means it's linearly independent in the whole space. You see,
because the defining equation doesn't care whether we think of these elements as elements of the subspace or of the whole space. Therefore, linear independence of these vectors vis-a-vis the null space implies their linear independence vis-a-vis the entire space V. So it is a linearly independent subset of V, and every linearly independent subset of V can be extended to a basis of V; we proved that last week. That's what I mean here: it can be extended to a basis of V. When we extend it, either it is already a basis of V, which can happen, or
we have to adjoin some elements. Actually, I want to be consistent with the notation of the book; it's not that I misread it, but this should be m: the basis of the null space is u1 up to um. Let's say we're going to add some elements v1 up to vn. Okay? Then this is a basis of V, which, by the way, means that the dimension of V is m plus n, the total number of elements.
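(An aside: this extension step can be carried out mechanically, as a sketch. Start from a null-space basis and adjoin standard basis vectors whenever they keep the list independent, that is, whenever they increase the rank.)

```python
import sympy as sp

T = sp.Matrix([[1, 2, 3],
               [0, 1, 1]])       # the made-up T : R^3 -> R^2 again

basis = T.nullspace()            # u_1, ..., u_m: a basis of null T (m = 1 here)
for i in range(3):
    e = sp.eye(3).col(i)         # candidate from the standard basis of R^3
    if sp.Matrix.hstack(*basis, e).rank() > len(basis):
        basis.append(e)          # adjoin it only if it stays independent

print(len(basis))                # 3 = m + n: a basis of all of R^3
```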
Okay, now you see what we need to show.
Now, for the equation to be satisfied: we already have that dim V is m + n, and dim null T is m; therefore dim range T has to be n. All right, so what do we need to show? We need to show that the dimension of the range is exactly n, exactly the number of vectors that we had to adjoin to our basis of the null space to obtain a basis of the entire space V.
You see, I'm trying to show you the logic of it, how you guess the next step.
You guess it just from this equation, because there is only one unknown left in it: the dimension of the range. How can we relate the vectors v1, ..., vn to the range? We have to prove that its dimension is n, which means that range T has a basis with n elements, right?
So if you have such a precise statement, that you need to prove that a certain vector space has a basis with this many elements, usually it means that you can actually construct one; the best-case scenario is that you can explicitly construct a basis which has this many elements. How could we possibly do that? Well, we can apply our linear map to the vectors v1, ..., vn. When we apply it, we will get vectors in W.
It is natural to guess that this set, Tv1, ..., Tvn, will actually be a basis of the range. That's what we need to prove, right? That's what it boils down to. If we can prove that the images of these vectors under T provide a basis of the range, then we're done, because that would mean precisely that the dimension of the range is equal to n.
I will keep this on the board, because it will actually help me with the case that we have postponed, where the null space is zero; I will continue here. What we need to show, therefore, is that Tv1, ..., Tvn is a basis of the range, not of W, because we're not speaking about the dimension of W; we're speaking about the dimension of the subspace of W, namely the range.
That means that we have to show that they are linearly independent, right? And span the range.
Now, what does that mean? First of all, the spanning property: we have to show that any element w in the range of T is equal to a linear combination of these vectors, right? But what does it mean that w is in the range? It means that w = T(v) for some v, right? But then this v can be written as a linear combination of our basis vectors; let me write it out. We know this is a basis of V, so we can write v = a1 u1 + ... + am um + b1 v1 + ... + bn vn. Now we have to apply T to it, right? That gives w.
We want to show that this is going to be a linear combination of these guys, but it's obvious by linearity. We can open the brackets and write w = a1 T(u1) + ... + am T(um) + b1 T(v1) + ... + bn T(vn). By construction, by definition, the ui give a basis of the null space; at the very least, they are actually in the null space, so all the T(ui) are zero and the u-terms disappear. You are left with a linear combination of just Tv1 up to Tvn, which is what we wanted to show.
We find that any w in the range can be written in this form, right? Any questions? Because the ui are all in the null space, each T(ui) is zero: you have zero plus zero plus zero, m times, and of course when you add zero you change nothing. What's left is just this expression, right? And that's what we wanted to show: that it's a linear combination of the Tvi, which is what we have shown. Okay, that's done.
What remains is to show that this is a linearly independent subset, that these vectors are linearly independent, right?
Then once we show this, we will know that this set is the basis of the range, and then we're done.
The equation will then be proved, because there are exactly as many of them as we need. Okay, now what is linear independence? Linear independence has to do with writing an equation: let's say c1 T(v1) + ... + cn T(vn) = 0 in W. And linear independence means that this equation can only be satisfied when all the ci are equal to zero. That's what we're going to show.
Of course, the whole point is that every time you have such an expression, you use the linearity of the linear map; in this case, both the first and the second property. You see, here I have used both. I used the first property by opening the brackets, so that instead of T of the whole sum I could write T of the first term plus T of the second term, and so on. And I used the second property by pulling out the coefficients c1, c2, and so on. That's how I reached this expression. Now I do the reverse: I write it as T(c1 v1 + ... + cn vn) = 0. Again we use the fact that T is a linear map, which, by the way, shows you that none of this would work if we were considering a general map between two vector spaces, one not compatible with the vector operations of addition and scalar multiplication.
But this looks familiar, because what it says is that this vector is in the null space: to say that T of something is zero means that this something is in the null space, right? But if it is in the null space, then it can be written as a linear combination of these guys, the ui, because that was our starting point: this is the basis we have chosen for the null space.
See how interesting.
What's going to happen now is that we will write this linear combination of the vi as a linear combination of the ui. I will keep that equation and erase this part; it will be enough to complete the proof. Let's write it again: c1 v1 + ... + cn vn is in the null space; that's what we infer from this equation. But that means that it is equal to a linear combination of u1 up to um, something like d1 u1 + ... + dm um, for some d1, ..., dm.
Now let's take these guys to the left hand side.
We'll have to reverse the signs of the coefficients. We get (-d1) u1 + ... + (-dm) um + c1 v1 + ... + cn vn = 0 in V.
But remember, this whole set is a basis.
Therefore, not just u1 up to um, and not just v1 up to vn, but the whole set is linearly independent. That was the whole point, because it is a basis of V. So it implies that all the di are zero and all the ci are zero, and this is exactly what we wanted to show. We conclude that the set Tv1 up to Tvn is linearly independent. We have already shown that it is a spanning set; therefore, it's a basis.
Therefore, the dimension of the range is n, and the equation is satisfied. Okay? Any questions?
The proof almost writes itself. If you look at it keeping in mind the various definitions and the main results that we have proved, you can feel that there is a certain quality to the proof: it sort of inexorably moves you toward the end. It's not as if we had a choice at any of the steps; the proof moves by itself. There is a certain system to it.
Right, now I have to come back to the case which we postponed, when m is equal to zero. Let's look at it separately. Finally, suppose that the null space is zero; that means m = 0, and the equation becomes: the dimension of V is equal to the dimension of the range of T, right? So choose a basis of V. Now it will only have the elements v1 up to vn; there are no u's anymore. Then we have to show that their images Tv1, ..., Tvn form a basis of the range.
But you can see that the same argument works to show that they span: any vector in the range is T(v) for some v; that v can be written as a linear combination of the vi, and therefore T(v) is a linear combination of the Tvi. To show linear independence, we use exactly the same formula, except we don't have any u's anymore: the linear combination is simply equal to zero. It's actually simpler; there is no obstruction in this case. This is just to demonstrate that this case does fit the general equation. Okay, that's the result.
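(An aside: a quick sanity check of the theorem. For any map given by a matrix, the number of columns is dim V, the nullity is dim null T, and the rank is dim range T.)

```python
import sympy as sp

T = sp.Matrix([[1, 2, 3],
               [0, 1, 1]])              # any matrix will do

dim_V     = T.cols                      # dimension of the domain R^3
dim_null  = len(T.nullspace())          # dim null T
dim_range = T.rank()                    # dim range T (the column space)
print(dim_V == dim_null + dim_range)    # True
```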
It's very useful, and there's one thing I want to emphasize here: the notion of a basis includes two parts, linear independence and the spanning property. Originally, you might have thought that this is how you always have to prove that something is a basis: by proving those two properties separately.
But then there was a huge improvement.
Namely, we learned that if a vector space is finite-dimensional, then every basis has the same number of elements. So if you already know a priori what the dimension is, then it's enough to check just one of the two properties. Do you have a question? Yes: here, u1 up to um and v1 up to vn; this whole set is a basis of V by construction.
So, if you have an equation like this in V, it means that all of these coefficients are zero.
That's what linear independence means.
It is a basis, therefore it's linearly independent. Okay,
the first important aspect of the story is that to verify that something is a basis, it can be sufficient to check either linear independence or the spanning property. Usually the former is easier, because for linear independence you actually have an equation like this and you have to show that the coefficients are zero; it's very concrete. The spanning property sometimes looks a little bit more intimidating. But the point is that if you know a priori what the dimension is, then you don't need to check the second property, provided that your set has the right number of elements: if it's linearly independent, it must be a basis. There's no choice.
Then the question is, how do you know the dimension?
Well, if your space is, for example, polynomials of degree less than or equal to four, you know its dimension is five, because there is an obvious basis of monomials: 1, x, x squared, and so on. For those cases, we know. But what if we consider the example I discussed earlier, where you take the polynomials p in P4 over R such that the second derivative at the number six is equal to zero?
This is where you use this dimension theorem because you interpret the subspace as a null space of the linear map which goes from this vector space to R by taking a polynomial to its second derivative at number six.
Now, since it's a null space, its dimension is equal to the difference between the dimension of P4, which you know is five, and the dimension of the range. The only thing you need to find out is what the range is. But because the target R is one-dimensional, there are only two options: either the range is the whole space, which is one-dimensional, or it is zero. But you know that there exists at least one polynomial which has a nonzero second derivative at six, for example (x - 6) squared. That's how you know that the map is surjective. Therefore, the range has dimension one, which shows that the null space has dimension five minus one: four. Any linearly independent subset of four elements that you construct in there is going to be a basis. That's how you do it. This is where the theorem helps.
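(An aside: the whole computation in this example can be checked in a few lines by writing p ↦ p''(6) as a 1×5 matrix in the monomial basis of P4 and reading off the rank and the nullity.)

```python
import sympy as sp

x = sp.symbols('x')

# The matrix of p |-> p''(6) on P4 in the basis 1, x, x^2, x^3, x^4
row = [sp.diff(x**k, x, 2).subs(x, 6) for k in range(5)]
T = sp.Matrix([row])                    # Matrix([[0, 0, 2, 36, 432]])

print(T.rank())                         # 1: the range is all of R
print(len(T.nullspace()))               # 4 = 5 - 1: the null space dimension
```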
[recording cuts off]