Dual Space
Hi. Last time I introduced the notion of a dual space.
Today we will talk about various properties of dual spaces, their applications, and various problems connected to them.
Let me remind you that here we start out with a finite dimensional vector space over a field F.
We define its dual as the space of linear maps from V to F.
V' = $\large\mathcal{L}(V,F)$
To say that $\large\phi$ is an element of $V'$ means that $\large\phi$ itself is a linear map from V to F, an element of $\large\mathcal{L}(V,F)$.
Last time we looked at some examples, and the typical example of a linear map is obtained as follows.
Linear, by the way, means (maybe I should write it down) that $\phi(v + w) = \phi(v) + \phi(w)$ and $\phi(\lambda v) = \lambda\,\phi(v)$ for arbitrary vectors $v$ and $w$ and an arbitrary scalar $\lambda$, right?
Typical example is the following. Just think about it. You have a finite dimensional vector space.
How can you produce a rule which assigns to every vector some number, in a way that is linear in this sense? Right?
Here we use the fact that V is finite dimensional.
Therefore, it has a finite basis.
Choose a basis of $V$; let's call it $v_1, \dots, v_n$.
Then we know that any vector can be represented uniquely as a linear combination of the elements of this basis.
That is to say, $v = a_1 v_1 + \cdots + a_n v_n$, where each $a_i$ is an element of $F$.
Therefore, we can assign to each $v$ its $i$-th coefficient in this decomposition, in this linear combination. You see, that gives you a rule, which we'll call $\varphi_i$.
First of all, we need a well-defined rule that assigns a number in $F$ to every vector: the coefficient in the expression above, and not just for the basis vectors but for any vector $v$. This is indeed a well-defined rule, because for every vector the $i$-th coefficient relative to a given basis is unique.
It is linear because if you write $v = a_1 v_1 + \cdots + a_n v_n$ and $w = b_1 v_1 + \cdots + b_n v_n$, then the coefficients add up: the $i$-th coefficient of $v + w$ is precisely $a_i + b_i$.
Therefore, according to this rule, $\varphi_i(v + w) = a_i + b_i = \varphi_i(v) + \varphi_i(w)$. That's the first equation, the first property that we need to verify.
Second, $\lambda v$ goes to its $i$-th coefficient, but the $i$-th coefficient of $\lambda v$ is just $\lambda a_i$, which is $\lambda\,\varphi_i(v)$. We check both properties and see that this is a linear map. You see, this can be done for any vector space.
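As an aside, these coordinate functionals are easy to play with numerically. Here is a minimal sketch, assuming for illustration that $V = \mathbb{R}^3$ with a made-up basis given by the columns of a matrix B (none of these names come from the lecture):

```python
import numpy as np

# Hypothetical basis of R^3: the columns of B (B is invertible).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

def phi(i, v):
    """phi_i(v): the i-th coefficient of v in the basis B (0-indexed).
    Well defined because the coefficients of v in a basis are unique."""
    coeffs = np.linalg.solve(B, v)
    return coeffs[i]

v = np.array([2.0, 3.0, 5.0])
w = np.array([1.0, -1.0, 0.0])

# The two linearity properties checked in the lecture:
assert np.isclose(phi(0, v + w), phi(0, v) + phi(0, w))
assert np.isclose(phi(1, 7.0 * v), 7.0 * phi(1, v))
```

The choice of basis matters: a different B gives different functionals $\varphi_i$.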
Once you choose a basis, an interesting thing happens: you started out with a basis of $V$, which has $n$ elements, and you have produced $n$ elements of $V'$: $\varphi_1, \varphi_2, \dots, \varphi_n$. So far we don't know whether they form a basis, but we know that they are well defined as elements of $V'$. We get a collection of elements. Okay,
So the question then is: is this a basis? That would make sense, because we actually know that the dimension of $V'$ is equal to the dimension of $V$, and therefore equal to $n$ (recall that I call $n$ the dimension of $V$, since it has a basis with $n$ elements).
How do we know this? Because we know in general that the dimension of the space of linear maps from a finite-dimensional vector space $V$ to a finite-dimensional vector space $W$ is equal to the product of the dimensions of the two spaces.
How do we know that?
Because we know that this space $\mathcal{L}(V, W)$, which we discussed, is isomorphic to the space $F^{m \times n}$ of $m$-by-$n$ matrices, where $n$ is the dimension of $V$ and $m$ is the dimension of $W$. The dimension of $F^{m \times n}$ is obviously $m \cdot n$, because it is a space of rectangular arrays, and each of them has $m \cdot n$ entries.
From this, we derive that the dimension of $\mathcal{L}(V, W)$ is $m \cdot n$.
Now, in our case we consider $W = F$, because we defined $V'$ as $\mathcal{L}(V, F)$, and the dimension of $F$ as a vector space over itself is one.
We get that the dimension is $n \times 1 = n$: the dual space $V'$ has the same dimension as the original vector space. Therefore it is tempting to think that this is actually a basis, right? In fact, that is the case: $\varphi_1, \dots, \varphi_n$ is a basis of $V'$. In the standard terminology, this is called the dual basis to the original one. Okay? Let's prove this.

How do we prove that something is a basis? The ideal situation is when we already know the dimension of the vector space and we are given exactly that many elements. If we know the dimension and we are given fewer or more elements, we know it cannot be a basis, because every basis has the same number of elements, and that number is the dimension, right? But when we are given a set which has exactly as many elements as the dimension, which is our situation in this case, then instead of verifying the two properties of a basis, namely linear independence and the spanning property, it is sufficient to verify just one; the second will follow automatically. We discussed this before. Usually it is easier to verify linear independence, because it is like an equation: you try to show that there are no solutions, or that the only solution is zero. The spanning property is usually a little bit more difficult.

Okay, let's verify linear independence. We suppose that the following equation is satisfied (I already used $b$, so let's use another letter):
$c_1 \varphi_1 + c_2 \varphi_2 + \cdots + c_n \varphi_n = 0_{V'}.$
What do I mean here by zero? Not the number zero: it is the zero element of the vector space under consideration, and the vector space under consideration is $V'$. As I often do, I will emphasize this fact by putting $V'$ as a subscript, to stress that we are not talking here about the number zero; it is the zero element. So what is this zero element?
By the way, what is the zero element of $V'$? When answering questions like this, you have to remember the origin of the dual space. There is nothing mysterious about the dual space; it is just a special case of the vector space of linear maps: the special case of $\mathcal{L}(V, W)$, the space we looked at just above, when $W = F$ is one-dimensional. So we might as well ask: what is the zero element in the space of linear maps from $V$ to $W$, in the more general situation where $V$ and $W$ are arbitrary vector spaces? Which linear map is the zero map? The answer: it is supposed to be a map from $V$ to $W$, and it is the one which sends every vector to zero. You have the space $V$ and the space $W$; they are both vector spaces, so each of them has a zero element. Out of all linear maps from $V$ to $W$, there is a unique map which sends every vector of $V$ to the zero vector of $W$. That map, which is an element of $\mathcal{L}(V, W)$, is the zero element of this vector space. In particular, our beloved $0_{V'}$ is the functional from $V$ to $F$ which sends every vector to zero.

Okay, so we are starting out with this equation. On the left-hand side you have a linear combination of the $\varphi_i$ that we defined by this formula: $\varphi_i$ has as input an arbitrary vector $v$, and as output its $i$-th coefficient in the decomposition with respect to the specific basis that we started with. On the left-hand side we take a linear combination of these, with coefficients $c_1, c_2, \dots, c_n$, in the space of maps from $V$ to $F$. Can this linear combination be equal to that special zero element which sends every vector to zero? You see, it may look a little bit confusing at the beginning, because it feels like there are several layers of things: you start with $V$; then you consider functionals on $V$; out of all the functionals, there is one which sends every vector to zero.
And now we take some other functionals and form a linear combination. You may start feeling like your head is about to explode. Possibly, but don't worry, it's going to be fine; just bear with me here. It's like everything: you have to get a little practice with it to get used to it. At the beginning, yes, it is a little bit confusing and perhaps overwhelming, but fear not: it is really something very well structured and well defined, and if you just pull the thread and stay with it, you will slowly, gradually get to feel more and more comfortable with it.

So here is what we have: this equation, and what we want to show is linear independence, which means we have to show that this equation can only be satisfied if all the $c_i$ are equal to zero. Okay? Now you start thinking: how can I possibly prove that? It's almost like playing a game of chess: you look at all the pieces that you have. What pieces do you have? You have a basis in $V$. Definitely everything depends on the basis, because without a basis we could not possibly define these functionals: $\varphi_i$ references a coefficient in the decomposition of $v$, but relative to what? Relative to a specific basis. If a basis in $V$ is not given, this discussion is empty; it makes no sense. So let's use it. The first thing we can do is to say: well, here on both sides we have elements of $V'$, because each $\varphi_i$ was defined as an element of $V'$, a functional on $V$, and a linear combination of these elements in the vector space $V'$ again makes sense as an element of $V'$.
On the right-hand side, we also have an element of $V'$, which I have just defined. We are saying the left-hand side is equal to the right-hand side, which means that two functionals are the same. But to say two functionals are the same is a bit abstract, because each functional is in turn a rule which assigns to every vector some number. Equality of two functionals means that their values on every vector are the same, right? The values of the left-hand side and the right-hand side. This equality means precisely that if you take the left-hand side, which is an element of $V'$, and evaluate it on some vector $v$, then it will be equal to the right-hand side, which is another functional in $V'$, evaluated on that same vector, for all $v$. That's all it means: two maps are the same if and only if their values coincide on every element of the domain.

But out of all possible $v$'s, we could substitute the elements of our basis. As you do your homework and so on, you will find that this is a shortcut, an idea that is really a very powerful application of the notion of a basis: rather than proving some abstract statement for every $v$, we restrict ourselves to the elements of a given basis. (There could be a situation where you are not given a basis; then you have to construct one, or argue abstractly that since the space is finite-dimensional, a basis exists, and so on. But in the ideal situation, which is ours, you are actually given a basis.) Instead of trying to prove the equation immediately for a general vector, it may be useful to specialize to the case where $v$ is one of the basis vectors. This is the idea. Let's evaluate the left-hand side, which is this linear combination, on $v_j$, a generic $j$-th element of our basis, and see what the statement that these two values are the same means.
Now, the right-hand side is easy, because the right-hand side is the zero functional, and for the zero functional you don't need to know much: its value is zero on everything, basis vector or not; on every vector it takes the value zero. So without any thought you can put $0$, the number zero without any indices: we are evaluating the functional on a particular $v_j$, which happens to be the $j$-th element of our basis, and the result of this evaluation is the number zero. Okay, that's what the right-hand side of the equation is.

What's the left-hand side? The left-hand side is going to be
$c_1 \varphi_1(v_j) + c_2 \varphi_2(v_j) + \cdots + c_j \varphi_j(v_j) + \cdots + c_n \varphi_n(v_j),$
since evaluating this combination of functionals on $v_j$ is just the corresponding combination of the evaluations of each of the terms. Now, guess what $\varphi_1(v_j)$ is, say for $j \ne 1$. If you write $v_j$ as a combination of the basis, its expression is
$v_j = 0 \cdot v_1 + \cdots + 1 \cdot v_j + \cdots + 0 \cdot v_n:$
only one coefficient in this expansion is nonzero, and it is equal to one, the one which appears in front of $v_j$, because obviously $v_j$ is equal to $v_j$. If there were some other nonzero coefficients, that would contradict linear independence, that is, the fact that this is a basis. So for $v_j$ this expansion is extremely simple: the coefficient $a_j$ which appears in front of $v_j$ is equal to one, and all the other $a_i$'s are equal to zero. But that means that all of these evaluations are equal to zero, except for the one where $i = j$: $\varphi_j(v_j) = 1$. Let me emphasize this with the following formula: $\varphi_i(v_j)$ is equal to $1$ if $i = j$, and $0$ otherwise. There is a nice symbol for that, called the Kronecker symbol $\delta_{ij}$: $\delta_{ij}$ is equal to $1$ if $i = j$ and $0$ if $i \ne j$. That is the value of $\varphi_i(v_j)$. For example, $\varphi_1(v_j)$ is zero.
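This $\delta_{ij}$ property is easy to check numerically. A minimal sketch, again with a made-up basis of $\mathbb{R}^3$ (the matrix B is an illustration, not from the lecture):

```python
import numpy as np

# Hypothetical basis of R^3: the columns of B.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
n = B.shape[1]

def phi(i, v):
    """phi_i(v): the i-th coefficient of v in the basis B."""
    return np.linalg.solve(B, v)[i]

# Evaluating phi_i on the basis vector v_j (the j-th column of B)
# should give the Kronecker delta, i.e. the identity matrix of values.
values = np.array([[phi(i, B[:, j]) for j in range(n)] for i in range(n)])
assert np.allclose(values, np.eye(n))
```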
Well, assuming $j$ is not equal to one; say $j$ is somewhere in the middle. Then $\varphi_1(v_j) = 0$, $\varphi_2(v_j) = 0$, and so on; $\varphi_j(v_j) = 1$; and the rest of them are also zero. What do we get? On the left-hand side, the whole expression is $c_j \cdot 1$, which is $c_j$; the right-hand side is zero. We conclude that $c_j = 0$ for all $j$ from $1$ to $n$, because we could substitute an arbitrary $v_j$ into this equation. That's what we wanted to show. You see: by simply substituting the $j$-th basis vector into both sides of this equation, the equation of linear dependence, we conclude that $c_j = 0$. And since that is so for every $j$, it means precisely that $\varphi_1, \varphi_2, \dots, \varphi_n$, viewed as elements of $V'$, are linearly independent. Because there are $n$ of them, and we know the dimension of $V'$ is $n$, we find that it is a basis. This completes the proof.

Any questions? Yes, that's right, $v_j$. Oh, sorry, I am a little spacey today. Thank you; please keep correcting me, but I hope I'll get better. It's still better than saying October instead of February, I guess. Anyway, let's focus. So it is a basis, and it is called the dual basis.

Okay, so that's the first thing we can do. What else can we do with this dual space? So far, we have two spaces of the same dimension, and every basis in $V$ gives rise to a basis in $V'$. Next, let's consider linear maps. Suppose you have a linear map $T$ from $V$ to $W$, where $V$ and $W$ are both finite-dimensional. The question is whether, given such a linear map, there is some natural linear map between $V'$ and $W'$. All right? So that is the question; let me use a clean board for it. Suppose you have some functional on $V$. Can we use a linear map from $V$ to $W$ to construct some functional on $W$? A functional $\varphi$ goes from $V$ to $F$, and $T$ goes from $V$ to $W$.
If you have two functions which have the same origin, the same domain, you cannot take their composition. In the case of linear maps we can take their sum, sometimes, if the target spaces are the same; but here they are definitely not going to be the same in general, because $F$ is a very special space and $W$ could be arbitrary. However, if you stare long enough at this picture, you start thinking: what if we reverse the roles of $V$ and $W$? Instead of looking at a functional from $V$ to $F$, look at a functional $\varphi$ from $W$ to $F$. Then we can compose it with $T$, and this composition will give us a linear map from $V$ to $F$. In a weird way, this linear map $T$ has enabled us to pass not from functionals on $V$ to functionals on $W$, but in the opposite direction: we can go from $\varphi$, which is a functional on $W$, to $\varphi \circ T$, a functional on $V$. Here we are talking about the composition of the two maps: $\varphi$ composed with $T$, or we can say $\varphi$ precomposed with $T$. Therefore we now have a rule that assigns to every element of $W'$ (because $W'$ consists of all linear maps from $W$ to $F$) an element of $V'$, the space of linear functionals on $V$.

You see how interesting: everything we are talking about can be viewed under the umbrella of duality. You have the dual vector space, the dual basis, and so on; and under duality, linear maps reverse their direction. If you have a map from $V$ to $W$, it automatically gives you a natural map from $W'$ to $V'$, for the simple reason that you can compose in this way; and if you try to go the other way, you see something weird going on. Okay? That's interesting: we can go from maps from $V$ to $W$ to maps from $W'$ to $V'$. Now, of course, you have to check that this assignment is linear, but that's easy. You need to check that if you take the sum of two linear functionals $\varphi_1$ and $\varphi_2$ on $W$, the resulting composition will be the sum of the compositions, and likewise for a scalar multiple. But that's clear, because composition preserves linearity; we know that.
Then the next question is: what about matrix representations? We know that if you have a basis $\beta$ in $V$ and a basis $\gamma$ in $W$, we can represent a given linear map $T$ by a matrix, which I sometimes write as $[T]_\beta^\gamma$. If $V$ is $n$-dimensional and $W$ is $m$-dimensional, this is going to be an $m$-by-$n$ matrix. Okay? But now we have this new map, which we will call $T'$, which goes from $W'$ to $V'$. And we know that given a basis $\beta$ of $V$, we have a dual basis $\beta'$ in $V'$, right? In our example, $\beta$ would be $v_1, \dots, v_n$ and $\beta'$ would be $\varphi_1, \dots, \varphi_n$. Without specifying the elements, but thinking of them as in the previous discussion, I will denote the dual basis by $\beta'$. (In previous lectures I sometimes used $\beta'$ as notation for a different basis of $V$ itself, but now I am using primes in a consistent fashion: if I put a prime, it is something referencing the dual space; here, the dual basis to $\beta$.) Likewise, you also have a dual basis $\gamma'$: because you have a basis $\gamma$ in $W$, you have a dual basis $\gamma'$ in $W'$.

Okay, that's nice. But we also have $T'$, which we have just constructed, a linear map now from functionals on $W$ to functionals on $V$, in the opposite direction, and it has its own matrix. We can construct its matrix relative to these two bases. But you see, it is not going to be $[T']_{\beta'}^{\gamma'}$; it is going to be $[T']_{\gamma'}^{\beta'}$, because in the lower index we put the basis of the space from which we go, and we now go from $W'$, whose basis is called $\gamma'$; the upper index is $\beta'$. You see, the two switch because the direction of the map switched: from being from $V$ to $W$, to being from $W'$ to $V'$. So we get a matrix, and now it is an $n$-by-$m$ matrix, whereas for $T$ we had an $m$-by-$n$ matrix.
Interesting: for $T$ we considered an $m$-by-$n$ matrix; for $T'$ we construct an $n$-by-$m$ matrix. Do we know a rule which assigns to an $m$-by-$n$ matrix an $n$-by-$m$ matrix? We do: taking the transpose. That is the next result: the matrix of $T'$ is in fact the transpose of the matrix of $T$. The first one is $m$-by-$n$; it has $m$ rows and $n$ columns. And what does it mean to take the transpose? It means to flip the matrix with respect to the diagonal; the result is a matrix which has $n$ rows and $m$ columns. If the first matrix is $A$, this is what we call $A^t$, the transpose. Lo and behold, the transpose matrix is exactly the matrix of the dual map, relative to the dual bases. This gives a nice interpretation of the transposition of matrices, which is a nice procedure at the level of matrices; but we know that matrices are representations of linear maps, and at the level of linear maps we have the passage from $T$ to $T'$, while at the level of the matrices representing them we have the passage from a matrix to its transpose.

Now, before we prove this, I want to give you an example, maybe a couple of examples, of bases and dual bases, just to see how this works in practice. If you have any questions, please ask me; we'll return to this in a moment. But I want to bring it a little bit down to earth, so that we look at more concrete examples of bases and dual bases before we tackle these issues.

Let's look at the following example. Last time, at the end of the lecture, I talked about the vector space of polynomials, say $P_n(\mathbb{R})$, and about two classes of natural linear maps which arise there. You see, the previous discussion of these linear functionals was abstract and required the choice of a basis, because we did not consider a particular vector space: we considered a generic vector space $V$, and all we did was choose a basis in it and say, okay, once you have a basis, you have these natural linear functionals $\varphi_i$, and then we proved that they form a basis.
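Returning for a moment to the transpose claim, here is a hedged numerical sketch using the standard bases of $\mathbb{R}^n$ and $\mathbb{R}^m$ (the matrix A and the coefficient vector r are made-up illustrations, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# T: R^n -> R^m with matrix A in the standard bases.  A functional phi
# in (R^m)' has coordinates r in the dual basis: phi(w) = r . w.
m, n = 3, 4
A = rng.standard_normal((m, n))
r = rng.standard_normal(m)

def phi(w):
    return r @ w

# T'(phi) = phi o T.  Its coordinates in the dual basis of (R^n)' are
# its values on the standard basis vectors e_1, ..., e_n.
coords = np.array([phi(A @ e) for e in np.eye(n)])

# The claim: these coordinates are given by the transpose of A.
assert np.allclose(coords, A.T @ r)
```

The sketch only checks the statement for standard bases, where the dual basis coordinates of a functional are simply its values on the basis vectors.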
But here we are talking about something much more concrete. What do I mean when I say concrete? It is not just some abstract vector space, because it consists of polynomials, which are functions from $\mathbb{R}$ to $\mathbb{R}$: a polynomial sends every $x$ to $p(x)$. Or let me use $t$ for the variable, since we sometimes use $x$ for a vector: $p(t)$, a polynomial of degree less than or equal to $n$. We impose this condition to get a finite-dimensional vector space. So an element of this vector space has a dual interpretation: one as a vector in this space, and the other as a function from $\mathbb{R}$ to $\mathbb{R}$. Now, from this perspective, what are we considering if we think about $V'$? Here $V'$ consists of all functionals on $V$, which means something that assigns to an element $p$ of this space (let me write it to emphasize that we are talking specifically about this $p$) some real number. How interesting: we substitute a function into something. In other words, $\varphi$ is a function on functions, even though it is a controlled situation, where we don't consider all functions of one variable but only polynomial functions, and only of degree less than or equal to $n$. Still, it gives you a certain insight into what the meaning of the dual space is.

Typical vector spaces are spaces of functions on something. What do I mean by that? Even our favorite example, as an aside: our favorite simplest vector space $F^n$ can be viewed as the space of functions from the set $\{1, \dots, n\}$ to $F$. What is such a function? It is defined by its values at $1$, at $2$, and so on, which we can arrange as a column of $n$ elements of $F$. Right? Therefore there is really no difference between $F^n$ and the space of functions from the set of numbers from $1$ to $n$ to $F$. This is a good perspective on vector spaces.
A vector space structure always arises on the space of functions from some set to a field. We even had a special notation for it, if you remember: $F^S$ is defined as the set of all functions from $S$ to $F$. This is a vector space over $F$ for any set $S$. It is a good perspective on vector spaces to think of a vector space as a space of functions on something, possibly satisfying some conditions. For instance (again, something we talked about) $S$ could be the real numbers, and we could consider functions from $\mathbb{R}$ to $\mathbb{R}$. But there are too many of those, so inside we consider, say, continuous functions, or differentiable functions, which we are interested in in calculus; each is a subspace in the space of all functions.

Now, from this point of view, what is a dual space? Your space is a space of functions of some sort, maybe satisfying some conditions. (By the way, here also: $P_n(\mathbb{R})$ consists of functions from $\mathbb{R}$ to $\mathbb{R}$, satisfying the condition that the function is given by a polynomial of degree less than or equal to $n$; it is a space of functions.) Then the dual space consists of functions on functions, because to every polynomial we have to assign a real value. At first it sounds weird: why would you want to take functions on functions? Until you realize that there are some very natural functions on functions, because you can take the value at a specific element. You see, this is what I explained last time. If you pick a real number $a$, then you can define a functional; I will use a different letter, $f_a$, to emphasize that I am talking about a specific class of functionals, whose value on the polynomial $p$ is the value $p(a)$. You see what's happening: we are reversing the roles. Normally we think of $a$ as the argument and $p$ as the function; you substitute $a$ into $p$. The concept of dual space enables you to reverse the roles, because now $a$ labels functions, more precisely functionals, elements of the dual space of $P_n(\mathbb{R})$, and $p$ is now an argument. And why not? If $a$ is a real number, nobody forbids us from thinking about $p(a)$ this way:
not only as the value of $p$ at $a$, but also as the value of $f_a$ at $p$. That's the idea. Of course, the point is that for this to be interesting in linear algebra, this reversal has to be linear: if you take the sum of two polynomials, the value should be the sum of the values. But that is certainly the case; in fact, what I just said is always true for the evaluation of functions. It can be generalized to vector spaces of this nature: if your vector space is a space of functions from a set $S$ to $F$, you can construct linear functionals on it by simply evaluating those functions at a fixed element of $S$. By definition, the dual space consists of functionals, that is to say, maps from the function space to $F$; but there is a natural class of such functionals which are labeled by elements of the set $S$, the same way as there is a natural class of functionals on polynomials which are labeled by elements of $\mathbb{R}$. In our example, $\mathbb{R}$ is what plays the role of $S$; the space is not all of $\mathbb{R}^{\mathbb{R}}$, but a subspace of it. In this equation, $p$ is a polynomial; we can evaluate it at the point $a$ and get a number. Normally we think of it as the value of the polynomial at this number, but now I want to think of it as the value of the functional corresponding to $a$ on the polynomial. We reverse the roles: $f_a(p) = p(a)$; we substitute $a$ for $t$.

Let me give you a concrete example; maybe it will become more clear. Let's say $a = 2$, and consider $f_2$. Let's say $n = 2$, so $p$ is a polynomial of degree two, say $p(t) = t^2 - t + 1$. I define $f_2$ as follows: its value on the polynomial $p(t)$ is the value of $p$ at $2$. For example, on this polynomial the value will be $4 - 2 + 1$, which is $3$. So $3$ can be viewed as the value of this polynomial at $2$, but also as the value of the functional $f_2$ on this polynomial. I am reversing the roles of the point of evaluation and the polynomial. You see, more generally, I can have a space of functions from some set $S$ to $F$; in the example on that board, $S$ is $\mathbb{R}$, and $F$ is $\mathbb{R}$ also.
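The concrete example above ($a = 2$, $p(t) = t^2 - t + 1$) can be checked directly in code; a minimal sketch:

```python
# The evaluation functional f_a on polynomial functions: f_a(p) = p(a).
def f(a):
    return lambda p: p(a)

p = lambda t: t**2 - t + 1   # the polynomial from the lecture
f2 = f(2)

# 3 is both the value of p at 2 and the value of f_2 on p.
assert f2(p) == 3            # 4 - 2 + 1 = 3

# Linearity of f_2: the value of a sum is the sum of the values.
q = lambda t: 2 * t
assert f2(lambda t: p(t) + q(t)) == f2(p) + f2(q)
```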
However, we are not considering all functions from $\mathbb{R}$ to $\mathbb{R}$; we are considering only polynomial functions, and only of degree less than or equal to $n$. That's the difference between the two boards: they are not exactly the same examples, but in both cases the vector space consists of functions of a certain kind from some set $S$ to some field $F$; it's just that in this case both $S$ and $F$ are equal to $\mathbb{R}$. In general, given an element $a$ of $S$, you can define a functional $f_a$ on the space of functions from $S$ to $F$, whose value on a function is the value of that function at $a$. (Maybe let's write it the same way, so that it is similar to the polynomial case.)

Okay, now, I know it may look a little weird and strange, but stay with it, and eventually, gradually, you will get used to it. It is a powerful idea which is at the core of this notion of dual vector space, and actually one of the main reasons why we are interested in considering dual spaces: precisely this language of dual spaces enables us to think about the evaluation of functions as itself an element of a vector space. That procedure of evaluation is important; I will come back to why it is important in an application, namely polynomial approximation. But before I get there, I want to go back to the question of dual bases.

For every $n$, we have the vector space $P_n(\mathbb{R})$ of polynomials of degree less than or equal to $n$ in one real variable, and in the dual space to it we have these evaluation maps, one for every real number. Now, to tie these things together, let's consider the following question, in the simplest example: let's say $n = 1$. We know that the dimension of $P_n(\mathbb{R})$ is $n + 1$; the dimension of $P_1(\mathbb{R})$, polynomials of degree less than or equal to one, is two, right? Now let's pick two elements: suppose I want to do evaluation at $1$ and at $-1$. So $f_1$ and $f_{-1}$ are two elements of this dual space, right? For every number $a$ we have such a functional; its value on a polynomial is the value of the polynomial at $a$.
Now, I claim that they are linearly independent. Why? If you have a vector space and you have two vectors, they are linearly dependent if and only if one is proportional to the other. Right? We all agree on this. So let me show you that these are not proportional to each other. To do that, let me observe the following values: $f_1(t - 1) = 0$, because we are supposed to evaluate every polynomial at $1$, and if you evaluate $t - 1$ at $1$ you get $1 - 1 = 0$. Also, $f_1(t + 1) = 2$, $f_{-1}(t + 1) = 0$, and $f_{-1}(t - 1) = -2$. Ask me if this is not clear.

Now, let's suppose they are proportional. If they were proportional, it would mean that $f_1$, say, is equal to $\lambda \cdot f_{-1}$ for some $\lambda$ (or the other way around). But if so, then the value of $f_1$ on any $p(t)$ has to be equal to $\lambda$ times the value of $f_{-1}$ on that same $p(t)$. However, if I substitute $p(t) = t + 1$, on the right-hand side I get zero, because $f_{-1}(t + 1) = 0$; you don't even need to look at that blackboard, since substituting $-1$ into $t + 1$ gives $-1 + 1 = 0$. And on the left-hand side I get something nonzero: substituting $t + 1$ into $f_1$, we are evaluating at $1$, so we get $2$, while on the right we have $\lambda$ times zero, which is zero for any $\lambda$. So we get something which cannot be satisfied. Therefore this relation cannot hold, and likewise the opposite relation cannot hold: there I would just substitute $t - 1$ instead of $t + 1$. This way you see that these two functionals are linearly independent.

Remark: one can show that the same is true in general.
Namely, if you have numbers $a_1$ up to $a_{n+1}$, and they are all distinct ($a_i \ne a_j$ for $i \ne j$), then the functionals $f_{a_1}, \dots, f_{a_{n+1}}$ form a basis of the dual space of $P_n(\mathbb{R})$. Okay? In other words, look how well this works. We consider a finite-dimensional vector space, so it has a finite basis; we know its dimension is $n + 1$, and the dual space has dimension $n + 1$ as well. But see how many functionals we've got: a whole continuous family of functionals, labeled by real numbers. They cannot all be linearly independent, obviously, because that would mean the space is infinite-dimensional. It turns out that $n + 1$ of them will do: any $n + 1$ of them which are distinct, that is to say, come from distinct evaluation points $a_1, \dots, a_{n+1}$, give rise to $n + 1$ linearly independent elements. And because you have $n + 1$ linearly independent elements and the dimension is $n + 1$, it's a basis.

Okay, but let's go back to our example. Here we have two elements, with $a_1 = 1$ and $a_2 = -1$: we got $f_1$ and $f_{-1}$, and they form a basis of the dual of $P_1(\mathbb{R})$. Now the question is: what is the dual basis? Or, more precisely: to which basis of $V$, which is $P_1(\mathbb{R})$, is it dual? Let me call the two polynomials $p$ and $q$, so that I don't use indices; otherwise we might get confused with the indices on the $f$'s. To which basis $p, q$ of $P_1(\mathbb{R})$ is $f_1, f_{-1}$ dual? You see, we discussed the fact that for every basis in $V$ you have a dual basis in the dual space. Now I am reversing the roles of $V$ and $V'$, for a good reason which I will explain in a moment; I actually already mentioned it last time: if you take the dual of the dual, you get $V$ itself. Therefore you can go from a basis of $V$ to a dual basis in $V'$, just as well as you can go from a basis in $V'$ to the basis in $V$ to which it is dual. In other words, you have $V$ and $V'$, and you can go back and forth with linear functionals.
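The remark about distinct points can be illustrated numerically: with respect to the monomial basis $1, t, \dots, t^n$, the matrix of values $f_{a_i}(t^j) = a_i^j$ is a Vandermonde matrix, which has full rank exactly when the points are distinct. A sketch (the specific points are made-up examples):

```python
import numpy as np

n = 3
a = np.array([-1.0, 0.0, 2.0, 5.0])         # n + 1 distinct points

# V[i, j] = a_i ** j = f_{a_i}(t^j): the evaluation functionals written
# out in the monomial basis of P_n(R).
V = np.vander(a, N=n + 1, increasing=True)
assert np.linalg.matrix_rank(V) == n + 1    # linearly independent, hence a basis

# With a repeated point, two rows coincide and independence fails.
a_bad = np.array([-1.0, 0.0, 2.0, 2.0])
V_bad = np.vander(a_bad, N=n + 1, increasing=True)
assert np.linalg.matrix_rank(V_bad) < n + 1
```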
If you take linear functionals on $V$, you get $V'$; take linear functionals on $V'$, you get back $V$. Therefore this question is actually well posed, even though we don't quite see it yet. But we can just calculate; let's see how to solve this problem. We solve it by solving a system of equations. We write $p(t) = a + bt$ and $q(t) = c + dt$, because remember these polynomials have degree at most one. It's really a very simple example, but something like this works in general; you might need a computer to do the calculation. What is the property that we need to establish? We need $\varphi_1(p(t)) = 1$ and $\varphi_{-1}(p(t)) = 0$, whereas $\varphi_1(q(t)) = 0$ and $\varphi_{-1}(q(t)) = 1$. That's the Kronecker delta: the first functional on the first element should be one, the second functional on the first element should be zero, and so on. That's what I wrote before: $\varphi_i(v_j) = \delta_{ij}$. Here I'm reluctant to use the indices 1 and 2 because I'm using them to indicate the evaluation points; but since there are only two of them, it's not really necessary, because we know which is the first and which is the second. The conditions are: first with first is 1, second with second is 1, and first with second, second with first are zero. So we've got four equations and four unknowns, and of course we'll solve them very easily. How do we solve it? Let's look at the first two. $\varphi_1(p(t))$ means evaluating $a + bt$ at 1, so I get $a + b = 1$, right? And $\varphi_{-1}$ gives $a - b = 0$. If I add them up, $2a = 1$, which means $a = 1/2$. From the first equation I then find $b = 1/2$. So $p(t) = 1/2 + (1/2)t$. Okay? It's very easy.
There's nothing complicated here. Likewise, let's do this now with $c$ and $d$: $c + d = 0$, because now the value of $\varphi_1$ on $q$ is zero, and $c - d = 1$. If I take the sum, I get $2c = 1$, so $c = 1/2$, and then $d = -1/2$. So $q(t) = 1/2 - (1/2)t$. We have found the basis. And clearly it is a basis, because $p$ and $q$ are not proportional to each other. It's not the standard monomial basis, but the monomials you can get by taking the sum and the difference of these two: $p + q = 1$ and $p - q = t$. Rather, this is the basis which is adapted to these two evaluations, at 1 and at $-1$. Likewise, you could consider functionals which are evaluations at, say, the points $-\sqrt{2}$ and $\pi$. As long as the two numbers $a_1$ and $a_2$ are different, the two functionals will form a basis. Then you can ask what the dual basis is in the original space of polynomials of degree at most one, and you will solve it the same way; it's easy to see that there is a unique solution, and in this way you will find the dual basis. You see, all of this is indicating that something is going on that we are not yet seeing, because so far our setup was that $V'$ is secondary to $V$: we go from $V$ to $V'$. But these examples are slowly starting to indicate to us that $V'$ and $V$ should be looked at as two vector spaces on equal footing. In fact, neither of them is superior to the other. One indication is that the dual of $V'$ is $V$ itself, so this is a chicken-and-egg problem: who came first, $V$ or $V'$? From one perspective, $V$ came first, because $V'$ is its dual. But from another perspective, $V$ itself is a dual, so $V'$ came first. How to understand this? And this is a crucial point that I'm going to make, from which everything will follow. What is the crucial point? The crucial step, actually, is modifying our notation slightly.
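The four Kronecker-delta conditions above are a small linear system, and a computer solves it the same way we did by hand. A sketch with numpy, unknowns ordered $(a, b, c, d)$:

```python
import numpy as np

# Sketch: p(t) = a + b*t, q(t) = c + d*t, and the four conditions
#   phi_1(p) = a + b = 1,   phi_{-1}(p) = a - b = 0,
#   phi_1(q) = c + d = 0,   phi_{-1}(q) = c - d = 1.
A = np.array([[1.0,  1.0, 0.0,  0.0],
              [1.0, -1.0, 0.0,  0.0],
              [0.0,  0.0, 1.0,  1.0],
              [0.0,  0.0, 1.0, -1.0]])
rhs = np.array([1.0, 0.0, 0.0, 1.0])
a, b, c, d = np.linalg.solve(A, rhs)
assert np.allclose([a, b, c, d], [0.5, 0.5, 0.5, -0.5])
# So p(t) = 1/2 + t/2 and q(t) = 1/2 - t/2, matching the board computation.
```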
As you know, language oftentimes can be very powerful, in the sense that it leads us to insights, but it can also lead to blockage, to not being able to see what's right in front of our eyes. Up to now, I talked about an element $\varphi$ of $V'$ as a map from $V$ to $F$: given $v$, we write $\varphi(v)$. This creates a hierarchy: $\varphi$ is primary, $v$ is secondary. But let's use a new notation instead, the pairing $\langle \varphi, v \rangle$. Then they become equal footing. We can read this from left to right: it's $\varphi$ evaluated on $v$. But you can also read it from right to left: it's $v$ evaluated on $\varphi$. You see, in fact, what we have: given $\varphi$ in $V'$ and $v$ in $V$, we obtain a number. You can think of it as $\varphi(v)$, as previously, but you can also think of it as $v(\varphi)$. So instead I propose this more democratic notation, where there is much more of a symmetry between left and right. Pun intended. You see, our whole point of view was actually one-sided. We were $V$-centered: $V'$ for us was something external, something evaluated on elements of $V$. But in fact, we could reverse the roles and get an equivalent theory, an equivalent understanding of what's going on. The proper way is to think that all vector spaces come in pairs. Actually, how interesting: for every vector space, somewhere in the world there is another vector space which is called its dual, and when you put them together there is some chemistry between them. There is a pairing: given an element of $V$ and an element of $V'$, you can evaluate one on the other, and you get a number. At first you may think this doesn't really add anything to the story, but it does, because, as I said, thinking about it this way you can reverse the roles, and therefore you can imagine that $v$ is a functional on the $\varphi$'s. This is very similar to what we did here with the polynomials. It's not exactly the same, because in the pairing we're talking about elements of $V$ and $V'$.
There we were talking about polynomials and points on the real line, but the idea is the same. The basic idea is that you can think that you are evaluating a polynomial at a point, but you can also think that you are evaluating a point at polynomials. In other words, you can evaluate a fixed polynomial at many different points, or you can evaluate a fixed point on many different polynomials. The result is the same: you're speaking about the same numbers, the values of polynomials at points. You just either fix the point or fix the polynomial. Here the picture becomes even cleaner, because the objects of $V$ and $V'$ are objects of the same kind, vector spaces, whereas there it was points of the real line and polynomials. How does it help us? It helps us to resolve this dilemma. But before I get to that, let me first show you that, indeed, from this perspective the double dual $V''$ is $V$ itself. To see that $V''$ is $V$, observe that each $v$ in $V$ defines a functional on $V'$. Let's call this functional $\hat{v}$, a map from $V'$ to $F$. Let me emphasize this one more time, similarly to what was before. Namely, given any $\varphi$ in $V'$, you evaluate it on $v$. Normally we fix $\varphi$ and allow $v$ to vary, and then we think of $\varphi$ as a functional. But now we fix $v$ and vary $\varphi$; therefore you get a functional on $V'$, because now the argument is $\varphi$. The only thing that is inconvenient is that the old notation is awkward, because normally we like to think of the argument as being inside the brackets and not outside of them. That's why I suggested our new notation, which makes us much more comfortable with this: all we're doing is reading $\langle \varphi, v \rangle$ from right to left instead of from left to right. We are evaluating $v$ on $\varphi$; now $\varphi$ is changing, therefore we get a functional for sure: for every $\varphi$ we get a number.
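The "fix $v$, vary $\varphi$" move can be written down directly. A sketch (my own names, assuming functionals on $P_1$ are modeled as Python functions on coefficient pairs $(a, b)$):

```python
# Sketch: each v in V gives an element of V'' by "evaluate every functional at v".
def as_double_dual(v):
    """The functional on V' attached to v: phi |-> phi(v)."""
    return lambda phi: phi(v)

phi_1 = lambda p: p[0] + p[1]    # evaluation at t = 1, with p = (a, b)
phi_m1 = lambda p: p[0] - p[1]   # evaluation at t = -1

v = (3, 2)                       # the polynomial 3 + 2t
v_hat = as_double_dual(v)
assert v_hat(phi_1) == 5         # phi_1(3 + 2t) = 3 + 2 = 5
assert v_hat(phi_m1) == 1        # phi_{-1}(3 + 2t) = 3 - 2 = 1
```

The same numbers appear whether you read them as "$\varphi$ applied to $v$" or "$\hat{v}$ applied to $\varphi$"; only the bookkeeping changes.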
Then we check linearity. If you take the sum $\varphi_1 + \varphi_2$ paired with a fixed $v$, which is just the value of $\varphi_1 + \varphi_2$ on $v$, of course (we're using our new notation to simplify the psychological transition to this new perspective), then obviously $\langle \varphi_1 + \varphi_2, v \rangle = \langle \varphi_1, v \rangle + \langle \varphi_2, v \rangle$; that's how these things are set up. And likewise for a scalar multiple: $\langle \lambda\varphi, v \rangle = \lambda \langle \varphi, v \rangle$. So $\hat{v}$ is indeed a linear functional on $V'$. Okay? Now, what about dual bases? If we choose a basis $\beta$ of $V$, then we get an isomorphism between $V$ and $F^n$: to every vector we assign the $n$-tuple of numbers which are the coefficients in the linear decomposition that we talked about at the very beginning today. Okay, on the other hand, you also get the dual basis $\beta'$ of $V'$. Then a functional $\varphi$, which is an element of $V'$, also gives rise to a collection of numbers: $\varphi$ can be written as $b_1 f_1 + \dots + b_n f_n$, where $f_1, \dots, f_n$ is the dual basis. This is the notation from the very beginning of this lecture. You see, the only difference is that we will represent $\varphi$ as a row vector, whereas $v$ is a column vector. Then the value $\langle \varphi, v \rangle$ is just the product of this row and this column. And how nice, because rows and columns of the same length can be multiplied by the rules of matrix multiplication: you get $(b_1, \dots, b_n)$ multiplied by the column of $a_1, \dots, a_n$, which is $b_1 a_1 + \dots + b_n a_n$. What makes me believe that this formula is symmetric with respect to $V$ and $V'$? That I can reverse the roles: instead of thinking of the $b$'s as a row vector, I can think of them as a column vector, and instead of thinking of the $a$'s as a column vector, I can think of them as a row vector, and the product will still be the same. It will be just the dot product of the coefficients. You see, likewise it's the same from the other perspective, because now I'm treating $f_1, \dots, f_n$ as the original basis.
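The row-times-column picture is easy to check. A sketch with numpy (the particular numbers are mine, chosen just for illustration):

```python
import numpy as np

# Sketch: coordinates of v form a column, coordinates of phi in the dual basis
# form a row; the pairing <phi, v> is their matrix product, and swapping the
# row/column roles gives the same number.
a = np.array([[2.0], [3.0]])        # v as a column vector (a_1, a_2)
b = np.array([[5.0, -1.0]])         # phi as a row vector (b_1, b_2)

value = (b @ a).item()              # <phi, v> = b_1 a_1 + b_2 a_2
assert value == 5.0 * 2.0 + (-1.0) * 3.0   # = 7
assert value == (a.T @ b.T).item()  # roles reversed: same dot product
```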
Say, with $v_1, \dots, v_n$ as its dual. They are dual to each other, but this duality has no beginning and no end; it can go both ways. This basis is dual to that basis, and that basis is dual to this basis. What proves it is the fact that the product of the row of $b$'s and the column of $a$'s is the same as the product of the row of $a$'s and the column of $b$'s. Anyway, two different ways to look at the same number: you look from the left or you look from the right, from the point of view of $V$ or from the point of view of $V'$. Now, what about linear maps? Now we're coming to the dilemma. Suppose we have a linear map $T$ from $V$ to $W$. You have pairings like before: I can substitute $\varphi$ here and $v$ here, a functional and a vector. But now, in addition, I have a linear map $T$ from $V$ to $W$. So consider $\langle \varphi, T v \rangle$, where $v$ is in $V$ but $\varphi$ is in $W'$. This makes sense, because I can think of it as a pairing in two ways. From one perspective, the linear map in the middle hits the element on the right: $T$ applied to $v$ lands in $W$, and then this is interpreted as $\varphi(Tv)$. But on the other hand, I can think that $T$ acts on $\varphi$: it becomes $T'(\varphi)$, evaluated on $v$, where $T'$ is a map from $W'$ in the opposite direction, to $V'$. But you see, again, writing it this way or that way breaks the symmetry. The notation with $T$ in the middle is the most suggestive and closest to the truth. The truth is that this is an operation which is symmetric with respect to $\varphi$ and $v$: the $T$ in the middle can be thought of as acting on both simultaneously. Or you can throw the weight of $T$ onto $v$, and then you're thinking about the map from $V$ to $W$; or you can throw the weight of it to the left, onto $\varphi$. Now, when you do the matrix representation, you will get exactly the transpose matrix. Why? Because you will have a column vector of $a_1, \dots, a_n$, you'll have the matrix of $T$ in the middle (actually, let me write it more suggestively like this), and then you have a row vector of $b$'s.
Now, if you have a product of three matrices, you can either multiply these two first and then multiply by the third, or multiply those two first and then multiply by the first. If I multiply the matrix and the column first, it means I first apply $T$ to $v$ and then take the value of $\varphi$ on the result, because we already learned that the product of rows and columns is evaluating functionals on vectors. If I do it this way, I recover the first interpretation, $\varphi(Tv)$. But if I do it the second way, multiplying the row by the matrix first, I get the action on $\varphi$, because now the result lives in $V'$.
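The two groupings of the triple product can be checked directly. A sketch with numpy (the particular matrix and vectors are mine, for illustration):

```python
import numpy as np

# Sketch: for T: V -> W with matrix M, the number <phi, T v> is the triple
# product (row of phi) @ M @ (column of v).  Associativity lets M act on the
# right (on v, as T) or on the left (on phi, as the transpose map T').
M = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([[1.0], [-1.0]])    # column: coordinates of v
phi = np.array([[2.0, 5.0]])     # row: coordinates of phi

assert np.allclose(phi @ (M @ v), (phi @ M) @ v)   # phi(Tv) == (T'phi)(v)
# Written with columns throughout, T' acts by the transpose matrix:
assert np.allclose((phi @ M).T, M.T @ phi.T)
```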