All right. So let me first say a few words about the stuff that we discussed at the very end of last lecture. There wasn't enough time, and it's an important insight.
So the idea is this: suppose V is a finite-dimensional vector space over R or C, equipped with an inner product.
Then we can use the inner product to identify V and V*, the dual space V′. The reason is very simple: we actually have a natural map from V to V′, defined as follows. For any vector y, we can use y as the value of the second argument of the inner product. The inner product has two arguments: for any x and y in V we get a number ⟨x, y⟩, which is an element of F. But if we fix y, then instead of two free arguments we only have one. Effectively, by pinning down the second argument, we get a function of one variable. Think about it this way: if you have a function of two variables and you fix the second variable, you end up with a function of one variable only, because there is only one free argument left. So let's do that. Then for every x you get a number, and this assignment is actually linear in x, because the original inner product is linear with respect to the first argument. I will call it f_y. It is a functional, an element of the dual space, defined by the formula f_y(x) = ⟨x, y⟩ for all x in V. Only one argument is left. So one checks that this is a linear functional, hence an element of V′. And then the claim is that this map — the map y ↦ f_y — is a bijection, a one-to-one correspondence between V and V′. Moreover, it is an isomorphism, in other words a linear map which is a bijection, if F = R; I will explain what happens if F = C. So first let's prove that it is a bijection. Bijective means injective and surjective. To prove injectivity, suppose you have y_1 and y_2 in V such that their images under this map, f_{y_1} and f_{y_2}, are equal. From the definition this means ⟨x, y_1⟩ = ⟨x, y_2⟩ for all x. Taking everything to the left-hand side and using linearity, we get ⟨x, y_1 − y_2⟩ = 0 for all x. But we know that a vector which is orthogonal to every vector in V must be zero. This implies y_1 − y_2 = 0, that is, y_1 = y_2. So two elements of V go to the same functional if and only if they coincide. That means the map is injective: different things go to different things. Now for surjectivity, we have to show that for any linear functional φ in V′ there exists an element y in V such that φ(x) = ⟨x, y⟩ for all x — call this formula (*). There are different ways to do it, but here is a simple proof. Choose an orthonormal basis e_1, ..., e_n of V. The functional φ is given; we want to find an element y such that formula (*) holds for every x, and I will just produce it explicitly using the inner product: y = Σ_i conj(φ(e_i)) e_i. For every i, φ(e_i) is the value of our functional on the basis element e_i, an element of F; I take its complex conjugate and use it as the coefficient in front of e_i. This linear combination of the e_i is a vector in V. And I claim that if I evaluate both sides of (*) with this y on the basis vectors, I get the same result. Let's check that. We are checking this formula for x = e_k. On the left I have φ(e_k).
On the right I have ⟨e_k, Σ_i conj(φ(e_i)) e_i⟩. Now remember: when you have a linear combination in the second argument, pulling out the coefficients introduces complex conjugates. But we already put a complex conjugate there, so it becomes a double bar, a double complex conjugate, which returns φ(e_i). So the right-hand side is Σ_{i=1}^{n} φ(e_i) ⟨e_k, e_i⟩. But only one of these inner products is nonzero — the one with i = k — and in that case it equals one. So the whole sum equals φ(e_k), which is exactly what we have on the left-hand side. This verifies the formula for the case x = e_k. In other words, we have checked that the two linear functionals on the left- and right-hand sides have the same values on the basis vectors. And if two linear functionals have the same values on basis vectors, then they are equal, because the value on every vector is then the same for both of them — you get it by linearity. This implies formula (*) for all x in V, and it shows that the y given by this formula indeed reproduces the linear functional φ in the sense of formula (*). That shows surjectivity. Thus we obtain a bijection between V and V′. In some sense, this reveals the meaning of the inner product, or one of its consequences. An inner product is a way to couple things: you have two inputs x and y, and the output is a number. But if you fix one of them, you get a linear functional. Therefore an inner product effectively gives you a way to construct a linear functional for every vector, and that correspondence sets up a bijection between the two. Now — this is maybe part one; I have shown the bijection. For part two: f_{y_1 + y_2} = f_{y_1} + f_{y_2}. This is obvious from the formula, because the inner product of a sum is the sum of the inner products, for both arguments, first and second. This is true for both F = R and F = C. But now the question is: what if I take a scalar multiple of a vector? Then the corresponding functional is x ↦ ⟨x, c y⟩. If F = R, this is the same as c⟨x, y⟩, which means f_{cy} = c·f_y. So if F = R, the map is indeed linear, and it is a bijection by part one, hence it is an isomorphism of the two vector spaces. In the case when our vector space is over R, an inner product actually gives rise to an isomorphism between V and V′. And don't be deceived by my proof: in the proof I used a basis, but the map I constructed does not use any basis. It just uses common sense — you have a function of two arguments, you fix one of the arguments, and you get a function of one argument. After that you check linearity, and then you check bijectivity as we did. In the course of proving bijectivity I used a particular orthonormal basis, but that was just the verification of a property; it was not involved in the definition of the map. So this isomorphism is independent of any choices. It's an interesting application of an inner product: the inner product allows you to identify V and V′. Now, what about the case when F is the field of complex numbers? In this case ⟨x, c y⟩ = c̄⟨x, y⟩, which means f_{cy} = c̄·f_y if F = C. And so in this case it is not a linear map, because a linear map means that sums go to sums and scalar multiples go to scalar multiples.
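Before moving on, here is a small numerical sketch of the construction above — my own illustration, not the lecturer's, assuming the standard orthonormal basis of C^3 and a functional φ given by a fixed coefficient vector a. It builds y = Σ_i conj(φ(e_i)) e_i, checks φ(x) = ⟨x, y⟩, and also checks the linearity in x and the conjugate behavior in y that was just described.

```python
import numpy as np

def inner(x, y):
    # <x, y>: linear in the first slot, conjugate-linear in the second,
    # the convention used throughout the lecture
    return np.vdot(y, x)                       # vdot conjugates its first argument

n = 3
a = np.array([2 - 1j, 0.5j, 3.0])              # defines the functional phi(x) = a . x (illustrative choice)
phi = lambda x: a @ x

E = np.eye(n, dtype=complex)                   # standard orthonormal basis e_1..e_n
y = sum(np.conj(phi(E[:, i])) * E[:, i] for i in range(n))   # y = sum_i conj(phi(e_i)) e_i

x = np.random.randn(n) + 1j * np.random.randn(n)
c = 2.0 - 3.0j
print(np.isclose(phi(x), inner(x, y)))                        # phi(x) = <x, y>
print(np.isclose(inner(c * x, y), c * inner(x, y)))           # linear in the first argument
print(np.isclose(inner(x, c * y), np.conj(c) * inner(x, y)))  # conjugate-linear in the second
```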
So over C, sums still go to sums, but a scalar multiple by c goes to a scalar multiple by c̄. It's close, but not quite linear. There is a term for this: such a map is called semilinear, so you could say the map y ↦ f_y is a semi-isomorphism or something like that. Such maps are also interesting — they have a special place in mathematics — but it is important to realize that in the case when F is the field of complex numbers, the two arguments are not symmetric. With respect to the first argument the inner product is linear on the nose; with respect to the second argument, when we pull out a coefficient we get c̄. And that means the map is not linear but semilinear, which makes the inner product "1.5-linear", or sesquilinear if you will. Okay, so that's what I wanted to explain, and it's an important connecting link between the theory of dual spaces and the theory of inner products. So now we can actually do a little more. Remember how for dual spaces we defined the following thing: if you have a subspace U of V, then you get U⁰ inside the dual space, which is called the annihilator. On the other hand, if V has an inner product, then we have the notion of the orthogonal complement U⊥. The orthogonal complement is something very familiar already from the study of dot products; certainly we have very good geometric intuition for it when our space is just a plane or a three-dimensional space like the ambient space of this classroom. The orthogonal complement is literally the set of all vectors perpendicular to a given vector, or, if you have a subspace, of all the vectors perpendicular to that subspace. If the subspace is the stage, then the line orthogonal to the stage is its orthogonal complement, and vice versa. In both cases, interestingly enough, if you apply the operation twice you get back the original thing: the annihilator of the annihilator is the original subspace, and likewise the orthogonal complement of the orthogonal complement is the original subspace. So it's a duality. Not surprisingly, the two notions are compatible with respect to the bijection I explained. In fact, you can draw the following diagram: you have U inside V, and you have U⊥ inside V; then you have this map φ_V, which identifies V with V′, and inside V′ you have U⁰. What you can check is that φ_V restricts to a bijection between U⊥ and U⁰. So the two notions are parallel to each other. In a way this sheds a little light on the annihilator, which, I remember, looked a little abstract when we talked about it. But if you think of it from this perspective — a vector space with an inner product, in which case we can actually identify V′ with the original vector space V itself — then the annihilator is nothing but the orthogonal complement, which is much more intuitive. Any questions? Now, our next task is to define what is called the adjoint operator. Suppose you have two inner product spaces, V and W, over F, and suppose you have a linear map T from V to W. Then we can do a kind of twist of the earlier picture by T. You see, before, I was just fixing one of the arguments and leaving the other argument free. But what I could also do is apply T to the free argument. Then, because T is linear, I would still get a linear functional. In other words, consider the following expression: for v in V, look at ⟨Tv, w⟩, with w fixed.
You see, v goes to ⟨Tv, w⟩, and this is in F. So I get a function — call it φ — from V to F: I apply T to v, I get an element of W, because T maps V to W, and I take the inner product with the fixed vector w in W. So this functional takes v to ⟨Tv, w⟩. Maybe I put w in yellow to indicate that it is fixed. I claim that this is a linear functional on V. Why? What does it mean to be linear? It means that if I replace v by v_1 + v_2, the value is the value on v_1 plus the value on v_2. But when I apply T to v_1 + v_2, I get the sum, because T is linear; and the inner product is also linear with respect to the first factor. Likewise for scalar multiples. So this is indeed a linear functional on V — in other words, an element of V′. But we just established that every functional in V′ can be written as ⟨·, y⟩ for some y in V. So according to the previous result — call it Theorem 1 — there exists a unique y in V such that ⟨Tv, w⟩ = ⟨v, y⟩; here w is in W, y is in V, and this holds for all v. You see, I'm twisting things a little. Before, I would consider just the inner product of v with a fixed w; now I take Tv coupled with w. And I know it's a bit hard to wrap your head around this right away — don't feel frustrated if you don't see it; it requires a few readings to process. But you see, this means that to every w in W we were able to canonically assign some element y of V. So we get a map, which we will call T*, from W to V. The original map goes from V to W, but now I am constructing a map in the opposite direction. And what does this map satisfy? I want to say that y is T*(w), because y is determined by w. We had a functional on V written in a somewhat strange way — I pre-applied T — but I know that every functional can be written as plain coupling with some vector, and that vector must be a function of w. Therefore there is a map T* which sends w to this y. Then, substituting, I get the following formula: ⟨Tv, w⟩ = ⟨v, T*w⟩. You see, T* in this form enables me to move T from the left argument to the right argument. On general grounds — namely the Riesz-type theorem we just proved — I know that there exists a map T* which, applied to the second argument, gives the same result as applying the original T to the first argument. And to be sure about where everything is happening: the left-hand side is an inner product in W, because w is a vector in W and Tv is in W, since T maps V to W; whereas on the right I am computing the inner product in V — v is in V and T*(w) is also in V. This operator T* is called the adjoint of T. If you feel confused, perhaps the following will help: if we identify V with V′ by using our inner product, and likewise W with W′ by using its inner product, then T* becomes the transpose operator that we studied when we talked about dual spaces. In other words, you have T* from W to V, which I have just defined; W can be identified with W′ by means of φ_W, and V can be identified with V′ by means of φ_V. The question is what the corresponding map from W′ to V′ will be — and, up to the small irregularity coming from the bar, that is going to be the transpose map. Okay. So let me write again the defining property of T*. The original map T goes from V to W.
I have just shown that there exists another map from W to V, called T*, which satisfies this equation: ⟨Tv, w⟩ = ⟨v, T*w⟩ for all v in V and w in W. That's the adjoint operator, the adjoint of T. Now, when we talk about linear maps, it is useful to consider their matrix representations. I am going to show you that the matrix representations of T and T* are connected in a very nice and very neat way: one of them is the transpose of the other, followed by complex conjugation if we work over the field of complex numbers. So: matrix representation. I want to start this discussion by reminding you that if you have an orthonormal basis, then you have a great tool for finding the coefficients of an expansion — the inner product gives you a way to calculate the coefficients of the expansion of a vector. Last time I showed you the following: if you have V with an inner product, and e_1, ..., e_n is an orthonormal basis with respect to this inner product — meaning ⟨e_i, e_j⟩ is the Kronecker delta — then to find the expansion of any vector v with respect to this orthonormal basis you don't need to solve a system of linear equations, as you do in ordinary circumstances; you can simply take the inner products: the coefficient of e_i is ⟨v, e_i⟩. That's what we discussed last time. Now I want to apply this to the matrix representation of a linear map. Suppose you have T, which is a map from V to W, and suppose that both spaces have inner products: there is an inner product ⟨·,·⟩_V here and an inner product ⟨·,·⟩_W there. Let's choose bases: e_j in V, where j goes from 1 to n, and f_i in W, where i goes from 1 to m. And suppose both are orthonormal; I will use the abbreviation "o.n.", even though it's ambiguous because it could mean orthogonal, but I mean orthonormal. So this one is an orthonormal basis of V and that one is an orthonormal basis of W. I claim that the matrix of T relative to these two bases is very easy to find by using the inner products and these bases. You see, it is always true that if you have a linear map from V to W, and you have any basis β in V and any basis γ in W, you have the matrix representing this linear map relative to these two bases — call this matrix A — and we know it is an m × n matrix. In fact, we know in general what the entries are: the j-th column is just T applied to the j-th basis element e_j, written in terms of γ. The matrix has m rows and n columns, the respective dimensions of W and V. So the j-th column is nothing but T(e_j) expanded with respect to γ: T(e_j) = Σ_i A_{ij} f_i, where the A_{ij} are the entries of that column — j is the number of the column — and i goes from 1 to m. But f_1, ..., f_m is also an orthonormal basis. So, according to our general principle, we can always find the coefficients by simply taking the inner product of the vector we want to expand with each of those basis vectors. This coefficient formula — let's call it (**), since I already used (*) — tells us how to get these coefficients. What do I need to do? I need to take ⟨v, f_i⟩ — but what is v here? v is T(e_j): that's the vector we are trying to expand, and the A_{ij} are its coefficients.
So T(e_j) is what plays the role of v in formula (**): I have to take T(e_j) and pair it with f_i. That's what formula (**) tells me: A_{ij} = ⟨T(e_j), f_i⟩. In (**) I wrote e_i because there I was talking about a vector space with a basis e_1, ..., e_n; but now we are in the vector space W, where our basis is called f_1, ..., f_m. The end result of this discussion is that the entries of the matrix can be found as easily as the coefficients of a vector in its expansion with respect to an orthonormal basis — simply by these expressions. You see? This shows how nice inner product spaces are: you can solve the two big problems we usually face when we do calculations in linear algebra. We can easily find the coefficients of the expansion of every vector relative to a basis, and we can find the entries of a matrix representing a linear map — provided that the bases involved are orthonormal. It is essential that they are orthonormal with respect to the inner product at hand. By the way, where is this inner product taken? It's taken in W, because f_i is in W and T(e_j) is in W. All right. But now, what if we take the adjoint operator and find out what its matrix is? This way we will find a link between the two matrices, which will be extremely simple and nice. So consider the adjoint operator T*. Remember, T* goes in the opposite direction, but we still use the same bases, γ and β. We can still talk about the matrix representation of T*, but now the incoming basis is γ and the outgoing basis is β, because T* goes in the opposite direction compared to that formula. Let's call this matrix A*. Now the same formula (**) shows me that A*_{ji} is going to be ⟨T*(f_i), e_j⟩. You see, now I am using the indices j, i where before I had i, j; I have to switch the roles of the f's and the e's. Let's check this. What is A*_{ji}? Maybe I should explain: this is the matrix A*, and if the first matrix was wider than it was tall, this one should be taller than it is wide, because it goes from W to V — it is n × m. Now I take the i-th column. What is it? It is T*(f_i), which I have to write as Σ_j A*_{ji} e_j. Then A*_{ji}, according to formula (**), is ⟨T*(f_i), e_j⟩, which is what I wrote. But now, guess what: in an inner product, ⟨y, x⟩ is always equal to the complex conjugate of ⟨x, y⟩ — that's just an axiom of the inner product. Therefore this is equal to the conjugate of ⟨e_j, T*(f_i)⟩: if I switch the two vectors and put a bar, I get the same result, just as for any x and y. I am allowed to do that; it's one of the axioms. And now you recognize precisely the expression we found before, because, by the definition of the adjoint, this equals the conjugate of ⟨T(e_j), f_i⟩. I swapped — I'm allowed: if I have T* on the second factor, I can move it to the first factor, and it reincarnates as T. That is the definition of T*: applying T* in the second slot achieves the same result as applying T in the first slot. This is what I did: I moved the star over, and since I had a bar, I still have a bar. So what do I find? I find that A* — or, more precisely, A* is just the name of this matrix.
I call it A*, by a slight abuse of notation. If I take its (j, i) entry, I end up with the complex conjugate of the (i, j) entry of the original matrix: A*_{ji} = conj(A_{ij}). Do you see? So the matrix corresponding to T*, which I call A*, is the transpose of A with a bar, because it has the entries A_{ij} with the indices swapped and a bar on top. Let me summarize it here. Take the matrices of T and T* relative to the same pair of bases β and γ, and require — and this is very essential — that β is an orthonormal basis of V and γ is an orthonormal basis of W. In this case A* = Āᵗ, the conjugate transpose. So at the matrix level, at the level of matrix representations, the adjoint simply means taking the transpose, in the case when the field is the field of real numbers — then there is no complex conjugation, or it has no effect — and over the field of complex numbers you take the transpose followed by complex conjugation. Okay. That's going to be very important for us. Next, you can check various properties of this operation; they are easy, and I don't want to spend too much time on them. What do I mean? You have this operation which converts a map T into a map T*, reversing the direction in which it acts: if T acts from V to W, then T* acts from W to V; T is in L(V, W) and T* is in L(W, V). Okay? What properties does this satisfy? Naturally, if you take a sum, (S + T)* = S* + T*. If you take λT, then (λT)* = λ̄T*, because, you see, if you have an overall factor, when you take the transpose it is still an overall factor, but when you take the conjugation it gets replaced by its complex conjugate. Then, if you take star twice, you get T back: (T*)* = T. This is obvious because you are swapping — if you swap twice, you get back the original. That's one way to see it. Another way: apply the operation twice to the matrix; you get a double transpose and a double complex conjugation, and a double complex conjugation is the identity, a double transpose is the identity. By the way, remember: if you have two linear maps from V to W, and you have fixed bases β in V and γ in W, then the two linear maps are equal if and only if their matrices relative to those bases are equal. Therefore this opens the door to verifying these properties in two ways. One is more abstract: you don't choose bases and you just go by the definition — for example, using a formula like the defining property to check this identity. The other possibility is to say: I'm going to use an orthonormal basis in V and an orthonormal basis in W — we have proved that such bases exist, and we even have construction algorithms to produce them. Then each property becomes a very concrete property of the corresponding matrices, and it is clear that taking the transpose and complex conjugation twice gives you back the original matrix, which is what this statement translates into at the level of matrix representations. Then, if you take a product, you will discover that (ST)* = T*S* — the adjoint of a product is the product of the adjoints in the opposite direction — which of course is compatible with what we know about transposed matrices: the transpose of a product is the product of the transposes, but in reverse order. Finally, if T is invertible, then T* is also invertible, and (T*)⁻¹ = (T⁻¹)*. This follows from the product rule and the fact that I* = I — the adjoint of the identity is the identity. All right.
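As a concrete sanity check on all of this — my own sketch, not part of the lecture, assuming the standard orthonormal bases of C^n and C^m so that a complex matrix stands in for T — the entries come out as A_{ij} = ⟨T e_j, f_i⟩, the matrix of T* is the conjugate transpose, the defining property ⟨Tv, w⟩ = ⟨v, T*w⟩ holds, and the product rule (ST)* = T*S* checks out numerically.

```python
import numpy as np

def inner(x, y):
    # <x, y>: linear in the first slot, conjugate-linear in the second
    return np.vdot(y, x)                      # vdot conjugates its first argument

m, n = 3, 4
T = np.random.randn(m, n) + 1j * np.random.randn(m, n)   # stands in for T : V -> W
E = np.eye(n, dtype=complex)                  # orthonormal basis e_1..e_n of V
F = np.eye(m, dtype=complex)                  # orthonormal basis f_1..f_m of W

# A_ij = <T e_j, f_i>: in these coordinates this recovers the matrix itself
A = np.array([[inner(T @ E[:, j], F[:, i]) for j in range(n)] for i in range(m)])
print(np.allclose(A, T))

# the adjoint in these coordinates: A* is the conjugate transpose
Tstar = T.conj().T
v = np.random.randn(n) + 1j * np.random.randn(n)
w = np.random.randn(m) + 1j * np.random.randn(m)
print(np.isclose(inner(T @ v, w), inner(v, Tstar @ w)))   # <Tv, w> = <v, T*w>

S = np.random.randn(2, m) + 1j * np.random.randn(2, m)    # stands in for a second map S : W -> U
print(np.allclose((S @ T).conj().T, Tstar @ S.conj().T))  # (S T)* = T* S*
```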
So now, the most interesting application of all this arises when V = W. Let me erase here. That's the case of most interest, and you will see why in a moment. So the most interesting case is when both spaces are the same: T acts from V to V, and then T* also acts from V to V. Therefore we can talk about the equation T = T*, and that is called the self-adjoint property. Note that you can only do this if V = W, because otherwise T and T* act between different vector spaces. And it turns out that this property distinguishes a very important class of operators on inner product spaces, with absolutely remarkable properties, which I'm going to list now. The first property is given by this lemma. Lemma 1: every eigenvalue of a self-adjoint operator T is real. This holds both for F = R — where it's obvious, since an eigenvalue of an operator over R is a real number — and for F = C. Obviously, a general operator acting on a complex vector space may well have eigenvalues which are complex numbers with nonzero imaginary part. It turns out this does not happen if your operator is self-adjoint: all its eigenvalues are real, even if its matrix has complex entries. It's an absolutely remarkable fact, used in quantum mechanics, which I will explain on Thursday. And the proof is extremely simple — it's just playing with the defining equation, the defining property. Proof: suppose λ is an eigenvalue of T. Then there exists v in V, v nonzero, such that Tv = λv — remember, an eigenvector is nonzero by definition. Now take λ‖v‖², which is the same as ⟨λv, v⟩, by definition of the norm squared and linearity, pulling λ into the first factor. Then I use the fact that λv = Tv, so this is ⟨Tv, v⟩. Then I use the defining formula — call it (!) — to swap. A priori I get ⟨v, T*v⟩. But now I use the fact that T is self-adjoint, so T* is actually equal to T, and this equals ⟨v, Tv⟩. Are you following? Okay. And now I hope you see where this is going: Tv = λv again, so I write ⟨v, λv⟩, but now λ appears in the second factor, and when I pull it out I get λ̄. This is again λ̄ times the norm squared. What happened? I found that λ‖v‖² = λ̄‖v‖². But v is nonzero, so its norm is nonzero; therefore I can cancel it out and I get λ = λ̄, which means λ is a real number. You see, I'm playing with what perhaaps looked like an annoyance when I introduced the inner product for complex vector spaces — that it is linear in the first argument and semilinear in the second. In fact it was a blessing in disguise, because I'm using exactly that to show λ = λ̄: here I pull λ out of the first factor, and there, after this chain of identities — where I use the definition of the adjoint and the fact that the operator is self-adjoint — I pull it out of the second factor and it comes out as λ̄. So λ = λ̄; λ is real.
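A quick numerical illustration of Lemma 1 (a sketch of mine, not from the lecture): build a self-adjoint matrix from random complex entries and check that every eigenvalue is real.

```python
import numpy as np

B = np.random.randn(5, 5) + 1j * np.random.randn(5, 5)
A = B + B.conj().T                       # A = A*: self-adjoint by construction
lam = np.linalg.eigvals(A)
print(np.allclose(lam.imag, 0))          # True: every eigenvalue is real
```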
You see, it's a consequence of this disparity between the first and second arguments — the fact that with respect to the first one the inner product is linear and with respect to the second one it is semilinear. That's the first property. Ask me if something is not clear so far. Second property — Lemma 2: let v and w be two eigenvectors of a self-adjoint operator T with eigenvalues λ and μ which are not equal to each other — in other words, two eigenvectors with distinct eigenvalues. Then they are orthogonal. The argument is very similar. First of all, v and w are nonzero, and Tv = λv, Tw = μw. Now I take λ⟨v, w⟩. I pull λ in by linearity into the first factor: ⟨λv, w⟩, but that is ⟨Tv, w⟩. Now I use the adjoint property again to swap T onto w; but remember, we are talking about a self-adjoint operator, T* = T, so this is ⟨v, Tw⟩. But Tw = μw. I pull μ out; now it comes out as μ̄. But I have already proved that μ̄ = μ, because every eigenvalue of a self-adjoint operator is real, by Lemma 1. So in fact it equals μ⟨v, w⟩. Look at what I have achieved: I have shown that λ⟨v, w⟩, the original expression, is equal to the last expression, μ⟨v, w⟩. My assumption is that λ ≠ μ. So you just subtract: (λ − μ)⟨v, w⟩ = 0, and since λ − μ is assumed to be nonzero, it implies ⟨v, w⟩ = 0. The two vectors are orthogonal. So: eigenvectors corresponding to distinct eigenvalues are orthogonal, and eigenvalues are real, for every self-adjoint operator, be it on a vector space over the real numbers or over the complex numbers. Now, let's also see what this says about the matrix. I can pick an orthonormal basis β of V. You have your operator T on V and suppose it is self-adjoint. What does that tell us about the matrix of the operator relative to this basis? Call this matrix A — the matrix of T relative to β. If you have an operator acting from V to itself, we can use a single basis for the matrix representation: a priori we could choose one basis for the input and one basis for the output, but if you act from V to itself you can choose the same basis for input and output, and that's what we're going to do — we'll use this orthonormal basis β for both. Because it is orthonormal, the earlier property applies: A*, which by definition is the matrix of T* relative to β and β, is in general Āᵗ, which is what I wrote over there. But if T = T*, then A* = A. In other words, whatever is true for the operator is going to be true for the corresponding matrices. But what does that mean? It means A is equal to its own transpose complex conjugate. For example — let me find some space for this. By the way, I have posted some information about the final exam, as you have probably seen already. This is the last week of classes, so we have a lecture today and on Thursday. Then next Tuesday I will give a review lecture right here, and we can stay longer even after the lecture if there are more questions. Usually either this auditorium is free or, if somebody comes, we can find a smaller auditorium somewhere near. I'm happy to stay longer and answer your questions next Tuesday.
The exam will also be here, on May 10, which is the Friday of exam week. I posted a mock exam on bCourses, and the first problem on the mock exam is about self-adjoint operators. I just want to give it to you as an illustration. Now, I don't remember it 100% — I might be slightly off — but I think it's something like this:

A = ( 1     1+i )
    ( 1−i    2  )

Here is an example of a self-adjoint matrix. Why? Because if you take the transpose, the 1+i and 1−i swap places; but if you then also take complex conjugation, this one gets replaced by minus and that one by plus, and you get A back. So this is an example. And we have already proved that eigenvectors for different eigenvalues are orthogonal. So let's find out what the eigenvalues are. The easiest way is to take the characteristic polynomial — by now we have all the tools in our arsenal. We take (1 − z)(2 − z) − (1 + i)(1 − i). Now, guess what: (1 + i)(1 − i) = 1 − i² = 2. So we get z² − 3z + 2 − 2; the 2's cancel out and you get z(z − 3). Indeed, even though the matrix has complex entries, the eigenvalues are real: 0 and 3. Since there are two distinct eigenvalues, the eigenspaces are one-dimensional, and we have shown by Lemma 2 that they are orthogonal to each other. So you can easily find the corresponding eigenvectors, and then you can normalize them, so you can find an orthonormal eigenbasis. This suggests that maybe something like this is true in general. And indeed it is — that is a big result, which is called the spectral theorem for self-adjoint operators: if you have a self-adjoint operator on a finite-dimensional vector space over R or over C, it always has an eigenbasis which is also orthonormal, and all the eigenvalues are real, even if your operator was defined over the complex numbers, like here. So this is a big deal, the spectral theorem for self-adjoint operators. Suppose you have a finite-dimensional vector space V over R or C, and T acting on V. Then the following are equivalent statements. (a) T is self-adjoint. (b) T is diagonalizable in a specific sense — let me say it a little more precisely: T has a diagonal matrix with real entries (the only nonzero entries are on the diagonal, and they are all real) with respect to some orthonormal basis. (c) V has an orthonormal basis consisting of eigenvectors of T, and all the eigenvalues are real — I'm writing λ ∈ R as shorthand for the eigenvalues. You see, this is a dream — the best of all worlds. If you think about it, in this course we have talked about these important concepts. The first concept is a basis, which itself is a combination of two properties: a basis is a list which is linearly independent and spanning. That's one concept crucial to linear algebra. Then the second concept that is also important is an eigenvector. And when we bring them together, we get the notion of an eigenbasis.
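Coming back to that example for a moment: here is the same computation checked numerically (my own sketch, with the matrix as reconstructed above). The eigenvalues come out as 0 and 3, and the eigenvectors form an orthonormal basis, just as Lemma 2 and the spectral theorem predict.

```python
import numpy as np

A = np.array([[1, 1 + 1j],
              [1 - 1j, 2]])
print(np.allclose(A, A.conj().T))               # self-adjoint: A equals its conjugate transpose
lam, U = np.linalg.eigh(A)                      # eigh is designed for self-adjoint matrices
print(np.round(lam.real, 10))                   # ~ [0, 3]: real eigenvalues
print(np.allclose(U.conj().T @ U, np.eye(2)))   # columns of U form an orthonormal eigenbasis
```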
An eigenvector is an eigenvector for some operator: a basis is a notion attached to a vector space, while an eigenvector is attached to a vector space together with an operator acting on it. If we bring them together, we get the concept of an eigenbasis: something that is a basis and, in addition, every element of it is an eigenvector of the given operator. We have seen that if an operator has an eigenbasis, things simplify — for instance, we can raise it to an arbitrary power easily, and so on. But now we also have the concepts on the orthogonality side — I should have arranged this a little differently, but maybe like this; I'll switch them so that it looks nice: eigenvector here, and this is a basis for V. We also have the notion of an orthonormal set. If we combine those, we get the notion of an orthonormal basis. That's very nice: if you have an orthonormal basis, for instance, you can easily find the coefficients of the expansion of any given vector relative to this basis, you can find matrix entries, as I just explained, and so on. Finally, we can bring all of this together, and you get an orthonormal eigenbasis. This is what this theorem is about: it's about the class of operators for which there is an orthonormal eigenbasis. Of course, to talk about an orthonormal eigenbasis we have to have three structures. We have to have a vector space — otherwise we can't talk about a basis; it has to be a basis of a particular vector space. We have to have a linear operator — otherwise we cannot talk about eigenvectors. And we have to have an inner product on V to talk about orthonormal vectors, or an orthonormal set of vectors. But if you have all three structures, it makes sense to ask whether there exists an orthonormal eigenbasis for a given operator T acting on the vector space V, with respect to your inner product. What this theorem says is when this happens: if you impose the additional condition that the eigenvalues are real, then this happens precisely when your operator is self-adjoint with respect to the inner product. That's the power of this result. On Thursday we'll consider what happens if we relax this condition — if we remove the requirement that the eigenvalues are real, allow the eigenvalues to be complex, and keep the rest. In that case the statement will be that this happens for operators which are so-called normal operators. Normal operators are the ones for which T and T* commute with each other. I will explain this next time. But the self-adjoint case is a really important one, because these are exactly the operators which arise in the context of quantum mechanics. It's essential that the eigenvalues are real, because they represent observable values that you can measure in a lab, and all of these — energy, velocity, momentum, coordinate — are real numbers. Physically meaningful values are real numbers. So the question is: how could you possibly ensure that the eigenvalues, which according to quantum mechanics are precisely the values you can observe in a lab, are real? Well, you ensure it by considering self-adjoint operators. I'll explain more on Thursday. But let me prove the theorem quickly — I have just enough time to do it. Let's prove it first in the case when F is the field of complex numbers. Then, according to Schur's theorem, which I proved at the end of last lecture, there exists an orthonormal basis β of V such that the matrix of T relative to it has a special form. So what am I actually proving? I'm proving that (a) implies (b).
So I'm assuming that T is self-adjoint, and I want to show that T has a diagonal matrix representation with respect to some orthonormal basis. By Schur's theorem, the matrix is upper triangular. Remember — about a month ago, when we talked about upper triangular matrix representations, we showed that an operator has an upper triangular matrix representation if and only if its minimal polynomial splits. But here we are over the field of complex numbers — I'm assuming this for now — and every polynomial splits; therefore for every operator there is a basis such that the matrix representation is upper triangular. But we improved on this last time, last Thursday, and that is Schur's theorem: not only does there exist some basis in which the matrix is upper triangular, there is a basis which is orthonormal in which the matrix is upper triangular. I explained this last time. It's because of the Gram–Schmidt procedure: you say there is some basis, and then you convert that basis into an orthonormal basis by Gram–Schmidt, and guess what — the upper triangular form stays upper triangular. It changes a little bit, but it stays upper triangular. It's a very simple argument which I gave last time. Now, what does this mean? Let's call this matrix A. We assume T = T*, and this implies A = A*, as we just discussed over there. Let me emphasize that here it is essential that β is orthonormal. You see, the property A = A* is only true if you consider the matrix representation relative to an orthonormal basis. If you take a random basis — even for a self-adjoint operator — and you take its matrix representation relative to some random basis which is not orthonormal, you will lose this property. But here, luckily, Schur's theorem guarantees that we can pick a basis which is orthonormal and in which the matrix is upper triangular. Relative to that basis, A has to equal A*, which means A = Āᵗ. Now, guess what happens when you take the transpose of an upper triangular matrix: you get a lower triangular matrix. Under what conditions can an upper triangular matrix be equal to a lower triangular matrix? Only if it is diagonal. But remember, we also took the bar, which means on the diagonal we replace every entry by its complex conjugate, and they have to be equal — so the matrix is diagonal with real entries. In other words, the basis which is guaranteed by Schur's theorem is precisely the orthonormal basis that provides such a form, such a matrix representation. We have shown that (a) implies (b). But (b) also implies (a): if you know there is an orthonormal basis with respect to which the matrix is diagonal with real entries, then that matrix is self-adjoint — a diagonal matrix with real entries equals its own conjugate transpose — and therefore, by the same token, since the basis is orthonormal, the corresponding operator is also self-adjoint. In other words, the two properties T = T* and A = A* are equivalent to each other. That is how you show that (a) is actually equivalent to (b). Finally, (b) is equivalent to (c). This is just rephrasing, because what does it mean that there is a basis of eigenvectors? It's a basis in which the matrix representation is diagonal, and if all the eigenvalues are real, this diagonal matrix has real entries. That's all; (b) and (c) are essentially the same statement. This proves the theorem for vector spaces over the complex numbers.
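As a sanity check on this step of the argument — a sketch of mine using SciPy's Schur routine, not part of the lecture — for a self-adjoint complex matrix, the upper triangular factor produced by a Schur decomposition is, up to rounding, diagonal with real entries, which is exactly the A = Āᵗ argument above; equivalently, A factors as U D U* with U unitary and D real diagonal.

```python
import numpy as np
from scipy.linalg import schur

B = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
A = B + B.conj().T                                        # self-adjoint
T_schur, Z = schur(A)                                     # A = Z T Z*, Z unitary, T upper triangular
print(np.allclose(T_schur, np.diag(np.diag(T_schur))))    # T is in fact diagonal
print(np.allclose(np.diag(T_schur).imag, 0))              # with real diagonal entries

lam, U = np.linalg.eigh(A)                                # the spectral factorization itself
print(np.allclose(U @ np.diag(lam) @ U.conj().T, A))      # A = U D U*, D real diagonal
```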
And now the last thing is to see how this also implies the statement when V is a vector space over the real numbers. And this is where we are going to use the fact, which we already know, that for a self-adjoint operator all the eigenvalues are real. You see, the only place where we used that F = C is in Schur's theorem, because there we used the fact that the minimal polynomial splits, which is true for every complex polynomial. But how do we know that it also splits over the reals? We know because its roots are real numbers. So basically, that's it — that settles it. But let me write this down. The point is that we can represent our operator by a matrix with respect to an orthonormal basis γ: the matrix of T relative to γ — let's call it B, so we don't confuse it with the matrix A, which is diagonal. This matrix is self-adjoint and real, because we started out with a real self-adjoint operator. It has its minimal polynomial, a polynomial over the reals. But we can treat it as a polynomial over the complex numbers — obviously, every real polynomial is a complex polynomial: its coefficients are real numbers, but every real number is a card-carrying complex number as well. So, as a complex polynomial, it splits, into a product of factors (z − λ_i), where the λ_i are the eigenvalues. But the eigenvalues are real, by Lemma 1 — they are the eigenvalues of T, because the eigenvalues of the matrix are the same as the eigenvalues of the operator. Yes.
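And a last tiny sketch (again my own addition) for this real case: a real symmetric matrix, regarded as a complex matrix, has only real eigenvalues, which is why its minimal polynomial already splits over R.

```python
import numpy as np

C = np.random.randn(5, 5)
B = C + C.T                                   # real and self-adjoint (symmetric)
roots = np.linalg.eigvals(B.astype(complex))  # regard B as a complex matrix
print(np.allclose(roots.imag, 0))             # all eigenvalues are real
```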