So we have two weeks left, and we will spend them discussing what are called inner product spaces and so-called spectral theory. What is this all about?
In the first linear algebra course of the year, you learned what's called the dot product.
So at that time, we worked with the vector space $\mathbb{R}^n$, and we defined the dot product as follows.
A vector in $\mathbb{R}^n$ is a collection of $n$ real numbers arranged as a column.
And so if you have two vectors like this, let's call them $x$ and $y$ (usually we would even put an arrow above them), then the dot product of $x$ and $y$ is obtained by taking the sum of the products of the corresponding components: if $x$ has components $a_1, \dots, a_n$ and $y$ has components $b_1, \dots, b_n$, then $x \cdot y = \sum_{i=1}^{n} a_i b_i$. Why is this operation interesting?
Well, for two reasons. First, it enables us to talk about the length, or magnitude, of a vector. We define the length $\|x\|$ (sometimes people put just one vertical line, but we will use two) as the square root of $x \cdot x$. Naturally, for this to make sense, we have to make sure that $x \cdot x$ is greater than or equal to zero. Remember, we are working with real numbers here. But even if we weren't working with real numbers, we would still want the length to be a real number: a length is something physical, something we can observe, and all the lengths we observe are real numbers, in fact positive real numbers or zero, never negative. Luckily, it works: according to this formula, $x \cdot x$ is a sum of squares, and the square of each real number is always greater than or equal to zero. Moreover, $x \cdot x \ge 0$, and in fact it is equal to zero if and only if $x$ is itself the zero vector, that is to say, all components are zero. Indeed, a sum of non-negative numbers is equal to zero if and only if each of them is equal to zero, and that means each $a_i$ is equal to zero.
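Here is a minimal numeric sketch of these two definitions (plain Python; the names `dot` and `length` are mine, not anything from the course):

```python
import math

def dot(x, y):
    # Dot product on R^n: sum of products of corresponding components.
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y))

def length(x):
    # ||x|| = sqrt(x . x); the radicand is a sum of squares, hence >= 0.
    return math.sqrt(dot(x, x))

print(dot([1, 2, 3], [4, 5, 6]))   # 32
print(length([3, 4]))              # 5.0
```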
That's the first reason: we can talk about lengths.
The second reason is that we can talk about orthogonality. We say that two vectors are orthogonal if the dot product between them is equal to zero.
In fact, we can do even more: we can define an angle between two vectors. One can show that $|x \cdot y| \le \|x\|\,\|y\|$. Here $x \cdot y$ is a real number, and the bars denote its absolute value in the usual sense. Therefore $x \cdot y$ lies between $\|x\|\,\|y\|$ and its negation, so we can write $x \cdot y = \|x\|\,\|y\|\,\alpha$ for some number $\alpha$ between $-1$ and $1$. We then say that the angle between the two vectors is $\theta$ if $\alpha = \cos\theta$. In other words, the dot product is the product of the lengths of the two vectors times the cosine of the angle between them. We can do that precisely because this number is between $-1$ and $1$, and every real number between $-1$ and $1$ can be written as the cosine of an angle between $0$ and $2\pi$.
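A small sketch of turning this into an angle (plain Python; the clamp guards against floating-point round-off, and the function names are mine):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def angle(x, y):
    # Cauchy-Schwarz guarantees the ratio lies in [-1, 1],
    # so arccos is defined for nonzero x, y.
    c = dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp for round-off

print(math.degrees(angle([1, 0], [1, 1])))  # ~45.0
```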
So a priori, if we just work with a vector space, there is no notion of angle and no notion of length.
It is the extra structure, the operation of the dot product, that enables us to speak in these terms. There are various applications of this, which we'll discuss in a more general setting. This is the prototype for what we would like to do now for general vector spaces: I want to define a similar operation (I will tell you in a moment why I say "similar") for a general finite-dimensional vector space, not only over $\mathbb{R}$ but also over $\mathbb{C}$, that is, for complex vector spaces as well.

Before I explain this, I want to make contact between this structure of the dot product and what we discussed in the last two weeks. Remark: the dot product can be thought of as a bilinear map from $\mathbb{R}^n \times \mathbb{R}^n$ to $\mathbb{R}$. It does exactly what a bilinear map is supposed to do: its input consists of two vectors, in other words it has two arguments $x$ and $y$; the output is the dot product; and the dot product is linear in both $x$ and $y$. In other words, it is an element of the space we denoted with an upper index two, the space of bilinear maps. But also notice that it is symmetric, because $x \cdot y = y \cdot x$, which is obvious from the formula: $a_i b_i = b_i a_i$, since these are real numbers and they commute. In fact, it lives in the symmetric part of that space.

So now we can guess how to generalize it: a generalized dot product, which we will call an inner product, for a general vector space $V$. Let's first define the notion of an inner product; it will be the generalized version of the dot product. We want to reserve the term "dot product" for the specific operation given by this formula in the case when the vector space is actually $\mathbb{R}^n$. Now we are considering a general vector space $V$ over $\mathbb{R}$ (and soon also over $\mathbb{C}$), and we'll define something similar; the term for it will be an inner product on $V$. Let me set up the notation: we'll write $\langle \cdot, \cdot \rangle$, where I put two dots as placeholders for two vectors. Provisionally, we could just say that an inner product is a symmetric bilinear form: for any $v$ and $w$ in $V$, we have the expression $\langle v, w \rangle$, which is an element of $\mathbb{R}$ (recall that right now I'm talking about spaces over $\mathbb{R}$). Then we just have to repeat the axioms of a symmetric bilinear form. In other words, this inner product is a map from $V \times V$ to $\mathbb{R}$ which is bilinear and symmetric. First, $\langle u + w, v \rangle = \langle u, v \rangle + \langle w, v \rangle$; that's the first part of the linearity property for the first argument. Second, $\langle c\,u, v \rangle = c\,\langle u, v \rangle$. These hold for all $u, v, w$ in $V$ and $c$ in $\mathbb{R}$. In principle, we should then also write the same properties for the second argument. But remember, we also want it to be symmetric, so let's put that as the third property: $\langle u, w \rangle = \langle w, u \rangle$ for all $u$ and $w$ in $V$. Notice that this symmetry property and the linearity in the first argument together imply linearity in the second argument: if you switch the arguments, apply linearity in the first argument, and switch back, you get linearity in the second argument. So we don't have to list it as a separate axiom.
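To collect what we have so far in one place, here is the real-case definition written out (the numbering of the axioms is mine):

```latex
\begin{aligned}
&\langle \cdot\,, \cdot \rangle : V \times V \to \mathbb{R} \quad\text{such that}\\
&(1)\quad \langle u + w,\, v \rangle = \langle u, v \rangle + \langle w, v \rangle,\\
&(2)\quad \langle c\,u,\, v \rangle = c\,\langle u, v \rangle,\\
&(3)\quad \langle u, w \rangle = \langle w, u \rangle
\qquad \text{for all } u, v, w \in V,\; c \in \mathbb{R}.
\end{aligned}
```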
Now, the problem is that if we stop here, we get too many examples. The dot product itself is the first example: as I just mentioned, it satisfies the properties; it is a symmetric bilinear form. But here is another example, where you take some of the terms with a plus sign and some with a minus sign. Say $n = 2$, so $x$ has components $a_1, a_2$ and $y$ has components $b_1, b_2$, and we define $\langle x, y \rangle = a_1 b_1 - a_2 b_2$. This is bilinear and it is symmetric; you can check that easily. But it is no longer true that $\langle x, x \rangle$ takes only non-negative values. For example, you can take the vector $(0, 1)$ and pair it with itself, and you'll get $-1$. With the dot product you get a sum of squares; but if you define a more general symmetric bilinear form in this way, with some of the coefficients positive and some negative, there will be non-zero vectors whose pairing with themselves is negative. Then we cannot define the length, because the length would be an imaginary number, and we want lengths to be real, physical things. That doesn't work. So we have to impose an additional property: an inner product is a special case of a symmetric bilinear form, in the case of a vector space over the real numbers. This you have to remember, because the first temptation is to say that there is nothing new here, that it's just a symmetric bilinear form on a real vector space. No: it is a symmetric bilinear form satisfying an additional axiom, namely that $\langle v, v \rangle \ge 0$ for all $v$. And that's not enough either. You see, another possibility is to take just $\langle x, y \rangle = a_1 b_1$. Then you have non-zero vectors, such as $(0, 1)$, for which this inner product with themselves, being just the first term, is zero. The vector is non-zero, but the inner product is zero. We don't want that: we want every non-zero vector to have a non-zero length; only the zero vector should have length zero. That's why this is not enough either, and we also need to say that $\langle v, v \rangle = 0$ if and only if $v$ is actually the zero vector of the vector space. Now we have got ourselves a proper generalization of the notion of dot product.

And now you ask: okay, but what kind of examples do we get beyond what we already know? Why bother? We know, after all, that every finite-dimensional vector space over $\mathbb{R}$ is isomorphic to $\mathbb{R}^n$. But remember: it is isomorphic to $\mathbb{R}^n$, yet the isomorphism is not unique. To get an isomorphism between $V$ and $\mathbb{R}^n$, you need to choose a basis in $V$, and there are many different bases. For different choices of basis you get different identifications, and the dot product corresponds to different inner products on $V$. There are actually many choices of inner product which go beyond the mere dot product. However, we will show that for every inner product there exists a basis in which this inner product looks like the dot product. That is true. So that takes care of the real case.
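A quick check of the two non-examples above, the sign-flipped form and the degenerate form (plain Python; function names are mine):

```python
def sign_flipped(x, y):
    # Symmetric and bilinear, but NOT an inner product: <x, x> can be negative.
    return x[0] * y[0] - x[1] * y[1]

print(sign_flipped([0, 1], [0, 1]))  # -1: fails the positivity axiom

def degenerate(x, y):
    # Positive semidefinite but degenerate: a nonzero vector with <x, x> = 0.
    return x[0] * y[0]

print(degenerate([0, 1], [0, 1]))    # 0, although (0, 1) is not the zero vector
```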
Before I move on to the complex case, let me give an example which goes beyond the dot product. Take $V$ to be the space of polynomials over $\mathbb{R}$ of some bounded degree. Every polynomial can be thought of as a function $p(x)$ from $\mathbb{R}$ to $\mathbb{R}$; polynomials in one variable were studied in the framework of single-variable calculus, so we know that they can be viewed as nice functions on the real line. In particular, they are functions defined on the interval $[0, 1]$. So if you have two polynomials $p$ and $q$, let's define the inner product between them as $\langle p, q \rangle = \int_0^1 p(x)\,q(x)\,dx$. Let's check the axioms; we have four of them. Linearity and symmetry are obvious: symmetry because two polynomials commute, and linearity from the fact that the integral of a sum of two functions is the sum of the corresponding integrals, and likewise the integral of a scalar multiple of a function is that scalar times the integral (you can pull scalars out of an integral). So the first three axioms are obvious; we only need to check the last one, axiom four. For that, we need to compute $\langle p, p \rangle = \int_0^1 p(x)^2\,dx$. I claim that this is indeed greater than or equal to zero, for the simple reason that the integrand is greater than or equal to zero everywhere. Remember, if you have a function which is non-negative, there is an interpretation of the integral as the area under the graph of this function. The graph of $y = p(x)^2$ might touch the $x$-axis somewhere, but it never goes below it, because it is the square of a function. The integral is the area under this graph, and this area is non-negative. The only way it can be zero is if the function is identically zero. So both parts of axiom four hold: $\langle p, p \rangle \ge 0$, and it is equal to zero if and only if $p(x)$ is identically zero, the zero function.

You see how interesting this is: once we expand our horizons, so to speak, once we go from $\mathbb{R}^n$ to a more general, even finite-dimensional, vector space over $\mathbb{R}$, there are other options available for defining this type of structure, options which allow us to interact with calculus, with analysis. We can push this further, because in principle nothing prevents us from considering infinite-dimensional vector spaces and trying to define similar structures for them. This allows us to extend the same definition from polynomials of bounded degree to, for example, all polynomials of all degrees, which form an infinite-dimensional vector space: this formula would still make sense and still satisfy all the axioms. We can extend it even further, for example to all continuous functions, or all continuously differentiable functions, and so on, or all smooth functions, which have well-defined derivatives of all orders at all points. This leads to very interesting structures, which are in fact the structures that appear in quantum physics and quantum mechanics: it leads to the notion of a Hilbert space. The difference is that the vector spaces which appear in quantum mechanics are defined over the complex numbers, so our next task is to generalize this notion to vector spaces over the complex numbers. But I have already shown you one avenue along which we can proceed: integration defines a whole class of interesting inner products.
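Here is a minimal sketch of this integral inner product, computed exactly on coefficient lists rather than by numerical quadrature (plain Python; the representation and names are my choices):

```python
def poly_mul(p, q):
    # Coefficient lists, lowest degree first: p[k] is the coefficient of x^k.
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def inner(p, q):
    # <p, q> = integral_0^1 p(x) q(x) dx, integrated term by term:
    # the integral of c * x^k over [0, 1] is c / (k + 1).
    return sum(c / (k + 1) for k, c in enumerate(poly_mul(p, q)))

# <x, x> = integral_0^1 x^2 dx = 1/3
print(inner([0, 1], [0, 1]))  # 0.3333...
```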
Now, you can also see that this is actually very similar to the original definition. You can think of it naively (I'm not saying it's rigorous, but figuratively speaking) as follows: an integral is like a sum, but now it's a sum over all points of the interval. Think of a function as a vector where each component is the value at a particular point. If we could enumerate all points, which we know we can't, because the set of points of the interval $[0,1]$ has a larger cardinality than the set of natural numbers; but if we could, then we would just assign to each point the value of the function at that point. So we can think of a function as a column vector whose entries are the values of the function at every point. And see what we're doing here: we're multiplying the values at each point, just like we did in the formula for the dot product. In that formula, the analogue of $x$ is the index $i$, which runs from $1$ to $n$; you can think of a vector as a function taking values at the numbers $1, 2, 3, \dots, n$. A column vector of size $n$ is in fact the same thing as a function from the set $\{1, 2, \dots, n\}$ to the set of real numbers. So this is a direct analogue of that formula, but in the continuous case, the case of the continuum, where you have the points of the interval $[0,1]$ instead of the set from $1$ to $n$. Roughly speaking, this is "the sum over all $x$ in $[0,1]$ of $p(x)\,q(x)$", so to speak. So this example doesn't come out of the blue; it is naturally connected to what we are used to.

Of course, you can make changes: instead of $0$ to $1$, you can integrate from $a$ to $b$, over any interval. The point is that if you have an integral over a finite interval, it's always going to converge; it's well defined. You can even explore what happens if you're more ambitious and want an integral over the entire real line, or maybe a half-line. In that case, the problem is that the integral will not converge unless the product of the functions is identically zero, that is, one of the two is zero. But you can always put in a weight: you can insert some function here, and the typical thing to do is to put $e^{-x^2}$, a function which goes to zero very fast, much faster than any polynomial grows. That also gives a well-defined integral from $-\infty$ to $+\infty$; or if you want to integrate from $0$ to $\infty$, you can put $e^{-x}$, this kind of thing. All of these things suddenly become available once we open ourselves to more general examples than $\mathbb{R}^n$.

Now let's move on, if there are no questions at this point, to the case of a vector space over the field of complex numbers, where we will see one more twist. Before I get to that, I want to mention one thing. The example I gave, where $\langle x, y \rangle$ is a sum of the $a_i b_i$ with mixed signs (not the plain sum, but some terms with plus and some with minus), may seem outlandish, but in fact in Einstein's special relativity that's exactly the kind of bilinear form that appears. Space-time is four-dimensional: you have $\mathbb{R}^4$, but one of the directions is special, the time direction, and the other three are spatial. That's the way we think about space-time. The point is that the sign for time is plus, and for space it is minus, minus, minus; or sometimes people write it the other way around, with an overall sign flip, so that time comes with a negative sign and space with positive signs. This is called the Minkowski signature, the Minkowski bilinear form; you can Google it and find out more. In other words, these kinds of structures are also very interesting and relevant in physics. But at the moment we are interested in the more traditional, so to speak, inner products (I'm not ready yet to move on to those other things), and the term used for inner products which have all pluses is Euclidean.
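For illustration, a sketch of the Minkowski pairing on $\mathbb{R}^4$, which is emphatically not an inner product: a non-zero "lightlike" vector pairs to zero with itself (plain Python; names are mine):

```python
def minkowski(x, y):
    # Signature (+, -, -, -): time component first, then three space components.
    time_part = x[0] * y[0]
    space_part = x[1] * y[1] + x[2] * y[2] + x[3] * y[3]
    return time_part - space_part

lightlike = [1, 1, 0, 0]                  # nonzero vector
print(minkowski(lightlike, lightlike))    # 0: "length" zero without being zero
```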
And in quantum field theory, one of the big unsolved problems is how to calculate what's called the path integral in Minkowski signature. Typically, the idea is that you calculate it in Euclidean signature: you pretend that you actually have an inner product with all plus signs, even though Einstein's special relativity tells you that one of them is plus and the other ones are minus (or, say, the time direction is minus and the space directions are plus). Then you want to do an analytic continuation; this is called Wick rotation. The idea is that if you have something, for example time, appearing in the formula with a minus sign, you could pretend that time is imaginary: if you write $t \mapsto it$, then $-t^2$ becomes $-(it)^2 = t^2$; the minus times the minus coming from $i^2$ gives a plus. To relate the two, you have to pretend that time is imaginary. This is very much in line with current research in quantum field theory, and I don't know why I'm telling you this, except to show you that there are many other possibilities beyond the scope of what we're studying here. This is just the beginning; there's a lot more.

Anyway, since I just introduced this, it's time to move on to vector spaces over the complex numbers. Over the complex numbers there is an extra problem. What we have so far is all good, but we are more ambitious: we want to do the same for vector spaces over the complex numbers. And here's the problem with complex numbers. (Maybe I'll use the red chalk, so I don't have to erase.) So now let's suppose that $V$ is a finite-dimensional vector space over the complex numbers. Then why not also define a bilinear form? The problem is that a bilinear form on a vector space over the field of complex numbers takes values in the field of complex numbers: it always has to take values in the field of definition of your vector space, because otherwise you cannot make sense, for example, of multiplying the value of the bilinear form by an element of the field over which your vector space is defined. So it will take values in $\mathbb{C}$ now. That by itself is okay; we can relax the condition that $\langle v, w \rangle$ is a real number for every $v$ and $w$. But one thing we cannot compromise on is that if $v = w$, the value has to be a real number, and a non-negative real number at that, because otherwise we will not be able to define the length. This is the one thing we will insist on; we'll try to relax all the conditions except this one.

At first, it looks like it's impossible to solve this problem. But in fact there is a way to do it, and you can see it in the simplest case: just one coordinate, $n = 1$. Then $\mathbb{R}^n$ is just $\mathbb{R}$; there's no arrow anymore, and we're simply talking about the absolute value of $x$. The absolute value of $x$ can indeed be defined as the square root of $x^2$, if $x$ is a real number: the square of every real number is a non-negative number, so we can extract the root, and that's what we call the absolute value. What would be the analogue of this if we take $\mathbb{C}$ instead of $\mathbb{R}$? Now $x$ is a complex number, and then $x^2$ is all over the place. For sure you can get negative real numbers: $i^2 = -1$. And it's much worse than that; remember, for example, how complex numbers get multiplied.
If you have two complex numbers... let me switch to yellow; I guess you cannot really see red, it's not such a good color. A complex number can be represented as a point on a plane in which the horizontal axis is the real part and the vertical axis is the imaginary part. If you have two complex numbers like this, the product is the following: you get a complex number whose length is the product of the lengths, and whose argument is the sum of the arguments. This shows you, for instance, what happens with squares: let's suppose the number has length one, just for simplicity, at some angle $\theta$; the square is going to have angle $2\theta$. So you can get an arbitrary complex number as a square. You see? That's the problem: how could we possibly get a real number?

But the solution presents itself if we remember that there is such a thing as complex conjugation. If you have a complex number, we can write it as $z = x + iy$, real part plus imaginary part. Then there is the notion of the complex conjugate, $\bar{z} = x - iy$, where you put a minus sign in front of the imaginary part. And lo and behold, if you take the product of $z$ and $\bar{z}$, you are going to get $(x + iy)(x - iy)$. You know that in general $(a + b)(a - b) = a^2 - b^2$; but here the minus will actually help us. It conspires with the $i$ to give us a positive sign, because the product is $x^2 - (iy)^2 = x^2 + y^2$. Bingo: we've got something which is not only real but non-negative. That's the way to go.

Before explaining what the axioms are for a vector space over the field of complex numbers, let me give you the analogue of the dot product, but for $\mathbb{C}^n$. We already understand what it should be for $\mathbb{C}^1$: for $V = \mathbb{C}$, a one-dimensional space, we define the analogue of the dot product by the formula $\langle z, w \rangle = z\bar{w}$, where $z$ and $w$ are complex numbers. That's a clever way to have it all. But not quite; so what are its properties? It's linear in the first factor. For the second factor, sums behave well, but there is one problem: if you scale the second argument, you get $z\,\overline{cw}$, and $\overline{cw}$ means $\bar{c}\,\bar{w}$, so what comes out is not $c$ but $\bar{c}$. So we have to modify our linearity condition by not demanding linearity in the second argument. That means that we are not, strictly speaking, considering a bilinear form anymore, but what's called a sesquilinear form: a form which satisfies only half of the linearity condition in the second argument. "Sesqui" means one and a half. It is linear in the first argument; it respects sums in the second argument, but it does not respect scalar multiples in the second factor, although it fails in a controlled way, by replacing the scalar with its conjugate. On the upside, if you take $\langle z, z \rangle$, you get $z\bar{z}$, which is exactly what we call the square of the absolute value of a complex number, and for a good reason: it really is greater than or equal to zero, and it is equal to zero if and only if $z = 0$.
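A quick numeric check of the conjugation trick and of the $\mathbb{C}^1$ formula $\langle z, w \rangle = z\bar{w}$ (plain Python, using the built-in complex type; names are mine):

```python
z = 3 + 4j
print(z * z.conjugate())   # (25+0j): x^2 + y^2, real and non-negative
print(abs(z) ** 2)         # 25.0, the same quantity

def inner_c1(z, w):
    # <z, w> = z * conj(w): linear in z, conjugate-linear in w.
    return z * w.conjugate()

c = 2j
print(inner_c1(z, c * z))               # scaling the SECOND slot picks up c-bar
print(c.conjugate() * inner_c1(z, z))   # same value: -50j
```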
And now we can guess how to generalize. We take $V = \mathbb{C}^n$: now $z$ is a collection $a_1, \dots, a_n$, just like before, except each $a_i$ is a complex number, and $w$ is $b_1, \dots, b_n$, where each $b_i$ is a complex number. We define (if you want, you can even put arrows here to indicate that these are not numbers anymore but columns) almost the same sum as before, but with a conjugate: $\langle z, w \rangle = \sum_{i=1}^{n} a_i \bar{b}_i$. This satisfies the crucial property that we need, number four. We will see why it's crucial, why it's important; but for now, I think a sufficient argument is that this way we can actually think of vectors as having a magnitude, a length, without contradicting what we expect from physics: that the length has to be a real number, and not only a real number but a positive one if the vector is non-zero.

Now let's discuss the axioms that this structure satisfies. We already saw that axiom four is satisfied, and linearity in the first argument is fine. But the crucial difference (this is the case of $V$ over $\mathbb{C}$) is this: if you switch the arguments, it's not equal; it's not symmetric. If you switch them, you're switching the roles of the $a_i$ and $b_i$ in this example. So if you write $\langle w, z \rangle$, that's the sum of the $b_i \bar{a}_i$. But this is equal to $\overline{\sum_i a_i \bar{b}_i}$, because the complex conjugate of a product is the product of the complex conjugates, and complex conjugation has the property that applying it twice gives back the original number: all you're doing is switching the sign in front of the imaginary part, and if you apply it one more time you get a plus again. That's why if you apply the bar to $a_i \bar{b}_i$, you get $\bar{a}_i b_i$. So overall, $\langle w, z \rangle = \overline{\langle z, w \rangle}$. That's what I wrote.

Now we are ready to work with inner products for vector spaces both over the real numbers and over the complex numbers. Also note that this template, this set of axioms, in fact works for real numbers too: over $\mathbb{C}$ the product takes values in the complex numbers, but if $V$ is over $\mathbb{R}$, it takes values in the real numbers, and the complex conjugate of a real number is the same number, because if a number has no imaginary part, its conjugate is itself: if it equals $x$, then its conjugate is also $x$. Therefore this axiom works in both cases, over $\mathbb{R}$ and over $\mathbb{C}$. That's the slight modification we have to make in going from bilinear forms to sesquilinear forms.

All right, what about examples? I gave you the analogue of the dot product, and now you can guess how to do the example with the integral. Suppose now you have polynomials with complex coefficients. Such a polynomial gives you a map from $\mathbb{C}$ to $\mathbb{C}$, but the interval $[0,1]$ is still part of the domain, so restricted to it you have a function; it's just that the function takes values in the complex numbers in general. And there is a simple cure for this: you just put complex conjugation on the second polynomial, $\langle p, q \rangle = \int_0^1 p(x)\,\overline{q(x)}\,dx$, and this will satisfy the axioms, by analogy with the dot product. Now, if we extend this definition to more general functions on this interval with complex values, that leads to the Hilbert space that is essential in quantum mechanics. There we are talking about a complex vector space, a vector space over the complex numbers, with an inner product, and that is the playground for all quantum mechanical models. Hopefully we'll have time to discuss this before the end of the semester.
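And a sketch of the $\mathbb{C}^n$ formula $\sum_i a_i \bar{b}_i$, checking conjugate symmetry and that $\langle x, x \rangle$ comes out real and non-negative (plain Python; names are mine):

```python
def hermitian_inner(x, y):
    # <x, y> = sum a_i * conj(b_i); conjugate-symmetric: <y, x> = conj(<x, y>).
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 1j, 2j]
y = [3, 1 - 1j]
print(hermitian_inner(x, y))               # (1+5j)
print(hermitian_inner(y, x).conjugate())   # (1+5j): conjugate symmetry
print(hermitian_inner(x, x))               # (6+0j): real and >= 0
```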
Any questions on this? Now, what are we going to do? We're going to discuss some properties. Remember, one of the key things is the notion of the length of a vector. In both cases, over the real numbers or over the complex numbers, the length is going to be physical: it's a real number, and a positive real number for any non-zero vector. But we also have the notion of orthogonality. In the case of a real vector space we have a notion of angle as well; for a complex vector space we don't have a notion of angle anymore, we lose it, but we do have the notion of orthogonal vectors: vectors for which the dot product, or the inner product more properly, is equal to zero. That's a very powerful notion, which we can use to settle various issues, and in particular we'll get some very nice results about the eigenvalues of operators of a certain class, which are called self-adjoint and normal. That will be next week. So that's our plan: to study the properties of inner products and then what's called the spectral theory of self-adjoint and normal operators. That's the plan for the rest of the semester.

The proper term is "inner product space": it means a vector space endowed with a particular choice of inner product. I already mentioned that there are different choices of inner product. For instance, in the example here you could put arbitrary coefficients which are positive: some numbers $\alpha_i$, positive real numbers (bless you). That gives you a whole family of inner products; maybe the proper notation should record which one we chose.

So what kinds of things can we prove? A few properties. First, the behavior of the inner product when you fix one of the vectors. Here $V$ is over $F$, where $F$ is $\mathbb{R}$ or $\mathbb{C}$, and we fix an inner product, that is, a structure satisfying the four axioms. For any $v$ in $V$, you have a map from $V$ to $F$ which sends $u$ to $\langle u, v \rangle$. So we pin down the second argument and allow the first one to vary. This way we get a function of one variable, and this function is going to be linear, so it's a linear functional. That's kind of obvious; it doesn't even require proof, because it follows immediately from the first two axioms of the definition. If $F = \mathbb{R}$, the same is true for the first argument, if we fix $u$ and let $v$ run over vectors in $V$. But not if $F = \mathbb{C}$, because in this case, if you fix $u$ and multiply $v$ by a constant $c$, you're going to get $\bar{c}$ times the value. That means this map is not linear: it sends the sum of two vectors to the sum, but it sends a scalar multiple to the conjugate multiple. (Was there a question? Yes, that's true. Sorry, I get sloppy closer to the end of the lecture.)

What else? If one of the vectors is zero, then you get zero. This follows from the fact that we are evaluating a linear functional, which I should have introduced with a name: for every $v$, we get a linear functional, let's call it $\varphi_v$, with $\varphi_v(u) = \langle u, v \rangle$. Every linear functional evaluated at the zero vector is equal to zero, because a linear map always sends the zero vector to the zero vector. So $\varphi_v(0) = \langle 0, v \rangle = 0$. And then $\langle v, 0 \rangle$, by the third axiom, is $\overline{\langle 0, v \rangle} = \bar{0} = 0$.
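A numeric check of these two facts, linearity in the first slot and vanishing on the zero vector (plain Python; names are mine):

```python
def hermitian_inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

v = [1j, 2]
u = [1 + 1j, -1]
c = 3 - 2j
cu = [c * a for a in u]
print(hermitian_inner(cu, v))       # (-5-1j): linear in the first slot...
print(c * hermitian_inner(u, v))    # (-5-1j): ...equals c * <u, v>

zero = [0, 0]
print(hermitian_inner(zero, v))     # 0: phi_v(0) = 0
```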
The complex conjugate of zero is obviously zero, because zero is a real number. So that's the first property. Next, I'm going to introduce the length, the norm, of a vector. Define, for a vector $x$ in $V$, the norm $\|x\| = \sqrt{\langle x, x \rangle}$. First of all, this makes sense, because according to axiom four, $\langle x, x \rangle$ is a non-negative real number, and every non-negative real number has a unique square root which is also a non-negative real number. Note that it is a non-negative real number even in the case when the vector space is defined over the complex field, as I explained over there. We call this a norm; we want to call it a norm because we want the definition to cover the case of complex vector spaces, but morally you should think of it as the length, or magnitude, of $x$.

And what are the properties? Lemma 2. The first is that $\|c\,x\| = |c|\,\|x\|$ for any $x$ in $V$ and $c$ in $F$; this covers both the case when $F$ is $\mathbb{R}$ and when it is $\mathbb{C}$. How do we know this? I'm going to have several parts here, but let me give the proof of this one before I formulate the other parts, because it will give you an idea of how one proves things for vector spaces with an inner product. By definition, let's write $\|c\,x\|^2 = \langle c\,x, c\,x \rangle$. By axiom two, we can pull the scalar out of the first argument, so we get $c\,\langle x, c\,x \rangle$. Then, by axiom three (did I explain this? well, let me do it slowly), $\langle x, c\,x \rangle = \overline{\langle c\,x, x \rangle}$: I have switched the arguments, and I can do that for $F$ equal to $\mathbb{R}$ or $\mathbb{C}$, but I have to put a bar over it. Pulling out the scalar again, this is $\overline{c\,\langle x, x \rangle} = \bar{c}\,\overline{\langle x, x \rangle}$. So altogether we get $c\,\bar{c}\,\overline{\langle x, x \rangle}$. Now $c\,\bar{c} = |c|^2$, the square of the absolute value of the complex number (or the square of the absolute value of a real number). And $\langle x, x \rangle$ is in $\mathbb{R}$ by axiom four, so I can remove the bar, and I get $|c|^2\,\|x\|^2$. The conclusion is that $\|c\,x\|^2 = |c|^2\,\|x\|^2$; both are non-negative real numbers, which have uniquely defined non-negative square roots, and we get the claim.

That's the first part, and it is what we expect: if you multiply a vector by a number, its length gets multiplied by the absolute value of that number. For real numbers, by the way, if you multiply a vector by $-2$, the length doesn't get multiplied by $-2$; it gets multiplied by $2$, obviously, because you cannot have a negative length. For complex numbers it's a little more subtle, because what you multiply by is the proper notion of absolute value for complex numbers, and the proper notion of absolute value for a complex number is exactly a length. Maybe I should recall what that is: if you have a complex number $c$, shown as a vector on the complex plane, with the real part of $c$ on one axis and the imaginary part on the other, then the length of that vector is exactly what we write as the absolute value of the complex number.

So what's next? Part two is that $\|x\| \ge 0$ for any $x$, and it is equal to zero if and only if $x$ is the zero vector of $V$, which is more or less equivalent to axiom four. And then we have two more statements, the first of which concerns the length of a sum of two vectors.
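A quick check of Lemma 2, part one, over $\mathbb{C}$ (plain Python; names are mine):

```python
import math

def hermitian_inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    # <x, x> is real and non-negative, so the square root is defined.
    return math.sqrt(hermitian_inner(x, x).real)

x = [1 + 2j, 3 - 1j]
c = -2 + 1j
cx = [c * a for a in x]
print(norm(cx))            # ||c x||
print(abs(c) * norm(x))    # |c| * ||x||: the same number
```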
For the third one, we introduce the notion of orthogonality. Two vectors $u$ and $v$ in $V$ are called orthogonal, where $V$ is a vector space endowed with a particular inner product. You have to always remember that there isn't a unique inner product on a given vector space, for the simple reason that you can always multiply any given inner product by any positive real number and still get another inner product. So when I write "$u$ and $v$ vectors in $V$", the proper way would be to say "$V$ endowed with a particular choice of inner product", and then they are called orthogonal with respect to this specific inner product. In other words, two vectors could be orthogonal with respect to one inner product but not with respect to another. Always keep in mind that when I speak of a vector space here, I am speaking of a vector space endowed with a particular inner product, and everything is with respect to that specific inner product. So: they are called orthogonal if their inner product is zero.

And the first property of orthogonality is this: if $\langle u, v \rangle = 0$ for all $v$ in $V$, then $u$ is equal to zero. This is a very strong statement, and it will be a very important tool for us. If you have a vector which is orthogonal to everything, with respect to a given inner product, it must be zero. We already know the converse: we showed it up there, that if one of the vectors is zero, it is orthogonal to everybody. But this is a strong statement. Somehow you have a vector, you don't know what it is, but somebody gives you the information that this vector is orthogonal to everybody in your vector space. Then you know for sure that you're talking about the zero vector; there is no other vector like this. At first you start wondering how this could possibly be. But the trick is to use axiom four. If $\langle u, v \rangle = 0$ for all $v$ (sorry, I didn't write it correctly at first), then $u = 0$: just take $v = u$. The condition says "for all $v$", in particular for $u$ itself, so $\langle u, u \rangle = 0$. But $\langle u, u \rangle = 0$ only if $u = 0$. That's the thing. The mathematical term for this is non-degeneracy. Because in principle, remember, I gave you an example where you drop some of the terms in the sum defining the inner product: in the example with $n = 2$, the real pairing was just $a_1 b_1$. Then you have a lot of vectors whose pairing with themselves is equal to zero. We want to avoid that: we want a situation where the only vector which has norm zero is the zero vector, and everybody else has a genuine norm, a length. This is really powerful, because it allows you, for instance, to get this result: take $v = u$, you get $\langle u, u \rangle = 0$, and therefore $u = 0$ by axiom four. By the way, the same is true for the other argument: if $\langle v, u \rangle = 0$ for all $v$, then $u = 0$, which follows by switching the arguments and applying axiom three, but also simply by substituting $v = u$ again.

Next, we have an analogue of the Pythagorean theorem. Suppose you have two orthogonal vectors. By the way, this notion is equivalent to the notion of "perpendicular", but usually we reserve "perpendicular" for the specific example of the dot product on $\mathbb{R}^n$; "orthogonal" is the generalization of that.
So let $u$ and $v$ in $V$, equipped with a particular inner product, be orthogonal vectors. If we were in dimension two or three, we could draw them; we know what it means: the angle between them is 90 degrees. Then consider $u + v$: you can draw $u$ here, and then this is $u + v$; you get a right triangle, and $u + v$ is the hypotenuse. We know from Pythagoras that $\|u + v\|^2$, the squared length of this vector, equals $\|u\|^2 + \|v\|^2$. Now we can easily prove this from our definition. Indeed, $\|u + v\|^2 = \langle u + v, u + v \rangle$. Expanding by linearity (for the second argument, we switch the order, apply linearity in the first argument, and switch again; we have seen that this works), we get $\langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle$. The two middle terms are zero by our assumption that $u$ and $v$ are orthogonal: $\langle v, u \rangle = \overline{\langle u, v \rangle}$, and both of them are zero. What remains is $\|u\|^2 + \|v\|^2$.

You look at this and think: wow, we proved the Pythagorean theorem without any effort. How can that be? Of course, there is some cheating here. What we have proved is the statement for general inner product spaces: if we introduce the notion of orthogonality with respect to a given inner product, as the statement that the inner product of the two vectors is equal to zero, then everything is legitimate; it's a very simple calculation following from the axioms. The cheating appears if you specialize to the case when $V$ is $\mathbb{R}^n$ and the inner product is the familiar dot product, and you say that orthogonality, from that perspective, means that the two vectors are perpendicular. How do you know that? For that, we use the second formula for the dot product: the dot product is equal to the product of the lengths times the cosine of the angle between the vectors. But that formula is hard to prove; we have shifted the difficulty of the Pythagorean theorem onto proving it. Let's say $\mathbb{R}^3$, because strictly speaking we don't know how to imagine two vectors and the angle between them in dimension greater than three (at least I don't). But in dimension two or three we do have a notion of angle, and if we take the dot product to be $a_1 b_1 + a_2 b_2 + a_3 b_3$, then $x \cdot y$ is indeed equal to the length of $x$ times the length of $y$ times the cosine of the angle between them. That follows from simple trigonometry, which is well known; this is actually Exercise 15 of Chapter 6A, so you'll have a chance to revisit it. Once we have that, it implies that the dot product being equal to zero is equivalent to the angle being $\pi/2$ or $3\pi/2$, which means that the vectors are perpendicular. And then you combine that with this very simple argument from linear algebra, and you get the Pythagorean theorem. So don't believe that we somehow found a magic wand to prove the Pythagorean theorem without any effort.
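Written out, the calculation above is just:

```latex
\|u + v\|^2 = \langle u + v,\, u + v \rangle
            = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle
            = \|u\|^2 + 0 + 0 + \|v\|^2 .
```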
Then there are a few other things. I want to mention one, which I'm not going to prove because I have very little time, but it's very easy to follow in the book. So this was one, two, three; the Pythagorean theorem was part four; and now we'll have part five, which is called the Cauchy-Schwarz inequality. It says that for any $u$ and $v$ in a vector space with an inner product, $|\langle u, v \rangle| \le \|u\|\,\|v\|$. The inner product here is a number, either a real number or a complex number; in both cases we have the notion of absolute value, which is obvious for real numbers and which I just explained for complex numbers as well: the length of the corresponding vector on the complex plane. This actually justifies the formula from the beginning; I mentioned this inequality earlier. In the real case, if you remove the bars, it shows that $\langle u, v \rangle$ lies between the product of the lengths and minus the product of the lengths, which shows that it is equal to that product times a number between $1$ and $-1$, and you might as well call that number the cosine of something. The question is whether it's exactly the same angle that we normally associate with two vectors; that requires the additional proof which comes from trigonometry. But formally, you can think of this inequality as a precursor of the definition of the angle, one which also works for complex vector spaces: for a complex vector space, you could decree that there is a notion of an angle between two complex vectors. That's the fifth part.

In the remaining time, I'm going to discuss the construction of orthogonal bases. What will actually be interesting is not just that two vectors are orthogonal. What we're interested in is the situation where we have a basis of a vector space with an inner product such that these vectors are orthogonal to each other, pairwise orthogonal; in fact, even better, orthonormal, which means that in addition the norm of each of them is equal to one. If we have such a basis, there will be some magical things we can do with linear operators and so on. But first we have to learn, for instance, that such a basis exists; it's not obvious. In the case of the dot product, of course, we know that it exists. Let me explain: for $V = \mathbb{R}^n$ with $\langle x, y \rangle$ the dot product, we have the standard basis $e_1, \dots, e_n$, which is orthonormal (and I'm introducing this notion right now). By the way, $e_i$ of course means the vector which has $1$ in the $i$-th position and zeros everywhere else. In this case, $\langle e_i, e_j \rangle = \delta_{ij}$, the Kronecker symbol. In other words, $e_i$ is orthogonal to $e_j$ for all $i \ne j$, and the norm of each $e_i$ is one, which is equivalent to saying $\langle e_i, e_i \rangle = 1$.
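A tiny check of this statement for the dot product (plain Python; names are mine):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def e(i, n):
    # Standard basis vector: 1 in position i, zeros elsewhere.
    return [1.0 if j == i else 0.0 for j in range(n)]

n = 3
for i in range(n):
    print([int(dot(e(i, n), e(j, n))) for j in range(n)])
# [1, 0, 0], [0, 1, 0], [0, 0, 1]: the Kronecker delta
```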
Now we carry this notion, orthogonal and orthonormal bases, over to an arbitrary vector space with an inner product. We also have the notion of an orthogonal basis, where we relax the second condition: the norm of each vector need not be one, it could be anything. But if you have an orthogonal basis, you can divide each of the vectors by its norm, and, according to Lemma 2 part one (if you multiply a vector by a number, its length gets multiplied by the absolute value of that number), you can always rescale a vector to make its norm equal to one. The big result here is that an orthonormal basis exists for every vector space with an inner product, which is important because it is not obvious. In this case it's obvious because the formula is so simple, but in general, who knows, right? So the big result is: for any finite-dimensional vector space $V$ with an inner product, there exists an orthonormal basis. Moreover, there is an algorithm: given any basis of $V$, just an ordinary basis, meaning a set of vectors which are linearly independent and span our vector space, there is an algorithm producing an orthonormal one, an orthonormal basis.

I probably have time to explain it in the case of a two-dimensional vector space. So here it is. Suppose first that $n = 2$ and $F = \mathbb{R}$. You have a two-dimensional space, and you start with two vectors: let's suppose you have a basis $y_1$ and $y_2$. We want another basis which is orthonormal. We have to start somewhere, and the basis has two elements, so let's start with $y_1$, then construct a second vector which is orthogonal to it, and also normalize both. The idea is that we want $e_1, e_2$ which are orthonormal. The way we do it is: we start with $y_1$, and we can divide $y_1$ by its norm; obviously, since these vectors form a basis, they are both non-zero, so the norm is non-zero and we can divide.
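Here is a sketch of this two-vector procedure, the first step of what the book calls the Gram-Schmidt process, for the dot product on $\mathbb{R}^2$ (plain Python; names are mine, and this assumes the real case described above):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt_2d(y1, y2):
    # Normalize y1, then strip the e1-component from y2 and normalize the rest.
    e1 = [a / math.sqrt(dot(y1, y1)) for a in y1]
    c = dot(y2, e1)                             # component of y2 along e1
    w = [b - c * a for a, b in zip(e1, y2)]     # orthogonal to e1 by construction
    e2 = [a / math.sqrt(dot(w, w)) for a in w]
    return e1, e2

e1, e2 = gram_schmidt_2d([1.0, 1.0], [0.0, 1.0])
print(e1, e2)        # an orthonormal pair
print(dot(e1, e2))   # ~0.0
```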