I want to go back to what we discussed at the very end because I feel like I rushed it a little bit and it's an important point. So I want to talk about this one more time.
Remember, our setup is that we have a finite-dimensional vector space V over a field F.
We defined the dual space to it: initially, V' is defined as the vector space of linear maps from V to F.
Therefore, from this perspective, it's natural to think about elements of this vector space as functionals, that is, as functions on V.
But at the end of last lecture, I presented a different point of view in which
instead, we think about a pairing: given a functional φ in here and a vector v in here, we get a number φ(v).
This number I would like to denote ⟨φ, v⟩, so that it looks symmetrical between functionals and vectors.
The nice thing about it is that this notation immediately suggests the possibility to reverse the roles: think about the pairing between V' and V, or between V and V', reading this expression from left to right or from right to left, which I mentioned last time.
This immediately suggests that there is something more going on between the vector space and its dual: that, in fact, there is a true duality between them.
Meaning that not only is V' dual to V, but V itself is also dual to V'. In fact, you cannot really separate them and say which one is left and which one is right; they are on equal footing.
That creates a very interesting new layer in linear algebra, at least in the linear algebra of finite-dimensional vector spaces.
Let me actually prove this.
Note that I write 'equals' here: V = V''.
Normally we say two spaces are isomorphic to each other.
If I write 'equal', I mean not just that there is an isomorphism between them, but that there is a canonical isomorphism: a particular isomorphism which can be seen by everyone. There is a certain objective quality to it; there is no choice involved. When we have a canonical thing in mathematics,
it is highly prized, and this is one of them. Let me explain.
So any vector v in V can be treated as an element of the dual space of the dual, meaning that v can be viewed as a map from V' to F, which is what an element of V'' would be.
It's defined by saying that φ goes to what I denoted here as ⟨φ, v⟩.
In our old notation, if we think about elements of V' as functionals, this is just evaluating the functional on v. Before, we fixed φ and varied v; but now, from this new perspective, we fix v and vary φ. If we do that, you see, let me indicate it, maybe using a different color: the argument now is φ, and for every φ we get a value, right? So we get a function from V' to F which corresponds to v. This map is linear, because clearly φ₁ + φ₂ will go to (φ₁ + φ₂)(v), which is the same as φ₁(v) + φ₂(v), and likewise λφ goes to λφ(v). So it is indeed linear.
So I have constructed, we have defined, a map from V to V'', sending v to this functional, meaning the functional on this board associated to v.
Again, check that this is a linear map.
You see that we have constructed it without any choices.
We don't need a choice of a basis. We don't need a choice of any other structure.
We just need the vector space V, and V' is defined on top of it.
Then we have this natural map from V to V''. We have constructed it without any choices; that's what I mean by canonical. It's a canonical map.
Now we have a theorem which says that this map, let's call it D, is an isomorphism. So there is a canonical isomorphism between V and V'', the dual of the dual, which is what I expressed by saying that there is a genuine duality.
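In symbols, here is the map in question, just restating the construction above:

```latex
D \colon V \to V'', \qquad D(v)(\varphi) = \varphi(v) = \langle \varphi, v \rangle
\quad \text{for all } \varphi \in V'.
% Linearity of D itself: for any fixed \varphi,
D(v_1 + v_2)(\varphi) = \varphi(v_1 + v_2) = \varphi(v_1) + \varphi(v_2),
\qquad
D(\lambda v)(\varphi) = \varphi(\lambda v) = \lambda \, \varphi(v).
```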
A genuine duality is not just when you can go from one to the other, but when you can go from one to the other and then back by applying the same procedure, and you return to the original point.
That's exactly what happens here: you start with V, you pass to V', and if you apply this procedure one more time, you come back to the original.
You see, that's duality. It's an isomorphism. Okay, how to show this?
Well, you see, by now we have such great experience with isomorphisms that it's a piece of cake. Proving such a thing is very easy for us now, because what do we know?
We know the dimensions of the spaces, and we know that they're equal. We have discussed this, because we know the dimension of a space of linear maps in general, and in particular of the space of linear maps from V to F; so the dual has the same dimension as the space itself, and hence dim V'' = dim V' = dim V. And when the dimensions are the same and you have a linear map, it suffices to prove that it's injective.
To prove that D is injective, that in turn is equivalent to saying that the null space of D is just the zero vector in V.
So let's suppose that v in V is in the null space. That means that D applied to v is zero.
But what is D of v?
D(v) is a functional, right? It is the functional on V' which sends φ to φ(v). Saying that it's equal to zero is equivalent to saying that φ(v) is equal to zero for all φ.
Now we want to derive from this that this can only happen if v is equal to zero.
In the proof, we will choose a basis. The map itself is constructed without a basis, but it often happens that to prove something about such a map, we may need, or it may be convenient, to choose a basis.
Don't take that as a sign that the map depends on the basis, it doesn't.
I constructed it before making any choices, but now, within this proof, I will choose a basis v₁, ..., vₙ of V, and let f₁, ..., fₙ be the dual basis, which we discussed last time. It is uniquely determined by the property that I wrote last time:
that the value fᵢ(vⱼ) is given by the Kronecker delta symbol δᵢⱼ, which is 1 if i equals j and 0 otherwise. Okay?
Now we have a formula for v: because v₁, ..., vₙ is a basis, we can write v as a linear combination, a unique linear combination, v = a₁v₁ + ... + aₙvₙ. Okay? We are saying that D(v) is zero, and we're going to use the fact that φ(v) is equal to zero for all φ, in particular for every fᵢ. But fᵢ(v) is aᵢ, and it is equal to zero. Since this holds for all i, we conclude that all of the coefficients are equal to zero, and therefore v itself is zero. And that completes the proof.
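Written out, the final step of the proof is just this computation, with the dual-basis property doing all the work:

```latex
v = \sum_{j=1}^{n} a_j v_j
\quad\Longrightarrow\quad
0 = D(v)(f_i) = f_i(v) = \sum_{j=1}^{n} a_j f_i(v_j)
  = \sum_{j=1}^{n} a_j \delta_{ij} = a_i
\quad \text{for each } i,
```

so every coefficient aᵢ vanishes, and v = 0.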
This δᵢⱼ, which I introduced last time, is called the Kronecker symbol: it is equal to 1 if i equals j and 0 otherwise. Okay? So we find that D is indeed an isomorphism: V and V'' are isomorphic to each other.

Now, contrast that with the fact that any two vector spaces of the same dimension, say dimension n, are isomorphic to each other, which we learned previously. Right? You have two vector spaces V₁ and V₂, and there exists an isomorphism from V₁ to V₂.
Isomorphism means a linear map which is bijective, or equivalently invertible, right?
So what is the difference? The difference is in how you construct it.
First of all, there are many such isomorphisms. In fact, to get such an isomorphism, we have to pick a basis in V₁ and a basis in V₂.
And then the constructed isomorphism is the one that sends the elements of the first basis to the elements of the second. So even though the spaces are isomorphic, there isn't a unique isomorphism: there is a choice involved if you want to construct one, and there are many different choices. There are many such isomorphisms, which depend on the choice of bases in V₁ and V₂.

In general, that is. There is one special case, one exception: when V is Fⁿ, which is defined as columns of elements of F. In this case there is a canonical basis. But in general there isn't. Therefore, there are a lot of results like this in mathematics: something exists, but not in a unique way, and there is no objectively obvious or preferred choice among the many possibilities.

But this statement is really powerful, because in this statement we claim that there is an isomorphism between V and V'' which does not depend on any basis. That's very interesting. It means that actually all vector spaces come in pairs: for every vector space V, there is its dual, and the dual of the dual is V itself. You break all vector spaces into pairs. The two halves of a pair know about each other, and there are many things we can get for one from the other. For instance, for every basis in V, we get the dual basis in V', and vice versa: for a basis in V', we get a dual basis in V. Also, for a linear map from V to W, we get a dual linear map from W' to V', which we'll talk about in a moment. Okay?

By the way, because Fⁿ has a canonical basis, in this one case V' is actually canonically isomorphic to V. This is the one case where taking the dual doesn't change anything: as I will explain in a moment, you can think of elements of the dual space as rows rather than columns, and rows and columns are in one-to-one correspondence. But this, again, is very rare, a unique case. In all other cases, V' is not equal to V, but V'' is equal to V. Don't get confused: we are not saying that V' equals V; we are saying that V'' is, that the dual of the dual is canonically isomorphic to V itself. For Fⁿ, even the first dual is canonically isomorphic to the space itself. That's why in our earlier course of linear algebra, where pretty much our only example was Fⁿ, we couldn't really see the need for the dual space. But now that we consider the more general theory, we do see that these things naturally arise, okay? And they have interesting properties.

All right. Now let me quickly go over the numerical representation. Choose a basis v₁, ..., vₙ of V, let me actually write it this way here, and the dual basis f₁, ..., fₙ of the dual. Then every vector here can be represented as a sum, a linear combination: v = a₁v₁ + ... + aₙvₙ. Likewise, every functional here can be represented as a linear combination of these guys: φ = b₁f₁ + ... + bₙfₙ. Then the pairing that I was talking about gets a numerical representation as the product of the row of b's and the column of a's: ⟨φ, v⟩ = Σᵢ bᵢaᵢ. There are n of them, both a's and b's: we get a row of length n and a column of length n. That's exactly the good situation where we can multiply the two. This shows that, whereas it is convenient to represent vectors of a general vector space relative to a given basis as columns, if we want to think about the pairing between a vector space and its dual, it is convenient to represent one of them by columns and the other by rows.
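Here is a minimal numerical sketch of this row-times-column picture; the coordinate values are made up for illustration:

```python
import numpy as np

# Pairing in coordinates: phi = sum b_i f_i (dual basis), v = sum a_j v_j (basis).
# Then <phi, v> = sum_i b_i a_i: a row of b's times a column of a's.
b = np.array([[2.0, -1.0, 3.0]])       # row: coordinates of phi in the dual basis
a = np.array([[1.0], [4.0], [0.5]])    # column: coordinates of v in the basis of V

pairing = b @ a                        # 1x1 matrix holding the number <phi, v>
reversed_pairing = a.T @ b.T           # reversed roles: row of a's times column of b's
print(pairing.item(), reversed_pairing.item())  # same number: the field is commutative
```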
Because then the pairing between them can be expressed as a product of matrices. You cannot multiply two columns, and you cannot multiply two rows, but you can multiply a row and a column.

A comment about notation: sometimes, in physics for example, they like to put a lower index on one of them and an upper index on the other. For example, a physicist would write bⁱ with an upper index instead of a lower one; then this pairing can be written as the sum over i of bⁱaᵢ. Actually, in old physics papers they use the notation where, if you have an upper and a lower index, you don't have to write the summation sign: it is implied that you sum over them. So there is a certain system to that, and there is a certain advantage to it. But I'm not going to use it; I just wanted to comment on it in case you have encountered it before. It is a particular notational scheme. I don't want to use it because I want specifically to emphasize the reversibility of this procedure: that we can think of this as a pairing between V' and V in either order. If you think about elements of V' as functions on V, you represent them as rows and represent these guys as columns, and then you write it like this. But if you reverse the roles, then instead of writing v as a column, you write it as a row, and you write φ as a column. If you multiply, you get the same result. That's exactly the symmetry I was talking about, right? If we reverse the roles, v is represented as a row of the same numbers, φ is represented as a column of the same numbers, and the pairing is written in the opposite order: the row of a's multiplied by the column of b's. The answer is the same, because we are multiplying elements of a field, and the product in a field is commutative.

Okay, but there is one more thing, which I talked about at the very end of last lecture: the idea that you can apply this also in the following context. You have a linear map T from V to W, and you have the induced, natural, map T' from W' to V'. You see, the strange thing is that under this duality, the direction in which the map goes is reversed, which I explained last time, right? Then what you can do is choose a basis β here and a basis γ there, and we know by the general procedure how to construct the matrix M(T)_{β,γ} of this linear map. But we also have the dual basis to β in V', which is called β', and the dual basis to γ in W', which is called γ'. So here we can also construct a matrix, but now it will go from γ' to β'. The statement that I made last time is that these two matrices are related by a very obvious operation, namely taking the transpose: this yellow matrix, M(T')_{γ',β'}, is equal to the transpose of M(T)_{β,γ}.

Okay, so how to see that? The rules that I explained, this idea of a pairing, allow you to prove it very quickly. You realize that you can combine this linear map with this pairing, and then you get an expression like ⟨φ, T(v)⟩. What does it mean? On the one hand, T is in the middle, and you can make it act on the thing to its right, which is v. You apply T to v, you get an element of W. Then you can take the pairing: φ is in W', T(v) is in W, so everything is consistent.
You see: you apply T to v; v starts life in V, T is a map from V to W, so T(v) is a vector that lives in W; and φ is in W', so you can take the pairing between φ and T(v). That's one way to read it. But another way to read it is to say that we first apply T' to φ; then T'(φ) is in V', and we can take its pairing with v.

In the numerical realization, it is represented as follows. Say the dimension of V is n and the dimension of W is m. The functional φ is represented by a row; let me call its entries b₁, ..., bₘ, with a different letter, because it is in the dual of W, not of V. So φ corresponds, under the basis γ', to the row of b's, and v corresponds, under the basis β, to a column a₁, ..., aₙ, like before. The matrix of T will be a matrix of size m by n; let's call it A, so A is M(T)_{β,γ}. You see that this expression can be written as a product of three matrices. The first of them is a row: the row of the coefficients of φ relative to γ'. The second is a matrix of size m by n: the matrix of T relative to β, γ. Then there is a column. This triple product is exactly what represents this number, and indeed we can multiply: the sizes match at every step.

But you see, the point is that there are two ways to multiply three matrices. You either multiply the last two and then multiply the result with the first; this is what's written here, right? You hit the column vector with A, and you get the numerical representation of the vector T(v) in W. Then you take the pairing between the row and the column: the row comes from φ, and the column you obtained in this way. That's one way. The second way is that you start from the other side: you first multiply the row by A. What that is doing is exactly applying the dual map to φ. As a result you obtain a row, but now it has length n, because it lives in V', and then you take its pairing with the column of a's.

So what's happening here? If you multiply this row of b's by the matrix A, which is m by n, you get a row again, but this row has n entries, so you can take the pairing with that column of a's. And if you take that resulting row and organize it instead as a column, what are you going to get? You get exactly the transpose matrix multiplied by the column of b's: Aᵀ times the column of b₁, ..., bₘ. And that's exactly the statement, because what is this matrix? It is the matrix of the linear map T'. I wrote it in a slightly unconventional way, because normally we read maps in one direction, but now left and right get switched: from the point of view of the dual spaces, T' acts from W' to V', right? If you want to write the matrix of T', you represent elements of W' and V' not as rows but as columns; and then you ask, by which matrix do you need to multiply the column of b₁, ..., bₘ so that you get the same result as multiplying the row by A and then reorganizing it as a column? You see immediately that it is exactly the transpose matrix Aᵀ. That is the proof of this formula. Okay, questions?
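Here is a small numerical sketch of this argument; the matrix A and the coordinate vectors are made-up values, and the whole proof boils down to the associativity of matrix multiplication:

```python
import numpy as np

# T : V -> W, dim V = 3, dim W = 2, with matrix A = M(T) in the bases beta, gamma.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])
b = np.array([[5.0, -2.0]])            # row: coordinates of phi in gamma'
a = np.array([[1.0], [0.5], [2.0]])    # column: coordinates of v in beta

lhs = b @ (A @ a)    # pair phi with T(v): apply A to the column first
rhs = (b @ A) @ a    # apply T' to phi first (row times A), then pair with v
print(np.allclose(lhs, rhs))           # True: <phi, T(v)> = <T'(phi), v>

# Reorganizing the row b @ A as a column gives A.T @ b.T,
# so in column form the matrix of T' is exactly the transpose of A.
print(np.allclose((b @ A).T, A.T @ b.T))  # True
```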
This is a very convenient way of thinking about things, which, by the way, also shows something else: if you take the transpose twice, you get back the original matrix; the transpose is just the reflection with respect to the diagonal. This reflects the second statement: if you take T' and then take its dual (T')', you get back T. Because T' goes from W' to V', its dual (T')' is supposed to go from V'' to W''. But we have seen that V'' is canonically V and W'' is canonically W, so it could well be a map from V to W; and of course, it is the original one. So there is a duality also at the level of maps: if you take the dual of the dual, you get back the original map.

All right, let's move on. I want to give you an application. For now, this may all seem like a curiosity. We have realized that there are many interesting finite-dimensional vector spaces; even though any two vector spaces of the same dimension are isomorphic, they are isomorphic non-canonically, so it's like a zoo of different vector spaces, even of the same dimension. Yet there is an organizing principle which breaks all vector spaces of dimension n into pairs, where every pair consists of a vector space and its dual. But what does that do for us? What applications, what mileage, can we get out of it?

Here is one application, which is called polynomial interpolation, or Lagrange interpolation. Okay, in this case we take one of our favorite vector spaces, V = Pₙ(R), polynomials of degree less than or equal to n over R. Its dimension is n + 1. Last time, I explained that the dual space contains a family of functionals which are labeled by points of the real line. For every point a on the real line, every real number, we have a functional f_a, that is to say an element of V', whose value on a polynomial p is the value of that polynomial at the point a: f_a(p) = p(a). Maybe I should emphasize here, we talked about this extensively last time, how the vector space V and its dual V' have totally different flavors: V consists of polynomial functions, while V' consists of, for example, things like evaluation of functions. There is no immediate analogue of this functional in V, no polynomial that necessarily matches it. Well, you could say you could take the degree-one polynomial t − a. Yes, that's true, but it's not the same; it has a different flavor from this functional. This functional is really about evaluating something, and it doesn't fit in the original vector space; it really belongs to the dual space, okay?

So now, the dimension is n + 1. We know that any linearly independent subset of this cardinality is going to be a basis. I'm going to produce one such basis by taking n + 1 distinct real numbers. Suppose you have a collection a₁, ..., aₙ₊₁; actually, it may be more natural to call them a₀, ..., aₙ, it's a little bit nicer. Distinct real numbers, that is to say, aᵢ ≠ aⱼ if i is not equal to j. Then I claim the lemma: if I take f_{a₀}, f_{a₁}, ..., f_{aₙ}, this is a basis of V'. That is, the functionals corresponding to evaluating at distinct points are linearly independent.
Their linear independence, since there are n + 1 of them, implies that they form a basis. We have a lot to cover, so I'll leave the proof to you; it's very easy. Okay?

So if there is such a basis, it implies that there exists a dual basis in V itself. That means there are polynomials p₀(t), ..., pₙ(t), uniquely defined by the property that f_{aᵢ}(pⱼ) = pⱼ(aᵢ) = δᵢⱼ, the Kronecker symbol. These are called the Lagrange polynomials. Actually, there is a simple formula for them: pⱼ(t) is the product over all k not equal to j of (t − aₖ)/(aⱼ − aₖ). Check that these polynomials have this property. Okay?

So now you see, we have a basis and a dual basis. What does it mean? It means that every polynomial can be written in terms of this basis. But also, for any other value b, any real number, we have the functional f_b, evaluating a polynomial at the point b. And we know on general grounds that there exist some coefficients c₀, c₁, ..., cₙ such that f_b is equal to the sum of cᵢ f_{aᵢ}. These coefficients are universal, independent of the polynomial; they depend only on the numbers aᵢ, from which we built the basis of the dual space, and on the point b. Maybe I should indicate b as an upper index: once you fix a₀, ..., aₙ, these numbers depend only on b.

We obtain the following theorem, basically for free, just from knowing the linear algebra: that for a vector space there is a dual vector space, that there is a notion of a dual basis, and that the evaluation functionals form a basis of V'. The theorem is that for any polynomial p in Pₙ(R), it doesn't matter which one, its value at the point b is always given by the combination of its values at the points aᵢ with these universal coefficients cᵢ(b): p(b) = Σᵢ cᵢ(b) p(aᵢ). You see, whatever the polynomial, it's interesting: if you know a polynomial at n + 1 points, then you know its value everywhere, at any other point. That much is perhaps clear, because a polynomial of degree n has n + 1 independent parameters; but the point is that there is actually a formula, a universal formula, which gives you the value of any polynomial at any given point just from knowing its values at the fixed points a₀ up to aₙ. This is a cool application, in my view, of this theory.
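Here is a minimal sketch of the Lagrange polynomials and the interpolation theorem in code; the nodes a₀, ..., a₃, the point b, and the test polynomial are all arbitrary choices:

```python
import numpy as np

def lagrange_basis(nodes, j, t):
    """Evaluate the j-th Lagrange polynomial p_j for the given nodes at t."""
    terms = [(t - nodes[k]) / (nodes[j] - nodes[k])
             for k in range(len(nodes)) if k != j]
    return np.prod(terms, axis=0)

a = np.array([0.0, 1.0, 2.0, 3.0])     # n + 1 = 4 distinct nodes

# Dual-basis property: p_j(a_i) = delta_ij
for j in range(len(a)):
    print([round(float(lagrange_basis(a, j, ai)), 10) for ai in a])

# Universal coefficients c_j(b) = p_j(b): p(b) = sum_j c_j(b) p(a_j)
b = 1.7
c = np.array([lagrange_basis(a, j, b) for j in range(len(a))])
p = np.poly1d([2.0, -1.0, 0.0, 5.0])   # an arbitrary cubic, degree <= 3
print(np.isclose(p(b), np.dot(c, p(a))))  # True
```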
There is one more thing that we need to talk about in regard to dual spaces. For a vector space, there is the dual space; we talked about bases and how they are related, and we talked about linear maps and how they are related when we apply duality. There is one more thing we haven't discussed yet, which is subspaces. What about subspaces? Suppose again V is a finite-dimensional vector space and U is a subspace. Surely there must be some corresponding subspace in V', right? You would think that otherwise this duality wouldn't work. Indeed there is, and it is denoted like so: U⁰ is defined, this is a definition, as the subset of all φ in V' such that φ(u) = 0 for any u in U.

For example, let's suppose that V is Pₙ(R), our previous example, and inside the dual we have a one-dimensional subspace, let's call it U = R·f_a, all elements λf_a where λ is in R: the one-dimensional subspace generated, or spanned, by f_a, where f_a is evaluation at the point a, defined by that formula over there, okay? It's one dimensional. Then its annihilator, which by our duality we can view inside V'' = V, is all polynomials whose value at a is equal to zero. Now, in this example, U has dimension one.

And we discussed the dimension of that annihilator before, because there were several problems earlier in the homework where one needed to find not only the dimension but a basis of a subspace of polynomials given by conditions like this. We already know that it has dimension n; more precisely, (n + 1) − 1, because n + 1 is the dimension of the whole space. You see what happens: we start out with a subspace U of dimension one, and U⁰, which is called the annihilator of U, turns out to have dimension less by one than the dimension of the whole space. There is a term for this: mathematicians say it has codimension one. So U has dimension one, and U⁰ has codimension one.

The next result says that there is a pattern here; this is a special case of a pattern in which the dimensions of U and U⁰ are complementary to each other. First of all, U⁰ is indeed a subspace, not just a subset of V'; it's not obvious from the definition, but almost obvious. And its dimension is equal to the dimension of V minus the dimension of U. The proof is very simple; it's 3.125 in the book, and I'll just leave it for you.

There is one more result, tying the annihilator to the null spaces and ranges of linear maps and their duals. Suppose you have T from V to W, both finite dimensional; then you have T'. I really like to write it from right to left: T' goes from W' to V', from the right to the left. Each of them has a null space and a range, and the statement is that those are related by this annihilator procedure.

By the way, maybe I should add that if you take the annihilator twice, you get back the original subspace. So it is a true duality; again, you actually have a correspondence between subspaces. You see, you have V and V'; for every subspace in V there is a subspace in V', and vice versa; there is a procedure to go between subspaces. But the curious thing is that they don't have the same dimension, but rather complementary dimensions: if one has dimension k, its annihilator has dimension dim V − k, and vice versa.

Now, if you have a linear map T, you can take the null space of T; that's a subspace of V. Let's take its annihilator; that's going to be a subspace of V'. What could it possibly be other than the range of T'? And indeed it is: it is the range of T'. Second, if you take the annihilator of the range of T, you get the null space of T'. Actually, one follows from the other, because you just switch the roles of T and T'; but I might as well just state it.

Now the picture is complete. You have a dictionary relating things in a vector space and in its dual, right? A subspace in V gives you a subspace in V'; if you apply this procedure twice, you get back the original one. If you have a map T from V to W, you get a map T' from W' to V'. And if you study the null spaces and ranges of those maps, you will see that they are dual to each other in the sense that is written here. They are interdependent, all right?
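Collected in symbols, the statements just made are:

```latex
U^{0} = \{\varphi \in V' : \varphi(u) = 0 \ \text{for all } u \in U\}, \qquad
\dim U^{0} = \dim V - \dim U, \qquad (U^{0})^{0} = U,
% and, for a linear map T : V -> W with dual T' : W' -> V',
\operatorname{null} T' = (\operatorname{range} T)^{0}, \qquad
\operatorname{range} T' = (\operatorname{null} T)^{0}.
```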
And this has a nice corollary, which is that if one of the maps is injective, the other one is surjective, and vice versa. Because, you see, let me show it: if T is injective, then the null space of T is zero, and the annihilator of the zero subspace of V is the whole thing, all of V'. But by part one of the theorem, that annihilator is the range of T'. So the range of T' is all of V', and we see that T' is surjective. We just proved, derived from property one, that T is injective if and only if T' is surjective.

Likewise, we can prove that T is surjective if and only if T' is injective; it follows from the first statement if we just switch T and T', but I'll write it down anyway. Injectivity and surjectivity of T and T' are tied together; it's just that they are dual to each other. It's not that T is injective iff T' is injective; it's that T is injective iff T' is surjective, okay? And T is surjective iff T' is injective. Which also implies that T is an isomorphism if and only if T' is an isomorphism, because then both properties hold, yes.

Well, if you have a linear map which is injective, we proved that this is actually equivalent to its null space being zero, right? Because zero already goes to zero, and if the map is injective, nobody else goes to zero, right? In fact, we proved that it's an equivalence; it's one of the things we did a couple of weeks ago. Any other questions?

All right, so that was a quick summary of all things dual, and now we move on. Our next big topic is eigenvectors and eigenvalues. We will start it in earnest after the exam, but in today's lecture I will just cover some preliminaries, so to speak, things which will be needed right after the exam when we talk about eigenvectors and eigenvalues.

So what are these preliminaries? First, there is a short chapter four in the book, which is about polynomials. Most of it you already know; for instance, it talks about complex numbers and various properties of complex numbers and so on. But it introduces notation and terminology for polynomials, and I want to go over this quickly. What do we need to know here?

First of all, there is a definition. We will consider polynomials over the real numbers or the complex numbers. Of course, this is something we have already discussed extensively, even in today's lecture: we had two examples that referenced Pₙ(R). One was the polynomial interpolation, and the other we talked about just now, the annihilator of a one-dimensional subspace generated by an evaluation functional. This is all familiar, but we want to introduce the following terminology. Suppose you have a polynomial p of some degree over F, where again F could be the real numbers or the complex numbers. Then we will say that λ, an element of this field, is a zero, or a root, of p; the two are synonymous terms. Namely, λ is a zero, or root, of p, where I call the variable z to be consistent with the notation of the book, if p(λ) is equal to zero. In other words, if you substitute λ into the polynomial, you get zero. In our earlier notation, this can also be written as f_λ(p) = 0, because we are evaluating p at λ. Maybe I used t before and now I'm using z for the coordinate; it doesn't matter, you can choose any notation.

We will need the following lemma. Before I state it, let me give you an example of a polynomial and a zero. The obvious example: suppose p(z) = z − λ. Then λ is a zero, or root, of this simple polynomial of degree one. Every polynomial of degree one has a zero, and it's unique, because every polynomial of degree one is going to be a multiple of z − λ for some λ, so it has a unique zero, which is λ itself.

Now, this generalizes to polynomials of degree m. The lemma: if p(z) is a polynomial of degree m, then λ is a zero of p(z) if and only if p(z) can be written as a product of z − λ and another polynomial q(z), which is going to have degree less by one. Okay? So this is not obvious.
It's not obvious because, you see, here we are honestly speaking about evaluating the polynomial: we actually have to substitute λ into the formula. And here we are claiming something at the level of symbols, a symbolic manipulation: that if you open the brackets, there exists a q(z) such that the product is equal to p(z). The two statements are really of a different nature: this one is a formal, symbolic, algebraic statement, while that one is a more analytic statement, where you claim that if you substitute something into a function, you get zero. That type of question really appears in analysis or in calculus. Yes, the degree is m minus one? That's right, thank you.

Okay, so this has a very direct proof; I'll leave it for you to read. Actually, this lemma is 4.6, and what I'm going to tell you next is 4.8. Now, the question is how many zeros a given polynomial can have. There is always the possibility that a particular zero appears more than once. What do I mean by this? Look at the polynomial p(z) = (z − λ)^k, for example (z − λ)². It has only one zero, namely λ, but it is divisible not just by z − λ, but by (z − λ)². In this case, we will say that λ is a zero of multiplicity two.

And this we can generalize. Suppose you have a situation like this: p(z) is not only divisible by z − λ, which is always the case if λ is a zero of p(z), but divisible by (z − λ)^k, so that p(z) = (z − λ)^k q(z), and let's assume that λ is not a zero of the second factor, q(z). Then we will say that λ is a zero of multiplicity k.

Combining the two, we get the corollary, which is essentially 4.8, even though I was surprised that the book doesn't talk about multiplicities there; it's assumed. I wanted to emphasize this point that there is multiplicity. The corollary is that every polynomial of degree m has at most m zeros, where we count zeros with multiplicity. Okay? In fact, in the ideal situation it has exactly m, and then you can write your polynomial as a product of linear factors, like so. What I mean is: suppose it has exactly m zeros counted with multiplicity, say zeros λ₁, ..., λᵣ with multiplicities k₁, ..., kᵣ. Then it can be written as some nonzero factor c times the product of the linear factors (z − λᵢ)^{kᵢ}, with i going from 1 to r. This product has degree equal to the sum of the kᵢ, because each factor has degree kᵢ, and that sum must be equal to m. That's the ideal situation, and in this case we say that p(z) factors, or can be factored, into linear factors.

Then the natural question is whether this is always possible, right? It turns out that if our field is the field of real numbers, that's not the case: already in degree two there are polynomials which cannot be factored into linear factors. If the field is the complex numbers, then we have what's called, quite pompously, the fundamental theorem of algebra, which says that it is always the case: every polynomial with complex coefficients can be factored into linear factors. Now, the proof of this follows, by induction, from the theorem that every polynomial has a zero.
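Before moving on, here is a quick numerical sketch of the divisibility lemma and of multiplicity; the cubic is a made-up example with a double root:

```python
import numpy as np

p = np.poly1d([1.0, -4.0, 5.0, -2.0])      # z^3 - 4z^2 + 5z - 2 = (z - 1)^2 (z - 2)
print(p(1.0), p(2.0))                      # both 0.0: the zeros are 1 and 2

q1, r1 = np.polydiv(p.coeffs, [1.0, -1.0]) # divide by (z - 1)
print(r1)                                  # remainder 0: (z - 1) divides p
q2, r2 = np.polydiv(q1, [1.0, -1.0])       # divide by (z - 1) again
print(r2)                                  # remainder 0: multiplicity at least 2
q3, r3 = np.polydiv(q2, [1.0, -1.0])       # a third division
print(r3)                                  # nonzero remainder: multiplicity exactly 2
```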
You derive it from another theorem, which says that every complex polynomial p(z) of degree greater than or equal to one has a zero λ, a complex number. I should say at least one, because a polynomial of degree zero is a constant polynomial, and there is no point in talking about factoring it into linear factors. Sometimes it is this statement that is called the fundamental theorem of algebra and the factorization is the corollary; I think that's how it is in the book. But the two statements are very closely related. Obviously, the factorization theorem implies this one: if you can factor your polynomial into a product of linear factors, just take one of the λᵢ appearing on the right-hand side, and that's going to be a zero.

But you can also go from this theorem to the factorization. How to derive it? Let's see, I think I already had Theorems 1 and 2, so the factorization would be Theorem 3 and this is Theorem 4. From Theorem 4 we go to Theorem 3 by using the lemma. Take your polynomial; by Theorem 4 it has a zero λ, and then, by the lemma on that board, p(z) = (z − λ)q(z). Then apply Theorem 4 to q(z), and so on. Each time you chip away a linear factor z − λ for some λ, and the degree of the polynomial goes down by one. Say you start out with a polynomial of degree 100. The theorem says this polynomial has a zero, and then the lemma, and that's why this lemma is so convenient, allows you to reduce the question to a polynomial of degree less by one: if p has degree 100, q will have degree 99. Now you feed that polynomial into Theorem 4, and it says that q also has a zero. Okay, great; then write q(z) in the same form, and there will be some q₁ which has degree 98. You continue this procedure until you factor the whole thing into a product of linear factors. At the end of the day, there will be some nonzero coefficient: at the last step, when you drop from degree one to degree zero, a polynomial of degree zero is just a number, and it has to be nonzero because the original polynomial was nonzero. That's how you prove it.

Okay, that's pretty much everything you need to know for complex polynomials, because it's the ideal situation: every polynomial can be factored into linear factors. Now, what about real polynomials? For real polynomials, there are two possibilities: you can have either linear factors, which are degree-one factors, or degree-two factors. If you have a quadratic polynomial az² + bz + c and we want to find its zeros, we have a formula: (−b ± √(b² − 4ac)) / (2a). The number under the square root, b² − 4ac, is called the discriminant; let's call it D. D could be positive or negative, depending on what a, b, c are. For example, if you take b = 0 and a and c both positive or both negative, D will be a negative number; and for any a and c you can always find a big enough b so that D is positive. So it can go both ways.

If D is positive, we get two distinct real zeros, and the polynomial is the product of linear factors: p(z) = a(z − λ₁)(z − λ₂). If D is equal to zero, then you get one zero, but with multiplicity two: you get a(z − λ)², with λ₁ = λ₂ = λ. If D is negative, we get two complex zeros: λ₁ and λ₂ are complex numbers, and one of them is the complex conjugate of the other. This is a polynomial with real coefficients whose zeros are complex; and for this to happen, as this discussion shows, the two complex numbers cannot be independent of each other, they have to be complex conjugates of each other. So that's the situation in degree two: either a scalar times a product of two distinct linear factors, or a scalar times the square of a single linear factor, or a quadratic polynomial which cannot be factored over the reals, only over the complex numbers.
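A minimal numerical sketch of the three discriminant cases, with made-up coefficients:

```python
import numpy as np

# Real quadratics a z^2 + b z + c, one for each sign of D = b^2 - 4ac.
for a_, b_, c_ in [(1, -3, 2),   # D = 1 > 0: two distinct real roots
                   (1, -2, 1),   # D = 0: one real root of multiplicity 2
                   (1,  0, 1)]:  # D = -4 < 0: a complex-conjugate pair
    D = b_**2 - 4 * a_ * c_
    print(D, np.roots([a_, b_, c_]))
# In the D < 0 case the two roots are complex conjugates of each other,
# as they must be for a polynomial with real coefficients.
```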
That situation persists in higher degree: namely, if you have a polynomial of degree m over the reals, it can be factored, but into a product of both linear and quadratic factors of this nature. I'm not going to write this out, because I'm almost out of time and I still have to tell you a couple of things. All right, that's the subject of chapter four. Chapter four is, as I said, very simple, very direct. There is only one difficult thing in it, which is the proof of this theorem, and it relies on some notions of complex analysis. You are responsible for the statement of this theorem but not for its proof; for all the rest, you are responsible.

All right, and finally we come to the beginning of the chapter on eigenvectors and eigenvalues. Here it's just very basic stuff, namely the definition of an invariant subspace and the definitions of eigenvector and eigenvalue. First, suppose you have a linear map, but not the most general linear map, which could go from one vector space to another vector space, possibly of different dimensions and so on. In this chapter, chapter five, we will only consider the case when the linear map goes from V to itself. In this case, we call it an operator, because it operates on V. The simplest example is a rotation: if V is R², we talked about rotation by some angle, right? It's operating on the vector space V. The traditional term for this is an operator. We could still call it a linear map, but linear map is a more general notion, where you could go from one vector space to another; here we go from a vector space to itself, and to indicate that, we call such a linear map an operator.

So suppose you have an operator T. Then there are two definitions. First: a subspace U inside V is called invariant under T, or T-invariant, if T(u) is in U for all u in U. It doesn't mean that T(u) is equal to u; it doesn't mean that each vector from U goes to itself. It only means that each vector from U goes to another vector from U. For instance, let V be the space of this classroom, okay? Imagine a vertical axis going through this point on the stage, like this, and suppose your T is the rotation about this axis. Then the stage is an invariant subspace, right? Imagine that this point is the zero element, so that the stage is a subspace. It's invariant, because any vector in the stage will rotate to another vector in the stage, right? And this axis itself is also invariant. But if you take any other line or any other plane going through zero, a nontrivial rotation will take it to another subspace, so it is not invariant. Okay, that's easy.
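Here is the classroom picture as a minimal sketch in coordinates, taking V = R³ with the stage as the xy-plane and T a rotation about the vertical z-axis; the angle is arbitrary:

```python
import numpy as np

theta = 0.7
T = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

u = np.array([2.0, -1.0, 0.0])   # a vector in the "stage", the plane z = 0
print(T @ u)                      # third coordinate is still 0: the plane is invariant

w = np.array([1.0, 0.0, 1.0])    # spans some other line through 0
print(T @ w)                      # not a multiple of w: that line is not invariant
```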
The second definition: suppose T is an operator over some field F. Then λ, an element of F, is called an eigenvalue of T if there exists a vector v in V with two properties: v is nonzero, and I really want to emphasize that; and in addition, T(v) = λv. You see, people often forget about the first property and focus only on the second. The second property says that if you apply T to v, it gets multiplied by λ; but you see, the zero vector satisfies this for every λ. This equation would not be useful if we did not impose the first condition as well, because if you just write this equation, I can give you a solution right away, v = 0, and it's not interesting. An eigenvector is a nontrivial solution of this equation, where nontrivial means nonzero. In this case, v is called an eigenvector. So you've got the notion of eigenvalue and the notion of eigenvector: you could say v is an eigenvector of T with eigenvalue λ. That's simple. And by the way, I know that you studied this in Math 54 or an equivalent, because in that course eigenvectors and eigenvalues definitely appeared. What this really is, is brushing up on what was done before, in the slightly more general context where we consider arbitrary finite-dimensional vector spaces rather than just Fⁿ.

Finally, there is one lemma that I want you to know in this regard, and that's perfect timing: suppose v₁, ..., vₖ are eigenvectors with eigenvalues λ₁, ..., λₖ that are pairwise distinct. Then the set {v₁, ..., vₖ} is linearly independent. It's very easy to prove; if I had a couple more minutes, I would have proven it here, but I'd better not rush it. I'll just leave it for you to read, because it's very direct.

All right, so that completes today's lecture. I want to remind you that all the information about the midterm exam is available on the course site, including a mock midterm exam. Solutions to the mock midterm exam will be posted by Monday morning, as well as solutions to the homework for this week. On Tuesday, we'll have a review lecture right here, and I'll try to set aside some time for you to ask me questions; if you have questions, you can prepare them for Tuesday. And then we'll have the exam on Thursday, a week from today.
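As a numerical footnote to the lemma about distinct eigenvalues, here is a minimal sketch; the operator is a made-up example:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],     # an operator on R^3 with eigenvalues 1, 2, 3
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(np.round(eigvals, 10))               # pairwise-distinct eigenvalues
print(np.linalg.matrix_rank(eigvecs))      # 3 = number of eigenvectors: independent

# By contrast, v = 0 solves T v = lambda v for every lambda,
# which is exactly why the definition insists on v != 0.
```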