Okay, so here's the first part of the Jordan canonical form theorem. V is a finite-dimensional complex vector space — the whole thing depends on complex scalars, because the way you prove the theorem is to prove there is at least one eigenvalue and then use induction, and that existence needs C. So if λ₁, …, λₖ are all the distinct eigenvalues — and there can only be a finite number of them. Why are we guaranteed only finitely many distinct eigenvalues? What do you think? Eigenvectors corresponding to distinct eigenvalues are linearly independent, right? And the dimension is finite, so you can only have a finite number of linearly independent vectors. So the theorem says: take all of them and look at the generalized eigenspace for each. Then V is a direct sum — every v in V can be written uniquely as a sum of elements of G(λ₁, T), G(λ₂, T), and so on. These are all null spaces, remember: G(λᵢ, T) is the null space of (T − λᵢI) raised to the power mᵢ, and every vector can be written uniquely this way. That's one statement. So there is a basis of generalized eigenvectors for V. There isn't in general a basis of eigenvectors of T, but there is one of generalized eigenvectors, so that takes us partly somewhere. And that's basically because T maps G(λ, T) into G(λ, T): T maps a generalized eigenvector corresponding to λ to a generalized eigenvector for λ. It's not complicated to see; that was part of an earlier proposition. So the theorem says you can divide and conquer. You want the matrix of T — find a nice basis so that the matrix of T has a nice form, maybe diagonal, though in general it cannot be diagonal. Divide and conquer: focus on each generalized eigenspace separately, get whatever nice form you want for T restricted to it, and then that nice form carries over to the whole vector space V. All right?
So you can focus on T restricted to G(λ, T). The advantage of that is that G(λ, T) is the null space of (T − λI)^m for some special m — the smallest positive integer that works — and this m is going to be important for us. Let me remind you what this m is. If λ is an eigenvalue, then the null spaces null (T − λI)^k keep increasing for the same λ until you reach some m, after which they are all the same; and this largest null space is the generalized eigenspace. So remember this special lemma — we use it in several places. For each eigenvalue λ there is an m; the m may change with the eigenvalue, but for every λ there is a unique m after which the null spaces stabilize. Okay, so this is what I was saying: each G(λ, T) is a null space, and you can focus on it. And there is only one eigenvalue of T restricted to G(λ, T), and that's just λ — whereas on V itself T could have several distinct eigenvalues, here there's only one. And actually you can make it even nicer by defining S to be (T − λI) restricted to G(λ, T). Then the matrix of S is the matrix of T minus λ times the identity: whatever basis you choose, you just subtract λ from the diagonal entries. So the only eigenvalue of S is 0 — that's it. So you've reduced the problem to studying operators on a vector space with only one eigenvalue, namely 0. That's the simplification this first theorem allows. So now, and this is how you prove the second part, I'm just going to study operators on a vector space which have only one eigenvalue. What kind of nice basis can you choose? And it will turn out you cannot do it with a basis of eigenvectors — that's not possible. Okay? Right.
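As a small numerical sketch of this reduction (my own addition, not from the lecture notes): take the matrix of T restricted to G(λ, T) to be 3 × 3 with λ on the diagonal and ones just above it. Subtracting λI leaves an operator S whose only eigenvalue is 0, and S is nilpotent — here S² is nonzero but S³ = 0, so m = 3.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

lam = 5
T = [[lam, 1, 0], [0, lam, 1], [0, 0, lam]]  # T on G(lam, T), in a nice basis
# S = T - lam*I: subtract lam from the diagonal entries
S = [[T[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

# S has only the eigenvalue 0, and it is nilpotent:
S2 = matmul(S, S)
S3 = matmul(S2, S)
print(S2)  # [[0, 0, 1], [0, 0, 0], [0, 0, 0]]  -- not yet zero
print(S3)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]  -- zero, so m = 3
```

The point is only that subtracting λI turns the single-eigenvalue-λ problem into a single-eigenvalue-0 (nilpotent) problem without changing the basis.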
Suppose 0 were the only eigenvalue and you had a basis of eigenvectors: then T would be the zero operator. But we know there are operators with only 0 as an eigenvalue — in fact many of them — that are nonzero. All right, so what kind of basis can you choose if you have only one eigenvalue? There is a very special basis called a cyclic basis, in which the matrix of T is diagonal with ones above. So let me define these things called Jordan matrices. Instead of writing the definition first, I'll write it later; let's look at examples. So what is J₄(λ), where λ is the eigenvalue? It's a 4 × 4 matrix with λ's on the diagonal, 1's on what I call the superdiagonal, and 0 everywhere else. That's a Jordan matrix. Let me write one more: J₃(λ) is a 3 × 3 matrix, λ's on the diagonal, 1's on the superdiagonal, zeros everywhere else. All right, then what is J₁(λ)? That one is intentional — J₁(λ) is a special case: a 1 × 1 matrix, just [λ]. So here is the definition: the Jordan matrix Jₚ(λ) is a p × p matrix with λ on the diagonal, 1 on the superdiagonal, and 0 everywhere else. Now, here is one part of what the second part of the Jordan canonical form theorem says — I'll state just this part first. Basically: if T is an operator on a finite-dimensional complex vector space with only one eigenvalue λ, then there is a special basis for V in which the matrix of T is just a collection of Jordan matrices on the diagonal. Not one Jordan matrix — there could be several — but all corresponding to λ. That's what I'm going to write down. But that's only one part; there are some other pieces there.
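A tiny helper (my own sketch — the function name is made up, not from the notes) that builds Jₚ(λ) exactly as defined: λ on the diagonal, 1 on the superdiagonal, 0 elsewhere, with J₁(λ) coming out as the special 1 × 1 case.

```python
def jordan_matrix(p, lam):
    """The p-by-p Jordan matrix J_p(lam), as a list of rows:
    lam on the diagonal, 1 on the superdiagonal, 0 elsewhere."""
    return [[lam if j == i else (1 if j == i + 1 else 0) for j in range(p)]
            for i in range(p)]

print(jordan_matrix(3, 7))  # [[7, 1, 0], [0, 7, 1], [0, 0, 7]]
print(jordan_matrix(1, 7))  # the special 1-by-1 case: [[7]]
```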
Initially it may seem that's all there is, but there are several other pieces, and I want to write them down. As I said, I typed these notes and I'm going to post them — it's all there and I've gone through it. This is my second or third iteration of the notes, so they're getting better every time; I'm pretty happy with them right now. I'll be honest with you: I only vaguely studied Jordan canonical forms as a student and didn't really remember it. It's only when I started teaching this course that I started learning more, and some things I've understood better just this year. I've never seen a question like this on the prelim, but I think it's important. There's something called the Jordan tableau that we'll talk about in a second. Okay, so what is the statement of the second part of the Jordan canonical form theorem? Let me keep the notes here so that I don't miss any part. It says: suppose V is a finite-dimensional complex vector space, T ∈ L(V), λ is an eigenvalue of T, and m is the smallest positive integer such that G(λ, T) is the null space of (T − λI)^m. I'm trying to tell you what the matrix of T restricted to G(λ, T) will look like. Then there is a basis for G(λ, T) — a basis for G(λ, T), not a basis for V; we're focusing just on G(λ, T) — such that the matrix of T restricted to G(λ, T) with respect to this basis is the following: some Jordan matrix J_{p₁}(λ), another Jordan matrix J_{p₂}(λ), and so on down the diagonal, with zeros everywhere else. So the matrix of T restricted to G(λ, T) for that basis is not just one Jordan matrix; it's Jordan matrices on the diagonal. Okay?
Now, by the way, you can permute the blocks: J_{p₁} could move down and J_{p₂} come up, or whatever — you can rearrange them in any way, because rearranging the basis vectors just moves the matrices around. So where do you get uniqueness? You arrange them in decreasing order: I want p₁ ≥ p₂ ≥ … ≥ pₖ. So here is the first new statement: once they're arranged like this, the size p₁ of the largest Jordan matrix in the representation is exactly that number m. Everything else is less than or equal to it. That's the first bullet. Second bullet: what is the size of this whole matrix? It's p₁ + … + pₖ added together, of course. But in terms of another quantity: if T were acting on V, the size of the matrix would be dim V; here T is restricted to G(λ, T), so the size of the matrix is dim G(λ, T). And then this is interesting: how many different blocks are there going to be? It's not obvious. It turns out the number of blocks equals the dimension of the eigenspace, dim null(T − λI). These are nontrivial things. And then you ask: what are these numbers p₁, p₂, …, pₖ — how are they obtained? Remember what m is here: G(λ, T) is the null space of (T − λI)^m for the smallest such m, and each of the null spaces null (T − λI)^j has a certain dimension. Now here's one question you may ask. You say: well, I chose a basis of G(λ, T) and I got this Jordan form.
Maybe I choose another basis for G(λ, T) and get a different Jordan form — same decreasing order, but different sizes? The answer is no. If you impose the restriction that the blocks are in decreasing order, and you have a basis which gives you a Jordan form, then the numbers p₁ through pₖ are uniquely determined; you have no choice. Okay, that's the last bullet. How are they uniquely determined? Let's go back one line, and then I'll come back here. G(λ, T) is the generalized eigenspace for T, and T together with G(λ, T) determines those null spaces uniquely: if you know T and G(λ, T), you know null (T − λI), null (T − λI)², and so on. So the dimensions are uniquely determined by T on G(λ, T), and the pᵢ are determined by the dimensions of the null spaces. That's why they're unique: they don't depend on the basis, they depend on the null spaces. And I'm going to show you the connection between them — that's the Jordan tableau. You give me the dimensions of the null spaces, and I will tell you exactly what the pᵢ are going to be. So part 1 of the Jordan canonical form theorem was just saying you can study T on each G(λ, T) separately; this second part is really the crux of the Jordan canonical form. Okay, let's go back and look at what it says here. I'm not going to give you a proof of this at all — it's written in the notes, and I've never seen it asked on a prelim exam — but I do want you to understand the statement and how to use it; that's where I'm going to spend time. All right, so let's look at this. V is a finite-dimensional complex vector space, T ∈ L(V), and λ is an eigenvalue of T, so there's the generalized eigenspace G(λ, T). And there is a smallest positive integer m such that G(λ, T) is the null space of (T − λI)^m.
After that, the null spaces are all the same. Okay? Then it says there is a basis of generalized eigenvectors such that, when you focus just on T restricted to G(λ, T), the matrix with respect to this basis consists of Jordan matrices on the diagonal. Further — since you can move the basis elements around, let us arrange the Jordan matrices in decreasing order of size — the size of the first, largest one is exactly that number m. That's the first part of the statement: the largest Jordan matrix you get has size equal to the m at which the null spaces stabilize, and then of course everything else goes down from there. The second part is obvious: the size of the matrix is the dimension of the space it acts on, and your space here is G(λ, T). The third bullet is not obvious — far from obvious: the number of Jordan blocks is the dimension of the eigenspace, dim null(T − λI). And the last bullet: the sizes of the blocks, once you arrange them in decreasing order, are determined by the dimensions of the null spaces of the powers of (T − λI). That's why they are determined uniquely — they do not depend on the basis, because the null spaces are independent of the basis; they just depend on the operator. All right? So there are two things one needs to understand here. One is: what kind of special basis is it that gives us this kind of matrix, this Jordan form? If I have a basis for V such that the matrix is diagonal, I know it's a basis of eigenvectors. But if it's λ's on the diagonal with ones on top — what kind of basis is that? It's not a basis of eigenvectors, but it is special: it turns out to be something called a cyclic basis, with a very special structure, which I will do later on.
But I just want to emphasize that it's useful to understand what kind of basis gives you this kind of thing. At the moment, though, I'm not going to focus on that; I'm going to focus on using the theorem — basically on the connection between the pᵢ's and the dimensions of these null spaces. The first three bullets are really consequences of the fourth bullet, once you understand that connection. Okay? So what is the connection between the pᵢ's and the dimensions of the null spaces? There is an algebraic relation, but writing it out is complicated. A visual description has the same information and is much easier to work with. I could write down the system of equations, but instead I'll draw a picture. This is the Jordan tableau. Your book does not have this; if I remember right, the connection actually comes out in the proof of this theorem. Actually, before I do this, let me write down a definition here, because I need to refer to this whole block-diagonal matrix. I will call it the Jordan block of T associated to λ — I want a way to refer to this whole collection of blocks, so I can talk about it. So here's an example; it's in the notes, and I'm going to just do that one. Suppose T has eigenvalue λ, and the Jordan block associated to λ is the following, with p₁ = 5, p₂ = 3, p₃ = 3, p₄ = 2, p₅ = 1, p₆ = 1. So what is the Jordan tableau? I'm just going to copy this again.
So that is J₅(λ), J₃(λ), J₃(λ), J₂(λ), J₁(λ), J₁(λ). You start with the five. The Jordan tableau is a tableau of dots — a matrix of dots, a representation. Put five dots, left-adjusted: 1, 2, 3, 4, 5. Then you put three dots on the next row: 1, 2, 3, left-adjusted as I just said. Then the second row of three dots, then the two dots, then one dot, and one dot. So the number of dots in each row are the pᵢ values, and this is called the Jordan tableau. And the interesting thing is that from it you can read off the dimensions of the null spaces. How? Look at the first column: how many dots? 1, 2, 3, 4, 5, 6 — six. That is the dimension of the null space of (T − λI)¹. How many dots in the first and second columns together? Ten. That is the dimension of the null space of (T − λI)². How many dots in the first three columns? Thirteen — the dimension of the null space of (T − λI)³, and then 14 and 15 for the fourth and fifth powers. Let me write this down: dim null(T − λI) = 6; the first and second columns give 6 + 4 = 10; then 13, 14, 15. So the Jordan tableau establishes the connection between the sizes pᵢ and the dimensions of the null spaces. The proof is not trivial — it's not obvious — it's when you prove part 2 of the Jordan canonical form theorem that you see this. Okay? Now, how can we go backwards? Suppose I gave you these numbers, 6, 10, 13, 14, 15 — could you figure out the pᵢ's? You can write down equations for it, but it's complicated. With the Jordan tableau, it's very easy to read off. So let's do an example going the other way now.
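The column-counting rule can be written in a few lines of code. This is my own sketch (the function name is made up): given the row lengths p₁ ≥ p₂ ≥ … of a Jordan tableau, it returns dim null(T − λI)^k for k = 1, …, m by counting the dots in the first k columns — row i contributes min(pᵢ, k) dots to those columns.

```python
def nullspace_dims(block_sizes):
    """Given Jordan block sizes p_1 >= p_2 >= ... (the rows of the
    Jordan tableau), return [dim null(T - lam I)^k for k = 1..m],
    where m = p_1: the dot count in the first k columns."""
    m = max(block_sizes)  # m = p_1, the largest block
    return [sum(min(p, k) for p in block_sizes) for k in range(1, m + 1)]

# The lecture's example: blocks J_5, J_3, J_3, J_2, J_1, J_1
print(nullspace_dims([5, 3, 3, 2, 1, 1]))  # [6, 10, 13, 14, 15]
```

Notice the first entry, 6, is simultaneously the number of rows (blocks) and dim null(T − λI) — which is exactly the third bullet of the theorem.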
Does anyone have any questions on the Jordan tableau? Right. And actually all the remarks I made — that the size of the largest block is m = p₁, and so on — the Jordan tableau has all that information. Okay, anyway, let's go the other way around. Suppose T ∈ L(V) and λ is an eigenvalue, and I give you the null space information: dim null(T − λI) = 5, and so on. I just made up these numbers — there's a subtle question here we'll come back to in a second. The dimensions have to be increasing, right? Remember, in that proposition each null space is contained in the next one, so the dimensions of the null spaces have to be increasing, and eventually they're all equal. That gives you your m; after that it doesn't increase. What I want to find out is: what is the Jordan block? That's the question — I gave you the null spaces; can you find the Jordan block? Again, you basically build the Jordan tableau, working backwards. The pᵢ's were the number of dots in each row, correct? And the null space dimensions were column counts: the number of dots in the first column is dim null(T − λI), the number of dots in the first two columns is dim null(T − λI)², the total in the first three columns is dim null(T − λI)³, and so on. So how many dots go in the first column for this problem? Five, correct — that's dim null(T − λI). So there are five dots in the first column. By the way, these tableaux are always drawn top-adjusted and left-adjusted. What that means is that whenever you write the dots, they're all pushed to the left and all pushed to the top.
Okay, that's how you draw them: top-adjusted, left-adjusted, always. So there are five dots in the first column. Then the dimension of null(T − λI)² is eight, so how many dots in the second column? Just three, right, so that the first and second columns added together give eight. Then how many dots in the third column? You went from eight to ten, so two dots. Then you went to 12 — two more dots. Then to 13 — one more dot. Then to 14 — one more. Let me redraw this; I didn't draw them straight. Okay, so we're going 5, 8, 10, 12, 13, 14. Is everyone okay with this? Any questions? [A student asks:] I was thinking, if we have two operators with the same Jordan tableau, what does that mean — how are the two related? That's actually a very important question. So suppose you have two operators S and T, and you write down the Jordan blocks for each eigenvalue. Suppose they have the same eigenvalues and the same Jordan blocks. Are the operators the same? The answer is no. But then how are they related? Anyone want to make a guess? If you've seen something like it, you might be able to guess the correct answer. If two linear maps on a complex vector space have the same Jordan form — the same eigenvalues and the same Jordan blocks — how do you think they are related? They're not identical; S is not equal to T. So what is the next best thing? [A student suggests:] they're multiples of each other — T equals some scalar c times S, something like that?
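Going the other way can also be sketched in code (again my own helper, not from the notes): the k-th column of the tableau holds d_k − d_{k−1} dots, where d_k = dim null(T − λI)^k, and the i-th row length is the number of columns with at least i dots.

```python
def block_sizes(dims):
    """Recover the Jordan block sizes p_1 >= p_2 >= ... from
    dims[k-1] = dim null(T - lam I)^k for k = 1..m (listed only up
    to the m where the dimensions stabilize).  Column k of the
    tableau gets dims[k-1] - dims[k-2] dots; rows are read off by
    counting, for each row index i, the columns with >= i dots."""
    cols = [dims[0]] + [dims[k] - dims[k - 1] for k in range(1, len(dims))]
    return [sum(1 for c in cols if c >= i) for i in range(1, cols[0] + 1)]

# The worked example: null space dimensions 5, 8, 10, 12, 13, 14
print(block_sizes([5, 8, 10, 12, 13, 14]))  # [6, 4, 2, 1, 1]
# And it inverts the earlier example (6, 10, 13, 14, 15):
print(block_sizes([6, 10, 13, 14, 15]))     # [5, 3, 3, 2, 1, 1]
```

So for the dimensions 5, 8, 10, 12, 13, 14 the tableau has columns of 5, 3, 2, 2, 1, 1 dots, giving blocks J₆, J₄, J₂, J₁, J₁: five blocks (= dim of the eigenspace), largest block of size m = 6, total size 14 = dim G(λ, T).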
No — multiplying by a scalar can really change the operator completely; you can get essentially any other operator that way. If you've seen this kind of thing before, you might know it — any other guess? (We're going to prove this; it's one of the propositions later.) Anyone want to make a guess? If two linear operators have the same Jordan form, they're not identical — we know that. You can say something about the eigenspaces, because the Jordan blocks are connected to them, but the answer is not contained in what I've told you so far. I'm only saying that if you've seen a similar idea, you might know it. Have you heard of similar linear maps, or similar matrices? If you have not, that's okay — that's why I said the answer is not here, and that's why I'll write it down. Two linear maps, or two n × n matrices, have the same Jordan canonical form if and only if they are what is called similar. And just in case: S and T are similar if one can be written in terms of the other like this — T = U⁻¹SU for some invertible operator U. We'll come back to that later. So now is a good time for the question, since it's already been asked.
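Here is a small concrete sketch of that definition (my own numbers, not from the lecture): conjugating the Jordan matrix J₂(2) by an invertible U gives a matrix S that looks completely different entrywise, yet is similar to J₂(2) and so has the same single eigenvalue 2 and the same Jordan form — which we can check directly, since S − 2I is nonzero but squares to zero.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[2, 1], [0, 2]]            # the Jordan matrix J_2(2)
U = [[1, 0], [1, 1]]            # an invertible change of basis
Uinv = [[1, 0], [-1, 1]]        # its inverse, computed by hand
S = matmul(matmul(U, T), Uinv)  # S = U T U^{-1}, similar to T
print(S)  # [[1, 1], [-1, 3]] -- looks nothing like T entrywise

# But N = S - 2I is nonzero while N^2 = 0, so S has the single
# eigenvalue 2 and Jordan form J_2(2), the same as T.
N = [[S[i][j] - (2 if i == j else 0) for j in range(2)] for i in range(2)]
print(matmul(N, N))  # [[0, 0], [0, 0]]
```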
Lecture I - Jordan canonical form
From Rakesh Rakesh January 03, 2022