There's one more thing we have to talk about, and that's called eigenvalue multiplicity. Let's say A is an n-by-n matrix and p(λ) is its characteristic polynomial. The fundamental theorem of algebra says we can factor this polynomial into terms that expose all of its roots, p(λ) = c(λ − λ₁)^m₁ (λ − λ₂)^m₂ ⋯, where λ₁, λ₂, and so on are the distinct eigenvalues of A. The exponents m₁, m₂, … are referred to as the algebraic multiplicities of those eigenvalues. This is familiar from the way we've always handled polynomials with double roots or even higher-order roots. The algebraic multiplicities have to add up to n, the degree of the polynomial. And if one of those multiplicities is one, we say that λ is a simple root of the polynomial and a simple eigenvalue.

Here's an example. Take A = [4 0; 0 4]. We find the determinant of A − λI by subtracting λ from the diagonal, and this determinant is just (4 − λ)², or (λ − 4)². So λ₁ = 4 is the only eigenvalue, and it has algebraic multiplicity equal to two. Let's go ahead and find the eigenspace that goes with this eigenvalue. We take A − λ₁I, and subtracting 4 from the diagonal just leaves us with a zero matrix. There are no leading ones, so both variables are free, and we can express all the eigenvectors for this eigenvalue as linear combinations of [1, 0] and [0, 1].

Now let's change this example just a little bit, by replacing one of the zeros with a one: A = [4 1; 0 4]. We go through the same process, but because of the way determinants work, the determinant doesn't actually change, since that one multiplies a zero. So we still have a single eigenvalue at 4 with algebraic multiplicity equal to two. Now what about the eigenspace? This time the matrix A − 4I does have a leading one in the first row. So while x₁ is free, x₂ must be zero, and we need only one basis vector to express the eigenspace.

If we look at these two examples side by side, they both have a double eigenvalue at 4.
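The two examples above can be checked numerically. Here's a short NumPy sketch (my own illustration, not part of the lecture): np.poly gives the coefficients of the characteristic polynomial, and the dimension of the eigenspace is n minus the rank of A − λI.

```python
import numpy as np

# The two example matrices; both have the double eigenvalue 4.
A1 = np.array([[4.0, 0.0], [0.0, 4.0]])   # a multiple of the identity
A2 = np.array([[4.0, 1.0], [0.0, 4.0]])   # one zero replaced by a one

for A in (A1, A2):
    # Characteristic polynomial coefficients: (lambda - 4)^2 for both,
    # i.e. lambda^2 - 8*lambda + 16.
    print("char. poly. coefficients:", np.poly(A))

    # Eigenspace of lambda = 4 is the null space of A - 4I;
    # its dimension is n minus the rank of A - 4I.
    n = A.shape[0]
    dim = n - np.linalg.matrix_rank(A - 4.0 * np.eye(n))
    print("eigenspace dimension:", dim)
```

Running this shows the same characteristic polynomial for both matrices, but an eigenspace dimension of 2 for the first and 1 for the second.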
But in the first case it takes two basis vectors to describe the eigenspace, and in the second case it takes only one. So we have a new definition to describe what's going on: the geometric multiplicity of an eigenvalue is the number of basis vectors in its eigenspace. In the first case it's two, and in the second case it's one.

It's a fact that for every eigenvalue, the geometric multiplicity is at least one (there's at least one eigenvector), and it's less than or equal to the algebraic multiplicity of that eigenvalue. The critical distinction is when that inequality is strict: if the geometric multiplicity is strictly less than the algebraic multiplicity, then we say the eigenvalue is defective. And if a matrix has any defective eigenvalues, we say the matrix is defective as well.

Defective matrices complicate what we're going to do later, as well as many other things, so it's nice to know when defectiveness can't happen. There's an easy but important theorem: if an n-by-n matrix has n distinct eigenvalues, that is, n simple eigenvalues, then none of them are defective. That's basically because there's no room between one and the algebraic multiplicity, so the geometric multiplicity must equal one.

Since we're mostly going to be dealing with two-by-two matrices, it's useful to know that the situation there is even simpler. If you do have a double eigenvalue in the two-by-two case, there are only two possibilities: either A is a multiple of the identity matrix, or A is defective. The two examples I showed illustrate both of these cases; the first one was a multiple of the identity, and the other one is not.
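To make the definitions concrete, here's a small helper that tests whether a matrix is defective by comparing algebraic and geometric multiplicities numerically. This is a sketch of my own (the function name and tolerance are made up for illustration), not a robust implementation:

```python
import numpy as np

def is_defective(A, tol=1e-8):
    """Return True if some eigenvalue of A has geometric multiplicity
    strictly less than its algebraic multiplicity (illustrative sketch)."""
    n = A.shape[0]
    lams = np.linalg.eigvals(A)
    # Group nearly equal computed eigenvalues together.
    for lam in set(np.round(lams, 8)):
        # Algebraic multiplicity: how many computed eigenvalues match.
        alg = np.sum(np.abs(lams - lam) < tol)
        # Geometric multiplicity: dimension of the null space of A - lam*I.
        geo = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if geo < alg:
            return True
    return False

print(is_defective(np.array([[4.0, 0.0], [0.0, 4.0]])))  # multiple of I: False
print(is_defective(np.array([[4.0, 1.0], [0.0, 4.0]])))  # defective: True
print(is_defective(np.array([[2.0, 1.0], [1.0, 2.0]])))  # distinct eigenvalues: False
```

The last call illustrates the theorem: a 2-by-2 matrix with two distinct eigenvalues cannot be defective.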
I.14 Eigenvalue multiplicity
From Tobin Driscoll, February 02, 2021