Eigenvalue

In [[linear algebra]] an eigenvalue of a (square) [[matrix]] <math>A</math> is a number <math>\lambda</math> that satisfies the eigenvalue equation,
:<math>\text{det}(A-\lambda I)=0\ ,</math>
where <math>I</math> is the [[identity matrix]] of the same dimension as <math>A</math> and where, in general, <math>\lambda</math> can be [[complex number|complex]].  The origin of this equation is the eigenvalue problem, which is to find the eigenvalues and associated [[eigenvector]]s of <math>A</math>.
That is, to find a number <math>\lambda</math> and a vector <math>\vec{v}</math> that together satisfy
:<math>A\vec{v}=\lambda\vec{v}\ .</math>
What this equation says is that even though <math>A</math> is a matrix, its action on <math>\vec{v}</math> is the same as multiplying <math>\vec{v}</math> by the number <math>\lambda</math>.
This means that the vector <math>\vec{v}</math> and the vector <math>A\vec{v}</math> are [[parallel]] (or [[anti-parallel]] if <math>\lambda</math> is negative).
Note that in general this will ''not'' be true.  This is most easily seen with a quick example.  Suppose
:<math>A=\begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}</math> and <math>\vec{v}=\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}\ .</math>
Then their [[matrix multiplication|matrix product]] is
:<math>A\vec{v}=\begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
=\begin{pmatrix}a_{11}v_1+a_{12}v_2 \\ a_{21}v_1+a_{22}v_2 \end{pmatrix}</math>
whereas the [[scalar]] product is
:<math>\lambda\vec{v}=\begin{pmatrix} \lambda v_1 \\ \lambda v_2 \end{pmatrix}\ .</math>
Obviously then <math>A\vec{v}\neq \lambda\vec{v}</math> unless
<math>\lambda v_1 = a_{11}v_1+a_{12}v_2</math> and [[simultaneous equations|simultaneously]] <math>\lambda v_2 = a_{21}v_1+a_{22}v_2</math>,
and it is easy to pick numbers for the entries of <math>A</math> and <math>\vec{v}</math> such that this cannot happen for any value of <math>\lambda</math>.
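For instance, take the (purely illustrative) choice
:<math>A=\begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}</math> and <math>\vec{v}=\begin{pmatrix} 1 \\ 0 \end{pmatrix}\ ,</math>
so that
:<math>A\vec{v}=\begin{pmatrix} 0 \\ 1 \end{pmatrix}\ .</math>
Requiring <math>A\vec{v}=\lambda\vec{v}</math> would then force <math>\lambda\cdot 1=0</math> and simultaneously <math>\lambda\cdot 0=1</math>; the second equation has no solution, so this particular <math>\vec{v}</math> is not an eigenvector of this <math>A</math> for any <math>\lambda</math>.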
==The eigenvalue equation==
So where did the eigenvalue equation <math>\text{det}(A-\lambda I)=0</math> come from?  Well, we assume that we know the matrix <math>A</math> and want to find a number <math>\lambda</math> and a non-zero vector <math>\vec{v}</math> so that <math>A\vec{v}=\lambda\vec{v}</math>.  (Note that if <math>\vec{v}=\vec{0}</math> then the equation is always true, and therefore uninteresting.)  So now we have
<math>A\vec{v}-\lambda\vec{v}=\vec{0}</math>.  It doesn't make sense to subtract a number from a matrix, but we can factor out the vector if we first multiply the second term by the [[identity matrix]], using the fact that <math>I\vec{v}=\vec{v}</math>.  This gives us
:<math>A\vec{v}-\lambda I\vec{v}=(A-\lambda I)\vec{v}=\vec{0}\ .</math>
Now recall that <math>A-\lambda I</math> is a square matrix, and so it might be [[matrix inverse|invertible]].
If it were invertible then we could simply multiply on the left by its inverse to get
:<math>\vec{v}=(A-\lambda I)^{-1}\vec{0}=\vec{0}</math>
but we have already said that <math>\vec{v}</math> can't be the zero vector!  The only way around this is if <math>A-\lambda I</math> is in fact non-invertible.  It can be shown that a square matrix is non-invertible if and only if its [[determinant]] is zero.  That is, we require
:<math>\text{det}(A-\lambda I)=0\ ,</math>
which is the eigenvalue equation stated above.
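As a simple worked example (with an illustrative matrix, not one from the discussion above), consider
:<math>A=\begin{pmatrix}2 & 1 \\ 1 & 2\end{pmatrix}\ .</math>
Then
:<math>\text{det}(A-\lambda I)=\text{det}\begin{pmatrix}2-\lambda & 1 \\ 1 & 2-\lambda\end{pmatrix}=(2-\lambda)^2-1=(\lambda-1)(\lambda-3)\ ,</math>
so the eigenvalues of this <math>A</math> are <math>\lambda=1</math> and <math>\lambda=3</math>.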
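Such eigenvalues can also be checked numerically.  The following short [[Python (programming language)|Python]] sketch, assuming the [[NumPy]] library is available, computes the roots of <math>\text{det}(A-\lambda I)=0</math> for the matrix in the worked example above:
<pre>
# Numerical check of the worked example above (requires NumPy).
# numpy.linalg.eigvals returns the roots of det(A - lambda*I) = 0.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

print(np.linalg.eigvals(A))  # approximately [3. 1.], up to ordering
</pre>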
