MATH1231 - Eigenvectors and Eigenvalues

Eigenvectors and Eigenvalues

Eigenvectors and eigenvalues are a simple way of expressing the relationship between a matrix or linear map and certain vectors it is applied to.
If a linear map T or a square matrix A satisfies

(1)
\begin{align} T(v) = Av = \lambda v \end{align}

where v is not zero, then we say that $\lambda$ is an eigenvalue of T/A, and v is an eigenvector of T/A for that eigenvalue.
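
For example (with a small matrix chosen here just for illustration; we will reuse it throughout these notes):

\begin{align} A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \text{, } v = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \text{, } Av = \begin{pmatrix} 3 \\ 3 \end{pmatrix} = 3v \end{align}

so 3 is an eigenvalue of A, and v is an eigenvector of A for that eigenvalue.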

Complex Number Field

When dealing with eigenvalues and eigenvectors of matrices we work over the field of complex numbers, $\mathbb{C}$. (This is because the eigenvalues of a matrix are the zeroes of a polynomial, and only over $\mathbb{C}$ are we guaranteed that every polynomial has a full set of zeroes.)

Finding Eigenvalues of Matrices

A scalar $\lambda$ is an eigenvalue of a square matrix A iff

(2)
\begin{align} det(A - \lambda I) = 0 \end{align}

The set of eigenvectors for an eigenvalue $\lambda$ is the set of non-zero vectors that solve the following equation; essentially it is $ker(A - \lambda I)$ with the zero vector removed.

(3)
\begin{align} (A - \lambda I)v = 0 \end{align}

The determinant must be 0 because that is the only way the above equation can have a non-zero solution v: if $A - \lambda I$ were invertible, the only solution would be $v = 0$.
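
(As a quick check with the example matrix above and $\lambda = 3$: $det(A - 3I) = (-1)(-1) - (1)(1) = 0$, as expected.)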

Characteristic Polynomials

The Characteristic Polynomial for A is defined as $p(\lambda) = det(A - \lambda I)$
The Characteristic Equation is $p(\lambda) = 0$
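
For example, for the matrix A introduced above:

\begin{align} p(\lambda) = det \begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix} = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 \end{align}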

Number of Eigenvalues For A Matrix

Any n × n matrix A has n eigenvalues in $\mathbb{C}$, counted with multiplicity. These eigenvalues are the zeroes of A's characteristic polynomial.

Finding Eigenvalues and Eigenvectors

To find the eigenvalues

  1. Find the matrix $B = (A - \lambda I)$
  2. Find the determinant of the above, $det(B)$; this is the characteristic polynomial
  3. Find the roots of the characteristic equation $det(B) = 0$; these are the eigenvalues (see the worked example below)
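
Worked example, for the matrix A introduced above: the characteristic equation is $\lambda^2 - 4\lambda + 3 = 0$, which factors as $(\lambda - 1)(\lambda - 3) = 0$, so the eigenvalues are $\lambda = 1$ and $\lambda = 3$.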

To find the eigenvectors

  1. For each eigenvalue, substitute it in for lambda: $B = (A - 3I)$ for $\lambda = 3$, for instance
  2. Take any non-zero row of B and figure out, by row reduction or otherwise, a vector whose dot product with it is 0. (e.g. the row (2 2) dotted with (-1 1) gives the componentwise product (-2 2), which sums to 0)
  3. This new vector is an eigenvector for that eigenvalue
  4. Rinse and repeat for each eigenvalue (see the worked example below)
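
Worked example, continuing with $\lambda = 3$ for the matrix A above:

\begin{align} B = A - 3I = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix} \end{align}

Taking the first row $(-1 \; 1)$, the vector $v = (1 \; 1)^T$ has dot product $(-1)(1) + (1)(1) = 0$ with it (and with the second row), so v is an eigenvector for $\lambda = 3$. The same process with $\lambda = 1$ gives the eigenvector $(1 \; -1)^T$.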

Eigenvalues and Linear Independence

We said above that any n × n matrix has n eigenvalues. If these eigenvalues are all distinct (i.e. they are all different), then eigenvectors chosen for them (one per eigenvalue) are linearly independent, meaning they form a basis for $\mathbb{C}^n$ (or $\mathbb{R}^n$ when the eigenvalues and eigenvectors are all real).
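
For the example above, the eigenvalues 3 and 1 are distinct, and indeed the eigenvectors $(1 \; 1)^T$ and $(1 \; -1)^T$ are linearly independent (neither is a scalar multiple of the other), so they form a basis for $\mathbb{R}^2$.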

Eigenvectors and Diagonal Matrices

A diagonal matrix is any square matrix whose only non-zero elements lie on the main diagonal (the top-left to bottom-right line).

The cool thing is that if an n × n matrix A has n linearly independent eigenvectors, then there's an invertible matrix M and a diagonal matrix D so that:

(4)
\begin{equation} M^{-1}AM = D \end{equation}

Where each of the diagonal elements of D is an eigenvalue of A, and each column of M is a corresponding eigenvector of A. (So the kth diagonal entry of D goes with the kth column of M.)

We define A as 'diagonalisable' if the above matrices M and D exist.
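
For our example matrix, placing the eigenvectors for the eigenvalues 3 and 1 into the columns of M gives:

\begin{align} M = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \text{, } M^{-1} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \text{, } M^{-1}AM = \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix} = D \end{align}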

Powers of Matrices

Diagonal matrices lead us on to a really cool way of finding powers of matrices.

For instance if :

(5)
\begin{align} D = \begin {pmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \\ \end {pmatrix} \end{align}

then for k >= 1:

(6)
\begin{align} D^k = \begin {pmatrix} \lambda_1^k & 0 & 0 & 0 \\ 0 & \lambda_2^k & 0 & 0 \\ 0 & 0 & \lambda_3^k & 0 \\ 0 & 0 & 0 & \lambda_4^k \\ \end {pmatrix} \end{align}

We can prove this with induction; $D \cdot D^2$ will result in $D^3$, rinse and repeat.

Rearranging our above equation $M^{-1}AM = D$ gives $A = MDM^{-1}$. Applying this to matrix powers, the inner $M^{-1}M$ pairs cancel, and we end up with:

(7)
\begin{equation} A^k = MD^kM^{-1} \end{equation}

Hence to find $A^k$ we:

  1. Find the eigenvalues of A
  2. Create D
  3. Create M and $M^{-1}$
  4. Find $D^k$
  5. Multiply M by $D^k$ by $M^{-1}$ (see the worked example below)
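
Worked example, using the M, $M^{-1}$ and D found above:

\begin{align} A^k = MD^kM^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 3^k & 0 \\ 0 & 1 \end{pmatrix} \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 3^k + 1 & 3^k - 1 \\ 3^k - 1 & 3^k + 1 \end{pmatrix} \end{align}

As a quick check, k = 1 gives back A itself.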

Eigenvalues and First-Order Linear Differential Equations

Eigenvalues have many and varied uses, and the last we will look at is solving systems of first-order linear differential equations.

When given a system of equations such as:

(8)
\begin{align} \frac{dy_1}{dt} = a_{11}y_1 + a_{12}y_2 \\ \frac{dy_2}{dt} = a_{21}y_1 + a_{22}y_2 \\ y_1(0), y_2(0) \text{ are given} \\ \end{align}

We first write it in matrix form:

(9)
\begin{align} y = \begin {pmatrix} y_1 \\ y_2 \\ \end {pmatrix} \text {, } A = \begin {pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end {pmatrix} \end{align}

And from there we find

(10)
\begin{align} \frac{dy}{dt} = Ay \text{, with } y(0) = \begin{pmatrix} y_1(0) \\ y_2(0) \\ \end{pmatrix} \end{align}

We call y the state vector and let t represent time. Calculus tells us that the one-dimensional counterpart $\frac{dy}{dt} = ay$ has the solution $y(t) = y_0e^{at}$, so we guess a similar exponential solution for our n-dimensional equation.

So if we guess $y = u(t) = ve^{\lambda t}$ and substitute that into the matrix equation we get $\frac{dy}{dt} = \lambda ve^{\lambda t} = Ay = Ave^{\lambda t}$.

Which leads to

(11)
\begin{align} e^{\lambda t}(Av - \lambda v) = 0 \end{align}

And as $e^{\lambda t}$ is not 0 for any t, this can hold only if $(A - \lambda I)v = 0$.

Hence $u(t) = ve^{\lambda t}$ is a solution of $\frac{dy}{dt} = Ay$ iff $\lambda$ is an eigenvalue of A and v is an eigenvector of A for $\lambda$.

If $u_1$ and $u_2$ are solutions, then so is any linear combination of them.
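
For example, for our matrix A from earlier (eigenvalues 3 and 1, eigenvectors $(1 \; 1)^T$ and $(1 \; -1)^T$), the general solution is:

\begin{align} y(t) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{3t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{t} \end{align}

where the constants $c_1$ and $c_2$ are chosen to match the initial conditions $y_1(0)$ and $y_2(0)$.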