# Eigenvectors and Eigenvalues

Eigenvectors and eigenvalues are a simple way of expressing the relationship between a matrix (or linear map) and the vectors it is applied to.

If a linear map T or a square matrix A satisfies

$$Tv = \lambda v \quad \text{(or } Av = \lambda v\text{)}$$

where v is not zero, then we say that $\lambda$ is an eigenvalue for T/A, and v is an eigenvector of T/A for that eigenvalue.

# Complex Number Field

When dealing with eigenvalues and eigenvectors of matrices we work within the complex number field $\mathbb{C}$. (This is because the eigenvalues of a matrix are zeroes of some polynomial, and we can only be certain of finding zeroes for polynomials over the complex numbers.)

# Finding Eigenvalues of Matrices

A scalar $\lambda$ is an eigenvalue of a square matrix A iff

$$(A - \lambda I)v = 0 \quad \text{for some } v \neq 0$$

The set of eigenvectors for any eigenvalue is the set of non-zero vectors that solve this equation: essentially $\ker(A - \lambda I)$, taking away the 0 vector.

The determinant $\det(A - \lambda I)$ must be 0, because that is the only way the equation above can be solved with a non-zero v.

# Characteristic Polynomials

The Characteristic Polynomial for A is defined as $p(\lambda) = \det(A - \lambda I)$

The Characteristic Equation is $p(\lambda) = 0$

# Number of Eigenvalues For A Matrix

Any $n \times n$ matrix A has n eigenvalues in $\mathbb{C}$, counted with multiplicity. These eigenvalues are the zeroes of A's characteristic polynomial.

# Finding Eigenvalues and Eigenvectors

To find the eigenvalues

- Find the matrix $B = (A - \lambda I)$
- Find its determinant, $\det(B)$, which is the characteristic polynomial
- Solve the characteristic equation $\det(B) = 0$; its roots are the eigenvalues
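The steps above can be sketched in NumPy; the matrix here is a hypothetical example, not one from the notes. `np.poly` returns the coefficients of the characteristic polynomial and `np.roots` finds its zeroes:

```python
import numpy as np

# Hypothetical example matrix (an illustrative assumption).
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])

# Coefficients of det(A - lambda*I), highest degree first:
# lambda^2 - 5*lambda + 4 for this A.
coeffs = np.poly(A)

# The eigenvalues are the roots of the characteristic equation det(B) = 0.
eigenvalues = np.roots(coeffs)   # 4 and 1, since (lambda - 1)(lambda - 4) = 0
```

In practice you would call `np.linalg.eigvals(A)` directly, but going through the polynomial mirrors the steps above.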

To find the eigenvectors

- For each eigenvalue sub it in for lambda: $B = (A - 3I)$, for instance
- Take any row of B and figure out, by row reduction or otherwise, what vector to multiply it by to make it 0 (e.g. the row $(2 \;\; 2)$ dotted with $(-1 \;\; 1)$ gives $2 \cdot (-1) + 2 \cdot 1 = 0$)
- This vector is an eigenvector for that eigenvalue
- Rinse and repeat
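The same procedure can be sketched numerically; the matrix and its eigenvalues (1 and 4) are assumed here, found as in the previous section. For each eigenvalue we form $B = A - \lambda I$ and pick a vector in its kernel (the last right singular vector of a singular B spans it):

```python
import numpy as np

# Hypothetical example matrix with eigenvalues 1 and 4.
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])

for lam in (1.0, 4.0):
    B = A - lam * np.eye(2)
    # Eigenvectors for lam are the non-zero vectors in ker(B); since B is
    # singular, the right singular vector for the smallest singular value
    # spans that kernel.
    _, _, Vt = np.linalg.svd(B)
    v = Vt[-1]
    assert np.allclose(A @ v, lam * v)   # v is an eigenvector for lam
```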

# Eigenvalues and Linear Independence

We said above that any $n \times n$ matrix has n eigenvalues. If these eigenvalues are all distinct (i.e. all different), then the corresponding eigenvectors are linearly independent, meaning they form a basis for $\mathbb{C}^n$ (or for $\mathbb{R}^n$ when they are real).
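As a quick numerical check (with a hypothetical matrix): distinct eigenvalues give an eigenvector matrix of full rank, i.e. its columns span the whole space:

```python
import numpy as np

# Hypothetical triangular matrix; its eigenvalues are its diagonal: 2 and 3.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

w, V = np.linalg.eig(A)                           # columns of V are eigenvectors
assert len(np.unique(np.round(w, 8))) == len(w)   # eigenvalues are distinct
assert np.linalg.matrix_rank(V) == A.shape[0]     # eigenvectors form a basis
```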

# Eigenvectors and Diagonal Matrices

A diagonal matrix is any matrix whose only non-zero elements are on the main diagonal (the top-left to bottom-right line).

The cool thing is that if an $n \times n$ matrix A has n linearly independent eigenvectors then there's an invertible matrix M and a diagonal matrix D so that:

$$M^{-1}AM = D$$

where each of the diagonal elements of D is an eigenvalue of A, and each column of M is the corresponding eigenvector of A. (So the $k^{th}$ diagonal entry of D goes with the $k^{th}$ column of M.)

We define A as 'diagonalisable' if the above matrices M and D exist.
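A minimal NumPy sketch of this factorisation, again with an assumed example matrix:

```python
import numpy as np

# Hypothetical example matrix.
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])

w, M = np.linalg.eig(A)    # kth column of M pairs with the kth eigenvalue in w
D = np.diag(w)             # eigenvalues along the diagonal, in the same order
assert np.allclose(np.linalg.inv(M) @ A @ M, D)   # M^{-1} A M = D
```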

# Powers of Matrices

Diagonal matrices lead us on to really cool ways of finding powers of matrices.

For instance, if:

$$D = \begin{pmatrix} d_1 & & \\ & \ddots & \\ & & d_n \end{pmatrix}$$

then for $k \geq 1$:

$$D^k = \begin{pmatrix} d_1^k & & \\ & \ddots & \\ & & d_n^k \end{pmatrix}$$

We can prove this with induction: $D \cdot D^2$ results in $D^3$, rinse and repeat.

Rearranging our above equation to get $M^{-1}AM = D \rightarrow A = MDM^{-1}$, we can then apply this to matrix powers (the inner $M^{-1}M$ pairs cancel), ending up with:

$$A^k = MD^kM^{-1}$$

Hence to find $A^k$ we:

- Find the eigenvalues of A
- Create D
- Create M and $M^{-1}$
- Find $D^k$
- Multiply M by $D^k$ by $M^{-1}$
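The recipe above can be checked numerically; the example matrix and the choice k = 5 are assumptions:

```python
import numpy as np

# Hypothetical example matrix.
A = np.array([[2.0, 2.0],
              [1.0, 3.0]])
k = 5

w, M = np.linalg.eig(A)        # find the eigenvalues of A
D = np.diag(w)                 # create D
M_inv = np.linalg.inv(M)       # create M^{-1}
Dk = np.diag(w ** k)           # D^k: just raise each diagonal entry to the k
Ak = M @ Dk @ M_inv            # A^k = M D^k M^{-1}

assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```

The saving is that $D^k$ costs n scalar powers instead of k full matrix multiplications.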

# Eigenvalues and First-Order Linear Differential Equations

Eigenvalues have many and varied uses, and the last we will look at is solving first-order linear ODEs.

When given a system of equations such as:

$$\frac{dy_1}{dt} = a_{11}y_1 + a_{12}y_2, \qquad \frac{dy_2}{dt} = a_{21}y_1 + a_{22}y_2$$

we first write it as a matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$$

And from there we find:

$$\frac{d\mathbf{y}}{dt} = A\mathbf{y}$$

We call **y** the state vector and let t represent time. Looking to calculus, the one-dimensional counterpart $\frac{dy}{dt} = ay$ has the solution $y(t) = y_0e^{at}$, so we guess a similar exponential solution for our n-dimensional equation.

So if we guess $y = u(t) = ve^{\lambda t}$ and substitute that into the matrix equation we get $\frac{dy}{dt} = \lambda ve^{\lambda t} = Ay = Ave^{\lambda t}$.

Which leads to:

$$e^{\lambda t}(Av - \lambda v) = 0$$

And as $e^{\lambda t}$ is not 0 for any t, this holds only if $(A - \lambda I)v = 0$.

Hence $u(t) = ve^{\lambda t}$ is a solution of $\frac{d\mathbf{y}}{dt} = A\mathbf{y}$ iff $\lambda$ is an eigenvalue of A and v is an eigenvector of A for $\lambda$.

If $u_1$ and $u_2$ are solutions, then so is any linear combination of them.
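The whole recipe can be sketched numerically; the system matrix and initial state below are illustrative assumptions. Expanding $\mathbf{y}_0$ in the eigenvector basis gives the solution $\mathbf{y}(t) = \sum_i c_i v_i e^{\lambda_i t}$, and we can check that its derivative really equals $A\mathbf{y}(t)$:

```python
import numpy as np

# Hypothetical system dy/dt = A y with initial state y0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # eigenvalues are -1 and -2
y0 = np.array([1.0, 0.0])

w, V = np.linalg.eig(A)            # eigenvalues w, eigenvector columns V
c = np.linalg.solve(V, y0)         # coefficients of y0 in the eigenvector basis

def y(t):
    # y(t) = sum_i c_i * v_i * exp(lambda_i * t)
    return (V * (c * np.exp(w * t))).sum(axis=1)

t = 0.7
dy_dt = (V * (c * w * np.exp(w * t))).sum(axis=1)   # analytic derivative
assert np.allclose(dy_dt, A @ y(t))                 # satisfies dy/dt = A y
```

Each term $c_i v_i e^{\lambda_i t}$ is one of the exponential solutions above, and their sum is the linear combination matching the initial condition.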