Diagonalization

The conversion of a matrix into diagonal form is called diagonalization. A diagonal matrix is a square matrix in which all elements outside the principal diagonal are zero, and when a matrix is diagonalized, its eigenvalues appear as those diagonal entries. Let’s look at the definition, process, and solved examples of diagonalization in this article.

Diagonalization of a Matrix

An n×n matrix A is diagonalizable if there exist an invertible n×n matrix C and a diagonal matrix D such that A = CDC-1.

For example,

\(\begin{array}{l}\begin{bmatrix}4 & 0 & 0 \\0 & 5 & 0 \\0 & 0 & 6 \\\end{bmatrix}=I_{3}\begin{bmatrix}4 & 0 & 0 \\0 & 5 & 0 \\0 & 0 & 6 \\\end{bmatrix}I_{3}^{-1}\end{array} \)

Hence, any diagonal matrix D is diagonalizable, since it is similar to itself.
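As a quick numerical check of this definition (the article itself uses no code; NumPy is our assumption here), the following sketch verifies the factorization above with C = I3:

```python
import numpy as np

# The diagonal matrix from the example above.
D = np.diag([4.0, 5.0, 6.0])

# Any diagonal matrix is diagonalizable with C = I: D = I D I^{-1}.
C = np.eye(3)
A = C @ D @ np.linalg.inv(C)

print(np.allclose(A, D))  # True
```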

Diagonalization Theorem

An n×n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors.

In that case, A = CDC-1 for

\(\begin{array}{l}C =\begin{pmatrix} |& | & … & | \\v_{1} & v_{2} & … & v_{n} \\| & | & … & | \\\end{pmatrix}\end{array} \)
\(\begin{array}{l}D = \begin{bmatrix}\lambda_{1} & 0 & \cdots & 0 \\0 & \lambda_{2} & \cdots & 0 \\\vdots & \vdots &\ddots & \vdots \\0 & 0 & \cdots & \lambda_{n}\\\end{bmatrix}\end{array} \)

Here,

v1, v2, …, vn are the linearly independent eigenvectors,

λ1, λ2, …, λn are the corresponding eigenvalues.
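The theorem gives a direct recipe: take the eigenvectors as the columns of C and the matching eigenvalues as the diagonal of D. A minimal NumPy sketch (NumPy is an assumption, not part of the article) illustrating this recipe:

```python
import numpy as np

# A small matrix with linearly independent eigenvectors.
A = np.array([[0.5, 1.5],
              [1.5, 0.5]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors (our matrix C).
eigvals, C = np.linalg.eig(A)
D = np.diag(eigvals)

# If the eigenvectors are linearly independent, C is invertible
# and A = C D C^{-1}.
print(np.allclose(A, C @ D @ np.linalg.inv(C)))  # True
```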

Diagonalization Proof

Assume that matrix A has n linearly independent eigenvectors v1, v2, …, vn, with corresponding eigenvalues λ1, λ2, …, λn. Defining C as above, we can conclude that C is invertible by the invertible matrix theorem.

Define D = C-1AC, so that A = CDC-1.

Now, multiplying C by a standard coordinate vector picks out the corresponding column of C, so we have

Cei = vi, and hence ei = C-1vi.

To obtain the columns of D, we multiply by the standard coordinate vectors.

Thus we can say,

Dei = C-1ACei = C-1Avi = C-1λivi = λiC-1vi = λiei

Hence, the columns of D are the corresponding multiples of the standard coordinate vectors:

\(\begin{array}{l}D = \begin{bmatrix}\lambda_{1} & 0 & \cdots & 0 & 0 \\0 & \lambda_{2} & \cdots & 0 & 0 \\\vdots & \vdots & \ddots & \vdots & \vdots \\0 & 0 & \cdots & \lambda_{n-1} & 0 \\0 & 0 & \cdots & 0 & \lambda_{n} \\\end{bmatrix}\end{array} \)

Conversely, assume A = CDC-1, where C has columns v1, v2, …, vn, and D is diagonal with diagonal entries λ1, λ2, …, λn. Since C is invertible, its columns are linearly independent. We need to show that vi is an eigenvector of A with eigenvalue λi. Because the standard coordinate vector ei is an eigenvector of D with eigenvalue λi, we can write:

Avi = CDC-1vi = CDei = Cλiei = λiCei = λivi.

This completes the proof. A useful consequence is the following fact:

If an n×n matrix A has n distinct eigenvalues λ1, λ2, …, λn, then any choice of corresponding eigenvectors v1, v2, …, vn is automatically linearly independent.

In other words, an n×n matrix with n distinct eigenvalues is diagonalizable.
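This sufficient condition is easy to test numerically. The sketch below (NumPy assumed; distinctness is checked only up to rounding, so it is a heuristic rather than a proof) applies it to a triangular matrix whose eigenvalues are its diagonal entries:

```python
import numpy as np

# Upper triangular, so the eigenvalues are the diagonal entries 1 and 3.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

eigvals = np.linalg.eigvals(A)

# n distinct eigenvalues guarantee n linearly independent eigenvectors,
# hence diagonalizability (sufficient, not necessary).
distinct = len(np.unique(np.round(eigvals, 10))) == len(eigvals)
print(distinct)  # True
```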

Diagonalization of 2×2 Matrix

Go through the below example to understand the procedure for diagonalizing the 2×2 matrix.

Example 1:

Diagonalize the given 2×2 matrix:

\(\begin{array}{l}A = \begin{bmatrix}1/2 & 3/2 \\3/2 & 1/2 \\\end{bmatrix}\end{array} \)

Solution:

First, calculate the characteristic polynomial to find the Eigenvalues and Eigenvectors.

Hence, f(λ) = λ² − Tr(A)λ + det(A) = λ² − λ − 2 = (λ + 1)(λ − 2)

Thus, the Eigenvalues are -1 and 2.

Now, we have to calculate the Eigenvector for every Eigenvalue.

Thus, if λ1= -1,

\(\begin{array}{l}(A+1I_{2})v =0\Leftrightarrow \begin{bmatrix}3/2 & 3/2 \\3/2 & 3/2 \\\end{bmatrix}v =0\overset{RREF}{\rightarrow}\begin{bmatrix}1 & 1 \\0 & 0 \\\end{bmatrix}v =0\end{array} \)

Now, the parametric form is x = -y, and hence

\(\begin{array}{l}v_{1}=\begin{pmatrix}-1 \\1\end{pmatrix}\end{array} \)
is an Eigenvector for the Eigenvalue λ1.

Similarly, the Eigenvector for the Eigenvalue λ2= 2 is given by:

\(\begin{array}{l}(A-2I_{2})v =0\Leftrightarrow \begin{bmatrix}-3/2 & 3/2 \\3/2 & -3/2 \\\end{bmatrix}v =0\overset{RREF}{\rightarrow}\begin{bmatrix}1 & -1 \\0 & 0 \\\end{bmatrix}v =0\end{array} \)

Hence, the parametric form is x = y, and hence

\(\begin{array}{l}v_{2}=\begin{pmatrix}1 \\1\end{pmatrix}\end{array} \)
is an Eigenvector for the Eigenvalue λ2.

Therefore, the Eigenvectors v1 and v2 are linearly independent and according to the diagonalization theorem, we can write

A = CDC-1 for

\(\begin{array}{l}C =\begin{bmatrix}-1 & 1 \\1 & 1 \\\end{bmatrix}\end{array} \)
\(\begin{array}{l}D =\begin{bmatrix}-1 & 0 \\0 & 2 \\\end{bmatrix}\end{array} \)

If instead we list the eigenvalue λ = 2 first, we get

A = C’D’(C’)-1 for

\(\begin{array}{l}C’ =\begin{bmatrix}1 & -1 \\1 & 1 \\\end{bmatrix}\end{array} \)
\(\begin{array}{l}D’ =\begin{bmatrix}2 & 0 \\0 & -1 \\\end{bmatrix}\end{array} \)
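The factorization found in Example 1 can be double-checked numerically; a short NumPy sketch (NumPy assumed available, not part of the original solution):

```python
import numpy as np

A = np.array([[0.5, 1.5],
              [1.5, 0.5]])
C = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])   # eigenvectors v1, v2 as columns
D = np.diag([-1.0, 2.0])      # corresponding eigenvalues -1 and 2

# Verify A = C D C^{-1}.
print(np.allclose(A, C @ D @ np.linalg.inv(C)))  # True
```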

Diagonalization of 3×3 Matrix

For a better understanding of how to diagonalize a 3×3 matrix, look at the example below.

Example 2:

Diagonalize the given matrix:

\(\begin{array}{l}\begin{bmatrix}4 & -3 & 0 \\2 & -1 & 0 \\1 & -1 & 1 \\\end{bmatrix}\end{array} \)

Solution:

First, find the characteristic polynomial by expanding cofactors along the 3rd column, then compute the eigenvalues and eigenvectors.

\(\begin{array}{l}f(\lambda )=det(A-\lambda I_{3})= (1-\lambda )det\left [ \begin{bmatrix}4 & -3 \\2 & -1 \\\end{bmatrix}-\lambda I_{2} \right ]\end{array} \)

f(λ) = (1 − λ)(λ² − 3λ + 2) = −(λ − 1)²(λ − 2).

Hence, the Eigenvalues are 1 and 2.

First, compute the eigenvectors for the eigenvalue λ1 = 1:

\(\begin{array}{l}(A-I_{3})v =0\Leftrightarrow \begin{bmatrix}3 & -3 & 0 \\2 & -2 & 0 \\1 & -1 & 0 \\\end{bmatrix}v=0\overset{RREF}{\rightarrow}\begin{bmatrix}1 & -1 & 0 \\0 & 0 & 0 \\0 & 0 & 0 \\\end{bmatrix}v=0\end{array} \)

Thus, the parametric vector form is

\(\begin{array}{l}\left\{\begin{matrix}x=y \\y=y \\z=z\end{matrix}\right.\Rightarrow \begin{pmatrix}x \\y\\z\end{pmatrix}=y\begin{pmatrix}1 \\ 1\\0\end{pmatrix}+z\begin{pmatrix}0 \\0 \\1\end{pmatrix}\end{array} \)

Therefore, the basis for the 1-Eigenspace is

\(\begin{array}{l}B_{1}=\left\{ v_{1}, v_{2}\right\}\end{array} \)

Where,

\(\begin{array}{l}v_{1}=\begin{pmatrix}1 \\1 \\0\end{pmatrix}\end{array} \)
\(\begin{array}{l}v_{2}=\begin{pmatrix}0 \\0 \\1\end{pmatrix}\end{array} \)

Similarly, compute the Eigenspace for λ2 = 2.

\(\begin{array}{l}(A-2I_{3})v =0\Leftrightarrow \begin{bmatrix}2 & -3 & 0 \\2 & -3 & 0 \\1 & -1 & -1 \\\end{bmatrix}v=0\overset{RREF}{\rightarrow}\begin{bmatrix}1 & 0 & -3 \\0 & 1 & -2 \\0 & 0 & 0 \\\end{bmatrix}v=0\end{array} \)

Therefore, the parametric form is x = 3z and y = 2z, so an eigenvector for the eigenvalue 2 is

\(\begin{array}{l}v_{3}=\begin{pmatrix}3 \\2 \\1\end{pmatrix}\end{array} \)

Hence, the Eigenvectors v1, v2 and v3 are linearly independent.

Here, v1 and v2 form the basis of 1-Eigenspace, whereas v3 does not belong to 1-Eigenspace, as its Eigenvalue is 2.

Hence, from the diagonalization theorem, we can write

A = CDC-1, for

\(\begin{array}{l}C =\begin{bmatrix}1 & 0 & 3 \\1 & 0 & 2 \\0 & 1 & 1 \\\end{bmatrix} \end{array} \)
\(\begin{array}{l}D =\begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 2 \\\end{bmatrix} \end{array} \)
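As in the 2×2 case, we can verify this factorization numerically; a NumPy sketch (NumPy assumed, not part of the original solution):

```python
import numpy as np

A = np.array([[4.0, -3.0, 0.0],
              [2.0, -1.0, 0.0],
              [1.0, -1.0, 1.0]])
C = np.array([[1.0, 0.0, 3.0],
              [1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])  # columns are v1, v2, v3
D = np.diag([1.0, 1.0, 2.0])    # eigenvalues 1, 1, 2

# Verify A = C D C^{-1}.
print(np.allclose(A, C @ D @ np.linalg.inv(C)))  # True
```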

Frequently Asked Questions on Diagonalization

Q1

What is meant by diagonalization?

The transformation of a matrix into diagonal form is known as diagonalization.

Q2

How do you determine that the matrix is diagonalizable?

A matrix is diagonalizable if and only if, for each eigenvalue, the dimension of its eigenspace (the geometric multiplicity) equals the eigenvalue’s algebraic multiplicity.
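This criterion can be checked numerically. The helper below is a rough sketch of one way to do it (NumPy assumed; the function name and the rounding used to group nearly equal eigenvalues are our choices, and floating-point rank computations are only approximate):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-9):
    """Compare the geometric and algebraic multiplicity of each
    (numerically rounded) eigenvalue of the square matrix A."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    vals, alg_mult = np.unique(np.round(eigvals, 8), return_counts=True)
    for lam, m in zip(vals, alg_mult):
        # Geometric multiplicity = dim ker(A - lam I) = n - rank(A - lam I).
        geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if geo_mult < m:
            return False
    return True

print(is_diagonalizable(np.array([[1.0, 0.0], [0.0, 1.0]])))  # True
print(is_diagonalizable(np.array([[1.0, 1.0], [0.0, 1.0]])))  # False
```

The second matrix is the classic non-diagonalizable example: its only eigenvalue 1 has algebraic multiplicity 2 but a one-dimensional eigenspace.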

Q3

What is the main purpose of diagonalization?

The primary goal of diagonalization is to simplify computations with a matrix. Once A = CDC-1, powers and other functions of A reduce to entrywise operations on the diagonal of D; for example, Ak = CDkC-1.
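For instance, using the 2×2 matrix diagonalized in Example 1, a power of A can be computed through D; a NumPy sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[0.5, 1.5],
              [1.5, 0.5]])
C = np.array([[-1.0, 1.0],
              [ 1.0, 1.0]])
D = np.diag([-1.0, 2.0])

# A^5 = C D^5 C^{-1}; powers of a diagonal matrix are taken entrywise.
A5 = C @ np.diag(np.diag(D) ** 5) @ np.linalg.inv(C)

print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```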
