Normalization and Decomposition of Eigenvectors

In linear algebra, an eigenvector of a matrix is a special non-zero vector associated with a system of linear equations. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations, with applications in stability analysis, atomic orbitals, matrix diagonalization, vibration analysis, and many more areas. In this section we will study the definition of eigenvectors, the normalization of vectors, and the decomposition of eigenvectors.

Formal Definition of an Eigenvector

An eigenvector of a square matrix is defined as a non-zero vector which, when the given matrix is multiplied by it, yields a scalar multiple of that vector.

Explanation:

Let us suppose that A is an n × n square matrix and v is a non-zero vector. Then v is an eigenvector of A if the product of the matrix A and the vector v equals the product of a scalar quantity λ and the same vector, such that:

A v = \lambda v

where v is the eigenvector and λ is the scalar termed the eigenvalue associated with the given matrix A.
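The defining relation A v = λv can be checked numerically. A minimal sketch using NumPy, with a sample matrix and a known eigenpair chosen purely for illustration (not from the text):

```python
import numpy as np

# Illustrative matrix: A = [[2, 0], [0, 3]] is diagonal, so
# v = [1, 0] is an eigenvector with eigenvalue lambda = 2.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
v = np.array([1.0, 0.0])
lam = 2.0

# Check the defining relation A v = lambda v.
print(np.allclose(A @ v, lam * v))  # True
```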

Normalized Eigenvector

In problems related to finding eigenvectors, we often need to compute normalized eigenvectors. A normalized eigenvector is simply an eigenvector of unit length.

It can be found by simply dividing each component of the vector by the length of the vector. Doing so converts the vector into a vector of length one.

The formula for finding the length of a vector

X = \begin{bmatrix} x_{1}\\ x_{2}\\ \vdots\\ x_{n} \end{bmatrix}

is

L = \sqrt{x_{1}^{2}+x_{2}^{2}+\dots+x_{n}^{2}}

For example, given the eigenvector

\begin{bmatrix} 1\\ -5\\ -1 \end{bmatrix}

its length is

L = \sqrt{1^{2}+(-5)^{2}+(-1)^{2}} = \sqrt{27} = 3\sqrt{3}

Its normalized form is therefore

\begin{bmatrix} \frac{1}{3\sqrt{3}}\\ \frac{-5}{3\sqrt{3}}\\ \frac{-1}{3\sqrt{3}} \end{bmatrix}
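The normalization above can be reproduced in code. A minimal sketch with NumPy, using the same example vector:

```python
import numpy as np

# The example eigenvector from the text.
v = np.array([1.0, -5.0, -1.0])

# Length L = sqrt(1^2 + (-5)^2 + (-1)^2) = sqrt(27) = 3*sqrt(3).
L = np.linalg.norm(v)

# Divide each component by the length to get the unit-length vector.
v_hat = v / L

print(np.isclose(L, 3 * np.sqrt(3)))           # True
print(np.isclose(np.linalg.norm(v_hat), 1.0))  # True
```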

Eigenvector Decomposition

The decomposition of a square matrix A into eigenvalues and eigenvectors is known as eigendecomposition. The decomposition of a square matrix into eigenvalues and eigenvectors is possible as long as the matrix formed from the eigenvectors of the given matrix is non-singular, that is, the matrix has a full set of linearly independent eigenvectors, as explained in the eigendecomposition theorem.

As is well known, a matrix represents a system of linear equations. The matrix can be worked through to determine its eigenvalues as well as its eigenvectors; the eigenvectors can be determined only after the eigenvalues have been computed. This whole process is known as eigenvalue decomposition, also termed eigendecomposition.
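The process just described (eigenvalues first, then eigenvectors, then the factorization) is automated by NumPy's `numpy.linalg.eig`. A minimal sketch, using the matrix from the worked example below:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# eig returns the eigenvalues and a matrix B whose columns
# are the corresponding eigenvectors.
eigvals, B = np.linalg.eig(A)

# Reassemble A from its eigendecomposition: A = B diag(lambda) B^{-1}.
A_rebuilt = B @ np.diag(eigvals) @ np.linalg.inv(B)
print(np.allclose(A, A_rebuilt))  # True
```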

Solved Examples On Eigenvector Decomposition

Example 1: Show the process of eigenvector decomposition of the matrix A = \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix}

Solution:

The 2 × 2 real matrix A = \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} may be decomposed into a diagonal matrix through multiplication by a non-singular matrix B = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \mathbb{R}^{2\times 2}.

Then

\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}

for some real diagonal matrix \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}.

Multiplying both sides of the equation on the left by B, we get

\begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}

The above equation can be decomposed into two simultaneous equations.

Factoring out the eigenvalues x and y:

\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = x \begin{bmatrix} a \\ c \end{bmatrix} \\ \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = y \begin{bmatrix} b \\ d \end{bmatrix} \end{cases}

Letting

\overrightarrow{a} = \begin{bmatrix} a \\ c \end{bmatrix}, \quad \overrightarrow{b} = \begin{bmatrix} b \\ d \end{bmatrix},

this gives us two vector equations:

\begin{cases} A \overrightarrow{a} = x \overrightarrow{a} \\ A \overrightarrow{b} = y \overrightarrow{b} \end{cases}

These can be represented by a single vector equation involving the two solutions as eigenvalues:

A u = \lambda u

where λ represents the two eigenvalues x and y, and u represents the vectors a and b.

Shifting λu to the left-hand side and factoring out u:

(A - \lambda I) u = 0

Since B is non-singular, u must be non-zero. Therefore,

\det(A - \lambda I) = 0

Thus

(1-\lambda)(3-\lambda) = 0

giving us the eigenvalues of the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus

\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}
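The characteristic equation (1 − λ)(3 − λ) = 0 expands to λ² − 4λ + 3 = 0, and its roots can be checked numerically. A minimal sketch using NumPy's polynomial root finder:

```python
import numpy as np

# Characteristic polynomial of A = [[1, 0], [1, 3]]:
# det(A - lambda I) = (1 - lambda)(3 - lambda) = lambda^2 - 4*lambda + 3
coeffs = [1, -4, 3]

# Roots of the polynomial are the eigenvalues; sort for a stable order.
roots = np.sort(np.roots(coeffs))
print(roots)  # [1. 3.]
```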

Putting the solutions back into the above simultaneous equations

\begin{cases} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = 1 \begin{bmatrix} a \\ c \end{bmatrix} \\ \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = 3 \begin{bmatrix} b \\ d \end{bmatrix} \end{cases}

Solving the equations, we have

a = -2c \quad \text{and} \quad b = 0, \qquad c, d \in \mathbb{R}.

Thus the matrix B required for the eigendecomposition of A is

B = \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}, \qquad c, d \in \mathbb{R}

(with c and d non-zero, so that B is non-singular),

that is:

\begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}, \qquad c, d \in \mathbb{R}
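The final identity can be verified numerically for a concrete choice of the free parameters. A minimal sketch with NumPy, picking c = 1 and d = 1 (any non-zero values work):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# B from the derivation with c = 1, d = 1: columns are the
# eigenvectors [-2c, c] and [0, d] of A.
B = np.array([[-2.0, 0.0],
              [ 1.0, 1.0]])

# B^{-1} A B should be the diagonal matrix of eigenvalues diag(1, 3).
D = np.linalg.inv(B) @ A @ B
print(np.allclose(D, np.diag([1.0, 3.0])))  # True
```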