Properties of Eigenvalues

In linear algebra, we come across an important object called a matrix (plural: matrices). A matrix is a two-dimensional array of numbers or expressions, and every square matrix has an associated characteristic equation; the roots of this equation are termed eigenvalues. This article helps students form a clear idea of the properties of eigenvalues.

Eigenvalues are also known as characteristic values or characteristic roots. In branches such as physics and engineering, knowledge of eigenvalues and how to calculate them is extremely important. On this page, we discuss the properties of eigenvalues in detail.

Eigenvalue Equation

The equation used for finding the eigenvalues of a matrix is known as the eigenvalue equation.

The eigenvalue equation is shown below:

\left | A - \lambda I \right | = 0

Where A is a k \times k square matrix.

The two vertical bars | | denote the determinant of the expression written within them.

\lambda denotes an eigenvalue of the matrix A.

I is the identity matrix of the same order as A.
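
As a quick illustration, here is a minimal numerical sketch of this equation, assuming Python with NumPy (not part of the original discussion); the matrix used is chosen only for the example.

    # Minimal sketch: for each eigenvalue, det(A - lambda*I) should vanish.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])              # example matrix; its eigenvalues are 5 and 2

    eigenvalues = np.linalg.eigvals(A)       # eigenvalues computed directly by NumPy
    for lam in eigenvalues:
        print(lam, np.linalg.det(A - lam * np.eye(2)))   # determinant is ~0 each time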

Eigenvalue Properties

A few important properties of eigenvalues are as follows:

(1) A matrix possesses an inverse if and only if all of its eigenvalues are nonzero.

(2) Let us consider an n × n matrix A whose eigenvalues are \lambda_{1}, \lambda_{2}, …, \lambda_{n}; then the following hold (a short numerical check appears after the list):

i) The trace of matrix A is equal to the sum of its eigenvalues, as shown below:

tr(A) = \lambda_{1} + \lambda_{2} + … + \lambda_{n}

ii) The determinant of matrix A is equal to the product of the eigenvalues of A, as given below:

det(A) = \lambda_{1} \cdot \lambda_{2} \cdots \lambda_{n}

iii) The eigenvalues of the k^{th} power of matrix A, i.e. A^{k}, are \lambda_{1}^{k}, \lambda_{2}^{k}, …, \lambda_{n}^{k}.

iv) If the matrix A is invertible, then its inverse A^{-1} has eigenvalues \frac{1}{\lambda_{1}}, \frac{1}{\lambda_{2}}, …, \frac{1}{\lambda_{n}}; moreover, if x ≠ 0 is an eigenvector of A for the eigenvalue \lambda, then x is also an eigenvector of A^{-1} for the eigenvalue 1/\lambda.
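
A short sketch of properties i)-iv), again assuming NumPy and an arbitrarily chosen 2 × 2 matrix:

    # Check: trace = sum of eigenvalues, det = product, powers and inverses scale eigenvalues.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    lam = np.linalg.eigvals(A)

    print(np.isclose(np.trace(A), lam.sum()))        # i)  tr(A)  = sum of eigenvalues
    print(np.isclose(np.linalg.det(A), lam.prod()))  # ii) det(A) = product of eigenvalues
    print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.matrix_power(A, 3))),
                      np.sort(lam ** 3)))            # iii) eigenvalues of A^3 are lambda^3
    print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))),
                      np.sort(1.0 / lam)))           # iv) eigenvalues of A^-1 are 1/lambda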

(3) An Eigenvalue Can Be Zero

A matrix can have zero as one of its eigenvalues. This happens exactly when the matrix is singular, i.e. when \lambda = 0 satisfies the eigenvalue equation of the given matrix, which is the same as saying det(A) = 0. In the context of a linear system of differential equations, a zero eigenvalue means the system has more than one equilibrium point: an entire line of equilibria passing through the origin (0, 0) rather than the origin alone.
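
For instance, a short sketch with a deliberately singular matrix (NumPy assumed):

    # A singular matrix (det = 0) always has 0 among its eigenvalues.
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])             # second row is twice the first, so det(A) = 0

    print(np.linalg.det(A))                # ~0
    print(np.linalg.eigvals(A))            # one eigenvalue is 0, the other is 5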

(4) If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the main diagonal of A.
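
A brief check of this property with an upper triangular example (NumPy assumed):

    # The eigenvalues of a triangular matrix are exactly its diagonal entries.
    import numpy as np

    U = np.array([[2.0, 7.0, 1.0],
                  [0.0, 5.0, 3.0],
                  [0.0, 0.0, 9.0]])        # upper triangular

    print(np.sort(np.linalg.eigvals(U)))   # [2. 5. 9.]
    print(np.sort(np.diag(U)))             # [2. 5. 9.] -- the same values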

(5) If μ ≠ 0 is a complex number, λ is an eigenvalue of the matrix A, and x ≠ 0 is a corresponding eigenvector, then μx is also an eigenvector corresponding to λ.
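
A one-line check of this scaling property, assuming NumPy and an arbitrarily chosen scalar μ:

    # Scaling an eigenvector by a nonzero scalar gives another eigenvector
    # for the same eigenvalue.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    w, V = np.linalg.eig(A)                # columns of V are eigenvectors of A
    lam, x = w[0], V[:, 0]

    mu = 2.5 - 1.0j                        # any nonzero (possibly complex) scalar
    print(np.allclose(A @ (mu * x), lam * (mu * x)))   # True: mu*x is still an eigenvector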

(6) If A is an n × n matrix, then the following statements are equivalent (a brief numerical check follows the list).

  • A is invertible.
  • λ = 0 is not an eigenvalue of A.
  • det(A) ≠ 0.
  • Ax = 0 has only the trivial solution.
  • Ax = b has exactly one solution for every n × 1 matrix b.
  • A^{T}A is invertible.
  • The reduced row-echelon form of A is I_{n}.
  • A is expressible as a product of elementary matrices.
  • Ax = b is consistent for every n × 1 matrix b.
  • The column vectors of A are linearly independent.
  • The row vectors of A are linearly independent.
  • The column vectors of A span R^{n}.
  • The row vectors of A span R^{n}.
  • The column vectors of A form a basis for R^{n}.
  • The row vectors of A form a basis for R^{n}.
  • A has rank n.
  • A has nullity 0.
  • The orthogonal complement of the null space of A is R^{n}.
  • The orthogonal complement of the row space of A is {0}.
  • The range of T_{A} (the transformation x \mapsto Ax) is R^{n}.
  • T_{A} is one-to-one.
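
A small numerical spot-check of a few of these equivalent conditions for one invertible matrix (NumPy assumed; the matrix is illustrative only):

    # For an invertible A: det != 0, full rank, no zero eigenvalue, Ax = b solvable.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    print(np.linalg.det(A) != 0)                           # det(A) != 0
    print(np.linalg.matrix_rank(A) == A.shape[0])          # A has rank n
    print(np.all(np.abs(np.linalg.eigvals(A)) > 1e-12))    # 0 is not an eigenvalue
    b = np.array([1.0, 2.0])
    print(np.allclose(A @ np.linalg.solve(A, b), b))       # Ax = b has a solution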

(7) If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.

(8) If A is a square matrix, then:

  • For every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic multiplicity.
  • A is diagonalizable if and only if the geometric multiplicity is equal to the algebraic multiplicity for every eigenvalue; equivalently, if and only if A has n linearly independent eigenvectors (see the sketch after this list).
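
The sketch below (NumPy assumed) shows a matrix that fails this test: the eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1, so the matrix is not diagonalizable.

    # Geometric multiplicity of eigenvalue 2 = n - rank(A - 2I).
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])             # characteristic polynomial (lambda - 2)^2

    geometric = A.shape[0] - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
    print(geometric)                       # 1, less than the algebraic multiplicity 2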

(9) Let A and B be similar matrices. If the similarity transformation is performed by an orthogonal matrix Q or a unitary matrix U, i.e. if B = Q^{T}AQ or B = U^{H}AU, we say that the matrices A and B are unitarily similar. Similar matrices have the same eigenvalues; since unitarily similar matrices are a special case of similar matrices, the eigenvalues of unitarily similar matrices are the same.
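
A quick numerical sketch of this statement, assuming NumPy; a random symmetric matrix is used purely for convenience so that its eigenvalues are real.

    # B = Q^T A Q with orthogonal Q is unitarily similar to A and has the same eigenvalues.
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((3, 3))
    A = M + M.T                                       # symmetric, hence real eigenvalues
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # Q is orthogonal: Q^T Q = I

    B = Q.T @ A @ Q
    print(np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)))   # True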

(10) If A is a Hermitian matrix (a real symmetric matrix in the real case), then:

  • The eigenvalues of A are all real numbers.
  • Eigenvectors from different eigenspaces are orthogonal (see the sketch after this list).
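
A sketch of both points for a small real symmetric matrix, assuming NumPy:

    # Real symmetric (Hermitian) matrix: real eigenvalues, orthogonal eigenvectors.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])             # symmetric

    w, V = np.linalg.eigh(A)               # eigh is specialised for Hermitian/symmetric input
    print(w)                               # both eigenvalues are real numbers
    print(np.isclose(V[:, 0] @ V[:, 1], 0.0))   # eigenvectors from different eigenspaces are orthogonal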

For example, if the characteristic equation is \lambda^{2} - 4\lambda = 0, then the eigenvalues are \lambda = 0 and \lambda = 4.
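
One matrix with this characteristic polynomial (chosen here only for illustration, not taken from the original text) can be checked numerically with NumPy:

    # [[2, 2], [2, 2]] has trace 4 and determinant 0, so its characteristic
    # polynomial is lambda^2 - 4*lambda, with eigenvalues 0 and 4.
    import numpy as np

    A = np.array([[2.0, 2.0],
                  [2.0, 2.0]])

    print(np.poly(A))                      # ~[1., -4., 0.]  i.e. lambda^2 - 4*lambda
    print(np.sort(np.linalg.eigvals(A)))   # [0. 4.]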

Dominant and Complex Eigenvalue

Dominant Eigenvalue

The dominant eigenvalue of a matrix is the eigenvalue that is largest in absolute value (magnitude) among all of its eigenvalues.

Let us suppose that A is a square matrix of order n and \lambda_{1}, \lambda_{2}, …, \lambda_{n} are its eigenvalues, such that:

|\lambda_{1}| > |\lambda_{2}| \geq … \geq |\lambda_{n}|

Then \lambda_{1}, which is the largest of all the eigenvalues of matrix A in absolute value, is known as the dominant eigenvalue.
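
The dominant eigenvalue is what the classical power-iteration method converges to; below is a minimal sketch, assuming NumPy and an illustrative matrix whose dominant eigenvalue is 5.

    # Power iteration: repeated multiplication by A pulls the iterate toward the
    # dominant eigenvector; the Rayleigh quotient then estimates the dominant eigenvalue.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])             # eigenvalues 5 and 2; dominant eigenvalue is 5

    x = np.array([1.0, 0.0])               # arbitrary nonzero starting vector
    for _ in range(50):
        x = A @ x
        x = x / np.linalg.norm(x)          # renormalise to avoid overflow

    print(x @ A @ x)                       # Rayleigh quotient of a unit vector: ~5.0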

Complex Eigenvalue

So far, we know that all the values of \lambda computed from the eigenvalue equation \left | A - \lambda I \right | = 0 are known as eigenvalues.

When the characteristic equation, on being solved, gives roots that are complex in nature, the matrix is said to have complex eigenvalues.

In other words, complex eigenvalues of a matrix are the eigenvalues that are of the form:

\lambda_{1} = a + ib and \lambda_{2} = a - ib

Where a and b are the real and imaginary parts, respectively.
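
For example, a real 2 × 2 matrix of the form [[a, -b], [b, a]] has exactly such a conjugate pair of eigenvalues; a quick check assuming NumPy:

    # The eigenvalues of [[a, -b], [b, a]] are a + ib and a - ib.
    import numpy as np

    a, b = 1.0, 2.0
    A = np.array([[a, -b],
                  [b,  a]])

    print(np.linalg.eigvals(A))            # the conjugate pair 1+2j and 1-2j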
