
Important Matrices and Determinants Formulas for JEE Main and Advanced

Matrices and Determinants is an important topic for the JEE exam, and students can expect 2-3 questions from it. The formulas below will help students make a quick revision before the exam.


Matrices and Determinants Formulas

Matrices

1. Matrix: Any rectangular arrangement of numbers in m rows and n columns is called a matrix of order m×n.

\(\begin{array}{l}A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1j} & \cdots & a_{1n}\\ a_{21} & a_{22} & a_{23} & \cdots & a_{2j} & \cdots & a_{2n}\\ \vdots & \vdots & \vdots & & \vdots & & \vdots\\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mj} & \cdots & a_{mn} \end{bmatrix}\end{array} \)

where aij denotes the element in the ith row and jth column. The above matrix is denoted by [aij]m×n. The elements a11, a22, a33, etc. are called the diagonal elements, and their sum is called the trace of A, denoted by Tr(A).
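As a quick sanity check, the diagonal elements and the trace can be computed directly; a minimal sketch in plain Python using a list-of-lists matrix (the entries are arbitrary example values):

```python
# A 3x3 matrix stored as a list of rows.
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

# Diagonal elements a11, a22, a33 and their sum Tr(A).
diagonal = [A[i][i] for i in range(len(A))]
trace = sum(diagonal)
print(diagonal, trace)  # [1, 5, 10] 16
```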

2. Basic Definitions

(i) Row matrix: A matrix having one row is called a row matrix.

(ii) Column matrix: A matrix having one column is called a column matrix.

(iii) Square matrix: A matrix of order m×n is called a square matrix if m = n.

(iv) Zero matrix: A = [aij]m×n is called a zero matrix if aij = 0 for all i and j.

(v) Upper triangular matrix: A = [aij]m×n is said to be upper triangular if aij = 0 for i > j.

(vi) Lower triangular matrix: A = [aij]m×n is said to be lower triangular if aij = 0 for i < j.

(vii) Diagonal matrix: A square matrix [aij]n×n is said to be diagonal if aij = 0 for i ≠ j.

(viii) Scalar matrix: A diagonal matrix A = [aij]n×n is said to be scalar if aij = k for i = j.

(ix) Unit matrix (Identity matrix): A diagonal matrix A = [aij]n is a unit matrix if aij = 1 for i = j.

(x) Comparable matrices: Two matrices A and B are comparable, if they have the same order.

3. Equality of matrices: Two matrices A = [aij]m×n and B = [bij]p×q are said to be equal if m = p, n = q and aij = bij ∀ i and j.

4. Multiplication of a matrix by a scalar: Let λ be a scalar; then λA = [bij]m×n, where bij = λaij ∀ i and j.

5. Addition of matrices: Let A = [aij]m×n and B = [bij]m×n be two matrices; then A+B = [aij]m×n + [bij]m×n = [cij]m×n, where cij = aij + bij ∀ i and j.

6. Subtraction of matrices: A-B = A+(-B), where -B = (-1)B.

7. Properties of addition and scalar multiplication:

(i) λ(A+B) = λA+λB

(ii) λA = Aλ

(iii) (λ1+λ2)A = λ1A+λ2A
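These identities are easy to verify numerically; a minimal sketch in plain Python (the matrices and the scalar are arbitrary examples):

```python
def add(X, Y):
    # Entrywise sum of two matrices of the same order.
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def scale(k, X):
    # Multiply every entry of X by the scalar k.
    return [[k * x for x in row] for row in X]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
lam = 3

# Property (i): lambda(A+B) = lambda*A + lambda*B
assert scale(lam, add(A, B)) == add(scale(lam, A), scale(lam, B))
```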

8. Multiplication of matrices: Let A = [aij]m×p and B = [bij]p×n; then AB = [cij]m×n, where

\(\begin{array}{l}c_{ij} = \sum_{k=1}^{p}a_{ik}\: b_{kj}\end{array} \)
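The summation above translates directly into code; a sketch with example 2×2 matrices:

```python
def matmul(A, B):
    # c_ij = sum over k of a_ik * b_kj, where A is m x p and B is p x n.
    p = len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(p))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```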

9. Properties of matrix multiplication:

(i) AB ≠ BA in general, i.e., matrix multiplication is not commutative.

(ii) (AB)C = A(BC)

(iii) AIn = A = InA

(iv) For every non-singular square matrix A (i.e., | A | ≠ 0), there exists a unique matrix B such that AB = In = BA. In this case, we say that A and B are multiplicative inverses of one another, i.e., B = A-1 or A = B-1.
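Both facts can be checked on a small example (the 2×2 matrices below are chosen purely for illustration; the inverse is worked out by hand as adj A divided by | A |):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]   # column-swap matrix

# (i) AB != BA in general:
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]

# (iv) A is non-singular (|A| = -2); its inverse satisfies A * A_inv = I.
A_inv = [[-2.0, 1.0], [1.5, -0.5]]   # adj A / |A|, computed by hand
print(matmul(A, A_inv))  # [[1.0, 0.0], [0.0, 1.0]]
```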

10. Transpose of a Matrix.

Let A = [aij]m×n; then A' or AT, the transpose of A, is defined as A' = [aji]n×m.

(i) (A')' = A

(ii) (λA)' = λA'

(iii) (A+B)' = A'+B'

(iv) (A-B)' = A'-B'

(v) (AB)' = B'A' (note the reversal of order)

(vi) For a square matrix A, if A' = A, then A is said to be a symmetric matrix.

(vii) For a square matrix A, if A' = -A, then A is said to be a skew-symmetric matrix.
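In particular, the reversal rule (AB)' = B'A' and the symmetry of A + A' can be verified directly (example matrices):

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# (AB)' equals B'A' (reversed order), not A'B'.
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))

# A + A' is symmetric for any square matrix A.
S = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A, transpose(A))]
assert S == transpose(S)
```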

11. Submatrix of a matrix:

Let A be a given matrix. The matrix obtained by deleting some rows and columns of A is called a submatrix of A.

12. Properties of determinants:

(i) | A | = | A' | for any square matrix A.

(ii) If two rows or two columns are identical, then | A | = 0.

(iii) | λA | = λn| A |, where A = [aij]n×n.

(iv) If A and B are two square matrices of the same order, then | AB |= | A || B |
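Property (iv) checked numerically for 2×2 matrices (the values are arbitrary examples):

```python
def det2(M):
    # Determinant of a 2x2 matrix: ad - bc.
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]   # |A| = -2
B = [[2, 0], [1, 3]]   # |B| = 6

assert det2(matmul(A, B)) == det2(A) * det2(B)  # both sides are -12
```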

13. Singular and Non-singular matrix:

A square matrix A is said to be singular, if | A | = 0.

A square matrix A is said to be non-singular, if | A | β‰  0.

14. Cofactor and adjoint matrix.

Let A = [aij]nΓ—n be a square matrix. The matrix obtained by replacing each element of A by the corresponding cofactor is called the cofactor matrix of A. The transpose of the matrix of the cofactor of A is called the adjoint of A, denoted as adj A.

15. Properties of adj A.

(i) A(adj A) = | A |In = (adj A)A, where A = [aij]n×n.

(ii) | adj A | = | A |n-1 , where n is the order of A.

(iii) If A is a symmetric matrix, then adj A is also symmetric.

(iv) If A is singular, then adj A is also singular.

(v) Let A be a non-singular matrix; then

\(\begin{array}{l}\frac{adj \: A}{\left | A \right |}\ \text{is the multiplicative inverse of A and is denoted by}\ A^{-1}.\end{array} \)

(vi) (A-1)T = (AT)-1 for any non-singular matrix A.

(vii) (A-1)-1 = A, if A is non-singular.

(viii) A-1 is always non-singular.

(ix) (adj AT) = (adj A)T

(x) Let k be a non-zero scalar and A be a non-singular matrix; then

\(\begin{array}{l}(kA)^{-1} = \frac{1}{k}A^{-1}\end{array} \)

(xi) | A-1 | = 1/| A | for | A | ≠ 0.

(xii) Let A be a non-singular matrix; then AB = AC ⇒ B = C and BA = CA ⇒ B = C.
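For a 2×2 matrix the adjoint can be written down directly, which makes the identities A(adj A) = | A |I and A-1 = (adj A)/| A | easy to check. A sketch using exact `Fraction` arithmetic (the matrix is an arbitrary example):

```python
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def adj2(M):
    # adj [[a, b], [c, d]] = [[d, -b], [-c, a]]
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[Fraction(4), Fraction(7)],
     [Fraction(2), Fraction(6)]]
d = det2(A)                                       # |A| = 10, non-singular
A_inv = [[e / d for e in row] for row in adj2(A)]  # adj A / |A|

assert matmul(A, adj2(A)) == [[d, 0], [0, d]]  # A * adj A = |A| * I
assert matmul(A, A_inv) == [[1, 0], [0, 1]]    # A * A^{-1} = I
```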

16. System of linear equations and matrices:

A system of linear equations AX = B is said to be consistent if it has at least one solution.

(i) System of linear equations and matrix inverse:

(a) If A is non-singular, the solution is given by X = A-1B.

(b) If A is singular, (adj A)B = 0, and no two columns of A are proportional, then the system has infinitely many solutions.

(c) If A is singular and (adj A)B ≠ 0, then the system has no solution.

(ii) Homogeneous system and matrix inverse:

If the above system is homogeneous, with n equations in n unknowns, then in matrix form it is AX = 0 (since b1 = b2 = … = bn = 0), where A is a square matrix.

If A is non-singular, the system has only the trivial solution X = 0.

If A is singular, then the system has infinitely many solutions (including the trivial solution) and hence it has non-trivial solutions.
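A sketch of case (a), X = A-1B, for a 2×2 system (the system below is chosen purely for illustration; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

# System: 2x + y = 5, x + 3y = 10, written as AX = B.
A = [[Fraction(2), Fraction(1)],
     [Fraction(1), Fraction(3)]]
B = [Fraction(5), Fraction(10)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 5, non-zero => unique solution
# For a 2x2 matrix, A^{-1} = adj A / |A|.
A_inv = [[A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det, A[0][0] / det]]

X = [sum(A_inv[i][j] * B[j] for j in range(2)) for i in range(2)]
print(X)  # [Fraction(1, 1), Fraction(3, 1)], i.e. x = 1, y = 3
```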

(iii) Elementary row transformation of Matrix:

The following operations on a matrix are called elementary row transformations.

(a) Interchanging two rows.
(b) Multiplication of all the elements of a row by a non-zero scalar.

(c) Addition of a constant multiple of a row to another row.

17. Characteristic Polynomial and characteristic equation.

Let A be a square matrix, then the polynomial | A-xI | is called the characteristic polynomial of A and the equation | A-xI | = 0 is called the characteristic equation of A.

18. Cayley Hamilton theorem:

Every square matrix A satisfies its characteristic equation, i.e., if a0xn + a1xn-1 + … + an-1x + an = 0 is the characteristic equation of A, then a0An + a1An-1 + … + an-1A + anI = O.
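For a 2×2 matrix the characteristic equation is x2 - (Tr A)x + | A | = 0, so the theorem says A2 - (Tr A)A + | A |I = O. A numerical check (example matrix):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]                        # Tr(A) = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # |A| = -2

A2 = matmul(A, A)
I = [[1, 0], [0, 1]]
# A^2 - (Tr A)*A + |A|*I should be the zero matrix.
residual = [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]
print(residual)  # [[0, 0], [0, 0]]
```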

19. More definitions on matrices:

(i) Nilpotent matrix: A square matrix A is called nilpotent if Ap = 0 for some positive integer p. If p is the smallest such positive integer, then p is called its nilpotency.

(ii) Idempotent matrix: A square matrix A is said to be idempotent if A2 = A.

(iii) Involutory matrix: A square matrix A is said to be involutory if A2 = I.

(iv) Orthogonal matrix: A square matrix A is said to be orthogonal if ATA = I = AAT.

(v) Unitary matrix: A square matrix A is said to be unitary if

\(\begin{array}{l}A\: (\bar{A})^{T}= I,\ \text{where}\ \bar{A}\ \text{is the complex conjugate of A.}\end{array} \)
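An integer example of an orthogonal matrix (a 90° rotation), checked against the definition ATA = I = AAT:

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

R = [[0, -1],
     [1,  0]]   # rotation by 90 degrees
I = [[1, 0], [0, 1]]

assert matmul(transpose(R), R) == I
assert matmul(R, transpose(R)) == I
```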

Determinants

We write the expression a1b2 - a2b1 as

\(\begin{array}{l}\begin{vmatrix} a_{1} & b_{1}\\ a_{2}& b_{2} \end{vmatrix}\end{array} \)
which is called a determinant of order 2.

1. Expansion of determinant:

\(\begin{array}{l}\begin{vmatrix} a_{1} & b_{1} & c_{1}\\ a_{2}& b_{2} & c_{2}\\ a_{3}& b_{3} & c_{3} \end{vmatrix}\end{array} \)
is called the determinant of order three. Its value can be found as

D = a1(b2c3 - b3c2) - b1(a2c3 - a3c2) + c1(a2b3 - a3b2)

2. Minors: The minor of aij is obtained by deleting the ith row and the jth column from the determinant. It is denoted by Mij.

3. Cofactor: Cofactor of element aij is Cij = (-1)i+j Mij

D = a11M11 - a12M12 + a13M13 = a11C11 + a12C12 + a13C13
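The cofactor expansion along the first row, implemented directly (the matrix is an arbitrary example):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def minor(M, i, j):
    # Minor M_ij: delete row i and column j, then take the determinant.
    return [[M[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det3(M):
    # Expansion along the first row: sum of a_1j * C_1j.
    return sum((-1) ** j * M[0][j] * det2(minor(M, 0, j)) for j in range(3))

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3(M))  # -3
```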

4. Transpose of a Determinant: The transpose of a determinant is a determinant obtained after interchanging the rows and columns.

\(\begin{array}{l}D = \begin{vmatrix} a_{1} & b_{1} & c_{1}\\ a_{2}& b_{2} & c_{2}\\ a_{3}& b_{3} & c_{3} \end{vmatrix}\end{array} \)
and
\(\begin{array}{l}D^{T}= \begin{vmatrix} a_{1} & a_{2} & a_{3}\\ b_{1}& b_{2} & b_{3}\\ c_{1}& c_{2} & c_{3} \end{vmatrix}\end{array} \)

5. Symmetric, Skew symmetric, Asymmetric Determinants:

(i) A determinant is symmetric if it is identical to its transpose, i.e., aij = aji for all values of i and j.

(ii) A determinant is skew-symmetric if it is identical to its transpose with the sign of every element inverted, i.e., aij = -aji for all values of i and j.

(iii) A determinant is asymmetric if it is neither symmetric nor skew-symmetric.

6. Properties of determinants:

(i) D = D’

(ii) If a determinant has all the elements zero in any row or column, then D = 0

(iii) If any two rows or columns of a determinant be interchanged, then D’ = -D.

(iv) If a determinant has any two rows or columns identical, then D = 0.

(v) If all the elements of any row or column be multiplied by the same number k, then D’ = kD.

(vi) If each element of any row or column can be expressed as a sum of two terms, then the determinant can be expressed as the sum of two determinants. i.e.

\(\begin{array}{l}\begin{vmatrix} a_{1}+x & b_{1}+y & c_{1}+z\\ a_{2}& b_{2} & c_{2}\\ a_{3}& b_{3} & c_{3} \end{vmatrix}=\begin{vmatrix} a_{1} & b_{1} & c_{1}\\ a_{2}& b_{2} & c_{2}\\ a_{3}& b_{3} & c_{3} \end{vmatrix}+ \begin{vmatrix} x & y & z\\ a_{2}& b_{2} & c_{2}\\ a_{3}& b_{3} & c_{3} \end{vmatrix}\end{array} \)

(vii) The value of a determinant is not altered by adding to the elements of any row or column a constant multiple of the corresponding elements of any other row or column.

7. Multiplication of two determinants:

\(\begin{array}{l}\begin{vmatrix} a_{1} &b_{1} \\ a_{2}& b_{2} \end{vmatrix} \times \begin{vmatrix} l_{1} &m_{1}\\l_{2}&m_{2}\end{vmatrix}= \begin{vmatrix} a_{1}l_{1}+b_{1} l_{2}&a_{1}m_{1}+b_{1} m_{2} \\ a_{2}l_{1}+b_{2} l_{2} & a_{2}m_{1}+b_{2} m_{2} \end{vmatrix}\end{array} \)

8. Summation of determinants:

Let

\(\begin{array}{l}\Delta (r) = \begin{vmatrix} f(r) & g(r) &h(r) \\ a_{1}&a_{2} &a_{3} \\ b_{1}& b_{2} & b_{3} \end{vmatrix}\end{array} \)
, then
\(\begin{array}{l}\sum_{r=1}^{n}\Delta (r) =\begin{vmatrix} \sum_{r=1}^{n}f(r)& \sum_{r=1}^{n}g(r) &\sum_{r=1}^{n}h(r) \\ a_{1}& a_{2} &a_{3} \\ b_{1}& b_{2} &b_{3} \end{vmatrix}\end{array} \)

where a1, a2, a3, b1, b2 and b3 are constants independent of r.

9. Integration of a determinant:

Let

\(\begin{array}{l}\Delta (x) = \begin{vmatrix} f(x)&g(x) & h(x)\\ a_{1} & b_{1}&c_{1} \\ a_{2}& b_{2} & c_{2} \end{vmatrix}\end{array} \)
,

then

\(\begin{array}{l}\int_{a}^{b}\Delta (x) \: dx= \begin{vmatrix} \int_{a}^{b}f(x)\,dx&\int_{a}^{b}g(x)\,dx &\int_{a}^{b} h(x)\,dx\\ a_{1} & b_{1}&c_{1} \\ a_{2}& b_{2} & c_{2} \end{vmatrix}\end{array} \)
where a1, b1, c1, a2, b2 and c2 are constants independent of x.

10. Differentiation of determinants:

Let

\(\begin{array}{l}\Delta (x) = \begin{vmatrix} f_{1}(x)&f_{2}(x) & f_{3}(x)\\ g_{1}(x) & g_{2}(x)&g_{3}(x) \\ h_{1}(x)& h_{2}(x) & h_{3}(x) \end{vmatrix}\end{array} \)

then,

\(\begin{array}{l}\Delta'(x) = \begin{vmatrix} f_{1}'(x)&f_{2}'(x) & f_{3}'(x)\\ g_{1}(x) & g_{2}(x)&g_{3}(x) \\ h_{1}(x)& h_{2}(x) & h_{3}(x) \end{vmatrix}+\begin{vmatrix} f_{1}(x)&f_{2}(x) & f_{3}(x)\\ g_{1}'(x) & g_{2}'(x)&g_{3}'(x) \\ h_{1}(x)& h_{2}(x) & h_{3}(x) \end{vmatrix} + \begin{vmatrix} f_{1}(x)&f_{2}(x) & f_{3}(x)\\ g_{1}(x) & g_{2}(x)&g_{3}(x) \\ h_{1}'(x)& h_{2}'(x) & h_{3}'(x) \end{vmatrix}\end{array} \)

11. Cramer's rule:

(i) Two variables:

(a) Consistent equations: Definite and unique solution. [Intersecting lines]

(b) Inconsistent equations: No solution. [Parallel lines]

(c) Dependent equations: Infinitely many solutions. [Identical lines]

Let a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0. If a1/a2 ≠ b1/b2, the equations are consistent with a unique solution (intersecting lines). If

\(\begin{array}{l}\frac{a_{1}}{a_{2}} = \frac{b_{1}}{b_{2}}\neq \frac{c_{1}}{c_{2}}\end{array} \)
⇒ the given equations are inconsistent.

If

\(\begin{array}{l}\frac{a_{1}}{a_{2}} = \frac{b_{1}}{b_{2}}= \frac{c_{1}}{c_{2}}\end{array} \)
⇒ the given equations are dependent.

(ii) Three variables:

Let a1x + b1y + c1z = d1, a2x + b2y + c2z = d2 and a3x + b3y + c3z = d3; then x = D1/D, y = D2/D and z = D3/D, where

\(\begin{array}{l}D = \begin{vmatrix} a_{1} & b_{1} & c_{1}\\ a_{2} & b_{2} &c_{2} \\ a_{3} & b_{3} & c_{3} \end{vmatrix}\end{array} \)
;
\(\begin{array}{l}D_{1}= \begin{vmatrix} d_{1} & b_{1} & c_{1}\\ d_{2} & b_{2} &c_{2} \\ d_{3} & b_{3} & c_{3} \end{vmatrix}\end{array} \)
;
\(\begin{array}{l}D_{2}= \begin{vmatrix} a_{1} & d_{1} & c_{1}\\ a_{2} & d_{2} &c_{2} \\ a_{3} & d_{3} & c_{3} \end{vmatrix}\end{array} \)
;
\(\begin{array}{l}D_{3}= \begin{vmatrix} a_{1} & b_{1} & d_{1}\\ a_{2} & b_{2} &d_{2} \\ a_{3} & b_{3} & d_{3} \end{vmatrix}\end{array} \)
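Cramer's rule in code: replace each column of A by the column of constants to form D1, D2, D3 (the system below is an arbitrary example with D ≠ 0):

```python
from fractions import Fraction

def det3(M):
    # Cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def replace_col(M, j, col):
    # Copy of M with column j replaced by the constants d1, d2, d3.
    return [[col[i] if c == j else M[i][c] for c in range(3)] for i in range(3)]

# x + y + z = 6, x - y + z = 2, x + y - z = 0
A = [[1, 1, 1], [1, -1, 1], [1, 1, -1]]
b = [6, 2, 0]

D = det3(A)  # 4, non-zero => unique solution
X = [Fraction(det3(replace_col(A, j, b)), D) for j in range(3)]
print(X)  # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)], i.e. x=1, y=2, z=3
```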

(iii) Consistency of a system of equations:

(a) If D ≠ 0 and at least one of D1, D2, D3 ≠ 0, then the given system of equations is consistent and has a unique non-trivial solution.

(b) If D ≠ 0 and D1 = D2 = D3 = 0, then the given system of equations is consistent and has the trivial solution only.

(c) If D = D1 = D2 = D3 = 0, then the given system of equations has either infinitely many solutions or no solution.

(d) If D = 0 but at least one of D1, D2, D3 is not zero, then the equations are inconsistent and have no solution.

(e) If a given system of linear equations has only the zero solution for all its variables, then the system is said to have a trivial solution.

(iv) Three equations in two variables:

The condition for the three equations a1x + b1y + c1 = 0, a2x + b2y + c2 = 0 and a3x + b3y + c3 = 0 to be consistent in x and y is

\(\begin{array}{l}\begin{vmatrix} a_{1} & b_{1} & c_{1}\\ a_{2} & b_{2} &c_{2} \\ a_{3} & b_{3} & c_{3} \end{vmatrix} = 0\end{array} \)

12. Application of Determinants:

(i) The area of a triangle whose vertices are (xr, yr), r = 1, 2, 3, is

\(\begin{array}{l}D =\frac{1}{2} \begin{vmatrix} x_{1} & y_{1} & 1\\ x_{2} & y_{2} &1 \\ x_{3} & y_{3} & 1 \end{vmatrix}\end{array} \)
. If D = 0, then the three points are collinear.
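Expanding the determinant along the first row gives a direct area formula; a sketch (the vertices are an example 3-4-5 right triangle, plus a collinearity check):

```python
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Half the absolute value of the 3x3 determinant with a column of 1s.
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(det) / 2

print(triangle_area((0, 0), (4, 0), (0, 3)))  # 6.0
print(triangle_area((0, 0), (1, 1), (2, 2)))  # 0.0 => the points are collinear
```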

