Method of Lagrange Multipliers

The Lagrange method of multipliers is named after Joseph-Louis Lagrange, the Italian-born mathematician. The primary idea behind it is to transform a constrained problem into a form to which the derivative test of an unconstrained problem can still be applied; the method is widely used in mathematical optimization. The method of Lagrange multipliers is an important technique for determining the local maxima and minima of a function of the form f(x, y, z) subject to equality constraints of the form g(x, y, z) = k or g(x, y, z) = 0. That means it is subject to the condition that one or more equations are satisfied exactly by the desired values of the variables.

Lagrangian function

The relationship between the gradient of the function and the gradients of the constraints leads to a reformulation of the original problem, known as the Lagrangian function. Thus, the Lagrange method can be summarized as follows:

To determine the minimum or maximum value of a function f(x) subject to the equality constraint g(x) = 0, we form the Lagrangian function:

ℒ(x, λ) = f(x) – λg(x)

Here,

ℒ = Lagrange function of the variables x and λ

λ = Lagrange multiplier
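For illustration, here is a minimal SymPy sketch, with a placeholder objective and constraint of my own, showing how the Lagrangian and its partial derivatives can be formed symbolically:

    # Minimal SymPy sketch: build L(x, lambda) = f(x) - lambda*g(x) and its partials.
    # The objective f and constraint g below are hypothetical placeholders.
    import sympy as sp

    x, lam = sp.symbols('x lambda', real=True)

    f = x**3 - 3*x        # placeholder objective f(x)
    g = x - 2             # placeholder constraint, g(x) = 0

    L = f - lam*g         # Lagrangian  L(x, lambda) = f(x) - lambda*g(x)

    # Stationary points of L satisfy dL/dx = 0 and dL/dlambda = 0 (i.e., g(x) = 0).
    print(sp.diff(L, x))     # equals 3*x**2 - 3 - lambda
    print(sp.diff(L, lam))   # equals 2 - x, i.e., the constraint g(x) = 0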

Lagrange Multipliers Method

There are multiple ways to state the method of Lagrange multipliers. Let’s learn them one by one.

Suppose f and g are two functions such that they both have continuous partial derivatives.

Let (x₀, y₀, z₀) ∈ S := {(x, y, z) : g(x, y, z) = 0} and ∇g(x₀, y₀, z₀) ≠ 0.

If the function f, restricted to S, has a local minimum or local maximum at the point (x₀, y₀, z₀), then there exists λ ∈ R such that:

∇f(x₀, y₀, z₀) = λ∇g(x₀, y₀, z₀)

To determine the extremum points, we generally consider the below equations:

∇f(x, y, z) = λ∇g(x, y, z)

g(x, y, z) = 0

By solving these equations, we get the values of unknown variables, say x, y, z and λ. Thus, we will get the local extremum points through the solutions of the above set of equations.
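As an illustration (the specific problem below is my own choice, not from the text), the following SymPy sketch solves the system ∇f = λ∇g together with g = 0 for f(x, y, z) = x + y + z subject to x² + y² + z² = 1:

    # SymPy sketch: solve grad f = lambda * grad g together with g = 0.
    # Example chosen for illustration: f = x + y + z on the unit sphere.
    import sympy as sp

    x, y, z, lam = sp.symbols('x y z lambda', real=True)

    f = x + y + z
    g = x**2 + y**2 + z**2 - 1          # constraint written as g(x, y, z) = 0

    eqs = [sp.Eq(sp.diff(f, v), lam*sp.diff(g, v)) for v in (x, y, z)]
    eqs.append(sp.Eq(g, 0))

    for sol in sp.solve(eqs, (x, y, z, lam), dict=True):
        print(sol, 'f =', sp.simplify(f.subs(sol)))
    # The two candidates x = y = z = +-1/sqrt(3) give the constrained maximum
    # and minimum of f.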

Lagrange Multipliers Theorem

The mathematical statement of the Lagrange Multipliers theorem is given below.

Suppose f : Rⁿ → R is the objective function and g : Rⁿ → Rᶜ is the constraint function such that f, g ∈ C¹, i.e., both have continuous first derivatives. Also, consider a solution x* of the given optimization problem such that rank Dg(x*) = c, which is less than n.

Objective function: Max f(x)

Constraints: Subject to

g(x) = 0,

where Dg(x*) = [∂gⱼ/∂xₖ] is the matrix of partial derivatives of the constraints, then there exists a unique Lagrange multiplier λ* ∈ Rᶜ such that Df(x*) = λ*ᵀ Dg(x*).
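As a rough numerical illustration of this statement (the small problem below is an assumption of mine, not from the text), the following NumPy sketch checks the rank condition and the relation Df(x*) = λ*ᵀ Dg(x*) at a known optimum with n = 3 variables and c = 2 constraints:

    # NumPy sketch: verify Df(x*) = lambda*^T Dg(x*) at a known constrained optimum.
    # Illustrative problem (my own choice): maximize f(x, y, z) = x + y + z
    # subject to g1 = x**2 + y**2 - 1 = 0 and g2 = z - 1 = 0, whose maximizer
    # is x* = (1/sqrt(2), 1/sqrt(2), 1); here n = 3 and c = 2.
    import numpy as np

    x_star = np.array([1/np.sqrt(2), 1/np.sqrt(2), 1.0])

    Df = np.array([1.0, 1.0, 1.0])                   # gradient of f at x*, shape (3,)
    Dg = np.array([[2*x_star[0], 2*x_star[1], 0.0],  # Jacobian [dg_j/dx_k] at x*, shape (2, 3)
                   [0.0,          0.0,        1.0]])

    print(np.linalg.matrix_rank(Dg))                 # 2, i.e., rank Dg(x*) = c < n

    # Solve Dg^T * lambda = Df^T for the unique multiplier vector lambda*.
    lam_star, *_ = np.linalg.lstsq(Dg.T, Df, rcond=None)
    print(lam_star)                                  # approximately [1/sqrt(2), 1]
    print(np.allclose(lam_star @ Dg, Df))            # True: Df(x*) = lambda*^T Dg(x*)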

Let’s now understand how the method of Lagrange multipliers is applied in the case of a single constraint so that we can easily solve many problems in mathematics.

Lagrange Multiplier Theorem for Single Constraint

In this case, we consider functions of two variables. That means the optimization problem is given by:

Max f(x, y)

Subject to:

g(x, y) = 0

(or)

We can write this constraint with a constant k on the right-hand side, i.e., g(x, y) = k.

Let us assume that the functions f and g (defined above) have continuous first-order partial derivatives. Thus, we can write the Lagrange function as:

ℒ(x, y, λ) = f(x, y) – λ g(x, y)

If (x₀, y₀) is a maximum point of the function f(x, y) for the given constrained problem and ∇g(x₀, y₀) ≠ 0, then there exists λ₀ such that (x₀, y₀, λ₀) is a stationary point of the above Lagrange function.

In practice, we can find the maximum and minimum values of the function f(x, y) subject to the constraint g(x, y) = 0 using the equations below, which involve partial derivatives.

(∂f/∂x) + λ(∂g/∂x) = 0

(∂f/∂y) + λ(∂g/∂y) = 0

g(x, y) = 0

Note that it is not necessary to find the explicit value of the Lagrange multiplier λ; moreover, since λ can be any real number, writing +λ or −λ in these equations does not change the solutions.
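For instance, here is a brief SymPy sketch (an illustrative problem of my own, not the article's) that solves the two-variable system above for f(x, y) = xy subject to x + y = 4:

    # SymPy sketch: the two-variable system  df/dx + lambda*dg/dx = 0,
    # df/dy + lambda*dg/dy = 0,  g = 0, for the illustrative problem
    # f(x, y) = x*y subject to g(x, y) = x + y - 4 = 0.
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)

    f = x*y
    g = x + y - 4

    eqs = [sp.Eq(sp.diff(f, x) + lam*sp.diff(g, x), 0),
           sp.Eq(sp.diff(f, y) + lam*sp.diff(g, y), 0),
           sp.Eq(g, 0)]

    print(sp.solve(eqs, (x, y, lam), dict=True))
    # Single candidate x = y = 2 (with lambda = -2), giving the maximum f = 4;
    # note that the value of lambda itself is not needed for the answer.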

Similarly, for functions f and g of three variables, say x, y and z, we use the following equations to determine the maximum and minimum values of f:

(∂f/∂x) + λ(∂g/∂x) = 0

(∂f/∂y) + λ(∂g/∂y) = 0

(∂f/∂z) + λ(∂g/∂z) = 0

g(x, y, z) = 0
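As a sketch (again with an illustrative problem of my own, not from the article), the three-variable system can be set up and solved symbolically:

    # SymPy sketch: the three-variable system for an illustrative problem,
    # f(x, y, z) = x + 2*y + 3*z subject to g(x, y, z) = x**2 + y**2 + z**2 - 14 = 0.
    import sympy as sp

    x, y, z, lam = sp.symbols('x y z lambda', real=True)

    f = x + 2*y + 3*z
    g = x**2 + y**2 + z**2 - 14

    eqs = [sp.Eq(sp.diff(f, v) + lam*sp.diff(g, v), 0) for v in (x, y, z)]
    eqs.append(sp.Eq(g, 0))

    for sol in sp.solve(eqs, (x, y, z, lam), dict=True):
        print(sol, 'f =', f.subs(sol))
    # Candidates (1, 2, 3) and (-1, -2, -3): the maximum of f is 14, the minimum is -14.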



Solved Example

Question:

Find the maximum and minimum values of the function f(x, y) = xy subject to the constraint g(x, y) = 4x² + y² = 8.

Solution:

Given,

f(x, y) = xy

g(x, y) = 4x² + y² = 8

As we know, to find the maximum and minimum values of the function f(x, y) subject to the constraint g(x, y) = 0 (here, taking g(x, y) = 4x² + y² - 8), we solve the following equations.

(∂f/∂x) + λ(∂g/∂x) = 0

(∂f/∂y) + λ(∂g/∂y) = 0

g(x, y) = 0

Let’s compute the first-order partial derivatives.

∂f/∂x = (∂/∂x) (xy) = y

∂f/∂y = (∂/∂y) (xy) = x

∂g/∂x = (∂/∂x) (4x² + y²) = 8x

∂g/∂y = (∂/∂y) (4x² + y²) = 2y

Substituting all these values in the above set of equations, we get;

y + λ(8x) = 0

y = -λ(8x) …..(1)

x + λ(2y) = 0

x = -λ(2y) …..(2)

4x² + y² = 8 …..(3)

Note that λ ≠ 0; otherwise, equations (1) and (2) would force x = y = 0, which does not satisfy equation (3).

Dividing equation (1) by (2), we get;

y/x = -λ(8x)/-λ(2y)

y/x = 4x/y

⇒ y² = 4x² {by cross multiplication}

⇒ y = ±2x

Substituting y = ±2x in equation (3), we get;

4x² + (±2x)² = 8

4x² + 4x² = 8

8x² = 8

⇒ x² = 1

⇒ x = ±1

Let us write the values of the point (x, y) by taking x = ±1 and y = ±2x.

For x = 1: y = 2x gives y = 2, so (x, y) = (1, 2); y = -2x gives y = -2, so (x, y) = (1, -2).

For x = -1: y = 2x gives y = -2, so (x, y) = (-1, -2); y = -2x gives y = 2, so (x, y) = (-1, 2).

Now, we need to calculate the values of the given function f(x, y) = xy at all these points.

So, f(1, 2) = (1)(2) = 2

f(1, −2) = (1)(-2) = −2

f(−1, 2) = (-1)(2) = −2

f(−1, −2) = (-1)(-2) = 2

Therefore, subject to the given constraint, the maximum value and the minimum value of f are each attained at two points. That means:

At (1, 2) and (-1, -2), the maximum value of f is 2.

At (1, -2) and (-1, 2), the minimum value of f is -2.
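These candidate points and values can also be reproduced symbolically. The following SymPy sketch is an added verification that solves the same system of equations for this example:

    # SymPy sketch verifying the solved example: f(x, y) = x*y subject to
    # 4*x**2 + y**2 = 8, written as g(x, y) = 4*x**2 + y**2 - 8 = 0.
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)

    f = x*y
    g = 4*x**2 + y**2 - 8

    eqs = [sp.Eq(sp.diff(f, x) + lam*sp.diff(g, x), 0),   # y + 8*lambda*x = 0
           sp.Eq(sp.diff(f, y) + lam*sp.diff(g, y), 0),   # x + 2*lambda*y = 0
           sp.Eq(g, 0)]                                    # 4*x**2 + y**2 = 8

    for sol in sp.solve(eqs, (x, y, lam), dict=True):
        print((sol[x], sol[y]), 'f =', f.subs(sol))
    # Prints the four candidates (1, 2), (1, -2), (-1, 2), (-1, -2),
    # confirming the maximum value 2 and the minimum value -2 found above.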

Practice Problems

  1. Identify the maximum and minimum values of the function f(x, y) = x² + y² subject to x + 2y – 1 = 0.
  2. Find the extreme values of the function f(x, y, z) = 2x + 3y + z subject to x² + 2y² + 3z² = 1.
  3. Find the maximum of f(x, y) = 5x − 3y subject to g(x, y) = x² + y² = 136.
