
Dynamic Programming Algorithm

Dynamic programming is a problem-solving technique that splits a problem into subproblems and saves their outcomes so that we don’t have to compute them again.



What is Dynamic Programming?

Dynamic programming is mostly used to handle optimisation problems, that is, problems where we are trying to find the minimum or maximum solution. One of the best parts of dynamic programming is that it guarantees to find the optimal solution to a problem if a solution exists.

In short, we can define dynamic programming as follows:

It is a technique in computer science that efficiently solves the class of problems that have the optimal substructure property and overlapping subproblems.

Dynamic Programming Example

Let’s find the Fibonacci sequence up to the 5th term. First, let us recall what the Fibonacci series is.

It is the series of numbers in which each number is the sum of the two preceding ones.

Take 0, 1, 1, 2, 3 as an example. You can see that in this series, each number is the sum of the two preceding numbers.

Algorithm

Suppose i is the index of the term.

1. If i <= 1, return i.

2. Else, return the sum of the two preceding terms.

Let’s estimate the Fibonacci series up to the 5th term.

  1. We can notice that the first term is 0.
  2. The 2nd term is 1.
  3. The 3rd term is the sum of 0 and 1, which is 1.
  4. The 4th term is the sum of the 3rd term and the 2nd term, which means 1 + 1 = 2.
  5. The 5th term is the sum of the 4th term and the 3rd term, i.e. 2 + 1 = 3.

Hence, we have the series 0, 1, 1, 2, 3. Here, we have reused the outcomes of the prior steps instead of recomputing them. This whole procedure is known as the dynamic programming approach.
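The walkthrough above can be sketched in Python (an illustrative snippet; the function name `fib_series` is our own choice, not from the original article):

```python
def fib_series(n):
    """Build the first n Fibonacci terms, reusing previously computed outcomes."""
    if n <= 0:
        return []
    series = [0, 1][:n]          # 1st term is 0, 2nd term is 1
    for i in range(2, n):
        # Each new term is the sum of the two preceding terms already stored.
        series.append(series[i - 1] + series[i - 2])
    return series

print(fib_series(5))  # [0, 1, 1, 2, 3]
```

Because each iteration only looks up stored values, every term is computed exactly once.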


How Does Dynamic Programming Work?

We need to perform a few steps to achieve results from dynamic programming. The steps are:

  • It splits the complicated problem into simpler subproblems.
  • It finds the optimal answer to each subproblem.
  • It stores the results of the subproblems. This method of keeping the outcomes of subproblems is known as memoisation.
  • It reuses the stored outcomes whenever a subproblem recurs.
  • Finally, it computes the outcome of the complex problem.

The above-mentioned five steps are the necessary actions we need to perform for dynamic programming.

Dynamic programming is appropriate for problems that have overlapping subproblems and optimal substructures.

The term “optimal substructure” means that an optimal solution to the overall problem can be built by combining optimal solutions to its subproblems.
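To see why overlapping subproblems matter, consider a plain recursive Fibonacci with a call counter (an illustrative sketch; the names `calls` and `naive_fib` are our own):

```python
calls = {}

def naive_fib(n):
    """Plain recursion with no storage: the same subproblems recur many times."""
    calls[n] = calls.get(n, 0) + 1
    if n <= 1:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(5)
print(calls[1])  # naive_fib(1) is evaluated 5 times during one call to naive_fib(5)
```

Dynamic programming removes exactly this wasted recomputation by storing each subproblem's result the first time it is solved.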

Different Approaches to Dynamic Programming

There are two approaches to dynamic programming. Let’s discuss these approaches here:

  • Top-down approach
  • Bottom-up approach

Top-down Approach

The top-down approach uses the memoisation technique. Here, memoisation is the combination of recursion and caching: each subproblem is solved recursively, and its result is cached for reuse.
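A minimal top-down sketch in Python, again using Fibonacci (the name `fib_memo` is our own choice):

```python
def fib_memo(n, cache=None):
    """Top-down: recursion plus a cache (memoisation) of solved subproblems."""
    if cache is None:
        cache = {}
    if n in cache:                      # reuse a stored outcome if it exists
        return cache[n]
    if n <= 1:
        result = n
    else:
        result = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    cache[n] = result                   # store the outcome for later reuse
    return result

print(fib_memo(10))  # 55
```

With the cache in place, each subproblem is solved once, so the recursion runs in linear rather than exponential time.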

Advantages of Top-down Approach

  • This method is simple and easy to comprehend and implement.
  • It solves a subproblem only when it is actually needed.
  • It is manageable and easy to debug.

Disadvantages of Top-down Approach

  • It uses recursion, which consumes additional memory in the call stack. Occasionally, a stack overflow occurs due to deep recursion.
  • It uses more memory, which can degrade performance.

Bottom-up Approach

The bottom-up approach uses the tabulation technique to implement dynamic programming. It solves the same problems but eliminates the recursion.

And, if we can remove the recursion, there will be no stack overflow problem.
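A bottom-up tabulation sketch for the same problem (`fib_table` is our own name); note that there is no recursive call at all:

```python
def fib_table(n):
    """Bottom-up: fill a table iteratively from the smallest subproblems up."""
    if n <= 1:
        return n
    table = [0] * (n + 1)   # table[i] will hold the i-th Fibonacci term
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_table(10))  # 55
```

Because the loop starts from the base cases and works upward, every table entry it reads has already been filled in.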

Types of Dynamic Programming Algorithms

Two well-known examples of dynamic programming algorithms are:

1. Longest Common Subsequence

2. Floyd-Warshall Algorithm
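As a taste of the first of these, here is the classic DP table for the Longest Common Subsequence length (an illustrative sketch; the name `lcs_length` is our own):

```python
def lcs_length(a, b):
    """Length of the Longest Common Subsequence via a 2-D DP table."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the first i chars of a and the first j chars of b
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```

Each cell depends only on its top, left, and top-left neighbours, so the table is filled row by row with no recursion.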

Practice Problems

Q1. The Floyd-Warshall algorithm for all-pair shortest paths computation is based on _________.

A) Greedy paradigm.

B) Divide-and-conquer paradigm.

C) Dynamic programming paradigm.

D) None of the above

Q2. Match the following:

List 1

(P) Prim’s algorithm for minimum spanning tree

(Q) Floyd-Warshall algorithm for all pair shortest paths

(R) Mergesort

(S) Hamiltonian circuit

List 2

(i) Backtracking

(ii) Greedy method

(iii) Dynamic programming

(iv) Divide and conquer

  1. P – iii, Q – ii, R-iv, S-i
  2. P – i, Q – ii, R – iv, S -iii
  3. P- ii, Q – iii, R – iv, S – i
  4. P – ii, Q – i, R – iii, S – iv

Frequently Asked Questions on Dynamic Programming Algorithm

Q1

Which problems can be solved by dynamic programming?

Answer: We can solve a problem using dynamic programming if:

(1) It has an optimal substructure.

(2) It has overlapping subproblems.

Q2

What are the types of dynamic programming algorithms?

Answer: Two well-known examples of dynamic programming algorithms are:

1. Longest Common Subsequence

2. Floyd-Warshall Algorithm

Keep learning and stay tuned to get the latest updates on the GATE Exam along with GATE Eligibility Criteria, GATE 2023, GATE Admit Card, GATE Syllabus for CSE (Computer Science Engineering), GATE CSE Notes, GATE CSE Question Paper, and more.
