Dynamic Programming: Understanding the Basics
- Samvar Shah
- Nov 14, 2024
- 3 min read
Updated: Apr 2
Optimal substructure, Overlapping subproblems, Memoization, Tabulation
Dynamic Programming (DP) is an algorithmic technique for solving complex problems by breaking them down into simpler subproblems. It's widely used for optimization problems and can dramatically reduce computational time compared to brute-force approaches. In this blog post, we'll cover the key ideas, see when to apply them, and walk through a practical example.
What is Dynamic Programming?
Dynamic programming is a method for solving problems by storing the results of subproblems and reusing them when needed. The core idea is to solve each subproblem only once and store its solution in a table (usually an array or matrix), thus avoiding redundant work. DP is particularly useful when the problem can be broken down into overlapping subproblems, meaning that the same subproblems are solved multiple times. By solving each subproblem once and saving its result, DP reduces the time complexity significantly.
Key Concepts in Dynamic Programming
To effectively use dynamic programming, there are two main principles to understand:
Optimal Substructure: This means that the solution to the overall problem can be constructed from solutions to subproblems. For example, the optimal solution to a problem can often be derived from the optimal solutions to smaller instances of the same problem.
Overlapping Subproblems: This means that the problem can be divided into subproblems that are solved multiple times. Instead of recalculating the solutions each time, dynamic programming stores the results of these subproblems in a table and reuses them as needed.
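To make the overlap concrete, here is a small sketch (the counter and function names are mine, not part of the original post) that counts how many times the naive Fibonacci recursion solves each subproblem:

```python
from collections import Counter

calls = Counter()

def fib(n):
    # Record every time this subproblem is solved
    calls[n] += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
# fib(3) is solved 2 times and fib(2) is solved 3 times,
# even though each has a single well-defined answer.
```

Storing each answer the first time it is computed collapses all of these repeats into a single calculation per subproblem.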
Two Approaches in Dynamic Programming
There are two primary ways to implement dynamic programming solutions:
Top-Down (Memoization):
In this approach, you start with the original problem and recursively break it down into subproblems.
If a subproblem has already been solved, its result is retrieved from a memoization table (often an array or dictionary) to avoid redundant calculations.
This approach uses recursion but optimizes it by remembering past results.
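A minimal top-down sketch of Fibonacci using memoization (here via Python's built-in `functools.lru_cache`, which maintains the memo table for us; the function name is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each distinct n is computed once; repeated calls hit the cache
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

You could equally keep an explicit dictionary and check it before recursing; the decorator just packages that pattern.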
Bottom-Up (Tabulation):
In this approach, you start by solving the smallest subproblems first and use their results to build up to the original problem.
A table (often an array or matrix) is created where each entry corresponds to a subproblem, and the solution is built iteratively from the smallest to the largest problem.
This approach avoids recursion entirely, which sidesteps function-call overhead and recursion-depth limits, and is often faster in practice.
Some common examples of problems that can be solved using dynamic programming include:
Fibonacci Sequence: A classic example where each number is the sum of the two preceding ones. It’s a textbook case of overlapping subproblems.
Knapsack Problem: A problem of selecting items to maximize value without exceeding a weight limit.
Shortest Path Problems: Finding the shortest path in a graph, as in the Bellman-Ford or Floyd-Warshall algorithms, both of which build shortest paths out of shorter subpaths.
Matrix Chain Multiplication: Optimizing the order of matrix multiplications to minimize computational cost.
Longest Common Subsequence (LCS): Finding the longest subsequence that is common to two sequences.
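As an illustration of the last item, here is a sketch of the standard LCS length computation using bottom-up tabulation (the function name is mine):

```python
def lcs_length(a, b):
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # Matching characters extend the LCS of the shorter prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop a character from one string or the other
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

For example, `lcs_length("ABCBDAB", "BDCAB")` returns 4 (one such subsequence is "BCAB"). Note the two DP hallmarks: the answer for each prefix pair is built from smaller prefix pairs (optimal substructure), and each pair is filled in exactly once (no re-solving of overlapping subproblems).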
Example: Fibonacci Sequence
Let’s look at a simple example to better understand dynamic programming in action: the Fibonacci sequence.
Recursive Approach (Inefficient)
In the traditional recursive approach, the Fibonacci sequence is defined as:
F(0)=0
F(1)=1
F(n)=F(n−1)+F(n−2) for n>1
Recursive Python implementation:
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)
The issue with this approach is that it recalculates the same Fibonacci values multiple times. For example, fibonacci(5) calls fibonacci(4) and fibonacci(3), but fibonacci(4) will again call fibonacci(3), leading to a lot of redundant work.
Dynamic Programming Approach (Efficient)
By using dynamic programming, we can store the results of the subproblems to avoid redundant calculations. We’ll use a bottom-up approach to solve the Fibonacci sequence.
def fibonacci(n):
    if n <= 1:
        return n
    # Create an array to store the Fibonacci values
    fib = [0] * (n + 1)
    fib[1] = 1
    # Fill the array using the bottom-up approach
    for i in range(2, n + 1):
        fib[i] = fib[i-1] + fib[i-2]
    return fib[n]
This approach runs in O(n) time and uses O(n) space, improving efficiency over the recursive method, which has exponential time complexity.
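Since each step only needs the two most recent values, the O(n) table can even be dropped. A sketch of a constant-space variant (the function name is mine, not from the post):

```python
def fib_iter(n):
    # Keep only the last two Fibonacci numbers instead of the whole table
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

This runs in O(n) time but O(1) space, a common final optimization once a tabulated solution only ever reads a fixed window of earlier entries.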
Hope this explains the basics of dynamic programming and when it can be applied.