Dynamic Programming
Dynamic Programming (DP) is a mathematical technique for solving optimization problems, especially those with overlapping subproblems. It provides an efficient solution method because it dramatically reduces otherwise exponential computation by saving and reusing results once they have been computed.
Dynamic programming was invented by Richard Bellman in the early 1950s. The name “dynamic programming,” Bellman says, was chosen to conceal from government sponsors “the fact that I was actually doing mathematics.”
A problem is said to have optimal substructure if a globally optimal solution can be obtained by combining optimal solutions to its subproblems. For example, merge sort exploits the fact that an entire list can be sorted by first sorting its parts and then combining (merging) them.
When the same subproblems must be solved repeatedly to obtain the optimal solution, the problem is said to have overlapping subproblems. This property does not appear in merge sort: lists are merged many times, but each time they are distinct lists. Although not obvious, the 0/1 knapsack problem has both of these properties.
Dynamic programming is primarily applied to optimization problems with the following characteristics.
- Optimal substructure: the problem can be solved by dividing a larger problem into smaller subproblems whose optimal solutions combine into the overall optimum.
- Overlapping subproblems: the solution to a subproblem, once computed, can be reused when computing other subproblems. In other words, the same subproblem would otherwise be solved many times.
The basic idea of dynamic programming is to divide the problem into smaller subproblems, solve them, and store and reuse each subproblem's solution to obtain the overall solution. In general, dynamic programming proceeds in the following steps.
- Memoization: a data structure is prepared to store the solutions to the subproblems. Arrays or dictionaries are typically used.
- Recursion or loops: solutions are computed sequentially, starting with the smallest subproblems. Recursive functions or loops combine subproblem solutions into solutions to larger problems.
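The two steps above can be sketched in a few lines of Python. The example below uses a hypothetical grid-path problem (counting the ways to walk from the top-left to the bottom-right of an m x n grid, moving only right or down): a table holds the subproblem solutions, and loops fill it from the smallest subproblems upward.

```python
def count_paths(m, n):
    # Step 1 (memoization): a table to store solutions to subproblems
    dp = [[0] * n for _ in range(m)]
    # Step 2 (loops): solve subproblems from smallest to largest
    for i in range(m):
        for j in range(n):
            if i == 0 or j == 0:
                dp[i][j] = 1  # only one way along the top row / left column
            else:
                dp[i][j] = dp[i - 1][j] + dp[i][j - 1]
    return dp[m - 1][n - 1]

print(count_paths(3, 3))  # 6
```

Each cell is computed exactly once from its two neighbors, so the cost is O(mn) rather than the exponential cost of enumerating every path.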
Dynamic programming is frequently applied to problems such as the following, among others.
- When efficiently computing recursive functions such as the Fibonacci sequence
- String processing problems such as the Longest Common Subsequence
- Matrix processing problems such as Matrix Chain Multiplication
- Combinatorial optimization problems such as the Knapsack Problem
While dynamic programming provides fast solutions by reusing results once computed, it requires more memory and can be somewhat complex to implement. Nevertheless, it is a very effective method for applicable problems and is used in many fields.
Algorithms used in dynamic programming
There are several typical algorithms in dynamic programming; some of the most common are described below.
- Memoized recursion (top-down): memoized recursion implements dynamic programming with a recursive function. It memoizes computed results and skips recomputation when the same subproblem is encountered again, dramatically reducing an otherwise exponential number of recursive calls.
- Bottom-Up Approach: The bottom-up approach calculates solutions sequentially, starting with smaller subproblems. Instead of using recursive calls, it uses loops to solve subproblems in order and stores the results in arrays or tables to allow efficient reference to the necessary subproblems.
- Longest Common Subsequence (LCS): LCS is the problem of finding the longest common subsequence of two strings. The length of the common subsequence can be calculated by dynamic programming.
- Knapsack Problem: the Knapsack Problem involves multiple items of different weights and values, and items are selected so that the total value is maximized while the total weight stays within a limit. Dynamic programming can be used to efficiently find the optimal selection of items.
- Matrix Chain Multiplication: Matrix Chain Multiplication is the problem of minimizing the number of multiplications by determining the order in which the products of several matrices are computed. Dynamic programming can be used to find the optimal order of computation.
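As a minimal sketch of the memoized-recursion (top-down) approach described above, the following computes Fibonacci numbers recursively while caching results with Python's functools.lru_cache; a bottom-up version of the same computation appears later in this article.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each distinct n is computed once; repeated calls hit the cache,
    # turning an exponential recursion into a linear one.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040
```

Without the cache, fib_memo(30) would make over a million recursive calls; with it, only 31 distinct subproblems are evaluated.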
These algorithms are typical applications of dynamic programming, and dynamic programming has been applied to many other problems. Dynamic programming is a very useful method for improving the efficiency of computation and finding optimal solutions, and it is used in a wide range of applications.
Next, we discuss the difference between dynamic programming and divide-and-conquer, a distinct method that, like dynamic programming, breaks a problem into subproblems.
Dynamic Programming and Divide-and-conquer
Dynamic Programming and Divide and Conquer are both algorithmic design methods used to solve optimization and recursive problems, but there are significant differences in approach.
Dynamic Programming:
- Dynamic programming is a method that reduces computational complexity by dividing a problem into multiple smaller subproblems and memoizing (storing) the solutions to those subproblems.
- When there are overlapping subproblems, memoizing and reusing results once computed dramatically reduces exponential recursive calls and yields efficient solutions.
- It can be implemented with either an upward (bottom-up) or downward (top-down) approach.
- Typical problems include the Fibonacci sequence, longest common subsequence (LCS), and knapsack problems.
Divide and Conquer:
- Divide and Conquer is a method of dividing a problem into multiple subproblems, which are then recursively solved and integrated into a final solution.
- The subproblems into which the problem is divided are designed to have the same properties as the original problem.
- By recursive calls, the problem is divided until it becomes small enough to find a solution, and the solution of the original problem is obtained by integrating these solutions.
- Typical problems include quick sort, merge sort, and binary search.
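As a minimal sketch of the divide-and-conquer pattern, the merge sort below splits the list, recursively sorts each half, and merges the results. Note that, unlike dynamic programming, nothing is memoized, because the subproblems never overlap.

```python
def merge_sort(xs):
    # Divide: split until the list is trivially small
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # Combine: merge the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```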
The main difference between the two is that dynamic programming focuses on optimizing problems with overlapping subproblems, while divide-and-conquer focuses on dividing a problem into independent subproblems that have the same structure as the original. In addition, dynamic programming is distinctive in that it memoizes intermediate results to reduce computational complexity.
Divide-and-conquer methods are mainly used for sorting and for searching tree-structured data, and they are also applied to problems such as the knapsack problem and matrix products, which can likewise be approached with dynamic programming.
Application of dynamic programming to planning problems
Dynamic programming is also applied to planning problems such as those described in “Automata and State Transitions/Petri Nets and Automatic Programming.” A planning problem is the problem of selecting a series of actions or decisions to achieve a certain goal, and dynamic programming makes it possible to determine the optimal action from a long-term perspective. The following are examples of applications of dynamic programming to several planning problems.
- Robot path planning: The problem is to plan the shortest path for a robot to reach a goal in a space with obstacles. Dynamic programming can be used to determine what actions the robot will choose at each time and location and to calculate the shortest path.
- Resource Allocation Problem: This problem involves efficiently allocating resources (e.g., manpower, machines, time, etc.) to different tasks. Dynamic programming can be used to determine the optimal allocation for each task and maximize the overall resource use efficiency.
- Trading Problem: When an investor has multiple investment targets, the problem is to plan the appropriate allocation of funds to each investment target. By using dynamic programming, it is possible to determine the optimal portfolio by considering the return and risk of multiple investments.
- Budget allocation problem: This problem involves allocating budgets to different projects and advertising campaigns in order to maximize profit within a budget. Dynamic programming can be used to determine the optimal budget allocation, taking into account the costs and benefits of each project or campaign.
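As a minimal sketch of the robot path-planning case above, the following computes the fewest steps through a grid with obstacles, assuming for simplicity that the robot moves only right or down (a real planner allowing arbitrary moves would use value iteration or a shortest-path algorithm such as Dijkstra's instead).

```python
def shortest_path(grid):
    # grid[i][j] == 1 marks an obstacle; moves are right or down only.
    # dp[i][j] holds the fewest steps from (0, 0) to (i, j), inf if unreachable.
    INF = float('inf')
    m, n = len(grid), len(grid[0])
    dp = [[INF] * n for _ in range(m)]
    dp[0][0] = 0
    for i in range(m):
        for j in range(n):
            if grid[i][j] == 1:
                continue  # obstacle cells stay unreachable
            if i > 0:
                dp[i][j] = min(dp[i][j], dp[i - 1][j] + 1)
            if j > 0:
                dp[i][j] = min(dp[i][j], dp[i][j - 1] + 1)
    return dp[m - 1][n - 1]

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid))  # 4
```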
These examples are some of the applications of dynamic programming to planning problems. Dynamic programming is very useful for solving optimization problems and helps make decisions that are optimal in the long run. However, there are cases where dynamic programming cannot be applied, depending on the nature of the problem, so it is necessary to select an appropriate algorithm for each problem.
Application of Dynamic Programming to Speech Recognition Technology
Dynamic programming is applied in speech recognition and in the post-processing of speech, as described in “Speech Recognition Techniques”. In particular, it is useful for improving speech recognition results and for aligning speech along the time axis. The following are some examples of applications of dynamic programming in speech recognition technology.
- Speech recognition post-processing: Speech recognition results can be erroneous depending on the accuracy of the acoustic or language model. Dynamic programming can be used to find the best correspondence between frames of speech on the time axis and word sequences in the recognition results. This makes it possible to improve speech recognition results from frame-by-frame to word-by-word.
- Voiceprint recognition and speaker identification: In voiceprint recognition and speaker identification, it is necessary to evaluate the similarity between the input speech and the registered speaker’s voiceprint or features. Dynamic programming can be used to improve the accuracy of voiceprint recognition by finding the best temporal correspondence between the input speech and the registered voiceprint data.
- Understanding Intentions in Spoken Dialogue Systems: Spoken dialogue systems need to understand the intentions from the user’s utterances. Dynamic programming can be used to find the best correspondence between user utterances and predefined intent patterns to improve the understanding of the intent of utterances.
These are just a few examples of how dynamic programming is applied in speech recognition technology. Dynamic programming has been used as a useful method for a variety of problems in speech recognition, such as mapping speech over time and evaluating similarity. However, speech recognition is a very complex task and may need to be combined with other methods and machine learning techniques.
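The time-axis alignment discussed above is classically done with dynamic time warping (DTW), itself a dynamic programming algorithm. The sketch below aligns two sequences of scalar features; real systems operate on feature vectors such as MFCCs and typically combine DTW or HMM-based alignment with other models.

```python
def dtw_distance(a, b):
    # dp[i][j] is the minimal alignment cost between the first i frames
    # of a and the first j frames of b.
    INF = float('inf')
    n, m = len(a), len(b)
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # a advances alone
                                  dp[i][j - 1],      # b advances alone
                                  dp[i - 1][j - 1])  # both advance
    return dp[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0
```

The zero distance in the example shows the point of DTW: the repeated frame in the second sequence is absorbed by the warping, so the two sequences are treated as the same pattern spoken at different speeds.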
Finally, we will discuss a concrete example of implementation in python.
Implementation in python solving the Fibonacci sequence by dynamic programming
An example of a Python implementation that solves the Fibonacci sequence using dynamic programming is shown below. In dynamic programming, once a result is calculated, it is memoized (stored) and reused.
def fibonacci_dynamic_programming(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    # Prepare and initialize a list for memoization
    dp = [0] * (n + 1)
    dp[0] = 0
    dp[1] = 1
    # Calculate and memoize Fibonacci numbers in sequence
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# test
n = 10
result = fibonacci_dynamic_programming(n)
print(f"The {n}-th Fibonacci number is: {result}")
In this code, passing an integer n to the fibonacci_dynamic_programming function returns the n-th Fibonacci number. A dp list is prepared in the function, the Fibonacci numbers are calculated in order, and the results are recorded in the dp list; this eliminates the need to calculate the same Fibonacci number multiple times and makes the computation more efficient. As an example, if n = 10, the 10th Fibonacci number is 55. Running the above code yields the following result.
The 10-th Fibonacci number is: 55
Implementation in python using dynamic programming to solve the longest common subsequence
An example of a Python implementation of solving the Longest Common Subsequence (LCS) using dynamic programming is shown.
def longest_common_subsequence(str1, str2):
    m = len(str1)
    n = len(str2)
    # Prepare and initialize DP table
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    # Update DP table and calculate LCS length
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if str1[i - 1] == str2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Return the length of the LCS
    return dp[m][n]

# test
str1 = "AGGTAB"
str2 = "GXTXAYB"
result = longest_common_subsequence(str1, str2)
print(f"The length of the Longest Common Subsequence is: {result}")
In this code, when two strings str1 and str2 are passed to the longest_common_subsequence function, it returns the length of their longest common subsequence. A two-dimensional list called dp is prepared in the function, and dp[i][j] contains the length of the LCS of the first i characters of str1 and the first j characters of str2. As an example, if str1 = “AGGTAB” and str2 = “GXTXAYB”, the longest common subsequence is “GTAB” and its length is 4. When the above code is executed, the following result is obtained.
The length of the Longest Common Subsequence is: 4
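The function above returns only the length of the LCS. If the subsequence itself is wanted, the same DP table can be backtracked after it is filled, as in this sketch (lcs_string is a hypothetical name; the table-filling part is identical to the code above).

```python
def lcs_string(str1, str2):
    m, n = len(str1), len(str2)
    # Fill the same DP table as in longest_common_subsequence
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if str1[i - 1] == str2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack from dp[m][n], collecting the matched characters
    chars = []
    i, j = m, n
    while i > 0 and j > 0:
        if str1[i - 1] == str2[j - 1]:
            chars.append(str1[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(chars))

print(lcs_string("AGGTAB", "GXTXAYB"))  # GTAB
```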
Implementation in python using dynamic programming to solve matrix chain multiplication
An example Python implementation that solves Matrix Chain Multiplication using dynamic programming is presented below. Matrix Chain Multiplication is the problem of minimizing the number of scalar multiplications by choosing the order in which the products of multiple matrices are computed.
def matrix_chain_multiplication(dims):
    n = len(dims) - 1
    # Prepare and initialize DP table
    dp = [[0] * n for _ in range(n)]
    # For a single matrix, the number of multiplications is 0
    for i in range(n):
        dp[i][i] = 0
    # Solve over increasing chain lengths, updating the DP table
    for l in range(2, n + 1):  # Length of the matrix chain
        for i in range(n - l + 1):
            j = i + l - 1
            dp[i][j] = float('inf')
            for k in range(i, j):
                # Count the multiplications for this split and keep the minimum
                dp[i][j] = min(dp[i][j], dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1])
    # Return the minimum number of multiplications for the whole chain
    return dp[0][n - 1]

# test
matrix_dimensions = [10, 30, 5, 60]
result = matrix_chain_multiplication(matrix_dimensions)
print(f"The minimum number of multiplications is: {result}")
In this code, when the matrix_chain_multiplication function is passed a list dims of matrix dimensions, it returns the minimum number of multiplications for the chained matrix product. A two-dimensional list called dp is prepared in the function, where dp[i][j] contains the minimum number of multiplications needed to multiply submatrices i through j. As an example, if matrix_dimensions = [10, 30, 5, 60], the minimum number of multiplications is 4500 when the order of computing the three matrices of dimensions (10×30), (30×5), and (5×60) is optimized. Executing the above code yields the following result.
The minimum number of multiplications is: 4500
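The code above reports only the cost. Recording, for each subchain, the split index k that achieved the minimum lets us also recover the optimal parenthesization, as in this sketch (matrix_chain_order is a hypothetical name; the recurrence is the same as above).

```python
def matrix_chain_order(dims):
    n = len(dims) - 1
    dp = [[0] * n for _ in range(n)]
    split = [[0] * n for _ in range(n)]  # best split point for each subchain
    for l in range(2, n + 1):
        for i in range(n - l + 1):
            j = i + l - 1
            dp[i][j] = float('inf')
            for k in range(i, j):
                cost = dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                if cost < dp[i][j]:
                    dp[i][j] = cost
                    split[i][j] = k

    def parens(i, j):
        # Rebuild the parenthesization from the recorded split points
        if i == j:
            return f"A{i}"
        k = split[i][j]
        return f"({parens(i, k)} x {parens(k + 1, j)})"

    return dp[0][n - 1], parens(0, n - 1)

cost, order = matrix_chain_order([10, 30, 5, 60])
print(cost, order)  # 4500 ((A0 x A1) x A2)
```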
Implementation in python using dynamic programming to solve the knapsack problem
An example of a Python implementation that solves the Knapsack Problem using dynamic programming is presented below. The Knapsack Problem involves multiple items of different weights and values, and items are selected so that the total value is maximized while the total weight stays within a limit.
def knapsack_dynamic_programming(max_weight, weights, values):
    n = len(weights)
    # Prepare and initialize DP table
    dp = [[0] * (max_weight + 1) for _ in range(n + 1)]
    # Update DP table to calculate maximum value
    for i in range(1, n + 1):
        for w in range(1, max_weight + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i - 1][w], values[i - 1] + dp[i - 1][w - weights[i - 1]])
            else:
                dp[i][w] = dp[i - 1][w]
    # Return the maximum value and the selected items
    max_value = dp[n][max_weight]
    selected_items = []
    w = max_weight
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            selected_items.append(i - 1)
            w -= weights[i - 1]
    return max_value, selected_items

# test
max_weight = 10
weights = [2, 3, 4, 5]
values = [3, 4, 5, 6]
result_value, result_items = knapsack_dynamic_programming(max_weight, weights, values)
print(f"The maximum value is: {result_value}")
print(f"Selected items: {result_items}")
In this code, the knapsack_dynamic_programming function is passed the maximum weight max_weight of the knapsack, a list of item weights, and a list of item values, and it returns the maximum value together with the index list of the selected items. A two-dimensional list called dp is prepared in the function, and dp[i][w] stores the maximum value achievable by selecting from the first i items so that their total weight is at most w. As an example, if max_weight = 10, weights = [2, 3, 4, 5], and values = [3, 4, 5, 6], the maximum value is 13 (the items of weights 2, 3, and 5), and the index list of selected items, collected while backtracking from the last item, is [3, 1, 0]. Executing the above code yields the following result.
The maximum value is: 13
Selected items: [3, 1, 0]
Reference Information and Reference Books
For more details on the application of dynamic programming, see also “Automata and State Transitions/Petri Nets and Automatic Programming”.
Reference books include “Dynamic Programming”
“Introduction to Stochastic Dynamic Programming”
“Dynamic Programming: Foundations and Principles, Second Edition”
“Dynamic Programming: Models and Applications”