Dynamic Programming
Dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions.
Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Divide-and-conquer algorithms partition the problem into disjoint subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming applies when the subproblems overlap - that is, when subproblems share common subproblems.
In this situation, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subproblems. A dynamic-programming algorithm solves each subproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time it revisits that subproblem. The technique of storing solutions to subproblems instead of recomputing them is called "memoization".
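As a minimal sketch of memoization (using the Fibonacci numbers, an illustrative example not taken from the text above), a dictionary serves as the table of stored answers:

```python
def fib(n, memo=None):
    """Fibonacci with memoization: each subproblem is solved only once."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # reuse a stored answer instead of recomputing
    if n < 2:
        result = n
    else:
        result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result            # store the solution in the table
    return result

print(fib(40))  # → 102334155
```

Without the table, the plain recursion would recompute fib(38), fib(37), and so on exponentially many times; with it, each value is computed exactly once.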
Dynamic programming is typically applied to optimization problems. Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.
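For instance (an illustrative example, not drawn from the text above), making change with the fewest coins is an optimization problem: every way of forming the amount is a solution, its value is the number of coins used, and we seek the minimum. A sketch assuming US coin denominations:

```python
import math

def min_coins(amount, coins=(1, 5, 10, 25)):
    """Fewest coins summing to `amount`; each smaller amount is a subproblem."""
    best = [0] + [math.inf] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a:
                # One coin of value c, plus an optimal solution for the rest.
                best[a] = min(best[a], best[a - c] + 1)
    return best[amount]

print(min_coins(63))  # → 6  (two quarters, one dime, three pennies)
```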
Elements of dynamic programming
An optimization problem must have two key ingredients for dynamic programming to apply: optimal substructure and overlapping subproblems.
1. Optimal substructure
To solve an optimization problem using dynamic programming, we must first characterize the structure of an optimal solution. Specifically, we must show that an optimal solution to the problem can be constructed from optimal solutions to its subproblems.
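The rod-cutting problem (a standard textbook example, used here as an assumed illustration) makes this concrete: an optimal way to cut a rod of length n consists of a first piece of some length i plus an optimal cutting of the remaining length n - i. The recursive structure below follows directly from that observation:

```python
def cut_rod(prices, n):
    """Maximum revenue obtainable from a rod of length n.
    prices[i] is the price of a piece of length i (prices[0] is unused)."""
    if n == 0:
        return 0
    # Try every choice of first piece; each choice reduces the problem to
    # an optimal solution of the smaller subproblem of length n - i.
    return max(prices[i] + cut_rod(prices, n - i) for i in range(1, n + 1))

# Assumed price table for piece lengths 1..10.
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(cut_rod(prices, 4))  # → 10  (two pieces of length 2)
```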
Dynamic programming cannot be applied if the optimal solution to a problem does not require subproblem solutions to be optimal. This often happens when the subproblems are not independent of each other.
Optimal substructure varies across problem domains in two ways:
a. How many subproblems an optimal solution to the original problem uses, and
b. How many choices we have in determining which subproblem(s) to use in an optimal solution.
Dynamic programming often uses optimal substructure in a bottom-up fashion. That is, first find optimal solutions to subproblems and, having solved the subproblems, find an optimal solution to the original problem.
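The bottom-up pattern can be sketched with rod cutting (a standard textbook example, with an assumed price table): subproblems are solved in order of increasing rod length, so every answer a subproblem needs is already in the table when it is computed.

```python
def bottom_up_cut_rod(prices, n):
    """Bottom-up DP: fill revenue[0..n] from the smallest length upward."""
    revenue = [0] * (n + 1)
    for length in range(1, n + 1):
        # Best over all choices of first piece; revenue[length - i] is
        # already final because smaller lengths were solved first.
        revenue[length] = max(prices[i] + revenue[length - i]
                              for i in range(1, length + 1))
    return revenue[n]

# Assumed price table for piece lengths 1..10.
prices = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
print(bottom_up_cut_rod(prices, 7))  # → 18
```

Unlike a memoized top-down version, this variant has no recursion and visits every subproblem exactly once, which is the hallmark of the bottom-up approach.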
2. Overlapping sub problems
When a recursive algorithm revisits the same subproblem repeatedly, we say that the optimization problem has overlapping subproblems. Dynamic-programming algorithms typically take advantage of overlapping subproblems by solving each subproblem once and then storing the solution in a table where it can be looked up when needed, using constant time per lookup.