Dynamic Programming

Academic year: 2022

(1)

Dynamic Programming

(2)

Dynamic Programming

• An algorithm design technique (like divide and conquer)

• Divide and conquer

– Partition the problem into independent subproblems

– Solve the subproblems recursively

– Combine the solutions to solve the original problem

(3)

Dynamic Programming

• Applicable when subproblems are not independent

– Subproblems share subsubproblems

E.g.: Combinations:

– A divide and conquer approach would repeatedly solve the common subproblems

– Dynamic programming solves every subproblem just once and stores the answer in a table

C(n, k) = C(n-1, k) + C(n-1, k-1)

C(n, 0) = 1,  C(n, n) = 1

(4)

Example: Combinations

[Recursion tree for Comb(6, 4) expanded with the recurrence below: the subtrees for Comb(4, 3), Comb(3, 2), Comb(2, 1), and Comb(2, 2) appear several times, so a divide-and-conquer approach recomputes the same values over and over.]

Comb(n, k) = Comb(n-1, k) + Comb(n-1, k-1)
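The repeated subproblems above can be eliminated by storing each answer the first time it is computed. A minimal Python sketch (Python and the function name `comb` are not part of the slides; `lru_cache` stands in for the table mentioned on slide (3)):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def comb(n, k):
    """C(n, k) via the recurrence C(n, k) = C(n-1, k) + C(n-1, k-1).

    The cache ensures every subproblem is solved just once."""
    if k == 0 or k == n:
        return 1
    return comb(n - 1, k) + comb(n - 1, k - 1)

print(comb(6, 4))  # 15
```

Without the cache the same call tree as in the figure is expanded; with it, each Comb(i, j) is evaluated a single time.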

(5)

Dynamic Programming

• Used for optimization problems

– A set of choices must be made to get an optimal solution

– Find a solution with the optimal value (minimum or maximum)

– There may be many solutions that lead to an optimal value

– Our goal: find an optimal solution

(6)

Dynamic Programming Algorithm

1. Characterize the structure of an optimal solution

2. Recursively define the value of an optimal solution

3. Compute the value of an optimal solution in a bottom-up fashion

4. Construct an optimal solution from computed information (not always necessary)

(7)

Assembly Line Scheduling

• Automobile factory with two assembly lines

– Each line has n stations: S1,1, . . . , S1,n and S2,1, . . . , S2,n

– Corresponding stations S1, j and S2, j perform the same function but can take different amounts of time a1, j and a2, j

– Entry times are: e1 and e2; exit times are: x1 and x2

(8)

Assembly Line Scheduling

• After going through a station, can either:

– stay on same line at no cost, or

– transfer to other line: cost after Si,j is ti,j , j = 1, . . . , n - 1

(9)

Assembly Line Scheduling

• Problem:

which stations should be chosen from line 1 and which from line 2 in order to minimize the total time through the factory for one car?

(10)

One Solution

• Brute force

– Enumerate all possibilities of selecting stations

– Compute how long it takes in each case and choose the best one

• Solution:

– There are 2^n possible ways to choose stations

– Infeasible when n is large!!

[Each selection can be encoded as an n-bit string, e.g. 1 0 0 1 1: bit j is 1 if the car uses line 1 at station j, and 0 if it uses line 2.]
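The brute-force enumeration can be sketched in a few lines of Python (a hypothetical helper, not from the slides; the bit convention is flipped to 0 = line 1, 1 = line 2 so the bits double as list indices):

```python
from itertools import product

def brute_force_assembly(a, t, e, x, n):
    """Try all 2^n line assignments and return the best total time.

    a[i][j]: time at station j of line i; t[i][j]: transfer cost after
    station j of line i; e[i]/x[i]: entry/exit times (all 0-based)."""
    best = float('inf')
    for choice in product((0, 1), repeat=n):
        total = e[choice[0]] + a[choice[0]][0]
        for j in range(1, n):
            if choice[j] != choice[j - 1]:
                total += t[choice[j - 1]][j - 1]  # pay to switch lines
            total += a[choice[j]][j]
        total += x[choice[n - 1]]
        best = min(best, total)
    return best
```

The outer loop runs 2^n times, which is exactly why this approach is infeasible for large n.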

(11)

1. Structure of the Optimal Solution

• How do we compute the minimum time of going through a station?

(12)

1. Structure of the Optimal Solution

• Let’s consider all possible ways to get from the starting point through station S1,j

– We have two choices of how to get to S1, j:

• Through S1, j - 1, then directly to S1, j

• Through S2, j - 1, then transfer over to S1, j

[Diagram: S1,j is reached either from S1,j-1 on line 1 (station time a1,j) or from S2,j-1 on line 2 via a transfer of cost t2,j-1.]

(13)

1. Structure of the Optimal Solution

• Suppose that the fastest way through S1, j is through S1, j – 1

– We must have taken a fastest way from entry through S1, j – 1

– If there were a faster way through S1, j - 1, we would use it instead

• Similarly for S2, j – 1

[Same diagram: S1,j reached from S1,j-1 (line 1) or from S2,j-1 plus transfer t2,j-1 (line 2). This is the optimal substructure.]

(14)

Optimal Substructure

Generalization: an optimal solution to the

problem “find the fastest way through S1, j” contains

within it an optimal solution to subproblems: “find the fastest way through S1, j - 1 or S2, j – 1”.

• This is referred to as the optimal substructure property

• We use this property to construct an optimal solution to a problem from optimal solutions to subproblems

(15)

2. A Recursive Solution

• Define the value of an optimal solution in terms of the optimal solution to subproblems

(16)

2. A Recursive Solution (cont.)

• Definitions:

– f* : the fastest time to get through the entire factory

– fi[j] : the fastest time to get from the starting point through station Si,j

f* = min (f1[n] + x1, f2[n] + x2)

(17)

2. A Recursive Solution (cont.)

• Base case: j = 1, i = 1, 2 (getting through station 1)

f1[1] = e1 + a1,1
f2[1] = e2 + a2,1

(18)

2. A Recursive Solution (cont.)

• General Case: j = 2, 3, …,n, and i = 1, 2

• Fastest way through S1, j is either:

– the way through S1, j - 1 then directly through S1, j, or f1[j - 1] + a1,j

– the way through S2, j - 1, transfer from line 2 to line 1, then through S1, j

f2[j -1] + t2,j-1 + a1,j

f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j)

a1,j a1,j-1

a2,j-1

t2,j-1

S1,j S1,j-1

S2,j-1 Line 1

Line 2

(19)

2. A Recursive Solution (cont.)

f1[j] = e1 + a1,1                                          if j = 1
        min(f1[j - 1] + a1,j, f2[j - 1] + t2,j-1 + a1,j)   if j ≥ 2

f2[j] = e2 + a2,1                                          if j = 1
        min(f2[j - 1] + a2,j, f1[j - 1] + t1,j-1 + a2,j)   if j ≥ 2

(20)

3. Computing the Optimal Solution

f* = min (f1[n] + x1, f2[n] + x2)

f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j) f2[j] = min(f2[j - 1] + a2,j ,f1[j -1] + t1,j-1 + a2,j)

• Solving top-down would result in exponential running time

[Recursion tree for a top-down computation with n = 5: f1(5) and f2(5) each call f1(4) and f2(4); f1(3) and f2(3) are each computed 2 times, f1(2) and f2(2) 4 times, and so on, doubling at every level.]

(21)

3. Computing the Optimal Solution

• For j ≥ 2, each value fi[j] depends only on the values of f1[j – 1] and f2[j - 1]

• Idea: compute the values of fi[j] in increasing order of j

• Bottom-up approach

– First find optimal solutions to subproblems

– Then find an optimal solution to the problem from the subproblem solutions

(22)

Example

f1[j] = e1 + a1,1                                          if j = 1
        min(f1[j - 1] + a1,j, f2[j - 1] + t2,j-1 + a1,j)   if j ≥ 2

        j:       1      2      3      4      5
        f1[j]:   9     18[1]  20[2]  24[1]  32[1]
        f2[j]:  12     16[1]  22[2]  25[1]  30[2]

f* = 35[1]

(the bracketed value records which line the previous station was on)

(23)

FASTEST-WAY(a, t, e, x, n)

1.  f1[1] ← e1 + a1,1
2.  f2[1] ← e2 + a2,1
3.  for j ← 2 to n
4.      do if f1[j - 1] + a1,j ≤ f2[j - 1] + t2,j-1 + a1,j
5.          then f1[j] ← f1[j - 1] + a1,j
6.               l1[j] ← 1
7.          else f1[j] ← f2[j - 1] + t2,j-1 + a1,j
8.               l1[j] ← 2
9.      if f2[j - 1] + a2,j ≤ f1[j - 1] + t1,j-1 + a2,j
10.         then f2[j] ← f2[j - 1] + a2,j
11.              l2[j] ← 2
12.         else f2[j] ← f1[j - 1] + t1,j-1 + a2,j
13.              l2[j] ← 1

Lines 1–2: compute initial values of f1 and f2
Lines 4–8: compute the values of f1[j] and l1[j]
Lines 9–13: compute the values of f2[j] and l2[j]

Running time: O(n)

(24)

FASTEST-WAY(a, t, e, x, n) (cont.)

14. if f1[n] + x1 ≤ f2[n] + x2
15.    then f* ← f1[n] + x1
16.         l* ← 1
17.    else f* ← f2[n] + x2
18.         l* ← 2

Lines 14–18: compute the fastest time through the entire factory
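FASTEST-WAY translates almost line for line into Python. A sketch with 0-based indices (the concrete station, transfer, entry, and exit times used in the usage example are assumptions, chosen so the resulting fi[j] values match the example slide):

```python
def fastest_way(a, t, e, x, n):
    """Bottom-up assembly-line scheduling.

    a[i][j]: time at station j of line i; t[i][j]: transfer cost after
    station j of line i; e[i]/x[i]: entry/exit times (all 0-based).
    Returns (f*, l*, l1, l2); l1[j]/l2[j] use the slides' 1-based line labels."""
    f1, f2 = [0] * n, [0] * n
    l1, l2 = [0] * n, [0] * n
    f1[0] = e[0] + a[0][0]
    f2[0] = e[1] + a[1][0]
    for j in range(1, n):
        if f1[j - 1] + a[0][j] <= f2[j - 1] + t[1][j - 1] + a[0][j]:
            f1[j], l1[j] = f1[j - 1] + a[0][j], 1          # stay on line 1
        else:
            f1[j], l1[j] = f2[j - 1] + t[1][j - 1] + a[0][j], 2  # transfer from line 2
        if f2[j - 1] + a[1][j] <= f1[j - 1] + t[0][j - 1] + a[1][j]:
            f2[j], l2[j] = f2[j - 1] + a[1][j], 2          # stay on line 2
        else:
            f2[j], l2[j] = f1[j - 1] + t[0][j - 1] + a[1][j], 1  # transfer from line 1
    if f1[n - 1] + x[0] <= f2[n - 1] + x[1]:
        return f1[n - 1] + x[0], 1, l1, l2
    return f2[n - 1] + x[1], 2, l1, l2

a = [[7, 9, 3, 4, 8], [8, 5, 6, 4, 5]]   # hypothetical station times
t = [[2, 3, 1, 3], [2, 1, 2, 2]]         # hypothetical transfer costs
print(fastest_way(a, t, (2, 4), (3, 6), 5)[0])  # 35
```

With these numbers the table of f1[j]/f2[j] values comes out as 9, 18, 20, 24, 32 and 12, 16, 22, 25, 30, and f* = 35 with l* = 1, as on the example slide.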

(25)

4. Construct an Optimal Solution

Alg.: PRINT-STATIONS(l, n)
      i ← l*
      print "line " i ", station " n
      for j ← n downto 2
          do i ← li[j]
             print "line " i ", station " j - 1

        j:            1      2      3      4      5
        f1[j]/l1[j]:  9     18[1]  20[2]  24[1]  32[1]
        f2[j]/l2[j]: 12     16[1]  22[2]  25[1]  30[2]

l* = 1

(26)

Elements of Dynamic Programming

• Optimal Substructure

– An optimal solution to a problem contains within it an optimal solution to subproblems

– Optimal solution to the entire problem is built in a bottom-up manner from optimal solutions to subproblems

• Overlapping Subproblems

– If a recursive algorithm revisits the same subproblems over and over, the problem has overlapping subproblems

(27)

Parameters of Optimal Substructure

• How many subproblems are used in an optimal solution for the original problem?

– Assembly line: one subproblem (the line that gives the best time)

– Matrix multiplication: two subproblems (subproducts Ai..k, Ak+1..j)

• How many choices do we have in determining which subproblems to use in an optimal solution?

– Assembly line: two choices (line 1 or line 2)

– Matrix multiplication: j - i choices for k (splitting the product)

(28)

Parameters of Optimal Substructure

• Intuitively, the running time of a dynamic

programming algorithm depends on two factors:

– Number of subproblems overall

– How many choices we look at for each subproblem

• Assembly line:

– Θ(n) subproblems (n stations)
– 2 choices for each subproblem
⇒ Θ(n) overall

• Matrix multiplication:

– Θ(n²) subproblems (1 ≤ i ≤ j ≤ n)
– At most n - 1 choices for each
⇒ Θ(n³) overall

(29)

Longest Common Subsequence

• Given two sequences X = ⟨x1, x2, …, xm⟩ and Y = ⟨y1, y2, …, yn⟩,

find a maximum-length common subsequence (LCS) of X and Y

• E.g.:

X = ⟨A, B, C, B, D, A, B⟩

• Subsequences of X:

– A subset of elements of the sequence, taken in order

⟨A, B, D⟩, ⟨B, C, D, B⟩, etc.

(30)

Example

X = A, B, C, B, D, A, B X = A, B, C, B, D, A, B

Y = B, D, C, A, B, A Y = B, D, C, A, B, A

• B, C, B, A and B, D, A, B are longest common subsequences of X and Y (length = 4)

• B, C, A, however is not a LCS of X and Y

(31)

Brute-Force Solution

• For every subsequence of X, check whether it is a subsequence of Y

• There are 2^m subsequences of X to check

• Each subsequence takes Θ(n) time to check

– scan Y for the first letter, from there scan for the second, and so on

• Running time: Θ(n·2^m)

(32)

Making the choice

X = A, B, D, E

Y = Z, B, E

• Choice: include one element into the common sequence (E) and solve the resulting

subproblem

X = A, B, D, G

Y = Z, B, D

• Choice: exclude an element from a string and solve the resulting subproblem

(33)

Notations

• Given a sequence X = ⟨x1, x2, …, xm⟩, we define the i-th prefix of X, for i = 0, 1, 2, …, m, as

Xi = ⟨x1, x2, …, xi⟩

• c[i, j] = the length of an LCS of the sequences Xi = ⟨x1, x2, …, xi⟩ and Yj = ⟨y1, y2, …, yj⟩

(34)

A Recursive Solution

Case 1: xi = yj

e.g.: Xi = ⟨A, B, D, E⟩
      Yj = ⟨Z, B, E⟩

– Append xi = yj to the LCS of Xi-1 and Yj-1

– Must find an LCS of Xi-1 and Yj-1 ⇒ optimal solution to a problem includes optimal solutions to subproblems

c[i, j] = c[i - 1, j - 1] + 1

(35)

A Recursive Solution

Case 2: xi ≠ yj

e.g.: Xi = ⟨A, B, D, G⟩
      Yj = ⟨Z, B, D⟩

– Must solve two problems

• find an LCS of Xi-1 and Yj: Xi-1 = ⟨A, B, D⟩ and Yj = ⟨Z, B, D⟩

• find an LCS of Xi and Yj-1: Xi = ⟨A, B, D, G⟩ and Yj = ⟨Z, B⟩

• Optimal solution to a problem includes optimal solutions to subproblems:

c[i, j] = max { c[i - 1, j], c[i, j - 1] }

(36)

Overlapping Subproblems

• To find a LCS of X and Y

– we may need to find the LCS of X and Yn-1 and that of Xm-1 and Y

– Both of the above subproblems have the subproblem of finding the LCS of Xm-1 and Yn-1

• Subproblems share subsubproblems

(37)

3. Computing the Length of the LCS

c[i, j] = 0                              if i = 0 or j = 0
          c[i-1, j-1] + 1                if i, j > 0 and xi = yj
          max(c[i, j-1], c[i-1, j])      if i, j > 0 and xi ≠ yj

[Table c, with rows i = 0..m labeled x1 … xm and columns j = 0..n labeled y1 … yn: row 0 and column 0 are filled with 0s; each entry c[i, j] is computed first from c[i-1, j-1], then from its other neighbors c[i-1, j] and c[i, j-1].]

(38)

Additional Information

c[i, j] = 0                              if i = 0 or j = 0
          c[i-1, j-1] + 1                if i, j > 0 and xi = yj
          max(c[i, j-1], c[i-1, j])      if i, j > 0 and xi ≠ yj

A matrix b[i, j]:

• For a subproblem [i, j] it tells us what choice was made to obtain the optimal value

• If xi = yj: b[i, j] = "↖"

• Else, if c[i - 1, j] ≥ c[i, j - 1]: b[i, j] = "↑"; else b[i, j] = "←"

[Table fragment illustrating b and c on sample letters A, B, C, D, F: each cell stores the length c[i, j] together with an arrow pointing to the neighbor (c[i-1, j-1], c[i-1, j], or c[i, j-1]) it was computed from.]

(39)

LCS-LENGTH(X, Y, m, n)

1.  for i ← 1 to m
2.      do c[i, 0] ← 0
3.  for j ← 0 to n
4.      do c[0, j] ← 0
5.  for i ← 1 to m
6.      do for j ← 1 to n
7.          do if xi = yj
8.              then c[i, j] ← c[i - 1, j - 1] + 1
9.                   b[i, j] ← "↖"
10.             else if c[i - 1, j] ≥ c[i, j - 1]
11.                 then c[i, j] ← c[i - 1, j]
12.                      b[i, j] ← "↑"
13.                 else c[i, j] ← c[i, j - 1]
14.                      b[i, j] ← "←"
15. return c and b

Lines 1–4: the length of the LCS is zero if one of the sequences is empty
Lines 7–9: Case 1 (xi = yj); lines 10–14: Case 2 (xi ≠ yj)

Running time: Θ(mn)

(40)

Example

X = ⟨A, B, C, B, D, A, B⟩
Y = ⟨B, D, C, A, B, A⟩

c[i, j] = 0                              if i = 0 or j = 0
          c[i-1, j-1] + 1                if i, j > 0 and xi = yj
          max(c[i, j-1], c[i-1, j])      if i, j > 0 and xi ≠ yj

[Filled-in table c with arrows b for this X and Y: rows i = 0..7 labeled A, B, C, B, D, A, B; columns j = 0..6 labeled B, D, C, A, B, A; row 0 and column 0 are all 0, and the bottom-right entry is c[7, 6] = 4, the length of the LCS.]

If xi = yj: b[i, j] = "↖"; else if c[i - 1, j] ≥ c[i, j - 1]: b[i, j] = "↑"; else b[i, j] = "←"

(41)

4. Constructing a LCS

• Start at b[m, n] and follow the arrows

• When we encounter a "↖" in b[i, j], xi = yj is an element of the LCS

[Same table as the previous slide: following the arrows back from b[7, 6], the "↖" cells yield the LCS ⟨B, C, B, A⟩.]

(42)

PRINT-LCS(b, X, i, j)

1. if i = 0 or j = 0
2.    then return
3. if b[i, j] = "↖"
4.    then PRINT-LCS(b, X, i - 1, j - 1)
5.         print xi
6. elseif b[i, j] = "↑"
7.    then PRINT-LCS(b, X, i - 1, j)
8. else PRINT-LCS(b, X, i, j - 1)

Initial call: PRINT-LCS(b, X, length[X], length[Y])

Running time: Θ(m + n)

(43)

Improving the Code

• What can we say about how each entry c[i, j] is computed?

– It depends only on c[i -1, j - 1], c[i - 1, j], and c[i, j - 1]

– Eliminate table b and compute in O(1) which of the three values was used to compute c[i, j]

– We save (mn) space from table b

– However, we do not asymptotically decrease the auxiliary space requirements: still need table c
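Eliminating table b looks like this in Python (a self-contained sketch with a hypothetical function name; the table is rebuilt here so the example stands alone):

```python
def lcs_from_c(X, Y):
    """LCS reconstruction without table b: re-derive each choice from c in O(1)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:            # the "↖" case: part of the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:    # the "↑" case
            i -= 1
        else:                               # the "←" case
            j -= 1
    return ''.join(reversed(out))

print(lcs_from_c("ABCBDAB", "BDCABA"))  # BCBA
```

Each step of the walk recomputes in O(1) which of c[i-1, j-1], c[i-1, j], c[i, j-1] produced c[i, j], exactly as the slide describes.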

(44)

Improving the Code

• If we only need the length of the LCS

– LCS-LENGTH works only on two rows of c at a time

• The row being computed and the previous row

– We can reduce the asymptotic space requirements by storing only these two rows
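The two-row idea can be sketched as follows (a minimal Python version, assuming only the length is needed; the function name is hypothetical):

```python
def lcs_length_two_rows(X, Y):
    """LCS length keeping only two rows of c: the previous row and the current one."""
    n = len(Y)
    prev = [0] * (n + 1)                    # row i-1 of table c
    for x in X:
        curr = [0] * (n + 1)                # row i, built left to right
        for j in range(1, n + 1):
            if x == Y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[n]

print(lcs_length_two_rows("ABCBDAB", "BDCABA"))  # 4
```

Auxiliary space drops from Θ(mn) to Θ(n), at the cost of losing the information needed to reconstruct the LCS itself.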
