[Figure here in original: an optimal trajectory crossing constant-cost fronts; axis detail not recoverable.]
Figure 2 -- Constant Cost Fronts
These arguments hold only if the terminal cost is zero, G = 0.
RELATION TO DYNAMIC PROGRAMMING 
The partial differential equations (1.43) and (1.44) can be obtained by the method of dynamic programming. This method is based on the Bellman principle of optimality.1 According to the Bellman principle, an optimal control policy has the property that, regardless of the initial state or initial decision, the remaining decisions must constitute an optimal control policy with regard to the state which results from the first decision.
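The Bellman principle can be illustrated on a small discrete-time, finite-horizon problem (the state grid, running cost, dynamics, and horizon below are illustrative assumptions, not taken from the text). Backward induction computes the optimal cost-to-go stage by stage; the principle of optimality then says that the tail of an optimal trajectory is itself optimal from whatever state it passes through:

```python
T = 4                      # number of stages (illustrative)
STATES = range(-3, 4)      # admissible states x
CONTROLS = (-1, 0, 1)      # admissible controls u

def step_cost(x, u):
    # running cost: penalize distance from the origin and control effort
    return x * x + u * u

def dynamics(x, u):
    # next state, clipped to the grid
    return max(-3, min(3, x + u))

# V[k][x] = optimal cost from state x at stage k to the end (terminal cost G = 0)
V = [{x: 0.0 for x in STATES} for _ in range(T + 1)]
policy = [{} for _ in range(T)]

for k in range(T - 1, -1, -1):          # backward induction
    for x in STATES:
        u_star, c_star = min(
            ((u, step_cost(x, u) + V[k + 1][dynamics(x, u)]) for u in CONTROLS),
            key=lambda p: p[1],
        )
        V[k][x] = c_star
        policy[k][x] = u_star

def rollout(k0, x0):
    # simulate the optimal policy from stage k0, returning accumulated cost
    x, total = x0, 0.0
    for k in range(k0, T):
        u = policy[k][x]
        total += step_cost(x, u)
        x = dynamics(x, u)
    return total
```

Simulating the optimal policy from any initial state, the cost of the remaining portion of the trajectory always equals the optimal cost-to-go V at the state where that portion begins, which is exactly the principle stated above.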
In terms of the cost function

    C(u) = ∫[0,T] F(ẋ, x, u) dt

the Bellman principle takes the form:

The cost C(u) is a minimum along a curve x defined on [0, T] only if it is a minimum along each later part of the curve, that is, if
1. Dreyfus, S. E., "Dynamic Programming and the Calculus of Variations," Academic Press, Inc., New York (1965).