\int_t^T F(\sigma, x, u)\, d\sigma

is a minimum along the curve x on the interval [t, T] for all t \in [0, T].
The integral is dependent on the end point (t, x(t)). If one
defines

J(x, t) = \min_u \int_t^T F(\sigma, x, u)\, d\sigma                    (2.1)
for all admissible controls u, then

J(x(t), t) = \min_u \left\{ F(t, x, u)\,\delta t + \min_u \int_{t+\delta t}^T F\, d\sigma \right\}

or

J(x(t), t) = \min_u \left\{ F(t, x, u)\,\delta t + J(x + \delta x, t + \delta t) \right\}      (2.2)
This equation forms the basis of the direct methods for solving control
problems, described by Larson.1 Larson extended the direct methods to
constrained problems.
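The backward recursion in (2.2) can be sketched numerically in discrete time. The following is a minimal illustration only, assuming a scalar state on a grid, a finite set of admissible controls, and a hypothetical quadratic running cost F and dynamics f that are not taken from the text:

```python
import numpy as np

# Illustrative sketch of the recursion (2.2):
#   J(x, t) = min_u { F(t, x, u) dt + J(x + dx, t + dt) }
# with assumed (hypothetical) cost and dynamics, not the text's problem.
T = 1.0                              # final time
N = 50                               # number of time steps
dt = T / N
xs = np.linspace(-2.0, 2.0, 81)      # state grid
us = np.linspace(-1.0, 1.0, 21)      # admissible controls

def F(x, u):                         # assumed running cost
    return x**2 + u**2

def f(x, u):                         # assumed dynamics dx/dt = f(x, u)
    return u

J = np.zeros_like(xs)                # terminal condition J(x, T) = 0
for _ in range(N):                   # step backward from T to 0
    Q = np.empty((len(us), len(xs)))
    for i, u in enumerate(us):
        # successor state x + dx, with dx = f(x, u) dt, kept on the grid
        x_next = np.clip(xs + f(xs, u) * dt, xs[0], xs[-1])
        # cost of using u over [t, t + dt], plus the optimal cost-to-go
        Q[i] = F(xs, u) * dt + np.interp(x_next, xs, J)
    J = Q.min(axis=0)                # minimize over admissible controls
```

After the loop, J approximates the optimal cost-to-go J(x, 0) on the grid; from x = 0 the (assumed) cost is zero, and it grows away from the origin.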
If it is assumed that J has partial derivatives, the differential 
equations (1.43) and (1.44) can be obtained from (2.2). Hence, the 
boundary value problem for the optimal control is obtained. If the 
partial derivatives of J exist, the right-hand side of (2.2) can be 
expanded in a Taylor series: 
J(x, t) = \min_u \left\{ F\,\delta t + J(x, t) + J_x(x, t)\,\delta x + J_t(x, t)\,\delta t \right\}      (2.3)
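The simplification of (2.3) can be sketched as follows, under the assumption that the dynamics are \dot{x} = f(x, u), so that \delta x \approx f(x, u)\,\delta t; the text itself may proceed differently:

```latex
% Since J(x, t) does not depend on u, it cancels from both sides of (2.3).
% Dividing the remainder by \delta t and letting \delta t \to 0 gives
% (a sketch, assuming \delta x = f(x, u)\,\delta t):
0 = \min_u \bigl\{ F(t, x, u) + J_x(x, t)\, f(x, u) \bigr\} + J_t(x, t)
```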
1. Larson, R. E., "State Increment Dynamic Programming," American
Elsevier Publishing Company, Inc., New York (1968).
