problem is solved; the solution is substituted into the boundary conditions
at the other end. If these conditions are satisfied, the solution to the
initial value problem is the desired solution to the two-point, boundary-value
problem; otherwise, a new set of assumptions is made based on the
discrepancy between the actual boundary values and the calculated values.
Hopefully, as one continues this iteration process, the solutions to the
initial value problem converge to a solution of the two-point, boundary-value
problem. The shooting method may not converge, or it can be unstable;
that is, a small variation in the initial conditions results in a large
variation in the solution. If the initial value problem is unstable, a small
error, such as roundoff on a computer, could cause subsequently computed
values at another point to be meaningless.
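The iteration just described can be sketched in code. The example below is an illustration, not from the text: it applies the shooting method to the linear boundary-value problem x'' = x with x(0) = 0, x(1) = 1, adjusting the unknown initial slope x'(0) by secant iteration until the computed terminal value matches the prescribed boundary condition.

```python
import math

def rk4_step(f, t, y, h):
    # one classical Runge-Kutta step for the system y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def terminal_value(slope, n=100):
    # solve the initial value problem x'' = x, x(0) = 0, x'(0) = slope,
    # and return the computed terminal value x(1)
    f = lambda t, y: [y[1], y[0]]
    t, h, y = 0.0, 1.0/n, [0.0, slope]
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y[0]

def shoot(target=1.0, s0=0.0, s1=1.0, tol=1e-10):
    # adjust the guessed initial slope by secant iteration until the
    # terminal boundary condition x(1) = target is satisfied
    r0, r1 = terminal_value(s0) - target, terminal_value(s1) - target
    while abs(r1) > tol:
        s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
        r0, r1 = r1, terminal_value(s1) - target
    return s1

# exact solution: x(t) = sinh(t)/sinh(1), so x'(0) = 1/sinh(1) ≈ 0.8509
print(shoot())
```

Because this test problem is linear, the residual is a linear function of the guessed slope and the secant iteration converges in two steps; for nonlinear problems, and especially unstable ones, convergence is not guaranteed, as noted above.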
Before proceeding to the direct method for solving the optimal
control problem, take a second look at the Hamiltonian H and the functions
p_i. Suppose that the terminal cost G is identically zero; the
cost function is then
        C(u) = ∫_0^T F(t, x, u) dt
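As a concrete, hypothetical illustration of evaluating such a cost functional, the integral can be approximated by quadrature along a given trajectory. The running cost F = x² + u² and the trajectory x(t) = t produced by the constant control u(t) = 1 are arbitrary choices made for this example:

```python
def cost(F, x, u, T, n=1000):
    # trapezoidal approximation of C(u) = integral from 0 to T
    # of F(t, x(t), u(t)) dt
    h = T / n
    v = [F(i*h, x(i*h), u(i*h)) for i in range(n + 1)]
    return h * (v[0]/2 + sum(v[1:-1]) + v[-1]/2)

# example: F = x**2 + u**2 along x(t) = t with constant control u(t) = 1
# on [0, 1]; the exact value is the integral of (t**2 + 1) dt = 4/3
C = cost(lambda t, x, u: x*x + u*u, lambda t: t, lambda t: 1.0, T=1.0)
```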
Further, assume that every point in an open neighborhood N of an optimal
trajectory z(t) can be joined to the initial point (0, x_0) by a trajectory
x(t) resulting from an optimal control. This assumption makes the
minimal cost J a function of the terminal point (T, x_T) in N.
        J(x_T, T) = Min_u ∫_0^T F(t, x, u) dt                    (1.39)
It is assumed that J is twice continuously differentiable. Then, 
        J(x_T + Δx, T + ΔT) = J(x_T, T) + J_x Δx + J_T ΔT        (1.40)
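Expansion (1.40) can be checked numerically on a problem whose minimal cost is known in closed form. For the scalar system ẋ = u with F = u² (an example chosen here, not taken from the text), the optimal control from (0, 0) to (x_T, T) is the constant u = x_T/T, giving J(x_T, T) = x_T²/T; the first-order expansion then agrees with the perturbed cost up to second-order terms:

```python
def J(x_T, T):
    # minimal cost for xdot = u, F = u**2: the optimal control is the
    # constant u = x_T/T, so J = T * (x_T/T)**2 = x_T**2 / T
    return x_T**2 / T

x_T, T = 2.0, 1.0
J_x = 2 * x_T / T        # partial derivative of J with respect to x_T
J_T = -(x_T / T) ** 2    # partial derivative of J with respect to T
dx, dT = 1e-4, 1e-4

exact = J(x_T + dx, T + dT)
first_order = J(x_T, T) + J_x * dx + J_T * dT
# the discrepancy is second order in (dx, dT), consistent with (1.40)
assert abs(exact - first_order) < 1e-6
```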
