As an example of a control problem, consider a ship moving through 
a current of water; the ship is a system undergoing a change in state. 
In this example, the state is the position (x, y) of the ship. The 
parameters which control the motion of the ship are the power, which 
determines the velocity V relative to the water, and the steerage angle,
which controls the heading angle θ. In this simplification of the
system, the dynamic equations are: 
    ẋ = V cos θ + u(x, y)
    ẏ = V sin θ + v(x, y)
where u and v are the velocities of the current in the x- and y-directions,
respectively. The goal might be to go from point A to point B. If it 
is desired to reach B in the shortest possible time, the cost function 
would be the accumulated time; if it is desired to reach B with the 
minimum expenditure of fuel, the cost function would give the expended
fuel in terms of x, y, V, and θ. A more complicated cost function would
result if it is desired to reach B in the least time with a reasonable 
expenditure of fuel. Both the power and steerage angle could be subject 
to unpredictable perturbations; there could also be a stochastic
perturbation of the current.
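
As a concrete illustration, the dynamics above are straightforward to
integrate numerically. The following sketch steps the ship forward with
a simple forward-Euler rule and accumulates the two example costs just
described, elapsed time and fuel. The current field and the quadratic
fuel model are assumptions made for illustration, not taken from the
lecture.

    import math

    def current(x, y):
        # Hypothetical current field; any smooth u(x, y), v(x, y) would do.
        return 0.3 * math.sin(y), 0.1 * math.cos(x)

    def step(x, y, V, theta, dt):
        # Forward-Euler step of  x' = V cos(theta) + u,  y' = V sin(theta) + v.
        u, v = current(x, y)
        return (x + dt * (V * math.cos(theta) + u),
                y + dt * (V * math.sin(theta) + v))

    def simulate(x0, y0, controls, dt=0.1):
        # Integrate a sequence of (V, theta) controls, accumulating the
        # elapsed time and a fuel proxy (proportional to V^2, an assumed
        # model chosen only to make the cost concrete).
        x, y = x0, y0
        elapsed, fuel = 0.0, 0.0
        for V, theta in controls:
            x, y = step(x, y, V, theta, dt)
            elapsed += dt
            fuel += V * V * dt
        return (x, y), elapsed, fuel

    # Example: hold full power on a fixed heading for 100 steps.
    end, elapsed, fuel = simulate(0.0, 0.0, [(1.0, 0.4)] * 100)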
This lecture on control theory first treats a deterministic optimal 
control problem with no constraints on the controls. It is first solved
by transforming the problem into a boundary-value problem for an
ordinary differential equation, the so-called indirect approach.
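
To make the indirect approach concrete, consider the special case of a
constant current (Zermelo's problem). The costate equations then imply
the adjoint variables are constant, so the optimal heading is a single
constant θ, and the boundary-value problem collapses to two algebraic
conditions: choose θ and the arrival time T so the trajectory from A
ends at B. The sketch below solves these with a root finder; the speed,
current, and endpoints are assumed values, not the lecture's.

    import math
    from scipy.optimize import fsolve

    V = 1.0                 # ship speed through the water (assumed)
    u, v = 0.3, 0.1         # constant current components (assumed)
    A, B = (0.0, 0.0), (5.0, 2.0)

    def endpoint_error(z):
        # With theta constant, the trajectory is a straight line;
        # require it to pass through B at time T.
        theta, T = z
        x = A[0] + T * (V * math.cos(theta) + u)
        y = A[1] + T * (V * math.sin(theta) + v)
        return [x - B[0], y - B[1]]

    theta, T = fsolve(endpoint_error, [0.4, 5.0])
    print(f"optimal heading {theta:.3f} rad, minimum time {T:.3f}")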
It is then solved by the direct method developed by Bellman, the method
of dynamic programming.*
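
A minimal sketch of how the direct method operates, under assumptions
of my own: the positions are discretized to an N x N grid, the heading
is restricted to eight compass directions, and Bellman's principle is
applied as a value iteration that tabulates the minimum time-to-go to B
from every cell. The grid size, speed, and current are illustrative.

    import math

    N = 20                                   # grid size (assumed)
    V, dt = 1.0, 1.0                         # ship speed and time step (assumed)
    goal = (N - 1, N - 1)                    # point B; point A is (0, 0)
    headings = [k * math.pi / 4 for k in range(8)]

    def neighbor(i, j, theta):
        # Cell reached in one step under heading theta plus a constant
        # current of (0.2, 0.0), an assumed field for illustration.
        ni = round(i + dt * (V * math.cos(theta) + 0.2))
        nj = round(j + dt * V * math.sin(theta))
        return (ni, nj) if 0 <= ni < N and 0 <= nj < N else None

    # Value iteration: J[i][j] is the minimum time-to-go from cell (i, j).
    INF = float("inf")
    J = [[INF] * N for _ in range(N)]
    J[goal[0]][goal[1]] = 0.0
    for _ in range(2 * N):                   # enough sweeps for this grid
        for i in range(N):
            for j in range(N):
                for theta in headings:
                    nxt = neighbor(i, j, theta)
                    if nxt is not None:
                        # Bellman's principle: one step's cost plus the
                        # optimal cost from wherever the step lands.
                        J[i][j] = min(J[i][j], dt + J[nxt[0]][nxt[1]])

    print(J[0][0])                           # minimum time from A to B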
The big contribution of modern control theory to the deterministic
control problem has been the extensions to controls with
constraints, and a discussion of constrained controls constitutes
another major topic of the lecture. Still another important area is the
development of the theory of stochastic processes necessary in the 
*Bellman, R. E. and S. E. Dreyfus, "Applied Dynamic Programming,"
Princeton University Press, Princeton, N.J. (1962).
