treatment of stochastic controls. Finally, the theory of Kalman-Bucy 
filters is given and their solution to the stochastic control problem is 
presented for linear systems. 
THE OPTIMAL CONTROL PROBLEM 
In these lectures the simplest optimal control problem considered 
is that of a state variable x(t) and a control variable u(t) defined on 
an interval $0 \le t \le T$. The process being controlled is described by the
dynamic equations 
$$\dot{x}(t) = f(t, x, u) \qquad (1.1)$$
with 
$$x(0) = x_0 \qquad (1.2)$$
The vector f is twice continuously differentiable with respect to x and 
Lipschitz continuous with respect to u; this latter condition means 
simply that there is a constant L such that for every pair of control 
vectors u and v 
$$\|f(t, x, u) - f(t, x, v)\| \le L\,\|u - v\| \qquad (1.3)$$
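As a concrete illustration of condition (1.3), consider the linear dynamics $f(t, x, u) = Ax + Bu$ (an assumed example, not taken from the text): since $f(t, x, u) - f(t, x, v) = B(u - v)$, the condition holds with $L$ equal to the operator norm of $B$. The sketch below verifies this numerically; the matrices `A` and `B` are arbitrary choices for illustration.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example system matrix (assumed)
B = np.array([[0.0], [1.0]])               # example input matrix (assumed)

def f(t, x, u):
    """Right-hand side of the dynamics (1.1) for this linear example."""
    return A @ x + B @ u

# The operator 2-norm of B serves as the Lipschitz constant L in (1.3),
# because f(t, x, u) - f(t, x, v) = B (u - v).
L = np.linalg.norm(B, 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
for _ in range(100):
    u = rng.standard_normal(1)
    v = rng.standard_normal(1)
    lhs = np.linalg.norm(f(0.0, x, u) - f(0.0, x, v))
    assert lhs <= L * np.linalg.norm(u - v) + 1e-12
```

Note that $L$ here does not depend on $x$ or $t$, which is exactly the uniformity that condition (1.3) requires.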
For each control vector u, these conditions imply that the state 
vector x, which is obtained from solving (1.1) and which also satisfies 
the initial condition (1.2), exists and is unique. Moreover, from among 
the set of control vectors, it is assumed that there is a unique control 
u which minimizes the cost function $C_T$. The cost function is defined by
the following:

$$C_T(u) = c(x(T)) + \int_0^T F(\sigma, x, u)\, d\sigma \qquad (1.4)$$
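For a fixed control $u(t)$, the cost (1.4) can be evaluated by augmenting the state with the running cost $z(t) = \int_0^t F(\sigma, x, u)\, d\sigma$ and integrating $\dot{z} = F$ alongside (1.1). The sketch below does this with forward Euler for an assumed scalar example ($\dot{x} = -x + u$, $F = x^2 + u^2$, $c \equiv 0$); these particular choices of $f$, $F$, and $c$ are illustrative only.

```python
import math

T = 1.0       # horizon
N = 1000      # Euler steps
dt = T / N

def f(t, x, u):
    # scalar example dynamics x' = -x + u (an assumption for illustration)
    return -x + u

def F(t, x, u):
    # running cost F = x^2 + u^2 (an assumption for illustration)
    return x * x + u * u

def cost(u_of_t, x0, c=lambda xT: 0.0):
    """Approximate C_T(u) = c(x(T)) + int_0^T F(s, x, u) ds by forward Euler."""
    x, z, t = x0, 0.0, 0.0
    for _ in range(N):
        u = u_of_t(t)
        x, z, t = x + dt * f(t, x, u), z + dt * F(t, x, u), t + dt
    return c(x) + z

# With u = 0 and x(0) = 1, x(t) = e^{-t}, so the exact cost is
# int_0^1 e^{-2t} dt = (1 - e^{-2}) / 2.
approx = cost(lambda t: 0.0, 1.0)
exact = (1.0 - math.exp(-2.0)) / 2.0
```

Forward Euler is first-order, so the approximation error here is $O(\Delta t)$; any standard ODE integrator could be substituted for the loop in `cost`.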
