ABSTRACT 
The lectures present an introduction to modern control 
theory. Calculus of variations is used to study the problem 
of determining the optimal control for a deterministic sys- 
tem without constraints and for one with constraints. The 
method of dynamic programming is also used to solve the 
unconstrained control problem. Stochastic systems are intro- 
duced, and the Kalman-Bucy filter is derived. 
INTRODUCTION 
Optimal control theory concerns the broad human effort to 
control or influence processes of one type or another. The objectives 
and criteria for the performance of a physical system may be diffuse or 
may defy tractable analysis in many situations, but the basic concepts on 
which to proceed have been established in control theory. One first 
considers a system and a process through which the state of the system 
is changing in time; in other words, some action or motion of the system 
takes place in time. This behavior of the system is described by a set 
of time-dependent variables x(t) = (x_1(t), ..., x_n(t)) which are called 
the state variables. In addition to the state of the system, one also 
considers controls by which the process in question can be influenced. 
These controls are represented by a set of variables u(t) = 
(u_1(t), ..., u_n(t)) which are called the control variables. 
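As a concrete illustration (not from the lectures), the state and control variables can be represented numerically. The sketch below uses a hypothetical one-dimensional system with dynamics x'(t) = -x(t) + u(t) and a chosen control u(t) = sin(t); the dynamics, the control law, and all function names are illustrative assumptions, and the state is stepped forward by simple Euler integration:

```python
import math

# Hypothetical example (not from the lectures): a single state variable
# with dynamics x'(t) = -x(t) + u(t) and the chosen control u(t) = sin(t).

def f(x, u, t):
    """Dynamic equation: rate of change of the state given state and control."""
    return -x + u

def u(t):
    """A chosen control function influencing the process."""
    return math.sin(t)

def simulate(x0, t0, t1, dt=1e-3):
    """Euler-integrate the state forward from x0 at time t0 up to time t1."""
    x, t = x0, t0
    while t < t1:
        x = x + dt * f(x, u(t), t)
        t += dt
    return x

x_final = simulate(x0=1.0, t0=0.0, t1=2.0)
```

Given the state x_0 at time t_0 and the control function u, the dynamic equation determines the state for all later times, which is exactly what the integration loop computes.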
At a certain instant in time, say t_0, the state of the system is 
known to be x_0. If an analysis of the system is to be performed, a 
system of equations must be specified which predicts the state for t > t_0 
and for a given control function u. These equations are called the 
dynamic equations for the system; they may take the form of an ordinary 
differential equation 
