G(x_T, T) + ∫_0^T F(σ, x, u) dσ

where x_T = x(T). This cost function depends on the final state of
the system through the function G, and on the intermediate states and the
control function through the function F. The additive property of the
cost function with respect to the intermediate times is represented
by the integral.
by the integral. By an optimal control is meant that control which 
minimizes the cost function; it is this function which is the desired 
result of optimal control theory. 
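As a sketch of how such a cost function might be evaluated numerically, one can integrate the running cost F along a simulated trajectory and add the terminal term G. The plant ẋ = u, the quadratic terminal penalty, the control-effort integrand, and the feedback law u = -x below are all hypothetical choices for illustration, not taken from the text:

```python
import math

def cost(G, F, control, x0, T, n=10000):
    """Approximate G(x_T, T) + integral_0^T F(s, x, u) ds by Euler
    integration of the assumed plant x' = u and the rectangle rule."""
    dt = T / n
    x, t, integral = x0, 0.0, 0.0
    for _ in range(n):
        u = control(x, t)
        integral += dt * F(t, x, u)  # accumulate the running cost
        x += dt * u                  # Euler step of x' = u
        t += dt
    return G(x, T) + integral

# Hypothetical quadratic cost: terminal penalty plus control effort.
G = lambda xT, T: xT ** 2
F = lambda s, x, u: u ** 2
J = cost(G, F, control=lambda x, t: -x, x0=1.0, T=1.0)
```

With u = -x the trajectory is x(t) = e^{-t}, so the exact value is e^{-2} + (1 - e^{-2})/2, which the numerical sum approximates.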
Any process that is being controlled is subject to unpredicted 
disturbances, and these can make a significant difference in the choice 
of a control function. Suppose the dynamics of a system are
given by the differential equation

ẋ = u + p(t)

where p(t) represents a disturbance. The behavior of the system in
response to the two different controls u₁ = -x and u₂ = -x_0 e^{-t} does
not differ if there is no disturbance (p = 0); however, if a disturbance
is present, the response is significantly different. If x_0 = 1, the
response to the first control is given by

x(t) = e^{-t} + ∫_0^t e^{σ-t} p(σ) dσ

whereas the response to the second control is

x(t) = e^{-t} + ∫_0^t p(σ) dσ
Such differences could conceivably result in a different choice for an 
optimal control. 
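The comparison above can be checked numerically. The following is a minimal sketch that Euler-integrates the dynamics ẋ = u + p(t) under both controls, once without a disturbance and once with an illustrative constant disturbance p = 0.5 (the disturbance and step sizes are assumptions chosen for the example):

```python
import math

def simulate(control, p, x0=1.0, T=2.0, n=20000):
    """Euler-integrate x' = u + p(t) from x(0) = x0 and return x(T)."""
    dt = T / n
    x, t = x0, 0.0
    for _ in range(n):
        x += dt * (control(x, t) + p(t))
        t += dt
    return x

x0 = 1.0
u1 = lambda x, t: -x                    # feedback control u1 = -x
u2 = lambda x, t: -x0 * math.exp(-t)    # open-loop control u2 = -x0 e^{-t}

# Without a disturbance, both responses coincide: x(t) = e^{-t}.
no_p = lambda t: 0.0
a = simulate(u1, no_p)
b = simulate(u2, no_p)

# With a constant disturbance p = 0.5, the responses differ markedly:
# the feedback law attenuates it, the open-loop law integrates it.
p = lambda t: 0.5
c = simulate(u1, p)
d = simulate(u2, p)
```

At T = 2 the undisturbed terminals a and b both approximate e^{-2}, while c and d approximate e^{-2} + 0.5(1 - e^{-2}) and e^{-2} + 1 respectively, matching the two closed-form responses.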
