PALM: FISHERY REGULATION VIA CONTROL THEORY 



APPENDIX 



The Linear-Quadratic Optimal Control Problem and its solution are now outlined. For a thorough discussion, see Ho and Bryson (1969) or Kwakernaak and Sivan (1972). By use of state variable notation, any set of time-invariant ordinary differential equations can be put into the following form:



    dy/dt = g(y, f)                                   (A-1)

where g is a general n-dimensional vector function, y is the n-dimensional state vector for the model, and f is the m-dimensional input or forcing function vector. If this system has an equilibrium (y_eq, f_eq), the following set of algebraic equations must be satisfied:

    0 = g(y_eq, f_eq).



The values of y_eq and f_eq depend on the system's parameters, and static optimization methods such as calculus or linear programming can be applied to find the optimal y_eq, f_eq, and system parameters according to some criterion. This was done in the first example to determine the condition of maximum equilibrium yield. Since any real system is subjected to varying conditions and disturbances, it will be continually displaced from equilibrium. Thus for unstable systems or stable systems with large time constants, a static method of analysis is not sufficient. In such a case the next step is to apply a dynamic optimization method, such as the method presented here, to devise a control scheme which ensures that the system will return to equilibrium with a satisfactory time constant. Thus static and dynamic methods should not be viewed as alternative approaches to optimization, but rather as mutually complementary methods.
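The static step described above can be sketched in a few lines. The logistic (Schaefer-type) stock model below is an assumption chosen for illustration, not a model taken from the paper; the parameter values r and K and the grid search are likewise hypothetical.

```python
import numpy as np

# Hypothetical logistic stock model (illustration only):
#   dy/dt = g(y, f) = r*y*(1 - y/K) - f*y,
# where y is the stock and f is the fishing effort (the forcing input).
r, K = 0.5, 1.0

def equilibrium_stock(f):
    # Solve 0 = g(y_eq, f); the nontrivial root is y_eq = K*(1 - f/r).
    return K * (1.0 - f / r)

# Static optimization: search over efforts for the maximum equilibrium
# yield f * y_eq (calculus gives f_eq = r/2, y_eq = K/2 for this model).
efforts = np.linspace(0.0, r, 100001)
yields = efforts * equilibrium_stock(efforts)
f_opt = efforts[np.argmax(yields)]
print(f_opt, yields.max())   # close to r/2 = 0.25 and r*K/4 = 0.125
```

Calculus gives the same answer directly, but the grid search mirrors how a static criterion can be optimized when no closed form is available.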



After the equilibrium is determined, Equation (A-1) is linearized by expanding the function g in a Taylor series in y and f, and keeping only the first-order terms. With x = y - y_eq and u = f - f_eq denoting the deviations from equilibrium, this gives the linearized model:



    dx/dt = [A]x + [B]u                               (A-2)

where:

    [A] = [∂g/∂y]_eq                                  (A-3)

    [B] = [∂g/∂f]_eq                                  (A-4)

in which the subscript eq indicates that the partial derivatives of g are evaluated at the equilibrium.

The stability of the equilibrium can be determined from the roots of the determinant equation:

    det(s[I] - [A]) = 0                               (A-5)

where [I] is the (n x n) identity matrix. The equilibrium is stable if and only if all of the roots s have negative real parts.
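The linearization (A-3)-(A-4) and the stability test (A-5) can be illustrated numerically. The scalar logistic model and its parameter values below are assumptions carried over for illustration; the Jacobians are approximated by central differences rather than computed analytically.

```python
import numpy as np

# Hypothetical scalar logistic stock model (illustration only):
#   g(y, f) = r*y*(1 - y/K) - f*y.
r, K = 0.5, 1.0
f_eq = r / 2.0                  # equilibrium effort
y_eq = K * (1.0 - f_eq / r)     # equilibrium stock, so g(y_eq, f_eq) = 0

def g(y, f):
    return r * y * (1.0 - y / K) - f * y

# Central-difference approximations of (A-3) and (A-4) at the equilibrium.
eps = 1e-6
A = (g(y_eq + eps, f_eq) - g(y_eq - eps, f_eq)) / (2 * eps)   # [dg/dy]_eq
B = (g(y_eq, f_eq + eps) - g(y_eq, f_eq - eps)) / (2 * eps)   # [dg/df]_eq

# Stability test (A-5): for a scalar model the only root is s = A itself.
# Here A = f_eq - r = -0.25 < 0, so the equilibrium is stable.
print(A, B)   # close to -0.25 and -0.5
```

For a multi-state model [A] and [B] become matrices and the roots s of (A-5) are the eigenvalues of [A].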



By finding the function u which minimizes the following quadratic performance index, x and u are kept near zero and thus the system is kept near equilibrium:

    J = ∫₀^∞ (x^T [Q] x + u^T [R] u) dt.              (A-6)







The feedback control function which minimizes J has been shown to be:

    u = -[K]x                                         (A-7)

The feedback "gain" matrix [K] is calculated from:

    [K] = [R]^(-1) [B]^T [P]                          (A-8)



where the Riccati matrix [P], an (n x n) symmetric matrix, is the steady-state solution of the Riccati matrix differential equation:

    d[P]/dt = [Q] + [A]^T [P] + [P][A] - [P][B][R]^(-1)[B]^T [P]    (A-9)

with the initial condition:

    [P(0)] = [0].



The matrix [P] is usually found by numerically 

 solving the Riccati equation until all the com- 

 ponents of the solution [P] become constant. This 

 will always occur. 
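The procedure just described can be sketched as follows. The system matrices below are illustrative assumptions (a generic stable two-state model, not taken from the paper); equation (A-9) is integrated with forward Euler steps until the components of [P] stop changing, and [K] is then formed from (A-8).

```python
import numpy as np

# Assumed illustrative linearized model and weighting matrices.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

# Integrate the Riccati equation (A-9) from [P(0)] = [0] until all
# components of [P] become constant.
P = np.zeros((2, 2))
dt = 1e-3
for _ in range(200000):
    dP = Q + A.T @ P + P @ A - P @ B @ Rinv @ B.T @ P
    P_next = P + dt * dP
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

K = Rinv @ B.T @ P   # feedback gain (A-8), giving u = -[K]x

# Closed-loop check: the roots of det(s[I] - ([A] - [B][K])) = 0 should
# all have negative real parts, so the controlled system returns to
# equilibrium.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.all(closed_loop_eigs.real < 0))   # True
```

In practice the steady-state [P] can also be obtained directly from the algebraic Riccati equation (for example with scipy.linalg.solve_continuous_are), but the integration above follows the recipe described in the text.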






