RIVARD and BLEDSOE: PARAMETER ESTIMATION FOR PELLA-TOMLINSON MODEL 



sarily; our immediate purposes are better served by the simpler form of the e_i. We want to observe the response of the estimation procedure to realistic levels of stochastic error, but we also want to avoid wrong interpretations in those cases where parameter values might be ill determined because of some inherent fault of the estimation procedure itself and not because of some complication of the error structure. Therefore, the stochastic replicates, as well as the objective function of our estimation procedure, are constructed on the assumption of independence of errors. As we shall see in a subsequent discussion, the following estimation procedure is indeed robust with respect to that assumption.



PARAMETER ESTIMATION PROCEDURE



In its general form, the solution of the nonlinear model described by Equation (3) can be written as

Ŷ_i = g(f_1, f_2, . . . , f_i; Θ),   i = 1, 2, . . . , r   (5)

where² Θ = [m, B_∞, n, q, B_0]^T.



Quantity r represents the total number of observations over time, f_i the fishing effort during time interval i, and Ŷ_i the predicted yield (biomass or number) over the interval i. Following Fox (1971), we also consider an error term e_i proportional to population size and equivalent in terms of yield to the form



Ȳ_i = Ŷ_i + Ŷ_i e_i,   (6)

where Ȳ_i represents the observed yield over the interval i. Then the error is described by the relationship

e_i = (Ȳ_i - Ŷ_i) / Ŷ_i.   (7)
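As a minimal illustration of the proportional error model of Equations (6) and (7), the following Python sketch computes relative errors from observed and predicted yields; the numbers are purely illustrative, not data from the paper:

```python
# Relative (proportional) errors of Equation (7): e_i = (Ybar_i - Yhat_i) / Yhat_i,
# where Ybar_i is the observed yield and Yhat_i the model-predicted yield.
observed = [120.0, 95.0, 80.0]      # illustrative Ybar_i
predicted = [100.0, 100.0, 100.0]   # illustrative Yhat_i

errors = [(ybar - yhat) / yhat for ybar, yhat in zip(observed, predicted)]
print(errors)  # [0.2, -0.05, -0.2]
```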



Least squares estimation of the parameter vector Θ requires minimization of the function

S(Θ) = Σ e_i².   (8)



In terms of the residuals (Ȳ_i - Ŷ_i), this is equivalent to

S(Θ) = Σ W_i (Ȳ_i - g(f_1, . . . , f_i; Θ))²,   (9)

where the W_i are statistical weights. That is, from Equation (7),

W_i = Ŷ_i^-2.   (10)
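The equivalence of Equations (8) and (9) under the weights of Equation (10) can be checked numerically; this Python sketch uses made-up yields, not values from the paper:

```python
# Check that S = sum(W_i * (Ybar_i - Yhat_i)^2) with W_i = Yhat_i^-2  (Eqs. 9-10)
# equals S = sum(e_i^2) with e_i = (Ybar_i - Yhat_i) / Yhat_i          (Eqs. 7-8).
observed = [120.0, 95.0, 80.0]      # illustrative Ybar_i
predicted = [100.0, 100.0, 100.0]   # illustrative Yhat_i

S_weighted = sum(yhat**-2 * (ybar - yhat)**2
                 for ybar, yhat in zip(observed, predicted))
S_relative = sum(((ybar - yhat) / yhat)**2
                 for ybar, yhat in zip(observed, predicted))
assert abs(S_weighted - S_relative) < 1e-12
print(S_weighted)  # 0.0825
```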



If S(Θ) were an analytic form, we would find Θ by writing the normal equations

∂S(Θ)/∂Θ_k = 0,   k = 1, . . . , 5.



Since S(Θ) must be calculated via numerical methods, we will instead consider S(Θ) as a continuous function that describes a hypersurface in a five-dimensional parameter space; that space must be searched for the appropriate minimum value of S(Θ). The iterative process of successive approximations which we employ is an adaptation of the Levenberg-Marquardt technique (Levenberg 1944; Marquardt 1963). Given some initial estimate Θ_0, the method generates a sequence of estimates Θ_j from the inductive relation

Θ_{j+1} = Θ_j - (β_j I_5 + J_j^T J_j)^{-1} J_j^T e_j.   (11)



In Equation (11), β_j is a positive constant, I_5 the identity matrix of order 5, J_j an r by 5 matrix having elements ∂e_i/∂Θ_k (i = 1, . . . , r; k = 1, . . . , 5), and e_j the vector of errors after j iterations. The method combines the best features of both the gradient and the Taylor series methods and avoids their most serious limitations (Conway et al. 1970). We employ a FORTRAN computer program which incorporates a derivative-free version of the Levenberg-Marquardt method (Brown and Dennis 1972), and we approximate the solution of the model differential equations (1) and (3) by a fourth-order Runge-Kutta algorithm for numerical integration. The general structure of the program is shown by the flow diagram of Figure 1. Since all the parameters have to be positive, we also constrain the optimization by transforming each component of Θ by its absolute value before evaluating the model.
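The update rule of Equation (11) can be sketched compactly in Python. The following is an illustration only: a forward-difference Jacobian stands in for the derivative-free scheme of Brown and Dennis, a toy exponential-decay model replaces the paper's yield model, and all data are synthetic; the absolute-value transform mirrors the positivity constraint described above.

```python
import numpy as np

def errors(theta, t, y_obs):
    # Relative errors e_i = (y_obs_i - y_hat_i) / y_hat_i for a toy model
    # y_hat = a * exp(-b * t); each component of theta is forced positive
    # via abs(), mirroring the constraint on the parameter vector.
    a, b = np.abs(theta)
    y_hat = a * np.exp(-b * t)
    return (y_obs - y_hat) / y_hat

def lm_step(theta, beta, t, y_obs, h=1e-6):
    # One update of Equation (11): theta - (beta*I + J'J)^{-1} J'e,
    # with the Jacobian J (elements d e_i / d theta_k) approximated by
    # forward differences rather than analytic derivatives.
    e = errors(theta, t, y_obs)
    J = np.empty((t.size, theta.size))
    for k in range(theta.size):
        d = np.zeros_like(theta)
        d[k] = h
        J[:, k] = (errors(theta + d, t, y_obs) - e) / h
    p = theta.size
    return theta - np.linalg.solve(beta * np.eye(p) + J.T @ J, J.T @ e)

# Synthetic, noise-free data generated with a = 2.0, b = 0.5.
t = np.linspace(0.0, 4.0, 20)
y_obs = 2.0 * np.exp(-0.5 * t)

theta = np.array([1.0, 0.3])   # rough initial estimate Theta_0
for _ in range(100):
    theta = lm_step(theta, beta=1e-3, t=t, y_obs=y_obs)
print(np.abs(theta))           # approximately [2.0, 0.5]
```

With noise-free data the residuals vanish at the optimum, so the iteration behaves essentially as Gauss-Newton once β is small relative to J^T J; larger β values damp the step toward the gradient direction, which is the trade-off the Levenberg-Marquardt technique exploits.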



ACCURACY OF RESULTS 



^The notation [. . .] indicates that a row vector or matrix is 

 formed of the elements enclosed by brackets. 



Since the solution to the least-squares estimation problem is the result of a numerical search along the S(Θ) hypersurface, we do not generate






