KIMURA: LIKELIHOOD METHODS FOR GROWTH CURVE 



pendence, the likelihood of these \bar{l}_i is described by Equation (1). It follows that likelihood procedures applicable to unweighted methods (a) and (b) are also applicable to weighted methods (c) and (d) with few modifications. These arguments apply when fitting any function using LS estimation.



Selection of an appropriate LS method for a given problem can largely be made on the validity of the error assumption, but not solely on this basis. It is also useful to keep in mind the purpose of fitting the curve. For example, a curve fit with method (a) would do well in predicting the length of a randomly selected individual (if data were from random samples), and this property would be important in, say, modeling applications. Method (b), on the other hand, may best describe the growth of a species over its entire lifespan, a property which would be desirable when comparing growth among species. It should be noted that the practice of graphing the estimated curve and plotting average lengths observed at each age is visually biased toward method (b). Method (b) will generally look best on this type of plot.



Method (c) is appropriate when it is apparent that the variance varies significantly with age. This assumption can be examined using Bartlett's or Cochran's tests (Dixon and Massey 1957) for the homogeneity of variance.
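As an illustration, Bartlett's test can be run on length samples grouped by age. The sketch below uses scipy.stats.bartlett as a modern stand-in for the tables in Dixon and Massey; the length samples, ages, and standard deviations are simulated, not data from the original analysis.

```python
# Sketch: examining homogeneity of length variance across ages with
# Bartlett's test. The samples are simulated stand-ins, with standard
# deviation deliberately increasing with age (heteroscedastic).
import numpy as np
from scipy.stats import bartlett

rng = np.random.default_rng(0)

# One simulated length sample of 25 fish for each of ages 1..5
samples = [rng.normal(loc=30.0 + 10.0 * age, scale=2.0 + age, size=25)
           for age in range(1, 6)]

stat, p = bartlett(*samples)
print(f"Bartlett statistic = {stat:.2f}, p = {p:.4f}")
# A small p-value casts doubt on the equal-variance assumption behind
# the unweighted methods, pointing toward weighted method (c).
```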



Method (d) is largely a computational device. In the following section it is shown that method (d) is nearly equivalent to method (a), but often requires much less computational effort.



NEAR EQUIVALENCE OF METHODS (a) AND (d)

Calculations for method (a) can be performed using method (d), often with a large savings in computational effort. It will be shown that methods (a) and (d) yield identical parameter estimates, and similar covariance matrix estimates of parameter estimates. These results are general properties of LS estimates under the assumptions of method (a), and are not dependent on the form of the function being fitted.

Identity of Parameter Estimates

For the von Bertalanffy curve using method (a), the sum of squares to be minimized is

S(L_\infty, K, t_0) = \sum_i \sum_j \left( l_{ij} - \mu(L_\infty, K, t_0, t_i) \right)^2 .

The sum of squares to be minimized using method (d) is

S_w(L_\infty, K, t_0) = \sum_i n_i \left( \bar{l}_i - \mu(L_\infty, K, t_0, t_i) \right)^2 .

The normal equation is derived for the parameter K, say, using method (a), by taking the partial derivative of S with respect to K and setting the result equal to zero:

\frac{\partial S}{\partial K} = \sum_i \sum_j -2 \left( l_{ij} - \mu(t_i) \right) \mu_K(t_i) = -\sum_i 2 \mu_K(t_i) n_i \left( \bar{l}_i - \mu(t_i) \right) = 0 .

The normal equation is obtained for method (d) by taking the partial derivative of S_w with respect to K, yielding a result identical to that from method (a):

\frac{\partial S_w}{\partial K} = -\sum_i 2 \mu_K(t_i) n_i \left( \bar{l}_i - \mu(t_i) \right) = 0 .

Because this identity of the two normal equations is not due to any special property of K, normal equations obtained from methods (a) and (d) are identical, implying corresponding LS estimates are also identical.
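This identity can be checked numerically. The sketch below fits the von Bertalanffy curve both ways using SciPy's least_squares; the ages, sample sizes n_i, error scale, and parameter values are all made up for the illustration, and SciPy is an assumed tool, not the computer program of the original analysis.

```python
# Sketch: numerical check (simulated data) that method (a), LS over every
# individual length l_ij, and method (d), LS over mean lengths weighted
# by sample size n_i, give identical von Bertalanffy estimates.
import numpy as np
from scipy.optimize import least_squares

def vb(theta, t):
    """von Bertalanffy mean length mu(L_inf, K, t0, t)."""
    Linf, K, t0 = theta
    return Linf * (1.0 - np.exp(-K * (t - t0)))

rng = np.random.default_rng(1)
ages = np.arange(1.0, 11.0)                  # ages t_i (hypothetical)
n_i = rng.integers(5, 30, size=ages.size)    # fish sampled per age
true = (60.0, 0.3, -0.5)                     # hypothetical L_inf, K, t0

t_all = np.repeat(ages, n_i)                                 # each fish's age
l_all = vb(true, t_all) + rng.normal(0.0, 3.0, t_all.size)   # lengths l_ij
l_bar = np.array([l_all[t_all == a].mean() for a in ages])   # means l_bar_i

# Method (a): minimize sum_ij (l_ij - mu(t_i))^2
fit_a = least_squares(lambda th: l_all - vb(th, t_all), x0=(50.0, 0.2, 0.0))

# Method (d): minimize sum_i n_i (l_bar_i - mu(t_i))^2, encoded by
# scaling each mean residual by sqrt(n_i)
fit_d = least_squares(lambda th: np.sqrt(n_i) * (l_bar - vb(th, ages)),
                      x0=(50.0, 0.2, 0.0))

print("method (a) estimates:", np.round(fit_a.x, 5))
print("method (d) estimates:", np.round(fit_d.x, 5))
```

The two printed estimates agree to solver tolerance, as the identical normal equations require, while method (d) works with one residual per age rather than one per fish.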



Similarity of Covariance Matrix Estimates

The asymptotic covariance matrix for parameters \theta' = (\theta_1, \ldots, \theta_p) estimated using ML theory is the inverse information matrix I(\theta)^{-1} (Kendall and Stuart 1973), where I(\theta) = (I_{ij}),

I_{ij} = -E\left[ \frac{\partial^2 L(\theta, X)}{\partial \theta_i \, \partial \theta_j} \right],

L(\theta, X) = \log(\mathcal{L}(\theta, X)), and \mathcal{L}(\theta, X) is the likelihood function.



For nonlinear LS estimates, I(\theta)^{-1} can be estimated using

\hat{\sigma}^2 (Z'Z)^{-1},

which is the formula used by nonlinear LS computer programs. Generally, Z = (Z_{ij}), where Z_{ij} is the partial derivative of the expectation of the ith observation with respect to the jth parameter.
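The computation described above can be sketched as follows, using the residual Jacobian reported by SciPy's least_squares as a stand-in for Z; the data layout, error scale, and parameter values are simulated assumptions, not values from the paper.

```python
# Sketch: estimating the asymptotic covariance of nonlinear LS estimates
# as s^2 (Z'Z)^{-1}, where Z holds partial derivatives of the expected
# observations with respect to the parameters. Data are simulated.
import numpy as np
from scipy.optimize import least_squares

def vb(theta, t):
    """von Bertalanffy mean length mu(L_inf, K, t0, t)."""
    Linf, K, t0 = theta
    return Linf * (1.0 - np.exp(-K * (t - t0)))

rng = np.random.default_rng(2)
t = np.repeat(np.arange(1.0, 11.0), 20)               # 20 fish per age
y = vb((60.0, 0.3, -0.5), t) + rng.normal(0.0, 3.0, t.size)

fit = least_squares(lambda th: y - vb(th, t), x0=(50.0, 0.2, 0.0))

n_obs, n_par = t.size, 3
s2 = np.sum(fit.fun ** 2) / (n_obs - n_par)   # residual mean square s^2

# The solver's Jacobian holds d(residual_i)/d(theta_j) = -Z_ij;
# the sign cancels in Z'Z.
Z = -fit.jac
cov = s2 * np.linalg.inv(Z.T @ Z)
print("standard errors:", np.round(np.sqrt(np.diag(cov)), 4))
```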






