Poole et al.: A proposed stock assessment method and its application to bowhead whales






was underestimated by about 4000 whales for all three values of P1993. The estimates of K were inaccurate regardless of the accuracy of P1993. Although not shown, this inaccuracy holds for the Bayesian method as well as for MCA. Both methods provided the best coverage when P1993 was too large, but the Bayesian method provided better coverage than MCA in the other two cases. Again, these results separated coverage problems attributable to inaccurate priors from additional performance degradation apparently introduced by opting for MCA.



As a final point, these simulations and others that we ran suggest that the MCA estimates of K are heavily dependent on the prior distribution for MSYR, to the point of being undesirably insensitive to the true values of K and P1993 and to the observed P1993 data. This behavior is more extreme than would be the case if the prior were updated to a posterior in a fully Bayesian framework. These results concur with the results in the section "A simple point estimation example," where the MCA estimates were greatly influenced by the prior mean and variance. When the prior is accurate and precise, MCA may perform well, as will Bayesian techniques. Bayesian techniques seem to weight the data more heavily in relation to the prior than does MCA.
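The way a fully Bayesian update balances prior and data can be illustrated with a toy conjugate-normal calculation. This is only a sketch with hypothetical numbers, not the bowhead population model; it shows how the posterior's weight on the data grows with sample size, in contrast to a scheme that never updates the prior.

```python
import numpy as np

# Toy normal-normal model: unknown mean theta,
# prior theta ~ N(mu0, tau0^2), data x_i ~ N(theta, sigma^2).
mu0, tau0 = 10.0, 1.0      # hypothetical prior mean and standard deviation
sigma = 2.0                # known data standard deviation
x = np.array([14.1, 13.2, 15.0, 14.6, 13.8])  # hypothetical data, centered far from the prior

n = len(x)
xbar = x.mean()

# The posterior mean is a precision-weighted average of prior mean and data mean.
w_data = (n / sigma**2) / (n / sigma**2 + 1 / tau0**2)
post_mean = w_data * xbar + (1 - w_data) * mu0

print(f"data mean      = {xbar:.2f}")
print(f"prior mean     = {mu0:.2f}")
print(f"weight on data = {w_data:.2f}")   # grows toward 1 as n increases
print(f"posterior mean = {post_mean:.2f}")
```

With these numbers the data receive weight 5/9, pulling the estimate well away from the prior mean; a method that conditions on draws from the unchanged prior has no analogous mechanism.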



Relation between MCA and bootstrap 



The MCA approach described in the section "Implementation of MCA and a Bayesian approach" included bootstrapping the abundance data unconditionally. A standard (conditional) bootstrap would proceed by resampling the residuals (e.g. Efron and Gong, 1983, p. 43) from the model fit. The unconditional approach used in the bowhead application introduced excess variability because bootstrapped pseudodata varied about the observed data, which in turn varied about the model fit.
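The source of the excess variability can be sketched with a simple linear-trend fit (all numbers hypothetical; this is not the bowhead abundance model). A conditional bootstrap resamples residuals about the fitted line, whereas an unconditional scheme perturbs the observed points again, compounding two layers of noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical abundance-like series: linear trend plus observation noise.
t = np.arange(20, dtype=float)
noise_sd = 3.0
y = 5.0 + 2.0 * t + rng.normal(0, noise_sd, t.size)

coef = np.polyfit(t, y, 1)
fitted = np.polyval(coef, t)
resid = y - fitted

B = 4000
sd_cond, sd_uncond = [], []
for _ in range(B):
    # Conditional bootstrap: pseudodata vary about the model fit.
    y_c = fitted + rng.choice(resid, size=resid.size, replace=True)
    # Unconditional scheme: pseudodata vary about the observed data,
    # which themselves vary about the fit -- two layers of noise.
    y_u = y + rng.normal(0, noise_sd, t.size)
    sd_cond.append(np.std(y_c - fitted))
    sd_uncond.append(np.std(y_u - fitted))

print("mean spread about fit, conditional:  ", np.mean(sd_cond))
print("mean spread about fit, unconditional:", np.mean(sd_uncond))
# The unconditional spread is larger, by roughly a factor of sqrt(2) here.
```

Because the unconditional pseudodata stack fresh noise on top of data that already scatter about the fit, their spread about the fitted line is inflated relative to the residual bootstrap.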



The general MCA approach might be viewed as an approximation to a bootstrap that is unconditional on the model fit. In such applications (e.g. Smith and Gavaris, 1993), the interpretation of the stochasticity thereby introduced must be carefully considered if it differs from the data stochasticity that causes estimation uncertainty. When, as is permitted with MCA, the unconditional approach simulates from a subjective prior rather than from data, the method is not a bootstrap because the simulation reflects stochasticity other than that introduced by the data used for estimating the parameter.



Even in the case when sufficient data are available to permit parametric bootstrap simulation of all inputs (case 1 in the section "The Monte Carlo approach"), MCA does not reduce to a parametric bootstrap of the desired estimator. A parametric bootstrap expresses sampling uncertainty about a statistic R(X, F(θ)), where X ~ F, by observing the distribution of R(X*, F̂(θ)), where F̂ is an estimate of F that depends on the data X, and X* ~ F̂. Variability in R is due to X. A parametric bootstrap arises when a model F̂(θ) = F(θ̂) is fitted, or, less desirably, F̂(θ) = G(γ̂) in some applications of MCA. In this case, θ or γ should be estimated from the data, X, whose stochasticity induces sampling variability in R(X, F(θ̂)). However, with MCA, θ or γ is estimated from different data, not the data on model output parameters, although it is the uncertainty associated with estimators of output parameters that is desired. Even in this case, the sampling distributions used are effectively data-based priors, and MCA relies on the unusual approach of integrating a conditional maximization of the likelihood over the prior.
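The "conditional maximization integrated over the prior" scheme can be sketched in miniature. The model below is a deliberately simple stand-in (all names and numbers hypothetical): a nuisance parameter m plays the role of MSYR and receives a subjective prior, and K, the quantity of interest, is maximized conditionally on each prior draw.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model for illustration only: y_i ~ N(K * m, 1), with K the
# quantity of interest (analogous to carrying capacity) and m a
# nuisance parameter (analogous to MSYR) given a subjective prior.
true_K, true_m = 8.0, 0.5
y = rng.normal(true_K * true_m, 1.0, size=25)
ybar = y.mean()

# MCA-style scheme: draw m from its prior, then maximize the likelihood
# over K conditional on that draw.  For this model the conditional MLE
# is K_hat(m) = ybar / m.
m_prior = rng.uniform(0.3, 0.9, size=10000)   # subjective prior for m
K_mca = ybar / m_prior

print("MCA-style point estimate of K (mean of conditional MLEs):", K_mca.mean())
print("true K:", true_K)
# The spread of K_mca is inherited almost entirely from the prior on m;
# the data enter only through the single summary ybar, so the resulting
# distribution for K is not a posterior.
```

Even in this caricature, the distribution of the conditional MLEs is essentially the prior for m pushed through ȳ/m, which mirrors the insensitivity to the data observed in the simulations above.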



Conclusion 



Theoretical investigation and simulation show that the combination of Bayesian and conditional maximum likelihood techniques used by MCA has the potential to yield quite variable or biased results, or both, though it can perform well in ideal circumstances. In some situations, a fully Bayesian or classical ML solution can be obtained by small modifications to MCA, and the optimal properties of these more standard methods are well known. For more complex problems, Bayesian and ML solutions are sometimes more difficult to obtain than is an MCA solution. However, as our examples illustrate, MCA can result in unreliable inference even in simple situations. MCA integrates a conditional maximization of the likelihood over the prior, whereas a fully Bayesian approach integrates a conditional mean. If one uses what is effectively a Bayesian prior, then it is suboptimal to use it in a non-Bayesian inference framework. We have seen how MCA produces estimates with excessive bias. However, there may exist classes of assessment problems where, owing to some feature that is identifiable in advance, the extra bias is acceptably small. MCA could be applied to such problems because the excess bias would not cause MCA results to differ much from results produced by either fully Bayesian or ML methods. Our bowhead whale and simple examples clearly do not belong to such a class. Furthermore, in the general case, the extent to which MCA might err is not controllable or estimable by the analyst. Although MCA can produce good estimates in some applications, a method that can also go badly wrong is risky when one does not



