Fishery Bulletin 97(1), 1999 



values were bootstrapped from a distribution whose parameters were determined by maximum likelihood; with Bayes, they were sampled from the posterior.



Simulation results 



Simulations were performed by using three sets of "true" parameters as shown in the first three columns of Table 2. The values of MSYR in these three sets of simulations correspond to the 0.5, 0.975, and 0.025 quantiles (respectively) of the gamma(8.2, 372.7) prior for this parameter. In this way, we investigated the performance of the method when the true MSYR is at the center and the boundaries of its prior 95% probability interval. In each case, a value of K was chosen such that extinction did not occur and P1993 was positive.
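The gamma prior quantiles used to pick the three "true" MSYR values can be approximated with a short Monte Carlo sketch. This assumes the gamma(8.2, 372.7) in the text is parameterized as (shape, rate), so the prior mean is 8.2/372.7 ≈ 0.022; the function names here are illustrative, not from the paper.

```python
import random

def gamma_prior_quantiles(shape, rate, qs, n=200_000, seed=1):
    """Approximate quantiles of a gamma(shape, rate) prior by Monte Carlo.

    random.gammavariate takes (shape, scale), so scale = 1/rate is used
    under the assumed (shape, rate) parameterization.
    """
    rng = random.Random(seed)
    draws = sorted(rng.gammavariate(shape, 1.0 / rate) for _ in range(n))
    return [draws[int(q * (n - 1))] for q in qs]

# The three quantiles the simulations use as "true" MSYR values
q025, q50, q975 = gamma_prior_quantiles(8.2, 372.7, [0.025, 0.5, 0.975])
prior_mean = 8.2 / 372.7  # prior "best guess" for MSYR, about 0.022
```

With a large enough sample, the empirical quantiles bracket the prior mean, matching the text's description of MSYR values at the center and edges of the 95% prior interval.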



The first set of simulations represents a scenario where the mean of the prior for MSYR happens to coincide with the true value. If we use the prior mean as a point estimate (or "best guess") of MSYR, then our point estimate and the true value are the same. For this set, then, we would expect all assessment methods to provide accurate inference about K. The remaining two sets of simulations represent scenarios where our prior is inaccurate, i.e. the true MSYR is either larger or smaller than the prior mean (which remains unchanged). In these cases, this inaccuracy is naturally going to cause the resulting distribution of the output K to be biased. All assessment approaches will be affected by this bias, particularly with respect to their point estimates of K. Indeed, these parameter sets represent situations that all assessment scientists would like to avoid. The key point of interest is the extent to which a method can be insensitive to poor prior information and still provide somewhat reliable inference.



The results of the simulations are shown in Table 2. For each of the three sets of true parameters, the MCA analysis was run three times by using the 0.025, 0.5, and 0.975 quantiles of the normal likelihood as the observed 1993 population, P1993. Then, this entire simulation design was replicated 500 times. The quantiles shown are MCA medians for K̂_MSYR across the 500 replicates, and the coverage rates (last two columns) show the percentage of the 500 replicates for which the estimated 95% MCA or Bayes confidence interval covered the truth.
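The coverage rates in the last two columns amount to counting, across replicates, how often the estimated 95% interval contains the true K. A minimal sketch of that bookkeeping step follows; the assessment model that produces the intervals is not reproduced in this excerpt, so the intervals below are made-up illustrations.

```python
def coverage_rate(intervals, true_K):
    """Percentage of replicates whose interval covers the true K.

    `intervals` is a list of (lower, upper) confidence bounds,
    one pair per simulation replicate.
    """
    hits = sum(1 for lo, hi in intervals if lo <= true_K <= hi)
    return 100.0 * hits / len(intervals)

# Toy illustration with hypothetical intervals (not the paper's values):
example = [(9000, 14000), (11000, 16000), (13000, 18000), (8000, 12000)]
rate = coverage_rate(example, true_K=12000)  # 3 of 4 intervals cover 12000
```

A well-calibrated 95% interval procedure would give a rate near 95 over the 500 replicates; the degree to which MCA and Bayes fall short of this is what Table 2 reports.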



In the first set of simulations, where the true MSYR and the prior mean MSYR_0 were exactly equal, the conditional MLE K̂_MSYR0 was very accurate. This is to be expected in this optimistic (if somewhat unlikely) scenario. The MCA confidence intervals provided by K̂_MSYR* covered the true K in all cases, as did the confidence intervals obtained with the Bayesian method. Also, the estimation was fairly insensitive to the accuracy of the estimate P1993. A difference of 1227 whales in the estimate of P1993 (i.e. two standard deviations) resulted in K̂_MSYR0 changing by less than 100 whales.



In the second set, the true value of MSYR was greater than the prior mean. This resulted in K being overestimated. Here, both MCA and Bayesian results were biased by the use of the same bad prior, but the MCA coverage was worse. It follows that the suboptimal behavior of MCA cannot be attributed solely to the choice of prior. The 95% MCA intervals provided poor coverage of the truth, worse when P1993 was accurate than when it was too low. For the Bayesian method, this difference was not as great. The coverage of the Bayesian intervals was somewhat better in two cases, particularly when P1993 was too large.



In the final set, the true value of MSYR was at the low end of the prior interval, and we observed that K


