


4 Find the conditional MLE $\hat{\theta}_{\gamma^*}$ given $\gamma^*$ by using the simulated data $X^*$. Store this value.
5 Repeat steps 2-4 many times to obtain a collection of $\hat{\theta}_{\gamma^*}$'s.



MCA inference about $\theta$ is then based on the distribution of this collection of $\hat{\theta}_{\gamma^*}$'s. Typically, the original $\hat{\theta}_\gamma$, or the mean or median of the Monte Carlo sample, is used as a point estimate, and the 0.025 and 0.975 quantiles form the bounds of a 95% confidence interval.
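To make the procedure concrete, the following sketch (ours, not part of the original analysis) implements the full MCA loop for the normal example treated later in this section. Because steps 1-3 fall on the preceding page, we assume each repetition draws a nuisance-parameter value from its assumed distribution and simulates a data set from the model at a plug-in estimate; all names and numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: X_i ~ N(mu, sigma^2), nuisance parameter mu with a
# N(mu0, tau0^2) distribution, and sigma^2 the parameter of interest.
mu_true, sigma2_true, n = 5.0, 4.0, 50
x = rng.normal(mu_true, np.sqrt(sigma2_true), size=n)   # observed data

mu0, tau0 = 5.0, 1.0   # assumed mean and sd of the nuisance distribution
sigma2_plugin = np.mean((x - x.mean()) ** 2)  # used to simulate X* (our assumption)

n_mc = 10_000
theta_hats = np.empty(n_mc)
for j in range(n_mc):
    mu_star = rng.normal(mu0, tau0)                               # draw nuisance value
    x_star = rng.normal(mu_star, np.sqrt(sigma2_plugin), size=n)  # simulate X*
    theta_hats[j] = np.mean((x_star - mu_star) ** 2)              # step 4: conditional MLE

# MCA inference from the pooled collection of conditional MLEs
point = np.median(theta_hats)
lo, hi = np.quantile(theta_hats, [0.025, 0.975])
print(f"point estimate {point:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```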



The Monte Carlo approach is not a straightforward extension of a sensitivity analysis. Usually, in a sensitivity analysis, a small number of alternative parameter values are tried, and the individual point estimates and confidence intervals obtained by using each parameter set are tabled. Inference usually follows from a single analysis in which a particular baseline parameter set has been chosen; the remainder of the table is used to assess how conclusions would change under different modeling assumptions. With MCA, a potentially vast number of parameter values are tried, and an overall confidence interval is obtained by pooling the results of each individual analysis, effectively integrating over the distribution of the nuisance parameters. We show in the rest of this paper that this integration is a source of potential bias.
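For contrast, a conventional sensitivity analysis for the same model might be sketched as follows, fixing a handful of nuisance-parameter values and tabling one interval per value rather than pooling; the chosen values and the chi-square interval are illustrative assumptions, not from the original text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=50)   # illustrative data, sd = 2
n = len(x)

# One row per fixed nuisance value: estimates and intervals are tabled
# separately, not pooled into a single overall interval.
for mu in (4.0, 5.0, 6.0):
    s2 = np.mean((x - mu) ** 2)     # conditional MLE of sigma^2
    # exact chi-square interval for sigma^2 when mu is treated as known
    lo = n * s2 / stats.chi2.ppf(0.975, df=n)
    hi = n * s2 / stats.chi2.ppf(0.025, df=n)
    print(f"mu = {mu}: sigma^2 = {s2:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```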



The MCA technique is potentially highly sensitive to violations of its assumptions. If one is going to express uncertainty in the value of a parameter by means of a probability distribution, then this distribution should be treated as a prior in a fully Bayesian setup. The method given in Equation 1 provides estimators that will not necessarily possess the desirable properties of either Bayesian or ML estimators. A general overview of Bayesian methods is given by Lee (1997).



In a Bayesian framework, the best estimator of $\theta$ (with respect to squared error loss) is the posterior mean $E(\theta \mid \mathrm{data})$; therefore it would be better to define the estimator as

$$\hat{\theta}_{(2)} = E(\theta \mid \mathrm{data}) = E\left[E(\theta \mid \gamma, \mathrm{data})\right] = \int \hat{\theta}_\gamma \, \pi(\gamma \mid \mathrm{data}) \, d\gamma, \qquad (2)$$

where $\hat{\theta}_\gamma$ = the posterior mean of $\theta$ conditional on $\gamma$, and
$\pi(\gamma \mid \mathrm{data})$ = the posterior distribution of $\gamma$.



One might regard Equation 2 as a general strategy and use it in cases where $\hat{\theta}_\gamma$ is not necessarily the Bayesian estimator. In this case, however, the properties of $\hat{\theta}_{(2)}$ are not clear.
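As an illustration of Equation 2 (ours, with simplifying assumptions), the sketch below approximates $\hat{\theta}_{(2)}$ for the normal example considered next by drawing the nuisance parameter from its posterior, available in closed form by normal-normal conjugacy, and averaging the conditional estimates; $\sigma^2$ is treated as known inside the posterior purely for illustration.

```python
import numpy as np

def eq2_estimate(x, mu0, tau0_sq, sigma2, n_draws=20_000, seed=3):
    """Monte Carlo version of Equation 2: E[theta_hat_gamma | data]."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    # posterior mean and variance of mu (standard normal-normal results)
    tau2_post = 1.0 / (1.0 / tau0_sq + n / sigma2)
    mu_post = tau2_post * (mu0 / tau0_sq + n * x.mean() / sigma2)
    # draw the nuisance parameter from its *posterior*, not its prior
    mu_draws = rng.normal(mu_post, np.sqrt(tau2_post), size=n_draws)
    # conditional MLE of sigma^2 at each draw, then average over draws
    cond_mles = ((x[:, None] - mu_draws[None, :]) ** 2).mean(axis=0)
    return cond_mles.mean()

# usage (illustrative values): eq2_estimate(x, mu0=5.0, tau0_sq=1.0, sigma2=4.0)
```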



A simple point estimation example

Schweder and Hjort¹ first identified potential weaknesses with MCA, and they described two situations where differences between the methods of Equations 1 and 2 arose. The first of these is repeated here: consider a random sample of size $n$ from a normal distribution with mean $\mu$ and variance $\sigma^2$, denoted $X_i \sim N(\mu, \sigma^2)$ for $i = 1, \ldots, n$. Assume that there is a $N(\mu_0, \tau_0^2)$ prior for $\mu$, where $\mu_0$ and $\tau_0^2$ are the prior mean and variance, respectively. Let $\sigma^2$ be the parameter for which inference is desired; $\mu$ is a nuisance parameter, and $\sigma^2$ is regarded as fixed. The maximum likelihood estimate of $\sigma^2$ conditional on $\mu$ is

$$\hat{\sigma}^2_\mu = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu)^2.$$



The estimators given by Equations 1 and 2 are then

$$\hat{\sigma}^2_{(1)} = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_0)^2 + \tau_0^2$$

and

$$\hat{\sigma}^2_{(2)} = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_{\mathrm{post}})^2 + \tau^2_{\mathrm{post}},$$

respectively, where $\mu_{\mathrm{post}}$ and $\tau^2_{\mathrm{post}}$ are the posterior mean and variance of the nuisance parameter $\mu$. For this simple case, we have closed-form expressions for Equations 1 and 2, and no Monte Carlo sample from the prior for $\mu$ is required. Note that in Equation 2, $\hat{\sigma}^2_\mu$ (the conditional maximum likelihood estimator of $\sigma^2$) is used as $\hat{\theta}_\gamma$. In addition, we note that $\hat{\sigma}^2_{(2)}$ depends on $\sigma^2$ because both $\mu_{\mathrm{post}}$ and $\tau^2_{\mathrm{post}}$ are functions of $\sigma^2$. In other words, the estimator $\hat{\sigma}^2_{(2)}$ depends on the quantity it is trying to estimate. This occurs when $\hat{\theta}_\gamma$ is not the Bayesian posterior mean of $\theta$ conditional on $\gamma$. In our examples, we simply plugged in the ordinary MLE where needed to remove this dependency. Thus, to evaluate $\hat{\sigma}^2_{(2)}$ here, $\hat{\sigma}^2_{\mathrm{ML}}$ was used as a plug-in estimate of $\sigma^2$ in the expressions for $\mu_{\mathrm{post}}$ and $\tau^2_{\mathrm{post}}$; $\hat{\sigma}^2_{\mathrm{ML}}$ is the usual MLE of $\sigma^2$ and is what would normally be used if conditioning on $\mu$ were not of interest.
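The closed-form versions of the two estimators, with $\hat{\sigma}^2_{\mathrm{ML}}$ plugged in for $\sigma^2$ inside $\mu_{\mathrm{post}}$ and $\tau^2_{\mathrm{post}}$ as just described, can be collected in a short sketch; the conjugate-posterior expressions are the standard normal-normal results, and the function name is ours.

```python
import numpy as np

def sigma2_estimators(x, mu0, tau0_sq):
    """Closed-form sigma^2 estimators for the normal example (Eqs. 1 and 2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s2_ml = np.mean((x - x.mean()) ** 2)      # ordinary MLE of sigma^2
    # Equation 1: conditional MLE averaged over the *prior* for mu
    s2_eq1 = np.mean((x - mu0) ** 2) + tau0_sq
    # Equation 2: averaged over the *posterior* for mu, with s2_ml plugged in
    tau2_post = 1.0 / (1.0 / tau0_sq + n / s2_ml)
    mu_post = tau2_post * (mu0 / tau0_sq + n * x.mean() / s2_ml)
    s2_eq2 = np.mean((x - mu_post) ** 2) + tau2_post
    return s2_ml, s2_eq1, s2_eq2
```

With a mis-centered or diffuse prior (for example, $\mu_0$ far from the true mean, or $\tau_0^2$ large), s2_eq1 remains inflated no matter how large $n$ grows, whereas s2_ml and s2_eq2 both settle near the true $\sigma^2$, anticipating the convergence argument below.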



We know from standard theory that $\hat{\sigma}^2_{\mathrm{ML}} \to \sigma^2$ as $n \to \infty$. Because $\tau^2_{\mathrm{post}} \to 0$ and $\mu_{\mathrm{post}} \to \mu$, we have that $\hat{\sigma}^2_{(2)} \to \sigma^2$ as $n \to \infty$. $\hat{\sigma}^2_{(1)}$, on the other hand, will converge to $\sigma^2$ only if $\mu_0 = \mu$ and $\tau_0^2 = 0$. It follows that $\hat{\sigma}^2_{(1)}$ will, in general, yield accurate estimates only if



¹ Schweder, T., and N. L. Hjort. 1997. Indirect and direct likelihoods and their synthesis, with an appendix on minke whale dynamics. Paper SC/49/AS9 presented to the IWC Scientific Committee, October 1997.






