In a sense, we can never be certain that these conditions hold true. We can be more confident with populous stocks where sufficient data provide a convincing argument for the model's use. Unfortunately, this is not the case for many stocks of concern, such as those within the Columbia River Basin and other coastal streams of the Pacific Northwest.



Addressing the uncertainty inherent in fisheries management requires us to restructure our perspective on planning and the use of models (Walters 1986). Some basic modifications in model design can help us cope with our inability to unerringly predict future outcomes. The first step toward a richer analysis that embraces uncertainty is to use stochastic models. Stochastic models can incorporate at least three sources of uncertainty: (1) temporal variation in population structure and environmental conditions, (2) intrapopulation variation among individuals, and (3) uncertainty in parameter estimates. All three sources of uncertainty can have important policy implications.
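
A minimal sketch of how these three sources might enter a single annual projection of spawner abundance follows; the Ricker-type recruitment, the survival probability, and all parameter values are illustrative assumptions rather than estimates for any particular stock.

    import numpy as np

    rng = np.random.default_rng(1)

    def project_one_year(n_spawners, rng):
        """One stochastic projection step with three sources of uncertainty."""
        # (3) Parameter uncertainty: productivity is drawn from its estimation
        #     uncertainty instead of being fixed at a point estimate.
        log_alpha = rng.normal(loc=np.log(4.0), scale=0.3)
        beta = 1.0 / 5000.0                  # density dependence (assumed known here)

        # (1) Temporal/environmental variation: lognormal process error on recruitment.
        expected_recruits = np.exp(log_alpha) * n_spawners * np.exp(-beta * n_spawners)
        recruits = expected_recruits * rng.lognormal(mean=0.0, sigma=0.6)

        # (2) Variation among individuals: each recruit survives to spawn
        #     independently with the same (assumed) probability.
        return rng.binomial(n=int(round(recruits)), p=0.15)

    print(project_one_year(2000, rng))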



Nowhere is uncertainty in planning more critical than for stocks that have a high probability of becoming extinct or losing significant genetic resources through declining populations. Chance occurrences can be catastrophic for threatened or endangered species on the brink of extinction. Central tendencies or expected values are insufficient when dealing with these populations. The probabilities of catastrophic outcomes must be assessed using stochastic models that can simulate uncertainty. Deterministic models that consider only central tendencies have no place in the analysis of threatened or endangered species.
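
As a minimal illustration of why expected values alone can mislead, the sketch below projects an assumed population with a slightly positive mean growth rate but substantial year-to-year variation. The deterministic projection suggests a growing stock, while the Monte Carlo replicates yield a nontrivial probability of falling below an assumed quasi-extinction threshold at least once; the initial abundance, growth-rate statistics, and threshold are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)

    YEARS, REPLICATES = 50, 10_000
    N0, MEAN_R, SD_R = 500.0, 0.01, 0.25     # illustrative values only
    QUASI_EXTINCTION = 50.0                  # assumed threshold

    # Deterministic projection: a positive mean growth rate implies the stock grows.
    n_deterministic = N0 * np.exp(MEAN_R * YEARS)

    # Stochastic projection: the same mean growth rate, but year-to-year
    # environmental variation produces a whole distribution of trajectories.
    log_growth = rng.normal(MEAN_R, SD_R, size=(REPLICATES, YEARS))
    trajectories = N0 * np.exp(np.cumsum(log_growth, axis=1))
    hit_threshold = trajectories.min(axis=1) < QUASI_EXTINCTION

    print(f"deterministic abundance after {YEARS} years: {n_deterministic:.0f}")
    print(f"P(falling below {QUASI_EXTINCTION:.0f} at least once): {hit_threshold.mean():.2f}")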



FUNDAMENTAL PRINCIPLES 



Stochastic models are created in two basic ways. One option is to start with a deterministic model and recast the model parameters as random variables drawn from selected probability distributions. This option introduces little or no change in the basic model structure, but does require specifying probability distributions for each model parameter. Particular attention must be given to parameter estimation. The estimation procedure must explicitly include the proposed variance structure, and it should produce parameter estimates with meaningful statistical properties. For nonlinear models, the expected or mean values of model outputs such as population size will differ from the output of the deterministic model using mean parameter values. This result, which arises from Jensen's Inequality Theorem, illustrates the fallacy of assuming that the results of deterministic models using point estimates will be directly comparable to those of a properly constructed stochastic model, or that one can convert a deterministic model to a stochastic model simply by adding noise to the parameters.
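
Jensen's inequality states that, for a convex function f and a random variable X, E[f(X)] >= f(E[X]), with the inequality reversed for concave functions. The sketch below illustrates the consequence for a toy exponential-growth model with an uncertain growth rate (all values are hypothetical): the mean of the stochastic outputs differs markedly from the deterministic output evaluated at the mean parameter value.

    import numpy as np

    rng = np.random.default_rng(3)

    N0, YEARS = 100.0, 20
    R_MEAN, R_SD = 0.05, 0.05        # illustrative growth-rate mean and uncertainty

    # Deterministic model evaluated at the mean parameter value.
    n_at_mean_r = N0 * np.exp(R_MEAN * YEARS)

    # Stochastic version: the growth rate is a random variable.  Because exp() is
    # convex, Jensen's inequality implies the mean output exceeds the value above.
    r_draws = rng.normal(R_MEAN, R_SD, size=100_000)
    mean_stochastic_n = np.mean(N0 * np.exp(r_draws * YEARS))

    print(f"deterministic output at mean r: {n_at_mean_r:.1f}")
    print(f"mean of stochastic outputs:     {mean_stochastic_n:.1f}")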



The deterministic model with random parameters is not very satisfying conceptually because the deterministic relationships continue to be emphasized. Stochasticity in the model is essentially noise obscuring a deterministic signal. At the level of human resolution, however, nature is inherently stochastic, not deterministic. Therefore, it seems appropriate to build this inherent stochasticity into our models from the ground up.



A more appropriate way to incorporate nature's uncertainty is to use stochastic process models. In this approach, the focus is on the state variables in the model, rather than the parameters. Given the state of the system at time t, the likelihood or probability of all possible future states at time t+1 is assessed. The range of possible future states together with their


