Do Not Start with Objectives — End with Them 



Objectives of individuals or groups would seem to provide a logical way to start an analysis. A representative of a wildlife agency, for instance, might express his objectives in terms of protection (of wildlife) or battle (with the developer). At times one objective can dominate the other, so that in the drive to "get" the developer, for example, long-term objectives relating to wildlife can be unconsciously compromised — winning a battle while losing the war.



This conflict of hidden and competing objectives within and between organizations presents the first problem. We encountered it early in the GIRLS project and found that the effort to define alternative objectives at the beginning was divisive and unproductive. We resolved the impasse by insisting that any early discussion of objectives be limited to defining two things: policy actions and evaluation indicators. Actions are the management or regulatory levers that can be applied according to rules that define a policy. For example, a simple fisheries policy might control fishing through actions such as limits on size, bag, season, or area, applied in accordance with a rule that maintains a fixed number of spawning fish. Indicators are the quantities that, in various combinations, can define an objective. Indicators such as population density, productivity, income, and catch can then be used to evaluate the ability of a policy to achieve different objectives such as maximizing sustained yield or economic return, minimizing income variability, or enhancing social equity. People can fruitfully define sets of actions and evaluation indicators knowing that, at the end of the analysis, alternative policies to meet their objectives can be explored.
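The distinction can be made concrete with a small sketch. What follows is a hypothetical illustration, not part of the GIRLS model: the escapement target, stock parameters, and indicator names are invented for the example. It simulates a toy fish stock under a fixed-escapement rule (the action and rule) and then computes indicators that different objectives would weigh differently.

```python
def run_fishery(policy_rule, years=30, stock=1000.0, growth=0.3, capacity=2000.0):
    """Toy logistic fish stock; each year the policy rule sets the allowed catch."""
    catches = []
    for _ in range(years):
        catch = min(policy_rule(stock), stock)  # action: catch limit set by the rule
        stock -= catch
        stock += growth * stock * (1.0 - stock / capacity)  # logistic regrowth
        catches.append(catch)
    return stock, catches

# Rule from the example in the text: harvest only the surplus above
# a fixed number of spawning fish (an escapement target).
def fixed_escapement(stock, target=800.0):
    return max(0.0, stock - target)

final_stock, catches = run_fishery(fixed_escapement)

# Indicators, in various combinations, define objectives:
indicators = {
    "population": final_stock,                          # conservation objective
    "total_catch": sum(catches),                        # sustained-yield objective
    "catch_variability": max(catches) - min(catches),   # income-stability objective
}
```

The point of the separation is visible in the code: the same `run_fishery` and the same indicator set can evaluate any alternative rule, so parties need only agree on the actions and indicators, not on which objective should win.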



The second problem with starting from a firm definition of objectives arises from the assumption that people know their objectives. But all our experience, and indeed that of political pollsters, indicates that objectives emerge through dialogue and growing understanding. The analyses and procedures should have that as their end-point, not their beginning. We have found this point of view particularly difficult to accept for people from agencies with single missions, and most difficult for those far from the scene of the problem. After all, if you are in headquarters, what else can you do to control your local personnel but insist that they define their objectives and stick to them? The result is the articulation either of fervent dogma or of counterproductive trivia.



The prime lessons: identifying actions and indicators at the outset gives policy direction to an analysis and limits the area of conflict; starting with objectives generates irrelevant conflict and minimizes learning. Objectives are as much a part of the research and learning experience as is the development of understanding and policies.



A Model is Not an Analysis



The GIRLS model and other similar simulation models represent an effort to develop a kind of laboratory world that can be used to integrate existing knowledge and identify gaps, to respond to questions, and to adjudicate conflict. Enough has been written in various fields that I will not dwell on their strengths (integration of parts to generate systems behavior, incorporation of non-linearities and many variables) or weaknesses (the danger of becoming too large, too detailed, too complex, and unrelated to the questions). But a model is only effective if it is embedded in a larger process of analysis — problem identification, modeling, policy design and evaluation, and policy decision and implementation. We learned from GIRLS that a simulation model can be a powerful device to blend the knowledge of different disciplines, to make invisible assumptions visible, and to provide an environment in which to ask questions. That can lead to priorities for filling key knowledge gaps and to exploration of the systems effects of actions and policies. But this only emerges if the model is integrated with the other parts of the process. Hence we learned quickly that GIRLS had to be transparent, capable of easy modification as questions emerged, and able to produce graphical information at different levels of detail and generality. Later






