decision-making is shifting from measures of program and administrative performance to measures of environmental condition.



ORD recognizes the need for consistency in indicator evaluation and has adopted many of the tenets of the PSR/E framework. ORD indicator research focuses primarily on ecological condition (state) and on the associations between condition and stressors (OPPE's "effects" category). As such, ORD develops and implements science-based indicators rather than indicators of administrative or policy performance. ORD researchers and clients have identified the need for detailed technical guidelines to ensure the reliability of ecological indicators for their intended applications. The Evaluation Guidelines expand on the information presented in existing frameworks by describing the statistical and implementation requirements for effective ecological indicator performance. This document does not address policy indicators or indicators of administrative action, which are emphasized in the PSR approach.



Four Phases of Evaluation 



Chapter One presents 15 guidelines for indicator evaluation in four phases (originally suggested by Barber 1994): conceptual foundation, feasibility of implementation, response variability, and interpretation and utility. These phases describe an idealized progression for indicator development that flows from fundamental concepts to methodology, to examination of data from pilot or monitoring studies, and lastly to consideration of how the indicator serves the program objectives. The guidelines are presented in this sequence also because movement from one phase to the next can represent a large commitment of resources (e.g., conceptual fallacies may be resolved less expensively than issues raised during method development or a large pilot study). In practice, however, application of the guidelines may be iterative rather than strictly sequential. For example, as new information is generated from a pilot study, it may be necessary to revisit conceptual or methodological issues. Or, if an established indicator is being modified for a new use, the first step in an evaluation may concern the indicator's feasibility of implementation rather than its well-established conceptual foundation.



Each phase in an evaluation process will highlight strengths or weaknesses of an indicator in its current stage of development. Weaknesses may be overcome through further indicator research and modification. Alternatively, weaknesses might be overlooked if an indicator has strengths that are particularly important to program objectives. The protocol in ORD is to demonstrate that an indicator performs satisfactorily in all phases before recommending its use. However, the Evaluation Guidelines may be customized to suit the needs and constraints of many applications. Certain guidelines may be weighted more heavily or reviewed more frequently. The phased approach described here allows interim reviews as well as comprehensive evaluations. Finally, there are no restrictions on the types of information (journal articles, data sets, unpublished results, models, etc.) that can be used to support an indicator during evaluation, so long as they are technically and scientifically defensible.


