Guideline 13: Data Quality Objectives 



The discriminatory ability of the indicator should be evaluated against program data quality objectives and constraints. It should be demonstrated how sample size, monitoring duration, and other variables affect the precision and confidence levels of reported results, and how these variables may be optimized to attain stated program goals. For example, a program may require that an indicator be able to detect a twenty percent change in some aspect of ecological condition over a ten-year period, with ninety-five percent confidence. With magnitude, duration, and confidence level constrained, sample size and extraneous variability must be optimized to meet the program's data quality objectives. Statistical power curves are recommended to explore the effects of different optimization strategies on indicator performance.
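As a minimal sketch of such a power curve (the baseline mean of 100 units, the standard deviation of 30, and the two-sample z-test formulation are all illustrative assumptions, not values from the guideline), power to detect a twenty percent change can be traced as a function of sample size:

```python
import math
from statistics import NormalDist

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test to detect a
    mean difference `delta`, given per-group sample size `n` and a
    common (assumed) standard deviation `sigma`."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    effect = delta / (sigma * math.sqrt(2.0 / n))
    return NormalDist().cdf(effect - z_crit)

# Hypothetical program targets: detect a 20% change from a baseline of
# 100 units (delta = 20), with assumed extraneous variability sigma = 30.
delta, sigma = 20.0, 30.0
for n in (10, 20, 40, 80):
    print(f"n = {n:3d}  power = {power_two_sample(delta, sigma, n):.2f}")
```

Plotting such power values against sample size yields the power curve; comparing curves computed under different assumed variabilities shows how reducing extraneous variability trades off against increased sampling effort.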



Guideline 14: Assessment Thresholds 



To facilitate interpretation of indicator results by the user community, threshold values or ranges of values should be proposed that delineate acceptable from unacceptable ecological condition. Justification can be based on documented thresholds, regulatory criteria, historical records, experimental studies, or observed responses at reference sites along a condition gradient. Thresholds may also include safety margins or risk considerations. Regardless, the basis for threshold selection must be documented.
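As one illustrative sketch (the threshold of 50 units, the ten percent safety margin, and the assumption that higher indicator values mean better condition are all hypothetical, not drawn from any regulatory criterion), a documented threshold with a safety margin might be applied as:

```python
def classify_condition(value, threshold, margin=0.10):
    """Classify an indicator value against a documented threshold,
    treating higher values as better condition and reserving a
    +/- `margin` band around the threshold as a safety margin."""
    if value >= threshold * (1 + margin):
        return "acceptable"
    if value < threshold * (1 - margin):
        return "unacceptable"
    return "marginal"  # within the safety margin around the threshold

# Hypothetical threshold of 50 units with a 10% safety margin.
for site_value in (62.0, 50.0, 41.0):
    print(site_value, classify_condition(site_value, 50.0))
```

The safety-margin band keeps values near the threshold from being reported as definitively acceptable or unacceptable, which is one simple way to carry risk considerations into the reported classification.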



Guideline 15: Linkage to Management Action 



Ultimately, an indicator is useful only if it can provide information to support a management decision or to quantify the success of past decisions. Policy makers and resource managers must be able to recognize the implications of indicator results for stewardship, regulation, or research. An indicator with practical application should display one or more of the following characteristics: responsiveness to a specific stressor, linkage to policy indicators, utility in cost-benefit assessments, limitations and boundaries of application, and public understanding and acceptance. Detailed consideration of an indicator's management utility may lead to a re-examination of its conceptual relevance and to a refinement of the original assessment question.



Application of the Guidelines 



This document was developed both to guide indicator development and to facilitate indicator review. Researchers can use the guidelines informally to find weaknesses or gaps in indicators that may be corrected with further development. Indicator development will also benefit from formal peer reviews, accomplished through a panel or other appropriate means that bring experienced professionals together. It is important to include both technical experts and environmental managers in such a review, since the Evaluation Guidelines incorporate issues from both arenas. This document recommends that a review address information and data supporting the indicator in the context of the four phases described. The guidelines included in each phase are functionally related and allow the reviewers to focus on four fundamental questions:



Phase 1 - Conceptual Relevance: Is the indicator relevant to the assessment question (management concern) and to the ecological resource or function at risk?



Phase 2 - Feasibility of Implementation: Are the methods for sampling and measuring the environmental variables technically feasible, appropriate, and efficient for use in a monitoring program?



Phase 3 - Response Variability: Are human errors of measurement and natural variability over time and space sufficiently understood and documented?



Phase 4 - Interpretation and Utility: Will the indicator convey information on ecological condition that is meaningful to environmental decision-making?






