Summary 



Information management requirements for the indicator are currently time-consuming (6-12 months to develop indicator values from the raw data), but time should be reduced with automated routines. The length of time currently required to validate measurement data may affect the desired turnaround time established for the proposed monitoring program. No specialized hardware, software, or programming support is required, and data storage is compatible with other systems for retrieval. Critical data sets and associated metadata are not extensive or complicated.



Guideline 6: Quality Assurance 



For accurate interpretation of indicator results, it is necessary to understand their degree of validity. A quality assurance plan should outline the steps in collection and computation of data, and should identify the data quality objectives for each step. It is important that means and methods to audit the quality of each step are incorporated into the monitoring design. Standards of quality assurance for an indicator must meet those of the targeted monitoring program.



Performance Objective 



1. Demonstrate that the critical components of an appropriate quality assurance program are established for the indicator, and that techniques are available to monitor and control important sources of error in the measurement data for the indicator.



The scale and time frame of the proposed monitoring framework and the need for multiple field crews (Table 4-10) require a rigorous quality assurance (QA) program to ensure consistency in data collection and interpretation of indicator values (e.g., Chaloud and Peck 1994). There are important considerations (Table 4-12) for developing an appropriate QA program for EMAP-related studies. Resources are available, in the form of guidance documents and existing quality assurance plans, that can be adapted or modified for other types of monitoring efforts. No additional research is required to develop appropriate standards or other techniques to monitor and control data quality. All field and laboratory procedures associated with the indicator are amenable to the development of performance criteria and to internal or external audits by qualified personnel. Measurement-related errors can be identified (Guideline 8) and compared against established performance criteria. The use of a qualified museum facility to confirm field identifications of voucher specimens and to serve as a permanent repository provides a means to control and correct for a critical source of error. Examination of data from sites visited more than once during a single index period (Table 4-7) can be used to evaluate the consistency and performance of collection methods and field personnel. Concurrent identification of fish species in the field by a recognized authority in fish taxonomy can provide rapid identification and correction of errors. Finally, a variety of procedures are available to provide a rigorous review and validation of data to identify and correct for entry errors and for erroneous species identifications and abundance values.
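The kind of automated data review described above can be sketched as a set of record-level checks. This is a minimal illustration only: the record fields (`species`, `count`), the reference species list, and the abundance threshold are assumptions for the example, not elements of the documented QA program.

```python
# Illustrative sketch of automated validation checks for fish-count
# records: flag entry errors, unrecognized species names, and
# implausible abundance values. Field names and the abundance
# threshold are assumptions, not part of the documented program.

def validate_records(records, known_species, max_abundance=10000):
    """Return a list of (record index, problem description) pairs."""
    errors = []
    for i, rec in enumerate(records):
        species = rec.get("species")
        count = rec.get("count")
        # Check species names against an authoritative reference list
        if species not in known_species:
            errors.append((i, f"unrecognized species: {species!r}"))
        # Check abundance values for entry errors and implausible counts
        if not isinstance(count, int) or count < 0:
            errors.append((i, f"invalid abundance value: {count!r}"))
        elif count > max_abundance:
            errors.append((i, f"abundance exceeds plausible maximum: {count}"))
    return errors

# Example records: one valid, one misspelled species, one entry error
records = [
    {"species": "Lepomis macrochirus", "count": 12},
    {"species": "Lepomis macrochirrus", "count": 3},   # misspelled name
    {"species": "Micropterus salmoides", "count": -1}, # negative count
]
known = {"Lepomis macrochirus", "Micropterus salmoides"}
flags = validate_records(records, known)
```

Flagged records would then be returned to field crews or taxonomists for correction, consistent with the review-and-correction cycle the QA program requires.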



Summary

An appropriate quality assurance program can be developed and implemented for the indicator and monitoring framework using available resources and techniques. No additional research is required to provide appropriate performance standards or other techniques to monitor and control data quality. Measurement errors can be identified and evaluated at each critical step of indicator measurement.


