Additional QA of the instrumentation can be conducted periodically in the laboratory under more controlled conditions. This might include daily tests in air-saturated water, with Winkler titrations verifying the results. Much of this depends upon the logistics of the program, for example, whether the program is run in proximity to a laboratory or remotely.



Three potential sources of error could invalidate results for this indicator: 1) improper calibration of the CTD, 2) malfunction of the CTD, and 3) the operator not allowing sufficient time for the instrument to equilibrate before each reading is taken. Taking a concurrent surface measurement should identify problems 1 and 2. The third source of error is more difficult to control, but can be minimized with proper training. If data are not uploaded directly from the CTD or surface unit into a computer, another source of error, transcription error, is also possible. However, this can be easily detected through careful review of the data.
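The concurrent-surface-measurement check described above can be sketched as a simple tolerance test. This is an illustrative sketch only, not a procedure from the source; the function name and the 0.5 mg/L tolerance are assumptions chosen for the example.

```python
# Illustrative QA check (not from the source): flag a cast where the CTD's
# dissolved-oxygen reading disagrees with a concurrent surface measurement
# by more than a chosen tolerance, suggesting error source 1 (calibration)
# or 2 (malfunction). The 0.5 mg/L tolerance is an assumed example value.

def flag_do_discrepancy(ctd_do_mg_l: float, surface_do_mg_l: float,
                        tolerance_mg_l: float = 0.5) -> bool:
    """Return True if the two DO readings differ by more than the tolerance."""
    return abs(ctd_do_mg_l - surface_do_mg_l) > tolerance_mg_l

print(flag_do_discrepancy(7.8, 7.6))  # False: readings agree within tolerance
print(flag_do_discrepancy(7.8, 6.9))  # True: check calibration or instrument
```

A check like this cannot catch the third error source (insufficient equilibration time), which, as the text notes, is addressed through operator training rather than post hoc screening.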



Guideline 7: Monetary Costs 



Cost is often the limiting factor in considering whether to implement an indicator. Estimates of all implementation costs should be evaluated. Cost evaluation should incorporate economy of scale, since cost per indicator or cost per sample may be considerably reduced when data are collected for multiple indicators at a given site. Costs of a pilot study or any other indicator development needs should be included if appropriate.



Cost is not a major factor in the implementation of this indicator. The sampling platform (boat) and personnel costs are spread across all indicators. As stated earlier, this indicator adds approximately 30 minutes to each station; however, one person can be collecting DO data while other crew members are collecting other types of data or samples.



The biggest expense is the equipment itself. Currently, the most commonly used type of CTD costs approximately $6,000 each, the deck unit $3,000, and a DO meter approximately $1,500. A properly outfitted crew would need two of each, which totals $21,000. Assuming this equipment lasts for four years at 150 stations per year, the average equipment cost per station would be only $35. Expendable supplies (DO membranes and electrolyte) should be budgeted at approximately $200 per year, depending upon the size of the program.
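The per-station figure above follows from straightforward arithmetic, sketched here using the prices and usage rates stated in the text (variable names are illustrative):

```python
# Per-station equipment cost, using the figures given in the text.
ctd_cost = 6_000        # CTD unit
deck_unit_cost = 3_000  # deck unit
do_meter_cost = 1_500   # DO meter

sets_needed = 2  # a properly outfitted crew carries two of each
equipment_total = sets_needed * (ctd_cost + deck_unit_cost + do_meter_cost)

lifetime_years = 4
stations_per_year = 150
cost_per_station = equipment_total / (lifetime_years * stations_per_year)

print(equipment_total)   # 21000
print(cost_per_station)  # 35.0
```

Note that the $200-per-year expendable supplies and the shared boat and personnel costs are excluded here, consistent with the text's treatment of them as separate or spread across all indicators.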



Phase 3: Response Variability 



Once it is determined that an indicator is relevant and can be implemented within the context of a specific monitoring program, the next phase consists of evaluating the expected variability in the response of that indicator. In this phase of the evaluation, it is very important to keep in mind the specific assessment question and the program design. For this example, the program is a large-scale monitoring program and the assessment question is focused on the spatial extent of hypoxia. This is very different from evaluating the hypoxic state at a specific station, as will be shown below in our evaluation of variability.



The data used in this evaluation come from two related sources. The majority of the data were collected as part of EMAP's effort in the estuaries of the Virginian Province (Cape Cod, MA to Cape Henry, VA) from 1990 to 1993. The distribution of sampling locations is shown in Figure 2-2. This effort is described in Holland (1990), Weisberg et al. (1993), and Strobel et al. (1995). Additional data from EPA's Mid-Atlantic Integrated Assessment (MAIA) program, collected in 1997, were also used. These data were collected in the small estuaries associated with Chesapeake Bay.


