than most could manage), it would take about six months to digitize 10 minutes of video footage. Unless the data can, for the most part, be digitized automatically, a study of the video tapes that should constitute a record of every submersible dive would simply take too much time.



The second prototype digitizing system (apparently not in current operation), the Bugsystem, was constructed some 15 years ago (Graves, 1977; Koltes, 1984; see also Miller et al., 1982). The Bugsystem is more sophisticated than the Galatea system in that a computer recognizes specific images and then automatically tracks their movements over time; the operator does not manually enter data into the computer. The system recognizes specific objects by preprocessing each video frame for contrast discontinuities, then outlining each object and summarizing its position in space as a single centroid. The machine therefore sacrifices visual information in order to track the movements of objects in real time. We currently use a commercially available descendant of the Bugsystem, the ExpertVision Integrated Motion Analysis System, which is capable of 3-D tracking of objects recorded by multiple cameras placed at any angle and through any transparent media or mixtures thereof (such as an air-water interface).
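The reduction just described can be sketched roughly in code. This is a hypothetical illustration only: the gradient operator, the threshold value, and the toy frame are our assumptions, not documented details of the Bugsystem.

```python
import numpy as np

def frame_centroid(frame, threshold):
    """Sketch of a Bugsystem-style reduction: find contrast
    discontinuities above a threshold, treat them as the outline
    of the object, and collapse that outline to one (x, y) centroid."""
    # Contrast discontinuities: magnitude of the intensity gradient
    gy, gx = np.gradient(frame.astype(float))
    edges = np.hypot(gx, gy) > threshold
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None  # no object outline found in this frame
    # The object's position in space is summarized as a single centroid
    return float(xs.mean()), float(ys.mean())

# Toy frame: a bright square on a dark background
frame = np.zeros((32, 32))
frame[10:20, 12:22] = 1.0
print(frame_centroid(frame, threshold=0.4))
```

Tracking then amounts to matching each frame's centroid to the nearest centroid in the previous frame, which is why all visual detail inside the outline is discarded.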



Even though we have stressed the need for automatic data analysis, the need still exists for non-automated, human-interactive capabilities. In short, machines make mistakes. In the process of automatically tracking a given individual, such as a shrimp or squid within a school, the ExpertVision system becomes confused when images are occluded or when adjacent tracks become too closely aligned. In this event the scientist can usually determine what actually happened by reviewing the original 3-D video tape repeatedly at slower and slower speeds. This is possible because humans use much more visual information than stereopsis alone to judge distance. For example, we use retinal image size, linear perspective, overlap, aerial perspective (haze), light, shadows, and directional reflection, color, textural gradients of acuity, motion parallax, accommodation, convergence, disparity, and stereopsis, whereas the computer relies on stereopsis alone for z-coordinate calculations (Lipton, 1982). What all this means is that the biologist must be able to intervene in the digitizing process and interact with the computer via an appropriate computer-compatible language to edit the files as they accumulate (Potel et al., 1980). The ExpertVision system permits such interactive editing of data files.
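Because the computer's depth estimate rests on stereopsis alone, it reduces to a single triangulation formula. A minimal sketch follows, assuming idealized parallel pinhole cameras; the function and its parameters are illustrative and are not the ExpertVision system's actual interface.

```python
def stereo_depth(x_left, x_right, focal_px, baseline):
    """Recover z from disparity alone: z = f * B / d, where f is the
    focal length in pixels, B the camera baseline, and d = x_left -
    x_right the horizontal disparity of the same point in each image.
    Assumes parallel optical axes; values here are illustrative."""
    disparity = x_left - x_right
    if disparity <= 0:
        # Zero disparity means the point is effectively at infinity:
        # the one cue the machine has is gone, which is exactly when
        # a human reviewer must step in.
        raise ValueError("no usable disparity; depth is undefined")
    return focal_px * baseline / disparity

# A point imaged 10 px apart by cameras 0.5 m apart with f = 800 px
print(stereo_depth(410, 400, focal_px=800, baseline=0.5))  # 40.0
```

The fragility is plain from the formula: when two targets occlude one another, the matched image points (and hence the disparity) are wrong, and z is wrong with them, with no redundant cue to catch the error.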



Complete Analysis of Stereopaired Images: Sacrifice of Time for Information



We have been able to track the movements in 3-dimensional space of animals ranging in size from approximately 5 mm (large






