von Szalay and Somerton: A method for predicting trawlability in the Gulf of Alaska 
Table 4
Goodness of fit, based on Akaike's information criterion (AIC); prediction accuracy for untrawlable and trawlable sampling cells (sampling units) in the survey area; overall prediction accuracy; and prediction accuracy after cross validation of the 3 generalized linear models (GLMs) and 3 generalized additive models (GAMs) evaluated in this study for the use of acoustic data to predict whether sampling areas in the Gulf of Alaska are trawlable. The best model in each category is marked with an asterisk (shown in boldface in the original).

Model           AIC      Untrawlable   Trawlable    Overall      Cross-validation
                         prediction    prediction   prediction   prediction
                         rate (%)      rate (%)     rate (%)     rate (%)
GLM
  Model 1      12,353       82.0          70.6         76.3
  Model 2*     12,230       84.0          69.7         76.9          75.0
  Model 3      12,351       84.0          71.4         77.7
GAM
  Model 1      10,435       81.5          82.4         82.0
  Model 2      11,496       81.9          75.2         78.6
  Model 3*     10,343       81.5          83.2         82.4          75.0
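In Table 4, the overall prediction rate for each model equals the mean of the untrawlable and trawlable per-class rates (e.g., for GLM model 1, (82.0 + 70.6) / 2 = 76.3), consistent with a balanced average over the two classes. A minimal sketch of that computation, with illustrative variable names not taken from the study:

```python
import numpy as np

def prediction_rates(y_true, y_pred):
    """Per-class and overall classification rates, in percent, for a binary
    trawlable (0) / untrawlable (1) classification.

    The overall rate is the balanced mean of the two per-class rates,
    which matches how the rates in Table 4 relate to one another.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Fraction of truly untrawlable cells that were predicted untrawlable
    untrawlable_rate = np.mean(y_pred[y_true == 1] == 1) * 100
    # Fraction of truly trawlable cells that were predicted trawlable
    trawlable_rate = np.mean(y_pred[y_true == 0] == 0) * 100
    overall = (untrawlable_rate + trawlable_rate) / 2
    return untrawlable_rate, trawlable_rate, overall

# Toy example: 8 cells, first 4 truly untrawlable
u, t, o = prediction_rates([1, 1, 1, 1, 0, 0, 0, 0],
                           [1, 1, 1, 0, 0, 0, 1, 1])
print(u, t, o)  # 75.0 50.0 62.5
```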
untrawlable segments combined) classification accuracy of 82.4% (Table 4). In contrast, the best GLM, with 5 linear terms and 3 polynomial terms but no interaction terms, produced an overall classification accuracy of only 76.9%, suggesting that the GAM was superior to the GLM. However, after subjecting these 2 models to cross validation to estimate their expected classification accuracy with new data, a different picture emerged with respect to their relative performance. Although the prediction accuracy of the GAM declined substantially in the cross validation (from 82.4% to 75.0%; Table 4), the prediction accuracy of the GLM remained relatively stable (from 76.9% to 75.0%). This difference indicates that the GAM, because of its greater number of estimated parameters, over-fitted the original data. Because the cross-validated prediction accuracy did not differ between models, the more stable and simpler GLM was chosen as the best model. A map showing the classification results of this model is provided in Figure 2.
One potential shortcoming of our method for predicting bottom trawlability is that it cannot be applied to the deeper parts of the survey area because of the requirement for a second echo in the acoustic data. The second echo is needed by the Echoview software to estimate 2 of the 9 parameters (hardness and second bottom length) and therefore contributes to prediction accuracy. However, resolution of the second echo in the acoustic data depends on water depth and ping rate, and the ping rate must be kept at a minimum of 1 Hz so that vessel captains can recognize bottom features that are likely to result in net damage. This limitation on ping frequency, in turn, limits the maximum depth at which our method could be applied to ~375 m. However, much of the deep area of the continental slope tends to be relatively steep, which adversely affects the classification accuracy of single-beam systems (von Szalay and McConnaughey, 2002), and it would therefore have been excluded anyway. Despite these limitations, only a relatively small portion of the survey area (~10%) needs to be excluded (Fig. 1).
Unlike earlier acoustic software for determining bottom types, such as the programs that were part of the QTC VIEW (Ellingsen et al., 2002) and RoxAnn (Greenstreet et al., 1997) seabed classification systems, the Echoview bottom typing module requires calibration of the echosounder so that the strength of the bottom echo can be interpreted directly. Because GOA survey vessels routinely perform an echosounder calibration with copper spheres at the beginning and end of each survey, this requirement did not add a burden to the collection of acoustic data and presumably provided additional information that improved our ability to determine bottom trawlability.
Although the primary motivation for this study was to develop a method that can be used to estimate the relative proportions of trawlable and untrawlable areas in the GOA bottom trawl survey area, so that abundance estimates derived from survey trawl catches are extrapolated only to trawlable areas, another application of this method is to improve survey efficiency. The current survey design process involves randomly selecting stations within the survey grid that either have been declared trawlable, on the basis of criteria specified in the introduction, or whose trawlability status is unknown. Stations that have been declared untrawlable are not part of the sampling pool. The selection of stations that are unclassified with respect to trawlability contributes to an inefficient survey method because these stations are sampled with equal probability and yet may result in much fruitless search
