Fishery Bulletin 112(1)
vessels during the study period. Information on switching of permits between vessels could not be disclosed for the few instances when it occurred, for reasons of confidentiality, but the distinction between permits and vessels is not important for our study. Data were collected on all trips and tows within the 2-month period for which a permit was selected. The data used for this analysis were collected from January 2002 through December 2009. During this period, the observer program cycled through all nonexempt permits (exemptions were given because of safety concerns) in the limited entry bottom trawl fleet 10 times.
While onboard, observers quantified the total discard weight of each species on each tow and collected biological samples from discards through subsampling procedures that are documented elsewhere (NWFSC¹). Observers focused their attention foremost on discarded catch because data on discards could not be obtained from other sources, unlike landed catch, for which data were available from vessel logbooks and landing receipts. Retained catch weights were acquired from the vessel logbook or by visual estimation of the proportion of the codend or trawl alley (the area where the trawl is placed after retrieval) that was filled. These estimates were then reconciled with weights from landing receipts for each observed trip. Through this process of reconciling the 2 data sources, changes were made to the retained weights on 94% of observed trips. When landings records were not available for an observed trip, retained weights originally recorded by the observer were used. Further information regarding the sampling scheme and data quality control process is available online at http://www.nwfsc.noaa.gov/research/divisions/fram/observer/.
To begin our analysis, presence and absence information for each tow was compiled from the observer data set. Although abundance data would give information on the magnitude of bycatch, the use of abundance data for our analysis would yield associations primarily between species that co-occur at similar catch levels. Although interesting for other research questions, those abundance-dominated associations were not informative for our analysis of co-occurrence of rebuilding bycatch species and target species. Additional available fields that were used in the analysis included average latitude, longitude, average depth, departure and return ports, and tow duration, among others. The data contained catch information for 175 different species from 45,252 tows. All groundfish species that occurred in at least 5% of tows were included, a criterion that eliminated 138 species that were not the target and rebuilding bycatch species of interest for our study. In addition, 7 species designated as "overfished"
by NMFS were considered in the analysis. Under federal law, a rebuilding plan must be developed for any fish species that is designated as "overfished" in relation to limit reference points (standardized thresholds used to determine stock status) (Restrepo et al., 1998). These species included Bocaccio (Sebastes paucispinis), Canary Rockfish (S. pinniger), Cowcod (S. levis), Darkblotched Rockfish (S. crameri), Pacific Ocean Perch (S. alutus), Widow Rockfish (S. entomelas), and Yelloweye Rockfish (S. ruberrimus).

¹ NWFSC (Northwest Fisheries Science Center). 2007. West coast groundfish observer training manual. [Available from NWFSC West Coast Groundfish Observer Program, 2725 Montlake Blvd. East, Seattle, WA 98112.]
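The species-screening step described above can be sketched as follows. This is a minimal illustration on simulated data, not the actual observer data set: the matrix, its dimensions, and the random occurrence rates are made up, and only the 5%-of-tows inclusion rule comes from the text.

```python
import numpy as np

# Toy catch matrix: rows = tows, columns = species; 1 = species present in tow.
# (Illustrative only -- the actual data set covers 45,252 tows and 175 species.)
rng = np.random.default_rng(0)
occ_rates = rng.uniform(0.01, 0.6, size=12)          # hypothetical occurrence rates
catch = (rng.random((1000, 12)) < occ_rates).astype(int)

# Keep only species that occur in at least 5% of tows, as in the text.
occurrence = catch.mean(axis=0)   # fraction of tows in which each species appears
keep = occurrence >= 0.05
presence_absence = catch[:, keep]

print(presence_absence.shape)
```

Rarely caught species drop out at this stage, so the later dissimilarity matrix is not dominated by near-empty columns.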
Cluster analysis 
Cluster analyses are commonly used to identify fish species assemblages (Williams and Ralston, 2002). Many approaches to cluster analysis exist, and resultant groupings are always relative to the units being grouped and the algorithm used to process the distance matrix (Gordon, 1999). Multiple methods of clustering the data were used to make results and conclusions more robust (Mahon et al., 1998). We focused on 2 main approaches: 1) hierarchical agglomerative cluster analysis (HCA) and 2) nonhierarchical cluster analysis, or partitioning analysis (PA) (Cope and Haltuch, 2012).
With the HCA approach, all elements initially form separate clusters, and groups are established by successively merging elements to minimize the average distances between all elements within each cluster. Partitioning analysis, with the k-medoids approach, requires specifying beforehand the number of desired clusters, from which the grouping algorithm minimizes dissimilarity between elements within clusters (Cope and Punt, 2009). Partitioning analysis thus requires the additional step of identifying the optimal number of clusters (k) supported by the data. This step is accomplished by using cluster validity diagnostics. After considering several of them through simulation, Cope and Punt (2009) found 2 cluster validity diagnostics that performed best: the average silhouette coefficient (Kaufman and Rousseeuw, 2005) and Hubert's Γ (Gordon, 1999). Because these diagnostics have a tendency to either overlump or oversplit groups, respectively, both of them were used to identify the optimal number of clusters. In instances where the 2 diagnostics supported different numbers of optimal clusters, both sets of clusters were retained for evaluation. The Bray-Curtis dissimilarity measure was used to transform species presence and absence information by tow into a dissimilarity matrix used by both clustering approaches.
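The dissimilarity and clustering steps can be sketched with SciPy. This is an illustrative reconstruction under stated assumptions, not the authors' code: the presence/absence matrix is simulated, average-linkage HCA stands in for the HCA described above, cutting the HCA tree at k groups stands in for k-medoids (which base SciPy does not provide), and only the average silhouette coefficient, not Hubert's Γ, is computed.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Toy presence/absence matrix: rows = tows, columns = species.
pa = (rng.random((500, 8)) < 0.4).astype(int)

# Bray-Curtis dissimilarity between SPECIES, so species become the rows here.
d = pdist(pa.T, metric="braycurtis")
dmat = squareform(d)

# Hierarchical agglomerative clustering with average linkage.
tree = linkage(d, method="average")

def mean_silhouette(dmat, labels):
    """Average silhouette coefficient from a precomputed dissimilarity matrix."""
    n = len(labels)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        same[i] = False
        a = dmat[i, same].mean() if same.any() else 0.0   # within-cluster distance
        b = min(dmat[i, labels == c].mean()               # nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return s.mean()

# Scan candidate numbers of clusters k and keep the best-scoring one.
scores = {k: mean_silhouette(dmat, fcluster(tree, t=k, criterion="maxclust"))
          for k in range(2, 6)}
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```

On real data, a second diagnostic such as Hubert's Γ would be computed over the same range of k, and disagreements between the two would lead to retaining both candidate groupings, as in the text.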
Once species were clustered, the next task was to identify which of the clusters were dissimilar enough from others to be considered distinct. Guidance for interpreting the clusters in a PA was provided by Kaufman and Rousseeuw (2005), who identified an average group silhouette value >0.25 as indicating a group sufficiently distinct from other groups. For the HCA, it was less clear what constituted a group. We followed the approach of Cope and Haltuch (2012), who introduced a null model approach to define significant groups when using HCA.
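The >0.25 per-group silhouette rule for judging PA clusters can be checked directly. In this self-contained sketch, the dissimilarity matrix and labels are fabricated to show two well-separated groups; only the 0.25 threshold comes from the text (after Kaufman and Rousseeuw).

```python
import numpy as np

# Fabricated precomputed dissimilarity matrix with two well-separated groups.
labels = np.array([0, 0, 0, 1, 1, 1])
dmat = np.full((6, 6), 0.9)                   # between-group dissimilarity
for grp in (range(0, 3), range(3, 6)):
    for i in grp:
        for j in grp:
            dmat[i, j] = 0.1                  # within-group dissimilarity
np.fill_diagonal(dmat, 0.0)

def silhouette_values(dmat, labels):
    """Per-element silhouette values from a precomputed dissimilarity matrix."""
    n = len(labels)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        same[i] = False
        a = dmat[i, same].mean() if same.any() else 0.0
        b = min(dmat[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return s

s = silhouette_values(dmat, labels)
# A cluster is treated as distinct when its average silhouette exceeds 0.25.
for c in np.unique(labels):
    avg = s[labels == c].mean()
    print(c, round(avg, 3), "distinct" if avg > 0.25 else "not distinct")
```

Here both groups score well above 0.25 and would be kept; a cluster of species with diffuse, overlapping occurrence patterns would score lower and be treated as indistinct.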
