Fishery Bulletin 101(3) 



(NEFSC¹; Gibson and Lazar²). However, Bannerot and Austin (1983) noted that the sampling distribution of recreational catch data is often highly skewed with a longer right-hand tail than might be expected even from a lognormal distribution. Furthermore, depending on the way the catch rate is defined (i.e. catch per trip, day, or hour), recreational fishery catch-rate distributions may contain a high proportion of zero catches.



Hilborn (1985) presented a frequency distribution of numbers of salmon caught per trip in the British Columbia sport fishery that appears to be best characterized by the negative binomial distribution, with a catch-per-hour frequency best characterized by the Poisson distribution. Jones et al. (1995) investigated the statistical properties of recreational fishery sampling data collected in angler surveys in Virginia and noted that the non-normality of recreational fishery data may violate assumptions of lognormality in methods used to develop indices of abundance, and especially the validity of confidence intervals. Power and Moser (1999) expressed similar concerns about sampled distributions of fish and plankton collected by research trawl nets, noting that the assumption of an underlying normal or lognormal distribution for these types of data is commonplace, and perhaps in error, and that distributions such as the Poisson or negative binomial may be more appropriate. Smith (1990, 1996) recommended various nonparametric resampling methods (e.g. bootstrap confidence intervals) for characterizing the dispersion of highly skewed research trawl survey catch distributions having a large proportion of zero catches. Smith (1999) modeled angling success for salmon, expressed as the catch after the first hour of angling, using a negative binomial distribution model.
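The distinction drawn by Hilborn (1985) can be checked informally from the variance-to-mean ratio of the sampled counts: a ratio near 1 is consistent with the Poisson, whereas a ratio well above 1 indicates overdispersion and points toward the negative binomial. A minimal sketch, using hypothetical catch-per-trip counts rather than actual survey values:

```python
# Sketch: screening catch-per-trip counts for overdispersion
# (hypothetical data, not MRFSS or British Columbia values).
from statistics import mean, pvariance

catch_per_trip = [0, 0, 0, 1, 0, 2, 0, 5, 1, 0, 0, 3, 0, 8, 1]

m = mean(catch_per_trip)
v = pvariance(catch_per_trip)
dispersion = v / m  # ~1 suggests Poisson; >1 suggests negative binomial

# Method-of-moments negative binomial size parameter k:
# variance = m + m**2 / k  =>  k = m**2 / (v - m), valid when v > m.
k = m * m / (v - m) if v > m else float("inf")
print(f"mean={m:.2f} var={v:.2f} dispersion={dispersion:.2f} k={k:.2f}")
```

For these illustrative counts the variance is several times the mean, so the negative binomial would be the candidate model; a catch-per-hour series with variance close to its mean would instead support the Poisson.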



In addition to the Poisson and negative binomial, alternatives to the lognormal error model for recreational fishery catch rates also include the delta-lognormal and delta-Poisson error models. These models are combinations of the delta distribution (Pennington, 1983) and lognormal or Poisson model approaches. The delta distribution has been used in modeling fish and plankton abundance indices from research trawl survey data, which are characterized by highly skewed distributions with a relatively high proportion of zero catches (Pennington, 1983). In the combined delta-lognormal and delta-Poisson approaches, indices of abundance are modeled as a product of binomially distributed probabilities of a positive catch and lognormal or Poisson distributed positive catch rates. The delta-lognormal model has been used in modeling fish-spotter data (Lo et al., 1992) and in the standardization of recreational fishery catch rates for bluefin tuna (Brown and Porch, 1997; Turner et al., 1997; Brown, 1999; Ortiz et al., 1999), both characterized by a highly contagious spatial distribution and a large proportion of zeroes. Bluefin and yellowfin tuna catch rates in the commercial and recreational fisheries have also been standardized by using Poisson (Brown and Porch, 1997), negative binomial (Turner et al., 1997), and delta-Poisson error distributions (Brown, 2001; Brown and Turner, 2001) to address these distributional characteristics.

¹ NEFSC. 1997. Report of the 23rd northeast regional stock assessment workshop (23rd SAW): Stock Assessment Review Committee (SARC) consensus summary of assessments. Northeast Fisheries Science Center reference document 97-05, 191 p. Northeast Fisheries Science Center, Woods Hole, MA 03543.

² Gibson, M. R., and N. Lazar. 1998. Assessment and projection of the Atlantic coast bluefish using a biomass dynamic model. A report to the Atlantic States Marine Fisheries Commission Bluefish Technical Committee and Mid-Atlantic Fishery Management Council Scientific and Statistical Committee, 29 p. Rhode Island Division of Fish and Wildlife, Jamestown, RI 02835.
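The delta-lognormal construction described above can be sketched directly: the index is the proportion of positive trips (the binomial component) multiplied by the bias-corrected lognormal mean of the positive catch rates. The values below are illustrative only, not MRFSS observations:

```python
# Sketch of a delta-lognormal index of abundance
# (hypothetical catch-per-trip data with a high proportion of zeroes).
import math

catches = [0, 0, 0, 0, 2, 0, 1, 0, 4, 0]

positives = [c for c in catches if c > 0]
p_pos = len(positives) / len(catches)     # binomial component: P(positive catch)

logs = [math.log(c) for c in positives]   # lognormal component fit on positives only
mu = sum(logs) / len(logs)
var = sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)

# Lognormal mean with the standard bias correction exp(mu + var/2);
# the index is the product of the two components.
pos_mean = math.exp(mu + var / 2.0)
index = p_pos * pos_mean
```

Because the zeroes are handled by the binomial component, no constant need be added before the log transform of the positive catch rates, which is one attraction of the delta approach for zero-heavy recreational data.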



In this study I first examine the statistical properties of recreational fishery catch-rate data as sampled by the MRFSS. Next, I examine the goodness of fit of empirical MRFSS catch rates to different statistical distributions, on both per-trip and per-hour bases. I then explore the effects of five different assumptions about the error structure of the catch-rate frequency distributions (lognormal, delta-lognormal, Poisson, delta-Poisson, and negative binomial) in deriving standardized indices of abundance with general linear models, using simulated recreational fishery and empirical MRFSS catch-per-trip (zero catches included) data.



Materials and methods 



Overview of statistical methods 



This work focuses on catch number per trip sampled in the MRFSS as the index of abundance. The distributional properties of MRFSS catch-per-hour rates are also examined, in order to explore whether the general conclusions reached for catch-per-trip rates are likely to hold for catch-per-hour rates as well. Directed trips are defined as those for which interviewed anglers indicated that they were intending to catch a particular species as a primary or secondary target, whether successful or not (zero catches included). In analyses of trips for all species, all trips were used regardless of target or success (zero catches included). Catch rates were expressed as integer (natural) numbers of fish per trip or per hour.



A value of 1 was added to all observations when applying a lognormal transformation to allow inclusion of the zero catch-rate observations (this constant was subtracted upon retransformation to the original scale). Expected sample values for the lognormal distribution were calculated by using the normal distribution and log-transformed catch rates (Sokal and Rohlf, 1981). Previous work on MRFSS catch-per-trip data has shown that the value of 1 is the appropriate constant to be added (Terceiro; NEFSC¹) because it tends to minimize the sum of the absolute value of skew and kurtosis for these distributions (Berry, 1987). The standard logarithmic transform bias correction was applied to express results in the original arithmetic scale (Finney, 1951; Bradu and Mundlak, 1970). No constant was added when data were analyzed under the assumption of binomial, Poisson, or negative binomial error distributions.
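As a concrete sketch of this procedure (with made-up catch-per-trip values), the constant 1 is added before the log transform, and the lognormal bias correction exp(s²/2) is applied before subtracting the constant on the arithmetic scale:

```python
# Sketch: log(x + 1) transform of zero-inclusive catch rates and
# bias-corrected back-transform (hypothetical data, not MRFSS values).
import math

catch_per_trip = [0, 0, 1, 3, 0, 7, 2, 0, 0, 12]

logged = [math.log(c + 1.0) for c in catch_per_trip]  # constant 1 admits zeroes
n = len(logged)
mu = sum(logged) / n
var = sum((x - mu) ** 2 for x in logged) / (n - 1)

# The naive back-transform exp(mu) estimates the median, not the mean,
# on the arithmetic scale; the standard correction multiplies by exp(var/2).
naive_mean = math.exp(mu) - 1.0
corrected_mean = math.exp(mu + var / 2.0) - 1.0
```

The corrected estimate always exceeds the naive one for positive log-scale variance, which is why the correction matters for the highly dispersed catch-rate data considered here.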



The binomial distribution is a discrete frequency (probability) distribution of the number of times an event occurs in a sample in which some proportion of the members possess some variable attribute (Snedecor and Cochran, 1967). Each event is assumed independent of other prior



