FISHERY BULLETIN: VOL. 84, NO. 2



pollock port. Although Gloucester fishermen direct 

 much of their effort towards cod, haddock, and 

 flounders (generally joint products), Gloucester has 

 developed a reputation as a port for both pollock and 

 ocean perch. 



Conventional wisdom in the New England ground- 

 fishery market holds that New England round (fish 

as harvested) ex-vessel prices of fresh flounders are

 formed in the New Bedford auction market, while 

 fresh cod, haddock, and pollock round ex-vessel 

 prices are set in the Boston auction. These widely 

 held beliefs serve as the null hypotheses to be tested 

 in this study of the ex-vessel groundfish price link- 

 ages in New Bedford, Gloucester, and Boston. 



Knowledge of ex-vessel price linkages has a num- 

 ber of applications. Efforts at improving market ef- 

 ficiency would find this information useful. The 

broadcasting of daily ex-vessel fish prices by the Na-

 tional Marine Fisheries Service can properly focus 

 upon the most crucial markets. Infrastructural or 

 institutional improvements can be more judicious- 

 ly targeted, an important consideration in a time of 

 tight public and private budgets. Price forecasts to 

 improve industry functioning can concentrate upon 

 those prices formed in markets which demonstrate 

 price leadership. Fishermen may want to land their 

 harvests in the market in which ex-vessel prices are 

 first formed, should fishermen want to affect the 

 pricing process, be less dependent upon the land- 

 ings of others, or capture advantageous prices. 

 Similar considerations apply to buyers. Knowledge 

 of the price formation process allows government 

 price policies to target the appropriate markets. 

 Finally, price linkage information is crucial to 

 studies of marketing margins, length of price trans- 

 mission, and asymmetric pricing. 



THE DATA 



The data are taken from the vessel weighout files 

 of the National Marine Fisheries Service. After 

 every trip of a commercial fishing vessel of any gear 

 type, port agents in each port obtain the value and 

 volume of landings for each species harvested. The 

 entire collection of this information constitutes the 

 weighout file. The output vector from the weighout 

 file is then linearly aggregated over vessels and trips 

 to form monthly round ex-vessel prices for each 

 port. The resulting nominal prices are subsequent- 

 ly deflated by the consumer price index for food. As 

 Sims (1974) and Feige and Pierce (1980) noted, the 

 use of seasonally adjusted data may confound lag 

 distributions and causality relationships. Conse- 

quently, the data are left in their unseasonalized

state. However, to account for seasonal differences,

 quarterly dummy variables are employed. The time 

 domain of the data set extends from 1965 through 

 1981. 
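The aggregation and deflation steps described above can be sketched as follows. The record fields (port, month, value, volume), the figures, and the index value are hypothetical illustrations, not the actual NMFS weighout file layout:

```python
# A minimal sketch of the aggregation described in the text: trip-level
# value and volume are summed by port and month, the nominal price is
# total value over total volume, and the result is deflated by the
# consumer price index for food. All field names and figures here are
# hypothetical; the NMFS weighout file has its own record layout.

from collections import defaultdict

def monthly_real_prices(trips, cpi_food):
    """Aggregate trip-level landings to monthly round ex-vessel prices.

    trips: iterable of dicts with keys 'port', 'month', 'value' (dollars),
           and 'volume' (pounds, round weight).
    cpi_food: dict mapping month -> consumer price index for food.
    Returns {(port, month): deflated price per pound}.
    """
    value = defaultdict(float)
    volume = defaultdict(float)
    for t in trips:
        key = (t["port"], t["month"])
        value[key] += t["value"]
        volume[key] += t["volume"]
    # Nominal price = total value / total volume; dividing by the CPI
    # for food converts it to a real price.
    return {k: (value[k] / volume[k]) / cpi_food[k[1]]
            for k in value if volume[k] > 0}

trips = [
    {"port": "Boston", "month": "1975-01", "value": 1200.0, "volume": 4000.0},
    {"port": "Boston", "month": "1975-01", "value": 800.0, "volume": 2000.0},
]
cpi = {"1975-01": 1.6}
prices = monthly_real_prices(trips, cpi)
```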



METHOD OF ANALYSIS 



Granger (1977) provided a definition of causality 

 among a set of variables that is based upon predic- 

 tability as well as the fact that the effect of a change 

in an exogenous variable upon an endogenous

 variable requires time. A variable X causes another 

 variable Y, with respect to a given universe or in- 

 formation set that includes X and Y, if present Y 

 can be better predicted by using past values of X 

 than not doing so, all other information in the past 

 of the universe being used in either case. Causality 

from Y to X is defined in the same manner. Feed-

 back occurs if X causes Y and Y causes X. A causal 

 relationship between X and Y does not exist if 

 causality does not run from X to Y or from Y to X, 

 and feedback does not occur. 



Causality tests may be classified into two funda-

mental types: within-sample and out-of-sample

tests. The within-sample test is widely applied and

was the first developed. It is applied over the full

time domain of the data set and essentially relies

upon a measure of

 fit. The definition of causality in the out-of-sample 

 test requires evidence of improved forecasts. This 

 approach is implemented by identifying and esti- 

 mating different models using the first part of the 

 sample and then comparing their respective fore- 

 casting abilities on the latter part of the sample. This 

 study utilizes the within-sample test, the one most 

 commonly applied, since the properties of the out- 

 of-sample test have yet to be systematically 

 examined. 
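The within-sample test can be sketched in Python: regress Y on its own lags (the restricted model), then on lags of both Y and X (the unrestricted model), and compare the two with an F statistic. The lag length and the simulated series below are illustrative assumptions; the study's actual specification also includes the quarterly dummy variables noted earlier.

```python
import numpy as np

def granger_f(y, x, p=4):
    """Within-sample Granger test: does X help predict Y?

    Restricted model:   y_t regressed on p lags of y.
    Unrestricted model: y_t regressed on p lags of y and p lags of x.
    Returns the F statistic; large values reject non-causality.
    The lag length p = 4 is an illustrative assumption.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    # Build lag matrices for observations t = p .. T-1.
    Y = y[p:]
    lags_y = np.column_stack([y[p - i:T - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:T - i] for i in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    Xr = np.hstack([ones, lags_y])            # restricted regressors
    Xu = np.hstack([ones, lags_y, lags_x])    # unrestricted regressors
    ssr_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    ssr_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    df = (T - p) - Xu.shape[1]
    return ((ssr_r - ssr_u) / p) / (ssr_u / df)

# Hypothetical illustration: y is driven by lagged x, so the statistic
# for X -> Y should be large, while that for Y -> X should be small.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
f_xy = granger_f(y, x)   # X -> Y: large
f_yx = granger_f(x, y)   # Y -> X: near 1
```

In practice the statistic would be compared against the F distribution with (p, T - 3p - 1) degrees of freedom to decide whether causality runs from X to Y.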



Two basic approaches have been advanced by 

 which to apply empirically the within-sample 

 bivariate Granger criterion to time series. The first 

 approach is represented by the test proposed by 

 Pierce (1977) based upon Haugh (1976). The proce- 

 dure first estimates whitening filters for each time 

 series, then subsequently estimates the cross- 

correlation function for the first step's residuals.3

 However, Sims (1977) and Geweke (1981) indicated 

that this approach may be limited.4 A second basic



3 Whitening filters remove serial correlation from a time series. 

 Each time series used in a test of causality will be a white noise 

 process, and any relationships will be based on actual, systematic 

 relationships between the two time series, instead of a spurious 

 relationship caused by the common serial correlation. 
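The two-step whitening-and-cross-correlation procedure can be sketched as follows. As simplifying assumptions, AR(p) filters fit by least squares stand in for the fully identified ARIMA models, and the series are simulated:

```python
import numpy as np

def ar_residuals(z, p=4):
    """Whiten a series: fit an AR(p) model by least squares and return
    its residuals. (A simplification: the Haugh-Pierce procedure
    identifies a full ARIMA model for each series.)"""
    z = np.asarray(z, float)
    T = len(z)
    X = np.column_stack([np.ones(T - p)] +
                        [z[p - i:T - i] for i in range(1, p + 1)])
    beta = np.linalg.lstsq(X, z[p:], rcond=None)[0]
    return z[p:] - X @ beta

def cross_correlation(u, v, max_lag=6):
    """Cross-correlations r(k) = corr(u_t, v_{t-k}) of two whitened
    series for |k| <= max_lag. A spike at some k > 0 suggests that past
    v helps explain present u."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    n = len(u)
    r = {}
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            r[k] = float(np.mean(u[k:] * v[:n - k]))
        else:
            r[k] = float(np.mean(u[:n + k] * v[-k:]))
    return r

# Hypothetical series in which x leads y by one period: the residual
# cross-correlation should spike at lag k = 1 and be near zero elsewhere.
rng = np.random.default_rng(1)
x = rng.standard_normal(400)
y = np.zeros(400)
y[1:] = 0.9 * x[:-1] + 0.5 * rng.standard_normal(399)
r = cross_correlation(ar_residuals(y), ar_residuals(x))
```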



4 Prefiltering each time series with separate autoregressive inte- 

 grated moving average (ARIMA) filters biases the test toward 






