$$SSE = \sum_{j=1}^{L} \sum_{i=1}^{N_j} \left( Y_{ij} - \hat{Y}_{ij} \right)^2$$

where $L$ represents the number of arbitrary classes defined such that in each class $P_{ij}$ is approximately constant (i.e., $= P_j$). Thus

$$SSE \doteq \sum_{j=1}^{L} \sum_{i=1}^{N_j} \left( Y_{ij} - \hat{Y}_j \right)^2$$

Within each class $\bar{Y}_j = \sum_{i=1}^{N_j} Y_{ij} / N_j$, and since each $Y_{ij}$ takes only the values 0 and 1 (so that $Y_{ij}^2 = Y_{ij}$),

$$SSE \doteq \sum_{j=1}^{L} N_j \left[ \bar{Y}_j \left( 1 - \bar{Y}_j \right) + \left( \bar{Y}_j - \hat{Y}_j \right)^2 \right]$$



This approximation improves as the number of classes increases and as the width of each 

 class decreases. 
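This grouping argument can be checked numerically. The sketch below uses simulated data; the probability model `0.2 + 0.6*x`, the equal-width classes, and all variable names are assumptions for illustration, not taken from the source. It compares the exact SSE with the class-grouped version and shows the agreement improving as the number of classes grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 0-1 data (hypothetical): the event probability P_ij
# varies smoothly with a single predictor x.
N = 20_000
x = rng.uniform(0.0, 1.0, N)
p = 0.2 + 0.6 * x                         # true P_ij (assumed form)
y = (rng.uniform(size=N) < p).astype(float)

# Exact SSE against the true probabilities: sum of (Y_ij - P_ij)^2.
sse_exact = np.sum((y - p) ** 2)

def grouped_sse(L):
    """SSE with P_ij replaced by a class-constant P_j (L equal-width classes)."""
    cls = np.minimum((x * L).astype(int), L - 1)
    total = 0.0
    for j in range(L):
        in_j = cls == j
        if in_j.any():
            p_j = p[in_j].mean()          # approximately constant within class j
            total += np.sum((y[in_j] - p_j) ** 2)
    return total

for L in (5, 20, 100):
    print(f"L={L:4d}  grouped SSE={grouped_sse(L):10.1f}  exact SSE={sse_exact:10.1f}")
```

As the class width shrinks, the within-class probability $P_j$ tracks $P_{ij}$ more closely, so the grouped SSE converges to the exact value.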



Thus,

$$MSE = \frac{SSE}{N - p}$$

where $p$ = number of parameters to be estimated, and (since $\bar{Y}_j \rightarrow P_j$ as $N_j \rightarrow \infty$)

$$\lim_{N \to \infty} MSE = \lim_{N \to \infty} \sum_{j=1}^{L} \frac{N_j}{N} P_j \left( 1 - P_j \right)$$

so for 0-1 data the MSE approaches the average binomial variance rather than zero.
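This behavior of the mean squared error for 0-1 data can be seen numerically: even when the predictions equal the true event probabilities, the MSE settles at the average of $P(1-P)$ instead of shrinking as $N$ grows. A sketch (simulated data; the uniform probability range and the value of `n_params` are assumptions):

```python
import numpy as np

def binary_mse(N, n_params=2, seed=1):
    """MSE of 0-1 data when predictions equal the true probabilities.

    Returns (MSE, average P(1-P)) for N simulated observations.
    Hypothetical setup: all remaining error is binomial variation.
    """
    rng = np.random.default_rng(seed)
    prob = rng.uniform(0.1, 0.9, N)          # P_ij for each observation (assumed)
    y = (rng.uniform(size=N) < prob).astype(float)
    mse = np.sum((y - prob) ** 2) / (N - n_params)
    return mse, np.mean(prob * (1.0 - prob))

for N in (100, 10_000, 1_000_000):
    mse, avg_var = binary_mse(N)
    print(f"N={N:9d}  MSE={mse:.4f}  average P(1-P)={avg_var:.4f}")
```

The MSE column stabilizes near the average binomial variance, which is why a small MSE by itself says little about the fit of an event-probability model.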



RISK output includes three types of statistics that might be used to measure 

 goodness-of-fit. These are a set of t tests that test whether each of the estimated 

 parameters is significantly different from 0, an analysis of variance F test that tests 

 the significance of the amount of variation explained by regression, and a chi-square 

table that evaluates goodness-of-fit over the range of predictions (e.g., $0 \le \hat{Y} \le 1$).
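RISK's exact chi-square computation is not reproduced in this section, so the sketch below is one common construction of such a table (an assumption on my part, not the program's documented formula): observations are grouped into classes by predicted probability, and the observed event count in each class is compared with the count expected under the model.

```python
import numpy as np

def chi_square_table(y, y_hat, n_classes=10):
    """Goodness-of-fit over the prediction range 0 <= y_hat <= 1.

    Groups observations into equal-width classes of predicted probability
    and compares observed event counts with those expected under the model.
    (Hypothetical construction; RISK's actual table may differ.)
    """
    edges = np.linspace(0.0, 1.0, n_classes + 1)
    cls = np.clip(np.digitize(y_hat, edges) - 1, 0, n_classes - 1)
    chi2 = 0.0
    rows = []
    for j in range(n_classes):
        in_j = cls == j
        n_j = int(in_j.sum())
        if n_j == 0:
            continue
        observed = y[in_j].sum()        # events observed in class j
        expected = y_hat[in_j].sum()    # events expected under the model
        # Approximate binomial variance for the class, N_j * pbar * (1 - pbar).
        var = expected * (1.0 - expected / n_j)
        if var > 0:
            chi2 += (observed - expected) ** 2 / var
        rows.append((edges[j], edges[j + 1], n_j, observed, expected))
    return chi2, rows

# Hypothetical demonstration with well-calibrated predictions.
rng = np.random.default_rng(2)
y_hat = rng.uniform(0.05, 0.95, 5_000)
y = (rng.uniform(size=5_000) < y_hat).astype(float)
chi2, rows = chi_square_table(y, y_hat)
print(f"chi-square over {len(rows)} classes: {chi2:.2f}")
```

A large chi-square signals lack of fit somewhere in the prediction range, and the per-class rows localize where the model over- or under-predicts.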



Each of these statistics tells us something different about the goodness-of-fit 

 of a model. Walker and Duncan (1967) imply that the F statistic is the appropriate 

 measure of goodness-of-fit for screening alternative models. Although I feel that the 

 chi-square statistic is preferable for screening alternative models, the method used 

 to measure goodness-of-fit is left to the discretion of the user. 



RISK is not an efficient procedure for screening alternative sets of independent 

 variables, especially when the data set is large. RISK seems to be most efficient for 

 estimating parameters after the optimal set of predictor variables has been selected. 

 The method proposed by Sterling and others (1969) based on information theory has been 

successfully used to select an optimal set of independent variables.^/



A computer program implementing the screening algorithm is documented by David A. Hamilton, Jr., and Donna Wendt, to be published as a USDA Forest Service General Technical Bulletin.






