APPENDIX D 

 CORRECTION FOR LOG BIAS 



If the assumption of normality of residuals cannot be met, then correction of a log model 

 for bias can be a problem for which no easy solution is apparent. The following, however, is 

 a proposed correction factor based more on intuition than mathematical rigor: 



If S is such that ln(S) ~ N(μ, σ²), then it can be shown that:

E[S] = EXP[μ + ½σ²] (Mood and others 1974)
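This lognormal identity can be checked numerically; the values of μ and σ below are arbitrary illustrative choices, not values from the text:

```python
# Numerical check of E[S] = EXP[mu + (1/2) * sigma^2] when ln(S) ~ N(mu, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5  # illustrative parameters

# Draw a large lognormal sample: S = exp(Z), Z ~ N(mu, sigma^2)
S = np.exp(rng.normal(mu, sigma, size=1_000_000))

theoretical_mean = np.exp(mu + 0.5 * sigma**2)
naive_back_transform = np.exp(mu)  # ignores the bias term

print(S.mean(), theoretical_mean, naive_back_transform)
```

The sample mean agrees with EXP[μ + ½σ²], while the naive back-transform EXP[μ] falls short by the factor EXP[½σ²].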



Therefore, an estimator of S is:

Ŝ = EXP[l̂n(S) + ½MSE]

where l̂n(S) is the estimate of ln(S) from the regression and MSE is the mean squared error of the log regression.
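The estimator can be sketched numerically; the simulated data, coefficients, and variable names below are illustrative assumptions, not values from the text:

```python
# Sketch of S_hat = EXP[ln(S)_hat + (1/2) * MSE] on simulated lognormal data.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.uniform(1.0, 10.0, size=n)
# Assumed true model: ln(S) = 0.5 + 0.8 * x + e, e ~ N(0, 0.3^2)
ln_S = 0.5 + 0.8 * x + rng.normal(0.0, 0.3, size=n)
S = np.exp(ln_S)

# Ordinary least squares on the log scale
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, ln_S, rcond=None)
resid = ln_S - X @ b
mse = resid @ resid / (n - 2)  # mean squared error of the log regression

ln_S_hat = X @ b
S_naive = np.exp(ln_S_hat)                   # biased back-transform
S_corrected = np.exp(ln_S_hat + 0.5 * mse)   # half-MSE correction

print(S.mean(), S_naive.mean(), S_corrected.mean())
```

The uncorrected back-transform systematically underestimates the mean of S; adding ½MSE on the log scale removes most of that bias.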



This correction for lognormal bias, ½MSE, is the same as that proposed by Oldham (1965) and
Baskerville (1972) as an approximate correction to the log regression model. Now, if S is not
lognormally distributed, it still seems reasonable to assume that a constant, K, exists such
that:

mean(S) = EXP[mean(ln(S)) + K]

where mean(·) denotes the sample mean.

 Rearranging, 



mean(S) = EXP[mean(ln(S))] EXP[K]

so that

EXP[K] = mean(S) / EXP[mean(ln(S))]

and

K = ln(mean(S)) - mean(ln(S)).
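A short numerical sketch of this K, using a deliberately skewed, non-lognormal sample (the data and parameter values are illustrative assumptions):

```python
# Numerical illustration of K = ln(mean(S)) - mean(ln(S)).
import numpy as np

rng = np.random.default_rng(2)
# Skewed, non-normal log-scale values (shifted exponential), so the
# lognormal half-MSE result no longer strictly applies.
ln_S = 1.5 + (rng.exponential(0.4, size=10_000) - 0.4)
S = np.exp(ln_S)

K = np.log(S.mean()) - ln_S.mean()

# By construction, mean(S) = EXP[mean(ln(S)) + K]:
print(S.mean(), np.exp(ln_S.mean() + K))

# EXP[K] is the multiplicative correction applied to a back-transformed value:
correction_factor = np.exp(K)
print(correction_factor)
```

Here EXP[K] plays the role that EXP[½MSE] plays in the lognormal case; as the text notes, it rests on intuition rather than on a distributional result, and K > 0 whenever ln(S) varies (Jensen's inequality).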

 The regression equation: 



ln(Y) = b0 + b1 x1 + ... + bn xn

can also be expressed as:

ln(Y) - mean(ln(Y)) = b1 (x1 - mean(x1)) + b2 (x2 - mean(x2)) + ... + bn (xn - mean(xn))

or,

ln(Y) = mean(ln(Y)) + b1 (x1 - mean(x1)) + b2 (x2 - mean(x2)) + ... + bn (xn - mean(xn))
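Because a least-squares fit with an intercept passes through the variable means, the two forms give identical fitted values; a sketch with illustrative simulated data:

```python
# Check that an OLS fit of ln(Y) on x1, x2 can be rewritten in the mean-centered
# form ln(Y)_hat = mean(ln(Y)) + b1 (x1 - mean(x1)) + b2 (x2 - mean(x2)).
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x1 = rng.uniform(0.0, 5.0, size=n)
x2 = rng.uniform(1.0, 3.0, size=n)
# Assumed illustrative model for ln(Y)
ln_Y = 0.2 + 1.1 * x1 - 0.7 * x2 + rng.normal(0.0, 0.25, size=n)

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, ln_Y, rcond=None)

fitted_standard = X @ b
fitted_centered = ln_Y.mean() + b[1] * (x1 - x1.mean()) + b[2] * (x2 - x2.mean())

# The two parameterizations agree to floating-point precision:
print(np.max(np.abs(fitted_standard - fitted_centered)))
```

The equivalence holds because the residuals of an intercept model sum to zero, so b0 = mean(ln(Y)) - b1 mean(x1) - ... - bn mean(xn).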






