Minimum Bias Estimate 



Suppose that the noise terms $n_c$ and $n_s$ are not identically zero, but that we nevertheless attempt to minimize the biases

$E\{\hat{a} - a_n\}$                                                  (34)

and

$E\{\hat{b} - b_n\}$.                                                 (35)



" 2cosn0 "" ■ 2cos«(/) ^^^^ 



and 



'^ " 2 sin n0 ~ ^" ^ 2 sin n<t> ' ^^'^^ 



It is easy to show that the biases (Eqs. (34) and (35)) are both identically zero. The mean-square errors in the estimates are



$E\{(\hat{a} - a_n)^2\} = \dfrac{\sigma^2}{2 \cos^2 n\phi}$           (38)



and 



$E\{(\hat{b} - b_n)^2\} = \dfrac{\sigma^2}{2 \sin^2 n\phi}$.          (39)



This procedure works only if neither $\cos n\phi$ nor $\sin n\phi$ is zero. In the event that one of these factors is zero, the corresponding estimated component is undetermined. This, however, is not the worst case: if one of the factors is merely close to zero, the estimation can still be carried out as in Eqs. (36) and (37), but the corresponding error (Eq. (38) or (39)) is necessarily extremely large.
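This behavior is easy to verify numerically. The sketch below assumes the observation model implicit in Eq. (36), $x = 2 a_n \cos n\phi + n_c$ with zero-mean Gaussian $n_c$; the values of $a_n$, $n$, $\phi$, and the noise level are illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only; a_n, n, and the noise level are not
# taken from the paper.
a_n, n = 1.0, 3
noise_std = 0.1
trials = 500_000

def min_bias_estimate(phi):
    """Empirical bias and MSE of the minimum-bias estimate, Eq. (36)."""
    n_c = rng.normal(0.0, noise_std, trials)    # zero-mean noise
    x = 2.0 * a_n * np.cos(n * phi) + n_c       # assumed observation
    a_hat = x / (2.0 * np.cos(n * phi))         # Eq. (36)
    return a_hat.mean() - a_n, ((a_hat - a_n) ** 2).mean()

for phi in (0.40, 0.52):   # cos(n*phi): about 0.362 vs. 0.011
    bias, mse = min_bias_estimate(phi)
    print(f"phi={phi:.2f}  bias={bias:+.2e}  MSE={mse:.2e}")

# The bias is ~0 in both cases, but the MSE grows like
# 1/cos^2(n*phi), as Eq. (38) predicts: here roughly 0.02 vs. 21.
```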



Minimum Mean-Square Error Estimate 



The estimate leading to a minimum mean-square error is more subtle. We resort to a general theorem: when the a priori distributions of the signals ($a_n$ and $b_n$) and the noises ($n_c$ and $n_s$) are Gaussian, the processor leading to the minimum mean-square error estimate is a linear processor (Ref. 3, pp. 286 ff. and 335 ff.). With that fact, we can write an expression for the mean-square error, differentiate with respect to the linear coefficients, and determine the resulting estimators.
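As an illustration of this procedure, here is the scalar case for the cosine channel of Eq. (36). The prior variance $\sigma_a^2 = E\{a_n^2\}$ is an assumed quantity introduced for this sketch, not notation from the paper, and the paper's own estimators may carry different normalizations.

$\hat{a} = h\,x, \qquad x = 2 a_n \cos n\phi + n_c,$

$\varepsilon(h) = E\{(h x - a_n)^2\} = h^2 E\{x^2\} - 2 h\, E\{a_n x\} + E\{a_n^2\},$

$\dfrac{d\varepsilon}{dh} = 0 \quad\Longrightarrow\quad h = \dfrac{E\{a_n x\}}{E\{x^2\}} = \dfrac{2 \sigma_a^2 \cos n\phi}{4 \sigma_a^2 \cos^2 n\phi + E\{n_c^2\}},$

where the last step assumes $a_n$ and $n_c$ are zero mean and independent. Unlike the minimum-bias estimate of Eq. (36), this coefficient remains finite as $\cos n\phi \to 0$: the estimator shrinks toward zero rather than dividing by a vanishing factor, which is why its mean-square error stays bounded.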



3. Harry L. Van Trees, Detection, Estimation, and Modulation Theory, Part I: Detection, Estimation, and Linear Modulation Theory (Wiley, New York, 1968).






