\hat{I}(\theta) = a_0 + a_1 \cos\theta + b_1 \sin\theta + a_2 \cos 2\theta + b_2 \sin 2\theta + a_3 \cos 3\theta + b_3 \sin 3\theta + a_4 \cos 4\theta + b_4 \sin 4\theta + a_5 \cos 5\theta + b_5 \sin 5\theta + a_6 \cos 6\theta + a_7 \cos 7\theta + b_7 \sin 7\theta + \cdots .   (18)
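As a minimal numerical sketch (not part of the original text), the estimator of Eq. (18) can be evaluated directly; the coefficient values below are illustrative placeholders chosen to show that a pure b₆ sin 6θ component contributes nothing:

```python
import math

def i_hat(theta, a, b):
    """Evaluate the estimator of Eq. (18): a Fourier series in cos(n*theta)
    and sin(n*theta) whose sin terms are omitted whenever n is a multiple of 6."""
    total = a[0]
    for n in range(1, len(a)):
        total += a[n] * math.cos(n * theta)
        if n % 6 != 0:           # the b6, b12, ... terms never appear
            total += b[n] * math.sin(n * theta)
    return total

# Illustrative case: if the true I(theta) had only a constant term and a
# b6*sin(6*theta) term, the estimator would reduce to the constant alone.
a = [1.0] + [0.0] * 6            # a0 = 1, a1..a6 = 0
b = [0.0] * 6 + [2.0]            # b6 = 2 is present in the truth but unobservable
print(i_hat(0.3, a, b))          # constant 1.0, independent of theta
```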



The estimator is the precise analog, under sixfold symmetry conditions, of expression (13) with the last form of expression (14) substituted. This function has all of the terms of the Fourier expansion of I(\theta) except that it lacks the terms in \sin n\theta whenever n is a multiple of 6. The difference between I and \hat{I} is



I(\theta) - \hat{I}(\theta) = b_6 \sin 6\theta + b_{12} \sin 12\theta + b_{18} \sin 18\theta + \cdots .   (19)



This error represents an irreducible bias. It is irreducible for the same reason as before, i.e., as shown in Eq. (17), all six of the measurement sets M_0, M_1, M_2, M_3, M_4, and M_5 are entirely independent of the values of the coefficients b_6, b_{12}, \ldots.
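The mechanism behind this independence can be checked numerically. Assuming the six measurement sets correspond to angles related by the sixfold symmetry, \theta_k = k\pi/3 (an assumption consistent with the surrounding discussion, not stated explicitly here), the offending terms vanish identically at every such angle:

```python
import math

# At the six symmetry-related angles theta_k = k*pi/3, the factor
# sin(n*theta_k) with n a multiple of 6 equals sin(2*pi*k*(n/6)) = 0,
# so no combination of such measurements can sense b6, b12, b18, ...
for k in range(6):
    theta_k = k * math.pi / 3.0
    for n in (6, 12, 18):
        assert abs(math.sin(n * theta_k)) < 1e-9
print("sin(n*theta_k) vanishes for n = 6, 12, 18 at all six angles")
```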



ESTIMATION TO MINIMIZE ERROR 

 Error Criteria 



For simplicity, suppose the quantity we are trying to measure has a true value X. Suppose that 

 the variable that we observe is not quantity X, but another quantity, Y, which is a function of X. Our 

 measurement is then a measurement of Y with, possibly, some measurement error or noise, i.e., 



measurement = Y + noise = f(X) + noise. (20) 



Out of the measurement we try to extract an estimate \hat{X} which in some sense matches the true value X.



One criterion which we may invoke is to minimize the bias in the estimate. The bias is defined as 

 the average value of the difference between the estimate and the true value, supposing that repeated 

 measurements are made keeping the true value X constant, i.e., minimize 



bias = E\{\hat{X} - X\}.   (21)



Only rarely does minimization of the bias unambiguously define a processor. Whether it does or not, it 

 is a legitimate criterion of merit. 
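The bias of Eq. (21) can be estimated empirically by Monte Carlo: hold the true value fixed, repeat the noisy measurement many times, and average the estimation errors. A minimal sketch, assuming a direct observation (f(X) = X), the trivial estimator \hat{X} = measurement, and zero-mean Gaussian noise (all illustrative choices):

```python
import random

random.seed(0)
X_true = 3.0
N = 100_000

# Repeated measurements with the true value X held constant, as in the
# definition of Eq. (21); here the estimator is simply the raw measurement.
estimates = [X_true + random.gauss(0.0, 1.0) for _ in range(N)]
bias = sum(x_hat - X_true for x_hat in estimates) / N
print(f"empirical bias = {bias:.4f}")   # near zero: this estimator is unbiased
```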



A much more common criterion is to minimize the mean-square error between the true value X and the estimate \hat{X}, measured in the metric of the source X, i.e., minimize



mean-square error (in source space) = E\{(\hat{X} - X)^2\}.   (22)



Another commonly invoked criterion is to choose the estimate which minimizes the mean-square deviation between the actual observations (Y + noise) and what noise- and error-free observations Y would have been, measured in the metric of the observations Y, i.e., minimize



mean-square error (in observation space) = E\{[f(\hat{X}) - (Y + \text{noise})]^2\}.   (23)
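The distinction between the metrics of Eqs. (22) and (23) can be made concrete with a deliberately nonlinear observation map. In this sketch f(x) = x^3 and the specific numbers are purely illustrative; the point is that the same miss in source space is magnified by roughly f'(X)^2 in observation space:

```python
def f(x):
    return x ** 3              # an illustrative nonlinear observation map

X_true = 2.0
x_hat = 2.1                    # a candidate estimate, off by 0.1 in source space

# Noise-free comparison of the two error metrics for the same estimate:
err_source = (x_hat - X_true) ** 2            # metric of Eq. (22)
err_obs = (f(x_hat) - f(X_true)) ** 2         # metric of Eq. (23)

print(err_source)              # 0.01
print(err_obs)                 # about 1.59: the same miss, magnified
print(err_obs / err_source)    # roughly f'(X_true)**2 = 144
```

Because the two criteria weight errors so differently, the estimator that minimizes one need not minimize the other whenever f is nonlinear.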



These two are often confused, and each is called "minimizing the mean-square error." The confusion arises because often the quantity Y = f(X) which is observed is the same or nearly the same as X. However, in our case, they are quite different. The quantity we are trying to measure is the distribution I(\theta),






