THE LAW OF ERRORS OF OBSERVATIONS. 
We conclude therefore that if a great number of minute independent Errors be combined, and if we write

$$m = \alpha + \beta + \gamma + \ldots = \text{sum of mean Errors},$$
$$h = \lambda + \mu + \nu + \ldots = \text{sum of mean squares of Errors},$$
$$i = \alpha^2 + \beta^2 + \gamma^2 + \ldots = \text{sum of squares of mean Errors}, \qquad (11)^*$$
the resulting function of Error will be

$$y = \frac{1}{\sqrt{2\pi(h-i)}}\, e^{-\frac{(x-m)^2}{2(h-i)}}. \qquad (12)$$
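As a numerical illustration (not part of the original text), the law (12) can be checked by simulating a great number of minute independent Errors; the uniform component distributions below are arbitrary choices made for this sketch.

```python
import math
import random

random.seed(42)

# Five minute independent Errors, each uniform on (lo, hi); the
# intervals are arbitrary illustrative choices.
components = [(-0.1, 0.3), (-0.2, 0.1), (0.0, 0.2), (-0.15, 0.25), (-0.05, 0.05)]

# For a uniform(lo, hi) variable:
#   mean        = (lo + hi) / 2
#   mean square = (hi^3 - lo^3) / (3 (hi - lo))
means = [(lo + hi) / 2 for lo, hi in components]
mean_squares = [(hi**3 - lo**3) / (3 * (hi - lo)) for lo, hi in components]

m = sum(means)                   # sum of mean Errors
h = sum(mean_squares)            # sum of mean squares of Errors
i = sum(mu**2 for mu in means)   # sum of squares of mean Errors

# Simulate the composite Error x = sum of the components.
N = 200_000
samples = [sum(random.uniform(lo, hi) for lo, hi in components)
           for _ in range(N)]

emp_mean = sum(samples) / N
emp_var = sum((x - emp_mean) ** 2 for x in samples) / N
sigma = math.sqrt(h - i)
within_one_sigma = sum(abs(x - m) <= sigma for x in samples) / N

print(emp_mean, m)        # empirical mean is close to m
print(emp_var, h - i)     # empirical variance is close to h - i
print(within_one_sigma)   # close to the Gaussian value 0.6827
```

Even with only five components the simulated distribution lies close to the limiting law; increasing the number of minute Errors tightens the agreement.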
The Probability of an Error being found to lie between $x$ and $x+dx$ is of course

$$\frac{1}{\sqrt{2\pi(h-i)}}\, e^{-\frac{(x-m)^2}{2(h-i)}}\, dx\,\dagger.$$
If positive and negative errors in the observation are equally probable, as generally can be secured in practice, at least approximately, then $m = 0$; that is, the sum of the mean values of the elementary component Errors vanishes, and the Probability is expressed by the usual value

$$\frac{1}{c\sqrt{\pi}}\, e^{-\frac{x^2}{c^2}}\, dx,$$

where $c^2 = 2(h-i)$.
If we calculate by integration from equation (12) the mean value of the composite 
Error (or, as Gauss calls it, the constant part of the Error) and the mean value of its 
square, we shall find 
$$\text{Mean Error} = m = \text{sum of mean values of component Errors},$$
$$\text{Mean Square of Error} = h + m^2 - i.$$
We have thus a verification of the correctness of our analysis, as the same results may be found from independent algebraical computation ‡.
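The independent computation mentioned here can be carried out exactly for small discrete Errors. The following sketch uses illustrative two-valued component distributions and exact rational arithmetic:

```python
from fractions import Fraction as F
from itertools import product

# Three independent component Errors, each a small discrete variable
# given as (value, probability) pairs; the numbers are illustrative.
errors = [
    [(F(-1), F(1, 2)), (F(2), F(1, 2))],
    [(F(0), F(1, 3)), (F(1), F(2, 3))],
    [(F(-2), F(1, 4)), (F(1), F(3, 4))],
]

def mean(dist):
    return sum(v * p for v, p in dist)

def mean_square(dist):
    return sum(v * v * p for v, p in dist)

m = sum(mean(d) for d in errors)           # sum of mean Errors
h = sum(mean_square(d) for d in errors)    # sum of mean squares of Errors
i = sum(mean(d) ** 2 for d in errors)      # sum of squares of mean Errors

# Exact mean and mean square of the composite Error by enumerating
# every combination of component values.
M_U = sum(p1 * p2 * p3 * (v1 + v2 + v3)
          for (v1, p1), (v2, p2), (v3, p3) in product(*errors))
M_U2 = sum(p1 * p2 * p3 * (v1 + v2 + v3) ** 2
           for (v1, p1), (v2, p2), (v3, p3) in product(*errors))

assert M_U == m               # Mean Error = m
assert M_U2 == h + m * m - i  # Mean Square of Error = h + m^2 - i
```

Because the distributions are finite and the arithmetic is exact, both relations hold identically, not merely to numerical precision.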
8. Considering the celebrity of the question, it may not be superfluous to show how 
the result might have been obtained without any antecedent knowledge of the peculiar 
property of combination of the Errors in equation (8). 
* We may observe that $h - i$ is always positive; for if we take any set of numbers, positive or negative, the mean of their squares is always greater than the square of the mean (see Todhunter's 'Algebra,' p. 407). Therefore

$$\lambda > \alpha^2, \quad \mu > \beta^2, \quad \nu > \gamma^2, \text{ \&c.}$$

Consequently $h > i$.
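The inequality cited from Todhunter can be checked directly for any set of numbers (the values below are arbitrary); the difference of the two sides is in fact the mean of the squared deviations from the mean:

```python
# Mean of squares vs. square of mean for an arbitrary set of numbers.
xs = [3, -1, 4, 1, -5, 9]
n = len(xs)
mean_of_squares = sum(x * x for x in xs) / n
square_of_mean = (sum(xs) / n) ** 2

# Their difference equals the mean squared deviation from the mean,
# which is non-negative (and positive unless all the numbers are equal).
mean_val = sum(xs) / n
deviation = sum((x - mean_val) ** 2 for x in xs) / n

assert mean_of_squares > square_of_mean
assert abs((mean_of_squares - square_of_mean) - deviation) < 1e-12
print(mean_of_squares, square_of_mean)
```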
† This expression will be found to agree with Poisson's final result in the memoir already cited.
‡ If

$$U = a + b + c + d + \text{\&c.},$$

where each of the quantities $a, b, c, d$, &c. may take any number (different for each quantity) of different independent values, adopting for shortness the symbol $M(K)$ for "the mean value of $K$," it is not difficult to prove, by elementary algebra, that

$$M(U) = M(a) + M(b) + M(c) + \text{\&c.} = \Sigma M(a),$$
$$M(U^2) = M(a^2) + M(b^2) + M(c^2) + \ldots + 2\Sigma\{M(a)M(b)\},$$

or

$$M(U^2) = \Sigma M(a^2) + \{\Sigma M(a)\}^2 - \Sigma\{M(a)\}^2.$$
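The step connecting the two expressions for $M(U^2)$ in this footnote is the algebraic identity $2\Sigma\{M(a)M(b)\} = \{\Sigma M(a)\}^2 - \Sigma\{M(a)\}^2$, which a quick numerical sketch (with arbitrary mean values) confirms:

```python
from itertools import combinations

# Arbitrary illustrative mean values M(a), M(b), M(c), M(d).
M = [0.5, -1.25, 2.0, 0.75]

# Twice the sum of products over distinct pairs ...
pair_sum = 2 * sum(x * y for x, y in combinations(M, 2))

# ... equals the square of the sum minus the sum of the squares.
identity = sum(M) ** 2 - sum(x * x for x in M)

assert abs(pair_sum - identity) < 1e-12
print(pair_sum, identity)
```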
