Method of Least Squares. 367 
case of an infinite number of errors; that, for the finite case, there would usually be substituted (n − 1) for n in the above value of c. It is with the greatest diffidence that I submit a statement which seems to be contradicted by the most distinguished authorities, including Gauss and Airy*; and only not contradicted, but not, as far as I remember, asserted, by Laplace and Poisson. Perhaps I have misunderstood the quaesitum proposed by Airy, Merriman†, and other writers.
For I assert with some confidence that (in spite of the rather puzzling expressions of some authors) the procedure above employed to determine the maximum of an expression involving two independent variables is correct. I submit also that the only significant, or at least the most important, quaesita afforded by the case in hand are either what was enounced above, or what is the "probable error"‡ incurred by taking
the mean of the observations as the real value. To investigate this probable error, it is proper to consider the given observations as generated by one of an indefinite number of constitutions corresponding to the different values of ξ and c, each of which operates in the long run a number of times proportionate to

\[
\frac{1}{c^{n}}\, e^{-\Sigma (x-\xi)^2 / c^2} ;
\]

that is, if we take the origin at the mean point of the observations, and put c = 1/h, proportionate to

\[
\frac{\displaystyle \int_{0}^{\infty} h^{\,n-2}\, e^{-h^2 (n\xi^2 + \Sigma x^2)}\, dh \; d\xi}
     {\displaystyle \int_{-\infty}^{+\infty}\!\int_{0}^{\infty} h^{\,n-2}\, e^{-h^2 (n\xi^2 + \Sigma x^2)}\, dh\, d\xi}.
\]
It may be observed that when h is constant this expression reduces to

\[
\frac{\sqrt{n}\, h}{\sqrt{\pi}}\; e^{-n h^2 \xi^2},
\]
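This reduction can be checked numerically (a sketch of my own, not part of the original; the function names are mine): for fixed h the weight e^{−h²(nξ² + Σx²)}, normalized over ξ, should coincide with (√n h/√π) e^{−n h² ξ²}.

```python
import math

def reduced(xi, n, h):
    """The closed form (sqrt(n)*h/sqrt(pi)) * exp(-n*h^2*xi^2)."""
    return math.sqrt(n) * h / math.sqrt(math.pi) * math.exp(-n * h**2 * xi**2)

def ratio(xi, n, h, lim=10.0, steps=20001):
    """e^{-n h^2 xi^2} divided by its integral over xi, by trapezoidal quadrature.

    (The factor e^{-h^2 Sum x^2} is common to numerator and denominator
    and cancels, so it is omitted here.)
    """
    du = 2 * lim / (steps - 1)
    total = 0.0
    for i in range(steps):
        u = -lim + i * du
        w = 0.5 if i in (0, steps - 1) else 1.0
        total += w * math.exp(-n * h**2 * u**2) * du
    return math.exp(-n * h**2 * xi**2) / total

# Agreement at several points for an arbitrary choice of n and h.
n, h = 5, 0.7
for xi in (0.0, 0.3, 1.0):
    assert abs(ratio(xi, n, h) - reduced(xi, n, h)) < 1e-5
```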
from which we see that in case (1) the probable error incurred by taking the average of the operations is the unit probable error, as it may be called, ·476, multiplied by \(\frac{1}{\sqrt{n}\,h}\); as was to be ex-
pected. But in case (3) this analogy is deserted. It appears 
from the preceding expression, that the number of times we 
* §§ 58-60.
† 'Least Squares.'
‡ Cf. Merriman, 'Least Squares,' pp. 27 & 146.
