350 J. F. ENCKE ON THE METHOD OF LEAST SQUARES.



exceed it. If there were no easterly deviation, there would be in x an error of 5‴·086; but as this is more than five times the probable error of x, the existence of an easterly deviation borders closely on certainty. If it were desired to determine the absolute value within narrower limits, it would be necessary to make a considerably larger number of experiments of the same kind. About 2600 would be required in order to reduce the probable error of x to 0‴·1.
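The figure of about 2600 follows from the familiar rule that the probable error of a mean shrinks as 1/√n, so reducing it by a factor k demands k² times as many trials. A short sketch makes the arithmetic explicit; the starting probable error of roughly 0‴·8 is not stated in the text but is back-computed from its own numbers (40 experiments, target 0‴·1, about 2600 required), so it should be read as illustrative:

```python
# The probable error of a mean scales as 1/sqrt(n): with n0 observations
# and probable error r0, a target probable error r needs
#   n = n0 * (r0 / r)**2
# observations of the same kind.

n0 = 40          # experiments actually made
r0 = 0.806       # probable error of x (lines); back-computed, illustrative
r_target = 0.1   # desired probable error, 0'''.1

n_required = n0 * (r0 / r_target) ** 2
print(round(n_required))   # close to the text's estimate of about 2600
```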



It must not be overlooked that the limit of error is clearly much too narrow, partly because, with the absolute smallness of w, a constant error in this kind of observation would have a proportionally very great influence, partly because the exclusion of observations which deviate more than two inches (of which there were in all eleven in forty) can hardly be perfectly justified. Such an exclusion, moreover, if made after the result, is open to the danger of leading away from the pure truth, and it always produces an erroneous representation of the certainty of the result.
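The biasing effect of rejecting large deviations after the fact is easy to exhibit by a small simulation (a hypothetical illustration, not part of the original): if errors are drawn from a known law and those beyond a fixed cut are then discarded, the surviving sample appears more precise than the process that produced it, which is exactly the "erroneous representation of the certainty of the result" warned of above.

```python
import random
import statistics

random.seed(1)
sigma = 1.0                       # true scale of the error law
errors = [random.gauss(0.0, sigma) for _ in range(10_000)]

# For a normal error law, probable error ~ 0.6745 * standard deviation.
full = 0.6745 * statistics.pstdev(errors)

# Reject, after the fact, every error beyond a fixed cut.
cut = 1.0
kept = [e for e in errors if abs(e) <= cut]
trimmed = 0.6745 * statistics.pstdev(kept)

# The trimmed sample understates the true uncertainty.
print(round(full, 3), round(trimmed, 3))
```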



The most troublesome part of the calculation in this simplest case being the determination of the sum of the squares of the errors, we may wish to attain in a simpler manner to the knowledge of r and h. This examination is besides useful, as the subject is considered in it from another quarter, and the determination of h from the sum of the squares of the errors is attained by another way.



If the law of the errors were given generally by φΔ (without determinate assumption of the above function φΔ), and if this function were completely known, then, in respect to m observations of any kind, we should be able, even before we knew their result, to form a conclusion as to the distribution of the errors and as to the magnitude of any of their functions; which conclusion would be so much the more confirmed after the observations were made, as m is greater. So, for example, according to probability, there would be between Δ = a and Δ = b a number of errors



\[ m \int_a^b \varphi(\Delta)\, d\Delta ; \]
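The predicted number of errors between Δ = a and Δ = b, namely m ∫ₐᵇ φ(Δ) dΔ, can be checked numerically. Assuming the Gaussian form of the error law used elsewhere in the memoir, φ(Δ) = (h/√π)·e^(−h²Δ²) (the choice h = 1 below is arbitrary), the integral reduces to a difference of error functions, and a simulated set of m observations reproduces the predicted count:

```python
import math
import random

h = 1.0            # measure of precision (arbitrary here)
a, b = 0.5, 1.5    # interval of error magnitudes
m = 100_000        # number of observations

# Predicted number of errors between a and b:
#   m * integral_a^b (h/sqrt(pi)) * exp(-h^2 D^2) dD
#     = m * (erf(h*b) - erf(h*a)) / 2
predicted = m * (math.erf(h * b) - math.erf(h * a)) / 2

# This phi corresponds to a normal law with sigma = 1/(h*sqrt(2)).
random.seed(0)
sigma = 1.0 / (h * math.sqrt(2.0))
observed = sum(1 for _ in range(m) if a <= random.gauss(0.0, sigma) <= b)

print(round(predicted), observed)   # the two counts agree closely
```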



also, as m φ(Δ) is the number of errors of the magnitude Δ, the magnitude m Δⁿ φ(Δ) will be the sum of the nth powers of the errors of the magnitude Δ in m observations; and, consequently,



