208 Mr. ELLIS, ON THE METHOD OF LEAST SQUARES.



$$(a_1 x + b_1 y + \cdots - V_1)^2 + (a_2 x + b_2 y + \cdots - V_2)^2 + \cdots,$$

a minimum: that is to say, they will make the sum of the squares of the errors a minimum.

Hence the method of least squares. The conditions of the minimum give the linear equations:



$$\left.\begin{aligned}
x\,\Sigma a^2 + y\,\Sigma ab + \&c. &= \Sigma aV\\
x\,\Sigma ab + y\,\Sigma b^2 + \&c. &= \Sigma bV\\
\&c. &= \&c.
\end{aligned}\right\}\qquad(\beta),$$



in which system there are always the same number of equations as there are unknown quantities to be determined.
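In modern matrix notation the system $(\beta)$ is $A^{\mathsf T}A\,u = A^{\mathsf T}V$, where the columns of $A$ hold the coefficients $a, b$, &c. The following sketch (the coefficients and observed values are invented for illustration) checks numerically that solving the normal equations gives the same result as a direct least-squares fit:

```python
import numpy as np

# Invented equations of condition a_i*x + b_i*y = V_i: four equations, two unknowns.
A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, -1.0],
              [3.0, 1.0]])   # columns hold the coefficients a_i and b_i
V = np.array([5.1, 4.0, -0.9, 7.2])

# The normal equations (beta): x*Sum(a^2) + y*Sum(ab) = Sum(aV), etc.,
# i.e. (A^T A) u = A^T V with u = (x, y).
u = np.linalg.solve(A.T @ A, A.T @ V)

# The same values minimise the sum of squared errors, as lstsq confirms.
u_lstsq, *_ = np.linalg.lstsq(A, V, rcond=None)
print(np.allclose(u, u_lstsq))  # True
```

The system $(\beta)$ is square ($p$ equations in $p$ unknowns) even though the original equations of condition outnumber the unknowns, which is what makes it solvable in the ordinary way.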



The next investigation of the principle of the method of least squares which I shall attempt to analyze is that of Laplace.



LAPLACE'S DEMONSTRATION. 



If, in order to determine $x$ from the equations of condition stated in the last paragraph, we multiply the first by $\mu_1$, the second by $\mu_2$, &c., and add, $\mu_1, \mu_2$, &c. fulfilling the conditions

$$\Sigma\mu a = 1,\quad \Sigma\mu b = 0,\quad \&c. = 0,$$

we find

$$x = \Sigma\mu V - \Sigma\mu e;$$

and if we assume that $\Sigma\mu e$ is equal to zero, then the resulting value of $x$ is $\Sigma\mu V$: the error of this determination being the quantity $\Sigma\mu e$, which we have assumed to be equal to zero, without knowing whether it really is so or not.



Now supposing there are $n$ equations of condition, and $p$ quantities to be determined, and that $n$ is greater than $p$, then we see that there are $n$ factors $\mu_1, \mu_2, \ldots, \mu_n$, and $p$ conditions for them to fulfil. They may therefore be subjected to $n - p$ additional conditions.
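The factor method can be checked numerically: any $\mu$ satisfying $\Sigma\mu a = 1$, $\Sigma\mu b = 0$ recovers $x$ exactly when the observations carry no error. A sketch with invented coefficients, using the first row of the matrix pseudoinverse as one valid choice of factors:

```python
import numpy as np

# Invented coefficients: n = 4 equations of condition in p = 2 unknowns.
A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, -1.0],
              [3.0, 1.0]])
x_true, y_true = 2.0, -1.0
V = A @ np.array([x_true, y_true])     # error-free observations, e = 0

# One valid factor set: the first row of pinv(A). Since pinv(A) @ A = I,
# it satisfies Sum(mu*a) = 1 and Sum(mu*b) = 0 by construction.
mu = np.linalg.pinv(A)[0]
print(np.allclose(mu @ A[:, 0], 1.0))  # Sum(mu*a) = 1
print(np.allclose(mu @ A[:, 1], 0.0))  # Sum(mu*b) = 0
print(np.allclose(mu @ V, x_true))     # x = Sum(mu*V) when Sum(mu*e) = 0
```

With $n = 4$ and $p = 2$ there remain $n - p = 2$ free directions in which $\mu$ can be varied without violating the two conditions, which is the freedom Laplace's argument exploits.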



This being premised, let us consider the probability that the quantity $\Sigma\mu e$ will not be less than $\alpha$, or greater than $\beta$, $\alpha$ and $\beta$ being any quantities whatever. The law of probability of error at each observation being given, the question is evidently analogous to the common problem of finding the chance that, with a given set of dice, the number of points thrown shall not be less than one given number or greater than another.
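The dice problem alluded to is a convolution: the law of the total of independent throws is the repeated convolution of the single-die law, after which the chance of falling between two bounds is a partial sum. A sketch (three ordinary six-sided dice, with bounds chosen arbitrarily):

```python
import numpy as np

die = np.full(6, 1 / 6)          # fair six-sided die, faces 1..6

# Distribution of the total of three throws: convolve the single-die law twice.
dist = die.copy()
for _ in range(2):
    dist = np.convolve(dist, die)
# dist[k] is the probability that the total equals k + 3 (minimum total is 3).

totals = np.arange(3, 19)
# Chance that the total is not less than 8 nor greater than 13.
P = dist[(totals >= 8) & (totals <= 13)].sum()
print(round(P, 4))  # 0.6759, i.e. 146/216
```

The quantity $\Sigma\mu e$ is treated in just this way, with the continuous law of error at each observation in place of the discrete law of a die.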



We may therefore suppose that the probability in question has been determined: call it $P$. Suppose also that we have taken $\alpha = -l$ and $\beta = l$, $l$ being any positive quantity.



Then $P$ is a function of $l$, and of $\mu_1 \ldots \mu_n$.



Let us now so determine $\mu_1 \ldots \mu_n$ (subject to the conditions already specified) that $P$ may be a maximum. When this is done, it follows that there is a greater probability that the error in our determination of $x$, viz. $\Sigma\mu e$, lies within the limits $\pm l$, than if we had made use of any other set of factors whatever.



On this principle Laplace determines what he calls the most advantageous system of factors. 



It does not follow that the value thus obtained for $x$ is the most probable value that could be assigned to it. But if we consider a large number of sets of observations (the quantities $a$, $b$, &c. being the same for all), then the error which we commit by using Laplace's factors will in a greater proportion of cases lie between $\pm l$ than if we had used any other system of factors.
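This statistical claim can be illustrated by simulation: over many repeated sets of observations with random errors, the least-squares factors leave the error of $x$ inside fixed limits $\pm l$ in a larger fraction of trials than an alternative valid factor set does. A sketch, with invented coefficients and Gaussian errors standing in for the unspecified law of error:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, -1.0],
              [3.0, 1.0]])
x_true, y_true = 2.0, -1.0
V_exact = A @ np.array([x_true, y_true])

# Least-squares ("most advantageous") factors: first row of the pseudoinverse.
mu_ls = np.linalg.pinv(A)[0]

# An alternative valid factor set: add a direction orthogonal to both columns
# of A, so Sum(mu*a) = 1 and Sum(mu*b) = 0 continue to hold.
q = np.linalg.qr(A, mode='complete')[0][:, 2]
mu_alt = mu_ls + 2.0 * q

l = 0.5                                   # fixed error limits +/- l
trials = 20000
errors = rng.normal(0.0, 1.0, size=(trials, 4))   # fresh errors e each trial
V = V_exact + errors
err_ls = V @ mu_ls - x_true               # error Sum(mu*e) for each trial
err_alt = V @ mu_alt - x_true
print(np.mean(np.abs(err_ls) <= l) > np.mean(np.abs(err_alt) <= l))  # True
```

Both factor sets give an unbiased determination of $x$; the least-squares set simply concentrates the errors more tightly, which is exactly the sense in which Laplace calls it most advantageous.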



The investigation has reference merely to the different ways in which by the method of factors 

 a given set of linear equations may be solved. 



We now enter on the analysis requisite to determine P. 



Let the probability that $\Sigma\mu e$ will be precisely equal to $u$ be $p\,du$. Then manifestly

$$P = \int_{-l}^{+l} p\,du,$$

and we have therefore only to determine $p$.



