Kirstine Smith 263
vanished, and the 'best' value of σ would be determined by σ² = √(μ₄/3), where μ₄ is
to be taken about the point for which μ₃ = 0*.
From another standpoint, however, the 'best values' of the frequency constants
may be said to be those for which

χ² = S{(n′_s − n_s)²/n_s}

is a minimum, where n′_s is the observed frequency and n_s the theoretical frequency
of the sth group†. For when χ² is a minimum then P, the probability of occurrence
of a result as divergent as or more divergent than the observed, will be a maximum, 
or the frequency constants will have been so chosen as to make the probability
P of results as divergent from theory as the observed data a maximum.
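The minimum-χ² criterion just described can be sketched numerically. The grouped frequencies and the fitted model (a binomial with four trials) below are hypothetical, chosen only to show the mechanics: χ² is computed for each candidate value of the constant and the value giving the smallest χ² is taken as 'best'.

```python
# Minimal sketch of the minimum-chi-square criterion; data are hypothetical.
from math import comb

observed = [18, 42, 27, 10, 3]   # observed group frequencies n'_s
N = sum(observed)                # total number of observations

def expected(p):
    """Theoretical frequencies n_s for a binomial with 4 trials and chance p."""
    return [N * comb(4, s) * p**s * (1 - p)**(4 - s) for s in range(5)]

def chi_square(p):
    """chi^2 = S{(n'_s - n_s)^2 / n_s}."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected(p)))

# The 'best' p is the one minimising chi^2 (crude grid search).
best_p = min((k / 1000 for k in range(1, 1000)), key=chi_square)

# For comparison, the method-of-moments estimate (the mean value).
moment_p = sum(s * o for s, o in enumerate(observed)) / (4 * N)

print(best_p, moment_p)  # the two estimates need not coincide
```

This makes concrete the paradox stated next: the value of the constant that minimises χ² is in general not the same as the moment estimate.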
It sounds somewhat paradoxical, but it is none the less true to say that the 
'best value' of the mean is not necessarily the mean value, nor the 'best value' of 
the mean square deviation necessarily the mean square deviation‡. I shall illustrate
this in the following five cases:
I. Fit of a normal curve to unilateral data. 
II. Fit of a normal curve to bilateral data. 
III. Fit of a Poisson limit to the binomial. 
IV. Fit of a binomial to binomial data.
V. Fit of regression lines. 
The general method is as follows. Suppose f to be any independent frequency
constant; then P is to be a maximum with the variation of f. Accordingly we have
from

N + χ² = S{n′_s²/n_s}
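The identity invoked here follows from expanding the square in χ², assuming that the observed and theoretical frequencies each sum to N; the sketch below (a reconstruction, not part of the original text) also shows the condition on f that results from differentiating it:

$$\chi^2 = S\left\{\frac{(n_s' - n_s)^2}{n_s}\right\} = S\left\{\frac{n_s'^2}{n_s}\right\} - 2\,S\{n_s'\} + S\{n_s\} = S\left\{\frac{n_s'^2}{n_s}\right\} - N,$$

so that, N being fixed, χ² is a minimum when

$$\frac{d\chi^2}{df} = -\,S\left\{\frac{n_s'^2}{n_s^2}\,\frac{dn_s}{df}\right\} = 0.$$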
* University of London, Honours B.Sc. Papers in Statistics, Thursday, Oct. 28, 1915.
† Phil. Mag. Vol. L, p. 157, 1900.
‡ There is a point of some philosophical interest here which deserves further consideration. As is
well known the Gaussian demonstration depends on making the product

Π_s {1/(σ√(2π))} e^(−(x_s − x̄)²/(2σ²)),

s being taken so as to include each individual observation, a maximum by varying σ and x̄, the result
being that the 'best' values are found from the first two moments. Now it will be observed that this
is not the same idea as lies in the test of goodness of fit. The conception of 'goodness' in that case
is that we should measure the probability of a drawing from a certain population giving as divergent
or a more divergent result than that observed. In other words while the Gaussian test makes a single
ordinate of a generalised frequency surface a maximum, the χ² test makes a real probability, namely
the whole volume lying outside a certain contour surface defined by χ², a maximum. Logically this seems
the more reasonable, for the above product used in the Gaussian proof is not a probability at all. To
make it a probability it must be multiplied by the product of the elements δx_s, and then the probability of the actually
observed result, namely x₁, x₂, … x_s, … x_n, will of course be infinitely small, and what is made a maximum
is an infinitely small probability. The exact meaning of P when each x_s is an actual observation is
obscure, but it appears that the probability for constant indefinitely small ranges of the variates in the
neighbourhood of the observed values is made a maximum. But probability means the frequency of
recurrence in a repeated series of trials and this probability is in the case supposed indefinitely small.
It seems far more reasonable to make a finite probability, i.e. the probability of a divergence as great or
greater than the observed, a maximum, i.e. to use the χ² test and not the Gaussian principle.
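The footnote's claim that the Gaussian product yields the first two moments can be verified in a line or two; writing the logarithm of the product over n observations (a standard sketch, not part of the original text):

$$\log\Pi = -\,n\log\!\left(\sigma\sqrt{2\pi}\right) - \sum_{s=1}^{n}\frac{(x_s-\bar{x})^2}{2\sigma^2},$$

and setting the derivatives with respect to x̄ and σ to zero gives

$$\bar{x} = \frac{1}{n}\sum_{s} x_s, \qquad \sigma^2 = \frac{1}{n}\sum_{s}\left(x_s-\bar{x}\right)^2,$$

that is, the mean and the mean square deviation.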
