Karl Pearson 
Next, considering the second summation and summing first $n_{st}\,y_t$ for all values of $t$, it equals $n_s\,\bar y_s$, where $\bar y_s$ is the mean of all $y$'s in the array corresponding to $x_s$. But
if the regression be linear, $\bar y_s = r\dfrac{\sigma_y}{\sigma_x}\,x_s$. Thus the second sum equals

$$r\frac{\sigma_y}{\sigma_x}\times\frac{h}{24}\,S\{x_s(n_{s-1}-n_{s+1})\}$$

$$= \frac{r\sigma_y}{24\,\sigma_x}\,h\left\{S(x_{s-1}n_{s-1}-x_{s+1}n_{s+1}) + hS(n_{s-1}) + hS(n_{s+1})\right\}$$

$$= \frac{1}{12}\,h^2 N r\,\frac{\sigma_y}{\sigma_x},$$
as before. But in a term of this order we may put 
$$N r\,\sigma_y\,\sigma_x = S(n_{st}\,x_s\,y_t).$$
Similarly the third summation 
$$= S(n_{st}\,x_s\,y_t)\times\frac{h^2}{12\,\sigma_y^2}.$$
Or, finally,
$$S(n_{st}\,\bar x_{st}\,\bar y_{st}) = S(n_{st}\,x_s\,y_t)\left[1 - \frac{h^2}{12\,\sigma_x^2} - \frac{h^2}{12\,\sigma_y^2}\right] \qquad \text{(xvi)}.$$
Thus
$$\frac{S(n_{st}\,\bar x_{st}\,\bar y_{st})}{N\,\Sigma_x\,\Sigma_y} = \frac{S(n_{st}\,x_s\,y_t)\left(1 - \dfrac{h^2}{12\,\sigma_x^2} - \dfrac{h^2}{12\,\sigma_y^2}\right)}{N\,\sigma_x\,\sigma_y\left(1 - \dfrac{h^2}{12\,\sigma_x^2}\right)\left(1 - \dfrac{h^2}{12\,\sigma_y^2}\right)},$$
or, since we neglect terms of the fourth order,
$$\frac{S(n_{st}\,\bar x_{st}\,\bar y_{st})}{N\,\Sigma_x\,\Sigma_y} = \frac{S(n_{st}\,x_s\,y_t)}{N\,\sigma_x\,\sigma_y} = r_{xy} \qquad \text{(xvii)},$$
the usual value, $\sigma_x$ and $\sigma_y$ having of course to be corrected by Sheppard.
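The Sheppard procedure invoked here can be illustrated numerically. The following sketch (not from the paper; the simulated bivariate normal sample and the grid of subranges are hypothetical) groups observations into equal subranges of breadth $h$, replaces each by its mid-abscissa, and shows that the product sum needs no correction while $\sigma_x$ and $\sigma_y$ must each have $h^2/12$ subtracted from their squares before the usual correlation formula recovers the ungrouped value.

```python
# Illustrative check of Sheppard's correction for a grouped correlation
# coefficient. The sample below is simulated; nothing here is taken from
# the paper's own data.
import math, random

random.seed(1)
N = 150_000
r_true = 0.6
h = 1.0  # breadth of the (equal) subranges; chosen for illustration

xs, ys = [], []
for _ in range(N):
    u = random.gauss(0.0, 1.0)
    v = random.gauss(0.0, 1.0)
    xs.append(u)
    ys.append(r_true * u + math.sqrt(1.0 - r_true**2) * v)

def mid(v):
    # mid-abscissa of the subrange containing v
    return h * math.floor(v / h) + h / 2.0

gx = [mid(v) for v in xs]
gy = [mid(v) for v in ys]

def moments(a, b):
    # population-style variances and covariance
    ma, mb = sum(a) / N, sum(b) / N
    va = sum((t - ma) ** 2 for t in a) / N
    vb = sum((t - mb) ** 2 for t in b) / N
    cov = sum((s - ma) * (t - mb) for s, t in zip(a, b)) / N
    return va, vb, cov

va, vb, cov = moments(xs, ys)
r_sample = cov / math.sqrt(va * vb)          # ungrouped correlation

va_g, vb_g, cov_g = moments(gx, gy)
r_raw = cov_g / math.sqrt(va_g * vb_g)       # grouped, uncorrected
r_shep = cov_g / math.sqrt((va_g - h * h / 12) * (vb_g - h * h / 12))

print(r_sample, r_raw, r_shep)
```

With subranges as broad as a whole standard deviation, the uncorrected grouped coefficient falls visibly short of the ungrouped one, while the Sheppard-corrected value agrees with it closely.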
We see by (xv) that the standard deviation $\Sigma_x$ of the subrange means satisfies
$$\Sigma_x^2 = \sigma_x^2 - \frac{h^2}{6},$$
but by Sheppard the corrected standard deviation $\bar\sigma_x$ is given by
$$\bar\sigma_x^2 = \sigma_x^2 - \frac{h^2}{12}.$$
Accordingly
$$\Sigma_x^2 = \bar\sigma_x^2 - \frac{h^2}{12},$$
which enables us with equal small subranges to use the standard deviation of 
means or mid-abscissae at our pleasure. 
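The interchangeability asserted here can be checked numerically. The sketch below (not from the paper; the normal sample and the subrange breadth are hypothetical) verifies the two facts the argument rests on: with equal small subranges of breadth $h$, the variance of the subrange means falls short of the true variance by about $h^2/12$, while the variance of the mid-abscissae exceeds it by about $h^2/12$, so either standard deviation serves once its correction is applied.

```python
# Illustrative comparison of the standard deviation of subrange means with
# that of mid-abscissae, each brought to the true value by an h^2/12
# correction. The sample is simulated for the purpose of the check.
import math, random
from collections import defaultdict

random.seed(2)
N = 150_000
h = 0.5  # breadth of the equal subranges
xs = [random.gauss(0.0, 1.0) for _ in range(N)]

# Sort observations into subranges; record each subrange's mid-abscissa
# and the mean of the observations falling within it.
bins = defaultdict(list)
for v in xs:
    bins[math.floor(v / h)].append(v)
mid = {b: h * b + h / 2 for b in bins}
mean = {b: sum(vs) / len(vs) for b, vs in bins.items()}

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((t - m) ** 2 for t in vals) / len(vals)

v_true = variance(xs)
v_mid = variance([mid[math.floor(v / h)] for v in xs])    # mid-abscissae
v_mean = variance([mean[math.floor(v / h)] for v in xs])  # subrange means

print(v_true, v_mid - h * h / 12, v_mean + h * h / 12)
```

Both corrected variances agree with the true one to well within sampling error, which is the sense in which the two standard deviations may be used at our pleasure.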
(8) Of course it is absurd in practice to push our results to the extreme* of 
two categories only, but theoretically it is not without interest to note the results 
which flow from such an assumption. 
* We have seen that $\bar x_{st}\,\bar y_{st}$ can only be replaced by $x_s\,y_t$ in (xi), or, as a special case, (xii) used, provided the subranges are equal and fairly small.
