510 R. E. COMSTOCK AND H. F. ROBINSON 



that is significantly⁵ greater than one. The question to be considered is as
follows: Assuming a particular value (> 1.0) for a, how much data is required
if P is to be one-half? Procedure and the argument involved will be given in
detail for Experiment III; only comparative results will be indicated for the
other two.



If the values of the variance components listed in Table 30.4 are substituted
into the expectations of M31 and M32 of Table 30.3, we have

E(M31) = σ² + (r Σu²)/4

E(M32) = σ² + (r Σa²u²)/4



Note that when Σa²u² = Σu², i.e., when a = 1.0, the two expectations are
equal. But if a > 1.0, which means Σa²u² > Σu², then E(M32) > E(M31).
Also, the estimate of a will exceed one only where M32 > M31. It follows that
a one-tailed test of the hypothesis that E(M32) = E(M31) is also a test
of the hypothesis that a ≦ 1.0. Since both mean squares are functions of ran-
dom variables (fixed effects do not contribute to either of them) the variance
ratio test, the F test, is applicable and P is equivalent to the probability that
the test ratio, M32/M31, will exceed Fα, where α is the probability level of the
test.



Let E(M32)/E(M31) = φ. If φ = 1.0, M32/M31 will be distributed in
samples in the same manner as F; otherwise it will be distributed as φF, i.e.,
M32/M31 for any probability point in its distribution will be exactly φ times
the value of F for the same point in the F distribution. Thus the probability
of a sample value of M32/M31 equal to or greater than Fα is the same as that of
a sample value of F equal to or greater than Fα/φ. When degrees of freedom are
equal for the two mean squares, as will always be true in Experiment III, the
50 per cent point of the F distribution is 1.0. Hence P will be one-half when
the amount of data is that for which Fα (the lowest value of M32/M31 to be
considered significantly different from one) is equal to φ.
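The two distributional facts used above can be checked numerically. The sketch below (illustrative only; the degrees of freedom, the value of φ, and the critical value Fα are assumptions for the example, not figures from the text) simulates F variates from sums of squared normals, confirms that the median of F is 1.0 when the two degrees of freedom are equal, and confirms that a ratio distributed as φF exceeds Fα exactly as often as F exceeds Fα/φ.

```python
# Monte Carlo check (not from the original text) of two facts used above:
# (1) with equal degrees of freedom the median of the F distribution is 1.0;
# (2) a ratio distributed as phi*F exceeds F_alpha as often as F exceeds
#     F_alpha/phi.  The df, phi, and F_alpha values are assumed examples.
import random

random.seed(1)

def f_variate(df1, df2):
    """One draw from F(df1, df2), built from sums of squared standard normals."""
    x = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df1))
    y = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df2))
    return (x / df1) / (y / df2)

df = 20                      # equal df for the two mean squares
n = 20_000
samples = sorted(f_variate(df, df) for _ in range(n))
median = samples[n // 2]     # close to 1.0 when df1 == df2

phi = 1.6                    # assumed phi = E(M32)/E(M31)
f_alpha = 2.12               # assumed critical value of the test
# Fraction of phi*F draws exceeding f_alpha vs. fraction of F draws
# exceeding f_alpha/phi -- the two probabilities agree.
p_ratio = sum(phi * f > f_alpha for f in samples) / n
p_shift = sum(f > f_alpha / phi for f in samples) / n
```

The simulation is only a sanity check on the argument; the equal-df median result holds exactly, since F(n, n) is the ratio of two identically distributed quantities.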



We now must know the magnitude of φ when a is not unity.



φ = E(M32)/E(M31) = (4σ² + r Σa²u²)/(4σ² + r Σu²)



It varies with r, the number of replications in the experiment; with the ratio
of Σa²u² to Σu², which is a²; and with the ratio of σ² to Σu². Let c = σ²/Σu².
Then



φ = (4c + ra²)/(4c + r)
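As a worked illustration of this formula (the numerical values of c, a, and r below are assumptions for the example, not taken from the text), φ can be computed directly, and it can be seen that adding replications moves φ from 1.0 toward a²:

```python
# Illustrative sketch of phi = (4c + r*a^2)/(4c + r); the values of c, a,
# and r are assumed for the example, not drawn from the text.
def phi(c, a, r):
    """E(M32)/E(M31) as a function of c = sigma^2/sum(u^2),
    a (degree of dominance), and r (number of replications)."""
    return (4 * c + r * a ** 2) / (4 * c + r)

# With a = 2 and c = 1: one replication gives (4 + 4)/(4 + 1) = 1.6,
# while twenty replications give (4 + 80)/(4 + 20) = 3.5, approaching
# the limit a^2 = 4 as r grows.
p1 = phi(c=1.0, a=2.0, r=1)
p20 = phi(c=1.0, a=2.0, r=20)
```

Since Fα must equal φ for P to be one-half, a larger φ (more replications, or a smaller c) means the required critical value is reached with less data.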



Number of replications is subject to the will of the experimenter, but c and a 



5. In the statistical sense, that the probability of the observed or a larger estimate as
a consequence of random sampling is small.



