Elementary Biometrical Inferences 






2. The Confidence Interval Test of a Parameter Involving One Variable



In the examples just discussed the binomial test involved N values less than 10. The binomial test can also be used when N is larger. However, it is less cumbersome to make use of the expected range for f from an expected single-variable parameter as given in Figure A-2.



Suppose f = 0.3 and N = 100. What could one conclude about H₀: p = 0.5? If p = 0.5 and N = 100, 95% of f's would lie between 0.4 and 0.6. Since f = 0.3, one may reject H₀: p = 0.5. Had f = 0.43 and N = 100, one could accept H₀: p = 0.5. Remember that the decisions made from Figure A-2 about a parameter are at the 5% level of significance; and one can only reject or accept parameters, for these represent idealized inferences about statistics. Statistics are observations or facts, and are not subject to rejection.
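The expected-range check above can be sketched in a few lines. This uses the normal approximation to the binomial as a stand-in for reading Figure A-2; the 1.96 multiplier for a 95% range is an assumption of the sketch, not something taken from the chart itself.

```python
import math

def expected_range(p, n, z=1.96):
    """Approximate 95% range for the sample fraction f under a
    hypothesized p, by the normal approximation to the binomial."""
    se = math.sqrt(p * (1 - p) / n)  # standard error of f
    return p - z * se, p + z * se

lo, hi = expected_range(0.5, 100)
print(round(lo, 3), round(hi, 3))   # 0.402 0.598, close to the 0.4-0.6 of the text

# f = 0.3 falls outside the range: reject H0 p = 0.5
print(not (lo <= 0.3 <= hi))        # True
# f = 0.43 falls inside the range: accept H0 p = 0.5
print(lo <= 0.43 <= hi)             # True
```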



3. Chi-square Test of a Parameter Involving One Variable



It will be useful to describe another method of testing a single-variable parameter when N is reasonably large, which differs from the expected range test. Suppose one expects a 1 : 1 ratio and hence, ideally, 50 cases of one type and 50 cases of the other out of a sample of 100. But suppose one observes 55 of one type and 45 of the other. In order to judge whether the observations agree with expectation, one must find the probability of obtaining, on a null hypothesis, a result this extreme or more extreme in samples of 100 taken from an ideal population. Although the probability could be determined by summing the appropriate terms of (½ + ½)¹⁰⁰, the time required is prohibitive (unless, of course, one has access to a computer). It has been found that an approximate value of the desired probability may be obtained



from a quantity called chi-square (χ²), a comparatively easy computation:

    χ² = Σ (|observed − expected| − ½)² / expected



The term ½ is called Yates' correction. It may be omitted when N and the expected values are large, but it is safer to include it in a routine calculation. The formula requires that for each class (here there are only two, success and failure, and hence the χ² is considered to have one degree of freedom, χ²(1)) one find the absolute difference between the observed and expected numbers, subtract ½ from this difference (making it closer to 0 by ½), and square the result. This value is divided by the expected number. We do this for each class and sum the terms for all classes. Thus, in our case:



    χ²(1) = (|45 − 50| − ½)²/50 + (|55 − 50| − ½)²/50
          = (4½)²/50 + (4½)²/50
          = 20.25/50 + 20.25/50
          = 40.5/50
          = 0.81



The probability is obtained from a chart of χ² (Figure A-3) under one degree of freedom. (The number of degrees of freedom for such a test is one less than the number of classes; that is, it equals the number of variables.) Thus, from Figure A-3 one finds that the probability lies between 0.35 and 0.40. The difference between what is observed and what is expected according to the null hypothesis is nonsignificant. Therefore, one may accept the hypothesis.

The chi-square method is an approximation and is valid for relatively large samples only. Its use requires that no class have an expected value of less than 2 and that most of the expected values be at least 5.
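The two computations just described can be sketched together: chi-square with Yates' correction for the 55 : 45 sample, its probability from the one-degree-of-freedom χ² distribution (using `math.erfc` as a stand-in for reading Figure A-3), and the exact binomial tail sum that the text notes a computer can handle. The identity P(χ²₍₁₎ > x) = erfc(√(x/2)) is an assumption of this sketch, not part of the original discussion.

```python
import math

observed = [55, 45]
expected = [50, 50]

# Chi-square with Yates' correction: sum of (|O - E| - 1/2)^2 / E.
chi2 = sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # 0.81, as in the worked computation

# For one degree of freedom, P(chi-square > x) = erfc(sqrt(x / 2));
# this replaces reading the probability off Figure A-3.
p_approx = math.erfc(math.sqrt(chi2 / 2))

# Exact two-tailed binomial probability: samples as extreme as 55 : 45
# or more so, i.e. X >= 55 or X <= 45 for X ~ Binomial(100, 1/2).
p_exact = 2 * sum(math.comb(100, k) for k in range(55, 101)) / 2 ** 100

# Both probabilities lie between 0.35 and 0.40, as read from Figure A-3.
print(round(p_approx, 3), round(p_exact, 3))
```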



4. Chi-square Test of a Parameter Involving Two or More Variables



The χ² test is applicable to parameters involving more than two alternative outcomes, hence involving two or more variables.
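As a minimal sketch of the multi-class case: the counts below for a hypothetical 1 : 2 : 1 expectation are invented for illustration, and Yates' correction is omitted here on the common convention that it applies only to the one-degree-of-freedom case.

```python
import math

# Hypothetical counts for a 1:2:1 expectation in a sample of 100.
observed = [22, 55, 23]
expected = [25, 50, 25]

# Plain chi-square, (O - E)^2 / E summed over the three classes.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom = number of classes - 1 = 2; for two degrees
# of freedom the chi-square tail probability is simply exp(-x / 2).
p = math.exp(-chi2 / 2)

print(round(chi2, 2), round(p, 2))  # 1.02 0.6 -> nonsignificant; accept
```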



