p_3(k + m) = p_1(k) p_2(m)                                    (30)



The probability of obtaining a specific value of n is the sum of the probabilities of all sums of k and m which yield n. Thus,

p_3(n) = \sum_{m=1}^{n} p_1(n - m + 1) p_2(m)                 (31)



Substitution from equation (31) into equation (29), as applied to P_3, yields:



P_3(n) = \sum_{j=1}^{n} p_3(j)

P_3(n) = \sum_{j=1}^{n} \sum_{m=1}^{j} p_1(j - m + 1) p_2(m)  (32)



It can be shown by expanding a few terms of equation (32) that this is equivalent to 



P_3(n) = \sum_{m=1}^{n} p_2(m) \sum_{j=1}^{n+1-m} p_1(j)

P_3(n) = \sum_{m=1}^{n} p_2(m) P_1(n + 1 - m)                 (33)
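Equation (33) can be sketched as a short routine. This is an illustrative sketch, not from the source: the function name is hypothetical, and it assumes the paper's 1-based indexing, under which the n - m + 1 shift makes the array entry for index n correspond to a sum of n + 1.

```python
def cumulative_of_sum(p2, P1):
    # Equation (33): P3(n) = sum_{m=1}^{n} p2(m) * P1(n + 1 - m).
    # Python lists are 0-indexed, so p2[m-1] holds p2(m) and P1[j-1]
    # holds P1(j). P1 is clamped to 0 below its tabulated range and to
    # 1 above it, since it is a cumulative distribution function.
    def P1_at(j):
        if j < 1:
            return 0.0
        if j > len(P1):
            return 1.0
        return P1[j - 1]

    n_max = len(p2) + len(P1) - 1
    P3 = []
    for n in range(1, n_max + 1):
        total = 0.0
        for m in range(1, min(n, len(p2)) + 1):
            total += p2[m - 1] * P1_at(n + 1 - m)
        P3.append(total)
    return P3
```

For two unbiased dice (p_1 = p_2 = 1/6 for each face), the first entry of the result is 1/36, the cumulative probability of a sum of 2, and the last entry is 1.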



Cramér (1946; Ch. 15) derives a similar theorem for continuous distribution functions in the form

P_3(x) = \int_{-\infty}^{\infty} P_1(x - z) p_2(z) \, dz       (34)



where P_1 is the cumulative distribution function for one variable, y; p_2 is the distribution function for a second independent variable, z; and x = y + z. The integral on the right is often called a convolution of P_1 and p_2.



When P_1 and p_2 are given as analytic functions, and the integral on the right in equation (34) can be solved easily, this formula can greatly simplify the computations. When either function is available only in tabular form, and the calculations are made on a computer, equation (34) does not appear to offer any computational advantage over equations (29) and (30), which seem to provide more insight into the processes involved.
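As a numerical check on equation (34), the following sketch (not from the source; function names are hypothetical, and both variables are assumed to be standard normal) evaluates the convolution integral by the midpoint rule:

```python
import math

def normal_pdf(t, sigma=1.0):
    # Density of a zero-mean normal variable with standard deviation sigma.
    return math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(t, sigma=1.0):
    # Cumulative distribution function of the same normal variable.
    return 0.5 * (1.0 + math.erf(t / (sigma * math.sqrt(2.0))))

def P3(x, dz=0.01, lim=8.0):
    # Equation (34), truncated to [-lim, lim] and evaluated by the
    # midpoint rule: P3(x) = integral of P1(x - z) * p2(z) dz.
    steps = int(2.0 * lim / dz)
    total = 0.0
    for i in range(steps):
        z = -lim + (i + 0.5) * dz
        total += normal_cdf(x - z) * normal_pdf(z) * dz
    return total
```

Since x = y + z with y and z independent standard normals, P_3 should match the cumulative distribution of a normal with standard deviation sqrt(2), which gives a simple consistency check on the numerical convolution.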



(2) A Demonstration Calculation. The concept may be clarified by considering a 

 familiar example. Consider the sum of the numbers displayed by two unbiased dice. The 

 probability that any of the six faces will be uppermost is 1/6, and the sum of spots may be 






