512 THE BELL SYSTEM TECHNICAL JOURNAL, MAY 1952 



a similar theory can be worked out. The simplest kind of noise in this 

 channel changes a digit into any one of the n other possible numbers with 

 probability p/n. Then the capacity of the channel is 



\[
C = \log(n+1) + p \log\frac{p}{n} + (1-p)\log(1-p).
\]
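As a quick numerical check of this capacity formula, a minimal Python sketch (the function name `capacity` is illustrative, not from the paper); with n = 1 it reduces to the familiar binary symmetric channel capacity 1 - H(p):

```python
import math

def capacity(n, p):
    """Capacity in bits per symbol of the (n+1)-ary channel in which a
    digit is changed to each of the n other values with probability p/n."""
    terms = [math.log2(n + 1)]
    if p > 0:
        terms.append(p * math.log2(p / n))   # p log(p/n) term
    if p < 1:
        terms.append((1 - p) * math.log2(1 - p))
    return sum(terms)
```

With n = 1 and p = 1/2 the channel output is independent of the input and the capacity is zero, as the formula confirms.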



Error-correcting alphabets for this channel can also be constructed and 

 the criterion (4) for good transmission remains unchanged. The proof 

 of theorem 1 can be repeated with little change using 



\[
N(D, k) = \sum_{r=0}^{k} \binom{D}{r} n^r
\]



as the number of sequences which can be reached after k or fewer errors
[the terms 2^D in (1) and (3) are replaced by (n + 1)^D]. Once more, using
the lower bound, one finds an expression for R_0 which is the same as the
one for C but with p replaced by 2p.
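The count N(D, k) above can be sketched directly; each of the C(D, r) choices of r digit positions admits n^r distinct substitution patterns (the name `reachable` is illustrative):

```python
from math import comb

def reachable(D, k, n):
    """Number of length-D sequences reachable from a given sequence by
    changing at most k digits, each to one of n other possible values."""
    return sum(comb(D, r) * n**r for r in range(k + 1))
```

As a sanity check, allowing up to D errors reaches every one of the (n + 1)^D sequences.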



Part II 



THE LOW PASS FILTER 



1. Encoding and Detection 



If f(t) is a signal emerging from a low pass filter (so that its spectrum
is confined to the frequency band |f| < W cycles per second) then
f(t) has a special analytic form given by the sampling theorem



\[
f(t) = \sum_{n=-\infty}^{\infty} f\!\left(\frac{n}{2W}\right)
\frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)}
\]



Thus the signal is completely determined by the sequence of sample
values f(n/2W). The average power of the signal f(t) is measured by
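The sampling series can be sketched numerically with a finite block of samples (a truncation of the infinite sum; the function name `reconstruct` and the parameter `n0`, the index of the first sample, are illustrative):

```python
import math

def reconstruct(samples, W, t, n0=0):
    """Evaluate the sampling series at time t from samples f(n/2W),
    n = n0, n0+1, ..., using sin(pi(2Wt - n)) / (pi(2Wt - n)) weights."""
    total = 0.0
    for i, s in enumerate(samples):
        n = n0 + i
        x = math.pi * (2 * W * t - n)
        # The interpolating kernel equals 1 at its own sample instant.
        total += s * (1.0 if abs(x) < 1e-12 else math.sin(x) / x)
    return total
```

At a sample instant t = n/2W every kernel but the n-th vanishes, so the series returns the sample value exactly.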



\[
P = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} f(t)^2 \, dt
\]



which can be expressed in terms of the sample values as follows
\[
P = \lim_{N\to\infty} \frac{1}{2N} \sum_{n=-N}^{N} f\!\left(\frac{n}{2W}\right)^2
\]
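Assuming the standard identity that the time-average power of a band-limited signal equals the mean square of its samples taken at rate 2W, a minimal sketch over a finite block (the name `avg_power` is illustrative):

```python
def avg_power(samples):
    """Mean square of the sample values f(n/2W), which approximates the
    average power P of the band-limited signal over the sampled interval."""
    return sum(s * s for s in samples) / len(samples)
```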



As in Part I, consider a message source producing a sequence of letters 

from an alphabet of K equally likely letters. To transmit this information
over the low pass filter we must encode the sequence into a function



¹ C. E. Shannon, "Communication in the Presence of Noise," Proc. I.R.E.,
37, pp. 10-21, Jan. 1949.



