A Primer on Information Theory 






The information functions in a communication system are designated as follows:

    H(x) ..... uncertainty of source
    H(y) ..... uncertainty of destination
    H_x(y) ... ambiguity
    H_y(x) ... equivocation
    T(x;y) ... information transmitted, or communicated
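As a sketch of how these quantities relate, they can all be computed from a joint distribution p(x, y) over source and destination symbols. The distribution and variable names below are illustrative assumptions, not data from the text:

```python
import math

# Hypothetical joint distribution p(x, y) for a two-symbol channel.
# Rows index source symbols x; columns index received symbols y.
p_xy = [[0.4, 0.1],
        [0.1, 0.4]]

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_x = [sum(row) for row in p_xy]        # marginal distribution of the source
p_y = [sum(col) for col in zip(*p_xy)]  # marginal distribution of the destination

H_x  = entropy(p_x)                               # H(x): uncertainty of source
H_y  = entropy(p_y)                               # H(y): uncertainty of destination
H_xy = entropy([p for row in p_xy for p in row])  # H(x,y): joint uncertainty

Hx_y = H_xy - H_x        # ambiguity H_x(y): uncertainty about y given x
Hy_x = H_xy - H_y        # equivocation H_y(x): uncertainty about x given y
T    = H_x + H_y - H_xy  # T(x;y): information transmitted
```

Note the identities T(x;y) = H(x) - H_y(x) = H(y) - H_x(y): the transmitted information is the source uncertainty less the equivocation.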



Amounts of information transmitted must be referred to some unit of action. In particular, it is customary to compute transmissions per symbol or per unit time.



A channel which associates one and only one output with each input, and no output with more than one input, is called a noise-free channel or transducer; in this case,



    H(x) = H(y) = H(x,y) = T(x;y);
    H_x(y) = H_y(x) = 0.
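These identities can be checked numerically on a hypothetical one-to-one channel, in which the joint distribution has exactly one nonzero entry per row and column (the particular probabilities below are illustrative):

```python
import math

# Hypothetical noise-free channel: each input maps to exactly one output,
# so the joint distribution p(x, y) is nonzero only on the diagonal.
p_xy = [[0.5, 0.0, 0.0],
        [0.0, 0.3, 0.0],
        [0.0, 0.0, 0.2]]

def entropy(probs):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H_x  = entropy([sum(row) for row in p_xy])         # H(x)
H_y  = entropy([sum(col) for col in zip(*p_xy)])   # H(y)
H_xy = entropy([p for row in p_xy for p in row])   # H(x,y)
T    = H_x + H_y - H_xy                            # T(x;y)

# For this channel H(x) = H(y) = H(x,y) = T(x;y), so the
# ambiguity H_x(y) = H(x,y) - H(x) and the equivocation
# H_y(x) = H(x,y) - H(y) are both zero.
```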



We can think of a noise-free channel as a means by which information at the source is represented at the destination. Physically, this involves two acts of representation: first, states of the channel are selected so as to represent the inputs, according to some agreed-upon code; this is called encoding. Next, the states of the channel are translated into meaningful states at the destination; this is called decoding. All we have stated about representation, representability, and amounts of information could now be restated in terms of encoding and decoding operations. In this sense, the relation which we introduced as the 'condition of representability' is also known as the Theorem of the Noise-free Channel; and all the examples and exercises of representing information could be re-interpreted as coding operations.



Noise — Few real channels are noise-free; in general, more than one output can follow a particular input. For instance, the 'channel' which links a daughter's height to her father's is far from noise-free; the following table gives the conditional probabilities:



Table III. Data of Table II in Form of a Communication Channel 



