


outcomes of such collisions are known, then it is possible to derive a number of information functions for this system; on the other hand, a given set of information functions is compatible with any number of chemical systems.



2. Relativity of Information Measures 



In the early applications of information theory to problems of communication, the ensembles to be used were virtually defined by the situation. Thus, in dealing with Morse code, the element is clearly the single symbol, the classes are dot, dash, letter space, and word space, and the probabilities are the large-sample frequencies. Similarly, in dealing with printed English as an objective phenomenon, one natural unit (not the only one, though!) is again the single symbol, and classes and probabilities can be determined from any large sample. The situation is immediately more complicated if we deal with a particular person's concepts of printed English; the 'subjective probabilities' are not the same as the objective relative frequencies. Much confusion has come to psychologists from disregarding the fact that the probability measures upon which a subject bases his operations are not necessarily those known to be correct — in one sense — to the experimenter (17).
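To make the step from ensemble to measure concrete, the following is a minimal sketch computing the Shannon entropy H = -Σ pᵢ log₂ pᵢ of a Morse-code-like ensemble. The class frequencies here are hypothetical placeholders, not measured values; in the situation described above they would be the relative frequencies observed in a large sample.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical class frequencies for a Morse-code ensemble whose
# elements are single symbols and whose classes are the four below.
# Real values would come from large-sample relative frequencies.
morse_classes = {
    "dot": 0.40,
    "dash": 0.30,
    "letter space": 0.20,
    "word space": 0.10,
}

H = entropy(morse_classes.values())
print(f"H = {H:.3f} bits per symbol")  # ~1.846 bits for these frequencies
```

Given the ensemble (elements, classes, probabilities), the measure follows mechanically; all the latitude discussed below lies in setting the ensemble up.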



There are situations where there is considerable leeway in defining the elements, classifications, and probability measures of an ensemble, and accordingly considerable variation in the information measures which can be associated with the situation. This is strikingly illustrated by the attempts to measure the information contents of molecules. Estimates have been based on considerations of structure (10, 18, 19, 20) or function (15, 22). Recently, Rashevsky and his associates (21) have shown that information measures can be associated with the topological representation of molecules. Each of these approaches yields some value of the information content of a molecule, and these values do not have to be identical. Yet, every one of them is a legitimate information measure. This may be disappointing, but like all abstractions, information measures are not 'right' or 'wrong' — they are only more or less useful. In the case under discussion, we may legitimately ask how the various ways of estimating information measures are related to the actual processes of information storage and transmission by molecules, to reaction rates, to the activity of antimetabolites, etc.
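A hedged sketch in the spirit of the topological approach: partition the atoms of a molecular graph into equivalence classes and take the entropy of the class sizes. For brevity this sketch groups vertices merely by degree, a coarse stand-in for the full topological equivalence used in the cited work, and the propane graph is illustrative, not data from those papers.

```python
from math import log2
from collections import Counter

def topological_information(adjacency):
    """Entropy (bits per atom) of a partition of the vertices into classes.
    Here vertices are classed by degree, a simplified proxy for the full
    topological equivalence classes of the molecular graph."""
    class_sizes = Counter(len(nbrs) for nbrs in adjacency.values()).values()
    n = len(adjacency)
    return -sum((k / n) * log2(k / n) for k in class_sizes)

# Propane (C3H8) as an illustrative molecular graph: 3 carbons of
# degree 4 and 8 hydrogens of degree 1.
propane = {
    "C1": ["C2", "H1", "H2", "H3"],
    "C2": ["C1", "C3", "H4", "H5"],
    "C3": ["C2", "H6", "H7", "H8"],
    "H1": ["C1"], "H2": ["C1"], "H3": ["C1"],
    "H4": ["C2"], "H5": ["C2"],
    "H6": ["C3"], "H7": ["C3"], "H8": ["C3"],
}

print(f"{topological_information(propane):.3f} bits per atom")  # ~0.845
```

A structural or functional estimate for the same molecule would start from a different partition and, as the text notes, need not agree with this one.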



As a rule, the specifications of an ensemble do not result unequivocally from the given situation. Consequently, information measures are not properties but functions of a given situation — they are defined by the situation and the ensemble used in dealing with it. Information measures are irreducibly relative; they can be accurate and precise, but they cannot be absolute. The usefulness of a particular information measure in a particular context will depend on the way the defining ensemble is set up. Unfortunately, there exists no calculus, no set of hard and fast rules which tells one how to select the most appropriate elements, classifications, and probability measures. The choice must be made by guess, and its ultimate justification is only in the results it yields.
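This relativity can be exhibited directly. The sketch below measures one and the same text under two different ensembles: one whose classes are the individual characters, and one whose classes are only 'vowel', 'consonant', and 'space'. Both are legitimate information measures of the same situation, and they disagree; the sample sentence is arbitrary.

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Entropy in bits of the empirical distribution over class labels."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

text = "information measures are irreducibly relative"

# Ensemble 1: elements are characters; classes are the characters themselves.
fine = entropy(text)

# Ensemble 2: same elements, classified only as vowel/consonant/space.
def coarse_class(ch):
    if ch == " ":
        return "space"
    return "vowel" if ch in "aeiou" else "consonant"

coarse = entropy([coarse_class(ch) for ch in text])

print(f"per-character classes: {fine:.3f} bits")
print(f"coarse classes:        {coarse:.3f} bits")  # smaller, by construction
```

Neither number is 'the' information content of the sentence; each is the information content relative to its defining ensemble.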



3. Informational Capabilities and Performance 



An informational capability represents an upper bound to some class of informational performances — but a particular performance does not have to


