188 Henry Quastler 



watchfully opposed' (from The Rambler). Is this why so many scientists do not mind too much having collected a lot of useless data but dread to be found working with a useless theory?



I. APPLICATIONS 



Every kind of structure and every kind of process has its informational aspect and can be associated with information functions. In this sense, the domain of information theory is universal — that is, information analysis can be applied to absolutely anything. The question is only what applications are useful.



1. Use of Basic Concepts



The basic concepts of information theory — measures of information, of noise, of constraint, of redundancy — establish the possibility of associating precise (although relative!) measures with things like form, specificity, lawfulness, structure, degree of organization. This alluring promise has introduced the information concepts into the thinking of many biologists. The results of conceptual applications range from harmless modernisms of language to very serious reasoning. In particular, the information concepts seem to lend themselves readily to dealing with the problems of emergence and destruction of order in complicated systems.



The problem of emergence of order is usually treated in terms of Darwinian machines: large, more or less random assemblies of parts which can both function and, in some manner, register the results of their functioning. The resulting feedback loop produces some order amazingly fast (3, 4). The theory of random networks is a very active field, and some very competent men expect that the main contribution of information theory to biology (and to other fields concerned with very complicated systems) will come from this endeavour.
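The speed of this feedback loop can be illustrated with a toy simulation. The assembly below, with its arbitrary target pattern and scoring rule, is an assumption made for illustration only; it shows how a random assembly that registers the results of its own functioning finds an ordered configuration in far fewer trials than exhaustive search would need:

```python
import random

random.seed(0)

# A toy "Darwinian machine": a random assembly of binary parts that
# functions (is scored) and registers the result by keeping any random
# change that improves the score.  Target and scoring rule are
# illustrative assumptions, not from the text.
N = 32
target = [1] * N                         # an arbitrary "ordered" state
parts = [random.randint(0, 1) for _ in range(N)]

def score(state):
    """How well the assembly 'functions': agreement with the target."""
    return sum(1 for a, b in zip(state, target) if a == b)

steps = 0
while score(parts) < N:
    i = random.randrange(N)
    trial = parts[:]
    trial[i] ^= 1                        # random variation in one part
    if score(trial) >= score(parts):     # feedback: register the result
        parts = trial
    steps += 1

print(steps)  # full order in on the order of hundreds of steps,
              # against 2**32 configurations to search blindly
```

The point is the one made in the text: the feedback loop, not any cleverness in the parts, is what produces order quickly.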



Closely related is the problem of destruction of orderliness. In biology, this is the problem of aging and decay; it is the topic of a major fraction of this conference (5, 6, 7).



2. The Representation Theorem 



The use of the basic concepts of information theory becomes more powerful if one considers that the behavior of information measures follows certain rules; these rules are the theorems of information theory. There are two basic theorems, which I like to call the 'representation theorem' and the 'noise-and-redundancy theorem'. The first has to do with the possibility of representing one kind of information by another kind of information. There are absolutely no qualitative limitations as to how information can be represented; but there is a quantitative limitation: any physical entity can assume only a limited number of distinguishable states, and this limits the degree to which it can represent information. This degree is further modified by the rules of selecting successive states. The applicability of the representation theorem depends to a high degree on knowing the process by which states are selected.
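Both limitations can be put in numbers. The sketch below is illustrative; the particular selection rule (that successive states must differ) is an assumption introduced here, not a rule from the text. An entity with n distinguishable states can carry at most log2(n) bits per state, and a rule constraining successive states lowers that rate:

```python
import math

def capacity_bits(n_states):
    """Quantitative limit: an entity with n distinguishable states can
    represent at most log2(n) bits per state."""
    return math.log2(n_states)

def constrained_rate(n_states):
    """Illustrative selection rule (an assumption): successive states
    must differ.  Admissible sequences of length L then number
    n * (n - 1)**(L - 1), so the per-state rate approaches
    log2(n - 1) for long sequences."""
    return math.log2(n_states - 1)

print(capacity_bits(4))     # 2.0 bits per state, unconstrained
print(constrained_rate(4))  # about 1.585 bits per state under the rule
```

This is why, as the text says, applying the representation theorem requires knowing the process by which states are selected: the same physical entity carries different amounts of information under different selection rules.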



The representation theorem applies every time information is transferred — because the transfer does involve representation of the information existing


