and

    H_n(s) = (K_n / K_{n-1}) [Q_{n-1}(-s) / Q_n(s)] H_{n-1}(s),    n = 2, 3, ...    (14)
The physical realization of a set of filters with impulse responses h_1(t), h_2(t), ... is readily accomplished by analogue-computer techniques. Denoting the input to the first filter by g(t) and the output by x_1(t), one sees from (9) that these quantities are related by the linear differential equation

    Q_1(D) x_1(t) = K_1 P_1(D) g(t)    (15)

in which D = d/dt. Similarly, from (10) it is seen that the outputs of the succeeding filters are recursively related by the linear differential equations

    K_{n-1} Q_n(D) x_n(t) = K_n Q_{n-1}(-D) x_{n-1}(t),    n = 2, 3, ...    (16)
Operational amplifiers connected to solve (15) and (16) are the physical realization of the orthonormal filters.
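The same differential equations can also be integrated digitally. As an illustrative sketch (the Laguerre specialization, the pole value a, and the step size are assumptions, not taken from the paper), the first two Laguerre filters H_1(s) = sqrt(2a)/(s + a) and H_2(s) = [(s - a)/(s + a)] H_1(s) reduce (15) and (16) to a pair of first-order equations that a forward-Euler loop can solve in place of the analogue computer:

```python
import numpy as np

# Assumed Laguerre case: H_1(s) = sqrt(2a)/(s + a) and
# H_2(s) = [(s - a)/(s + a)] H_1(s).  Writing the cascade as ODEs in the
# style of (15)-(16) and eliminating derivatives of earlier stages gives
#   x1' = -a*x1,            x1(0+) = sqrt(2a)   (unit impulse applied at t = 0)
#   x2' = -a*x2 - 2a*x1,    x2(0+) = sqrt(2a)
a = 1.0
dt, T = 1e-4, 20.0
n = int(T / dt)
x1 = np.empty(n)
x2 = np.empty(n)
x1[0] = x2[0] = np.sqrt(2 * a)          # effect of the unit impulse
for k in range(n - 1):                  # forward-Euler integration
    x1[k + 1] = x1[k] + dt * (-a * x1[k])
    x2[k + 1] = x2[k] + dt * (-a * x2[k] - 2 * a * x1[k])

# The simulated impulse responses should be (nearly) orthonormal:
print((x1 * x1).sum() * dt)   # close to 1
print((x1 * x2).sum() * dt)   # close to 0
print((x2 * x2).sum() * dt)   # close to 1
```

The three printed integrals approximate the orthonormality relation (1): the diagonal terms come out near 1 and the cross term near 0, confirming that the cascade of equations realizes the orthonormal filters.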
OPTIMUM ORTHONORMAL EXPANSION 
In practice it is desirable to truncate the series expansion of the correlation function after as few terms as possible. Denoting the exact correlation function by

    φ(τ) = Σ_{n=1}^{∞} A_n h_n(τ)    (17)

and the N-term approximation by

    φ_N(τ) = Σ_{n=1}^{N} A_n h_n(τ)    (18)

one may write the integral-square error of the approximation:

    ε_N² = ∫_0^∞ [φ(τ) - φ_N(τ)]² dτ    (19)
Squaring the integrand of (19), substituting (17) and (18) therein, interchanging the orders of integration and summation, and invoking (1) in evaluating the resulting integrals leads to

    ε_N² = ∫_0^∞ φ²(τ) dτ - S_N²    (20)

in which

    S_N² = Σ_{n=1}^{N} A_n²    (21)
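As a numerical sketch of (17)-(21) (the choice φ(τ) = e^(-τ), the Laguerre basis, and the scale parameter p are illustrative assumptions, not the paper's example), the coefficients A_n can be evaluated by quadrature and the truncation error ε_N² obtained from (20) and (21):

```python
import numpy as np

# Hypothetical worked example of (17)-(21): expand phi(tau) = exp(-tau) in the
# orthonormal Laguerre functions h_n(t) = sqrt(2p) * exp(-p*t) * L_{n-1}(2*p*t),
# where L_k is the ordinary Laguerre polynomial and p is an assumed scale.
p = 1.0
t = np.arange(0.0, 40.0, 1e-3)
phi = np.exp(-t)

def laguerre(k, x):
    """Ordinary Laguerre polynomial L_k(x) via the three-term recurrence."""
    Lm, L = np.zeros_like(x), np.ones_like(x)
    for j in range(k):
        Lm, L = L, ((2 * j + 1 - x) * L - j * Lm) / (j + 1)
    return L

dt = t[1] - t[0]
N = 4
A = [dt * np.sum(phi * np.sqrt(2 * p) * np.exp(-p * t) * laguerre(n - 1, 2 * p * t))
     for n in range(1, N + 1)]                      # coefficients A_n
S2 = sum(a * a for a in A)                          # S_N^2, eq. (21)
eps2 = dt * np.sum(phi * phi) - S2                  # eps_N^2, eq. (20)
print(A[0], S2, eps2)   # with p = 1: A_1 = 1/sqrt(2) and eps_N^2 is near 0
```

With p = 1 the first basis function is proportional to φ(τ) itself, so A_1 = 1/√2, the remaining coefficients vanish, and ε_N² is essentially zero; for other values of p the error is spread over many terms, which motivates the optimization that follows.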
The orthonormal time functions h_n(t) are also functions of a set of real parameters {α_i}, e.g., the poles and zeros of the Laplace transforms of the Laguerre and Kautz functions. Hence, from (6), (20), and (21), so too are A_n, S_N², and ε_N². The optimum expansion of φ(τ) for a given N and a particular set of orthonormal functions is defined as the expansion obtained when the α_i are selected to minimize ε_N². Since the first term in the right member of (20) does not depend on the α_i, the optimum expansion occurs when the α_i are selected to give

    Max_{α_i} S_N² = Max_{α_i} Σ_{n=1}^{N} A_n²    (22)
Hence the determination of the optimum orthonormal expansion is the multivariate maximization of the sum of the squares of the coefficients of the orthonormal series over the set of parameters {α_i}.
If the orthonormal functions employed to expand the correlation function yield an S_N² that is a well-behaved function of the parameters {α_i}, the gradient method may be used to reduce the multivariate maximization to a convergent iterative procedure involving a single variable β. This is accomplished by transforming the initial set of parameters {α_i⁰} to a new set {α_i'} by setting

    α_i' = α_i⁰ + (∂S_N²/∂α_i)_0 β,    i = 1, 2, ...    (23)
in which the subscript 0 on the partial derivative indicates that it is evaluated at α_1 = α_1⁰, α_2 = α_2⁰, .... Thus (23) represents the equations of a straight line in a hyperplane tangent to S_N² at the point (α_1⁰, α_2⁰, ...). Since in the neighborhood of this point one may approximate the function S_N² by the straight line, in this region

    S_N²(α_1', α_2', ...) ≈ S_N²(β)    (24)
Maximizing the right member of (24) yields

    d S_N²(β) / dβ = 0    (25)

The value of β that satisfies (25) is substituted in (23) to give the new set of parameter values {α_i'}. Repeating the procedure by letting

    α_i'' = α_i' + (∂S_N²/∂α_i)' β,    i = 1, 2, ...    (26)
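A minimal sketch of the iteration (23)-(26) follows, assuming a one-parameter Laguerre example in which S_1²(p) = 2p/(1 + p)² (an assumed toy objective whose known maximum, S_1² = 1/2 at p = 1, checks the code); a finite-difference gradient supplies the partial derivatives of (23), and a grid search over β stands in for solving (25) exactly:

```python
import numpy as np

# Illustrative sketch of the gradient iteration (23)-(26), not the paper's numerics.
# Toy objective: S_N^2 for N = 1 with the one-parameter Laguerre basis
# h_1(t) = sqrt(2p) exp(-p t) and phi(tau) = exp(-tau), giving
# S_1^2(p) = 2p / (1 + p)^2, maximized at p = 1 where S_1^2 = 1/2.
def S2(alpha):
    (p,) = alpha
    return 2 * p / (1 + p) ** 2

def grad(f, alpha, h=1e-6):
    """Central-difference estimate of the partial derivatives in (23)."""
    g = np.zeros_like(alpha)
    for i in range(alpha.size):
        e = np.zeros_like(alpha)
        e[i] = h
        g[i] = (f(alpha + e) - f(alpha - e)) / (2 * h)
    return g

alpha = np.array([0.2])          # initial parameter set {alpha_i^0}
for _ in range(50):
    g = grad(S2, alpha)
    # Line (23): alpha' = alpha^0 + g * beta.  Maximize over the single
    # variable beta; a crude grid search stands in for solving (25).
    betas = np.linspace(0.0, 5.0, 2001)
    vals = [S2(alpha + g * b) for b in betas]
    alpha = alpha + g * betas[int(np.argmax(vals))]

print(alpha[0], S2(alpha))   # converges toward p = 1, S_1^2 = 0.5
```

Because β = 0 is always among the candidates, S_N² never decreases from one iteration to the next, which is what makes the procedure convergent in the well-behaved case assumed above.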
