From this expression the term Markov chain is derived. The conditional probability
function is also called the transitional probability function.
Also

p_2(x_n, t_n; x_{n-2}, t_{n-2}) = \int_{-\infty}^{\infty} p_3(x_n, t_n; x_{n-1}, t_{n-1}; x_{n-2}, t_{n-2}) \, dx_{n-1}
and 
p_2(x_n, t_n; x_{n-2}, t_{n-2}) = p_c(x_n, t_n | x_{n-2}, t_{n-2}) \, p(x_{n-2}, t_{n-2})

p_3(x_n, t_n; x_{n-1}, t_{n-1}; x_{n-2}, t_{n-2}) = p_c(x_n, t_n; x_{n-1}, t_{n-1} | x_{n-2}, t_{n-2}) \, p(x_{n-2}, t_{n-2})
therefore 
p_c(x_n, t_n | x_{n-2}, t_{n-2}) = \int_{-\infty}^{\infty} p_c(x_n, t_n | x_{n-1}, t_{n-1}) \, p_c(x_{n-1}, t_{n-1} | x_{n-2}, t_{n-2}) \, dx_{n-1}    (12.7)
Eq. 12.7 is called the Chapman–Kolmogorov equation, or Smoluchowski equation, for
the transitional distribution of the Markov process. It implies that, in going from x_{n-2} at time
t_{n-2} to x_n at time t_n, the intermediate value x_{n-1} at time t_{n-1} is not important: the
process can pass through any intermediate point on its way to x_n. This characteristic is used to
derive the Fokker–Planck equations, shown in the next section.
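In the discrete-state case, the integral over x_{n-1} in Eq. 12.7 becomes a sum over all intermediate states, which is exactly matrix multiplication of one-step transition matrices. A minimal numerical sketch (the 3-state transition matrix below is an illustrative assumption, not taken from the text):

```python
import numpy as np

# One-step transition matrix: P[i, j] = p_c(state j | state i).
# Rows sum to 1.  The values are purely illustrative.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Chapman-Kolmogorov: the two-step transition probability is obtained
# by summing over every intermediate state x_{n-1} (discrete analogue
# of the integral in Eq. 12.7).
two_step = np.einsum('ik,kj->ij', P, P)

# The same quantity via matrix multiplication; the two must agree.
assert np.allclose(two_step, P @ P)
```

The sum over the intermediate index k plays the role of the integral over x_{n-1}; no particular intermediate path is singled out.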
For example, the first-order AR model AR(1), as treated in Chapter 5, is

X(t) - a X(t-1) = ε(t),    (12.8)

where a is a constant and ε(t) is a pure random process, so that X(t) is a Markovian linear
process: X(t) is statistically determined by the value X(t-1), that is, by the value of X(t) at one
preceding time step. Accordingly, X(t) is a Markovian process.
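A short simulation sketch of the AR(1) process of Eq. 12.8, with illustrative values assumed for the coefficient a and for the noise scale (neither is specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
a, n_steps = 0.8, 100_000       # illustrative coefficient and sample size

x = np.empty(n_steps)
x[0] = 0.0
for t in range(1, n_steps):
    # X(t) = a*X(t-1) + eps(t): the new value depends only on the one
    # preceding value plus fresh noise -- the Markov property.
    x[t] = a * x[t - 1] + rng.normal()

# For |a| < 1 the process is stationary with variance 1 / (1 - a**2).
print(x.var())
```

Because the update uses only X(t-1), the full history before t-1 adds no further information about X(t).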
12.3 GENERAL PROCESS 
If we need the values of a process not only at one preceding time point t_{n-1} but at
two or more preceding time points, i.e., not only p_1 and p_2 but also higher-order joint
probability functions p_3, p_4, ..., the process is called a general process. However, some
general processes X(t) can be converted into vector Markov processes by combining them
with other variables and introducing the concept of the (Markovian) state space.
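This conversion can be sketched for a second-order recursion: a scalar process depending on two past values is not Markov on its own, but the vector of its current and previous values is. The coefficients and noise below are illustrative assumptions:

```python
import numpy as np

# X(t) = a1*X(t-1) + a2*X(t-2) + eps(t) needs two past values, so X(t)
# alone is not Markov.  The state vector Z(t) = [X(t), X(t-1)] obeys a
# first-order (vector Markov) update Z(t) = A @ Z(t-1) + [eps, 0].
a1, a2 = 0.5, -0.3              # illustrative coefficients
A = np.array([[a1, a2],
              [1.0, 0.0]])      # companion (state-transition) matrix

rng = np.random.default_rng(1)
z = np.zeros(2)                 # state vector, initialized to zero
xs = []
for _ in range(500):
    z = A @ z + np.array([rng.normal(), 0.0])
    xs.append(z[0])             # X(t) is the first component of Z(t)
```

The second row of A simply copies X(t-1) forward, so the augmented state carries exactly the memory the scalar recursion requires.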
Putting aside the precise theory, consider for example the second-order autoregressive
model AR(2) as