**CHAPTER 1 1.3. WDM NETWORKS AND WORKING VACATION QUEUES**

**1.4 Phase-type distribution**


### Motivation

Queues with working vacations (WVs) have been studied under a variety of system characteristics.

Queues with WVs arise in several communication systems, including WDM networks. Communication systems carry various types of data and voice packets that have special properties and accordingly need special attention. For example, packets are bursty, and the strong correlation between packets has to be maintained; some packets need priority over others and have to be kept separate so that sets of wavelengths can be allocated on a priority basis; some packets, mostly in mobile networks, make repeated attempts for service and have to be assigned buffer space for temporary storage until they receive the requested service; and packets may abandon the system before receiving service because of impatience, and have to be discouraged from leaving in order to maintain system efficiency. These features deserve careful attention, as they are very common in communication networks. Communication systems with these packet characteristics can be modelled as queues with WVs. In the literature, such queues have received little attention. In this thesis, we consider WV queueing models that incorporate these characteristics of packets, model them as Markov processes, and analyze their performance.

In the following sections, some mathematical concepts relevant to this thesis are discussed.


Queueing models often demand a general arrival or service process. Analytic approaches to such models under general distributional assumptions become complicated or essentially intractable. To study such models, the phase-type (PH) distribution is introduced; it maintains the mathematical tractability of the models while reflecting their essential qualitative features and providing useful information on their physical behavior. PH-distributions provide a simple framework for extending results on exponential distributions to more complex models without losing computational tractability. They are based on the method of phases. The primary theoretical utility of the PH-distribution is that any distribution on the non-negative real numbers can be approximated by a PH-distribution, and the resulting queueing models, such as M/PH/1 and PH/PH/1, can be analyzed by means of Markovian environments. In practice, such an approximation is strictly limited to studying inferences warranted by their precise mathematical statements; it is not recommended for studying features that are highly sensitive to the distributional form [135, 136].

To motivate the PH-distribution, consider a queueing model with finite capacity. If the system is full upon the arrival of a customer, the new arrival is not allowed to join the system and the customer is said to be lost. Let us define the state in which the new arrival has to leave the system as an absorbing state. From a system administrator's point of view, the loss of a customer is a bad event and indicates poor system performance. To keep track of customer loss, we need the distribution of the time up to the first loss, *i.e.,* until the system enters the absorbing state. This problem can be addressed by the concept of the PH-distribution [32].

Let *χ* = *{X*_{t}*, t ≥* 0*}* be a time-homogeneous continuous-time Markov chain with a finite state-space *{*1*,* 2*, . . . , m* + 1*}* and with infinitesimal generator matrix

*Q* = [ *T* *T*^{0} ; 0 0 ]*,*

where *T* is a square matrix of dimension *m*, *T*^{0} is a column vector and 0 is the zero row vector of appropriate dimension. The first *m* states *{*1*,* 2*, . . . , m}* are transient, while the state *m* + 1 is absorbing. The matrix *T* is nonsingular (Lemma 1.4.1 below) and describes the transitions until absorption, where (*T*)_{ii} *<* 0 for 1 *≤ i ≤ m* and (*T*)_{ij} *≥* 0 for 1 *≤ i ≠ j ≤ m*.

The matrix *T*^{0} is a non-negative *m*-dimensional column vector, grouping the absorption rates from the transient states into the absorbing one, and satisfies *T*^{0} + *T*1 = 0, where 1 is the column vector of appropriate dimension with all entries equal to one. Let the initial distribution of *χ* be the row vector ¯*α* = (*α, α*_{m+1}), with *α* a row vector of dimension *m* and *α*_{m+1} = 1 *− α*1*.*

Lemma 1.4.1. *The states* *{*1*,*2*, . . . , m}* *of the Markov chain are transient if and only if its transition rate matrix* *T* *is nonsingular [120].*

Now, let us give the definition of a PH-distribution.

Definition 1.4.1. *Let* *Z* = inf*{t ≥* 0 : *X*_{t} = *m* + 1*}* *be the random variable denoting the time until absorption in state* *m* + 1 *of the Markov chain* *χ. Then* *Z* *is said to follow a PH-distribution with parameters* (*α, T*)*, denoted by PH*(*α, T*)*, with distribution function*

*F*(*t*) = *P*(*Z ≤ t*) = 1 *− αe*^{T t}1*,* *t ≥* 0*.* (1.1)
*The density function of* *Z* *is given by*

*f*(*t*) = *αe*^{T t}*T*^{0}*,* *t≥*0*,* (1.2)

*where the matrix exponential is defined for a square matrix* *T* *by*

*e*^{T t} = Σ_{n=0}^{∞} (*t*^{n}/*n*!) *T*^{n}*.* (1.3)

The dimension *m* is called the order of the distribution PH(*α, T*) and the states *{*1*,*2*, . . . , m}* are called phases. The vector *T*^{0} is called the exit vector. The distribution *F*(*t*) has a jump of height *α*_{m+1} at *t* = 0. We can assume, without loss of generality, that *α*_{m+1} = 0.
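To make (1.1)–(1.3) concrete, the matrix exponential can be evaluated by truncating the series (1.3). The following pure-Python sketch (the function names are ours, not from any library) computes *F*(*t*) for the PH representation of an Erlang-2 distribution and compares it with the closed-form CDF 1 − *e*^{−λt}(1 + *λt*):

```python
# Numerically evaluate F(t) = 1 - alpha e^{Tt} 1 for a PH(alpha, T)
# distribution via a truncated series for the matrix exponential.
# A minimal pure-Python sketch; no external libraries assumed.
import math

def mat_mul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def mat_exp(T, t, terms=60):
    """e^{Tt} by the power series sum_n (t^n / n!) T^n, cf. (1.3)."""
    m = len(T)
    result = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]  # I
    power = [row[:] for row in result]
    for n in range(1, terms):
        power = mat_mul(power, T)          # power = T^n
        coef = t**n / math.factorial(n)
        for i in range(m):
            for j in range(m):
                result[i][j] += coef * power[i][j]
    return result

def ph_cdf(alpha, T, t):
    """F(t) = 1 - alpha e^{Tt} 1, cf. (1.1)."""
    E = mat_exp(T, t)
    return 1.0 - sum(alpha[i] * sum(E[i]) for i in range(len(alpha)))

# Erlang-2 with rate lam has PH representation alpha = (1, 0),
# T = [[-lam, lam], [0, -lam]]; its CDF is 1 - e^{-lam t}(1 + lam t).
lam, t = 1.5, 2.0
alpha, T = [1.0, 0.0], [[-lam, lam], [0.0, -lam]]
exact = 1.0 - math.exp(-lam * t) * (1.0 + lam * t)
print(abs(ph_cdf(alpha, T, t) - exact) < 1e-9)
```

With 60 series terms the truncation error is far below the comparison tolerance for matrices of this size.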

The infinitesimal generator *Q*^{∗} = *T* + *T*^{0}*α* describes a new Markov chain with state-space *{*1*,*2*, . . . , m}* in which the absorbing state *m* + 1 is an instantaneous return state. That is, upon absorption in the Markov chain *χ*, the chain is instantaneously restarted by selecting a new initial state (independently of the past) using the same probability vector *α*. This resetting is repeated indefinitely. The epochs of successive visits to the 'instantaneous' state *m* + 1 form a renewal process with interrenewal time distribution *F*(*t*). That is, the interrenewal times of the new Markov chain with generator *Q*^{∗} follow a PH-distribution.

Markov chains are classified according to the properties of their states.

Definition 1.4.2. *A Markov chain in which every state leads back to itself and also to*
*every other state is called an* *irreducible* *Markov chain; otherwise, the Markov chain is*
*said to be* *reducible.*

Definition 1.4.3. *Let* *p*^{n}_{ii} *be the probability that a Markov chain starting from state* *i,* *where* *i* *belongs to its state-space, is again in state* *i* *after* *n* *additional transitions. The state* *i* *of the Markov chain has period* *d*_{i} *if* *d*_{i} = gcd*{n* : *p*^{n}_{ii} *>* 0*},* *that is,* *d*_{i} *is the greatest common divisor of all such* *n.*

In an irreducible Markov chain all states have a common period *d*. If *d >* 1 the chain is said to be periodic, whereas if *d* = 1 the chain is called aperiodic.

Definition 1.4.4. *Let* *f*_{ii} *be the probability that a Markov chain starting at state* *i* *will*
*ever return to* *i. Then the state* *i* *is called a recurrent state if* *f*_{ii} = 1 *and transient if*
*f*_{ii}*<*1*. A Markov chain is called a* *recurrent* *chain, if all of its states are recurrent and*
*a* *transient* *chain, if all of its states are transient.*

An irreducible Markov chain is necessarily either a transient chain or a recurrent chain.

A state of a Markov chain is an absorbing state if the probability of moving out of that state is zero.

Definition 1.4.5. *A non-absorbing recurrent state is said to be* *positive recurrent* *or*
*null recurrent* *according as the mean return time to that state is finite or infinite. A*
*Markov chain is a positive recurrent chain if all its states are positive recurrent and a null*
*recurrent chain if all its states are null recurrent.*


The corresponding transition matrices of these Markov chains are named accordingly.

Two standard results will be used: an irreducible Markov chain on a finite state-space is always positive recurrent, and an irreducible, positive recurrent Markov chain has a unique stationary distribution [72].

For a PH-distribution, PH(*α, T*), the generator *Q*^{∗} is assumed to be irreducible. The
Markov chain with generator *Q*^{∗} is positive recurrent, as its state-space is finite, and so
its stationary distribution exists. The stationary probability vector *ξ* of *Q*^{∗} is obtained
by solving the equations *ξQ*^{∗} = 0*, ξ*1 = 1. The rate of arrival, or rate of successive visits to the instantaneous state, is given by *λ* = *ξT*^{0}, which is always positive. The following are some important properties of a PH-distribution.
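Computing *ξ* and *λ* amounts to solving a small linear system. The following pure-Python sketch (helper names are ours) builds *Q*^{∗} = *T* + *T*^{0}*α* and solves *ξQ*^{∗} = 0, *ξ*1 = 1 for an Erlang-2 example, whose *Q*^{∗} has stationary vector (1/2, 1/2):

```python
# Compute the stationary vector xi of Q* = T + T^0 alpha by solving
# xi Q* = 0, xi 1 = 1, and the visit rate lam = xi T^0.
# Pure-Python sketch using Gaussian elimination on a small system.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][j] - f * M[col][j] for j in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

def stationary_and_rate(alpha, T):
    m = len(T)
    T0 = [-sum(T[i]) for i in range(m)]                      # T^0 = -T 1
    Qs = [[T[i][j] + T0[i] * alpha[j] for j in range(m)] for i in range(m)]
    # xi Q* = 0, with one equation replaced by the normalisation xi 1 = 1:
    A = [[Qs[j][i] for j in range(m)] for i in range(m)]     # transpose
    A[m - 1] = [1.0] * m
    b = [0.0] * (m - 1) + [1.0]
    xi = solve(A, b)
    lam = sum(xi[i] * T0[i] for i in range(m))
    return xi, lam

# Erlang-2 with rate 3: xi = (1/2, 1/2), lam = xi T^0 = 3/2, and
# 1/lam = 2/3 recovers the Erlang mean k/rate = 2/3.
xi, lam = stationary_and_rate([1.0, 0.0], [[-3.0, 3.0], [0.0, -3.0]])
print(xi, lam)
```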

Lemma 1.4.2. *The Laplace-Stieltjes transform of* *PH*(*α, T*) *with distribution function* *F*(*t*) *is given by*

Φ(*s*) = ∫_{0}^{∞} *e*^{−st} *dF*(*t*) = *α*(*sI − T*)^{−1}*T*^{0}*,* *for* Re(*s*) *≥* 0*.* (1.4)
Lemma 1.4.3. *If* *Z* *follows* *PH*(*α, T*)*, the mean and the raw moments of* *Z* *are respectively given by*

*E*(*Z*) = *−αT*^{−1}1 (1.5)

*and* *E*(*Z*^{n}) = (*−*1)^{n}*n*!*αT*^{−n}1*,* *for* *n* = 2*,*3*, . . . .* (1.6)

*Another expression for the mean, in terms of the stationary vector* *ξ, is given by*

*E*(*Z*) = (*ξT*^{0})^{−1}*.* (1.7)
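Formulas (1.5)–(1.6) need only linear solves, not an explicit matrix inverse. A small sketch, specialised for illustration to a 2 × 2 Erlang-2 example (helper names are ours), checks them against the known Erlang moments *k/λ* and *k*(*k* + 1)*/λ*^{2}:

```python
# Check E(Z) = -alpha T^{-1} 1 and E(Z^2) = 2 alpha T^{-2} 1 for an
# Erlang-2 example by solving linear systems instead of inverting T.
# Pure-Python sketch; solve2 handles only the 2x2 systems used here.

def solve2(A, b):
    """Solve a 2x2 system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

lam = 2.0
alpha = [1.0, 0.0]
T = [[-lam, lam], [0.0, -lam]]

x = solve2(T, [1.0, 1.0])            # x = T^{-1} 1
mean = -(alpha[0] * x[0] + alpha[1] * x[1])       # (1.5)

y = solve2(T, x)                     # y = T^{-2} 1
second = 2.0 * (alpha[0] * y[0] + alpha[1] * y[1])  # (1.6) with n = 2

# Erlang-2(lam): E(Z) = 2/lam = 1.0 and E(Z^2) = k(k+1)/lam^2 = 1.5.
print(mean, second)
```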

Lemma 1.4.4. *For a* *PH*(*α, T*) *distribution, the remaining time* *Z* *until absorption, given that the current phase is* *i, satisfies*

*P*(*Z ≤ t | X*_{0} = *i*) = 1 *− e*_{i}*e*^{T t}1*,* (1.8)

*which has the same form as a* *PH*(*e*_{i}*, T*) *distribution, with* *e*_{i} *denoting the* *i*^{th} *canonical row base vector. This is called the* *conditional memoryless property* *of the PH-distribution.*

Lemma 1.4.5. *Let* *Z*_{i} *follow PH*(*α*^{(i)}*, T*^{(i)}) *of order* *m*_{i}*, for* *i* = 1*,*2*. Then the closure property gives that* *Z* = *Z*_{1} + *Z*_{2} *follows a PH*(*α, T*) *distribution of order* *m* = *m*_{1} + *m*_{2} *with representation*

*α*_{k} = *α*^{(1)}_{k} *for* 1 *≤ k ≤ m*_{1}*,* *α*_{k} = *α*^{(1)}_{m_1+1}*α*^{(2)}_{k−m_1} *for* *m*_{1} + 1 *≤ k ≤ m,*

*and* *T* = [ *T*^{(1)} *T*^{0(1)}*α*^{(2)} ; 0 *T*^{(2)} ]*,* *where* *T*^{0(1)} = *−T*^{(1)}1*.*
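As a numerical check on Lemma 1.4.5, the sum of independent Exp(*a*) and Exp(*b*) phases (*a ≠ b*) built via the block representation should have the hypoexponential density *ab*(*e*^{−at} *− e*^{−bt})/(*b − a*). A minimal pure-Python sketch (function names are ours):

```python
# Convolution closure (Lemma 1.4.5): Exp(a) + Exp(b) has the block
# representation alpha = (1, 0), T = [[-a, a], [0, -b]], and for a != b
# its density should equal ab (e^{-at} - e^{-bt}) / (b - a).
# Pure-Python sketch using a truncated series for alpha e^{Tt}.
import math

def ph_pdf(alpha, T, t, terms=60):
    """f(t) = alpha e^{Tt} T^0 with T^0 = -T 1, via the exponential series."""
    m = len(T)
    T0 = [-sum(T[i]) for i in range(m)]
    v, acc = alpha[:], alpha[:]
    for n in range(1, terms):
        # v becomes alpha T^n t^n / n!, accumulated into acc = alpha e^{Tt}
        v = [sum(v[i] * T[i][j] for i in range(m)) * t / n for j in range(m)]
        acc = [acc[j] + v[j] for j in range(m)]
    return sum(acc[i] * T0[i] for i in range(m))

a, b, t = 1.0, 3.0, 0.8
approx = ph_pdf([1.0, 0.0], [[-a, a], [0.0, -b]], t)
exact = a * b * (math.exp(-a * t) - math.exp(-b * t)) / (b - a)
print(abs(approx - exact) < 1e-9)
```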

Lemma 1.4.6. *Let* *Z*_{i} *follow PH*(*α*^{(i)}*, T*^{(i)}) *of order* *m*_{i} *for* *i* = 1*,*2*, and let* *p ∈* [0*,*1]*. Then the mixture* *Z, which equals* *Z*_{1} *with probability* *p* *and* *Z*_{2} *with probability* 1 *− p, follows a PH*(*α, T*) *distribution of order* *m* = *m*_{1} + *m*_{2} *with representation*

*α* = (*pα*^{(1)}*,* (1 *− p*)*α*^{(2)}) *and* *T* = [ *T*^{(1)} 0 ; 0 *T*^{(2)} ]*.*

Lemma 1.4.7. *Let* *Z*_{i} *follow PH*(*α*^{(i)}*, T*^{(i)}) *of order* *m*_{i} *for* *i* = 1*,*2*, and define* *Z* = min(*Z*_{1}*, Z*_{2})*. Then* *Z* *follows PH*(*α, T*) *of order* *m* = *m*_{1}*m*_{2} *with representation*

*α* = *α*^{(1)} *⊗ α*^{(2)} *and* *T* = *T*^{(1)} *⊕ T*^{(2)}*,*

*in terms of the Kronecker compositions* '*⊗*' *and* '*⊕*' *(given below, after the special cases).*

The proofs of the above lemmas can be found in Breuer [32]. PH-representations of some special distributions are given as follows:

*•* Exponential distribution: A continuous random variable *X* follows an exponential distribution with parameter *λ >* 0, denoted by Exp(*λ*), if its probability density function (p.d.f.) is

*f*(*t*) = *λe*^{−λt} *for* *t ≥* 0*,* and *f*(*t*) = 0 *for* *t <* 0*.* (1.9)

The exponential distribution has the 'memoryless property', *i.e.*, if *X* denotes an interarrival time, the time we must wait for a new arrival is statistically independent of how long we have already waited. The PH-representation of Exp(*λ*) is

*α* = 1*,* *T* = *−λ* and *T*^{0} = *λ.* (1.10)


*•* Erlang-*k* distribution: A continuous random variable *X* is said to follow an Erlang-*k* distribution with parameters (*λ, k*), denoted by Erl-*k*(*λ*), for *λ >* 0 and *k ∈ {*1*,*2*, . . .}*, if its density function is

*f*(*t*) = *λ*^{k}*t*^{k−1}*e*^{−λt}/(*k −* 1)! *for* *t ≥* 0*,* and *f*(*t*) = 0 *for* *t <* 0*.* (1.11)

The exponential distribution is the special case of the Erlang-*k* distribution with *k* = 1. Many processes in nature can be divided into sequential phases. If the times spent by the process in each of *k* sequential phases are independent and identically exponentially distributed, then the overall time has an Erlang-*k* distribution.

The PH-representation of Erl-*k*(*λ*) is given by

*α* = (1*,* 0*, . . . ,* 0)*,* *T* the *k × k* matrix with diagonal entries *−λ*, superdiagonal entries *λ* and all other entries zero, and *T*^{0} = (0*, . . . ,* 0*, λ*)^{T}*.* (1.12)
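The representation (1.12) is easy to build programmatically, and because *T* is upper bidiagonal, *T*x = 1 can be solved by back-substitution, so the mean *E*(*Z*) = *−αT*^{−1}1 = *k/λ* can be recovered without inverting *T*. A small pure-Python sketch (function names are ours):

```python
# Build the Erl-k(lam) representation (1.12) and verify
# E(Z) = -alpha T^{-1} 1 = k/lam. Since T is upper bidiagonal, T x = 1
# solves by back-substitution from the last row upward. Pure-Python sketch.

def erlang_rep(k, lam):
    alpha = [1.0] + [0.0] * (k - 1)
    T = [[0.0] * k for _ in range(k)]
    for i in range(k):
        T[i][i] = -lam          # diagonal: -lam
        if i + 1 < k:
            T[i][i + 1] = lam   # superdiagonal: lam
    return alpha, T

def erlang_mean(k, lam):
    alpha, T = erlang_rep(k, lam)
    # Solve T x = 1 by back-substitution, then E(Z) = -alpha . x
    x = [0.0] * k
    x[k - 1] = 1.0 / T[k - 1][k - 1]
    for i in range(k - 2, -1, -1):
        x[i] = (1.0 - T[i][i + 1] * x[i + 1]) / T[i][i]
    return -sum(a * xi for a, xi in zip(alpha, x))

print(erlang_mean(4, 2.0))  # k/lam = 2.0
```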

*•* Hyperexponential-*k* distribution: A continuous random variable *X* follows a *k*-phase hyperexponential distribution with parameters (*α*_{i}*, λ*_{i}), denoted by Hyp-*k*(*α*_{i}*, λ*_{i}), for *i* = 1*,*2*, . . . , k* with Σ_{i=1}^{k} *α*_{i} = 1, if its p.d.f. is

*f*(*t*) = Σ_{i=1}^{k} *α*_{i}*λ*_{i}*e*^{−λ_i t} *for* *t ≥* 0 (*with* *α*_{i} *>* 0*, λ*_{i} *>* 0)*,* and *f*(*t*) = 0 *for* *t <* 0*.* (1.13)

If a process experiences exactly one of several alternative phases (rather than passing through sequential phases), each of which is exponentially distributed, then the resulting distribution is hyperexponential. Its PH-representation is given by

*α* = (*α*_{1}*, . . . , α*_{k})*,* *T* = diag(*−λ*_{1}*, . . . , −λ*_{k}) and *T*^{0} = (*λ*_{1}*, . . . , λ*_{k})^{T}*.* (1.14)


### Kronecker compositions

Here we give some details on Kronecker compositions of matrices, mainly the Kronecker product and the Kronecker sum. The Kronecker product of two matrices *A* = (*a*_{ij})_{n×m} and *B* = (*b*_{ij})_{r×s} is the *nr × ms* block matrix

*A ⊗ B* = [ *a*_{11}*B* *· · ·* *a*_{1m}*B* ; ⋮ ⋱ ⋮ ; *a*_{n1}*B* *· · ·* *a*_{nm}*B* ]*.*

If *A* and *B* are square matrices of dimensions *n* and *r* respectively, the Kronecker sum is given by *A ⊕ B* = *A ⊗ I*_{r} + *I*_{n} *⊗ B*, where *I*_{n} is the identity matrix of dimension *n*. For matrices *A, B, C* and *D*, some properties of Kronecker products are given below.

1. *A ⊗ B ≠ B ⊗ A* in general, for *A*_{n×m}, *B*_{p×q};

2. *A ⊗* (*B ± C*) = *A ⊗ B ± A ⊗ C*, for *A*_{n×m}, *B*_{p×q} and *C*_{p×q};

3. (*A ± B*) *⊗ C* = *A ⊗ C ± B ⊗ C*, for *A*_{n×m}, *B*_{n×m}, *C*_{p×q};

4. (*kA*) *⊗ B* = *A ⊗* (*kB*) = *k*(*A ⊗ B*), for a scalar *k* and *A*_{n×m}, *B*_{p×q};

5. (*A ⊗ B*) *⊗ C* = *A ⊗* (*B ⊗ C*) = *A ⊗ B ⊗ C*, for *A*_{n×m}, *B*_{p×q}, *C*_{r×s};

6. (*A ⊗ B*)(*C ⊗ D*) = *AC ⊗ BD*, for *A*_{n×m}, *B*_{p×q}, *C*_{m×r}, *D*_{q×s};

7. (*A ⊗ B*)^{−1} = *A*^{−1} *⊗ B*^{−1}, if the square matrices *A* and *B* are invertible;

8. *e*^{A⊕B} = *e*^{A} *⊗ e*^{B}, for square matrices *A*_{n×n}, *B*_{p×p}.
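Property 8 can be verified numerically with a truncated exponential series. The following pure-Python sketch (helper names are ours) checks *e*^{A⊕B} = *e*^{A} *⊗ e*^{B} on a small example:

```python
# Numerical check of property 8: e^{A (+) B} = e^{A} (x) e^{B}
# for square A, B. Pure-Python sketch with a truncated exponential series.

def kron(A, B):
    return [[A[i][j] * B[p][q] for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mat_mul(A, B):
    return [[sum(A[i][p] * B[p][j] for p in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def expm(A, terms=40):
    """e^{A} by the truncated power series sum_n A^n / n!."""
    R, P = eye(len(A)), eye(len(A))
    for n in range(1, terms):
        P = [[x / n for x in row] for row in mat_mul(P, A)]
        R = mat_add(R, P)
    return R

A = [[0.0, 1.0], [-1.0, 0.0]]     # a 2x2 rotation generator
B = [[0.5]]                       # a 1x1 matrix
AsumB = mat_add(kron(A, eye(1)), kron(eye(2), B))   # A (+) B
lhs = expm(AsumB)
rhs = kron(expm(A), expm(B))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)
```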

### Discrete PH-distribution

Analogous to the definition of PH-distribution in continuous-time, a discrete-time PH
(DPH) distribution is defined as the distribution of the time until absorption in a discrete-
time Markov chain with *m* transient states,*{*1*,*2*, ..., m}*, and one absorbing state *m*+ 1,


with transition probability matrix given by

*P* = [ *T* *T*^{0} ; 0 1 ]*,* (1.15)

where T is a substochastic matrix such that *I−T* is nonsingular and the exit vector*T*^{0}
satisfies *T*^{0} =1*−T*1.

Definition 1.4.6. *Let* *Z* = min*{n ∈* N : *X*_{n} = *m* + 1*}* *be the random variable denoting the time until absorption in state* *m* + 1*. The probability distribution of* *Z* *is said to be DPH with parameters* (*α, T*)*, denoted by* *DPH*(*α, T*)*, and is given by*

*p*_{n} = *P*(*Z* = *n*) = *αT*^{n−1}*T*^{0}*,* *n ≥* 1 (1.16)

*and* *P*(*Z ≤ n*) = 1 *− αT*^{n}1*.* (1.17)

In a DPH distribution, *p*_{0} = *P*(*Z* = 0) = *α*_{m+1} = 1 *− α*1*.* We assume *α*_{m+1} = 0 and also that *Q*^{∗} = *T* + *T*^{0}*α* is irreducible. The stationary probability vector *ξ* of *Q*^{∗} satisfies *ξQ*^{∗} = *ξ, ξ*1 = 1. The rate of arrival is given by *λ* = *ξT*^{0}. Some properties of a DPH distribution are given below.

Lemma 1.4.8. *The probability generating function (z-transform) of* *DPH*(*α, T*) *is given by*

*P*(*z*) = Σ_{n=0}^{∞} *z*^{n}*p*_{n} = *zα*(*I − zT*)^{−1}*T*^{0}*,* *for* *|z| <* 1*.* (1.18)
Lemma 1.4.9. *If* *Z* *follows* *DPH*(*α, T*)*, the mean and the factorial moments of* *Z* *are respectively given by*

*E*(*Z*) = *α*(*I − T*)^{−1}1 (1.19)

*and* *E*[*Z*(*Z −* 1) *· · ·* (*Z − n* + 1)] = *n*!*αT*^{n−1}(*I − T*)^{−n}1*,* *for* *n* = 2*,*3*, . . . .* (1.20)

*Another expression for the mean, in terms of the stationary vector* *ξ, is given by*

*E*(*Z*) = (*ξT*^{0})^{−1}*.* (1.21)
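As a numerical sanity check of (1.16) and (1.19), for a small two-phase DPH the closed-form mean *α*(*I − T*)^{−1}1 should match the truncated sum Σ *n p*_{n}. A pure-Python sketch with an ad hoc substochastic *T* (the numbers are illustrative, not from any model in this thesis):

```python
# Check (1.19): for a 2-phase DPH, the mean alpha (I - T)^{-1} 1 should
# match the truncated sum of n p_n with p_n = alpha T^{n-1} T^0, cf. (1.16).
# Pure-Python sketch; the representation below is chosen ad hoc.

alpha = [0.6, 0.4]
T = [[0.3, 0.2], [0.1, 0.5]]                 # substochastic
T0 = [1.0 - sum(row) for row in T]           # T^0 = 1 - T 1

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

# Mean via the closed form: solve x (I - T) = alpha, then E(Z) = x . 1.
# For this 2x2 case, apply Cramer's rule to (I - T)^T x^T = alpha^T.
a, b = 1.0 - T[0][0], -T[1][0]
c, d = -T[0][1], 1.0 - T[1][1]
det = a * d - b * c
x = [(alpha[0] * d - b * alpha[1]) / det, (a * alpha[1] - alpha[0] * c) / det]
mean_closed = sum(x)

# Mean via direct summation of n p_n; T's spectral radius is < 1, so the
# tail beyond n = 200 is negligible here.
v, mean_sum = alpha[:], 0.0
for n in range(1, 200):
    mean_sum += n * sum(v[i] * T0[i] for i in range(len(v)))
    v = vec_mat(v, T)
print(abs(mean_closed - mean_sum) < 1e-9)
```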