

1.4 Phase-type distribution


Motivation

WV queues have been studied by incorporating various characteristics of the underlying system.

Queues with WVs arise in several communication systems, including WDM networks. Communication systems carry various types of data and voice packets which have special properties and accordingly need special attention. For example, packets are bursty, and the correlation between packets is high and has to be maintained;

some packets need priority over others and have to be kept separate so that different sets of wavelengths can be allocated on a priority basis; some packets make repeated attempts for service, mostly in mobile networks, and have to be assigned a buffer space for temporary storage until they receive the requested service; also, packets may abandon the system before receiving service because of impatience and have to be discouraged from leaving, to maintain system efficiency. Such features need earnest attention, as they are very common in communication networks. Communication systems with these specific packet features can be modelled as queues with WVs. In the literature, such queues have not been the focus of study. In this thesis, we consider some WV queueing models which incorporate these characteristics of packets. We model these modified WV queues as Markov processes and analyze their performance.

In the following sections, some mathematical concepts relevant to this thesis are discussed.


Markovian models with exponential assumptions become inadequate in instances where the demand for a general arrival or service process arises. Analytic approaches to such models under general distributional assumptions become complicated or essentially intractable. To study such complicated models, the phase-type (PH) distribution is introduced; it maintains the mathematical tractability of the models, reflects their essential qualitative features and provides useful information on their physical behaviour. PH-distributions provide a simple framework to demonstrate how one can extend simple results on exponential distributions to more complex models without losing computational tractability. They are based on the method of phases. The primary theoretical utility of the PH-distribution is that any distribution on the non-negative real numbers can be approximated by a PH-distribution, and the resulting queueing models, such as M/PH/1 and PH/PH/1, can be analyzed by Markovian methods. In practice, such an approximation is limited to inferences warranted by precise mathematical statements and is not recommended for studying features that are highly sensitive to the exact distributional form [135, 136].

To motivate the PH-distribution, let us consider a queueing model with finite capacity. If the system is full upon arrival of a customer, the new arrival is not allowed to join the system, and the customer is said to be lost. Let us define the state in which the new arrival has to leave the system as an absorbing state. From a system administrator's point of view, the loss of a customer is regarded as a bad event and indicates poor performance of the system. To keep track of the loss of a customer, we need to know the distribution of the time until the first loss, i.e., until the system enters the absorbing state. This problem can be addressed by the concept of the PH-distribution [32].

Let $\chi = \{X_t,\ t \ge 0\}$ be a time-homogeneous continuous-time Markov chain with a finite state-space $\{1, 2, \ldots, m+1\}$ and with infinitesimal generator matrix
\[
Q = \begin{pmatrix} T & T^0 \\ \mathbf{0} & 0 \end{pmatrix},
\]
where $T$ is a square matrix of dimension $m$, $T^0$ is a column vector and $\mathbf{0}$ is the zero row vector of appropriate dimension. The first $m$ states $\{1, 2, \ldots, m\}$ are transient, while the state $m+1$ is absorbing. The matrix $T$ is nonsingular (Lemma 1.4.1 below) and describes the transitions until absorption, where $(T)_{ii} < 0$ for $1 \le i \le m$ and $(T)_{ij} \ge 0$ for $1 \le i \ne j \le m$. The matrix $T^0$ is a non-negative $m$-dimensional column vector, grouping the absorption rates from any transient state to the absorbing one, and satisfies $T^0 + T\mathbf{1} = \mathbf{0}$, where $\mathbf{1}$ is the column vector of appropriate dimension with all entries equal to one. Let the initial distribution of $\chi$ be the row vector $\bar{\alpha} = (\alpha, \alpha_{m+1})$, with $\alpha$ being a row vector of dimension $m$ and $\alpha_{m+1} = 1 - \alpha\mathbf{1}$.

Lemma 1.4.1. The states $\{1, 2, \ldots, m\}$ of the Markov chain $\chi$ are transient if and only if the matrix $T$ is nonsingular [120].

Now, let us give the definition of a PH-distribution.

Definition 1.4.1. Let $Z = \inf\{t \ge 0 : X_t = m+1\}$ be the random variable denoting the time until absorption in state $m+1$ of the Markov chain $\chi$. Then $Z$ is said to follow a PH-distribution with parameters $(\alpha, T)$, denoted by $PH(\alpha, T)$, with distribution function
\[
F(t) = P(Z \le t) = 1 - \alpha e^{Tt}\mathbf{1}, \quad t \ge 0. \tag{1.1}
\]
The density function of $Z$ is given by
\[
f(t) = \alpha e^{Tt} T^0, \quad t \ge 0, \tag{1.2}
\]
where the matrix exponential is defined for a square matrix $T$ by
\[
e^{Tt} = \sum_{n=0}^{\infty} \frac{t^n}{n!} T^n. \tag{1.3}
\]

The dimension $m$ is called the order of the distribution $PH(\alpha, T)$ and the states $\{1, 2, \ldots, m\}$ are called phases. The vector $T^0$ is called the exit vector. The distribution $F(t)$ has a jump of height $\alpha_{m+1}$ at $t = 0$. We can assume, without loss of generality, that $\alpha_{m+1} = 0$.
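As a minimal numerical sketch of Definition 1.4.1 (assuming NumPy and SciPy are available; the two-phase representation below is hypothetical), the distribution and density functions can be evaluated directly from (1.1) and (1.2) via the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-phase representation PH(alpha, T).
alpha = np.array([0.6, 0.4])            # initial distribution over the phases
T = np.array([[-3.0, 1.0],
              [0.5, -2.0]])             # transient generator block (nonsingular)
T0 = -T @ np.ones(2)                    # exit vector, since T 1 + T0 = 0

def F(t):
    """Distribution function F(t) = 1 - alpha e^{Tt} 1, Eq. (1.1)."""
    return 1.0 - alpha @ expm(T * t) @ np.ones(2)

def f(t):
    """Density f(t) = alpha e^{Tt} T0, Eq. (1.2)."""
    return alpha @ expm(T * t) @ T0

print(F(1.0), f(1.0))
```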

The infinitesimal generator $Q = T + T^0\alpha$ describes a new Markov chain with state-space $\{1, 2, \ldots, m\}$ in which the absorbing state $m+1$ is an instantaneous return state.


That is, upon absorption in the Markov chain $\chi$, the chain is instantaneously restarted by selecting a new initial state (independently of the past) using the same probability vector $\alpha$. This resetting is repeated indefinitely. The epochs of successive visits to the 'instantaneous' state $m+1$ form a renewal process with inter-renewal time distribution $F(t)$; that is, the inter-renewal times of the new Markov chain with generator $Q$ follow a PH-distribution.
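The renewal description also suggests a direct way to simulate draws of $Z$: run the transient chain phase by phase until absorption. The following sketch uses the same hypothetical parameters as above; `sample_ph` is an illustrative helper of our own, not a library routine.

```python
import numpy as np

def sample_ph(alpha, T, rng):
    """Draw one sample of Z ~ PH(alpha, T) by simulating the chain."""
    m = T.shape[0]
    T0 = -T @ np.ones(m)                 # absorption rates
    rates = -np.diag(T)                  # total exit rate of each phase
    i = rng.choice(m, p=alpha)           # initial phase drawn from alpha
    z = 0.0
    while True:
        z += rng.exponential(1.0 / rates[i])         # sojourn time in phase i
        jump = np.append(T[i] / rates[i], T0[i] / rates[i])
        jump[i] = 0.0                    # no self-transitions in the jump chain
        j = rng.choice(m + 1, p=jump)
        if j == m:                       # absorbed: one inter-renewal time done
            return z
        i = j

rng = np.random.default_rng(0)
alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0], [0.5, -2.0]])
samples = [sample_ph(alpha, T, rng) for _ in range(10000)]
print(np.mean(samples))                  # close to -alpha T^{-1} 1, see Eq. (1.5)
```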

Markov chains are classified depending on different properties of their states.

Definition 1.4.2. A Markov chain in which every state leads back to itself and also to every other state is called an irreducible Markov chain; otherwise, the Markov chain is said to be reducible.

Definition 1.4.3. Let $p^{n}_{ii}$ be the probability that a Markov chain starting from a state $i$ of its state-space will be again in state $i$ after $n$ additional transitions. The state $i$ of the Markov chain has period $d_i$ if $d_i = \text{g.c.d.}\{n : p^{n}_{ii} > 0\}$, that is, $d_i$ is the greatest common divisor of such $n$'s.

In an irreducible Markov chain all states have a common period $d$; if $d > 1$ the chain is said to be periodic, whereas if $d = 1$ the chain is called aperiodic.

Definition 1.4.4. Let $f_{ii}$ be the probability that a Markov chain starting at state $i$ will ever return to $i$. Then the state $i$ is called a recurrent state if $f_{ii} = 1$ and transient if $f_{ii} < 1$. A Markov chain is called a recurrent chain if all of its states are recurrent, and a transient chain if all of its states are transient.

An irreducible Markov chain is necessarily either a transient chain or a recurrent chain.

A state of a Markov chain is an absorbing state if the probability of moving out of that state is zero.

Definition 1.4.5. A non-absorbing recurrent state is said to be positive recurrent or null recurrent according as the mean return time to that state is finite or infinite. A Markov chain is a positive recurrent chain if all its states are positive recurrent, and a null recurrent chain if all its states are null recurrent.


The corresponding transition matrices of these Markov chains are named accordingly.

A standard result states that an irreducible Markov chain on a finite state-space is always positive recurrent. Another result states that an irreducible and positive recurrent Markov chain has a unique stationary distribution [72].

For a PH-distribution $PH(\alpha, T)$, the generator $Q$ is assumed to be irreducible. The Markov chain with generator $Q$ is positive recurrent, as its state-space is finite, and so its stationary distribution exists. The stationary probability vector $\xi$ of $Q$ is obtained by solving the equations $\xi Q = \mathbf{0}$, $\xi\mathbf{1} = 1$. The rate of arrival, i.e., the rate of successive visits to the instantaneous state, is given by $\lambda = \xi T^0$, which is always positive. The following are some important properties of a PH-distribution.
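Numerically, $\xi$ can be obtained by solving the linear system $\xi Q = \mathbf{0}$ together with the normalization $\xi\mathbf{1} = 1$; a small sketch with the same hypothetical parameters as before:

```python
import numpy as np

alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0], [0.5, -2.0]])
T0 = -T @ np.ones(2)
Q = T + np.outer(T0, alpha)              # generator of the reset chain

# Stack xi Q = 0 (transposed) with the normalization xi 1 = 1 and solve
# in the least-squares sense; the solution is the stationary vector.
A = np.vstack([Q.T, np.ones((1, 2))])
b = np.array([0.0, 0.0, 1.0])
xi = np.linalg.lstsq(A, b, rcond=None)[0]
lam = xi @ T0                            # renewal (arrival) rate
print(xi, lam)
```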

Lemma 1.4.2. The Laplace-Stieltjes transform of $PH(\alpha, T)$ with distribution function $F(t)$ is given by
\[
\Phi(s) = \int_0^{\infty} e^{-st}\, dF(t) = \alpha(sI - T)^{-1} T^0, \quad \text{for } \operatorname{Re}(s) \ge 0. \tag{1.4}
\]

Lemma 1.4.3. If $Z$ follows $PH(\alpha, T)$, the mean and the raw moments of $Z$ are respectively given by

\[
E(Z) = -\alpha T^{-1}\mathbf{1} \tag{1.5}
\]
and
\[
E(Z^n) = (-1)^n n!\, \alpha T^{-n}\mathbf{1}, \quad \text{for } n = 2, 3, \ldots. \tag{1.6}
\]
Another expression for the mean, in terms of the stationary vector $\xi$, is given by
\[
E(Z) = \left(\xi T^0\right)^{-1}. \tag{1.7}
\]
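A sketch of Lemma 1.4.3 under the same hypothetical representation; the mean from (1.5) can be cross-checked against $(\xi T^0)^{-1}$ from (1.7):

```python
import numpy as np
from math import factorial

alpha = np.array([0.6, 0.4])
T = np.array([[-3.0, 1.0], [0.5, -2.0]])
Tinv = np.linalg.inv(T)
one = np.ones(2)

mean = -alpha @ Tinv @ one                       # Eq. (1.5)

def raw_moment(n):
    """E(Z^n) = (-1)^n n! alpha T^{-n} 1, Eq. (1.6)."""
    return (-1)**n * factorial(n) * alpha @ np.linalg.matrix_power(Tinv, n) @ one

print(mean, raw_moment(2))
```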

Lemma 1.4.4. For a $PH(\alpha, T)$ distribution, the remaining time $Z$ until absorption, given that the current phase is $i$, satisfies
\[
P(Z \le t \mid X_0 = i) = 1 - e_i e^{Tt}\mathbf{1}, \tag{1.8}
\]
which has the same form as a $PH(e_i, T)$ distribution, with $e_i$ denoting the $i$th canonical row base vector. This is called the conditional memoryless property of the PH-distribution.

Lemma 1.4.5. Let $Z_i$ follow $PH(\alpha^{(i)}, T^{(i)})$ of order $m_i$, for $i = 1, 2$. Then the closure property gives that $Z = Z_1 + Z_2$ follows a distribution $PH(\alpha, T)$ of order $m = m_1 + m_2$ with representation
\[
\alpha_k =
\begin{cases}
\alpha^{(1)}_k, & 1 \le k \le m_1; \\
\alpha^{(1)}_{m_1+1}\, \alpha^{(2)}_{k-m_1}, & m_1 + 1 \le k \le m
\end{cases}
\quad \text{and} \quad
T = \begin{pmatrix} T^{(1)} & T^{0(1)}\alpha^{(2)} \\ 0 & T^{(2)} \end{pmatrix},
\]
where $T^{0(1)} = -T^{(1)}\mathbf{1}$.
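A sketch of the concatenation in Lemma 1.4.5 (`ph_sum` is a hypothetical helper; for simplicity we take $\alpha^{(1)}_{m_1+1} = 1 - \alpha^{(1)}\mathbf{1}$):

```python
import numpy as np

def ph_sum(a1, T1, a2, T2):
    """Representation of Z1 + Z2 from PH(a1, T1) and PH(a2, T2), Lemma 1.4.5."""
    m1, m2 = T1.shape[0], T2.shape[0]
    T0_1 = -T1 @ np.ones(m1)                         # exit vector of the first block
    alpha = np.concatenate([a1, (1.0 - a1.sum()) * a2])
    T = np.block([[T1, np.outer(T0_1, a2)],
                  [np.zeros((m2, m1)), T2]])
    return alpha, T
```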

Lemma 1.4.6. Let $Z_i$ follow $PH(\alpha^{(i)}, T^{(i)})$ of order $m_i$ for $i = 1, 2$, and let $p \in [0, 1]$. Then the mixture $Z$, which equals $Z_1$ with probability $p$ and $Z_2$ with probability $1-p$, follows a distribution $PH(\alpha, T)$ of order $m = m_1 + m_2$ with representation
\[
\alpha = \left(p\,\alpha^{(1)},\ (1-p)\,\alpha^{(2)}\right)
\quad \text{and} \quad
T = \begin{pmatrix} T^{(1)} & 0 \\ 0 & T^{(2)} \end{pmatrix}.
\]
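The corresponding construction (hypothetical helper `ph_mixture`): the sub-generators are simply placed on the block diagonal, and the initial vectors are weighted by $p$ and $1-p$:

```python
import numpy as np

def ph_mixture(p, a1, T1, a2, T2):
    """Representation of the p-mixture of PH(a1, T1) and PH(a2, T2), Lemma 1.4.6."""
    m1, m2 = T1.shape[0], T2.shape[0]
    alpha = np.concatenate([p * a1, (1.0 - p) * a2])
    T = np.block([[T1, np.zeros((m1, m2))],
                  [np.zeros((m2, m1)), T2]])
    return alpha, T
```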

Lemma 1.4.7. Let $Z_i$ follow $PH(\alpha^{(i)}, T^{(i)})$ of order $m_i$ for $i = 1, 2$, and define $Z = \min(Z_1, Z_2)$. Then $Z$ follows $PH(\alpha, T)$ of order $m = m_1 m_2$ with representation
\[
\alpha = \alpha^{(1)} \otimes \alpha^{(2)}
\quad \text{and} \quad
T = T^{(1)} \oplus T^{(2)},
\]
in terms of the Kronecker compositions '$\otimes$' and '$\oplus$' (given below, after the special cases).
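In code, this is a one-liner once the Kronecker compositions (defined later in this section) are available; `kron_sum` below is a hypothetical helper built from `numpy.kron`:

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I + I ⊗ B for square A, B."""
    return np.kron(A, np.eye(B.shape[0])) + np.kron(np.eye(A.shape[0]), B)

def ph_min(a1, T1, a2, T2):
    """Representation of min(Z1, Z2), Lemma 1.4.7."""
    return np.kron(a1, a2), kron_sum(T1, T2)
```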

The proofs of the above lemmas can be found in Breuer [32]. PH-representations of some special distributions are given as follows:

Exponential distribution: A continuous random variable $X$ follows an exponential distribution with parameter $\lambda > 0$, denoted by $\mathrm{Exp}(\lambda)$, if its probability density function (p.d.f.) is
\[
f(t) =
\begin{cases}
\lambda e^{-\lambda t}, & t \ge 0; \\
0, & t < 0.
\end{cases} \tag{1.9}
\]
The exponential distribution has the 'memoryless property', i.e., if $X$ denotes an interarrival time, the time we must wait for a new arrival is statistically independent of how long we have already waited. The PH-representation of $\mathrm{Exp}(\lambda)$ is
\[
\alpha = 1, \quad T = -\lambda \quad \text{and} \quad T^0 = \lambda. \tag{1.10}
\]


Erlang-k distribution: A continuous random variable $X$ is said to follow an Erlang-$k$ distribution with parameters $(\lambda, k)$, denoted by Erl-$k(\lambda)$, for $\lambda > 0$ and $k \in \{1, 2, \ldots\}$, if its density function is
\[
f(t) =
\begin{cases}
\dfrac{\lambda^k t^{k-1}}{(k-1)!}\, e^{-\lambda t}, & t \ge 0; \\
0, & t < 0.
\end{cases} \tag{1.11}
\]
The exponential distribution is a special case of the Erlang-$k$ distribution with $k = 1$. Many processes in nature can be divided into sequential phases. If the times spent by the process in each of $k$ sequential phases are independent and identically exponentially distributed, then the overall time has an Erlang-$k$ distribution. The PH-representation of Erl-$k(\lambda)$ is given by
\[
\alpha = (1, 0, \ldots, 0), \quad
T = \begin{pmatrix}
-\lambda & \lambda & & \\
& \ddots & \ddots & \\
& & -\lambda & \lambda \\
& & & -\lambda
\end{pmatrix}_{k \times k}
\quad \text{and} \quad
T^0 = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \lambda \end{pmatrix}. \tag{1.12}
\]
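A sketch of building the Erl-$k(\lambda)$ representation (1.12) (hypothetical helper `erlang_ph`, assuming NumPy):

```python
import numpy as np

def erlang_ph(k, lam):
    """PH-representation of Erl-k(lambda), Eq. (1.12)."""
    alpha = np.zeros(k)
    alpha[0] = 1.0                                   # start in the first phase
    T = np.diag(np.full(k, -lam)) + np.diag(np.full(k - 1, lam), 1)
    T0 = -T @ np.ones(k)                             # (0, ..., 0, lambda)^T
    return alpha, T, T0
```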

Hyperexponential-k distribution: A continuous random variable $X$ follows a $k$-phase hyperexponential distribution with parameters $(\alpha_i, \lambda_i)$, $i = 1, 2, \ldots, k$, denoted by Hyp-$k(\alpha_i, \lambda_i)$, with $\sum_{i=1}^{k} \alpha_i = 1$, if it has p.d.f.
\[
f(t) =
\begin{cases}
\sum_{i=1}^{k} \alpha_i \lambda_i e^{-\lambda_i t}, & t \ge 0,\ \alpha_i > 0,\ \lambda_i > 0; \\
0, & t < 0.
\end{cases} \tag{1.13}
\]
If a process experiences one and only one of many alternate phases, instead of sequential phases, each of which is exponentially distributed, then the resulting distribution is hyperexponential. Its PH-representation is given by
\[
\alpha = (\alpha_1, \ldots, \alpha_k), \quad
T = \begin{pmatrix}
-\lambda_1 & & \\
& \ddots & \\
& & -\lambda_k
\end{pmatrix}
\quad \text{and} \quad
T^0 = \begin{pmatrix} \lambda_1 \\ \vdots \\ \lambda_k \end{pmatrix}. \tag{1.14}
\]


Kronecker compositions

Here we give some details on Kronecker compositions of matrices, mainly Kronecker products and Kronecker sums. The Kronecker product of two matrices $A = (a_{ij})_{n \times m}$ and $B = (b_{ij})_{r \times s}$ is given by
\[
A \otimes B =
\begin{pmatrix}
a_{11}B & \cdots & a_{1m}B \\
\vdots & \ddots & \vdots \\
a_{n1}B & \cdots & a_{nm}B
\end{pmatrix}_{nr \times ms}.
\]
If $A$ and $B$ are square matrices, i.e., $n = m$ and $r = s$, then the Kronecker sum is given by $A \oplus B = A \otimes I_r + I_n \otimes B$, where $I_n$ is the identity matrix of dimension $n$. For matrices $A$, $B$, $C$ and $D$, some properties of Kronecker compositions are given below; a numerical check of the last property follows the list.

1. $A \otimes B \ne B \otimes A$ in general, for $A_{n \times m}$, $B_{p \times q}$;

2. $A \otimes (B \pm C) = A \otimes B \pm A \otimes C$, for $A_{n \times m}$, $B_{p \times q}$ and $C_{p \times q}$;

3. $(A \pm B) \otimes C = A \otimes C \pm B \otimes C$, for $A_{n \times m}$, $B_{n \times m}$, $C_{p \times q}$;

4. $(kA) \otimes B = A \otimes (kB) = k(A \otimes B)$, for a scalar $k$ and $A_{n \times m}$, $B_{p \times q}$;

5. $(A \otimes B) \otimes C = A \otimes (B \otimes C) = A \otimes B \otimes C$, for $A_{n \times m}$, $B_{p \times q}$, $C_{r \times s}$;

6. $(A \otimes B)(C \otimes D) = AC \otimes BD$, for $A_{n \times m}$, $B_{p \times q}$, $C_{m \times r}$, $D_{q \times s}$;

7. $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$, if the square matrices $A$ and $B$ are invertible;

8. $e^{A \oplus B} = e^{A} \otimes e^{B}$, for square matrices $A$ and $B$.
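As promised, a quick numerical check of property 8 with hypothetical $2 \times 2$ matrices (assuming SciPy for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 0.5], [0.2, -0.7]])
B = np.array([[-2.0, 1.0], [0.3, -1.5]])
ksum = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B)       # A ⊕ B
print(np.allclose(expm(ksum), np.kron(expm(A), expm(B))))  # expect True
```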

Discrete PH-distribution

Analogous to the definition of the PH-distribution in continuous time, a discrete-time PH (DPH) distribution is defined as the distribution of the time until absorption in a discrete-time Markov chain with $m$ transient states $\{1, 2, \ldots, m\}$ and one absorbing state $m+1$, with transition probability matrix given by
\[
P = \begin{pmatrix} T & T^0 \\ \mathbf{0} & 1 \end{pmatrix}, \tag{1.15}
\]
where $T$ is a substochastic matrix such that $I - T$ is nonsingular and the exit vector $T^0$ satisfies $T^0 = \mathbf{1} - T\mathbf{1}$.

Definition 1.4.6. Let $Z = \min\{n \in \mathbb{N} : X_n = m+1\}$ denote the random variable giving the time until absorption in state $m+1$. The probability distribution of $Z$ is said to be DPH with parameters $(\alpha, T)$, denoted by $DPH(\alpha, T)$, and is given by
\[
p_n = P(Z = n) = \alpha T^{n-1} T^0, \quad n \ge 1 \tag{1.16}
\]
and
\[
P(Z \le n) = 1 - \alpha T^{n}\mathbf{1}. \tag{1.17}
\]

In a DPH distribution, $p_0 = P(Z = 0) = \alpha_{m+1} = 1 - \alpha\mathbf{1}$. We assume $\alpha_{m+1} = 0$ and also that $Q = T + T^0\alpha$ is irreducible. The stationary probability vector $\xi$ of $Q$ satisfies $\xi Q = \xi$, $\xi\mathbf{1} = 1$. The rate of arrival is given by $\lambda = \xi T^0$. Some properties of a DPH distribution are given below.
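A sketch of Definition 1.4.6 with a hypothetical two-phase substochastic $T$ (assuming NumPy):

```python
import numpy as np

alpha = np.array([0.7, 0.3])
T = np.array([[0.5, 0.3],
              [0.2, 0.6]])                           # substochastic, I - T nonsingular
T0 = np.ones(2) - T @ np.ones(2)                     # exit vector, T0 = 1 - T 1

def pmf(n):
    """p_n = alpha T^{n-1} T0, Eq. (1.16), for n >= 1."""
    return alpha @ np.linalg.matrix_power(T, n - 1) @ T0

def cdf(n):
    """P(Z <= n) = 1 - alpha T^n 1, Eq. (1.17)."""
    return 1.0 - alpha @ np.linalg.matrix_power(T, n) @ np.ones(2)

print(pmf(1), cdf(5))
```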

Lemma 1.4.8. The probability generating function, or z-transform, of $DPH(\alpha, T)$ is given by
\[
P(z) = \sum_{n=0}^{\infty} z^n p_n = z\,\alpha(I - zT)^{-1} T^0, \quad \text{for } |z| < 1. \tag{1.18}
\]

Lemma 1.4.9. If $Z$ follows $DPH(\alpha, T)$, the mean and the factorial moments of $Z$ are respectively given by

\[
E(Z) = \alpha(I - T)^{-1}\mathbf{1} \tag{1.19}
\]
and
\[
E\!\left(Z(Z-1)\cdots(Z-n+1)\right) = n!\, \alpha T^{n-1}(I - T)^{-n}\mathbf{1}, \quad \text{for } n = 2, 3, \ldots. \tag{1.20}
\]
Another expression for the mean, in terms of the stationary vector $\xi$, is given by
\[
E(Z) = \left(\xi T^0\right)^{-1}. \tag{1.21}
\]
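Finally, a sketch of Lemma 1.4.9 under the same hypothetical DPH representation; the mean from (1.19) can be cross-checked against $(\xi T^0)^{-1}$ from (1.21):

```python
import numpy as np
from math import factorial

alpha = np.array([0.7, 0.3])
T = np.array([[0.5, 0.3], [0.2, 0.6]])
I = np.eye(2)
one = np.ones(2)

mean = alpha @ np.linalg.inv(I - T) @ one            # Eq. (1.19)

def factorial_moment(n):
    """E[Z(Z-1)...(Z-n+1)] = n! alpha T^{n-1} (I-T)^{-n} 1, Eq. (1.20)."""
    return (factorial(n) * alpha
            @ np.linalg.matrix_power(T, n - 1)
            @ np.linalg.matrix_power(np.linalg.inv(I - T), n) @ one)

print(mean, factorial_moment(2))
```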