Non-Markovian Queueing Systems

J. MEDHI , in Stochastic Models in Queueing Theory (Second Edition), 2003

6.5 Queues with Finite Input Source: M/G/1//N System

Consider the following situation. There are a number of machines in an establishment that break down after being in operation for a random duration. A machine that breaks down is repaired by a single repairman, and when the repairman is busy repairing a machine, other broken machines form a queue in the service facility and wait for repair. Once repaired, the machine starts working again, and so on. A machine in an on-period (working) is said to be at the source. Machines in an off-period (broken down) are said to be in the service facility, with one machine under repair with the repairman (server) and the others waiting to be repaired (served). A machine is thus either at the source (working) or at the service facility (failed). This is known as a machine interference problem.

Let N be the total number of machines. Assume that the lifetime of each machine is independent exponential with parameter λ; that is, the probability that a unit at the source arrives at the service facility during an infinitesimal interval Δt is λΔt. Assume that the service time has a general distribution with DF B(·), LST B*(·), and mean b. The model is denoted by M/G/1//N.

One is interested in the distribution of the queue size at the service facility as well as performance measures of the system in steady state. Denote

R: response time of a unit (queueing time plus service time) in the service facility

γ: throughput of the system (mean number of units served per unit time)

p0: probability that the server is idle (no unit in the service facility) at an arbitrary time

L: number of units in the service facility at an arbitrary time

ρ: long-run fraction of time that the server is busy, ρ = 1 − p0

I: length of a server idle period, and T: length of a server busy period

a = λb. Then

(6.5.1) γ = ρ/b = (1 − p0)/b,

also

γ = N / (E(R) + 1/λ).

Thus,

E(R) = Nb/(1 − p0) − 1/λ.

Further, the arrival rate to the service facility equals the throughput (departure rate from it), so that

γ = λ[N − E(L)].

It follows that

E(L) = N − γ/λ = γE(R)

(which is a relationship of the type of Little's Law).

Let π0 be the probability that a busy period terminates after completion of service of a unit (the unit being the last to be served in the busy period). Then the mean number of units served during a busy period is 1/π0, and so E(T) = b/π0. Using

(6.5.2) p0 = E(I)/(E(I) + E(T)), one gets p0 = π0/(π0 + Nλb),

so γ, E(T), and E(L) are also expressible in terms of π0.

One has to find the distribution {π0, π1, …, π(N−1)}, where πk is the probability that k units are left behind in the service facility immediately after completion of service of a unit. Denote

Ln = number of units in the service facility immediately after the service completion of the nth unit, n = 1, 2, …

The sequence {Ln, n = 1, 2, …} constitutes an embedded Markov chain having transition probabilities

(6.5.3) pij = Pr{Ln = j | Ln−1 = i}, given by

pij = C(N−1, j) ∫_0^∞ e^{−(N−1−j)λx} (1 − e^{−λx})^j dB(x), for i = 0, 0 ≤ j ≤ N−1,

pij = C(N−i, j−i+1) ∫_0^∞ e^{−(N−1−j)λx} (1 − e^{−λx})^{j−i+1} dB(x), for i ≥ 1, j ≥ i−1,

pij = 0, for i ≥ 1, 0 ≤ j < i−1,

where C(n, k) denotes the binomial coefficient.

Now

πj = lim_{m→∞} pij^(m) = lim_{m→∞} Pr{L(n+m) = j | Ln = i}

exist for the irreducible, aperiodic Markov chain {Ln} on the state space {0, 1, …, N − 1}.

From the ergodic theorem of Markov chains (Theorem 1.1, Section 1.2.2.3), we see that πj are given as solutions of

(6.5.4) πj = Σ_{i=0}^{N−1} πi pij, 0 ≤ j ≤ N−1, and Σ_{j=0}^{N−1} πj = 1

(see also the treatment of M/G/1 in Section 6.3.1). Here, however, the pgf of {πj} cannot be put in an explicit form; an ingenious method has been put forward to find {πj}.

For details refer to Takagi (1993, Vol. II, Section 4.1).
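To illustrate what a direct numerical solution of (6.5.4) involves, here is a sketch (ours, not from the text; function names and the use of power iteration are our own choices). It specializes to exponential service, B(x) = 1 − e^{−μx}, for which the integrals in (6.5.3) have closed forms via the binomial theorem.

```python
from math import comb

def embedded_chain_matrix(N, lam, mu):
    """Transition matrix (6.5.3) for M/G/1//N, specialized to
    exponential service B(x) = 1 - exp(-mu*x)."""
    def I(a, m):
        # mu * int_0^inf e^{-a*lam*x} (1 - e^{-lam*x})^m e^{-mu*x} dx,
        # expanded with the binomial theorem
        return mu * sum(comb(m, r) * (-1) ** r / ((a + r) * lam + mu)
                        for r in range(m + 1))
    P = [[0.0] * N for _ in range(N)]
    for j in range(N):                    # i = 0: an arrival initiates service
        P[0][j] = comb(N - 1, j) * I(N - 1 - j, j)
    for i in range(1, N):                 # i >= 1: j - i + 1 arrivals needed
        for j in range(i - 1, N):
            P[i][j] = comb(N - i, j - i + 1) * I(N - 1 - j, j - i + 1)
    return P

def stationary(P, iters=5000):
    """Crude power iteration for the stationary vector of a small chain."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

For N = 2, λ = 0.5, μ = 1 this yields π0 = 2/3, agreeing with a direct calculation for the two-state embedded chain (both rows equal μ/(μ+λ), λ/(μ+λ)).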

It is found that π 0 is given by

π0 = 1 / Σ_{k=0}^{N−1} C(N−1, k) (1/ζk),

where

(6.5.5) ζ0 ≡ 1, ζk = Π_{i=1}^{k} B*(iλ)/(1 − B*(iλ)), k = 1, 2, …, N−1.

Once π 0 is found, one can find the performance measures as described above.
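As a sketch of that final step (our construction, not the text's): given the LST B*, the quantities ζk and π0 follow from (6.5.5), and then p0, γ, E(R), and E(L) follow mechanically from (6.5.2), (6.5.1), and the relations above.

```python
from math import comb

def mg1n_measures(N, lam, b, B_star):
    """Steady-state measures of M/G/1//N from pi0; B_star is the
    service-time LST, b its mean."""
    zeta = [1.0]                                 # zeta_0 = 1, then (6.5.5)
    for k in range(1, N):
        zeta.append(zeta[-1] * B_star(k * lam) / (1.0 - B_star(k * lam)))
    pi0 = 1.0 / sum(comb(N - 1, k) / zeta[k] for k in range(N))
    p0 = pi0 / (pi0 + N * lam * b)               # (6.5.2)
    gamma = (1.0 - p0) / b                       # (6.5.1)
    ER = N * b / (1.0 - p0) - 1.0 / lam          # mean response time
    EL = N - gamma / lam                         # mean number in facility
    return {"pi0": pi0, "p0": p0, "gamma": gamma, "ER": ER, "EL": EL}
```

With exponential service (B*(s) = μ/(μ + s), b = 1/μ), N = 2, λ = 0.5, μ = 1, this gives p0 = 0.4, matching the classical M/M/1//2 result, and the Little-type identity E(L) = γE(R) holds exactly.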

Note:

Whereas in our usual notation λ is taken as the rate of arrival from an infinite source, here λ is the rate of arrival of each individual unit from the source (to the service facility). Thus, here E(I) = 1/(Nλ). The limiting case N → ∞ is considered next.
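The flow-balance relation γ = λ[N − E(L)] can also be checked by simulation. The sketch below is ours: exponential lifetimes (rate λ each) and a deterministic repair time d (so the service distribution is general, not Markovian), estimating both sides of the relation from one sample path.

```python
import random

def simulate(N, lam, d, T, seed=1):
    """Machine-interference sample path: exponential lifetimes (rate lam
    per working machine), deterministic repair time d, single repairman.
    Returns (throughput, time-average number in the service facility)."""
    rng = random.Random(seed)
    t, broken, served, area = 0.0, 0, 0, 0.0
    service_end = float("inf")
    while t < T:
        working = N - broken
        next_fail = (t + rng.expovariate(working * lam)
                     if working > 0 else float("inf"))
        t_next = min(next_fail, service_end, T)
        area += broken * (t_next - t)          # accumulate integral of L(t)
        t = t_next
        if t >= T:
            break
        if service_end <= next_fail:           # a repair completes
            broken -= 1
            served += 1
            service_end = t + d if broken > 0 else float("inf")
        else:                                  # a machine fails
            broken += 1
            if broken == 1:
                service_end = t + d
    return served / T, area / T
```

For N = 5, λ = 0.2, d = 0.5, the estimates of γ and λ(N − E(L)) agree to within sampling error over a long horizon.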

Limiting Case: M/G/1 System

Taking limits as N → ∞ and λ → 0 so that Nλ = λ′ has a fixed finite value, one can recover the expressions for the M/G/1 system. Taking limits, it can be seen that

(6.5.6) π0 = 1 − ρ + (λ′/N)[λ′b^(2)/(2(1 − ρ)) + b] + o(1/N),

where ρ = λ′b and b^(2) denotes the second moment of the service time.

And the Pollaczek-Khinchin mean value formula for response time is given by

(6.5.7) E(R) = λ′b^(2)/(2(1 − ρ)) + b

(Takagi, 1993).
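As a quick consistency check (ours): with exponential service, b = 1/μ and b^(2) = 2/μ², and (6.5.7) must reduce to the familiar M/M/1 mean response time 1/(μ − λ).

```python
def pk_mean_response(lam, b, b2):
    """Pollaczek-Khinchin mean response time (6.5.7):
    E(R) = lam*b2 / (2*(1 - rho)) + b, with rho = lam*b < 1."""
    rho = lam * b
    if rho >= 1.0:
        raise ValueError("unstable: rho >= 1")
    return lam * b2 / (2.0 * (1.0 - rho)) + b

# exponential service with rate mu = 1 and lam = 0.7:
# pk_mean_response(0.7, 1.0, 2.0) should equal 1/(1 - 0.7)
```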

URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500060

Queueing Systems

Mark A. Pinsky , Samuel Karlin , in An Introduction to Stochastic Modeling (Fourth Edition), 2011

Appendix

We sketch a proof of the equivalence between the limiting queue-size distribution and the limiting distribution of the embedded Markov chain in an M/G/1 model. Beginning at t = 0, let ηn denote the instants when the queue size X(t) increases by one (an arrival), and let ξn denote the instants when X(t) decreases by one (a departure). Let Yn = X(ηn−) denote the queue length immediately prior to an arrival, and let Xn = X(ξn+) denote the queue length immediately after a departure. For any queue length i and any time t, the number of visits of Yn to i up to time t differs from the number of visits of Xn to i by at most one. Therefore, in the long run the average number of visits per unit time of Yn to i must equal that of Xn to i, which is πi, the stationary distribution of the Markov chain {Xn}. Thus, we need only show that the limiting distribution of {X(t)} is the same as that of {Yn}, which is X(t) just prior to an arrival. But because the arrivals are Poisson, and arrivals in disjoint time intervals are independent, X(t) is independent of an arrival that occurs at time t. It follows that {X(t)} and {Yn} have the same limiting distribution, and therefore {X(t)} and the embedded Markov chain {Xn} have the same limiting distribution.
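The counting argument can be checked numerically. In the sketch below (ours), an M/M/1 path is generated event by event using memorylessness; for every level i, the number of visits of Yn to i (arrivals finding i) and of Xn to i (departures leaving i) indeed differ by at most one, on every sample path.

```python
import random

def crossing_counts(lam, mu, n_events=20000, seed=7):
    """Count, for each level i, visits of the pre-arrival state Y_n = i
    and of the post-departure state X_n = i along an M/M/1 sample path."""
    rng = random.Random(seed)
    x = 0
    up, down = {}, {}
    for _ in range(n_events):
        # when x = 0 the next event is necessarily an arrival; otherwise
        # it is an arrival with probability lam/(lam + mu)
        if x == 0 or rng.random() < lam / (lam + mu):
            up[x] = up.get(x, 0) + 1        # an arrival finds level x
            x += 1
        else:
            x -= 1
            down[x] = down.get(x, 0) + 1    # a departure leaves level x
    return up, down
```

Each arrival finding level i is an up-crossing of the edge (i, i+1) and each departure leaving i is a down-crossing of the same edge; consecutive crossings of an edge must alternate, which is exactly the "differs by at most one" claim.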

URL: https://www.sciencedirect.com/science/article/pii/B9780123814166000095

Markov Processes

Scott L. Miller , Donald Childers , in Probability and Random Processes (Second Edition), 2012

9.4 Continuous Time Markov Processes

In this section, we investigate Markov processes where the time variable is continuous. In particular, most of our attention will be devoted to the so-called birth–death processes, which are a generalization of the Poisson counting process studied in the previous chapter. To start with, consider a random process X(t) whose state space is either finite or countably infinite, so that we can represent the states of the process by the set of integers,

X(t) ∈ {…, −3, −2, −1, 0, 1, 2, 3, …}. Any process of this sort that is a Markov process has the interesting property that the time between changes of state is an exponential random variable. To see this, define Ti to be the time between the ith and the (i+1)th change of state, and let hi(t) be the complement of its CDF, hi(t) = Pr(Ti > t). Then, for t > 0, s > 0,

(9.29) hi(t + s) = Pr(Ti > t + s) = Pr(Ti > t + s, Ti > s) = Pr(Ti > t + s | Ti > s) Pr(Ti > s).

Due to the Markovian nature of the process, Pr(Ti > t + s | Ti > s) = Pr(Ti > t) and hence the previous equation simplifies to

(9.30) hi(t + s) = hi(t) hi(s).

The only function that satisfies this type of relationship for arbitrary t and s is an exponential function of the form hi(t) = e^(−ρi t) for some constant ρi. Furthermore, for this function to be a valid probability, the constant ρi must not be negative. From this, the PDF of the time between changes of state is easily found to be fTi(t) = ρi e^(−ρi t) u(t).

As with discrete-time Markov chains, the continuous-time Markov process can be described by its transition probabilities.

Definition 9.11: Define pi,j(t) = Pr(X(t0 + t) = j | X(t0) = i) to be the transition probability for a continuous time Markov process. If this probability does not depend on t0, then the process is said to be a homogeneous Markov process.

Unless otherwise stated, we assume for the rest of this chapter that all continuous time Markov processes are homogeneous. The transition probabilities, pi, j (t), are somewhat analogous to the n-step transition probabilities used in the study of discrete-time processes and as a result, these probabilities satisfy a continuous time version of the Chapman–Kolmogorov equations:

(9.31) pi,j(t + s) = Σk pi,k(t) pk,j(s), for t, s > 0.
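For the two-state chain (rate α for the 0 → 1 transition and β for 1 → 0), P(t) has a well-known closed form, and (9.31) can be verified directly; the sketch below (ours) does exactly that.

```python
from math import exp

def P(t, alpha, beta):
    """Closed-form transition matrix of the two-state chain:
    P(t) = Pi + exp(-(alpha+beta)*t) * (I - Pi), where both rows of the
    limiting matrix Pi are (beta/s, alpha/s), s = alpha + beta."""
    s = alpha + beta
    e = exp(-s * t)
    return [[beta / s + (alpha / s) * e, (alpha / s) * (1.0 - e)],
            [(beta / s) * (1.0 - e), alpha / s + (beta / s) * e]]

def matmul(A, B):
    """2x2 matrix product, enough to check P(t + s) = P(t) P(s)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

Because P(t) = Π + e^(−st)(I − Π) with Π a stochastic projection (ΠΠ = Π, Π(I − Π) = 0), the Chapman–Kolmogorov identity holds exactly, not just numerically.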

One of the most commonly studied class of continuous time Markov processes is the birth–death process. These processes get their name from applications in the study of biological systems, but they are also commonly used in the study of queueing theory, and many other applications. The birth–death process is similar to the discrete-time random walk studied in the previous section in that when the process changes states, it either increases by 1 or decreases by 1. As with the Poisson counting process, the general class of birth–death processes can be described by the transition probabilities over an infinitesimal period of time, Δt. For a birth–death process,

(9.32) pi,j(Δt) = λiΔt + o(Δt) if j = i+1; μiΔt + o(Δt) if j = i−1; 1 − (λi + μi)Δt + o(Δt) if j = i; o(Δt) if j ≠ i−1, i, i+1.

The parameter λi is called the birth rate while μi is the death rate when the process is in state i. In the context of queueing theory, λi and μi are referred to as the arrival and departure rates, respectively.

Similar to what was done with the Poisson counting process, by letting s = Δt in Equation (9.31) and then applying the infinitesimal transition probabilities, a set of differential equations can be developed that will allow us to solve for the general transition probabilities. From Equation (9.31),

(9.33) pi,j(t + Δt) = Σk pi,k(t) pk,j(Δt) = (λ_{j−1}Δt) pi,j−1(t) + (1 − (λj + μj)Δt) pi,j(t) + (μ_{j+1}Δt) pi,j+1(t) + o(Δt).

Rearranging terms and dividing by Δt produces

(9.34) [pi,j(t + Δt) − pi,j(t)]/Δt = λ_{j−1} pi,j−1(t) − (λj + μj) pi,j(t) + μ_{j+1} pi,j+1(t) + o(Δt)/Δt.

Finally, passing to the limit as Δt → 0 results in

(9.35) (d/dt) pi,j(t) = λ_{j−1} pi,j−1(t) − (λj + μj) pi,j(t) + μ_{j+1} pi,j+1(t).

This set of equations is referred to as the forward Kolmogorov equations. One can follow a similar procedure (see Exercise 9.32) to develop a slightly different set of equations known as the backward Kolmogorov equations,

(9.36) (d/dt) pi,j(t) = λi pi+1,j(t) − (λi + μi) pi,j(t) + μi pi−1,j(t).

For all but the simplest examples, it is very difficult to find a closed-form solution for this system of equations. However, the Kolmogorov equations can lend some insight into the behavior of the system. For example, consider the steady-state distribution of the Markov process. If a steady state exists, we would expect that as t → ∞, pi,j(t) → πj independent of i, and also that dpi,j(t)/dt → 0. Plugging these simplifications into the forward Kolmogorov equations leads to

(9.37) λ_{j−1} π_{j−1} − (λj + μj) πj + μ_{j+1} π_{j+1} = 0.

These equations are known as the global balance equations. From them, the steady-state distribution can be found (if it exists). The solution to the balance equations is surprisingly easy to obtain. First, we rewrite the difference equation in the more symmetric form

(9.38) λj πj − μ_{j+1} π_{j+1} = λ_{j−1} π_{j−1} − μj πj.

Next, assume that the Markov process is defined on the states j = 0, 1, 2, …. Then the previous equation must be adjusted for the end point j = 0 according to (assuming μ0 = 0, which merely states that there can be no deaths when the population size is zero)

(9.39) λ0 π0 − μ1 π1 = 0.

Combining Equations (9.38) and (9.39) results in

(9.40) λj πj − μ_{j+1} π_{j+1} = 0, j = 0, 1, 2, …,

which leads to the simple recursion

(9.41) π_{j+1} = (λj/μ_{j+1}) πj, j = 0, 1, 2, …,

whose solution is given by

(9.42) πj = π0 Π_{i=1}^{j} λ_{i−1}/μi, j = 1, 2, 3, ….

This gives the πj in terms of π0. In order to determine π0, the constraint that the πj must form a distribution is imposed.

(9.43) Σ_{j=0}^{∞} πj = 1 ⇒ π0 = 1 / (1 + Σ_{j=1}^{∞} Π_{i=1}^{j} λ_{i−1}/μi).

This completes the proof of the following theorem.

Theorem 9.4: For a Markov birth–death process with birth rate λn, n = 0,1,2, …, and death rate μn , n = 1,2, 3, …, the steady-state distribution is given by

(9.44) πk = lim_{t→∞} pi,k(t) = [Π_{i=1}^{k} λ_{i−1}/μi] / [1 + Σ_{j=1}^{∞} Π_{i=1}^{j} λ_{i−1}/μi].

If the series in the denominator diverges, then πk = 0 for every finite k, which indicates that a steady-state distribution does not exist. Likewise, if the series converges, the πk will be nonzero, resulting in a well-behaved steady-state distribution.
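Equation (9.44) translates directly into a few lines of code. The sketch below (ours) truncates the state space at n_max, which is adequate whenever the tail of the distribution is negligible.

```python
def bd_steady_state(birth, death, n_max):
    """Steady-state probabilities (9.42)-(9.43) of a birth-death chain,
    truncated at state n_max. birth(j) = lambda_j, death(j) = mu_j."""
    w = [1.0]                                  # unnormalized pi_j / pi_0
    for j in range(1, n_max + 1):
        w.append(w[-1] * birth(j - 1) / death(j))
    total = sum(w)
    return [x / total for x in w]
```

For constant rates λ < μ this reproduces the geometric distribution derived in Example 9.12 below.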

Example 9.12 (The M/M/1 Queue)

In this example, we consider the birth–death process with constant birth rate and constant death rate. In particular, we take

λn = λ, n = 0, 1, 2, …, and μ0 = 0, μn = μ, n = 1, 2, 3, ….

This model is commonly used in the study of queueing systems and, in that context, is referred to as the M/M/1 queue. In this nomenclature, the first "M" refers to the arrival process as being Markovian, the second "M" refers to the departure process as being Markovian, and the "1" is the number of servers. So this is a single server queue, where the interarrival time of new customers is an exponential random variable with mean 1/λ and the service time for each customer is exponential with mean 1/μ. For the M/M/1 queueing system, λ_{i−1}/μi = λ/μ for all i, so that

1 + Σ_{j=1}^{∞} Π_{i=1}^{j} λ_{i−1}/μi = Σ_{j=0}^{∞} (λ/μ)^j = 1/(1 − λ/μ), for λ < μ.

The resulting steady-state distribution of the queue size is then

πk = (λ/μ)^k (1 − λ/μ), k = 0, 1, 2, …, for λ < μ.

Hence, if the arrival rate is less than the departure rate, the queue size will have a steady state. It makes sense that if the arrival rate is greater than the departure rate, then the queue size will tend to grow without bound.

Example 9.13 (The M/M/∞ Queue)

Next, suppose the last example is modified so that there are an infinite number of servers available to simultaneously provide service to all customers in the system. In that case, no customer ever waits in line, and the process X(t) now counts the number of customers in the system (receiving service) at time t. As before, we take the arrival rate to be constant, λn = λ, but now the departure rate needs to be proportional to the number of customers in service, μn = nμ. In this case, λ_{i−1}/μi = λ/(iμ) and

1 + Σ_{j=1}^{∞} Π_{i=1}^{j} λ_{i−1}/μi = 1 + Σ_{j=1}^{∞} Π_{i=1}^{j} λ/(iμ) = 1 + Σ_{j=1}^{∞} (λ/μ)^j / j! = e^(λ/μ).

Note that the series converges for any λ and μ, and hence the M/M/∞ queue will always have a steady-state distribution given by

πk = [(λ/μ)^k / k!] e^(−λ/μ).
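A numerical check (ours) of this Poisson form, using the recursion (9.41) directly with birth rate λ and death rates nμ:

```python
from math import exp, factorial

lam, mu, n_max = 2.0, 1.0, 60
w = [1.0]
for j in range(1, n_max + 1):
    w.append(w[-1] * lam / (j * mu))   # birth rate lam, death rate j*mu
total = sum(w)
pi = [x / total for x in w]
# pi[k] should match the Poisson form (lam/mu)^k e^{-lam/mu} / k!
```

Truncation at n_max = 60 is harmless here: with mean λ/μ = 2, the neglected tail mass is astronomically small.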

Example 9.14

This example demonstrates one way to simulate the M/M/1 queueing system of Example 9.12. One realization of this process, as produced by the code that follows, is illustrated in Figure 9.4. In generating the figure, we use an average arrival rate of λ = 20 customers per hour and an average service time of 1/μ = 2 minutes. This gives λ < μ, and the M/M/1 queue exhibits stable behavior. The reader is encouraged to run the program for the case λ > μ to observe the unstable behavior (the queue size will tend to grow continuously over time).

Figure 9.4. Simulated realization of the birth/death process for an M/M/1 queueing system of Example 9.12.
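The MATLAB listing is not reproduced in this excerpt. A minimal Python sketch of the same event-driven simulation (our reconstruction; it exploits the fact that, in state x, the time to the next event is exponential with rate λ + μ·1{x > 0}) is:

```python
import random

def mm1_path(lam, mu, T, seed=3):
    """One sample path of the M/M/1 queue length on [0, T].
    Returns event times and queue sizes (a step function)."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    times, sizes = [0.0], [0]
    while True:
        rate = lam + (mu if x > 0 else 0.0)
        t += rng.expovariate(rate)
        if t > T:
            break
        if rng.random() < lam / rate:      # next event is an arrival
            x += 1
        else:                              # next event is a departure
            x -= 1
        times.append(t)
        sizes.append(x)
    return times, sizes

# lam = 20 per hour, mean service 1/mu = 2 min => mu = 30 per hour (stable)
```

Plotting sizes against times as a stairstep reproduces a figure of the same kind as Figure 9.4.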

If the birth–death process is truly modeling the size of a population of some organism, then it would be reasonable to consider the case λ0 = 0. That is, when the population size reaches zero, no further births can occur. In that case, the species is extinct and the state X(t) = 0 is an absorbing state. A fundamental question is then: is extinction a certain event, and if not, what is the probability of the process being absorbed into the state of extinction? Naturally, the answer depends on the starting population size. Let qi be the probability that the process eventually enters the absorbing state, given that it is initially in state i. Note that if the process is currently in state i, after the next transition the birth–death process must be in either state i − 1 or state i + 1. The time to the next birth, Bi, is an exponential random variable with mean 1/λi, while the time to the next death, Di, is an exponential random variable with mean 1/μi. Thus, the process will transition to state i + 1 if Bi < Di; otherwise it will transition to state i − 1. The reader can easily verify that Pr(Bi < Di) = λi/(λi + μi). The absorption probability can then be written as

(9.45) qi = Pr(absorption | in state i) = Pr(absorption, next state is i+1 | in state i) + Pr(absorption, next state is i−1 | in state i) = Pr(absorption | in state i+1) Pr(next state is i+1 | in state i) + Pr(absorption | in state i−1) Pr(next state is i−1 | in state i) = q_{i+1} λi/(λi + μi) + q_{i−1} μi/(λi + μi), i = 1, 2, 3, ….

This provides a recursive set of equations that can be solved to find the absorption probabilities. To solve this set of equations, we rewrite them as

(9.46) q_{i+1} − qi = (μi/λi)(qi − q_{i−1}), i = 1, 2, 3, ….

After applying this recursion repeatedly and using the fact that q 0 = 1,

(9.47) q_{i+1} − qi = (q1 − 1) Π_{j=1}^{i} μj/λj.

Summing this equation over i = 1, 2, …, n results in

(9.48) q_{n+1} − q1 = (q1 − 1) Σ_{i=1}^{n} Π_{j=1}^{i} μj/λj.

Next, suppose that the series on the right-hand side of the previous equation diverges as n → ∞. Since the qi are probabilities, the left-hand side of the equation must be bounded, which implies that q1 = 1. Then from Equation (9.47), it follows that qn must equal 1 for all n. That is, if

(9.49) Σ_{i=1}^{∞} Π_{j=1}^{i} μj/λj = ∞,

then absorption will eventually occur with probability 1 regardless of the starting state. If q1 < 1 (absorption is not certain), then the preceding series must converge to a finite number.

It is expected in that case that qn → 0 as n → ∞. Passing to the limit as n → ∞ in Equation (9.48) then gives a solution for q1 of the form

(9.50) q1 = [Σ_{i=1}^{∞} Π_{j=1}^{i} μj/λj] / [1 + Σ_{i=1}^{∞} Π_{j=1}^{i} μj/λj].

Furthermore, the general solution for the absorption probability is

(9.51) qn = [Σ_{i=n}^{∞} Π_{j=1}^{i} μj/λj] / [1 + Σ_{i=1}^{∞} Π_{j=1}^{i} μj/λj].

Example 9.15

Consider a population model where both the birth and death rates are proportional to the population, λn = nλ, μn = nμ. For this model,

Σ_{i=1}^{∞} Π_{j=1}^{i} μj/λj = Σ_{i=1}^{∞} Π_{j=1}^{i} μ/λ = Σ_{i=1}^{∞} (μ/λ)^i = (μ/λ)/(1 − μ/λ) = μ/(λ − μ), for λ > μ.

Therefore, if λ ≤ μ, the series diverges and the species will eventually reach extinction with probability 1. If λ > μ,

Σ_{i=n}^{∞} Π_{j=1}^{i} μj/λj = Σ_{i=n}^{∞} (μ/λ)^i = (μ/λ)^n / (1 − μ/λ),

and the absorption (extinction) probabilities are

qn = (μ/λ)^n, n = 1, 2, 3, ….
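The closed form can be confirmed against the recursion itself. In this sketch (ours), q1 is taken from (9.50), which for the linear-rate model evaluates to μ/λ, and the remaining values are built up via (9.47).

```python
def extinction_probs(lam, mu, n):
    """q_1, ..., q_n for the linear-rate model lam_k = k*lam, mu_k = k*mu
    with lam > mu, built from the recursion (9.47) with q_0 = 1."""
    if not lam > mu:
        raise ValueError("need lam > mu; otherwise extinction is certain")
    r = mu / lam
    q = [1.0, r]                # q_0 = 1; q_1 = mu/lam from (9.50)
    for i in range(1, n):
        # (9.47): q_{i+1} = q_i + (q_1 - 1) * prod_{j<=i} mu_j/lam_j
        q.append(q[-1] + (q[1] - 1.0) * r ** i)
    return q[1:]
```

For λ = 2, μ = 1 the recursion reproduces qn = (1/2)^n exactly, as the closed form predicts.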

Continuous time Markov processes do not necessarily need to have a discrete amplitude as in the previous examples. In the following, we discuss a class of continuous time, continuous amplitude Markov processes. To start with, it is noted that for any time instants t0 < t1 < t2, the conditional PDF of a Markov process must satisfy the Chapman–Kolmogorov equation

(9.52) f(x2, t2 | x0, t0) = ∫_{−∞}^{∞} f(x2, t2 | x1, t1) f(x1, t1 | x0, t0) dx1.

This is just the continuous-amplitude version of Equation (9.31). Here, we use the notation f(x2, t2 | x1, t1) to represent the conditional probability density of the process X(t2) at the point x2, conditioned on X(t1) = x1. Next, suppose we interpret these time instants as t0 = 0, t1 = t, and t2 = t + Δt. In this case, we interpret x2 − x1 = Δx as the infinitesimal change in the process that occurs during the infinitesimal time interval Δt, and f(x2, t2 | x1, t1) is the PDF of that increment.

Define Φ Δx (ω) to be the characteristic function of Δx = x 2x 1:

(9.53) ΦΔx(ω) = E[e^(jωΔx)] = ∫_{−∞}^{∞} e^(jω(x2 − x1)) f(x2, t + Δt | x1, t) dx2.

We note that the characteristic function can be expressed in a Taylor series as

(9.54) ΦΔx(ω) = Σ_{k=0}^{∞} [Mk(x1, t)/k!] (jω)^k,

where Mk (x 1, t) = E[(x 2 –x 1) k |(x 1, t)] is the kth moment of the increment Δx. Taking inverse transforms of this expression, the conditional PDF can be expressed as

(9.55) f(x2, t + Δt | x1, t) = Σ_{k=0}^{∞} [Mk(x1, t)/k!] (−1)^k (∂^k/∂x2^k) δ(x2 − x1).

Inserting this result into the Chapman–Kolmogorov equation, Equation (9.52), results in

(9.56) f(x2, t + Δt | x0, t0) = Σ_{k=0}^{∞} [(−1)^k/k!] ∫_{−∞}^{∞} Mk(x1, t) (∂^k/∂x2^k) δ(x2 − x1) f(x1, t | x0, t0) dx1 = Σ_{k=0}^{∞} [(−1)^k/k!] (∂^k/∂x2^k)[Mk(x2, t) f(x2, t | x0, t0)] = f(x2, t | x0, t0) + Σ_{k=1}^{∞} [(−1)^k/k!] (∂^k/∂x2^k)[Mk(x2, t) f(x2, t | x0, t0)].

Subtracting f(x 2, t|x 0, t 0) from both sides of this equation and dividing by Δt results in

(9.57) [f(x2, t + Δt | x0, t0) − f(x2, t | x0, t0)]/Δt = Σ_{k=1}^{∞} [(−1)^k/k!] (∂^k/∂x2^k)[(Mk(x2, t)/Δt) f(x2, t | x0, t0)].

Finally, passing to the limit as Δt → 0 results in the partial differential equation

(9.58) (∂/∂t) f(x, t | x0, t0) = Σ_{k=1}^{∞} [(−1)^k/k!] (∂^k/∂x^k)[Kk(x, t) f(x, t | x0, t0)],

where the function Kk (x, t) is defined as

(9.59) K k ( x , t ) = lim Δ t 0 E [ ( X ( t + Δ t ) - X ( t ) ) k | X ( t ) ] Δ t .

For many processes of interest, the PDF of an infinitesimal increment can be accurately approximated from its first few moments and hence we take Kk (x, t) = 0 for k > 2. For such processes, the PDF must satisfy

(9.60) (∂/∂t) f(x, t | x0, t0) = −(∂/∂x)(K1(x, t) f(x, t | x0, t0)) + (1/2)(∂²/∂x²)(K2(x, t) f(x, t | x0, t0)).

This is known as the (one-dimensional) Fokker–Planck equation and is used extensively in diffusion theory to model the dispersion of fumes, smoke, and similar phenomena.

In general, the Fokker–Planck equation is notoriously difficult to solve and doing such is well beyond the scope of this text. Instead, we consider a simple special case where the functions K 1(x, t) and K 2 (x, t) are constants, in which case the Fokker–Planck equation reduces to

(9.61) (∂/∂t) f(x, t | x0, t0) = −2c (∂/∂x) f(x, t | x0, t0) + D (∂²/∂x²) f(x, t | x0, t0),

where in diffusion theory, D is known as the coefficient of diffusion and c is the drift. This equation is used in models that involve the diffusion of smoke or other pollutants in the atmosphere, the diffusion of electrons in a conductive medium, the diffusion of liquid pollutants in water and soil, and the diffusion of plasmas. This equation can be solved in several ways. Perhaps one of the easiest methods is to use Fourier transforms. This is explored further in the exercises, where the reader is asked to show that (taking x0 = 0 and t0 = 0) the solution to this diffusion equation is

(9.62) f(x, t | x0 = 0, t0 = 0) = (1/√(4πDt)) exp(−(x − 2ct)²/(4Dt)).

That is, the PDF is Gaussian with a mean and a variance that change linearly with time. For the case c = 0, this is the Wiener process discussed in Section 8.5. The behavior of this process is explored in the next example.

Example 9.16

In this example, we model the diffusion of smoke from a forest fire that starts in a national park at time t = 0 and location x = 0. The smoke drifts in the positive x direction due to wind blowing at 10 miles per hour, and the diffusion coefficient is 1 square mile per hour. The probability density function is given in Equation (9.62). We provide a three-dimensional rendition of this function in Figure 9.5 using the following MATLAB program.

Figure 9.5. Observations of the PDF at different time instants showing the drift and dispersion of smoke for Example 9.16.
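The MATLAB program is not reproduced in this excerpt. A Python sketch (ours) that evaluates (9.62) on a grid for several time instants follows. Mapping the example's numbers onto the equation is an assumption on our part: we take the mean 2ct to advance at 10 miles per hour (so c = 5) and set D = 1 square mile per hour, so the variance is 2t.

```python
from math import exp, pi, sqrt

def f(x, t, c=5.0, D=1.0):
    """Equation (9.62): Gaussian with mean 2ct and variance 2Dt.
    c = 5 is our reading of the example (mean drift 2c = 10 mph);
    D = 1 square mile per hour."""
    return exp(-(x - 2.0 * c * t) ** 2 / (4.0 * D * t)) / sqrt(4.0 * pi * D * t)

# snapshots of the plume at a few times, as in Figure 9.5
snapshots = {t: [f(-10.0 + 0.05 * i, t) for i in range(1400)]
             for t in (0.5, 1.0, 2.0)}
```

Each snapshot integrates to 1 and is centered at 10t miles downwind, which is the qualitative content of Figure 9.5.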

URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500126

Queueing Theory

Sheldon M. Ross , in Introduction to Probability Models (Tenth Edition), 2010

Proposition 8.1

In any system in which customers arrive and depart one at a time

the rate at which arrivals find n = the rate at which departures leave n

and

a n = d n

Proof.

An arrival will see n in the system whenever the number in the system goes from n to n + 1; similarly, a departure will leave behind n whenever the number in the system goes from n + 1 to n. Now, in any interval of time T, the number of transitions from n to n + 1 must equal, to within 1, the number from n + 1 to n. (Between any two transitions from n to n + 1, there must be one from n + 1 to n, and conversely.) Hence, the rate of transitions from n to n + 1 equals the rate from n + 1 to n; or, equivalently, the rate at which arrivals find n equals the rate at which departures leave n. Now an, the proportion of arrivals finding n, can be expressed as

an = (the rate at which arrivals find n) / (overall arrival rate)

Similarly,

dn = (the rate at which departures leave n) / (overall departure rate)

Thus, if the overall arrival rate is equal to the overall departure rate, then the preceding shows that an = dn . On the other hand, if the overall arrival rate exceeds the overall departure rate, then the queue size will go to infinity, implying that an = dn = 0.

Hence, on the average, arrivals and departures always see the same number of customers. However, as Example 8.1 illustrates, they do not, in general, see time averages. One important exception where they do is in the case of Poisson arrivals.

Proposition 8.2

Poisson arrivals always see time averages. In particular, for Poisson arrivals,

P n = a n

To understand why Poisson arrivals always see time averages, consider an arbitrary Poisson arrival. If we knew that it arrived at time t, then the conditional distribution of what it sees upon arrival is the same as the unconditional distribution of the system state at time t. For knowing that an arrival occurs at time t gives us no information about what occurred prior to t. (Since the Poisson process has independent increments, knowing that an event occurred at time t does not affect the distribution of what occurred prior to t.) Hence, an arrival would just see the system according to the limiting probabilities.

Contrast the foregoing with the situation of Example 8.1 where knowing that an arrival occurred at time t tells us a great deal about the past; in particular it tells us that there have been no arrivals in (t − 1, t). Thus, in this case, we cannot conclude that the distribution of what an arrival at time t observes is the same as the distribution of the system state at time t.

For a second argument as to why Poisson arrivals see time averages, note that the total time the system is in state n by time T is (roughly) PnT. Hence, as Poisson arrivals always arrive at rate λ no matter what the system state, it follows that the number of arrivals in [0, T] that find the system in state n is (roughly) λPnT. In the long run, therefore, the rate at which arrivals find the system in state n is λPn and, as λ is the overall arrival rate, it follows that λPn / λ = Pn is the proportion of arrivals that find the system in state n.

The result that Poisson arrivals see time averages is called the PASTA principle.
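A simulation sketch (ours) of the PASTA principle for the M/M/1 queue: the fraction of Poisson arrivals that find the system in state n is compared with the long-run fraction of time the system spends in state n.

```python
import random

def pasta_check(lam, mu, T, seed=11):
    """From one M/M/1 sample path on [0, T], return the time-average state
    probabilities and the distribution of the state seen by arrivals."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    time_in, seen = {}, {}
    n_arrivals = 0
    while t < T:
        rate = lam + (mu if x > 0 else 0.0)
        dt = rng.expovariate(rate)
        time_in[x] = time_in.get(x, 0.0) + min(dt, T - t)
        t += dt
        if t >= T:
            break
        if rng.random() < lam / rate:      # Poisson arrival: record state found
            seen[x] = seen.get(x, 0) + 1
            n_arrivals += 1
            x += 1
        else:                              # departure
            x -= 1
    P = {n: v / T for n, v in time_in.items()}
    a = {n: v / n_arrivals for n, v in seen.items()}
    return P, a
```

With λ = 0.5 and μ = 1, both distributions are close to the geometric limit (P0 ≈ 0.5), and to each other, for long horizons.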

URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000029

Parallel Computational Geometry: An Approach using Randomization

John H. Reif , Sandeep Sen , in Handbook of Computational Geometry, 2000

6.3 Overview of sorting and routing on fixed-connection networks

The algorithms use sorting and routing extensively at various stages, and a brief review of these routines will help in understanding the algorithms built on them. The problem of packet routing involves routing a message from processor i to Π(i), where Π is a permutation function. There is a long and rich history of routing algorithms for fixed-connection networks (see [86, 59, 74, 54]), which can be summarized as follows.

Lemma 6.4

There exists an algorithm for permutation routing on an n-node butterfly network that executes in Õ(log n) steps and uses only constant-size queues to achieve this running time.

A more general result has been proved by Maggs et al. [59] for layered networks. A layered network is one whose nodes can be assigned layer numbers such that every edge connects a layer-i node to a layer-(i + 1) node (the butterfly is an example of such a network). Let d denote the maximum distance traveled by any packet and c the largest number of packets that must traverse a single edge (c is also called the congestion of the network). These parameters are fixed for a given selection of paths by all the packets to be routed. Then there exists a scheme for scheduling the movements of the packets such that, with high probability, the routing can be completed in O(c + d + log n) steps, where n is the size of the network and O(n) packets are being routed.

Remark 6.5

Given the above result, and the fact that d = O(log n) for most path-selection strategies (especially in a butterfly network), it remains to bound the value of c to get a bound on the routing time. For packets being routed to random locations, c can be bounded by O(log n) with high probability.

The first optimal Õ(log n) time sorting algorithm for the butterfly network, called Flashsort, was due to Reif and Valiant [78]. It was based on a PRAM sorting algorithm due to Reischuk [79] but required several additional techniques because of the constraints imposed by the network connectivity. A slightly simplified version can be presented as follows.

(1)

Select a random subset of size n^ε, ε < 1/2, from the given set of n keys.

(2)

Sort these, using a simple method like doing all the pairwise comparisons and ranking them.

(3)

Use these keys to set up a binary tree such that the leaves of the tree correspond to the intervals defined by pairs of consecutive splitter keys. Over-sampling techniques are used to ensure that these intervals partition the remaining keys into roughly equal-sized subsets; this eliminates the need for dynamic load balancing in the special case of sorting. The keys are assumed to be in random locations initially. For each subset, a sub-network of appropriate size is set aside, and the keys that belong to this subset are routed to this part of the network. This is done using a procedure called Splitter Directed Routing, referred to as SDR in what follows. Since this is a very useful operation, we describe it in more detail below.

(4)

These steps are applied recursively until the size of the subproblems is no more than log2 n.

Although the original analysis showed that Õ(log n) buffer size may be required, the more recent results on routing enable one to manage with a constant amount of storage in each buffer ([59]).
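The splitter idea in steps (1) through (3) can be illustrated sequentially. The sketch below is our own caricature in Python; it ignores the network and routing aspects entirely and keeps only the sampling and partitioning logic, and the function name and recursion cutoff are our own choices.

```python
import random
from bisect import bisect_left

def sample_sort(keys, eps=0.4, cutoff=32, rng=None):
    """Sequential caricature of Flashsort's splitter scheme: sort a random
    sample of size ~n**eps, use its elements as splitters, bucket the keys
    by binary search, and recurse on each bucket."""
    rng = rng or random.Random(0)
    n = len(keys)
    if n <= cutoff:
        return sorted(keys)
    s = max(1, int(n ** eps))
    splitters = sorted(rng.sample(keys, s))
    buckets = [[] for _ in range(s + 1)]
    for k in keys:
        # bucket index = number of splitters strictly less than k
        buckets[bisect_left(splitters, k)].append(k)
    if max(len(b) for b in buckets) == n:   # degenerate case (e.g. all equal)
        return sorted(keys)
    out = []
    for b in buckets:
        out.extend(sample_sort(b, eps, cutoff, rng))
    return out
```

In the actual algorithm the buckets are sub-networks and the partitioning is performed in-network by SDR; random sampling makes the buckets roughly balanced with high probability, which is what the over-sampling analysis quantifies.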

6.3.1 Splitter directed routing

Let X be the set of cN keys, totally ordered by the relation <, and let V be the set of nodes in the network. Suppose that for some l (1 ≤ l ≤ n) we are given a set of splitters Σ ⊂ X of size |Σ| = 2^l − 1. We index each splitter σ[w] ∈ Σ by a distinct binary string w ∈ {0, 1}* of length less than l. Let ≼ denote the ordering defined as follows: for u, v, w ∈ {0, 1}*, w0u ≼ w ≼ w1v. We require that for all w1, w2 ∈ {0, 1}*, σ[w1] < σ[w2] if and only if w1 ≼ w2. We assume that a copy of each splitter σ[w] is available in each node of V[w], the set of nodes at rank |w| with addresses prefixed by w (same as in Reif and Valiant [78]).

Let X[λ] = X, where λ is the empty string. Initially we assume that the keys of X[λ] are located in V[λ], that is, the nodes of V at stage 0. The splitter directed routing is executed in l temporally overlapping stages i = 0, 1, …, l − 1. For each w ∈ {0, 1}^i the set of keys X[w] that are eventually routed through V[w] is defined recursively. The splitter σ[w] partitions X[w] − {σ[w]} into the disjoint subsets

X[w0] = {x ∈ X[w] | x < σ[w]}  and  X[w1] = {x ∈ X[w] | x > σ[w]},

which are subsequently routed through V[w0] and V[w1] respectively.

In our case, we assume that after each recursive call, the sub-networks (of varying sizes corresponding to different subroutine calls) are relabeled as if these were isolated networks. The V[w] are then defined accordingly. The time analysis for this procedure is carried out using a delay-sequence argument [54] and it can be shown that this takes Õ(log n) time in a BFn .
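A sequential sketch of the SDR idea follows (an illustration only; it ignores the butterfly topology, the pipelining of stages, and the buffering that the actual procedure must handle). The helper names are hypothetical. The splitter tree is built so that its in-order traversal respects the ordering ≺, and each key is walked down the tree by comparison with σ[w]:

```python
def build_splitter_tree(splitters):
    """Index sorted splitters by binary strings w so that sigma[w1] < sigma[w2]
    iff w1 precedes w2 in the in-order traversal ordering."""
    tree = {}
    def assign(lo, hi, w):
        if lo >= hi:
            return
        mid = (lo + hi) // 2
        tree[w] = splitters[mid]      # splitter sigma[w] held at "node" w
        assign(lo, mid, w + "0")      # smaller splitters go under w0
        assign(mid + 1, hi, w + "1")  # larger splitters go under w1
    assign(0, len(splitters), "")
    return tree

def sdr(keys, tree):
    """Route each key down the splitter tree: at node w, keys below sigma[w]
    continue to w0 and keys above it to w1; a key equal to sigma[w] stops."""
    leaves = {}
    for x in keys:
        w = ""
        while w in tree and x != tree[w]:
            w += "0" if x < tree[w] else "1"
        leaves.setdefault(w, []).append(x)
    return leaves
```

In the network setting the comparison at "node" w is performed by the physical nodes of V[w], and the keys of X[w0] and X[w1] move on to the sub-butterflies addressed by w0 and w1.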

We would need a generalization of the result of Theorem 2.7 where the process tree is modified in the following manner. Instead of all the sub-routines from a node proceeding independently, all the subroutines for a fixed (constant) depth subtree are required to finish before proceeding to the next level (of subtrees). This can be reduced to the previous case if we contract a subtree of fixed depth (see Figure 6) into a single node of the tree. By appropriate adjustment of constants, we can prove the following result along the lines of Theorem 2.7.

Corollary 6.6.

All the leaf level procedures of this modified tree will terminate in Õ(log n) steps.

Fig. 6. By contracting subtrees of a fixed depth, we get another tree satisfying the preconditions of the lemma.

URL: https://www.sciencedirect.com/science/article/pii/B978044482537750019X

Miscellaneous Topics

J. MEDHI , in Stochastic Models in Queueing Theory (Second Edition), 2003

8.3.3 Poisson input queue with vacations: exhaustive-service queue-length distribution

We shall now discuss the M/G/1 queue with single and multiple vacations under the exhaustive-service discipline. Assume that the vacation sequence {vn} is stationary and that the system is in steady state.

Let N* be the number of customers present at the start of a busy period following a vacation or vacation period. Clearly, N* ≥ 1; N* can be deterministic or a random variable.

First, consider that N* is an RV having PGF

R(z) = Σ_{n=1}^∞ Pr{N* = n} z^n,  |z| < 1.

Let P(z) be the PGF of the number in the system at a departure epoch of a usual M/G/1 queue without vacation. Note that for a Poisson input queue the distributions of the number in the system at a random epoch, at an arrival epoch, and at a departure epoch are one and the same (PASTA). P(z) is given by the Pollaczek–Khinchin formula. Let Q(z) be the PGF of the number in the system at a departure epoch of an M/G/1 queue with vacations.

Let V(z) be the PGF of the number in the system at a random point in time when the server is on vacation. For a Poisson input queue, the basic decomposition result is

(8.3.1) Q(z) = P(z) V(z).

(Fuhrmann and Cooper, 1985; Ali and Neuts, 1984)

The basic decomposition result shows that the number of customers at a departure epoch of a Poisson input queue with vacations is the sum of two random variables: (i) the number of customers at a departure epoch at the corresponding Poisson input queue without vacation and (ii) the number of customers at a random point of time given that the server is on vacation.

While variable (i) is vacation-independent, variable (ii) is vacation-related. We now state the important decomposition result (without proof) and consider some special cases. For a proof, refer to Fuhrmann (1984) and Doshi (1986).

Theorem 8.6

For an M/G/1 queue with server vacations,

(8.3.2) Q(z) = P(z) · (1 − R(z)) / ((1 − z) E(N*)).

We consider some special cases.

(A) N* is deterministic

(i)

If Pr{N* = 1} = 1, we get the usual queue, with the vacation period corresponding to the idle period of the system. Then Q(z) = P(z).

(ii)

N* is a fixed number, say N, that is, Pr{N* = N} = 1. This corresponds to the case when the server is on vacation (or remains busy with other work or secondary customers) until the (primary) queue size builds up to a preassigned fixed number N, known as the N-policy. This policy was first considered by Heyman (1968), who showed that a system operating under it possesses some optimal properties.

From (8.3.2) we get

(8.3.3) Q(z) = P(z) · (1 − z^N) / (N(1 − z)).
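The factor multiplying P(z) in (8.3.3) can be recognized as the PGF of a discrete uniform distribution on {0, 1, …, N − 1}, since 1 + z + ⋯ + z^{N−1} = (1 − z^N)/(1 − z). A quick numerical check (the values of N and z below are illustrative):

```python
# The vacation factor in (8.3.3) equals the PGF of a discrete uniform
# distribution on {0, 1, ..., N-1}.
N = 5
for z in [0.1, 0.3, 0.7, 0.95]:
    factor = (1.0 - z**N) / (N * (1.0 - z))
    uniform_pgf = sum(z**k for k in range(N)) / N
    assert abs(factor - uniform_pgf) < 1e-12
```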

In the preceding two cases under (A), the length of the server vacation depends on the arrival process during, but not after, the vacation. Under (B) (given below), the length of the server vacation is independent of the arrival process.

(B) N* is an RV

(a) M/G/1 − Vm system

Let Av be the number of arrivals during a typical vacation period v. Then the PGF α(z) of Av is given by

(8.3.4) α(z) = Σ_{n=0}^∞ Pr{Av = n} z^n = f̄v[λ(1 − z)].

We have

(8.3.5) Pr{Av = 0} = f̄v(λ),

so that

(8.3.6) Pr{Av ≥ 1} = 1 − f̄v(λ).

Now the event N* = n is the event that the number of arrivals during the last vacation period equals n, given that this number is at least 1—that is,

(8.3.7) Pr{N* = n} = Pr{Av = n | Av ≥ 1} = Pr{Av = n} / (1 − f̄v(λ)),  n = 1, 2, ….

Thus,

(8.3.8) R(z) = Σ_{n=1}^∞ Pr{N* = n} z^n = (f̄v[λ(1 − z)] − f̄v(λ)) / (1 − f̄v(λ)).

We have

E(N*) = R′(1) = −λ f̄v′(0) / (1 − f̄v(λ)) = λE(v) / (1 − f̄v(λ)).

Substituting in (8.3.2), we get

(8.3.9) Q(z) = P(z) · (1 − f̄v[λ(1 − z)]) / (λE(v)(1 − z)).

Remark 1.

The second factor has an interesting interpretation. Let Z(t) be the forward recurrence time (residual lifetime) of the vacation random variable at time t. Then the limiting distribution of Z(t) as t → ∞, denoted Z, is given by

(8.3.10) F_Z(x) = Pr{Z ≤ x} = (1/E(v)) ∫_0^x [1 − Fv(y)] dy,

where Fv(y) = Pr{v ≤ y}. (See Eq. (1.7.9) in Ch. 1.) Let bn be the probability that n arrivals occur during Z and let

β(z) = Σ_{n=0}^∞ bn z^n

be the PGF of the number of arrivals during Z. Then

(8.3.11) β(z) = Σ_{n=0}^∞ z^n ∫_0^∞ e^{−λt} (λt)^n / n! dF_Z(t)
= ∫_0^∞ {Σ_{n=0}^∞ e^{−λt} (λtz)^n / n!} [1 − Fv(t)] / E(v) dt
= ∫_0^∞ e^{−λt(1 − z)} [1 − Fv(t)] / E(v) dt
= (1 − f̄v[λ(1 − z)]) / (λE(v)(1 − z)),

which is equal to the second factor on the RHS of (8.3.9). Thus, while the first factor is the PGF of the number at departure epoch in the standard M/G/1 queue without vacation, the second factor is the PGF of the number of arrivals during the limiting forward recurrence time of the vacation period (residual vacation period).
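This interpretation can be checked concretely for exponential vacations (an illustrative sketch; the rates θ and λ below are arbitrary). With f̄v(s) = θ/(θ + s) the residual vacation is again exponential(θ), so the number of Poisson(λ) arrivals during it is geometric with PGF θ/(θ + λ(1 − z)), and the second factor of (8.3.9) reduces to exactly that:

```python
theta, lam = 2.0, 3.0        # illustrative vacation rate and arrival rate

def lst_exp(s):
    # LST of an exponential(theta) vacation: f_v(s) = theta / (theta + s)
    return theta / (theta + s)

E_v = 1.0 / theta            # mean vacation length

for z in [0.0, 0.25, 0.5, 0.9]:
    # Second factor of (8.3.9)
    factor = (1.0 - lst_exp(lam * (1.0 - z))) / (lam * E_v * (1.0 - z))
    # PGF of the number of arrivals during the (exponential) residual vacation
    geometric_pgf = theta / (theta + lam * (1.0 - z))
    assert abs(factor - geometric_pgf) < 1e-12
```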

Remark 2.

We have α′(1) = E(Av) = λE(v), so that the second factor on the RHS of (8.3.9) can be written as

(1 − α(z)) / ((1 − z) α′(1)).

Thus, for M/G/1 − Vm , (8.3.2) can be put as

(8.3.12) Q(z) = P(z) · (1 − α(z)) / ((1 − z) α′(1)),

α(z) being the PGF of the number of arrivals during the vacation.

Note:

The factor

(1 − α(z)) / ((1 − z) α′(1))

is the PGF of the distribution Pr{Av > k}/E(Av), k = 0, 1, 2, ….

This is also the PGF of the number of units that arrive during an interval from the commencement of a vacation period to a random point in the vacation period.
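The identity in the Note holds for any arrival-count distribution, since (1 − α(z))/(1 − z) = Σ_k Pr{Av > k} z^k. A quick numerical check with an arbitrary (illustrative) distribution for Av:

```python
# An arbitrary illustrative distribution for A_v (arrivals during a
# vacation): Pr{A_v = n} for n = 0, 1, 2, 3.
p = [0.2, 0.3, 0.4, 0.1]
mean = sum(n * pn for n, pn in enumerate(p))      # alpha'(1) = E(A_v)
tail = [sum(p[k + 1:]) for k in range(len(p))]    # Pr{A_v > k}

for z in [0.2, 0.5, 0.8]:
    alpha = sum(pn * z**n for n, pn in enumerate(p))
    factor = (1.0 - alpha) / ((1.0 - z) * mean)           # Note's factor
    pgf_of_tail = sum(t * z**k for k, t in enumerate(tail)) / mean
    assert abs(factor - pgf_of_tail) < 1e-12
```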

Remark 3.

In an M/G/1 − Vm queue, the (server) idle period I has mean

E(I) = E(v) / [1 − f̄v(λ)].
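This follows from Wald's identity: under multiple vacations the idle period is the sum of i.i.d. vacations up to and including the first one containing at least one arrival, and the number of such vacations has mean 1/[1 − f̄v(λ)]. A Monte Carlo sketch with exponential vacations (the rates below are illustrative):

```python
import math
import random

random.seed(1)
lam, theta = 1.5, 2.0                # illustrative arrival and vacation rates
E_v = 1.0 / theta
lst_at_lam = theta / (theta + lam)   # f_v(lambda) for an exponential vacation

def idle_period():
    """Sum vacations until one contains at least one Poisson(lam) arrival."""
    total = 0.0
    while True:
        v = random.expovariate(theta)
        total += v
        # Pr{at least one arrival during a vacation of length v} = 1 - e^{-lam v}
        if random.random() < 1.0 - math.exp(-lam * v):
            return total

n = 100_000
est = sum(idle_period() for _ in range(n)) / n
exact = E_v / (1.0 - lst_at_lam)     # Remark 3: E(I) = E(v) / [1 - f_v(lam)]
assert abs(est - exact) / exact < 0.05
```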

Remark 4.

For the M^X/G/1 − Vm system, Q(z) becomes

Q(z) = V(z) · (1 − f̄v[λ(1 − A(z))]) / (λE(v)[1 − A(z)]),

where A(z) is the PGF of the batch size X, and

V(z) = (1 − ρ)(1 − z) B*(λ − λA(z)) / (B*(λ − λA(z)) − z).

(b) M/G/1 − Vs model

Here there is only one vacation, and there may be no arrival, one arrival, or more than one arrival during the server-vacation period. If there is no arrival, the server waits for an arrival to occur, and then N* = 1. If there are arrivals during the vacation, then N* equals the number of arrivals during the vacation. Thus,

Pr{N* = 1} = Pr{Av = 0} + Pr{Av = 1},
Pr{N* = n} = Pr{Av = n},  n = 2, 3, ….

Thus, using (8.3.4) and (8.3.5), we get

(8.3.13) R(z) = Σ_{n=1}^∞ Pr{N* = n} z^n
= Pr{Av = 0} z + Σ_{n=1}^∞ Pr{Av = n} z^n
= z f̄v(λ) + f̄v[λ(1 − z)] − f̄v(λ)
= f̄v[λ(1 − z)] − (1 − z) f̄v(λ),

and

(8.3.14) E(N*) = R′(1) = −λ f̄v′(0) + f̄v(λ) = λE(v) + f̄v(λ).

Substitution in (8.3.2) gives

(8.3.15) Q(z) = P(z) · (1 − f̄v[λ(1 − z)] + (1 − z) f̄v(λ)) / ((1 − z)[λE(v) + f̄v(λ)]).
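Equations (8.3.13)–(8.3.14) can be sanity-checked numerically for exponential vacations (the rates below are illustrative): R(1) must equal 1, and a finite-difference derivative of R at z = 1 must match λE(v) + f̄v(λ):

```python
lam, theta = 1.0, 2.0            # illustrative arrival rate and vacation rate

def lst(s):
    # LST of an exponential(theta) vacation: f_v(s) = theta / (theta + s)
    return theta / (theta + s)

E_v = 1.0 / theta

def R(z):
    # Eq. (8.3.13): R(z) = f_v[lam(1 - z)] - (1 - z) f_v(lam)
    return lst(lam * (1.0 - z)) - (1.0 - z) * lst(lam)

assert abs(R(1.0) - 1.0) < 1e-12           # R is a proper PGF at z = 1

h = 1e-6
numeric_mean = (R(1.0) - R(1.0 - h)) / h   # backward difference for R'(1)
exact_mean = lam * E_v + lst(lam)          # Eq. (8.3.14)
assert abs(numeric_mean - exact_mean) < 1e-4
```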

URL: https://www.sciencedirect.com/science/article/pii/B9780124874626500084

DEVS Markov Model Lumping

Bernard P. Zeigler , ... Ernesto Kofman , in Theory of Modeling and Simulation (Third Edition), 2019

22.7.3 Lumped Model of Multiprocessor

It turns out that an informative path to development of a lumped model follows from partitioning the states of the base model into two regimes as illustrated in Fig. 22.10. In the Normal regime, the communication medium is fast enough (i.e., CommServTime << CompTime) to keep the queue almost empty and almost all processors active. In this case CommTime is close to CommServTime and speed up is close to N.

Figure 22.10

Figure 22.10. Multi-regime representation derived from lumped model.

In the Congested regime, communication is slow, the queue fills up, and only a finite number of processors, N_crit, can be serviced at any time. We work out N_crit in a moment. Since only a finite number of processors are active at any time, a processor must contend with N − N_crit others that are also waiting, so that CommTime = (N − N_crit)·CommServTime increases with N and the relative speedup goes to zero as N increases. The switching point, N_crit, occurs when the arrival rate of requests to the queue just exceeds the service rate, which is where standard queueing theory (itself Markov-based) predicts that the queue size grows without bound.

The components of the lumped model are Markov Matrix models of the processors and of the communication network (Fig. 22.11). The lumped models of the processors each have a probability of being active, P_active, computed in steady state. The major assumption in constructing the lumped model is that in steady state, the number of active processors is:

Figure 22.11

Figure 22.11. Grouping of components in the lumped model.

N_active = N · P_active

To justify this assertion we require 1) uniformity of structure, i.e., all processors have the same structure, and 2) that the transitions of the components are sufficiently mixed that there are no permanent deviations from the current probability value of any one component (see Chapter 16).

Now each active processor outputs a service request, so that the arrival rate to the CommNet component is N_active · CompRate.

Now we assume uniformity of distribution of the CommNet output to the processors. That is, there is no priority for processors in the underlying queueing discipline (but see the Appendix where priorities are considered).

In the Normal regime, we assume all processors are active, and we will show that this is a consistent solution under the condition that CommServTime << CompTime.

Let P_active ≈ 1; then the arrival rate is N · CompRate. By uniformity of output, each processor expecting service gets it, with the waiting time CommTime being CommServTime since there are no others waiting. So

P_active = CompTime / (CompTime + CommTime) = CompTime / (CompTime + CommServTime),

and since CommServTime << CompTime, P_active ≈ 1, confirming our assumptions.

The transition from the Normal regime to the Congested regime occurs as indicated above when the arrival rate of requests to the queue just exceeds the service rate. This happens where

N · CompRate ≈ CommServRate.

In terms of times,

N / CompTime ≈ 1 / CommServTime, so N · CommServTime ≈ CompTime, i.e., N_crit = CompTime / CommServTime.

In the Congested regime, we assume that N c r i t processors are active at any time. In this case, we can show that the probability of the active state goes to zero:

P_active → 0 as N → ∞.

Indeed, after going to the wait state, an active processor must wait for the other N − N_crit processors to be served, and its CommTime is (N − N_crit)·CommServTime. Thus

P_active = CompTime / (CompTime + CommTime) = CompTime / (CompTime + (N − N_crit)·CommServTime) → 0 as N increases.

Recall that relative speedup is given by the probability of a processor being in the active state in steady state. The above analysis predicts that as the number of processors increases from one to the critical value, speedup increases linearly (a statement of Amdahl's law (Zeigler et al., 2015)). Beyond the critical value, the relative speedup decreases to zero. Note that realistically the computation cycle speed (1/CompTime) has to be much smaller than the network bandwidth (1/CommServTime), and therefore the critical point for network saturation (N_crit) should be very large. Simulation results described in Zeigler and Sarjoughian (2017), Chapter 19, verify these predictions and give information about the region around the saturation value.
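The two-regime analysis above can be condensed into a few lines of code. This is a sketch with illustrative parameter values (not the book's simulation), reproducing only the qualitative predictions:

```python
COMP_TIME = 100.0       # CompTime (illustrative)
COMM_SERV_TIME = 1.0    # CommServTime << CompTime
N_CRIT = COMP_TIME / COMM_SERV_TIME   # N_crit = CompTime / CommServTime

def p_active(n):
    """Steady-state probability that a processor is active (relative speedup)."""
    if n <= N_CRIT:
        comm_time = COMM_SERV_TIME                 # Normal regime
    else:
        comm_time = (n - N_CRIT) * COMM_SERV_TIME  # Congested regime
    return COMP_TIME / (COMP_TIME + comm_time)

def speedup(n):
    # Lumping assumption: N_active = N * P_active
    return n * p_active(n)

# Below N_crit the speedup is nearly linear in n; beyond it the absolute
# speedup saturates while the relative speedup p_active falls toward zero.
```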

URL: https://www.sciencedirect.com/science/article/pii/B978012813370500033X

A review on simulation models applied to emergency medical service operations

L. Aboueljinane , ... Z. Jemai , in Computers & Industrial Engineering, 2013

5.7 Scenarios regarding sensitivity analysis

Besides the different alternatives presented so far, several authors have performed sensitivity analysis on certain input factors to measure the resulting EMS system performance. One examined factor is the increase in demand, which can result from several causes discussed in (Silva & Pinto, 2010), such as population growth, better access to EMS, or the enlargement of the scope of the EMS system. Lubicz and Mielczarek (1987) described a simulation model of a rural EMS system in SW Poland and assumed a projected constant annual increase in demand of 5%. This model predicted that the current number of vehicles would satisfy the desired service level of 95% of calls responded to within 30 min for only five years, beyond which the fleet size would have to be increased. Silva and Pinto (2010) studied the case of the EMS of Belo Horizonte in Brazil using a simulation model implemented in ARENA software to evaluate the effect of a 10–100% increase in demand on average response time and queue size. The authors found that a 30% increase in demand resulted in a congested system, with an increase of average response time from 21.2 min to 38.4 min and an average queue size of 4.09, which is not acceptable for the studied EMS system.

Another sensitivity factor studied in the literature relates to the travel speeds of emergency vehicles. Aringhieri et al. (2007) used an agent-based simulation model of the Milano EMS system (Italy) to apply operational changes in travel speeds under different deployment strategies. The suggested scenarios tested both an increase and a decrease of the average speed, corresponding respectively to the use of dedicated lanes for emergency vehicles and to increased traffic congestion. Similarly, Liu and Lee (1988) performed a sensitivity analysis on travel speeds applied to the case of the EMS system in Taipei (Taiwan) using a simulation model in which hospital emergency department resources (sick beds) are considered. The authors found the speed change to have little effect on system performance (round-trip time, service time, sick bed and vehicle utilization rates).

Finally, other sensitivity analysis scenarios related to improving the level of personal preparedness, the efficiency of the EMS system process, and population awareness were proposed in the EMS simulation literature. Iskander (1989) suggested a scenario of a 25% reduction in the rate of emergency calls that could be achieved with the help of safety regulations and awareness programs, as well as a scenario of a 50% reduction in dispatching time and a 25% reduction in the time spent at the scene due to the use of more professionals with proper training rather than volunteers. These scenarios were applied to the EMS system of a rural region in the state of West Virginia and yielded substantial improvements in the system performance measures (average waiting time, response time, and round-trip time). Su and Shih (2002) discussed scenarios of reducing idle errands, i.e., calls not resulting in the dispatch of a rescue team or the completion of a rescue because of either a fraudulent or prank call, an insufficient description of the call location given to the dispatcher, or having no vehicle available to answer the call. The authors pointed out that reducing these time-consuming situations could be achieved through personnel training and public education. A sensitivity analysis was performed to assess the effect of reducing the idle errand rate on the rescue team utilization rate, response time, and round-trip time performance of the Taipei EMS system (Taiwan).

URL: https://www.sciencedirect.com/science/article/pii/S0360835213003100