1.2 Queues in communication systems
Queueing models arise in communication systems because they represent contention for resources. Performance modelling of communication systems has been carried out for many years with a view to assisting optimization and guiding the design of new-generation systems. In a communication system, messages or packets are transmitted through links from a source node to a destination node. In a queueing context, the messages are referred to as customers, the channels as servers, the message transmission times through a channel as service times, and the number of links from the source node to the destination node as the number of servers. Many researchers have emphasized the analysis of queueing models with applications in communication systems, including Boxma and Syski [31], Daigle [46], Gebali [66], Koole [96] and Trivedi [157], among others.
Performance analysis of a communication network deals with the evaluation of the level of efficiency the network achieves and the level of (dis)satisfaction of its users. A key element is the characterization of the impact of ‘user parameters’ on the performance offered by the network. Performance analysis is a probabilistic discipline, as the main underlying assumption is that user behaviour is inherently random. Therefore, such analysis is based on a queueing model which defines the probabilistic properties of the traffic offered to the network. Performance measures in communication networks include the tuning range, processing requirements, propagation delay relative to the packet transmission time, waiting time before packet transmission and channel allocation [48, 116]. Uses of queueing theory in the performance analysis of ATM networks [98], the performance analysis of telephone systems [174], the queueing analysis of IEEE 802.11 MAC based wireless networks [34, 155] and many other settings can be found in the literature.
For high-speed networks, data traffic is seldom uniform and is characterized by periods of burstiness. Traffic bursts tax the network resources and lead to network congestion and data loss. Burstiness in sources such as voice, coded video and bulk data transfers, and correlation between the interarrival times of packets or cells, are very important factors in the analysis of system performance. A Bernoulli/Poisson process or an independent renewal
arrival process is not an appropriate assumption for the arrival process of network traffic, as such processes can capture neither the correlation between packets in a network nor the burstiness of the traffic. Thus a more general model is needed to study the performance of networks where arrivals are correlated. A Markov-modulated process, or more generally a Markov arrival process (MAP), is widely used for non-renewal arrival processes [45]. In a Markov-modulated traffic model, the source changes its characteristics depending on the current state of an underlying Markov chain. The state of the source could represent its data rate, its packet length, etc. When the Markov process represents the data rate, the source can be in any of several active states and generates traffic at a rate determined by the current state.
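As a concrete illustration of this idea, the sketch below simulates a two-state Markov-modulated Poisson process (MMPP), one of the simplest instances of a MAP: within each state of the underlying chain, arrivals form a Poisson process at a state-dependent rate. (In MAP notation, transitions of the underlying chain that are accompanied by an arrival are collected in a matrix D1 and the remaining transitions in D0.) The rates and switching intensities used here are purely illustrative and are not taken from the cited works.

    import random

    def simulate_mmpp(rates, switch, horizon, seed=1):
        """Simulate a two-state Markov-modulated Poisson process (MMPP).

        rates[i]  : arrival rate while the modulating chain is in state i
        switch[i] : rate of leaving state i (exponential sojourn times)
        horizon   : length of the simulated time interval
        Returns the list of arrival epochs.
        """
        random.seed(seed)
        t, state, arrivals = 0.0, 0, []
        while t < horizon:
            # sojourn of the modulating chain in its current state
            end = min(t + random.expovariate(switch[state]), horizon)
            # Poisson arrivals at the state-dependent rate during the sojourn
            while True:
                t += random.expovariate(rates[state])
                if t >= end:
                    break
                arrivals.append(t)
            t = end
            state = 1 - state  # two-state chain: move to the other state
        return arrivals

    # Example: a bursty state (rate 10) alternating with a quiet state (rate 0.5)
    print(len(simulate_mmpp(rates=[10.0, 0.5], switch=[1.0, 1.0], horizon=100.0)))

Averaging the two rates would suggest roughly 5.25 arrivals per unit time, but the arrivals occur in visible bursts separated by quiet periods, which is exactly the correlation structure that a renewal model cannot reproduce.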
Packet losses are common in packet networks. They are caused by the limited buffering space in network devices. The packet loss probability depends on the type of network, since this determines the number of paths that can be simultaneously established to reduce buffer occupancy. Wavelengths or channels are used to transfer packets within the network. The wavelength independence assumption, in which wavelength usage on each link is characterized by a fixed probability, independent of other wavelengths and links, makes it possible to study the blocking performance of networks quantitatively [132, 145]. Characterizing the packet loss process is an important task, as packet loss can seriously influence the performance of the network, and its characterization enables better network design in terms of buffer sizing and management, congestion control mechanisms, protocols, etc.
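To see what the wavelength independence assumption buys, the following back-of-the-envelope sketch computes the standard first-order blocking estimate for a path with a given number of hops and wavelengths and no wavelength conversion; the utilization, hop count and wavelength count are illustrative values, not figures from the cited works.

    def blocking_probability(rho, hops, wavelengths):
        """First-order blocking estimate under the wavelength independence assumption.

        rho         : probability that a given wavelength is busy on a given link
        hops        : number of links on the path
        wavelengths : number of wavelengths per fibre (no wavelength conversion)

        A wavelength is usable end to end only if it is free on every hop,
        which happens with probability (1 - rho) ** hops; the request is
        blocked when none of the wavelengths is usable on all hops.
        """
        p_free_end_to_end = (1 - rho) ** hops
        return (1 - p_free_end_to_end) ** wavelengths

    # Example: 40% per-link wavelength utilization, 5 hops, 8 wavelengths
    print(round(blocking_probability(0.4, 5, 8), 4))

The estimate falls rapidly as wavelengths are added or as per-link utilization drops, which is the quantitative handle on blocking performance that the independence assumption provides.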
Multiple services are required for high efficiency of bandwidth-intensive applications. Different services may require different channel capacities, and the capacity of a channel depends upon the number of resources allocated to it. A wavelength division multiplexing (WDM) network divides the available fiber bandwidth into WDM channels [71, 158]. This division of bandwidth, or channel allocation, is based on the capacities required for the various services. For a high-performance system, WDM channel allocation should lead to optimized resource utilization in a given network in a way that is physically feasible and cost-effective.
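One very simple way to picture capacity-based channel allocation is the proportional scheme sketched below; it is a hypothetical illustration (the service names, demands and the proportional rule itself are assumptions made for the example), not a scheme from the cited works.

    def allocate_wavelengths(demands, total_wavelengths):
        """Hypothetical proportional WDM channel allocation.

        demands           : dict mapping service name -> required capacity
        total_wavelengths : number of WDM channels the fibre bandwidth is divided into
        Each service receives a number of channels roughly proportional to its
        required capacity; leftover channels go to the largest fractional remainders.
        """
        total = sum(demands.values())
        shares = {s: d * total_wavelengths / total for s, d in demands.items()}
        alloc = {s: int(share) for s, share in shares.items()}
        leftover = total_wavelengths - sum(alloc.values())
        for s in sorted(shares, key=lambda s: shares[s] - alloc[s], reverse=True)[:leftover]:
            alloc[s] += 1
        return alloc

    # Example: three services sharing 8 wavelengths in proportion to their demands
    print(allocate_wavelengths({"video": 40, "voice": 10, "data": 30}, total_wavelengths=8))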
In a computer network, jobs can be divided into different classes, which enables quality of service (QoS) support. For instance, there may be a natural distinction between data, voice and video packets, and different classes may require different services. In such cases, it is common to implement service disciplines in the network that treat jobs according to their ‘priority’. Priority-based channel assignment ensures the transmission of a high-priority packet prior to a low-priority one. The highest-priority queue does not need a buffer to store incoming data when a preemptive static priority scheme is employed. If a nonpreemptive scheme is employed, the highest-priority queue requires a buffer to store incoming data until it is sent. The lack of priorities in current channel assignment techniques can severely limit the viability of networks [55, 116].
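The selection rule behind such a discipline can be sketched in a few lines: the server always takes the head-of-line packet of the highest nonempty priority class, first-come-first-served within a class. The sketch below shows only this selection rule for packets that are already queued (the class names and service time are illustrative); under a nonpreemptive scheme a packet whose transmission has started would additionally never be interrupted.

    from collections import deque

    def transmit_all(queues, service_time):
        """Sketch of a priority selection rule (class 0 = highest priority).

        queues       : list of FIFO queues, one per priority class
        service_time : fixed transmission time per packet (illustrative)
        Returns the transmission order with departure times.
        """
        clock, schedule = 0.0, []
        while any(queues):
            # head-of-line packet of the highest nonempty priority class
            cls = next(c for c, q in enumerate(queues) if q)
            pkt = queues[cls].popleft()
            clock += service_time
            schedule.append((pkt, cls, clock))
        return schedule

    # Example: three data packets queued behind two voice packets
    voice, data = deque(["v1", "v2"]), deque(["d1", "d2", "d3"])
    for pkt, cls, t in transmit_all([voice, data], service_time=1.0):
        print(f"t={t:.1f}  class {cls}  {pkt}")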
A mobile ad hoc network (MANET) is a self-configuring network of mobile devices in which mobile subscribers are connected to a base station by wireless links, and retrials are a very common issue. When a call request arrives at a base station, the base station assigns the mobile subscriber a link to the destination. If, due to traffic congestion, no links are available, the call either retries until a link is allocated successfully or it balks and leaves the system. Similarly, in an optical access network, when a traffic request arrives, the network operator executes the routing and wavelength assignment (RWA) procedure, which is responsible for finding a working path from the source node to the destination node and assigning an available wavelength to this connection as the working wavelength to carry data along the connection [134]. If no path is found or no wavelength is available, the issued request is blocked [160, 71].
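A toy simulation of this retrial-or-balk behaviour at a base station might look as follows; the arrival, service and retrial rates, the number of links and the retrial probability are all illustrative assumptions, not parameters from the cited works.

    import heapq
    import random

    def retrial_sim(lam, mu, retry_rate, channels, p_retry, horizon, seed=2):
        """Toy retrial/balking simulation at a base station.

        New calls arrive at rate lam and hold one of `channels` links for an
        exponential time with rate mu. A call that finds all links busy retries
        after an exponential delay (rate retry_rate) with probability p_retry,
        otherwise it balks and is lost.
        """
        random.seed(seed)
        events = [(random.expovariate(lam), "arrival")]
        busy, served, lost = 0, 0, 0
        while events:
            t, kind = heapq.heappop(events)
            if t > horizon:
                break
            if kind == "arrival":        # schedule the next fresh arrival
                heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
            if kind in ("arrival", "retry"):
                if busy < channels:      # a link is free: start the call
                    busy += 1
                    served += 1
                    heapq.heappush(events, (t + random.expovariate(mu), "departure"))
                elif random.random() < p_retry:
                    heapq.heappush(events, (t + random.expovariate(retry_rate), "retry"))
                else:                    # the blocked call balks
                    lost += 1
            else:                        # a departure frees a link
                busy -= 1
        return served, lost

    print(retrial_sim(lam=5.0, mu=1.0, retry_rate=2.0,
                      channels=4, p_retry=0.7, horizon=1000.0))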
Optical Burst Switching (OBS) is a technology for reducing the gap between transmission and switching speeds. In OBS, incoming traffic from clients at the edge of the network is aggregated and then transmitted through WDM links [131, 171]. The operation of an OBS controller can be seen as a queue with reneging, or impatience. When a path is not assigned to a request, the burst control packet is accepted by the queue and kept waiting for a path. If its delay budget is lower than the effective processing delay, it becomes impatient and leaves the system unserved. To make more efficient use of the network, the loss of packet bursts has to be reduced and the performance of the network enhanced [30, 133].
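The impatience mechanism can be sketched as follows: each burst control packet carries a delay budget, and it reneges whenever the delay it would actually experience in the control queue exceeds that budget. The request names, budgets and processing time below are illustrative assumptions.

    def process_bcp_queue(requests, processing_time):
        """Sketch of reneging in an OBS control queue.

        requests        : list of (name, delay_budget) burst control packets, in arrival order
        processing_time : time needed to set up a path for one request
        A request whose delay budget is smaller than the delay it would
        experience reneges, and the corresponding burst is lost.
        """
        clock, served, reneged = 0.0, [], []
        for name, budget in requests:
            if budget < clock + processing_time:   # effective delay exceeds the budget
                reneged.append(name)               # impatient: leaves unserved
            else:
                clock += processing_time
                served.append(name)
        return served, reneged

    # Example: the second burst's budget cannot cover its waiting plus processing time
    print(process_bcp_queue([("b1", 5.0), ("b2", 2.0), ("b3", 9.0)], processing_time=3.0))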