
    Analysis and Implementation of Multiplexing Techniques in

    Connection-Oriented Communication Networks

    ___________________________________________________________

    A Dissertation

    Presented to

    the faculty of the School of Engineering and Applied Science

    University of Virginia

    ___________________________________________________________

    In Partial Fulfillment

    of the Requirements for the Degree

    Doctor of Philosophy

    (Electrical Engineering)

    ____________________________________________________________

    by

    Tao Li

    August 2006


    APPROVAL SHEET

    The dissertation is submitted in partial fulfillment of the requirements for the degree of

    Doctor of Philosophy (Electrical Engineering)

    ________________________________________Tao Li, Author

    This dissertation has been read and approved by the examining Committee:

    ________________________________________Prof. Malathi Veeraraghavan, Advisor

    ________________________________________Prof. Joanne Bechta Dugan, Chairperson

    ________________________________________Prof. Stephen G. Wilson

    ________________________________________Prof. Mat Brandt-Pearce

    ________________________________________Prof. Stephen D. Patek

    Accepted for the School of Engineering and Applied Science:

    ________________________________________Dean, School of Engineering and Applied Science

    August 2006


    Abstract

    Future implementations of wired and wireless communication networks are expected to support a variety of multimedia applications with diverse traffic characteristics and quality-of-service (QoS) requirements. To meet these diverse requirements, there are two types of networking technologies, i.e., connectionless and connection-oriented. While some applications, such as small data-file transfers, are best served on a connectionless network, other applications such as large data-file transfers and audio/video applications that have stringent requirements on data rate, delay, delay jitter, and delay-bound violation probability are best served with a connection-oriented network because of its inherent support for QoS.

    This dissertation combines an analytical study of a connection-oriented packet-switched scheduling mechanism (a data-plane problem) with a hardware implementation of a signaling protocol for a (connection-oriented) circuit switch (a control-plane problem). For the analytical study, we model and simulate a polling-based scheduling mechanism for its ability to support real-time applications. This is a connection-oriented packet-based bandwidth-sharing mechanism because it requires a call admission control phase to limit the maximum number of communicating endpoints, and it uses packets in the data plane. The real-time application considered in our study is telephony. We develop models to determine appropriate values for operational parameters, such as the number of voice calls that can be simultaneously supported while meeting a predetermined set of quality-of-service requirements (e.g., delay and loss). Our results can be used to dimension a polling-based system, such as the polling-based operational mode of an IEEE 802.11 LAN.

    The implementation part of this dissertation is focused on demonstrating that signaling protocols, needed in all connection-oriented networks for the call admission phase, can, in spite of their complexity, be implemented in hardware. Advantages of a hardware implementation are that (a) call setup delay is reduced by at least two-to-three orders of magnitude, and (b) call-handling capacity is increased significantly. By reducing call setup delay, a significant overhead component in connection-oriented networks, resource utilization can be improved. With the high call-handling capacity of a hardware-accelerated signaling engine, connection-oriented switches can better support end-user applications with short-duration calls, which typically have high call arrival rates.


    Acknowledgements

    First and foremost, I would like to express my sincerest gratitude to my advisor, Professor Malathi Veeraraghavan, for her inspiring guidance, encouragement, patience, and continuous support over the past six years. I have benefited tremendously from her unique blend of energy, enthusiasm, vision, technical insights, and practical sensibility.

    I want to thank Professor Joanne Bechta Dugan, Professor Stephen G. Wilson, Professor Mat Brandt-Pearce, and Professor Stephen D. Patek for serving on the dissertation committee and providing constructive comments. I would like to particularly thank Dr. Dimitris Logothetis for his involvement and inspiring direction in part of this research. I am honored to have worked with him.

    I want to thank Haobo Wang, Xuan Zheng, Zhifeng Tao, Hojun Lee, Xiangfei Zhu, Xiuduan Fang, Zhanxiang Huang, Reinette Grobler, Anant P. Mudambi, and Murali Nethi, with whom I have enjoyed collaboration and numerous discussions. I would like to thank my fellow graduate students and friends, including Qianling Cao, Qun Xiao, Jun He, Wenzhuo Jin, Haijun Fang, Bo Xu, Bin Huang, Xinmin Liu, Shuhao Chen, and Shixiao Zhou. They have made my life in Charlottesville enjoyable and memorable. I would also like to take this opportunity to thank Chen Chen, Yaogang Lian, Chengdu Huang, Ying Wang, Changlong Hu, Hong Xu, Yvan Pointurier, Chad Cole, Yoshihiro Masui, Kirtan Modi, Shilpa Deshpande, Jinlian Wang, Emily Jinwen Chong, and many others for their friendship and support.

    I would like to thank Professor Shivendra S. Panwar for his constructive comments on my research when I was with Polytechnic University. During my stay in Brooklyn, NY, I shared many wonderful times with my fellow students and friends. I especially thank Hui Lin, Lina Chen, Xiaoan Lu, Zilan Lin, Hua Zhang, Shiwen Mao, Yihan Li, Jiwu Duan, John Kuo, Hua Tang, Junxu Zhang, Xi Yang, Xueai Bai, Ying Meng, and Yi Qin for their friendship and support.

    I am grateful to Shizhong Xu, Jianhao Hu, Chun He, Xing Li, Xiang Ling, Hongyin Lu, Hengduan Luo, Jinghua Qian, Xiaoyu Fu, Yingming Lin, Yi Huang, and many others. Their advice, friendship, and generous support made my stay in Chengdu an enjoyable and memorable one.

    This work was supported by the National Science Foundation under grants ANI-0087487 and ITR-0312376. I also acknowledge the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, Brooklyn, NY, for providing funding for my Ph.D. study.

    Finally, I dedicate this dissertation to my parents and relatives for their love, support, and encouragement.


    Contents

    Chapter 1 Introduction 1

    1.1 Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

    1.2 Motivation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    1.3 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

    Chapter 2 Analysis of a Polling System with Application to Wireless LANs 8

    2.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

    2.2 Polling System Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

    2.3 Analysis of a Single-Queue Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

    2.3.1 Distribution of Interpoll Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

    2.3.2 Distribution of Packet Queueing Delay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

    2.3.3 Service Time Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

    2.3.4 Distribution of DW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    2.4 Multiple-Queue Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    2.5 Validation and Application of Analytical Model to a Wireless LAN . . . . . . . . . 20

    2.5.1 Background of IEEE 802.11 and Values Selected for Parameters . . . . . . . . 20

    2.5.2 Numerical Results for Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

    2.5.3 Numerical Results for Capacity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

    2.5.4 Summary of the Numerical Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30


    2.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

    Chapter 3 Effects of Packetization of Voice Data 33

    3.1 An ON-OFF MMF Model with Packetization . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    3.2 Voice Capacity and Delay Bound in Small-N Regime of Operation . . . . . . . . . . 35

    3.3 Resource Allocation in Large-N Regime of Operation . . . . . . . . . . . . . . . . . . . . 36

    3.3.1 Performance Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

    3.3.2 Analytical Assumptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

    3.3.3 Analysis of Overflow Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

    3.3.4 Relationship Between Overflow Probability and Packet Loss Ratio . . . . . . 42

    3.3.5 Analysis of Packet Loss Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

    3.4 Numerical Results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

    3.4.1 Delay in Single-call Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    3.4.2 Delay in Multi-call Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

    3.4.3 Distribution of Tsrv in Small-N Regime of Operation . . . . . . . . . . . . . . . . . 50

    3.4.4 Overflow Probability and Packet Loss Ratio in Large-N Regime of Operation . . . . 53

    3.4.5 Resource Allocation for Aggregated Flows . . . . . . . . . . . . . . . . . . . . . . . . . 56

    3.4.6 Asymmetry in the Two Directions of Voice Communication. . . . . . . . . . . . 58

    3.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

    Chapter 4 Implementation of a Signaling Control Card 64

    4.1 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

    4.2 Architecture of Signaling Control Card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

    4.3 Implementation Details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69


    4.3.1 Gigabit Ethernet Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

    4.3.2 Hardware Signaling Accelerator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

    4.3.3 PCI Interface Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

    4.3.4 Configuration Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

    4.3.5 Power Regulation Module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

    4.4 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

    Chapter 5 Conclusions 96

    5.1 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

    5.2 Future Research Directions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

    Bibliography 100


    List of Figures

    Fig. 1.1: (a) An unfolded view of a generic network switch; (b) Output scheduling in an output-buffered packet switch. . . . . 2

    Fig. 2.1: Polling system model: An example showing three vacations in one polling cycle. . . . . 12

    Fig. 2.2: Timing diagram. . . . . 14

    Fig. 2.3: Timing in the small-N regime of operation: the worst-case scenario. . . . . 18

    Fig. 2.4: Network architecture. . . . . 20

    Fig. 2.5: CDF of DW with TS as a parameter in the single-call scenario. Twalk and C are set to 0.23ms and 8.5Kbps, respectively. . . . . 23

    Fig. 2.6: CCDF of DW in the small-N regime of operation. β, codec rate, and Twalk are set to 0.5, 64Kbps, and 0.23ms, respectively. . . . . 24

    Fig. 2.7: CCDF of delay with N' as a parameter in the large-N regime of operation. TS, β, and codec rate are equal to 30ms, 0.5, and 8.5Kbps, respectively. . . . . 26

    Fig. 2.8: N'max versus TS, with Ploss and β as parameters. Codec rate, Twalk, and stretch distribution are respectively set to (a) 8.5Kbps, 0.23ms, and S(t); (b) 8.5Kbps, 0.23ms, and VSmax. The plot for Ploss=0 is obtained from one million samples. . . . . 28

    Fig. 2.9: N'max versus TS, with Ploss and β as parameters. Codec rate, Twalk, and stretch distribution are respectively set to (a) 64Kbps, 0.23ms, and VSmax; (b) 64Kbps, 0.13ms, and VSmax. ROHC is applied in (b). The plot for Ploss=0 is obtained from one million samples. . . . . 29

    Fig. 2.10: Transmission efficiency as a function of TS. Codec rate, H, and Twalk serve as parameters. R is fixed at 11Mbps. . . . . 31

    Fig. 3.1: Illustration of the packetization of voice traffic. . . . . 34

    Fig. 3.2: Discrete-time ON-OFF Markov model. . . . . 38

    Fig. 3.3: Timing diagram. . . . . 39

    Fig. 3.4: CDF of DW with TS as a parameter in a single-call scenario. Packetization period is set to 10ms. Vacation stretch follows the distribution specified in (2.18). . . . . 46

    Fig. 3.5: CDF of DW with TS as a parameter in a single-call scenario with clock skew. Packetization period is set to 10.001ms. Vacation stretch follows the distribution specified in (2.18). . . . . 47

    Fig. 3.6: CCDF of DW in the small-N regime of operation. β and Twalk are set to 0.5 and 0.13ms, respectively. . . . . 49

    Fig. 3.7: CCDF of frame delay with N' as a parameter in the large-N regime of operation. Vacation stretch is set to VSmax. . . . . 49

    Fig. 3.8: CCDF of Tsrv for N'=5, 10, 20, and 30, respectively. TS equals 30ms. . . . . 51

    Fig. 3.9: CCDF of Tsrv for N'=5, 10, 20, and 30, respectively. TS equals 50ms. . . . . 52

    Fig. 3.10: Overflow probability and packet loss ratio in the large-N regime of operation. TS = 30ms. Dbound is set to TS+L+2ms. . . . . 54

    Fig. 3.11: Difference between the two service disciplines. . . . . 55

    Fig. 3.12: Resource allocation for aggregated voice flows. TS equals 30ms. . . . . 57

    Fig. 3.13: Resource consumption of each voice call. TS equals 30ms. . . . . 57

    Fig. 3.14: Ploss in the two directions assuming the mobile-assisted decision-feedback approach. . . . . 60

    Fig. 3.15: Ploss in the two directions assuming the call-based termination with traffic estimation. . . . . 61

    Fig. 4.1: Architecture of a typical switch. . . . . 67

    Fig. 4.2: System architecture. . . . . 68

    Fig. 4.3: Block diagram of the signaling control card. . . . . 68

    Fig. 4.4: Block diagram of the Gigabit Ethernet module. . . . . 70

    Fig. 4.5: Block diagram of the hardware signaling accelerator module. . . . . 73

    Fig. 4.6: State transition diagram of MAC interface unit for the receive path. . . . . 74

    Fig. 4.7: State transition diagram of MAC interface unit for the transmit path. . . . . 76

    Fig. 4.8: Block diagram of the FIFO interface unit. . . . . 77

    Fig. 4.9: State transition diagram for memory segment x. . . . . 79

    Fig. 4.10: State transition diagram of the FIFO controller. . . . . 81

    Fig. 4.11: Block diagram of a switch fabric interface unit. . . . . 81

    Fig. 4.12: State transition diagram for PATH message processing. . . . . 83

    Fig. 4.13: Block diagram of the PCI interface module. . . . . 87


    List of Tables

    Table 2.1: Values for parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

    Table 2.2: Summary of numerical results at TS = 30ms . . . . . . . . . . . . . . . . . . . . . . . 30

    Table 4.1: 10-bit PHY interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    Table 4.2: MAC interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

    Table 4.3: MAC register interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

    Table 4.4: Control output on the receive path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

    Table 4.5: Interface signals of the RAM controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

    Table 4.6: Control output of the RAM controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

    Table 4.7: Interface signals of the FIFO controller. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

    Table 4.8: Signals on TCAM and SRAM interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

    Table 4.9: Control input to the TCAM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

    Table 4.10: Signals to the configuration module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

    Table 4.11: Memory mapping table. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

    Table 4.12: Configuration procedure for the GbE controller. . . . . . . . . . . . . . . . . . . . . . 91

    Table 4.13: Configuration procedure for the TCAM. . . . . . . . . . . . . . . . . . . . . . . . . . . . 93


    List of Acronyms

    AP: Access Point

    ATM: Asynchronous Transfer Mode

    BE: Best Effort

    CAC: Call/Connection Admission Control

    CCDF: Complementary Cumulative Distribution Function

    CDF: Cumulative Distribution Function

    CFP: Contention Free Period

    codec: coder-decoder

    CP: Contention Period

    CPU: Central Processing Unit

    CRC: Cyclic Redundancy Check

    DCF: Distributed Coordination Function

    DMA: Direct Memory Access

    DOCSIS: Data Over Cable Service Interface Specification

    ERT-VR: Extended Real-Time Variable Rate

    FFT: Fast Fourier Transform

    FIFO: First In First Out

    FPGA: Field Programmable Gate Array

    FSM: Finite State Machine


    GbE: Gigabit Ethernet

    GMPLS: Generalized MPLS

    HWSAC: Hardware Signaling Accelerator Core

    i.i.d.: independent and identically distributed

    IEEE: Institute of Electrical and Electronics Engineers

    IP: Internet Protocol

    IPv4: Internet Protocol version 4

    LAN: Local Area Network

    LDP: Label Distribution Protocol

    LLC: Logical Link Control

    MAC: Media Access Control

    MMF: Markov Modulated Fluid

    MPLS: Multi-Protocol Label Switching

    MUX: Multiplexer

    PCF: Point Coordination Function

    PCI: Peripheral Component Interconnect

    PCM: Pulse Code Modulation

    PECL: Positive Emitter Coupled Logic

    PNNI: Private Network-Network Interface

    PSTN: Public Switched Telephone Network

    QoS: Quality of Service

    RAM: Random Access Memory

    ROHC: RObust Header Compression


    RSVP-TE: Resource ReserVation Protocol - Traffic Engineering

    RTP: Real-time Transport Protocol

    RT-VR: Real-Time Variable Rate

    SONET: Synchronous Optical Network

    SRAM: Static Random Access Memory

    TCAM: Ternary Content Addressable Memory

    TCP: Transmission Control Protocol

    TOE: TCP Offload Engine

    TTL: Transistor Transistor Logic

    UDP: User Datagram Protocol

    VC: Virtual Circuit

    WDM: Wavelength Division Multiplexing


    Chapter 1

    Introduction

    1.1 Background

    Future implementations of wired and wireless communication networks are expected to support a variety of multimedia applications with diverse traffic characteristics and quality-of-service (QoS) requirements. For example, mission-critical applications may demand deterministic (or hard) guarantees on loss and delay. Interactive voice and video applications require a guarantee on delay but can generally tolerate a small packet loss rate. Hence, they only require statistical (or soft) QoS guarantees. Data transfers expect error-free transmission. Some applications, such as e-mail, do not require explicit QoS guarantees.

    To meet these diverse QoS requirements, there are two types of networking technologies, i.e., connectionless and connection-oriented. While some applications, such as small data-file transfers, are best served on a connectionless network, other applications such as large data-file transfers and voice/video applications that have stringent requirements on data rate, delay, delay jitter, and delay-bound violation probability are best served with a connection-oriented network because of its intrinsic support for QoS. There are two types of connection-oriented networks: circuit-switched networks, such as time-division-multiplexed SONET (Synchronous Optical Network) and WDM (Wavelength Division Multiplexing), and packet-switched networks, such as MPLS (Multi-Protocol Label Switching) and ATM (Asynchronous Transfer Mode). Connection-oriented packet-switched networks are also referred to as virtual-circuit (VC) networks.

    A connection-oriented network switch has a data-plane module and a control-plane module, as shown in Fig. 1.1(a). The data-plane module of a network switch, typically implemented in hardware, consists of line cards and a switch fabric. The input sections of line cards demultiplex input signals and process protocol headers. In a circuit switch, the appropriate output interface for a demultiplexed input signal is identified according to the position information (e.g., interface number, time slot, and wavelength) of this signal and a set of table entries of the form {input channel identifier, output channel identifier}, used to describe the cross-connections. A channel identifier can be a combination of interface, time slot, and wavelength indexes. In a connection-oriented packet switch, the output interface of a packet is determined by checking the header information contained in the packet and a set of table entries of a form similar to the one just mentioned but with different channel identifiers. The switch fabric switches demultiplexed signals or packets according to table entries {input channel identifier, output channel identifier}, which can be either statically set up through provisioning, or dynamically established using a signaling protocol such as Private Network-Network Interface (PNNI) [1], Resource ReserVation Protocol (RSVP) [2], Label Distribution Protocol (LDP) [3], and RSVP-Traffic Engineering (RSVP-TE) [4]. After traversing the switch fabric, signals or packets are transmitted out through the output section of a line card. Additionally, in an output-buffered packet switch, packet schedulers are used to select packets for transmission on an output port based on the QoS commitments made during Call/Connection Admission Control (CAC). This is shown in Fig. 1.1(b).

    Fig. 1.1: (a) An unfolded view of a generic network switch; (b) Output scheduling in an output-buffered packet switch.
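    To make the table-lookup description above concrete, the following small Python sketch models a cross-connect table keyed by channel identifiers. The ChannelId fields and the example entry are illustrative assumptions, not structures taken from this dissertation.

        from typing import NamedTuple, Dict

        # Sketch (assumed example): a cross-connect table mapping
        # {input channel identifier -> output channel identifier}, where a channel
        # identifier combines interface, time-slot, and wavelength indexes.
        class ChannelId(NamedTuple):
            interface: int   # physical interface index
            time_slot: int   # TDM time-slot index
            wavelength: int  # WDM wavelength index

        # Entries are installed either statically (provisioning) or dynamically
        # by a signaling protocol during connection setup.
        cross_connect: Dict[ChannelId, ChannelId] = {
            ChannelId(interface=1, time_slot=3, wavelength=0): ChannelId(2, 7, 0),
        }

        def switch(signal_channel: ChannelId) -> ChannelId:
            """Return the output channel for a demultiplexed input signal."""
            return cross_connect[signal_channel]

        print(switch(ChannelId(1, 3, 0)))   # -> ChannelId(interface=2, time_slot=7, wavelength=0)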

    The control-plane module is an implementation of signaling protocols, routing protocols, and link management protocols. The chief characteristic of a connection-oriented network is that resources are reserved prior to data transfer. Resources are reserved in a circuit/virtual-circuit setup phase and released in a circuit/virtual-circuit release phase. Signaling protocols are used to set up and release circuits/virtual circuits (or connections) dynamically. Setting up a connection typically consists of the following steps (a small illustrative sketch follows this list):

    • Determine a route for the connection by consulting routing tables, which are created by routing protocols.

    • Determine whether or not available network resources (i.e., bandwidth and buffer space) are sufficient to meet the declared QoS requirements of a connection request. If yes, the connection request is accepted and a fraction of resources is reserved along the path of the connection. Otherwise, the connection request is rejected. This step is referred to as CAC.

    • Allocate channel identifiers and construct table entries in the form {input channel identifier, output channel identifier}.

    • Program the switch fabric of each switch on the end-to-end path in circuit-switched networks, or program schedulers in packet-switched networks.

    • Maintain state information for each connection.

    After data transfer completes, the reserved resources are released and the table entries related to this connection are deleted.
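    The sketch below walks through the setup and release steps listed above for a single end-to-end path. It is an illustrative toy under assumed data structures (the per-link bandwidth table, the route, and the channel-identifier encoding are invented for the example), not an actual signaling-protocol implementation.

        # Illustrative connection setup/release (assumed, simplified model).
        route = ["A-B", "B-C", "C-D"]                               # step 1: route from routing tables (assumed)
        available_bw = {"A-B": 100.0, "B-C": 40.0, "C-D": 100.0}    # Mb/s per link (assumed values)
        cross_connect = {}                                           # step 3: {input channel id: output channel id}
        connection_state = {}                                        # step 5: per-connection state

        def setup_connection(conn_id, requested_bw):
            # Step 2: CAC - admit only if every link on the route has enough spare bandwidth.
            if any(available_bw[link] < requested_bw for link in route):
                return False                                         # reject the connection request
            for hop, link in enumerate(route):
                available_bw[link] -= requested_bw                   # reserve resources along the path
                # Steps 3-4: allocate channel identifiers and install a table entry
                # (a channel id is just (link, connection id) here, an assumed encoding).
                cross_connect[(link, conn_id)] = (route[hop + 1], conn_id) if hop + 1 < len(route) else None
            connection_state[conn_id] = {"route": list(route), "bw": requested_bw}
            return True

        def release_connection(conn_id):
            # After data transfer: release reserved resources and delete the table entries.
            state = connection_state.pop(conn_id)
            for link in state["route"]:
                available_bw[link] += state["bw"]
                cross_connect.pop((link, conn_id), None)

        print(setup_connection("call-1", 30.0))   # True: admitted
        print(setup_connection("call-2", 30.0))   # False: link B-C has only 10 Mb/s left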

    1.2 Motivation

    With recent advancements in networking and communication technologies, several communication systems have emerged that simultaneously support a connection-oriented mode of operation and a connectionless mode of operation. For example, in the IEEE 802.11 wireless Local Area Network (LAN) [5], the Point Coordination Function (PCF) mode of operation, a scheduling-based access mechanism, coexists with the Distributed Coordination Function (DCF) mode of operation, a contention-based channel access mechanism. The PCF mode of operation is a connection-oriented packet-based bandwidth-sharing mechanism because it requires a call admission control phase to limit the maximum number of communicating endpoints, and it uses packets in the data plane. In the Medium Access Control (MAC) layer of the IEEE 802.16 wireless Metropolitan Area Network (MAN) [6] and the Data Over Cable Service Interface Specification (DOCSIS) [7], the Real-Time Variable Rate (RT-VR) and Extended Real-Time Variable Rate (ERT-VR) services, which are used to support real-time applications, coexist with a Best Effort (BE) service, which is used to support applications without explicit bandwidth or delay requirements. The RT-VR, ERT-VR, and BE services are provided through a centralized scheduler.

    The IEEE 802.11, IEEE 802.16, and DOCSIS standards adopt scheduling-based access schemes to support real-time applications that have stringent QoS requirements. Although a variety of scheduling and CAC algorithms have been proposed for QoS provisioning in wireline networks [8][9][10][11], these results cannot be directly applied for QoS provisioning in the upstream direction (from mobile stations to a base station) of a shared-medium system, such as an infrastructure IEEE 802.11 wireless LAN, where queues are located across wireless stations. This is because the scheduler of an infrastructure IEEE 802.11 wireless LAN, which is typically built into the base station, does not have central knowledge of the instantaneous status of each queue. Given the limited information about data arrivals in a shared-medium environment, the performance of polling-based channel-sharing schemes has become an important topic of study in the literature (see [12], [13], [14], and [15]).

    Motivated by the PCF mode of operation in IEEE 802.11 and other shared-medium systems, we develop a model for a polling system with vacations, where vacations represent the time periods in which the resource-sharing mechanism used is a non-polling mode. The real-time application served by the polling mode in our study is telephony.

    This dissertation combines an analytical study of a connection-oriented packet-switched scheduling mechanism (a data-plane problem) with an implementation of a signaling protocol for a circuit switch (a control-plane problem).

    Signaling protocols are traditionally implemented in software due to their complexity and the requirement for flexibility. While software implementations of signaling protocols can cope with the complexity and remain flexible, their performance level becomes a concern if high call-handling capacities and small signaling overheads are required. For instance, a signaling protocol implementation in an off-the-shelf commercial SONET switch requires around 90ms to process a call setup message [16], which includes running a call admission control algorithm and configuring the switch fabric. Call setup delay is a significant overhead component in connection-oriented networks. The longer the delay, the fewer the applications that can reap the QoS-related benefits of connection-oriented networks. Furthermore, the longer the delay, the lower the link resource utilization, because during call setup the bandwidth being allocated for the call is not being used for user data transport. In addition, software implementations of signaling protocols can rarely achieve a call-handling capacity beyond the order of 1000-10000 calls per second [17]. However, call arrival rates at backbone switches can be several orders of magnitude higher than the call-handling capacities of current-day switches if future connection-oriented networks are directly used to support end-user applications such as file transfers and video conferencing.
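    A rough back-of-the-envelope calculation illustrates the utilization argument: the fraction of a call's lifetime consumed by setup is setup_delay / (setup_delay + holding_time). In the Python sketch below, the 90ms figure is the software processing time cited above, the 2.4-microsecond figure is the hardware result reported in Chapter 4, and the 100ms call-holding time is an assumed example value for a short call, not a measurement from this dissertation.

        # Fraction of a call's reserved-bandwidth lifetime consumed by call setup.
        def setup_overhead(setup_delay_s, holding_time_s):
            return setup_delay_s / (setup_delay_s + holding_time_s)

        holding_time = 0.100        # assumed short call (e.g., a small file transfer), 100 ms
        software_setup = 0.090      # ~90 ms per call setup message, as cited from [16]
        hardware_setup = 2.4e-6     # 2.4 microseconds, the hardware figure from Chapter 4

        print(f"software: {setup_overhead(software_setup, holding_time):.1%} of the call lost to setup")
        print(f"hardware: {setup_overhead(hardware_setup, holding_time):.4%} of the call lost to setup")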

    The implementation part of this dissertation is focused on demonstrating that signaling protocols, needed in all connection-oriented networks for the call admission phase, can, in spite of their complexity, be implemented in hardware. Advantages of a hardware implementation are that (a) call setup delay can be reduced by at least two-to-three orders of magnitude, and (b) call-handling capacity can be increased significantly.

    1.3 Outline

    The remainder of this dissertation is organized as follows. In Chapter 2, we model a polling system with vacations and apply this model to an IEEE 802.11 LAN. Appropriate values for operational parameters, codec rates, etc., are determined to obtain the highest level of performance. Performance metrics include the number of calls that can be simultaneously supported while meeting a predetermined set of quality-of-service guarantees (such as delay and loss). We have published the main results of Chapter 2 in a journal paper [18].

    In Chapter 3, we consider the packetization of voice data. We first consider deterministic QoS guarantees and compute the worst-case delay bound for each voice flow. Then we consider the statistical multiplexing of independent voice flows. We compute analytically the amount of resources that should be reserved to guarantee a specific overflow probability or packet loss ratio for a polling-based scheduling algorithm. Finally, we illustrate through simulations how different polling-period termination schemes impact the symmetry in the two directions of voice communication in an IEEE 802.11 wireless LAN.

    In Chapter 4, we describe our implementation of an FPGA-based signaling control card, which supports RSVP-TE with extensions for Generalized Multi-Protocol Label Switching (GMPLS) [19]. Specifically, while a software-based implementation of this signaling protocol in an off-the-shelf commercial SONET switch takes around 90ms to process a call setup message [16], our breakthrough hardware implementation demonstrates that the same set of actions involved in processing a call setup message (with the same signaling protocol) can be accomplished in 2.4 microseconds. Our signaling control card has a call-handling capacity of 400,000 calls/second. We have published a part of Chapter 4, the FPGA implementation of the signaling protocol, in a conference paper [20] and a journal paper [21]. Finally, in Chapter 5, we summarize this dissertation and discuss directions for future research.


    Chapter 2

    Analysis of a Polling System with Application to Wireless LANs

    Polling systems were introduced for various time-sharing computer systems in the early 1970s [22]. The basic concept of a polling system is to have a server poll a set of queues in a cyclic order. While prior work [22-28] on the performance modeling of polling systems yielded significant results, it was primarily targeted at computer data applications and assumed that customers arrive at queues according to a Poisson process.

    Increasingly, interest in polling systems is shifting from computer data applications to multimedia applications, e.g., in IEEE 802.11 wireless LANs [5], Bluetooth [29], and DOCSIS systems [7]. Particularly in the 802.11 MAC, the Point Coordination Function (PCF) mode of operation, which is a polling mechanism, coexists with the Distributed Coordination Function (DCF) mode of operation, a contention-based channel access mechanism. The reason for doing this is that while in the past, local area networks were developed with only one sharing mechanism, such as Carrier Sense Multiple Access with Collision Detection or Token Ring, more recently, with the emphasis on multimedia services, local area networks now support multiple sharing mechanisms simultaneously. We refer to time intervals in which the medium is used in a non-polling sharing mode as vacations and to such a system as a polling system with vacations. The impact of the coexistence of multiple sharing mechanisms on the performance of the polling mode needs to be understood.

    In this chapter, we model and analyze the performance of a polling system for a real-time application, i.e., telephony, which brings two new dimensions to the modeling problem addressed in [22-28]. First, telephony data generated within a call has been fitted quite closely to ON-OFF Markov Modulated Fluid (MMF) models [30, 31], which are not Poisson. Second, unlike computer data applications, the telephony application is delay-sensitive but loss-tolerant. Specifically, it has a stringent end-to-end delay requirement of 150ms for excellent-quality voice and 400ms for acceptable-quality voice, both with echo cancellers [32]. Meanwhile, it is believed that a packet loss ratio of up to 5% is tolerable [33] for some voice encoding schemes.

    Our problem statement is as follows: analyze the delay and voice capacity of the polling system with vacations. Towards solving this problem, we have built an analytical model for the delay in a single-queue polling system. For a multiple-queue system, we have identified a small-N regime of operation in which deterministic service is provided, and a large-N regime of operation in which statistical service is provided. We have computed voice capacity and delay bounds for the small-N regime of operation, and simulated voice capacity and delay distribution for the large-N regime of operation.

    The rest of the chapter is organized as follows. Section 2.1 describes related work. Section 2.2 describes our polling system model. Section 2.3 presents the delay analysis of a single-queue system. Section 2.4 presents the capacity analysis of the small-N regime of operation. Section 2.5 validates our analytical model with simulation results and also demonstrates how our model can be applied to an IEEE 802.11b wireless LAN. Section 2.6 concludes this chapter.

    2.1 Related Work

    Broadly speaking, prior work related to this work can be classified into three categories: papers on general polling systems, papers on QoS provisioning in wired and wireless networks, and papers on voice support over MAC protocols. In the first category, we have papers by Takagi [23, 24], which model single-buffer and infinite-buffer systems. The arrival process is typically Poisson. An MMF model of the type that we assume for telephony traffic is not considered. Other works such as [22, 25, 26, 34] are useful for the general modeling techniques used, but they do not consider supporting telephony on a polling system.

    The literature on QoS provisioning in wired networks is extensive (see [10], [8], [35], [36], [37], [38], and [39]), and a full review is beyond the scope of this section. These papers did not specifically address the polling scheme considered in this chapter. Deterministic services have been considered in [40] for wireless LANs and in [12] for wireless ATM networks. Kim and Krunz studied how to provide statistical QoS guarantees for a single ON/OFF fluid source or multiplexed ON/OFF fluid sources over a non-shared wireless link [41][42]. They assumed a FIFO scheduler in their study. QoS provisioning in wireless networks was also investigated in [43], [44], and [45] under different settings. The scheduling scheme assumed in this chapter was not considered in these papers.

    On the topic of how to support voice traffic on MAC protocols, there is a very rich literature [46-48]. Focusing on polling-based MAC systems, there have been several proposals to use polling to support real-time communications in wireless environments (e.g., [49] and [50]). Furthermore, the industry-standard IEEE 802.11 MAC protocol [5] includes a polling scheme for real-time communications. This led to a number of papers on voice over 802.11 MAC schemes [51-61]. Most of these papers [51-61] use simulations to determine how to support telephony on the 802.11 polling mode. Some conclude that it is feasible to support telephony and provide operating points, such as the number of telephone calls to admit to the polling list and corresponding delay bounds [e.g., 53, 58, 60]. Others [e.g., 59] conclude that there are better MAC schemes (when compared to the polling scheme) for telephony. Equipment vendors such as Cisco and Symbol [62, 63] have proprietary implementations of access points that support telephony traffic. Even the IEEE 802.11 working group is now specifying an enhanced version of the MAC protocol, labelled 802.11e [64], to support other MAC schemes for quality of service. Nevertheless, our interest in this question of understanding the behavior of a polling system with vacations when supporting telephony traffic remains, and given that there are other communication networks, such as Bluetooth and DOCSIS, proposing polling schemes for real-time traffic, we decided to study this problem in a general context.

    2.2 Polling System Model

    The polling system model with vacations is illustrated in Fig. 2.1. In this model, time is divided into repetitive intervals, each lasting TS. These intervals are further divided into alternating polling periods and vacation periods. A centralized server schedules transmission opportunities among all queues during polling periods. After each polling period, the server goes on vacation for at least a fraction β (0 < β < 1) of the interval TS. If a vacation period does not finish at the completion instant of a TS interval, it can stretch into the next polling period, which is in turn foreshortened by the amount of time that equals the length of the vacation stretch, a random variable upper-bounded by VSmax. The length of a polling period would be (1 - β)TS if not foreshortened by a vacation stretch. The notions of TS and β are important for applications receiving service in vacation periods: TS controls the frequency of occurrence of vacation periods, while β governs the partition inside each TS.

    Fig. 2.1: Polling system model: An example showing three vacations in one polling cycle. (Figure notes: 1. The ON-OFF model shown in the figure, with transition rate a out of the OFF state, transition rate b out of the ON state, and source rate c, is assumed to be the source for each of the N queues; the queue is filled with a continuous bit stream when the source is in the ON state. 2. The numbers k, m, and p are arbitrary; in other words, a vacation can occur at arbitrary points within a polling cycle, and it can even occur multiple times within one polling cycle. The only constraint is that a vacation occurs once in every TS interval.)

    We assume that voice sources are independent and identical ON-OFF Markov Modulated Fluid (MMF) sources. A constant-rate bit stream is generated while a source is in the ON state (see Fig. 2.1). We assume that the server polls all queues in a round-robin fashion. All data accumulated at a queue up to the instant when it is polled is served immediately after that poll (i.e., the gated-service discipline). With telephony traffic, the limited-k service discipline is not an option because it could lead to excessive delays. Walk time, Twalk, is the time needed for the server to move from one queue to another (i.e., the overhead of a polling scheme). A polling cycle Tc is the time taken to poll all N queues. With the assumed service discipline, two situations may happen in a polling period (a short simulation sketch follows this list):

    • The server needs to go on vacation before completing a polling cycle because the polling period is exhausted. If this happens, the server will continue polling the next queue when the next polling period starts (i.e., after the vacation), as shown in Fig. 2.1.

    • The server completes a polling cycle. If this happens, in our system, a vacation period starts immediately. In other words, a queue is served at most once in each polling period. The reason for this latter assumption is that sources in our model generate data at low rates relative to service rates (e.g., if the server is a transmission link, service rates are likely to be on the order of a few Mbps while voice codec rates are on the order of tens of Kbps). This implies that the amount of data accumulated may not be sufficiently large compared to the overhead represented by Twalk if a queue is polled more than once in a polling period.
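    The following Python sketch is one simplified way to simulate the model just described (gated service, round-robin polling, a per-interval polling budget, and a random vacation stretch). It is illustrative only; the ON-OFF transition rates and the other parameter values are assumed for the example and are not the settings used in this dissertation.

        import math, random

        # Minimal sketch of the gated-service polling model with vacations (assumed
        # simplification, not the dissertation's simulator). In each TS interval the
        # server polls queues round-robin within a budget of (1 - beta)*TS shortened
        # by the current vacation stretch; each poll costs a walk time Twalk, and a
        # polled queue is drained of everything it holds (gated service).
        TS, beta, VS_max = 0.030, 0.5, 0.001        # seconds; example values
        Twalk, R = 0.00023, 11e6                    # walk time (s) and link rate (b/s); example values
        c, a, b = 64e3, 1.0, 1.5                    # source rate (b/s) and assumed ON-OFF rates (1/s)
        N = 10                                      # number of queues (voice calls)

        on = [False] * N
        backlog = [0.0] * N                         # bits waiting in each queue
        next_q, stretch = 0, 0.0

        for _ in range(10000):                      # simulate 10000 TS intervals
            budget = (1.0 - beta) * TS - stretch    # polling time left in this interval
            for _ in range(N):                      # at most one visit per queue per polling period
                service = backlog[next_q] / R
                if budget < Twalk + service:
                    break                           # polling period exhausted; resume here next time
                budget -= Twalk + service
                backlog[next_q] = 0.0
                next_q = (next_q + 1) % N
            stretch = random.uniform(0.0, VS_max)   # vacation stretch into the next interval
            for i in range(N):                      # advance the ON-OFF fluid sources (coarse TS step)
                if on[i]:
                    backlog[i] += c * TS
                if random.random() < 1.0 - math.exp(-(b if on[i] else a) * TS):
                    on[i] = not on[i]

        print("mean residual backlog (bits):", sum(backlog) / N)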

    2.3 Analysis of a Single-Queue Scenario

    The mode of operation in this single-queue scenario (as per our system model described in Section 2.2) is that the server will poll and serve the queue once and then immediately go on vacation until the next polling period. A voice packet, which consists of the data accumulated in the queue at a polling instant, would experience two delay components, i.e., a queueing delay DQ, the waiting time in the queue, and a service delay DS, the time taken to serve the voice packet. Since voice data is created in the form of a bit stream, the data that arrives earlier would have had a longer waiting time at the polling instant. We define DQ to be the time gap between the arrival instant of the earliest bit in the voice packet and the polling instant of this voice packet. Fig. 2.2 shows a scenario in which a voice source enters the ON state between two consecutive polls.

    Fig. 2.2: Timing diagram. (The figure shows the kth and (k+1)th polls, the source transitioning from OFF to ON between them, the queueing delay DQ measured from the arrival of the earliest bit to the (k+1)th poll, and the interpoll time TI = t spanning TS plus the two vacation stretches.)


    We are interested in the total delay DW, which is defined as the sum of DQ and DS. The size of a voice packet, which impacts its service time DS, is affected by the queueing delay. Thus, DQ and DS are dependent random variables. The queueing delay also depends on the interpoll time TI between two consecutive polls, which is in turn exclusively determined by TS and the two vacation stretches. With this observation, we start with analyzing the distribution of TI in Section 2.3.1. Then we derive the conditional distributions of DQ and DS in Sections 2.3.2 and 2.3.3, respectively. Finally, we combine them all and obtain the distribution of DW in Section 2.3.4.

    2.3.1 Distribution of Interpoll Time

    Consider two consecutive polls, the kth poll and the (k+1)th poll. As shown in Fig. 2.2, the interpoll time equals TS, a constant, plus the difference between two consecutive vacation stretches. We assume that vacation stretches are i.i.d. random variables with known Probability Density Function (PDF) s(x), x ∈ [0, VSmax]. Then the PDF of TI, denoted by i(t), can be computed by convolution

    $$i(t) = \int_{0}^{VS_{\max}} s(t - T_S + \tau)\, s(\tau)\, d\tau. \qquad (2.1)$$
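    As a quick numerical illustration of (2.1), the Python sketch below evaluates i(t) by direct numerical integration. The uniform stretch PDF s(x) and the parameter values are assumed purely for the example and are not the distribution used in this dissertation.

        import numpy as np

        # Numerical evaluation of the interpoll-time PDF i(t) in (2.1) via the
        # trapezoid rule (illustrative; parameter values are assumed examples).
        T_S = 0.030          # interval length in seconds (example value)
        VS_max = 0.001       # maximum vacation stretch in seconds (example value)

        def s(x):
            # Assumed stretch PDF: uniform on [0, VS_max].
            return np.where((x >= 0.0) & (x <= VS_max), 1.0 / VS_max, 0.0)

        def i_pdf(t, n=2001):
            # i(t) = integral over tau in [0, VS_max] of s(t - T_S + tau) * s(tau).
            tau = np.linspace(0.0, VS_max, n)
            return np.trapz(s(t - T_S + tau) * s(tau), tau)

        # The interpoll time ranges over [T_S - VS_max, T_S + VS_max];
        # integrating i(t) over that range should give approximately 1.
        t_grid = np.linspace(T_S - VS_max, T_S + VS_max, 401)
        print(np.trapz([i_pdf(t) for t in t_grid], t_grid))   # ~1.0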

    2.3.2 Distribution of Packet Queueing Delay

    The source state at the kth polling instant can be either OFF, denoted by A, or ON, denoted by Ā.


    Consider a packet created at the (k+1)th polling instant. Let B denote the event that the packet is nonempty. Then for TI = t and 0 ≤ q ≤ t, we obtain the following conditional probability

    $$P\{D_Q \le q \mid A, B, T_I = t\} = \frac{e^{-a(t-q)} - e^{-at}}{1 - e^{-at}}, \qquad (2.2)$$

    where a is the transition rate of the voice source out of the OFF state, as shown in Fig. 2.1. If the source is in the ON state at the kth polling instant, the resulting queueing delay is t. Thus we have

    $$P\{D_Q \le q \mid \bar{A}, B, T_I = t\} = u(q - t), \qquad (2.3)$$

    where u(x) is the unit step function defined by

    $$u(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0. \end{cases} \qquad (2.4)$$

    Removing the conditioning on the source state at the kth polling instant, we obtain

    $$P\{D_Q \le q \mid B, T_I = t\} = P\{D_Q \le q \mid A, B, T_I = t\}\, P\{A \mid B\} + P\{D_Q \le q \mid \bar{A}, B, T_I = t\}\, P\{\bar{A} \mid B\}, \qquad (2.5)$$

    where P{A | B} is given by (2.6) and P{Ā | B} = 1 − P{A | B}; parameters a and b are the transition rates shown in Fig. 2.1. Unconditioning on TI, we obtain P{DQ ≤ q | B} as follows

    $$P\{D_Q \le q \mid B\} = \int_{T_S - VS_{\max}}^{T_S + VS_{\max}} P\{D_Q \le q \mid B, T_I = t\}\, i(t)\, dt, \qquad (2.7)$$

    where i(t) is given by (2.1).

    2.3.3 Service Time Distribution

    Service time depends upon the amount of voice data accumulated in the time period that equals DQ. The reason for considering the queueing time instead of the interpoll time is that no data is generated before the voice source transitions into the ON state (see Fig. 2.2). Let Z be the total amount of time spent in the ON state during DQ. By using a uniformization technique [65], we obtain the conditional Cumulative Distribution Function (CDF) of Z as

    $$F_{Z \mid D_Q}(z \mid q) = P\{Z \le z \mid D_Q = q\} = \sum_{n=1}^{\infty} e^{-(a+b)q}\, \frac{\big((a+b)q\big)^{n}}{n!} \sum_{k=1}^{n} \left(\frac{b}{a+b}\right)^{k-1} \left(\frac{a}{a+b}\right)^{n-k+1} \sum_{i=k}^{n} \binom{n}{i} \left(\frac{z}{q}\right)^{i} \left(1 - \frac{z}{q}\right)^{n-i}. \qquad (2.8)$$

    Applying our polling model to a shared communication link with rate R, where service means transmission on the link, we define service time as DS = Zc/R, where c denotes the source rate. This leads to the following relationship between the conditional CDFs of DS and Z

    $$F_{D_S \mid D_Q}(s \mid q) = F_{Z \mid D_Q}\!\left(\frac{Rs}{c} \,\Big|\, q\right). \qquad (2.9)$$

    2.3.4 Distribution of DW

    Let f_{DS,DQ}(s, q) denote the joint PDF of DS and DQ. Given DW = DQ + DS, the CDF of DW can be obtained by

    $$F_{D_W}(w) = P\{D_W \le w\} = \iint_{s+q \le w} f_{D_S, D_Q}(s, q)\, ds\, dq = \iint_{s+q \le w} f_{D_S \mid D_Q}(s \mid q)\, f_{D_Q}(q)\, ds\, dq = \int_{0}^{w} F_{D_S \mid D_Q}(w - q \mid q)\, f_{D_Q}(q)\, dq, \qquad (2.10)$$

    where F_{DS|DQ}(w − q | q) is given by (2.9). The PDF of the queueing delay, f_{DQ}(q), can be found by taking the derivative of (2.7).

    2.4 Multiple-Queue Scenario

    In this section, we focus on the voice capacity of a multiple-queue scenario. Given that source rates and vacation stretches are upper bounded, we recognize that there is a parameter Np such that if N, the number of queues, is no more than Np, all queues are guaranteed to be served in each polling period even in the worst-case scenario. Although all queues enjoy guaranteed service when N ≤ Np, the problem is that, in most cases, a sizeable fraction of the time allocated to polling periods would be left unused because of the OFF state of the source model. A strategy that can improve utilization is to increase N beyond Np to fill in blanks in polling periods that would otherwise be left to vacation periods, at the expense of the absolute service guarantee. This leads to statistical multiplexing. If N > Np, as illustrated in Fig. 2.1, large interpoll times would occur if a polling period is exhausted before a polling cycle completes. Large interpoll times will bring about large queueing delays, which impose a negative impact on human-perceived voice quality. Therefore, N should be controlled such that the frequency of occurrence of such large delays is kept within a tolerable limit.

    We refer to the regime of operation in which N ≤ Np as the small-N regime of operation, and to the regime in which N > Np as the large-N regime of operation.


    The choice of the regime of operation is up to implementation. Next we compute Np, the voice capacity of the small-N regime of operation.

    We consider two consecutive TS intervals, the kth interval and the (k+1)th interval, as shown in Fig. 2.3. In the worst-case situation, all polls in the kth interval result in an empty response while all polls in the (k+1)th interval result in a maximum-sized packet. This can happen if every voice source transitions into the ON state right after a poll in the kth interval. The (k+1)th interval is shown to have a stretch, which is of maximum length VSmax in the worst case. Since the total time spent on all queues should not exceed the remaining time of a polling period, the inequality

    $$\sum_{i=1}^{N} \left( D_{S\max,i} + T_{walk} \right) \;\le\; (1 - \beta)\,T_S - VS_{\max} \qquad (2.11)$$

    holds, where DSmax,i denotes the maximum service time for the ith queue.

    Fig. 2.3: Timing in the small-N regime of operation: the worst-case scenario. (The figure shows the kth and (k+1)th TS intervals, the walk times and service times of queues 1, 2, ..., i, the vacation and its stretch, and the interpoll time for the ith queue.)

    Denote TImax,i as the worst-case interpoll time of the ith queue. If the voice source stays in the ON state during the whole interpoll time, the service time reaches its maximum value

    $$D_{S\max,i} = \frac{c_i\, T_{I\max,i}}{R_i}, \quad \text{for } i \ge 1, \qquad (2.12)$$

    where ci and Ri are the source rate and service rate regarding the ith queue, respectively.


    As shown in Fig. 2.3, we have

    $$T_{I\max,i} = T_{I\max,i-1} + D_{S\max,i-1}, \quad \text{for } i \ge 2. \qquad (2.13)$$

    It immediately follows that

    $$T_{I\max,i} = \left(1 + \frac{c_{i-1}}{R_{i-1}}\right) T_{I\max,i-1} = \left(T_S + VS_{\max}\right) \prod_{j=1}^{i-1} \left(1 + \frac{c_j}{R_j}\right), \qquad (2.14)$$

    for i ≥ 2, given TImax,1 = TS + VSmax. Using (2.12) and (2.14), (2.11) can be rewritten as

    $$\sum_{i=1}^{N}\left(D_{S\max,i} + T_{walk}\right) = \sum_{i=1}^{N} \frac{c_i\, T_{I\max,i}}{R_i} + N\,T_{walk} = T_{I\max,N+1} - T_{I\max,1} + N\,T_{walk} = \left(T_S + VS_{\max}\right)\left[\prod_{j=1}^{N}\left(1 + \frac{c_j}{R_j}\right) - 1\right] + N\,T_{walk} \;\le\; (1 - \beta)\,T_S - VS_{\max}. \qquad (2.15)$$

    Interestingly, we see that the value of $\sum_{i=1}^{N}(D_{S\max,i} + T_{walk})$ is not impacted by the polling order. If source rates and service rates are homogeneous, i.e., ci = c and Ri = R, (2.15) can be simplified as

    $$\left(T_S + VS_{\max}\right)\left[\left(1 + \frac{c}{R}\right)^{N} - 1\right] + N\,T_{walk} \;\le\; (1 - \beta)\,T_S - VS_{\max}. \qquad (2.16)$$

    The parameter Np equals the greatest N for which (2.15) or (2.16) holds. We can thus compute Np iteratively; the running time of each stage of the iteration is only O(1). For call admission control, the inequality (2.15) or (2.16) needs to be tested only once at a given N. The number of multiplications is O(N) in the heterogeneous scenario, or O(1) in the homogeneous scenario.

    Denote DWmax,i as the worst-case delay experienced by a packet from the ith queue.


We obtain $DW_{\max,i}$ as the sum of the worst-case queueing delay, which equals $TI_{\max,i}$, and the maximum service time, i.e.,

$$DW_{\max,i} = TI_{\max,i} + DS_{\max,i} = (T_S + VS_{\max}) \prod_{j=1}^{i} \left(1 + \frac{c_j}{R_j}\right), \quad \text{for } i \ge 1. \qquad (2.17)$$
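The per-queue delay bounds of (2.17) can be tabulated in the same way. The following sketch is illustrative only; the 38-queue example and the 20 kb/s source rate are placeholder values, not figures taken from this chapter.

```python
def worst_case_delays(c, R, T_S, VS_max):
    """Per-queue worst-case delay DW_max,i from (2.17), in the same time unit as T_S."""
    delays, prod = [], 1.0
    for ci, Ri in zip(c, R):
        prod *= 1.0 + ci / Ri                    # running product over queues 1..i
        delays.append((T_S + VS_max) * prod)     # DW_max,i
    return delays

# Example: 38 identical queues (19 bidirectional calls) at placeholder rates.
dw = worst_case_delays([20e3] * 38, [11e6] * 38, T_S=30e-3, VS_max=2.8e-3)
print(round(dw[0] * 1e3, 2), "ms ...", round(dw[-1] * 1e3, 2), "ms")
```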

2.5 Validation and Application of Analytical Model to a Wireless LAN

In this section, we describe how we validate our analytical model with simulations, and obtain numerical results for a polling system based on the IEEE 802.11 wireless LAN standard [5]. We assume the network architecture shown in Fig. 2.4, which is typical of current-day implementations. In Section 2.5.1, we briefly review the IEEE 802.11 Media Access Control protocol and list the parameters for our simulation study and analytical model validation. We then present numerical results for delay and voice capacity in Sections 2.5.2 and 2.5.3, respectively.

Fig. 2.4: Network architecture (wireless stations attached to Access Points, which connect over an Ethernet and voice gateways to the Internet and the PSTN).

2.5.1 Background of IEEE 802.11 and Values Selected for Parameters

The 802.11 MAC protocol supports two modes of operation: a random-access mode, called DCF, and a polling mode, called PCF. The time period during which the LAN operates in the DCF mode is known as the Contention Period (CP), while the time period in which the LAN operates in the PCF mode is known as the Contention-Free Period (CFP).


Details of the polling mode of operation, such as the number of stations to be admitted or the polling order, are not specified in the standard, but are left up to implementations. A superframe of length $T_S$ consists of a CFP and a CP. If a station begins to transmit a frame just before the end of a CP, it may result in a stretched CP. Compared to our model described in Fig. 2.1, the CFP and CP correspond to the polling period and vacation period, respectively.

The values selected for the parameters of this LAN are shown in Table 2.1. We choose $T_S$ in the (20ms, 50ms) range to balance delay and efficiency. Two values are used for $\alpha$ to show the impact of the size of vacation periods. For the vacation stretch, we assume the following simple distribution

$$S(x) = P\{\text{stretch} \le x\}, \qquad S(x) = 0 \ \text{for } x < 0, \qquad S(x) = 1 \ \text{for } x \ge VS_{\max}, \qquad (2.18)$$

where the parameters $p_1$ and $p_2$ listed in Table 2.1 govern the distribution over $[0, VS_{\max})$. Other than this simple model, measurement-based distributions can be developed to characterize vacation stretches in practice.

We assume that voice data for a wireless station is sent in a Poll+Data frame to that station. The parameters of the telephony traffic model (see the Markov chain included in Fig. 2.1) are set to the values in the May and Zebo model [31]. Two values of the codec rate $C$, the Truespeech codec rate of 8.5Kbps [66] and the Pulse Code Modulation (PCM) codec rate of 64Kbps [67], are used for sensitivity analysis. The source rate $c$ equals the sum of the voice codec rate $C$ and the overheads of all layers above the 802.11 MAC. Specifically, the overheads come from the Real-time Transport Protocol (RTP) [68], the User Datagram Protocol (UDP), the Internet Protocol (IPv4), and the Logical Link Control (LLC) protocol. The data rate of the overheads is calculated as $H/T_S$, where the value of $H$ is 43 bytes if we assume one set of RTP/UDP/IP/LLC headers in each $T_S$ interval.


This value is large since it is comparable to the typical payload size of a voice packet. To improve efficiency, the RObust Header Compression (ROHC) [69] protocol can be optionally applied to reduce $H$ down to 4 bytes. We assume a homogeneous source rate $c$ and link rate $R$ for simplicity, although (2.15) allows for heterogeneous rates. We assume a link rate of 11Mbps and error-free transmission.

The value of $T_{walk}$ equals the time spent on an empty 802.11 frame. If we assume the IEEE 802.11b physical layer with the standard long preamble, the value of $T_{walk}$ is approximately 0.23ms. If the optional short preamble is adopted at the physical layer, the value of $T_{walk}$ can be reduced down to 0.13ms. Table 2.1 also lists the transmission times of Beacon and Contention-Free-End (CF-End) frames, which are overheads added to each polling period according to the IEEE 802.11 MAC protocol.

Table 2.1: Values for parameters.

Parameter | Symbol | Value
Superframe length | $T_S$ | 20ms to 50ms
Minimum fraction allocated to vacation period | $\alpha$ | 0.5 or 0.7
Maximum vacation stretch | $VS_{\max}$ | 2.8ms
Parameters for CP stretch | $p_1$, $p_2$ | 0.6, 0.4
Average duration of the ON state | $1/a$ | 352ms
Average duration of the OFF state | $1/b$ | 650ms
Voice codec rate | $C$ | 8.5Kbps or 64Kbps
Overhead of RTP/UDP/IP/LLC | $H$ | 43 bytes or 4 bytes
Source rate | $c$ | $C + H/T_S$
Service rate | $R$ | 11Mbps
Walk time | $T_{walk}$ | 0.23ms or 0.13ms
Time spent on a Beacon frame | $T_{Beacon}$ | 0.512ms
Time spent on a CF-End frame | $T_{CFend}$ | 0.352ms

In this application of our model, the total number of queues is twice the number of calls since telephony is bidirectional. We use $N_p'$, which equals $N_p/2$, to denote the voice capacity in units of calls.


We implemented a custom event-driven simulation of our polling system with vacations. Each simulation run is long enough for the system to reach steady state.
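To connect the admission test with the 802.11 parameters introduced above, the following sketch evaluates the homogeneous test (2.16) with Table 2.1 values. How the Beacon and CF-End transmission times are charged against the polling period is an assumption made for this sketch rather than a rule stated in the text, so its output should be read as an approximation of the capacities reported in the following subsections.

```python
def calls_supported(T_S, alpha, C, H_bytes, R=11e6, VS_max=2.8e-3,
                    T_walk=0.23e-3, T_beacon=0.512e-3, T_cfend=0.352e-3):
    """Approximate N_p' (in calls) from (2.16) with Table 2.1 parameters.

    Assumption: the Beacon and CF-End times are subtracted from the polling
    period together with the maximum vacation stretch.
    """
    c = C + H_bytes * 8.0 / T_S                    # source rate, bits/s
    budget = (1.0 - alpha) * T_S - VS_max - T_beacon - T_cfend
    N = 0
    while (T_S + VS_max) * ((1.0 + c / R) ** (N + 1) - 1.0) + (N + 1) * T_walk <= budget:
        N += 1
    return N // 2                                  # two queues per bidirectional call

print(calls_supported(T_S=30e-3, alpha=0.5, C=8.5e3, H_bytes=43))
```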

2.5.2 Numerical Results for Delay

The delay result for the single-call scenario is presented in subsection A. The simulation results for the delays in the multiple-call scenario are shown in subsections B and C.

A. Single-call scenario

Fig. 2.5 shows the CDF of the delay $DW$ obtained for four values of $T_S$. We observe that $DW$ is more likely to have a value around $T_S$ than around other values. For example, when $T_S$ equals 50ms, the probability of $DW$ being less than 47ms is only 0.12. We explain this as follows. First, since all four $T_S$ values are much smaller than the mean ON-state holding time (352ms), it is quite likely that the voice station stays in the ON state for the entire duration of $T_S$. Second, data created after a poll has to wait for the next poll since the queue is polled only once every $T_S$ (see Section 2.2). Therefore, $DW$ is highly correlated to the interpoll time.

Fig. 2.5: CDF of $DW$ with $T_S$ as a parameter in the single-call scenario (simulation and analysis curves for $T_S$ = 20ms, 30ms, 40ms, and 50ms). $T_{walk}$ and $C$ are set to 0.23ms and 8.5Kbps, respectively.


Another observation is that the queueing delay dominates $DW$ in most cases because of the relatively low source rate and high service rate. With an interpoll time of 52.8ms (50ms for $T_S$ and 2.8ms for $VS_{\max}$), at a codec rate of 8.5Kbps, the length of the maximum-sized voice payload is only 56 bytes. The transmission time of an 802.11 frame with a 56-byte payload is about 0.3ms if we assume 43 bytes for $H$ and 0.23ms for $T_{walk}$.

B. Multiple-call scenario: the small-N regime of operation

Fig. 2.6 shows the empirical Complementary CDF (CCDF) of $DW$ obtained for three $T_S$ values: 20ms, 35ms, and 49ms. The corresponding voice capacities in units of calls are 7, 13, and 17, respectively, according to (2.16). Fig. 2.6 illustrates that calls further down in the polling list are not impacted much by the variability in the service times of calls at the front of the polling list. For a total of 17 calls, the plots corresponding to the first call and the 17th call almost coincide with each other in Fig. 2.6, although their maximum delays in the worst-case scenario can be quite different (see (2.17)). Here is a qualitative explanation for this behavior.

Fig. 2.6: CCDF of $DW$ in the small-N regime of operation ($T_S$ = 20ms, 35ms, and 49ms; the first and the last call on each polling list). $\alpha$, codec rate, and $T_{walk}$ are set to 0.5, 64Kbps, and 0.23ms, respectively.


We have observed from subsection A that delay is highly correlated to the interpoll time. The difference between the interpoll time of the $j$th queue and that of the first queue can be expressed as

$$TI_{j}^{(k+1)} - TI_{1}^{(k+1)} = \sum_{i=1}^{j-1} \left( DS_{i}^{(k+1)} - DS_{i}^{(k)} \right), \qquad (2.19)$$

where $DS_{i}^{(k)}$ denotes the service time of the $i$th queue in the $k$th interval. First, it is likely that most voice sources do not change state during two consecutive interpoll times, since the average ON and OFF times of voice sources are much larger than the values selected for $T_S$. This means that $DS_{i}^{(k+1)}$ and $DS_{i}^{(k)}$ roughly cancel out for these queues. Second, for the voice sources that do change state, the result of $DS_{i}^{(k+1)} - DS_{i}^{(k)}$ could be either positive or negative. Given that voice sources are assumed to be independent of each other, the random variable $TI_{j}^{(k+1)} - TI_{1}^{(k+1)}$ is expected to be Normal-like with zero mean when $j$ is large. Finally, the maximum value of $TI_{j}^{(k+1)} - TI_{1}^{(k+1)}$, which is $(1-\alpha)T_S - N_p T_{walk}$ (see Fig. 2.3), could be just a small fraction of $T_S$ because of the large $T_{walk}$ values. The significance of this observation is two-fold: first, the delay distribution derived for the single-queue scenario in (2.10) can be used as a fair approximation for the delay distribution in the small-N regime of operation; second, the value of pursuing an accurate analytical answer for the delay distribution in the small-N regime of operation is low, at least for the parameter values listed in Table 2.1.
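The near-cancellation argument behind (2.19) can be probed with a small Monte-Carlo experiment. The sketch below uses a deliberately crude per-interval ON/OFF model (the state is re-sampled once per superframe, and a source in the ON state is assumed busy for the whole interval), so it only illustrates the zero-mean tendency of the sum in (2.19); it is not the event-driven simulator used for the figures.

```python
import random, math

def interpoll_difference(j, T_S=30e-3, mean_on=0.352, mean_off=0.650,
                         c=20e3, R=11e6):
    """One sample of TI_j - TI_1 per (2.19), under a crude per-interval ON/OFF model."""
    p_on = mean_on / (mean_on + mean_off)
    diff = 0.0
    for _ in range(j - 1):
        on_k = random.random() < p_on                    # state in the kth interval
        stay = math.exp(-T_S / (mean_on if on_k else mean_off))
        on_k1 = on_k if random.random() < stay else not on_k
        ds = lambda on: (c / R) * T_S if on else 0.0     # approximate service time
        diff += ds(on_k1) - ds(on_k)
    return diff

samples = [interpoll_difference(j=38) for _ in range(10000)]
print(sum(samples) / len(samples), max(abs(s) for s in samples))
```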

C. Multiple-call scenario: the large-N regime of operation

We simulate the large-N regime of operation by choosing $N'$, the number of voice calls admitted to the polling list, to be larger than $N_p'$, which is 19 for a $T_S$ of 30ms and an $\alpha$ of 0.5, according to (2.16).


For $N' > 19$, we see staircase-like regions in the CCDF plots shown in Fig. 2.7, where two drops occur around 30ms and 50ms, respectively. These staircases suggest that a packet may experience a delay spike around 50ms, which is significantly larger than the normal delay values around or below 30ms. This behavior is caused by the strong correlation between queueing delay and interpoll time, which was discussed in subsection A. Interpoll times normally appear in the neighborhood of $T_S$. However, a larger interpoll time may occur if a queue is missed in a polling period. As $N'$ increases, such large interpoll times appear more often, leading to an increase in the frequency of occurrence of the delay spike.

Fig. 2.7: CCDF of delay with $N'$ as a parameter (19 to 24 calls) in the large-N regime of operation. $T_S$, $\alpha$, and codec rate are equal to 30ms, 0.5, and 8.5Kbps, respectively.

2.5.3 Numerical Results for Capacity

As shown in Fig. 2.7, the cost of operating in the large-N regime of operation is the occurrence of delay spikes. Although delay spikes have a negative impact on perceived voice quality, telephony applications can typically tolerate such delay degradation if delay spikes do not appear very often [33]. We set a delay threshold $DW_{\max,\,2N_p'}$, the maximum delay in the small-N regime (see (2.17)), and declare a packet that has a delay greater than $DW_{\max,\,2N_p'}$ as a loss.


The loss ratio, $P_{loss}$, is defined as $P\{DW > DW_{\max,\,2N_p'}\}$. Voice capacity in the large-N regime of operation, $N'_{\max}$, is defined as the maximum number of calls that allows $P_{loss}$ to be kept within a tolerable value such as 1% or 3%. $N'_{\max}$ is equal to $N_p'$ when the tolerable value is 0.
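In a simulation-based search, $N'_{\max}$ can be found by admitting calls one at a time until the empirical loss ratio exceeds the target. The sketch below shows only the bookkeeping of such a search; `simulate_delays` is a hypothetical placeholder for a delay-sample generator (for example, the custom event-driven simulator mentioned earlier), not code from this dissertation.

```python
def loss_ratio(delays, threshold):
    """Empirical P_loss: fraction of delay samples exceeding the threshold."""
    return sum(d > threshold for d in delays) / len(delays)

def find_capacity(simulate_delays, n_start, threshold, target=0.01):
    """Largest call count whose simulated loss ratio stays within the target."""
    n = n_start
    while loss_ratio(simulate_delays(n + 1), threshold) <= target:
        n += 1
    return n

# Usage (hypothetical):
# N_max = find_capacity(simulate_delays, n_start=19, threshold=0.0351, target=0.01)
```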

We use simulations to obtain $N'_{\max}$ under various $T_S$, $P_{loss}$, $T_{walk}$, and stretch distribution assumptions. In Fig. 2.8, $N'_{\max}$ values are compared against $N_p'$ values computed from (2.16). We observe that $N'_{\max}$ increases with $T_S$ for all $P_{loss}$ values because more data can accumulate in a queue at a larger $T_S$.

Fig. 2.8: $N'_{\max}$ versus $T_S$, with $P_{loss}$ and $\alpha$ as parameters ($P_{loss}$ = 0, 0.01, and 0.03; $\alpha$ = 0.5 and 0.7). Codec rate, $T_{walk}$, and stretch distribution are respectively set to (a) 8.5Kbps, 0.23ms, and $S(x)$; (b) 8.5Kbps, 0.23ms, and $VS_{\max}$. The plot for $P_{loss}$ = 0 is obtained from one million samples.


Let us take the computation of $N_p'$ as an example. Since $c/R$ is much smaller than 1 for the parameter values listed in Table 2.1, the term $(1 + c/R)^N - 1$ in (2.16) can be approximated by $N(c/R)$. Thus (2.16) can be simplified as

$$N \le \frac{(1-\alpha)T_S - VS_{\max}}{(T_S + VS_{\max})\,c/R + T_{walk}}. \qquad (2.20)$$

The first term of the denominator, i.e., $(T_S + VS_{\max})\,c/R$, is much smaller than $T_{walk}$. Therefore we see that $N_p'$ and $T_S$ appear to have a linear-like relationship in Fig. 2.8(a). This suggests a trade-off between voice capacity and delay.
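The linear-like behavior is easy to see by evaluating the closed-form estimate (2.20) next to the exact test (2.16) over the $T_S$ range of Table 2.1. The sketch below does this for the 8.5Kbps codec; it ignores the Beacon and CF-End overheads, exactly as (2.20) is written, so its numbers are indicative only.

```python
def n_closed_form(T_S, alpha, c, R, VS_max=2.8e-3, T_walk=0.23e-3):
    """Approximate queue capacity from (2.20)."""
    return int(((1 - alpha) * T_S - VS_max) /
               ((T_S + VS_max) * c / R + T_walk))

def n_exact(T_S, alpha, c, R, VS_max=2.8e-3, T_walk=0.23e-3):
    """Queue capacity from the exact homogeneous test (2.16)."""
    N = 0
    while ((T_S + VS_max) * ((1 + c / R) ** (N + 1) - 1) + (N + 1) * T_walk
           <= (1 - alpha) * T_S - VS_max):
        N += 1
    return N

for T_S in (20e-3, 30e-3, 40e-3, 50e-3):
    c = 8.5e3 + 43 * 8 / T_S                  # source rate per Table 2.1
    print(T_S, n_closed_form(T_S, 0.5, c, 11e6), n_exact(T_S, 0.5, c, 11e6))
```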

There are two sources leading to the capacity gain shown in Fig. 2.8(a) when $P_{loss}$ is relaxed from 0 to 0.01, and further to 0.03. First, vacation stretches can be exploited to carry more voice traffic if loss is allowed. Second, statistical multiplexing gain can be obtained by exploiting the OFF state of voice sources. To evaluate the gain from the first source, we need statistical information about vacation stretches, which, however, varies with the load pattern in vacation periods. Thus we focus on the second source. To isolate the statistical multiplexing gain, we replot $N'_{\max}$ against $T_S$ in Fig. 2.8(b) for the same set of parameter values used in Fig. 2.8(a), except that vacation stretches are fixed at $VS_{\max}$. We observe that the multiplexing gain is small because the polling overhead $T_{walk}$ dominates over the service time of the voice payload. The gain is only 3 calls (from 19 to 22) when $T_S$ and $\alpha$ are set to 30ms and 0.5, respectively.

Fig. 2.9 shows the results for a codec rate of 64Kbps. Although the voice capacities are smaller compared to the counterpart shown in Fig. 2.8(b), the multiplexing gain is more noticeable in Fig. 2.9(a). For example, when $P_{loss}$ is relaxed from 0 to 0.03, $N'_{\max}$ increases from 11 to 18, which is roughly a 64% gain, as opposed to a gain of 3 calls, or 16%, in Fig. 2.8(b).


Fig. 2.9(b) shows that the multiplexing gain is significant when $T_{walk}$ is set to 0.13ms. We see that $N'_{\max}$ increases from 17 to 30, which is a 76% gain, at a $T_S$ of 30ms and an $\alpha$ of 0.5, when $P_{loss}$ is relaxed from 0 to 0.03.

Fig. 2.9: $N'_{\max}$ versus $T_S$, with $P_{loss}$ and $\alpha$ as parameters ($P_{loss}$ = 0, 0.01, and 0.03; $\alpha$ = 0.5 and 0.7). Codec rate, $T_{walk}$, and stretch distribution are respectively set to (a) 64Kbps, 0.23ms, and $VS_{\max}$; (b) 64Kbps, 0.13ms, and $VS_{\max}$. ROHC is applied in (b). The plot for $P_{loss}$ = 0 is obtained from one million samples.

2.5.4 Summary of the Numerical Results

Table 2.2 lists a summary of the numerical results obtained at a $T_S$ of 30ms.


Table 2.2: Summary of numerical results at $T_S$ = 30ms.

$T_{walk}$ | Codec rate | ROHC | Vacation stretch | $\alpha$ | $N_p'$ ($P_{loss}$ = 0) | $N'_{\max}$ ($P_{loss} \le 0.01$) | $N'_{\max}$ ($P_{loss} \le 0.03$) | $DW_{\max,\,2N_p'}$
0.23ms | 8.5Kbps | no | random* | 0.5 | 19 | 24 | 26 | 35.1ms
0.23ms | 8.5Kbps | no | $VS_{\max}$ | 0.5 | 19 | 22 | 22 | 35.1ms
0.23ms | 64Kbps | no | random | 0.5 | 11 | 21 | 22 | 38.1ms
0.23ms | 64Kbps | no | $VS_{\max}$ | 0.5 | 11 | 17 | 18 | 38.1ms
0.13ms | 64Kbps | applied | random | 0.5 | 17 | 32 | 34 | 39.6ms
0.13ms | 64Kbps | applied | $VS_{\max}$ | 0.5 | 17 | 28 | 30 | 39.6ms

* Vacation stretch length is a random variable with the distribution function defined in (2.18).

We see that by accepting a certain probability of loss, we can run the system in the large-N regime of operation, accommodating more calls while still allowing a large fraction of the link bandwidth to be used for other sharing modes. We observe that by reducing the polling overhead and header overheads, we can use higher-quality voice (64Kbps instead of 8.5Kbps) and yet accommodate a greater number of voice calls (28 vs. 22, or 30 vs. 22) if we are willing to accept some loss (1% or 3%). The maximum delay $DW_{\max,\,2N_p'}$ is shown to be in the 30-40ms range for all cases. Given the end-to-end delay requirement of 150ms, a delay in this range is considered acceptable since the remaining links are likely to be higher-speed wired links.

Voice capacity can be increased in a number of ways: 1) accept some packet loss, 2) increase $T_S$ at the cost of larger delay, 3) decrease $\alpha$ at the cost of a smaller vacation period, 4) decrease the codec rate at the cost of lower voice quality, 5) adopt ROHC at the cost of signaling overhead, and 6) decrease $T_{walk}$ at the cost of robustness. We conclude that 1) and 6) are the most effective methods; 2), 3), 4) and 5) could also be considered if necessary.

We define the transmission efficiency $E$ as the ratio of the average service time of the voice payload to the average total time spent on a frame. If we ignore large interpoll times, which occur infrequently, the average interpoll time of a queue is roughly $T_S$.


Then the average payload size of a frame is $T_S\, C\, b/(a+b)$, where $C$ and $b/(a+b)$ are the codec rate and the probability of a source being in the ON state, respectively. We compute $E$ as follows:

$$E = \frac{\left( T_S\, C\, \dfrac{b}{a+b} \right) / R}{\left( T_S\, C + H \right) \dfrac{b}{a+b} / R + T_{walk}}, \qquad (2.21)$$

where $H$ is the overhead of all protocol layers above the IEEE 802.11 MAC. Fig. 2.10 shows that the transmission efficiency is low (less than 0.05) when $C$ = 8.5Kbps and $T_{walk}$ = 0.23ms. If a larger codec rate and a smaller $T_{walk}$ are assumed, the transmission efficiency can be improved significantly (up to 0.44) for the range of $T_S$ under consideration. The numerical values shown in Fig. 2.10 are mainly determined by the relatively high overhead of IEEE 802.11b when supporting low-data-rate applications such as telephony.

Fig. 2.10: Transmission efficiency as a function of $T_S$. Codec rate, $H$, and $T_{walk}$ serve as parameters; $R$ is fixed at 11Mbps.
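Evaluating (2.21) numerically is straightforward. The sketch below sweeps $T_S$ for two of the parameter combinations plotted in Fig. 2.10; the ON-state probability is derived from the 352ms/650ms holding times of Table 2.1, and the specific parameter pairs shown are illustrative choices.

```python
def efficiency(T_S, C, H_bytes, T_walk, R=11e6, mean_on=0.352, mean_off=0.650):
    """Transmission efficiency E per (2.21)."""
    p_on = mean_on / (mean_on + mean_off)          # b/(a+b)
    payload_time = T_S * C / R * p_on              # average voice-payload service time
    frame_time = (T_S * C + H_bytes * 8) / R * p_on + T_walk
    return payload_time / frame_time

for T_S in (20e-3, 30e-3, 40e-3, 50e-3):
    print(round(efficiency(T_S, C=64e3, H_bytes=4, T_walk=0.13e-3), 3),
          round(efficiency(T_S, C=8.5e3, H_bytes=43, T_walk=0.23e-3), 3))
```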

2.6 Chapter Summary

In this chapter, we modeled a polling system with vacations. We started by deriving an analytical solution for the delay distribution in a single-queue scenario, which was later


found to be a fair approximation for the delay distribution in a multiple-queue scenario, i.e.,

    the small-N regime of operation. For voice capacity, we established a procedure to calcu-

    late the number of telephone calls that can be supported with a guarantee of being polled

    every polling period. To admit more calls and yet keep the delay guarantee, we allow for

    some packet loss. We demonstrate that for the IEEE 802.11b wireless LAN and a codec

    rate of 8.5Kbps, we are not able to obtain much statistical multiplexing gain by exploiting

    the ON-OFF characteristics of telephony traffic because of large overheads. However, by

    decreasing these overheads by half (which is feasible with an optional short preamble and

    a header compression technique), we demonstrate that the system can indeed exploit the

    silences in telephony traffic and accommodate a greater number of voice calls even with a

    higher-rate codec (64Kbps). The numbers identified appear to be acceptable because the

    range of an 802.11b access point is small (on the order of 100m).


    Chapter 3

    Effects of Packetization of Voice Data

We assumed the ON-OFF Markov Modulated Fluid (MMF) model for voice sources in one of the analyses in Chapter 2. Although this model is considered suitable for characterizing

    digitized human speech, it does not take into account the packetization of voice traffic that

    exists in many voice communication systems [70][71]. In these systems, the output of a

    voice encoder is a stream of data blocks filled with compressed voice data. Then several

    blocks (or one block) are encapsulated into a packet for transmission over a packet-

    switched network [72]. Packetization delay typically ranges from 10ms to 30ms. There-

    fore, the incoming voice data seen by the MAC layer of an end point is a stream of voice

    packets instead of the constant-rate bit stream assumed in the ON-OFF MMF model. In

    order to better predict the performance of such voice communication systems, we study

    this packetization effect in this chapter.

    3.1 An ON-OFF MMF Model with Packetization

    Fig. 3.1 illustrates how voice packets are created when assuming an ON-OFF MMF

    model with packetization. This model assumes that the ON and OFF states still have expo-

    nentially distributed holding times. Time is divided into repetitive periods of fixed length,

    which is shown to be 10ms in Fig. 3.1. A voice packet is generated at the end of a period


only if the source is detected active during some portion of this period. As an example, see Fig. 3.1, which shows five voice packets created at $t$ = 20, 30, 40, 50, and 60ms, respectively. No voice packet is generated at $t$ = 10ms or $t$ = 70ms because the voice source was silent in the entire previous period.

Fig. 3.1: Illustration of the packetization of voice traffic (voice activity OFF/ON/OFF over $t$ = 0 to 70ms, packet-creation instants every 10ms during activity, polling instants at $t$ = 35ms and 65ms, and the resulting first and second queueing delays of 20ms and 35ms).
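The packet-creation rule can be stated compactly in code. The generator below is an illustration of the rule, not the simulator used in this dissertation: given the ON intervals of a source, it emits a packet at the end of every packetization period that overlaps an ON interval. The ON interval used in the example is made up, chosen only to mimic the shape of Fig. 3.1.

```python
def packet_instants(on_intervals, horizon_ms, L_ms=10):
    """Packet-creation instants (ms) for an ON-OFF source with packetization period L_ms.

    on_intervals : list of (start_ms, end_ms) pairs during which the source is ON.
    A packet is created at the end of a period iff the source was ON at some
    point during that period.
    """
    instants = []
    for t in range(0, horizon_ms, L_ms):
        if any(start < t + L_ms and end > t for start, end in on_intervals):
            instants.append(t + L_ms)
    return instants

# Made-up ON interval (15ms, 58ms): yields packets at t = 20, 30, 40, 50, and 60ms
# and none at 10ms or 70ms, matching the pattern described for Fig. 3.1.
print(packet_instants([(15, 58)], horizon_ms=70))
```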

This packetization of voice traffic is important to us since it not only introduces a packetization delay, but also impacts the number of voice packets waiting in a queue. The latter affects both the voice capacity and the queueing delay. For example, in Fig. 3.1, a server polls a queue at times $t$ = 35ms and $t$ = 65ms, consecutively. The first queueing delay of 20ms shown in Fig. 3.1 is caused by the poll arriving 20ms after the transition from the OFF to the ON state. The second queueing delay is 35ms because the last three packets, which contain all voice data generated in the time period between $t$ = 30ms and the end of the ON period, are served immediately after the second polling instant. This delay would have been 30ms instead of 35ms had there been no packetization. We refer to the first and the second queueing delays as a queueing delay of the first type and a queueing delay of the second type, respectively.

Other delay components introduced by voice compression include look-ahead delay and processing delay. These delays are not considered in this model because they are of fixed length and hence can easily be dealt with by adding a fixed component to the end-to-end delay.


3.2 Voice Capacity and Delay Bound in Small-N Regime of Operation

We assume the same polling system model as the one described in Section 2.2. Following the approach used in Section 2.4, we divide the multiple-queue scenario into a small-N regime of operation and a large-N regime of operation. In order to compute $N_l$, the largest number of queues that can be admitted in the small-N regime of operation, we construct a worst-case scenario similar to the one described in Section 2.4, but for the ON-OFF MMF model with packetization. Let $L$ and $S$ represent the packetization delay and the size of a voice packet, respectively. Assume all queues have the same service rate $R$. For the first queue, the maximum length of an interpoll time is $(T_S + VS_{\max})$. Thus the greatest possible number of voice packets accumulated in the queue at a polling instant is the smallest integer that is greater than $(T_S + VS_{\max})/L$. This could happen if a voice packet is created immediately after the beginning instant of an interpoll period of $(T_S + VS_{\max})$ in length. For example, for an interpoll time of 25ms and an $L$ of 10ms, there can be up to three voice packets waiting in the queue at the end of this interpoll period. Let $DS_{\max,i}$ denote the service time for the $i$th queue in the worst-case scenario. $DS_{\max,1}$ can be computed by

$$DS_{\max,1} = \left\lceil \frac{T_S + VS_{\max}}{L} \right\rceil \frac{S}{R}. \qquad (3.1)$$

Similarly, a largest interpoll period for the $i$th queue occurs if all polls for the previous $(i-1)$ queues in the previous polling interval result in zero payload but all polls in the following polling period find a maximum number of voice packets. Thus $DS_{\max,i}$ is given by

$$DS_{\max,i} = \left\lceil \frac{T_S + VS_{\max} + \sum_{k=1}^{i-1} DS_{\max,k}}{L} \right\rceil \frac{S}{R}, \quad \text{for } i = 2, 3, 4, \ldots \qquad (3.2)$$


Since the total time spent on serving all queues should not exceed the length of a polling period, $N_l$ equals the greatest integer $N$ for which the following inequality holds:

$$\sum_{i=1}^{N} \left( DS_{\max,i} + T_{walk} \right) \le (1-\alpha) T_S - VS_{\max}. \qquad (3.3)$$

The number of multiplications or additions involved in computing $N_l$ is $O(N)$.

We can compute the worst-case queueing delay, or the delay bound, for the first queue as

$$D_{bound,1} = T_S + VS_{\max} + L. \qquad (3.4)$$

Similarly, the worst-case queueing delay for the $i$th queue is

$$D_{bound,i} = T_S + VS_{\max} + \sum_{k=1}^{i-1} DS_{\max,k} + L, \quad \text{for } i = 2, 3, \ldots \qquad (3.5)$$

Equations (3.4) and (3.5) apply to both frame queueing delay and packet queueing delay. Moreover, it is worth noting that (3.1)-(3.5) can be easily adapted to a polling system with heterogeneous $L$, $S$, $c$, and $R$.
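The recursion (3.1)-(3.2), the admission test (3.3), and the delay bounds (3.4)-(3.5) translate directly into a short routine. The sketch below assumes homogeneous queues; the 10ms packetization delay and 100-byte packet size in the example are placeholder values, not parameters prescribed in this chapter.

```python
import math

def packetized_capacity(T_S, alpha, VS_max, T_walk, L, S_bytes, R):
    """Return (N_l, DS_max list, D_bound list) per (3.1)-(3.5), homogeneous queues."""
    budget = (1.0 - alpha) * T_S - VS_max          # right-hand side of (3.3)
    ds_max, d_bound, used = [], [], 0.0
    while True:
        backlog = T_S + VS_max + sum(ds_max)                   # worst-case interpoll time
        ds = math.ceil(backlog / L) * S_bytes * 8.0 / R        # (3.1)/(3.2)
        if used + ds + T_walk > budget:                        # (3.3) would be violated
            break
        d_bound.append(T_S + VS_max + sum(ds_max) + L)         # (3.4)/(3.5)
        ds_max.append(ds)
        used += ds + T_walk
    return len(ds_max), ds_max, d_bound

N_l, _, bounds = packetized_capacity(T_S=30e-3, alpha=0.5, VS_max=2.8e-3,
                                     T_walk=0.23e-3, L=10e-3, S_bytes=100, R=11e6)
print(N_l, [round(b * 1e3, 1) for b in bounds[:3]])
```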

3.3 Resource Allocation in Large-N Regime of Operation

So far we have investigated a voice capacity problem, i.e., finding the largest number of admissible calls if a fraction $(1-\alpha)$ of each superframe is allocated to carry voice traffic. We can alternatively formulate a resource allocation problem, where the resource being shared is the time in each superframe. Given $N$ queues, we need to compute the minimum time needed to transmit the voice packets accumulated in all queues. For the small-N regime of operation, this minimum time is the sum of the worst-case service times plus walk times, i.e.,

$$\sum_{i=1}^{N} \left( DS_{\max,i} + T_{walk} \right). \qquad (3.6)$$
