
    Experiences using Gateway-Enforced Rate-Limiting

    Techniques in Wireless Mesh Networks

    Kamran Jamshaid and Paul A.S. Ward

    Shoshin Distributed Systems Group

    Department of Electrical and Computer Engineering

    University of Waterloo, Waterloo, Ontario N2L 3G1

    Email: {kjamshai, pasward}@shoshin.uwaterloo.ca

Abstract: Gateway nodes in a wireless mesh network (WMN) bridge traffic between the mesh nodes and the public Internet. This makes them a suitable aggregation point for policy enforcement or other traffic-shaping responsibilities that may be required to support a scalable, functional mesh network. In this paper we evaluate two gateway-enforced rate-limiting mechanisms so as to avoid congestion and support network-level fairness: Active Queue Management (AQM) techniques that have previously been widely studied in the context of wired networks, and our Gateway Rate Control (GRC) mechanism. We evaluate the performance of these two techniques through simulations of an 802.11-based multihop mesh network. Our experiments show that the conventional use of AQM techniques fails to provide effective congestion control, as these mesh networks exhibit different congestion characteristics than wired networks. Specifically, in a wired network, packet losses under congestion occur at the router queue feeding the bottleneck link. By contrast, in a WMN, many such geographically dispersed points of contention may exist due to asymmetric views of the channel state between different mesh routers. As such, gateway rate-limiting techniques like AQM are ineffective as the gateway queue is not the only bottleneck. Our GRC protocol takes a different approach by rate limiting each active flow to its fair share, thus preserving enough capacity to allow the disadvantaged flows to obtain their fair share of the network throughput. The GRC technique can be further extended to provide Quality of Service (QoS) guarantees or enforce different notions of fairness.

    I. INTRODUCTION

    In recent years Wireless Mesh Networks (WMNs) have

    emerged as a successful architecture for providing cost-

    effective, rapidly deployable network access in a variety

    of different settings. We are investigating their use as an

    infrastructure-based or community-based network. Typically,

    these networks provide last-mile Internet access through mesh

    routers affixed to residential rooftops, forming a multihop

wireless network. Clients typically connect to their preferred mesh router, either via wire or over a (possibly orthogonal)

    wireless channel. In this regard, a wireless-connected client

    views a WMN as just another WLAN. Any mesh router that

    also has Internet connectivity is referred to as a gateway.

Gateway nodes provide wide-area access, which is then shared

    between all the nodes in the network.

    We first highlight some key characteristics of these

    infrastructure-based mesh networks that distinguish them from

    other ad hoc wireless networks. As the mesh routers are

    typically fixed to building structures, there are no topological

    variations due to mesh-router mobility, though infrequent

    topology changes might still occur because of addition, re-

    moval, or failure of mesh routers. Thus client mobility is not

    relevant for this work, as such clients access the network per

    the standard WLAN mode of operation. This static topology

also precludes other mobility requirements like battery-power conservation; these mesh routers are typically powered through

    the electricity grid. Finally, the traffic pattern in these networks

    is highly skewed, with most of the traffic being either directed

    to or originating from the wired Internet through one of the

    gateway nodes. For the purpose of this paper, we restrict our

    analysis to single gateway systems in which client access is

    non-interfering with mesh-router operation. This is consistent

    with systems developed by wireless equipment vendors such

    as Nortel [13], and is a generalization of the TAP model [4]

    from chains to arbitrary graphs.

    Most WMNs use commodity 802.11 hardware because of its

    cost advantage. However, the CSMA/CA MAC demonstrates

its limitations in a multihop network. Specifically, because of the hidden and exposed terminal problems [17], nodes may

    experience varying spatial (location-dependent) contention for

    the wireless channel. This produces an inconsistent view of the

    channel state among the nodes, resulting in throughput unfair-

    ness between different flows. This unfairness trend deteriorates

    with increasing traffic loads and multiple back-logged flows,

    eventually resulting in flow starvation for disadvantaged flows.

    In particular, nodes multiple hops away from the gateway

    starve while all the available network capacity is consumed

    by nodes closer to the gateway [8].

    We are currently exploring the use of traffic-aggregation

    points like gateway nodes for policy-enforcement and other

traffic-shaping responsibilities that may support scalable, functional mesh networks. Given WMN traffic patterns, the gateway is a natural choice for enforcing fairness and bandwidth-

    allocation policies. It has a unified view of the entire network,

    and thus is better positioned to manage a fair allocation of

    network resources. In this paper, we compare the use of

    two gateway-enforced flow-control schemes: Active Queue

    Management (AQM) and our recently proposed Gateway Rate

    Control (GRC) mechanism [7].

One way to perform allocation of resources is to use queue-management techniques.


Fig. 1. The gateway mesh node acts as a bridge between the wired high-speed public Internet and the shared-access multihop mesh network.

AQM takes a proactive approach towards congestion avoidance by actively controlling flow

    rates for various connections. It exploits the fact that con-

    gestion, in the form of queue buildups, typically occurs at

    network boundaries where flows from high throughput links

    are aggregated across slower links. In WMNs, if we consider

    the broadcast wireless medium to be a system bottleneck, then

    a similar scenario emerges in which a high-speed wired link

    is feeding this shared bottleneck through the gateway router

    (Fig. 1), thus creating opportunities for potentially reusing

    the rich AQM research literature in this new networking

    domain. This paper describes our experiences testing AQM

    techniques in WMNs. We discovered that these techniques fail

    as the gateway queue does not exhibit the same congestion

    characteristics that are observed in wired routers that interface

    across a high-bandwidth and a low-bandwidth link.

    We have recently proposed GRC [7], a gateway-enforced

    rate-control mechanism that provides network-level fairness in

    a WMN. Unlike AQM techniques, we do not monitor queue

    sizes but use a simple computational model that calculates the

    fair-share rate per active stream. By limiting the throughput

    of aggressive TCP sources to their fair share, we operate the

    network at traffic loads that preserve enough network capacity

    to allow disadvantaged distant nodes to obtain their fair-share

    throughput. In this paper, we compare the performance of GRC

against AQM, and explain why AQM techniques fail to yield the expected results.

    The remainder of this paper is organized as follows. We

    first cover the background and the related work, contrasting it

    with our approach. In Sect. III we investigate the congestion

    characteristics of multihop mesh networks through a series

    of network simulations. In Sect. IV, we describe RED and

    FRED, two popular AQM techniques that have successfully

    been used to provide congestion control and fairness in wired

    networks. In Sect. V we review our GRC mechanism that

    enforces implicit flow control by dropping or delaying excess

    traffic at the gateway. In Sect. VI we provide simulation

    results that compare the performance of FRED and GRC in a

simple WMN topology, and describe why FRED fails to yield the desired results. We also include additional experiments

    illustrating the performance of GRC in other mesh topologies.

    Finally, we conclude by observing what issues remain open.

II. RELATED WORK

    Fairness issues have recently received significant attention.

    Gambiroza et al. [4] propose a time-fairness reference model

    that removes the spatial bias for flows traversing multiple hops,

    and propose a distributed algorithm that allows them to achieve

    their fair-rate allocation. Rangwala et al. [14] have proposed

    a mechanism that allows a distributed set of sensor nodes to

    detect incipient congestion, communicate this to interfering

    nodes, and to follow an AIMD rate-control mechanism for

    converging to the fair-rate. There is also ongoing work in

    adapting TCP to multihop wireless networks (e.g., [15]). In

    contrast to these, our work explores a different approach

    by enforcing centralized flow control that allows us to vary

between different rate-control criteria without requiring any

    modifications to the wireless nodes.

    In wireless networks congestion is determined by measur-

    ing MAC-layer utilization, typically obtained through snoop-

    ing [5], or by observing instantaneous transmission queue

lengths ([14], [18]). Congestion control is then exercised

    through one of the following mechanisms: source rate lim-

    iting, where the sources rate limit themselves either accord-

    ing to neighborhood activity [5], or per some computational

    model [10] that takes into account the network topology and

    stream-activity information; hop-by-hop flow control, where

    nodes other than the source can also enforce rate control along

the path to the destination by monitoring either local queue sizes [5] or neighborhood queue sizes [18]; and prioritized MAC

    layer, where the MAC-layer backoff information is explicitly

    shared [1], or adjusted to allow prioritized access to nodes

    higher in the connectivity graph [5].

    There are a number of publications that focus on the

    mathematical modeling of a given topology for determining

    the optimal fair share per active flow. Our GRC mechanism

    uses the fair-share computational model used by Li et al. [10],

    which is derived from the nominal capacity model of Jun and

    Sichitiu [9]. We defer description of this model to Sect. V.

    Another common model is the clique graph model [12] that

    uses the link-contention graph to determine the maximal clique

that bounds the capacity of the network. We defer description of relevant AQM techniques to

    Sect. IV. However, we observe that there has been little

    work discussing the applicability (or lack thereof) of AQM

    techniques for multihop wireless networks. One noticeable

exception is Xu et al.'s [18] use of RED over a virtual

    distributed neighborhood queue comprising all nodes that

    contend for channel access. Each node computes the drop

    probability based on its notion of the size of this distributed

queue, and asks its neighbours to drop packets in case con-

    gestion is detected. Our work explores the traditional use

    of AQM techniques at the wired-wireless boundary at the

    gateway interface.

    III. 802.11 MULTIHOP NETWORKS UNDER VARIABLE

    TRAFFIC LOADS

    Consider the simple chain topology shown in Fig. 2.

    Using ns-2 [16] we simulate a traffic upload scenario in which

    we attach a variable bit rate UDP-traffic generator to each

    mesh router with traffic destined to the wired Internet through

the gateway. We source rate limit (in concert) all nodes over a

    range of traffic generation rates, starting from 0 to a rate that

is well above the fair-share allocation per stream. The simulation


Fig. 2. A simple 5-node chain topology. All nodes are 200 m apart, thus allowing only the neighboring nodes to directly communicate with each other.

[Plot: per-flow throughput (bps) vs. offered load (bps) for flows 1→GW through 4→GW, with the fair-share point marked.]

    Fig. 3. Offered load vs. Throughput for the topology shown in Figure 2.

    uses the default ns-2 radio model [16]. RTS/CTS handshake

    was disabled for these simulations. We assume that the wired

    interface on the gateway has zero loss with negligible latency;

    i.e., the link capacities are provisioned such that the wireless

    domain remains the bottleneck. This is consistent with extant

    WMN deployments. The throughput plot produced in this

    experiment is shown in Fig. 3.

    We observe that as the offered load increases, the throughput

for each stream increases linearly until we hit the fair-share point. For the given topology, this corresponds to around

    140 kbps. Increasing the traffic load beyond this fair share

    rate produces network congestion (i.e., the traffic generated

    exceeds the carrying capacity of the network). At these higher

    traffic loads, we see an increasing unfairness experienced by

the 2-hop flows (flows 1→GW and 4→GW). Our analysis of the simulation trace corroborates that this is primarily

    due to hidden terminal problems that are exacerbated under

    increasing traffic load. Nodes 1 and 3 are hidden terminals

    (as are nodes 2 and 4). The use of RTS/CTS handshake

    does not solve this problem as these hidden terminals are

outside transmission range; node 1 cannot hear node 3's RTS and cannot decode the GW's subsequent CTS. Instead, node 1 can discover transmission opportunities only through random

    backoff. This produces an asymmetric view of the channel

    state between nodes 1 and 3. The degree of this asymmetry

    increases with increasing traffic loads as node 1 experiences

    backoff far more frequently and in greater degree, resulting

    in flow unfairness and subsequent starvation at high enough

    traffic loads. The same phenomenon is observed for node 4

which has to contend unfairly with node 2's transmissions.

    We wish to emphasize that the link-layer issues are the

[Plot: per-flow throughput (bps) vs. time (sec) for flows 1→GW through 4→GW.]

Fig. 4. TCP's greedy nature combined with asymmetric local views of the channel state results in complete starvation of the 2-hop flows. The 1-hop flows that are within carrier-sense range fairly share the network bandwidth.

    root cause of unfairness shown in Fig. 3. These cannot be

    completely resolved by higher-layer congestion control proto-

cols like TCP. To illustrate this, Fig. 4 shows the throughput plot (averaged over 5 sec.) with TCP NewReno [2] sources

    attached to the mesh routers for the same topology (Fig. 2).

    The asymmetric view of the channel state combined with

TCP's aggressive nature results in complete starvation of the

    disadvantaged nodes 1 and 4. TCP builds up a large congestion

    window for advantaged nodes based on their favorable (though

    incorrect) local view of the channel state, allowing these nodes

    to inject traffic into the network beyond their fair-share rate at

    the cost of starving the 2-hop flows.

    IV. AQM TECHNIQUES

    One way to perform allocation of resources is to use

queue-management techniques. AQM takes a proactive approach towards congestion avoidance by actively controlling

    flow rates for various connections. Random Early Detection

    (RED) [3] is an example of such a queue-management proto-

    col. RED gateways are typically used at network boundaries

    where queue build-ups are expected when flows from high-

    throughput networks are being aggregated across slower links.

    RED gateways provide congestion avoidance by detecting in-

    cipient congestion through active monitoring of average queue

    sizes at the gateway. When the queue size exceeds a certain

    threshold, the gateway can notify the connection through ex-

    plicit feedback or by dropping packets. RED gateways require

    that the transport protocol managing those connections be

responsive to congestion notification indicated either through marked packets or through packet loss.
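The core of this decision can be sketched as follows (a minimal Python illustration with hypothetical parameter values, not the gateway implementation used in our simulations; the count-based spacing of drops in [3] is omitted):

```python
import random

class REDQueue:
    """Minimal sketch of RED's drop decision (illustrative parameters)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.02, weight=0.002, limit=50):
        self.min_th = min_th      # average queue size where early dropping begins
        self.max_th = max_th      # average queue size where every arrival is dropped
        self.max_p = max_p        # drop/mark probability as avg approaches max_th
        self.weight = weight      # EWMA weight for the average queue size
        self.limit = limit        # physical buffer limit (packets)
        self.avg = 0.0
        self.queue = []

    def enqueue(self, pkt):
        # Track a moving average of the queue size to detect incipient congestion.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if len(self.queue) >= self.limit or self.avg >= self.max_th:
            return False                     # drop (or mark) every arrival
        if self.avg >= self.min_th:
            # Drop probability grows linearly between min_th and max_th,
            # irrespective of which flow the packet belongs to.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False
        self.queue.append(pkt)
        return True

    def dequeue(self):
        return self.queue.pop(0) if self.queue else None
```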

    While RED gateways have proven effective in avoiding

    congestion, it has been shown that they provide little fair-

    ness improvement [11]. This is because RED gateways do

    not differentiate between particular connections or classes of

    connections [3]. As a result, when incipient congestion is

    detected, all received packets (irrespective of the flow or size)

    are marked with the same drop probability. The fact that all

    connections see the same instantaneous loss rate means that


    even a connection using less than its fair share will be subject

    to packet drops.

    Flow Random Early Drop (FRED) [11] is an extension

    to the RED algorithm designed to reduce the unfairness

    between the flows. In essence, it applies per-flow RED to

    create an isolation between the flows. By using per-active-

    flow accounting, FRED ensures that the drop rate for a flow

    depends on its buffer usage [11].

    A brief description of FRED is as follows: A FRED gateway

    uses flow classification to enqueue flows into logically separate

    buffers. For each flow i, it maintains the corresponding queue

length qlen_i. It defines min_q and max_q, which respectively are the minimum and the maximum number of packets individual flows are allowed to queue. Similarly, it also maintains min_th, max_th, and avg for the overall queue. All new packet arrivals are accepted as long as avg is below min_th. When avg lies between min_th and max_th, a new packet arrival is deterministically accepted only if the corresponding qlen_i is less than min_q. Otherwise, as in RED, the packet is dropped

    with a probability that increases with increasing queue size.
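A simplified sketch of this per-flow accounting, using illustrative parameter values and omitting FRED's additional handling of non-adaptive flows [11], is:

```python
import random
from collections import defaultdict, deque

class FREDQueue:
    """Simplified sketch of FRED's per-flow accounting (illustrative parameters)."""

    def __init__(self, min_q=2, max_q=8, min_th=5, max_th=15,
                 max_p=0.02, weight=0.002, limit=50):
        self.min_q, self.max_q = min_q, max_q      # per-flow queueing bounds
        self.min_th, self.max_th = min_th, max_th  # thresholds on avg queue size
        self.max_p = max_p
        self.weight = weight
        self.limit = limit
        self.avg = 0.0                       # EWMA of the total queue size
        self.qlen = defaultdict(int)         # per-flow queue length qlen_i
        self.queue = deque()                 # (flow_id, pkt) in arrival order

    def enqueue(self, flow_id, pkt):
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        # Never let one flow exceed max_q packets, and never exceed the buffer.
        if (self.qlen[flow_id] >= self.max_q or len(self.queue) >= self.limit
                or self.avg >= self.max_th):
            return False
        if self.avg >= self.min_th and self.qlen[flow_id] >= self.min_q:
            # Flows already buffering their share face RED-style random drops.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False
        self.queue.append((flow_id, pkt))
        self.qlen[flow_id] += 1
        return True

    def dequeue(self):
        if not self.queue:
            return None
        flow_id, pkt = self.queue.popleft()
        self.qlen[flow_id] -= 1
        return pkt
```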

    V. GATEWAY RATE CONTROL

    As with FRED, our recently proposed GRC technique [7]

    requires the gateway to perform flow classification for all the

    traffic entering the gateway. In contrast with FRED, rather

    than probabilistic traffic policing, it explicitly rate limits each

    flow to its fair share. This leads to packet drops or delays

    for aggressive data sources. Adaptive data sources like TCP

    register this packet loss as an indication of congestion, and

    slow down by reducing their congestion window size. This

    frees up the wireless medium, providing an opportunity for

    starving nodes to transmit their packets. Per Fig. 3, when each

    source is rate limited to its fair share, the WMN is generally

able to provide fair access to the gateway to all flows. Our analysis in Sect. VI shows that this allows an equilibrium to

    be established where each source can only consume its fair

    share of the network throughput.

    Our gateway rate-control protocol consists of three steps:

    1) Gather information required to compute the fair share

    bandwidth

    2) Compute the fair share for each stream

    3) Enforce the computed rate for each stream at the gate-

    way

    We now describe the three steps.

    A. Information Gathering

The type of information required depends upon the complexity of the computational model. In general, we need some

    notion of network topology as well as information as to which

    links interfere with each other. The topology information can

be extracted from routing protocols (e.g., link-state routing

    protocols like OLSR, or source-routing protocols like DSR).

    For link interference information, we use the simple model

    of [9] which only requires neighborhood information. We also

    need stream-activity information since there is no need to

    reserve bandwidth for nodes that are not transmitting. As all

Fig. 5. Simulation topology with downstream TCP flows emanating from the wired network and terminating at mesh routers.

    flows pass through the gateway, this can simply be determined

    by performing per-packet inspection.

    B. Fair Share Computation

    We adopt a restricted version of the model developed by

    Li et al. [10]. The network is modeled as a connectivity graph

    with mesh nodes as vertices and wireless links as bidirectional

    edges. A link interferes with another link if either endpoint of

    one link is within transmission range of either endpoint of

    the other link. Thus, the set of all links that interfere with a

    given link, referred to as the collision domain of that link,

    are those within two hops of either endpoint of the link.

The model assumes that the links within a collision domain cannot transmit simultaneously. This actually over-estimates

    link contention. However, given that link interference, defined

    by transmission range rather than interference range, is under-

estimated, the presumption (borne out by detailed simulation

    studies) is that the overall model is approximately correct.

    It is then sufficient to determine the bottleneck collision

    domain, which will be a function of the usage of the links

    within each collision domain. Link usage is determined by

    routing and demand. For the work in this paper, we presume

    that routing is relatively static. That is, it changes infrequently

    compared with traffic demand changes. This is generally true,

    though it would not be difficult to remove this assumption,

by simply recomputing the feasibility as routing changed. We consider network demand to be binary. That is, either a node

    is silent or its demand is insatiable. This corresponds to TCP

    behavior, which either is not transmitting or will increase

    its transmission rate to the available bandwidth. Given the

    stream activity, we can then compute the load over each link,

    and in turn compute the load in each collision domain. Then

    the bottleneck collision domain is simply the domain with

    the greatest load, and the fair share is determined simply by

    dividing the link capacity by that load.
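The following sketch illustrates this computation; the link list, per-flow routes, and nominal one-hop capacity are assumed inputs rather than values measured at the gateway, and the example at the end applies it to the chain of Fig. 2:

```python
def fair_share(links, flow_routes, capacity):
    """Sketch of the fair-share computation described above. 'capacity' is the
    nominal one-hop MAC throughput; links and routes would come from the
    routing protocol and per-packet stream inspection."""
    links = [frozenset(l) for l in links]
    link_set = set(links)

    def interferes(a, b):
        # Two links interfere if either endpoint of one is the same as, or
        # directly connected to, an endpoint of the other.
        return any(x == y or frozenset((x, y)) in link_set for x in a for y in b)

    # Binary demand: the load on a link is the number of active flows routed over it.
    load = {l: 0 for l in links}
    for route in flow_routes:
        for l in route:
            load[frozenset(l)] += 1

    # The bottleneck collision domain carries the greatest total load; every
    # flow's fair share is the nominal link capacity divided by that load.
    bottleneck = max(sum(load[m] for m in links if interferes(l, m)) for l in links)
    return capacity / bottleneck

# Example: the chain of Fig. 2 with four uplink flows. Under this model the
# bottleneck collision domain carries a load of 10 flow-link units, so each
# flow's share is one tenth of the nominal one-hop capacity.
links = [("GW", "1"), ("1", "2"), ("2", "3"), ("3", "4")]
routes = [[("1", "GW")],
          [("2", "1"), ("1", "GW")],
          [("3", "2"), ("2", "1"), ("1", "GW")],
          [("4", "3"), ("3", "2"), ("2", "1"), ("1", "GW")]]
print(fair_share(links, routes, capacity=1.0))   # 0.1
```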

C. Fair-Share Enforcement

    To enforce the fair share rate, the gateway node sorts all

    incoming packets by stream, placing them into a token-bucket-controlled FIFO. Each token bucket has an adjustable rate,

releasing a packet from the FIFO after an average delay of lastPacketSize/lastRate from when it last released a packet.
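A minimal sketch of this pacing step (illustrative class and method names, not our actual implementation; an idle flow's first packet is released immediately as a simplification) is:

```python
import heapq
from collections import deque

class GRCShaper:
    """Illustrative sketch of gateway enforcement: one FIFO per flow, drained
    at the computed fair-share rate."""

    def __init__(self, fair_share_bps):
        self.rate = fair_share_bps / 8.0   # fair-share rate in bytes per second
        self.flows = {}                    # flow_id -> deque of (size_bytes, pkt)
        self.releases = []                 # min-heap of (release_time, flow_id)

    def enqueue(self, now, flow_id, pkt, size_bytes):
        q = self.flows.setdefault(flow_id, deque())
        if not q:
            # An idle flow's first packet is eligible immediately.
            heapq.heappush(self.releases, (now, flow_id))
        q.append((size_bytes, pkt))

    def dequeue_ready(self, now):
        """Return every packet whose pacing delay has elapsed by 'now'."""
        out = []
        while self.releases and self.releases[0][0] <= now:
            t, flow_id = heapq.heappop(self.releases)
            size, pkt = self.flows[flow_id].popleft()
            out.append(pkt)
            if self.flows[flow_id]:
                # The next packet leaves lastPacketSize / rate seconds later.
                heapq.heappush(self.releases, (t + size / self.rate, flow_id))
        return out
```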

VI. ANALYSIS

    A. Performance comparison between AQM and GRC

    We compared the performance of FRED against a sim-

    ple Drop-Tail queue as well as against GRC. We used a

    simple 3-hop chain shown in Fig. 5. We assume that the


    bandwidth of the wired Internet connection at the gateway

    exceeds the wireless MAC-layer bandwidth. As such, the

    wireless domain remains the bottleneck (Fig. 1). As a result

    the output queue on the wired interface of the gateway will

    never exceed one packet, and thus the FRED algorithm never

    reaches the minimum queue size necessary to start dropping

    packets. Therefore, we only test the performance of these

    queue-management protocols for downstream flows (i.e., flows

    originating in the wired domain and terminating at the mesh

    nodes) because a queue build-up (required for FRED) only

    takes place in this direction. Both FRED and GRC rely on

    the inherent responsive nature of the transport protocol to

    congestion notification indicated through delayed or dropped

    packets. TCP is the canonical example of such a protocol, and

    dominates Internet traffic. As such, we use it, specifically TCP

    NewReno [2], for our initial performance studies. We simu-

    lated the three algorithms using the ns-2 [16] simulator with

    the radio model defaults described earlier in Sect. III. The ns-2

    default parameters for FRED operation were used, while the

GRC technique computes its own fair-share rate. We use Jain's Fairness Index (JFI) [6] and an estimate of min(throughput)/avg(throughput) as quantitative measures of network-level fairness and starvation.
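Both metrics are straightforward to compute from per-flow throughputs; the following small sketch (function names are illustrative) shows the formulas used:

```python
def jain_fairness_index(throughputs):
    """Jain's fairness index: (sum of x_i)^2 / (n * sum of x_i^2).
    Equals 1 when every flow receives the same throughput."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

def min_to_avg_ratio(throughputs):
    """min/avg throughput, the simple starvation indicator used above."""
    return min(throughputs) / (sum(throughputs) / len(throughputs))
```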

    Table I shows the summary of our results. Drop Tail exhibits

    the basic unfairness scenario similar to the results we observed

    in Fig. 4. The throughput decreases for flows multiple hops

    away from the gateway, with flow 3 getting starved. The

    use of FRED does not prevent node 3 from starvation. Only

    GRC is able to enforce absolute fairness between the flows.

    We enabled queue monitoring at the gateway to explain this

    behavior. The queue size at the gateway was set to 50 packets.

    At this size, this queue is not the bottleneck as shown by zero

    queue-controlled drops in the 150 sec. simulation run with

    the Drop-Tail discipline. While the FRED experiment does

cause some queue drops at the gateway, they do not slow down flows 1 and 2 sufficiently to preclude the starvation of

    flow 3. GRC, by contrast, does not lose packets at the gateway,

    but does delay them sufficiently as to cause flows 1 and 2 to

    infer loss and invoke congestion control. Reducing the queue

size in GRC would cause explicit packet loss rather than delay,

    slightly reducing the load on the wireless medium. Aggregate

    bandwidth achieved by GRC is noticeably less than that of

    the other approaches, but this is a direct result of the fairness

    requirement [4].

    Fig. 6 shows the per-flow data arrival rate (not ACKs)

    in the FRED queue at the gateway during the simulation

    run. The queue space is evenly shared between the flows

at the start of the simulation, but this sharing deteriorates as the simulation progresses. New data packets are not

    being generated for flow 3 because ACKs for the previously

    transmitted ones have not been received (loss rate of 39.6% for

    flow 3 ACKs with FRED). This is because the gateway acts

    as a hidden terminal for TCP ACKs generated by node 3. As

    discussed previously in Sect. III, this hidden-terminal scenario

    cannot be resolved using RTS/CTS as nodes GW and 3 are

out of each other's transmission range. Because of frequent

    collisions, node 3 repeatedly increases its contention window

[Plot: total data packets received in the FRED queue vs. time (seconds) for flows S1, S2, and S3.]

    Fig. 6. New data packet arrival rate in FRED queue.

    to a point where TCP timeouts occur, and the packets have

    to be retransmitted by the gateway. Though flow 1 transmits

    fewer packets with FRED, the extra available bandwidth is

acquired by flow 2, as there is very little traffic to be sent out for flow 3 because of the combined effect of the 802.11 contention window and the TCP congestion window.

    We conclude that unlike in wired networks, the use of

AQM techniques shows negligible fairness improvement in

    multihop wireless networks. Flows in these networks can

    starve as packet drops can occur in any congested region of

    the physical network. Unlike in wired networks, this loss does

    not always occur at the queue interfacing the high-speed and

    the low-speed networks (the gateway node for WMNs), but

    can occur at any intermediate node that is disadvantaged due

to an asymmetric view of the channel state between the mesh nodes.

    Protocols like our GRC fare better because they reduce the

    degree of this asymmetry by limiting aggressive TCP sources

to their fair share, thus preserving enough channel capacity so as to allow disadvantaged nodes to obtain their fair share.

    B. GRC Evaluation

    We tested the performance of GRC on a number of different

    chain, grid, and random topologies. As detailed results have

    been presented elsewhere [7], we only present a brief summary

    of our experiments in Table VI-B. These experiments represent

    the scenario when all mesh routers had an active TCP stream

    to the wired Internet via the gateway. FS corresponds to the

computed fair share per stream, while σ is the standard deviation of the average throughputs of all active TCP streams.

    Overall, we observed that our algorithm successfully operated

    the network at a capacity that meets the fair-share requirements

    of all active streams.

VII. CONCLUSION AND FUTURE WORK

    WMNs, particularly those based on the 802.11 MAC, ex-

    hibit extreme fairness problems, requiring existing deploy-

    ments to limit the maximum number of hops to the gateway to

    prevent distant nodes from starving. In this paper we evaluated

    the use of two gateway-enforced rate-limiting mechanisms to

improve the fairness characteristics of these networks.
