
REDUCTION OF QUEUE OVERFLOW PROBABILITY

USING WIRELESS SCHEDULING ALGORITHMS

A PROJECT REPORT

Submitted by

R.SWETHA (30408205098)

K.VIJAYRAM (30408205108)

N.VISWANATH (30408205109)

In partial fulfillment for the award of the degree

Of

BACHELOR OF TECHNOLOGY

In

INFORMATION TECHNOLOGY

EASWARI ENGINEERING COLLEGE: CHENNAI 600 089

ANNA UNIVERSITY: CHENNAI 600 025

APRIL 2012


ANNA UNIVERSITY : CHENNAI

BONAFIDE CERTIFICATE

Certified that this project report “REDUCTION OF QUEUE OVERFLOW

PROBABILITY USING WIRELESS SCHEDULING ALGORITHMS” is the

bonafide work of “R.SWETHA (30408205098), K.VIJAYRAM

(30408205108), N.VISWANATH (30408205109)” who carried out the project

work under my supervision.

SIGNATURE SIGNATURE

Dr. K. Kathiravan, M.Tech., Ph.D. Mrs. R. Radha, M.E., (Asst. Prof., Sr. Gr.)

HEAD OF THE DEPARTMENT SUPERVISOR

Department Of Information Technology, Department Of Information Technology,

Easwari Engineering College, Easwari Engineering College,

Ramapuram, Chennai-89. Ramapuram, Chennai-89.

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

We would like to express our sincere thanks to our beloved Director Thiru

V.N.Pattabiraman, B.E.,(Hons), I.O.F.S.,[Retd]., and Principal Dr.Jothimohan

Balasubramanium, M.E., Ph.D. for the encouragement extended by them. We are

highly indebted to the department of Information Technology for providing all the

facilities for the successful completion of the project.

With a deep sense of gratitude we would like to thank Dr.K.Kathiravan, M.Tech,

Ph.D., Head of department of Information Technology for his kind support,

encouragement and valuable guidance in our project work.

We are extremely grateful to our project supervisor Mrs. R. Radha, M.E., Assistant Professor (Senior Grade), for her constant guidance and valuable

suggestions.

We would like to thank our project coordinator Mrs. R. Radha, M.E., Assistant Professor (Senior Grade), for her valuable suggestions and

encouragement.

We also wish to thank all teaching and non-teaching staff for their kind co-

operation throughout this project work.


ABSTRACT

In this project, we are interested in reducing the queue-overflow probability for the downlink of a single cell. In a large-deviation setting, we are interested in algorithms that maximize the asymptotic decay rate of the queue-overflow probability as the queue-overflow threshold approaches infinity. We focus on a class of algorithms called “alpha-algorithms”.

An alpha-algorithm picks, at each time, the user that has the largest product of transmission rate and backlog raised to the power alpha. When the overflow metric is appropriately modified, the minimum cost to overflow under the alpha-algorithm can be achieved by a simple linear path. Using this, we can show that, as alpha approaches infinity, the alpha-algorithms achieve the largest decay rate of the queue-overflow probability. This result enables us to design algorithms that are optimal in terms of the queue-overflow probability and that maintain small queue-overflow probabilities over various queue-length ranges.


TABLE OF CONTENTS

CHAPTER NO. TITLE

ABSTRACT

LIST OF TABLES

LIST OF FIGURES

1. INTRODUCTION

1.1 Introduction

1.2 Overview

1.3 Objective

2. LITERATURE SURVEY

2.1 General

2.2 Literature Survey

2.3 Summary


3. SYSTEM ANALYSIS

3.1 General

3.2 Existing System

3.3 Proposed System

3.4 Summary

4. SYSTEM DESIGN

4.1 General

4.2 Architecture Diagram

4.2.1 ………………

4.3 Functional Modules

4.3.1……………….

4.4 Summary

5. SYSTEM TESTING

5.1 General

5.2 Testing

5.3 Summary


6. SIMULATION RESULTS

6.1 General

6.2 Snapshots

6.2.1 ……………….(Explanation for each)

6.3 Summary

7. CONCLUSION AND FUTURE ENHANCEMENT

7.1 Conclusion

7.2 Future Enhancement

CHAPTER 1


1.1 INTRODUCTION

WIRELESS NETWORK

Wireless network refers to any type of computer network that is not connected by cables of any kind. It is a method by which homes, telecommunication networks and enterprise (business) installations avoid the costly process of introducing cables into a building, or as a connection between various equipment locations. Wireless telecommunications networks are generally implemented and administered using a transmission system called radio waves. This implementation takes place at the physical level (layer) of the OSI model network structure.

1.1.1 TYPES OF WIRELESS NETWORK


Fig 1.1 TYPES OF WIRELESS NETWORK

1.1.2 CELLULAR NETWORK

A cellular network is a radio network distributed over land areas called

cells, each served by at least one fixed-location transceiver, known as a cell

site or base station. When joined together these cells provide radio coverage

over a wide geographic area. This enables a large number of portable


transceivers (e.g., mobile phones, pagers, etc.) to communicate with each

other and with fixed transceivers and telephones anywhere in the network,

via base stations, even if some of the transceivers are moving through more

than one cell during transmission.

Cellular networks offer a number of advantages over alternative solutions:

flexible enough to use the features and functions of almost all public and private networks

increased capacity

reduced power use

larger coverage area

reduced interference from other signals

1.1.2.1 CELLULAR RADIO SYSTEM CONCEPT

In a cellular radio system, a land area to be supplied with radio service is

divided into regularly shaped cells, which can be hexagonal, square, circular or some other regular shape, although hexagonal cells are conventional.

Each of these cells is assigned multiple frequencies (f1 - f6) which have

corresponding radio base stations. The group of frequencies can be reused in

other cells, provided that the same frequencies are not reused in adjacent

neighboring cells as that would cause co-channel interference.
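As an illustration of this reuse constraint (a minimal sketch, with a hypothetical three-cell adjacency map and frequency-group numbers that are not taken from the report), a frequency plan can be checked so that no group is assigned to two adjacent cells:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch: check that a frequency-group assignment never reuses the
// same group in two adjacent (neighboring) cells, which would cause
// co-channel interference.
class FrequencyReuseCheck
{
    static bool IsValid(Dictionary<string, string[]> neighbors, Dictionary<string, int> groupOf)
    {
        foreach (var cell in neighbors.Keys)
            foreach (var neighbor in neighbors[cell])
                if (groupOf[cell] == groupOf[neighbor])
                    return false;   // same group in adjacent cells -> co-channel interference
        return true;
    }

    static void Main()
    {
        // Hypothetical three-cell layout, each assigned one of the frequency groups 1..3.
        var neighbors = new Dictionary<string, string[]>
        {
            ["A"] = new[] { "B", "C" },
            ["B"] = new[] { "A", "C" },
            ["C"] = new[] { "A", "B" }
        };
        var groupOf = new Dictionary<string, int> { ["A"] = 1, ["B"] = 2, ["C"] = 3 };
        Console.WriteLine("Assignment valid: " + IsValid(neighbors, groupOf));
    }
}
```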

The increased capacity in a cellular network, compared with a network with

a single transmitter, comes from the fact that the same radio frequency can

be reused in a different area for a completely different transmission. If there


is a single plain transmitter, only one transmission can be used on any given

frequency. Unfortunately, there is inevitably some level of interference from

the signal from the other cells which use the same frequency. This means

that, in a standard FDMA system, there must be at least a one cell gap

between cells which reuse the same frequency.

Consider the simple example of a taxi company: each radio had a manually operated channel selector knob to tune to different frequencies. As the drivers moved

around, they would change from channel to channel. The drivers knew

which frequency covered approximately what area. When they did not

receive a signal from the transmitter, they would try other channels until

they found one that worked. The taxi drivers would only speak one at a time,

when invited by the base station operator (in a sense TDMA).

Fig 1.2 Cellular radio system concept


1.1.3 UPLINK AND DOWNLINK

The communication going from a satellite to ground is called downlink, and

when it is going from ground to a satellite it is called uplink. When an uplink is

being received by the spacecraft at the same time a downlink is being received by

Earth, the communication is called two-way. If there is only an uplink happening,

this communication is called upload. If there is only a downlink happening, the

communication is called one-way.

1.1.4 CHANNEL ALLOCATION

In radio resource management for wireless and cellular network, channel

allocation schemes are required to allocate bandwidth and communication

channels to base stations, access points and terminal equipment. The objective is

to achieve maximum system spectral efficiency in bit/s/Hz/site by means

of frequency reuse, but still assure a certain grade of service by avoiding co-

channel interference and adjacent channel interference among nearby cells or

networks that share the bandwidth. There are two types of strategies that are followed:

Fixed: FCA, fixed channel allocation: manually assigned by the network operator.

Dynamic: DCA, dynamic channel allocation.


In Fixed Channel Allocation or Fixed Channel Assignment (FCA), each cell is given a predetermined set of frequency channels. FCA requires manual frequency planning, which is an arduous task in TDMA- and FDMA-based systems, since such systems are highly sensitive to co-channel interference from nearby cells that are reusing the same channel. Another drawback of TDMA and FDMA systems with FCA is that the number of channels in the cell remains constant irrespective of the number of customers in that cell. This results in traffic congestion and some calls being lost when traffic gets heavy in some cells, and idle capacity in other cells.

If FCA is combined with conventional FDMA and perhaps TDMA, a fixed number of voice channels can be transferred over the cell. A new call can only be connected on an unused channel. If all the channels are occupied, then the new call is blocked in this system. There are, however, several dynamic radio-resource management schemes that can be combined with FCA. A simple form is a traffic-adaptive handover threshold, implying that calls from cell phones situated in the overlap between two adjacent cells can be forced to make a handover to the cell with the lowest load for the moment. If FCA is combined with spread spectrum, the maximum number of channels is not fixed in theory, but in practice a maximum limit is applied, since too many calls would cause too high a co-channel interference level, causing the quality to be problematic. Spread spectrum allows cell breathing to be applied, by allowing an overloaded cell to borrow capacity (maximum number of simultaneous calls in the cell) from a nearby cell that is sharing the same frequency.
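As a simple illustration of FCA-style call admission (a minimal sketch only; the class, the three-channel cell and the call identifiers are hypothetical and not part of the report), the base station admits a new call only if one of the cell's fixed channels is free, and blocks it otherwise:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of Fixed Channel Allocation (FCA) call admission: each cell
// owns a fixed, predetermined set of channels, and a new call is blocked when
// every channel in the cell is already occupied.
class FcaCell
{
    private readonly HashSet<int> freeChannels;
    private readonly Dictionary<string, int> activeCalls = new Dictionary<string, int>();

    public FcaCell(IEnumerable<int> assignedChannels)
    {
        freeChannels = new HashSet<int>(assignedChannels);
    }

    // Returns true and assigns a channel if one is free; otherwise the call is blocked.
    public bool TryAdmitCall(string callId)
    {
        if (freeChannels.Count == 0)
            return false;                       // all channels busy -> call blocked
        int channel = freeChannels.First();
        freeChannels.Remove(channel);
        activeCalls[callId] = channel;
        return true;
    }

    public void EndCall(string callId)
    {
        if (activeCalls.TryGetValue(callId, out int channel))
        {
            activeCalls.Remove(callId);
            freeChannels.Add(channel);          // channel returns to the fixed pool
        }
    }

    static void Main()
    {
        var cell = new FcaCell(new[] { 1, 2, 3 });  // hypothetical cell with 3 fixed channels
        for (int i = 0; i < 5; i++)
            Console.WriteLine($"call-{i}: {(cell.TryAdmitCall("call-" + i) ? "admitted" : "blocked")}");
    }
}
```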


A more efficient way of channel allocation is Dynamic Channel Allocation or Dynamic Channel Assignment (DCA), in which voice channels are not allocated to cells permanently; instead, for every call request, the base station requests a channel from the MSC. The channel is allocated following an algorithm which accounts for the likelihood of future blocking within the cell. It requires the MSC to collect real-time data on channel occupancy, traffic distribution and Radio Signal Strength Indications (RSSI). DCA schemes are suggested for TDMA/FDMA-based cellular systems such as GSM, but are currently not used in any products. OFDMA systems, such as the downlink of 4G cellular systems, can be considered as carrying out DCA for each individual sub-carrier as well as each timeslot.

1.1.5 MAC LAYER SCHEDULING

The Media Access Control Layer is one of two sublayers that make up the Data

Link Layer of the OSI model. The MAC layer is responsible for moving

data packets to and from one Network Interface Card (NIC) to another across a

shared channel. The MAC sublayer uses MAC protocols to ensure that signals

sent from different stations across the same channel don't collide. Different

protocols are used for different shared networks, such as Ethernet, Token Ring, Token Bus, and WANs.

Medium Access Control (MAC) layer scheduling for data communication

involves assignment of timeslots and channels to either links or nodes in the

network. The number of channels available and the channel identities vary from


one node to another within the network. This is in contrast to the existing use of

multiple channels where all the nodes have the same set of channels available (for

example in IEEE 802.11 networks). 

1.1.6 LINK SCHEDULING

Allocation of a link for use, to a particular node, by the base station is called link scheduling; that is, scheduling the link for use by the various nodes.

1.1.6.1 NEED FOR LINK SCHEDULING

1. Channel fading and mobility of wireless nodes

2. Variation in transmission power

3. Interference due to other wireless nodes

1.1.6.2 TYPES OF LINK SCHEDULING

1.1.6.2.1 PROPORTIONALLY NON-FAIR SCHEDULING

Round robin scheduling

Round-robin (RR) is one of the simplest scheduling

algorithms for processes in an operating system. As the term is generally


used, time slices are assigned to each process in equal portions and in

circular order, handling all processes without priority (also known as cyclic

executive). Round-robin scheduling is simple, easy to implement,

and starvation-free. Round-robin scheduling can also be applied to other

scheduling problems, such as data packet scheduling in computer

networks.

The name of the algorithm comes from the round-robin principle known

from other fields, where each person takes an equal share of something in

turn.
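As an illustration of round-robin applied to packet scheduling (a minimal sketch with hypothetical per-user queues; this is not code from the report), each slot serves the next non-empty queue in circular order:

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of round-robin packet scheduling: each time slot serves the
// next user in circular order, skipping users whose queues are empty.
class RoundRobinScheduler
{
    private readonly List<Queue<string>> userQueues;
    private int nextUser;

    public RoundRobinScheduler(List<Queue<string>> queues) { userQueues = queues; }

    // Returns the packet served in this slot, or null if every queue is empty.
    public string ServeOneSlot()
    {
        for (int i = 0; i < userQueues.Count; i++)
        {
            int user = (nextUser + i) % userQueues.Count;
            if (userQueues[user].Count > 0)
            {
                nextUser = (user + 1) % userQueues.Count;   // equal turns, no priority
                return userQueues[user].Dequeue();
            }
        }
        return null;
    }

    static void Main()
    {
        var queues = new List<Queue<string>> {
            new Queue<string>(new[] { "a1", "a2" }),        // hypothetical backlogs
            new Queue<string>(),
            new Queue<string>(new[] { "c1" })
        };
        var rr = new RoundRobinScheduler(queues);
        for (int slot = 0; slot < 4; slot++)
            Console.WriteLine($"slot {slot}: served {rr.ServeOneSlot() ?? "nothing"}");
    }
}
```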

1.1.6.2.2 PROPORTIONALLY FAIR SCHEDULING

1. Alpha algorithms

The following are alpha algorithms.

1.1 User prioritization algorithm

Here we schedule the channel for the station that has the maximum priority.

1.2 Max-weight scheduling algorithm (or throughput-optimal algorithm)

For each time slot, a (scheduling) decision has to be made as to which transmitters should send data to which mobiles, and at which rates. In the simplest case, when there is only one transmitter, only one user can be served in one slot, and transmission rates are fixed, there are exactly N scheduling decisions, namely, “which of the N users to serve.” In general, multiple users can be picked for service in one slot, and the data rates that can be assigned to the transmissions are user dependent (due to differences in radio channel quality) and, moreover, highly interdependent (due to transmitter power constraints and mutual radio signal interference).
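To make the selection rule concrete, the following is a minimal sketch of the per-slot decision (the rate and backlog arrays are hypothetical inputs; this is not the report's implementation): the scheduler serves the user with the largest product of its current transmission rate and its backlog raised to the power alpha, and alpha = 1 reduces to the classical max-weight rule.

```csharp
using System;

// Minimal sketch of the "alpha-algorithm" selection rule: serve the user
// with the largest value of rate[i] * backlog[i]^alpha in the current slot.
static class AlphaScheduler
{
    public static int PickUser(double[] rates, double[] backlogs, double alpha)
    {
        int best = -1;
        double bestWeight = double.NegativeInfinity;
        for (int i = 0; i < rates.Length; i++)
        {
            double weight = rates[i] * Math.Pow(backlogs[i], alpha);
            if (weight > bestWeight)
            {
                bestWeight = weight;
                best = i;
            }
        }
        return best;
    }

    static void Main()
    {
        double[] rates = { 2.0, 1.0, 3.0 };     // hypothetical per-user channel rates
        double[] backlogs = { 5.0, 9.0, 1.0 };  // hypothetical per-user queue backlogs
        Console.WriteLine("Serve user " + PickUser(rates, backlogs, alpha: 2.0));
    }
}
```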

1.2 OVERVIEW

Algorithms that schedule the channels based on their state substantially increase the system performance; they also increase the throughput. Consider the downlink of a single cell in a cellular network. The base station transmits to users. We assume that perfect channel information is available at the base station.

In a stability problem, the goal is to find algorithms for scheduling the

transmissions such that the queues are stabilized at given offered loads. An

important result along this direction is the development of the so-called

“throughput-optimal” algorithms. A scheduling algorithm is called throughput-

optimal if, at any offered load under which any other algorithm can stabilize the

system, this algorithm can stabilize the system as well.

The probability of queue overflow is equivalent to delay violation under certain conditions. The question that we attempt to answer is the following: Is there an optimal algorithm in the sense that, at any given offered load, the algorithm can achieve the smallest probability that any queue overflows? Note that if we impose a quality-of-service (QoS) constraint on each user in the form of an upper bound on the queue-overflow probability, then the above optimality condition will also imply that the algorithm can support the largest set of offered loads subject to the QoS constraint. Unfortunately, calculating the exact queue distribution is often mathematically intractable. In this project, we use large-deviations theory and reformulate the QoS constraint in terms of the asymptotic decay rate of the queue-overflow probability as the overflow threshold approaches infinity.
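For concreteness, the decay-rate objective described above can be written as follows (a standard large-deviations formulation consistent with the text; the symbols q_i, B and I are introduced here only for illustration and are not taken from the report):

```latex
% With q_i the backlog of user i and B the overflow threshold, the scheduler
% should maximize the asymptotic decay rate
I \;=\; \lim_{B \to \infty} \; -\frac{1}{B} \, \log \Pr\!\Big( \max_i q_i \ge B \Big),
% so that, for large B, the overflow probability behaves roughly like e^{-I B}.
```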

Specifically, in order to apply the large-deviation theory to queue-length-

based scheduling algorithms, one has to use sample-path large deviation and

formulate the problem as a multidimensional calculus-of-variations (CoV) problem

for finding the “most likely path to overflow.” The decay rate of the queue-

overflow probability then corresponds to the cost of this path, which is referred to

as the “minimum cost to overflow.” Unfortunately, for many queue-length-based

scheduling algorithms of interest, this multidimensional calculus-of-variations

problem is very difficult to solve.

In this project, in order to overcome the CoV problem, we use a Lyapunov function to map the multidimensional CoV problem to a one-dimensional problem, which allows us to bound the minimum cost to overflow by solutions of simple vector optimization problems. This technique can also be used for the analysis of other queue-length-based problems.

1.3 OBJECTIVE

The objective of the project is to reduce the queue overflow probability and

increase the stability of the network.


CHAPTER 2

LITERATURE SURVEY

2.1 GENERAL

This chapter gives the overall description of the reference papers, through which we can identify the problems of existing technologies. Also the methods to overcome such problems can be identified.

2.2 LITERATURE SURVEY

1. Order Optimal Delay for Opportunistic Scheduling in Multi-User Wireless Uplinks and Downlinks (Michael Neely, 2008)
This paper considers one-hop wireless channels for uplinks and downlinks with independent time-varying channels. The goal is the construction of a dynamic, queue-length-aware algorithm that maximizes the throughput and reduces the average delay independently of the number of users, with the help of a concept called queue grouping to achieve the delay bounds.

2. Delay Analysis for Maximal Scheduling in Wireless Networks with Bursty Traffic (Michael Neely, 2008)
This paper considers one-hop wireless networks with interference constraints. It derives the average delay for one-hop wireless networks and shows that the average delay grows logarithmically in the largest number of interferers and links when "max-weight" scheduling is used.

3. Effective Capacity and QoS for Wireless Scheduling (S. Shakkottai, 2008)
The channel state information exploited at the base station can result in a significant increase of throughput to the users. The paper analyses the channel-state greedy rule and the max-weight rule when QoS constraints are added. It also studies how increasing the channel burstiness increases the long-term throughput along with the channel access delay, resulting in poor QoS.

4. On Characterizing the Delay Performance of Wireless Scheduling Algorithms (Xiaojun Lin, 2007)
This paper studies the problem of characterizing the delay performance of wireless scheduling algorithms. The delay-violation probability can be studied via the sample-path large-deviations technique, but this leads to a multidimensional calculus-of-variations problem. Lyapunov functions are used to map the multidimensional CoV problem to a one-dimensional one, which helps to study the delay performance of a large class of wireless scheduling algorithms.

5. A Large Deviations Analysis of Scheduling in Wireless Networks (Lei Ying, R. Srikant, G. E. Dullerud, 2006)
This paper considers a cellular network with a base station and a number of receivers, where the channel states of the receivers are independent of each other. The goal is to compare two scheduling policies: greedy scheduling and queue-length-based scheduling. With a given upper bound on the queue-overflow probability, it is shown that the throughput of the queue-length-based policy is a strictly increasing function, while that of the greedy algorithm eventually becomes constant.

6. Optimal Scheduling Algorithms for Input-Queued Switches (Devavrat Shah, Damon Wischik, 2006)
Input-queued switches are widely used in Internet architectures, where the selection of a good scheduling algorithm remains a concern. This paper presents a new technique for analysing scheduling algorithms. The basic idea is that when one or more ports of the switch are heavily loaded, the switch spends its time near the invariant states. By studying the geometry of the invariant states, the performance of the algorithms can be studied.

7. A Tutorial on Cross-Layer Optimization in Wireless Networks (Xiaojun Lin, R. Srikant, Ness Shroff, 2006)
This paper overviews the recent developments in the optimization-based approach to resource allocation problems. It demonstrates how to use imperfect scheduling in the cross-layer framework, and it mainly uses the important results of opportunistic scheduling, where the system performance is optimized.

8. Stable Scheduling Policies for Fading Wireless Channels (Atilla Eryilmaz, R. Srikant, James Perkins, 2005)
This paper discusses stable scheduling policies for a class of wireless networks. It assumes that the mean arrival rate lies in the achievable rate region; for any mean arrival rate that lies in the capacity region, the queues will be stable. In the context of time-varying channels with many users, the work is an example of exploiting multi-user diversity to maximize the throughput of the system.

9. MaxWeight Scheduling in a Generalized Switch: State Space Collapse and Workload Minimization in Heavy Traffic (A. L. Stolyar, 2004)
This paper considers the generalized switch model with multi-user scheduling over the wireless medium. Input flows form a discrete-time Markov chain, and the switch chooses a scheduling decision based on the max-weight policy for each channel state. Even for this quite general queueing system, allocation of resources, randomness of the service environment and workload minimization can be handled dynamically, without precomputation, using the properties of max-weight scheduling.

2.3 SUMMARY

This chapter summarizes the information from these papers, which are used as references for the development of this project.


CHAPTER 3

SYSTEM ANALYSIS

3.1 GENERAL

This chapter gives an overall description of the existing system for wireless scheduling and also of our proposed system for the same.

3.2 EXISTING SYSTEM:

We consider the existing problem of scheduling packets from multiple flows over a Rayleigh fading wireless channel. Recently, there has been much interest in opportunistic scheduling, i.e., scheduling packets from the user who has the highest SNR (signal-to-noise ratio), to maximize the network's throughput. Here, we compare the throughput achievable under fair opportunistic scheduling (i.e., a modification of opportunistic scheduling to ensure fair resource allocation) with the throughput under time-division multiplexing (TDM) scheduling. Using large deviations to characterize the probability that the QoS constraint (an upper bound on delay) is violated, we numerically compare the performance of the two scheduling algorithms under various channel conditions. We show that the opportunistic scheduler outperforms the TDM scheduler when the number of users is small, but the TDM scheduler performs better when the number of users exceeds a threshold which depends on the channel parameters.


3.3 PROPOSED SYSTEM:

We propose wireless scheduling algorithms for the downlink of a single cell that can minimize the queue-overflow probability. Specifically, in a large-deviation setting, we are interested in algorithms that maximize the asymptotic decay rate of the queue-overflow probability as the queue-overflow threshold approaches infinity. We first derive an upper bound on the decay rate of the queue-overflow probability over all scheduling policies. We show that when the overflow metric is appropriately modified, the minimum cost to overflow under the alpha-algorithm can be achieved by a simple linear path, and it can be written as the solution of a vector-optimization problem. Using this structural property, we then show that when alpha approaches infinity, the alpha-algorithms asymptotically achieve the largest decay rate of the queue-overflow probability. Finally, this result enables us to design scheduling algorithms that are both close to optimal in terms of the asymptotic decay rate of the overflow probability and empirically shown to maintain small queue-overflow probabilities over queue-length ranges of practical interest.


3.4 SUMMARY:

This chapter explains the existing system and analyzes its drawbacks. The proposed system is explained and its benefits are also specified.


CHAPTER 4

SYSTEM DESIGN

4.1 GENERAL

This chapter gives the diagrammatic representation of our proposed system

(architecture diagram) and also the UML diagrams used for designing our system.

4.2 ARCHITECTURE DIAGRAM

The architecture diagram of our project is shown below.


Fig 4.1 System Architecture


4.3 FUNCTIONAL MODULES

4.3.1 NETWORKING MODULE:

Client-server computing or networking is a distributed application

architecture that partitions tasks or workloads between service providers

(servers) and service requesters, called clients. Often clients and servers

operate over a computer network on separate hardware. A server machine is

a high-performance host that is running one or more server programs which

share its resources with clients. A client also shares any of its resources;

Clients therefore initiate communication sessions with servers which await

(listen to) incoming requests.

Fig 4.2 Networking module
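As an illustration of the client-server pattern used in this module (a minimal sketch only; the port number and message are hypothetical, and this is not the project's actual networking code), a server can listen for incoming requests while a client initiates the session:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Minimal client-server sketch: the server listens for incoming requests,
// the client initiates the session and sends one message.
class ClientServerSketch
{
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "server")
        {
            var listener = new TcpListener(IPAddress.Any, 9000); // hypothetical port
            listener.Start();
            using TcpClient conn = listener.AcceptTcpClient();   // await an incoming request
            var buffer = new byte[1024];
            int n = conn.GetStream().Read(buffer, 0, buffer.Length);
            Console.WriteLine("Server received: " + Encoding.UTF8.GetString(buffer, 0, n));
            listener.Stop();
        }
        else
        {
            using var client = new TcpClient("127.0.0.1", 9000); // client initiates the session
            byte[] data = Encoding.UTF8.GetBytes("hello from client");
            client.GetStream().Write(data, 0, data.Length);
        }
    }
}
```

Running the same program with the argument "server" starts the listener; running it without arguments acts as the client.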


4.3.2 MAXWEIGHT SCHEDULING:

We consider a generalized switch model, which includes as special cases the model of multiuser data scheduling over a wireless medium, the input-queued cross-bar switch model, and a parallel-server queueing system. For each

time slot, a (scheduling) decision has to be made as to which transmitters

should send data to which mobiles, and at which rates. In the simplest case

when there is only one transmitter, only one user can be served in one slot,

and transmission rates are fixed, there are exactly N scheduling decisions,

namely, “which of the N users to serve.” In general, multiple users can be

picked for service in one slot, and the data rates that can be assigned to the

transmissions are user dependent (due to differences in radio channel

quality) and, moreover, highly interdependent (due to transmitter power

constraints and mutual radio signal interference).

This module involves designing the following algorithm (a minimal code sketch of the per-slot loop is given after the phase list):

Phase 1: Accept all incoming packets.

Phase 2: Check whether the node has the maximum number of incoming packets.

Phase 3: Remove any packets that arrive at their destination.
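The sketch below is illustrative only (the per-node queues, arrival pattern and rates are hypothetical, and this is not the module's actual code); it strings the three phases together with the max-weight selection rule described above:

```csharp
using System;
using System.Collections.Generic;

// Minimal per-slot sketch of the scheduling module:
// Phase 1 accepts arrivals, Phase 2 picks the backlog/rate "heaviest" node,
// Phase 3 removes the packets delivered to their destination in this slot.
class MaxWeightSlotLoop
{
    static void Main()
    {
        var queues = new List<Queue<string>> {             // hypothetical per-node queues
            new Queue<string>(), new Queue<string>(), new Queue<string>() };
        double[] rates = { 2.0, 1.0, 3.0 };                // hypothetical per-node rates (packets/slot)

        for (int slot = 0; slot < 5; slot++)
        {
            // Phase 1: accept all incoming packets (one hypothetical arrival per node here).
            for (int i = 0; i < queues.Count; i++)
                queues[i].Enqueue($"pkt-{slot}-{i}");

            // Phase 2: pick the node with the largest product of rate and backlog.
            int best = 0;
            double bestWeight = double.NegativeInfinity;
            for (int i = 0; i < queues.Count; i++)
            {
                double weight = rates[i] * queues[i].Count;
                if (weight > bestWeight) { bestWeight = weight; best = i; }
            }

            // Phase 3: remove the packets that reach their destination in this slot.
            int served = Math.Min((int)rates[best], queues[best].Count);
            for (int k = 0; k < served; k++)
                queues[best].Dequeue();

            Console.WriteLine($"slot {slot}: served {served} packet(s) from node {best}");
        }
    }
}
```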


Figure 4.3: In the dynamic graph adversarial model, at each time slot the adversary determines packet arrivals and edge capacities. The Max-Weight protocol then determines which packets will be transmitted along each edge. Data is stored at each node according to its eventual destination.

4.3.3 REDUCTION OF QUEUE-OVERFLOW:

This module implements wireless scheduling algorithms for the downlink of a single cell that can minimize the queue-overflow probability. Specifically, in a large-deviation setting, we are interested in algorithms that maximize the asymptotic decay rate of the queue-overflow probability as the queue-overflow threshold approaches infinity. We first derive an upper bound on the decay rate of the queue-overflow probability over all scheduling policies.

We design hybrid scheduling algorithms that are both close to optimal in terms of the asymptotic decay rate of the overflow probability and empirically shown to maintain small queue-overflow probabilities over queue-length ranges of practical interest. For future work, we plan to extend the results to more general network and channel models.

We derive the I-optimal (upper bound of the queue) and the J-optimal (lower bound of the queue) in order to reduce overflow.

The goal is to find algorithms for scheduling the transmissions such that the queues are stabilized at given offered loads. An important result along this direction is the development of the so-called “throughput-optimal” algorithms. A scheduling algorithm is called throughput-optimal if, at any offered load under which any other algorithm can stabilize the system, this algorithm can stabilize the system as well. It is well known that the following class of scheduling algorithms is throughput-optimal: pick the user for service at each time that has the largest product of the transmission rate and the backlog.
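As a rough illustration of how the queue-overflow probability can be estimated empirically (a hypothetical Monte-Carlo sketch with made-up arrival and rate models, not the project's simulation code), one can run the alpha-rule over many slots and count how often the largest backlog reaches a threshold B:

```csharp
using System;

// Hypothetical Monte-Carlo sketch: simulate an alpha-rule scheduler and
// estimate P(max_i q_i >= B), the queue-overflow probability.
class OverflowEstimate
{
    static void Main()
    {
        var rng = new Random(1);
        int users = 3, slots = 200000, B = 30;           // hypothetical parameters
        double alpha = 2.0, arrivalProb = 0.3;
        int[] q = new int[users];
        int overflowSlots = 0;

        for (int t = 0; t < slots; t++)
        {
            // Bernoulli arrivals and random per-slot channel rates (1..3 packets).
            int[] rate = new int[users];
            for (int i = 0; i < users; i++)
            {
                if (rng.NextDouble() < arrivalProb) q[i]++;
                rate[i] = 1 + rng.Next(3);
            }

            // Alpha-rule: serve the user maximizing rate * backlog^alpha.
            int best = 0; double bestW = -1;
            for (int i = 0; i < users; i++)
            {
                double w = rate[i] * Math.Pow(q[i], alpha);
                if (w > bestW) { bestW = w; best = i; }
            }
            q[best] = Math.Max(0, q[best] - rate[best]);

            // Count slots in which some queue reaches the overflow threshold B.
            int maxQ = 0;
            foreach (int qi in q) maxQ = Math.Max(maxQ, qi);
            if (maxQ >= B) overflowSlots++;
        }

        Console.WriteLine($"Estimated overflow probability: {(double)overflowSlots / slots:E3}");
    }
}
```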


Fig 4.4 Concept of queue-bounds


CHAPTER 5

SYSTEM TESTING

5.1 SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests. Each test type addresses a specific testing requirement.

5.2 TYPES OF TESTS

5.2.1 Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

5.2.2 Integration testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

5.2.3 Functional test

Functional tests provide systematic demonstrations that functions tested are

available as specified by the business and technical requirements, system

documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.


Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

5.2.4 System Test

System testing ensures that the entire integrated software system meets

requirements. It tests a configuration to ensure known and predictable results. An

example of system testing is the configuration oriented system integration test.

System testing is based on process descriptions and flows, emphasizing pre-driven

process links and integration points.

5.2.5 White Box Testing

White Box Testing is a testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.


5.2.6 Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is a testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

5.2.7 Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test

phase of the software lifecycle, although it is not uncommon for coding and unit

testing to be conducted as two distinct phases.

5.2.8 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant

participation by the end user. It also ensures that the system meets the functional

requirements.


5.3 Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.

5.3.1 Test objectives

All field entries must work properly.

Pages must be activated from the identified link.

The entry screen, messages and responses must not be delayed.

5.3.2 Features to be tested

Verify that the entries are of the correct format

No duplicate entries should be allowed

All links should take the user to the correct page.

5.4 Test Results:

All the test cases mentioned above passed successfully. No defects encountered.


CHAPTER 6

SIMULATION RESULTS

6.1 GENERAL

This chapter showcases the step by step procedure of our project with the

help of snapshots.

MS Client 1:
1. Open MS Client 1 and enter the destination IP address.
2. Enter the source path of the file and press Split.

MS Client 2:
1. Open MS Client 2 and enter the destination IP address.
2. Enter the source path of the file and press Split.

MS Client 3:
1. Open MS Client 3 and enter the destination IP address.
2. Enter the source path of the file and press Split.

Wireless scheduling:
1. Send data from the above three clients one by one.
2. Click Start in the following window to start the wireless scheduling process.


6.2 GRAPHS


6.3 SUMMARY

All the steps done are indicated and explained using snapshots.


CHAPTER 7

CONCLUSION AND FUTURE ENHANCEMENTS

7.1 CONCLUSION

In this project, we study wireless scheduling algorithms for the downlink of a single cell that can maximize the asymptotic decay rate of the queue-overflow probability as the overflow threshold approaches infinity. Specifically, we focus on the class of “alpha-algorithms,” which pick the user for service at each time that has the largest product of the transmission rate multiplied by the backlog raised to the power alpha. We show that when alpha approaches infinity, the alpha-algorithms asymptotically achieve the largest decay rate of the queue-overflow probability. A key step in proving this result is to use a Lyapunov function to derive a simple lower bound for the minimum cost to overflow under the “alpha-algorithms”. This technique, which is of independent interest, circumvents solving the difficult multidimensional calculus-of-variations problem typical in this type of problem. Finally, using the insight from this result, we design hybrid scheduling algorithms that are both close to optimal in terms of the asymptotic decay rate of the overflow probability and empirically shown to maintain small queue-overflow probabilities over queue-length ranges of practical interest.

7.2 FUTURE ENHANCEMENT

For future work, we plan to extend the results to more general network and channel

models.


APPENDICES

FEATURES OF .NET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. The .NET Framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the

environment within which programs run. The most important features are

● Conversion from a low-level assembler-style language, called Intermediate

Language (IL), into code native to the platform being executed on.

● Memory management, notably including garbage collection.

● Checking and enforcing security restrictions on the running code.

● Loading and executing programs, with version control and other such

features.

The following features of the .NET framework are also worth describing:


Managed Code

Managed code is the code that targets .NET, and which contains certain extra information - “metadata” - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data

With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn't get garbage collected but instead is looked after by unmanaged code.

Common Type System

The CLR uses something called the Common Type System (CTS) to strictly

enforce type-safety. This ensures that all classes are compatible with each other, by

describing types in a common way. It ensures that types are only used in appropriate ways; the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that

you can develop managed code that can be fully used by developers using any

programming language, a set of language features and rules for using them called

the Common Language Specification (CLS) has been defined. Components that

follow these rules and expose only CLS features are considered CLS-compliant.


THE CLASS LIBRARY:

.NET provides a single-rooted hierarchy of classes. The root of the namespace is

called System; this contains basic types like Byte, Double, Boolean, and String, as

well as Object. All objects derive from System.Object. Apart from objects, there are value

types. Value types can be allocated on the stack, which can provide useful

flexibility. There are also efficient means of converting value types to object types

if and when necessary.
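For instance (a small illustrative snippet, not taken from the report), a value type can be converted to an object type through boxing when necessary:

```csharp
using System;

// Small illustration of value types and boxing in .NET:
// the int is a value type; assigning it to an object boxes it on the heap.
class BoxingExample
{
    static void Main()
    {
        int count = 42;            // value type
        object boxed = count;      // boxing: value type converted to object type
        int unboxed = (int)boxed;  // unboxing back to the value type
        Console.WriteLine($"{boxed} / {unboxed}");
    }
}
```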

LANGUAGES SUPPORTED BY .NET

The .NET framework supports new versions of Microsoft’s old favorites Visual

Basic and C++ (as VB.NET and Managed C++), but there are also a number of

new additions to the family. Other languages for which .NET compilers are

available include

● FORTRAN

● COBOL

● Eiffel

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework

monitors allocated resources, such as objects and variables. In addition, the .NET

Framework automatically releases memory for reuse by destroying objects that are

no longer in use. In C#.NET, the garbage collector checks for the objects that are

not currently in use by applications. When the garbage collector comes across an

object that is marked for garbage collection, it releases the memory occupied by

the object.
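As a small illustration (an illustrative snippet only, not from the report), an object that is no longer referenced becomes eligible for collection, and a collection can be requested explicitly for demonstration purposes:

```csharp
using System;

// Small illustration of .NET garbage collection: once the object is no longer
// referenced, the garbage collector may reclaim its memory.
class GcExample
{
    static void Main()
    {
        var data = new byte[10_000_000]; // allocate a large object
        Console.WriteLine("Allocated ~10 MB, total memory: " + GC.GetTotalMemory(false));

        data = null;                     // drop the only reference; object is now unreachable
        GC.Collect();                    // request a collection (normally the CLR decides when)
        GC.WaitForPendingFinalizers();

        Console.WriteLine("After collection, total memory: " + GC.GetTotalMemory(true));
    }
}
```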


FEATURES OF SQL SERVER:

The OLAP Services feature available in SQL Server version 7.0 is now called SQL

Server 2000 Analysis Services. The term OLAP Services has been replaced with

the term Analysis Services. Analysis Services also includes a new data mining

component. The Repository component available in SQL Server version 7.0 is now

called Microsoft SQL Server 2000 Meta Data Services. References to the

component now use the term Meta Data Services. The term repository is used only

in reference to the repository engine within Meta Data Services.

A SQL Server database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A database is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two types,

1. Design View

2. Datasheet View


Design View

To build or modify the structure of a table we work in the table design view.

We can specify what kind of data will be held.

Datasheet View

To add, edit or analyse the data itself, we work in the table's datasheet view mode.

QUERY: A query is a question that has to be asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.


REFERENCES

[1] M. J. Neely, “Order Optimal Delay for Opportunistic Scheduling in Multi-User Wireless Uplinks and Downlinks,” IEEE/ACM Transactions on Networking, 2008.

[2] M. J. Neely, “Delay Analysis for Maximal Scheduling in Wireless Networks with Bursty Traffic,” in Proceedings of IEEE INFOCOM, Phoenix, AZ, April 2008.

[3] S. Shakkottai, “Effective Capacity and QoS for Wireless Scheduling,” IEEE Transactions on Automatic Control, vol. 53, no. 3, April 2008.

[4] X. Lin, “On Characterizing the Delay Performance of Wireless Scheduling Algorithms,” in 44th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, September 2006.

[5] L. Ying, R. Srikant, A. Eryilmaz, and G. E. Dullerud, “A Large Deviations Analysis of Scheduling in Wireless Networks,” IEEE Transactions on Information Theory, vol. 52, no. 11, November 2006.

[6] D. Shah and D. Wischik, “Optimal Scheduling Algorithms for Input-Queued Switches,” in Proceedings of IEEE INFOCOM, Barcelona, Spain, April 2006.

[7] X. Lin, N. B. Shroff, and R. Srikant, “A Tutorial on Cross-Layer Optimization in Wireless Networks,” IEEE Journal on Selected Areas in Communications, vol. 24, no. 8, August 2006.

[8] A. Eryilmaz, R. Srikant, and J. Perkins, “Stable Scheduling Policies for Fading Wireless Channels,” IEEE/ACM Transactions on Networking, vol. 13, no. 2, pp. 411–424, April 2005.

[9] A. L. Stolyar, “MaxWeight Scheduling in a Generalized Switch: State Space Collapse and Workload Minimization in Heavy Traffic,” Annals of Applied Probability, vol. 14, no. 1, pp. 1–53, 2004.