
A Project Report on

HIGH AVAILABILITY OF NETWORK USING OSPF & EIGRP

Submitted towards partial fulfillment of the Requirement for the award of the degree of

BACHELOR OF ENGINEERING

IN

INFORMATION TECHNOLOGY

BY

K.NAGA SAI ANIRUDH 04-08-5015

AZHAR PASHA 04-08-5021

RAHILA SABA 04-08-5095

Under the supervision of

Mr. N.Md.JUBAIR BASHA (Asst. Professor)

MUFFAKHAM JAH COLLEGE OF ENGINEERING AND TECHNOLOGY

(Affiliated to OSMANIA UNIVERSITY)

BANJARA HILLS, Rd. No. 3, HYDERABAD.

2011


DECLARATION

We hereby declare that the project entitled “HIGH AVAILABILITY OF NETWORK USING OSPF & EIGRP”, done at BHEL, Hyderabad by students of MUFFAKHAM JAH COLLEGE OF ENGINEERING AND TECHNOLOGY, and submitted to the Department of Information Technology, MJCET, Osmania University, Hyderabad, in partial fulfillment of the requirements for the award of the degree of ‘Bachelor of Engineering’, is a record of original project work done by us under the guidance of Mr. Diwaker Chakrapani (Sr. Engineer, B.H.E.L.) and Mr. N.Md. Jubair Basha, Asst. Professor, Department of Information Technology, MUFFAKHAM JAH COLLEGE OF ENGINEERING AND TECHNOLOGY, Hyderabad.

K.NAGA SAI ANIRUDH 04-08-5015

AZHAR PASHA 04-08-5021

RAHILA SABA 04-08-5095


ACKNOWLEDGEMENT

The satisfaction that accompanies the successful completion of any work would be incomplete without the mention of the people who made it possible and whose encouragement and guidance has been a source of inspiration throughout the course of the project.

We specially thank Mr. Diwaker Chakrapani (Sr. Engineer, BHEL), our technical guide, for lending his unconditional support, help and cooperation in making this project a success.

We would also like to thank Mr. A.A. Moiz Khaizer, Head of the Department (Information Technology), and Mr. N.Md. Jubair Basha, our internal mentor, for supporting us in our endeavors.

Furthermore, we thank all the people who were directly or indirectly involved in the successful completion of the project.


ABSTRACT

TITLE: HIGH AVAILABILITY OF NETWORK USING OSPF AND EIGRP

Availability has always been an important design goal for network architectures. As enterprise customers increasingly deploy mission-critical web-based services, they require a deeper understanding of how to design optimal network availability solutions. There are several approaches to implementing high-availability network solutions: a high-availability network design can be based on static routing or on dynamic routing.

Static routing is simply the process of manually entering routes into a device's routing table, either via a configuration file that is loaded when the routing device starts up or by a network administrator who configures the routes by hand.

Dynamic routing protocols are supported by software applications running on the routing device (the router) which dynamically learn network destinations and how to get to them, and also advertise those destinations to other routers. This advertisement function allows all the routers to learn about all the destination networks that exist and how to reach those networks.

A router using dynamic routing will 'learn' the routes to all networks that are directly connected to the device. Next, the router will learn routes from other routers that run the same routing protocol.

In our project we simulate both static and dynamic routing between selected nodes. We demonstrate how complexity increases in dynamic routing protocols such as RIP and OSPF as the number of hops increases, and the advantages of static routing in small and medium campus networks.

OSPF (Open Shortest Path First) defines its hierarchy based on areas. An area is a common grouping of routers and their interfaces. OSPF has one single common backbone area through which all other areas communicate. Because the OSPF algorithm is demanding on router resources, it is necessary to keep the number of routers at 50 or below per OSPF area. Unreliable links trigger frequent recalculations, so they are best confined to small areas.

EIGRP (Enhanced Interior Gateway Routing Protocol) is an advanced distance-vector routing protocol, with optimizations to minimize both the routing instability incurred after topology changes and the use of bandwidth and processing power in the router. Unlike most other distance-vector protocols, EIGRP does not rely on periodic route dumps to maintain its topology table. Routing information is exchanged only upon the establishment of new neighbor adjacencies, after which only changes are sent.


CONTENTS

1. INTRODUCTION
   1.1. Introduction
   1.2. Layer 3 Advantages Over Layer 2 Switches
   1.3. EIGRP
   1.4. OSPF
   1.5. Scope & Purpose
   1.6. Summary

2. SYSTEM ANALYSIS
   2.1. Existing System
   2.2. Proposed System
   2.3. Summary

3. REQUIREMENTS ANALYSIS
   3.1. Feasibility Study
   3.2. Data Flow Diagram
   3.3. UML Diagrams

4. SYSTEM SPECIFICATION
   4.1. Modules
   4.2. System Requirements
        4.2.1. Functional Requirements
        4.2.2. Performance Requirements
        4.2.3. Software Specifications
        4.2.4. Hardware Specifications

5. SYSTEM DESIGN
   5.1. SDLC
   5.2. Architectural Design
   5.3. Use-Case Diagrams
   5.4. Activity Diagrams
   5.5. Class Diagrams
   5.6. Sequence Diagrams
   5.7. Collaboration Diagrams

6. SYSTEM CODING
   6.1. Interface Design
   6.2. Sample Code
   6.3. Screen Shots

7. FORMS
   7.1. User Interface Design
   7.2. Procedural Design
   7.3. Route

8. SYSTEM IMPLEMENTATION
   8.1. Installation Details
   8.2. Software & Hardware Requirements for Case Simulation

9. SYSTEM TESTING
   9.1. Levels of Testing
   9.2. Test Case Design
   9.3. Testing Strategies
        9.3.1. Unit Testing
        9.3.2. Integration Testing
        9.3.3. System Testing
        9.3.4. Condition Testing
        9.3.5. Data Flow Testing
        9.3.6. Loop Testing
        9.3.7. Validation Testing
        9.3.8. Alpha/Beta Testing

10. SYSTEM MAINTENANCE
    10.1. User Manual
    10.2. ASP.NET Accessing Data with C#
    10.3. Making Database Connection
    10.4. Overview of SQL Server 2005
    10.5. Networking Features
         10.5.1. .NET Framework

11. CONCLUSION

12. BIBLIOGRAPHY


CHAPTER-1


1. Introduction

1.1 INTRODUCTION:

The hierarchical design segregates the functions of the network into separate building blocks to provide availability, flexibility, scalability, and fault isolation. The distribution block provides policy enforcement and access control, route aggregation, and the demarcation between the Layer 2 subnet (VLAN) and the rest of the Layer 3 routed network. The core layer of the network provides high-capacity transport between the attached distribution building blocks, and the access layer provides connectivity to end devices such as PCs, PoE devices, and Unified Communications components like IP phones, voicemail, e-mail, and instant messaging.

For campus designs requiring a simplified configuration, common end-to-end troubleshooting tools and the fastest convergence, a distribution block design using Layer 3 switching in the access layer (routed access), in combination with Layer 3 switching at the distribution layer, provides the fastest restoration of voice and data traffic flows. Many of the potential advantages of using a Layer 3 access design include the following:

•Improved convergence

•Simplified multicast configuration

•Dynamic traffic load balancing

•Single control plane

•Single set of troubleshooting tools (e.g. ping and traceroute)

•HSRP / VRRP not required

Of these, perhaps the most significant is the improvement in network convergence times possible when using a routed access design configured with EIGRP or OSPF as the routing protocol. Comparing the convergence times of an optimal Layer 2 access design against those of the Layer 3 access design, a fourfold improvement can be obtained: from 800-900 msec for the Layer 2 design to less than 200 msec for the Layer 3 access design.

Although the sub-second recovery times of the Layer 2 access designs are well within the bounds of tolerance for most enterprise networks, the ability to reduce convergence times to the sub-200 msec range is a significant advantage of the Layer 3 routed access design. This reduction keeps the impact on voice and video to a minimal disruption and supports critical data environments.

For those networks using a routed Access (Layer 3 Access switching) within their Distribution blocks, Cisco recommends that a full-featured routing protocol such as EIGRP or OSPF be implemented as the Campus Interior Gateway Protocol (IGP). Using EIGRP or OSPF end-to-end within the Campus provides faster convergence, better fault tolerance, improved manageability, and better scalability than a design using static routing or RIP, or a design that leverages a combination of routing protocols (for example, RIP redistributed into OSPF).

1.2 Layer 3 Advantages over Layer 2: Utilizing Layer 3 routing technologies at all layers in the hierarchical campus network design allows us to avoid Layer 2 deficiencies and gain benefits such as faster convergence times. Routing protocols can be tuned to converge more quickly in the event of a failure than the spanning-tree protocol can. Routing protocols also fail closed instead of failing open. By this I mean that if a router loses a peer it will close that route and try to find an alternate path, whereas the spanning-tree protocol will broadcast out all ports, creating a broadcast storm, while trying to find a path for packets to take.


You reduce the risk of layer 2 attacks such as bridging loops. If the network comes under attack from a bridging loop you are able to reduce the impact to a single portion of the network, which would most likely be a single access layer switch instead of a larger portion of the network spanning multiple switches and possibly the entire LAN.

1.3 Enhanced Interior Gateway Routing Protocol (EIGRP): EIGRP is a Cisco-proprietary routing protocol. It has the advantage of being simple to configure, converges quickly without tuning, and is scalable to larger network topologies. The biggest disadvantage of utilizing EIGRP as the routing protocol in a fully routed network is that you limit yourself to Cisco hardware. That may be fine if you intend to run a Cisco-only shop for your network requirements, but if there is any doubt of that then EIGRP may not be the best choice.

1.4 Open Shortest Path First (OSPF): OSPF is a routing protocol that is an open standard. That means any vendor can implement OSPF and have it interoperate with devices from other vendors; for example, a Cisco router can send and receive OSPF updates from a Juniper router. This can be very advantageous in an environment using network equipment from multiple vendors. The disadvantage of OSPF is that it requires more tuning than EIGRP to achieve similar convergence times in the event of a failure.
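
The difference between periodic full-table advertisements and EIGRP-style incremental updates can be made concrete with a small sketch. The following C# toy model (all names hypothetical; this illustrates only the update discipline, not EIGRP's actual DUAL algorithm or packet formats) advertises a route only when it is new or its metric changes:

using System;
using System.Collections.Generic;

class ToyRouter
{
    // Destination network -> metric.
    readonly Dictionary<string, int> _routes = new Dictionary<string, int>();

    // Periodic-dump behaviour (RIP-style): the whole table is advertised
    // every update interval, whether or not anything changed.
    public IEnumerable<KeyValuePair<string, int>> PeriodicDump()
    {
        return _routes;
    }

    // Incremental behaviour (EIGRP-style): a route is advertised only
    // when it is new or its metric has changed.
    public void UpdateRoute(string network, int metric, Action<string, int> advertise)
    {
        int old;
        if (!_routes.TryGetValue(network, out old) || old != metric)
        {
            _routes[network] = metric;
            advertise(network, metric);   // only the changed route is sent
        }
    }
}

class ToyRouterDemo
{
    static void Main()
    {
        var router = new ToyRouter();
        Action<string, int> send =
            (net, m) => Console.WriteLine("advertise " + net + " metric " + m);

        router.UpdateRoute("10.1.0.0/16", 10, send);   // new route: advertised
        router.UpdateRoute("10.1.0.0/16", 10, send);   // unchanged: silent
        router.UpdateRoute("10.1.0.0/16", 25, send);   // metric change: advertised
    }
}

The duplicate update is suppressed, which is the behaviour described for EIGRP in the abstract: after adjacencies are established, only changes are sent.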

1.5 Scope and Purpose:

Network routing problems are generally multidimensional in nature, and in many cases the explicit consideration of multiple objectives is appropriate. Objectives related to cost, time, accessibility, environmental impact, reliability and risk are appropriate for selecting the most satisfactory (“best compromise”) route in many problems. In general there is no single optimal solution in a multi-objective problem but rather a set of non-dominated solutions from which the decision maker must select the most satisfactory. However, generating and presenting the whole set of non-dominated paths to a decision maker is in general not effective, because the number of these paths can be very large. Interactive procedures are adequate to overcome these drawbacks.

Analysis:

Graph operations: the incidentEdges method is called once for each vertex.

Label operations: we set/get the distance and locator labels of a vertex z O(deg(z)) times; setting or getting a label takes O(1) time.

Priority queue operations: each vertex is inserted once into and removed once from the priority queue, where each insertion or removal takes O(log n) time; the key of a vertex w in the priority queue is modified at most deg(w) times, where each key change takes O(log n) time.

Dijkstra's algorithm therefore runs in O((n + m) log n) time, provided the graph is represented by the adjacency list structure. Recall that Σv deg(v) = 2m; since the graph is connected, the running time can also be expressed as O(m log n).

Extension:

Using the template method pattern, we can extend Dijkstra's algorithm to return a tree of shortest paths from the start vertex to all other vertices. We store with each vertex a third label: the parent edge in the shortest-path tree. In the edge relaxation step, we update the parent label.
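
To make the extension concrete, here is a minimal, self-contained C# sketch (the names Dijkstra and ShortestPathTree and the integer-vertex graph encoding are illustrative assumptions, not taken from the project code). It keeps a parent label per vertex, updated in the edge relaxation step, so the full shortest-path tree can be read back from the parent map. A SortedSet of (distance, vertex) pairs stands in for the priority queue, giving the O((n + m) log n) bound discussed above:

using System;
using System.Collections.Generic;
using System.Linq;

static class Dijkstra
{
    // graph: adjacency list, vertex -> list of (neighbour, weight) pairs;
    // every vertex is assumed to appear as a key.
    public static Dictionary<int, int> ShortestPathTree(
        Dictionary<int, List<Tuple<int, int>>> graph, int start,
        out Dictionary<int, int> parent)
    {
        var dist = graph.Keys.ToDictionary(v => v, v => int.MaxValue);
        parent = new Dictionary<int, int>();
        dist[start] = 0;

        // (distance, vertex) pairs ordered by distance: the priority queue.
        var pq = new SortedSet<Tuple<int, int>> { Tuple.Create(0, start) };

        while (pq.Count > 0)
        {
            var top = pq.Min;
            pq.Remove(top);
            int u = top.Item2;
            if (top.Item1 > dist[u]) continue;      // skip stale queue entries

            foreach (var edge in graph[u])          // edge relaxation step
            {
                int v = edge.Item1, w = edge.Item2;
                if (dist[u] + w < dist[v])
                {
                    dist[v] = dist[u] + w;
                    parent[v] = u;                  // update the parent label
                    pq.Add(Tuple.Create(dist[v], v));
                }
            }
        }
        return dist;
    }
}

Following the parent links from any vertex back to the start vertex reproduces the shortest path to that vertex.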

Summary:

Utilizing a fully routed network is an efficient way of providing a reliable and available hierarchical network. It has many advantages over a flat network and is more effective than utilizing layer 2 in the access layer only. While there are some drawbacks in the design, I feel that those drawbacks are minimal when considering a network design.


CHAPTER-2


2. SYSTEM ANALYSIS

2.1 Existing System: “Shortest path is an optimization problem that’s relevant to a wide range of applications, such as network routing, gaming, circuit design, and mapping,” Goldberg says. “The industry comes up with new applications all the time, creating different parameters for the problem. Technology with more speed and capacity allows us to solve bigger problems, so the scope of the shortest-path problem itself has become more ambitious. And now there are Web-based services, where computing time must be minimized so that we can respond to queries in real time.” The shortest-path problem, one of the fundamental quandaries in computing and graph theory, is intuitive to understand and simple to describe. In mapping terms, it is the problem of finding the quickest way to get from one location to another. Expressed more formally, in a graph in which vertices are joined by edges and in which each edge has a value, or cost, it is the problem of finding the lowest-cost path between two vertices. There are already several graph-search algorithms that solve this basic challenge and its variations, so why is shortest path perennially fascinating to computer scientists?

2.2 Proposed System: The classic general algorithm for shortest path is Dijkstra's algorithm, first presented in 1959. Although this solves the problem, it does so essentially by searching the entire graph to compute its lowest-cost path. For small graphs this is feasible, but for large graphs the computing time simply takes too long. For example, in a solution for driving directions, the road network is represented as a graph, where each vertex corresponds to an intersection and each edge to a segment of road between intersections. A complete map of the U.S. road system contains more than 20 million intersections, a huge amount of data to process if the algorithm has to search every segment of the graph. Shortest-path algorithms are applied to automatically find directions between physical locations; they can also be used to find an optimal sequence of choices to reach a certain goal state, to establish lower bounds on the time needed to reach a given state, or to find a solution that uses the minimum possible number of moves.

Summary: Availability has always been an important design goal for network architectures. As enterprise customers increasingly deploy mission-critical web-based services, they require a deeper understanding of designing optimal network availability solutions. OSPF provides much better failure detection and recovery than RIPv2 and is recommended for inter-switch availability. However, OSPF is not recommended as a routing protocol on a server because of potential security, scalability, and performance issues. The calculation of route tables from the Link State Database in OSPF, for example, can impact server performance, depending on the number of routes in the autonomous system, the rate of changes in routing state, and the size of the server.


CHAPTER-3


3. REQUIREMENTS ANALYSIS:

3.1 Feasibility study:

Preliminary investigation examines project feasibility: the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economical feasibility of adding new modules and debugging the old running system. Any system is feasible given unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:

Technical Feasibility

Operational Feasibility

Economical Feasibility

Technical Feasibility

The technical issues usually raised during the feasibility stage of the investigation include the following:

Does the necessary technology exist to do what is suggested?

Does the proposed equipment have the technical capacity to hold the data required to use the new system?

Will the proposed system provide adequate responses to inquiries, regardless of the number or location of users?

Can the system be upgraded if developed?

Are there technical guarantees of accuracy, reliability, ease of access and data security?

Earlier, no system existed to cater to the needs of the ‘Secure Infrastructure Implementation System’. The current system developed is technically feasible. It is a web-based user interface for audit workflow at NIC-CSD, and thus provides easy access to the users. The database’s purpose is to create, establish and maintain a workflow among various entities in order to facilitate all concerned users in their various capacities or roles. Permission to the users would be granted based on the roles specified. Therefore, it provides the technical guarantee of accuracy, reliability and security. The software and hardware requirements for the development of this project are not many, and are already available in-house at NIC or are available free as open source. The work for the project is done with the current equipment and existing software technology. Necessary bandwidth exists for providing fast feedback to the users irrespective of the number of users using the system.

Operational Feasibility

Proposed projects are beneficial only if they can be turned into information systems that will meet the organization’s operating requirements. Operational feasibility aspects of the project are to be taken as an important part of the project implementation. Some of the important issues raised to test the operational feasibility of a project include the following:

Is there sufficient support for the management from the users?

Will the system be used and work properly once it is developed and implemented?

Will there be any resistance from the users that will undermine the possible application benefits?

This system is targeted to be in accordance with the above-mentioned issues. The management issues and user requirements have been taken into consideration beforehand, so there is no question of resistance from the users that could undermine the possible application benefits. The well-planned design would ensure the optimal utilization of the computer resources and would help in the improvement of performance status.

Economical Feasibility

A system that can be developed technically, and that will be used if installed, must still be a good investment for the organization. In the economical feasibility study, the development cost of creating the system is evaluated against the ultimate benefit derived from the new system; financial benefits must equal or exceed the costs. The system is economically feasible: it does not require any additional hardware or software. Since the interface for this system is developed using the existing resources and technologies available at NIC, there is only nominal expenditure, and economical feasibility is certain.


3.2 Data Flow Diagrams (DFD):

A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams. The physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using the two familiar notations of Yourdon and of Gane and Sarson. Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number that is used for identification purposes. DFDs are developed in several levels: each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD.

Data Flow Diagram Notations:

In the DFD, there are four symbols:

1. A square defines a source (originator) or destination of system data.

2. An arrow identifies data flow; it is the pipeline through which information flows.

3. A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.

4. An open rectangle is a data store: data at rest, or a temporary repository of data.


Dataflow diagram:


3.3 UML Diagrams:

The Unified Modeling Language allows the software engineer to express an analysis model using a modeling notation that is governed by a set of syntactic, semantic and pragmatic rules. A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

This view represents the system from the user's perspective. The analysis representation describes a usage scenario from the end-user's perspective.

Structural Model View

In this model the data and functionality are viewed from inside the system; this model view models the static structures.

Behavioral Model View

This view represents the dynamic, behavioral parts of the system, depicting the interactions between the various structural elements described in the user model and structural model views.

Implementation Model View

In this view the structural and behavioral parts of the system are represented as they are to be built.

Environmental Model View

In this view the structural and behavioral aspects of the environment in which the system is to be implemented are represented.

UML is specifically constructed through two different domains:

UML analysis modeling, which focuses on the user model and structural model views of the system.

UML design modeling, which focuses on the behavioral modeling, implementation modeling and environmental model views.


Use case diagrams represent the functionality of the system from a user's point of view. Use cases are used during requirements elicitation and analysis to represent the functionality of the system, and they focus on the behavior of the system from an external point of view. Actors are external entities that interact with the system; examples of actors include users like an administrator or a bank customer, or another system like a central database.


CHAPTER-4


4. SYSTEM SPECIFICATION

4.1 Modules:

1. Weighted module

2. Location adding module

3. Dijkstra's algorithm

4. Calculating shortest path

1. Weighted module:

In this module you assign weight values to the edges, or you can have randomized values assigned to the edges between vertices.

2. Location adding module:

This module lets you place locations dynamically on the white space, set up relations between the different points heterogeneously, and draw the lines between them.

3. Dijkstra's algorithm:

After completing the weighted module and the location module, our algorithm is applied to the resulting graph to find the shortest paths. To solve the dynamic shortest path (DSP) problem, one could apply Dijkstra's algorithm repeatedly to recompute the shortest-path trees (SPTs). However, this well-studied static algorithm may become ineffective when only a small number of edges in a graph experience weight changes; therefore, researchers have been studying dynamic algorithms to minimize shortest-path re-computation time.

4. Calculate module:

This module gives the best possible shortest paths to a specified vertex using Dijkstra's algorithm.

Architecture Diagram:


4.2 SYSTEM REQUIREMENTS:


4.2.1 Functional Requirements:

NS2 Simulator : installed on a system within the network

Layer 3 Switches

Routers & Hubs

Repeaters

4.2.2 Performance Requirements:

Serial and parallel connections of the various nodes throughout the campus are available as per the need for network connectivity. The overall area covered by the connected network is around 20 km (B.H.E.L. Campus, Hyderabad).

4.2.3 Software Specifications:

Operating System : Windows XP/2003

User Interface : Windows Application

Framework : MS Visual Studio (.NET Framework 3.5)

Programming Language : C#.net

4.2.4 Hardware Specifications:

Processor : Pentium IV

Hard Disk : 40GB

RAM : 256MB


CHAPTER-5

5. SYSTEM DESIGN

5.1 SDLC:


Systems Development Life Cycle (SDLC) is any logical process used by a systems analyst to develop an information system, including requirements, validation, training and user ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.

Computer systems have become more complex and often (especially with the advent of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models have been created: "waterfall," "fountain," "spiral," "build and fix," "rapid prototyping" and "synchronize and stabilize."

SDLC models can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on light-weight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as Rational Unified Process and DSDM, focus on limited project scope and on expanding or improving products through multiple iterations. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results.

Some agile and iterative proponents confuse the term SDLC with sequential or "more traditional" processes; however, SDLC is an umbrella term for all methodologies for the design, implementation, and release of software. In project management a project has both a life cycle and a "systems development life cycle," during which a number of typical activities occur. The project life cycle (PLC) encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements.

Model of the Systems Development Life Cycle with the Maintenance bubble highlighted:


Spiral SDLC

Main Highlights

Main characteristics:
• A placeholder (“framework” or “meta-model”) for other, less elaborate development models
• Iterative
• Prototype-oriented
• Starts with planning and ends with customer evaluation
• Low risk

Spiral “areas”:
• Planning: getting requirements; project planning (based on the initial requirements, and later on customer evaluation)
• Risk analysis: cost/benefit and threats/opportunities analysis, based on the initial requirements and later on customer feedback
• Engineering: preview it, then do it
• Customer evaluation


5.2 Architectural Design:

Architecture flow:

The architecture diagram below mainly represents the flow of requests from users to the database through servers. In this scenario the overall system is designed in three tiers, separated into three layers called the presentation layer, the business logic layer and the data link layer; this project was developed using 3-tier architecture. The top tier is the client, which contains query and responding tools and analysis tools. The logic layer is used to fetch information from the database for the user's purposes. The bottom tier, the database, contains a large amount of data. Queries are passed by the user from the top tier to the database bottom layer through the server, that is, the middle layer or tier. Database systems are designed to manage large bodies of information; management of data involves both defining structures for the storage of information and providing mechanisms for the manipulation of information.

[Spiral model figure: Plan, Assess risks, Build and Evaluate quadrants, with costs, threats and prototypes, and the “point of no return” marked on the plane of application.]

System Architecture:

1. Three Tier / Three Layer Model

1. Tier: A tier indicates a physical separation of components, which may mean different assemblies such as DLLs or EXEs on the same server or on multiple servers. The Data Tier has no direct interaction with the Presentation Tier; there is an intermediate tier, called the Business Tier, which is mainly responsible for passing data from the Data Tier to the Presentation Tier and for adding defined business logic to the data.

Figure.1


Figure.2

2. Layer: Layer indicates logical separation of components, such as having distinct namespaces and classes for the Database Access Layer, Business Logic Layer and User Interface Layer.

Figure.3

3. The Data Tier is basically the server which stores all the application’s data. The Data Tier contains database tables, XML files and other means of storing application data.

4. The Business Tier mainly works as the bridge between the Data Tier and the Presentation Tier. All data passes through the Business Tier before reaching the Presentation Tier. The Business Tier is the sum of the Business Logic Layer, Data Access Layer, Value Objects and other components used to add business logic.

5. The Presentation Tier is the tier in which the users interact with the application. The Presentation Tier contains shared UI code, code-behind and designers used to present information to the user.


Figure.4

6. The figure above is a mixture of Three Tier and Three Layer architecture. Here we can clearly see the difference between a tier and a layer. Since each component is independent of the others, they are easily maintainable without changing the whole code.

7. This approach is really very important when several developers are working on the same project and some modules need to be re-used in other projects. In this way, we can distribute work among developers and also maintain it in the future without many problems.

8. Testing is also a very important consideration for the architecture when writing test cases for the project. Since it is a modular architecture, it is very handy to test each module and to trace out bugs without going through the entire code, as the sketch below illustrates.
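
As a minimal sketch of this layering (all names hypothetical; the real project's namespaces may differ), each layer lives in its own namespace and talks only to the layer directly below it, which is what makes the modules independently maintainable and testable:

using System.Collections.Generic;

namespace DataAccessLayer
{
    public class LocationRepository
    {
        // Data tier stand-in: a real implementation would read from
        // database tables or XML files.
        public List<string> GetAllLocations()
        {
            return new List<string> { "2", "0", "1" };
        }
    }
}

namespace BusinessLogicLayer
{
    using DataAccessLayer;

    public class LocationService
    {
        readonly LocationRepository _repository = new LocationRepository();

        // Business tier: applies logic (here, sorting) before the data
        // reaches the user interface.
        public List<string> GetLocationsSorted()
        {
            List<string> locations = _repository.GetAllLocations();
            locations.Sort();
            return locations;
        }
    }
}

namespace PresentationLayer
{
    using BusinessLogicLayer;

    public class Program
    {
        // Presentation tier: talks only to the business layer, never
        // directly to the data layer.
        public static void Main()
        {
            foreach (string name in new LocationService().GetLocationsSorted())
                System.Console.WriteLine(name);
        }
    }
}

Because each piece sits behind its own boundary, a module such as LocationService can be unit-tested or swapped without touching the UI code, which is the maintainability argument made in points 6-8 above.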


5.3 USE CASE DIAGRAMS:

Use case diagrams model the functionality of a system using actors and use cases. UCDs are fundamentally different from sequence diagrams or flow charts because they do not make any attempt to represent the order or number of times that the system's actions and sub-actions should be executed.

Use case:

Use cases are services or functions provided by the system to its users. A use case describes a sequence of actions that provide something of measurable value to an actor, and is drawn as a horizontal ellipse.

Actors:

An actor is a person, organization, or external system that plays a role in one or more interactions with your system. Actors are drawn as stick figures.

Associations: 

Associations between actors and use cases are indicated by solid lines. An association exists whenever an actor is involved with an interaction described by a use case. Associations are modeled as lines connecting use cases and actors to one another, with an optional arrowhead on one end of the line. The arrowhead is often used to indicate the direction of the initial invocation of the relationship, or to indicate the primary actor within the use case.

System boundary boxes (optional):

You can draw a rectangle around the use cases, called the system boundary box, to indicate the scope of your system. Anything within the box represents functionality that is in scope, and anything outside the box is not. System boundary boxes are rarely used, although on occasion I have used them to identify which use cases will be delivered in each major release of a system.

Basic Use Case Diagram Symbols and Notations:

Use Case:

Draw use cases using ovals. Label with ovals with verbs that represent the system's

functions.


Actors:

Actors are the users of a system. When one system is the actor of another system, label the actor system with the actor stereotype.

Relationships:

Illustrate relationships between an actor and a use case with a simple line. For relationships among use cases, use arrows labeled either "uses" or "extends." A "uses" relationship indicates that one use case is needed by another in order to perform a task. An "extends" relationship indicates alternative options under a certain use case.


[Use case diagram: the User interacts with the use cases "dynamic vertex", "manual interactions", "randomizations", "draw edges", "assign weights", "calculate" and "display".]

System:

Draw your system's boundaries using a rectangle that contains use cases. Place actors outside the system's boundaries.


5.4 ACTIVITY DIAGRAMS:

Activity Diagram:

An activity diagram illustrates the dynamic nature of a system by modeling the flow of control from activity to activity. An activity represents an operation in the system that results in a change in the state of the system. Typically, activity diagrams are used to model workflow or business processes and internal operations. Activity diagrams can show activities that are conditional or parallel.

UML activity diagrams are used to document the logic of a single operation or method, a single use case, or the flow of logic of a business process. In many ways, activity diagrams are the object-oriented equivalent of flow charts and data flow diagrams (DFDs) from structured development. Activity diagrams are also useful for analyzing a use case by describing what actions need to take place and when they should occur.

Basic Activity Diagram Symbols and Notations

Action states:

Action states represent the non-interruptible actions of objects.

Action flow:

Action flow arrows illustrate the relationships among action states.

Initial state:

A filled circle followed by an arrow represents the initial action state.


Final state:

An arrow pointing to a filled circle nested inside another circle represents the final action state.

Branching:

A diamond represents a decision with alternate paths. The outgoing alternates should be labeled with a condition or guard expression. You can also label one of the paths “else.”

Synchronization:

A synchronization bar helps illustrate parallel transitions. Synchronization is also called forking and joining.


Activity diagram:


[Activity diagram: Admin — user login → generate session id → authenticate user → data access.]

5.5 Class diagram:

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.

Association:


Class diagram example of association between two classes:

An association represents a family of links. Binary associations (with two ends) are normally represented as a line, with each end connected to a class box. Higher-order associations can be drawn with more than two ends; in such cases, the ends are connected to a central diamond.

Aggregation is a variant of the "has a" or association relationship; aggregation is more specific than association. It is an association that represents a part-whole or part-of relationship. As a type of association, an aggregation can be named and have the same adornments that an association can. However, an aggregation may not involve more than two classes.

Class diagram
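
As a rough illustration of how these relationships typically map to code (hypothetical classes, since UML relationships have no single canonical C# translation), a plain object reference models an association, while a collection of parts models an aggregation:

using System.Collections.Generic;

public class Driver { }

public class Wheel { }

public class Car
{
    // Association: the car is linked to a driver; the two objects have
    // fully independent lifetimes.
    public Driver Driver;

    // Aggregation: a part-whole ("has a") relationship; the wheels are
    // parts of the car but can still exist independently of it.
    public List<Wheel> Wheels = new List<Wheel>();
}

In both cases the referenced objects can outlive the Car; that independence is what separates association and aggregation from the stronger composition relationship.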


Object diagram:

An object diagram in the Unified Modeling Language (UML) is a diagram that shows a complete or partial view of the structure of a modeled system at a specific time. An object diagram focuses on some particular set of object instances and attributes, and the links between the instances. A correlated set of object diagrams provides insight into how an arbitrary view of a system is expected to evolve over time. Object diagrams are more concrete than class diagrams, and are often used to provide examples, or act as test cases, for the class diagrams.

State diagram:

A state diagram is a type of diagram used in computer science and related fields to describe the behavior of systems. State diagrams require that the system described is composed of a finite number of states; sometimes this is indeed the case, while at other times it is a reasonable abstraction. There are many forms of state diagrams, which differ slightly and have different semantics.

[State diagram: Login / Registration → identify user → authentication → access data.]


5.6 Sequence diagram:

A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event-trace diagrams, event scenarios, and timing diagrams.

The diagram on the right describes the sequences of messages of a (simple) restaurant system. This diagram represents a Patron ordering food and wine, drinking the wine then eating the food, and finally paying for the food. The dotted lines extending downwards indicate the timeline; time flows from top to bottom. The arrows represent messages (stimuli) from an actor or object to other objects. For example, the Patron sends the message 'pay' to the Cashier. Half arrows indicate asynchronous method calls.

Sequence diagram:


[Sequence diagram: the User sends messages to the vertex, weights, edges, calculate and display objects — 1: enter dynamic vertex(), 2: enter weights(), 3: draw edges(), 4: calculate(), 5: result displays(), 6: displays().]

5.7 Collaboration Diagram:

UML collaboration diagrams (interaction diagrams) illustrate the relationships and interactions between software objects. They require use cases, system operation contracts, and a domain model to already exist. A collaboration diagram illustrates messages being sent between classes and objects (instances). A diagram is created for each system operation that relates to the current development cycle (iteration).

When creating collaboration diagrams, patterns are used to justify relationships. Patterns are best-practice principles for assigning responsibilities to objects and are described further in the section on patterns. There are two main types of patterns used for assigning responsibilities: evaluative patterns and driving patterns.

Each system operation initiates a collaboration diagram; therefore, there is a collaboration diagram for every system operation. Consider an example diagram for purchasing a bus ticket. The route and seat objects are multiobjects, which means they are collections of objects. The message "purchaseTicket(route, preference)" is the initializing message, generated by the initializing actor; all other messages are generated by the system between objects. The initializing message is not numbered, and the first message after it is numbered. Messages that are dependent on previous messages are numbered based on the number of the message they are dependent on: the message "r = findRoute(route)" is numbered "1.1" since it is dependent on the message "s = findSeat(route, preference)".

Collaboration Diagram:


[Collaboration diagram: the User and the vertex, weights, edges, calculate and display objects exchange the messages 1: enter dynamic vertex(), 2: enter weights(), 3: draw edges(), 4: calculate(), 5: result displays(), 6: displays().]


CHAPTER-6

6. SYSTEM CODING


6.1 Interface Design:

public partial class Form1 : Form { bool _addLoc = false; List<GuiLocation> _guiLocations = new List<GuiLocation>(); List<Connection> _connections = new List<Connection>();

GuiLocation _selectedGuiLocation=null; Color normalColor;

public Form1() { InitializeComponent(); normalColor = btnAddLoc.BackColor; }

private void btnAddLoc_Click(object sender, EventArgs e) { if (_addLoc) { _addLoc = false; btnAddLoc.BackColor = normalColor; } else { _addLoc = true; btnAddLoc.BackColor = Color.Red; } }

private void pnlView_Click(object sender, EventArgs e) {

}

private void pnlView_MouseDown(object sender, MouseEventArgs e)


{ if (_addLoc) {

if (getGuiLocationAtPoint(e.X, e.Y) == null) { GuiLocation _guiLocation = new GuiLocation(); _guiLocation.Identifier = _guiLocations.Count().ToString(); _guiLocation.X = e.X; _guiLocation.Y = e.Y; _guiLocations.Add(_guiLocation); cmbLocations.Items.Add(_guiLocation); } } else { GuiLocation _guiLocation = getGuiLocationAtPoint(e.X, e.Y); if (_guiLocation != null) { if (_selectedGuiLocation != null) { int weight = 0; if (chkRandom.Checked) { Random random=new Random(); weight = random.Next(1, 25); } else { weight = int.Parse(txtweight.Text); } Connection connection = new Connection(_selectedGuiLocation, _guiLocation, weight); _connections.Add(connection); _selectedGuiLocation.Selected = false;

_selectedGuiLocation = null; } else { _guiLocation.Selected = true; _selectedGuiLocation = _guiLocation; } } } PaintGui(); }

GuiLocation getGuiLocationAtPoint(int x, int y) { foreach (GuiLocation _guiLocation in _guiLocations) { int x2=x-_guiLocation.X; int y2=y-_guiLocation.Y; int xToCompare = _guiLocation.Width / 2; int yToCompare = _guiLocation.Width / 2;


if (x2 >= xToCompare * -1 && x2 < xToCompare && y2 > yToCompare * -1 && y2 < yToCompare) { return _guiLocation; } } return null; }

private void pnlView_Paint(object sender, PaintEventArgs e) { PaintGui(); }

void PaintGui() { Brush _brushRed = new SolidBrush(Color.Red); Brush _brushBlack = new SolidBrush(Color.Black); Brush _brushWhite = new SolidBrush(Color.White); Brush _brushBlue = new SolidBrush(Color.Blue); Font _font = new Font(FontFamily.GenericSansSerif, 15); Pen _penBlue = new Pen(_brushBlue); Pen _penRed = new Pen(_brushRed);

foreach (GuiLocation _guiLocation in _guiLocations) { int _x = _guiLocation.X - _guiLocation.Width / 2; int _y = _guiLocation.Y - _guiLocation.Width / 2;

if (_guiLocation.Selected) pnlView.CreateGraphics().FillEllipse(_brushRed, _x, _y, _guiLocation.Width, _guiLocation.Width); else pnlView.CreateGraphics().FillEllipse(_brushBlack, _x, _y, _guiLocation.Width, _guiLocation.Width); pnlView.CreateGraphics().DrawString(_guiLocation.Identifier, _font, _brushWhite, _x, _y); }

foreach (Connection _connection in _connections) { Point point1 = new Point(((GuiLocation)_connection.A).X, ((GuiLocation)_connection.A).Y); Point point2 = new Point(((GuiLocation)_connection.B).X, ((GuiLocation)_connection.B).Y);

Point Pointref = Point.Subtract(point2, new Size(point1)); double degrees = Math.Atan2(Pointref.Y, Pointref.X); double cosx1 = Math.Cos(degrees); double siny1 = Math.Sin(degrees);

double cosx2 = Math.Cos(degrees + Math.PI); double siny2 = Math.Sin(degrees + Math.PI);


int newx = (int)(cosx1 * (float)((GuiLocation)_connection.A).Width + (float)point1.X); int newy = (int)(siny1 * (float)((GuiLocation)_connection.A).Width + (float)point1.Y);

int newx2 = (int)(cosx2 * (float)((GuiLocation)_connection.B).Width + (float)point2.X); int newy2 = (int)(siny2 * (float)((GuiLocation)_connection.B).Width + (float)point2.Y);

if (_connection.Selected) { pnlView.CreateGraphics().DrawLine(_penRed, new Point(newx, newy), new Point(newx2, newy2)); pnlView.CreateGraphics().FillEllipse(_brushRed, newx - 4, newy - 4, 8, 8); } else { pnlView.CreateGraphics().DrawLine(_penBlue, new Point(newx, newy), new Point(newx2, newy2)); pnlView.CreateGraphics().FillEllipse(_brushBlue, newx - 4, newy - 4, 8, 8); } pnlView.CreateGraphics().DrawString(_connection.Weight.ToString(), _font, _brushBlue, newx - 4, newy - 4); } }
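
// Note (added commentary, not in the original source): PaintGui draws with
// pnlView.CreateGraphics(), so nothing it draws survives the next time the
// panel is invalidated; the more usual WinForms approach is to draw with the
// e.Graphics object supplied to the Paint event. It still works here because
// pnlView_Paint calls PaintGui again on every repaint.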

private void btnCalc_Click(object sender, EventArgs e) { if (cmbLocations.SelectedIndex != -1) { RouteEngine.RouteEngine _routeEngine = new RouteEngine.RouteEngine(); foreach (Connection connection in _connections) { _routeEngine.Connections.Add(connection); }

foreach (Location _location in _guiLocations) { _routeEngine.Locations.Add(_location); }

Dictionary<Location, Route> _shortestPaths = _routeEngine.CalculateMinCost((Location)cmbLocations.SelectedItem); listBox1.Items.Clear();

List<Location> _shortestLocations = (List<Location>)(from s in _shortestPaths orderby s.Value.Cost select s.Key).ToList(); foreach (Location _location in _shortestLocations) { listBox1.Items.Add(_shortestPaths[_location]); }


} else { MessageBox.Show("Please select a position"); } }

private void listBox1_SelectedIndexChanged(object sender, EventArgs e) { Route route = (Route)listBox1.SelectedItem; foreach (Connection _connection in _connections) { _connection.Selected = false; }

foreach (Connection _connection in route.Connections) { _connection.Selected = true; } PaintGui(); }

}

AlgorithmsCls:

public class GuiLocation : Location
{
    int x, y;
    bool selected;

public int Width { get { return 25; } }

public bool Selected { get { return selected; } set { selected = value; } }

public int Y { get { return y; } set { y = value; } }

public int X { get { return x; } set { x = value; } }

}


public class Connection { Location _a, _b; int _weight; bool selected=false;

public bool Selected { get { return selected; } set { selected = value; } }

public Connection(Location a, Location b, int weight) { this._a = a; this._b = b; this._weight = weight; } public Location B { get { return _b; } set { _b = value; } }

public Location A { get { return _a; } set { _a = value; } }

public int Weight { get { return _weight; } set { _weight = value; } }

}

public class Location
{
    string _identifier;

    public Location()
    {
    }

    public string Identifier
    {
        get { return this._identifier; }
        set { this._identifier = value; }
    }

    public override string ToString()
    {
        return _identifier;
    }
}

public class Route
{
    int _cost;


List<Connection> _connections; string _identifier;

public Route(string _identifier) { _cost = int.MaxValue; _connections = new List<Connection>(); this._identifier = _identifier; }

public List<Connection> Connections { get { return _connections; } set { _connections = value; } } public int Cost { get { return _cost; } set { _cost = value; } }

public override string ToString() { return "Id:" + _identifier + " Cost:" + Cost; } }

public class RouteEngine { List<Connection> _connections; List<Location> _locations;

public List<Location> Locations { get { return _locations; } set { _locations = value; } } public List<Connection> Connections { get { return _connections; } set { _connections = value; } }

public RouteEngine() { _connections = new List<Connection>(); _locations = new List<Location>(); }

/// <summary>
/// Calculates the shortest route to all the other locations.
/// </summary>
/// <param name="_startLocation"></param>
/// <returns>List of all locations and their shortest route</returns>
public Dictionary<Location, Route> CalculateMinCost(Location _startLocation)
{
    // Initialise a new empty route list; the Route constructor sets each
    // route cost to int.MaxValue.
    Dictionary<Location, Route> _shortestPaths = new Dictionary<Location, Route>();
    // Initialise a new empty handled-locations list.
    List<Location> _handledLocations = new List<Location>();
    foreach (Location location in _locations)
    {
        _shortestPaths.Add(location, new Route(location.Identifier));
    }

    // The start position has cost 0.
    _shortestPaths[_startLocation].Cost = 0;

    // When all locations are handled, stop the engine and return the result.
    while (_handledLocations.Count != _locations.Count)
    {
        // Order the locations by their current route cost.
        List<Location> _shortestLocations = (from s in _shortestPaths
                                             orderby s.Value.Cost
                                             select s.Key).ToList();

        Location _locationToProcess = null;

        // Search for the nearest location that isn't handled yet.
        foreach (Location _location in _shortestLocations)
        {
            if (!_handledLocations.Contains(_location))
            {
                // If the cost equals int.MaxValue, there are no more possible
                // connections to the remaining locations.
                if (_shortestPaths[_location].Cost == int.MaxValue)
                    return _shortestPaths;
                _locationToProcess = _location;
                break;
            }
        }

        // Select all connections whose start position is the location to process.
        var _selectedConnections = from c in _connections
                                   where c.A == _locationToProcess
                                   select c;

        // Iterate through those connections and search for a shorter route.
        foreach (Connection conn in _selectedConnections)
        {
            if (_shortestPaths[conn.B].Cost > conn.Weight + _shortestPaths[conn.A].Cost)
            {
                _shortestPaths[conn.B].Connections = _shortestPaths[conn.A].Connections.ToList();
                _shortestPaths[conn.B].Connections.Add(conn);
                _shortestPaths[conn.B].Cost = conn.Weight + _shortestPaths[conn.A].Cost;
            }
        }

        // Mark the location as processed.
        _handledLocations.Add(_locationToProcess);
    }

    return _shortestPaths;
}
}

6.2 SAMPLE CODE:

using System;


using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using RouteEngine;

namespace Gui{ public partial class Form1 : Form { bool _addLoc = false; List<GuiLocation> _guiLocations = new List<GuiLocation>(); List<Connection> _connections = new List<Connection>();

GuiLocation _selectedGuiLocation=null; Color normalColor;

public Form1() { InitializeComponent(); normalColor = btnAddLoc.BackColor; }

private void btnAddLoc_Click(object sender, EventArgs e) { if (_addLoc) { _addLoc = false; btnAddLoc.BackColor = normalColor; } else { _addLoc = true; btnAddLoc.BackColor = Color.Red; } }

private void pnlView_Click(object sender, EventArgs e) {

}

private void pnlView_MouseDown(object sender, MouseEventArgs e) { if (_addLoc) {

if (getGuiLocationAtPoint(e.X, e.Y) == null) { GuiLocation _guiLocation = new GuiLocation(); _guiLocation.Identifier = _guiLocations.Count().ToString(); _guiLocation.X = e.X; _guiLocation.Y = e.Y; _guiLocations.Add(_guiLocation); cmbLocations.Items.Add(_guiLocation);


} } else { GuiLocation _guiLocation = getGuiLocationAtPoint(e.X, e.Y); if (_guiLocation != null) { if (_selectedGuiLocation != null) { int weight = 0; if (chkRandom.Checked) { Random random=new Random(); weight = random.Next(1, 25); } else { weight = int.Parse(txtweight.Text); } Connection connection = new Connection(_selectedGuiLocation, _guiLocation, weight); _connections.Add(connection); _selectedGuiLocation.Selected = false;

_selectedGuiLocation = null; } else { _guiLocation.Selected = true; _selectedGuiLocation = _guiLocation; } } } PaintGui(); }

GuiLocation getGuiLocationAtPoint(int x, int y) { foreach (GuiLocation _guiLocation in _guiLocations) { int x2=x-_guiLocation.X; int y2=y-_guiLocation.Y; int xToCompare = _guiLocation.Width / 2; int yToCompare = _guiLocation.Width / 2;

if (x2 >= xToCompare * -1 && x2 < xToCompare && y2 > yToCompare * -1 && y2 < yToCompare) { return _guiLocation; } } return null; }

private void pnlView_Paint(object sender, PaintEventArgs e) { PaintGui();


}

void PaintGui() { Brush _brushRed = new SolidBrush(Color.Red); Brush _brushBlack = new SolidBrush(Color.Black); Brush _brushWhite = new SolidBrush(Color.White); Brush _brushBlue = new SolidBrush(Color.Blue); Font _font = new Font(FontFamily.GenericSansSerif, 15); Pen _penBlue = new Pen(_brushBlue); Pen _penRed = new Pen(_brushRed);

foreach (GuiLocation _guiLocation in _guiLocations) { int _x = _guiLocation.X - _guiLocation.Width / 2; int _y = _guiLocation.Y - _guiLocation.Width / 2;

if (_guiLocation.Selected) pnlView.CreateGraphics().FillEllipse(_brushRed, _x, _y, _guiLocation.Width, _guiLocation.Width); else pnlView.CreateGraphics().FillEllipse(_brushBlack, _x, _y, _guiLocation.Width, _guiLocation.Width); pnlView.CreateGraphics().DrawString(_guiLocation.Identifier, _font, _brushWhite, _x, _y); }

foreach (Connection _connection in _connections) { Point point1 = new Point(((GuiLocation)_connection.A).X, ((GuiLocation)_connection.A).Y); Point point2 = new Point(((GuiLocation)_connection.B).X, ((GuiLocation)_connection.B).Y);

Point Pointref = Point.Subtract(point2, new Size(point1)); double degrees = Math.Atan2(Pointref.Y, Pointref.X); double cosx1 = Math.Cos(degrees); double siny1 = Math.Sin(degrees);

double cosx2 = Math.Cos(degrees + Math.PI); double siny2 = Math.Sin(degrees + Math.PI);

int newx = (int)(cosx1 * (float)((GuiLocation)_connection.A).Width + (float)point1.X); int newy = (int)(siny1 * (float)((GuiLocation)_connection.A).Width + (float)point1.Y);

int newx2 = (int)(cosx2 * (float)((GuiLocation)_connection.B).Width + (float)point2.X); int newy2 = (int)(siny2 * (float)((GuiLocation)_connection.B).Width + (float)point2.Y);

if (_connection.Selected) { pnlView.CreateGraphics().DrawLine(_penRed, new Point(newx, newy), new Point(newx2, newy2));


pnlView.CreateGraphics().FillEllipse(_brushRed, newx - 4, newy - 4, 8, 8); } else { pnlView.CreateGraphics().DrawLine(_penBlue, new Point(newx, newy), new Point(newx2, newy2)); pnlView.CreateGraphics().FillEllipse(_brushBlue, newx - 4, newy - 4, 8, 8); } pnlView.CreateGraphics().DrawString(_connection.Weight.ToString(), _font, _brushBlue, newx - 4, newy - 4); } }

private void btnCalc_Click(object sender, EventArgs e) { if (cmbLocations.SelectedIndex != -1) { RouteEngine.RouteEngine _routeEngine = new RouteEngine.RouteEngine(); foreach (Connection connection in _connections) { _routeEngine.Connections.Add(connection); }

foreach (Location _location in _guiLocations) { _routeEngine.Locations.Add(_location); }

Dictionary<Location, Route> _shortestPaths = _routeEngine.CalculateMinCost((Location)cmbLocations.SelectedItem); listBox1.Items.Clear();

List<Location> _shortestLocations = (List<Location>)(from s in _shortestPaths orderby s.Value.Cost select s.Key).ToList(); foreach (Location _location in _shortestLocations) { listBox1.Items.Add(_shortestPaths[_location]); } } else { MessageBox.Show("Please select a position"); } }

private void listBox1_SelectedIndexChanged(object sender, EventArgs e) { Route route = (Route)listBox1.SelectedItem; foreach (Connection _connection in _connections) { _connection.Selected = false; }

Page 62: Project Be Report

foreach (Connection _connection in route.Connections) { _connection.Selected = true; } PaintGui(); }

}}-

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using RouteEngine;

namespace Gui
{
    public class GuiLocation : Location
    {
        int x, y;
        bool selected;

        // Every location is drawn as a circle with a fixed diameter of 25 pixels.
        public int Width
        {
            get { return 25; }
        }

        public bool Selected
        {
            get { return selected; }
            set { selected = value; }
        }

        public int Y
        {
            get { return y; }
            set { y = value; }
        }

        public int X
        {
            get { return x; }
            set { x = value; }
        }
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace RouteEngine
{
    public class Location
    {
        string _identifier;

        public Location()
        {
        }

        public string Identifier
        {
            get { return this._identifier; }
            set { this._identifier = value; }
        }

        public override string ToString()
        {
            return _identifier;
        }
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace RouteEngine
{
    public class Route
    {
        int _cost;
        List<Connection> _connections;
        string _identifier;

        public Route(string _identifier)
        {
            // A new route starts at the maximum possible cost.
            _cost = int.MaxValue;
            _connections = new List<Connection>();
            this._identifier = _identifier;
        }

        public List<Connection> Connections
        {
            get { return _connections; }
            set { _connections = value; }
        }

        public int Cost
        {
            get { return _cost; }
            set { _cost = value; }
        }

        public override string ToString()
        {
            return "Id:" + _identifier + " Cost:" + Cost;
        }
    }
}


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace RouteEngine
{
    public class RouteEngine
    {
        List<Connection> _connections;
        List<Location> _locations;

        public List<Location> Locations
        {
            get { return _locations; }
            set { _locations = value; }
        }

        public List<Connection> Connections
        {
            get { return _connections; }
            set { _connections = value; }
        }

        public RouteEngine()
        {
            _connections = new List<Connection>();
            _locations = new List<Location>();
        }

        /// <summary>
        /// Calculates the shortest route to all the other locations
        /// </summary>
        /// <param name="_startLocation"></param>
        /// <returns>List of all locations and their shortest route</returns>
        public Dictionary<Location, Route> CalculateMinCost(Location _startLocation)
        {
            // Initialise a new empty route list
            Dictionary<Location, Route> _shortestPaths = new Dictionary<Location, Route>();
            // Initialise a new empty handled locations list
            List<Location> _handledLocations = new List<Location>();

            // Initialise the new routes; the constructor sets the route weight to int.MaxValue
            foreach (Location location in _locations)
            {
                _shortestPaths.Add(location, new Route(location.Identifier));
            }

            // The start position has a weight of 0.
            _shortestPaths[_startLocation].Cost = 0;

            // Once all locations are handled, stop the engine and return the result
            while (_handledLocations.Count != _locations.Count)
            {
                // Order the locations by the cost of their current shortest route
                List<Location> _shortestLocations =
                    (from s in _shortestPaths orderby s.Value.Cost select s.Key).ToList();

                Location _locationToProcess = null;

                // Search for the nearest location that isn't handled
                foreach (Location _location in _shortestLocations)
                {
                    if (!_handledLocations.Contains(_location))
                    {
                        // If the cost equals int.MaxValue, there are no more possible
                        // connections to the remaining locations
                        if (_shortestPaths[_location].Cost == int.MaxValue)
                            return _shortestPaths;
                        _locationToProcess = _location;
                        break;
                    }
                }

                // Select all connections where the start position is the location to process
                var _selectedConnections = from c in _connections
                                           where c.A == _locationToProcess
                                           select c;

                // Iterate through all connections and search for a connection which is shorter
                foreach (Connection conn in _selectedConnections)
                {
                    if (_shortestPaths[conn.B].Cost > conn.Weight + _shortestPaths[conn.A].Cost)
                    {
                        _shortestPaths[conn.B].Connections = _shortestPaths[conn.A].Connections.ToList();
                        _shortestPaths[conn.B].Connections.Add(conn);
                        _shortestPaths[conn.B].Cost = conn.Weight + _shortestPaths[conn.A].Cost;
                    }
                }
                // Add the location to the list of processed locations
                _handledLocations.Add(_locationToProcess);
            }

            return _shortestPaths;
        }
    }
}
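
For clarity, this is how the engine is wired up and used. The following is a minimal sketch only: the location names, the weight and the console output are illustrative, and the Connection constructor is the one shown in section 7.2.

// Hypothetical wiring-up of the route engine, mirroring what btnCalc_Click does.
RouteEngine.RouteEngine engine = new RouteEngine.RouteEngine();

Location a = new Location { Identifier = "0" };
Location b = new Location { Identifier = "1" };
engine.Locations.Add(a);
engine.Locations.Add(b);

// One-directional connection from a to b with weight 5.
engine.Connections.Add(new Connection(a, b, 5));

Dictionary<Location, Route> routes = engine.CalculateMinCost(a);
Console.WriteLine(routes[b]);   // prints "Id:1 Cost:5"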

6.3 SCREEN SHOTS:

Home: (screenshot figures of the application appear here in the original report)

CHAPTER-7


7. FORMS

7.1 User Interface Design:

All points are locations. The connections between the points have a specific weight. Not all connections are bidirectional (a dot marks the starting point of a one-way connection). When Calculate is pressed, all routes from the selected location are calculated. When a route is selected in the list box, the shortest route is shown visually by colouring the start dots red. In this example, the shortest route from 0 to 4 goes through locations 2 and 1, and then to 4.

7.2 Procedural Design:

Introduction

Edsger W. Dijkstra was a Dutch computer scientist who invented a fast and simple way to calculate the shortest path between two points. Many examples I have found on the Internet implement that algorithm, but none of them do it in an object-oriented way, so I decided to write my own.


Using the Code

The code contains two projects:

1. GUI: Shows the information visually.
   o To add locations, click on the 'Add Location' button and then click on the map where you want to add locations.
   o To add routes, click on the 'Add Location' button again to deactivate adding locations, then click on a start location, then click on an end location. The weight of the route can be configured at the top.

2. RouteEngine: Calculates the routes.

I will only go into details about the RouteEngine. How the UI is handled is not so important for this project but if you need information about it, you can always ask.

Project RouteEngine

1. Connection: This class holds the information about the connection between two dots. It is a one-directional connection from A (the start point, visually shown with a dot) to B, with a specific weight attached.
2. Location: Just a location (for example, location 1).
3. RouteEngine: This class calculates all routes from one given start point.
4. Route: This class holds the information about a route between two points (generated by the RouteEngine class).

Location

The simplest class. It only holds a name to display.

Connection

This class contains two Location objects and a weight.

public Connection(Location a, Location b, int weight)
{
    this._a = a;
    this._b = b;
    this._weight = weight;
}
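
The report reproduces only the constructor. A minimal sketch of the rest of the class, inferred from how the GUI and the RouteEngine use it (properties A, B and Weight, plus the Selected flag used for highlighting the chosen route), would look like this:

public class Connection
{
    Location _a;
    Location _b;
    int _weight;
    bool _selected;   // used by the GUI to highlight the selected route

    public Connection(Location a, Location b, int weight)
    {
        this._a = a;
        this._b = b;
        this._weight = weight;
    }

    public Location A { get { return _a; } }
    public Location B { get { return _b; } }
    public int Weight { get { return _weight; } }

    public bool Selected
    {
        get { return _selected; }
        set { _selected = value; }
    }
}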


7.3 Route:

This class contains a route. It has only a list of connections and the total weight. This class is generated by the route engine.

Route Engine

This is the class that drives the component. The algorithm is as follows:

1. Set the startPosition as active.
2. Set the total weight of all routes to infinite.
3. Iterate through all connections of the active position and store their weight if their weight is smaller than their current weight.
4. Set the active position as used.
5. Set the nearest position (wherever it is located) that isn't used as active.
6. Repeat steps 3, 4 and 5 until all positions are used.

The following method performs all these steps (plus some extra checking). The Dictionary returned is a list of destination locations and the corresponding shortest route to each destination location.

/// <summary>
/// Calculates the shortest route to all the other locations
/// </summary>
/// <returns>List of all locations and their shortest route</returns>
public Dictionary<Location, Route> CalculateMinCost(Location _startLocation)
{
    // Initialise a new empty route list
    Dictionary<Location, Route> _shortestPaths = new Dictionary<Location, Route>();
    // Initialise a new empty handled locations list
    List<Location> _handledLocations = new List<Location>();

    // Initialise the new routes; the constructor sets the route weight to int.MaxValue
    foreach (Location location in _locations)
    {
        _shortestPaths.Add(location, new Route(location.Identifier));
    }

    // The start position has a weight of 0.
    _shortestPaths[_startLocation].Cost = 0;

    // Once all locations are handled, stop the engine and return the result
    while (_handledLocations.Count != _locations.Count)
    {
        // Order the locations by the cost of their current shortest route
        List<Location> _shortestLocations =
            (from s in _shortestPaths orderby s.Value.Cost select s.Key).ToList();

        Location _locationToProcess = null;

        // Search for the nearest location that isn't handled
        foreach (Location _location in _shortestLocations)
        {
            if (!_handledLocations.Contains(_location))
            {
                // If the cost equals int.MaxValue, there are no more possible
                // connections to the remaining locations
                if (_shortestPaths[_location].Cost == int.MaxValue)
                    return _shortestPaths;
                _locationToProcess = _location;
                break;
            }
        }

        // Select all connections where the start position is the location to process
        var _selectedConnections = from c in _connections
                                   where c.A == _locationToProcess
                                   select c;

        // Iterate through all connections and search for a connection which is shorter
        foreach (Connection conn in _selectedConnections)
        {
            if (_shortestPaths[conn.B].Cost > conn.Weight + _shortestPaths[conn.A].Cost)
            {
                _shortestPaths[conn.B].Connections = _shortestPaths[conn.A].Connections.ToList();
                _shortestPaths[conn.B].Connections.Add(conn);
                _shortestPaths[conn.B].Cost = conn.Weight + _shortestPaths[conn.A].Cost;
            }
        }
        // Add the location to the list of processed locations
        _handledLocations.Add(_locationToProcess);
    }

    return _shortestPaths;
}


CHAPTER-8


8. SYSTEM IMPLEMENTATION

8.1 Installation Details:

Implementation is the process of having systems personnel check out and put new equipment into use, train users, and install the new application. Depending on the size of the organization that will be involved in using the application, and the risk associated with its use, systems developers may choose to test the operation in only one area of the firm, say in one department or with only one or two persons. Sometimes they will run the old and new systems together to compare the results. In still other situations, developers will stop using the old system one day and begin using the new one the next.

Once installed, applications are often used for many years. However, both the organization and the users will change, and the environment will be different over weeks and months. Therefore, the application will undoubtedly have to be maintained; modifications and changes will be made to the software, files, or procedures to meet emerging user requirements. Since organization systems and the business environment undergo continual change, the information systems should keep pace. In this sense, implementation is an ongoing process.

Evaluation of the system is performed to identify its strengths and weaknesses. The actual evaluation can occur along any of the following dimensions.

Operational Evaluation: Assessment of the manner in which the system functions, including ease of use, response time, suitability of information formats, overall reliability, and level of utilization.

Organizational Impact: Identification and measurement of benefits to the organization in such areas as financial concerns, operational efficiency, and competitive impact. Includes impact on internal and external information flows.

User Manager Assessment: Evaluation of the attitudes of senior and user managers within the organization, as well as end-users.

Development Performance: Evaluation of the development process in accordance with such yardsticks as overall development time and effort, conformance to budgets and standards, and other project management criteria. Includes assessment of development methods and tools.

8.2 Software/Hardware Requirements for Simulation:

The configuration of the system on which the package was developed is as follows:

a) HARDWARE:

(1) Processor : 866 MHz Pentium III or higher
(2) Monitor : VGA or SVGA Color
(3) Hard disk : 40 GB
(4) RAM : 256 MB
(5) Keyboard : 104 keys
(6) Mouse : Any
(7) Printer : Any
(8) Layer 3 switches
(9) Routers & hubs

b) SOFTWARE:

(1) Operating system : Windows XP SP2 or above
(2) Front-end tool : ASP.NET with C#
(3) Back-end tool : SQL Server 2005
(4) NS2 simulator : Installed on a system connected to the network


CHAPTER-9


9. SYSTEM TESTING

Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements. Testing is a process of executing a program with the intent of finding errors. A good test case is one that has a high probability of finding an error, and a successful test case is one that detects an as yet undiscovered error.

Testing involves operating a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't, or don't happen when they should.

Testing Objectives

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Secondary benefits include:

1. Demonstrating that the software functions appear to be working according to specifications.
2. Showing that performance requirements appear to have been met.
3. Data collected during testing provides a good indication of software reliability and some indication of software quality.


9.1 Levels of Testing:

In order to uncover the errors present in different phases we have the concept of levels of testing. Each basic level of testing verifies the work product of a corresponding development phase:

Client Needs - Acceptance Testing
Requirements - System Testing
Design - Integration Testing
Code - Unit Testing

9.2 Test Case Design:

To have a comprehensive testing scheme, the tests must cover all methods, or at least a good majority of them; all the services of the system must be checked by at least one test. To test a system you must construct some test input cases and then describe how the output should look. Next, perform the tests and compare the actual outcome with the expected outcome. The objectives of testing are:

Testing is the process of executing a program with the intent of finding errors.

A good test case is one that has a high probability of detecting an as yet undiscovered error.

A successful test case is one that detects an as yet undiscovered error. Even when testing is conducted successfully and uncovers errors in the software, testing cannot show the absence of defects; it can only show that defects are present.


White Box Testing

Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh", that is, that internal operations perform according to specifications and all internal components have been adequately exercised.

White box testing is predicated on close examination of procedural detail, providing test cases that exercise specific sets of conditions and/or loops and that test paths through the software. Basis path testing is a white box testing technique. The basis path method enables the test case designer to derive a measure of the logical complexity of a procedural design, and to use this measure as a guide for defining a basis set of execution paths.

We used white box testing to verify the proper execution of loops and functions in the system.

Black Box Testing

Black box testing refers to tests that are conducted at the software interface. These tests are used to demonstrate that the software functions are operational, that input is properly accepted and that output is correctly produced, while at the same time searching for errors.

In this system, black box testing was used by checking sample inputs against the expected outputs.


9.3 Testing Strategies

A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements. A strategy must also provide guidance for the practitioner.

The different testing strategies are described below.

9.3.1 Unit Testing

Unit testing focuses verification efforts on the smallest unit of software design: the module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of the tests and the errors they uncover is limited by the constrained scope established for unit testing. The unit test is normally white-box oriented, and the step can be conducted in parallel for multiple modules. Its two aspects are:

1. Unit test considerations
2. Unit test procedures

9.3.2 Integration Testing

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. There are two main approaches to integration testing:

1. Top-Down Integration: Top-down integration is an incremental approach to the construction of program structure. Modules are integrated by moving downwards through the control hierarchy, beginning with the main control module.

2. Bottom-Up Integration: Bottom-up integration, as its name implies, begins construction and testing with atomic modules (the lowest-level modules).

3. Regression Testing: In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.


9.3.3 System Testing

The following testing strategies have to be followed by the programmers during the development and coding phases of the project. Develop each unit separately, and then perform a unit test for proper functioning. During this, check whether the unit functions properly using some of the following methods.

9.3.4 Condition Testing

Condition testing properly exercises the logical conditions in a program module. It aims to avoid Boolean operator errors, variable errors, parenthesis errors, relational operator errors and arithmetic errors as far as possible.

9.3.5 Data Flow Testing

This test is carried out as follows: select test paths of the program according to the locations of definitions and uses of variables in the program. Then consider the selected flows one by one and test each for proper functioning.

9.3.6 Loop Testing

Loop testing is performed on all types of loops: simple loops, nested loops, concatenated loops and unstructured loops. Simple loops may not have errors, but even then they should not be left untested. Properly dry-run and examine the nested, concatenated and unstructured ones.

Once development of the units is complete, the next step is to integrate these units as a package. During integration of these units, perform integration testing and regression testing so that integration of these units does not create any problems. Repeat this entire test as a recursive activity so that there is minimum possibility of error.

These tests are to be carried out by the programmers of the project.

Any engineering product can be tested in one of two ways:

White Box Testing: This testing is also called glass box testing. In this testing, knowing the internal operation of a product, tests can be conducted to ensure that "all gears mesh", that is, that the internal operation performs according to specification and all internal components have been adequately exercised. It is a test case design method that uses the control structure of the procedural design to derive test cases. Basis path testing is a white box testing technique.


Basis Path Testing:

i. Flow graph notation
ii. Cyclomatic complexity (a worked example follows these lists)
iii. Deriving test cases
iv. Graph matrices

Control Structure Testing:

i. Condition testing
ii. Data flow testing
iii. Loop testing
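
To make cyclomatic complexity concrete: for a flow graph G with E edges and N nodes, V(G) = E - N + 2. It equals the number of linearly independent paths through the program, and therefore the number of test cases needed to exercise a basis set of paths. For example, a module whose flow graph has 9 edges and 7 nodes has V(G) = 9 - 7 + 2 = 4, so four test cases cover a basis set.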

Black Box Testing: Black box testing fundamentally focuses on the functional requirements of the software. Knowing the specified functions that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational, while at the same time searching for errors in each function; the internal operation of the product is not examined.

The steps involved in black box test case design are (a worked example follows the list):

i. Graph-based testing methods
ii. Equivalence partitioning
iii. Boundary value analysis
iv. Comparison testing
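
To make items (ii) and (iii) concrete: for an input field that accepts integers from 1 to 100, equivalence partitioning yields three classes (values below 1, values from 1 to 100, and values above 100), with one test drawn from each class, while boundary value analysis picks inputs at and around the boundaries, for example 0, 1, 2, 99, 100 and 101.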

9.3.7 Validation Testing

At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests - validation testing - may begin. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can reasonably be expected by the customer.

Reasonable expectation is defined in the software requirement specification, a document that describes all user-visible attributes of the software. The specification contains a section titled "Validation Criteria"; the information contained in that section forms the basis for a validation testing approach.


9.3.8 ALPHA AND BETA TESTING

It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; and output that seemed clear to the tester may be unintelligible to a user in the field.

When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. Conducted by the end user rather than the system developer, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests.


10. SYSTEM MAINTENANCE

10.1 User Manual:

System Installation

To install the system, copy all the form files and ASP.NET files to the hard disk, and also import the relevant tables. To start operation of the system, click on START, which displays a popup menu. The menu contains Program Files; click on Program Files to get a popup menu, then click on Visual Studio 2005 and click on the Visual Studio 2005 icon. This loads the ASP.NET software and displays the MAIN MENU of the application.

Since the system is menu driven, it carries the user through the various operations he wants to perform from this point onwards.

Behind the Screen

The software has been implemented on a Pentium machine with a memory capacity of 16 MB, a 1.44 MB FDD and a 4.1 GB HDD. The system is designed to work in the MS Windows NT environment.

Overview of ASP.NET with C#

The purpose of this tutorial is to provide a brief introduction to ASP.NET MVC views, view data, and HTML Helpers. By the end of this tutorial, you should understand how to create new views, pass data from a controller to a view, and use HTML Helpers to generate content in a view.

Understanding Views

Unlike ASP.NET Web Forms or Active Server Pages, ASP.NET MVC does not include anything that directly corresponds to a page. In an ASP.NET MVC application, there is no page on disk that corresponds to the path in the URL that you type into the address bar of your browser. The closest thing to a page in an ASP.NET MVC application is something called a view.


In an ASP.NET MVC application, incoming browser requests are mapped to controller actions. A controller action might return a view. However, a controller action might perform some other type of action, such as redirecting you to another controller action.

Using HTML Helpers to Generate View Content

To make it easier to add content to a view, you can take advantage of something called an HTML Helper. An HTML Helper, typically, is a method that generates a string. You can use HTML Helpers to generate standard HTML elements such as textboxes, links, dropdown lists, and list boxes.

Using View Data to Pass Data to a View

You use view data to pass data from a controller to a view. Think of view data as a package that you send through the mail: all data passed from a controller to a view must be sent using this package. For example, the controller in Listing 6 adds a message to view data.
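
Listing 6 itself is not reproduced in this text. A minimal sketch of the idea, with an illustrative HomeController and view name (not the report's actual code), would be:

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // Put a message into the view data "package"...
            ViewData["message"] = "Hello from the controller";
            // ...and return the view that will display it.
            return View();
        }
    }
}

In the corresponding Index view, the message can then be rendered with an HTML Helper, for example <%= Html.Encode(ViewData["message"]) %>.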

Summary:

This tutorial provided a brief introduction to ASP.NET MVC views, view data, and HTML Helpers. In the first section, you learned how to add new views to your project. You learned that you must add a view to the right folder in order to call it from a particular controller. Next, we discussed the topic of HTML Helpers. You learned how HTML Helpers enable you to easily generate standard HTML content. Finally, you learned how to take advantage of view data to pass data from a controller to a view.


ASP.NET MVC Overview (C#)

The Model-View-Controller (MVC) architectural pattern separates an application into three main components: the model, the view, and the controller. The ASP.NET MVC framework provides an alternative to the ASP.NET Web Forms pattern for creating MVC-based Web applications. The ASP.NET MVC framework is a lightweight, highly testable presentation framework that (as with Web Forms-based applications) is integrated with existing ASP.NET features, such as master pages and membership-based authentication. The MVC framework is defined in the System.Web.Mvc namespace and is a fundamental, supported part of ASP.NET.

MVC is a standard design pattern that many developers are familiar with. Some types of Web applications will benefit from the MVC framework. Others will continue to use the traditional ASP.NET application pattern that is based on Web Forms and postbacks. Other types of Web applications will combine the two approaches; neither approach excludes the other.

The MVC framework includes the following components:

Models. Model objects are the parts of the application that implement the logic for the application's data domain. Often, model objects retrieve and store model state in a database. For example, a Product object might retrieve information from a database, operate on it, and then write updated information back to a Products table in SQL Server. In small applications, the model is often a conceptual separation instead of a physical one. For example, if the application only reads a data set and sends it to the view, the application does not have a physical model layer and associated classes. In that case, the data set takes on the role of a model object.

Views. Views are the components that display the application's user interface (UI). Typically, this UI is created from the model data. An example would be an edit view of a Products table that displays text boxes, drop-down lists, and check boxes based on the current state of a Products object.


Controllers. Controllers are the components that handle user interaction, work with the model, and ultimately select a view to render that displays the UI. In an MVC application, the view only displays information; the controller handles and responds to user input and interaction. For example, the controller handles query-string values and passes these values to the model, which in turn queries the database by using the values.

The MVC pattern helps you create applications that separate the different aspects of the application (input logic, business logic, and UI logic), while providing a loose coupling between these elements. The pattern specifies where each kind of logic should be located in the application: the UI logic belongs in the view, input logic belongs in the controller, and business logic belongs in the model. This separation helps you manage complexity when you build an application, because it enables you to focus on one aspect of the implementation at a time.

In addition to managing complexity, the MVC pattern makes it easier to test applications than it is to test a Web Forms-based ASP.NET Web application. For example, in a Web Forms-based ASP.NET Web application, a single class is used both to display output and to respond to user input. Writing automated tests for Web Forms-based ASP.NET applications can be complex, because to test an individual page you must instantiate the page class, all its child controls, and additional dependent classes in the application. Because so many classes are instantiated to run the page, it can be hard to write tests that focus exclusively on individual parts of the application. Tests for Web Forms-based ASP.NET applications can therefore be more difficult to implement than tests in an MVC application. Moreover, tests in a Web Forms-based ASP.NET application require a Web server. The MVC framework decouples the components and makes heavy use of interfaces, which makes it possible to test individual components in isolation from the rest of the framework. The loose coupling between the three main components of an MVC application also promotes parallel development. For instance, one developer can work on the view, a second developer can work on the controller logic, and a third developer can focus on the business logic in the model.
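
To illustrate the testability claim, here is a minimal sketch of a unit test that exercises a controller action directly, with no Web server involved. The HomeController is the illustrative one sketched earlier, and the NUnit test framework is assumed:

using System.Web.Mvc;
using NUnit.Framework;
using MvcApplication1.Controllers;

[TestFixture]
public class HomeControllerTests
{
    [Test]
    public void Index_PutsMessageIntoViewData()
    {
        // The controller is a plain class: instantiate it and call the action.
        HomeController controller = new HomeController();

        ViewResult result = (ViewResult)controller.Index();

        // Assert on the view data the action produced.
        Assert.AreEqual("Hello from the controller", result.ViewData["message"]);
    }
}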


Deciding When to Create an MVC Application

You must consider carefully whether to implement a Web application by using either the ASP.NET MVC framework or the ASP.NET Web Forms model. The MVC framework does not replace the Web Forms model; you can use either framework for Web applications. (If you have existing Web Forms-based applications, these continue to work exactly as they always have.) Before you decide to use the MVC framework or the Web Forms model for a specific Web site, weigh the advantages of each approach.

Advantages of an MVC-Based Web Application

The ASP.NET MVC framework offers the following advantages:

It makes it easier to manage complexity by dividing an application into the model, the view, and the controller.

It does not use view state or server-based forms. This makes the MVC framework ideal for developers who want full control over the behavior of an application.

It uses a Front Controller pattern that processes Web application requests through a single controller. This enables you to design an application that supports a rich routing infrastructure. For more information, see Front Controller on the MSDN Web site.

It provides better support for test-driven development (TDD).

It works well for Web applications that are supported by large teams of developers and Web designers who need a high degree of control over the application behavior.

Advantages of a Web Forms-Based Web Application

The Web Forms-based framework offers the following advantages:

It supports an event model that preserves state over HTTP, which benefits line-of-business Web application development. Web Forms-based applications provide dozens of events that are supported in hundreds of server controls.

It uses a Page Controller pattern that adds functionality to individual pages. For more information, see Page Controller on the MSDN Web site.

It uses view state and server-based forms, which can make managing state information easier.

It works well for small teams of Web developers and designers who want to take advantage of the large number of components available for rapid application development.

In general, it is less complex for application development, because the components (the Page class, controls, and so on) are tightly integrated and usually require less code than the MVC model.

10.2 ASP.NET: Accessing Data with C#

Introduction

When working with classic ASP we had ADO, an object model for communication with the database. Microsoft .NET has introduced the ADO.NET components, which let the developer communicate with the database more efficiently and easily. In this article we will see how we can make use of the ADO.NET classes to perform different operations on the database.

ADO.NET Classes:

The ADO.NET classes are placed in the System.Data namespace. You can access them using the following code:

using System.Data.SqlClient;
using System.Data.Odbc;
using System.Data.OleDb;
using System.Data.OracleClient;

Different namespaces are used for different purposes.


System.Data.SqlClient: This namespace is used to communicate with a SQL Server database (version 7.0, 2000 or later).

System.Data.Odbc: This namespace is used to perform operations on ODBC data sources, for example MySQL databases accessed through their ODBC driver.

System.Data.OleDb: This namespace is used to perform operations on an Access database.

System.Data.OracleClient: This namespace is used to perform operations on an Oracle database.

In this article we will focus on the SQL Server database, and hence we will be using the System.Data.SqlClient namespace to perform different operations on SQL Server 2005.
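
As a minimal sketch of the pattern just described (the connection string and the Customers query are illustrative; the Northwind sample database is assumed, as later in this section):

using System;
using System.Data.SqlClient;

class NorthwindDemo
{
    static void Main()
    {
        // Illustrative connection string; adjust the server name and credentials.
        string connStr = "Server=(local);Database=Northwind;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);

            // SqlDataReader streams the result set row by row.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} - {1}", reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }
}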

10.3 Making the Database Connection

Let's see how we can make a database connection. There are several ways of doing this. You can simply drag and drop a database connection onto the ASP.NET web form and the connection will be made. Let's see how:

Open Visual Studio .NET and start a new ASP.NET web application. In the toolbox you will see a tab called Data. Click on the tab and it will drop down, showing various ADO.NET objects. Drag and drop the SqlConnection object onto the screen. As soon as you drop the connection object, you will see it at the bottom of the screen.

Right click on the connection object and select Properties. In the properties you can see the property named "ConnectionString". Clicking on it takes you to a wizard where you can select your database. In this article I will be using the Northwind sample database.

Once you select the database, test your connection by clicking on the Test Connection button. If the connection is correct, a message box will pop up saying that the connection has been tested and is working.

Problems with this approach of making the connection string

As you have just seen, we dragged and dropped the connection string onto the screen and the new connection to the database was made in seconds. However, this approach should never be used: if you change your connection string in the future, you will have to change it everywhere in the application.

Using Web.config to store the connection string:

You can instead define the connection string with just one line in the Web.config file. The "key" represents the name that we will use to refer to the connection string in our application. Note that saving the connection string like this is not secure; usually you store the connection string after encrypting it. Encryption is not performed in this article, to keep it simple.
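
A minimal sketch of the approach (the key name NorthwindConn is illustrative): the connection string is stored once in Web.config and read through ConfigurationManager, so a future change touches only the configuration file.

// In Web.config (shown here as a comment, since it is XML rather than C#):
//
//   <appSettings>
//     <add key="NorthwindConn"
//          value="Server=(local);Database=Northwind;Integrated Security=true" />
//   </appSettings>

using System.Configuration;
using System.Data.SqlClient;

class ConnectionFactory
{
    // Every data-access call goes through this single method,
    // so the connection string lives in exactly one place.
    public static SqlConnection Create()
    {
        string connStr = ConfigurationManager.AppSettings["NorthwindConn"];
        return new SqlConnection(connStr);
    }
}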

10.4 OVERVIEW OF SQL SERVER 2005

In this tutorial you will learn about the SQL Server 2005 architecture and what's new in SQL Server 2005 for database administrators: indexing capabilities, the peer-to-peer model of replication, table and index partitioning, snapshot isolation, the Replication Monitor tool, the security model, encryption capabilities, secure computing, a new application framework, SQL Server Express Manager (XM), Business Intelligence in SQL Server 2005, Integration Services, Analysis Services, data mining, Reporting Services, and the Windows Server System Common Engineering Roadmap.


Introduction

The innumerable data challenges faced by modern-day organizations have created the need for faster and more data-driven decisions. The drive is to increase productivity and the flexibility of human resources, and to reduce overall investment in technology while scaling the infrastructure to meet the growing demand for information that enables informed, mission-critical decisions.

The release of SQL Server 2005 is one of the cornerstones of Microsoft's strategy for the back office. Its integration with the .NET family of server applications has gone a long way in establishing SQL Server as one of the most robust servers for enterprise database management.

MS SQL Server 2005 is truly the next-generation data management and analysis solution, built for scalability, availability, analysis and security of data. The increasing ease with which database applications can be built has reduced the complexity of deploying and managing database applications. Data can now be shared across platforms, applications and devices, making it possible to network internal and external systems seamlessly. Performance, availability, scalability and security are now available at lower cost. It is now a secure, reliable and productive platform for enterprise data and business intelligence tools.

SQL Server 2005 has a number of tools to help the database administrator and the developer. The relational database engine has been improved to give better performance and support for both structured and unstructured (XML) data. The replication services include services for distributed or mobile data processing applications. It provides high systems availability, scalable concurrency with secondary data stores, enterprise reporting solutions and integration with heterogeneous systems such as Oracle databases. The deployment of scalable, personalized, timely information updates through web-based applications has been made possible with the advanced notification capabilities of SQL Server 2005. The extraction, transformation and load process has been further enhanced, and online analytical processing renders rapid, sophisticated analysis of large and complex data sets using multidimensional storage. The Reporting Services features have been honed to create comprehensive solutions for managing, creating and delivering traditional, paper-oriented reports or interactive, web-based reports. Management tools for database management and tuning have been fine-tuned to integrate with Microsoft Operations Manager and Microsoft Systems Management Server. The data access protocols reduce the time taken to integrate data in SQL Server with existing systems. A number of development tools have been provided and integrated with Microsoft Visual Studio to provide an end-to-end application development capability.

With SQL Server 2005, customers are able to leverage data assets and get more value from their data by using the reporting, analysis and data mining functionality embedded in the software. The Business Intelligence capabilities are integrated with the Microsoft Office System to enable transmission of mission-critical business information across the organization. The complexity of developing, deploying and managing line-of-business and analytical applications has been greatly reduced by the use of a flexible development environment and automated tools for database management. Finally, the cost of ownership has been reduced by a focus on ease of use and an integrated approach.

What's New in SQL Server 2005: Enhancements for Database Administrators

SQL Server 2005 has a single management console that enables database administrators to monitor, manage and tune all databases and services. The SQL Management Objects (SMO) layer is an extensible management infrastructure that can easily be programmed against. It exposes all the management functionality of SQL Server and is implemented as a Microsoft .NET Framework assembly. The primary purpose of SMO is to automate administrative tasks such as retrieving configuration settings, creating new databases, applying T-SQL scripts, creating SQL Server Agent jobs and so on. Users can customize or extend the management environment and build additional tools and functions beyond the capabilities that come packaged in the box. It is, in short, more reliable and scalable than Distributed Management Objects (DMO).
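
As a minimal sketch of the kind of automation SMO enables (assuming the Microsoft.SqlServer.Management.Smo assembly is referenced; the server name and database name are illustrative):

using System;
using Microsoft.SqlServer.Management.Smo;

class SmoDemo
{
    static void Main()
    {
        // Connect to a local SQL Server instance through SMO.
        Server server = new Server("(local)");

        // Retrieve configuration: list every database on the instance.
        foreach (Database db in server.Databases)
        {
            Console.WriteLine(db.Name);
        }

        // Automate an administrative task: create a new database.
        Database newDb = new Database(server, "ProjectReports");
        newDb.Create();
    }
}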


The SQL Server Management Studio is a single point of access to a number of services: the relational database, Integration Services, Analysis Services, Reporting Services, Notification Services and SQL Mobile. Using this interface, DBAs can author or execute a query, view server objects, manage an object, monitor system activity or even seek online help. As it is integrated with source control, scheduling of SQL Server Agent jobs also becomes possible, and daily maintenance and operation tasks can be monitored.

Administrators can now proactively monitor and tune the server using the Dynamic Management Views (DMVs); there are more than 70 new measures of internal database performance and resource usage.

One of the major concerns of the database administrator is to ensure continuous availability of data, so database mirroring, failover clustering, snapshots and fast recovery are areas he would be concerned with. SQL Server 2005 allows continuous streaming of the transaction log from a source server to a destination server, which takes over seamlessly in case of failure of the primary server. Support for server clustering has been extended to Analysis Services, Notification Services and SQL Server replication, and the number of nodes has been increased to eight. Instant, read-only views of the database can be created using snapshots. These provide a stable view without the time or storage overhead normally required in these instances; the snapshot pages are added automatically as and when the pages are modified, so quick recovery becomes possible.

Running server connections can be accessed using the dedicated administrator connection even when the server refuses to respond. As a result, diagnostic functions or T-SQL statements can be executed to troubleshoot problems on a server. The connection is available to members of the sysadmin fixed server role and can be used through the SQLCMD command prompt utility, remotely or locally.

The indexing capabilities of SQL Server 2005 have been greatly enhanced. Indexes can be created, rebuilt or dropped online without disturbing existing indexes. This online indexing capability allows parallel processing and concurrent modifications to the underlying table, clustered index data or any other associated indexes. Additionally, the online restore option improves the availability of data even while restore operations are being performed.

The peer-to-peer model of replication enables synchronization of transactions with an identical peer database, which further improves availability. Enhancements that ensure scalability include table partitioning, snapshot isolation and 64-bit support; these improve query performance.

Table and index partitioning eases the management of large databases by dividing the whole into manageable chunks. The concept is not new to SQL Server, but the horizontal partitioning of tables across file groups in the database is new. Partitions can span gigabytes, terabytes and more.

The snapshot isolation feature allows users to access the last committed version of a row, providing a transactionally consistent view of the database. It increases data availability for read-only applications, allows non-blocking read-only operations in an OLTP environment, automatically detects conflicts in write transactions, and simplifies migration of applications from Oracle to SQL Server.

The Replication Monitor tool sets a new standard for managing complex data replication operations. Its interface is intuitive and offers a number of useful data metrics.

SQL Server 2005 is optimized for the Intel Itanium processor and takes advantage of advanced memory capabilities for essential resources such as buffer pools, caches and sort heaps. This reduces the need to perform multiple I/O operations and provides greater processing capacity without the disadvantage of I/O latency. Support for 32-bit applications is retained, while 64-bit capabilities have been introduced to make the migration smooth and efficient.

The security model of the database platform now provides more precise and flexible control for ensuring the security of data. It enforces password policies for authentication, provides granularity in specifying permissions in the authorization space, and separates owners and schemas for the manager.

The encryption capabilities of the database have been integrated with the management infrastructure to centralize security assurance and server policy.

Secure computing measures have been put in place to enable deployment of a secure environment. Confidentiality, integrity and availability of data and systems are the primary focus at every stage of the software life cycle, from design to delivery and maintenance.

A new application framework with Service Broker, Notification Services, SQL Server Mobile and SQL Server Express has been introduced. The Service Broker is a distributed application framework that provides reliable asynchronous messaging at the database-to-database level. Notification Services helps in the development and deployment of applications that generate and send personalized notifications to a wide variety of devices, based on preferences specified by the application user. The SQL Server Mobile edition enables the creation of a mobile edition database on a desktop or device directly from SQL Server Management Studio. SQL Server Express Manager (XM) is a free query editor tool that is available for download and provides easy database management and query analysis capabilities.

Business Intelligence in SQL Server 2005 is scalable and comprehensive, and comes with a number of reporting capabilities. Both basic and innovative kinds of analytical applications can be built end to end.

The Integration Services are a redesigned enterprise ETL platform that enables users to integrate and analyze data from multiple heterogeneous sources. Significantly, SQL Server 2005 goes beyond traditional ETL: it supports Web services and XML out of the box through SSIS, bringing analytics to the data without persisting it, and supporting data mining and text mining in the data flow for data quality and data cleansing.

Analysis Services provides a unified and integrated view of the business data by using the Unified Dimensional Model, which is mapped to a host of heterogeneous back-end data sources. User-friendly descriptions and navigation hierarchies make it a pleasure to use.

The data mining and intelligence technology is designed to build complex analytical models and to integrate such models with business operations. The rich set of tools, APIs and algorithms provides customized, data-driven solutions to a broad range of business data mining requirements.

Reporting Services is a server-based BI platform managed via Web services. Reports can be delivered interactively in multiple formats. Relational and OLAP reports come with built-in query editors: the SQL Query Editor and the MDX Query Editor. The reports can be built together or separately.

The Windows Server System Common Engineering Roadmap defines a standard set of capabilities of the server system, such as common patch management, Watson support, and tools such as the Microsoft Baseline Security Analyzer, for delivery of a consistent and predictable experience for the administrator. It creates a set of services that can be implemented across all Windows platforms and raises the bar on server infrastructure by ensuring that security, reliability, manageability and flexibility are taken into consideration. It adopts a services-oriented architecture and integrates with .NET to connect people, systems and devices through software. It focuses on delivering systems that are geared towards dynamic operations, building and monitoring.


10.6 NETWORKING FEATURES:

10.6.1 .NET framework

Introduction To .Net Framework

The Microsoft .NET Framework is a software technology that is available with several Microsoft Windows operating systems. It includes a large library of pre-coded solutions to common programming problems and a virtual machine that manages the execution of programs written specifically for the framework. The .NET Framework is a key Microsoft offering and is intended to be used by most new applications created for the Windows platform.

The pre-coded solutions that form the framework's Base Class Library cover a large range of programming needs in a number of areas, including user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. The class library is used by programmers, who combine it with their own code to produce applications.

Programs written for the .NET Framework execute in a software environment that manages the program's runtime requirements. Also part of the .NET Framework, this runtime environment is known as the Common Language Runtime (CLR). The CLR provides the appearance of an application virtual machine so that programmers need not consider the capabilities of the specific CPU that will execute the program. The CLR also provides other important services such as security, memory management, and exception handling. The class library and the CLR together compose the .NET Framework.

Principal design features

Interoperability

Because interaction between new and older applications is commonly required, the .NET Framework provides means to access functionality that is implemented in programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework; access to other functionality is provided using the P/Invoke feature.


Common Runtime Engine

The Common Language Runtime (CLR) is the virtual machine component of the .NET Framework. All .NET programs execute under the supervision of the CLR, guaranteeing certain properties and behaviors in the areas of memory management, security, and exception handling.

Base Class Library

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a library of functionality available to all languages using the .NET Framework. The BCL provides classes which encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction and XML document manipulation.

Simplified Deployment

Installation of computer software must be carefully managed to ensure that it does not interfere with previously installed software, and that it conforms to security requirements. The .NET Framework includes design features and tools that help address these requirements.

Security

The design is meant to address some of the vulnerabilities, such as buffer overflows, that have been exploited by malicious software. Additionally, .NET provides a common security model for all applications.

Portability

The design of the .NET Framework allows it to theoretically be platform agnostic, and thus cross-platform compatible. That is, a program written to use the framework should run without change on any type of system for which the framework is implemented. Microsoft's commercial implementations of the framework cover Windows, Windows CE, and the Xbox 360.


Architecture

Common Language Infrastructure

(Figure: visual overview of the Common Language Infrastructure (CLI).)

The core aspects of the .NET Framework lie within the Common Language Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral platform for application development and execution, including functions for exception handling, garbage collection, security, and interoperability. Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR.

Assemblies

The intermediate CIL code is housed in .NET assemblies. As mandated by the specification, assemblies are stored in the Portable Executable (PE) format, common on the Windows platform for all DLL and EXE files. An assembly consists of one or more files, one of which must contain the manifest, which holds the metadata for the assembly. The complete name of an assembly (not to be confused with the filename on disk) contains its simple text name, version number, culture, and public key token.

The public key token is a unique hash generated when the assembly is compiled; thus two assemblies with the same public key token are guaranteed to be identical from the point of view of the framework. A private key, known only to the creator of the assembly, can also be specified and used for strong naming, guaranteeing that a new version of the assembly is from the same author (this is required to add an assembly to the Global Assembly Cache).

Metadata

All CIL code is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata contains information about the assembly and is also used to implement the reflective programming capabilities of the .NET Framework.

Security

.NET has its own security mechanism with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence that is associated with a specific assembly. Typically the evidence is the source of the assembly (whether it is installed on the local machine or has been downloaded from the intranet or Internet). Code Access Security uses evidence to determine the permissions granted to the code. Other code can demand that calling code be granted a specified permission. The demand causes the CLR to perform a call stack walk: every assembly of each method in the call stack is checked for the required permission; if any assembly is not granted the permission, a security exception is thrown.
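
A minimal sketch of such a demand (the file path is illustrative): if any caller up the stack has not been granted read permission for the file, the Demand call throws a SecurityException.

using System.IO;
using System.Security.Permissions;

class AuditLog
{
    public static string ReadLog()
    {
        // Demand triggers the CLR's stack walk described above: every caller
        // must have been granted read access to this (illustrative) path.
        new FileIOPermission(FileIOPermissionAccess.Read, @"C:\logs\audit.log").Demand();

        return File.ReadAllText(@"C:\logs\audit.log");
    }
}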

When an assembly is loaded, the CLR performs various tests, two of which are validation and verification. During validation the CLR checks that the assembly contains valid metadata and CIL, and that the internal tables are correct. Verification is not so exact: the verification mechanism checks whether the code does anything that is 'unsafe'. The algorithm used is quite conservative, so occasionally code that is 'safe' does not pass. Unsafe code will only be executed if the assembly has the 'skip verification' permission, which generally means code that is installed on the local machine.

The .NET Framework uses appdomains as a mechanism for isolating code running in a process. Appdomains can be created, and code can be loaded into or unloaded from them, independently of other appdomains. This helps increase the fault tolerance of the application, as faults or crashes in one appdomain do not affect the rest of the application. Appdomains can also be configured independently with different security privileges.
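
A minimal sketch of this isolation (the domain name is illustrative):

using System;

class AppDomainDemo
{
    static void Main()
    {
        // Create an isolated appdomain inside the current process.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");

        Console.WriteLine("Running in: {0}", AppDomain.CurrentDomain.FriendlyName);
        Console.WriteLine("Created:    {0}", sandbox.FriendlyName);

        // Code loaded into 'sandbox' can later be thrown away wholesale:
        // unloading the domain unloads every assembly it loaded.
        AppDomain.Unload(sandbox);
    }
}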

Namespaces in the BCL

System
System.CodeDom
System.Collections
System.Diagnostics
System.Globalization
System.IO
System.Resources
System.Text
System.Text.RegularExpressions

Class library

The Microsoft .NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either the System.* or Microsoft.* namespaces. The library encapsulates a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. The .NET class libraries are available to all .NET languages. The .NET Framework class library is divided into two parts: the Base Class Library and the Framework Class Library.

The Base Class Library (BCL) includes a small subset of the entire class library and is the core set of classes that serve as the basic API of the Common Language Runtime. The classes in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be part of the BCL. The BCL classes are available in the .NET Framework as well as in its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight and Mono.

The Framework Class Library (FCL) is a superset of the BCL classes and refers to the entire class library that ships with the .NET Framework. It includes an expanded set of libraries, including Windows Forms, ADO.NET, ASP.NET, Language Integrated Query (LINQ), Windows Presentation Foundation, and Windows Communication Foundation, among others. The FCL is much larger in scope than the standard libraries of languages like C++, and comparable in scope to the standard libraries of Java.

Memory management

The .NET Framework CLR frees the developer from the burden of managing memory (allocating memory and freeing it when done); the CLR handles memory management itself. Memory for instantiations of .NET types (objects) is allocated contiguously from the managed heap, a pool of memory managed by the CLR. As long as a reference to an object exists, whether a direct reference or via a graph of objects, the object is considered to be in use by the CLR. When no reference to an object remains and it can no longer be reached or used, it becomes garbage.

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a certain amount of memory has been used or there is enough memory pressure on the system. Since there is no guarantee about when the conditions to reclaim memory are reached, the GC runs are non-deterministic. Each .NET application has a set of roots, which are pointers to objects on the managed heap (managed objects). These include references to static objects, objects defined as local variables or method parameters currently in scope, and objects referred to by CPU registers.

When the GC runs, it pauses the application and, for each object referred to by the roots, recursively enumerates all the objects reachable from it and marks them as reachable. It uses .NET metadata and reflection to discover the objects encapsulated by an object and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All objects not marked as reachable are garbage: this is the mark phase. Since the memory held by garbage is of no consequence, it is considered free space.

However, this leaves chunks of free space between objects that were initially contiguous. The objects are then compacted together by copying them over the free space to make them contiguous again. Any reference invalidated by moving an object is updated by the GC to reflect the new location. The application is resumed after the garbage collection is over.

The GC is also generational: the managed heap is divided into generations, and objects that survive a collection are promoted to an older generation, so most collection runs examine only the youngest objects. This helps increase the efficiency of garbage collection, as older objects tend to have longer lifetimes than newer objects. By removing older (and thus more likely to survive a collection) objects from the scope of a typical collection run, fewer objects need to be checked and compacted.
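
The sketch below makes the reachable/garbage distinction concrete. GC.Collect() is called only so the effect is observable in a small program; in normal code the CLR decides when to collect. The class and field names are illustrative.

    using System;

    public class Node
    {
        public byte[] Payload = new byte[1024];
        public Node Next;
    }

    public class Program
    {
        public static void Main()
        {
            // Build a small object graph; while "head" is in scope it is
            // a root, so both nodes are reachable.
            Node head = new Node { Next = new Node() };
            Console.WriteLine(GC.GetTotalMemory(false));

            // Drop the only reference: the whole graph is now unreachable
            // and therefore garbage.
            head = null;

            // Force a collection purely for demonstration purposes.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            Console.WriteLine(GC.GetTotalMemory(true));
        }
    }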

Versions

Microsoft started development on the .NET Framework in the late 1990s, originally under the name Next Generation Windows Services (NGWS). By late 2000 the first beta versions of .NET 1.0 were released.

[Figure: The .NET Framework stack]

Version   Version Number   Release Date
1.0       1.0.3705.0       2002-01-05
1.1       1.1.4322.573     2003-04-01
2.0       2.0.50727.42     2005-11-07
3.0       3.0.4506.30      2006-11-06
3.5       3.5.21022.8      2007-11-09


CHAPTER-11

11. CONCLUSION

In the proposed protocol, a user receives a personalized smart card from the GW-node during the registration process; then, with the help of the user's password and the smart card, the user can log in to the sensor/GW node and access data from the network. The protocol is divided into two phases: a registration phase and an authentication phase. The proposed protocol prevents many users being logged in with the same login-id and resists stolen-verifier attacks, as well as the other known attacks in WSNs, except denial-of-service and node-compromise attacks.
