

Use of OPNET in Modelling Operational Performance of Service Oriented Architecture-based Frameworks

1. INTRODUCTION

Accurate characterisation and evaluation of performance in Service Oriented Architecture (SOA) domains requires in-depth understanding of the quantitative parameters associated with the behaviours of both the service and physical hardware landscapes. As shown in Figure 1, SOA performance can be broken down into components at operational, service and process levels. These performance components are respectively derived from the SOA stack’s functional elements of Physical Resource Fabric, Utility/Business Services and Business Process layers.

Figure 1: Performance Components Associated with SOA Protocol Layers.

Our work focuses on modelling the runtime operational performance of SOA-based applications on the physical resource infrastructure. We use the OPNET simulation framework to define our models, which are based on Tier 2 and Tier 3 configurations for SOA-based composite applications.

1.1 Main Components for Modelling Operational Performance

Since runtime operational performance is made up of essentially hardware-based metrics, considerable attention is directed at modelling the following physical resource components in the OPNET simulation package [1, 2]:

(a) User Application Environment,

(b) CPU Servers,

(c) Storage Servers and,

(d) Network devices.



2. THE USER ENVIRONMENT

We define the user routines that are responsible for the launch of service requests to the processing environment. Given that there is currently no direct support for SOA-based applications in OPNET, the service requests are modelled as custom application activities [1, 2].

Figure 2: Definition of Custom Application Events

As Figure 2 shows, four levels are provided in OPNET for defining the hierarchy of our custom application activities; i.e. at Profile, Application, Task and Phase levels. Phases make up the lowest level of user activity and it is at phase level that the actual user requests and network response messages are exchanged between the application and server environments. When grouped together, phase events constitute a task, and in turn a group of tasks are combined into an application. At the highest level of event composition, applications can be grouped to make the profile entity that represents an application user [2].
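The four-level hierarchy described above can be pictured as a nested data structure (a sketch in our own notation; the class names below are illustrative, not OPNET identifiers):

```python
from dataclasses import dataclass, field

@dataclass
class Phase:                     # lowest level: one request/response exchange
    name: str
    request_bytes: int
    response_bytes: int

@dataclass
class Task:                      # a group of phases
    name: str
    phases: list = field(default_factory=list)

@dataclass
class Application:               # a group of tasks
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Profile:                   # highest level: represents an application user
    name: str
    applications: list = field(default_factory=list)

# One user issuing a single 512-byte request that returns a 4 KB response
profile = Profile("User 1",
                  [Application("App 1",
                               [Task("Task 1",
                                     [Phase("Phase 1", 512, 4096)])])])
print(profile.applications[0].tasks[0].phases[0].name)
```

Only the Phase level generates traffic; the Task, Application and Profile levels exist to group and schedule those exchanges.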

3. HARDWARE RESOURCE MODELS

Four main components are specified for the basic operation of server nodes in OPNET Server Definition Objects. Each server node is characterised by:

(a) Operating System definitions that specify time slice durations and the selection of the scheduling mechanism (i.e. round-robin or pre-emptive schemes);

(b) Disk Drive definitions that specify the spindle speeds, disk transfer rates, latency times, and the average and maximum seek times associated with the server's disks;

(c) Disk Interface definitions that determine the bus transfer rates and the activation or disabling of SSA mechanisms on the storage interface device;

(d) Runtime Requirements for Jobs that specify the following attributes associated with each job instance: CPU time, memory requirements, amounts of input and output data, and page fault rates.

Detailed definitions of other server characteristics within OPNET are provided in the specialised server models.
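The four definition groups can be summarised as a single configuration record (a sketch; the field names and values are our own illustrations, not OPNET attribute names):

```python
# Illustrative server-node definition grouping the four attribute sets.
# Field names and example values are our own shorthand, not OPNET's.
server_node = {
    "operating_system": {
        "time_slice_ms": 10,
        "scheduler": "round_robin",   # or "pre_emptive"
    },
    "disk_drive": {
        "spindle_rpm": 7200,
        "transfer_rate_MBps": 60,
        "latency_ms": 4.2,
        "avg_seek_ms": 8.5,
        "max_seek_ms": 15.0,
    },
    "disk_interface": {
        "bus_rate_MBps": 133,         # e.g. an ATA/UDMA-133 interface
        "ssa_enabled": False,
    },
    "job_runtime": {
        "cpu_time_s": 0.25,
        "memory_MB": 64,
        "input_KB": 512,
        "output_KB": 128,
        "page_fault_rate": 0.01,
    },
}
print(sorted(server_node))
```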



3.1 CPU Attributes for Server Nodes

The CPU resource entities provide the core computational capacity; in our model we assumed an extensible cluster topology with an initial single CPU node per server implementation. Inside each CPU node, arrays of CPU partitions are defined, with each partition containing the individual processor elements that perform the actual computational work on the application data [1, 2]. Figure 3 shows the basic functional components inside the processing server.

Figure 3: Main Components of CPU Server

Our model assumes a single CPU partition which uses the 3.0 GHz IBM System x306 Pentium 4 processor running the Windows Server 2003 operating system. The paging system definitions are specified in terms of the following parameters: average CPU time, page I/O read and write counts, page sizes and percentages of hard faults.
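To illustrate how these paging parameters contribute to job service time, the sketch below adds the disk cost of hard page faults to the average CPU time (the disk access time and all numbers are assumed values for illustration, not outputs of our model):

```python
def job_service_time(cpu_time_s, page_io_count, hard_fault_pct, disk_access_s=0.0125):
    """Rough service-time estimate: CPU time plus the disk cost of hard faults.

    Only hard faults reach the disk; soft faults are resolved in memory and
    treated here as free. disk_access_s is an assumed average disk access time.
    """
    hard_faults = page_io_count * hard_fault_pct / 100.0
    return cpu_time_s + hard_faults * disk_access_s

# e.g. 0.25 s of CPU time, 400 page I/Os, 5% hard faults:
# 20 hard faults x 0.0125 s of disk time doubles the service time to 0.5 s
print(job_service_time(0.25, 400, 5.0))
```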

3.2 Attributes for the Storage Server Nodes

The CPU processing entities in our model require two types of storage for exchanging input and output data: (a) Local Storage that is directly attached to the CPU processing nodes, and (b) Remote Storage from which source input data is obtained.

Definitions of server storage capacity are provided for in OPNET's storage interface entities [1, 2]. On each storage interface, a set of interface channels can be assigned. Each individual interface channel can have storage disk devices assigned to it thereby giving data storage capability to the modelled server devices. The ATA/UDMA-133 interface is specified for our storage server model. The number of disk devices on each server model can be varied depending on the storage capacity required. For the definition of storage capacity, we selected the IBM Deskstar (120GXP) 120 GB, which is then attached to the interface. As Figure 4 shows, our modelled storage server uses one storage interface with a single interface channel.
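Since capacity scales with the number of disks assigned per channel, the resulting storage is simple arithmetic (assuming one 120 GB Deskstar drive per slot, as in our configuration):

```python
def total_capacity_GB(disks_per_channel, channels=1, disk_GB=120):
    """Total capacity when identical drives are attached to each interface channel."""
    return disks_per_channel * channels * disk_GB

print(total_capacity_GB(1))   # the modelled server: one channel, one 120 GB disk
print(total_capacity_GB(4))   # scaling the disk count on the same channel
```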



Figure 4: Main Components of Storage Server

3.3 Definitions for the Communication Device Attributes

The connections for data transfer are achieved through Fibre Channel communication links with a data rate of 1 Gbps and speed-of-light propagation. The use of Fibre Channel to access Storage Area Network (SAN)-based data partitions overcomes the bandwidth bottlenecks associated with LAN-based server storage [3]. Fibre Channel-based SAN implementations also address the scalability limitations found in SCSI bus-based storage implementations [4].
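For the 1 Gbps link with speed-of-light propagation, the per-frame delay decomposes into serialisation plus propagation; a quick sketch (the 2 KB frame size and 100 m distance are arbitrary example values, not parameters of our model):

```python
C = 299_792_458.0      # speed of light, m/s (propagation assumed at c per the model)

def link_delay_s(frame_bytes, distance_m, rate_bps=1e9):
    """Serialisation + propagation delay for one frame on the 1 Gbps link."""
    serialisation = frame_bytes * 8 / rate_bps
    propagation = distance_m / C
    return serialisation + propagation

# A 2 KB frame over a 100 m SAN link: serialisation dominates at this distance
print(f"{link_delay_s(2048, 100) * 1e6:.2f} us")
```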

3.4 Application Runtime Attributes

Two sets of definitions were made to characterise the behaviour of application routines as they execute in the CPU server: one set in OPNET's General Server Attributes Object and the other in the CPU Server device's attribute list.

Figure 5: Runtime Execution of SOA Composite Application



In the Server Attribute Object, information on the resource requirements of the composite application is provided, and the job requirements are tuned in terms of CPU time, file counts for input/output data transfers, and the number of read/write operations. As Figure 5 shows, the following definitions govern the job's runtime behaviour: (a) the actual CPU partition(s) to which the job instance is directed, (b) the storage partitions which the job writes to and reads from, and (c) the policy for controlling excess job instances.

4. RESULTS FROM SOME MODELLED SCENARIOS

Experimental scenarios were generated based on the Tier 2 and Tier 3 configurations, which are commonly employed in IT implementations. The Tier 2 setting accommodates the CPU and database operations locally, so that the processing and data access functions run on the same node, i.e. the Application Server. In the Tier 3 configuration, database operations are hosted away from CPU activities, i.e. in a separate Database Server.

Figure 6 presents the job response times obtained for the modelled Tier 2 and Tier 3 cases. We can establish that the Tier 3 mode of operation results in longer completion times, as indicated by the cumulative distribution functions of application response times associated with the two modelled scenarios. The offset between the two curves confirms that the data access operations associated with the external database server account for the increased completion times obtained for the Tier 3 scenario.

Figure 6: Cumulative Distribution Functions of Job Completion Times
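A comparison of this kind can be reproduced from two samples of job response times with an empirical CDF; the sketch below uses invented sample values for illustration, not our simulation output:

```python
def empirical_cdf(samples):
    """Return sorted (x, F(x)) pairs for a sample of job response times."""
    xs = sorted(samples)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

tier2 = [0.8, 0.9, 1.0, 1.1, 1.3]   # response times, s (invented)
tier3 = [1.1, 1.3, 1.5, 1.6, 1.9]   # offset to the right: remote DB access

# Compare the median response time of the two tiers
print(empirical_cdf(tier2)[2], empirical_cdf(tier3)[2])
```

Plotting the two sets of pairs reproduces the horizontal offset between the curves that distinguishes the Tier 3 scenario.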

Figure 7 shows the levels of queuing delay experienced for varying degrees of network loading whenever data is accessed by application servers using the Tier 3 configuration. As the screenshot displays, the duration of queuing delays increases with the rise in bandwidth utilisation on the communication links.

Figure 7: Cumulative Network Delays for increasing Network Traffic in Tier 3 Implementations
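The trend of delay rising steeply with link utilisation is the classic queuing behaviour; an M/M/1 sketch (an idealisation, not OPNET's detailed link model) reproduces the same shape:

```python
def mm1_wait_s(utilisation, service_time_s):
    """Mean M/M/1 waiting time W_q = rho * S / (1 - rho); diverges as rho -> 1."""
    if not 0 <= utilisation < 1:
        raise ValueError("utilisation must be in [0, 1)")
    return utilisation * service_time_s / (1.0 - utilisation)

# Waiting time for a 1 ms mean service time as link utilisation rises
for rho in (0.2, 0.5, 0.8, 0.95):
    print(rho, round(mm1_wait_s(rho, 0.001), 5))
```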

REFERENCES

[1] OPNET Technologies Inc. Standard Models User Guide – Applications Model User Guide. http://opnet.com, Jan 2004.

[2] OPNET Technologies Inc. Standard Models User Guide – Modelling Concepts Reference Manual. http://opnet.com, Jan 2004.

[3] J. Hunt. Fibre Channel and Storage Network Infrastructure Design. Technical Report, 2004.

[4] D. Bird. Storage Basics – Storage Area Networks. http://www.enterprisestorageforum.com/technology/features/article.php/981191, February 2002.



Using OPNET for the Research in Next Generation Network (NGN) and Multi Protocol Label Switching (MPLS) Technologies

Summary: This document provides a brief account of the use of OPNET in research on Next Generation Networks.

Introduction: The research area concerns developing real-time data analytics for next generation networks. It deals with finding techniques for monitoring network elements for management data and processing that data to extract ‘intelligent’ information used to initiate appropriate actions.

Methodology of work: The majority of the research will involve simulating Next Generation Networks and evaluating their performance under varying traffic and topology scenarios. OPNET is an ideal simulation tool for this kind of work; it has already been used in this research to simulate a congestion-control case study in an IP network. A few screenshots from the study are shown in the Appendix. OPNET is very useful for simulating different scenarios, collecting simulation results and analysing them.

Future use: OPNET will be used to simulate MPLS networks and wireless networks and to evaluate their performance under varying traffic and topology scenarios.



Appendix-I

Figure 1: The node model

Figure 2: The results browser



Using Opnet 11.5 to Determine the Protocol Radius of TCP

Opnet has been used to determine the protocol radius of TCP¹. The aim of this work was to determine the maximum propagation distance over which communication using TCP can occur, with the overall objective being an assessment of its potential for deployment in delay-tolerant networks. If the distance over which a protocol remains operational is known in advance of a communication beginning, transmission performance can be maximised by deploying the protocol only in suitable scenarios.

A simple simulation scenario was created in Opnet using client and server nodes connected using a point-to-point link (Figure 1).

Figure 1: Opnet Simulation Scenario

While the scenario has been created to test the performance of TCP in delay-tolerant networks, or in other words, in wireless deployments, a wired link was chosen instead of radio propagation models. This choice was made to eliminate the effects of MAC layer timers. As the performance of TCP is timer-dependent, it was necessary to ensure that only the effects of the TCP timer at the transport layer were seen. In addition, a large bandwidth link was used to remove the occurrence of packet serialisation, and associated delays.

The client and server nodes were configured to use the stack shown in Figure 2.

Use of this stack allowed the TCP module to be used for the transmission, and the Application module to characterise traffic. The distance between nodes (i.e. the length of the link) was varied up to propagation distances beyond which the performance of TCP ceased to operate. All noise and errors were removed from the link to eliminate the occurrence of TCP backing off due to any reason other than its timer.

¹ L. Wood, C. Peoples, G. Parr, B. Scotney, A. Moore, “TCP’s Protocol Radius: the Distance where Timers Prevent Communication”, Third International Workshop on Satellite and Space Communications, September 2007.



Figure 2: Client and Server Protocol Stack

The performance of TCP was tested when transmitting FTP traffic of variable file sizes (up to a maximum of 500,000 bytes), with the aim of proving that the performance of TCP is independent of transmission volume. Buffer sizes at client and server nodes were subsequently adapted in size to accommodate the variable file sizes, up to a maximum of 64,000 bytes.

Figure 3: Application Traffic Configuration

While the default configuration was generally used for the configuration of FTP traffic, the inter-request time was changed from exponential to uniform (Figure 3).

The version of TCP used in the simulations was Reno with timestamps turned on. Window scaling and selective acknowledgements were disabled. The initial RTO was 3 seconds, the minimum RTO 1 second, and the maximum RTO 64 seconds. The default timer granularity of 0.5 seconds was used.

The simulation results represent the time taken to receive the last packet in an FTP transfer. To determine this, a trace file was generated from the TCP module which collected the times at which the packets were received. This trace file was also used to determine the amount of goodput and throughput occurring with each FTP transfer.

The results identified that the limiting protocol radius of TCP in the Opnet 11.5 implementation is 22.5 seconds (Figure 4).



Figure 4: Time to Transfer a File via FTP

This corresponds to protocol timeouts after 3, 6, 12, and 24 seconds. Summing these timeouts (3 + 6 + 12 + 24) identifies the round-trip protocol radius of 45 seconds, or a one-way propagation delay of 22.5 seconds (45/2). While the 3 second (one-way 1.5 seconds) and 45 second (one-way 22.5 seconds) timeouts are clearly shown in the results graph (Figure 4), the 9 second (one-way 4.5 seconds) and 21 second (one-way 10.5 seconds) timeouts are not clearly identified. They appear to be hidden within the jittery plot between 1.5 and 22.5 seconds.
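This arithmetic can be checked mechanically: with the initial RTO of 3 seconds doubling on each timeout (capped at the configured 64 second maximum), the four observed retransmission intervals sum to the round-trip radius:

```python
def round_trip_radius_s(initial_rto=3.0, max_rto=64.0, attempts=4):
    """Sum the doubling RTO series (3, 6, 12, 24, ... capped at max_rto)."""
    total, rto = 0.0, initial_rto
    for _ in range(attempts):
        total += rto
        rto = min(rto * 2, max_rto)   # exponential backoff, capped
    return total

rt = round_trip_radius_s()   # 3 + 6 + 12 + 24 = 45 s round trip
print(rt, rt / 2)            # one-way protocol radius of 22.5 s
```

The number of attempts (4) here simply matches the timeouts observed in the simulation; it is an input to the sketch, not something the sketch derives.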

TCP’s performance in terms of transfer delay was correlated with goodput (Figure 5).

Figure 5: Efficiency Ratio for FTP File Transfer

Again, the step changes in performance are shown at the protocol time-out intervals of 1.5 and 22.5 seconds, yet are hidden for the timeouts at 4.5 and 10.5 seconds. Overall, however, it was identified that TCP goodput declines as the distance between nodes increases, particularly when the distance exceeds the initial protocol radius of 1.5 seconds.

Jitter was evident in both Figure 4 and Figure 5. The coarse timer granularity of 0.5 seconds has been identified as being responsible for this. Future experiments will therefore reduce the timer granularity to 0.1 seconds to reduce the amount of jitter occurring. The primary aim of doing this is to determine whether it is possible to identify the two inner protocol radii which are hidden with the current simulation configuration. Future work will also involve performing similar experiments for protocols at other layers of the stack, to determine performance radii for a number of different protocols.



Using Opnet 11.5 to Provide Service Differentiation in 21st Century Networks

Opnet has also been used to experiment with the ability to automatically provide service differentiation in 21st Century networks². Applications have become increasingly bandwidth intensive and have strict latency requirements, while network resources are becoming increasingly constrained. Consider Figure 6, which shows that the volume of video conferencing data received is significantly lower than that which is transmitted from a client in a resource-constrained network.

Figure 6: Traffic Sent and Received between Client and Server Nodes (packets/sec)

Coupled with this, users expect increasingly higher levels of quality of service. Such factors place demands on the operational performance of the network. As network resources are finite, this demands a level of network intelligence to maximise the possibility that application quality of service will be achieved.

Figure 7: Application Process Model

Within Opnet, intelligent capabilities are being integrated. Environmental information, which has not been provisioned for within Opnet, is being inserted into the Function Block of the Application Layer. The aim of integrating such environmental information is to allow an awareness of the operational environment of the network to be taken into account prior to a communication beginning to ensure that appropriate transport layer decisions are made. The changes built into the Application Layer are subsequently called from the process model states when spawning the application profiles (Figure 7). Dependencies between the layers have also been provisioned for. This has involved integrating the

² C. Peoples, G. Parr, B. Scotney, A. Moore, P. Dini, “Bringing IPTV to the Market through Differentiated Service Provisioning”, International Journal of Computers, Communications and Control, Volume 1, 2006, pp. 61-69.



changes made in the application layer within the subsequent modules called. Also, Opnet’s dependency on a global data structure to determine the transport protocol used for each application had to be removed, to ensure that the transport protocol could be determined adaptively.

Figure 8: Data Flows from Application to Tpal Layers

Changes were required in the tpal layer and the process models generated by the spawning of the application process, the video_calling_mgr and gna_profile_mgr models. After the tpal layer, the transport protocol module and the lower layers of the stack are called. As the changes are being made to the choice of transport protocol, all changes must therefore be made before this layer. Figure 8 shows the dependencies between the process modules and the span of changes required to achieve implementation at the client.
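The adaptive choice being built in can be sketched as a selection function driven by environmental information gathered before the communication begins (entirely illustrative; the thresholds, names and protocol options below are our own assumptions, not the modified Opnet code):

```python
def choose_transport(one_way_delay_s, loss_rate, protocol_radius_s=22.5):
    """Pick a transport protocol from pre-communication environmental information.

    Thresholds and protocol options are illustrative assumptions only; the
    protocol_radius_s default reuses the TCP radius found in the earlier study.
    """
    if one_way_delay_s > protocol_radius_s:
        return "DTN bundle"   # beyond TCP's protocol radius: timers would fire
    if loss_rate > 0.05:
        return "UDP"          # latency-sensitive traffic under heavy loss
    return "TCP"

print(choose_transport(0.05, 0.001))   # short terrestrial path, low loss
print(choose_transport(30.0, 0.0))     # deep-space-scale delay
```

The point of the sketch is only that the decision is made from environmental inputs before the connection is opened, rather than read from a static global data structure.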

The changes which have been implemented in a top-down fashion between the application and tpal layers have ensured developments at the client. Developments must now be made in a bottom-up approach between the same layers to allow incorporation of the same changes at the server.
