Transaction Processing and Distribution with COPAR and Opnet
Michael A. Hosein, The University of the West Indies, St. Augustine, Trinidad, michael.hosein@sta.uwi.edu Rhea L. Seegobin, The University of the West Indies, St. Augustine, Trinidad, rhea.seegobin@sta.uwi.edu
Abstract- This paper presents the results collected from the implementation of the COPAR Service [5][6] in an Opnet Modeler simulation environment. The system carries out two types of transaction processing: Optimistic (Temporary) and Pessimistic (Permanent) Processing. The Opnet simulation environment was used to expand the project's capacity by increasing the number of servers and transactions that the system can process. Results show the processing times for simulations run with 20 to 100 nodes and 200 to 1000 transactions.
Results also show statistics for the distribution of transactions amongst servers with different classifications. Server classifications range from 1 to 6, with 1 being the lowest and 6 the highest. The number of transactions used for testing transaction distribution ranged from 1000 up to 4000, with each simulation run using 100 server nodes. These results can act as a guide for implementing systems that provide optimal processing times in a real-world environment.
Keywords- COPAR Service, Opnet Modeler, Optimistic Processing, Pessimistic Processing, Transaction Distribution
1. Introduction
The COPAR service today still runs as a Java-based system on 8 nodes located in the USA and the Caribbean (Trinidad). Development and upgrades of the COPAR system have been ongoing for the last 16 years, with versions written in C/C++ and Java. Improvements to this project have been outlined in several papers [2][3][4].
A recent enhancement to this project was the implementation of COPAR in the OPNET Modeler simulation environment, completed in September 2011. The enhancements include the expansion of the project [5] and the design changes [6] that were required for the implementation to succeed. Implementation details are not discussed in this paper. The expansion allowed us to test the availability and scalability of the COPAR service. Simulations were run with varying numbers of transactions, server nodes and cost bound resource; these were the three main inputs. Another input parameter is the transaction delay, a fixed delay period set at the transaction generator. It should be noted that regardless of the number of server nodes and transactions in the simulation, there is only one transaction generator node, whose main function is to generate transactions for the simulation. The transaction delay for the results displayed in this paper is 10 seconds. The number of nodes for each simulation ranges from 20 to 100; the number of transactions ranges from 200 to 4000.
Results for Optimistic and Pessimistic processing times are based on input parameters of 20 to 100 nodes with 200 to 1000 transactions. Optimistic processing results are divided into local optimistic processing only vs. local and remote optimistic processing. Transaction distribution results were collected for simulations running 100 nodes with 1000 to 4000 transactions.
2. Local and Remote Optimistic Transaction Processing
Optimistic Processing occurs when a transaction is processed at a node temporarily, provided it does not violate its cost bound, until it can be processed and committed at all nodes in the system [2][3]. In the OPNET environment, the way optimistic processing is achieved has not changed; the main changes are the number of transactions and the number of nodes in the simulation. Local optimistic processing means that duplicate transactions are not sent to remote nodes for processing; no redundancy processing takes place. This is only reliable if servers are up and running 100% of the time, which is not usually the case; therefore transactions are also processed optimistically at all other remote server nodes in the simulation as well as at the local node.
Local and remote optimistic transaction processing times for this paper were collected from simulations that ran with 200, 400, 600, 800 and 1000 transactions on 20, 40, 60, 80 and 100 nodes respectively. Simulations that ran on 20 nodes were only run with 200 transactions because of the limited resource referred to as the cost bound of the system. The results would not have been realistic if the same cost bound for 20 nodes with 200 transactions was used for a simulation with 20 nodes and 400 transactions.
DOI: 10.5176/2251-3043_3.4.298
GSTF International Journal on Computing (JoC) Vol.3 No.4, April 2014
©The Author(s) 2014. This article is published with open access by the GSTF
122
Received 09 Mar 2014 Accepted 17 Mar 2014
DOI 10.7603/s40601-013-0049-2
Table 1 below shows the Optimistic Processing Time (OPT), in seconds, for the 200th, 400th, 600th, 800th and 1000th transactions for the different numbers of nodes in the simulation. It shows that for local and remote optimistic processing, the processing time remains similar despite an increase in the number of nodes and the number of transactions.
Table 1. Local and Remote Optimistic Processing Times in Seconds for 200th Transaction Intervals
Number of Nodes | OPT (secs), 200th Transaction | OPT (secs), 400th Transaction | OPT (secs), 600th Transaction | OPT (secs), 800th Transaction | OPT (secs), 1000th Transaction
20  | 28 | -  | -  | -   | -
40  | 30 | 50 | -  | -   | -
60  | 30 | 50 | 80 | -   | -
80  | 30 | 52 | 80 | 100 | -
100 | 30 | 52 | 80 | 100 | 110
The processing time for the 200th transaction for all simulation sizes remains between 28 and 30 seconds despite an incremental increase of 20 nodes for each simulation size. The processing time for the 400th transaction for simulations with 40 to 100 nodes remains between 50 and 52 seconds.
Figure 1 is a graphical representation of Table 1 with
the x-axis displaying the number of nodes and the y-axis displaying the processing time in seconds.
Figure 1. Optimistic Processing Times: Local and Remote (in Seconds)
Statistics for local optimistic processing were also collected for simulations of various sizes. Regardless of the number of servers or the number of transactions in the simulations, the processing times for the majority of transactions vary from 0.1 seconds to 1.9 seconds. This is demonstrated in Figure 2 and Figure 3 below.
Figure 2. Local Optimistic Processing Times for 20 Nodes with 200 Transactions
Figure 3. Local Optimistic Processing Times for 100 Nodes with 1000 Transactions
3. Pessimistic Processing with 20-100 Server Nodes
Pessimistic processing is the time taken for a transaction to be processed permanently and committed at all nodes in the simulation. The change in cost bound / total available resource is reflected at all server nodes; this accounts for the high processing times for pessimistic processing. Table 2 shows the pessimistic processing times for every 200th transaction. It should be noted that simulations with 400 transactions and up were run with 40 or more nodes, and simulations with 600 transactions were run with 60 or more nodes; similar pairings were used for the runs with 80 and 100 nodes.
The processing time for the 200th transaction for all simulations with 20, 40, 60, 80 and 100 nodes remains between 210 and 225 seconds regardless of the number of server nodes in the simulation. The increase in processing time can be attributed to the number of servers, the number of
transactions in the system and the transaction delay between the generation of transactions.
Table 2. Pessimistic Processing Times in Seconds for 200th Transaction Intervals
Number of Nodes | 200th Transaction | 400th Transaction | 600th Transaction | 800th Transaction | 1000th Transaction
20  | 212 | -   | -   | -   | -
40  | 210 | 420 | -   | -   | -
60  | 220 | 430 | 640 | -   | -
80  | 225 | 440 | 650 | 860 | -
100 | 225 | 450 | 665 | 885 | 1100
Figure 4. Pessimistic Processing Times (in Seconds)
4. Reliability of Processing Response Times
Delay is measured as the time taken for a packet to be delivered from the sending host to the destination host. In Opnet Modeler the end-to-end delay was implemented as sleep functions in the main processing modules (Finite State Machines) of the system. These delays helped to mimic real-world Internet delays in the simulation. When a transaction was sent from one node to another, half of the round-trip time was implemented in a 'Sleep' function; this was usually the case for optimistic processing. The full round-trip time delay was used in the pessimistic processing cycle because of the two-phase commit protocol that the system implemented.
The statistics used for implementing a realistic delay were referenced from the website of Verizon [8]. Verizon is a reliable global Internet Service Provider; for this reason its website was used to implement realistic time delays in the simulation. The Verizon statistics reflect the round-trip time taken for packets to be sent and received from one continent to another.
Ping tests were done to verify the average worldwide delays. Third-party websites allow users to input sender and receiver geographical locations and perform ping tests between the two physical locations. The ping tool referenced in [9] was utilized for this purpose and the results were compared to the statistics provided by Verizon. The results showed little variation from Verizon’s
statistics.
5. Transaction Distribution between Server Nodes
Servers in this simulation carry different server ratios, also referred to in this paper as server classifications. There were six server classes, ranging from 1 to 6. Servers with class 1 received the fewest transactions, whilst servers with classes 4 to 6 received the most. Servers of the same class were expected to receive a similar number of transactions; that is, transactions were expected to be distributed evenly across servers bearing the same classification. Transaction distribution statistics were collected for simulations with 100 server nodes and 1000, 1500, 3000, 3500 and 4000 transactions.
Figure 5. Transaction Distribution Percentages for Class 6 Servers
The results showed that no server was overwhelmed with transactions. Servers with a higher classification number did receive a larger number of transactions, and transactions were distributed evenly amongst servers with the same classification. There were 2 class 6 servers, 18 class 5 servers, 28 class 4 servers, 40 class 3 servers and 12 class 1 and 2 servers. The numbers of transactions (randomly distributed) sent by the transaction generator are shown in Figure 5, Figure 6 and Figure 7. Take the Oxford server, for example: across the five simulations with varying numbers of transactions, it received a similar share of the transactions in each run. Paris follows a similar pattern. A class 6 server was expected to receive the most transactions in each simulation, and this expectation is reflected when comparing the distribution percentages in Figure 5, Figure 6 and Figure 7. The average transaction distribution percentage for a class 6 server in Figure 5 ranges from 1.49% to 2.22%. The average transaction distribution percentage for a class 5 server in Figure 6 ranges from 1.03% to 1.84%. The average transaction distribution percentage for a class 4 server in Figure 7 ranges from 0.62% to 1.59%. Statistics for the remaining classes are similar; however, they are not presented in this paper.
Figure 6. Transaction Distribution Percentages for Class 5 Servers
Figure 7. Transaction Distribution Percentages for Class 4 Servers
(Figures 6 and 7 plot transaction distribution percentages for runs of 1000 to 4000 transactions; the x-axis of Figure 7 lists the 28 class 4 servers: Kagoshima, Chicago, Edmonton, Barcelona, Dublin, Liverpool, Boston, Chengdu, Brussels, Bordeaux, Hamburg, Milan, Vienna, Miami, Yichang, Philadelphia, Winnipeg, Huainan, Las Vegas, Montreal, Venice, Badajoz, Canberra, Trinidad, Osaka, Billings, Naples and Nassau.)
6. Conclusion
As with many simulations and processing techniques, there is no single right or wrong way, just different ways. This system used a combination of processing styles to achieve transaction processing in a distributed environment. The obstacles of the physical and human resources needed to expand the project are no longer an issue; thanks to the OPNET Modeler simulation environment, the expansion of this project is a reality.
The project is now capable of spanning 100 nodes, and thousands of transactions can be run in varying sizes in order to reach optimal functionality. The processing times can be used as a predictor for actual implementations, and an ideal number of nodes and transactions can be reached by altering the main input parameters.
It is seen that the system does not overwhelm any one server and that the workload is indeed distributed evenly. Servers with a higher classification are given more workload, and servers with similar classifications are assigned similar workloads. In an actual implementation, older servers can be given a lighter workload and still be productive without being overloaded.
The pessimistic processing time comes at a very high cost as the project expands. As can be seen, the permanent processing style is suitable for smaller project sizes of fewer than 100 nodes. Future work on pessimistic processing could include quorum group processing, where several groups, instead of one server, are responsible for coordinating the pessimistic processing of a transaction.
References
[1] J.M. Crichlow, S. Hartley, M. Hosein, C. Innis, The COPAR Service: Combining Optimism and Pessimism in Accessing Replicas. Proceedings of the Third IASTED International Conference Communications, Internet and Information Technology. St. Thomas, US Virgin Islands: ACM, 2004. 558-563.
[2] J.M. Crichlow, Combining Optimism and Pessimism to Produce High Availability in Distributed Transaction Processing. ACM SIGOPS Operating Systems Review, 43-64. ACM, 1994.
[3] M.F. Fransis, J.M. Crichlow, A Mechanism for Combining Optimism and Pessimism in Distributed Processing. Proceedings of the IASTED/ISMM International Conference on Intelligent Information Management Systems. Washington, D.C., 1995. 103-106.
[4] M. Hosein, J.M. Crichlow, Fault-tolerant Optimistic Concurrency Control in a Distributed System. Proceedings of the IASTED International Conference on Software Engineering. Las Vegas, October 28-31, 1998. 319-322.
[5] R. Seegobin, M. Hosein, Expanding the COPAR Service using Mutual Exclusion, Optimization and C++ Threads in Opnet Modeler. Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Networks. Innsbruck, Austria, February 15-17, 2011. 99-106.
[6] R. Seegobin, M. Hosein, J. Crichlow, S. Hartley, Design Changes to the COPAR Service using Opnet Modeler. Proceedings of the IASTED International Conference on Software Engineering and Applications. Dallas, USA, December 14-16, 2011.
[7] Opnet Technologies Inc., Opnet Modeler: Accelerating Network R&D. Opnet Technologies Inc., 2011.
[8] Verizon. 2013. http://www.verizonenterprise.com/about/network/latency/ (accessed October 29, 2013).
[9] T. Kernen. 2013. http://www.traceroute.org (accessed October 29, 2013).
Dr. Michael Hosein is currently a lecturer in the Department of Computing and Information Technology at The University of the West Indies, where he lectures mainly in the areas of wireless and mobile computing, distributed systems, computer networks, networking technologies, and computer programming. For many years he has taught the course “Wireless and Mobile Computing”
in which wireless apps are developed. He is also involved in app development using Bluetooth. Dr. Hosein is involved in examining duties for CAPE Computer Science of the Caribbean Examinations Council and is the co-author of a CSEC Information Technology multiple choice text. He has supervised students pursuing Masters and Ph.D. degrees.
Mrs. Rhea Seegobin is currently a Ph.D. student in the Department of Computing and Information Technology at the University of the West Indies (UWI). She has taught several courses in the department, especially a few that involve networking concepts. Rhea holds a BSc in Computer Science and an M.Phil. in Computer Science with high commendation. Her main areas of research are network simulations and wireless application technology.
This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.