1. INTRODUCTION
1.1 About the Project
Load balancing is a set of techniques that configures servers so that the workload is distributed evenly among them, reducing the dependency on any single server and making it easy to add resources so that performance scales with the resources added. Scalability is achieved by creating a server cluster: a collection of servers that distribute the workload amongst themselves. The purpose is to add more servers easily and efficiently in order to handle increasing workloads. This is far preferable to adding more resources to a single server, for reasons of cost, reliability and efficiency. This document also describes the interfaces for the system and looks at different aspects of reducing server load. The system also detects hosts that have become unavailable and automatically redistributes traffic, ensuring high availability.
1.2 SYSTEM SPECIFICATION
1.2.1 Hardware Configuration
Client and Server
Processor : Pentium III 866 MHz
RAM : 128 MB SDRAM
Monitor : 15” color
Hard disk : 250 GB
1.2.2 Software Configuration
Operating system : Generic
Language : Java
Back end : SQL Server
1.2.3 Software Description
Each Java program is both compiled and interpreted. The compiler translates a Java program into an intermediate language called Java bytecode: platform-independent code interpreted by the Java interpreter. The interpreter parses and runs each Java bytecode instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. The figure below illustrates how this works.
Fig 1.1- Working of Java
The Java Platform
A platform is the hardware or software environment in which a program runs. The
Java platform differs from most other platforms in that it's a software-only platform that
runs on top of other, hardware-based platforms. Most other platforms are described as a
combination of hardware and operating system.
The Java platform has two components:
The Java Virtual Machine (Java VM)
The Java Application Programming Interface (Java API)
Socket Overview
A network socket is a lot like an electrical socket. Various plugs around the network have a standard way of delivering their payload. Anything that understands the standard protocol can “plug in” to the socket and communicate.
Internet Protocol (IP) is a low-level routing protocol that breaks data into small packets and sends them to an address across a network; it does not guarantee delivery of those packets to the destination.
Transmission Control Protocol (TCP) is a higher-level protocol that ensures reliable transmission of data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can be used directly to support fast, connectionless, unreliable transport of packets.
Client/Server
A server is anything that has some resource that can be shared. There are compute
servers, which provide computing power; print servers, which manage a collection of
printers; disk servers, which provide networked disk space; and web servers, which store
web pages. A client is simply any other entity that wants to gain access to a particular
server.
The notion of a socket allows a single computer to serve many different clients at once, as well as many different types of information. This feat is managed by the introduction of a port, which is a numbered socket on a particular machine. A server process is said to “listen” on a port until a client connects to it. A server may accept multiple clients connected to the same port number, although each session is unique. To manage multiple client connections, a server process must be multithreaded or have some other means of multiplexing the simultaneous I/O.
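The port and multi-client behaviour described above can be sketched with java.net sockets. The following is a minimal, self-contained example of our own, not part of the project code (the class and method names are invented): a one-shot echo server listens on an OS-assigned port, and a client “plugs in” over the loopback interface.

```java
import java.io.*;
import java.net.*;

// Illustrative sketch: a one-shot echo server shows how a numbered port
// lets any client that speaks the protocol "plug in" over TCP.
public class EchoSketch {

    // Start an echo server on an OS-assigned port (port 0) and return it.
    static ServerSocket startServer() throws IOException {
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println(in.readLine()); // echo one line back to the client
            } catch (IOException ignored) {
            }
        });
        t.setDaemon(true);
        t.start();
        return server;
    }

    // Connect as a client, send one line, and return the server's reply.
    static String roundTrip(String msg) {
        try (ServerSocket server = startServer();
             Socket socket = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello"));
    }
}
```

A production server would loop on accept() and hand each connection to its own thread, as the multithreading remark above suggests.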
RMI
A Java application and its components can invoke objects located on a different JVM by using the Java Remote Method Invocation (RMI) system. RMI is used for remote communication between Java applications and components, both of which must be written in the Java programming language. RMI connects a client and a server. A client is an application or component that requires the services of an object to fulfill a request. A server is an application or component that makes the services of an object available to clients. A client contacts the server to reference and invoke the object using RMI.
JDBC
JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, the vendor must provide a driver for each platform that the database and Java run on.
2. EXISTING SYSTEM ANALYSIS
In the existing system, the project was developed using SSL with the Session model and the Direct Routing model. These models are not effective: they cannot deliver their output in time, and their throughput is lower than the expected output.
2.1 Secure Socket Layer Technique
The Secure Socket Layer is commonly used for secure communication
between clients and Web servers. Even though SSL is the standard for transport layer
security, its high overhead and poor scalability are two major problems in designing
secure large-scale network servers. Deployment of SSL can decrease a server’s capacity
by up to two orders of magnitude. In addition, the overhead of SSL becomes even more
severe in application servers. Application servers provide dynamic contents and the
contents require secure mechanisms for protection. Generating dynamic content takes
about 100 to 1,000 times longer than simply reading static content. Moreover, since static content is seldom updated, it can easily be cached; caching dynamic content, however, is not an efficient option. Server load may also increase when many clients send requests at the same time. The operation of SSL is transparent to the user.
2.1.1 SSL and the Load Balancer
SSL operates with the load balancer in one of two ways. An SSL-type load balancer uses the SSL session ID to identify the client, while an HTTP-type load balancer examines cookies in the HTTP packet header to identify the client. Because SSL packets are encrypted, an SSL load balancer cannot examine the cookie in order to make balancing decisions. The load balancer supports the termination of SSL-secured HTTP sessions. This allows the contents of the HTTP request to be examined and the request to be distributed between multiple resources. The load balancer does not carry out SSL client authentication; client authentication is usually done over HTTP using a user name and password.
2.1.2 Load Balancing Algorithm
If client request CR1 arrives at server S then
    Check the load of S
    If the load LS exceeds the threshold then
        Calculate the response time R of each sub server S1, S2, S3, ..., Sn
        (by sending an empty packet P to each)
        Redirect CR1 to the selected sub server Si
The server that receives the request from another node generates and encrypts the dynamic content using the forwarded session key. Finally, it returns the reply to the initial node, which sends the response back to the client. We assume that all intra-cluster communications are secure, since the nodes are connected through user-level communication and are located close together. The requests arriving at the Web switch of the network server are sent to either the Web server layer or the application server layer, according to the service requested by the client. Since an SSL connection is served by a different type of HTTP server (Hypertext Transfer Protocol Secure, HTTPS) and a different port number, requests for SSL connections are passed on to the distributor in the application server layer.
To focus solely on the performance of the application server, this model ignores the latency between the Web switch and the distributor and logically represents them as one unit. When a request arrives at the distributor, it searches its lookup table to determine whether there is a server that already holds the session information of the client and, if so, forwards the request to that server. Otherwise, it picks a new server to forward the request to. The chosen server establishes a new SSL connection with the client. If the request is forwarded to a highly loaded server, that server in turn sends the request, with the session information, to a lightly loaded server. The server identifies an available server by sending an empty packet. Server load balancing can also distribute workloads to firewalls and redirect requests to proxy servers and caching servers.
2.1.3 SSL Architecture
Fig 2.1- SSL Architecture
The requests from various clients are gathered at the server side, and the server load is calculated using the SSL_LB scheme. If the server load exceeds the threshold, details of the sub servers are collected. The server then sends an empty packet to all the sub servers and, depending on the response times, the client request is redirected to a particular sub server.
2.1.4 SSL_Load Balancing (SSL-LB) Algorithm:
Step 1:
Represent the server and its sub servers by defining the IP address and host name. This helps to allocate and construct the network structure with the main server and proxy servers before proceeding.
Step 2:
Save the network construction with the relevant details. This can be done with proper authentication.
Step 3:
Select the files to be shared with the proxy servers.
Step 4:
Encrypt the files using private and public keys. This can be done with the help of the RSA algorithm.
Step 5:
Save the encrypted files on the sub servers. These files are stored on the proxy servers, which are identified by the network construction module that stores their IP addresses.
Step 6:
The uploaded files are sharable, and clients can download the files they need.
Step 7:
The next step is to evaluate the server load. When a client sends a request, the server calculates its load from the number of open connections. Client requests are categorized into two types: dynamic and static.
Step 8:
The server sends an empty packet to all the sub servers and gathers the responses. The response time is calculated through a queuing method.
Step 9:
The response times are calculated and compared using the total number of requests and the download time of each sub server.
Step 10:
The user request is redirected to the chosen sub server, and the user can download the files and decrypt them using the private key.
The server load is calculated from elements such as the number of connections made in the network and the proxy server allocation. The concept of a cluster is proposed here to combine all the clients requesting a particular server at a given time.
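Steps 8 and 9 of the algorithm above can be sketched as follows. This is our own illustrative Java, with the sub servers simulated as suppliers that block for a fixed delay; a real probe would be a socket round-trip of an empty packet.

```java
import java.util.*;
import java.util.function.Supplier;

// Sketch of steps 8-9: probe every sub server, time the response,
// and pick the fastest one to receive the redirected request.
public class ProbeBalancer {

    // Time one probe round-trip per sub server; return the index of
    // the fastest responder.
    static int fastest(List<Supplier<Void>> probes) {
        long best = Long.MAX_VALUE;
        int bestIdx = -1;
        for (int i = 0; i < probes.size(); i++) {
            long start = System.nanoTime();
            probes.get(i).get();                 // the "empty packet" round-trip
            long elapsed = System.nanoTime() - start;
            if (elapsed < best) {
                best = elapsed;
                bestIdx = i;
            }
        }
        return bestIdx;
    }

    // Simulate a sub server whose probe takes a fixed time to answer.
    static Supplier<Void> delay(long millis) {
        return () -> {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return null;
        };
    }

    public static void main(String[] args) {
        List<Supplier<Void>> subServers = Arrays.asList(delay(80), delay(5), delay(40));
        System.out.println("fastest sub server: " + fastest(subServers));
    }
}
```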
2.1.5 Disadvantages of SSL
Latency Problem
Minimal Throughput
High Overhead
Poor Scalability
2.2 Direct Routing Technique
Direct routing is a hardware-based technique. The basic principle is that network traffic is sent to a shared IP, often called a virtual IP (VIP) or listening IP. This VIP is an address attached to the load balancer. Once the load balancer receives a request on this VIP, it must decide where to send it. The request is then sent to the appropriate server, and the server produces a response. Depending on the type of device, the response is sent either back to the load balancer or directly back to the end user. In the case of a proxy-based load balancer, the response from the web server can be returned to the load balancer and manipulated before being sent back to the user.
This manipulation could involve content substitution or compression; indeed, some top-end devices offer full scripting capability. Depending on the capability of the load balancer, in many cases it is desirable for the same user to be sent back to the same web server. This is generally referred to as persistence.
Fig 2.2 Working of Direct Routing
1. The host sends a request with the VSIP as the destination address.
2. Upon receiving the request, the general device forwards it to the LB device. Because the servers do not answer ARP requests for the VSIP, the general device forwards the request only to the LB device.
3. Upon receiving the request, the LB device uses an algorithm to calculate which server to distribute the request to.
4. The LB device distributes the request. It encapsulates the VSIP as the destination IP address and the server’s MAC address (obtained through ARP) as the destination MAC address. In this way, the request can be forwarded normally to the server.
5. The server receives and processes the request, and then sends a response. Note that the destination IP address of the response is the host IP.
6. After receiving the response, the general device forwards it to the host. Because the response is addressed to the host rather than to the LB device, this mode is called direct routing (DR) server load balancing.
Disadvantages of Direct Routing
Backend server must respond to both its own IP and the virtual IP
Port translation or cookie insertion cannot be implemented.
The backend server must not reply to ARP requests for the VIP
Connection Optimization functionality is lost
Data Leak Prevention can't be accomplished
2.3.1 Dynamic Load Balancing Technique
Dynamic load balancing (DLB) algorithms are based on the redistribution of tasks among the available processors during execution time. This redistribution is performed by transferring tasks from heavily loaded processors to lightly loaded processors, with the aim of improving the performance of the application. A typical DLB algorithm is generally defined by four inherent policies: the transfer policy, selection policy, location policy and information policy.
Table 2.1 Comparison of Existing System
Algorithm Initiated
On
Initiated
By
Job
Trans
fer
Transfer
Policy
Selection
Policy
Location
Policy
Informati
on Policy
Sender
Initiated
Job
Arrival
Sender Pre-
empti
ve
Threshol
d Based
Consider
only new
Jobs
Random,Thres
hold or
Shortest
Demand
Driven
Receiver
Initiated
Job
Departu
re
Receiver Non
Pre-
empti
ve
Threshol
d Based
Consider
all jobs
Random Demand
Driven
Symmetrical
ly Initiated
Both Both Both Threshol
d Based
Both Depends on
Design
Demand
Driven
Information policy:
It states what workload information about tasks is to be collected, when it is to be collected and from where.
Triggering policy:
It determines the appropriate period at which to start a load balancing operation.
Resource type policy:
It classifies a resource as a sender or receiver of tasks according to its availability status.
Location policy:
It uses the results of the resource type policy to find a suitable partner for a sender or receiver.
Selection policy:
It defines the tasks that should be migrated from overloaded resources (senders) to the most idle resources (receivers).
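The threshold-based policies above can be illustrated with a hedged sketch of our own: a resource type policy that classifies a node by its queue length. The threshold values and names are assumptions for the example, not taken from the project.

```java
// Illustrative resource type policy: above T_HIGH a node is overloaded
// (a sender of tasks), below T_LOW it is underloaded (a receiver).
public class ThresholdPolicy {
    // Assumed thresholds for the example.
    static final int T_HIGH = 8;
    static final int T_LOW = 2;

    // Classify a node by its current queue length.
    static String role(int queueLength) {
        if (queueLength > T_HIGH) return "sender";
        if (queueLength < T_LOW) return "receiver";
        return "neutral";
    }

    public static void main(String[] args) {
        System.out.println(role(10)); // prints: sender
        System.out.println(role(1));  // prints: receiver
        System.out.println(role(5));  // prints: neutral
    }
}
```

The location policy would then pair each "sender" with a "receiver" before the selection policy picks which tasks to migrate.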
2.3.2 Grid Load Balancing Technique
A local grid resource is considered to be a cluster of workstations or a multiprocessor,
which is abstracted uniformly as peer-to-peer networked hosts. Two algorithms are
considered in the local management layer of each agent to perform local grid load
balancing.
First-come-first-served algorithm
Genetic algorithm
Disadvantages
Potential performance degradation as the size of grids increases.
Strength of the autonomy varies at different levels of the grid architecture
Cost of file transfer
Uneven job arrival pattern
3. PROPOSED SYSTEM
As there are many clients under a particular server, CPU and memory usage increases, which causes the server to jam. To solve this problem, a tool is proposed to balance the load. In the proposed system, when a request is made, it is forwarded to the socket. At the same time, the CPU and memory usage is calculated using a tool. Next, the calculated CPU percentage is compared with the maximum threshold, and the request is forwarded to a server accordingly. The load distribution is done using a plug-in, with the help of a benchmark tool. Java RMI extends Java with distributed objects whose methods can be called from remote clients. The client proxy is modified with an aspect to forward requests to a specific server, but the server is also able to shed load by redirecting requests to other servers based on workload.
3.1 RMI Architecture
Java RMI (Remote Method Invocation) adds remote objects to Java programs. These remote objects reside on object servers: separate machines connected by a common network. Clients can invoke methods on these remote objects using remote method invocation, which bundles the information needed to invoke the method into a message and sends it to the appropriate object server for execution. Java RMI is based on the distinction between object interface and implementation. It relies on the fact that a client cannot distinguish between objects implementing a remote interface if their behaviour is identical.
Fig 3.1 RMI Architecture
The architecture of Java RMI consists of three layers. The first layer provides a proxy object on the client and a skeleton object on the server. In current versions of Java, there is one skeleton object per server. The proxy object is a local object on the client JVM that implements the same remote interface as the object implementation on the server. The proxy translates method invocations into remote method invocations to the server. Part of this translation uses the remote object reference for the remote object, held in the remote reference layer.
The transport layer handles client/server communication. The proxy object may be statically generated by the rmic stub compiler or may be a dynamic proxy generated at runtime by the JVM. The rmic compiler starts with a class that implements a remote interface (one derived from java.rmi.Remote). From this, rmic generates a proxy class that implements the same remote interface. The name of this proxy class is the name of the implementation with “_Stub” appended. For each method in the remote interface, rmic generates code that uses the remote object reference to invoke the same method on the object implementation at the server. At runtime, when the client imports the remote object using the RMI registry, it loads this proxy class by name. If the proxy class
is successfully loaded, a proxy object is created. If not, then the second method of proxy
generation is used.
The second method of generating a proxy object uses the dynamic proxy mechanism. Given a list of interfaces, the JVM can create a proxy implementing them at runtime. Method calls on the proxy are delegated to an invocation handler object provided by the developer. In Java RMI, if the JVM cannot load the rmic-generated proxy class, the client creates a dynamic proxy using the remote interface. A RemoteObjectInvocationHandler object is created as the invocation handler, which provides functionality identical to rmic-generated stubs. However, we consider these dynamic proxies to be statically generated: their functionality is fixed; they are dynamic only in that they are created at runtime.
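The dynamic proxy mechanism can be demonstrated with java.lang.reflect.Proxy. The Greeter interface and the handler below are our own example, standing in for a remote interface and for the role RMI's RemoteObjectInvocationHandler plays; a real handler would marshal the call to the server instead of building a string.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Stand-in for a remote interface (illustrative, not from the project).
interface Greeter {
    String greet(String name);
}

public class DynamicProxyDemo {

    // Build a proxy implementing Greeter at runtime; every call is
    // delegated to the invocation handler, as in RMI's dynamic stubs.
    static Greeter makeProxy() {
        InvocationHandler handler = (proxy, method, args) ->
                // A real RMI handler would marshal the call to the server here.
                "forwarded " + method.getName() + "(" + args[0] + ")";
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }

    public static void main(String[] args) {
        System.out.println(makeProxy().greet("client"));
        // prints: forwarded greet(client)
    }
}
```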
3.2 Software Load Balancing
A load balancer process is placed between clients and servers. All client requests are forwarded to the balancer process, which forwards each request to a suitable server. The reply message takes the reverse path. In Java RMI, the balancer would maintain a collection of references to different remote objects. For each incoming request, one of these remote objects would be selected, and the balancer would invoke the same method on it, forwarding the request. The balancer would then simply forward the return value from the remote object back to the client.
Fig 3.2 Load Balancer
A similar strategy can be used in Apache, forwarding all requests to an entry server that rewrites the URL to redirect the request to one of a set of servers. This strategy has the benefit of being able to redirect each request from each client to a suitable server. In addition, incorporating new servers is relatively simple. When a new object starts on a new server, it could register itself with the balancer. From that point, the balancer could distribute requests to the new object.
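A minimal sketch of such a balancer, in our own illustrative Java: servers register themselves, and each incoming request is handed out round-robin. In real Java RMI the entries would be Remote stubs rather than strings, and pick() would be followed by a forwarded method invocation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative balancer: servers register themselves and requests are
// distributed round-robin. Names here are assumptions for the example.
public class Balancer {
    private final List<String> servers = new ArrayList<>();
    private int next = 0;

    // A newly started server registers itself with the balancer.
    synchronized void register(String server) {
        servers.add(server);
    }

    // Pick the server that should handle the next request.
    synchronized String pick() {
        String s = servers.get(next);
        next = (next + 1) % servers.size();
        return s;
    }

    public static void main(String[] args) {
        Balancer b = new Balancer();
        b.register("serverA");
        b.register("serverB");
        System.out.println(b.pick()); // serverA
        System.out.println(b.pick()); // serverB
        System.out.println(b.pick()); // serverA
    }
}
```

A load-aware variant would replace the round-robin index with a choice based on each server's periodically reported status, as described below.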
The balancer can also control the load on the servers by deciding how many
requests to forward to any given server. Once this number has been reached, the balancer
could queue up requests and forward them to servers as they complete their outstanding
requests.
In addition, this strategy allows the servers (in conjunction with the balancer) to shed load when necessary. The balancer can factor in server load when distributing requests by having each server periodically indicate its current status. The client is not involved in this process, instead simply forwarding all requests to the central balancer process. However, this strategy adds communication overhead in the extra pair of messages between balancer and server. This overhead could be reduced by having the server reply directly to the client, which is not possible in Java RMI without altering the underlying RMI protocol.
Fig 3.3 Multiple Registry entries
In addition, the balancer can potentially form a bottleneck, since all requests must pass through it, though the amount of processing for each request is small. A second option is to augment the object registry to allow multiple remote objects to register remote object references under the same name. When a lookup is performed, the registry can return one of the registered references to the client, and the client invokes methods directly on that object. In a load balancing system, the registry would return a single reference to one of the registered objects.
3.3 Architectural Design
Fig 4.1 System Architecture
3.4 Module Description
There are three modules in this Project
1. Request Forwarding
2. CPU usage
3. Redirecting to the server
3.4.1 Request Forwarding
This is the first module of the process. In this module, the client sends the request
to the Socket. Using the port number, binding has been made and the request is sent
along with the IP address of the server through the socket. Here, Load Balancer is
transparent to the client as well as the server, which means that the Client and the sever
doesn’t know the existence of the Balancer. This transparent medium receives the
request.
3.4.2 CPU Usage
The CPU usage of the system is calculated using a tool that measures CPU performance. This CPU percentage is stored in the database and updated from time to time. The most recent CPU usage is retrieved from the database by the program using JDBC. When the client sends a request to the server, the CPU value retrieved from the database is compared with the maximum threshold value.
3.4.3 Redirecting to the Server
In this final module, the client selects the file to be uploaded to the server. The maximum threshold is compared with the CPU percentage, and based on that the load balancing action is performed. The request is then redirected to a server accordingly. If the CPU usage is below the threshold, the request is forwarded to the main server and the file is uploaded there. Otherwise, it is forwarded to the alternate server and the file is uploaded to it.
If both the main server and the alternate server are busy, that is, the CPU usage of both exceeds the threshold value, then based on priority a few processes are killed and the request is redirected to the corresponding server. The priority of each process is stored in the database, and the running process with the lowest priority is killed.
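The redirect decision described above can be sketched as follows. The UsageSource interface stands in for the JDBC lookup of the stored CPU percentage; the threshold value and server names are our own assumptions for the example.

```java
// Hedged sketch of the redirect decision in this module. UsageSource
// abstracts the JDBC lookup of the most recent stored CPU percentage.
interface UsageSource {
    int currentCpuPercent(String server);
}

public class Redirector {
    static final int MAX_THRESHOLD = 90; // assumed maximum CPU threshold

    // Decide which server should receive the upload request.
    static String chooseServer(UsageSource db) {
        if (db.currentCpuPercent("main") < MAX_THRESHOLD) return "main";
        if (db.currentCpuPercent("alternate") < MAX_THRESHOLD) return "alternate";
        // Both busy: the lowest-priority running process would be killed
        // first, and the request then goes to the corresponding server.
        return "main";
    }

    public static void main(String[] args) {
        // Simulated readings: main server busy at 95%, alternate at 40%.
        UsageSource sample = server -> server.equals("main") ? 95 : 40;
        System.out.println(chooseServer(sample)); // prints: alternate
    }
}
```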
3.5 Implementation
The implementation proceeds through sockets in Java using an RMI server. Java is well suited for platform independence and networking. For maintaining CPU usage and priority information, MySQL is used as the database back end. The load balancer (LB) contains the RMI server.
The RMI server is started using the command “start rmiregistry”. The RMI server performs the following processes:
Processes the client request/response.
Stores the object in the RMI registry.
Communicates directly with the server as well as the client.
Allocates a server for each client.
For every new client request, the threshold value is updated.
The threshold value is the maximum CPU usage of the server machine. Each time, it is compared with the current CPU usage.
The LB forwards the client request to the appropriate server based on its CPU usage.
For each request, the existing and current CPU usage are compared.
If the current usage is less than the threshold value, the main server is allocated for the request.
Every client request comes to the LB, and a server is allocated in this way.
When the CPU usage exceeds the threshold, the request is forwarded to the alternate server.
All processing is done through the load balancer: the LB dynamically allocates a server for every new client request.
If all the servers are busy, then, based on the priority stored in the DB, a low-priority process is killed and the request is forwarded accordingly.
3.6 Data Flow Diagram
A data flow diagram (DFD) is a graphic representation of the "flow" of data through business functions or processes. More generally, a data flow diagram is used for the visualization of data processing. It illustrates the processes, data stores, external entities and data flows in a business or other system, and the relationships between them.
Fig 4.2 Data Flow Diagram

The diagram depicts the following flow: the client(s) send a request to the socket; the socket forwards the request to the LB; the CPU usage, calculated using a tool, is retrieved from the DB; the LB action is performed; and the request is redirected to a server.
4. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way to
check the functionality of components, sub assemblies, assemblies and/or a finished
product. It is the process of exercising software with the intent of ensuring that the
Software system meets its requirements and user expectations and does not fail in an
unacceptable manner. There are various types of test. Each test type addresses a specific
testing requirement.
4.1 Unit Testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
In this project, each module of the application was tested individually. As the modules were built up, testing was carried out simultaneously, tracking every kind of input and checking the corresponding output until the module worked correctly. Each module was also tested as a separate unit, with all of its functionality tested in isolation.
In the first module, the client request is forwarded to the socket. Here the IP address of the client is tested for communicating with the server.
In the CPU usage module, it is verified that the CPU tool that calculates the system's CPU usage is turned on. The CPU percentage must be updated with the most recent CPU value stored in the database at run time, so this is also tested.
In the final module, the request is redirected to a server, so the IP addresses of the main server and the alternate server are checked.
4.2 Integration Testing
Software integration testing is the incremental integration testing of two or more integrated software components on a single platform, intended to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications (e.g. components in a software system or, one step up, software applications at the company level) interact without error.
In this project, after unit testing, the modules were integrated one by one and the system was then tested for problems arising from component interaction. After forwarding the request to the load balancer, if the CPU usage stored in the database is below the threshold value, the request is redirected to the main server. This was tested by keeping the CPU load within a limit.
If the CPU usage exceeds the limit, the request is redirected to the alternate server. This was tested by increasing the CPU load, starting up various tasks and allowing it to exceed the threshold value.
4.3 Acceptance Testing
User acceptance testing of a system is a key factor in the success of any system. The system under study was tested for user acceptance by constantly keeping in touch with prospective system users throughout development and making changes whenever required.
In this project, it was tested that a valid response from the server is produced for the corresponding request. For example, a file from the client is uploaded to the server, and this is verified on the server side by checking the directory for that particular file.
5. CONCLUSION
In this project, a dynamic, aspect-oriented implementation of load balancing in Java RMI has been developed. Initial requests from a client are directed to a balancer process, which forwards the request to a server. An aspect instructs the client to forward all subsequent requests to a specific server. However, if that server needs to shed some of its load, the load balancer redirects the request to the alternate server. This redirection is transparent to the client. The CPU usage of the system is calculated, and based on it the request is directed to the appropriate server. This approach reduces the overhead of having all requests forwarded by a balancer process and provides a more dynamic ability to redistribute load when necessary. In addition, all decisions are made by the server based on application needs, by which allocation of requests to servers is done effortlessly.
APPENDIX 1
SAMPLE CODE
LoadBalance Interface
import java.rmi.Remote;
import java.rmi.RemoteException;
public interface LoadBalanceInterface extends Remote
{
public void uploadFile(byte[] content,String filename) throws RemoteException;
public boolean CPUUsageStatus() throws RemoteException;
}
LoadBalance Implementation
import java.io.*;
import java.rmi.*;
import java.rmi.server.UnicastRemoteObject;
public class LoadBalanceImpl extends UnicastRemoteObject implements LoadBalanceInterface
{
private String name;
public LoadBalanceImpl() throws RemoteException{}
public boolean CPUUsageStatus()
{
CPUPerformanceMonitor s = new CPUPerformanceMonitor();
int cu = s.getCpuUsage();
System.out.println("CPU Usage:"+ cu);
if(cu>90)
return false;
else
return true;
}
public void uploadFile(byte[] filedata,String filename)
{
try
{
BufferedOutputStream output = new BufferedOutputStream(new
FileOutputStream("d:\\AlternateServer\\"+ filename));
output.write(filedata,0,filedata.length);
output.flush();
output.close();
}catch(Exception ex)
{ System.out.println("Server :"+ ex.toString()); }
}
}
Load Balance Client
import java.io.*;
31
import java.rmi.*;
import javax.swing.*;
public class LoadBalanceClient
{
void getFiles(LoadBalanceInterface fi)
{
try
{
JFileChooser jfc = new JFileChooser();
jfc.showOpenDialog(null);
File f = jfc.getSelectedFile();
String fileName=f.getAbsolutePath();
File file = new File(fileName);
byte buffer[] = new byte[(int)file.length()];
// readFully avoids the partial reads that a single read() call permits
DataInputStream input = new DataInputStream(new FileInputStream(file));
input.readFully(buffer);
input.close();
fi.uploadFile(buffer,f.getName());
JOptionPane.showMessageDialog(null,"File Updated to Server");
}
catch(Exception e)
{
System.out.println("FileImpl: "+e.getMessage());
e.printStackTrace();
}
}
LoadBalanceClient()
{
try
{
String name = "//localhost/LBServer";
LoadBalanceInterface fi = (LoadBalanceInterface) Naming.lookup(name);
if(fi.CPUUsageStatus()==true)
{
JOptionPane.showMessageDialog(null,"Server Ready for your request");
getFiles(fi);
}
else
{
JOptionPane.showMessageDialog(null,"Server is now busy, so now redirecting to another server");
}
}
catch(Exception e) {
System.err.println("FileServer exception: "+ e.getMessage());
e.printStackTrace();
}
}}
Load Balance Server
import java.io.*;
import java.rmi.*;
public class LoadBalanceServer {
public static void main(String argv[]) {
try {
LoadBalanceInterface fi = new LoadBalanceImpl();
Naming.rebind("LBServer", fi);
} catch(Exception e) {
System.out.println("FileServer: "+e.getMessage());
e.printStackTrace();
}}}
CPU Performance Monitor
import java.sql.*;
import java.util.Random;
public class CPUPerformanceMonitor
{
private long lastSystemTime = 0;
private long lastProcessCpuTime = 0;
DbConnection cn=new DbConnection();
public double getCpuUsage()
{
double val=0.0;
try
{
ResultSet rs = cn.st.executeQuery("select * from cputest");
if(rs.next())
{
val= Double.parseDouble(rs.getObject(1)+"");
}
}
catch(Exception ex){ ex.printStackTrace(); }
return val;
}
public synchronized int getCpuUsages()
{
if ( lastSystemTime == 0 )
{
baselineCounters();
return 0;
}
long systemTime = System.nanoTime();
long processCpuTime = 0;
double cpuUsage = (double) ( processCpuTime - lastProcessCpuTime ) /
( systemTime - lastSystemTime );
lastSystemTime = systemTime;
lastProcessCpuTime = processCpuTime;
// The sampled ratio above is not used; a random value between
// 0 and 99 stands in for a real per-process CPU reading.
Random r = new Random();
int cpuUsages = r.nextInt(100);
return cpuUsages;
}
private void baselineCounters()
{
lastSystemTime = System.nanoTime();
lastProcessCpuTime = 0;
}
public static void main(String[] args) {
CPUPerformanceMonitor s = new CPUPerformanceMonitor();
System.out.println(s.getCpuUsage());
}
}
DB Connection
import java.sql.*;
class DbConnection
{
Connection c;
Statement st;
DbConnection()
{
try
{
Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
}
catch(Exception ex)
{ ex.printStackTrace(); }
try
{
c=DriverManager.getConnection("jdbc:odbc:loadbalance");
st=c.createStatement();
}
catch(Exception ex){ ex.printStackTrace(); }
}}
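Note that the JDBC-ODBC bridge used by DbConnection was removed in Java 8. A sketch of the equivalent connection through a direct JDBC URL is shown below; the host, database name, and credentials are placeholders, and the Microsoft JDBC driver for SQL Server must be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch of a DbConnection replacement using a direct JDBC URL
// instead of the removed JDBC-ODBC bridge. Host, database name,
// and credentials are placeholders.
class ModernDbConnection
{
    // Build a SQL Server JDBC URL for the given host and database.
    static String url(String host, String db)
    {
        return "jdbc:sqlserver://" + host + ";databaseName=" + db;
    }

    static Connection open(String host, String db, String user, String pass)
            throws SQLException
    {
        // Requires the Microsoft JDBC driver on the classpath.
        return DriverManager.getConnection(url(host, db), user, pass);
    }
}
```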
LBC
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class LBC extends JFrame implements ActionListener
{
JLabel pic;
JButton jb;
LBC()
{
super("LBC");
setLayout(null);
jb = new JButton("Connect");
jb.setBounds(350,450,100,30);
add(jb);
pic = new JLabel(new ImageIcon("Img/ntbanner.jpg"));
pic.setBounds(20,20,750,350);
add(pic);
getContentPane().setBackground(Color.black);
jb.addActionListener(this);
setSize(800,600);
setVisible(true);
}
public void actionPerformed(ActionEvent ae)
{
if(ae.getSource()==jb)
{
new LoadBalanceClient();
}
}
public static void main(String str[])
{
new LBC();
}
}
APPENDIX 2
SCREEN SHOTS
Main Server
Server Startup
Alternate server startup
Client Startup
Server Ready for Request
File Updated to Server
Server Busy
Request redirected to alternate Server
File updated to the alternate Server
REFERENCES
1. Andrew Stevenson and Steve MacDonald, “Dynamic Aspect-Oriented Load
Balancing in Java RMI," IEEE Networking Conf., 8(11), 2008.
2. R. Kapitza, J. Domaschka, F. Hauck, H. Reiser, and H. Schmidt. “FORMI:
Integrating adaptive fragmented objects into Java RMI,” IEEE Distributed Systems
Online, 7(10), 2006.
3. S. Dhakal, M.M. Hayat, M. Elyas, J. Ghanem, and C.T. Abdallah, “Load Balancing
in Distributed Computing: Effects of Network Delay,” Proc. IEEE Networking Conf.
(WCNC ’05), Mar. 2005.
4. M.M. Hayat, S. Dhakal, C.T. Abdallah, J.D. Birdwell, and J. Chiasson, “Dynamic
Time Delay Models for Load Balancing. Part II: Stochastic Analysis,” Verlag, 2004.
5. Z. Lan, V.E. Taylor, and G. Bryan, “Dynamic Load Balancing for Adaptive Mesh
Refinement Application,” Proc. Int’l Conf. Parallel Processing (ICPP), 2001.
6. N. Narasimhan et al., "Interceptors for Java Remote Method Invocation," Concurrency
and Computation: Practice and Experience, 13(8-9):755–774, 2001.
7. “Dynamic Load Balancing in a Distributed System Using a Sender-Initiated
Algorithm” Proceedings of the 7th International Conference on Distributed
Computing Systems, pp. 170-177, September 1997.
8. G. Weikum, "Disk Partitioning and Load Balancing in Parallel Storage Systems,"
IEEE Symposium on Mass Storage Systems, pp. 99-99, IEEE Computer Society
Press, June 1994.
9. http://au.loadbalancer.org/products.php
10. www.cainetworks.com/support/load-balance.html
11. http://us.loadbalancer.org/load_balancing_methods.php