
Toward Secure and Dependable Storage Services in Cloud Computing

Objective:

This paper proposes a flexible distributed storage integrity auditing mechanism utilizing homomorphic tokens and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost.

Abstract:

Cloud computing is the delivery of computing and storage capacity as a service to a community of end recipients. Cloud computing entrusts services with a user's data, software, and computation over a network. Cloud storage enables users to remotely store their data and enjoy on-demand, high-quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which unavoidably poses new security risks toward the correctness of the data in the cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing homomorphic tokens and distributed erasure-coded data.

The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of misbehaving servers. Considering that cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.


Existing system:

1. User: an entity who has data to be stored in the cloud and relies on the cloud for data storage and computation; can be either an enterprise or an individual customer.

2. Data redundancy can be employed with an erasure-correcting code technique to further tolerate faults or server crashes as the user's data grow in size and importance.

3. In the existing system, auditing the cloud storage incurs very high communication and computation cost for the user.

4. The cloud data can easily be attacked using different intrusion attacks, such as:

Malicious attack.

Data modification attack.

Server colluding attack.
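The erasure-correcting redundancy mentioned in point 2 can be sketched with the simplest possible code: a single XOR parity block, which lets any one lost data block be rebuilt. This is only an illustration, not the paper's actual scheme (which uses stronger erasure codes across multiple servers), and all names here are ours:

```java
import java.util.Arrays;

// Minimal sketch of erasure-correcting redundancy: single XOR parity,
// which can rebuild any ONE lost data block. Real schemes (e.g.,
// Reed-Solomon) tolerate multiple failures; this only shows the idea.
public class XorParity {

    // Compute a parity block as the XOR of all data blocks.
    static byte[] parity(byte[][] blocks) {
        byte[] p = new byte[blocks[0].length];
        for (byte[] b : blocks)
            for (int i = 0; i < p.length; i++)
                p[i] ^= b[i];
        return p;
    }

    // Recover the block at index 'lost' from the surviving blocks + parity.
    static byte[] recover(byte[][] blocks, int lost, byte[] parity) {
        byte[] r = parity.clone();
        for (int j = 0; j < blocks.length; j++)
            if (j != lost)
                for (int i = 0; i < r.length; i++)
                    r[i] ^= blocks[j][i];
        return r;
    }

    public static void main(String[] args) {
        byte[][] data = { "serv".getBytes(), "er-1".getBytes(), "data".getBytes() };
        byte[] p = parity(data);
        byte[] rebuilt = recover(data, 1, p);    // pretend block 1 crashed
        System.out.println(new String(rebuilt)); // er-1
    }
}
```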

Disadvantages:

1. Less control comes with handing over your data and information.

2. Dependence on a third party to ensure security and confidentiality of data and

information.

3. Long-term dependence on cloud host for maintenance of your information.


Proposed system:

1. The proposed design allows users to audit the cloud storage with very lightweight

communication and computation cost.

2. The proposed design further supports secure and efficient dynamic operations on

outsourced data, including managed control such as,

Block modification.

Deletion.

Append.

3. To securely introduce an effective third-party auditor (TPA) for the cloud data, the auditing process should bring in no new vulnerabilities toward user data privacy.

Advantages:

1. Access your data at all times – not just while in the office.

2. A physical storage center is no longer needed.

3. Cloud storage cost is low, and most providers use a pay-as-you-go structure that charges only for what is used.

4. Relieves burden on IT professionals and frees up their time in the office.

5. Easily scalable so companies can add or subtract resources based on their own needs.


Block Diagram:

[Diagram: Users and an optional Third Party Auditor exchange data with the Cloud Servers. Labels: Data Flow; Data auditing to enforce public auditing; Service level agreement; Auditing Delegation.]

Software Used:

Language : Java, Servlet

Web Server : Apache Tomcat 5.5

Database : MySQL 5.0


Hardware used:

Processor : 2 GHz or above

Hard Disk : 80 GB

RAM : 1 GB

Operating System : Windows XP

Literature Survey

CLOUD COMPUTING: A SOLUTION TO INFORMATION SUPPORT SYSTEMS (ISS)

Muzafar Ahmad Bhat, Bashir Ahmad, Razeef Mohd Shah, Inayat Rasool Bhat

Information Support Systems (ISS) are computer technology/network support systems

that interactively support the information processing mechanisms for individuals and groups in

life, public, and private organizations, and other entities. Over some decades in the past,

organizations have put efforts to be at the forefront of the development and application of

computer-based Information Support Systems to collect, analyze and process the data and

generate information to support decisions. Various computing paradigms have been employed

for the purpose and needs have emerged for enormous infrastructure, unlimited system

accessibility, cost effectiveness, increased storage, increased automation, flexibility, system

mobility and shift of IT focus. This paper presents a brief evaluation of how the Cloud Computing paradigm can be used to meet the increasing demands of Information Support Systems and how it can prove to be a future solution for such systems.


PROOFS OF RETRIEVABILITY: THEORY AND IMPLEMENTATION

Kevin D. Bowers, Ari Juels, and Alina Oprea

A proof of retrievability (POR) is a compact proof by a file system (prover) to a client

(verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs

incur lower communication complexity than transmission of F itself, they are an attractive

building block for high-assurance remote storage systems. In this paper, we propose a

theoretical framework for the design of PORs. Our framework improves the previously

proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the

conceptual limitations of previous theoretical models for PORs. It supports a fully Byzantine

adversarial model, carrying only the restriction (fundamental to all PORs) that the adversary's error rate ε be bounded when the client seeks to extract F. Our techniques support efficient protocols across the full possible range of ε, up to ε non-negligibly close to 1. We propose a new variant on the Juels-Kaliski protocol and describe a prototype implementation. We demonstrate

practical encoding even for files F whose size exceeds that of client main memory.

DYNAMIC PROVABLE DATA POSSESSION

C. Chris Erway, Alptekin Küpçü, Charalampos Papamanthou, Roberto Tamassia

As storage-outsourcing services and resource-sharing networks have become popular,

the problem of efficiently proving the integrity of data stored at untrusted servers has received

increased attention. In the provable data possession (PDP) model, the client preprocesses the

data and then sends it to an untrusted server for storage, while keeping a small amount of

meta-data. The client later asks the server to prove that the stored data has not been tampered

with or deleted (without downloading the actual data). However, the original PDP scheme

applies only to static (or append-only) files.


The price of dynamic updates is a performance change from O(1) to O(log n) (or O(n^ε log n)), for a file consisting of n blocks, while maintaining the same (or better, respectively)

probability of misbehavior detection. Our experiments show that this slowdown is very low in

practice (e.g., 415KB proof size and 30ms computational overhead for a 1GB file). We also show

how to apply our DPDP scheme to outsourced file systems and version control systems (e.g.,

CVS).

PRIVACY-PRESERVING PUBLIC AUDITING FOR SECURE CLOUD STORAGE

Cong Wang, Sherman S.-M. Chow, Qian Wang, Kui Ren, and Wenjing Lou

Using Cloud Storage, users can remotely store their data and enjoy the on-demand high

quality applications and services from a shared pool of configurable computing resources,

without the burden of local data storage and maintenance. Thus, enabling public auditability

for cloud storage is of critical importance so that users can resort to a third party auditor (TPA)

to check the integrity of outsourced data and be worry-free. To securely introduce an effective

TPA, the auditing process should bring in no new vulnerabilities towards user data privacy, and

introduce no additional online burden to the user.

In this paper, we propose a secure cloud storage system supporting privacy-preserving

public auditing. We further extend our result to enable the TPA to perform audits for multiple

users simultaneously and efficiently. Extensive security and performance analysis show the

proposed schemes are provably secure and highly efficient.


Diagrams

Dataflow Diagrams:

LEVEL 1:

[Diagram: USER → User Registration / User Login → CRM SERVICE → DATABASE]


LEVEL 2:

[Diagram: USER → User Login → Access CRM service → Accessing data → ENCRYPTION SERVICE → DATABASE]


LEVEL 3:

[Diagram: USER → User Login → Access CRM service → Accessing data → ENCRYPTION SERVICE → Encrypting data → DATABASE; Data decryption → Decrypting data → Original records retrieved]


UML Diagrams:

Use case Diagram:


Class Diagram:


Sequence Diagram:


MODULES OF THE SYSTEM

List of Modules:

1. User Registration and Control

2. CRM Service

3. Encryption/Decryption Service

4. Accessing Storage service

User Registration and Control:

This module can also be used to register users for custom modules that support personalization and user-specific handling. If users wish to create their own user accounts, i.e. register, then registration checks for username availability and assigns a unique ID. User control means controlling the login by checking the username and password given during the registration process.
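As a rough sketch of the registration checks described above (the class and method names are illustrative, not the project's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of registration/login control: the username must be
// available, each new user gets a unique ID, and login checks the stored
// username/password pair. In-memory maps stand in for the database.
public class UserRegistry {
    private final Map<String, String> passwords = new HashMap<>();
    private final Map<String, Integer> ids = new HashMap<>();
    private int nextId = 1;

    // Returns the new unique ID, or -1 if the username is already taken.
    public int register(String username, String password) {
        if (passwords.containsKey(username)) return -1; // availability check
        passwords.put(username, password);
        ids.put(username, nextId);
        return nextId++;
    }

    // Login control: username and password must match the registered pair.
    public boolean login(String username, String password) {
        return password.equals(passwords.get(username));
    }

    public static void main(String[] args) {
        UserRegistry r = new UserRegistry();
        System.out.println(r.register("alice", "pw")); // 1
        System.out.println(r.register("alice", "x"));  // -1 (name taken)
        System.out.println(r.login("alice", "pw"));    // true
    }
}
```

A production service would of course hash the passwords rather than store them in plain text.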

After login, the user can encrypt the original data and store it in the database, and can retrieve the original data, which is decrypted after checking the unique ID and the searched data. Based on their logins, users have rights to view, edit, update, or delete the contents of resources. Part of the stored data is confidential, but when institutions store data on equipment provided by a cloud computing service provider, primary access to the data belongs not to the owner but to the cloud computing service provider. Therefore, the possibility that stored confidential data may be leaked cannot be ruled out. However, since the stored data is encrypted, hackers have no way to recover the original data.


CRM Service:

This module is customer relationship management, where the user can interact with the

application. CRM is concerned with the creation, development and enhancement of

individualised customer relationships with carefully targeted customers and customer groups

resulting in maximizing their total customer life-time value. CRM is a business strategy that aims

to understand, anticipate and manage the needs of an organisation’s current and potential

customers. It is a comprehensive approach which provides seamless integration of every area of

business that touches the customer, namely marketing, sales, customer services and field

support through the integration of people, process and technology.

CRM is a shift from traditional marketing as it focuses on the retention of customers in

addition to the acquisition of new customers. The expression Customer Relationship

Management (CRM) is becoming standard terminology, replacing what is widely perceived to

be a misleadingly narrow term, relationship marketing (RM). The main purpose of CRM is:

• The focus [of CRM] is on creating value for the customer and the company over the

longer term.

• When customers value the customer service that they receive from suppliers, they are

less likely to look to alternative suppliers for their needs.

• CRM enables organisations to gain ‘competitive advantage’ over competitors that

supply similar products or services.


CRM consists of an index page, registration page, login page, etc. Through this, the user can register with their details; after registration the user can send the original data, which is encrypted and stored in the database. The user can retrieve the stored original data only after the encrypted data has been decrypted by supplying the decryption key.

Encryption/Decryption Service:

This module describes the encryption and decryption process for the original data. Encryption is needed while storing the data, and decryption is needed while retrieving it. After the user's login has been successfully verified, if the CRM Service System requires client information from the user, it sends a request for the information (for encryption and decryption) to the Storage Service System.

Encryption: In the data storage service, the CRM Service System transmits the user ID to the Storage Service System, where it searches for the user's data. Once this original data is found, a request is sent to the Encryption/Decryption Service System along with the user ID; that is, the Storage Service System transmits the client data and the user ID to the Encryption/Decryption Service System. Here, the original data sent by the user is encrypted and stored in the storage service as per the user's request. Because the data is confidential and encrypted, it cannot be hacked by unauthorized parties.

Decryption: In the data retrieval service, if the user requests the CRM service to retrieve data stored in the Storage service, the CRM sends the user ID and the search data to the Encryption/Decryption Service System. It authenticates whether the user ID and search data are owned by the same user. If authenticated, the encrypted data from the storage service system is sent to the Encryption/Decryption Service System for the decryption process.


In that process, the system checks the decryption key; if it is valid, it decrypts the encrypted data, and the retrieved original data is sent to the user.
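A minimal sketch of the encrypt/decrypt round trip, using the standard javax.crypto API with AES-GCM. The document does not specify the cipher, so the algorithm choice, key handling, and class names here are assumptions for illustration only:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Illustrative Encryption/Decryption Service: AES-GCM with a fresh random
// IV per message; the IV is prepended to the ciphertext so decryption is
// self-contained. NOT the project's actual code.
public class CryptoService {
    private static final SecureRandom RNG = new SecureRandom();

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return kg.generateKey();
    }

    // Returns a 12-byte IV followed by the ciphertext (with GCM tag).
    public static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        byte[] iv = new byte[12];
        RNG.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] ivAndCt) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(128, Arrays.copyOf(ivAndCt, 12)));
        return c.doFinal(ivAndCt, 12, ivAndCt.length - 12);
    }

    public static void main(String[] args) throws Exception {
        SecretKey k = newKey();
        byte[] ct = encrypt(k, "client record".getBytes());
        System.out.println(new String(decrypt(k, ct))); // client record
    }
}
```

In the real service, the system would also verify that the requesting user ID owns the data before handing it to decryption.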

Accessing Storage service

This module describes how the data is stored in and retrieved from the database. The original data given by the user is encrypted before the storage request; the storage service system stores the encrypted data together with the user ID to prevent misuse of the data. During retrieval, the user requests the data by supplying the search data; the storage service system checks that the user ID and search data match, and if so sends the encrypted data to the Encryption/Decryption Service System, which decrypts the data and sends it to the user. The user always interacts with the database through the CRM service only.

The user’s goal in logging into the CRM Service System is possibly to maintain part of the

client data, thus the system design must take data maintenance into consideration. Feasible

design methods include matching the encrypted client data with the corresponding user ID and

client ID, thus allowing for the indexing of the user ID to obtain the corresponding client data.

Then the client ID can be used to index the client data the user wishes to maintain. Considering

the massive amount of client data, search efficiency could be improved by combining the user

ID and client ID to form a combined ID used for searching for a specific client’s data.
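The combined-ID lookup described above can be sketched as follows; the class name and key format are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the combined-ID index: user ID and client ID are joined into
// one key, so a single map lookup finds a specific client's (encrypted)
// record. An in-memory map stands in for the storage service.
public class CombinedIndex {
    private final Map<String, byte[]> store = new HashMap<>();

    private static String key(String userId, String clientId) {
        return userId + ":" + clientId; // the combined ID
    }

    public void put(String userId, String clientId, byte[] encryptedRecord) {
        store.put(key(userId, clientId), encryptedRecord);
    }

    // Returns null unless a record was stored for this exact user/client pair.
    public byte[] get(String userId, String clientId) {
        return store.get(key(userId, clientId));
    }

    public static void main(String[] args) {
        CombinedIndex idx = new CombinedIndex();
        idx.put("u1", "c9", new byte[]{1, 2});
        System.out.println(idx.get("u1", "c9") != null); // true
        System.out.println(idx.get("u2", "c9"));         // null
    }
}
```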

In the new business model, multiple cloud service operators jointly serve their clients

through existing information technologies including various application systems such as ERP,

accounting software, portfolio selection and financial operations which may require the user ID

to be combined with other IDs for indexing stored or retrieved data. In addition, the foregoing

description of the two systems can use Web Service related technology to achieve operational

synergies and data exchange goals.


JSP:

JavaServer Pages (JSP) is a Java technology that allows software developers to

dynamically generate HTML, XML or other types of documents in response to a Web client

request. The technology allows Java code and certain pre-defined actions to be embedded into

static content.

The JSP syntax adds additional XML-like tags, called JSP actions, to be used to invoke

built-in functionality. Additionally, the technology allows for the creation of JSP tag libraries

that act as extensions to the standard HTML or XML tags. Tag libraries provide a platform

independent way of extending the capabilities of a Web server.

JSPs are compiled into Java Servlets by a JSP compiler. A JSP compiler may generate a

servlet in Java code that is then compiled by the Java compiler, or it may generate byte code for

the servlet directly. JSPs can also be interpreted on the fly, reducing the time taken to reload changes.

JavaServer Pages (JSP) technology provides a simplified, fast way to create dynamic web

content. JSP technology enables rapid development of web-based applications that are server-

and platform-independent.
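A minimal JSP page illustrating the mix of static content and embedded Java described above (the page and parameter names are made up for illustration):

```jsp
<%-- Static HTML with embedded Java generating the dynamic part. --%>
<html>
  <body>
    <h1>Welcome</h1>
    <% String user = request.getParameter("user");
       if (user == null) user = "guest"; %>
    <p>Hello, <%= user %>! Server time: <%= new java.util.Date() %></p>
  </body>
</html>
```

Deployed in a container such as Tomcat, this page is compiled to a servlet the first time it is requested.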


The Advantages of JSP:

Active Server Pages (ASP). ASP is a similar technology from Microsoft. The advantages

of JSP are twofold. First, the dynamic part is written in Java, not Visual Basic or other

MS-specific language, so it is more powerful and easier to use. Second, it is portable to

other operating systems and non-Microsoft Web servers.

Pure Servlets. JSP doesn't give you anything that you couldn't in principle do with a

servlet. But it is more convenient to write (and to modify!) regular HTML than to have a

zillion println statements that generate the HTML. Plus, by separating the look from the

content you can put different people on different tasks: your Web page design experts

can build the HTML, leaving places for your servlet programmers to insert the dynamic

content.

Server-Side Includes (SSI). SSI is a widely-supported technology for including externally-

defined pieces into a static Web page. JSP is better because it lets you use servlets

instead of a separate program to generate that dynamic part. Besides, SSI is really only

intended for simple inclusions, not for "real" programs that use form data, make

database connections, and the like.

JavaScript. JavaScript can generate HTML dynamically on the client. This is a useful

capability, but only handles situations where the dynamic information is based on the

client's environment. With the exception of cookies, HTTP and form submission data is

not available to JavaScript. And, since it runs on the client, JavaScript can't access server-

side resources like databases, catalogs, pricing information, and the like.


Static HTML. Regular HTML, of course, cannot contain dynamic information. JSP is so

easy and convenient that it is quite feasible to augment HTML pages that only benefit

marginally by the insertion of small amounts of dynamic data. Previously, the cost of

using dynamic data would preclude its use in all but the most valuable instances.

SERVLETS

The Java Servlet API allows a software developer to add dynamic content to a Web

server using the Java platform. The generated content is commonly HTML, but may be other

data such as XML. Servlets are the Java counterpart to non-Java dynamic Web content

technologies such as PHP, CGI and ASP.NET. Servlets can maintain state across many server

transactions by using HTTP cookies, session variables or URL rewriting.

The Servlet API, contained in the Java package hierarchy javax.servlet, defines the

expected interactions of a Web container and a servlet. A Web container is essentially the

component of a Web server that interacts with the servlets. The Web container is responsible

for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that

the URL requester has the correct access rights.

A Servlet is an object that receives a request and generates a response based on that

request. The basic servlet package defines Java objects to represent servlet requests and

responses, as well as objects to reflect the servlet's configuration parameters and execution

environment. The package javax.servlet.http defines HTTP-specific subclasses of the generic


servlet elements, including session management objects that track multiple requests and

responses between the Web server and a client. Servlets may be packaged in a WAR file as a

Web application.

Servlets can be generated automatically by JavaServer Pages (JSP), or alternately by

template engines such as WebMacro. Often servlets are used in conjunction with JSPs in a

pattern called "Model 2", which is a flavor of the model-view-controller pattern.

Servlets are Java technology's answer to CGI programming. They are programs that run on a

Web server and build Web pages. Building Web pages on the fly is useful (and commonly done)

for a number of reasons:

The Web page is based on data submitted by the user. For example the results pages

from search engines are generated this way, and programs that process orders for e-

commerce sites do this as well.

The data changes frequently. For example, a weather-report or news headlines page

might build the page dynamically, perhaps returning a previously built page if it is still up

to date.

The Web page uses information from corporate databases or other such sources. For

example, you would use this for making a Web page at an on-line store that lists current

prices and number of items in stock.


The Servlet Run-time Environment:

A servlet is a Java class and therefore needs to be executed in a Java VM by a service we

call a servlet engine.

The servlet engine loads the servlet class the first time the servlet is requested, or

optionally already when the servlet engine is started. The servlet then stays loaded to handle

multiple requests until it is explicitly unloaded or the servlet engine is shut down.

Some Web servers, such as Sun's Java Web Server (JWS), W3C's Jigsaw and Gefion

Software's LiteWebServer (LWS) are implemented in Java and have a built-in servlet engine.

Other Web servers, such as Netscape's Enterprise Server, Microsoft's Internet Information

Server (IIS) and the Apache Group's Apache, require a servlet engine add-on module. The

add-on intercepts all requests for servlets, executes them and returns the response through the

Web server to the client. Examples of servlet engine add-ons are Gefion Software's

WAICoolRunner, IBM's WebSphere, Live Software's JRun and New Atlanta's ServletExec.

All Servlet API classes and a simple servlet-enabled Web server are combined into the Java Servlet Development Kit (JSDK), available for download at Sun's official Servlet site. To get started with servlets I recommend that you download the JSDK and play around with the sample servlets.


Life Cycle of a Servlet

The Servlet lifecycle consists of the following steps:

1. The Servlet class is loaded by the container during start-up.

2. The container calls the init() method. This method initializes the servlet and must be

called before the servlet can service any requests. In the entire life of a servlet, the

init() method is called only once.

3. After initialization, the servlet can service client-requests. Each request is serviced in

its own separate thread. The container calls the service() method of the servlet for

every request. The service() method determines the kind of request being made and

dispatches it to an appropriate method to handle the request. The developer of the

servlet must provide an implementation for these methods. If a request for a

method that is not implemented by the servlet is made, the method of the parent

class is called, typically resulting in an error being returned to the requester.

4. Finally, the container calls the destroy() method which takes the servlet out of

service. The destroy() method like init() is called only once in the lifecycle of a

Servlet.
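The four lifecycle steps can be imitated in plain Java. This sketch deliberately avoids the real javax.servlet API (which needs a container to run); the class names are illustrative, and only the shape of the calls matches the lifecycle above:

```java
// Plain-Java imitation of the servlet lifecycle: init once, service per
// request, destroy once. Not the real javax.servlet API.
public class LifecycleDemo {

    static class MiniServlet {
        private int requests;

        void init() { requests = 0; }                  // step 2: called once

        String service(String request) {               // step 3: per request
            requests++;
            return "handled " + request + " (#" + requests + ")";
        }

        void destroy() { /* step 4: release resources */ }
    }

    // The "container": load the servlet, init it, dispatch requests, destroy.
    public static void main(String[] args) {
        MiniServlet s = new MiniServlet();             // step 1: class loaded
        s.init();
        System.out.println(s.service("/index"));       // handled /index (#1)
        System.out.println(s.service("/data"));        // handled /data (#2)
        s.destroy();
    }
}
```

In a real container, each service() call would additionally run in its own thread, as noted in step 3.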


MODEL VIEW CONTROLLER (MVC)

Model-view-controller (MVC) is an architectural pattern, which at the same time is also

a Multitier architecture, used in software engineering. In complex computer applications that

present a large amount of data to the user, a developer often wishes to separate data (model)

and user interface (view) concerns, so that changes to the user interface will not affect data

handling, and that the data can be reorganized without changing the user interface. The model-

view-controller solves this problem by decoupling data access and business logic from data

presentation and user interaction, by introducing an intermediate component: the controller.

Pattern description:

It is common to split an application into separate layers: presentation (UI), domain logic,

and data access. In MVC the presentation layer is further separated into view and controller.

MVC encompasses more of the architecture of an application than is typical for a design

pattern.

Model

The domain-specific representation of the information on which the application

operates. Domain logic adds meaning to raw data (e.g., calculating whether today is the

user's birthday, or the totals, taxes, and shipping charges for shopping cart items).

Many applications use a persistent storage mechanism (such as a database) to store

data. MVC does not specifically mention the data access layer because it is understood

to be underneath or encapsulated by the Model.


View

Renders the model into a form suitable for interaction, typically a user interface

element. Multiple views can exist for a single model for different purposes.

Controller

Processes and responds to events, typically user actions, and may invoke

changes on the model.

MVC is often seen in web applications, where the view is the actual HTML page, and the

controller is the code that gathers dynamic data and generates the content within the

HTML. Finally, the model is represented by the actual content, usually stored in a

database or XML files.

Though MVC comes in different flavors, control flow generally works as follows:

1. The user interacts with the user interface in some way (e.g., presses a button).

2. A controller handles the input event from the user interface, often via a registered

handler or callback.

3. The controller accesses the model, possibly updating it in a way appropriate to the

user's action (e.g., controller updates user's Shopping cart).[3]

4. A view uses the model (indirectly) to generate an appropriate user interface (e.g., the

view produces a screen listing the shopping cart contents). The view gets its own data

from the model. The model has no direct knowledge of the view.

5. The user interface waits for further user interactions, which begins the cycle anew.
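The control flow above can be condensed into a toy shopping-cart example (all class names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Compact MVC sketch: the controller handles the user action and updates
// the model; the view renders from the model and never mutates it.
public class MvcDemo {

    static class CartModel {                       // model: the data
        private final List<String> items = new ArrayList<>();
        void add(String item) { items.add(item); }
        List<String> items() { return items; }
    }

    static class CartController {                  // controller: handles input
        private final CartModel model;
        CartController(CartModel m) { model = m; }
        void onAddButton(String item) { model.add(item); } // step 3
    }

    static class CartView {                        // view: renders the model
        String render(CartModel m) { return "Cart: " + m.items(); } // step 4
    }

    public static void main(String[] args) {
        CartModel model = new CartModel();
        CartController controller = new CartController(model);
        controller.onAddButton("book");            // steps 1-2: user action
        System.out.println(new CartView().render(model)); // Cart: [book]
    }
}
```

Note that the model has no reference to the view or controller, matching the point in step 4 that the model has no direct knowledge of the view.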


Benefits of the MVC design pattern:

The application of the model-view-controller division to the development of dynamic Web

applications has several benefits:

You can distribute development effort to some extent, so that implementation

changes in one part of the Web application do not require changes to another. The

developers responsible for writing the business logic can work independently of the

developers responsible for the flow of control, and Web-page designers can work

independently of the developers.

You can more easily prototype your work. You might do as follows, for example:

1. Create a prototype Web application that accesses several workstation-based

programs.

2. Change the application in response to user feedback.

3. Implement the production-level programs on the same or other platforms.

Outside of the work you do on the programs themselves, your only adjustments are to

configuration files or name-server content, not to other source code.

You can more easily migrate legacy programs, because the view is separate from the

model and the control and can be tailored to platform and user category.

You can maintain an environment that comprises different technologies across

different locations.

The MVC design has an organizational structure that better supports scalability

(building bigger applications) and ease of modification and maintenance (due to the

cleaner separation of tasks).


Feasibility Study

A feasibility study is the test of a system proposal according to its workability, impact on the organization, ability to meet user needs, and effective use of resources. It focuses on the evaluation of the existing system and procedures, analysis of alternative candidate systems, and cost estimates. Feasibility analysis was done to determine whether the system would be feasible.

The development of a computer-based system or a product is often plagued by resource and delivery-date constraints. A feasibility study helps the analyst decide whether to proceed, amend, postpone, or cancel the project, which is particularly important when the project is large, complex, and costly. Once the analysis of the user requirements is complete, the system has to be checked for the compatibility and feasibility of the software package that is aimed at. An important outcome of the preliminary investigation is the determination that the system requested is feasible.

Technical Feasibility:

The technology used can be developed with the current equipment and has the technical capacity to hold the data required by the new system.

This technology supports the modern trends of technology.

Easily accessible, more secure technologies.

Technical feasibility also considers the existing system and to what extent it can support the proposed addition. New modules can be added easily without affecting the core program. Most parts run in the server using the concept of stored procedures.


Operational Feasibility:

The proposed system can be easily implemented, as it is based on JSP (Java) and HTML. The database is created with MySQL server, which is secure and easy to handle. The resources required to implement and install it are available, and the personnel of the organization already have enough exposure to computers. So the project is operationally feasible.

Economical Feasibility:

Economic analysis is the most frequently used method for evaluating the effectiveness of a new system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with the costs. If benefits outweigh costs, the decision is made to design and implement the system; an entrepreneur must accurately weigh cost versus benefit before taking action. This system is economically feasible, since it relies on freely available software (Java, Apache Tomcat, MySQL). So it is economically a good project.

Java (programming language)

Java is a programming language originally developed by James Gosling at Sun Microsystems (now a subsidiary of Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java applications are typically compiled to bytecode (class files) that can run on any Java Virtual Machine (JVM) regardless of computer architecture. Java is general-purpose, concurrent, class-based, and object-oriented, and is specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere". Java is considered by many to be one of the most influential programming languages of the 20th century, and it is widely used in everything from application software to web applications.

The original and reference implementation Java compilers, virtual machines, and class libraries were developed by Sun from 1995. As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java and GNU Classpath.

J2EE application

A J2EE application or a Java 2 Platform Enterprise Edition application is any deployable unit

of J2EE functionality. This can be a single J2EE module or a group of modules packaged into

an EAR file along with a J2EE application deployment descriptor. J2EE applications are typically

engineered to be distributed across multiple computing tiers.

Enterprise applications can consist of the following:

EJB modules (packaged in JAR files);

Web modules (packaged in WAR files);

Connector modules or resource adapters (packaged in RAR files);

Session Initiation Protocol (SIP) modules (packaged in SAR files);

Application client modules;

Additional JAR files containing dependent classes or other components required by the application;

Any combination of the above.
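As an illustration of such packaging, a hypothetical EAR layout might look like this (all module and file names below are assumptions for the example, not part of this project):

```
myapp.ear
├── META-INF/application.xml      (J2EE application deployment descriptor)
├── accounts-ejb.jar              (EJB module)
├── storefront.war                (Web module)
├── legacy-adapter.rar            (connector module / resource adapter)
└── util-classes.jar              (dependent classes shared by the modules)
```

The deployment descriptor in META-INF/application.xml is what tells the application server which of the bundled archives are modules and how to deploy them.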


Servlet

Java Servlet technology provides Web developers with a simple, consistent mechanism

for extending the functionality of a Web server and for accessing existing business systems.

Servlets are server-side Java EE components that generate responses (typically HTML pages) to

requests (typically HTTP requests) from clients. A servlet can almost be thought of as an applet that runs on the server side, without a user interface.

// Hello.java
import java.io.*;
import javax.servlet.*;

public class Hello extends GenericServlet {

    public void service(ServletRequest request, ServletResponse response)
            throws ServletException, IOException {
        // Tell the client that the response body is HTML.
        response.setContentType("text/html");
        final PrintWriter pw = response.getWriter();
        pw.println("Hello, world!");
        // Closing the writer flushes the buffered response back to the client.
        pw.close();
    }
}

The import statements direct the Java compiler to include all of the public classes and

interfaces from the java.io and javax.servlet packages in the compilation.

The Hello class extends the GenericServlet class; the GenericServlet class provides the interface

for the server to forward requests to the servlet and control the servlet's lifecycle.

The Hello class overrides the service(ServletRequest, ServletResponse) method defined by the

Servlet interface to provide the code for the service request handler. The service() method is

passed a ServletRequest object that contains the request from the client and a

ServletResponse object used to create the response returned to the client. The service()


method declares that it throws the exceptions ServletException and IOException if a problem

prevents it from responding to the request.

The setContentType(String) method in the response object is called to set the MIME content

type of the returned data to "text/html". The getWriter() method in the response returns a

PrintWriter object that is used to write the data that is sent to the client. The println(String)

method is called to write the "Hello, world!" string to the response and then the close()

method is called to close the print writer, which causes the data that has been written to the

stream to be returned to the client.
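Before clients can reach the servlet, the container must also know which URLs route to it. A minimal sketch of such a declaration in the web.xml deployment descriptor follows; the servlet name and the /hello URL pattern are assumptions chosen for illustration:

```xml
<!-- Hypothetical entries for the Hello servlet; names and pattern are illustrative -->
<servlet>
    <servlet-name>Hello</servlet-name>
    <servlet-class>Hello</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Hello</servlet-name>
    <url-pattern>/hello</url-pattern>
</servlet-mapping>
```

With such a mapping in place, a request to /hello would be forwarded by the container to the Hello servlet's service() method.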

Testing:

The various levels of testing are:

1. White Box Testing

2. Black Box Testing

3. Unit Testing

4. Functional Testing

5. Performance Testing

6. Integration Testing

7. Validation Testing

8. System Testing

9. Structure Testing

10. Output Testing

11. User Acceptance Testing


White Box Testing

White box testing exercises the internal logic of the program, aiming to execute every path in the program.

Black Box Testing

Black box testing examines the program through its inputs and outputs without regard to its internal structure; exhaustive input testing would be required to find all errors.

Unit Testing

Unit testing, also known as module testing, focuses verification efforts on the module. The module is tested separately, and this is carried out at the programming stage itself.

A unit test comprises the set of tests performed by an individual programmer before integration of the unit into the system.

Unit testing focuses on the smallest unit of software design: the software component or module.

Using the component-level design, important control paths are tested to uncover errors within the boundary of the module.

Unit testing is white box oriented, and the step can be conducted in parallel for multiple components.
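As a concrete illustration of such a programmer-written test, here is a minimal plain-Java sketch. The add method is a hypothetical unit under test, and real projects would normally use a framework such as JUnit rather than a hand-rolled check method:

```java
// Minimal unit-test sketch in plain Java; class and method names are illustrative.
public class CalculatorTest {

    // The unit under test: a small, self-contained module.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Exercise the unit with normal and boundary values.
        check(add(2, 3) == 5, "add(2, 3) should be 5");
        check(add(0, 0) == 0, "add(0, 0) should be 0");
        check(add(Integer.MAX_VALUE, 0) == Integer.MAX_VALUE, "boundary value");
        System.out.println("All unit tests passed.");
    }

    static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }
}
```

Because each test exercises only one module in isolation, such tests can be written and run in parallel for multiple components, as noted above.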


Functional Testing:

Functional test cases involve exercising the code with normal input values for which the expected results are known, as well as with boundary values.

Objective:

The objective is to take unit-tested modules and build a program structure that has

been dictated by design.

Performance Testing:

Performance testing determines the amount of execution time spent in various parts of the unit, as well as the program throughput, response time, and device utilization of the program unit. It occurs throughout all steps in the testing process.
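A simple way to measure the execution time spent in one part of a unit is to take timestamps around the work. The sketch below uses the standard System.nanoTime() call; the sumTo workload is a hypothetical stand-in for a real program unit:

```java
// Sketch of timing one unit of work; the workload is illustrative only.
public class PerfCheck {

    // Hypothetical program unit whose execution time we want to measure.
    static long sumTo(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();        // timestamp before the work
        long result = sumTo(1_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("result = " + result);
        System.out.println("elapsed ~" + elapsedMs + " ms");
    }
}
```

Repeating such measurements across units gives the per-part execution-time and throughput figures that performance testing looks for.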

Integration Testing:

It is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces.

It takes the unit-tested modules and builds a program structure.

All the modules are combined and tested as a whole: the components are integrated to form the entire system, and an overall test is executed.


Validation Testing:

A validation test succeeds when the software functions in a manner that can be reasonably expected by the client.

Software validation is achieved through a series of black box tests that demonstrate conformance to the requirements.

Black box testing is conducted at the software interface.

The tests are designed to uncover interface errors and also to demonstrate that the software functions are operational: input is properly accepted, outputs are produced, and the integrity of external information is maintained.

System Testing:

System testing looks for discrepancies between the system and its original objectives, its current specifications, and the system documentation.

Structure Testing:

It is concerned with exercising the internal logic of a program and traversing

particular execution paths.

Output Testing:


The output of test cases is compared with the expected results created during the design of the test cases.

The output generated or displayed by the system under consideration is tested by asking the users about the format they require.

Here, the output format is considered in two ways: one is on screen and the other is the printed format.

The output on the screen is found to be correct, as the format was designed in the system design phase according to user needs, and the printed output conforms to the requirements specified for the user's hard copy.
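The comparison step above can be sketched in a few lines of Java. The report format and values here are hypothetical examples, not this project's actual output:

```java
// Sketch of output testing: the output the system produces is compared
// with the expected result fixed during test-case design.
public class OutputCheck {

    // Hypothetical unit that produces the on-screen output under test.
    static String formatResult(String user, int score) {
        return "User: " + user + " | Score: " + score;
    }

    public static void main(String[] args) {
        // Expected output, created during the design of the test case.
        String expected = "User: alice | Score: 90";
        String actual = formatResult("alice", 90);
        System.out.println(expected.equals(actual)
                ? "Output matches the designed format"
                : "Output differs from the designed format");
    }
}
```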

User acceptance Testing:

This is the final stage before handing the system over to the customer; it is usually carried out by the customer, and the test cases are executed with actual data.

The system under consideration is tested for user acceptance by keeping in constant touch with the prospective system users during development and making changes whenever required.

It involves planning and executing various types of tests in order to demonstrate that the implemented software system satisfies the requirements stated in the requirements document.

Two sets of acceptance tests are to be run:

1. Those developed by the quality assurance group.

2. Those developed by the customer.


Reference:

[1] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring Data Storage Security in Cloud Computing," Proc. 17th Int'l Workshop Quality of Service (IWQoS '09), pp. 1-9, July 2009.

[2] Amazon.com, "Amazon Web Services (AWS)," http://aws.amazon.com, 2009.

[3] Sun Microsystems, Inc., "Building Customer Trust in Cloud Computing with Transparent Security," https://www.sun.com/offers/details/sun_transparency.xml, Nov. 2009.

[4] K. Ren, C. Wang, and Q. Wang, "Security Challenges for the Public Cloud," IEEE Internet Computing, vol. 16, no. 1, pp. 69-73, 2012.

[5] M. Arrington, "Gmail Disaster: Reports of Mass Email Deletions," http://www.techcrunch.com/2006/12/28/gmail-disasterreportsof-mass-email-deletions, Dec. 2006.
