DESIGN AND IMPLEMENTATION OF AN INTRUSION RESPONSE SYSTEM FOR RELATIONAL DATABASES:

ABSTRACT: Database security is the system, processes, and procedures that protect a database from unintended activity. Unintended activity can be categorized as authenticated misuse, malicious attacks or inadvertent mistakes made by authorized individuals or processes. Database security is also a specialty within the broader discipline of computer security. Information is the most critical resource for many organizations. In many cases, the success of an organization depends on the availability of key information and, therefore, on the systems used to store and manage the data supporting that information. The protection of data against unauthorized access or corruption due to malicious actions is one of the main problems faced by system administrators. Due to the growth of networked data, security attacks have become a dominant problem in practically all information infrastructures.

In information security, intrusion detection is the act of detecting actions that attempt to compromise the confidentiality, integrity, or availability of a resource. When intrusion detection takes a preventive measure without direct human intervention, it becomes an intrusion response system. We propose the notion of database response policies to support our intrusion response system, tailored for a DBMS. The two main issues that we address in the context of such response policies are Policy Matching and Policy Administration. For the policy matching problem, we propose two algorithms that efficiently search the policy database for policies that match an anomalous request. The other issue that we address is the administration of response policies to prevent malicious modifications to policy objects by legitimate users. We propose a novel Joint Threshold Administration Model (JTAM) that is based on the principle of separation of duty. The key idea in JTAM is that a policy object is jointly administered by at least k database administrators (DBAs), that is, any modification made to a policy object is invalid unless it has been authorized by at least k DBAs.

MODULES:

1. Policy Language.

2. Attributes and Conditions.

3. Response Actions.

4. RSA Public-Private Keys.

5. Policy Administration.
   5.a Policy Creation.
   5.b Policy Suspension.
   5.c Policy Drop.

6. Policy Matching.

PROJECT DESCRIPTION:

Our approach to an ID mechanism consists of two main elements, specifically tailored to a DBMS: an anomaly detection (AD) system and an anomaly response system. The first element is based on the construction of database access profiles of roles and users, and on the use of such profiles for the task of anomaly detection; a user request that does not conform to the normal access profiles is characterized as anomalous. The two main issues that we address in the context of such response policies are policy matching and policy administration. Policy matching is the problem of searching for policies applicable to an anomalous request. When an anomaly is detected, the response system must search through the policy database and find policies that match the anomaly. Our ID mechanism is a real-time intrusion detection and response system; thus, the efficiency of the policy search procedure is crucial. We present two efficient algorithms that take the anomalous request details as input and search through the policy database to find the matching policies. We implement our policy matching scheme in the DBMS and discuss relevant implementation issues. We also report experimental results that show that our techniques are very efficient.

MODULE DESCRIPTION:

1. Policy Language:

The detection of an anomaly by the detection engine can be considered a system event. The attributes of the anomaly, such as user, role, and SQL command, then correspond to the environment surrounding such an event. Intuitively, a policy can be specified taking into account the anomaly attributes to guide the response engine in taking a suitable action. Keeping this in mind, we propose an Event-Condition-Action (ECA) language for specifying response policies. Later in this section, we extend the ECA language to support novel response semantics. ECA rules have been widely investigated in the field of active databases.
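As an illustration, a policy in this ECA style pairs the anomaly-detection event with a condition over the anomaly attributes and a response action. The concrete syntax below is only illustrative; it is not the exact grammar used by the system:

```
ON    ANOMALY DETECTION
IF    Role != DBA  AND  ObjType = table  AND  SQLCmd = Select
THEN  disconnect user
```

Here the event is the detection of the anomaly itself, the condition is a conjunction of predicates over anomaly attributes, and the action is one of the response actions discussed below.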

2. ATTRIBUTES AND CONDITIONS:

The anomaly detection mechanism provides its assessment of the anomaly using the anomaly attributes. We have identified two main categories for such attributes. The first category, referred to as the contextual category, includes all attributes describing the context of the anomalous request, such as user, role, source, and time. The second category, referred to as the structural category, includes all attributes conveying information about the structure of the anomalous request, such as the SQL command and the accessed database objects. Details concerning these attributes are reported in Table 1. The detection engine submits its characterization of the anomaly using the anomaly attributes. Therefore, the anomaly attributes also act as an interface for the response engine, thereby hiding the internals of the detection mechanism. Note that the list of anomaly attributes provided here is not exhaustive. Our implementation of the response system can be configured to include/exclude other user-defined anomaly attributes.

A response policy condition is a conjunction of predicates, where each predicate is specified against a single anomaly attribute. Note that to minimize the overhead of the policy matching procedure, we do not support disjunctions between predicates of different attributes, such as SQLCmd = Select OR IPAddress = 10.10.21.200. However, disjunctions between predicates of the same attribute are still supported; for example, an administrator can create a policy with the condition SQLCmd = Select OR SQLCmd = Insert.

3. RESPONSE ACTIONS:

Once a database request has been flagged as anomalous, an action is executed by the response system to address the anomaly. The response action to be executed is specified as part of a response policy. Table 2 presents a taxonomy of response actions supported by our system. The conservative actions are low-severity actions. Such actions may log the anomaly details or send an alert, but they do not proactively prevent an intrusion. Aggressive actions, on the other hand, are high-severity responses. Such actions are capable of preventing an intrusion proactively by dropping the request, disconnecting the user, or revoking/denying the necessary privileges. Fine-grained response actions are neither too conservative nor too aggressive. Such actions may suspend or taint an anomalous request. A suspended request is simply put on hold until some specific actions are executed by the user, such as the execution of further authentication steps.
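The condition semantics described above (AND across different attributes, OR only among values of the same attribute) can be sketched in Java as follows. The class and method names are our own illustration, not part of the actual system:

```java
import java.util.*;

/** Minimal sketch of a response-policy condition: a conjunction of
 *  predicates, where each predicate may hold several allowed values
 *  for ONE attribute (same-attribute disjunction), but no disjunction
 *  across different attributes is supported. */
class PolicyCondition {
    // attribute name -> set of allowed values (OR within one attribute)
    private final Map<String, Set<String>> predicates = new HashMap<>();

    PolicyCondition allow(String attribute, String... values) {
        predicates.computeIfAbsent(attribute, k -> new HashSet<>())
                  .addAll(Arrays.asList(values));
        return this;
    }

    /** The condition matches when every predicate is satisfied (AND). */
    boolean matches(Map<String, String> anomaly) {
        for (Map.Entry<String, Set<String>> p : predicates.entrySet()) {
            String observed = anomaly.get(p.getKey());
            if (observed == null || !p.getValue().contains(observed))
                return false;
        }
        return true;
    }
}
```

A condition built with `allow("SQLCmd", "Select", "Insert")` models SQLCmd = Select OR SQLCmd = Insert, while separate `allow` calls for different attributes are ANDed together.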

4. RSA PUBLIC-PRIVATE KEYS:

JTAM Setup: Before the response policies can be used, some security parameters are registered with the DBMS as part of a one-time registration phase. The details of the registration phase are as follows: The parameter l is set equal to the number of DBAs registered with the DBMS. This requirement allows any DBA to generate a valid signature share on a policy object, thereby making our approach very flexible. Shoup's scheme also requires a trusted dealer to generate the security parameters. This is because it relies on a special property of the RSA modulus, namely, that it must be the product of two safe primes. We assume the DBMS to be the trusted component that generates the security parameters. For all values of k such that 2 ≤ k ≤ l − 1, the DBMS generates the following parameters.

RSA Public-Private Keys: The DBMS chooses p, q as two large prime numbers such that p = 2p′ + 1 and q = 2q′ + 1, where p′ and q′ are themselves large primes. Let n = p·q be the RSA modulus, and let m = p′·q′. The DBMS also chooses e as the RSA public exponent such that e > l. Thus, the RSA public key is PK = (n, e). The server also computes the private key d ∈ Z such that d·e ≡ 1 (mod m).
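This key setup (safe primes p = 2p′ + 1 and q = 2q′ + 1, with d computed modulo m = p′q′) can be sketched with java.math.BigInteger. The class and method names are our own, and the bit length here is deliberately tiny for illustration; a real deployment would use a much larger modulus:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

/** Sketch of the JTAM registration parameters: an RSA modulus n = p*q
 *  built from safe primes, a prime public exponent e > l, and the
 *  private exponent d with d*e == 1 (mod m), where m = p'*q'. */
class JtamKeySetup {
    final BigInteger n, e, d, m;

    JtamKeySetup(int bits, int l, SecureRandom rnd) {
        BigInteger pPrime = sophieGermain(bits, rnd);            // p'
        BigInteger qPrime;
        do { qPrime = sophieGermain(bits, rnd); } while (qPrime.equals(pPrime));
        BigInteger p = pPrime.shiftLeft(1).add(BigInteger.ONE);  // p = 2p' + 1
        BigInteger q = qPrime.shiftLeft(1).add(BigInteger.ONE);  // q = 2q' + 1
        n = p.multiply(q);                                       // RSA modulus
        m = pPrime.multiply(qPrime);                             // m = p'q'
        // choose a prime public exponent e > l that is coprime to m
        BigInteger cand = BigInteger.valueOf(l).add(BigInteger.ONE);
        while (!cand.isProbablePrime(30) || !cand.gcd(m).equals(BigInteger.ONE))
            cand = cand.add(BigInteger.ONE);
        e = cand;
        d = e.modInverse(m);                                     // d*e == 1 (mod m)
    }

    /** Find a prime p' such that 2p' + 1 is also prime. */
    private static BigInteger sophieGermain(int bits, SecureRandom rnd) {
        while (true) {
            BigInteger c = BigInteger.probablePrime(bits, rnd);
            if (c.shiftLeft(1).add(BigInteger.ONE).isProbablePrime(30)) return c;
        }
    }
}
```

The setup can then be checked by verifying that d·e reduces to 1 modulo m, which is exactly the relation the DBMS relies on when combining signature shares.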

5. POLICY ADMINISTRATION:

[Figure: policy state transition diagram]

5.a. POLICY CREATION:

The policy creation command has the following format: Create Response Policy [Policy Data] Jointly Administered By k Users; [Policy Data] refers to the interactive ECA response policy conditions and actions. Suppose that DBA1 issues such a command and that k = 3 and l = 5. DBA1 becomes the owner of the newly created policy object. The newly created policy will be administered by three users (including the owner). We define an administrator of a policy as a user that has owner-like privileges on the policy object. Owner-like privileges means that the user has all privileges on the object along with the ability to grant these privileges to other users. Note that the DBAs are assumed to possess owner-like privileges on all database objects by default.

Policy Activation: Once the policy has been created, it must be authorized for activation by at least k − 1 administrators, after which the DBMS changes the state of the policy to ACTIVATED. The policy activation command has the following format: Authorize Response Policy [Policy ID] Create; Suppose that DBA3 issues such a command. After the command is issued, the DBMS performs the following operations in sequence:

5.b. Policy Suspension:

To alter/drop a policy or to make it nonoperational, the policy state must be changed to SUSPENDED. To change the policy state to SUSPENDED, an administrator issues the Suspend Response Policy [Policy ID] command. Suppose that DBA2 issues this command. The sequence of steps followed by the DBMS upon receiving this command is as follows: 1. It prompts DBA2 for its password. 2. It uses the password received in step 1 to decrypt the encrypted secret share of DBA2 corresponding to k = 3 to get s2. 3. It creates a signature share using the secret share s2, in a manner similar to the signature share generation performed during policy activation.

5.c. Policy Drop:

A response policy is dropped by executing the Drop Response Policy [Policy ID] command. The sequence of steps performed to drop a policy is similar to the steps performed for policy suspension; the difference in this case is that the hash is taken on the DROPPED policy state. Also, the final signature W′_final, obtained after the policy suspension phase, is left unchanged while the policy state is DROP-IN-PROGRESS. After the policy drop has been authorized by k − 1 administrators, a new final signature, W″_final, is obtained and stored in the sys response policy catalog. The DROPPED state is the final state in the life cycle of a policy, that is, a policy cannot be reactivated after it has been dropped.
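The life cycle described in sections 5.a-5.c (CREATED to ACTIVATED to SUSPENDED to DROPPED, with each transition requiring k distinct DBAs: the issuer plus k − 1 authorizers) can be sketched as a small state machine. JTAM's cryptographic signature shares are abstracted away here; we only count distinct approvals, and all names are our own:

```java
/** Sketch of the response-policy life cycle: a transition commits only
 *  once k distinct DBAs (the issuer plus k-1 authorizers) approve it,
 *  and DROPPED is final. */
class ResponsePolicy {
    enum State { CREATED, ACTIVATED, SUSPENDED, DROPPED }

    private final int k;                       // approvals needed per transition
    private State state = State.CREATED;
    private State pending = null;              // transition awaiting approvals
    private final java.util.Set<String> approvals = new java.util.HashSet<>();

    ResponsePolicy(int k, String ownerDba) {
        this.k = k;
        request(State.ACTIVATED, ownerDba);    // creation implies the owner's approval
    }

    /** An administrator proposes the next state, implicitly approving it. */
    void request(State next, String dba) {
        if (state == State.DROPPED)
            throw new IllegalStateException("DROPPED is the final state");
        pending = next;
        approvals.clear();
        authorize(dba);
    }

    /** Count one DBA's approval; commit once k distinct DBAs have approved. */
    void authorize(String dba) {
        if (pending == null) return;           // nothing awaiting authorization
        approvals.add(dba);                    // a set: re-approval doesn't double count
        if (approvals.size() >= k) {
            state = pending;
            pending = null;
            approvals.clear();
        }
    }

    State state() { return state; }
}
```

Because approvals are kept in a set, a single DBA authorizing the same transition twice does not count twice, which reflects the separation-of-duty principle behind JTAM.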

6. POLICY MATCHING:

The policy matching algorithm is invoked when the response engine receives an anomaly detection assessment. For every attribute A in the anomaly assessment, the algorithm evaluates the predicates defined on A. After evaluating a predicate, the algorithm visits all the policy nodes connected to the evaluated predicate node. If the predicate evaluates to true, the algorithm increments the predicate-match-count of the connected policy nodes by one. A policy is matched when its predicate-match-count becomes equal to the number of predicates in the policy condition. On the other hand, if the predicate evaluates to false, the algorithm marks the connected policy nodes as invalidated. For every invalidated policy, the algorithm decrements the policy-match-count of the connected predicates; the rationale is that a predicate need not be evaluated if its policy-match-count reaches zero.
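A simplified sketch of this matching step follows. The algorithm described above shares predicate nodes across policies and maintains the match counts incrementally; for brevity, this version evaluates each policy's predicates directly (equality predicates only) and short-circuits on the first false predicate, which yields the same set of matched policies. The names are ours:

```java
import java.util.*;

/** Sketch of policy matching: a policy matches when every one of its
 *  (attribute, value) predicates is satisfied by the anomaly
 *  assessment; a false predicate invalidates the policy immediately. */
class PolicyMatcher {
    // policyId -> list of predicates, each as {attribute, requiredValue}
    private final Map<String, List<String[]>> policies = new HashMap<>();

    void addPolicy(String id, String[]... predicates) {
        policies.put(id, Arrays.asList(predicates));
    }

    /** Return the ids of all policies matched by the anomaly attributes. */
    Set<String> match(Map<String, String> anomaly) {
        Set<String> matched = new HashSet<>();
        for (Map.Entry<String, List<String[]>> p : policies.entrySet()) {
            int matchCount = 0;
            boolean invalidated = false;
            for (String[] pred : p.getValue()) {
                if (pred[1].equals(anomaly.get(pred[0]))) matchCount++;
                else { invalidated = true; break; }  // skip remaining predicates
            }
            if (!invalidated && matchCount == p.getValue().size())
                matched.add(p.getKey());
        }
        return matched;
    }
}
```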

EXISTING SYSTEM:

The existing mechanism is a log-based mechanism for the detection of malicious transactions in a DBMS. In practice, malicious database transactions are related to security attacks carried out either externally or internally to the organization. External security attacks are intentional unauthorized attempts to access or destroy the organization's private data. These attacks are perpetrated by unauthorized users (hackers) who try to gain access to the database by exploring the system's vulnerabilities (e.g., incorrect configuration, hidden flaws in the implementation, etc.). On the other hand, internal security attacks are intentional malicious actions executed by authorized users. These users use their normal privileges to execute illicit actions for their personal benefit or to harm the organization by causing loss or corruption of critical data. Given the growing threat represented by security attacks on databases, and the fact that databases are where most of the relevant data of organizations is actually stored, a practical mechanism to detect the execution of malicious transactions in databases is of utmost importance. However, to the best of our knowledge, none of the existing commercial DBMSs provides such a mechanism. Manual supervision and audit procedures are normally the only tools the DBA can use to detect potential database intrusion. In a typical database environment, the profile of the transactions that each user is allowed to execute is usually known by the DBA, as the database transactions are programmed in the database application code. In other words, the transactions are not ad hoc sequences of SQL commands.

PROPOSED SYSTEM:

In many DBMSs the intrusion detection and response policies are not efficient because they mainly focus on the end users and intermediate persons. There is another class of user who should be monitored and controlled: malicious DBAs, who hold the highest privileges on a DBMS, such as creating, altering, and dropping database objects. In most sectors the privileges granted to DBAs are separate and high; there are some intrusion detection systems for DBMSs, but they fail in the case of a malicious DBA. We therefore propose a new intrusion response system in which the previously mentioned drawbacks are overcome: instead of giving separate privileges to each DBA, the privilege is shared among the DBAs. For example, if a single DBA wants to alter a database object, he should first get permission from the other DBAs within his group; this is a simple way to control the privileges of DBAs. We maintain a log table of the privileges assigned to DBAs. A DBA can request a privilege by creating a policy; that policy can then be viewed by the other DBAs, and if the required number of DBAs grant permission for the request, the policy is activated. Only after policy activation is the privilege granted; if the required authorizations are not obtained, the policy is not activated and remains in the created state. After a policy has been activated and used, when it is no longer needed it can be suspended, again with the DBAs' authorization, and it can also be dropped by the DBAs.

DEVELOPMENT TOOLS

HARDWARE SPECIFICATION
PROCESSOR : Dual Core
HARD DISK : 256 GB
RAM : 1 GB
MONITOR : 15" Color Monitor
KEYBOARD : 104-key Standard Keyboard
MOUSE : Standard 3-Button Mouse

SOFTWARE SPECIFICATION
OPERATING SYSTEM : Windows XP
FRONT END : JAVA
BACK END : MYSQL

Details of the Software:

SOFTWARE DESCRIPTION

Java:

Java is a computer programming language developed by Sun Microsystems. Java has a good chance to be the first really successful new computer language in several decades. Advanced programmers like it because it has a clean, well-designed definition. Businesses like it because it dominates an important new application area, Web programming. Java has several important features: A Java program runs exactly the same way on all computers. Most other languages allow small differences in interpretation of the standards. It is not just the source that is portable. A Java program is a stream of bytes that can be run on any machine. An interpreter program is built into Web browsers, though it can run separately. Java programs can be distributed through the Web to any client computer. Java applets are safe. The interpreter program does not allow Java code loaded from the network to access local disk files, other machines on the local network, or local databases. The code can display information on the screen and communicate back to the server from which it was loaded. A group at Sun invented Java when they decided that existing computer languages could not solve the problem of distributing applications over the network. C++ inherited many unsafe practices from the old C language. Basic was too static and constrained to support the development of large applications and libraries. Today, every major vendor supports Java. Netscape incorporates Java support in every version of its Browser and Server products. Oracle supports Java on the client, the Web server, and the database server. IBM looks to Java to solve the problems caused by its heterogeneous product line.

The Java programming language and environment is designed to solve a number of problems in modern programming practice. It has many interesting features that make it an ideal language for software development. It is a high-level language that can be characterized by all of the following buzzwords. Sun describes Java as simple, object-oriented, distributed, robust, secure, architecture neutral, portable, interpreted, high performance, multithreaded, and dynamic.

Java is simple. What is meant by simple is being small and familiar. Sun designed Java as closely to C++ as possible in order to make the system more comprehensible, but removed many rarely used, poorly understood, confusing features of C++. These primarily include operator overloading, multiple inheritance, and extensive automatic coercions. The most important simplification is that Java does not use pointers and implements automatic garbage collection, so that we don't need to worry about dangling pointers, invalid pointer references, memory leaks, and memory management.

Java is object-oriented. This means that the programmer can focus on the data in his application and the interface to it. In Java, everything must be done via method invocation on a Java object. We must view our whole application as an object, an object of a particular class.

Java is distributed. Java is designed to support applications on networks. Java supports various levels of network connectivity through classes in java.net. For instance, the URL class provides a very simple interface to networking. If we want more control over the downloading of data than is available through the simpler URL methods, we would use a URLConnection object, which is returned by the URL.openConnection() method. Also, we can do our own networking with the Socket and ServerSocket classes.

Java is robust. Java is designed for writing highly reliable or robust software.
Java puts a lot of emphasis on early checking for possible problems, later dynamic (runtime) checking, and eliminating situations that are error prone. The removal of pointers eliminates the possibility of overwriting memory and corrupting data.

Java is secure. Java is intended to be used in networked environments. Toward that end, Java implements several security mechanisms to protect us against malicious code that might try to invade our file system. Java provides a firewall between a networked application and our computer.

Java is architecture-neutral. Java programs are compiled to an architecture-neutral byte-code format. The primary advantage of this approach is that it allows a Java application to run on any system that implements the Java Virtual Machine. This is useful not only for networks but also for single-system software distribution. With the multiple flavors of Windows 95 and Windows NT on the PC, and the new PowerPC Macintosh, it is becoming increasingly difficult to produce software that runs on all platforms.

Java is portable. The portability actually comes from architecture neutrality. But Java goes even further by explicitly specifying the size of each of the primitive data types to eliminate implementation dependence. The Java system itself is quite portable. The Java compiler is written in Java, while the Java run-time system is written in ANSI C with a clean portability boundary.

Java is interpreted. The Java compiler generates byte-codes. The Java interpreter executes the translated byte-codes directly on any system that implements the Java Virtual Machine. Java's linking phase is only a process of loading classes into the environment.

Java is high-performance. Compared to those high-level, fully interpreted scripting languages, Java is high-performance. If just-in-time compilers are used, Sun claims that the performance of byte-codes converted to machine code is nearly as good as native C or C++.
Java, however, was designed to perform well on very low-power CPUs.

Java is multithreaded. Java provides support for multiple threads of execution that can handle different tasks, with a Thread class in the java.lang package. The Thread class supports methods to start a thread, run a thread, stop a thread, and check on the status of a thread. This makes programming in Java with threads much easier than programming in the conventional single-threaded C and C++ style.

Java is dynamic. The Java language was designed to adapt to an evolving environment. It is a more dynamic language than C or C++. Java loads in classes as they are needed, even from across a network. This makes upgrading software much easier and more effective. With the compiler, we first translate a program into an intermediate language called Java byte-codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter parses and runs each Java byte-code instruction on the computer. Compilation happens just once; interpretation occurs each time the program is executed. Java byte-codes can be thought of as the machine code instructions for the Java Virtual Machine (JVM). Every Java interpreter, whether it's a development tool or a web browser that can run applets, is an implementation of the Java Virtual Machine.

The Java Platform

A platform is the hardware or software environment in which a program runs. Most platforms can be described as a combination of the operating system and hardware. The Java platform differs from most other platforms in that it is a software-only platform that runs on top of other hardware-based platforms. The Java platform has two components:

1. The Java Virtual Machine (JVM).
2. The Java Application Programming Interfaces (Java API).

The JVM has been explained above. It is the base for the Java platform and is ported onto various hardware-based platforms.

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI). The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages.

Native code is code that, after you compile it, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte-code compilers can bring performance close to that of native code without threatening portability.

Overview of Swing:

The Swing package is part of the Java Foundation Classes (JFC) in the Java platform. The JFC encompasses a group of features to help people build GUIs; Swing provides all the components, from buttons to split panes and tables. The Swing package was first available as an add-on to JDK 1.1. Prior to the introduction of the Swing package, the Abstract Window Toolkit (AWT) components provided all the UI components in the JDK 1.0 and 1.1 platforms. Although the Java 2 Platform still supports the AWT components, we strongly encourage using Swing components instead. You can identify Swing components because their names start with J. The AWT button class, for example, is named Button, whereas the Swing button class is named JButton. In addition, the AWT components are in the java.awt package, whereas the Swing components are in the javax.swing package. As a rule, programs should not use heavyweight AWT components alongside Swing components. Heavyweight components include all the ready-to-use AWT components, such as Menu and ScrollPane, and all components that inherit from the AWT Canvas and Panel classes. When Swing components (and all other lightweight components) overlap with heavyweight components, the heavyweight component is always painted on top.

Swing Features and Concepts:

This section introduces Swing's features and explains all the concepts we need in order to use Swing components effectively.

Swing Components and the Containment Hierarchy: Swing provides many standard GUI components such as buttons, lists, menus, and text areas, which we combine to create our program's GUI. It also includes containers such as windows and tool bars.

Layout Management: Containers use layout managers to determine the size and position of the components they contain. Borders affect the layout of Swing GUIs by making Swing components larger. We can also use invisible components to affect layout.

Event Handling: Event handling is how programs respond to external events, such as the user pressing a mouse button. Swing programs perform all their painting and event handling in the event-dispatching thread.

Painting: Painting means drawing the components on-screen. Although it is easy to customize a component's painting, most programs don't do anything more complicated than customizing a component's border.

Threads and Swing: If we do something to a visible component that might depend on or affect its state, then we need to do it from the event-dispatching thread. This isn't an issue for many simple programs, which generally refer to components only in event-handling code. However, other programs need to use the invokeLater method to execute component-related calls in the event-dispatching thread.
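The threading rule above can be demonstrated with a small snippet: work is handed to the event-dispatching thread with SwingUtilities.invokeAndWait (or invokeLater) rather than run directly. The class name is our own illustration:

```java
import javax.swing.SwingUtilities;

/** Demonstrates Swing's threading rule: component-related work runs on
 *  the event-dispatching thread (EDT), not on the calling thread. */
class EdtDemo {
    static boolean ranOnEdt() throws Exception {
        final boolean[] onEdt = { false };
        // invokeAndWait blocks until the runnable has executed on the EDT
        SwingUtilities.invokeAndWait(
            () -> onEdt[0] = SwingUtilities.isEventDispatchThread());
        return onEdt[0];
    }
}
```

invokeLater would schedule the same runnable without blocking; invokeAndWait is used here only so the result can be observed immediately.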

More Swing Features and Concepts:

These are some of the major concepts we need to know to build Swing GUIs: the containment hierarchy, layout management, event handling, painting, and threads. Some of the other important Swing features are:

Features that JComponent provides
Icons
Actions
Pluggable look and feel support
Support for assistive technologies
Separate data and state models

Features that JComponent Provides: Except for the top-level containers, all components whose names begin with J inherit from the JComponent class. They get many features from JComponent, such as the ability to have borders, tool tips, and a configurable look and feel. They also inherit many convenient methods.

Icons: Many Swing components, notably buttons and labels, can display images. We specify these images as Icon objects.

Actions: With Action objects, the Swing API provides special support for sharing data and state between two or more components that can generate action events. For example, if we have a button and a menu item that perform the same function, consider using an Action object to coordinate the text, icon, and enabled state for the two components.

Pluggable Look and Feel: This feature enables the user to switch the look and feel of Swing components without restarting the application. The Swing library supports a cross-platform look and feel, also called the Java look and feel, that remains the same across all platforms wherever the program runs. The native look and feel is native to whatever particular system the program happens to be running on, including Windows and Motif. The Swing library provides an API that gives great flexibility in determining the look and feel of applications. It also enables us to create our own look and feel.

Support for Assistive Technologies: Assistive technologies such as screen readers can use the Accessibility API to get information from Swing components. Because support for the Accessibility API is built into the Swing components, our Swing program will probably work just fine with assistive technologies, even if we do nothing special.

Separate Data and State Models: Most noncontainer Swing components have models. A button (JButton), for example, has a model (ButtonModel) that stores the button's state: what its keyboard mnemonic is, whether it is enabled, selected, or pressed, and so on. Some components have multiple models.
A list (JList), for example, uses a ListModel to hold the list's contents, and a ListSelectionModel to track the list's current selection.

Some Swing Components of Java:

JScrollPane: Provides a scrollable view of a lightweight component. It is used to display a child component with a built-in scrolling facility. The scrolling of a child component, when its size is larger than the available viewport, is performed in horizontal or vertical directions by using the scrollbars associated with the scroll pane. Scroll panes are very easy to implement because the adjustment events fired by the scrollbars are already taken care of by the scroll pane object. A Swing scroll pane is an object of type JScrollPane, which extends the class JComponent.

JButton: Swing buttons are represented by objects of class JButton, and each button is basically an implementation of a push-type button. Unlike AWT buttons, Swing buttons can be displayed with text labels as well as icons. We can also set a different icon for different states of a button by using the supporting methods.
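A short sketch using the components described in this section: a JButton with a text label and tool tip, and a text field wrapped in a JScrollPane. No top-level JFrame is created here, so only lightweight components are involved; the class and method names are our own:

```java
import javax.swing.*;

/** Builds a couple of the Swing components discussed above, without a
 *  top-level frame (only lightweight components are created here). */
class ComponentDemo {
    static JScrollPane buildEditor() {
        JTextField field = new JTextField("initial text");
        return new JScrollPane(field);   // scrollbar handling comes with the pane
    }

    static JButton buildButton() {
        JButton b = new JButton("Submit");
        b.setToolTipText("Send the request");  // a feature inherited from JComponent
        return b;
    }
}
```

In a full application these components would be added to a JFrame's content pane, which is described next.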

JFrame: An extended version of java.awt.Frame that adds support for the JFC/Swing component architecture. The JFrame class is slightly incompatible with Frame. Like all other JFC/Swing top-level containers, a JFrame contains a JRootPane as its only child. The content pane provided by the root pane should, as a rule, contain all the non-menu components displayed by the JFrame.

JTextField: The Swing text field can be used to display or edit a single line of plain text. The component appears similar to the AWT text field; however, the Swing text field is a lightweight component. A text-field object is created by using the class JTextField, which is a direct subclass of JTextComponent. Thus, the functionality of JTextField spreads into JTextComponent and JComponent. JTextField objects can fire action and mouse events that can be captured by a registered listener.

MYSQL SERVER:

MySQL Server is a powerful database management system, and the user can create applications that require little or no programming. It supports GUI tools and can be used to develop richer and more sophisticated applications. There are quite a few reasons to use it, the first being that MySQL is a feature-rich program that can handle any database-related task you have. You can create places to store your data, build tools that make it easy to read and modify your database contents, and ask questions of your data. MySQL is a relational database, a database that stores information about related objects. In MySQL, a database means a collection of tables that hold data. It collectively stores all the other related objects, such as queries, forms, and reports, that are used to implement functions effectively. The MySQL database can act as a back-end database with JAVA as the front end; MySQL supports the user with its powerful database management functions. A beginner can create his/her own database very simply with a few mouse clicks.
Another good reason to use MySQL as a back-end tool is that it is a component of an overwhelmingly popular software suite. MySQL is a relational database, which means that you can define relationships among the data it contains. Relational databases are superior to flat-file databases because each discrete piece of information can be stored once and related to the rest.
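A minimal sketch of using MySQL as the back end from a Java front end via JDBC follows. The database name, table, credentials and connection URL here are assumptions for illustration, not taken from the project:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MySqlBackendDemo {
    // Build a parameterized lookup query for a given table.
    public static String lookupByIsbn(String table) {
        return "SELECT name, auth, publ FROM " + table + " WHERE isbn = ?";
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings; adjust for a real server.
        String url = "jdbc:mysql://localhost:3306/library";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(lookupByIsbn("novel"))) {
            ps.setLong(1, 9780143039983L); // placeholder ISBN
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}
```

Using a PreparedStatement with a `?` placeholder, rather than concatenating user input into the SQL string, is the idiomatic JDBC way to send parameterized queries.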

Primary key and other indexed fields: Access uses key fields and indexing to help speed up many database operations. We can tell Access which fields should be key fields, or Access can assign them automatically.

Controls and objects: Controls are Access objects used to display, print and work with our data. They can be things like field labels that we drag around when designing reports, or they can be pictures, titles for reports, or boxes containing the results of calculations.

Queries and dynasets: Queries are requests for information. When Access responds with its list of data, that response constitutes a dynaset: a dynamic set of data meeting our query criteria. Because of the way Access is designed, dynasets are updated even after we have made our query.

Forms: Forms are on-screen arrangements that make it easy to enter and read data. We can also print the forms if we want to. We can design forms ourselves, or let the Access AutoForm feature do it.

Reports: Reports are paper copies of dynasets. We can also print reports to disk, if we like. Access helps us to create the reports; there are even wizards for complex printouts.

Properties: Properties are the specifications we assign to parts of our database design. We can define properties for fields, forms, controls and most other Access objects.

SYSTEM DESIGN

System design is the process of planning a new system to complement or altogether replace the old system. The design phase is the first step in moving from the problem domain to the solution domain. The design of the system is the critical aspect that affects the quality of the software. System design is also called top-level design. The design phase translates the logical aspects of the system into physical aspects of the system.

INPUT DESIGN

Input design is one of the most important phases of the system design.
Input design is the process by which the inputs received by the system are planned and designed, so as to get the necessary information from the user while eliminating information that is not required. The aim of input design is to ensure the maximum possible level of accuracy and to ensure that the input is accessible and understood by the user. Input design is part of the overall system design and requires very careful attention. If the data going into the system is incorrect, then the processing and output will magnify the errors.

The objectives considered during input design are:
- Nature of input processing.
- Flexibility and thoroughness of validation rules.
- Handling of properties within the input documents.
- Screen design to ensure accuracy and efficiency of the input relationship with files.
- Careful design of the input, which also involves attention to error handling, controls, batching and validation procedures.

Input design features can ensure the reliability of the system and produce results from accurate data, or they can result in the production of erroneous information.

OUTPUT DESIGN

The output design is the most important and direct source of information to the user. Output from the computer system is required to communicate the results of processing to the user and to provide a permanent copy of these results for later consultation. While designing the output, the type of output, its format, its frequency, etc., have been taken into consideration. The output is designed to simply indicate whether the processing was successful or not.

SYSTEM IMPLEMENTATION

Implementation is the process of converting a new or revised system design into an operational one. It is the final and most important phase, and it involves user training, system testing and the successful running of the developed system. The users test the developed system, and changes are made according to their needs. The testing phase involves testing the developed system using various kinds of data: an elaborate set of test data is prepared and the system is tested using it. The corrections found are also noted for future use. The users are trained to operate the developed system, and both hardware and software securities are put in place so that the developed system runs successfully in the future. Education of the users should really have taken place much earlier in the project, when they were being involved in the investigation and design work. Training has to be given to the users regarding the new system; once the users have been trained, the system can be tested.

REQUIREMENTS GATHERING

The first phase of a software project is to gather requirements. Gathering software requirements begins as a creative brainstorming process in which the goal is to develop an idea for a new product that no other software vendor has thought of.
New software product ideas normally materialize as a result of analyzing market data and interviewing customers about their product needs. The main function of the requirements gathering phase is to take an abstract idea that fills a particular need or solves a particular problem and create a real-world project with a particular set of objectives, a budget, a timeline and a team. Some of the highlights of the requirements gathering phase include:
- Collecting project ideas
- Gathering customer requirements and proposed solutions
- Justifying the project
- Submitting the request for proposal
- Getting the team in place
- Preparing the requirements document

Collecting project ideas: Coming up with project ideas can prove to be an expansive exercise.

Gathering customer requirements and proposed solutions: Focus on solving the customers' particular problems and fulfilling the customers' needs.

Justifying the project: An important part of the requirements gathering phase is to decide whether a particular project is more or less likely to make the company successful.

Submitting the request for proposal: The request for proposal is an important document that senior management uses to decide which projects are likely to show a profit and are therefore worthy of being allocated a budget for further testing and study.

Getting the team in place: One of the requirements of every project is a team to work on the project. The team includes the project sponsor, a project manager, analysts, developers, database administrators, technical writers, quality assurance testers, trainers, release managers and possibly others.

Preparing the requirements document: The Requirements Document (RD) becomes the main input to the analysis phase. It includes a summary of the requirements that address the objective of the project, the major features of the product, a documentation plan, a support plan, and licensing issues.

Feasibility Study

At this stage the client's business needs are analyzed, information about project participants is collected, and the requirements for the system are gathered and analyzed. The client's expectations for system implementation are studied and a proposed solution is offered. During the Feasibility Study stage, the project's goals, parameters and constraints are agreed upon with the client, including:
- Project budget and rules for its adjustment;
- Project time frame;
- Conceptual problem solution.

The following tasks are performed at this stage:
- The project feasibility is estimated and the project scope is defined;
- Risks and benefits are identified;
- The project structure is elaborated;
- The project is roughly planned;
- The next project stage is planned precisely;
- The cost of the next phase is evaluated precisely and the cost of the other phases approximately;
- Functionality development priorities are defined;
- System creation risks are estimated.

At the end of this phase the following documents are available:
- Feasibility Report: description of the proposed solution and a list of high-level functional requirements;
- Project Structure: description of the project organization;
- Project Plan: project schedule;
- Risks List: list of potential project risks and possibilities for their elimination.

The Feasibility Report must be agreed upon and signed by the client.
Depending on the customer's technology needs, the document's name may vary, e.g., Solution Definition. Signing this document means that the client and the project team have a common understanding of the project goals and tasks and have reached agreement on the process for project implementation. The average duration of this phase is about 10% of the total project duration.

Economic Feasibility

The bottom line in many projects is economic feasibility. During the early phases of the project, economic feasibility analysis amounts to little more than judging whether the possible benefits of solving the problem are worthwhile. As soon as specific requirements and solutions have been identified, the analyst can weigh the costs and benefits of each alternative. This is called a cost-benefit analysis.

PHASES OF SYSTEM DEVELOPMENT LIFE CYCLE

Let us now describe the different phases and related activities of the system development life cycle in detail.

System Study

System study is the first stage of the system development life cycle. It gives a clear picture of what the physical system actually is. In practice, the system study is done in two phases. In the first phase, a preliminary survey of the system is done, which helps in identifying the scope of the system. The second phase of the system study is a more detailed and in-depth study, in which the users' requirements and the limitations and problems of the present system are identified. After completing the system study, a system proposal is prepared by the System Analyst (who studies the system) and placed before the user. The proposal contains the findings about the present system and recommendations to overcome its limitations and problems in the light of the users' requirements.
To describe the system study phase more analytically, we would say that it passes through the following steps:
- problem identification and project initiation
- background analysis
- inferences or findings

Feasibility Study

On the basis of the results of the initial study, the feasibility study takes place. The feasibility study is basically a test of the proposed system in the light of its workability, its meeting of users' requirements, effective use of resources and, of course, cost effectiveness. The main goal of the feasibility study is not to solve the problem but to determine its scope. In the process of the feasibility study, the costs and benefits are estimated with greater accuracy.

ANALYSIS

The main function of the analysis phase is to look carefully at the requested features with an eye toward the issues each may create in the actual coding. This phase is the time during which the reasonably deliverable ideas of each team member can be decided upon. The analysis phase depends on the deliverables produced during the requirements gathering phase, particularly the Requirements Document. After analyzing the requirements document, the project team members create several deliverables, which act as input to other processes. They are:

Project Scope Document (PSD)

This document is the most critical deliverable of the analysis phase. The PSD prioritizes and limits the high-level deliverables from the Requirements Document to a manageable list for the project phase. It also includes time estimates for completing each requested feature. The project team and the product marketing manager typically undergo extensive negotiations before all stakeholders approve the PSD.

Work breakdown structure

Successful project management includes the ability to take complex tasks and break them down into manageable chunks which can be achieved easily. The work breakdown structure is a first-pass effort at defining the tasks; it is necessary for completing the project and for assigning resources to those tasks.

Baseline project schedule

A wide range of software tools can be used to schedule and track a project; one such tool is Microsoft Project. Using the documents developed up to this point, a baseline schedule is developed in Microsoft Project that serves as a measuring stick for the duration of the project.

DESIGN

The design phase is the one where the technical problems are really solved and the project becomes a reality. In this phase the relationships among the code, database, user interface and classes begin to take shape in the minds of the project team. During the design phase, the project team is responsible for seven deliverables:
- Data model design
- User interface design
- Functional specifications
- Documentation plan
- Software Quality Assurance (SQA) test plan
- Test cases
- Detailed design specifications

Data model or schema: The primary objective in designing the data model or schema is to meet the high-level software specifications that the requirements document outlines. Usually the database administrator (DBA) designs the data model for the software project.

User interface: The user interface is the first part of the software application that is visible to the user. The UI provides the user with the capability of navigating through the software application. The UI is often known in the software industry as the look and feel of the software application. The design of the UI must be such that the software application provides an interface that is as user-friendly and as cosmetically attractive as possible.

Prototype: After the data model and UI design are ready, the project team can build the prototype for the project. Sales and marketing teams generally cannot wait to get the prototype in hand to show it off to sales prospects and at industry trade shows.

Functional specification: This provides the definitive overview of what is included in the project. This deliverable incorporates many of the documents prepared to this point, gathering them into one place for easy reference.

Documentation plan: The technical publications manager or the technical writer on the project writes the documentation plan. The plan provides an overview of the software product documentation, the recommended processes and procedures, and a list of document deliverables for the project.

Detailed design specifications: These lay out the blueprint of how to develop the project. The design specifications include the documents created during the design phase and, in some cases, provide step-by-step information on how to implement the specifications.

Software quality assurance test plan and test cases: The deliverables are the software quality assurance test plan, test cases, test data, entrance test plan and test automation requirements.

SOFTWARE LIFE CYCLE MODEL

Waterfall life-cycle model

The waterfall life-cycle model describes a sequence of activities that begins with concept exploration and concludes with maintenance and eventual replacement.

The phases of the waterfall model are:
- Concept Exploration
- Requirements
- Design
- Implementation
- Test
- Installation and Checkout
- Operation and Maintenance
- Replacement

The waterfall model caters to forward engineering of software products. This means starting with a high-level conceptual model for a system. After a description of the conceptual model has been worked out, the software process continues with the design, implementation and testing of a physical model of the system. A principal advantage of the waterfall model is its simplicity and its provision for baseline management, which identifies a fixed set of documents produced as a result of each phase in the life cycle.

DESIGN OBJECTIVE

The design process translates the requirements into a representation of the software that can be assessed for quality before the coding begins. Like the requirements, the design is also documented and becomes part of the software configuration. The objectives to be considered are:
- Controlled redundancy
- Data independence
- Accuracy
- Integrity
- Security
- Performance

DESIGN PLAN

The design must be translated into a machine-readable form. The testing process then focuses on the logical internals of the software, assuring that all statements have been tested, and on the functional externals, that is, conducting tests to uncover errors and ensure that defined input will produce actual results that agree with the required results.

Data Flow Diagram: the Admin uploads records to the Database; the User searches for records and is returned similar records.

DATABASE DESIGN

Table name: novel
Primary key: isbn

Field Name    Data Type
name          nvarchar(50)
auth          varchar(50)
edi           int(4)
isbn          bigint(8)
publ          varchar(50)

Table name: fiction
Primary key: isbn

Field Name    Data Type
name          nvarchar(50)
auth          varchar(50)
edi           int(4)
isbn          bigint(8)
publ          varchar(50)

Table name: story
Primary key: isbn

Field Name    Data Type
name          nvarchar(50)
auth          varchar(50)
edi           int(4)
isbn          bigint(8)
publ          varchar(50)

Table name: movie

Field Name    Data Type
name          varchar(50)
dir           varchar(50)
lang          varchar(50)

Table name: hotel

Field Name    Data Type
name          varchar(50)
place         varchar(50)
owner         varchar(50)
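A hedged sketch of how the novel table above could be created from Java via JDBC follows. The connection URL, database name and credentials are assumptions for illustration; the DDL mirrors the schema above, with BIGINT used as the standard MySQL spelling of BigInt(8):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateNovelTable {
    // DDL derived from the novel schema above.
    public static final String CREATE_NOVEL =
            "CREATE TABLE novel ("
          + "  name NVARCHAR(50),"
          + "  auth VARCHAR(50),"
          + "  edi  INT(4),"
          + "  isbn BIGINT NOT NULL,"
          + "  publ VARCHAR(50),"
          + "  PRIMARY KEY (isbn)"
          + ")";

    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings; adjust for a real server.
        String url = "jdbc:mysql://localhost:3306/library";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {
            st.executeUpdate(CREATE_NOVEL);
        }
    }
}
```

The fiction and story tables use the same field list, so the same DDL with a different table name would create them.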

DEVELOPMENT

After designing the data model, user interface, prototype, documentation plan, functional specifications, detailed design specifications and software quality assurance test plan, the next logical step is to start coding the actual software, writing the documentation and developing the SQA test cases, entrance criteria and automation. Software coding includes building the data model and programming the user interface and application. If the analysis and design efforts are correct, developing the software will not be the hardest task. A product release typically passes through several versions:

Pre-alpha: Pre-alpha basically means that individual modules are ready, but have not yet been combined into a functional unit.

Alpha: Alpha is the first functional version of the software. It establishes the fundamental structure, since it covers about 60 percent of the functionality.

Pre-beta: Normally the pre-beta version is used for in-house testing.

Beta: The beta version of a software product is its first release to be viewed externally.

GA: The GA, or Generally Available, release is software ready to install for use. This refers to the finished version of the software.

TESTING

Testing is not isolated to only one phase of the project but should be exercised in all phases of the project. After developing each unit of the software product, the developers go through an extensive testing process. After the development of the software modules, the developers perform thorough unit testing of each software component, and they also perform integration testing of all the combined modules.

INTEGRATION TESTING

When the individual components are working correctly and meeting the specified objectives, they are combined into a working system. This integration is planned and coordinated so that when a failure occurs, there is some idea of what caused it. In addition, the order in which components are tested affects the choice of test cases and tools. The test strategy explains why and how the components are combined to test the working system; it affects not only the integration timing and coding order, but also the cost and thoroughness of the testing.

BOTTOM-UP INTEGRATION

One popular approach for merging components into a larger system is bottom-up testing. When this method is used, each component at the lowest level of the system hierarchy is tested individually first. Then the next components to be tested are those that call the previously tested ones. This approach is followed repeatedly until all components are included in the testing. The bottom-up method is useful when many of the low-level components are general-purpose utility routines that are invoked often by others, when the design is object-oriented, or when the system is integrated using a large number of stand-alone reused components.

TOP-DOWN INTEGRATION

Many developers prefer to use a top-down approach, which in many ways is the reverse of bottom-up. The top level, usually one controlling component, is tested by itself. Then all components called by the tested component are combined and tested as a larger unit. This approach is reapplied until all components are incorporated.

Fig 2. Bottom-up Testing

A component being tested may call another that is not yet tested, so we write a stub, a special-purpose program that simulates the activity of the missing component. The stub answers the calling sequence and passes back output data that lets the testing process continue. For example, if a component is called to calculate the next available address but that component is not yet tested, then a stub is created for it that may pass back a fixed address, which allows testing to proceed. As with drivers, stubs need not be complex or logically complete.
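As an illustrative sketch of the stub idea above (the interface and class names are invented for this example, not taken from the project), the untested address-calculating component can be replaced by a stub that passes back a fixed address:

```java
// Interface of the component that is not yet tested.
interface AddressAllocator {
    int nextAvailableAddress();
}

// Stub: passes back a fixed address so that testing of the calling
// component can proceed; it need not be logically complete.
class AddressAllocatorStub implements AddressAllocator {
    @Override
    public int nextAvailableAddress() {
        return 0x1000; // fixed, known value
    }
}

// The component under test, which depends on the allocator.
class RecordStore {
    private final AddressAllocator allocator;

    RecordStore(AddressAllocator allocator) {
        this.allocator = allocator;
    }

    int store(String record) {
        // Real storage logic is omitted; only the address matters here.
        return allocator.nextAvailableAddress();
    }
}

public class StubDemo {
    public static void main(String[] args) {
        RecordStore store = new RecordStore(new AddressAllocatorStub());
        System.out.println(store.store("novel #1")); // prints 4096
    }
}
```

Once the real AddressAllocator is tested, it replaces the stub without any change to RecordStore.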

Fig 3. Top-down Integration

BIG-BANG INTEGRATION

When all components have been tested in isolation, it is tempting to mix them together as the final system and see if it works the first time. Many programmers use the big-bang approach for small systems, but it is not practical for large ones. In fact, since big-bang testing has several disadvantages, it is not recommended for any system. First, it requires both stubs and drivers to test the independent components. Second, because all components are merged at once, it is difficult to find the cause of any failure. Finally, interface faults cannot easily be distinguished from other types of faults.


Fig 4. Big-Bang Integration

SANDWICH INTEGRATION

The combination of top-down and bottom-up forms the sandwich testing approach. The system is viewed as three layers, just like a sandwich: the target layer in the middle, the levels above the target, and the levels below the target. A top-down approach is used in the top layer and a bottom-up approach in the lower layer. Testing converges on the target layer, which is chosen on the basis of system characteristics and the structure of the component hierarchy. For example, if the bottom layer contains many general-purpose utility programs, the target layer may be the one above it, in which most of the components using the utilities are present. This approach allows bottom-up testing to verify the utilities' correctness at the beginning of testing. Stubs for the utilities then need not be written, since the actual utilities are available for use.


Fig 5. Sandwich Testing

BLACK BOX TESTING

Black box testing involves testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester only knows the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge about the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this testing, test groups are often used. Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written. The opposite of this is glass box testing, where test data are derived from direct examination of the code to be tested. For glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.

ADVANTAGES OF BLACK BOX TESTING
- It is more effective on larger units of code than glass box testing.
- The tester needs no knowledge of the implementation, including specific programming languages.
- The tester and the programmer are independent of one another.
- The tests are done from a user's point of view.
- It helps to expose any ambiguities or inconsistencies in the specifications.

DISADVANTAGES OF BLACK BOX TESTING
- Only a small number of possible inputs can actually be tested.
- The test cases are hard to design without clear and concise specifications.
- There may be unnecessary repetition of test inputs if the tester is not informed of test cases that the programmer has already tried.
- It may leave many program paths untested.
- It cannot be directed toward specific segments of code which may be very complex (and therefore more error-prone).
- Most testing-related research has been directed toward glass box testing.

TESTING STRATEGIES AND TECHNIQUES
- Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guesswork by the tester.
- Data outside the specified input range should be tested to check the robustness of the program.
- Boundary cases should be tested (top and bottom of the specified range) to make sure that the highest and lowest allowable inputs produce proper output.
- The number zero should be tested when numerical data is given as input.
- Stress testing (trying to overload the program with inputs to see where it reaches its maximum capacity) should be performed, especially with real-time systems.
- Crash testing should be performed to identify the scenarios that would bring the system down.
- Test monitoring tools should be used whenever possible to track which tests have already been performed; the outputs of these tests are used to avoid repetition and also to aid in software maintenance.
- Other functional testing techniques include transaction testing, syntax testing, domain testing, logic testing and state testing.
- Finite state machine models can be used as a guide to design the functional tests.

WHITE BOX TESTING

White box testing uses an internal perspective of the system to design test cases based on internal structure. It is also known as glass box, structural, clear box and open box testing. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise all paths and to determine the appropriate outputs. In electrical hardware testing, every node in a circuit may be probed and measured, e.g., in-circuit testing (ICT). Since the tests are based on the actual implementation, when the implementation changes the tests will probably also change. For instance, ICT needs an update if a component value changes, and needs a modified or new fixture if the circuit changes.
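As a hedged sketch of the boundary and zero-value guidelines above (the function under test is invented for illustration), black box test cases can be chosen purely from the specified input range, without looking at the implementation:

```java
public class BoundaryTestDemo {
    // Hypothetical function under test: clamps an edition number
    // into the specification's allowed range of 1..99.
    public static int clampEdition(int edi) {
        if (edi < 1) return 1;
        if (edi > 99) return 99;
        return edi;
    }

    public static void main(String[] args) {
        // Boundary cases: top and bottom of the specified range.
        System.out.println(clampEdition(1));    // lowest legal input
        System.out.println(clampEdition(99));   // highest legal input
        // Data outside the range: robustness checks.
        System.out.println(clampEdition(100));  // above range
        // The number zero is tested explicitly for numeric input.
        System.out.println(clampEdition(0));    // zero, below range
    }
}
```

The tester needs only the specification "editions run from 1 to 99" to pick these inputs, which is exactly the black box premise.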
This adds financial resistance to the change process, and thus buggy products may stay buggy. Automated Optical Inspection (AOI) offers similar component-level correctness checking without the cost of ICT fixtures; however, changes still require test updates. While white box testing is applicable at the unit, integration and system levels of the software testing process, it is typically applied to the unit. While it normally tests paths within a unit, it can also test paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover an overwhelming number of test cases, it might not detect unimplemented parts of the specification or missing requirements; but it does ensure that all paths through the test object are executed.

Typical white box test design techniques include:
- Control flow testing
- Data flow testing

WHITE BOX TESTING STRATEGY

The white box testing strategy deals with the internal logic and structure of the code. Tests written based on the white box testing strategy incorporate coverage of the code written: branches, paths, statements, internal logic of the code, etc. In order to implement white box testing, the tester has to deal with the code and hence should possess knowledge of coding and logic, i.e., the internal working of the code. White box testing also requires the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning.

ADVANTAGES OF WHITE BOX TESTING
i) As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
ii) It helps in optimizing the code.
iii) It helps in removing the extra lines of code, which can bring in hidden defects.
DISADVANTAGES OF WHITE BOX TESTING
i) As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
ii) It is nearly impossible to look into every bit of code to find hidden errors, which may create problems resulting in failure of the application.

TYPES OF TESTING UNDER THE WHITE/GLASS BOX TESTING STRATEGY

UNIT TESTING

The developer carries out unit testing in order to check whether a particular module or unit of code is working well. Unit testing comes at the very basic level, as it is carried out as and when a unit of the code is developed or a particular functionality is built.

STATIC AND DYNAMIC ANALYSIS

Static analysis involves going through the code in order to find any possible defects in the code. Dynamic analysis involves executing the code and analyzing the output.

STATEMENT COVERAGE

In this type of testing, the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effects.

BRANCH COVERAGE

No software application can be written in a continuous mode of coding; at some point we need to branch the code in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application.

SECURITY TESTING

Security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, code damage, etc. This type of testing needs sophisticated testing techniques.

MUTATION TESTING

This is a kind of testing in which the application is tested against code that was modified after fixing a particular bug/defect. It also helps in finding out which code and which strategy of coding can help in developing the functionality effectively.

Besides all the testing types given above, there are some more types which fall under both black box and white box testing strategies, such as functional testing (which deals with the code in order to check its functional performance), incremental integration testing (which deals with the testing of newly added code in the application), and performance and load testing (which helps in finding out how the particular code manages resources, gives performance, etc.).
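A hedged sketch of statement and branch coverage follows (the method and inputs are invented for illustration): to cover both branches of the validation below, at least one test input must take each path, and together the two inputs also execute every statement at least once:

```java
public class BranchCoverageDemo {
    // Hypothetical validation with one branch point: an ISBN
    // is accepted only if it is a positive 13-digit number.
    public static boolean isValidIsbn(long isbn) {
        if (isbn >= 1_000_000_000_000L && isbn < 10_000_000_000_000L) {
            return true;   // branch 1: value is in range
        }
        return false;      // branch 2: value is out of range
    }

    public static void main(String[] args) {
        // One input per branch achieves full branch coverage here.
        System.out.println(isValidIsbn(9780143039983L)); // true (13 digits)
        System.out.println(isValidIsbn(42L));            // false (too short)
    }
}
```

Statement coverage alone could be satisfied by inputs that never test the false branch's behavior in the caller; branch coverage forces both outcomes to be exercised.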