Contents

1. Introduction
2. The Vulnerabilities
  2.1. Vulnerability detection in web applications
    2.1.1. SQL Injection
    2.1.2. XSS
3. Web application vulnerability scanners
  3.1. Inner Working
  3.2. Advantages
  3.3. Limitations
4. The Security Tools
  4.1. Free/open source tools
    4.1.1. Grabber
    4.1.2. Vega
    4.1.3. Wapiti
    4.1.4. w3af
    4.1.5. Zed Attack Proxy
5. Benchmark Overview & Assessment Criteria
  5.1. TEST I. Feature Documentation
  5.2. TEST II. Accuracy Assessment
  5.3. TEST III. Attack Surface Coverage Assessment
6. Experiment and Scanner Usage Observations
  6.1. Test Setup
  6.2. Observations
    6.2.1. TEST I
    6.2.2. TEST II
    6.2.3. TEST III
7. TOOLS
  7.1. GRABBER
    7.1.1. Test Log
  7.2. OWASP-ZAP
    7.2.1. Test Log
  7.3. VEGA
    7.3.1. Test Log
  7.4. W3AF
    7.4.1. Test Log
  7.5. Wapiti
    7.5.1. Test Log
The Cross Site Scripting
8. Results and Recommendation
  8.1. Result Verification
  8.2. Making the Results Useful to Vendors
References
Appendix
  SQL injection toy example
  Cross-site scripting toy example
1. Introduction
As technologies change and evolve, so do hacking methodologies. The proliferation of the
Web as a primary medium of communication means that hackers put more effort into
exploiting it. In terms of security, web applications are quite inadequate (Zanero, Carettoni and
Zanchetta, 2005). To prevent vulnerabilities, developers should apply secure coding practices,
perform security reviews of the code, and follow development security standards. However, it
is often hard to test and review code, especially in complex or agile development environments,
and companies frequently focus on implementing functionality and satisfying customer
requirements while disregarding security aspects.
In-depth research and analysis of various vulnerabilities in tested applications shows that
although the types of vulnerabilities exposed (SQL injection, XSS, information disclosure, etc.)
are similar across applications, the properties and restrictions of the exposure instances differ
greatly from one application to another (Chen, 2010). This leads to the observation that the
methods a tool uses to discover security exposures may be efficient at detecting certain
common instances of a vulnerability while being inefficient at detecting other instances of the
same vulnerability. Sometimes tools with simpler algorithms or a different architecture are
able to fill that gap.
There are many security tools available. From a security consultant's (pen-tester's) point of
view, most of these tools make the job easier in many ways, reducing completion time and
improving test results. Unfortunately, in reality that is not always the case. Some of these tools
are outdated, some contain numerous bugs that make them less effective, and some simply do
not justify the execution time they require. On the other hand, some tools generate impressive
results but, for various reasons, do not receive much credit. This raises the questions: which
tool is the best, and can we rely on a single tool?
To investigate their features, usability, accuracy, advantages, and disadvantages, the selected
web application vulnerability scanners were run against known vulnerable web applications
(WebGoat, WAVSEP, etc.) and some self-developed test cases (baseline tests). A comparison
was then made based on the results of each tool. These results will not only help security
consultants make better choices when selecting a tool but also motivate tool vendors to
compete and improve their tools.
2. The Vulnerabilities
2.1. Vulnerability detection in web applications
The following reasons explain why I chose SQL injection and cross-site scripting as the
primary targets of detection:
i. they are the most common and (mostly) high-risk vulnerabilities present in many web
applications[4];
ii. in practice, their avoidance and detection remain difficult tasks because of the complexity
of web applications[4].
Here I will present a brief description of each vulnerability with the help of detection models.
2.1.1. SQL Injection
Web applications often use client data from input fields to build SQL queries, for example in
login/registration forms, comment fields, or any input textbox inside an HTML form tag. If the
input data is not sanitized, an attacker can use malicious input patterns that result in the
execution of arbitrary queries, which can expose sensitive data.[3] An attacker can even exploit
this flaw to inject system commands.
// Toy PHP login check, vulnerable to SQL injection: the POST values are
// concatenated directly into the query without any sanitization.
$sql = "SELECT * FROM rmit WHERE username='" . $_POST["username"] .
       "' AND password='" . $_POST["password"] . "'";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    echo "User EXISTS!";
} else {
    echo "User does NOT exist!";
}
$conn->close();
The above code is a simple PHP login check backed by a SQL query. A web application
exchanges many requests and responses between client and server. In the code above, for
instance, the user requests access to the web application by submitting credentials; the server
receives the values entered by the client, searches for them by executing a query against the
database, and responds to the client according to the database's answer.
Suppose the client asks the server to look up the name 'Bob'. The server-side query executed
against the database would be:
SELECT * FROM rmit WHERE username='Bob' AND password='qwerty'
The database returns the row of the table where the username is 'Bob' and the password is
'qwerty'. The server receives the returned value and sends the response "Welcome
[username]!" to the client. If either value is incorrect, the server responds "Username/password
is incorrect!".
query() executes the SQL statement and returns the matching records. Notice that the user
inputs, $username and $password, are used directly in the SQL query without any input
sanitization or validation, which makes the code vulnerable to SQL injection. If an attacker
supplies the following input, they can fool the server and access the web application without
knowing genuine credentials.
Username: S' OR 'A'='A
Password: S' OR 'A'='A
Then the SQL query will be interpreted as:
SELECT * FROM rmit WHERE username='S' OR 'A'='A'
and password='S' OR 'A'='A'.
Since 'A'='A' always evaluates to true, the WHERE clause no longer restricts anything, and the
whole query effectively becomes SELECT * FROM rmit. The query() call therefore always
succeeds, and this technique lets the attacker bypass the authentication mechanism.
The above example is a simplified form of SQL injection. In real web application
environments, injection syntax is far more complex and harder to exploit.
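The standard defence against the toy example above is a parameterized query, which keeps user input strictly as data. A minimal sketch in Python with sqlite3 (the table and column names "rmit", "username", and "password" mirror the toy example and are illustrative only):

```python
import sqlite3

# Build an in-memory demo table matching the toy example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rmit (username TEXT, password TEXT)")
conn.execute("INSERT INTO rmit VALUES ('Bob', 'qwerty')")

def user_exists(username, password):
    # The ? placeholders make the driver treat both inputs purely as data,
    # so a payload like "S' OR 'A'='A" cannot change the query structure.
    cur = conn.execute(
        "SELECT 1 FROM rmit WHERE username = ? AND password = ?",
        (username, password),
    )
    return cur.fetchone() is not None

print(user_exists("Bob", "qwerty"))                  # legitimate login: True
print(user_exists("S' OR 'A'='A", "S' OR 'A'='A"))   # injection attempt: False
```

With placeholders, the injection string from the example simply fails to match any row instead of rewriting the WHERE clause.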
2.1.2. XSS
Cross-site scripting refers to a family of attacks that allow an attacker to inject a malicious
script into a web application. The victim is tricked into clicking a specially crafted link to a
vulnerable website.[6] The injected script then essentially “reflects” off the vulnerable website,
and the browser executes it because it assumes the script came from the “trusted” website.
Although XSS is usually divided into three types (reflected XSS, stored XSS, and DOM-based
XSS), in reality these overlap. For a better understanding, XSS can instead be categorized by
attack target:
1. Client XSS
2. Server XSS
Since the script appears to originate from the website itself, the browser accepts and runs it. As
a result, an attacker can steal cookies, redirect the user to a phishing page, or steal sensitive
data from the user. To limit such attacks, browsers implement the JavaScript Same-Origin
Policy[5].
$name = $_GET['name'];
echo $name; // the parameter is echoed back without any sanitization
The above code is a simple example of a cross-site scripting vulnerability. When the client
passes the following value for the 'name' parameter, the browser pops up an alert message:
<script>alert('1');</script>
On the server side, the request is interpreted as echo "<script>alert('1');</script>"; the server
writes this code into the page, and it executes in the victim's browser. This type of attack is
reflected XSS. The attacker can package the whole attack into a request URL
(http://localhost/xss_test.php?name=<script>alert('Hacked');</script>&submit=Submit) and
make sure the victim opens that link, for example by sending it via email or text message.
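The usual fix for the reflected case above is to escape output before echoing it. A minimal sketch using Python's standard-library html.escape (the render_name function is hypothetical, standing in for the vulnerable echo):

```python
import html

def render_name(name):
    # html.escape neutralizes the characters (&, <, >, quotes) that allow
    # injected markup to execute, so the payload is displayed, not run.
    return "Hello, " + html.escape(name, quote=True) + "!"

payload = "<script>alert('1');</script>"
print(render_name(payload))
# The <script> tag reaches the browser as inert text:
# Hello, &lt;script&gt;alert(&#x27;1&#x27;);&lt;/script&gt;!
```

Because the angle brackets arrive as entities, the browser renders the payload as text instead of executing it.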
3. Web application vulnerability scanners
Penetration testing tools scan for different types of vulnerabilities across information systems
such as computers, operating systems, network systems, and software applications. These
vulnerabilities may originate from a vendor, from system administration activities, or from
everyday user activities[7]:
1. Vendor-originated: bugs in the software, missing patches, vulnerabilities in provided services, vulnerable default configurations, and web application vulnerabilities.
2. System-administration-originated: weak credential protection policies, insecure rights, unauthorized system configurations, et cetera.
3. User-originated: weak or vulnerable sharing or remote access policies, and malicious activities such as backdoors.
3.1. Inner Working
Most of the vulnerabilities scanners use a technique called “fuzz testing” or “fuzzing”.
“Fuzz testing or Fuzzing is a Black Box software testing technique, which basically consists in
finding implementation bugs using malformed/semi-malformed data injection in an automated
fashion.”[9]
The part of the program that does this is called a fuzzer or a fault injector.
“A fuzzer is a program which injects automatically semi-random data into a program/stack and
detects bugs. The data-generation part is made of generators, and vulnerability identification
relies on debugging tools. Generators usually use combinations of static fuzzing vectors (known-
to-be-dangerous values), or totally random data. New generation fuzzers use genetic algorithms
to link injected data and observed impact. Such tools are not public yet.”[12]
Fuzzers work mainly in two ways:
Fuzzer as a proxy: monitors the data the user puts into the input fields of the web application.
Fuzzer as a crawler: scans the web application, relying only on links, to find inputs that may
be vulnerable.
Fuzzers can also be differentiated by their properties:[9]
Specification-based fuzzers: learn the interface functionality from the protocol specification
and can generate tests either from a whole model of the protocol or from the meta-data in the
protocol.
Mutation-based fuzzers: mutation-based (or template-based) fuzzers derive message
structures from observed data, such as data packets in traffic, and mutate them either randomly
or using predetermined algorithms.
The following are the general steps that most penetration testing tools perform:
[13][11]
Identify the target
Identify inputs
Generate fuzzed data
Execute fuzzed data
Monitor for exceptions
Determine exploitability
Identify the target: selecting the fuzzing tools or techniques to run against the target.
Identify inputs: looking for potentially vulnerable input spots in the web application, scanning
it to find all links, methods, or action attributes.
Generate fuzzed data: once inputs are identified, the fuzz data is produced, either generated in
advance or generated dynamically based on the target and the data format.
Execute fuzzed data: sending the data packets to the target at the vulnerable spots identified
earlier. This is the phase where automated penetration testing is actually realized.
Monitor for exceptions: in this step the results of all the data packets are monitored for
exceptions or faults. Monitoring is a flexible task that takes many forms and depends on the
target application and the type of fuzzing being used.
Determine exploitability: once a fault is identified, and depending on the goals of the scan, it
may be necessary to determine whether the vulnerability can be exploited further. This phase
cannot be automated; it is done manually.
The working of penetration testing tools can also be described in terms of the above steps:
Crawling: this phase scans the web application to discover links that may contain vulnerable
inputs. The crawling capability depends on the input vectors supported by the vulnerability
scanner. (Steps 1 & 2)
Fuzzing: this phase sends the test data to the web application and passes the results to the next
phase. (Steps 3 & 4)
Analyzing: the results received from fuzzing are analyzed in this phase to determine whether
the web application is vulnerable. (Step 5)
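The crawl/fuzz/analyze loop above can be sketched in a few lines. To stay self-contained, this hypothetical snippet fuzzes a stand-in request handler instead of a live site (the payload list, parameter names, and vulnerable_app function are all illustrative; a real scanner would issue HTTP requests at the "execute" step):

```python
# Known-to-be-dangerous static fuzzing vectors, one per attack class.
FUZZ_VECTORS = [
    "<script>alert(1)</script>",   # reflected XSS probe
    "' OR '1'='1",                 # SQL injection probe
]

def vulnerable_app(params):
    # Stand-in for a web page that echoes the 'name' parameter unsanitized.
    return "<html>Hello " + params.get("name", "") + "</html>"

def fuzz(handler, param_names):
    findings = []
    for param in param_names:            # phases 1-2: identified inputs
        for vector in FUZZ_VECTORS:      # phase 3: generate fuzzed data
            response = handler({param: vector})   # phase 4: execute
            if vector in response:       # phase 5: analyze the response
                findings.append((param, vector))  # reflected unmodified
    return findings

print(fuzz(vulnerable_app, ["name"]))
```

The reflection check in the analyze step is deliberately naive; real scanners use error signatures, timing, and response differencing to reduce false positives.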
3.2. Advantages
The main advantage of using penetration testing tools is that is a very fast way of detecting the
vulnerabilities in comparison to the traditional way of testing which is a black box testing, makes
tester job not even difficult but time consuming as well because of little or no knowledge.[8]
Penetration testing tools provide the quick scan of the test area and provide the report to on
which analysis can be done. The knowledge base in a vulnerability scanner allows early
detection and handling of known and critical security problems.[10] Moreover, an ongoing
process of security scan using vulnerably scanners provide easy way to identify vulnerabilities
presented in the network, from both the internal and external perspective.
When new devices, or even a whole new system, are added to the network, it is recommended
to verify system and network security with the help of vulnerability scanners, so that no ‘weak
link’ is left on the network that could cause security problems in the future. Penetration testing
tools are also useful for managing and tracking security in an organization: old security
problems are solved using the existing knowledge base, and newly identified problems are
stored in it.
3.3. Limitations
Although of all previously mentioned advantages, followings are the drawbacks or limitations of vulnerability scanners[10]:
Penetration testing tools can’t find vulnerabilities such as information disclosure and encryption weaknesses. They can only find vulnerabilities that cause results that can be monitored for. Therefore, they do poor job at finding vulnerabilities related to physical operational or procedural issues.
Regular or timely scanning of network should be conducted to make security more efficient because penetration testing tools can only assess a “snapshot of time” otherwise it system could be vulnerable to new vulnerabilities or system configuration may introduce to new security holes.
Human review is always require, penetration testing tools can only report all those vulnerabilities which are installed in knowledge base. They cannot judge between a false positive or false negative response.
Automated penetration testing tools have no specific goals towards the work. Therefore, tools have to try all possible risk.
Fuzzing process will have to repeat again and again because of its random nature. Therefore, the tool would not be able to uncover all the vulnerabilities.
4. The Security Tools
Web application vulnerability scanners are tools for detecting vulnerabilities in web application
security. In order to assess the current state of the art, I have selected five leading tools to
study[14]:
(i) the types of vulnerabilities the chosen scanners are tested against,
(ii) the effectiveness of the scanners against the targeted vulnerabilities, and
(iii) the relevance of the target vulnerabilities to vulnerabilities found in the real world.
To obtain significant results for my study, I used a couple of known vulnerable web
applications. Known vulnerable applications were chosen over real-world applications because
of limited exploitation knowledge: unless each alerted vulnerability is actually exploited, it is
hard to judge the relevance of a result, which could be a false alarm or a duplicate. My results
show the effectiveness and limitations of automated tools. The objective of this research is to
assess the potential for future research, not to declare a winner. Therefore, I do not recommend
using this comparative data for purchasing decisions about specific tools.
4.1. Free/open source tools
4.1.1. Grabber
Grabber is a free and open source web application scanner written in Python. Grabber is
simple, not fast, but portable and really adaptable.[20] The software is designed to scan small
websites. Users should have prior understanding of the vulnerabilities, because Grabber only
detects a vulnerability; it does not provide a way to exploit it.
4.1.2. Vega
Vega is a free and open source web application vulnerability scanner that runs on Linux,
Macintosh, and Windows. The tool is written in Java and has a GUI.[18] Vega has a quick-scan
feature for testing a web application automatically with the default configuration. It also
provides manually driven inspection for complex web architectures.
4.1.3. Wapiti
The Wapiti framework is developed in Python and is available on most platforms, including
Windows, Unix/Linux, and Macintosh.[19] It is a completely automatic command-line tool,
although it provides command-line options for customizing or defining the scope of the whole
scan. It supports different output file formats such as text, HTML, and XML.
4.1.4. w3af
w3af is the abbreviation of the Web Application Attack and Audit Framework. The tool is
developed in Python and is only available for the Linux and Macintosh platforms.[17] The tool
is free and licensed under GPLv2.0. Various plugins are provided to test web applications
against vulnerabilities such as sqli, blind_sqli, xss, and dom_xss. There is a command-line
version of this tool as well as a GUI version with similar features. It produces output in many
formats: console, XML, text, or HTML.
4.1.5. Zed Attack Proxy
OWASP ZAP is a cross-platform framework that even runs on a Raspberry Pi. It is available on
most platforms, including Windows, Unix/Linux, and Macintosh.[16] It is completely free and
open source. Users can add and configure rules via scan policies.
5. Benchmark Overview & Assessment Criteria
The aim of the benchmark is to provide a testing environment for vulnerability scanning tools.
The test verifies how many vulnerabilities each tool detects across a wide range of URLs. The
only focus of this benchmarking process is the detection rate; exploitation of each detection is
not necessary. Each tested tool was required to have the following features:
- The ability to detect both cross-site scripting and SQL injection vulnerabilities.
- The ability to make multiple requests or scan more than one URL at once, for example using
a crawler/spider feature.
- The ability to restrict and control the scanning process to internal or external hosts.
The testing procedure of all the tools included the following phases:
5.1. TEST I. Feature Documentation
Each scanner's performance was documented and compared for specific vulnerability detection.
Based on the results received from each scanner, a detection rate was assigned, which was then
used to build the various comparison charts.
5.2. TEST II. Accuracy Assessment
A tool is less effective if its generated results are inaccurate. The aim of the accuracy
assessment is to expose false positive results and to check whether a tool detects only the
simple test cases or also covers combinations of common and advanced cases.
The scanners were all tested against the latest version of WAVSEP (v1.5), a benchmarking
platform designed to assess the detection accuracy of web application scanners, which was
released alongside the publication of this benchmark[11].
The purpose of WAVSEP’s test cases is to provide a scale for understanding which detection
barriers each scanning tool can bypass, and which common vulnerability variations can be
detected by each tool.
Detection Rate :
The various scanners were tested against the following test cases (GET/POST):
66 test cases that were vulnerable to Reflected Cross Site Scripting attacks.
80 test cases that contained Error Disclosing SQL Injection exposures.
46 test cases that contained Blind SQL Injection exposures.
10 test cases that were vulnerable to Time Based SQL Injection attacks.
Accuracy Rate:
The various scanners were also tested against a variety of false positive scenarios:
7 different categories of false positive Reflected XSS vulnerabilities.
10 different categories of false positive SQL Injection vulnerabilities.
Overall, the benchmark comprises a collection of 1413 vulnerable test cases across 2 different
attack vectors, each test case simulating a different and unique scenario that may exist in an
application.
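The headline figures reported later follow directly from these counts: the detection rate is the fraction of vulnerable cases flagged, and accuracy is judged by the false-positive rate over the deliberately non-vulnerable cases. A minimal sketch of the arithmetic (the example counts match the reflected-XSS figures reported for Grabber below):

```python
def rate(hits, total):
    """Percentage of cases flagged, rounded to the nearest whole percent."""
    return round(100.0 * hits / total)

# e.g. a scanner that detects 32 of the 64 reflected-XSS cases it was run
# against, and raises 1 false alarm over 7 false-positive scenarios:
print(rate(32, 64))   # detection rate: 50 (%)
print(rate(1, 7))     # false-positive rate: 14 (%)
```

A high detection rate is only meaningful alongside a low false-positive rate; either number alone can be gamed by flagging everything or nothing.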
5.3. TEST III. Attack Surface Coverage Assessment
To assess attack surface coverage, this assessment includes various test cases that evaluate the
efficiency of each vulnerability scanner's crawling mechanism. To measure crawler efficiency,
this section uses a benchmark, WIVET (Web Input Vector Extractor Teaser), which makes it
possible to quantify crawler coverage[21].
6. Experiment and Scanner Usage Observations
6.1. Test Setup
Three test cases are defined in this experiment. Test case I is only about the feature comparison
of the selected tools. Information about functionality and features such as session handling is
available online in each scanner's documentation.
The second test case concerns the detection rate and false positive rate of each scanner. Before
testing the scanners on the real test web application, I made a simple SQL-injection-vulnerable
page in PHP for baseline testing; its source code is provided in the appendix. After running
each scanner against the baseline test, the main test scenario, WAVSEP, was set up. WAVSEP
stands for Web Application Vulnerability Scanner Evaluation Project and is publicly available
for testing the efficiency of web application vulnerability scanners. The downloaded WAVSEP
application was deployed on an Apache Tomcat server on port 8080. The Java Runtime
Environment/Java Development Kit needs to be installed prior to any WAVSEP setup. Apache
Tomcat version 6.x and MySQL version 5.6.17 were used to run the WAVSEP application.
After downloading the file from the WAVSEP repository, place it into the web root folder of
Apache Tomcat. The platform used to run the WAVSEP web application was Kali, a Debian-
based Linux system. Then initiate the installation script by visiting the page
http://localhost:8080/wavsep/wavsep-install/install.jsp. The host, port, and root credentials can
be changed if needed; here they were left at their default values. After installation and
configuration, the website is accessible at http://localhost:8080/wavsep/
The third and last test case evaluates the crawling feature of the different web application
scanners by running them against the WIVET (Web Input Vector Extractor Teaser)
benchmarking project. This web application is simply copied to the Apache web root folder
and can be accessed at http://localhost/wivet/index.php
6.2. Observations
6.2.1. TEST I :
Scanner feature comparison (input vector support[15]):

Grabber v0.1 (Romain Gaucher): GET, POST (2 input vectors)
Zed Attack Proxy v2.3.1 (OWASP): GET, POST, Cookie, Header, XML, XmlTAG, JSON, Multipart, GWT, ODataID, ODataFilter (11 input vectors)
Vega v1.0 (Subgraph): GET, POST, Header (3 input vectors)
Wapiti v2.3.0 (Nicolas Surribas): GET, POST, Multipart (3 input vectors)
w3af v1.6 revision 5460aa0377 (the w3af team): GET, POST, Cookie, Header, PName, DIR, FILE, PATH (8 input vectors)
6.2.2. TEST II :
The second assessment criterion evaluated the crawling capability of the different scanners,
using various discovery methods to increase the scope of the auditing phase of the tested
application. This includes finding additional resources, such as text/swf/flash files, and input
delivery methods for exploitation.
This assessment focused on the automated crawling capabilities and input vector extraction
coverage of the selected vulnerability scanners, represented as each scanner's WIVET score.
The evaluation data is based on "point and shoot" usage: only the target URL was set in each
vulnerability scanner, and no other manual configuration was applied during the experiment.
These results are therefore useful for non-security experts who do not know how to manually
crawl the web application under test.
As mentioned earlier in the test setup, I used the WIVET (Web Input Vector Extractor Teaser)
project to assess the crawling capability of each tool. WIVET is a benchmark for evaluating
the crawling capability of vulnerability scanners; it was written by Bedirhan Urgun and
released under the GPL2 license.
For a valid assessment result, the crawler must use the same session during all crawling
requests and must have an exclusion feature to avoid certain web pages. In the case of WIVET,
the scanner has to avoid the 100.php page, which is a logout request. Not all selected tools have
these features, so for their evaluation I modified the WIVET project accordingly, removing the
link to the 100.php page when testing a tool that lacks an exclusion feature.
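The exclusion requirement above can be sketched as a small link filter; a hypothetical snippet (the URL list and EXCLUDED set are illustrative), assuming a crawler that reuses one HTTP session for all requests so that hitting the logout page would invalidate every subsequent request:

```python
import urllib.parse

EXCLUDED = {"100.php"}   # WIVET's logout request

def filter_links(links):
    """Drop links whose final path segment is on the exclusion list,
    so the crawler never triggers a logout and loses its session."""
    kept = []
    for link in links:
        path = urllib.parse.urlsplit(link).path
        if path.rsplit("/", 1)[-1] not in EXCLUDED:
            kept.append(link)
    return kept

links = ["http://localhost/wivet/1.php",
         "http://localhost/wivet/100.php",
         "http://localhost/wivet/2.php"]
print(filter_links(links))   # 100.php is skipped; the session stays valid
```

Scanners without such a filter lose their session mid-crawl, which is exactly why the 100.php link had to be removed from WIVET for those tools.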
[Bar chart: WIVET scores (scale 0-80) for Grabber, Vega, Wapiti, w3af, and Zed Attack Proxy.]
6.2.3. TEST III :
The third test focused on the accuracy and detection rate of the selected vulnerability scanners.
The two most common vulnerabilities (XSS and SQL injection) were chosen as the assessment
scale. All detection was on a point-and-shoot basis: only the plugins or policies related to these
vulnerabilities were enabled for each scanner, and no additional scripts were used (where a tool
supported them), to keep the results unbiased toward tools that do not support such features.
7. TOOLS
7.1. GRABBER
Tested Against WAVSEP Version:1.5
The SQL Injection Detection Accuracy of the Scanner:
Detection Rate : 14 % (19/130)
False Positive Rate : 0 % (0/10)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Erroneous 500 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Erroneous 500 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Erroneous 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Erroneous 200 Responses | HTTP POST (Body Parameters) | 19/19 | All test cases (1-19) detected. |
| Valid 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Identical 200 Responses | HTTP GET (Query String Parameter) | 0/8 | None of the cases were detected. |
| Identical 200 Responses | HTTP POST (Body Parameters) | 0/8 | None of the cases were detected. |
| False Positive SQLi Test Cases | HTTP GET (Query String Parameter) | 0/10 | No false positives detected. |
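The "Erroneous 500/200 Responses" categories above correspond to error-based detection: the scanner injects input that breaks the SQL syntax (typically a lone quote) and looks for database error signatures in the response. A simplified, standalone sketch of the matching step; the signature list is a tiny illustrative subset of what a real scanner ships:

```python
import re

# Illustrative subset of database error signatures (real scanners use hundreds).
ERROR_SIGNATURES = [
    r"you have an error in your sql syntax",   # MySQL
    r"unclosed quotation mark",                # SQL Server
    r"ora-\d{5}",                              # Oracle
    r"sqlite3?\.operationalerror",             # SQLite
]

def looks_error_based_sqli(response_body):
    """True if the response to a quote-injected request leaks a DB error."""
    body = response_body.lower()
    return any(re.search(sig, body) for sig in ERROR_SIGNATURES)
```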
The Reflected XSS Detection Accuracy of the Scanner
Detection Rate : 50 % (32/64)
False Positive Rate : 14 % (1/7)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Reflected XSS | HTTP GET (Query String Parameter) | 0/32 | None of the cases were detected. |
| Reflected XSS | HTTP POST (Body Parameters) | 32/32 | All test cases (1-32) detected. |
| False Positive | HTTP GET (Query String Parameter) | 1/7 | Only case 4 detected. |
7.1.1. Test Log:
The tool is command-line only and can be configured and executed with the following line:
# grabber --spider 1 --sql --bsql --xss --url http://192.168.56.101:8080/wavsep/active
The tool successfully crawled all URLs but did not scan any GET parameter; as a result, only the POST test cases were scanned. Because the false-positive cases are implemented only with the GET method, the tool was unable to flag any of them.
The output file was in XML format.
7.2. OWASP ZAP
Tested Against WAVSEP Version:1.5
The SQL Injection Detection Accuracy of the Scanner:
Detection Rate : 30 % (39/130)
False Positive Rate : 30 % (3/10)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Erroneous 500 Responses | HTTP GET (Query String Parameter) | 19/19 | All test cases (1-19) detected. |
| Erroneous 500 Responses | HTTP POST (Body Parameters) | 19/19 | All test cases (1-19) detected. |
| Erroneous 200 Responses | HTTP GET (Query String Parameter) | 1/19 | Only test case 1 detected. |
| Erroneous 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Identical 200 Responses | HTTP GET (Query String Parameter) | 0/8 | None of the cases were detected. |
| Identical 200 Responses | HTTP POST (Body Parameters) | 0/8 | None of the cases were detected. |
| False Positive SQLi Test Cases | HTTP GET (Query String Parameter) | 3/10 | Test cases 1, 2 and 7 detected. |
The Reflected XSS Detection Accuracy of the Scanner
Detection Rate : 98 % (63/64)
False Positive Rate : 0 % (0/7)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Reflected XSS | HTTP GET (Query String Parameter) | 31/32 | All test cases detected except test case 32. |
| Reflected XSS | HTTP POST (Body Parameters) | 32/32 | All test cases (1-32) detected. |
| False Positive | HTTP GET (Query String Parameter) | 0/7 | No false positives detected. |
7.2.1. Test Log:
All injection plugins were enabled in ZAP, and the tool was executed against the following URLs:
http://192.168.56.101:8080/wavsep/active/index-xss.jsp
http://192.168.56.101:8080/wavsep/active/index-sql.jsp
http://192.168.56.101:8080/wavsep/active/index-false.jsp
The output is published in the Alerts frame; clicking each alert shows the corresponding request/response.
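ZAP's near-perfect reflected-XSS score illustrates the basic technique these scanners share: inject a unique marker payload and check whether it comes back unencoded in the response. A minimal sketch, with an assumed probe string (real scanners randomize the marker per request and also check attribute and script contexts):

```python
import html

# Hypothetical unique probe payload; real scanners randomize the marker.
PROBE = '<script>alert(31337)</script>'

def reflected_unencoded(response_body, probe=PROBE):
    """True if the probe string is reflected verbatim (unescaped)."""
    return probe in response_body

# A vulnerable page reflects the probe as-is; a page that HTML-encodes its
# output reflects html.escape(probe) instead, which this check treats as safe.
vulnerable_page = "Hello, " + PROBE + "!"
safe_page = "Hello, " + html.escape(PROBE) + "!"
```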
7.3. VEGA
Tested Against WAVSEP Version:1.5
The SQL Injection Detection Accuracy of the Scanner:
Detection Rate : 58 % (76/130)
False Positive Rate : 70 % (7/10)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Erroneous 500 Responses | HTTP GET (Query String Parameter) | 19/19 | All test cases (1-19) detected. |
| Erroneous 500 Responses | HTTP POST (Body Parameters) | 19/19 | All test cases (1-19) detected. |
| Erroneous 200 Responses | HTTP GET (Query String Parameter) | 19/19 | All test cases (1-19) detected. |
| Erroneous 200 Responses | HTTP POST (Body Parameters) | 19/19 | All test cases (1-19) detected. |
| Valid 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Identical 200 Responses | HTTP GET (Query String Parameter) | 7/8 | Test cases 1-4 and 6-8 detected. |
| Identical 200 Responses | HTTP POST (Body Parameters) | 0/8 | None of the cases were detected. |
| False Positive SQLi Test Cases | HTTP GET (Query String Parameter) | 7/10 | False positive cases 1-4 and 6-8 detected. |
The Reflected XSS Detection Accuracy of the Scanner
Detection Rate : 100 % (64/64)
False Positive Rate : 0 % (0/7)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Reflected XSS | HTTP GET (Query String Parameter) | 32/32 | All test cases (1-32) detected. |
| Reflected XSS | HTTP POST (Body Parameters) | 32/32 | All test cases (1-32) detected. |
| False Positive | HTTP GET (Query String Parameter) | 0/7 | No false positives detected. |
7.3.1. Test Log:
Three modules were selected for this scan: Blind SQL Text Injection Differential checks, XSS Injection checks, and Blind SQL Injection Arithmetic Evaluation Differential checks. The output is published in the Scan Alerts box.
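Vega's "differential" checks work by comparing responses rather than looking for error messages: appending an always-true condition (e.g. `' AND '1'='1`) should leave the page unchanged, while an always-false condition (`' AND '1'='2`) should alter it. A schematic version of that comparison, with an illustrative function name (not Vega's actual code):

```python
def differential_blind_sqli(baseline, true_variant, false_variant):
    """
    baseline:      response body for the original parameter value
    true_variant:  response with an always-true condition appended (AND '1'='1')
    false_variant: response with an always-false condition appended (AND '1'='2')

    Likely injectable when the tautology leaves the page unchanged while
    the contradiction alters it.
    """
    return true_variant == baseline and false_variant != baseline
```

If all three responses are identical the input is probably not evaluated as SQL, and if all three differ the page is likely just dynamic, so neither case is reported.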
7.4. W3AF
Tested Against WAVSEP Version:1.5
The SQL Injection Detection Accuracy of the Scanner:
Detection Rate : 30 % (39/130)
False Positive Rate : 30 % (3/10)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Erroneous 500 Responses | HTTP GET (Query String Parameter) | 19/19 | All test cases (1-19) detected. |
| Erroneous 500 Responses | HTTP POST (Body Parameters) | 1/19 | Only test case 1 detected. |
| Erroneous 200 Responses | HTTP GET (Query String Parameter) | 19/19 | All test cases (1-19) detected. |
| Erroneous 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Identical 200 Responses | HTTP GET (Query String Parameter) | 0/8 | None of the cases were detected. |
| Identical 200 Responses | HTTP POST (Body Parameters) | 0/8 | None of the cases were detected. |
| False Positive SQLi Test Cases | HTTP GET (Query String Parameter) | 3/10 | False positive cases 2, 4 and 6 detected. |
The Reflected XSS Detection Accuracy of the Scanner
Detection Rate : 100 % (64/64)
False Positive Rate : 0 % (0/7)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Reflected XSS | HTTP GET (Query String Parameter) | 32/32 | All test cases (1-32) detected. |
| Reflected XSS | HTTP POST (Body Parameters) | 32/32 | All test cases (1-32) detected. |
| False Positive | HTTP GET (Query String Parameter) | 0/7 | No false positives detected. |
7.4.1. Test Log:
W3AF's built-in web_spider crawl plugin was used to crawl the URLs in each directory. The following plugins were enabled before the scan:
Cross-site scripting: xss (audit plugin), domXSS (grep plugin)
SQL injection: sqli, blindSqli (both audit plugins)
Index pages scanned:
http://192.168.56.101:8080/wavsep/active/index-xss.jsp
http://192.168.56.101:8080/wavsep/active/index-sql.jsp
http://192.168.56.101:8080/wavsep/active/index-false.jsp
7.5. Wapiti
Tested Against WAVSEP Version:1.5
The SQL Injection Detection Accuracy of the Scanner:
Detection Rate : 29 % (38/130)
False Positive Rate : 40 % (4/10)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Erroneous 500 Responses | HTTP GET (Query String Parameter) | 19/19 | All test cases (1-19) detected. |
| Erroneous 500 Responses | HTTP POST (Body Parameters) | 19/19 | All test cases (1-19) detected. |
| Erroneous 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Erroneous 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP GET (Query String Parameter) | 0/19 | None of the cases were detected. |
| Valid 200 Responses | HTTP POST (Body Parameters) | 0/19 | None of the cases were detected. |
| Identical 200 Responses | HTTP GET (Query String Parameter) | 0/8 | None of the cases were detected. |
| Identical 200 Responses | HTTP POST (Body Parameters) | 0/8 | None of the cases were detected. |
| False Positive SQLi Test Cases | HTTP GET (Query String Parameter) | 4/10 | False positive cases 1, 2, 6 and 7 detected. |
The Reflected XSS Detection Accuracy of the Scanner
Detection Rate : 66 % (42/64)
False Positive Rate : 42 % (3/7)
| Response Type | Input Vector | Detection Rate | Details |
|---|---|---|---|
| Reflected XSS | HTTP GET (Query String Parameter) | 21/32 | Test cases 1-5 and 16-32 detected. |
| Reflected XSS | HTTP POST (Body Parameters) | 21/32 | Test cases 1-5 and 16-32 detected. |
| False Positive | HTTP GET (Query String Parameter) | 3/7 | False positive cases 1, 2 and 6 detected. |
7.5.1. Test Log:
wapiti http://192.168.1.100:8080/wavsep/active/index-xss
The results consist of the exposures detected in the individual scans performed against the following URLs:
http://192.168.56.101:8080/wavsep/active/index-xss.jsp
http://192.168.56.101:8080/wavsep/active/index-false.jsp
http://192.168.56.101:8080/wavsep/active/index-sql.jsp
The tool relied heavily on time-based injection payloads, which clearly served their purpose in detecting vulnerable locations. It is important to note, however, that time-based detection without syntax verification can produce many false positives on slow systems. The scanner should therefore verify each successful time-based injection at least twice and, in addition, retransmit the delay-causing syntax with characters that invalidate it, in order to confirm that the delay is actually caused by that syntax.
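The verification procedure just described can be sketched as follows. This is a schematic outline, not Wapiti's implementation; the `send` callable is a stand-in for the scanner's request function, and the payload strings are illustrative:

```python
import time

def confirmed_time_based_sqli(send, delay_payload, control_payload,
                              threshold=3.0, retries=2):
    """
    send:            function performing one request with the given payload
    delay_payload:   injection expected to delay the response (e.g. SLEEP(5))
    control_payload: the same syntax deliberately broken, so it must NOT delay

    The delay must reproduce on every retry, and the invalidated control
    payload must come back fast; otherwise the finding is rejected.
    """
    for _ in range(retries):
        start = time.monotonic()
        send(delay_payload)
        if time.monotonic() - start < threshold:
            return False  # delay did not reproduce; likely a slow server blip
    start = time.monotonic()
    send(control_payload)
    return time.monotonic() - start < threshold  # control must be fast
```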
SQL Injection Vulnerability Evaluation

| Tool | Detection Rate (%) | False Positive Rate (%) |
|---|---|---|
| Grabber | 14 | 0 |
| OWASP ZAP | 30 | 30 |
| Vega | 58 | 87 |
| W3AF | 30 | 30 |
| Wapiti | 29 | 40 |

XSS Vulnerability Evaluation

| Tool | Detection Rate (%) | False Positive Rate (%) |
|---|---|---|
| Grabber | 50 | 14 |
| OWASP ZAP | 98 | 0 |
| Vega | 100 | 0 |
| W3AF | 100 | 0 |
| Wapiti | 32 | 42 |
8. Results and Recommendation
8.1. Result Verification
The testing procedure for each vulnerability scanner was performed multiple times, using a scan policy that included only the relevant configurations or plugins. This ensures that the results are consistent and not affected by some system misconfiguration.
Similarly, in order to produce reliable results, the WIVET testing was also repeated a couple of times.
8.2. Making the Results Useful to Vendors
From the experimental results, vendors can see which test cases their products failed to detect, using the table provided for each scanner. Since WAVSEP contains detailed information on each test case, this information can help vendors improve their tools by identifying weaknesses.
The practical reader who wants to put the experimental results to use can follow these guidelines for interpreting them:
The experimental results above are based on a limited set of test-case scenarios and do not cover every possible scenario. One should keep in mind that insignificant differences in results are just that, insignificant, and should be treated accordingly. The main purpose of this research is to make vendors aware of weaknesses in their products for future improvement, not to declare a winner. A difference of a few percentage points does not make one product better than the rest; a large difference probably does. When choosing a tool, one can follow this simple methodology:
1. Input Vector and Scan Barrier Support
The first step is to find out which vulnerability scanners support the input delivery methods used by the targeted application or applications. If a scanner does not support the input vectors the application uses, it will either produce fewer results or not work at all.
The security person or pen tester should understand that a scanner may be unsuitable or inefficient for one particular application yet work fine with others; the evaluation of a scanner in this respect therefore depends on the application.
2. Crawling and Input Vector Extraction
Non-security experts who have difficulty crawling a web application manually should look for a high WIVET score. The test was performed on a point-and-shoot basis, so the crawling results rely on automation as much as possible. Most pentesters can cope with a reasonable score, but a high score will definitely help during an audit. The WIVET scores for the selected scanners can be found in Test II.
3. Vulnerability Detection Features and Accuracy
A high detection rate combined with a low false-positive rate is what one wants from a vulnerability scanner, and the more feature-rich the tool, the better. It is hard for a scanner to meet all of these expectations perfectly, so the pentester should keep all of these scales in balance when choosing a tool. It is interesting to observe in Test III that an increase in detection rate tends to come with an increase in false-positive rate.
These experimental results are limited: they do not cover all test-case scenarios, complementary features such as result documentation, or other major vulnerabilities (path traversal, local file inclusion, etc.). Even when the chosen product meets the general guidelines and the choice seems right for the project, it is better to consult an expert to help evaluate the significance and reliability of that tool's test results.
References:
1. Chen, S 2010, 'Security Tools Benchmarking: Web Application Scanner Benchmark (v1.0)', Sectooladdict.blogspot.com, accessed April 26, 2015, from <http://sectooladdict.blogspot.com/2010/12/web-application-scanner-benchmark.html>.
2. Zanero, S, Carettoni, L & Zanchetta, M 2005, 'Automatic Detection of Web Application Security Flaws', accessed April 26, 2015, from <http://www.s0ftpj.org/docs/Zanero-BH-Amsterdam-05.pdf>.
3. Morgan, David 2006, 'Web application security – SQL injection attacks', Network Security, vol. 2006, no. 4, pp. 4-5.
4. Huang, Y-W, Huang, S-K, Lin, T-P & Tsai, C-H 2003, 'Web application security assessment by fault injection and behavior monitoring', in Proceedings of the 12th International Conference on World Wide Web (WWW '03), ACM, New York, NY, USA, pp. 148-159, DOI 10.1145/775152.775174, from <http://doi.acm.org/10.1145/775152.775174>.
5. Wiesmann, A, Stock, A, Curphey, M & Stirbei, R 2005, 'A Guide to Building Secure Web Applications and Web Services', accessed May 2, 2015, from <http://www.taurean.net/docs/OWASPGuide2.0.1.pdf>.
6. Morgan, David 2006, 'Web injection attacks', Network Security, vol. 2006, no. 3, pp. 8-10, from <http://www.sciencedirect.com/science/article/pii/S1353485806703440>.
7. Jnena, Rami MF 2013, 'Modern Approach for WEB Applications Vulnerability Analysis', dissertation, The Islamic University of Gaza.
8. Oehlert, P 2005, 'Violating Assumptions with Fuzzing', IEEE Security & Privacy, vol. 3, no. 2, pp. 58-62, accessed April 16, 2015, from <http://dx.doi.org/10.1109/msp.2005.55>.
9. Owasp.org 2014, 'OWASP', accessed April 28, 2015, from <https://www.owasp.org/index.php/Main_Page>.
10. Takanen, A, DeMott, J & Miller, C 2008, Fuzzing for Software Security Testing and Quality Assurance, 1st edn, Artech House, Boston.
11. Code.google.com 2014, 'wavsep - The Web Application Vulnerability Scanner Evaluation Project - Google Project Hosting', accessed May 2, 2015, from <https://code.google.com/p/wavsep/>.
12. Loo, F 2011, 'Comparison of penetration testing tools for web applications', accessed April 17, 2015, from <http://www.ru.nl/publish/pages/578936/frank_van_der_loo_scriptie.pdf>.
13. Netiq.com 2014, 'Novell Doc: Sentinel 6.1 Rapid Deployment User Guide - Architecture Overview', accessed April 17, 2014, from <https://www.netiq.com/documentation/sentinel61rd/s61rd_user/data/abt_architecture.html>.
14. Bau, J, Bursztein, E, Gupta, D & Mitchell, J 2010, 'State of the Art: Automated Black-Box Web Application Vulnerability Testing', in 2010 IEEE Symposium on Security and Privacy (SP), pp. 332-345, accessed May 2, 2015, from <http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5504795&isnumber=5504699>.
15. Sectoolmarket.com 2015, 'The Input Delivery Method Scanning Support of Web Application Vulnerability Scanners - WAVSEP Benchmark 2014', accessed May 2, 2015, from <http://www.sectoolmarket.com/input-vector-support-unified-list.html>.
16. Code.google.com 2015, 'zaproxy - OWASP ZAP: An easy to use integrated penetration testing tool for finding vulnerabilities in web applications - Google Project Hosting', accessed April 15, 2015, from <https://code.google.com/p/zaproxy/>.
17. Docs.w3af.org 2015, 'Welcome to w3af's documentation — w3af - Web application attack and audit framework 1.6.51 documentation', accessed April 15, 2015, from <http://docs.w3af.org/en/latest/>.
18. Subgraph.com 2015, 'Documentation', accessed April 15, 2015, from <https://subgraph.com/vega/documentation/index.en.html>.
19. Wapiti.sourceforge.net 2015, 'Wapiti: a Free and Open-Source web-application vulnerability scanner in Python for Windows, Linux, BSD, OSX', accessed April 15, 2015, from <http://wapiti.sourceforge.net/>.
20. Rgaucher.info 2015, 'Grabber!', accessed May 15, 2015, from <http://rgaucher.info/beta/grabber/>.
21. GitHub 2014, 'WIVET (Web Input Vector Extractor Teaser)', accessed May 4, 2015, from <https://github.com/bedirhan/wivet>.
Appendix
SQL injection toy example
<?php
$servername = "localhost";
$username = "root";
$password = "";
$dbname = "mysql";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);

// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

echo "<html><body><form action='sql_test.php' method='post'>";
echo "<input type='text' name='name'>";
echo "<input type='submit'></form>";

if (isset($_POST["name"])) {
    // Vulnerable: user input is concatenated directly into the SQL string.
    $sql = "SELECT NAME FROM rmit WHERE NAME='" . $_POST["name"] . "'";
    $result = $conn->query($sql);

    if ($result && $result->num_rows > 0) {
        echo "EXIST";
    } else {
        echo "He is not here!!";
    }
}
$conn->close();

echo "</body></html>";
?>
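The toy example above is vulnerable because user input is concatenated into the SQL string, so a payload such as `' OR '1'='1` becomes part of the query. The standard fix is a parameterized query; here is the same lookup sketched in Python with an in-memory sqlite3 table (a stand-in for the MySQL table used above, with an assumed sample row):

```python
import sqlite3

# In-memory stand-in for the MySQL 'rmit' table; 'alice' is a sample row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rmit (NAME TEXT)")
conn.execute("INSERT INTO rmit VALUES ('alice')")

def name_exists(conn, name):
    # The ? placeholder keeps the input as data, never as SQL syntax.
    cur = conn.execute("SELECT NAME FROM rmit WHERE NAME = ?", (name,))
    return cur.fetchone() is not None

print(name_exists(conn, "alice"))         # True
print(name_exists(conn, "' OR '1'='1"))   # False: the payload stays inert
```

The equivalent fix in the PHP example would use a mysqli prepared statement instead of string concatenation.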
Cross Site Scripting Toy Example
<html>
<body>
<form action="xss_test.php" method="get">
<input type="text" name="name" value="">
<input type="submit" name="submit" value="Submit">
</form>
<?php
// Vulnerable: the GET parameter is echoed back without any output encoding.
$name = isset($_GET['name']) ? $_GET['name'] : '';
echo "Hello, $name!<br>";
echo "Welcome $name<br>";
?>
</body>
</html>
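The defence for this XSS example is contextual output encoding (in PHP, htmlspecialchars() on every echoed parameter). The same behaviour, sketched with Python's html.escape so it can be run standalone (the function name is illustrative):

```python
import html

def render_greeting(name):
    # Encoding on output turns markup characters into inert HTML entities.
    return "Welcome " + html.escape(name) + "<br>"

print(render_greeting("Bob"))
print(render_greeting('<script>alert(1)</script>'))
# The script tag is emitted as &lt;script&gt;... and is never executed.
```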