Mobile Application Assessment by the Numbers: A Whole-istic View


DESCRIPTION

Typically, mobile application assessments myopically test only the software living on the device. However, the code deployed on the device, the corporate web services backing the device, and any third-party supporting services must all be tested "whole-istically," along with the interactions between these components, to reach an acceptable level of software assurance for mobile applications.


SESSION ID: MBS-F02

Mobile Application Assessment By The Numbers – A Whole-istic View

Dan Cornell, CTO
Denim Group
@danielcornell


Agenda

- Background
  - Mobile Application Threat Model
  - Assessment Methodology
  - Data Collected
- Findings
  - Types of Vulnerabilities Identified
  - Where Vulnerabilities Were Identified
  - How Vulnerabilities Were Identified


Background


Introduction

- Data comes from:
  - 61 assessments
  - 20 applications
- What we found:
  - 957 total vulnerabilities
  - Assessments with the most vulnerabilities: three assessments each had 10 Critical vulnerabilities
  - Assessments with the fewest vulnerabilities: only three assessments had just one vulnerability (all others had more)


Research Background

- Mobile application threat model
- Assessment methodology
  - Static versus dynamic testing
  - Automated versus manual testing
- Why CWE?
- Assessment data


Mobile Application Threat Model

- More complicated than a "typical" web application threat model
- Not just about code running on the device
- Main components:
  - Mobile application
  - Enterprise web services
  - 3rd party web services


Assessment Methodology

- Testing activities
  - Combination of both static and dynamic activities
  - Combination of automated tools, manual review of automated test results, and manual testing
  - Tools include Fortify SCA, IBM Rational AppScan, and PortSwigger Burp Suite
- Scope can include:
  - Code running on the device itself
  - Enterprise services
  - 3rd party supporting services


Determining Severity

Based on a customized DREAD model:

- Damage potential
- Reproducibility
- Exploitability
- Affected users
- Discoverability
- Each factor ranked 1-3

Collapsed to a single dimension:

- Critical: > 2.6
- High: 2.3 – 2.6
- Medium: 2.0 – 2.3
- Low: < 2.0
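To make the scoring concrete, here is a minimal sketch of how a customized DREAD calculation along these lines could work. It assumes the overall score is the plain average of the five factors and that boundary values fall into the higher band; the deck does not specify either detail, so the actual customization may differ.

```java
// Minimal sketch of a DREAD-style severity calculation, assuming each factor is
// scored 1-3 and the overall score is the simple average of the five factors.
public final class DreadScore {

    enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    static Severity classify(int damage, int reproducibility, int exploitability,
                             int affectedUsers, int discoverability) {
        double score = (damage + reproducibility + exploitability
                + affectedUsers + discoverability) / 5.0;
        if (score > 2.6) return Severity.CRITICAL;   // Critical: > 2.6
        if (score >= 2.3) return Severity.HIGH;      // High: 2.3 – 2.6
        if (score >= 2.0) return Severity.MEDIUM;    // Medium: 2.0 – 2.3
        return Severity.LOW;                          // Low: < 2.0
    }

    public static void main(String[] args) {
        // Example: damaging, easy to reproduce and exploit, affects all users,
        // moderately discoverable -> average 2.8, i.e. Critical.
        System.out.println(classify(3, 3, 3, 3, 2));
    }
}
```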


Why CWE?

- Vulnerability taxonomy used was MITRE's Common Weakness Enumeration (CWE): http://cwe.mitre.org/
- Every tool has its own "spin" on naming vulnerabilities
- OWASP Top 10 / WASC 24 are helpful but not comprehensive
- CWE is exhaustive (though a bit sprawling at times)
- Reasonably well-adopted standard
- Many tools have mappings to CWE for their results
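As an illustration of why a common taxonomy helps, the sketch below normalizes tool-specific finding names to CWE IDs so results from different scanners can be counted together. The finding names and the mapping are illustrative assumptions, not the actual rule sets shipped with any of the tools mentioned above.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch: fold tool-specific finding names into CWE IDs so findings
// from different scanners can be aggregated under one taxonomy.
public final class CweNormalizer {

    // Hypothetical finding names; real tools ship their own (much larger) mappings.
    private static final Map<String, Integer> FINDING_NAME_TO_CWE = Map.of(
            "SQL Injection", 89,                  // CWE-89
            "Privacy Violation: Logging", 532,    // CWE-532 (info leak through log files)
            "Cleartext Transmission", 319,        // CWE-319
            "Cross-Site Request Forgery", 352);   // CWE-352

    static Optional<Integer> toCwe(String toolFindingName) {
        return Optional.ofNullable(FINDING_NAME_TO_CWE.get(toolFindingName));
    }

    public static void main(String[] args) {
        toCwe("Privacy Violation: Logging")
                .ifPresent(cwe -> System.out.println("CWE-" + cwe));  // CWE-532
    }
}
```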


Assessment Data

- Subset of mobile assessments
- Mostly customer-facing applications from financial services organizations
- Primarily iOS and Android applications
  - Some WAP, Windows Phone 7


What Did We Find?


Types of Vulnerabilities Found

- Top 10 Most Prevalent CWEs – Overall
- Top 10 Most Prevalent CWEs – Critical/High Risk


Top 10 Most Prevalent CWEs – Overall


(number of instances found, most prevalent first)

- Information Leak Through Log Files - LOW RISK: 284
- Information Exposure - LOW RISK: 271
- Cleartext Transmission of Sensitive Information - LOW RISK: 26
- Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection') - CRITICAL: 22
- Improper Input Validation - LOW RISK: 21
- External Control of System or Configuration Setting - LOW RISK: 21
- Information Leak Through Debug Information - LOW RISK: 20
- Cross-Site Request Forgery (CSRF) - LOW RISK: 16
- Information Exposure Through an Error Message - LOW RISK: 14
- Use of a Broken or Risky Cryptographic Algorithm - LOW RISK: 14
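Information Leak Through Log Files (CWE-532) tops the overall list. The hypothetical Android snippet below illustrates the kind of code typically behind such findings, alongside a safer alternative; the class, tag, and variable names are invented for illustration.

```java
import android.util.Log;

// Hypothetical snippet illustrating CWE-532 (Information Leak Through Log Files).
class SessionManager {
    private static final String TAG = "SessionManager";

    void onLoginSucceeded(String username, String sessionToken) {
        // Vulnerable: writing the session token to logcat leaves it readable in
        // device logs and bug reports.
        Log.d(TAG, "Logged in " + username + " with token " + sessionToken);

        // Safer: log the event without the secret (and strip debug logging from
        // release builds, e.g. via ProGuard/R8 rules).
        Log.d(TAG, "Login succeeded for " + username);
    }
}
```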


Top 10 Most Prevalent CWEs – Critical/High Risk


(number of instances found, most prevalent first)

- Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection') - CRITICAL: 22
- Information Leak Through Caching - HIGH: 6
- Access Control Bypass Through User-Controlled Key - CRITICAL: 6
- Access Control (Authorization) Issues - CRITICAL: 4
- Exposure of Access Control List Files to an Unauthorized Control Sphere - CRITICAL: 3
- Incorrect User Management - CRITICAL: 3
- Uncontrolled Resource Consumption ('Resource Exhaustion') - CRITICAL: 3
- Missing XML Validation - CRITICAL: 2
- Failure to Preserve Web Page Structure ('Cross-Site Scripting') - CRITICAL: 1
- Uncontrolled Resource Consumption ('Resource Exhaustion') - CRITICAL: 1
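SQL injection (CWE-89) dominates the Critical findings. The sketch below shows the classic vulnerable concatenation pattern, of the kind typically found in the web services backing a mobile app, next to a parameterized alternative; the table and column names are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Minimal sketch of the SQL injection pattern (CWE-89) and the parameterized fix.
class AccountDao {

    // Vulnerable: the request parameter is concatenated into the query, so input
    // like "' OR '1'='1" changes the statement's meaning.
    ResultSet findAccountUnsafe(Connection conn, String accountId) throws SQLException {
        String sql = "SELECT * FROM accounts WHERE account_id = '" + accountId + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safer: bind the value as a parameter so it is never parsed as SQL.
    ResultSet findAccountSafe(Connection conn, String accountId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM accounts WHERE account_id = ?");
        ps.setString(1, accountId);
        return ps.executeQuery();
    }
}
```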


OWASP Top 10 Mobile Risks

- Similar to the OWASP Top 10 Web Application Risks, but targeted at mobile applications (obviously)
- Top risks to mobile applications: https://www.owasp.org/index.php/OWASP_Mobile_Security_Project#tab=Top_Ten_Mobile_Risks
- Work in progress to update this based on industry-contributed data


OWASP Top 10 Mobile Risks

- M1: Insecure Data Storage
- M2: Weak Server Side Controls
- M3: Insufficient Transport Layer Protection
- M4: Client Side Injection
- M5: Poor Authorization and Authentication
- M6: Improper Session Handling
- M7: Security Decisions Via Untrusted Inputs
- M8: Side Channel Data Leakage
- M9: Broken Cryptography
- M10: Sensitive Information Disclosure
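As a concrete illustration of M3, which roughly corresponds to the Cleartext Transmission of Sensitive Information findings seen earlier, the hypothetical snippet below contrasts a plain-HTTP service call with its HTTPS counterpart; the endpoint URL and class name are invented.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical snippet illustrating insufficient transport layer protection (M3) /
// cleartext transmission of sensitive information (CWE-319).
class BalanceClient {

    // Vulnerable: account data travels over plain HTTP and can be read or altered
    // by anyone on the network path.
    HttpURLConnection openInsecure() throws IOException {
        URL url = new URL("http://api.example.com/v1/balance");
        return (HttpURLConnection) url.openConnection();
    }

    // Safer: use HTTPS for all service traffic (ideally with certificate pinning).
    HttpURLConnection openSecure() throws IOException {
        URL url = new URL("https://api.example.com/v1/balance");
        return (HttpURLConnection) url.openConnection();
    }
}
```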


Compare to OWASP Top 10 Mobile Risks


Strong Overlap:

- Weak server-side controls
- Poor authentication and authorization
- Security decisions via untrusted inputs
- Sensitive information disclosure

Overlap:

- Insecure data storage
- Insufficient transport layer data protection
- Improper session handling
- Side channel data leakage
- Broken cryptography

Weak Overlap:

- Client-side injection


Where Did We Find Overall Vulnerabilities?


- Corporate web service: 591 (62%)
- Device: 342 (36%)
- Third-party web service: 24 (2%)


Where Did We Find Critical/High Risk Vulnerabilities?


- Corporate web service: 41 (70%)
- Device: 15 (25%)
- Third-party web service: 3 (5%)


Analysis of “Where” Data

- Mobile security is about more than the code running on the device
- The things we really care about (Critical, High) are most frequently found on corporate web services
  - Then on the device
  - Then on 3rd party web services
- Reflects the "scale" benefits of finding web services vulnerabilities


How Did We Find Vulnerabilities?

- Static vs. dynamic testing
- Automated vs. manual testing
- What techniques identified the most vulnerabilities?
- What techniques identified the most serious vulnerabilities?


Static vs. Dynamic Method of Finding Vulnerabilities


Severity       Static   Dynamic
Critical           33        10
High Risk           2        14
Medium Risk         9        84
Low Risk          599       206


Static vs. Dynamic Method of Finding Vulnerabilities


Static:  Critical 5%, High Risk 0%, Medium Risk 2%, Low Risk 93%
Dynamic: Critical 3%, High Risk 4%, Medium Risk 27%, Low Risk 66%


Critical and High Risk Vulnerabilities

- Static testing was more effective at finding serious (Critical and High) vulnerabilities
- But it also found a lot of lower-risk vulnerabilities (as well as results that had to be filtered out)


Critical/High Risk Vulnerabilities Found:

- Found with static testing: 35 (59%)
- Found with dynamic testing: 24 (41%)


Automated vs. Manual Method of Finding Vulnerabilities


Severity       Automatic   Manual
Critical              33       10
High Risk              1       15
Medium Risk            4       89
Low Risk             526      279


Automated vs. Manual Method of Finding Vulnerabilities


Automatic: Critical 6%, High Risk 0%, Medium Risk 1%, Low Risk 93%
Manual:    Critical 2%, High Risk 4%, Medium Risk 23%, Low Risk 71%


Automated vs. Manual Method of Finding Vulnerabilities (Critical and High)

- Automated testing was more effective at finding serious (Critical and High) vulnerabilities


Critical/High Risk Vulnerabilities Found:

- Found with automated testing: 34 (58%)
- Found with manual testing: 25 (42%)


Automated vs. Manual, Static vs. Dynamic Methods


Severity       Automatic/Static   Manual/Dynamic   Manual/Static
Critical                     33               10               0
High Risk                     1               14               1
Medium Risk                   4               84              73
Low Risk                    526              206               5


Automated vs. Manual, Static vs. Dynamic Methods


Method       Static   Dynamic
Automatic       564         0
Manual           79       314


Automated vs. Manual, Static vs. Dynamic for Critical and High Vulnerabilities


Method       Static   Dynamic
Automatic        34         0
Manual            1        24


Analysis of “How” Data

- A comprehensive mobile application security assessment program must incorporate a significant manual testing component
- Automated tools for testing mobile applications are not as mature as those for testing web applications
- Web services can be challenging to test in an automated manner


On-Device Vulnerabilities By Platform

Platform          Assessments   Total Vulnerabilities on Device   Average Vulnerabilities per Assessment
iOS                        39                               252                                      6.5
Android                    19                                84                                      4.4
Windows Phone 7             1                                 3                                      3
WAP                         1                                 3                                      3


Other Observations

- We also include "other observations" as part of our assessments
- These reflect:
  - Application weaknesses
  - Coding flaws or behavior that are not "best practice" but do not reflect an immediate, exploitable vulnerability
- We had 1,948 "other observations"
  - Roughly twice as many as actual vulnerabilities


Other Observations – Where Were They Found?


- Corporate web service: 55 (3%)
- Device: 1,892 (97%)
- Third-party web service: 1 (0%)


What Does This Mean?

- Most of these "other observations" are about code on the device
  - Mobile application developers need help building better code
  - AND automated code scanning tools need to get better at filtering out less valuable results
- Something that is not a problem today could become one later
  - Identification of new platform vulnerabilities
  - Changes coming along with a new application release


Conclusions

- What to test?
  - Mobile "apps" are not standalone applications
  - They are systems of applications
  - Serious vulnerabilities can exist in any system component
- How to test?
  - Mobile application testing does benefit from automation
  - Manual review and testing is required to find the most serious issues
  - A combination of static and dynamic testing is required for coverage


Recommendations

- Plan your mobile application assessment strategy with coverage in mind
- Evaluate the value of automation for your testing
  - More "cost" than simply licensing – deployment time and results culling
- Look for opportunities to streamline
  - Fast application release cycles can require frequent assessments
- Control scope:
  - Assess application changes (versus entire applications)
  - Manage the cost of reporting


Next Steps (For Us)

- Incorporate more assessment data
- Possible collaboration with the OWASP Top 10 Mobile Risks
  - Currently being reworked based on data sets such as ours
- Better analysis of applications over time
