
Page 1:

An Overview of the DHS/NIST SAMATE Project

SSE Seminar April 10, 2006

Michael Kass

Information Technology Laboratory

National Institute of Standards and Technology

http://samate.nist.gov

[email protected]

Page 2:

Outline

• Background on the SAMATE project

• Overview of work done to date

• Opportunities for Collaboration with NKU

Page 3:

The Questions

SwA tools are increasingly being used to provide an argument for an application's software assurance through the entire SDLC.

• Do software assurance tools work as they should?
• Do they really find vulnerabilities and catch bugs? How much assurance does running the tool provide?
• Software assurance tools should be:
  – Tested: accurate and reliable
  – Peer reviewed
  – Generally accepted

Page 4:

DHS Tasks NIST to:

• Assess current SwA tools and methods in order to identify deficiencies that can lead to software product failures and vulnerabilities

• Identify gaps in SwA tools and methods, and suggest areas of further research

• Develop metrics for the effectiveness of SwA tools

Page 5:

NIST Software Assurance Metrics and Tool Evaluation (SAMATE) Project Goals

• Define a taxonomy of Software Assurance tools and their functions

• Define a “common/agreed-upon” classification scheme for software security flaws/weaknesses

• Develop functional specifications for SwA tools

• Create an open reference dataset of test material for SwA tools

• Define common metrics for measuring the assurance of applications and the effectiveness of SwA tools

• Identify gaps in the capabilities of today's tools and make recommendations to DHS for funding research in these areas

Page 6:

SAMATE Products

[Diagram relating the SAMATE workshop and tool focus group to the SAMATE products: a tool function taxonomy, a tool functional specification, a reference dataset of tests, code and tool metrics, and a software flaw taxonomy.]

Page 7:

Products: SwA Tool Taxonomy (high-level view)

"External" Tools

– Network Scanners
– Web Application Scanners
– Web Services Scanners
– Dynamic Analysis/Fault Injection Tools

Page 8:

Products: SwA Tool Taxonomy (cont.)

"Internal" Tools

– Compilers
– Software Requirements Verification
– Software Design/Model Verification
– Source Code Scanners
– Byte Code Scanners
– Binary Code Scanners

Page 9:

Products: Common Software Flaw Taxonomy

• There are currently multiple taxonomies to choose from or integrate (CWE, CLASP, Fortify Software, and others); we need to integrate them into one common, agreed-upon taxonomy

• A flaw taxonomy must cover the entire SDLC (flaws introduced during requirements, design, implementation, deployment)

• A taxonomy should also contain axes/views such as "remediation" or "point of introduction in the SDLC"

• Volunteers helping with the flaw taxonomy include Cigital, Mitre, Ounce Labs, Klocwork Inc., Secure Software Inc., Fortify Software, and OWASP

Page 10:

Products: SwA Tool Specification

• Need to define core functions of each class of tool

• Need to define a “base” set of functions that constitute a minimally acceptable level of capability

• Specification must be clear, unambiguous and testable

Page 11:

Products: Reference Dataset for Tool Evaluation

• A reference dataset for static analysis must be “open”

• Users must be able to access, critique and contribute

• Collaboration and contribution:
  – Need a community front end (an interface for contributors), where peer review decides whether a submitted piece of code is a vulnerability

• Reference dataset must be based upon a common enumeration of software weaknesses as defined by Mitre’s CWE

Page 12:

Products: Common Metrics for Software

• No “standard” code metrics are being used among source code scanners

• Hard to get agreement on a "global" set of code metrics, because risk varies depending upon local requirements

• Code metrics must be multi-dimensional; no single scalar metric can measure software assurance (see the sketch below)
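
As an illustration of a multi-dimensional metric (a sketch only; the struct and field names below are hypothetical, not part of any SAMATE product):

#include <stdio.h>

/* Hypothetical sketch: a multi-dimensional code metric as a vector of
 * per-category weakness densities, rather than a single scalar score. */
struct swa_metrics {
    double buffer_errors_per_kloc;   /* range/buffer weaknesses */
    double injection_per_kloc;       /* command/format injection */
    double race_conditions_per_kloc; /* time-and-state weaknesses */
    double resource_errors_per_kloc; /* leaks, double free, use after free */
};

int main(void) {
    struct swa_metrics m = { 1.2, 0.3, 0.1, 0.8 };  /* invented example values */
    /* Report each dimension separately; collapsing them into one number
     * would hide which risks matter for a given deployment. */
    printf("buffer: %.1f  injection: %.1f  race: %.1f  resource: %.1f (per KLOC)\n",
           m.buffer_errors_per_kloc, m.injection_per_kloc,
           m.race_conditions_per_kloc, m.resource_errors_per_kloc);
    return 0;
}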

Page 13:

Products: Common Metrics for SwA Tools

• How “effective” is a tool?

– How many "false positives" does it produce?
– How many real flaws does it miss?
– Should the metric be some combination of the above? (one such combination is sketched below)
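
A minimal sketch of one such combination, assuming a benchmark with a known set of flaws (the counts below are invented for illustration):

#include <stdio.h>

/* Score a tool run against a benchmark by counting true positives
 * (real flaws reported), false positives (spurious reports), and
 * false negatives (real flaws missed). */
int main(void) {
    int tp = 40, fp = 15, fn = 10;   /* hypothetical counts from one tool run */

    double precision = (double)tp / (tp + fp);  /* share of reports that are real */
    double recall    = (double)tp / (tp + fn);  /* share of real flaws found */
    double f1 = 2.0 * precision * recall / (precision + recall);

    printf("precision=%.2f recall=%.2f F1=%.2f\n", precision, recall, f1);
    return 0;
}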

Page 14:

Initial SAMATE Work:

• The SAMATE (Software Assurance Metrics and Tool Evaluation) project began work in 2005
  – Initial kickoff meeting, August 2005
• Survey the state of the art in SwA tools
• Classify SwA tools
• Choose a class of SwA tool for which to develop a functional specification
• Enlist volunteers to help:
  – Develop SwA tool functional specifications
  – Contribute test cases for SwA tools
  – Help define effectiveness metrics for SwA tools

• Over 50 members on the SAMATE email list

Page 15:

Initial SAMATE Work (cont.):

• Follow-on SAMATE meeting in Long Beach November 2005

• Paper presentations on the state of the art in SwA tools
  – code and tool metrics, benchmarks

• "Target Practice" against contributed test cases
  – Test case contributions from Fortify, Klocwork, MIT, and Secure Software Inc.
  – Usability/validity of test cases (coverage, complexity, variation) was discussed
  – Required features of an online repository of SwA artifacts were discussed

• Discussion of what might constitute a "baseline" benchmark of test cases for source code analysis tools
  – Set the bar "low" to start with
  – A mix of discrete and real-world test cases is needed

Page 16:

Latest Work

• Currently developing a "baseline" functional specification for Source Code Analysis tools
  – Defining minimal functional requirements
  – Defining requirements for "optional" tool features
  – Defining a dictionary of terms
    • What is a "weakness", a "false positive", "control flow", "inter-procedural analysis", etc.?
  – Linking functional tool requirements (finding weaknesses) to Mitre's Common Weakness Enumeration (CWE)
  – Defining minimal code complexities that a tool should handle

• Continue work on the online SAMATE Reference Dataset (populate with test cases, and add usability features)

Page 17:

Source Code Scanner Requirements: The Tool Shall:

• SCA-RM-1: Identify any code weakness that is in the subset of the Common Weakness Enumeration list that applies to the coding language being analyzed (listed in Appendix A)

• SCA-RM-2: Generate a text report identifying all weaknesses that it finds in a source code application

• SCA-RM-3: Identify a weakness by its proper CWE identifier

• SCA-RM-4: Specify the location of a weakness by providing the directory path, file name, and line number

• SCA-RM-9: Be capable of detecting weaknesses that may exist within complex coding constructs (listed in Appendix B; an illustrative report line is sketched after this list)
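
For illustration, a report line meeting SCA-RM-2 through SCA-RM-4 might look like the output of this sketch; the format, names, and CWE identifier are hypothetical, not taken from the draft specification:

#include <stdio.h>

/* Hypothetical report emitter: one line per finding, giving a CWE
 * identifier (SCA-RM-3) and the directory path, file name, and line
 * number (SCA-RM-4) in a plain-text report (SCA-RM-2). */
static void report_weakness(const char *cwe_id, const char *path,
                            const char *file, int line) {
    printf("%s: %s/%s:%d\n", cwe_id, path, file, line);
}

int main(void) {
    /* Invented example finding. */
    report_weakness("CWE-121 (stack-based buffer overflow)", "src/test", "case001.c", 42);
    return 0;
}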

Page 18:

CWE Subset of Weaknesses for Source Code Analysis Tools (for test coverage)

• Location
  – Environment
  – Configuration
  – Code
• Source Code
  – Data Handling
  – API Abuse
  – Security Features
  – Time and State
  – Error Handling
  – Code Quality
  – Encapsulation
• Byte Code/Object Code
• Motivation/Intent
• Time of Introduction

Page 19:

Appendix B: Coding Constructs (for test complexity and variation)

• Initially based on MIT's 22 C code constructs (Kratkiewicz, Lippmann, MIT Lincoln Lab); a minimal example follows the list
  – buffer (address, index, length/limit) complexity
  – obfuscation via container
  – local and secondary control flow
  – environmental dependencies
  – asynchrony
  – loop structure and complexity
  – memory (access type, location)
  – aliasing (pointer or variable)
  – tainted data (via input, file, socket, environment)
  – other (to be added) …
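
A minimal sketch combining two of these constructs, tainted data via input and secondary control flow (an illustration, not an official SRD test case):

#include <stdio.h>
#include <string.h>

/* Tainted data (read from stdin) reaches a buffer write, guarded by
 * an if statement that does not actually bound the copy. */
int main(void) {
    char input[64];
    char buf[16];

    if (fgets(input, sizeof(input), stdin) == NULL)
        return 1;

    if (input[0] != '\0')      /* secondary control flow */
        strcpy(buf, input);    /* BAD: up to 63 tainted bytes into buf[16] */

    printf("%s\n", buf);
    return 0;
}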

Page 20:

The coverage/complexity/variation "cube" of the SAMATE Reference Dataset

[Figure: a cube with three axes: CWE Coverage, Code Complexity, and Variation]

Page 21:

Test Case Coverage of CWE

• CWE is still “in progress”, but SAMATE is already aligning its specification and reference dataset terminology with it

• Coverage is based upon initial contributed tests by Klocwork (40), Fortify Software (80), MIT Lincoln Lab (1000) and Secure Software Inc. (20)

• NIST is supplementing this with other test cases to “fill in” coverage of the CWE

• The goal in populating the SRD (SAMATE Reference Dataset) is a test suite for a "baseline benchmark" of source code analysis tools

Page 22:

CWE Test Coverage

Location
  Environment
  Configuration
  Code
    Source Code
      Data Handling
        Input Validation
          (PATH) Pathname Traversal and Equivalence Errors
          Injection
            Command Injection
              OS Command Injection (1 Fortify)
          Technology-Specific Input Validation Problems
        Output Validation
        Range Errors
          Buffer Errors
            OVER - Unbounded Transfer ('classic overflow')
              Stack overflow (43 Fortify, 1164 MIT, 1 Secure Software)
              Heap overflow (10 Fortify, 4 MIT, 1 Secure Software)
            Write-what-where condition (1 Secure Software)
            UNDER - Boundary beginning violation (1 Secure Software)
            READ - Out-of-bounds Read
              OREAD - Buffer over-read (1 MIT)
            Wrap-around error
            Unchecked array indexing
            LENCALC - Other length calculation error
            Miscalculated null termination (37 Fortify, 2 Secure Software)
        String Errors
          FORMAT - Format string vulnerability (7 Fortify, 1 Secure Software)
          Improper string length checking (1 Secure Software)
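
For reference, a minimal example of the FORMAT weakness listed above (an illustration, not one of the contributed test cases):

#include <stdio.h>

/* Minimal format string vulnerability: attacker-controlled data used
 * directly as the printf format string. */
int main(int argc, char *argv[]) {
    if (argc > 1)
        printf(argv[1]);   /* BAD: should be printf("%s", argv[1]) */
    return 0;
}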

Page 23:

CWE Test Coverage (cont.)

Type Errors
Representation Errors
  (NUM) Numeric Errors
    OVERFLOW - Integer overflow (wrap or wraparound) (7 Fortify)
    UNDERFLOW - Integer underflow (wrap or wraparound)
    Integer coercion error (1 Secure Software)
    OBO - Off-by-one Error
    Sign extension error (1 Secure Software)
    Signed to unsigned conversion error
    Unsigned to signed conversion error (1 Secure Software)
    TRUNC - Numeric truncation error (1 Secure Software)
    BYTEORD - Numeric Byte Ordering Error
(INFO) Information Management Errors
  LEAK - Information Leak (information disclosure)
    INT - Intended information leak (2 Fortify)
API Abuse
  Often Misused: String Management (36 Fortify)
Security Features
  Password Management
    Plaintext Storage (1 Secure Software)
  (CRYPTO) Cryptographic errors
    KEYMGT - Key Management Errors (1 Secure Software)
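
A minimal illustration of the signed-to-unsigned conversion error listed above (again, not a contributed test case):

#include <stdio.h>
#include <string.h>

/* A negative length, e.g. an error code reused as a size, becomes a
 * huge size_t when passed to memcpy. */
int main(void) {
    char src[8] = "data";
    char dst[8];
    int len = -1;           /* signed value standing in for a length */

    memcpy(dst, src, len);  /* BAD: -1 converts to SIZE_MAX */
    printf("%s\n", dst);
    return 0;
}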

Page 24:

CWE Test Coverage (cont.)

Time and State
  (RACE) Race Conditions
    SIGNAL - Signal handler race condition (1 Secure Software)
    Race condition in switch (1 Secure Software)
    TOCTOU - Time-of-check Time-of-use race condition (2 Fortify)
Code Quality
  (RES) Resource Management Errors
    MEMLEAK - Memory leak (6 Fortify, 8 Klocwork)
    Double Free (2 Fortify, 1 Secure Software)
    Use After Free (10 Klocwork, 1 Secure Software)
  (INIT) Initialization and Cleanup Errors
    Uninitialized variable (8 Klocwork)
  Pointer Issues
    Illegal Pointer Value (15 Klocwork)
Byte/Object Code
Motivation/Intent
Time of Introduction

Page 25:

A Sample Test Case (derived from MIT contribution)

CWE = Code.Source Code.Range Errors.Buffer Errors.Over.Stack Overflow

index complexity = constant, secondary control flow = if, loop structure = non-standard for, scope = inter-procedural, local control flow = function pointer

void function1(char *buf)
{
    /* BAD: writes one element past the end of the 10-byte buffer */
    buf[10] = 'A';
}

int main(int argc, char *argv[])
{
    void (*fptr)(char *);
    int test_value;
    int inc_value;
    int loop_counter;
    char buf[10];

    test_value = 10;
    inc_value = 10 - (10 - 1);   /* evaluates to 1 */

    /* non-standard for loop: the exit condition is a secondary if/break */
    for (loop_counter = 0; ; loop_counter += inc_value) {
        if (loop_counter > test_value)
            break;
        fptr = function1;   /* local control flow via function pointer */
        fptr(buf);
    }
    return 0;
}
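
Note how the out-of-bounds write in function1 is reachable only through the function pointer and the non-standard loop: a scanner must resolve the inter-procedural call and the control flow to flag the write to buf[10].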

Page 26:

Opportunities for Collaboration between NKU and SAMATE

• Static Analysis Summit, June 29, 2006 at NIST: http://samate.nist.gov/index.php/SAS
  – What is possible with today's techniques?
  – What is NOT possible today?
  – Where are the gaps that further research might fill?
  – What is the minimum performance bar for a source code analyzer?
  – Vetting of the draft SAMATE Source Code Analysis Specification

• Contributions to and use of the SRD: http://samate.nist.gov/SRD
  – Test cases are needed to "fill in the coverage cube"
  – Studies/papers done using the SRD content

Page 27:

Contact

• Paul Black, SAMATE project leader: [email protected]

Page 28:

References

• K. Kratkiewicz and R. Lippmann, "A Taxonomy of Buffer Overflows for Evaluating Static and Dynamic Software Testing Tools", ASE 2005