Outline
1. Risk management
2. Standards on Evaluating Secure Systems
3. Security Analysis using Security Metrics
Reading
• This lecture: McGraw, Chapter 2
– Cigital Risk Management Framework
– Security Metrics
• Our paper
Security Protection
[Charts: Percentage of Organizations Using ROI, NPV, or IRR Metrics; Percentage of IT Budget Spent on Security]
Source: CSI/FBI Computer Crime and Security Survey, Computer Security Institute
Real Cost of Cyber Attack
• Damage to the target may not reflect the real amount of damage
• Other services may rely on the attacked service, causing cascading and escalating damage
• Need: support for decision makers to
– Evaluate the risk and consequences of cyber attacks
– Support methods to prevent, deter, and mitigate the consequences of attacks
System Security Engineering (Traditional View)
Specify System Architecture → Identify Threats, Vulnerabilities, Attacks → Estimate Risk → Prioritize Vulnerabilities → Identify and Install Safeguards
(loop until risk is acceptably low)
Risk Management Framework (Business Context)
Understand Business Context → Identify Business and Technical Risks → Synthesize and Rank Risks → Define Risk Mitigation Strategy → Carry Out Fixes and Validate
(Measurement and Reporting runs across all stages)
Understand the Business Context
• “Who cares?”
• Identify business goals, priorities, and circumstances, e.g.,
– Increasing revenue
– Meeting service-level agreements
– Reducing development cost
– Generating a high return on investment
• Identify the software risks to consider
Identify Business and Technical Risks
• “Why should the business care?”
• Business risk
– Direct threat
– Indirect threat
– Consequences: financial loss, loss of reputation, violation of customer or regulatory constraints, liability
• Technical risk
– Runs counter to the planned design and implementation
– Consequences: unexpected system calls, avoidance of controls (audit), unauthorized data access, needless rework of artifacts
• Tie technical risks to the business context in a meaningful way
Synthesize and Rank the Risks
• “What should be done first?”
• Prioritize identified risks based on business goals
• Allocate resources
• Risk metrics:
– Risk likelihood
– Risk impact
– Risk severity
– Number of emerging risks
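As a sketch, the ranking can reduce the first two metrics to a single severity value (severity = likelihood × impact) and sort by it. The risk names and numbers below are illustrative assumptions, not data from the lecture.

```python
# Sketch: rank risks by severity = likelihood * impact.
# All risk entries here are illustrative examples.
risks = [
    {"name": "SQL injection in billing app", "likelihood": 0.6, "impact": 9},
    {"name": "Unpatched mail server", "likelihood": 0.3, "impact": 7},
    {"name": "Weak password policy", "likelihood": 0.8, "impact": 4},
]

for r in risks:
    r["severity"] = r["likelihood"] * r["impact"]  # simple severity model

# Highest-severity risks first; these get resources first.
ranked = sorted(risks, key=lambda r: r["severity"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: severity={r["severity"]:.1f}')
```

Severity here is one possible metric; a real prioritization would also weigh the business goals identified earlier.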
Define the Risk Mitigation Strategy
• “How do we mitigate risks?”
• Consider available technology and resources
• Constrained by the business context: what the organization can afford, integrate, and understand
• Need validation techniques
Carry Out Fixes and Validate
• Perform the actions defined in the previous stage
• Measure “completeness” against the risk mitigation strategy
– Progress against risks
– Remaining risks
– Assurance of mechanisms
• Testing: measure the effectiveness of risk mitigation activities
Measuring and Reporting
• Continuous and consistent identification and storage of risk information over time
• Maintain risk information at all stages of risk management
• Establish measurements, e.g., number of risks, severity of risks, cost of mitigation, etc.
Outline
1. Risk management
2. Standards on Evaluating Secure Systems
3. Security Analysis using Security Metrics
Standards on Evaluating Secure Systems
• Trusted Computer System Evaluation Criteria (TCSEC), also known as the “Orange Book”
• Common Criteria
National Computer Security Center
• 1981: The National Computer Security Center (NCSC) was established within the NSA
– To provide technical support and reference for government agencies
– To define a set of criteria for the evaluation and assessment of security
– To encourage and perform research in the field of security
– To develop verification and testing tools
– To increase security awareness in both the federal and private sectors
• 1985: Trusted Computer System Evaluation Criteria (TCSEC), the Orange Book
Orange Book
• Orange Book objectives
– Guide what security features to build into new products
– Provide measurements to evaluate the security of systems
– Serve as a basis for specifying security requirements
• Security features and assurances
• Trusted Computing Base (TCB): the security components of the system (hardware, software, and firmware) plus the reference monitor
Orange Book
• It supplies
– Users: evaluation metrics to assess the reliability of the security system for protecting classified or sensitive information, whether in a commercial product or an internally developed system
– Developers/vendors: a design guide showing the security features to be included in commercial systems
– Designers: a guide for the specification of security requirements
Orange Book
• A set of criteria and requirements
• Three main categories:
– Security policy: the protection level offered by the system
– Accountability: of the users and user operations
– Assurance: of the reliability of the system
Security Policy
• Concerns the definition of the policy regulating users’ access to information
– Discretionary Access Control
– Mandatory Access Control
– Labels: for objects and subjects
– Reuse of objects: basic storage elements must be cleaned before being released to a new user
Accountability
• Identification/authentication
• Audit
• Trusted path: ensures no users are attempting to access the system fraudulently
Assurance
• Reliable hardware/software/firmware components that can be evaluated separately
• Operational reliability
• Development reliability
Operational reliability
• During system operation
– System architecture: the TCB is isolated from user processes; the security kernel is isolated from non-security-critical portions of the TCB
– System integrity: correct operation (using diagnostic software)
– Covert channel analysis
– Trusted facility management: separation of duties
– Trusted recovery: recover security features after TCB failures
Development reliability
• The system must be reliable during the development process; formal methods are used.
– System testing: security features are tested and verified
– Design specification and verification: correct design and implementation with respect to the security policy; TCB formal specifications are proved
– Configuration management: configuration of the system components and their documentation
– Trusted distribution: no unauthorized modifications
Documentation
• Defined set of documents
• Minimal set:
– Trusted facility manual
– Security features user’s guide
– Test documentation
– Design documentation
– Personnel info: operators, users, developers, maintainers
Orange Book Levels
• Highest security
– A1: Verified Protection
– B3: Security Domains
– B2: Structured Protection
– B1: Labeled Security Protection
– C2: Controlled Access Protection
– C1: Discretionary Security Protection
– D: Minimal Protection
• No security
Common Criteria
• January 1996: Common Criteria
– Joint work with Canada and Europe
– Separates functionality from assurance
– Nine classes of functionality: audit, communications, user data protection, identification and authentication, privacy, protection of trusted functions, resource utilization, establishing user sessions, and trusted path
– Seven classes of assurance: configuration management, delivery and operation, development, guidance documents, life-cycle support, tests, and vulnerability assessment
Common Criteria
• Evaluation Assurance Levels (EAL)
– EAL1: functionally tested
– EAL2: structurally tested
– EAL3: methodically tested and checked
– EAL4: methodically designed, tested, and reviewed
– EAL5: semi-formally designed and tested
– EAL6: semi-formally verified design and tested
– EAL7: formally verified design and tested
Outline
1. Risk management
2. Standards on Evaluating Secure Systems
3. Security Analysis using Security Metrics
Introduction
• How can we quantitatively measure and demonstrate the amount of security in a computer or a network?
– Meaningful security metrics for networked (e.g., enterprise) systems are significantly more difficult to define, analyze, compose, and use intelligently.
• Challenges
– What security metrics are meaningful and useful?
– How to collect security metrics? What to measure?
– How to compose enterprise-level security metrics?
– How to present the security metrics in a clear manner?
System Architecture
• Develop a toolkit covering security metrics collection, security metrics analysis, and visualization of security metrics.
Step 1: Identify Security Metrics
• Summarize existing security metrics and identify new ones.
– Collect existing security metrics: financial metrics, application security, configuration management, network management, asset management, etc.
– Identify new security metrics:
• Patch risk
• Security score
• Criticality
• Time series
Patch Risk
• Applying patches to fix vulnerabilities in applications carries a risk of its own.
– When an operating system is patched, the software may or may not function properly from that point forward.
– The patches themselves may contain vulnerabilities that require patching.
– The patch risk of a patch may be derived from
• the trustworthiness of its provider, and
• how long ago the patch was released and verified.
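As a sketch, patch risk might be modeled from exactly the two factors above: provider trustworthiness and patch age. The exponential-decay form and all parameters below are illustrative assumptions, not a formula from the lecture.

```python
import math

# Hypothetical patch-risk model (an assumption for illustration):
# risk shrinks as provider trust rises and as the patch ages in the
# field, since longer exposure means more verification.
def patch_risk(provider_trust, days_since_release, half_life_days=30):
    """provider_trust in [0, 1]; returns a risk value in [0, 1]."""
    # Each half_life_days of field exposure halves the residual risk.
    decay = math.exp(-math.log(2) * days_since_release / half_life_days)
    return (1.0 - provider_trust) * decay

# A fresh patch from a moderately trusted provider scores riskier than
# a 30-day-old patch from a highly trusted one.
print(patch_risk(0.5, 0))    # 0.5
print(patch_risk(0.9, 30))
```

The half-life constant would have to be calibrated against observed patch-failure data.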
Security Score
• The security score provides an explicit number for evaluating the security of a computer or a network.
• Three types/levels of security scores:
– Security score for an individual vulnerability (e.g., CVSS score**)
– Security score for one computer with multiple vulnerabilities
– Security score for a network with multiple computers

** A “one-shot” security score may not be meaningful or useful for mission-awareness situations, in which different missions rely differently on the available services and applications.
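For the second level, one common aggregation, assuming independent vulnerabilities, is to treat each per-vulnerability score as a compromise probability and compute the chance that at least one exploit succeeds. The linear score-to-probability mapping below is an illustrative assumption, not the toolkit's actual method.

```python
# Sketch: aggregate per-vulnerability CVSS scores (0-10) into a
# per-computer score, assuming the vulnerabilities are independent.
def host_compromise_prob(cvss_scores):
    """Probability that at least one vulnerability is exploited."""
    p_safe = 1.0
    for s in cvss_scores:
        # Assumption: map CVSS score linearly to an exploit probability.
        p_safe *= 1.0 - s / 10.0
    return 1.0 - p_safe

# A host with CVSS 6.3 and 5.0 vulnerabilities:
print(round(host_compromise_prob([6.3, 5.0]), 3))
```

The independence assumption is exactly what breaks down for correlated vulnerabilities, as discussed in the limitations at the end of the lecture.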
Criticality
• Criticality is a combined metric that evaluates the importance of one computer in the network.
• It depends on
– Location (intranet, DMZ, internet)
– Service (HTTP, FTP, SSH)
– Role (firewall, desktop, router)
– Asset (database, financial files)
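A minimal sketch of criticality as a weighted combination of the four factors above; all weights and ratings below are illustrative assumptions, not values from the lecture.

```python
# Sketch: criticality as a weighted sum over the four factors.
# Weights and 0-10 ratings are assumptions for illustration.
FACTOR_WEIGHTS = {"location": 0.3, "service": 0.2, "role": 0.2, "asset": 0.3}

RATINGS = {
    "location": {"internet": 9, "dmz": 7, "intranet": 4},
    "service": {"http": 6, "ftp": 5, "ssh": 7},
    "role": {"firewall": 9, "router": 8, "desktop": 3},
    "asset": {"database": 9, "financial files": 10, "none": 1},
}

def criticality(host):
    """Weighted combination of the host's factor ratings."""
    return sum(FACTOR_WEIGHTS[f] * RATINGS[f][host[f]] for f in FACTOR_WEIGHTS)

db_server = {"location": "dmz", "service": "http",
             "role": "desktop", "asset": "database"}
print(round(criticality(db_server), 2))
```

In practice the weights could themselves be derived with AHP, like the vulnerability-score weights later in the lecture.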
Time Series
• A time series shows the changes in security over a period of time.
– It tells whether the security of a computer has improved, or has fallen below a pre-determined threshold.
• Security changes can be triggered by many factors:
– Vulnerability changes over time (CVSS Temporal Metrics)
– Network configuration changes
– Security training
– Financial problems
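A small sketch of threshold and trend checks over a score time series, assuming higher scores mean better security (as the "falls below a threshold" wording suggests); the threshold and scores are illustrative.

```python
# Sketch: flag measurements where a computer's security score falls
# below a pre-determined threshold or drops relative to the previous
# measurement. Higher score = more secure (an assumption here).
THRESHOLD = 5.0

def alerts(scores, threshold=THRESHOLD):
    out = []
    for i, s in enumerate(scores):
        if s < threshold:
            out.append((i, "below threshold"))
        elif i > 0 and s < scores[i - 1]:
            out.append((i, "score dropped"))
    return out

weekly_scores = [7.1, 6.8, 6.9, 4.9, 5.6]  # illustrative measurements
print(alerts(weekly_scores))  # [(1, 'score dropped'), (3, 'below threshold')]
```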
Step 2: Collect Security Metrics
• We focus on automatically collecting security metrics about vulnerabilities and network reachability.
– Scan computers for vulnerabilities using the Nessus scanner.
– Obtain vulnerability scores based on NVD/CVSS.
– Obtain firewall rules, network configuration files, and network topology to derive network reachability information.
NVD/CVSS
• http://nvd.nist.gov/download.cfm#CVE_FEED
• It provides an XML database of CVSS scores and vectors.
XML Parser for CVSS Dataset
• Raw CVSS vulnerability records are in XML format.
• We develop an XML parser to extract security metrics from the database and save them in a Vulnerability Scoring and Description Table (VSDT). Example entry:

Identifier   CVE-2009-0022
Score        6.3
Severity     Medium
Vector       (AV:N/AC:M/Au:S/C:C/I:N/A:N)
Vuln_Types   Conf
Range        Network
Vuln_Soft    Samba 3.2.0, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.2.6
Description  Samba 3.2.0 through 3.2.6, when registry shares are enabled, allows remote authenticated users to access the root file system via a crafted connection request that specifies a blank share name.
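A minimal version of such a parser is sketched below. The simplified entry uses illustrative tag names, not the exact NVD feed schema, which is considerably more complex.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for one NVD record; tag names are illustrative.
RAW = """
<entry id="CVE-2009-0022">
  <score>6.3</score>
  <severity>Medium</severity>
  <vector>(AV:N/AC:M/Au:S/C:C/I:N/A:N)</vector>
  <range>Network</range>
</entry>
"""

def parse_entry(xml_text):
    """Extract VSDT-style fields from one vulnerability entry."""
    root = ET.fromstring(xml_text)
    return {
        "identifier": root.get("id"),
        "score": float(root.findtext("score")),
        "severity": root.findtext("severity"),
        "vector": root.findtext("vector"),
        "range": root.findtext("range"),
    }

record = parse_entry(RAW)
print(record["identifier"], record["score"])
```

A real parser would iterate over all `entry` elements in the feed and handle the CVSS namespace prefixes used by NVD.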
Network Reachability
• Network reachability captures the interactions among all attack possibilities in a network, so it has a direct impact on the security scores of interdependent computers.
• It consists of three components:
– Network topology
• Import the network topology from the OPNET network design software.
• The JANASSURE tool by IAI can automatically obtain network topology information.
– Router configuration
• We can import router configuration files from Cisco routers.
– Firewall rules
• We can import firewall rules from Cisco routers.
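At its simplest, reachability can be derived by treating each permitted flow as a directed edge and taking the transitive closure. The flat (src, dst) rule format below is an illustrative simplification of real router and firewall configurations.

```python
from collections import defaultdict, deque

# Sketch: derive pairwise reachability from a flat list of allow rules.
# The zone names and rule format are illustrative assumptions.
allow_rules = [("internet", "dmz_web"),
               ("dmz_web", "app_server"),
               ("app_server", "db_server")]

graph = defaultdict(list)
for src, dst in allow_rules:
    graph[src].append(dst)  # permitted flow = directed edge

def reachable_from(start):
    """All nodes an attacker at `start` can eventually reach (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable_from("internet")))
```

Here the internet can ultimately reach the database server through two hops, which is exactly the kind of interdependence that per-host scores miss.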
Composing Security Score
• Use AHP (Analytic Hierarchy Process) to decide the weights for exploitability (access vector, access complexity, authentication) and impact (confidentiality, integrity, availability).

Vulnerability Score (1.0)
– Exploitability (0.4)
• Access Vector (0.1333)
• Access Complexity (0.1333)
• Authentication (0.1333)
– Impact (0.6)
• Confidentiality (0.2)
• Integrity (0.2)
• Availability (0.2)
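With the weights above, the composed vulnerability score is a weighted sum of the six sub-metrics. This sketch assumes each sub-metric has already been normalized to a 0-10 value (that mapping is not shown here, and the example values are illustrative).

```python
# Sketch: weighted sum using the AHP weights from the hierarchy above.
WEIGHTS = {
    "access_vector": 0.4 / 3,      # exploitability (0.4) split evenly
    "access_complexity": 0.4 / 3,
    "authentication": 0.4 / 3,
    "confidentiality": 0.2,        # impact (0.6) split evenly
    "integrity": 0.2,
    "availability": 0.2,
}

def vulnerability_score(metrics):
    """Weighted sum of normalized (0-10) sub-metrics; weights sum to 1."""
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

# Illustrative sub-metric values for one vulnerability.
example = {
    "access_vector": 10, "access_complexity": 6, "authentication": 8,
    "confidentiality": 10, "integrity": 0, "availability": 0,
}
print(round(vulnerability_score(example), 2))
```

AHP's contribution is in deriving the weights from pairwise comparisons; once fixed, composing the score is just this weighted sum.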
Step 4: Visualize Security Metrics
• Use an example bank system to demonstrate the security metrics.
• Dashboards for the whole network and for individual computers.
Limitations: Security Score for Correlated Vulnerabilities
• In Phase I we assume vulnerabilities are independent of each other, which may not be true in the real world.
• What if the vulnerabilities on one computer are correlated with each other?
– E.g., a user who installs application A (with vulnerability v1) always installs application B (with vulnerability v2).
– How do we obtain this correlation information?
– How do we take the correlation into account when calculating the security score?
Limitations: Security Score for Network/Subnet
• The sum, average, max, or min of the scores of the individual computers is not good enough.
• Combine vulnerability dependency information and network reachability information to measure the security of a network.
– Assume we know the reachability information of the network from firewall rules and network configuration.
– From NVD, we know that one vulnerability may lead to or facilitate another vulnerability.
• In simple cases, assume all the vulnerabilities on one computer are independent of all the vulnerabilities on all other computers.
• If vulnerability Va on computer A is a prerequisite for vulnerability Vb on computer B, how do we change the score?
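One simple answer to that last question is sketched below with exploitation probabilities derived from the scores: the attacker must exploit Va first, so Vb's effective probability is discounted by Va's, and drops to zero if A cannot reach B. The multiplicative rule and the probabilities are assumptions for illustration, not the lecture's solution.

```python
# Sketch: adjust Vb's contribution when Va on computer A is a
# prerequisite. p_* are exploitation probabilities derived from the
# vulnerability scores (the derivation is assumed, not shown).
def effective_prob(p_vb, p_va=None, reachable=True):
    """Effective exploitation probability of Vb given its prerequisite."""
    if p_va is None:      # Vb has no prerequisite
        return p_vb
    if not reachable:     # attacker cannot get from A to B
        return 0.0
    return p_va * p_vb    # must exploit Va first, then Vb

print(effective_prob(0.8, p_va=0.5, reachable=True))  # 0.4
```

This treats the Va-then-Vb chain as two independent steps; real attack-graph models refine this with conditional probabilities.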