Data Security Methods: A Risk Perspective
Pankaj Rohatgi, IBM T. J. Watson Research Center
Goal
• Understand the rationale behind conventional data security concepts and methods from a risk management perspective.
  – Rationale and limits of existing data security mechanisms.
• How to move towards a more quantitative approach to data security risk.
  – Discussion.
Traditional Security Goals from a Risk Perspective
• Confidentiality
  – Prevent (minimize/mitigate risk of) damage from unauthorized information disclosure.
• Integrity
  – Prevent (minimize/mitigate risk of) deliberate corruption of information and entities.
    • Damage from use of low-integrity data.
    • Damage from corrupted entities (programs, systems, users) misusing privileges.
    • Programs and configs are data.
• Availability
  – Prevent (minimize/mitigate risk of) damage from attacks that make key systems/information unavailable for legitimate purposes.
• Privacy (beyond confidentiality)
  – Information should only be used for specific purposes, under specific conditions/contexts, with specific obligations.
Basic Security Risk Management: Access Control
A C t l P li S ifi h i ll d t d• Access Control Policy: Specifies who is allowed to do what with each “Protected” resource.– Subject: Who – e.g., a Program, system, human.
Obj t th fil k t d t b t bl– Object: the resource: file, socket, database table.. – Permission: action – read, append write, execute…
• Auditability/Accountability R d f h did h t– Record of who did what
• Principle of Least Privilege– Each entity has exactly the Privileges needed to do their job and
no moreno more.• Primitive form of risk management
– Extra rights Prob. damage > 0, Benefit =0.– Fewer rights opportunity loss.
H d t li• Hard to realize.• Ideally, access control Policy should encode the result of
a risk/benefit analysis. Even harder to realize– Even harder to realize
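The (subject, object, permission) model above can be sketched in a few lines. This is a hypothetical illustration – the object names, subjects, and permission strings are made up – showing an explicit-grant policy (least privilege: anything not granted is denied) with an audit record of every decision:

```python
# Hypothetical policy: each protected object maps to the permissions
# each subject holds on it. Anything not explicitly granted is denied.
policy = {
    "payroll.db": {"alice": {"read", "append"}, "payroll_app": {"read", "write"}},
    "config.yml": {"admin": {"read", "write"}},
}

audit_log = []  # auditability: record of who did what

def check_access(subject: str, obj: str, permission: str) -> bool:
    """Allow only what the policy explicitly grants (least privilege)."""
    allowed = permission in policy.get(obj, {}).get(subject, set())
    audit_log.append((subject, obj, permission, allowed))
    return allowed

print(check_access("alice", "payroll.db", "read"))   # True: granted
print(check_access("alice", "payroll.db", "write"))  # False: not granted
```

Note that the decision is a fixed yes/no encoded in the policy table – exactly the binary trade-off the later slides on risk-adaptive access control argue against.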
Ideal Access Control
[Figure: an authenticated subject’s request (subject, action) passes through a reference monitor, which consults the access control policy before granting access to the protected resource/object; authentication and all accesses are audited.]
• Authentication
  – Determines the “subject”.
• Reference Monitor
  – Interposes between all possible accesses to all protected resources/objects to implement the access control policy.
  – Self-protecting, trusted component.
Classic Access Control Policy Models
• ACLs/Capabilities
• Role-Based Access Control
• Multi-Level Security
  – Bell-LaPadula
  – Biba
• Clark-Wilson
ACLs/Capabilities
• Complex, error-prone to manage.
• Reasoning behind assignments lost.
[Figure: an access control matrix with subjects/subject groups S1–S10 as rows and objects/object groups O1–O10 as columns; each cell holds permissions such as r, rw, rx, ra, rwx. A row of the matrix is a subject’s capability list; a column is an object’s ACL.]
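The relationship between the two representations can be made concrete. In this illustrative sketch (subjects S1/S2 and objects O1/O2 are made up), the same sparse access matrix yields an object’s ACL by reading a column and a subject’s capability list by reading a row:

```python
# Sparse access matrix: (subject, object) -> set of permissions.
matrix = {
    ("S1", "O1"): {"r"},
    ("S1", "O2"): {"r", "w"},
    ("S2", "O2"): {"r"},
}

def acl(obj: str) -> dict:
    """Column view: the ACL stored with the object."""
    return {s: perms for (s, o), perms in matrix.items() if o == obj}

def capabilities(subj: str) -> dict:
    """Row view: the capability list held by the subject."""
    return {o: perms for (s, o), perms in matrix.items() if s == subj}

print(acl("O2"))           # who can touch O2, and how
print(capabilities("S1"))  # everything S1 can do
```

Either view answers access questions; the slide’s point is that neither records *why* a cell was granted, so the matrix drifts from the intent behind it.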
Role-Based Access Control
• Natural grouping of sets of subjects into roles.
  – Users who perform similar functions.
    • Loan officer, teller, branch manager, etc.
• Roles assigned the set of privileges necessary to perform their job.
  – Least privilege.
• Simplified management – reduced errors.
[Figure: users are grouped into roles; roles are assigned sets of permissions.]
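A minimal RBAC sketch along these lines (the role and permission names are illustrative, echoing the banking example): a user’s effective permissions are the union of the permissions of their roles, so management happens per role instead of per user.

```python
# Roles are assigned sets of permissions (least privilege per function).
role_permissions = {
    "teller":         {"account:read", "account:deposit"},
    "loan_officer":   {"account:read", "loan:approve"},
    "branch_manager": {"account:read", "loan:approve", "account:override"},
}

# Users are grouped into roles, not granted permissions directly.
user_roles = {"bob": {"teller"}, "carol": {"teller", "loan_officer"}}

def permitted(user: str, permission: str) -> bool:
    """A user holds exactly the union of their roles' permissions."""
    return any(permission in role_permissions[role]
               for role in user_roles.get(user, set()))

assert permitted("carol", "loan:approve")      # via loan_officer role
assert not permitted("bob", "loan:approve")    # teller lacks it
```

Changing what a teller may do is one edit to `role_permissions["teller"]` rather than an edit per user – the “simplified management, reduced errors” claim on the slide.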
Multi-Level Security: “Quantized” Risk Management
• Used almost exclusively in the govt/defense industry.
  – Concepts pre-date the advent of computers.
  – Bell-LaPadula model for confidentiality, Biba for integrity.
• Bell-LaPadula model for confidentiality
  – Rules based on managing risks of information disclosure.
  – All objects and subjects tagged with secrecy labels.
    • Secrecy labels <Level, Category Set> form a lattice.
• Motivation for the rules
  – Consider three sources of information disclosure risk:
    • Deliberate leakage due to (human) temptation: sensitive information provided to subjects who are not trustworthy enough to protect it.
    • Inadvertent leakage by a subject – by mistake, under duress, or via a bug in software.
    • Software/hardware used by humans to access sensitive data may have bugs/Trojans and leak information.
Bell-LaPadula: Managing Risk of Deliberate Leakage
• Secrecy level
  – Objects: quantization of the potential damage if the object gets leaked.
    • Top Secret: exceptionally grave damage
    • Secret: serious damage
    • Confidential: significant damage
    • Unclassified: no damage
  – Subject clearance: assessment of the subject’s trustworthiness to protect information.
    • Top Secret, Secret, Confidential, None.
• Rule 1: For a subject to read an object
  – Subject clearance level ≥ Object secrecy level.
  – Don’t take the risk of disclosing sensitive information to subjects who are not trustworthy enough to protect it.
Bell-LaPadula: Risk of Inadvertent Disclosure
• Information compartmentalized into categories– e.g., CNWDI (nuclear weapon design)g , ( p g )
• Object belongs to a set of categories based on provenance and information within it.p
• Subject admitted to a set of categories based on needs for their job.
• Rule 2: For a subject to read an object– Subject categories Object categories
• Accept inadvertent disclosure risk only if the subjects needs the information for their job.
Bell-LaPadula: Managing Risk from SW/HW
• SW/HW that reads information at a particular secrecy level and category set cannot write any information at a lower secrecy level or a reduced set of categories.
• For SW/HW writing an object:
  – Subject level ≤ Object level
  – Subject categories ⊆ Object categories
Bell-LaPadula: Secrecy Labels, Information Flow and Confinement
• SECRECY LATTICE: Label A ≤ Label B iff Level(A) ≤ Level(B) and Cat(A) ⊆ Cat(B).
• Example labels:
  – Levels: 0 (Unclassified), 1 (Confidential), 2 (Secret).
  – Categories: N (Nuclear Weapon Technology), M (Missile Technology).
[Figure: the lattice of labels from <0, {}> up to <2, {N, M}>; a regular program labeled <1, {N}> reads and writes at <1, {N}>, with only limited blind writes permitted upward.]
1. The “READ DOWN” rule limits the secrets given to a program/human user.
2. The “WRITE UP” rule prevents a program from leaking its secrets!
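The lattice ordering and the read-down/write-up rules fit in a short sketch. Assumptions: levels 0–2 and categories "N"/"M" as in the example above; `Label` and the function names are illustrative.

```python
from typing import FrozenSet, NamedTuple

class Label(NamedTuple):
    level: int                 # 0=Unclassified, 1=Confidential, 2=Secret
    cats: FrozenSet[str]       # e.g. {"N", "M"}

def dominates(a: Label, b: Label) -> bool:
    """a >= b in the lattice: Level(b) <= Level(a) and Cat(b) subset of Cat(a)."""
    return a.level >= b.level and a.cats >= b.cats

def may_read(subject: Label, obj: Label) -> bool:   # "read down"
    return dominates(subject, obj)

def may_write(subject: Label, obj: Label) -> bool:  # "write up"
    return dominates(obj, subject)

prog = Label(1, frozenset({"N"}))
assert may_read(prog, Label(1, frozenset({"N"})))        # same label: OK
assert not may_read(prog, Label(2, frozenset({"N"})))    # read up: denied
assert may_write(prog, Label(2, frozenset({"N", "M"})))  # blind write up: OK
assert not may_write(prog, Label(0, frozenset()))        # write down: denied
```

Because labels with incomparable category sets dominate neither way, such a program can neither read nor write across compartments – the confinement the slide illustrates.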
Biba: Integrity Model
• Data objects have integrity levels.
  – Degree of correctness, non-maliciousness, non-corruption.
• Subjects (software) have integrity levels.
  – Required level of protection against corruption.
  – Degree of damage possible if the subject gets corrupted or works incorrectly.
• Rules
  – Read: Subject integrity level ≤ Object integrity level.
    • Prevent a critical program from being corrupted and causing damage due to bad/malicious input data.
  – Write: Subject integrity level ≥ Object integrity level.
    • Prevent critical data from being corrupted by possibly less-protected programs.
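The Biba rules are the dual of the Bell-LaPadula direction of flow; a minimal sketch with illustrative numeric integrity levels (higher = more trusted/critical):

```python
def biba_may_read(subject_int: int, object_int: int) -> bool:
    # Read: only from data at least as high-integrity as the subject,
    # so a critical program is never fed low-integrity input.
    return subject_int <= object_int

def biba_may_write(subject_int: int, object_int: int) -> bool:
    # Write: only to objects no more critical than the subject,
    # so low-integrity code cannot corrupt critical data.
    return subject_int >= object_int

assert biba_may_read(1, 2)      # ordinary program reading vetted data: OK
assert not biba_may_read(2, 1)  # critical program reading dirty data: denied
assert biba_may_write(2, 1)     # critical program writing down: OK
assert not biba_may_write(1, 2) # ordinary program writing critical data: denied
```

As the “Issues with MLS” slide notes, the hard part in practice is not these checks but giving the numeric levels any real semantics.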
MLS and High Assurance, Risk Mgmt
• High assurance is required to handle MLS-labeled data at different secrecy levels.
• Systems must be evaluated/certified at EAL5+ (or Orange Book B1 or higher).
  – Rigorous process to validate strength of protection.
• (Risk-based) rules on the assurance a system needs to handle a range of levels.
  – The higher the assurance, the higher the permitted range.
  – Yellow Book.
• A higher range of data handled means higher potential for damage if compromised, so it is reserved only for more trusted systems.

Common Criteria Evaluation Assurance Levels
• EAL1 – Functionally Tested
• EAL2 – Structurally Tested
• EAL3 – Methodically Tested & Checked
• EAL4 – Methodically Designed, Tested and Reviewed
  – EAL4+: AIX, WinXP, Solaris, Linux
• EAL5 – Semi-formally Designed & Tested (IBM PR/SM)
• EAL6 – Semi-formally Verified Design & Tested
• EAL7 – Formally Verified Design & Tested
Issues with MLS
• Seen as restrictive, static, onerous and inflexible; hinders critical information sharing.
  – Requires (possibly human-assisted) creation, maintenance, and binding of security labels to all systems and information throughout their lifecycle.
  – Coarseness of levels/categories.
  – Inflexible access control decisions.
  – The no-write-down rule causes information overclassification.
• Very few systems have the assurance needed to implement the strict lattice-based information flow policy.
  – In practice enforced by creating air-gapped information silos.
    • E.g., SIPRNET, NIPRNET.
  – Complicated procedures and limited guard programs to move information.
• Inflexibility leads to users subverting the spirit of MLS to get their job done.
  – “Overall security could increase by using a less secure but more flexible system” [JASON Program Office, MITRE]
• Difficult to define meaningful Biba integrity levels.
  – Very little semantics in a numeric level.
Fuzzy MLS: Risk-Adaptive Access Control
• Traditional access control dilemma:
  – To allow: risk, but the job gets done.
  – To deny: no risk, but the job is not done.
  – Policy: a fixed trade-off – an inflexible binary decision, a static risk posture.
• Risk-adaptive access control: based on a quantified estimate of risk, decide to allow, to deny, OR to allow with additional risk mitigation according to the quantified risk estimate.
  – Non-binary decisions: many bands of risk.
  – Bands are adjustable according to risk tolerance: a DYNAMIC trade-off between risk/liability and business objective.
[Figure: a risk scale from Safe/Unworkable up to Unsafe/Lawless; the estimated risk from an access falls somewhere on this scale. A hard boundary marks DENY at the high end and a soft boundary marks ALLOW at the low end, with ALLOW WITH RISK MITIGATION bands in between, positioned by the current risk tolerance.]
• Fuzzy MLS: first case study of risk-adaptive access control, obtained by relaxing the MLS model.
Fuzzy MLS: Quantified Risk Estimate
• Risk = (estimated value of loss) × (estimated probability that the loss is incurred)
  – The value of loss corresponds to the MLS sensitivity level of the information.
• Probability for one risk-contributing factor – example: MLS clearance (cl) and sensitivity (sl).
  – Put relative, qualitative risk comparisons into a formula: risk indices quantify the gap between user trustworthiness and information value/sensitivity.
    • Risk Index = a (sl – cl) / (M – sl)
  – Assign probabilities to indices via a sigmoid function:
    • probability = 1 / (1 + e^(–k (RiskIndex – mid)))
  – Smooths out the MLS step function: from a binary decision to multiple decisions.
  – Can Joe be trusted with this $10M info? How likely is he to leak it?
[Figure: loss probability vs. risk index – MLS is a step function jumping from 0 to 1; Fuzzy MLS is a sigmoid passing through 0.5 at the mid index.]
• Multiple factors contribute to the overall risk:
  – Define and compute an index and probability for each factor.
  – Compute the joint probability: ongoing research and experiments.
  – Math and heuristic models to compute the joint probability.
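A sketch of the clearance/sensitivity factor with the parameters used in the assessment table below (M=11, a=10, k=3, mid=4). Note the sign of the exponent: probability must grow with the risk index, so the exponent is −k(RiskIndex − mid); this is an inference from the table values, and the table’s hard “1” entries (sl − cl ≥ 2) come from an additional cutoff that this sketch does not model.

```python
import math

M, a, k, mid = 11, 10, 3, 4  # parameters from the slide

def risk_index(sl: int, cl: int) -> float:
    """Gap between object sensitivity (sl) and subject clearance (cl)."""
    return a * (sl - cl) / (M - sl)

def disclosure_probability(sl: int, cl: int) -> float:
    """Sigmoid mapping of the risk index to a disclosure probability."""
    return 1.0 / (1.0 + math.exp(-k * (risk_index(sl, cl) - mid)))

print(round(disclosure_probability(9, 8), 2))  # 0.95, as in the table
print(disclosure_probability(2, 1))            # ~1.7e-4, as in the table
```

Unlike the MLS step function, a one-level clearance shortfall now yields a graded answer: negligible at low sensitivity (sl=2), near-certain at high sensitivity (sl=9).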
Disclosure Probability Assessment: Traditional MLS
(rows: SL = object sensitivity level; columns: subject clearance level)

SL \ CL   1   2   3   4   5   6   7   8   9  10
   1      0   0   0   0   0   0   0   0   0   0
   2      1   0   0   0   0   0   0   0   0   0
   3      1   1   0   0   0   0   0   0   0   0
   4      1   1   1   0   0   0   0   0   0   0
   5      1   1   1   1   0   0   0   0   0   0
   6      1   1   1   1   1   0   0   0   0   0
   7      1   1   1   1   1   1   0   0   0   0
   8      1   1   1   1   1   1   1   0   0   0
   9      1   1   1   1   1   1   1   1   0   0
  10      1   1   1   1   1   1   1   1   1   0
Fuzzy MLS Assessment (parameters M=11, a=10, k=3, mid=4)

SL \ CL   1        2        3        4        5        6        7        8        9        10
   1      8.3e-6   6.3e-6   6.2e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6
   2      1.7e-4   8.6e-6   6.4e-6   6.2e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6
   3      1        2.6e-4   8.9e-6   6.4e-6   6.2e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6
   4      1        1        4.5e-4   9.4e-6   6.4e-6   6.2e-6   6.1e-6   6.1e-6   6.1e-6   6.1e-6
   5      1        1        1        9.1e-4   1.0e-5   6.5e-6   6.2e-6   6.1e-6   6.1e-6   6.1e-6
   6      1        1        1        1        2.5e-3   1.1e-5   6.5e-6   6.2e-6   6.1e-6   6.1e-6
   7      1        1        1        1        1        1.1e-2   1.3e-5   6.6e-6   6.2e-6   6.1e-6
   8      1        1        1        1        1        1        1.2e-1   1.7e-5   6.8e-6   6.2e-6
   9      1        1        1        1        1        1        1        0.95     2.8e-5   7.1e-6
  10      1        1        1        1        1        1        1        1        1        1.2e-4
Clark-Wilson Model
• Focus on data integrity in commercial/transactional settings.
• Key concepts
  – CDI (constrained data item): data to be integrity protected.
  – TP: trusted programs which can run on behalf of a set of users and are guaranteed to take a set of CDIs from one valid state to another.
    • The mapping of TPs to CDIs is certified.
    • The mapping of users to TPs and CDIs is certified.
    • Only certified TPs with an authenticated, certified user are allowed to manipulate CDIs.
  – Logging, with the logs themselves as CDIs.
  – TPs must be able to handle untrusted input without impacting the integrity of CDIs.
  – Separation of duty between the certifiers and the users of TPs.
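The enforcement side of Clark-Wilson reduces to checking certified (user, TP, CDI-set) triples and logging every attempt. A minimal sketch – the user, TP, and CDI names are illustrative:

```python
# Certified triples: which user may run which TP over which set of CDIs.
# Certification is done separately (separation of duty).
certified_triples = {
    ("teller_joe", "post_deposit", frozenset({"account_balance", "tx_log"})),
    ("auditor_ann", "reconcile",   frozenset({"account_balance", "tx_log"})),
}

log = []  # the append-only log is itself a CDI

def run_tp(user: str, tp: str, cdis: frozenset) -> bool:
    """Permit a TP run only if the exact (user, TP, CDI-set) is certified."""
    ok = (user, tp, cdis) in certified_triples
    log.append((user, tp, tuple(sorted(cdis)), ok))
    return ok

cdis = frozenset({"account_balance", "tx_log"})
assert run_tp("teller_joe", "post_deposit", cdis)   # certified: allowed
assert not run_tp("teller_joe", "reconcile", cdis)  # teller can't reconcile
```

Unlike MLS labels, the policy here is expressed in terms of well-formed transactions, which is why the model fits commercial integrity requirements.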
Access Control (Reality)
[Figure: the ideal picture degraded – a low-assurance reference monitor, with questionable authentication (??) and auditing (??), enforces a rigid/dated/inconsistent/incomplete access control policy over “protected” data, while partially protected and unprotected data sit outside its reach.]
Information (In)Security: Enterprise Perspective
• Many forms of valuable data: structured, unstructured, images, video, voice.
• Accessible from/stored in multiple ways and on multiple devices: cell phones, laptops, PDAs, iPods, briefcases; at coffee shops, hotels, home; via business partners, the supply chain, outsourcing, SaaS, cloud computing.
• All of which are subject to increasingly sophisticated internal and external attacks, or accidental leakage!
Adapted from: IBM Internet Security Systems
Software (In)security Trends
[Chart: software vulnerabilities reported 2000–2007 (y-axis 0–7,000 per year, rising steadily) together with the percentage of remotely exploitable vulnerabilities (y-axis 0–100%).]
[Pie charts: IBM/ISS analysis of the 2007 malware zoo (410,000 distinct samples), broken down into Trojan, Downloader, Adware, Virus, Worm, Backdoor, Dialer, PWD Steal, and Misc; and the impact of 2007 vulnerabilities, dominated by Gain Access (~50%), followed by Data Manipulation, DoS, Obtain Info, Bypass Security, Gain Privilege, File Manipulation, and Other.]
Some Costs of Data Insecurity
• Since 2005, over 250 million customer records stolen*.
• Cost of an average data breach: $6.6 million*.
  – Average $202 per customer record*.
  – Detection, notification, response, lost business.
• On average, the cost of preventative measures is four times less than the cost of a breach*.
• 92% of security practitioners report their organization suffered a cyber-crime attack*.
• 12,000 laptops are lost in airports each week*.
  – Only 30% recovered.
  – ~50% of travelers say their laptops contain customer data or confidential business information.
  – The average notebook holds content valued at $972,000.
    – McAfee/Datamonitor’s Data Loss Survey, 2007
* Ponemon Institute
Data Security Risk Mitigants
TECHNOLOGIES
• Data erasure
• Cryptographic protection for data
  – Encryption/authentication
    • Data-at-rest, data-in-motion
• Firewalls, network zones
• Network admission control (NAC)
• Trusted computing
• Secure auditing
• Antivirus
• Intrusion detection/prevention
• Transaction/user monitoring for forensics/fraud
• Data leakage prevention (DLP)
  – Data discovery and classification
  – Enforcement of data security policy for data-at-rest, data-in-use, data-in-motion
  – Data masking
PROCESSES
• Operational security processes (vulnerability scanning, pen-testing, password renewal/checking, production system management, etc.)
• User security education
• Secure software development practices, evaluation & certification
• Security alert monitoring and incident response process
• Change and patch management process
• …
Risk/Benefit-Based Data Security
[Figure: the “reality” picture hardened – a “patched” reference monitor makes dynamic risk/benefit-based access decisions using risk-based authentication (degree of authentication, subject, action), an access control policy, and audit. “Protected” data is wrapped by IDS/IPS, encryption, masking, DLP, firewalls/zones, patch management, and NAC; partially protected data by trusted computing, antivirus, intrusion detection, and patch management; discovered unprotected data is handled by DLP discovery/enforcement and secure deletion. Supporting services: secure audit, transaction/user monitoring, security alert monitoring, secure development.]
Challenges
• Quantifying the benefit and risk of data.
  – Information about benefit may be “outside the system”.
    • Risk market / information trading.
  – Estimating potential damage from loss of data integrity.
    • Confidentiality and availability may be easier.
  – Quantifying the probability of attack (quality of protection).
    • Asymmetric, adversarial setting.
    • Rapidly evolving set of attack vectors and fixes.
    • Impact of risk mitigants.
    • Deliberate data disclosure risks by employees may be easier to model than risks from external attackers.
• Optimizing risks, benefits, costs.
Progress: Risk-Based Information Sharing Policies
• Motivation: JASON committee report [2006]
  – Drawbacks of multi-level security for information sharing.
  – Need for risk-based access control.
• FuzzyMLS [IEEE Security and Privacy 2007]
  – First realization of risk-based information sharing.
  – Simplified information model.
• Trading in Risk [NSPW 2008]
• Metadata Framework [IEEE QoISN 2008]
  – Data-centric metadata model for risk-based information sharing.
  – Realizing flexible, context-dependent security policies over existing ontologies [SACMAT 2009].
• Quantitative Risk Analysis [ACM CCS 2008, IEEE Milcom 2008]
  – Closed-form control loop analysis.
• Metadata Calculus [ASC 2008]
  – Extends the QoI calculus for security metadata.