November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-1
CIS 739: COMPUTER SECURITY
Chapter 1: Introduction
• Components of computer security
• Threats
• Policies and mechanisms
• The role of trust
• Assurance
• Operational issues
• Human issues
Basic Components
• Confidentiality
– Keeping data and resources hidden
• Integrity
– Data integrity (integrity)
– Origin integrity (authentication)
• Availability
– Enabling access to data and resources
Classes of Threats
• Disclosure
– Snooping
• Deception
– Modification, spoofing, repudiation of origin, denial of receipt
• Disruption
– Modification
• Usurpation
– Modification, spoofing, delay, denial of service
Policies and Mechanisms
• Policy says what is, and is not, allowed
– This defines “security” for the site/system/etc.
• Mechanisms enforce policies
• Composition of policies
– If policies conflict, discrepancies may create security vulnerabilities
Goals of Security
• Prevention
– Prevent attackers from violating security policy
• Detection
– Detect attackers’ violation of security policy
• Recovery
– Stop attack, assess and repair damage
– Continue to function correctly even if attack succeeds
Trust and Assumptions
• Underlie all aspects of security
• Policies
– Unambiguously partition system states
– Correctly capture security requirements
• Mechanisms
– Assumed to enforce policy
– Support mechanisms work correctly
Types of Mechanisms
• Secure: the set of reachable states is a subset of the set of secure states
• Precise: the set of reachable states is exactly the set of secure states
• Broad: some reachable states are not secure
Assurance
• Specification
– Requirements analysis
– Statement of desired functionality
• Design
– How system will meet specification
• Implementation
– Programs/systems that carry out design
Operational Issues
• Cost-Benefit Analysis
– Is it cheaper to prevent or recover?
• Risk Analysis
– Should we protect something?
– How much should we protect this thing?
• Laws and Customs
– Are desired security measures illegal?
– Will people do them?
Human Issues
• Organizational Problems
– Power and responsibility
– Financial benefits
• People problems
– Outsiders and insiders
– Social engineering
Tying Together
Threats → Policy → Specification → Design → Implementation → Operation
Key Points
• Policy defines security, and mechanisms enforce security
– Confidentiality
– Integrity
– Availability
• Trust and knowing assumptions
• Importance of assurance
• The human factor
Chapter 2: Access Control Matrix
• Overview
• Access Control Matrix Model
• Protection State Transitions
– Commands
– Conditional Commands
Overview
• Protection state of system
– Describes current settings, values of system relevant to protection
• Access control matrix
– Describes protection state precisely
– Matrix describing rights of subjects over objects
– State transitions change elements of matrix
Description
• Subjects S = { s1, …, sn }
• Objects O = { o1, …, om } (the objects, as entities, include the subjects)
• Rights R = { r1, …, rk }
• Entries A[si, oj] ⊆ R
• A[si, oj] = { rx, …, ry } means subject si has rights rx, …, ry over object oj
• The matrix has one row per subject s1, …, sn and one column per object o1, …, om, s1, …, sn
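As a concrete illustration, the matrix can be sketched as a nested dictionary. This is a hypothetical encoding of my own, not from the text; entry `A[s][o]` stands for A[s, o]:

```python
# Toy access control matrix: A[s][o] is the set of rights in entry A[s, o].
subjects = ["s1", "s2"]
objects = ["o1", "o2"] + subjects        # subjects are objects (entities) too
A = {s: {o: set() for o in objects} for s in subjects}

A["s1"]["o1"] = {"r", "w"}               # s1 may read and write o1

def has_right(s, o, r):
    """True iff right r appears in entry A[s, o]."""
    return r in A[s][o]
```

Querying `has_right("s1", "o1", "r")` then mirrors the set-membership test A[s1, o1] ∋ r.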
Example 1
• Processes p, q
• Files f, g
• Rights r, w, x, a, o

      f     g    p     q
p     rwo   r    rwxo  w
q     a     ro   r     rwxo
Example 2
• Procedures inc_ctr, dec_ctr, manage
• Variable counter
• Rights +, –, call

          counter  inc_ctr  dec_ctr  manage
inc_ctr   +
dec_ctr   –
manage             call     call     call
State Transitions
• Change the protection state of system
• |– represents transition
– Xi |–τ Xi+1: command τ moves system from state Xi to Xi+1
– Xi |–* Xi+1: a sequence of commands moves system from state Xi to Xi+1
• Commands often called transformation procedures
Primitive Operations
• create subject s; create object o
– Creates new row and column in ACM; creates new column in ACM
• destroy subject s; destroy object o
– Deletes row and column from ACM; deletes column from ACM
• enter r into A[s, o]
– Adds right r for subject s over object o
• delete r from A[s, o]
– Removes right r from subject s over object o
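The six primitives can be sketched over a dictionary-backed matrix. This is an illustrative toy model of my own (note that creating a subject adds both a row and a column, since subjects are also objects):

```python
class ACM:
    """Toy access control matrix supporting the six primitive operations."""
    def __init__(self):
        self.subjects, self.objects = set(), set()
        self.A = {}                        # A[s][o] -> set of rights

    def create_object(self, o):            # adds a column
        if o not in self.objects:
            self.objects.add(o)
            for s in self.subjects:
                self.A[s][o] = set()

    def create_subject(self, s):           # adds a row and a column
        if s not in self.subjects:
            self.subjects.add(s)
            self.A[s] = {o: set() for o in self.objects}
            self.create_object(s)

    def destroy_object(self, o):           # deletes a column
        self.objects.discard(o)
        for row in self.A.values():
            row.pop(o, None)

    def destroy_subject(self, s):          # deletes a row and a column
        self.subjects.discard(s)
        self.A.pop(s, None)
        self.destroy_object(s)

    def enter(self, r, s, o):              # enter r into A[s, o]
        self.A[s][o].add(r)

    def delete(self, r, s, o):             # delete r from A[s, o]
        self.A[s][o].discard(r)
```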
Creating File
• Process p creates file f with r and w permission

command create•file(p, f)
    create object f;
    enter own into A[p, f];
    enter r into A[p, f];
    enter w into A[p, f];
end
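The command above can be sketched as a function over a flat dict-of-sets matrix. This is a toy encoding of my own; `A[(s, o)]` stands for entry A[s, o]:

```python
from collections import defaultdict

A = defaultdict(set)                      # A[(s, o)] is the set of rights

def create_file(A, p, f):
    """create-file(p, f): create object f, then enter own, r, w into A[p, f]."""
    A[(p, f)] |= {"own", "r", "w"}        # the defaultdict creates the entry

create_file(A, "p", "f")
```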
Mono-Operational Commands
• Make process p the owner of file g

command make•owner(p, g)
    enter own into A[p, g];
end

• Mono-operational command
– Single primitive operation in this command
Conditional Commands
• Let p give q r rights over f, if p owns f

command grant•read•file•1(p, f, q)
    if own in A[p, f]
    then
        enter r into A[q, f];
end

• Mono-conditional command
– Single condition in this command
Multiple Conditions
• Let p give q r and w rights over f, if p owns f and p has c rights over q

command grant•read•file•2(p, f, q)
    if own in A[p, f] and c in A[p, q]
    then
        enter r into A[q, f];
        enter w into A[q, f];
end
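A sketch of this conditional command, with both conditions checked before any primitive operation runs (same toy dict encoding as before, not part of the formal model):

```python
from collections import defaultdict

A = defaultdict(set)
A[("p", "f")] = {"own"}                   # p owns f
A[("p", "q")] = {"c"}                     # p has c rights over q

def grant_read_file_2(A, p, f, q):
    """If own in A[p, f] and c in A[p, q], enter r and w into A[q, f]."""
    if "own" in A[(p, f)] and "c" in A[(p, q)]:
        A[(q, f)] |= {"r", "w"}

grant_read_file_2(A, "p", "f", "q")
```

If either condition fails, the command is a no-op, exactly as in the model: conditions can only be tested for presence of rights, never negated.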
Key Points
• Access control matrix is the simplest abstraction for representing protection state
• Transitions alter protection state
• 6 primitive operations alter matrix
– Transitions can be expressed as commands composed of these operations and, possibly, conditions
Chapter 3: Foundational Results
• Overview
• Harrison-Ruzzo-Ullman result
– Corollaries
Overview
• Safety Question
• HRU Model
What Is “Secure”?
• Adding a generic right r where there was not one is “leaking”
• If a system S, beginning in initial state s0, cannot leak right r, it is safe with respect to the right r.
Safety Question
• Does there exist an algorithm for determining whether a protection system S with initial state s0 is safe with respect to a generic right r?
– Here, “safe” = “secure” for an abstract model
Mono-Operational Commands
• Answer: yes
• Sketch of proof:
Consider a minimal sequence of commands c1, …, ck that leaks the right.
– Can omit delete, destroy
– Can merge all creates into one
Worst case: insert every right into every entry; with s subjects and o objects initially, and n rights, upper bound is k ≤ n(s+1)(o+1)
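The worst-case bound can be computed directly; a one-line helper for the formula on the slide:

```python
def leak_bound(n, s, o):
    """Upper bound k <= n(s+1)(o+1) on a minimal leaking sequence:
    after merging all creates into one, at most one enter per right
    per entry of the resulting (s+1) x (o+1) matrix."""
    return n * (s + 1) * (o + 1)
```

For example, with 1 subject, 1 object, and 2 rights initially, no minimal leaking sequence need be longer than 8 commands.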
General Case
• Answer: no
• Sketch of proof:
Reduce halting problem to safety problem
Turing Machine review:
– Infinite tape in one direction
– States K, symbols M; distinguished blank b
– Transition function δ(k, m) = (k′, m′, L) means in state k, symbol m on the tape location is replaced by symbol m′, the head moves left one square, and the TM enters state k′
– Halting state is qf; TM halts when it enters this state
Mapping

[Figure: tape cells 1–4 hold A, B, C, D with the head on cell 3. Each cell i maps to subject si; each si has own rights over si+1; A[si, si] holds the cell’s symbol; A[s3, s3] also holds the state right k (marking the head position); A[s4, s4] holds end (marking the rightmost cell). Current state is k.]
Mapping

[Figure: after δ(k, C) = (k1, X, R), where k is the current state and k1 the next state: cell 3 now holds X, so A[s3, s3] = {X}; the head has moved right, so A[s4, s4] holds D, k1, and end.]
Command Mapping
δ(k, C) = (k1, X, R) at an intermediate cell becomes

command ck,C(s3, s4)
    if own in A[s3, s4] and k in A[s3, s3]
        and C in A[s3, s3]
    then
        delete k from A[s3, s3];
        delete C from A[s3, s3];
        enter X into A[s3, s3];
        enter k1 into A[s4, s4];
end
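Firing that command can be sketched on a dict-encoded matrix, where states and symbols are just right names (a toy encoding of my own, keyed by subject pairs):

```python
# ACM entries: A[(s, s')] is a set of rights; rights include tape symbols,
# state names, own, and end, as in the reduction.
A = {
    ("s3", "s3"): {"C", "k"},             # head on cell 3: symbol C, state k
    ("s3", "s4"): {"own"},                # s3 owns the next cell
    ("s4", "s4"): {"D", "end"},           # rightmost cell holds D
}

def c_k_C(A, s, s_next):
    """Command c_{k,C}: fires only when the own/state/symbol conditions hold."""
    if "own" in A[(s, s_next)] and {"k", "C"} <= A[(s, s)]:
        A[(s, s)] -= {"k", "C"}           # erase old symbol and state
        A[(s, s)] |= {"X"}                # write X on the cell
        A[(s_next, s_next)] |= {"k1"}     # move head right, enter state k1

c_k_C(A, "s3", "s4")
```

Because exactly one entry holds a state right, at most one such command is applicable in any configuration, which is what makes the simulation exact.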
Mapping

[Figure: after δ(k1, D) = (k2, Y, R), where k1 is the current state and k2 the next state: the head moves off the rightmost cell, so a new subject s5 is created with A[s4, s5] = {own}; cell 4 now holds Y, and A[s5, s5] holds b, k2, and end.]
Command Mapping
δ(k1, D) = (k2, Y, R) at the end of the tape becomes

command crightmost k1,D(s4, s5)
    if end in A[s4, s4] and k1 in A[s4, s4]
        and D in A[s4, s4]
    then
        delete end from A[s4, s4];
        create subject s5;
        enter own into A[s4, s5];
        enter end into A[s5, s5];
        delete k1 from A[s4, s4];
        delete D from A[s4, s4];
        enter Y into A[s4, s4];
        enter k2 into A[s5, s5];
end
Rest of Proof
• Protection system exactly simulates a TM
– Exactly 1 end right in ACM
– 1 right in entries corresponds to state
– Thus, at most 1 applicable command
• If TM enters state qf, then right has leaked
• If safety question decidable, then represent TM as above and determine if qf leaks
– Implies halting problem decidable
• Conclusion: safety question undecidable
Other Results
• Set of unsafe systems is recursively enumerable
• Delete create primitive; then safety question is complete in P-SPACE
• Delete destroy, delete primitives; then safety question is undecidable
– Systems are monotonic
• Safety question for monoconditional, monotonic protection systems is decidable
• Safety question for monoconditional protection systems with create, enter, delete (and no destroy) is decidable
Key Points
• Safety problem undecidable
• Limiting scope of systems can make problem decidable
Chapter 4: Security Policies
• Overview
• The nature of policies
– What they cover
– Policy languages
• The nature of mechanisms
– Types
• Underlying both
– Trust
Overview
• Overview
• Policies
• Trust
• Nature of Security Mechanisms
• Example Policy
Security Policy
• Policy partitions system states into:
– Authorized (secure)
• These are states the system can enter
– Unauthorized (nonsecure)
• If the system enters any of these states, it’s a security violation
• Secure system
– Starts in authorized state
– Never enters unauthorized state
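For a small, finite system the definition can be checked mechanically: enumerate the reachable states and verify each is authorized. A toy sketch with state names of my own:

```python
def is_secure(start, transitions, authorized):
    """True iff start is authorized and no reachable state is unauthorized."""
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        if state not in authorized:       # entered an unauthorized state
            return False
        stack.extend(transitions.get(state, ()))
    return True
```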
Confidentiality
• X set of entities, I information
• I has confidentiality property with respect to X if no x ∈ X can obtain information from I
• I can be disclosed to others
• Example:
– X set of students
– I final exam answer key
– I is confidential with respect to X if students cannot obtain final exam answer key
Integrity
• X set of entities, I information
• I has integrity property with respect to X if all x ∈ X trust information in I
• Types of integrity:
– trust I, its conveyance and protection (data integrity)
– I information about origin of something or an identity (origin integrity, authentication)
– I resource: means resource functions as it should (assurance)
Availability
• X set of entities, I resource
• I has availability property with respect to X if all x ∈ X can access I
• Types of availability:
– traditional: x gets access or not
– quality of service: promised a level of access (for example, a specific level of bandwidth) that is not met, even though some access is achieved
Policy Models
• Abstract description of a policy or class of policies
• Focus on points of interest in policies
– Security levels in multilevel security models
– Separation of duty in Clark-Wilson model
– Conflict of interest in Chinese Wall model
Types of Security Policies
• Military (governmental) security policy
– Policy primarily protecting confidentiality
• Commercial security policy
– Policy primarily protecting integrity
• Confidentiality policy
– Policy protecting only confidentiality
• Integrity policy
– Policy protecting only integrity
Integrity and Transactions
• Begin in consistent state
– “Consistent” defined by specification
• Perform series of actions (transaction)
– Actions cannot be interrupted
– If actions complete, system in consistent state
– If actions do not complete, system reverts to beginning (consistent) state
Trust
Administrator installs patch
1. Trusts patch came from vendor, not tampered with in transit
2. Trusts vendor tested patch thoroughly
3. Trusts vendor’s test environment corresponds to local environment
4. Trusts patch is installed correctly
Trust in Formal Verification
• Gives formal mathematical proof that given input i, program P produces output o as specified
• Suppose a security-related program S is formally verified to work with operating system O
• What are the assumptions?
Trust in Formal Methods
1. Proof has no errors
• Bugs in automated theorem provers
2. Preconditions hold in environment in which S is to be used
3. S transformed into executable S′ whose actions follow source code
– Compiler bugs, linker/loader/library problems
4. Hardware executes S′ as intended
– Hardware bugs (Pentium f00f bug, for example)
Types of Access Control
• Discretionary Access Control (DAC, IBAC)
– individual user sets access control mechanism to allow or deny access to an object
• Mandatory Access Control (MAC)
– system mechanism controls access to object, and individual cannot alter that access
• Originator Controlled Access Control (ORCON)
– originator (creator) of information controls who can access information
Question
• Policy disallows cheating
– Includes copying homework, with or without permission
• CS class has students do homework on computer
• Anne forgets to read-protect her homework file
• Bill copies it
• Who cheated?
– Anne, Bill, or both?
Answer Part 1
• Bill cheated
– Policy forbids copying homework assignment
– Bill did it
– System entered unauthorized state (Bill having a copy of Anne’s assignment)
• If not explicit in computer security policy, certainly implicit
– Not credible that a unit of the university allows something that the university as a whole forbids, unless the unit explicitly says so
Answer Part 2
• Anne didn’t protect her homework
– Not required by security policy
• She didn’t breach security
• If policy said students had to read-protect homework files, then Anne did breach security
– She didn’t do this
Mechanisms
• Entity or procedure that enforces some part of the security policy
– Access controls (like bits to prevent someone from reading a homework file)
– Disallowing people from bringing CDs and floppy disks into a computer facility to control what is placed on systems
Example English Policy
• Computer security policy for academic institution
– Institution has multiple campuses, administered from central office
– Each campus has its own administration, and unique aspects and needs
• Authorized Use Policy
• Electronic Mail Policy
Authorized Use Policy
• Intended for one campus (Davis) only
• Goals of campus computing
– Underlying intent
• Procedural enforcement mechanisms
– Warnings
– Denial of computer access
– Disciplinary action up to and including expulsion
• Written informally, aimed at user community
Electronic Mail Policy
• Systemwide, not just one campus
• Three parts
– Summary
– Full policy
– Interpretation at the campus
Summary
• Warns that electronic mail is not private
– Can be read during normal system administration
– Can be forged, altered, and forwarded
• Unusual because the policy alerts users to the threats
– Usually, policies say how to prevent problems, but do not define the threats
Summary
• What users should and should not do
– Think before you send
– Be courteous, respectful of others
– Don’t interfere with others’ use of e-mail
• Personal use okay, provided overhead minimal
• Who it applies to
– Problem is UC is quasi-governmental, so is bound by rules that private companies may not be
– Educational mission also affects application
Full Policy
• Context
– Does not apply to Dept. of Energy labs run by the university
– Does not apply to printed copies of e-mail
• Other policies apply here
• E-mail, infrastructure are university property
– Principles of academic freedom, freedom of speech apply
– Access without user’s permission requires approval of vice chancellor of campus or vice president of UC
– If infeasible, must get permission retroactively
Uses of E-mail
• Anonymity allowed
– Exception: if it violates laws or other policies
• Can’t interfere with others’ use of e-mail
– No spam, letter bombs, e-mailed worms, etc.
• Personal e-mail allowed within limits
– Cannot interfere with university business
– Such e-mail may be a “university record” subject to disclosure
Security of E-mail
• University can read e-mail
– Won’t go out of its way to do so
– Allowed for legitimate business purposes
– Allowed to keep e-mail robust, reliable
• Archiving and retention allowed
– May be able to recover e-mail from end system (backed up, for example)
Implementation
• Adds campus-specific requirements and procedures
– Example: “incidental personal use” not allowed if it benefits a non-university organization
– Allows implementation to take into account differences between campuses, such as self-governance by Academic Senate
• Procedures for inspecting, monitoring, disclosing e-mail contents
• Backups
Key Points
• Policies describe what is allowed
• Mechanisms control how policies are enforced
• Trust underlies everything
Chapter 5: Confidentiality Policies
• Overview
– What is a confidentiality model
• Bell-LaPadula Model
– General idea
– Informal description of rules
Overview
• Goals of Confidentiality Model
• Bell-LaPadula Model
– Informally
– Example Instantiation
Confidentiality Policy
• Goal: prevent the unauthorized disclosure of information
– Deals with information flow
– Integrity incidental
• Multi-level security models are best-known examples
– Bell-LaPadula Model basis for many, or most, of these
Bell-LaPadula Model, Step 1
• Security levels arranged in linear ordering
– Top Secret: highest
– Secret
– Confidential
– Unclassified: lowest
• Subjects have security clearance L(s)
– Objects have security classification L(o)
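With the linear ordering encoded as integers, the mandatory parts of the step-1 rules are one comparison each. This sketch covers only the level comparison; the discretionary permission checks of the full conditions are omitted:

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    """Simple security condition, step 1: L(o) <= L(s) (no reads up)."""
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level, object_level):
    """*-property, step 1: L(s) <= L(o) (no writes down)."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```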
Example
security level | subject | object
Top Secret | Tamara | Personnel Files
Secret | Samuel | E-Mail Files
Confidential | Claire | Activity Logs
Unclassified | Ulaley | Telephone Lists

• Tamara can read all files
• Claire cannot read Personnel or E-Mail Files
• Ulaley can only read Telephone Lists
Reading Information
• Information flows up, not down
– “Reads up” disallowed, “reads down” allowed
• Simple Security Condition (Step 1)
– Subject s can read object o iff L(o) ≤ L(s) and s has permission to read o
• Note: combines mandatory control (relationship of security levels) and discretionary control (the required permission)
– Sometimes called “no reads up” rule
Writing Information
• Information flows up, not down
– “Writes up” allowed, “writes down” disallowed
• *-Property (Step 1)
– Subject s can write object o iff L(s) ≤ L(o) and s has permission to write o
• Note: combines mandatory control (relationship of security levels) and discretionary control (the required permission)
– Sometimes called “no writes down” rule
Basic Security Theorem, Step 1
• If a system is initially in a secure state, and every transition of the system satisfies the simple security condition, step 1, and the *-property, step 1, then every state of the system is secure– Proof: induct on the number of transitions
Bell-LaPadula Model, Step 2
• Expand notion of security level to include categories
• Security level is (clearance, category set)
• Examples
– (Top Secret, {NUC, EUR, ASI})
– (Confidential, {EUR, ASI})
– (Secret, {NUC, ASI})
Levels and Lattices
• (A, C) dom (A′, C′) iff A′ ≤ A and C′ ⊆ C
• Examples
– (Top Secret, {NUC, ASI}) dom (Secret, {NUC})
– (Secret, {NUC, EUR}) dom (Confidential, {NUC, EUR})
– (Top Secret, {NUC}) ¬dom (Confidential, {EUR})
• Let C be set of classifications, K set of categories. Set of security levels L = C × K with dom forms a lattice
– lub{(A, C), (A′, C′)} = (max(A, A′), C ∪ C′)
– glb{(A, C), (A′, C′)} = (min(A, A′), C ∩ C′)
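These lattice operations can be sketched with clearances as integers and category sets as Python sets (an encoding of my own; `<=` on sets tests subset):

```python
def dom(l1, l2):
    """(A, C) dom (A', C') iff A' <= A and C' is a subset of C."""
    (a1, c1), (a2, c2) = l1, l2
    return a2 <= a1 and c2 <= c1

def lub(l1, l2):
    """Least upper bound: max of clearances, union of categories."""
    (a1, c1), (a2, c2) = l1, l2
    return (max(a1, a2), c1 | c2)

def glb(l1, l2):
    """Greatest lower bound: min of clearances, intersection of categories."""
    (a1, c1), (a2, c2) = l1, l2
    return (min(a1, a2), c1 & c2)
```

With Top Secret = 3 and Confidential = 1, `dom((3, {"NUC"}), (1, {"EUR"}))` is false in both directions, showing dom is only a partial order.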
Levels and Ordering
• Security levels partially ordered
– Any pair of security levels may (or may not) be related by dom
• “dominates” serves the role of “greater than” in step 1
– “greater than” is a total ordering, though
Reading Information
• Information flows up, not down
– “Reads up” disallowed, “reads down” allowed
• Simple Security Condition (Step 2)
– Subject s can read object o iff L(s) dom L(o) and s has permission to read o
• Note: combines mandatory control (relationship of security levels) and discretionary control (the required permission)
– Sometimes called “no reads up” rule
Writing Information
• Information flows up, not down
– “Writes up” allowed, “writes down” disallowed
• *-Property (Step 2)
– Subject s can write object o iff L(o) dom L(s) and s has permission to write o
• Note: combines mandatory control (relationship of security levels) and discretionary control (the required permission)
– Sometimes called “no writes down” rule
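The mandatory parts of the two step-2 rules reduce to dom checks in opposite directions (discretionary permissions again omitted; labels encoded as (clearance, category set)):

```python
def dom(l1, l2):
    (a1, c1), (a2, c2) = l1, l2
    return a2 <= a1 and set(c2) <= set(c1)

def can_read(ls, lo):
    """Simple security condition, step 2: L(s) dom L(o)."""
    return dom(ls, lo)

def can_write(ls, lo):
    """*-property, step 2: L(o) dom L(s)."""
    return dom(lo, ls)
```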
Basic Security Theorem, Step 2
• If a system is initially in a secure state, and every transition of the system satisfies the simple security condition, step 2, and the *-property, step 2, then every state of the system is secure
– Proof: induct on the number of transitions
– In the actual Basic Security Theorem, discretionary access control is treated as a third property, and the simple security property and *-property are phrased to eliminate the discretionary part of the definitions, but it is simpler to express them the way done here
Problem
• Colonel has (Secret, {NUC, EUR}) clearance
• Major has (Secret, {EUR}) clearance
– Major can talk to colonel (“write up” or “read down”)
– Colonel cannot talk to major (“read up” or “write down”)
• Clearly absurd!
Solution
• Define maximum, current levels for subjects
– maxlevel(s) dom curlevel(s)
• Example
– Treat Major as an object (Colonel is writing to him/her)
– Colonel has maxlevel (Secret, {NUC, EUR})
– Colonel sets curlevel to (Secret, {EUR})
– Now L(Major) dom curlevel(Colonel)
• Colonel can write to Major without violating “no writes down”
– Does L(s) mean curlevel(s) or maxlevel(s)?
• Formally, we need a more precise notation
DG/UX System
• Provides mandatory access controls
– MAC label identifies security level
– Default labels, but can define others
• Initially
– Subjects assigned MAC label of parent
• Initial label assigned to user, kept in Authorization and Authentication database
– Object assigned label at creation
• Explicit labels stored as part of attributes
• Implicit labels determined from parent directory
MAC Regions
[Figure: MAC regions, spanning hierarchy levels VP-1 through VP-5 and their categories. Administrative Region: A&A database, audit. Virus Prevention Region: site executables, trusted data, executables not part of the TCB, executables part of the TCB, reserved for future use. User Region: user data and applications.]

IMPL_HI is “maximum” (least upper bound) of all levels; IMPL_LO is “minimum” (greatest lower bound) of all levels
Directory Problem
• Process p at MAC_A tries to create file /tmp/x
• /tmp/x exists but has MAC label MAC_B
– Assume MAC_B dom MAC_A
• Create fails
– Now p knows a file named x with a higher label exists
• Fix: only programs with same MAC label as directory can create files in the directory
– Now compilation won’t work, mail can’t be delivered
Multilevel Directory
• Directory with a set of subdirectories, one per label
– Not normally visible to user
– p creating /tmp/x actually creates /tmp/d/x, where d is the hidden directory corresponding to MAC_A
– All p’s references to /tmp go to /tmp/d
• p cd’s to /tmp/a, then to ..
– System call stat(“.”, &buf) returns inode number of real directory
– System call dg_stat(“.”, &buf) returns inode of /tmp
Object Labels
• Requirement: every file system object must have MAC label
1. Roots of file systems have explicit MAC labels
• If mounted file system has no label, it gets label of mount point
2. Object with implicit MAC label inherits label of parent
Object Labels
• Problem: object has two names
– /x/y/z, /a/b/c refer to same object
– y has explicit label IMPL_HI
– b has explicit label IMPL_B
• Case 1: hard link created while file system on DG/UX system, so …
3. Creating hard link requires explicit label
• If implicit, label made explicit
• Moving a file makes label explicit
Object Labels
• Case 2: hard link exists when file system mounted
– If no objects on the paths have explicit labels, the paths have the same implicit labels
– If an object on a path acquires an explicit label, the implicit label of the child must be preserved
so …
4. Change to directory label makes child labels explicit before the change
Object Labels
• Symbolic links are files, and treated as such, so …
5. When resolving symbolic link, label of object is label of target of the link
• System needs access to the symbolic link itself
Using MAC Labels
• Simple security condition implemented
• *-property not fully implemented
– Process MAC must equal object MAC
– Writing allowed only at same security level
• Overly restrictive in practice
MAC Tuples
• Up to 3 MAC ranges (one per region)
• MAC range is a set of labels with upper, lower bound
– Upper bound must dominate lower bound of range
• Examples
1. [(Secret, {NUC}), (Top Secret, {NUC})]
2. [(Secret, ∅), (Top Secret, {NUC, EUR, ASI})]
3. [(Confidential, {ASI}), (Secret, {NUC, ASI})]
MAC Ranges
1. [(Secret, {NUC}), (Top Secret, {NUC})]
2. [(Secret, ∅), (Top Secret, {NUC, EUR, ASI})]
3. [(Confidential, {ASI}), (Secret, {NUC, ASI})]
• (Top Secret, {NUC}) in ranges 1, 2
• (Secret, {NUC, ASI}) in ranges 2, 3
• [(Secret, {ASI}), (Top Secret, {EUR})] not a valid range
– as (Top Secret, {EUR}) ¬dom (Secret, {ASI})
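Range validity is itself a dom check, so the invalid example above fails mechanically. A sketch with labels encoded as (clearance, category set):

```python
def dom(l1, l2):
    (a1, c1), (a2, c2) = l1, l2
    return a2 <= a1 and set(c2) <= set(c1)

def valid_range(lr, hr):
    """A MAC range [lr, hr] is valid iff the upper bound dominates the lower."""
    return dom(hr, lr)
```

With Secret = 2 and Top Secret = 3, the range [(Secret, {ASI}), (Top Secret, {EUR})] is rejected because {ASI} is not a subset of {EUR}.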
Objects and Tuples
• Objects must have MAC labels
– May also have a MAC range
– If both, the range overrides the label
• Example
– Paper has MAC range: [(Secret, {EUR}), (Top Secret, {NUC, EUR})]
MAC Tuples
• Process can read object when:
– Object MAC range (lr, hr); process MAC label pl
– pl dom hr
• Process MAC label grants read access to upper bound of range
• Example
– Peter, with label (Secret, {EUR}), cannot read paper
• (Top Secret, {NUC, EUR}) dom (Secret, {EUR})
– Paul, with label (Top Secret, {NUC, EUR, ASI}), can read paper
• (Top Secret, {NUC, EUR, ASI}) dom (Top Secret, {NUC, EUR})
MAC Tuples
• Process can write object when:
– Object MAC range (lr, hr); process MAC label pl
– pl ∈ (lr, hr)
• Process MAC label grants write access to any label in range
• Example
– Peter, with label (Secret, {EUR}), can write paper
• (Top Secret, {NUC, EUR}) dom (Secret, {EUR}) and (Secret, {EUR}) dom (Secret, {EUR})
– Paul, with label (Top Secret, {NUC, EUR, ASI}), cannot write paper
• (Top Secret, {NUC, EUR, ASI}) dom (Top Secret, {NUC, EUR})
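Both access rules can be sketched together. A hedged illustration only; the `dom` helper and level ordering are assumptions consistent with the examples above.

```python
# Read: process label must dominate the range's upper bound.
# Write: process label must lie within the range [lr, hr].
LEVELS = {"Confidential": 0, "Secret": 1, "Top Secret": 2}

def dom(a, b):
    (la, ca), (lb, cb) = a, b
    return LEVELS[la] >= LEVELS[lb] and set(cb) <= set(ca)

PAPER = (("Secret", {"EUR"}), ("Top Secret", {"NUC", "EUR"}))

def can_read(pl, rng):
    lr, hr = rng
    return dom(pl, hr)                    # pl dom hr

def can_write(pl, rng):
    lr, hr = rng
    return dom(pl, lr) and dom(hr, pl)    # lr <= pl <= hr

peter = ("Secret", {"EUR"})
paul = ("Top Secret", {"NUC", "EUR", "ASI"})
assert not can_read(peter, PAPER) and can_write(peter, PAPER)
assert can_read(paul, PAPER) and not can_write(paul, PAPER)
```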
Key Points
• Confidentiality models restrict flow of information
• Bell-LaPadula models multilevel security
– Cornerstone of much work in computer security
Chapter 6: Integrity Policies
• Overview
• Requirements
• Biba’s models
• Clark-Wilson model
Overview
• Requirements
– Very different from confidentiality policies
• Biba’s model
• Clark-Wilson model
Requirements of Policies

1. Users will not write their own programs, but will use existing production programs and databases.
2. Programmers will develop and test programs on a non-production system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system.
3. A special process must be followed to install a program from the development system onto the production system.
4. The special process in requirement 3 must be controlled and audited.
5. The managers and auditors must have access to both the system state and the system logs that are generated.
Biba Integrity Model
• Set of subjects S, objects O, integrity levels I, relation ≤ ⊆ I × I holding when second dominates first
• min: I × I → I returns lesser of integrity levels
• i: S ∪ O → I gives integrity level of entity
• r ⊆ S × O means s ∈ S can read o ∈ O
• w, x defined similarly
Intuition for Integrity Levels
• The higher the level, the more confidence
– That a program will execute correctly
– That data is accurate and/or reliable
• Note relationship between integrity and trustworthiness
• Important point: integrity levels are not security levels
Biba’s Model
• Similar to Bell-LaPadula model
1. s ∈ S can read o ∈ O iff i(s) ≤ i(o)
2. s ∈ S can write to o ∈ O iff i(o) ≤ i(s)
3. s1 ∈ S can execute s2 ∈ S iff i(s2) ≤ i(s1)
• Add compartments and discretionary controls to get full dual of Bell-LaPadula model
• Information flow result holds
– Different proof, though
• Actually the “strict integrity model” of Biba’s set of models
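The three strict-integrity rules can be sketched directly. A minimal illustration with integer integrity levels (higher = more trusted); the level values are assumptions.

```python
# Biba strict integrity: read up, write down, execute down.
def can_read(i_s, i_o):
    return i_s <= i_o       # subject may read only at its level or above

def can_write(i_s, i_o):
    return i_o <= i_s       # subject may write only at its level or below

def can_execute(i_s1, i_s2):
    return i_s2 <= i_s1     # may invoke only equal or lower integrity

LOW, HIGH = 0, 1
assert can_read(LOW, HIGH)        # reading up is allowed
assert not can_write(LOW, HIGH)   # low subject cannot taint high data
assert can_execute(HIGH, LOW)
```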
LOCUS and Biba
• Goal: prevent untrusted software from altering data or other software
• Approach: make levels of trust explicit
– credibility rating based on estimate of software’s trustworthiness (0 untrusted, n highly trusted)
– trusted file systems contain software with a single credibility level
– Process has risk level or highest credibility level at which process can execute
– Must use run-untrusted command to run software at lower credibility level
Clark-Wilson Integrity Model
• Integrity defined by a set of constraints
– Data in a consistent or valid state when it satisfies these
• Example: Bank
– D today’s deposits, W withdrawals, YB yesterday’s balance, TB today’s balance
– Integrity constraint: D + YB – W = TB
• Well-formed transactions move system from one consistent state to another
• Issue: who examines, certifies transactions done correctly?
Entities
• CDIs: constrained data items
– Data subject to integrity controls
• UDIs: unconstrained data items
– Data not subject to integrity controls
• IVPs: integrity verification procedures
– Procedures that test that the CDIs conform to the integrity constraints
• TPs: transaction procedures
– Procedures that take the system from one valid state to another
Certification Rules 1 and 2
CR1 When any IVP is run, it must ensure all CDIs are in a valid state
CR2 For some associated set of CDIs, a TP must transform those CDIs in a valid state into a (possibly different) valid state
– Defines relation certified that associates a set of CDIs with a particular TP
– Example: TP balance, CDIs accounts, in bank example
Enforcement Rules 1 and 2
ER1 The system must maintain the certified relations and must ensure that only TPs certified to run on a CDI manipulate that CDI.
ER2 The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. The TP cannot access that CDI on behalf of a user not associated with that TP and CDI.
– System must maintain, enforce certified relation
– System must also restrict access based on user ID (allowed relation)
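The two relations can be sketched as a toy access check. All names (`balance_tp`, `accounts`, the users) are hypothetical, invented for illustration.

```python
# certified: TP -> set of CDIs it is certified to manipulate (ER1)
# allowed: (user, TP, CDI) triples the system permits (ER2)
certified = {"balance_tp": {"accounts"}}
allowed = {("alice", "balance_tp", "accounts")}

def may_run(user, tp, cdi):
    # ER1: only certified TPs manipulate the CDI;
    # ER2: and only on behalf of an associated user.
    return cdi in certified.get(tp, set()) and (user, tp, cdi) in allowed

assert may_run("alice", "balance_tp", "accounts")
assert not may_run("bob", "balance_tp", "accounts")   # user not associated
assert not may_run("alice", "other_tp", "accounts")   # TP not certified
```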
Users and Rules
CR3 The allowed relations must meet the requirements imposed by the principle of separation of duty.
ER3 The system must authenticate each user attempting to execute a TP
– Type of authentication undefined, and depends on the instantiation
– Authentication not required before use of the system, but is required before manipulation of CDIs (requires using TPs)
Logging
CR4 All TPs must append enough information to reconstruct the operation to an append-only CDI.
– This CDI is the log
– Auditor needs to be able to determine what happened during reviews of transactions
Handling Untrusted Input
CR5 Any TP that takes as input a UDI may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI.
– In bank, numbers entered at keyboard are UDIs, so cannot be input to TPs. TPs must validate numbers (to make them a CDI) before using them; if validation fails, TP rejects UDI
Separation of Duty In Model
ER4 Only the certifier of a TP may change the list of entities associated with that TP. No certifier of a TP, or of an entity associated with that TP, may ever have execute permission with respect to that entity.
– Enforces separation of duty with respect to certified and allowed relations
Comparison With Requirements
1. Users can’t certify TPs, so CR5 and ER4 enforce this
2. Procedural, so model doesn’t directly cover it; but special process corresponds to using TP
• No technical controls can prevent programmer from developing program on production system; usual control is to delete software tools
3. TP does the installation, trusted personnel do certification
Comparison With Requirements
4. CR4 provides logging; ER3 authenticates trusted personnel doing installation; CR5, ER4 control installation procedure
• New program UDI before certification, CDI (and TP) after
5. Log is CDI, so appropriate TP can provide managers, auditors access
• Access to state handled similarly
Comparison to Biba
• Biba
– No notion of certification rules; trusted subjects ensure actions obey rules
– Untrusted data examined before being made trusted
• Clark-Wilson
– Explicit requirements that actions must meet
– Trusted entity must certify method to upgrade untrusted data (and not certify the data itself)
Key Points
• Integrity policies deal with trust
– As trust is hard to quantify, these policies are hard to evaluate completely
– Look for assumptions and trusted users to find possible weak points in their implementation
• Biba based on multilevel integrity
• Clark-Wilson focuses on separation of duty and transactions
Chapter 7: Hybrid Policies
• Overview
• Chinese Wall Model
• Clinical Information Systems Security Policy
• ORCON
• RBAC
Overview
• Chinese Wall Model
– Focuses on conflict of interest
• CISS Policy
– Combines integrity and confidentiality
• ORCON
– Combines mandatory, discretionary access controls
• RBAC
– Base controls on job function
Chinese Wall Model
Problem:
– Tony advises American Bank about investments
– He is asked to advise Toyland Bank about investments
• Conflict of interest to accept, because his advice for either bank would affect his advice to the other bank
Organization
• Organize entities into “conflict of interest” classes
• Control subject accesses to each class
• Control writing to all classes to ensure information is not passed along in violation of rules
• Allow sanitized data to be viewed by everyone
Definitions
• Objects: items of information related to a company
• Company dataset (CD): contains objects related to a single company
– Written CD(O)
• Conflict of interest class (COI): contains datasets of companies in competition
– Written COI(O)
– Assume: each object belongs to exactly one COI class
Example
– Bank COI Class: Bank of America, Citibank, Bank of the West
– Gasoline Company COI Class: Shell Oil, Union ’76, Standard Oil, ARCO
Temporal Element
• If Anthony reads any CD in a COI, he can never read another CD in that COI
– Possible that information learned earlier may allow him to make decisions later
– Let PR(S) be set of objects that S has already read
CW-Simple Security Condition
• s can read o iff either condition holds:
1. There is an o′ such that s has accessed o′ and CD(o′) = CD(o)
– Meaning s has read something in o’s dataset
2. For all o′ ∈ O, o′ ∈ PR(s) ⇒ COI(o′) ≠ COI(o)
– Meaning s has not read any objects in o’s conflict of interest class
• Ignores sanitized data (see below)
• Initially, PR(s) = ∅, so initial read request granted
Sanitization
• Public information may belong to a CD
– As is publicly available, no conflicts of interest arise
– So, should not affect ability of analysts to read
– Typically, all sensitive data removed from such information before it is released publicly (called sanitization)
• Add third condition to CW-Simple Security Condition:
3. o is a sanitized object
Writing
• Anthony, Susan work in same trading house
• Anthony can read Bank 1’s CD, Gas’ CD
• Susan can read Bank 2’s CD, Gas’ CD
• If Anthony could write to Gas’ CD, Susan can read it
– Hence, indirectly, she can read information from Bank 1’s CD, a clear conflict of interest
CW-*-Property
• s can write to o iff both of the following hold:
1. The CW-simple security condition permits s to read o; and
2. For all unsanitized objects o′, if s can read o′, then CD(o′) = CD(o)
• Says that s can write to an object if all the (unsanitized) objects it can read are in the same dataset
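Both Chinese Wall conditions can be sketched over a tiny object set. A hedged toy model: the object names, datasets, and COI classes are made up, and sanitized objects are left out for brevity.

```python
# CD maps each object to its company dataset; COI to its conflict class.
CD  = {"boa_memo": "BoA", "citi_memo": "Citi", "shell_memo": "Shell"}
COI = {"boa_memo": "Banks", "citi_memo": "Banks", "shell_memo": "Oil"}

def cw_can_read(pr, o):
    # 1. s already read something in o's dataset, or
    # 2. s has read nothing in o's conflict-of-interest class
    return (any(CD[o2] == CD[o] for o2 in pr)
            or all(COI[o2] != COI[o] for o2 in pr))

def cw_can_write(pr, o):
    # CW-*: s may write o iff s may read o and everything s can
    # read lies in o's dataset.
    readable = {o2 for o2 in CD if cw_can_read(pr, o2)}
    return cw_can_read(pr, o) and all(CD[o2] == CD[o] for o2 in readable)

pr = set()                               # PR(s) starts empty
assert cw_can_read(pr, "boa_memo")       # initial request granted
pr.add("boa_memo")
assert not cw_can_read(pr, "citi_memo")  # same COI, different CD
assert cw_can_read(pr, "shell_memo")     # different COI is fine
assert not cw_can_write(pr, "boa_memo")  # s can still read shell_memo
```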
Compare to Bell-LaPadula
• Fundamentally different
– CW has no security labels, B-LP does
– CW has notion of past accesses, B-LP does not
• Bell-LaPadula can capture state at any time
– Each (COI, CD) pair gets security category
– Two clearances, S (sanitized) and U (unsanitized)
• S dom U
– Subjects assigned clearance for compartments without multiple categories corresponding to CDs in same COI class
Compare to Bell-LaPadula
• Bell-LaPadula cannot track changes over time
– Susan becomes ill, Anna needs to take over
• C-W history lets Anna know if she can
• No way for Bell-LaPadula to capture this
• Access constraints change over time
– Initially, subjects in C-W can read any object
– Bell-LaPadula constrains set of objects that a subject can access
• Can’t clear all subjects for all categories, because this violates CW-simple security condition
Compare to Clark-Wilson

• Clark-Wilson Model covers integrity, so consider only access control aspects
• If “subjects” and “processes” are interchangeable, a single person could use multiple processes to violate CW-simple security condition
– Would still comply with Clark-Wilson Model
• If “subject” is a specific person and includes all processes the subject executes, then consistent with Clark-Wilson Model
Clinical Information Systems Security Policy
• Intended for medical records
– Conflict of interest not critical problem
– Patient confidentiality, authentication of records and annotators, and integrity are
• Entities:
– Patient: subject of medical records (or agent)
– Personal health information: data about patient’s health or treatment enabling identification of patient
– Clinician: health-care professional with access to personal health information while doing job
Assumptions and Principles
• Assumes health information involves 1 person at a time
– Not always true; OB/GYN involves father as well as mother
• Principles derived from medical ethics of various societies, and from practicing clinicians
Access
• Principle 1: Each medical record has an access control list naming the individuals or groups who may read and append information to the record. The system must restrict access to those identified on the access control list.
– Idea is that clinicians need access, but no-one else. Auditors get access to copies, so they cannot alter records
Access
• Principle 2: One of the clinicians on the access control list must have the right to add other clinicians to the access control list.
– Called the responsible clinician
Access
• Principle 3: The responsible clinician must notify the patient of the names on the access control list whenever the patient’s medical record is opened. Except for situations given in statutes, or in cases of emergency, the responsible clinician must obtain the patient’s consent.
– Patient must consent to all treatment, and must know of violations of security
Access
• Principle 4: The name of the clinician, the date, and the time of the access of a medical record must be recorded. Similar information must be kept for deletions.
– This is for auditing. Don’t delete information; update it (last part is for deletion of records after death, for example, or deletion of information when required by statute). Record information about all accesses.
Creation
• Principle: A clinician may open a record, with the clinician and the patient on the access control list. If a record is opened as a result of a referral, the referring clinician may also be on the access control list.
– Creating clinician needs access, and patient should get it. If created from a referral, referring clinician needs access to get results of referral.
Deletion
• Principle: Clinical information cannot be deleted from a medical record until the appropriate time has passed.
– This varies with circumstances.
Confinement
• Principle: Information from one medical record may be appended to a different medical record if and only if the access control list of the second record is a subset of the access control list of the first.
– This keeps information from leaking to unauthorized users. All users have to be on the access control list.
Aggregation
• Principle: Measures for preventing aggregation of patient data must be effective. In particular, a patient must be notified if anyone is to be added to the access control list for the patient’s record and if that person has access to a large number of medical records.
– Fear here is that a corrupt investigator may obtain access to a large number of records, correlate them, and discover private information about individuals which can then be used for nefarious purposes (such as blackmail)
Enforcement
• Principle: Any computer system that handles medical records must have a subsystem that enforces the preceding principles. The effectiveness of this enforcement must be subject to evaluation by independent auditors.
– This policy has to be enforced, and the enforcement mechanisms must be auditable (and audited)
Compare to Bell-LaPadula
• Confinement Principle imposes lattice structure on entities in model
– Similar to Bell-LaPadula
• CISS focuses on objects being accessed; B-LP on the subjects accessing the objects
– May matter when looking for insiders in the medical environment
Compare to Clark-Wilson

– CDIs are medical records
– TPs are functions updating records, access control lists
– IVPs certify:
• A person identified as a clinician is a clinician;
• A clinician validates, or has validated, information in the medical record;
• When someone is to be notified of an event, such notification occurs; and
• When someone must give consent, the operation cannot proceed until the consent is obtained
– Auditing (CR4) requirement: make all records append-only, notify patient when access control list changed
ORCON
• Problem: organization creating document wants to control its dissemination
– Example: Secretary of Agriculture writes a memo for distribution to her immediate subordinates, and she must give permission for it to be disseminated further. This is “originator controlled” (here, the “originator” is a person).
Requirements
• Subject s ∈ S marks object o ∈ O as ORCON on behalf of organization X. X allows o to be disclosed to subjects acting on behalf of organization Y with the following restrictions:
1. o cannot be released to subjects acting on behalf of other organizations without X’s permission; and
2. Any copies of o must have the same restrictions placed on it.
DAC Fails
• Owner can set any desired permissions
– This makes restriction 2 unenforceable
MAC Fails
• First problem: category explosion
– Category C contains o, X, Y, and nothing else. If a subject y ∈ Y wants to read o, x ∈ X makes a copy o′. Note o′ has category C. If y wants to give z ∈ Z a copy, z must be in Y; by definition, it’s not. If x wants to let w ∈ W see the document, need a new category C′ containing o, X, W.
• Second problem: abstraction
– MAC classification, categories centrally controlled, and access controlled by a centralized policy
– ORCON controlled locally
Combine Them
• The owner of an object cannot change the access controls of the object.
• When an object is copied, the access control restrictions of that source are copied and bound to the target of the copy.
– These are MAC (owner can’t control them)
• The creator (originator) can alter the access control restrictions on a per-subject and per-object basis.
– This is DAC (owner can control it)
RBAC
• Access depends on function, not identity
– Example:
• Allison, bookkeeper for Math Dept, has access to financial records.
• She leaves.
• Betty hired as the new bookkeeper, so she now has access to those records
– The role of “bookkeeper” dictates access, not the identity of the individual.
Definitions
• Role r: collection of job functions
– trans(r): set of authorized transactions for r
• Active role of subject s: role s is currently in
– actr(s)
• Authorized roles of a subject s: set of roles s is authorized to assume
– authr(s)
• canexec(s, t) iff subject s can execute transaction t at current time
Axioms
• Let S be the set of subjects and T the set of transactions.
• Rule of role assignment:
(∀s ∈ S)(∀t ∈ T) [canexec(s, t) → actr(s) ≠ ∅]
– If s can execute a transaction, it has a role
– This ties transactions to roles
• Rule of role authorization:
(∀s ∈ S) [actr(s) ⊆ authr(s)]
– Subject must be authorized to assume an active role (otherwise, any subject could assume any role)
Axiom
• Rule of transaction authorization:
(∀s ∈ S)(∀t ∈ T) [canexec(s, t) → t ∈ trans(actr(s))]
– If a subject s can execute a transaction, then the transaction is an authorized one for the role s has assumed
Containment of Roles
• Trainer can do all transactions that trainee can do (and then some). This means role r contains role r′ (r > r′). So:
(∀s ∈ S) [r′ ∈ authr(s) ∧ r > r′ → r ∈ authr(s)]
Separation of Duty
• Let r be a role, and let s be a subject such that r ∈ authr(s). Then the predicate meauth(r) (for mutually exclusive authorizations) is the set of roles that s cannot assume because of the separation of duty requirement.
• Separation of duty:
(∀r1, r2 ∈ R) [r2 ∈ meauth(r1) → [(∀s ∈ S) [r1 ∈ authr(s) → r2 ∉ authr(s)]]]
Key Points
• Hybrid policies deal with both confidentiality and integrity
– Different combinations of these
• ORCON model neither MAC nor DAC
– Actually, a combination
• RBAC model controls access based on functionality
Chapter 8: Basic Cryptography
• Classical Cryptography
• Public Key Cryptography
• Cryptographic Checksums
Overview
• Classical Cryptography
– Cæsar cipher
– Vigenère cipher
– DES
• Public Key Cryptography
– Diffie-Hellman
– RSA
• Cryptographic Checksums
– HMAC
Cryptosystem
• Quintuple (E, D, M, K, C)
– M set of plaintexts
– K set of keys
– C set of ciphertexts
– E set of encryption functions e: M × K → C
– D set of decryption functions d: C × K → M
Example
• Example: Cæsar cipher
– M = { sequences of letters }
– K = { i | i is an integer and 0 ≤ i ≤ 25 }
– E = { Ek | k ∈ K and for all letters m, Ek(m) = (m + k) mod 26 }
– D = { Dk | k ∈ K and for all letters c, Dk(c) = (26 + c – k) mod 26 }
– C = M
Attacks
• Opponent whose goal is to break cryptosystem is the adversary
– Assume adversary knows algorithm used, but not key
• Three types of attacks:
– ciphertext only: adversary has only ciphertext; goal is to find plaintext, possibly key
– known plaintext: adversary has ciphertext, corresponding plaintext; goal is to find key
– chosen plaintext: adversary may supply plaintexts and obtain corresponding ciphertext; goal is to find key
Basis for Attacks
• Mathematical attacks
– Based on analysis of underlying mathematics
• Statistical attacks
– Make assumptions about the distribution of letters, pairs of letters (digrams), triplets of letters (trigrams), etc.
• Called models of the language
– Examine ciphertext, correlate properties with the assumptions.
Classical Cryptography
• Sender, receiver share common key
– Keys may be the same, or trivial to derive from one another
– Sometimes called symmetric cryptography
• Two basic types
– Transposition ciphers
– Substitution ciphers
– Combinations are called product ciphers
Transposition Cipher
• Rearrange letters in plaintext to produce ciphertext
• Example (Rail-Fence Cipher)
– Plaintext is HELLO WORLD
– Rearrange as
HLOOL
ELWRD
– Ciphertext is HLOOL ELWRD
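The two-rail version shown is just an even/odd split of the letters:

```python
# Two-rail rail-fence: even-position letters form the top rail,
# odd-position letters the bottom; ciphertext is top then bottom.
def rail_fence(pt):
    pt = pt.replace(" ", "")
    return pt[0::2] + pt[1::2]

assert rail_fence("HELLO WORLD") == "HLOOLELWRD"
```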
Attacking the Cipher
• Anagramming
– If 1-gram frequencies match English frequencies, but other n-gram frequencies do not, probably transposition
– Rearrange letters to form n-grams with highest frequencies
Example
• Ciphertext: HLOOLELWRD
• Frequencies of 2-grams beginning with H
– HE 0.0305
– HO 0.0043
– HL, HW, HR, HD < 0.0010
• Frequencies of 2-grams ending in H
– WH 0.0026
– EH, LH, OH, RH, DH ≤ 0.0002
• Implies E follows H
Example
• Arrange so the H and E are adjacent
HE
LL
OW
OR
LD
• Read off across, then down, to get original plaintext
Substitution Ciphers
• Change characters in plaintext to produce ciphertext
• Example (Cæsar cipher)
– Plaintext is HELLO WORLD
– Change each letter to the third letter following it (X goes to A, Y to B, Z to C)
• Key is 3, usually written as letter ‘D’
– Ciphertext is KHOOR ZRUOG
Attacking the Cipher
• Exhaustive search
– If the key space is small enough, try all possible keys until you find the right one
– Cæsar cipher has 26 possible keys
• Statistical analysis
– Compare to 1-gram model of English
Statistical Attack
• Compute frequency of each letter in ciphertext:
G 0.1   H 0.1   K 0.1   O 0.3
R 0.2   U 0.1   Z 0.1
• Apply 1-gram model of English
– Frequency of characters (1-grams) in English is on next slide
Character Frequencies
a 0.080   h 0.060   o 0.080   v 0.010
b 0.015   i 0.065   p 0.020   w 0.015
c 0.030   j 0.005   q 0.002   x 0.005
d 0.040   k 0.005   r 0.065   y 0.020
e 0.130   l 0.035   s 0.060   z 0.002
f 0.020   m 0.030   t 0.090
g 0.015   n 0.070   u 0.030
Statistical Analysis
• f(c) frequency of character c in ciphertext
• ϕ(i) correlation of frequency of letters in ciphertext with corresponding letters in English, assuming key is i
– ϕ(i) = Σ0 ≤ c ≤ 25 f(c)p(c – i), so here
ϕ(i) = 0.1p(6 – i) + 0.1p(7 – i) + 0.1p(10 – i) + 0.3p(14 – i) + 0.2p(17 – i) + 0.1p(20 – i) + 0.1p(25 – i)
• p(x) is frequency of character x in English
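The correlation can be computed for every candidate shift at once. A sketch using the English 1-gram frequencies tabulated on the character-frequencies slide:

```python
# phi(i) = sum over ciphertext letters c of f(c) * p(c - i).
from collections import Counter

P = {"A": .080, "B": .015, "C": .030, "D": .040, "E": .130, "F": .020,
     "G": .015, "H": .060, "I": .065, "J": .005, "K": .005, "L": .035,
     "M": .030, "N": .070, "O": .080, "P": .020, "Q": .002, "R": .065,
     "S": .060, "T": .090, "U": .030, "V": .010, "W": .015, "X": .005,
     "Y": .020, "Z": .002}
AZ = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def phi(ct, i):
    f = Counter(ct)
    n = len(ct)
    return sum((f[c] / n) * P[AZ[(AZ.index(c) - i) % 26]] for c in f)

ct = "KHOORZRUOG"
ranked = sorted(range(26), key=lambda i: phi(ct, i), reverse=True)
# phi is highest for i = 6, but only i = 3 decrypts to English
assert ranked[0] == 6
assert abs(phi(ct, 3) - 0.0575) < 1e-6
```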
Correlation: ϕ(i) for 0 ≤ i ≤ 25

i  ϕ(i)      i   ϕ(i)      i   ϕ(i)      i   ϕ(i)
0  0.0482    7   0.0442    14  0.0535    21  0.0517
1  0.0364    8   0.0202    15  0.0226    22  0.0380
2  0.0410    9   0.0267    16  0.0322    23  0.0370
3  0.0575    10  0.0635    17  0.0392    24  0.0316
4  0.0252    11  0.0262    18  0.0299    25  0.0430
5  0.0190    12  0.0325    19  0.0315
6  0.0660    13  0.0520    20  0.0302
The Result
• Most probable keys, based on ϕ:
– i = 6, ϕ(i) = 0.0660
• plaintext EBIIL TLOIA
– i = 10, ϕ(i) = 0.0635
• plaintext AXEEH PHKEW
– i = 3, ϕ(i) = 0.0575
• plaintext HELLO WORLD
– i = 14, ϕ(i) = 0.0535
• plaintext WTAAD LDGAS
• Only English phrase is for i = 3
– That’s the key (3 or ‘D’)
Cæsar’s Problem
• Key is too short
– Can be found by exhaustive search
– Statistical frequencies not concealed well
• They look too much like regular English letters
• So make it longer
– Multiple letters in key
– Idea is to smooth the statistical frequencies to make cryptanalysis harder
Vigenère Cipher

• Like Cæsar cipher, but use a phrase
• Example
– Message THE BOY HAS THE BALL
– Key VIG
– Encipher using Cæsar cipher for each letter:
key    VIGVIGVIGVIGVIGV
plain  THEBOYHASTHEBALL
cipher OPKWWECIYOPKWIRG
Relevant Parts of Tableau
• Tableau shown has relevant rows, columns only:
    G I V
A   G I V
B   H J W
E   K M Z
H   N P C
L   R T G
O   U W J
S   Y A N
T   Z B O
Y   E G T
• Example encipherments:
– key V, letter T: follow V column down to T row (giving “O”)
– key I, letter H: follow I column down to H row (giving “P”)
Useful Terms
• period: length of key
– In earlier example, period is 3
• tableau: table used to encipher and decipher
– Vigenère cipher has key letters on top, plaintext letters on the left
• polyalphabetic: the key has several different letters
– Cæsar cipher is monoalphabetic
Attacking the Cipher
• Approach
– Establish period; call it n
– Break message into n parts, each part being enciphered using the same key letter
– Solve each part
• You can leverage one part from another
• We will show each step
The Target Cipher
• We want to break this cipher:
ADQYS MIUSB OXKKT MIBHK IZOOO
EQOOG IFBAG KAUMF VVTAA CIDTW
MOCIO EQOOG BMBFV ZGGWP CIEKQ
HSNEW VECNE DLAAV RWKXS VNSVP
HCEUT QOIOF MEGJS WTPCH AJMOC
HIUIX
Establish Period
• Kasiski: repetitions in the ciphertext occur when characters of the key appear over the same characters in the plaintext
• Example:
key    VIGVIGVIGVIGVIGV
plain  THEBOYHASTHEBALL
cipher OPKWWECIYOPKWIRG
Note the key and plaintext line up over the repetitions (underlined). As distance between repetitions is 9, the period is a factor of 9 (that is, 1, 3, or 9)
Repetitions in Example
Letters  Start  End  Distance  Factors
MI         5     15    10      2, 5
OO        22     27     5      5
OEQOOG    24     54    30      2, 3, 5
FV        39     63    24      2, 2, 2, 3
AA        43     87    44      2, 2, 11
MOC       50    122    72      2, 2, 2, 3, 3
QO        56    105    49      7, 7
PC        69    117    48      2, 2, 2, 2, 3
NE        77     83     6      2, 3
SV        94     97     3      3
CH       118    124     6      2, 3
Estimate of Period
• OEQOOG is probably not a coincidence
– It’s too long for that
– Period may be 1, 2, 3, 5, 6, 10, 15, or 30
• Most others (7/10) have 2 in their factors
• Almost as many (6/10) have 3 in their factors
• Begin with period of 2 × 3 = 6
Check on Period
• Index of coincidence is probability that two randomly chosen letters from ciphertext will be the same
• Tabulated for different periods:
  1  0.066    3  0.047     5  0.044
  2  0.052    4  0.045    10  0.041
  Large  0.038
Compute IC
• IC = [n(n – 1)]⁻¹ Σ0≤i≤25 [Fi(Fi – 1)]
– where n is the length of the ciphertext and Fi the number of times character i occurs in the ciphertext
• Here, IC = 0.043
– Indicates a key of slightly more than 5
– A statistical measure, so it can be in error, but it agrees with the previous estimate (which was 6)
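The formula translates directly into a short sketch (`Counter` tallies the Fi):

```python
# Index of coincidence: probability that two randomly chosen letters
# of the text are the same.
from collections import Counter

def ic(text):
    n = len(text)
    return sum(f * (f - 1) for f in Counter(text).values()) / (n * (n - 1))

print(ic("AAAA"))              # 1.0 (all letters identical)
print(round(ic("AABBC"), 2))   # 0.2
```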
Splitting Into Alphabets
alphabet 1: AIKHOIATTOBGEEERNEOSAI
alphabet 2: DUKKEFUAWEMGKWDWSUFWJU
alphabet 3: QSTIQBMAMQBWQVLKVTMTMI
alphabet 4: YBMZOAFCOOFPHEAXPQEPOX
alphabet 5: SOIOOGVICOVCSVASHOGCC
alphabet 6: MXBOGKVDIGZINNVVCIJHH
• ICs (#1, 0.069; #2, 0.078; #3, 0.078; #4, 0.056; #5, 0.124; #6, 0.043) indicate all alphabets have period 1, except #4 and #6; assume statistics off
Frequency Examination
  ABCDEFGHIJKLMNOPQRSTUVWXYZ
1 31004011301001300112000000
2 10022210013010000010404000
3 12000000201140004013021000
4 21102201000010431000000211
5 10500021200000500030020000
6 01110022311012100000030101
Letter frequencies in English are (H high, M medium, L low):
  HMMMHMMHHMMMMHHMLHHHMLLLLL
Begin Decryption
• First matches characteristics of unshifted alphabet
• Third matches if I shifted to A
• Sixth matches if V shifted to A
• Substitute into ciphertext (bold are substitutions)
ADIYS RIUKB OCKKL MIGHK AZOTO
EIOOL IFTAG PAUEF VATAS CIITW
EOCNO EIOOL BMTFV EGGOP CNEKI
HSSEW NECSE DDAAA RWCXS ANSNP
HHEUL QONOF EEGOS WLPCM AJEOC
MIUAX
Look For Clues
• AJE in last line suggests “are”, meaning second alphabet maps A into S:
ALIYS RICKB OCKSL MIGHS AZOTO
MIOOL INTAG PACEF VATIS CIITE
EOCNO MIOOL BUTFV EGOOP CNESI
HSSEE NECSE LDAAA RECXS ANANP
HHECL QONON EEGOS ELPCM AREOC
MICAX
Next Alphabet
• MICAX in last line suggests “mical” (a common ending for an adjective), meaning fourth alphabet maps O into A:
ALIMS RICKP OCKSL AIGHS ANOTO
MICOL INTOG PACET VATIS QIITE
ECCNO MICOL BUTTV EGOOD CNESI
VSSEE NSCSE LDOAA RECLS ANAND
HHECL EONON ESGOS ELDCM ARECC
MICAL
Got It!
• QI means that U maps into I, as Q is always followed by U:
ALIME RICKP ACKSL AUGHS ANATO
MICAL INTOS PACET HATIS QUITE
ECONO MICAL BUTTH EGOOD ONESI
VESEE NSOSE LDOMA RECLE ANAND
THECL EANON ESSOS ELDOM ARECO
MICAL
One-Time Pad
• A Vigenère cipher with a random key at least as long as the message
– Provably unbreakable
– Why? Look at ciphertext DXQR. Equally likely to correspond to plaintext DOIT (key AJIY) and to plaintext DONT (key AJDY) and any other 4 letters
– Warning: keys must be random, or you can attack the cipher by trying to regenerate the key
• Approximations, such as using pseudorandom number generators to generate keys, are not random
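The unbreakability argument can be made concrete: for any guessed plaintext there is a key that “explains” the ciphertext. A sketch using the same mod-26 letter arithmetic as the Vigenère examples:

```python
# Recover the key that would map a guessed plaintext to the ciphertext;
# every guess of the right length succeeds, so the ciphertext alone
# reveals nothing about the plaintext.
def key_for(plain, cipher):
    return ''.join(chr((ord(c) - ord(p)) % 26 + ord('A'))
                   for p, c in zip(plain, cipher))

print(key_for("DOIT", "DXQR"))   # AJIY
print(key_for("DONT", "DXQR"))   # AJDY
```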
Overview of the DES
• A block cipher:
– encrypts blocks of 64 bits using a 64-bit key
– outputs 64 bits of ciphertext
• A product cipher
– basic unit is the bit
– performs both substitution and transposition (permutation) on the bits
• Cipher consists of 16 rounds (iterations), each with a round key generated from the user-supplied key
Generation of Round Keys

[Figure: the key passes through permutation PC-1 and is split into halves C0 and D0; each round left-shifts (LSH) the halves to produce Ci and Di, and PC-2 selects bits from them to form round key Ki, through K16]

• Round keys are 48 bits each
Encipherment

[Figure: the input passes through initial permutation IP, yielding L0 and R0; each round computes Li = Ri–1 and Ri = Li–1 ⊕ f(Ri–1, Ki); after the last round, R16 = L15 ⊕ f(R15, K16) and L16 = R15, and the inverse permutation IP⁻¹ produces the output]
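The round structure is a Feistel network: the same code deciphers when the round keys are fed in reverse order, regardless of the round function. A toy sketch (the round function and keys here are arbitrary stand-ins, not DES’s f or key schedule):

```python
# Toy Feistel network on a pair of 16-bit halves.
def f(r, k):                 # stand-in round function; need not be invertible
    return (r * 31 + k) & 0xFFFF

def feistel(left, right, keys):
    for k in keys:
        left, right = right, left ^ f(right, k)
    return right, left       # final swap, as in DES

keys = [3, 14, 15, 92]
l, r = feistel(0x1234, 0x5678, keys)
print(feistel(l, r, keys[::-1]) == (0x1234, 0x5678))   # True
```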
The f Function

[Figure: Ri–1 (32 bits) is expanded by E to 48 bits and XORed with Ki (48 bits); the result is split into eight 6-bit groups, 6 bits into each of the S-boxes S1–S8; each S-box outputs 4 bits, and the resulting 32 bits are permuted by P]
Controversy
• Considered too weak
– Diffie, Hellman said in a few years technology would allow DES to be broken in days
– Design using 1999 technology published
• Design decisions not public
– S-boxes may have backdoors
Undesirable Properties
• 4 weak keys
– They are their own inverses
• 12 semi-weak keys
– Each has another semi-weak key as inverse
• Complementation property
– DESk(m) = c ⇒ DESk′(m′) = c′ (where ′ denotes bitwise complement)
• S-boxes exhibit irregular properties
– Distribution of odd, even numbers non-random
– Outputs of fourth box depend on input to third box
Differential Cryptanalysis
• A chosen plaintext attack
– Requires 2⁴⁷ plaintext, ciphertext pairs
• Revealed several properties
– Small changes in S-boxes reduce the number of pairs needed
– Making every bit of the round keys independent does not impede attack
• Linear cryptanalysis improves result
– Requires 2⁴³ plaintext, ciphertext pairs
DES Modes
• Electronic Code Book Mode (ECB)
– Encipher each block independently
• Cipher Block Chaining Mode (CBC)
– XOR each block with previous ciphertext block
– Requires an initialization vector for the first one
• Encrypt-Decrypt-Encrypt Mode (2 keys: k, k′)
– c = DESk(DESk′⁻¹(DESk(m)))
• Encrypt-Encrypt-Encrypt Mode (3 keys: k, k′, k′′)
– c = DESk(DESk′(DESk′′(m)))
CBC Mode Encryption

[Figure: m1 is XORed with the initialization vector and enciphered with DES to give c1, which is sent; each subsequent block mi is XORed with the previous ciphertext block before encipherment, ci = DES(mi ⊕ ci–1)]
CBC Mode Decryption

[Figure: c1 is deciphered with DES and XORed with the initialization vector to recover m1; each subsequent block is recovered as mi = DES⁻¹(ci) ⊕ ci–1]
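The chaining in both directions can be sketched with a toy block cipher standing in for DES (byte-wise XOR with the key; illustrative only, not secure):

```python
# CBC: each plaintext block is XORed with the previous ciphertext
# block (or the IV) before encipherment.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc(k, block):           # toy block cipher; XOR is its own inverse
    return xor(k, block)

dec = enc

def cbc_encrypt(k, iv, blocks):
    out, prev = [], iv
    for m in blocks:
        prev = enc(k, xor(m, prev))
        out.append(prev)
    return out

def cbc_decrypt(k, iv, blocks):
    out, prev = [], iv
    for c in blocks:
        out.append(xor(dec(k, c), prev))
        prev = c
    return out

k, iv = b"\x13" * 8, b"\x00" * 8
msg = [b"ABCDEFGH", b"IJKLMNOP", b"QRSTUVWX"]
ct = cbc_encrypt(k, iv, msg)
assert cbc_decrypt(k, iv, ct) == msg

# Self-healing: corrupt c1 -- blocks 1 and 2 garble, block 3 decrypts fine
bad = [bytes([ct[0][0] ^ 1]) + ct[0][1:]] + ct[1:]
pt = cbc_decrypt(k, iv, bad)
print(pt[0] != msg[0], pt[1] != msg[1], pt[2] == msg[2])   # True True True
```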
Self-Healing Property
• Initial message
– 3231343336353837 3231343336353837 3231343336353837 3231343336353837
• Received as (the 4c in the first block should be 4b)
– ef7c4cb2b4ce6f3b f6266e3a97af0e2c 746ab9a6308f4256 33e60b451b09603d
• Which decrypts to
– efca61e19f4836f1 3231333336353837 3231343336353837 3231343336353837
– Incorrect bytes are confined to the first two blocks
– Plaintext “heals” after 2 blocks
Current Status of DES
• Design for a computer system and associated software that could break any DES-enciphered message in a few days published in 1998
• Several challenges to break DES messages solved using distributed computing
• NIST selected Rijndael as Advanced Encryption Standard, successor to DES
– Designed to withstand attacks that were successful on DES
Public Key Cryptography
• Two keys
– Private key known only to individual
– Public key available to anyone
• Public key, private key inverses
• Idea
– Confidentiality: encipher using public key, decipher using private key
– Integrity/authentication: encipher using private key, decipher using public one
Requirements
1. It must be computationally easy to encipher or decipher a message given the appropriate key
2. It must be computationally infeasible to derive the private key from the public key
3. It must be computationally infeasible to determine the private key from a chosen plaintext attack
RSA
• Exponentiation cipher
• Relies on the difficulty of determining the number of numbers relatively prime to a large integer n
Background
• Totient function φ(n)
– Number of positive integers less than n and relatively prime to n
• Relatively prime means with no factors in common with n
• Example: φ(10) = 4
– 1, 3, 7, 9 are relatively prime to 10
• Example: φ(21) = 12
– 1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20 are relatively prime to 21
Algorithm
• Choose two large prime numbers p, q
– Let n = pq; then φ(n) = (p–1)(q–1)
– Choose e < n such that e is relatively prime to φ(n)
– Compute d such that ed mod φ(n) = 1
• Public key: (e, n); private key: d
• Encipher: c = mᵉ mod n
• Decipher: m = cᵈ mod n
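The toy numbers used in the examples that follow can be checked directly; Python’s three-argument `pow` is modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse:

```python
# RSA with the toy parameters from the examples: p = 7, q = 11, e = 17.
p, q, e = 7, 11, 17
n, phi = p * q, (p - 1) * (q - 1)     # n = 77, phi(n) = 60
d = pow(e, -1, phi)                   # d = 53, since 17 * 53 mod 60 = 1

msg = [7, 4, 11, 11, 14]              # HELLO as 07 04 11 11 14
ct = [pow(m, e, n) for m in msg]      # encipher: c = m^e mod n
pt = [pow(c, d, n) for c in ct]       # decipher: m = c^d mod n
print(ct)          # [28, 16, 44, 44, 42]
print(pt == msg)   # True
```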
Example: Confidentiality
• Take p = 7, q = 11, so n = 77 and φ(n) = 60
• Alice chooses e = 17, making d = 53
• Bob wants to send Alice secret message HELLO (07 04 11 11 14)
– 07¹⁷ mod 77 = 28
– 04¹⁷ mod 77 = 16
– 11¹⁷ mod 77 = 44
– 11¹⁷ mod 77 = 44
– 14¹⁷ mod 77 = 42
• Bob sends 28 16 44 44 42
Example
• Alice receives 28 16 44 44 42
• Alice uses private key, d = 53, to decrypt message:
– 28⁵³ mod 77 = 07
– 16⁵³ mod 77 = 04
– 44⁵³ mod 77 = 11
– 44⁵³ mod 77 = 11
– 42⁵³ mod 77 = 14
• Alice translates message to letters to read HELLO
– No one else could read it, as only Alice knows her private key and that is needed for decryption
Example: Integrity/Authentication
• Take p = 7, q = 11, so n = 77 and φ(n) = 60
• Alice chooses e = 17, making d = 53
• Alice wants to send Bob message HELLO (07 04 11 11 14) so Bob knows it is what Alice sent (no changes in transit, and authenticated)
– 07⁵³ mod 77 = 35
– 04⁵³ mod 77 = 09
– 11⁵³ mod 77 = 44
– 11⁵³ mod 77 = 44
– 14⁵³ mod 77 = 49
• Alice sends 35 09 44 44 49
Example
• Bob receives 35 09 44 44 49
• Bob uses Alice’s public key, e = 17, n = 77, to decrypt message:
– 35¹⁷ mod 77 = 07
– 09¹⁷ mod 77 = 04
– 44¹⁷ mod 77 = 11
– 44¹⁷ mod 77 = 11
– 49¹⁷ mod 77 = 14
• Bob translates message to letters to read HELLO
– Alice sent it, as only she knows her private key, so no one else could have enciphered it
– If the (enciphered) message’s blocks (letters) were altered in transit, they would not decrypt properly
Example: Both
• Alice wants to send Bob message HELLO both enciphered and authenticated (integrity-checked)
– Alice’s keys: public: (17, 77); private: 53
– Bob’s keys: public: (37, 77); private: 13
• Alice enciphers HELLO (07 04 11 11 14):
– (07⁵³ mod 77)³⁷ mod 77 = 07
– (04⁵³ mod 77)³⁷ mod 77 = 37
– (11⁵³ mod 77)³⁷ mod 77 = 44
– (11⁵³ mod 77)³⁷ mod 77 = 44
– (14⁵³ mod 77)³⁷ mod 77 = 14
• Alice sends 07 37 44 44 14
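The double exponentiation can be checked the same way (Alice’s private key 53, Bob’s public key 37, all mod 77):

```python
# Sign with Alice's private key, then encipher with Bob's public key.
dA, eB, n = 53, 37, 77
msg = [7, 4, 11, 11, 14]
ct = [pow(pow(m, dA, n), eB, n) for m in msg]
print(ct)   # [7, 37, 44, 44, 14]

# Bob inverts: decipher with his private key, verify with Alice's public key
dB, eA = 13, 17
print([pow(pow(c, dB, n), eA, n) for c in ct] == msg)   # True
```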
Security Services
• Confidentiality
– Only the owner of the private key knows it, so text enciphered with public key cannot be read by anyone except the owner of the private key
• Authentication
– Only the owner of the private key knows it, so text enciphered with private key must have been generated by the owner
More Security Services
• Integrity– Enciphered letters cannot be changed
undetectably without knowing private key• Non-Repudiation
– Message enciphered with private key came from someone who knew it
Warnings
• Encipher message in blocks considerably larger than the examples here– If 1 character per block, RSA can be broken
using statistical attacks (just like classical cryptosystems)
– Attacker cannot alter letters, but can rearrange them and alter message meaning
• Example: reverse enciphered message of text ON to get NO
Cryptographic Checksums
• Mathematical function to generate a set of k bits from a set of n bits (where k ≤ n)
– k is smaller than n except in unusual circumstances
• Example: ASCII parity bit
– ASCII has 7 bits; 8th bit is “parity”
– Even parity: even number of 1 bits
– Odd parity: odd number of 1 bits
Example Use
• Bob receives “10111101” as bits
– Sender is using even parity; six 1 bits, so character was received correctly
• Note: could be garbled, but 2 bits would need to have been changed to preserve parity
– Sender is using odd parity; even number of 1 bits, so character was not received correctly
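A sketch of the check Bob performs (the byte is taken as a bit string for simplicity):

```python
# Count 1 bits and compare against the parity convention in use.
def ones(bits):
    return bits.count("1")

received = "10111101"
print(ones(received))              # 6
print(ones(received) % 2 == 0)     # True: consistent with even parity
print(ones(received) % 2 == 1)     # False: fails an odd-parity check
```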
Definition
• Cryptographic checksum h: A→B:
1. For any x ∈ A, h(x) is easy to compute
2. For any y ∈ B, it is computationally infeasible to find x ∈ A such that h(x) = y
3. It is computationally infeasible to find two inputs x, x′ ∈ A such that x ≠ x′ and h(x) = h(x′)
– Alternate form (stronger): Given any x ∈ A, it is computationally infeasible to find a different x′ ∈ A such that h(x) = h(x′)
Collisions
• If x ≠ x′ and h(x) = h(x′), x and x′ are a collision
– Pigeonhole principle: if there are n containers for n+1 objects, then at least one container will have 2 objects in it
– Application: if there are 32 files and 8 possible cryptographic checksum values, at least one value corresponds to at least 4 files
Keys
• Keyed cryptographic checksum: requires cryptographic key
– DES in chaining mode: encipher message, use last n bits. Requires a key to encipher, so it is a keyed cryptographic checksum
• Keyless cryptographic checksum: requires no cryptographic key
– MD5 and SHA-1 are best known; others include MD4, HAVAL, and Snefru
HMAC
• Make keyed cryptographic checksums from keyless cryptographic checksums
• h keyless cryptographic checksum function that takes data in blocks of b bytes and outputs blocks of l bytes; k′ is cryptographic key of length b bytes
– If short, pad with 0 bytes; if long, hash to length b
• ipad is 00110110 repeated b times
• opad is 01011100 repeated b times
• HMAC-h(k, m) = h(k′ ⊕ opad || h(k′ ⊕ ipad || m))
– ⊕ exclusive or, || concatenation
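The construction can be written out with SHA-1 as h (b = 64 bytes for SHA-1) and checked against the standard library’s hmac module:

```python
# HMAC from a keyless hash: h(k' xor opad || h(k' xor ipad || m)).
import hashlib
import hmac

def my_hmac_sha1(key, msg, b=64):
    if len(key) > b:                      # if long, hash to length b
        key = hashlib.sha1(key).digest()
    key = key.ljust(b, b"\x00")           # if short, pad with 0 bytes
    ipad = bytes(x ^ 0x36 for x in key)   # 0x36 = 00110110
    opad = bytes(x ^ 0x5C for x in key)   # 0x5C = 01011100
    inner = hashlib.sha1(ipad + msg).digest()
    return hashlib.sha1(opad + inner).digest()

print(my_hmac_sha1(b"key", b"msg") ==
      hmac.new(b"key", b"msg", hashlib.sha1).digest())   # True
```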
Key Points
• Two main types of cryptosystems: classical and public key
• Classical cryptosystems encipher and decipher using the same key
– Or one key is easily derived from the other
• Public key cryptosystems encipher and decipher using different keys
– Computationally infeasible to derive one from the other
• Cryptographic checksums provide a check on integrity
Chapter 9: Key Management
• Session and Interchange Keys
• Key Exchange
• Cryptographic Key Infrastructure
• Storing and Revoking Keys
• Digital Signatures
Overview
• Key exchange
– Session vs. interchange keys
– Classical, public key methods
• Cryptographic key infrastructure
– Certificates
• Key storage
– Key revocation
• Digital signatures
Notation
• X → Y : { Z || W } kX,Y
– X sends Y the message produced by concatenating Z and W, enciphered by key kX,Y, which is shared by users X and Y
• A → T : { Z } kA || { W } kA,T
– A sends T a message consisting of the concatenation of Z enciphered using kA, A’s key, and W enciphered using kA,T, the key shared by A and T
• r1, r2 nonces (nonrepeating random numbers)
Session, Interchange Keys
• Alice wants to send a message m to Bob
– Assume public key encryption
– Alice generates a random cryptographic key ks and uses it to encipher m
• To be used for this message only
• Called a session key
– She enciphers ks with Bob’s public key kB
• kB enciphers all session keys Alice uses to communicate with Bob
• Called an interchange key
– Alice sends { m } ks || { ks } kB
Benefits
• Limits amount of traffic enciphered with single key
– Standard practice, to decrease the amount of traffic an attacker can obtain
• Prevents some attacks
– Example: Alice will send Bob a message that is either “BUY” or “SELL”. Eve computes possible ciphertexts { “BUY” } kB and { “SELL” } kB. Eve intercepts the enciphered message, compares, and gets the plaintext at once
Key Exchange Algorithms
• Goal: Alice, Bob get shared key
– Key cannot be sent in clear
• Attacker can listen in
• Key can be sent enciphered, or derived from exchanged data plus data not known to an eavesdropper
– Alice, Bob may trust third party
– All cryptosystems, protocols publicly known
• Only secret data is the keys, ancillary information known only to Alice and Bob needed to derive keys
• Anything transmitted is assumed known to attacker
Classical Key Exchange
• Bootstrap problem: how do Alice, Bob begin?
– Alice can’t send it to Bob in the clear!
• Assume trusted third party, Cathy
– Alice and Cathy share secret key kA
– Bob and Cathy share secret key kB
• Use this to exchange shared key ks
Simple Protocol
Alice → Cathy : { request for session key to Bob } kA
Cathy → Alice : { ks } kA || { ks } kB
Alice → Bob : { ks } kB
Problems
• How does Bob know he is talking to Alice?
– Replay attack: Eve records message from Alice to Bob, later replays it; Bob may think he’s talking to Alice, but he isn’t
– Session key reuse: Eve replays message from Alice to Bob, so Bob re-uses session key
• Protocols must provide authentication and defense against replay
Needham-Schroeder
Alice → Cathy : Alice || Bob || r1
Cathy → Alice : { Alice || Bob || r1 || ks || { Alice || ks } kB } kA
Alice → Bob : { Alice || ks } kB
Bob → Alice : { r2 } ks
Alice → Bob : { r2 – 1 } ks
Argument: Alice talking to Bob
• Second message
– Enciphered using key only she and Cathy know
• So Cathy enciphered it
– Response to first message
• As r1 in it matches r1 in first message
• Third message
– Alice knows only Bob can read it
• As only Bob can derive session key from message
– Any messages enciphered with that key are from Bob
Argument: Bob talking to Alice
• Third message
– Enciphered using key only he and Cathy know
• So Cathy enciphered it
– Names Alice, session key
• Cathy provided session key, says Alice is other party
• Fourth message
– Uses session key to determine if it is replay from Eve
• If not, Alice will respond correctly in fifth message
• If so, Eve can’t decipher r2 and so can’t respond, or responds incorrectly
Denning-Sacco Modification
• Assumption: all keys are secret
• Question: suppose Eve can obtain the session key. How does that affect the protocol?
– In what follows, Eve knows ks
Eve → Bob : { Alice || ks } kB
Bob → Eve : { r2 } ks
Eve → Bob : { r2 – 1 } ks
Solution
• In protocol above, Eve impersonates Alice
• Problem: replay in third step
– First in previous slide
• Solution: use time stamp T to detect replay
• Weakness: if clocks not synchronized, may either reject valid messages or accept replays
– Parties with either slow or fast clocks vulnerable to replay
– Resetting clock does not eliminate vulnerability
Needham-Schroeder with Denning-Sacco Modification
Alice → Cathy : Alice || Bob || r1
Cathy → Alice : { Alice || Bob || r1 || ks || { Alice || T || ks } kB } kA
Alice → Bob : { Alice || T || ks } kB
Bob → Alice : { r2 } ks
Alice → Bob : { r2 – 1 } ks
Otway-Rees Protocol
• Corrects problem
– That is, Eve replaying the third message in the protocol
• Does not use timestamps
– Not vulnerable to the problems that the Denning-Sacco modification has
• Uses integer n to associate all messages with a particular exchange
The Protocol
Alice → Bob : n || Alice || Bob || { r1 || n || Alice || Bob } kA
Bob → Cathy : n || Alice || Bob || { r1 || n || Alice || Bob } kA || { r2 || n || Alice || Bob } kB
Cathy → Bob : n || { r1 || ks } kA || { r2 || ks } kB
Bob → Alice : n || { r1 || ks } kA
Argument: Alice talking to Bob
• Fourth message
– If n matches first message, Alice knows it is part of this protocol exchange
– Cathy generated ks because only she, Alice know kA
– Enciphered part belongs to exchange as r1 matches r1 in encrypted part of first message
Argument: Bob talking to Alice
• Third message
– If n matches second message, Bob knows it is part of this protocol exchange
– Cathy generated ks because only she, Bob know kB
– Enciphered part belongs to exchange as r2 matches r2 in encrypted part of second message
Replay Attack
• Eve acquires old ks, message in third step
– n || { r1 || ks } kA || { r2 || ks } kB
• Eve forwards appropriate part to Alice
– Alice has no ongoing key exchange with Bob: n matches nothing, so is rejected
– Alice has ongoing key exchange with Bob: n does not match, so is again rejected
• If replay is for the current key exchange, and Eve sent the relevant part before Bob did, Eve could simply listen to traffic; no replay involved
Kerberos
• Authentication system
– Based on Needham-Schroeder with Denning-Sacco modification
– Central server plays role of trusted third party (“Cathy”)
• Ticket
– Issuer vouches for identity of requester of service
• Authenticator
– Identifies sender
Idea
• User u authenticates to Kerberos server
– Obtains ticket Tu,TGS for ticket granting service (TGS)
• User u wants to use service s:
– User sends authenticator Au, ticket Tu,TGS to TGS, asking for ticket for service
– TGS sends ticket Tu,s to user
– User sends Au, Tu,s to server as request to use s
• Details follow
Ticket
• Credential saying issuer has identified ticket requester
• Example ticket issued to user u for service s:
Tu,s = s || { u || u’s address || valid time || ku,s } ks
where:
– ku,s is session key for user and service
– Valid time is interval for which ticket is valid
– u’s address may be IP address or something else
• Note: more fields, but not relevant here
Authenticator
• Credential containing identity of sender of ticket
– Used to confirm sender is entity to which ticket was issued
• Example: authenticator user u generates for service s:
Au,s = { u || generation time || kt } ku,s
where:
– kt is alternate session key
– Generation time is when authenticator generated
• Note: more fields, not relevant here
Protocol
user → Cathy : user || TGS
Cathy → user : { ku,TGS } ku || Tu,TGS
user → TGS : service || Au,TGS || Tu,TGS
TGS → user : user || { ku,s } ku,TGS || Tu,s
user → service : Au,s || Tu,s
service → user : { t + 1 } ku,s
Analysis
• First two steps get user ticket to use TGS
– User u can obtain session key only if u knows key shared with Cathy
• Next four steps show how u gets and uses ticket for service s
– Service s validates request by checking sender (using Au,s) is same as entity ticket issued to
– Step 6 optional; used when u requests confirmation
Problems
• Relies on synchronized clocks
– If not synchronized and old tickets, authenticators not cached, replay is possible
• Tickets have some fixed fields
– Dictionary attacks possible
– Kerberos 4 session keys weak (had much less than 56 bits of randomness); researchers at Purdue found them from tickets in minutes
Public Key Key Exchange
• Here interchange keys known
– eA, eB Alice and Bob’s public keys known to all
– dA, dB Alice and Bob’s private keys known only to owner
• Simple protocol
– ks is desired session key
Alice → Bob : { ks } eB
Problem and Solution
• Vulnerable to forgery or replay
– Because eB is known to anyone, Bob has no assurance that Alice sent the message
• Simple fix uses Alice’s private key
– ks is desired session key
Alice → Bob : { { ks } dA } eB
Notes
• Can include message enciphered with ks
• Assumes Bob has Alice’s public key, and vice versa
– If not, each must get it from public server
– If keys not bound to identity of owner, attacker Eve can launch a man-in-the-middle attack (next slide; Cathy is public server providing public keys)
• Solution to this (binding identity to keys) discussed later as public key infrastructure (PKI)
Man-in-the-Middle Attack
Alice → Cathy : send Bob’s public key     [Eve intercepts request]
Eve → Cathy : send Bob’s public key
Cathy → Eve : eB
Eve → Alice : eE
Alice → Bob : { ks } eE                   [Eve intercepts message]
Eve → Bob : { ks } eB
Cryptographic Key Infrastructure
• Goal: bind identity to key
• Classical: not possible as all keys are shared
– Use protocols to agree on a shared key (see earlier)
• Public key: bind identity to public key
– Crucial as people will use key to communicate with principal whose identity is bound to key
– Erroneous binding means no secrecy between principals
– Assume principal identified by an acceptable name
Certificates
• Create token (message) containing
– Identity of principal (here, Alice)
– Corresponding public key
– Timestamp (when issued)
– Other information (perhaps identity of signer)
signed by trusted authority (here, Cathy):
CA = { eA || Alice || T } dC
Use
• Bob gets Alice’s certificate
– If he knows Cathy’s public key, he can decipher the certificate
• When was certificate issued?
• Is the principal Alice?
– Now Bob has Alice’s public key
• Problem: Bob needs Cathy’s public key to validate certificate
– Problem pushed “up” a level
– Two approaches: Merkle’s tree, signature chains
Certificate Signature Chains
• Create certificate
– Generate hash of certificate
– Encipher hash with issuer’s private key
• Validate
– Obtain issuer’s public key
– Decipher enciphered hash
– Recompute hash from certificate and compare
• Problem: getting issuer’s public key
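The create/validate steps can be sketched with the toy RSA numbers from earlier and a stand-in hash (sum of bytes mod n; real certificates use a cryptographic hash and full-size keys):

```python
# Toy certificate signing: encipher the hash with the issuer's private
# key; validation deciphers with the public key and recomputes the hash.
n, e, d = 77, 17, 53                  # issuer's toy RSA key pair

def toy_hash(data):                   # stand-in for a cryptographic hash
    return sum(data) % n

def sign(cert):
    return pow(toy_hash(cert), d, n)

def validate(cert, sig):
    return pow(sig, e, n) == toy_hash(cert)

cert = b"Alice || eA || T"
sig = sign(cert)
print(validate(cert, sig))                    # True
print(validate(b"Mallory || eM || T", sig))   # False
```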
X.509 Chains
• Some certificate components in X.509v3:
– Version
– Serial number
– Signature algorithm identifier: hash algorithm
– Issuer’s name; uniquely identifies issuer
– Interval of validity
– Subject’s name; uniquely identifies subject
– Subject’s public key
– Signature: enciphered hash
X.509 Certificate Validation
• Obtain issuer’s public key
– The one for the particular signature algorithm
• Decipher signature
– Gives hash of certificate
• Recompute hash from certificate and compare
– If they differ, there’s a problem
• Check interval of validity
– This confirms that certificate is current
Issuers
• Certification Authority (CA): entity that issues certificates
– Multiple issuers pose validation problem
– Alice’s CA is Cathy; Bob’s CA is Dan; how can Alice validate Bob’s certificate?
– Have Cathy and Dan cross-certify
• Each issues certificate for the other
Validation and Cross-Certifying
• Certificates:
– Cathy<<Alice>>
– Dan<<Bob>>
– Cathy<<Dan>>
– Dan<<Cathy>>
• Alice validates Bob’s certificate
– Alice obtains Cathy<<Dan>>
– Alice uses (known) public key of Cathy to validate Cathy<<Dan>>
– Alice uses Cathy<<Dan>> to validate Dan<<Bob>>
PGP Chains
• OpenPGP certificates structured into packets
– One public key packet
– Zero or more signature packets
• Public key packet:
– Version (3 or 4; 3 compatible with all versions of PGP, 4 not compatible with older versions of PGP)
– Creation time
– Validity period (not present in version 3)
– Public key algorithm, associated parameters
– Public key
OpenPGP Signature Packet
• Version 3 signature packet
– Version (3)
– Signature type (level of trust)
– Creation time (when next fields hashed)
– Signer’s key identifier (identifies key to encipher hash)
– Public key algorithm (used to encipher hash)
– Hash algorithm
– Part of signed hash (used for quick check)
– Signature (enciphered hash)
• Version 4 packet more complex
Signing
• Single certificate may have multiple signatures
• Notion of “trust” embedded in each signature
– Range from “untrusted” to “ultimate trust”
– Signer defines meaning of trust level (no standards!)
• All version 4 keys signed by subject
– Called “self-signing”
Validating Certificates
• Alice needs to validate Bob’s OpenPGP cert
– Does not know Fred, Giselle, or Ellen
• Alice gets Giselle’s cert
– Knows Henry slightly, but his signature is at “casual” level of trust
• Alice gets Ellen’s cert
– Knows Jack, so uses his cert to validate Ellen’s, then hers to validate Bob’s

[Figure: signature graph among Fred, Giselle, Ellen, Irene, Henry, Jack, and Bob; arrows show signatures, self signatures not shown]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-267
Storing Keys
• Multi-user or networked systems: attackers may defeat access control mechanisms– Encipher file containing key
• Attacker can monitor keystrokes to decipher files• Key will be resident in memory that attacker may be able to
read
– Use physical devices like “smart card”• Key never enters system• Card can be stolen, so have 2 devices combine bits to make
single key
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-268
Key Revocation
• Certificates invalidated before expiration– Usually due to compromised key– May be due to change in circumstance (e.g., someone
leaving company)• Problems
– Entity revoking certificate authorized to do so– Revocation information circulates to everyone fast
enough• Network delays, infrastructure problems may delay
information
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-269
CRLs
• Certificate revocation list lists certificates that are revoked
• X.509: only certificate issuer can revoke certificate– Added to CRL
• PGP: signers can revoke signatures; owners can revoke certificates, or allow others to do so– Revocation message placed in PGP packet and signed– Flag marks it as revocation message
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-270
Digital Signature
• Construct that authenticates origin, contents of message in a manner provable to a disinterested third party (“judge”)
• Sender cannot deny having sent message (service is “nonrepudiation”)– Limited to technical proofs
• Inability to deny one’s cryptographic key was used to sign– One could claim the cryptographic key was stolen or
compromised• Legal proofs, etc., probably required; not dealt with here
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-271
Common Error
• Classical: Alice, Bob share key k– Alice sends m || { m } k to Bob
WRONG: this is not a digital signature– Why? Third party cannot determine whether
Alice or Bob generated message
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-272
Classical Digital Signatures• Require trusted third party
– Alice, Bob each share keys with trusted party Cathy• To resolve dispute, judge gets { m } kAlice, { m } kBob, and
has Cathy decipher them; if messages matched, contract was signed
Alice → Bob: { m } kAlice
Bob → Cathy: { m } kAlice
Cathy → Bob: { m } kBob
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-273
Public Key Digital Signatures
• Alice’s keys are dAlice, eAlice
• Alice sends Bob m || { m } dAlice
• In case of dispute, judge computes { { m } dAlice } eAlice
• and if it is m, Alice signed message– She’s the only one who knows dAlice!
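The dispute check above can be sketched with toy numbers (illustrative only; the modulus and exponents below are far too small to be secure):

```python
# Toy public-key signature sketch: sign by "enciphering" with the private
# key d; the judge verifies with the public key e. Numbers are illustrative.
n, e, d = 77, 17, 53   # toy RSA parameters, same scale as the slides' examples

def sign(m):
    return pow(m, d, n)            # { m } dAlice

def judge(m, sig):
    return pow(sig, e, n) == m     # { { m } dAlice } eAlice == m ?

sig = sign(42)
print(judge(42, sig))   # True: only the holder of d could produce sig
print(judge(41, sig))   # False
```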
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-274
RSA Digital Signatures
• Use private key to encipher message– Protocol for use is critical
• Key points:– Never sign random documents, and when
signing, always sign hash and never document• Mathematical properties can be turned against signer
– Sign message first, then encipher• Changing public keys causes forgery
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-275
Attack #1
• Example: Alice, Bob communicating– nA = 95, eA = 59, dA = 11– nB = 77, eB = 53, dB = 17
• 26 contracts, numbered 00 to 25– Alice has Bob sign 05 and 17:
• c = m^dB mod nB = 05^17 mod 77 = 03• c = m^dB mod nB = 17^17 mod 77 = 19
– Alice computes 05×17 mod 77 = 08; corresponding signature is 03×19 mod 77 = 57; claims Bob signed 08
– Judge computes c^eB mod nB = 57^53 mod 77 = 08• Signature validated; Bob is toast
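The attack works because RSA is multiplicative: the product of two signatures is a valid signature on the product of the messages. A quick check of the slide’s numbers:

```python
# Verify the forgery: sig(05) * sig(17) mod nB is a valid signature on
# 05 * 17 mod nB = 08, a contract Bob never signed.
nB, eB, dB = 77, 53, 17

s05 = pow(5, dB, nB)            # 3
s17 = pow(17, dB, nB)           # 19
forged = (s05 * s17) % nB       # 57
target = (5 * 17) % nB          # contract 08

print(pow(forged, eB, nB) == target)   # True: the judge accepts it
```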
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-276
Attack #2: Bob’s Revenge
• Bob, Alice agree to sign contract 06• Alice enciphers, then signs:
(m^eB mod nB)^dA mod nA = (06^53 mod 77)^11 mod 95 = 63• Bob now changes his public key
– Computes r such that 13^r mod 77 = 6; say, r = 59– Computes r·eB mod φ(nB) = 59×53 mod 60 = 7– Replaces public key eB with 7, private key dB = 43
• Bob claims contract was 13. Judge computes:– (63^59 mod 95)^43 mod 77 = 13– Verified; now Alice is toast
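The arithmetic of this attack can be checked directly:

```python
# Verify Bob's key-switching attack with the slide's numbers.
nA, eA, dA = 95, 59, 11
nB, phiB = 77, 60

signed = pow(pow(6, 53, nB), dA, nA)   # Alice enciphers 06 under eB=53, then signs
r = 59
new_eB = (r * 53) % phiB               # 7
new_dB = 43                            # inverse of 7 mod 60

print(signed)                                 # 63
print(pow(13, r, nB))                         # 6: 13^r masquerades as "06"
print(pow(pow(signed, eA, nA), new_dB, nB))   # 13: judge now sees contract 13
```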
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-277
Key Points
• Key management critical to effective use of cryptosystems– Different levels of keys (session vs. interchange)
• Keys need infrastructure to identify holders, allow revoking– Key escrowing complicates infrastructure
• Digital signatures provide integrity of origin and content
Much easier with public key cryptosystems than with classical cryptosystems
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-278
Chapter 10: Cipher Techniques
• Some Problems• Types of Ciphers• Networks• Examples
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-279
Overview
• Problems– What can go wrong if you naively use ciphers
• Cipher types– Stream or block ciphers?
• Networks– Link vs end-to-end use
• Examples– Privacy-Enhanced Electronic Mail (PEM)– Security at the Network Layer (IPsec)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-280
Problems
• Using cipher requires knowledge of environment, and threats in the environment, in which cipher will be used– Is the set of possible messages small?– Do the messages exhibit regularities that remain
after encipherment?– Can an active wiretapper rearrange or change
parts of the message?
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-281
Attack #1: Precomputation
• Set of possible messages M small• Public key cipher f used• Idea: precompute set of possible ciphertexts
f(M), build table (m, f(m))• When ciphertext f(m) appears, use table to
find m• Also called forward searches
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-282
Example
• Cathy knows Alice will send Bob one of two messages: enciphered BUY, or enciphered SELL
• Using public key eBob, Cathy precomputes m1 = { BUY } eBob, m2 = { SELL } eBob
• Cathy sees Alice send Bob m2
• Cathy knows Alice sent SELL
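A minimal sketch of the forward search, using a hypothetical toy public key and message encoding (both illustrative):

```python
# Forward search: with a tiny message space, Cathy precomputes every
# possible ciphertext and inverts the cipher by table lookup -- no key needed.
n, e = 77, 17                         # toy public key (illustrative)
codes = {"BUY": 1, "SELL": 2}         # toy message encoding

table = {pow(m, e, n): name for name, m in codes.items()}   # (f(m), m) pairs

intercepted = pow(codes["SELL"], e, n)   # the ciphertext Cathy observes
print(table[intercepted])                # SELL
```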
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-283
May Not Be Obvious
• Digitized sound– Seems like far too many possible plaintexts
• Initial calculations suggest 2^32 such plaintexts
– Analysis of redundancy in human speech reduced this to about 100,000 (≈ 2^17)
• This is small enough to worry about precomputation attacks
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-284
Misordered Blocks
• Alice sends Bob message– nBob = 77, eBob = 17, dBob = 53– Message is LIVE (11 08 21 04)– Enciphered message is 44 57 21 16
• Eve intercepts it, rearranges blocks– Now enciphered message is 16 21 57 44
• Bob gets enciphered message, deciphers it– He sees EVIL
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-285
Notes
• Digitally signing each block won’t stop this attack
• Two approaches:– Cryptographically hash the entire message and
sign it– Place sequence numbers in each block of
message, so recipient can tell intended order• Then you sign each block
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-286
Statistical Regularities
• If plaintext repeats, ciphertext may too• Example using DES:
– input (in hex):3231 3433 3635 3837 3231 3433 3635 3837
– corresponding output (in hex):ef7c 4bb2 b4ce 6f3b ef7c 4bb2 b4ce 6f3b
• Fix: cascade blocks together (chaining)– More details later
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-287
What These Mean
• Use of strong cryptosystems, well-chosen (or random) keys not enough to be secure
• Other factors:– Protocols directing use of cryptosystems– Ancillary information added by protocols– Implementation (not discussed here)– Maintenance and operation (not discussed here)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-288
Stream, Block Ciphers
• E encipherment function– Ek(b) encipherment of message b with key k– In what follows, m = b1b2 …, each bi of fixed length
• Block cipher– Ek(m) = Ek(b1)Ek(b2) …
• Stream cipher– k = k1k2 …– Ek(m) = Ek1(b1)Ek2(b2) …– If k1k2 … repeats itself, cipher is periodic and the
length of its period is one cycle of k1k2 …
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-289
Examples
• Vigenère cipher– bi = 1 character, k = k1k2 … where ki = 1 character– Each bi enciphered using k(i mod length(k))
– Stream cipher
• DES– bi = 64 bits, k = 56 bits– Each bi enciphered separately using k– Block cipher
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-290
Stream Ciphers
• Often (try to) implement one-time pad by xor’ing each bit of key with one bit of message– Example:
m = 00101
k = 10010
c = 10111
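The same XOR, written out; it is self-inverse, so the identical operation deciphers:

```python
# Bitwise XOR of message and key, as on the slide.
m, k = "00101", "10010"

def xor_bits(a, b):
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

c = xor_bits(m, k)
print(c)                      # 10111
print(xor_bits(c, k) == m)    # True: XOR with the same key deciphers
```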
• But how to generate a good key?
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-291
Synchronous Stream Ciphers
• n-stage Linear Feedback Shift Register: consists of– n bit register r = r0…rn–1
– n bit tap sequence t = t0…tn–1
– Use:• Use rn–1 as key bit• Compute x = r0t0 ⊕ … ⊕ rn–1tn–1
• Shift r one bit to right, dropping rn–1, x becomes r0
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-292
Operation
[Diagram: LFSR keystream generator. The register r0…rn–1 shifts right each step (ri′ = ri–1 for 0 < i ≤ n); the key bit rn–1 is XORed with message bit bi to give ci; the feedback bit r0t0 ⊕ … ⊕ rn–1tn–1 becomes the new r0]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-293
Example
• 4-stage LFSR; t = 1001
r    | ki | new bit computation       | new r
0010 | 0  | 0·1 ⊕ 0·0 ⊕ 1·0 ⊕ 0·1 = 0 | 0001
0001 | 1  | 0·1 ⊕ 0·0 ⊕ 0·0 ⊕ 1·1 = 1 | 1000
1000 | 0  | 1·1 ⊕ 0·0 ⊕ 0·0 ⊕ 0·1 = 1 | 1100
1100 | 0  | 1·1 ⊕ 1·0 ⊕ 0·0 ⊕ 0·1 = 1 | 1110
1110 | 0  | 1·1 ⊕ 1·0 ⊕ 1·0 ⊕ 0·1 = 1 | 1111
1111 | 1  | 1·1 ⊕ 1·0 ⊕ 1·0 ⊕ 1·1 = 0 | 0111
0111 | 1  | 0·1 ⊕ 1·0 ⊕ 1·0 ⊕ 1·1 = 1 | 1011
– Key sequence has period of 15 (010001111010110)
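The register update above is easy to sketch in code; running it reproduces the period-15 key sequence:

```python
# n-stage LFSR: output r[n-1] as the key bit, compute the feedback bit as
# the XOR of the tapped register bits, shift right, feedback becomes r[0].
def lfsr(r, t, nbits):
    r = list(r)
    out = []
    for _ in range(nbits):
        out.append(r[-1])                  # key bit r[n-1]
        x = 0
        for ri, ti in zip(r, t):
            x ^= ri & ti                   # r0t0 ^ ... ^ r(n-1)t(n-1)
        r = [x] + r[:-1]                   # shift right, x becomes r0
    return "".join(map(str, out))

print(lfsr([0, 0, 1, 0], [1, 0, 0, 1], 15))   # 010001111010110
```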
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-294
NLFSR
• n-stage Non-Linear Feedback Shift Register: consists of– n bit register r = r0…rn–1– Use:
• Use rn–1 as key bit• Compute x = f(r0, …, rn–1); f is any function• Shift r one bit to right, dropping rn–1, x becomes r0
Note same operation as LFSR but more general bit replacement function
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-295
Example
• 4-stage NLFSR; f(r0, r1, r2, r3) = (r0 & r2) | r3
r ki new bit computation new r
1100 0 (1 & 0) | 0 = 0 0110
0110 0 (0 & 1) | 0 = 0 0011
0011 1 (0 & 1) | 1 = 1 1001
1001 1 (1 & 0) | 1 = 1 1100
1100 0 (1 & 0) | 0 = 0 0110
0110 0 (0 & 1) | 0 = 0 0011
0011 1 (0 & 1) | 1 = 1 1001
– Key sequence has period of 4 (0011)
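The same register machinery with the nonlinear feedback function gives the much shorter period:

```python
# NLFSR: identical shift/output structure, but the new r[0] comes from an
# arbitrary function f of the register rather than a linear tap sum.
def nlfsr(r, f, nbits):
    r = list(r)
    out = []
    for _ in range(nbits):
        out.append(r[-1])            # key bit
        r = [f(*r)] + r[:-1]         # shift right, f(r) becomes r0
    return "".join(map(str, out))

f = lambda r0, r1, r2, r3: (r0 & r2) | r3
print(nlfsr([1, 1, 0, 0], f, 8))     # 00110011: period 4, versus 15 for the LFSR
```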
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-296
Eliminating Linearity
• NLFSRs not common– No body of theory about how to design them to have
long period• Alternate approach: output feedback mode
– For E encipherment function, k key, r register:• Compute r′= Ek(r); key bit is rightmost bit of r′• Set r to r′ and iterate, repeatedly enciphering register and
extracting key bits, until message enciphered– Variant: use a counter that is incremented for each
encipherment rather than a register• Take rightmost bit of Ek(i), where i is number of encipherment
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-297
Self-Synchronous Stream Cipher
• Take key from message itself (autokey)• Example: Vigenère, key drawn from plaintext
– key XTHEBOYHASTHEBA
– plaintext THEBOYHASTHEBAG
– ciphertext QALFPNFHSLALFCT
• Problem:– Statistical regularities in plaintext show in key– Once you get any part of the message, you can decipher
more
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-298
Another Example
• Take key from ciphertext (autokey)• Example: Vigenère, key drawn from ciphertext
– key XQXBCQOVVNGNRTT
– plaintext THEBOYHASTHEBAG
– ciphertext QXBCQOVVNGNRTTM
• Problem:– Attacker gets key along with ciphertext, so deciphering
is trivial
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-299
Variant• Cipher feedback mode: 1 bit of ciphertext fed into n bit
register– Self-healing property: if ciphertext bit received incorrectly, it and
next n bits decipher incorrectly; but after that, the ciphertext bits decipher correctly
– Need to know k, E to decipher ciphertext
[Diagram: cipher feedback mode. Register r is enciphered as Ek(r); its leftmost bit is XORed with message bit mi to give ci, and ci is fed back into the register]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-300
Block Ciphers
• Encipher, decipher multiple bits at once• Each block enciphered independently• Problem: identical plaintext blocks produce
identical ciphertext blocks– Example: two database records
• MEMBER: HOLLY INCOME $100,000• MEMBER: HEIDI INCOME $100,000
– Encipherment:• ABCQZRME GHQ MRSIB CTXUVYSS RM GRPFQN
• ABCQZRME ORMPABRZ CTXUVYSS RM GRPFQN
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-301
Solutions
• Insert information about block’s position into the plaintext block, then encipher
• Cipher block chaining:– Exclusive-or current plaintext block with
previous ciphertext block:• c0 = Ek(m0 ⊕ I)• ci = Ek(mi ⊕ ci–1) for i > 0
where I is the initialization vector
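A toy CBC sketch (the one-byte “block cipher” below is a stand-in, not DES) shows the chaining breaking up identical plaintext blocks:

```python
# CBC: XOR each plaintext block with the previous ciphertext block before
# enciphering. E is a toy invertible byte cipher, for illustration only.
def E(k, b):
    return (7 * b + k) % 256      # stand-in block cipher (affine, invertible)

def cbc_encrypt(k, blocks, iv):
    out, prev = [], iv
    for m in blocks:
        c = E(k, m ^ prev)        # ci = Ek(mi XOR c(i-1)); c(-1) = I, the IV
        out.append(c)
        prev = c
    return out

plain = [0x41, 0x41, 0x41]                  # identical plaintext blocks
print([E(0x5A, b) for b in plain])          # block-by-block: identical ciphertexts
print(cbc_encrypt(0x5A, plain, iv=0x10))    # CBC: all different
```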
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-302
Multiple Encryption
• Double encipherment: c = Ek′(Ek(m))– Effective key length is 2n, if k, k′ are length n– Problem: breaking it requires 2^(n+1) encryptions, not 2^(2n)
encryptions• Triple encipherment:
– EDE mode: c = Ek(Dk′(Ek(m)))• Problem: chosen plaintext attack takes O(2^n) time using 2^n
ciphertexts– Triple encryption mode: c = Ek(Ek′(Ek′′(m)))
• Best attack requires O(2^(2n)) time, O(2^n) memory
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-303
Networks and Cryptography
[Diagram: three protocol stacks. The two end hosts have all seven layers (application, presentation, session, transport, network, data link, physical); the intermediate host has only the network, data link, and physical layers]
• ISO/OSI model• Conceptually, each host has peer at each layer
– Peers communicate with peers at same layer
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-304
Link and End-to-End Protocols
Link Protocol
End-to-End (or E2E) Protocol
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-305
Encryption
• Link encryption– Each host enciphers message so host at “next
hop” can read it– Message can be read at intermediate hosts
• End-to-end encryption– Host enciphers message so host at other end of
communication can read it– Message cannot be read at intermediate hosts
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-306
Examples
• TELNET protocol– Messages between client, server enciphered, and
encipherment, decipherment occur only at these hosts– End-to-end protocol
• PPP Encryption Control Protocol– Host gets message, deciphers it
• Figures out where to forward it• Enciphers it in appropriate key and forwards it
– Link protocol
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-307
Cryptographic Considerations
• Link encryption– Each host shares key with neighbor– Can be set on per-host or per-host-pair basis
• Windsor, stripe, seaview each have own keys• One key for (windsor, stripe); one for (stripe, seaview); one for
(windsor, seaview)
• End-to-end– Each host shares key with destination– Can be set on per-host or per-host-pair basis– Message cannot be read at intermediate nodes
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-308
Traffic Analysis
• Link encryption– Can protect headers of packets– Possible to hide source and destination
• Note: may be able to deduce this from traffic flows
• End-to-end encryption– Cannot hide packet headers
• Intermediate nodes need to route packet
– Attacker can read source, destination
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-309
Example Protocols
• Privacy-Enhanced Electronic Mail (PEM)– Applications layer protocol
• IP Security (IPSec)– Network layer protocol
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-310
Goals of PEM
1. Confidentiality• Only sender and recipient(s) can read message
2. Origin authentication• Identify the sender precisely
3. Data integrity• Any changes in message are easy to detect
4. Non-repudiation of origin• Whenever possible …
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-311
Message Handling System
[Diagram: user agents (UAs) hand mail to message transfer agents (MTAs); MTAs relay the message to the destination MTA, which delivers it to the recipient's UA]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-312
Design Principles
• Do not change related existing protocols– Cannot alter SMTP
• Do not change existing software– Need compatibility with existing software
• Make use of PEM optional– Available if desired, but email still works without them– Some recipients may use it, others not
• Enable communication without prearrangement– Out-of-band authentication, key exchange problematic
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-313
Basic Design: Keys
• Two keys– Interchange keys tied to sender, recipients and
are static (for some set of messages)• Like a public/private key pair• Must be available before messages sent
– Data exchange keys generated for each message• Like a session key, session being the message
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-314
Basic Design: Sending
Alice → Bob: { m } ks || { ks } kB
Confidentiality• m message• ks data exchange key• kB Bob’s interchange key
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-315
Basic Design: Integrity
Alice → Bob: m || { h(m) } kA
Integrity and authentication:• m message• h(m) hash of message m —Message Integrity Check (MIC)• kA Alice’s interchange key
Non-repudiation: if kA is Alice’s private key, this establishesthat Alice’s private key was used to sign the message
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-316
Basic Design: Everything
Alice → Bob: { m } ks || { h(m) } kA || { ks } kB
Confidentiality, integrity, authentication:• Notations as in previous slides• If kA is private key, get non-repudiation too
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-317
Practical Considerations
• Limits of SMTP– Only ASCII characters, limited length lines
• Use encoding procedure1. Map local char representation into canonical format
– Format meets SMTP requirements2. Compute and encipher MIC over the canonical format;
encipher message if needed3. Map each 6 bits of result into a character; insert
newline after every 64th character4. Add delimiters around this ASCII message
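Steps 3 and 4 amount to base64 encoding (the same 6-bits-to-character mapping) with 64-character lines plus delimiters; a rough sketch, with delimiter text in PEM’s style:

```python
# Steps 3-4 of the encoding procedure: map 6-bit groups to characters
# (base64), break lines at 64 characters, wrap in delimiters.
import base64, textwrap

def pem_wrap(canonical: bytes) -> str:
    b64 = base64.b64encode(canonical).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))    # newline after every 64th char
    return ("-----BEGIN PRIVACY-ENHANCED MESSAGE-----\n"
            + body +
            "\n-----END PRIVACY-ENHANCED MESSAGE-----")

print(pem_wrap(b"canonical-form message body " * 5))
```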
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-318
Problem
• Recipient without PEM-compliant software cannot read it– If only integrity and authentication used, should be able
to read it• Mode MIC-CLEAR allows this
– Skip step 3 in encoding procedure– Problem: some MTAs add blank lines, delete trailing
white space, or change end of line character– Result: PEM-compliant software reports integrity
failure
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-319
PEM vs. PGP
• Use different ciphers– PGP uses IDEA cipher– PEM uses DES in CBC mode
• Use different certificate models– PGP uses general “web of trust”– PEM uses hierarchical certification structure
• Handle end of line differently– PGP remaps end of line if message tagged “text”, but
leaves them alone if message tagged “binary”– PEM always remaps end of line
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-320
IPsec
• Network layer security– Provides confidentiality, integrity,
authentication of endpoints, replay detection• Protects all messages sent along a path
[Diagram: src <-IP-> gw1 <-IP+IPsec-> gw2 <-IP-> dest; gw1 and gw2 are security gateways]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-321
IPsec Transport Mode
• Encapsulate IP packet data area• Use IP to send IPsec-wrapped data packet• Note: IP header not protected
[Packet layout: IP header | encapsulated data body]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-322
IPsec Tunnel Mode
• Encapsulate IP packet (IP header and IP data)• Use IP to send IPsec-wrapped packet• Note: IP header protected
[Packet layout: IP header | encapsulated data body (original IP header and data)]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-323
IPsec Protocols
• Authentication Header (AH)– Message integrity– Origin authentication– Anti-replay
• Encapsulating Security Payload (ESP)– Confidentiality– Others provided by AH
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-324
IPsec Architecture
• Security Policy Database (SPD)– Says how to handle messages (discard them,
add security services, forward message unchanged)
– SPD associated with network interface– SPD determines appropriate entry from packet
attributes• Including source, destination, transport protocol
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-325
Example
• Goals– Discard SMTP packets from host 192.168.2.9– Forward packets from 192.168.19.7 without change
• SPD entriessrc 192.168.2.9, dest 10.1.2.3 to 10.1.2.103, port 25, discard
src 192.168.19.7, dest 10.1.2.3 to 10.1.2.103, port 25, bypass
dest 10.1.2.3 to 10.1.2.103, port 25, apply IPsec
• Note: entries scanned in order– If no match for packet, it is discarded
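The first-match scan can be sketched as follows (field names and rule encoding are illustrative, not the actual SPD schema):

```python
# SPD lookup sketch: scan entries in order; the first match decides the
# action; a packet matching no entry is discarded.
RULES = [
    ("192.168.2.9",  25, "discard"),
    ("192.168.19.7", 25, "bypass"),
    (None,           25, "apply IPsec"),   # None acts as a wildcard source
]

def spd_lookup(src, port):
    for rule_src, rule_port, action in RULES:
        if rule_port == port and rule_src in (None, src):
            return action
    return "discard"                       # no match: discard

print(spd_lookup("192.168.2.9", 25))   # discard
print(spd_lookup("10.1.2.50", 25))     # apply IPsec
print(spd_lookup("10.1.2.50", 80))     # discard (no entry matches)
```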
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-326
IPsec Architecture
• Security Association (SA)– Association between peers for security services
• Identified uniquely by dest address, security protocol (AH or ESP), unique 32-bit number (security parameter index, or SPI)
– Unidirectional• Can apply different services in either direction
– SA uses either ESP or AH; if both required, 2 SAs needed
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-327
SA Database (SAD)
• Entry describes SA; some fields for all packets:– AH algorithm identifier, keys
• When SA uses AH
– ESP encipherment algorithm identifier, keys • When SA uses confidentiality from ESP
– ESP authentication algorithm identifier, keys• When SA uses authentication, integrity from ESP
– SA lifetime (time for deletion or max byte count)– IPsec mode (tunnel, transport, either)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-328
SAD Fields
• Antireplay (inbound only)– When SA uses antireplay feature
• Sequence number counter (outbound only)– Generates AH or ESP sequence number
• Sequence counter overflow field– Stops traffic over this SA if sequence counter overflows
• Aging variables– Used to detect time-outs
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-329
IPsec Architecture
• Packet arrives• Look in SPD
– Find appropriate entry– Get dest address, security protocol, SPI
• Find associated SA in SAD– Use dest address, security protocol, SPI– Apply security services in SA (if any)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-330
SA Bundles and Nesting
• Sequence of SAs that IPsec applies to packets– This is a SA bundle
• Nest tunnel mode SAs– This is iterated tunneling
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-331
Example: Nested Tunnels
• Group in A.org needs to communicate with group in B.org
• Gateways of A, B use IPsec mechanisms– But the information must be secret to everyone except
the two groups, even secret from other people in A.org and B.org
• Inner tunnel: a SA between the hosts of the two groups
• Outer tunnel: the SA between the two gateways
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-332
Example: Systems
hostA.A.org
gwA.A.org
gwB.B.org
hostB.B.org
SA in tunnel mode(outer tunnel)SA in tunnel mode(inner tunnel)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-333
Example: Packets
• Packet generated on hostA• Encapsulated by hostA’s IPsec mechanisms• Again encapsulated by gwA’s IPsec mechanisms
– Above diagram shows headers, but as you go left, everything to the right would be enciphered and authenticated, etc.
[Packet layout, outermost first: IP header from gwA | AH header from gwA | ESP header from gwA | IP header from hostA | AH header from hostA | ESP header from hostA | IP header from hostA | transport layer headers, data]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-334
AH Protocol
• Parameters in AH header– Length of header– SPI of SA applying protocol– Sequence number (anti-replay)– Integrity value check
• Two steps– Check that replay is not occurring– Check authentication data
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-335
Sender
• Check sequence number will not cycle• Increment sequence number• Compute IVC of packet
– Includes IP header, AH header, packet data• IP header: include all fields that will not change in
transit; assume all others are 0• AH header: authentication data field set to 0 for this• Packet data includes encapsulated data, higher level
protocol data
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-336
Recipient
• Assume AH header found• Get SPI, destination address• Find associated SA in SAD
– If no associated SA, discard packet• If antireplay not used
– Verify IVC is correct• If not, discard
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-337
Recipient, Using Antireplay
• Check packet beyond low end of sliding window• Check IVC of packet• Check packet’s slot not occupied
– If any of these is false, discard packet
[Diagram: antireplay sliding window over sequence numbers]
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-338
AH Miscellany
• All implementations must support: HMAC_MD5, HMAC_SHA-1
• May support other algorithms
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-339
ESP Protocol
• Parameters in ESP header– SPI of SA applying protocol– Sequence number (anti-replay)– Generic “payload data” field– Padding and length of padding
• Contents depends on ESP services enabled; may be an initialization vector for a chaining cipher, for example
• Used also to pad packet to length required by cipher
– Optional authentication data field
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-340
Sender
• Add ESP header– Includes whatever padding needed
• Encipher result– Do not encipher SPI, sequence numbers
• If authentication desired, compute as for AH protocol except over ESP header, payload and not encapsulating IP header
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-341
Recipient
• Assume ESP header found• Get SPI, destination address• Find associated SA in SAD
– If no associated SA, discard packet
• If authentication used– Do IVC, antireplay verification as for AH
• Only ESP, payload are considered; not IP header• Note authentication data inserted after encipherment, so no
deciphering need be done
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-342
Recipient
• If confidentiality used– Decipher enciphered portion of ESP header– Process padding– Decipher payload– If SA is transport mode, IP header and payload
treated as original IP packet– If SA is tunnel mode, payload is an
encapsulated IP packet and so is treated as original IP packet
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-343
ESP Miscellany
• Must use at least one of confidentiality, authentication services
• Synchronization material must be in payload– Packets may be lost or arrive out of order, so packets
following a missing packet may not be decipherable otherwise• Implementations of ESP assume classical
cryptosystem– Implementations of public key systems usually far
slower than implementations of classical systems– Not required
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-344
More ESP Miscellany
• All implementations must support (encipherment algorithms): DES in CBC mode, NULL algorithm (identity; no encipherment)
• All implementations must support (integrity algorithms):
HMAC_MD5, HMAC_SHA-1, NULL algorithm (no MAC computed)
• Both cannot be NULL at the same time
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-345
Which to Use: PEM, IPsec
• What do the security services apply to?– If applicable to one application and application layer
mechanisms available, use that• PEM for electronic mail
– If more generic services needed, look to lower layers• IPsec for network layer, either end-to-end or link mechanisms,
for connectionless channels as well as connections
– If endpoint is host, IPsec sufficient; if endpoint is user, application layer mechanism such as PEM needed
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-346
Key Points
• Key management critical to effective use of cryptosystems– Different levels of keys (session vs. interchange)
• Keys need infrastructure to identify holders, allow revoking– Key escrowing complicates infrastructure
• Digital signatures provide integrity of origin and content
Much easier with public key cryptosystems than with classical cryptosystems
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-347
Chapter 11: Authentication
• Basics• Passwords• Challenge-Response• Biometrics• Location• Multiple Methods
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-348
Overview
• Basics• Passwords
– Storage– Selection– Breaking them
• Other methods• Multiple methods
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-349
Basics
• Authentication: binding of identity to subject– Identity is that of external entity (my identity,
Matt, etc.)– Subject is computer entity (process, etc.)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-350
Establishing Identity
• One or more of the following– What entity knows (e.g., password)– What entity has (e.g., badge, smart card)– What entity is (e.g., fingerprints, retinal
characteristics)– Where entity is (e.g., in front of a particular
terminal)
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-351
Authentication System
• (A, C, F, L, S)– A information that proves identity– C information stored on computer and used to
validate authentication information– F set of complementation functions f : A → C– L functions that prove identity– S functions enabling entity to create, alter
information in A or C
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-352
Example
• Password system, with passwords stored on line in clear text– A set of strings making up passwords– C = A– F singleton set of identity function { I }– L single equality test function { eq }– S function to set/change password
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-353
Passwords
• Sequence of characters– Examples: 10 digits, a string of letters, etc.– Generated randomly, by user, by computer with user
input
• Sequence of words– Examples: pass-phrases
• Algorithms– Examples: challenge-response, one-time passwords
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-354
Storage
• Store as cleartext– If password file compromised, all passwords revealed
• Encipher file– Need to have decipherment, encipherment keys in
memory– Reduces to previous problem
• Store one-way hash of password– If file read, attacker must still guess passwords or invert
the hash
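A sketch of the salted one-way-hash approach (SHA-256 here for brevity; the historical UNIX scheme used a modified DES, and modern systems use deliberately slow password hashes):

```python
# Store salt + hash(salt || password); authentication recomputes the hash.
# Reading the file reveals hashes, not passwords.
import hashlib, os

def store(password):
    salt = os.urandom(8)
    return salt, hashlib.sha256(salt + password.encode()).digest()

def check(password, salt, stored):
    return hashlib.sha256(salt + password.encode()).digest() == stored

salt, h = store("correct horse")
print(check("correct horse", salt, h))   # True
print(check("wrong guess", salt, h))     # False
```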
November 1, 2004 Introduction to Computer Security©2004 Matt Bishop
Slide #1-355
Example
• UNIX system standard hash function
– Hashes password into an 11-character string using one of 4096 hash functions
• As an authentication system:
– A = { strings of 8 chars or less }
– C = { 2-char hash id || 11-char hash }
– F = { 4096 versions of modified DES }
– L = { login, su, … }
– S = { passwd, nispasswd, passwd+, … }
Anatomy of Attacking
• Goal: find a ∈ A such that:
– For some f ∈ F, f(a) = c ∈ C
– c is associated with an entity
• Two ways to determine whether a meets these requirements:
– Direct approach: as above
– Indirect approach: since l(a) succeeds iff f(a) = c ∈ C for some c associated with an entity, compute l(a)
Preventing Attacks
• How to prevent this:
– Hide one of a, f, or c
  • Prevents the obvious attack from above
  • Example: UNIX/Linux shadow password files (hide the c’s)
– Block access to all l ∈ L, or to the result of l(a)
  • Prevents attacker from knowing if a guess succeeded
  • Example: preventing any logins to an account from a network (prevents knowing results of l, or accessing l)
Dictionary Attacks
• Trial-and-error from a list of potential passwords
– Off-line: know f and the c’s; repeatedly try different guesses g ∈ A until the list is exhausted or passwords are guessed
  • Examples: crack, john-the-ripper
– On-line: have access to the functions in L; try guesses g until some l(g) succeeds
  • Example: trying to log in by guessing a password
Using Time
Anderson’s formula:
• P: probability of guessing a password in a specified period of time
• G: number of guesses tested in 1 time unit
• T: number of time units
• N: number of possible passwords (|A|)
• Then P ≥ TG/N
Example
• Goal
– Passwords drawn from a 96-char alphabet
– Can test 10⁴ guesses per second
– Probability of success to be 0.5 over a 365-day period
– What is the minimum password length?
• Solution
– N ≥ TG/P = (365×24×60×60)×10⁴/0.5 = 6.31×10¹¹
– Choose s such that Σ_{j=0}^{s} 96^j ≥ N
– So s ≥ 6, meaning passwords must be at least 6 characters long
Approaches: Password Selection
• Random selection
– Any password from A is equally likely to be selected
• Pronounceable passwords
• User selection of passwords
Pronounceable Passwords
• Generate phonemes randomly
– Phoneme is a unit of sound, e.g., cv, vc, cvc, vcv
– Examples: helgoret, juttelon are; przbqxdfl, zxrptglfn are not
• Problem: too few
• Solution: key crunching
– Run a long key through a hash function and convert to a printable sequence
– Use this sequence as the password
User Selection

• Problem: people pick easy-to-guess passwords
– Based on account names, user names, computer names, place names
– Dictionary words (also reversed, odd capitalizations, control characters, “elite-speak”, conjugations or declensions, swear words, Torah/Bible/Koran/… words)
– Too short, digits only, letters only
– License plates, acronyms, social security numbers
– Personal characteristics or foibles (pet names, nicknames, job characteristics, etc.)
Picking Good Passwords

• “LlMm*2^Ap”
– Names of members of 2 families
• “OoHeO/FSK”
– Second letter of each word of length 4 or more in the third line of the third verse of the Star-Spangled Banner, followed by “/”, followed by the author’s initials
• What’s good here may be bad there
– “DMC/MHmh” is bad at Dartmouth (“Dartmouth Medical Center/Mary Hitchcock memorial hospital”), OK here
• Why are these now bad passwords?
Proactive Password Checking
• Analyze proposed password for “goodness”
– Always invoked
– Can detect and reject bad passwords for an appropriate definition of “bad”
– Can discriminate on per-user, per-site basis
– Needs to do pattern matching on words
– Needs to execute subprograms and use results
  • A spell checker, for example
– Easy to set up and integrate into password selection system
Example: OPUS
• Goal: check passwords against large dictionaries quickly
– Run each word of the dictionary through k different hash functions h1, …, hk, producing values less than n
– Set bits h1, …, hk in the OPUS dictionary
– To check a new proposed word, generate its bit vector and see if all corresponding bits are set
  • If so, the word is in one of the dictionaries to some degree of probability
  • If not, it is not in the dictionaries
Example: passwd+
• Provides a little language to describe proactive checking
– test length(“$p”) < 6
  • If password is under 6 characters, reject it
– test infile(“/usr/dict/words”, “$p”)
  • If password is in file /usr/dict/words, reject it
– test !inprog(“spell”, “$p”, “$p”)
  • If password is not in the output from program spell, given the password as input, reject it (because it’s a properly spelled word)
Salting
• Goal: slow dictionary attacks
• Method: perturb hash function so that:
– A parameter controls which hash function is used
– The parameter differs for each password
– So given n password hashes, and therefore n salts, each guess must be hashed n times
Examples
• Vanilla UNIX method
– Use DES to encipher the 0 message with the password as key; iterate 25 times
– Perturb the E table in DES in one of 4096 ways
  • 12-bit salt flips entries 1–11 with entries 25–36
• Alternate methods
– Use salt as the first part of the input to the hash function
Guessing Through L
• Cannot prevent these
– Otherwise, legitimate users cannot log in
• Make them slow
– Backoff
– Disconnection
– Disabling
  • Be very careful with administrative accounts!
– Jailing
  • Allow in, but restrict activities
Password Aging
• Force users to change passwords after some time has expired
– How do you force users not to re-use passwords?
  • Record previous passwords
  • Block changes for a period of time
– Give users time to think of good passwords
  • Don’t force them to change before they can log in
  • Warn them of expiration days in advance
Challenge-Response
• User and system share a secret function f (in practice, f is a known function with unknown parameters, such as a cryptographic key)

user → system: request to authenticate
system → user: random message r (the challenge)
user → system: f(r) (the response)
Pass Algorithms
• Challenge-response with the function f itself a secret
– Example:
  • Challenge is a random string of characters such as “abcdefg”, “ageksido”
  • Response is some function of that string such as “bdf”, “gkip”
– Can alter algorithm based on ancillary information
  • Network connection is as above; dial-up might require “aceg”, “aesd”
– Usually used in conjunction with a fixed, reusable password
One-Time Passwords
• Password that can be used exactly once
– After use, it is immediately invalidated
• Challenge-response mechanism
– Challenge is the number of authentications; response is the password for that particular number
• Problems
– Synchronization of user and system
– Generation of good random passwords
– Password distribution problem
S/Key
• One-time password scheme based on an idea of Lamport
• h: one-way hash function (MD5 or SHA-1, for example)
• User chooses initial seed k
• System calculates:
  h(k) = k1, h(k1) = k2, …, h(kn–1) = kn
• Passwords are used in reverse order:
  p1 = kn, p2 = kn–1, …, pn–1 = k2, pn = k1
S/Key Protocol
user → system: { name }
system → user: { i }
user → system: { pi }

System stores the maximum number of authentications n, the number of the next authentication i, and the last correctly supplied password pi–1. System computes h(pi) = h(kn–i+1) = kn–i+2 = pi–1. If this matches what is stored, the system replaces pi–1 with pi and increments i.
Hardware Support
• Token-based
– Used to compute response to challenge
  • May encipher or hash challenge
  • May require PIN from user
• Temporally-based
– Every minute (or so) a different number is shown
  • Computer knows what number to expect when
– User enters number and fixed password
C-R and Dictionary Attacks
• Same as for fixed passwords
– Attacker knows challenge r and response f(r); if f is an encryption function, can try different keys
  • May only need to know the form of the response; attacker can tell if a guess is correct by looking to see if the deciphered object is of the right form
  • Example: Kerberos Version 4 used DES, but keys had 20 bits of randomness; Purdue attackers guessed keys quickly because deciphered tickets had a fixed set of bits in some locations
Encrypted Key Exchange
• Defeats off-line dictionary attacks
• Idea: random challenges are enciphered, so an attacker cannot verify correct decipherment of a challenge
• Assume Alice and Bob share secret password s
• In what follows, Alice needs to generate a random public key p and a corresponding private key q
• Also, k is a randomly generated session key, and RA and RB are random challenges
EKE Protocol
Alice → Bob: Alice || Es(p)
Bob → Alice: Es(Ep(k))

Now Alice and Bob share a randomly generated secret session key k

Alice → Bob: Ek(RA)
Bob → Alice: Ek(RA || RB)
Alice → Bob: Ek(RB)
Biometrics
• Automated measurement of biological or behavioral features that identify a person
– Fingerprints: optical or electrical techniques
  • Maps fingerprint into a graph, then compares with database
  • Measurements imprecise, so approximate matching algorithms used
– Voices: speaker verification or recognition
  • Verification: uses statistical techniques to test the hypothesis that the speaker is who is claimed (speaker dependent)
  • Recognition: checks content of answers (speaker independent)
Other Characteristics
• Can use several other characteristics
– Eyes: patterns in irises are unique
  • Measure patterns, determine if differences are random; or correlate images using statistical tests
– Faces: image, or specific characteristics like distance from nose to chin
  • Lighting, view of face, other noise can hinder this
– Keystroke dynamics: believed to be unique
  • Keystroke intervals, pressure, duration of stroke, where key is struck
  • Statistical tests used
Cautions
• These can be fooled!
– Assumes the biometric device is accurate in the environment in which it is used
– Assumes transmission of data to the validator is tamperproof and correct
Location
• If you know where the user is, validate identity by seeing if the person is where the user is
– Requires special-purpose hardware to locate the user
  • GPS (global positioning system) device gives location signature of entity
  • Host uses LSS (location signature sensor) to get signature for entity
Multiple Methods
• Example: “where you are” also requires the entity to have an LSS and GPS, so it is also “what you have”
• Can assign different methods to different tasks
– As users perform more and more sensitive tasks, they must authenticate in more and more ways (presumably, more stringently)
  • A file describes the authentication required
  • Also includes controls on access (time of day, etc.), resources, and requests to change passwords
– Pluggable Authentication Modules
PAM

• Idea: when a program needs to authenticate, it checks a central repository for the methods to use
• Library call: pam_authenticate
– Accesses file with the name of the program in /etc/pam.d
• Modules do authentication checking
– sufficient: succeed if module succeeds
– required: fail if module fails, but all required modules executed before reporting failure
– requisite: like required, but don’t check all modules
– optional: invoke only if all previous modules fail
Example PAM File

auth sufficient /usr/lib/pam_ftp.so
auth required /usr/lib/pam_unix_auth.so use_first_pass
auth required /usr/lib/pam_listfile.so onerr=succeed \
  item=user sense=deny file=/etc/ftpusers

For ftp:
1. If user is “anonymous”, return okay; if not, set PAM_AUTHTOK to the password, PAM_RUSER to the name, and fail
2. Now check that the password in PAM_AUTHTOK belongs to the user in PAM_RUSER; if not, fail
3. Now see if the user in PAM_RUSER is named in /etc/ftpusers; if so, fail; if error or not found, succeed
Key Points
• Authentication is not cryptography
– You have to consider system components
• Passwords are here to stay
– They provide a basis for most forms of authentication
• Protocols are important
– They can make masquerading harder
• Authentication methods can be combined
– Example: PAM
Chapter 12: Design Principles
• Overview
• Principles
– Least Privilege
– Fail-Safe Defaults
– Economy of Mechanism
– Complete Mediation
– Open Design
– Separation of Privilege
– Least Common Mechanism
– Psychological Acceptability
Overview
• Simplicity
– Less to go wrong
– Fewer possible inconsistencies
– Easy to understand
• Restriction
– Minimize access
– Inhibit communication
Least Privilege
• A subject should be given only those privileges necessary to complete its task
– Function, not identity, controls
– Rights added as needed, discarded after use
– Minimal protection domain
Fail-Safe Defaults
• Default action is to deny access
• If an action fails, the system is as secure as when the action began
Economy of Mechanism
• Keep it as simple as possible
– KISS principle
• Simpler means less can go wrong
– And when errors occur, they are easier to understand and fix
• Applies to interfaces and interactions as well
Complete Mediation
• Check every access
• Usually done once, on first action
– UNIX: access checked on open, not checked thereafter
• If permissions change afterwards, unauthorized access may result
Open Design
• Security should not depend on secrecy of design or implementation
– Popularly misunderstood to mean that source code should be public
– “Security through obscurity”
– Does not apply to information such as passwords or cryptographic keys
Separation of Privilege
• Require multiple conditions to grant privilege
– Separation of duty
– Defense in depth
Least Common Mechanism
• Mechanisms should not be shared
– Information can flow along shared channels
– Covert channels
• Isolation
– Virtual machines
– Sandboxes
Psychological Acceptability
• Security mechanisms should not add to the difficulty of accessing a resource
– Hide complexity introduced by security mechanisms
– Ease of installation, configuration, use
– Human factors critical here
Key Points
• Principles of secure design underlie all security-related mechanisms
• They require:
– Good understanding of the goal of the mechanism and the environment in which it is to be used
– Careful analysis and design
– Careful implementation
Chapter 13: Representing Identity
• What is identity
• Multiple names for one thing
• Different contexts, environments
• Pseudonymity and anonymity
Overview
• Files and objects
• Users, groups, and roles
• Certificates and names
• Hosts and domains
• State and cookies
• Anonymity
Identity
• Principal: a unique entity
• Identity: specifies a principal
• Authentication: binding of a principal to a representation of identity internal to the system
– All access and resource allocation decisions assume the binding is correct
Files and Objects
• Identity depends on the system containing the object
• Different names for one object
– Human use, e.g., file name
– Process use, e.g., file descriptor or handle
– Kernel use, e.g., file allocation table entry, inode
More Names
• Different names for one context
– Human: aliases, relative vs. absolute path names
– Kernel: deleting a file identified by name can mean two things:
  • Delete the object that the name identifies
  • Delete the name given, and do not delete the actual object until all names have been deleted
• Semantics of names may differ
Example: Names and Descriptors
• Interpretation of UNIX file name
– Kernel maps name into an inode using an iterative procedure
– Same name can refer to different objects at different times without being deallocated
  • Causes race conditions
• Interpretation of UNIX file descriptor
– Refers to a specific inode
– Refers to the same inode from creation to deallocation
Example: Different Systems
• Object name must encode location or a pointer to location
– rsh, ssh style: host:object
– URLs: protocol://host/object
• Need not name the actual object
– rsh, ssh style may name a pointer (link) to the actual object
– URL may forward to another host
Users
• Exact representation tied to system
• Example: UNIX systems
– Login name: used to log in to system
  • Logging usually uses this name
– User identification number (UID): unique integer assigned to user
  • Kernel uses UID to identify users
  • One UID per login name, but multiple login names may have a common UID
Multiple Identities
• UNIX systems again
– Real UID: user identity at login, but changeable
– Effective UID: user identity used for access control
  • Setuid changes effective UID
– Saved UID: UID before last change of UID
  • Used to implement least privilege
  • Work with privileges, drop them, reclaim them later
– Audit/Login UID: user identity used to track original UID
  • Cannot be altered; used to tie actions to login identity
Groups
• Used to share access privileges
• First model: alias for a set of principals
– Processes assigned to groups
– Processes stay in those groups for their lifetime
• Second model: principals can change groups
– Rights due to the old group are discarded; rights due to the new group are added
Roles
• Group with membership tied to function
– Rights given are consistent with rights needed to perform the function
• Uses second model of groups
• Example: DG/UX
– User root does not have administration functionality
– System administrator privileges are in the sysadmin role
– Network administration privileges are in the netadmin role
– Users can assume either role as needed
Naming and Certificates
• Certificates issued to a principal
– Principal uniquely identified to avoid confusion
• Problem: names may be ambiguous
– Does the name “Matt Bishop” refer to:
  • The author of this book?
  • A programmer in Australia?
  • A stock car driver in Muncie, Indiana?
  • Someone else who was named “Matt Bishop”?
Disambiguating Identity
• Include ancillary information in names
– Enough to identify the principal uniquely
– X.509v3 Distinguished Names do this
• Example: X.509v3 Distinguished Name
/O=University of California/OU=Davis campus/OU=Department of Computer Science/CN=Matt Bishop/
refers to the Matt Bishop (CN is common name) in the Department of Computer Science (OU is organizational unit) on the Davis campus of the University of California (O is organization)
CAs and Policies
• Matt Bishop wants a certificate from Certs-from-Us
– How does Certs-from-Us know this is “Matt Bishop”?
  • The CA’s authentication policy says what type and strength of authentication is needed to satisfy the CA that this is, in fact, Matt Bishop
– Will Certs-from-Us issue this “Matt Bishop” a certificate once he is suitably authenticated?
  • The CA’s issuance policy says to which principals the CA will issue certificates
Example: Verisign CAs
• Class 1 CA issued certificates to individuals
– Authenticated principal by email address
  • Idea: certificate used for sending, receiving email with various security services at that address
• Class 2 CA issued certificates to individuals
– Authenticated by verifying user-supplied real name and address through an online database
  • Idea: certificate used for online purchasing
Example: Verisign CAs
• Class 3 CA issued certificates to individuals
– Authenticated by background check from an investigative service
  • Idea: higher level of assurance of identity than Class 1 and Class 2 CAs
• Fourth CA issued certificates to web servers
– Same authentication policy as Class 3 CA
  • Idea: consumers using these sites had a high degree of assurance the web site was not spoofed
Internet Certification Hierarchy
• Tree-structured arrangement of CAs
– Root is the Internet Policy Registration Authority, or IPRA
  • Sets policies all subordinate CAs must follow
  • Certifies subordinate CAs (called policy certification authorities, or PCAs), each of which has its own authentication and issuance policies
  • Does not issue certificates to individuals or organizations other than subordinate CAs
– PCAs issue certificates to ordinary CAs
  • Do not issue certificates to individuals or organizations other than subordinate CAs
– CAs issue certificates to organizations or individuals
Example
• University of Valmont issues certificates to students and staff
– Students must present valid reg cards (considered low assurance)
– Staff must present proof of employment and fingerprints, which are compared to those taken when the staff member was hired (considered high assurance)
UValmont and PCAs
• First PCA: requires subordinate CAs to make a good-faith effort to verify the identities of principals to whom they issue certificates
– Student authentication requirements meet this
• Second PCA: requires use of biometrics to verify identity
– Student authentication requirements do not meet this
– Staff authentication requirements do meet this
• UValmont establishes two CAs, one under each PCA above
UValmont and Certification Hierarchy
IPRA
├── PCA-1 (low assurance PCA)
│     └── UValmont Student CA
│           └── students
└── PCA-2 (high assurance PCA)
      └── UValmont Staff CA
            └── staff
Certificate Differences
• Student, staff certificates are signed using different private keys (for different CAs)
– Student’s signed by key corresponding to the low-assurance certificate signed by the first PCA
– Staff’s signed by key corresponding to the high-assurance certificate signed by the second PCA
• To see what policy was used to authenticate:
– Determine the CA signing the certificate, check its policy
– Also go to the PCA that signed the CA’s certificate
  • CAs are restricted by the PCA’s policy, but a CA can restrict itself further
Types of Certificates
• Organizational certificate
– Issued based on principal’s affiliation with an organization
– Example Distinguished Name:
/O=University of Valmont/OU=Computer Science Department/CN=Marsha Merteuille/
• Residential certificate
– Issued based on where the principal lives
– No affiliation with an organization implied
– Example Distinguished Name:
/C=US/SP=Louisiana/L=Valmont/PA=1 Express Way/CN=Marsha Merteuille/
Certificates for Roles
• Certificate tied to a role
• Example
– UValmont wants its comptroller to have a certificate
  • This way, she can sign contracts and documents digitally
– Distinguished Name:
/O=University of Valmont/OU=Office of the Big Bucks/RN=Comptroller
where “RN” is role name; note the individual using the certificate is not named, so there is no CN
Meaning of Identity
• Authentication validates identity
– CA specifies type of authentication
– If incorrect, CA may misidentify entity unintentionally
• Certificate binds external identity to crypto key and Distinguished Name
– Need confidentiality, integrity, anonymity
  • Recipient knows the same entity sent all messages, but not who that entity is
Persona Certificate
• Certificate with a meaningless Distinguished Name
– If the DN is
/C=US/O=Microsoft Corp./CN=Bill Gates/
the real subject may not (or may) be Mr. Gates
– Issued by CAs with persona policies, under a PCA whose policy supports this
• PGP certificates can use any name, so they provide this implicitly
Example
• Government requires all citizens with gene X to register
– Anecdotal evidence suggests people with this gene become criminals with probability 0.5
– Law to be made quietly, as no scientific evidence supports this, and the government wants no civil rights fuss
• Government employee wants to alert the media
– Government will deny the plan and change its approach
– Government employee will be fired, prosecuted
• Must notify the media anonymously
Example
• Employee gets a persona certificate, sends copy of plan to media
– Media knows the message was unchanged during transit, but not who sent it
– Government denies plan, changes it
• Employee sends copy of new plan signed using same certificate
– Media can tell it’s from the original whistleblower
– Media cannot trace who that whistleblower is
Trust
• Goal of certificate: bind the correct identity to the DN
• Question: what is the degree of assurance?
• X.509v3, certificate hierarchy
– Depends on the policy of the CA issuing the certificate
– Depends on how well the CA follows that policy
– Depends on how easily the required authentication can be spoofed
• Really, an estimate based on the above factors
Example: Passport Required
• DN has name on passport, number and issuer of passport
• What are the points of trust?
– Passport not forged and name on it not altered
– Passport issued to the person named in the passport
– Person presenting the passport is the person to whom it was issued
– CA has checked the passport and the individual using it
PGP Certificates
• Level of trust in signature field
• Four levels
– Generic (no trust assertions made)
– Persona (no verification)
– Casual (some verification)
– Positive (substantial verification)
• What do these mean?
– Meaning not given by OpenPGP standard
– Signer determines what level to use
– Casual to one signer may be positive to another
Identity on the Web
• Host identity
– Static identifiers: do not change over time
– Dynamic identifiers: change as a result of an event or the passing of time
• State and cookies
• Anonymity
– Anonymous email
– Anonymity: good or bad?
Host Identity
• Bound up with networking
– Not connected: pick any name
– Connected: one or more names depending on interfaces, network structure, context
• Name identifies principal
• Address identifies location of principal
– May be a virtual location (network segment) as opposed to a physical location (room 222)
Example
• Layered network
– MAC layer
  • Ethernet address: 00:05:02:6B:A8:21
  • AppleTalk address: network 51, node 235
– Network layer
  • IP address: 192.168.35.89
– Transport layer
  • Host name: cherry.orchard.chekhov.ru
Danger!
• Attacker spoofs identity of another host
– Protocols at and above the layer whose identity is spoofed will fail
– They rely on spoofed, and hence faulty, information
• Examples: spoofing an IP address, or the mapping between host names and IP addresses
Domain Name Server
• Maps transport identifiers (host names) to network identifiers (host addresses)
– Forward records: host names → IP addresses
– Reverse records: IP addresses → host names
• Weak authentication
– Not cryptographically based
– Various techniques used, such as reverse domain name lookup
Reverse Domain Name Lookup
• Validate identity of peer (host) name
– Get IP address of peer
– Get associated host name via DNS
– Get IP addresses associated with that host name from DNS
– If the first IP address is in this set, accept the name as correct; otherwise, reject it as spoofed
• If DNS is corrupted, this won’t work
Dynamic Identifiers
• Assigned to principals for a limited time
– Server maintains pool of identifiers
– Client contacts server using a local identifier
  • Only client and server need to know this identifier
– Server sends client a global identifier
  • Client uses the global identifier in other contexts, for example to talk to other hosts
  • Server notifies intermediate hosts of the new client/global identifier association
Example: DHCP
• DHCP server has pool of IP addresses• Laptop sends DHCP server its MAC address,
requests IP address– MAC address is local identifier– IP address is global identifier
• DHCP server sends unused IP address– Also notifies infrastructure systems of the association
between laptop and IP address• Laptop accepts IP address, uses that to
communicate with hosts other than server
Example: Gateways
• Laptop wants to access host on another network– Laptop’s address is 10.1.3.241
• Gateway assigns legitimate address to internal address– Say IP address is 101.43.21.241– Gateway rewrites all outgoing, incoming packets
appropriately– Invisible to both laptop, remote peer
• Internet protocol NAT works this way
Weak Authentication
• Static: host/name binding fixed over time• Dynamic: host/name binding varies over
time– Must update reverse records in DNS
• Otherwise, the reverse lookup technique fails
– Cannot rely on binding remaining fixed unless you know the period of time over which the binding persists
DNS Security Issues
• Trust is that name/IP address binding is correct
• Goal of attacker: associate incorrectly an IP address with a host name– Assume attacker controls name server, or can
intercept queries and send responses
Attacks
• Change records on server• Add extra record to response, giving incorrect
name/IP address association– Called “cache poisoning”
• Attacker sends victim request that must be resolved by asking attacker– Attacker responds with answer plus two records for
address spoofing (1 forward, 1 reverse)– Called “ask me”
Cookies
• Token containing information about state of transaction on network
– Usual use: refers to state of interaction between web browser, server
– Idea is to minimize storage requirements of
servers, and put information on clients• Client sends cookies to server
Some Fields in Cookies
• name, value: name has given value• expires: how long cookie valid
– Expired cookies discarded, not sent to server– If omitted, cookie deleted at end of session
• domain: domain for which cookie intended– Consists of last n fields of domain name of server– Must have at least one “.” in it
• secure: send only over secured (SSL, HTTPS) connection
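For illustration, the fields above map onto Python's standard `http.cookies` module roughly as follows; the cookie name, value, and dates are made up to match the shopping-cart example later.

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header carrying the fields described above.
c = SimpleCookie()
c["bought"] = "BK=234&BK=8753"                           # name, value
c["bought"]["domain"] = ".books.com"                     # last fields of server's domain
c["bought"]["expires"] = "Fri, 05 Nov 2004 17:00:00 GMT" # omitted => session cookie
c["bought"]["secure"] = True                             # only sent over SSL/HTTPS

header = c.output(header="Set-Cookie:")
print(header)
```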
Example
• Caroline puts 2 books in shopping cart at books.com
– Cookie: name bought, value BK=234&BK=8753,
domain .books.com
• Caroline looks at other books, but decides to buy only those– She goes to the purchase page to order them
• Server requests cookie, gets above– From cookie, determines books in shopping cart
Who Can Get the Cookies?
• Web browser can send any cookie to a web server– Even if the cookie’s domain does not match that of the
web server– Usually controlled by browser settings
• Web server can only request cookies for its domain– Cookies need not have been sent by that browser
Where Did the Visitor Go?
• Server books.com sends Caroline 2 cookies– First described earlier– Second has name “id”, value “books.com”, domain
“adv.com”• Advertisements at books.com include some from
site adv.com– When drawing page, Caroline’s browser requests
content for ads from server “adv.com”– Server requests cookies from Caroline’s browser– By looking at value, server can tell Caroline visited
“books.com”
Anonymity on the Web
• Recipients can determine origin of incoming packet– Sometimes not desirable
• Anonymizer: a site that hides origins of connections– Usually a proxy server
• User connects to anonymizer, tells it destination• Anonymizer makes connection, sends traffic in both directions
– Destination host sees only anonymizer
Example: anon.penet.fi
• Offered anonymous email service– Sender sends letter to it, naming another destination– Anonymizer strips headers, forwards message
• Assigns an ID (say, 1234) to sender, records real sender and ID in database
• Letter delivered as if from an1234@anon.penet.fi
– Recipient replies to that address• Anonymizer strips headers, forwards message as indicated by
database entry
Problem
• Anonymizer knows who sender, recipient really are
• Called pseudo-anonymous remailer or pseudonymous remailer– Keeps mappings of anonymous identities and
associated identities• If you can get the mappings, you can figure
out who sent what
More anon.penet.fi
• Material claimed to be copyrighted sent through site
• Finnish court directed owner to reveal mapping so plaintiffs could determine sender
• Owner appealed, subsequently shut down site
Cypherpunk Remailer
• Remailer that deletes header of incoming message, forwards body to destination
• Also called Type I Remailer• No record kept of association between sender
address, remailer’s user name– Prevents tracing, as happened with anon.penet.fi
• Usually used in a chain, to obfuscate trail– For privacy, body of message may be enciphered
Cypherpunk Remailer Message
• Encipher message
• Add destination header
• Add header for remailer n
…
• Add header for remailer 2

[Diagram: nested message, innermost to outermost:]
      Hi, Alice,
      It’s SQUEAMISH OSSIFRAGE
      Bob
    send to Alice
  send to remailer 2
send to remailer 1
Weaknesses
• Attacker monitoring entire network– Observes in, out flows of remailers– Goal is to associate incoming, outgoing messages
• If messages are cleartext, trivial– So assume all messages enciphered
• So use traffic analysis!– Used to determine information based simply on
movement of messages (traffic) around the network
Attacks
• If remailer forwards message before next message arrives, attacker can match them up– Hold messages for some period of time, greater than the
message interarrival time– Randomize order of sending messages, waiting until at
least n messages are ready to be forwarded• Note: attacker can force this by sending n–1 messages into
queue
Attacks
• As messages forwarded, headers stripped so message size decreases– Pad message with garbage at each step,
instructing next remailer to discard it• Replay message, watch for spikes in
outgoing traffic– Remailer can’t forward same message more
than once
Mixmaster Remailer
• Cypherpunk remailer that handles only enciphered mail and pads (or fragments) messages to fixed size before sending them– Also called Type II Remailer– Designed to hinder attacks on Cypherpunk
remailers• Messages uniquely numbered• Fragments reassembled only at last remailer for
sending to recipient
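The fixed-size padding/fragmenting countermeasure above can be sketched in a few lines. The 10 KB packet size is an assumption for illustration, not the real Mixmaster format.

```python
PACKET_SIZE = 10240  # assumed fixed packet size, not Mixmaster's actual value

def to_packets(body: bytes) -> list:
    """Split a message into fixed-size packets, padding the last one,
    so every packet leaving the remailer has the same length."""
    packets = []
    for i in range(0, len(body), PACKET_SIZE):
        chunk = body[i:i + PACKET_SIZE]
        packets.append(chunk.ljust(PACKET_SIZE, b"\x00"))  # pad with filler
    return packets or [b"\x00" * PACKET_SIZE]              # empty message still sends one packet

pkts = to_packets(b"short note")
print(len(pkts), len(pkts[0]))   # one packet, exactly PACKET_SIZE bytes
```

Because every packet is the same size, an observer cannot use message length to match traffic into and out of a remailer.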
Mixmaster Remailer Message

[Packet layout, outermost layer first:]
• header enciphered with RSA for remailer #1:
  – remailer #2 address; packet ID: 135; Triple DES key: 1
• body enciphered with Triple DES key #1, containing a header enciphered with RSA for remailer #2:
  – final hop address; packet ID: 168; message ID: 7839; Triple DES key: 2; random garbage
• innermost body enciphered with Triple DES key #2:
  – recipient’s address; any mail headers to add; message; padding if needed
Anonymity Itself
• Some purposes for anonymity– Removes personalities from debate– With appropriate choice of pseudonym, shapes
course of debate by implication– Prevents retaliation
• Are these benefits or drawbacks?– Depends on society, and who is involved
Privacy
• Anonymity protects privacy by obstructing amalgamation of individual records
• Important, because amalgamation poses 3 risks:– Incorrect conclusions from misinterpreted data– Harm from erroneous information– Not being let alone
• Also hinders monitoring to deter or prevent crime• Conclusion: anonymity can be used for good or ill
– Right to remain anonymous entails responsibility to use that right wisely
Key Points
• Identity specifies a principal (unique entity)– Same principal may have many different identities
• Function (role)• Associated principals (group)• Individual (user/host)
– These may vary with view of principal• Different names at each network layer, for example
– Anonymity possible; may or may not be desirable• Power to remain anonymous includes responsibility to use that
power wisely
Chapter 14: Access Control Mechanisms
• Access control lists
• Capabilities
• Locks and keys
• Ring-based access control
• Propagated access control lists
Overview
• Access control lists
• Capability lists
• Locks and keys
• Ring-based access control
• Propagated access control lists
Access Control Lists

• Columns of access control matrix

            file1   file2   file3
  Andy      rx      r       rwo
  Betty     rwxo    r
  Charlie   rx      rwo     w

ACLs:
• file1: { (Andy, rx) (Betty, rwxo) (Charlie, rx) }
• file2: { (Andy, r) (Betty, r) (Charlie, rwo) }
• file3: { (Andy, rwo) (Charlie, w) }
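The column view can be shown directly in a few lines of Python (a sketch, with rights as plain strings):

```python
# The access control matrix from the slide, one row per subject.
matrix = {
    "Andy":    {"file1": "rx",   "file2": "r",   "file3": "rwo"},
    "Betty":   {"file1": "rwxo", "file2": "r"},
    "Charlie": {"file1": "rx",   "file2": "rwo", "file3": "w"},
}

def acl(obj):
    """Column of the matrix: who can access obj, and how."""
    return {subj: rights[obj] for subj, rights in matrix.items() if obj in rights}

print(acl("file3"))   # {'Andy': 'rwo', 'Charlie': 'w'}
```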
Default Permissions
• Normal: if not named, no rights over file– Principle of Fail-Safe Defaults
• If many subjects, may use groups or wildcards in ACL– UNICOS: entries are (user, group, rights)
• If user is in group, has rights over file• ‘*’ is wildcard for user, group
– (holly, *, r): holly can read file regardless of her group– (*, gleep, w): anyone in group gleep can write file
Abbreviations
• ACLs can be long … so combine users
– UNIX: 3 classes of users: owner, group, rest

    rwx     rwx     rwx
    owner   group   rest

– Ownership assigned based on creating process
• Some systems: if directory has setgid permission, file group owned by group of directory (SunOS, Solaris)
ACLs + Abbreviations
• Augment abbreviated lists with ACLs– Intent is to shorten ACL
• ACLs override abbreviations– Exact method varies
• Example: IBM AIX– Base permissions are abbreviations, extended permissions are
ACLs with user, group– ACL entries can add rights, but on deny, access is denied
Permissions in IBM AIX

attributes:
base permissions
  owner(bishop): rw-
  group(sys):    r--
  others:        ---
extended permissions enabled
  specify  rw-  u:holly
  permit   -w-  u:heidi, g=sys
  permit   rw-  u:matt
  deny     -w-  u:holly, g=faculty
ACL Modification
• Who can do this?– Creator is given own right that allows this– System R provides a grant modifier (like a
copy flag) allowing a right to be transferred, so ownership not needed
• Transferring right to another modifies ACL
Privileged Users
• Do ACLs apply to privileged users (root)?– Solaris: abbreviated lists do not, but full-blown
ACL entries do– Other vendors: varies
Groups and Wildcards
• Classic form: no; in practice, usually
– AIX: base perms gave group sys read only; the line
      permit -w- u:heidi, g=sys
  adds write permission for heidi when in that group
– UNICOS:
  • holly : gleep : r
    – user holly in group gleep can read file
  • holly : * : r
    – user holly in any group can read file
  • * : gleep : r
    – any user in group gleep can read file
Conflicts
• Deny access if any entry would deny access
– AIX: if any entry denies access, regardless of rights
given so far, access is denied• Apply first entry matching subject
– Cisco routers: run packet through access control rules (ACL entries) in order; on a match, stop, and forward the packet; if no matches, deny
• Note default is deny so honors principle of fail-safe defaults
Handling Default Permissions
• Apply ACL entry, and if none use defaults– Cisco router: apply matching access control
rule, if any; otherwise, use default rule (deny)• Augment defaults with those in the
appropriate ACL entry– AIX: extended permissions augment base
permissions
Revocation Question
• How do you remove subject’s rights to a file?– Owner deletes subject’s entries from ACL, or
rights from subject’s entry in ACL• What if ownership not involved?
– Depends on system– System R: restore protection state to what it
was before right was given• May mean deleting descendent rights too …
Windows NT ACLs
• Different sets of rights– Basic: read, write, execute, delete, change permission, take
ownership– Generic: no access, read (read/execute), change
(read/write/execute/delete), full control (all), special access (assign any of the basics)
– Directory: no access, read (read/execute files in directory), list, add, add and read, change (create, add, read, execute, write files; delete subdirectories), full control, special access
Accessing Files
• User not in file’s ACL nor in any group named in file’s ACL: deny access
• ACL entry denies user access: deny access• Take union of rights of all ACL entries
giving user access: user has this set of rights over file
Capability Lists

• Rows of access control matrix

            file1   file2   file3
  Andy      rx      r       rwo
  Betty     rwxo    r
  Charlie   rx      rwo     w

C-Lists:
• Andy: { (file1, rx) (file2, r) (file3, rwo) }
• Betty: { (file1, rwxo) (file2, r) }
• Charlie: { (file1, rx) (file2, rwo) (file3, w) }
Semantics
• Like a bus ticket– Mere possession indicates rights that subject has over
object– Object identified by capability (as part of the token)
• Name may be a reference, location, or something else– Architectural construct in capability-based addressing;
this just focuses on protection aspects• Must prevent process from altering capabilities
– Otherwise subject could change rights encoded in capability or object to which they refer
Implementation
• Tagged architecture– Bits protect individual words
• B5700: tag was 3 bits and indicated how word was to be treated (pointer, type, descriptor, etc.)
• Paging/segmentation protections– Like tags, but put capabilities in a read-only segment or
page• CAP system did this
– Programs must refer to them by pointers• Otherwise, program could use a copy of the capability—which
it could modify
Implementation (con’t)
• Cryptography– Associate with each capability a cryptographic checksum
enciphered using a key known to OS– When process presents capability, OS validates checksum– Example: Amoeba, a distributed capability-based system
• Capability is (name, creating_server, rights, check_field) and is given to owner of object
• check_field is 48-bit random number; also stored in table corresponding to creating_server
• To validate, system compares check_field of capability with that stored in creating_server table
• Vulnerable if capability disclosed to another process
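A sketch of Amoeba-style validation based on the description above: the server keeps the random check field in its own table and compares it when the capability is presented. Names and structure here are assumptions for illustration, not Amoeba's real wire format.

```python
import secrets

server_table = {}   # object name -> 48-bit check field, kept by the server

def make_capability(name, rights):
    """Create an object's capability and record its check field."""
    check = secrets.randbits(48)
    server_table[name] = check
    return (name, "server1", rights, check)   # given to the object's owner

def validate(cap):
    """Server-side check: does the presented check field match the table?"""
    name, _server, _rights, check = cap
    return server_table.get(name) == check

cap = make_capability("obj9", "rw")
print(validate(cap))                              # True
print(validate(("obj9", "server1", "rw", cap[3] ^ 1)))  # wrong check field: False
```

The scheme stands or falls on keeping the check field secret, which is exactly the vulnerability the slide notes: anyone who sees the capability can replay it.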
Amplifying
• Allows temporary increase of privileges• Needed for modular programming
– Module pushes, pops data onto stack
      module stack … end module.
– Variable x declared of type stack
      var x: stack;
– Only stack module can alter, read x
• So process doesn’t get capability, but needs it when x is referenced—a problem!
– Solution: give process the required capabilities while it is in module
Examples
• HYDRA: templates– Associated with each procedure, function in module– Adds rights to process capability while the procedure or function is
being executed– Rights deleted on exit
• Intel iAPX 432: access descriptors for objects– These are really capabilities– 1 bit in this controls amplification– When ADT constructed, permission bits of type control object set
to what procedure needs– On call, if amplification bit in this permission is set, the above bits
or’ed with rights in access descriptor of object being passed
Revocation
• Scan all C-lists, remove relevant capabilities– Far too expensive!
• Use indirection– Each object has entry in a global object table– Names in capabilities name the entry, not the object
• To revoke, zap the entry in the table• Can have multiple entries for a single object to allow control of
different sets of rights and/or groups of users for each object– Example: Amoeba: owner requests server change random number
in server table• All capabilities for that object now invalid
Limits

• Problems if you don’t control copying of capabilities

[Diagram: file lough (Low); Lou (Low) holds a C-List with capability rw*lough; Heidi (High) holds a C-List with capability r*lough. Heidi copies Lou’s rw*lough capability into her own C-List.]

The capability to write file lough is Low, and Heidi is High, so she reads (copies) the capability; now she can write to a Low file, violating the *-property!
Remedies
• Label capability itself– Rights in capability depends on relation between its
compartment and that of object to which it refers• In example, as as capability copied to High, and High
dominates object compartment (Low), write right removed
• Check to see if passing capability violates security properties– In example, it does, so copying refused
• Distinguish between “read” and “copy capability”– Take-Grant Protection Model does this (“read”, “take”)
ACLs vs. Capabilities
• Both theoretically equivalent; consider 2 questions:
1. Given a subject, what objects can it access, and how?
2. Given an object, what subjects can access it, and how?
– ACLs answer second easily; C-Lists, first
• Suggested that the second question, which in the past has been of most interest, is the reason ACL-based systems more common than capability-based systems– As first question becomes more important (in incident
response, for example), this may change
Locks and Keys
• Associate information (lock) with object, information (key) with subject– Latter controls what the subject can access and how– Subject presents key; if it corresponds to any of the locks on the
object, access granted
• This can be dynamic– ACLs, C-Lists static and must be manually changed– Locks and keys can change based on system constraints, other
factors (not necessarily manual)
Cryptographic Implementation
• Enciphering key is lock; deciphering key is key
– Encipher object o; store Ek(o)
– Use subject’s key k′ to compute Dk′(Ek(o))
– Any of n can access o: store o′ = (E1(o), …, En(o))
– Requires consent of all n to access o: store o′ = E1(E2(…(En(o))…))
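Both storage schemes can be sketched with a toy cipher. The keyed XOR keystream below stands in for E_k/D_k purely to illustrate the structure; it is not real encryption, and a real system would use a proper symmetric cipher.

```python
import hashlib

def E(key: bytes, msg: bytes) -> bytes:
    """Toy cipher: XOR msg with a SHA-256-derived keystream (NOT secure)."""
    stream = hashlib.sha256(key).digest() * (len(msg) // 32 + 1)
    return bytes(m ^ s for m, s in zip(msg, stream))

D = E  # XOR is its own inverse, so decryption is the same operation

o = b"secret object"
keys = [b"k1", b"k2", b"k3"]

# "Any of n": store one ciphertext per key; any single key opens it.
any_of_n = [E(k, o) for k in keys]
assert D(keys[1], any_of_n[1]) == o

# "All n must consent": nest the encryptions; every key is needed.
all_of_n = o
for k in keys:
    all_of_n = E(k, all_of_n)
recovered = all_of_n
for k in reversed(keys):
    recovered = D(k, recovered)
print(recovered == o)   # True
```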
Example: IBM
• IBM 370: process gets access key; pages get storage key and fetch bit– Fetch bit clear: read access only– Fetch bit set, access key 0: process can write to
(any) page– Fetch bit set, access key matches storage key:
process can write to page– Fetch bit set, access key non-zero and does not
match storage key: no access allowed
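The four rules above form a small decision table; a sketch as a function (the exact rights strings are an assumed encoding, and the "read" right accompanying a write is an assumption from the description):

```python
def access(fetch_bit, access_key, storage_key):
    """Rights a process has over a page under the IBM 370 rules:
    'r' (read only), 'rw' (read/write), or '' (no access)."""
    if not fetch_bit:
        return "r"                    # fetch bit clear: read access only
    if access_key == 0:
        return "rw"                   # access key 0: can write any page
    if access_key == storage_key:
        return "rw"                   # keys match: write allowed
    return ""                         # non-zero, mismatched keys: no access

print(access(False, 3, 7))   # 'r'
print(access(True, 3, 7))    # '' (no access)
```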
Example: Cisco Router
• Dynamic access control lists

access-list 100 permit tcp any host 10.1.1.1 eq telnet
access-list 100 dynamic test timeout 180 permit ip any host \
    10.1.2.3 time-range my-time
time-range my-time
periodic weekdays 9:00 to 17:00
line vty 0 2
login local
autocommand access-enable host timeout 10
• Limits external access to 10.1.2.3 to 9AM–5PM– Adds temporary entry for connecting host once user
supplies name, password to router– Connections good for 180 minutes
• Drops access control entry after that
Type Checking
• Lock is type, key is operation– Example: UNIX system call write can’t work
on directory object but does work on file– Example: split I&D space of PDP-11– Example: countering buffer overflow attacks on
the stack by putting stack on non-executable pages/segments
• Then code uploaded to buffer won’t execute• Does not stop other forms of this attack, though …
More Examples
• LOCK system:
– Compiler produces “data”
– Trusted process must change this type to “executable” before program can be executed
• Sidewinder firewall– Subjects assigned domain, objects assigned type
• Example: ingress packets get one type, egress packets another– All actions controlled by type, so ingress packets cannot
masquerade as egress packets (and vice versa)
Ring-Based Access Control
[Diagram: concentric rings numbered 0, 1, …, n; privileges increase toward ring 0]

• Process (segment) accesses another segment
  • Read
  • Execute
• Gate is an entry point for calling segment
• Rights:
  • r read
  • w write
  • a append
  • e execute
Reading/Writing/Appending
• Procedure executing in ring r
• Data segment with access bracket (a1, a2)
• Mandatory access rule
– r ≤ a1: allow access
– a1 < r ≤ a2: allow r access; not w, a access
– a2 < r: deny all access
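The mandatory rule for data segments transcribes directly into code; a sketch:

```python
def data_access(r, a1, a2):
    """Rights of a procedure in ring r over a data segment with
    access bracket (a1, a2)."""
    if r <= a1:
        return {"r", "w", "a"}   # full read/write/append access
    if r <= a2:
        return {"r"}             # read only
    return set()                 # outside the bracket: no access

print(data_access(3, 1, 4))   # {'r'}
```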
Executing
• Procedure executing in ring r
• Call procedure in segment with access bracket (a1, a2) and call bracket (a2, a3)
– Often written (a1, a2, a3)
• Mandatory access rule
– r < a1: allow access; ring-crossing fault
– a1 ≤ r ≤ a2: allow access; no ring-crossing fault
– a2 < r ≤ a3: allow access if through valid gate
– a3 < r: deny all access
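The execute rule with the call bracket can be sketched the same way; the result strings are an assumed encoding for illustration:

```python
def call_check(r, a1, a2, a3, through_gate=False):
    """Apply the mandatory access rule for a call from ring r to a
    procedure with access bracket (a1, a2) and call bracket (a2, a3)."""
    if r < a1:
        return "allowed (ring-crossing fault)"
    if r <= a2:
        return "allowed"
    if r <= a3:
        return "allowed" if through_gate else "denied (gate required)"
    return "denied"

print(call_check(5, 2, 4, 6))                     # gate required
print(call_check(5, 2, 4, 6, through_gate=True))  # allowed via valid gate
```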
Versions
• Multics– 8 rings (from 0 to 7)
• Digital Equipment’s VAX– 4 levels of privilege: user, monitor, executive,
kernel• Older systems
– 2 levels of privilege: user, supervisor
PACLs
• Propagated Access Control List
– Implements ORCON
• Creator kept with PACL, copies– Only owner can change PACL– Subject reads object: object’s PACL associated with
subject– Subject writes object: subject’s PACL associated with
object• Notation: PACLs means s created object; PACL(e)
is PACL associated with entity e
Multiple Creators

• Betty reads Ann’s file dates
    PACL(Betty) = PACL_Betty ∩ PACL(dates)
                = PACL_Betty ∩ PACL_Ann
• Betty creates file dc
    PACL(dc) = PACL_Betty ∩ PACL_Ann
• PACL_Betty allows Char to access objects, but PACL_Ann does not; both allow June to access objects
– June can read dc
– Char cannot read dc
Key Points
• Access control mechanisms provide controls for users accessing files
• Many different forms– ACLs, capabilities, locks and keys
• Type checking too
– Ring-based mechanisms (Mandatory)– PACLs (ORCON)
Chapter 15: Information Flow
• Definitions
• Compiler-based mechanisms
• Execution-based mechanisms
• Examples
Overview
• Basics and background
• Compiler-based mechanisms
• Execution-based mechanisms
• Examples
– Security Pipeline Interface
– Secure Network Server Mail Guard
Basics
• Bell-LaPadula Model embodies information flow policy
– Given compartments A, B, info can flow from A to B iff B dom A
• Variables x, y assigned compartments x, y as well as values
– If x = A and y = B, and A dom B, then x := y allowed but not y := x
Information Flow
• Idea: info flows from x to y as a result of a sequence of commands c if you can deduce information about x before c from the value in y after c
Example 1
• Command is x := y + z; where:– 0 ≤ y ≤ 7, equal probability– z = 1 with prob. 1/2, z = 2 or 3 with prob. 1/4
each• If you know final value of x, initial value of
y can have at most 3 values, so information flows from y to x
Example 2
• Command is– if x = 1 then y := 0 else y := 1;
where:– x, y equally likely to be either 0 or 1
• But if x = 1 then y = 0, and vice versa, so value of y depends on x
• So information flowed from x to y
Implicit Flow of Information
• Information flows from x to y without an explicit assignment of the form y := f(x)– f(x) an arithmetic expression with variable x
• Example from previous slide:– if x = 1 then y := 0
else y := 1;• So must look for implicit flows of
information to analyze program
Notation
• x means class of x– In Bell-LaPadula based system, same as “label
of security compartment to which x belongs”• x ≤ y means “information can flow from an
element in class of x to an element in class of y– Or, “information with a label placing it in class
x can flow into class y”
Compiler-Based Mechanisms
• Detect unauthorized information flows in a program during compilation
• Analysis not precise, but secure– If a flow could violate policy (but may not), it is
unauthorized– No unauthorized path along which information could
flow remains undetected• Set of statements certified with respect to
information flow policy if flows in set of statements do not violate that policy
Example
if x = 1 then y := a;
else y := b;

• Info flows from x and a to y, or from x and b to y
• Certified only if x ≤ y and a ≤ y and b ≤ y
– Note flows for both branches must be true unless compiler can determine that one branch will never be taken
Declarations
• Notation:
    x: int class { A, B }
means x is an integer variable with security class at least lub{ A, B }, so lub{ A, B } ≤ x
• Distinguished classes Low, High– Constants are always Low
Input Parameters
• Parameters through which data passed into procedure
• Class of parameter is class of actual argument
ip: type class { ip }
Output Parameters
• Parameters through which data passed out of procedure– If data passed in, called input/output parameter
• As information can flow from input parameters to output parameters, class must include this:
op: type class { r1, …, rn}
where ri is class of ith input or input/output argument
Example
proc sum(x: int class { A };
         var out: int class { A, B });
begin
  out := out + x;
end;

• Require x ≤ out and out ≤ out
Array Elements
• Information flowing out:… := a[i]
Value of i, a[i] both affect result, so class is lub{ a[i], i }
• Information flowing in:a[i] := …
• Only value of a[i] affected, so class is a[i]
Assignment Statements
x := y + z;
• Information flows from y, z to x, so this requires lub{ y, z } ≤ x
More generally:
y := f(x1, …, xn)
• the relation lub{ x1, …, xn } ≤ y must hold
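The certification check for an assignment can be sketched over a simple linear lattice Low < A < High (an assumed ordering for illustration; the real analysis works over any lattice of classes):

```python
LEVEL = {"Low": 0, "A": 1, "High": 2}   # assumed linear lattice

def lub(classes):
    """Least upper bound in a linear lattice: the highest class."""
    return max(classes, key=LEVEL.get)

def certify_assignment(target_class, operand_classes):
    """y := f(x1,...,xn) is secure iff lub{x1,...,xn} <= y."""
    return LEVEL[lub(operand_classes)] <= LEVEL[target_class]

print(certify_assignment("High", ["Low", "A"]))  # True
print(certify_assignment("Low", ["A"]))          # False: would leak A into Low
```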
Compound Statements
x := y + z; a := b * c – x;
• First statement: lub{ y, z } ≤ x
• Second statement: lub{ b, c, x } ≤ a
• So, both must hold (i.e., be secure)
More generally:
S1; … Sn;
• Each individual Si must be secure
Conditional Statements

if x + y < z then a := b else d := b*c – x; end
• The statement executed reveals information about x, y, z, so lub{ x, y, z } ≤ glb{ a, d }
More generally:if f(x1, …, xn) then S1 else S2; end
• S1, S2 must be secure• lub{ x1, …, xn } ≤
glb{y | y target of assignment in S1, S2 }
Iterative Statements

while i < n do begin a[i] := b[i]; i := i + 1; end

• Same ideas as for “if”, but must terminate
More generally:
while f(x1, …, xn) do S;
• Loop must terminate
• S must be secure
• lub{ x1, …, xn } ≤ glb{ y | y target of assignment in S }
Goto Statements
• No assignments– Hence no explicit flows
• Need to detect implicit flows• Basic block is sequence of statements that
have one entry point and one exit point– Control in block always flows from entry point
to exit point
Example Programproc tm(x: array[1..10][1..10] of intclass {x};
vary: array[1..10][1..10] of intclass {y});
vari, j: int{i};
beginb1i:= 1;
b2 L2: if i> 10 gotoL7;b3j:= 1;
b4 L4: if j> 10 then goto L6;
b5 y[j][i] := x[i][j]; j:= j+ 1; goto L4;b6 L6: i:= i+ 1; goto L2;
b7 L7:end;
Flow of Control
[Control flow graph: b1 → b2; b2 → b7 when i > n, b2 → b3 when i ≤ n; b3 → b4; b4 → b6 when j > n, b4 → b5 when j ≤ n; b5 → b4; b6 → b2]
IFDs
• Idea: when two paths out of basic block, implicit flow occurs
– Because information says which path to take
• When paths converge, either:
– Implicit flow becomes irrelevant; or
– Implicit flow becomes explicit
• Immediate forward dominator of basic block b (written IFD(b)) is first basic block lying on all paths of execution passing through b
IFD Example
• In previous procedure:
– IFD(b1) = b2 one path
– IFD(b2) = b7 b2→b7 or b2→b3→b4→b6→b2→b7
– IFD(b3) = b4 one path
– IFD(b4) = b6 b4→b6 or b4→b5→b6
– IFD(b5) = b4 one path
– IFD(b6) = b2 one path
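IFD(b) is the immediate forward (post)dominator, so it can be computed mechanically from the control-flow graph. A sketch using the standard iterative postdominator dataflow algorithm (the code is illustrative, not from the slides):

```python
# Compute immediate forward dominators (immediate postdominators)
# of the tm() control-flow graph by iterating to a fixed point.
def postdominators(succ, exit_node):
    nodes = set(succ)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in succ[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

def ifd(succ, exit_node):
    pdom = postdominators(succ, exit_node)
    # the nearest strict postdominator is the one whose own
    # postdominator set is largest (it is below all the others)
    return {n: max(pdom[n] - {n}, key=lambda s: len(pdom[s]))
            for n in succ if n != exit_node}

succ = {"b1": ["b2"], "b2": ["b3", "b7"], "b3": ["b4"],
        "b4": ["b5", "b6"], "b5": ["b4"], "b6": ["b2"], "b7": []}
print(ifd(succ, "b7"))   # matches the IFD values listed above
```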
Requirements
• Bi is set of basic blocks along an execution path from bi to IFD(bi)
– Analogous to statements in conditional statement
• xi1, …, xin variables in expression selecting which execution path containing basic blocks in Bi used
– Analogous to conditional expression
• Requirements for security:
– All statements in each basic block are secure
– lub{ xi1, …, xin } ≤ glb{ y | y target of assignment in Bi }
Example of Requirements
• Within each basic block:
b1: Low ≤ i   b3: Low ≤ j   b6: lub{ Low, i } ≤ i
b5: lub{ x[i][j], i, j } ≤ y[j][i]; lub{ Low, j } ≤ j
– Combining, lub{ x[i][j], i, j } ≤ y[j][i]
– From declarations, true when lub{ x, i } ≤ y
• B2 = {b3, b4, b5, b6}
– Assignments to i, j, y[j][i]; conditional is i ≤ 10
– Requires i ≤ glb{ i, j, y[j][i] }
– From declarations, true when i ≤ y
Example (continued)
• B4 = { b5 }
– Assignments to j, y[j][i]; conditional is j ≤ 10
– Requires j ≤ glb{ j, y[j][i] }
– From declarations, means i ≤ y
• Result:
– Combine lub{ x, i } ≤ y; i ≤ y; i ≤ y
– Requirement is lub{ x, i } ≤ y
Procedure Calls
tm(a, b);
From previous slides, to be secure, lub{ x, i } ≤ y must hold
• In call, x corresponds to a, y to b
• Means that lub{ a, i } ≤ b, so in particular a ≤ b
More generally:
proc pn(i1, …, im: int; var o1, …, on: int) begin S end;
• S must be secure
• For all j and k, if ij ≤ ok, then xj ≤ yk
• For all j and k, if oj ≤ ok, then yj ≤ yk
Exceptions
proc copy(x: int class { x };
          var y: int class Low);
var sum: int class { x };
    z: int class Low;
begin
  y := z := sum := 0;
  while z = 0 do begin
    sum := sum + x;
    y := y + 1;
  end
end
Exceptions (cont)
• When sum overflows, integer overflow trap
– Procedure exits
– Value of y is about MAXINT/x
– Info flows from x to y, but x ≤ y never checked
• Need to handle exceptions explicitly
– Idea: on integer overflow, terminate loop
on integer_overflow_exception sum do z := 1;
– Now info flows from sum to z, meaning sum ≤ z
– This is false (sum = { x } dominates z = Low)
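The leak is easy to reproduce concretely. A Python sketch (simulating a fixed word size with an explicit overflow check, since Python integers never overflow; MAXINT here is an assumed 16-bit bound for illustration):

```python
MAXINT = 2**15 - 1   # assumed word size, for illustration only

def copy(x):
    """Mimics the slide's procedure: y counts loop iterations until
    the running sum 'overflows', so y ends up near MAXINT / x."""
    y = total = 0
    z = 0
    while z == 0:
        total += x
        if total > MAXINT:      # stands in for the integer overflow trap
            break
        y += 1
    return y

# The Low-class y reveals the High-class x:
for x in (3, 7, 100):
    assert abs(copy(x) - MAXINT // x) <= 1
```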
Infinite Loops
proc copy(x: int 0..1 class { x };
          var y: int 0..1 class Low);
begin
  y := 0;
  while x = 0 do
    (* nothing *);
  y := 1;
end
• If x = 0 initially, infinite loop
• If x = 1 initially, terminates with y set to 1
• No explicit flows, but implicit flow from x to y
Semaphores
Use these constructs:
wait(x): if x = 0 then block until x > 0; x := x – 1;
signal(x): x := x + 1;
– x is a semaphore, a shared variable
– Both executed atomically
Consider statement
wait(sem); x := x + 1;
• Implicit flow from sem to x
– Certification must take this into account!
Flow Requirements
• Semaphores in signal irrelevant
– Don’t affect information flow in that process
• Statement S is a wait
– shared(S): set of shared variables read
• Idea: information flows out of variables in shared(S)
– fglb(S): glb of assignment targets following S
– So, requirement is shared(S) ≤ fglb(S)
• begin S1; … Sn end
– All Si must be secure
– For all i, shared(Si) ≤ fglb(Si)
Example
begin
  x := y + z;     (* S1 *)
  wait(sem);      (* S2 *)
  a := b * c – x; (* S3 *)
end
• Requirements:
– lub{ y, z } ≤ x
– lub{ b, c, x } ≤ a
– sem ≤ a
• Because fglb(S2) = a and shared(S2) = sem
Concurrent Loops
• Similar, but wait in loop affects all statements in loop
– Because if flow of control loops, statements in loop before wait may be executed after wait
• Requirements
– Loop terminates
– All statements S1, …, Sn in loop secure
– lub{ shared(S1), …, shared(Sn) } ≤ glb(t1, …, tm)
• Where t1, …, tm are variables assigned to in loop
Loop Example
while i < n do begin
  a[i] := item;  (* S1 *)
  wait(sem);     (* S2 *)
  i := i + 1;    (* S3 *)
end
• Conditions for this to be secure:
– Loop terminates, so this condition met
– S1 secure if lub{ i, item } ≤ a[i]
– S2 secure if sem ≤ i and sem ≤ a[i]
– S3 trivially secure
cobegin/coend
cobegin
  x := y + z;     (* S1 *)
  a := b * c – y; (* S2 *)
coend
• No information flow among statements
– For S1, lub{ y, z } ≤ x
– For S2, lub{ b, c, y } ≤ a
• Security requirement is that both must hold
– So this is secure if lub{ y, z } ≤ x ∧ lub{ b, c, y } ≤ a
Soundness
• Above exposition intuitive
• Can be made rigorous:
– Express flows as types
– Equate certification to correct use of types
– Checking for valid information flows same as checking types conform to semantics imposed by security policy
Execution-Based Mechanisms
• Detect and stop flows of information that violate policy
– Done at run time, not compile time
• Obvious approach: check explicit flows
– Problem: assume for security, x ≤ y
if x = 1 then y := a;
– When x ≠ 1 and x = High, y = Low, a = Low, appears okay, but implicit flow violates condition!
Fenton’s Data Mark Machine
• Each variable has an associated class
• Program counter (PC) has one too
• Idea: branches are assignments to PC, so you can treat implicit flows as explicit flows
• Stack-based machine, so everything done in terms of pushing onto and popping from a program stack
Instruction Description
• skip means instruction not executed
• push(x, x) means push variable x and its security class x onto program stack
• pop(x, x) means pop top value and security class from program stack, assign them to variable x and its security class x respectively
Instructions
• x := x + 1 (increment)
– Same as: if PC ≤ x then x := x + 1 else skip
• if x = 0 then goto n else x := x – 1 (branch and save PC on stack)
– Same as:
  if x = 0 then begin
    push(PC, PC); PC := lub{PC, x}; PC := n;
  end else if PC ≤ x then
    x := x – 1
  else
    skip;
More Instructions
• if′ x = 0 then goto n else x := x – 1 (branch without saving PC on stack)
– Same as:
  if x = 0 then
    if x ≤ PC then PC := n else skip
  else
    if PC ≤ x then x := x – 1 else skip
More Instructions
• return (go to just after last if)
– Same as: pop(PC, PC);
• halt (stop)
– Same as: if program stack empty then halt
– Note stack must be empty to prevent user obtaining information from it after halting
Example Program
1 if x = 0 then goto 4 else x:= x-1
2 if z = 0 then goto 6 else z:= z-1
3 halt
4 z:= z-1
5 return
6 y:= y-1
7 return
• Initially x = 0 or x = 1, y = 0, z = 0
• Program copies value of x to y
Example Execution
x  y  z  PC  PC class  stack     check
1  0  0  1   Low       —
0  0  0  2   Low       —         Low ≤ x
0  0  0  6   z         (3, Low)
0  1  0  7   z         (3, Low)  PC ≤ y
0  1  0  3   Low       —
Handling Errors
• Ignore statement that causes error, but continue execution– If aborted or a visible exception taken, user
could deduce information– Means errors cannot be reported unless user has
clearance at least equal to that of the information causing the error
Variable Classes
• Up to now, classes fixed
– Check relationships on assignment, etc.
• Consider variable classes
– Fenton’s Data Mark Machine does this for PC
– On assignment of form y := f(x1, …, xn), y changed to lub{ x1, …, xn }
– Need to consider implicit flows, also
Example Program
(* Copy value from x to y
   Initially, x is 0 or 1 *)
proc copy(x: int class { x };
          var y: int class { y });
var z: int class variable { Low };
begin
  y := 0;
  z := 0;
  if x = 0 then z := 1;
  if z = 0 then y := 1;
end;
• z changes when z assigned to
• Assume y < x
Analysis of Example
• x = 0
– z := 0 sets z to Low
– if x = 0 then z := 1 sets z to 1 and z to x
– So on exit, y = 0
• x = 1
– z := 0 sets z to Low
– if z = 0 then y := 1 sets y to 1 and checks that lub{Low, z} ≤ y
– So on exit, y = 1
• Information flowed from x to y even though y < x
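The failure is easy to see in code. A Python sketch (hypothetical two-point lattice Low < High) of a naive dynamic monitor that raises a variable's class only when an assignment actually executes; it certifies both runs, yet y ends up equal to x:

```python
LOW, HIGH = 0, 1   # two-point lattice: LOW <= HIGH

def copy(x):
    """Naive run-time monitor: classes change only on executed
    assignments, so the implicit flow below goes unnoticed."""
    x_cl = HIGH
    y, y_cl = 0, LOW
    z, z_cl = 0, LOW
    checks_passed = True
    if x == 0:
        z, z_cl = 1, x_cl          # executed assignment: class raised
    if z == 0:
        # monitor checks z_cl <= y_cl for the executed branch only
        checks_passed = checks_passed and (z_cl <= y_cl)
        y = 1
    return y, checks_passed

# y mirrors x in both runs, and the monitor never complains:
assert copy(0) == (0, True)
assert copy(1) == (1, True)
```

When x = 1 the leaking branch never assigns z, so z keeps class Low and the check on y := 1 passes, which is exactly the implicit-flow gap the next slides address.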
Handling This (1)
• Fenton’s Data Mark Machine detects implicit flows violating certification rules
Handling This (2)
• Raise class of variables assigned to in conditionals even when branch not taken
• Also, verify information flow requirements even when branch not taken
• Example:
– In if x = 0 then z := 1, z raised to x whether or not x = 0
– Certification check in next statement, that z ≤ y, fails, as z = x from previous statement, and y < x
Handling This (3)
• Change classes only when explicit flows occur, but all flows (implicit as well as explicit) force certification checks
• Example
– When x = 0, first “if” sets z to Low then checks x ≤ z
– When x = 1, first “if” checks that x ≤ z
– This holds if and only if x = Low
• Not possible, as y < x = Low would require a class below Low, and there is no such class
Example Information Flow Control Systems
• Use access controls of various types to inhibit information flows
• Security Pipeline Interface
– Analyzes data moving from host to destination
• Secure Network Server Mail Guard
– Controls flow of data between networks that have different security classifications
Security Pipeline Interface
• SPI analyzes data going to, from host
– No access to host main memory
– Host has no control over SPI
[Figure: SPI sits between the host and the first disk, with a second disk attached to the SPI]
Use
• Store files on first disk
• Store corresponding crypto checksums on second disk
• Host requests file from first disk
– SPI retrieves file, computes crypto checksum
– SPI retrieves file’s crypto checksum from second disk
– If a match, file is fine and forwarded to host
– If discrepancy, file is compromised and host notified
• Integrity information flow restricted here
– Corrupt file can be seen but will not be trusted
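The SPI's check is essentially a cryptographic checksum comparison. A minimal sketch (hypothetical: SHA-256 stands in for whatever checksum the real SPI used, and the byte strings are invented):

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def spi_fetch(file_data: bytes, stored_checksum: str):
    """Return (data, trusted): forward the file either way, but flag
    it as compromised when the recomputed checksum does not match."""
    return file_data, checksum(file_data) == stored_checksum

original = b"account ledger v1"
stored = checksum(original)            # kept on the second disk

assert spi_fetch(original, stored) == (original, True)
tampered = b"account ledger v2"
assert spi_fetch(tampered, stored) == (tampered, False)
```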
Secure Network Server Mail Guard (SNSMG)
• Filters analyze outgoing messages
– Check authorization of sender
– Sanitize message if needed (words and viruses, etc.)
• Uses type checking to enforce this
– Incoming, outgoing messages of different type
– Only appropriate type can be moved in or out
[Figure: mail flows from the SECRET computer’s MTA, out through the filters, in to the UNCLASSIFIED computer’s MTA]
Key Points
• Both amount of information, direction of flow important– Flows can be explicit or implicit
• Compiler-based mechanisms check flows at compile time
• Execution-based mechanisms check flows at run time
Chapter 16: Confinement Problem
• What is the problem?
• Isolation: virtual machines, sandboxes
• Detecting covert channels
• Analyzing covert channels
• Mitigating covert channels
Overview
• The confinement problem
• Isolating entities
– Virtual machines
– Sandboxes
• Covert channels
– Detecting them
– Analyzing them
– Mitigating them
Example Problem
• Server balances bank accounts for clients
• Server security issues:
– Record correctly who used it
– Send only balancing info to client
• Client security issues:
– Log use correctly
– Do not save or retransmit data client sends
Generalization
• Client sends request, data to server
• Server performs some function on data
• Server returns result to client
• Access controls:
– Server must ensure the resources it accesses on behalf of client include only resources client is authorized to access
– Server must ensure it does not reveal client’s data to any entity not authorized to see the client’s data
Confinement Problem
• Problem of preventing a server from leaking information that the user of the service considers confidential
Total Isolation
• Process cannot communicate with any other process
• Process cannot be observed
⇒ Impossible for this process to leak information
– Not practical, as process uses observable resources such as CPU, secondary storage, networks, etc.
Example
• Processes p, q not allowed to communicate
– But they share a file system!
• Communications protocol:
– p sends a bit by creating a file called 0 or 1, then a second file called send
• p waits until send is deleted before repeating to send another bit
– q waits until file send exists, then looks for file 0 or 1; whichever exists is the bit
• q then deletes 0, 1, and send and waits until send is recreated before repeating to read another bit
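The protocol can be simulated with a shared directory standing in for the file system. A sketch (a temporary directory plays the shared file system; the file names 0, 1, and send are from the slide, and the two sides run in lock-step instead of busy-waiting):

```python
import os, tempfile

def send_bit(shared, bit):
    # p: create file "0" or "1", then "send"
    open(os.path.join(shared, str(bit)), "w").close()
    open(os.path.join(shared, "send"), "w").close()

def recv_bit(shared):
    # q: once "send" exists, the bit is whichever of "0"/"1" exists
    assert os.path.exists(os.path.join(shared, "send"))
    bit = 1 if os.path.exists(os.path.join(shared, "1")) else 0
    for name in ("0", "1", "send"):                  # q cleans up
        path = os.path.join(shared, name)
        if os.path.exists(path):
            os.remove(path)
    return bit

with tempfile.TemporaryDirectory() as shared:
    message = [1, 0, 1, 1, 0]
    received = []
    for b in message:        # lock-step stands in for the waiting
        send_bit(shared, b)
        received.append(recv_bit(shared))
    assert received == message
```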
Covert Channel
• A path of communication not designed to be used for communication
• In example, file system is a (storage) covert channel
Rule of Transitive Confinement
• If p is confined to prevent leaking, and it invokes q, then q must be similarly confined to prevent leaking
• Rule: if a confined process invokes a second process, the second process must be as confined as the first
Lipner’s Notes
• All processes can obtain rough idea of time
– Read system clock or wall clock time
– Determine number of instructions executed
• All processes can manipulate time
– Wait some interval of wall clock time
– Execute a set number of instructions, then block
Kocher’s Attack
• This computes x = a^z mod n, where z = z0 … zk–1
x := 1; atmp := a;
for i := 0 to k–1 do begin
  if zi = 1 then
    x := (x * atmp) mod n;
  atmp := (atmp * atmp) mod n;
end
result := x;
• Length of run time related to number of 1 bits in z
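The dependence of running time on the bits of z can be made visible by counting multiplications. A Python transcription of the loop (bits taken least-significant first; the counter is added for illustration and is not in the original):

```python
def modexp(a, z, n, k):
    """Square-and-multiply as on the slide, also returning the number
    of multiply steps, which equals the number of 1 bits in z."""
    x, atmp, mults = 1, a % n, 0
    for i in range(k):
        if (z >> i) & 1:            # bit z_i
            x = (x * atmp) % n
            mults += 1
        atmp = (atmp * atmp) % n
    return x, mults

result, mults = modexp(5, 11, 23, 4)    # 11 = 1011b has three 1 bits
assert result == pow(5, 11, 23)
assert mults == bin(11).count("1")      # timing reveals Hamming weight
```

An attacker who can time many exponentiations with the same secret z exploits exactly this variation.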
Isolation
• Virtual machines
– Emulate computer
– Process cannot access underlying computer system, anything not part of that computer system
• Sandboxing
– Does not emulate computer
– Alters interface between computer, process
Virtual Machine (VM)
• A program that simulates hardware of computer system
• Virtual machine monitor (VMM) provides VM on which conventional OS can run
– Each VM is one subject; VMM knows nothing about processes running on each VM
– VMM mediates all interactions of VM with resources, other VMs
– Satisfies rule of transitive confinement
Example: KVM/370
• Security-enhanced version of IBM VM/370 VMM
• Goals
– Provide virtual machines for users
– Prevent VMs of different security classes from communicating
• Provides minidisks; some VMs could share some areas of disk
– Security policy controlled access to shared areas to limit communications to those allowed by policy
DEC VAX VMM
• VMM is security kernel
– Can run Ultrix OS or VMS OS
• Invoked on trap to execute privileged instruction
– Only VMM can access hardware directly
– VM kernel, executive levels both mapped into physical executive level
• VMM subjects: users, VMs
– Each VM has own disk areas, file systems
– Each subject, object has multilevel security, integrity labels
Sandbox
• Environment in which actions of process are restricted according to security policy
– Can add extra security-checking mechanisms to libraries, kernel; program to be executed is not altered
– Can modify program or process to be executed
• Similar to debuggers, profilers that add breakpoints
• Add code to do extra checks (memory access, etc.) as program runs (software fault isolation)
Example: Limiting Execution
• Sidewinder
– Uses type enforcement to confine processes
– Sandbox built into kernel; site cannot alter it
• Java VM
– Restricts set of files that applet can access and hosts to which applet can connect
• DTE, type enforcement mechanism for DTEL
– Kernel modifications enable system administrators to configure sandboxes
Example: Trapping System Calls
• Janus: execution environment
– Users restrict objects, modes of access
• Two components
– Framework does run-time checking
– Modules determine which accesses allowed
• Configuration file controls modules loaded, constraints to be enforced
Janus Configuration File
# basic module
basic
— Load basic module
# define subprocess environment variables
putenv IFS="\t\n" PATH=/sbin:/bin:/usr/bin TZ=PST8PDT
— Define environment variables for process
# deny access to everything except files under /usr
path deny read,write *
path allow read,write /usr/*
— Deny all file accesses except to those under /usr
# allow subprocess to read files in library directories
# needed for dynamic loading
path allow read /lib/* /usr/lib/* /usr/local/lib/*
— Allow reading of files in these directories (all dynamic load libraries are here)
# needed so child can execute programs
path allow read,exec /sbin/* /bin/* /usr/bin/*
— Allow reading, execution of subprograms in these directories
Janus Implementation
• System calls to be monitored defined in modules
• On system call, Janus framework invoked
– Validates that the system call with those specific parameters is allowed
– If not, sets process environment to indicate call failed
– If okay, framework gives control back to process; on return, framework invoked to update state
• Example: reading MIME mail
– Embed “delete file” in Postscript attachment
– Set Janus to disallow Postscript engine access to files
Covert Channels
• Channel using shared resources as a communication path
• Covert storage channel uses attribute of shared resource
• Covert timing channel uses temporal or ordering relationship among accesses to shared resource
Example: File Manipulation
• Communications protocol:
– p sends a bit by creating a file called 0 or 1, then a second file called send
• p waits until send is deleted before repeating to send another bit
– q waits until file send exists, then looks for file 0 or 1; whichever exists is the bit
• q then deletes 0, 1, and send and waits until send is recreated before repeating to read another bit
• Covert storage channel: resource is directory, names of files in directory
Example: Real-Time Clock
• KVM/370 had covert timing channel
– VM1 wants to send 1 bit to VM2
– To send 0 bit: VM1 relinquishes CPU as soon as it gets CPU
– To send 1 bit: VM1 uses CPU for full quantum
– VM2 determines which bit is sent by seeing how quickly it gets CPU
– Shared resource is CPU; timing, because real-time clock used to measure intervals between accesses
Example: Ordering of Events
• Two VMs
– Share cylinders 100–200 on a disk
– One is High, one is Low; process on High VM wants to send to process on Low VM
• Disk scheduler uses SCAN algorithm
• Low process seeks to cylinder 150 and relinquishes CPU
– Now we know where the disk head is
Example (con’t)
• High wants to send a bit
– To send 1 bit, High seeks to cylinder 140 and relinquishes CPU
– To send 0 bit, High seeks to cylinder 160 and relinquishes CPU
• Low issues requests for tracks 139 and 161
– Seek to 139 first indicates a 1 bit
– Seek to 161 first indicates a 0 bit
• Covert timing channel: uses ordering relationship among accesses to transmit information
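Low's decoding step can be sketched as follows (a simplification, assuming the arm, continuing its sweep, reaches the closer of Low's two requests first; the real SCAN scheduler's behavior depends on sweep direction):

```python
def first_serviced(head, requests):
    # simplified disk-arm model: the pending request closest to the
    # head position is reached first as the sweep continues
    return min(requests, key=lambda r: abs(r - head))

def decode(high_cylinder):
    # High parked the head at 140 (bit 1) or 160 (bit 0);
    # Low then requests cylinders 139 and 161
    return 1 if first_serviced(high_cylinder, [139, 161]) == 139 else 0

assert decode(140) == 1    # head near 139: that request served first
assert decode(160) == 0    # head near 161: that one served first
```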
Noise
• Noiseless covert channel uses shared resource available to sender, receiver only
• Noisy covert channel uses shared resource available to sender, receiver, and others
– Need to minimize interference enough so that message can be read in spite of others’ use of channel
Key Properties
• Existence
– Determining whether the covert channel exists
• Bandwidth
– Determining how much information can be sent over the channel
Detection
• Covert channels require sharing
• Manner of sharing controls which subjects can send, which subjects can receive information using that shared resource
• Porras, Kemmerer: model flow of information through shared resources with a tree
– Called covert flow trees
Goal Symbol Tree Nodes
• Modification: attribute modified
• Recognition: attribute modification detected
• Direct recognition: subject can detect attribute modification by referencing attribute directly or calling function that returns it
• Inferred recognition: subject can detect attribute modification without direct reference
• Inferred-via: info passed from one attribute to another via specified primitive (e.g., syscall)
• Recognized-new-state: modified attribute specified by inferred-via goal
Other Tree Nodes
• Operation symbol represents primitive operation
• Failure symbol indicates information cannot be sent along path
• And symbol reached when for all children:
– Child is operation; and
– If child goal, then goal is reached
• Or symbol reached when for any child:
– Child is operation; or
– If child goal, then goal is reached
Constructing Tree
• Example: files in file system have 3 attributes
– locked: true when file locked
– isopen: true when file opened
– inuse: set containing PID of processes having file open
• Functions:
– read_access(p, f): true if p has read rights over file f
– empty(s): true if set s is empty
– random: returns one of its arguments chosen at random
Locking and Opening Routines(* lock the file if it is not locked and not opened; otherwise indicate it is locked by returning false *)
procedure Lockfile(f: file): boolean;
begin
if not f.locked and empty(f.inuse) then
f.locked := true;
end;
(* unlock the file *)
procedure Unlockfile(f: file);
begin
if f.locked then
f.locked := false;
end;
(* say whether the file is locked *)
function Filelocked(f: file): boolean;
begin
Filelocked:= f.locked;
end;
(* open the file if it isn’t locked and the process has the right to read the file *)
procedure Openfile(f: file);
begin
if not f.locked and read_access(process_id, f) then
(* add process ID to inuse set *)
f.inuse := f.inuse + process_id;
end;
(* if the process can read the file, say if the file is open, otherwise return a value at random *)
function Fileopened(f: file): boolean;
begin
if not read_access(process_id, f) then
Fileopened := random(true, false);
else
Fileopened := not empty(f.inuse);
end
Attributes and Operations
              Lockfile        Unlockfile   Filelocked   Openfile        Fileopened
reference     locked, inuse   locked       locked       locked, inuse   inuse
modify        locked          locked       ∅            inuse           ∅
return        ∅               ∅            locked       ∅               inuse

∅ means no attribute affected in specified manner
Tree Construction
• This is for attribute locked
– Goal state: “covert storage channel via attribute locked”
– Type of goal controls construction
• “And” node has 2 children, a “modification” and a “recognition”
– Here, both “of attribute locked”
First Step
• Put “and” node under goal
• Put children under “and” node
[Tree: root “Covert storage channel via attribute locked” is an “and” node (•) with children “Modification of attribute locked” and “Recognition of attribute locked”]
Second Step
• Operations Lockfile and Unlockfile modify locked
– See attributes and operations table
[Tree: “Modification of attribute locked” is an “or” node (+) with children Lockfile and Unlockfile]
Third Step
• “Recognition” has direct, inferred recognition children
• Direct recognition child: “and” node with Filelocked child
– Filelocked returns value of locked
• Inferred recognition child: “or” node with “inferred-via” node
– Infers locked from inuse
[Tree: “Recognition of attribute locked” is an “or” node (+); its children are “Direct recognition of attribute locked” (+, child Filelocked) and “Indirect recognition of attribute locked” (+, child “Infer attribute locked via attribute inuse”)]
Fourth Step
• “Inferred-via” node requires Openfile
– Change in attribute inuse represented by recognize-new-state goal
[Tree: “Infer attribute locked via attribute inuse” is an “and” node (•) with children Openfile and “Recognition of attribute inuse”]
Fifth Step
• “Recognize-new-state” node
– Direct recognition node: “or” child, Fileopened node beneath (recognizes change in inuse directly)
– Inferred recognition node: “or” child, FALSE node beneath (nothing recognizes change in inuse indirectly)
[Tree: “Recognition of attribute inuse” is an “or” node (+) with children “Direct recognition of attribute inuse” (+, child Fileopened) and “Indirect recognition of attribute inuse” (+, child FALSE)]
Final Tree
Finding Covert Channels
• Find sequences of operations that modify attribute
– ( Lockfile ), ( Unlockfile )
• Find sequences of operations that recognize modifications to attribute
– ( Filelocked ), ( Openfile, Fileopened )
Covert Channel Commands
• Sequences with first element from first list, remaining elements from second list
– Lockfile, then Filelocked
– Unlockfile, then Filelocked
– Lockfile, then Openfile, then Fileopened
– Unlockfile, then Openfile, then Fileopened
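These command sequences are just the cross product of the modification sequences and the recognition sequences, which is easy to generate mechanically:

```python
from itertools import product

modify_seqs = [("Lockfile",), ("Unlockfile",)]
recognize_seqs = [("Filelocked",), ("Openfile", "Fileopened")]

channels = [m + r for m, r in product(modify_seqs, recognize_seqs)]
for c in channels:
    print(" -> ".join(c))
# four candidate covert-channel command sequences, as on the slide
```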
Mitigation
• Goal: obscure amount of resources a process uses
– Receiver cannot determine what part sender is using and what part is obfuscated
• How to do this?
– Devote uniform, fixed amount of resources to each process
– Inject randomness into allocation, use of resources
Example: Pump
[Figure: Low process ↔ Low buffer ↔ communications buffer (holds n items) ↔ High buffer ↔ High process]
Covert Timing Channel
• High process can control rate at which pump sends it messages
• Initialization: Low sends messages to pump until communications buffer full
– Low gets ACK for each message put into the buffer; no ACK for messages when communications buffer full
• Protocol: sequence of trials; for each trial
– High sends a 1 by reading a message
• Then Low gets an ACK when it sends another message
– High sends a 0 by not reading a message
• Then Low doesn’t get an ACK when it sends another message
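The trial protocol can be simulated with a bounded queue standing in for the communications buffer (a sketch: one trial per bit, timing ignored, and the buffer size n = 4 is arbitrary):

```python
from collections import deque

def run_trials(bits, n=4):
    """Each trial: High reads a message iff its bit is 1; Low then
    sends, and the presence/absence of the ACK encodes the bit."""
    buf = deque()
    while len(buf) < n:          # initialization: Low fills the buffer
        buf.append("msg")
    recovered = []
    for bit in bits:
        if bit == 1:
            buf.popleft()        # High reads, freeing one slot
        ack = len(buf) < n       # Low is ACKed only if there is room
        if ack:
            buf.append("msg")
        recovered.append(1 if ack else 0)
    return recovered

assert run_trials([1, 0, 1, 1, 0]) == [1, 0, 1, 1, 0]
```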
How to Fix
• Assume: Low process, pump can process messages faster than High process
• Case 1: High process handles messages more quickly than Low process gets acknowledgements
– Pump artificially delaying ACKs
• Low process waits for ACK regardless of whether buffer is full
– Low cannot tell whether buffer is full
• Closes covert channel
– Not optimal (processes may wait even when unnecessary)
How to Fix (con’t)
• Case 2: Low process sends messages faster than High process can remove them
– Maximizes performance
– Opens covert channel
• Case 3: Pump, processes handle messages at same rate
– Decreases bandwidth of covert channel, increases performance
– Opens covert channel, sub-optimal performance
Adding Noise to Direct Channel
• Kang, Moskowitz: do this in such a way as to approximate case 3
– Reduces covert channel’s capacity to 1/nr
• r is time between Low process sending message and receiving ACK when buffer not full
– Conclusion: pump substantially reduces capacity of covert channel between High, Low processes when compared with direct connection
Key Points
• Confinement problem: prevent leakage of information
– Solution: separation and/or isolation
• Shared resources offer paths along which information can be transferred
• Covert channels difficult if not impossible to eliminate
– Bandwidth can be greatly reduced, however!
Chapter 17: Introduction to Assurance
• Overview
• Why assurance?
• Trust and assurance
• Life cycle and assurance
• Building security in vs. adding security later
Overview
• Trust
• Problems from lack of assurance
• Types of assurance
• Life cycle and assurance
• Waterfall life cycle model
• Other life cycle models
• Adding security afterwards
Trust
• A trustworthy entity is one for which there is sufficient credible evidence leading one to believe that it will meet a set of requirements
• Trust is a measure of trustworthiness relying on the evidence
• Assurance is confidence that an entity meets its security requirements based on evidence provided by applying assurance techniques
Relationships
Policy: statement of requirements that explicitly defines the security expectations of the mechanism(s)

Mechanisms: executable entities that are designed and implemented to meet the requirements of the policy

Assurance: provides justification that the mechanism meets the policy, through assurance evidence and approvals based on evidence
Problem Sources
1. Requirements definitions, omissions, and mistakes
2. System design flaws
3. Hardware implementation flaws, such as wiring and chip flaws
4. Software implementation errors, program bugs, and compiler bugs
5. System use and operation errors and inadvertent mistakes
6. Willful system misuse
7. Hardware, communication, or other equipment malfunction
8. Environmental problems, natural causes, and acts of God
9. Evolution, maintenance, faulty upgrades, and decommissions
Examples
• Challenger explosion
  – Sensors removed from booster rockets to meet accelerated launch schedule
• Deaths from faulty radiation therapy system
  – Hardware safety interlock removed
  – Flaws in software design
• Bell V-22 Osprey crashes
  – Failure to correct for malfunctioning components; two faulty ones could outvote a third
• Intel 486 chip
  – Bug in trigonometric functions
Role of Requirements
• Requirements are statements of goals that must be met
  – Vary from high-level, generic issues to low-level, concrete issues
• Security objectives are high-level security issues
• Security requirements are specific, concrete issues
Types of Assurance
• Policy assurance is evidence establishing that the security requirements in the policy are complete, consistent, and technically sound
• Design assurance is evidence establishing that the design is sufficient to meet the requirements of the security policy
• Implementation assurance is evidence establishing that the implementation is consistent with the security requirements of the security policy
Types of Assurance
• Operational assurance is evidence establishing that the system sustains the security policy requirements during installation, configuration, and day-to-day operation
  – Also called administrative assurance
Life Cycle

[Diagram: security requirements, design, and implementation linked by numbered steps, with design and implementation refinement feeding back and an assurance justification produced at each step.]
Life Cycle
• Conception
• Manufacture
• Deployment
• Fielded Product Life
Conception
• Idea
  – Decisions to pursue it
• Proof of concept
  – See if idea has merit
• High-level requirements analysis
  – What does "secure" mean for this concept?
  – Is it possible for this concept to meet this meaning of security?
  – Is the organization willing to support the additional resources required to make this concept meet this meaning of security?
Manufacture
• Develop detailed plans for each group involved
  – May depend on use; internal product requires no sales
• Implement the plans to create entity
  – Includes decisions whether to proceed, for example due to market needs
Deployment
• Delivery
  – Assure that correct masters are delivered to production and protected
  – Distribute to customers, sales organizations
• Installation and configuration
  – Ensure product works appropriately for specific environment into which it is installed
  – Service people know security procedures
Fielded Product Life
• Routine maintenance, patching
  – Responsibility of engineering in small organizations
  – Responsibility may be in different group than one that manufactures product
• Customer service, support organizations
• Retirement or decommission of product
Waterfall Life Cycle Model
• Requirements definition and analysis
  – Functional and non-functional
  – General (for customer), specifications
• System and software design
• Implementation and unit testing
• Integration and system testing
• Operation and maintenance
Relationship of Stages

[Diagram: waterfall stages in sequence — requirements definition and analysis, system and software design, implementation and unit testing, integration and system testing, operation and maintenance — with feedback between adjacent stages.]
Models
• Exploratory programming
  – Develop working system quickly
  – Used when detailed requirements specification cannot be formulated in advance, and adequacy is goal
  – No requirements or design specification, so low assurance
• Prototyping
  – Objective is to establish system requirements
  – Future iterations (after first) allow assurance techniques
Models
• Formal transformation
  – Create formal specification
  – Translate it into program using correctness-preserving transformations
  – Very conducive to assurance methods
• System assembly from reusable components
  – Depends on whether components are trusted
  – Must assure connections, composition as well
  – Very complex, difficult to assure
Models
• Extreme programming
  – Rapid prototyping and "best practices"
  – Project driven by business decisions
  – Requirements open until project complete
  – Programmers work in teams
  – Components tested, integrated several times a day
  – Objective is to get system into production as quickly as possible, then enhance it
  – Evidence adduced after development needed for assurance
Security: Built In or Add On?
• Think of security as you do performance
  – You don't build a system, then add in performance later
    • Can "tweak" system to improve performance a little
    • Much more effective to change fundamental algorithms, design
• You need to design it in
  – Otherwise, system lacks fundamental and structural concepts for high assurance
Reference Validation Mechanism
• Reference monitor is access control concept of an abstract machine that mediates all accesses to objects by subjects
• Reference validation mechanism (RVM) is an implementation of the reference monitor concept
  – Tamperproof
  – Complete (always invoked and can never be bypassed)
  – Simple (small enough to be subject to analysis and testing, the completeness of which can be assured)
    • Last engenders trust by providing assurance of correctness
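The "complete mediation" idea can be illustrated with a minimal sketch (the subjects, objects, and access-matrix representation below are invented for illustration): every access funnels through one small, analyzable function.

```python
# Minimal reference-monitor sketch: one small function mediates every
# access; nothing else in the program touches the protected objects.
ACM = {  # access control matrix: (subject, object) -> set of rights
    ("alice", "file1"): {"read", "write"},
    ("bob",   "file1"): {"read"},
}

def mediate(subject, obj, right):
    """The RVM: invoked on every access, small enough to analyze fully."""
    return right in ACM.get((subject, obj), set())

def read(subject, obj, store):
    # Always invoked, never bypassed: the only path to the object.
    if not mediate(subject, obj, "read"):
        raise PermissionError(f"{subject} may not read {obj}")
    return store[obj]

store = {"file1": "secret data"}
print(read("bob", "file1", store))   # prints "secret data"
```

The point of the "simple" requirement is visible here: `mediate` is a few lines, so its correctness can plausibly be analyzed and tested exhaustively, which is what engenders trust.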
Examples
• Security kernel combines hardware and software to implement reference monitor
• Trusted computing base (TCB) is all protection mechanisms within a system responsible for enforcing security policy
  – Includes hardware and software
  – Generalizes notion of security kernel
Adding On Security
• Key to problem: analysis and testing
• Designing in mechanisms allows assurance at all levels
  – Too many features add complexity, complicate analysis
• Adding in mechanisms makes assurance hard
  – Gap in abstraction from requirements to design may prevent complete requirements testing
  – May be spread throughout system (analysis hard)
  – Assurance may be limited to test results
Example
• 2 AT&T products that add mandatory controls to UNIX systems:
  – SV/MLS: adds MAC to UNIX System V Release 3.2
  – SVR4.1ES: re-architects the UNIX system to support MAC
Comparison
• Architecting of system
  – SV/MLS: used existing kernel modular structure; no implementation of least privilege
  – SVR4.1ES: restructured kernel to make it highly modular and incorporated least privilege
Comparison
• File attributes (inodes)
  – SV/MLS added separate table for MAC labels, DAC permissions
    • UNIX inodes have no space for labels; pointer to table added
    • Problem: 2 accesses needed to check permissions
    • Problem: possible inconsistency when permissions changed
    • Corrupted table causes corrupted permissions
  – SVR4.1ES defined new inode structure
    • Included MAC labels
    • Only 1 access needed to check permissions
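The contrast between a separate label table and a redefined inode can be sketched as follows (field names and structures are invented; real inodes are kernel C structures):

```python
from dataclasses import dataclass

# SV/MLS-style: MAC labels live in a separate table keyed by inode
# number, so a permission check needs two lookups, and the two
# structures can fall out of sync if one is updated without the other.
inodes_v1   = {7: {"owner": "alice", "mode": 0o600}}
label_table = {7: "SECRET"}          # separate structure -> second access

def check_v1(ino):
    return inodes_v1[ino], label_table.get(ino)   # two accesses

# SVR4.1ES-style: the label is a field of the (redefined) inode, so a
# single lookup yields both DAC and MAC data, with no window for
# inconsistency between them.
@dataclass
class Inode:
    owner: str
    mode: int
    mac_label: str                   # label embedded in the inode itself

inodes_v2 = {7: Inode("alice", 0o600, "SECRET")}

def check_v2(ino):
    return inodes_v2[ino]            # one access, atomic view
```

The design lesson is the same one the slide draws: retrofitting (the side table) preserved the old structure but doubled the accesses and created a corruption hazard; re-architecting the data structure removed both problems.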
Key Points
• Assurance critical for determining trustworthiness of systems
• Different levels of assurance, from informal evidence to rigorous mathematical evidence
• Assurance needed at all stages of system life cycle
• Building security in is more effective than adding it later
Chapter 18: Evaluating Systems
• Goals
• Trusted Computer System Evaluation Criteria
• FIPS 140
• Common Criteria
• SSE-CMM
Overview
• Goals
  – Why evaluate?
• Evaluation criteria
  – TCSEC (aka Orange Book)
  – FIPS 140
  – Common Criteria
  – SSE-CMM
Goals
• Show that a system meets specific security requirements under specific conditions
  – Called a trusted system
  – Based on specific assurance evidence
• Formal evaluation methodology
  – Technique used to provide measurements of trust based on specific security requirements and evidence of assurance
Evaluation Methodology
• Provides set of requirements defining security functionality for system
• Provides set of assurance requirements delineating steps for establishing that system meets its functional requirements
• Provides methodology for determining that system meets functional requirements based on analysis of assurance evidence
• Provides measure of result indicating how trustworthy system is with respect to security functional requirements
  – Called level of trust
Why Evaluate?
• Provides an independent assessment, and measure of assurance, by experts
  – Includes assessment of requirements to see if they are consistent, complete, technically sound, sufficient to counter threats
  – Includes assessment of administrative, user, installation, other documentation that provides information on proper configuration, administration, use of system
• Independence critical
  – Experts bring fresh perspectives, eyes to assessment
Bit of History
• Government, military drove early evaluation processes
  – Their desire to use commercial products led to businesses developing methodologies for evaluating security, trustworthiness of systems
• Methodologies provide combination of
  – Functional requirements
  – Assurance requirements
  – Levels of trust
TCSEC: 1983–1999
• Trusted Computer System Evaluation Criteria
  – Also known as the Orange Book
  – Series that expanded on Orange Book in specific areas was called Rainbow Series
  – Developed by National Computer Security Center, US Dept. of Defense
• Heavily influenced by Bell-LaPadula model and reference monitor concept
• Emphasizes confidentiality
  – Integrity addressed by *-property
Functional Requirements
• Discretionary access control requirements
  – Control sharing of named objects
  – Address propagation of access rights, ACLs, granularity of controls
• Object reuse requirements
  – Hinder attacker gathering information from disk or memory that has been deleted
  – Address overwriting data, revoking access rights, and assignment of resources when data in resource from previous use is present
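The object-reuse requirement — that a reassigned resource carry no residue of its previous use — can be sketched with a toy allocator (class and method names are invented for illustration):

```python
class FrameAllocator:
    """Toy allocator that scrubs a memory frame before reassigning it,
    so a new owner cannot read the previous owner's residual data."""
    def __init__(self, nframes, frame_size=8):
        self.frames = [bytearray(frame_size) for _ in range(nframes)]
        self.free = list(range(nframes))

    def alloc(self):
        idx = self.free.pop()
        # Object reuse control: overwrite the frame before handing it out.
        self.frames[idx][:] = bytes(len(self.frames[idx]))
        return idx

    def write(self, idx, data):
        self.frames[idx][:len(data)] = data

    def release(self, idx):
        self.free.append(idx)   # scrubbing happens on alloc, not release

alloc = FrameAllocator(1)
f = alloc.alloc()
alloc.write(f, b"secret")
alloc.release(f)
g = alloc.alloc()                           # same physical frame, reassigned
assert bytes(alloc.frames[g]) == bytes(8)   # no residue visible to new owner
```

Without the scrub in `alloc`, the second owner would read `b"secret"` out of the frame — exactly the leak the TCSEC object-reuse requirements target.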
Functional Requirements
• Mandatory access control requirements (B1 up)
  – Simple security condition, *-property
  – Description of hierarchy of labels
• Label requirements (B1 up)
  – Used to enforce MAC
  – Address representation of classifications, clearances, exporting labeled information, human-readable output
• Identification, authentication requirements
  – Address granularity of authentication data, protecting that data, associating identity with auditable actions
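The two Bell-LaPadula conditions the MAC requirements reference reduce to label comparisons; a minimal sketch (a purely linear hierarchy, ignoring the category sets real systems add):

```python
# Illustrative linear label hierarchy; real labels also carry categories.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def can_read(subject_label, object_label):
    """Simple security condition ("no read up")."""
    return LEVELS[subject_label] >= LEVELS[object_label]

def can_write(subject_label, object_label):
    """*-property ("no write down")."""
    return LEVELS[subject_label] <= LEVELS[object_label]

assert can_read("SECRET", "CONFIDENTIAL")        # read down: allowed
assert not can_read("CONFIDENTIAL", "SECRET")    # read up: denied
assert can_write("CONFIDENTIAL", "SECRET")       # write up: allowed
assert not can_write("SECRET", "CONFIDENTIAL")   # write down: denied
```

The last assertion is the *-property's confidentiality role noted earlier: a SECRET subject cannot write into a CONFIDENTIAL object and thereby leak information downward.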
Functional Requirements
• Audit requirements
  – Define what audit records contain, events to be recorded; set increases as other requirements increase
• Trusted path requirements (B2 up)
  – Communications path guaranteed between user, TCB
• System architecture requirements
  – Tamperproof reference validation mechanism
  – Process isolation
  – Enforcement of principle of least privilege
  – Well-defined user interfaces
Functional Requirements
• Trusted facility management (B2 up)
  – Separation of operator, administrator roles
• Trusted recovery (A1)
  – Securely recover after failure or discontinuity
• System integrity requirement
  – Hardware diagnostics to validate on-site hardware, firmware of TCB
Assurance Requirements
• Configuration management requirements (B2 up)
  – Identify configuration items, consistent mappings among documentation and code, tools for generating TCB
• System architecture requirements
  – Modularity, minimize complexity, etc.
  – TCB full reference validation mechanism at B3
• Trusted distribution requirement (A1)
  – Address integrity of mapping between masters and on-site versions
  – Address acceptance procedures
Assurance Requirements
• Design specification, verification requirements
  – B1: informal security policy model shown to be consistent with its axioms
  – B2: formal security policy model proven to be consistent with its axioms, descriptive top-level specification (DTLS)
  – B3: DTLS shown to be consistent with security policy model
  – A1: formal top-level specification (FTLS) shown consistent with security policy model using approved formal methods; mapping between FTLS, source code
Assurance Requirements
• Testing requirements
  – Address conformance with claims, resistance to penetration, correction of flaws
  – Requires searching for covert channels for some classes
• Product documentation requirements
  – Security Features User's Guide describes uses, interactions of protection mechanisms
  – Trusted Facility Manual describes requirements for running system securely
• Other documentation: test, design docs
Evaluation Classes A and B

A1: Verified protection; significant use of formal methods; trusted distribution; code, FTLS correspondence
B3: Security domains; full reference validation mechanism; increases trusted path requirements, constrains code development; more DTLS requirements; documentation
B2: Structured protection; formal security policy model; MAC for all objects, labeling; trusted path; least privilege; covert channel analysis, configuration management
B1: Labeled security protection; informal security policy model; MAC for some objects; labeling; more stringent security testing
Evaluation Classes C and D
C2: Controlled access protection; object reuse, auditing, more stringent security testing
C1: Discretionary protection; minimal functional, assurance requirements; I&A controls; DAC
D: Did not meet requirements of any other class
Evaluation Process
• Run by government, no fee to vendor
• 3 stages
  – Application: request for evaluation
    • May be denied if gov't didn't need product
  – Preliminary technical review
    • Discussion of evaluation process, schedules, development process, technical content, etc.
    • Determined schedule for evaluation
  – Evaluation phase
Evaluation Phase
• 3 parts; results of each presented to technical review board composed of senior evaluators not on evaluating team; board must approve that part before moving on to next part
  – Design analysis: review design based on documentation provided; developed initial product assessment report
    • Source code not reviewed
  – Test analysis: vendor's, evaluators' tests
  – Final evaluation report
• Once approved, all items closed, rating given
RAMP
• Ratings Maintenance Program goal: maintain assurance for new version of evaluated product
• Vendor would update assurance evidence
• Technical review board reviewed vendor's report and, on approval, assigned evaluation rating to new version of product
• Note: major changes (structural, addition of some new functions) could be rejected here and a full new evaluation required
Impact
• New approach to evaluating security
  – Based on analyzing design, implementation, documentation, procedures
  – Introduced evaluation classes, assurance requirements, assurance-based evaluation
  – High technical standards for evaluation
  – Technical depth in evaluation procedures
• Some problems
  – Evaluation process difficult, lacking in resources
  – Mixed assurance, functionality together
  – Evaluations only recognized in US
Scope Limitations
• Written for operating systems
  – NCSC introduced "interpretations" for other things such as networks (Trusted Network Interpretation, the Red Book), databases (Trusted Database Interpretation, the Purple or Lavender Book)
• Focuses on needs of US government
  – Most commercial firms do not need MAC
• Does not address integrity or availability
  – Critical to commercial firms
Process Limitations
• Criteria creep (expansion of requirements defining classes)
  – Criteria interpreted for specific product types
  – Sometimes strengthened basic requirements over time
  – Good for community (learned more about security), but inconsistent over time
• Length of time of evaluation
  – Misunderstanding depth of evaluation
  – Management practices of evaluation
  – As evaluation was free, sometimes lacking in motivation
Contributions
• Heightened awareness in commercial sector to computer security needs
• Commercial firms could not use it for their products
  – Did not cover networks, applications
  – Led to wave of new approaches to evaluation
  – Some commercial firms began offering certifications
• Basis for several other schemes, such as Federal Criteria, Common Criteria
FIPS 140: 1994–Present
• Evaluation standard for cryptographic modules (implementing cryptographic logic or processes)
  – Established by US government agencies and the Canadian Communications Security Establishment
• Updated in 2001 to address changes in process and technology
  – Officially, FIPS 140-2
• Evaluates only crypto modules
  – If software, processor executing it also included, as is operating system
Requirements
• Four increasing levels of security
• FIPS 140-1 covers basic design, documentation, roles, cryptographic key management, testing, physical security (from electromagnetic interference), etc.
• FIPS 140-2 covers specification, ports & interfaces; finite state model; physical security; mitigation of other attacks; etc.
Security Level 1
• Encryption algorithm must be a FIPS-approved algorithm
• Software, firmware components may be executed on general-purpose system using unevaluated OS
• No physical security beyond use of production-grade equipment required
Security Level 2
• More physical security
  – Tamper-evident coatings or seals, or pick-resistant locks
• Role-based authentication
  – Module must authenticate that operator is authorized to assume specific role and perform specific services
• Software, firmware components may be executed on multiuser system with OS evaluated at EAL2 or better under Common Criteria
  – Must use one of specified set of protection profiles
Security Level 3
• Enhanced physical security
  – Enough to prevent intruders from accessing critical security parameters within module
• Identity-based authentication
• Strong requirements for reading, altering critical security parameters
• Software, firmware components require OS to have EAL3 evaluation, trusted path, informal security policy model
  – Can use equivalent evaluated trusted OS instead
Security Level 4
• "Envelope of protection" around module that detects, responds to all unauthorized attempts at physical access
  – Includes protection against environmental conditions or fluctuations outside module's range of voltage, temperature
• Software, firmware components require OS to meet functional requirements for Security Level 3, and assurance requirements for EAL4
  – Equivalent trusted operating system may be used
Impact
• By 2002, 164 modules, 332 algorithms tested
  – About 50% of modules had security flaws
  – More than 95% of modules had documentation errors
  – About 25% of algorithms had security flaws
  – More than 65% had documentation errors
• Program greatly improved quality, security of cryptographic modules
Common Criteria: 1998–Present
• Began in 1998 with signing of Common Criteria Recognition Agreement with 5 signers
  – US, UK, Canada, France, Germany
• As of May 2002, 10 more signers
  – Australia, Finland, Greece, Israel, Italy, Netherlands, New Zealand, Norway, Spain, Sweden; India, Japan, Russia, South Korea developing appropriate schemes
• Standard 15408 of the International Organization for Standardization (ISO)
• De facto US security evaluation standard
Evaluation Methodology
• CC documents
  – Overview of methodology, functional requirements, assurance requirements
• CC Evaluation Methodology (CEM)
  – Detailed guidelines for evaluation at each EAL; currently only EAL1–EAL4 defined
• Evaluation Scheme or National Scheme
  – Country-specific infrastructures implementing CEM
  – In US, it's CC Evaluation and Validation Scheme; NIST accredits commercial labs to do evaluations
CC Terms
• Target of Evaluation (TOE): system or product being evaluated
• TOE Security Policy (TSP): set of rules regulating how assets are managed, protected, distributed within TOE
• TOE Security Functions (TSF): set consisting of all hardware, software, firmware of TOE that must be relied on for correct enforcement of TSP
  – Generalization of TCB
Protection Profiles
• CC Protection Profile (PP): implementation-independent set of security requirements for category of products or systems meeting specific consumer needs
  – Includes functional requirements
    • Chosen from CC functional requirements by PP author
  – Includes assurance requirements
    • Chosen from CC assurance requirements; may be EAL plus others
  – PPs for firewalls, desktop systems, etc.
  – Evolved from ideas in earlier criteria
Form of PP
1. Introduction
   • PP Identification and PP Overview
2. Product or System Family Description
   • Includes description of type, general features of product or system
3. Product or System Family Security Environment
   • Assumptions about intended use, environment of use
   • Threats to the assets
   • Organizational security policies for product or system
Form of PP (con’t)
4. Security Objectives
   • Trace security objectives for product back to aspects of identified threats and/or policies
   • Trace security objectives for environment back to threats not completely countered by product or system and/or policies or assumptions not completely met by product or system
5. IT Security Requirements
   • Security functional requirements drawn from CC
   • Security assurance requirements based on an EAL
   • May supply other requirements without reference to CC
Form of PP (con’t)
6. Rationale
   • Security Objectives Rationale demonstrates stated objectives traceable to all assumptions, threats, policies
   • Security Requirements Rationale demonstrates requirements for product or system and for environment traceable to objectives and meet them
   • This section provides assurance evidence that PP is complete, consistent, technically sound
Security Target
• CC Security Target (ST): set of security requirements and specifications to be used as basis for evaluation of identified product or system
  – Can be derived from a PP, or directly from CC
    • If from PP, ST can reference PP directly
  – Addresses issues for specific product or system
    • PP addresses issues for a family of potential products or systems
How It Works
• Find appropriate PP and develop appropriate ST based upon it
  – If no PP, use CC to develop ST directly
• Evaluate ST in accordance with assurance class ASE
  – Validates that ST is complete, consistent, technically sound
• Evaluate product or system against ST
Form of ST
1. Introduction
   • ST Identification, ST Overview
   • CC Conformance Claim
     – Part 2 (or part 3) conformant if all functional requirements are from part 2 (or part 3) of CC
     – Part 2 (or part 3) extended if it also uses extended requirements defined by vendor
2. Product or System Description
   • Describes TOE as aid to understanding its security requirements
Form of ST (con’t)
3. Product or System Family Security Environment
4. Security Objectives
5. IT Security Requirements
   • These are the same as for a PP
Form of ST (con’t)
6. Product or System Summary Specification
   • Statement of security functions, description of how these meet functional requirements
   • Statement of assurance measures specifying how assurance requirements are met
7. PP Claims
   • Claims of conformance to (one or more) PP requirements
Form of ST (con’t)
8. Rationale
   • Security objectives rationale demonstrates stated objectives traceable to assumptions, threats, policies
   • Security requirements rationale demonstrates requirements for TOE and environment traceable to objectives and meets them
   • TOE summary specification rationale demonstrates how TOE security functions and assurance measures meet security requirements
   • Rationale for not meeting all dependencies
   • PP claims rationale explains differences between the ST objectives and requirements and those of any PP to which conformance is claimed
CC Requirements
• Both functional and assurance requirements
• EALs built from assurance requirements
• Requirements divided into classes based on common purpose
• Classes broken into smaller groups (families)
• Families composed of components: sets of definitions of detailed requirements, dependent requirements, and definition of hierarchy of requirements
Security Functional Requirements
SSE-CMM: 1997–Present
• Based on Software Engineering Capability Maturity Model (SE-CMM or just CMM)
• Defines requirements for process of developing secure systems, not for systems themselves
  – Provides maturity levels, not levels of trust
  – Used to evaluate an organization's security engineering
SSE-CMM Model
• Process capability: range of expected results that can be achieved by following process
  – Predictor of future project outcomes
• Process performance: measure of actual results
• Process maturity: extent to which a process is explicitly defined, managed, measured, controlled, and is effective
• Divides process into 11 areas, and 11 more for project and organizational practices
  – Each process area contains a goal, set of base processes
Process Areas
• Process areas:
  – Administer security controls
  – Assess impact, security risk, threat, vulnerability
  – Build assurance argument
  – Coordinate security
  – Monitor system security posture
  – Provide security input
  – Specify security needs
  – Verify, validate security
• Practices:
  – Ensure quality
  – Manage configuration, project risk
  – Monitor, control technical effort
  – Plan technical effort
  – Define, improve organization's systems engineering process
  – Manage product line evolution
  – Provide ongoing skills, knowledge
  – Coordinate with suppliers
Example: Assess Threat
• Goal: threats to the security of the system will be identified and characterized
• Base processes:
– Identify natural, man-made threats
– Identify threat units of measure
– Assess threat agent capability, threat likelihood
– Monitor threats and their characteristics
Capability Maturity Levels
• Performed informally: perform base processes
• Planned and tracked: address project-level definition, planning, performance, verification issues
• Well-defined: focus on defining, refining standard practice and coordinating it across organization
• Quantitatively controlled: focus on establishing measurable quality goals, objectively managing their performance
• Continuously improving: improve organizational capability, process effectiveness
Using the SSE-CMM
• Begin with process area
– Identify area goals, base processes
– If all processes present, determine how mature base processes are
• Assess them against capability maturity levels
• May require interacting with those who use the base processes
– Do this for each process area
• Level of maturity for area is lowest level of the base processes for that area
• Tabular representation (called Rating Profile) helps communicate results
Key Points
• First public, widely used evaluation methodology was TCSEC (Orange Book)
– Criticisms led to research and development of other methodologies
• Evolved into Common Criteria
• Other methodologies used for special environments
Chapter 19: Malicious Logic
• What is malicious logic• Types of malicious logic• Defenses
Overview
• Defining malicious logic• Types
– Trojan horses– Computer viruses and worms– Other types
• Defenses– Properties of malicious logic– Trust
Malicious Logic
• Set of instructions that cause site security policy to be violated
Example
• Shell script on a UNIX system:
cp /bin/sh /tmp/.xyzzy
chmod u+s,o+x /tmp/.xyzzy
rm ./ls
ls $*
• Place in program called “ls” and trick someone into executing it
• You now have a setuid-to-them shell!
Trojan Horse
• Program with an overt purpose (known to user) and a covert purpose (unknown to user)
– Often called a Trojan
– Named by Dan Edwards in Anderson Report
• Example: previous script is Trojan horse
– Overt purpose: list files in directory
– Covert purpose: create setuid shell
Example: NetBus
• Designed for Windows NT system
• Victim uploads and installs this
– Usually disguised as a game program, or in one
• Acts as a server, accepting and executing commands for remote administrator
– This includes intercepting keystrokes and mouse motions and sending them to attacker
– Also allows attacker to upload, download files
Replicating Trojan Horse
• Trojan horse that makes copies of itself– Also called propagating Trojan horse– Early version of animal game used this to delete copies
of itself
• Hard to detect– 1976: Karger and Schell suggested modifying compiler
to include Trojan horse that copied itself into specific programs including later version of the compiler
– 1980s: Thompson implements this
Thompson's Compiler

• Modify the compiler so that when it compiles login, login accepts the user's correct password or a fixed password (the same one for all users)
• Then modify the compiler again, so when it compiles a new version of the compiler, the extra code to do the first step is automatically inserted
• Recompile the compiler
• Delete the source containing the modification and put the undoctored source back
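The two-stage trick above can be modeled in a few lines. This is a toy sketch in plain Python string rewriting, not Thompson's actual C compiler: `doctored_compile`, `BACKDOOR`, and the marker strings are all invented for illustration.

```python
# Toy model of the two-stage trojan: recognize login's source and insert
# a backdoor, and recognize the compiler's own source and re-insert the
# trojan itself, so the modification survives even with clean sources.
BACKDOOR = "if password == 'magic': grant_access()"

def doctored_compile(source: str) -> str:
    out = source
    if "check_password" in source:        # compiling login
        out += "\n" + BACKDOOR            # now accepts the fixed password too
    if "def doctored_compile" in source:  # compiling the compiler itself
        out += "\n# re-inserts this trojan into the new compiler"
    return out

login_src = "def login(password):\n    check_password(password)"
binary = doctored_compile(login_src)      # "object code" contains the backdoor
```

Any other source passes through unchanged, which is why source-level inspection of login (or of the cleaned compiler source) reveals nothing.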
The Login Program

(Diagram: the correct compiler turns the login source into a login executable that logs the user in only with the user's password; the doctored compiler turns the same login source into an executable that logs the user in with either the user's password or the magic password.)
The Compiler

(Diagram: the correct compiler compiles the compiler source into a compiler executable that produces a correct login executable from the login source; the doctored compiler compiles the same compiler source into a compiler executable that produces a rigged login executable from the login source.)
Comments
• Great pains taken to ensure second version of compiler never released– Finally deleted when a new compiler executable from a
different system overwrote the doctored compiler
• The point: no amount of source-level verification or scrutiny will protect you from using untrusted code
– Also: having source code helps, but does not ensure you’re safe
Computer Virus
• Program that inserts itself into one or more files and performs some action– Insertion phase is inserting itself into file– Execution phase is performing some (possibly null)
action
• Insertion phase must be present– Need not always be executed– Lehigh virus inserted itself into boot file only if boot
file not infected
Pseudocode

beginvirus:
    if spread-condition then begin
        for some set of target files do begin
            if target is not infected then begin
                determine where to place virus instructions
                copy instructions from beginvirus to endvirus into target
                alter target to execute added instructions
            end;
        end;
    end;
    perform some action(s)
    goto beginning of infected program
endvirus:
Trojan Horse Or Not?
• Yes– Overt action = infected program’s actions– Covert action = virus’ actions (infect, execute)
• No– Overt purpose = virus’ actions (infect, execute)– Covert purpose = none
• Semantic, philosophical differences– Defenses against Trojan horse also inhibit computer
viruses
History
• Programmers for Apple II wrote some– Not called viruses; very experimental
• Fred Cohen– Graduate student who described them– Teacher (Adleman) named it “computer virus”– Tested idea on UNIX systems and UNIVAC
1108 system
Cohen’s Experiments
• UNIX systems: goal was to get superuser privileges
– Max time 60m, min time 5m, average 30m
– Virus small, so no degrading of response time
– Virus tagged, so it could be removed quickly
• UNIVAC 1108 system: goal was to spread– Implemented simple security property of Bell-LaPadula– As writing not inhibited (no *-property enforcement),
viruses spread easily
First Reports
• Brain (Pakistani) virus (1986)– Written for IBM PCs– Alters boot sectors of floppies, spreads to other
floppies• MacMag Peace virus (1987)
– Written for Macintosh– Prints “universal message of peace” on March
2, 1988 and deletes itself
More Reports
• Duff’s experiments (1987)– Small virus placed on UNIX system, spread to
46 systems in 8 days– Wrote a Bourne shell script virus
• Highland’s Lotus 1-2-3 virus (1989)– Stored as a set of commands in a spreadsheet
and loaded when spreadsheet opened– Changed a value in a specific row, column and
spread to other files
Types of Viruses
• Boot sector infectors• Executable infectors• Multipartite viruses• TSR viruses• Stealth viruses• Encrypted viruses• Polymorphic viruses• Macro viruses
Boot Sector Infectors
• A virus that inserts itself into the boot sector of a disk– Section of disk containing code– Executed when system first “sees” the disk
• Including at boot time …
• Example: Brain virus– Moves disk interrupt vector from 13H to 6DH– Sets new interrupt vector to invoke Brain virus– When new floppy seen, check for 1234H at location 4
• If not there, copies itself onto disk after saving original bootblock
Executable Infectors
• A virus that infects executable programs– Can infect either .EXE or .COM on PCs– May prepend itself (as shown) or put itself anywhere,
fixing up binary so it is executed at some point
(Diagram: the uninfected file has a header at 0 and executable code and data from 100 to 1000; in the infected file the virus code occupies 100 to 200, the original code and data are shifted to 200 to 1100, and the first program instruction to be executed lies in the virus code.)
Executable Infectors (con’t)
• Jerusalem (Israeli) virus– Checks if system infected
• If not, set up to respond to requests to execute files– Checks date
• If not 1987 or Friday 13th, set up to respond to clock interrupts and then run program
• Otherwise, set destructive flag; will delete, not infect, files– Then: check all calls asking files to be executed
• Do nothing for COMMAND.COM
• Otherwise, infect or delete
– Error: doesn’t set signature when .EXE executes• So .EXE files continually reinfected
Multipartite Viruses
• A virus that can infect either boot sectors or executables
• Typically, two parts– One part boot sector infector– Other part executable infector
TSR Viruses
• A virus that stays active in memory after the application (or bootstrapping, or disk mounting) is completed– TSR is “Terminate and Stay Resident”
• Examples: Brain, Jerusalem viruses– Stay in memory after program or disk mount is
completed
Stealth Viruses
• A virus that conceals infection of files• Example: IDF virus modifies DOS service
interrupt handler as follows:– Request for file length: return length of
uninfected file– Request to open file: temporarily disinfect file,
and reinfect on closing– Request to load file for execution: load infected
file
Encrypted Viruses
• A virus that is enciphered except for a small deciphering routine– Detecting virus by signature now much harder as most
of virus is enciphered
(Diagram: the virus code is stored as enciphered virus code together with a small deciphering routine and a deciphering key.)
Example

(* Decryption code of the 1260 virus *)
(* initialize the registers with the keys *)
rA = k1; rB = k2;
(* initialize rC with the virus; starts at sov, ends at eov *)
rC = sov;
(* the encipherment loop *)
while (rC != eov) do begin
    (* encipher the byte of the message *)
    (*rC) = (*rC) xor rA xor rB;
    (* advance all the counters *)
    rC = rC + 1;
    rA = rA + 1;
end
Polymorphic Viruses
• A virus that changes its form each time it inserts itself into another program
• Idea is to prevent signature detection by changing the “signature” or instructions used for deciphering routine
• At instruction level: substitute instructions• At algorithm level: different algorithms to achieve
the same purpose• Toolkits to make these exist (Mutation Engine,
Trident Polymorphic Engine)
Example
• These are different instructions (with different bit patterns) but have the same effect:– add 0 to register– subtract 0 from register– xor 0 with register– no-op
• Polymorphic virus would pick randomly from among these instructions
Macro Viruses
• A virus composed of a sequence of instructions that are interpreted rather than executed directly
• Can infect either executables (Duff’s shell virus) or data files (Highland’s Lotus 1-2-3 spreadsheet virus)
• Independent of machine architecture– But their effects may be machine dependent
Example
• Melissa– Infected Microsoft Word 97 and Word 98 documents
• Windows and Macintosh systems
– Invoked when program opens infected file– Installs itself as “open” macro and copies itself into
Normal template• This way, infects any files that are opened in future
– Invokes mail program, sends itself to everyone in user’s address book
Computer Worms
• A program that copies itself from one computer to another
• Origins: distributed computations– Schoch and Hupp: animations, broadcast messages– Segment: part of program copied onto workstation– Segment processes data, communicates with worm’s
controller– Any activity on workstation caused segment to shut
down
Example: Internet Worm of 1988
• Targeted Berkeley, Sun UNIX systems– Used virus-like attack to inject instructions into running
program and run them– To recover, had to disconnect system from Internet and
reboot– To prevent re-infection, several critical programs had to
be patched, recompiled, and reinstalled• Analysts had to disassemble it to uncover function• Disabled several thousand systems in 6 or so hours
Example: Christmas Worm
• Distributed in 1987, designed for IBM networks• Electronic letter instructing recipient to save it and
run it as a program– Drew Christmas tree, printed “Merry Christmas!”– Also checked address book, list of previously received
email and sent copies to each address• Shut down several IBM networks• Really, a macro worm
– Written in a command language that was interpreted
Rabbits, Bacteria
• A program that absorbs all of some class of resources
• Example: for UNIX system, shell commands:
while true
do
    mkdir x
    chdir x
done
• Exhausts either disk space or file allocation table (inode) space
Logic Bombs
• A program that performs an action that violates the site security policy when some external event occurs
• Example: program that deletes company’s payroll records when one particular record is deleted– The “particular record” is usually that of the person
writing the logic bomb– Idea is if (when) he or she is fired, and the payroll
record deleted, the company loses all those records
Defenses
• Distinguish between data, instructions• Limit objects accessible to processes• Inhibit sharing• Detect altering of files• Detect actions beyond specifications• Analyze statistical characteristics
Data vs. Instructions
• Malicious logic is both– Virus: written to program (data); then executes
(instructions)
• Approach: treat “data” and “instructions” as separate types, and require certifying authority to approve conversion– Keys are assumption that certifying authority will not
make mistakes and assumption that tools, supporting infrastructure used in certifying process are not corrupt
Example: LOCK
• Logical Coprocessor Kernel– Designed to be certified at TCSEC A1 level
• Compiled programs are type “data”– Sequence of specific, auditable events required
to change type to “executable”• Cannot modify “executable” objects
– So viruses can’t insert themselves into programs (no infection phase)
Example: Duff and UNIX
• Observation: users with execute permission usually have read permission, too– So files with “execute” permission have type
“executable”; those without it, type “data”– Executable files can be altered, but type
immediately changed to “data”• Implemented by turning off execute permission• Certifier can change them back
– So virus can spread only if run as certifier
Limiting Accessibility
• Basis: a user (unknowingly) executes malicious logic, which then executes with all that user’s privileges– Limiting accessibility of objects should limit
spread of malicious logic and effects of its actions
• Approach draws on mechanisms for confinement
Information Flow Metrics
• Idea: limit distance a virus can spread• Flow distance metric fd(x):
– Initially, all info x has fd(x) = 0– Whenever info y is shared, fd(y) increases by 1– Whenever y1, …, yn used as input to compute z,
fd(z) = max(fd(y1), …, fd(yn))• Information x accessible if and only if for
some parameter V, fd(x) < V
Example
• Anne: VA = 3; Bill, Cathy: VB = VC = 2• Anne creates program P containing virus• Bill executes P
– P tries to write to Bill’s program Q• Works, as fd(P) = 0, so fd(Q) = 1 < VB
• Cathy executes Q– Q tries to write to Cathy’s program R
• Fails, as fd(Q) = 1, so fd(R) would be 2
• Problem: if Cathy executes P, R can be infected– So, does not stop spread; slows it down greatly, though
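The walkthrough above can be traced in code. This is a minimal sketch under the assumption that sharing raises fd by one past the maximum fd of the inputs (matching the example's fd(Q) = 1); `fd_after_write` is a hypothetical helper, not part of any published implementation.

```python
# Flow distance metric: a write of info with fd k into another object
# yields fd k+1; access is allowed only when fd(x) < V for the user.
def fd_after_write(input_fds):
    return max(input_fds) + 1

V_bill = V_cathy = 2              # Anne's V is 3; she creates P with fd 0
fd_P = 0
fd_Q = fd_after_write([fd_P])     # Bill runs P; P writes Q: fd(Q) = 1 < V_bill
fd_R = fd_after_write([fd_Q])     # Cathy runs Q; Q would give R fd 2
write_R_allowed = fd_R < V_cathy  # False: the write to R is denied
```

Note that if Cathy ran P directly, R would get fd 1 and the write would succeed, which is exactly the loophole the slide points out.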
Implementation Issues
• Metric associated with information, not objects– You can tag files with metric, but how do you tag the
information in them?– This inhibits sharing
• To stop spread, make V = 0– Disallows sharing– Also defeats purpose of multi-user systems, and is
crippling in scientific and developmental environments• Sharing is critical here
Reducing Protection Domain
• Application of principle of least privilege• Basic idea: remove rights from process so it
can only perform its function– Warning: if that function requires it to write, it
can write anything– But you can make sure it writes only to those
objects you expect
Example: ACLs and C-Lists
• s1 owns file f1 and s2 owns program p2 and file f3– Suppose s1 can read, write f1, execute p2, write f3– Suppose s2 can read, write, execute p2 and read f3
• s1 needs to run p2– p2 contains Trojan horse
• So s1 needs to ensure p12 (subject created when s1 runs p2) can’t write to f3
– Ideally, p12 has capability { (s1, p2, x ) } so no problem• In practice, p12 inherits s1’s rights—bad! Note s1 does not own
f3, so can’t change its rights over f3
• Solution: restrict access by others
Authorization Denial Subset
• Defined for each user si• Contains ACL entries that others cannot
exercise over objects si owns• In example: R(s2) = { (s1, f3, w) }
– So when p12 tries to write to f3, as p12 owned by s1 and f3 owned by s2, system denies access
• Problem: how do you decide what should be in your authorization denial subset?
Karger’s Scheme
• Base it on attribute of subject, object• Interpose a knowledge-based subsystem to
determine if requested file access reasonable– Sits between kernel and application
• Example: UNIX C compiler– Reads from files with names ending in “.c”, “.h”– Writes to files with names beginning with “/tmp/ctm”
and assembly files with names ending in “.s”• When subsystem invoked, if C compiler tries to
write to “.c” file, request rejected
Lai and Gray
• Implemented modified version of Karger’s scheme on UNIX system
– Allow programs to access (read or write) files named
on command line– Prevent access to other files
• Two types of processes– Trusted (no access checks or restrictions)– Untrusted (valid access list controls access)
• VAL initialized to command line arguments plus any temporary files that the process creates
File Access Requests
1. If file on VAL, use effective UID/GID of process to determine if access allowed
2. If access requested is read and file is world-readable, allow access
3. If process creating file, effective UID/GID controls allowing creation– Enter file into VAL as NNA (new non-argument); set
permissions so no other process can read file4. Ask user. If yes, effective UID/GID controls
allowing access; if no, deny access
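The four rules might be sketched as follows. This is a hedged sketch, not Lai and Gray's actual implementation: `uid_allows` (the effective UID/GID permission check) and `ask_user` (the rule-4 prompt) are hypothetical stand-ins passed in as callables.

```python
# Sketch of the VAL access checks; val is the process's valid access list.
def check_access(path, mode, val, world_readable, creating,
                 uid_allows, ask_user):
    if path in val:                              # rule 1: file on VAL
        return uid_allows(path, mode)
    if mode == "read" and world_readable(path):  # rule 2: world-readable read
        return True
    if creating:                                 # rule 3: file creation
        if uid_allows(path, mode):
            val.add(path)                        # enter into VAL as NNA
            return True
        return False
    return ask_user(path, mode) and uid_allows(path, mode)  # rule 4: ask
```

A Trojan horse touching a file outside the VAL falls through to rule 4 and so reveals itself by triggering a prompt.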
Example
• Assembler invoked from compiler as
as x.s /tmp/ctm2345
and creates temp file /tmp/as1111
– VAL is
x.s /tmp/ctm2345 /tmp/as1111
• Now Trojan horse tries to copy x.s to another file– On creation, file inaccessible to all except creating user
so attacker cannot read it (rule 3)– If file created already and assembler tries to write to it,
user is asked (rule 4), thereby revealing Trojan horse
Trusted Programs
• No VALs applied here– UNIX command interpreters
• csh, sh– Program that spawn them
• getty, login– Programs that access file system recursively
• ar, chgrp, chown, diff, du, dump, find, ls, restore, tar– Programs that often access files not in argument list
• binmail, cpp, dbx, mail, make, script, vi– Various network daemons
• fingerd, ftpd, sendmail, talkd, telnetd, tftpd
Guardians, Watchdogs
• System intercepts request to open file• Program invoked to determine if access is to
be allowed– These are guardians or watchdogs
• Effectively redefines system (or library) calls
Trust
• Trust the user to take explicit actions to limit their process’ protection domain sufficiently– That is, enforce least privilege correctly
• Trust mechanisms to describe programs’ expected actions sufficiently for descriptions to be applied, and to handle commands without such descriptions properly
• Trust specific programs and kernel– Problem: these are usually the first programs malicious
logic attack
Sandboxing
• Sandboxes, virtual machines also restrict rights– Modify program by inserting instructions to
cause traps when violation of policy– Replace dynamic load libraries with
instrumented routines
Example: Race Conditions
• Occur when successive system calls operate on object– Both calls identify object by name– Rebind name to different object between calls
• Sandbox: instrument calls:– Unique identifier (inode) saved on first call– On second call, inode of named file compared to that of
first call• If they differ, potential attack underway …
Inhibit Sharing
• Use separation implicit in integrity policies• Example: LOCK keeps single copy of
shared procedure in memory– Master directory associates unique owner with
each procedure, and with each user a list of other users the first trusts
– Before executing any procedure, system checks that user executing procedure trusts procedure owner
Multilevel Policies
• Put programs at the lowest security level, all subjects at higher levels– By *-property, nothing can write to those
programs– By ss-property, anything can read (and execute)
those programs• Example: DG/UX system
– All executables in “virus protection region” below user and administrative regions
Detect Alteration of Files
• Compute manipulation detection code (MDC) to generate signature block for each file, and save it
• Later, recompute MDC and compare to stored MDC– If different, file has changed
• Example: tripwire
– Signature consists of file attributes and cryptographic checksums chosen from among MD4, MD5, HAVAL, SHS, CRC-16, CRC-32, etc.
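The recompute-and-compare step reduces to a few lines. A minimal sketch, using only a SHA-256 checksum where tripwire also records file attributes and offers a choice of checksum algorithms:

```python
import hashlib

# Compute a signature (here just a cryptographic checksum) for a file.
def signature(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# db maps path -> stored signature; return paths whose contents changed.
def changed_files(db):
    return [p for p, sig in db.items() if signature(p) != sig]
```

The scheme only detects alteration after the baseline was recorded, which is why the baseline must be taken on a known-clean system.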
Assumptions
• Files do not contain malicious logic when original signature block generated
• Pozzo & Grey: implement Biba’s model on LOCUS to make assumption explicit– Credibility ratings assign trustworthiness numbers from
0 (untrusted) to n (signed, fully trusted)– Subjects have risk levels
• Subjects can execute programs with credibility ratings ≥ risk level
• If credibility rating < risk level, must use special command to run program
Antivirus Programs
• Look for specific sequences of bytes (called a “virus signature”) in a file
– If found, warn user and/or disinfect file
• Each agent must look for known set of viruses
• Cannot deal with viruses not yet analyzed– Due in part to undecidability of whether a
generic program is a virus
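Signature scanning reduces to substring search over file contents. A toy sketch; the signatures below are invented for illustration, and real scanners keep databases of many thousands of signatures and use efficient multi-pattern matching rather than naive search.

```python
# Map of signature name -> known byte sequence (illustrative values only).
SIGNATURES = {
    "demo-virus-1": b"\xde\xad\xbe\xef",
    "demo-virus-2": b"XYZZY-INFECT",
}

# Return the names of all known signatures found in the data.
def scan(data: bytes):
    return [name for name, sig in SIGNATURES.items() if sig in data]
```

A file matching no signature scans clean even if it is infected by an unanalyzed virus, which is the limitation the slide describes.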
Detect Actions Beyond Spec
• Treat execution, infection as errors and apply fault tolerant techniques
• Example: break program into sequences of nonbranching instructions– Checksum each sequence, encrypt result– When run, processor recomputes checksum,
and at each branch co-processor compares computed checksum with stored one
• If different, error occurred
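The scheme might be sketched as follows, with the encryption of the stored checksums omitted and "execution" of each block left as a comment; block contents and function names are illustrative only.

```python
import hashlib

# Checksum each non-branching instruction sequence ahead of time.
def block_checksums(blocks):
    return [hashlib.md5(b).hexdigest() for b in blocks]

# At run time, recompute each block's checksum at the branch point and
# compare with the stored value; a mismatch means the code was altered.
def run_with_checks(blocks, stored):
    for block, expected in zip(blocks, stored):
        if hashlib.md5(block).hexdigest() != expected:
            raise RuntimeError("checksum mismatch: code altered")
        # ... execute the verified block here ...
```

An infecting virus that rewrites any block changes its checksum, so the co-processor comparison catches the infection before the block runs.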
N-Version Programming
• Implement several different versions of algorithm• Run them concurrently
– Check intermediate results periodically– If disagreement, majority wins
• Assumptions– Majority of programs not infected– Underlying operating system secure– Different algorithms with enough equal intermediate
results may be infeasible• Especially for malicious logic, where you would check file
accesses
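Majority voting over the versions can be sketched directly. The three "versions" below are trivially different implementations of the same computation, standing in for independently developed algorithms.

```python
from collections import Counter

# Three independently written "versions" of the same computation.
def v1(xs): return sum(xs)
def v2(xs):
    total = 0
    for x in xs:
        total += x
    return total
def v3(xs): return sum(sorted(xs))

# Run all versions on the same input and accept the majority result;
# if no strict majority agrees, flag a possible infection.
def vote(versions, arg):
    results = [v(arg) for v in versions]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority agreement: possible infection")
    return value
```

With one corrupted version out of three, the two honest versions still outvote it; the scheme fails only when a majority is compromised, which is the first assumption listed above.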
Proof-Carrying Code
• Code consumer (user) specifies safety requirement• Code producer (author) generates proof code
meets this requirement– Proof integrated with executable code– Changing the code invalidates proof
• Binary (code + proof) delivered to consumer• Consumer validates proof• Example statistics on Berkeley Packet Filter:
proofs 300–900 bytes, validated in 0.3–1.3 ms
– Startup cost higher, runtime cost considerably shorter
Detecting Statistical Changes
• Example: application had 3 programmers working on it, but statistical analysis shows code from a fourth person—may be from a Trojan horse or virus!
• Other attributes: more conditionals than in original; look for identical sequences of bytes not common to any library routine; increases in file size, frequency of writing to executables, etc.– Denning: use intrusion detection system to detect these
Key Points
• A perplexing problem– How do you tell what the user asked for is not
what the user intended?• Strong typing leads to separating data,
instructions• File scanners most popular anti-virus agents
– Must be updated as new viruses come out
Chapter 20: Vulnerability Analysis
• Background• Penetration Studies• Example Vulnerabilities• Classification Frameworks
Overview
• What is a vulnerability?• Penetration studies
– Flaw Hypothesis Methodology– Examples
• Vulnerability examples• Classification schemes
– RISOS– PA– NRL Taxonomy– Aslam’s Model
Definitions
• Vulnerability, security flaw: failure of security policies, procedures, and controls that allow a subject to commit an action that violates the security policy– Subject is called an attacker– Using the failure to violate the policy is
exploiting the vulnerability or breaking in
Formal Verification
• Mathematically verifying that a system satisfies certain constraints
• Preconditions state assumptions about the system
• Postconditions are result of applying system operations to preconditions, inputs
• Required: postconditions satisfy constraints
Penetration Testing
• Testing to verify that a system satisfies certain constraints
• Hypothesis stating system characteristics, environment, and state relevant to vulnerability
• Result is compromised system state• Apply tests to try to move system from state in
hypothesis to compromised system state
Notes
• Penetration testing is a testing technique, not a verification technique– It can prove the presence of vulnerabilities, but not the
absence of vulnerabilities• For formal verification to prove absence, proof
and preconditions must include all external factors– Realistically, formal verification proves absence of
flaws within a particular program, design, or environment and not the absence of flaws in a computer system (think incorrect configurations, etc.)
Penetration Studies
• Test for evaluating the strengths and effectiveness of all security controls on system– Also called tiger team attack or red team attack– Goal: violate site security policy– Not a replacement for careful design, implementation,
and structured testing– Tests system in toto, once it is in place
• Includes procedural, operational controls as well as technological ones
Goals
• Attempt to violate specific constraints in security and/or integrity policy– Implies metric for determining success– Must be well-defined
• Example: subsystem designed to allow owner to require others to give password before accessing file (i.e., password protect files)– Goal: test this control– Metric: did testers get access either without a password
or by gaining unauthorized access to a password?
Goals
• Find some number of vulnerabilities, or vulnerabilities within a period of time
– If vulnerabilities are categorized and studied, can draw conclusions about care taken in design, implementation, and operation
– Otherwise, list is helpful in closing holes but not more
• Example: vendor gets confidential documents; 30 days later publishes them on web
– Goal: obtain access to such a file; you have 30 days
– Alternate goal: gain access to files; no time limit (a Trojan horse would give access for over 30 days)
Layering of Tests
1. External attacker with no knowledge of system
• Locate system, learn enough to be able to access it
2. External attacker with access to system
• Can log in, or access network servers
• Often try to expand level of access
3. Internal attacker with access to system
• Testers are authorized users with restricted accounts (like ordinary users)
• Typical goal is to gain unauthorized privileges or information
Layering of Tests (con’t)
• Studies conducted from attacker’s point of view
• Environment is that in which attacker would function
• If information about a particular layer is irrelevant, that layer can be skipped
– Example: penetration testing during design, development skips layer 1
– Example: penetration test on system with guest account usually skips layer 2
Methodology
• Usefulness of penetration study comes from documentation, conclusions
– Indicates whether flaws are endemic or not
– It does not come from success or failure of attempted penetration
• Degree of penetration’s success also a factor
– In some situations, obtaining access to unprivileged account may be less successful than obtaining access to privileged account
Flaw Hypothesis Methodology
1. Information gathering
• Become familiar with system’s functioning
2. Flaw hypothesis
• Draw on knowledge to hypothesize vulnerabilities
3. Flaw testing
• Test them out
4. Flaw generalization
• Generalize vulnerability to find others like it
5. (maybe) Flaw elimination
• Testers eliminate the flaw (usually not included)
Information Gathering
• Devise model of system and/or components
– Look for discrepancies in components
– Consider interfaces among components
• Need to know system well (or learn quickly!)
– Design documents, manuals help
– Unclear specifications often misinterpreted, or interpreted differently by different people
– Look at how system manages privileged users
Flaw Hypothesizing
• Examine policies, procedures
– May be inconsistencies to exploit
– May be consistent, but inconsistent with design or implementation
– May not be followed
• Examine implementations
– Use models of vulnerabilities to help locate potential problems
– Use manuals; try exceeding limits and restrictions; try omitting steps in procedures
Flaw Hypothesizing (con’t)
• Identify structures, mechanisms controlling system
– These are what attackers will use
– Environment in which they work, and were built, may have introduced errors
• Throughout, draw on knowledge of other systems with similarities
– Which means they may have similar vulnerabilities
• Result is list of possible flaws
Flaw Testing
• Figure out order to test potential flaws
– Priority is a function of goals
• Example: to find major design or implementation problems, focus on potential system-critical flaws
• Example: to find vulnerability to outside attackers, focus on external access protocols and programs
• Figure out how to test potential flaws
– Best way: demonstrate from the analysis
• Common when flaw arises from faulty spec, design, or operation
– Otherwise, must try to exploit it
Flaw Testing (con’t)
• Design test to be as unintrusive as possible
– Must understand exactly why flaw might arise
• Procedure
– Back up system
– Verify system configured to allow exploit
• Take notes of requirements for detecting flaw
– Verify existence of flaw
• May or may not require exploiting the flaw
• Make test as simple as possible, but success must be convincing
– Must be able to repeat test successfully
Flaw Generalization
• As tests succeed, classes of flaws emerge
– Example: programs read input into buffer on stack, leading to buffer overflow attack; others copy command-line arguments into buffer on stack ⇒ these are vulnerable too
• Sometimes two different flaws may combine for devastating attack
– Example: flaw 1 gives external attacker access to unprivileged account on system; second flaw allows any user on that system to gain full privileges ⇒ any external attacker can get full privileges
Flaw Elimination
• Usually not included, as testers are not the best folks to fix this
– Designers and implementers are
• Requires understanding of context, details of flaw including environment, and possibly exploit
– Design flaw uncovered during development can be corrected and parts of implementation redone
• Don’t need to know how exploit works
– Design flaw uncovered at production site may not be corrected fast enough to prevent exploitation
• So need to know how exploit works
Michigan Terminal System
• General-purpose OS running on IBM 360, 370 systems
• Class exercise: gain access to terminal control structures
– Had approval and support of center staff
– Began with authorized account (level 3)
Step 1: Information Gathering
• Learn details of system’s control flow and supervisor
– When program ran, memory split into segments
– 0–4: supervisor, system programs, system state
• Protected by hardware mechanisms
– 5: system work area, process-specific information including privilege level
• Process should not be able to alter this
– 6 on: user process information
• Process can alter these
• Focus on segment 5
Step 2: Information Gathering
• Segment 5 protected by virtual memory protection system
– System mode: process can access, alter data in segment 5, and issue calls to supervisor
– User mode: segment 5 not present in process address space (and so can’t be modified)
• Run in user mode when user code being executed
• User code issues system call, which in turn issues supervisor call
How to Make a Supervisor Call
• System code checks parameters to ensure supervisor accesses authorized locations only
– Parameters passed as list of addresses (X, X+1, X+2) constructed in user segment
– Address of list (X) passed via register
[Figure: parameter list at locations X, X+1, X+2 in the user segment; the address X of the list is passed in a register]
Step 3: Flaw Hypothesis

• Consider switch from user to system mode
– System mode requires supervisor privileges
• Found: a parameter could point to another element in parameter list
– Below: address in location X+1 is that of parameter at X+2
– Means: system or supervisor procedure could alter parameter’s address after checking validity of old address
[Figure: parameter list in which the entry at X+1 holds the address of the entry at X+2]
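The hypothesized aliasing can be shown in a toy C model. Everything below (memory as an int array, addresses as indices, the segment bound, the call itself) is invented for illustration and is not MTS code; the point is only that validating an address stored where the call can later rewrite it buys nothing:

```c
#include <assert.h>

/* Toy model of the parameter-list flaw (all names and sizes invented).
 * "Memory" is an array of ints; an "address" is an index.  The
 * parameter list lives at mem[x], mem[x+1]; each entry holds the
 * address a result will be stored to.  The supervisor validates both
 * target addresses up front, then writes results through them. */
enum { MEMSZ = 16, SEG5 = 12 };   /* indices 12..15 model protected segment 5 */

static int user_addr(int a) { return a >= 0 && a < SEG5; }

static void sup_call(int *mem, int x, int r1, int r2) {
    /* validation pass: both targets must lie in the user segment */
    assert(user_addr(mem[x]) && user_addr(mem[x + 1]));
    mem[mem[x]]     = r1;  /* if mem[x] == x+1, this REWRITES the second  */
    mem[mem[x + 1]] = r2;  /* entry, so r2 lands at an unvalidated address */
}
```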
Step 4: Flaw Testing
• Find a system routine that:
– Used this calling convention;
– Took at least 2 parameters and altered 1;
– Could be made to change parameter to any value (such as an address in segment 5)
• Chose line input routine
– Returns line number, length of line, line read
• Setup:
– Set address for storing line number to be address of line length
Step 5: Execution
• System routine validated all parameter addresses
– All were indeed in user segment
• Supervisor read input line
– Line length set to value to be written into segment 5
• Line number stored in parameter list
– Line number was set to be address in segment 5
• When line read, line length written into location whose address was in parameter list
– So it overwrote value in segment 5
Step 6: Flaw Generalization
• Could not overwrite anything in segments 0–4
– Protected by hardware
• Testers realized that privilege level in segment 5 controlled ability to issue supervisor calls (as opposed to system calls)
– And one such call turned off hardware protection for segments 0–4 …
• Effect: this flaw allowed attackers to alter anything in memory, thereby completely controlling computer
Burroughs B6700
• System architecture: based on strict file typing
– Entities: ordinary users, privileged users, privileged programs, OS tasks
• Ordinary users tightly restricted
• Other 3 can access file data without restriction but constrained from compromising integrity of system
– No assemblers; compilers output executable code
– Data files, executable files have different types
• Only compilers can produce executables
• Writing to executable or its attributes changes its type to data
• Class exercise: obtain status of privileged user
Step 1: Information Gathering
• System had tape drives
– Writing file to tape preserved file contents
– Header record prepended to tape indicates file attributes, including type
• Data could be copied from one tape to another
– If you change data, it’s still data
Step 2: Flaw Hypothesis
• System cannot detect change to executable file if that file is altered off-line
Step 3: Flaw Testing
• Write small program to change type of any file from data to executable
– Compiled, but could not be used yet, as running it would alter file attributes, making target a data file
– Write this to tape
• Write a small utility to copy contents of tape 1 to tape 2
– Utility also changes header record of contents to indicate file was a compiler (and so could output executables)
Creating the Compiler
• Run copy program
– As header record copied, type becomes “compiler”
• Reinstall program as a new compiler
• Write new subroutine, compile it normally, and change machine code to give privileges to anyone calling it (this makes it data, of course)
– Now use new compiler to change its type from data to executable
• Write third program to call this
– Now you have privileges
Corporate Computer System
• Goal: determine whether corporate security measures were effective in keeping external attackers from accessing system
• Testers focused on policies and procedures
– Both technical and non-technical
Step 1: Information Gathering
• Searched Internet
– Got names of employees, officials
– Got telephone number of local branch, and from them got copy of annual report
• Constructed much of the company’s organization from this data
– Including list of some projects on which individuals were working
Step 2: Get Telephone Directory
• Corporate directory would give more needed information about structure
– Tester impersonated new employee
• Learned two numbers needed to have something delivered off-site: employee number of person requesting shipment, and employee’s Cost Center number
– Testers called secretary of executive they knew most about
• One impersonated an employee, got executive’s employee number
• Another impersonated auditor, got Cost Center number
– Had corporate directory sent to off-site “subcontractor”
Step 3: Flaw Hypothesis
• Controls blocking people giving passwords away not fully communicated to new employees
– Testers impersonated secretary of senior executive
• Called appropriate office
• Claimed senior executive upset he had not been given names of employees hired that week
• Got the names
Step 4: Flaw Testing
• Testers called newly hired people
– Claimed to be with computer center
– Provided “Computer Security Awareness Briefing” over phone
– During this, learned:
• Types of computer systems used
• Employees’ numbers, logins, and passwords
• Called computer center to get modem numbers
– These bypassed corporate firewalls
• Success
Penetrating a System
• Goal: gain access to system
• We know its network address and nothing else
• First step: scan network ports of system
– Protocols on ports 79, 111, 512, 513, 514, and 540 are typically run on UNIX systems
• Assume UNIX system; SMTP agent probably sendmail
– This program has had lots of security problems
– Maybe system running one such version …
• Next step: connect to sendmail on port 25
Output of Network Scan

ftp 21/tcp File Transfer
telnet 23/tcp Telnet
smtp 25/tcp Simple Mail Transfer
finger 79/tcp Finger
sunrpc 111/tcp SUN Remote Procedure Call
exec 512/tcp remote process execution (rexecd)
login 513/tcp remote login (rlogind)
shell 514/tcp rlogin style exec (rshd)
printer 515/tcp spooler (lpd)
uucp 540/tcp uucpd
nfs 2049/tcp networked file system
xterm 6000/tcp x-windows server
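The scan and the inference it feeds can be sketched in C. This is only illustrative (helper names are invented; a real study would use a full scanner, and the three-hit threshold is arbitrary): port_open() shows the underlying TCP connect() probe, and suggests_unix() encodes the reasoning that ports 79, 111, 512–514, and 540 together point at a UNIX system.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* TCP connect() probe: reports whether a port accepts connections.
 * (Sketch only -- no timeout handling, IPv4 only.) */
static int port_open(const char *ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return 0;
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons((uint16_t)port);
    inet_pton(AF_INET, ip, &sa.sin_addr);
    int ok = connect(fd, (struct sockaddr *)&sa, sizeof sa) == 0;
    close(fd);
    return ok;
}

/* Inference step from the slide: these services open together suggest
 * a UNIX system. */
static int suggests_unix(const int *ports, int n) {
    static const int hint[] = { 79, 111, 512, 513, 514, 540 };
    int hits = 0;
    for (int i = 0; i < n; i++)
        for (size_t j = 0; j < sizeof hint / sizeof hint[0]; j++)
            if (ports[i] == hint[j]) { hits++; break; }
    return hits >= 3;   /* threshold is arbitrary for this sketch */
}
```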
Output of sendmail

220 zzz.com sendmail 3.1/zzz.3.9, Dallas, Texas, ready at Wed, 2 Apr 97 22:07:31 CST
Version 3.1 has the “wiz” vulnerability that recognizes the “shell” command … so let’s try it. Start off by identifying yourself:
helo xxx.org
250 zzz.com Hello xxx.org, pleased to meet you
Now see if the “wiz” command works … if it says “command unrecognized”, we’re out of luck:
wiz
250 Enter, O mighty wizard!
It does! And we didn’t need a password … so get a shell:
shell
#
And we have full privileges as the superuser, root.
Penetrating a System (Revisited)
• Goal: from an unprivileged account on system, gain privileged access
• First step: examine system
– See it has dynamically loaded kernel
– Program used to add modules is loadmodule and must be privileged
– So an unprivileged user can run a privileged program … this suggests an interface that controls this
– Question: how does loadmodule work?
loadmodule
• Validates module as being a dynamic load module
• Invokes dynamic loader ld.so to do actual load; also calls arch to determine system architecture (chip set)
– Check, but only privileged user can call ld.so
• How does loadmodule execute these programs?
– Easiest way: invoke them directly using system(3), which does not reset environment when it spawns subprogram
First Try
• Set environment to look in local directory, write own version of ld.so, and put it in local directory
– This version will print effective UID, to demonstrate we succeeded
• Set search path to look in current working directory before system directories
• Then run loadmodule
– Nothing is printed—darn!
– Somehow changing environment did not affect execution of subprograms—why not?
What Happened
• Look in executable to see how ld.so, arch invoked
– Invocations are “/bin/ld.so”, “/bin/arch”
– Changing search path didn’t matter, as it was never used
• Reread system(3) manual page
– It invokes command interpreter sh to run subcommands
• Read sh(1) manual page
– Uses IFS environment variable to separate words
– These are by default blanks … can we make it include a “/”?
• If so, sh would see “/bin/ld.so” as “bin” followed by “ld.so”, so it would look for command “bin”
Second Try
• Change value of IFS to include “/”
• Change name of our version of ld.so to bin
– Search path still has current directory as first place to look for commands
• Run loadmodule
– Prints that its effective UID is 0 (root)
• Success!
Generalization
• Process did not clean out environment before invoking subprocess, which inherited environment
– So, trusted program working with untrusted environment (input) … result should be untrusted, but is trusted!
• Look for other privileged programs that spawn subcommands
– Especially if they do so by calling system(3) …
Penetrating a System (Redux)
• Goal: gain access to system
• We know its network address and nothing else
• First step: scan network ports of system
– Protocols on ports 17, 135, and 139 are typically run on Windows NT server systems
Output of Network Scan

qotd 17/tcp Quote of the Day
ftp 21/tcp File Transfer [Control]
loc-srv 135/tcp Location Service
netbios-ssn 139/tcp NETBIOS Session Service [JBP]
First Try
• Probe for easy-to-guess passwords
– Find system administrator has password “Admin”
– Now have administrator (full) privileges on local system
domain
Next Step
• Domain administrator installed service running with domain admin privileges on local system
• Get program that dumps local security authority database
– This gives us service account password
– We use it to get domain admin privileges, and can access any system in domain
Generalization
• Sensitive account had an easy-to-guess password
– Possible procedural problem
• Look for weak passwords on other systems, accounts
• Review company security policies, as well as education of system administrators and mechanisms for publicizing the policies
Debate
• How valid are these tests?
– Not a substitute for good, thorough specification, rigorous design, careful and correct implementation, meticulous testing
– Very valuable a posteriori testing technique
• Ideally unnecessary, but in practice very necessary
• Finds errors introduced due to interactions with users, environment
– Especially errors from incorrect maintenance and operation
– Examines system, site through eyes of attacker
Problems
• Flaw Hypothesis Methodology depends on caliber of testers to hypothesize and generalize flaws
• Flaw Hypothesis Methodology does not provide a way to examine system systematically
– Vulnerability classification schemes help here
Vulnerability Classification

• Describe flaws from differing perspectives
– Exploit-oriented
– Hardware, software, interface-oriented
• Goals vary; common ones are:
– Specify, design, implement computer system without vulnerabilities
– Analyze computer system to detect vulnerabilities
– Address any vulnerabilities introduced during system operation
– Detect attempted exploitations of vulnerabilities
Example Flaws
• Use these to compare classification schemes
• First one: race condition (xterm)
• Second one: buffer overflow on stack leading to execution of injected code (fingerd)
• Both are very well known, and fixes available!
– And should be installed everywhere …
Flaw #1: xterm

• xterm emulates terminal under X11 window system
– Must run as root user on UNIX systems
• No longer universally true; reason irrelevant here
• Log feature: user can log all input, output to file
– User names file
– If file does not exist, xterm creates it, makes owner the user
– If file exists, xterm checks user can write to it, and if so opens file to append log to it
File Exists

• Check that user can write to file requires special system call
– Because root can append to any file, check in open will always succeed

    /* check that the user can write to file "/usr/tom/X" */
    if (access("/usr/tom/X", W_OK) == 0) {
        /* open "/usr/tom/X" to append log entries */
        if ((fd = open("/usr/tom/X", O_WRONLY|O_APPEND)) < 0) {
            /* handle error: cannot open file */
        }
    }
Problem
• Binding of file name “/usr/tom/X” to file object can change between first and second lines– (a) is at access; (b) is at open– Note file opened is not file checked
/etc
passwd X
open(“/usr/tom/X”, O_WRITE)
passwd data
/etc
passwd
usr
access(“/usr/tom/X”, W_OK)
X datapasswd data X data
(a) (b)
tomX
usrtom
Flaw #2: fingerd
• Exploited by Internet Worm of 1988– Recurs in many places, even now
• finger client send request for information to server fingerd (finger daemon)– Request is name of at most 512 chars– What happens if you send more?
Buffer Overflow

• Extra chars overwrite rest of stack, as shown
• Can make those chars change return address to point to beginning of buffer
• If buffer contains small program to spawn shell, attacker gets shell on target system
[Figure: stack before and after the message. Before: input buffer, parameter to gets, gets local variables, other return state info, return address of main, main local variables. After: the input buffer holds a program to invoke a shell, and the return address has been overwritten with the address of the input buffer]
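The defensive side of this flaw can be sketched in a few lines of C. The helper name is invented; the 512-byte request size comes from the protocol description above. The point is simply that input must be copied with a bound, truncating rather than writing past the buffer the way gets(3) would:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bounded copy of a finger-style request into a fixed buffer. */
enum { REQSZ = 512 };

static void copy_request(char dst[REQSZ], const char *src) {
    /* snprintf always NUL-terminates and writes at most REQSZ bytes,
     * so an oversized request can never spill onto the stack */
    snprintf(dst, REQSZ, "%s", src);
}
```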
Frameworks

• Goals dictate structure of classification scheme
– Guide development of attack tool ⇒ focus is on steps needed to exploit vulnerability
– Aid software development process ⇒ focus is on design and programming errors causing vulnerabilities
• Following schemes classify vulnerability as n-tuple, each element of n-tuple being classes into which vulnerability falls
– Some have 1 axis; others have multiple axes
Research Into Secure Operating Systems (RISOS)
• Goal: aid computer, system managers in understanding security issues in OSes, and help determine how much effort required to enhance system security
• Attempted to develop methodologies and software for detecting some problems, and techniques for avoiding and ameliorating other problems
• Examined Multics, TENEX, TOPS-10, GECOS, OS/MVT, SDS-940, EXEC-8
Classification Scheme

• Incomplete parameter validation
• Inconsistent parameter validation
• Implicit sharing of privileged/confidential data
• Asynchronous validation/inadequate serialization
• Inadequate identification/authentication/authorization
• Violable prohibition/limit
• Exploitable logic error
Incomplete Parameter Validation

• Parameter not checked before use
• Example: emulating integer division in kernel (RISC chip involved)
– Caller provided addresses for quotient, remainder
– Quotient address checked to be sure it was in user’s protection domain
– Remainder address not checked
• Set remainder address to address of process’ level of privilege
• Compute 25/5 and you have level 0 (kernel) privileges
• Check for type, format, range of values, access rights, presence (or absence)
Inconsistent Parameter Validation
• Each routine checks parameter is in proper format for that routine, but the routines require different formats
• Example: each database record is 1 line, with colons separating fields
– One program accepts colons, newlines as part of data within fields
– Another program reads them as field and record separators
– This allows bogus records to be entered
Implicit Sharing of Privileged / Confidential Data
• OS does not isolate users, processes properly
• Example: file password protection
– OS allows user to determine when paging occurs
– Files protected by passwords
• Passwords checked char by char; stops at first incorrect char
– Position guess for password so page fault occurred between 1st, 2nd char
• If no page fault, 1st char was wrong; if page fault, it was right
– Continue until password discovered
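The enabler in this attack is the early exit: how far the comparison proceeds (observed here through paging) reveals how many leading characters were right. A constant-time comparison removes that signal; the sketch below (function name invented) touches every byte regardless of where the first mismatch is:

```c
#include <assert.h>
#include <stddef.h>

/* Compare two n-byte secrets without an early exit.  The loop's work
 * is the same whether the guess fails at the first byte or the last,
 * so its timing and memory-access pattern leak nothing about the
 * position of the mismatch. */
static int pw_equal(const char *a, const char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);   /* accumulate, never break */
    return diff == 0;
}
```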
Asynchronous Validation / Inadequate Serialization
• Time-of-check-to-time-of-use flaws; intermixing reads and writes to create inconsistencies
• Example: xterm flaw discussed earlier
Inadequate Identification / Authorization / Authentication
• Erroneously identifying user, assuming another’s privilege, or tricking someone into executing program without authorization
• Example: OS on which access to file named “SYS$*DLOC$” meant process privileged
– Check: can process access any file with qualifier name beginning with “SYS” and file name beginning with “DLO”?
– If your process can access file “SYSA*DLOC$”, which is ordinary file, your process is privileged
Violable Prohibition / Limit

• Boundary conditions not handled properly
• Example: OS kept in low memory, user process in high memory
– Boundary was highest address of OS
– All memory accesses checked against this
– Memory accesses not checked beyond end of high memory
• Such addresses reduced modulo memory size
– So, process could access (memory size)+1, or word 1, which is part of OS …
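The arithmetic of this flaw fits in a toy model. MEMSZ and OS_TOP are invented for the sketch; the point is that the lower-bound check runs before the modulo reduction, so an address one past the end of memory wraps back into the OS region:

```c
#include <assert.h>

/* Toy model: OS occupies words 0..OS_TOP-1; memory is MEMSZ words. */
enum { MEMSZ = 1024, OS_TOP = 256 };

/* The flawed check: rejects only addresses below the top of the OS. */
static int flawed_ok(unsigned addr)      { return addr >= OS_TOP; }

/* Address translation applied AFTER the check: reduces modulo memory
 * size, wrapping out-of-range addresses back to low memory. */
static unsigned translate(unsigned addr) { return addr % MEMSZ; }

/* The correct check adds the missing upper bound. */
static int correct_ok(unsigned addr)     { return addr >= OS_TOP && addr < MEMSZ; }
```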
Exploitable Logic Error

• Problems not falling into other classes
– Incorrect error handling, unexpected side effects, incorrect resource allocation, etc.
• Example: unchecked return from monitor
– Monitor adds 1 to address in user’s PC, returns
• Index bit (indicating indirection) is a bit in word
• Attack: set address to be –1; adding 1 overflows, changes index bit, so return is to location stored in register 1
– Arrange for this to point to bootstrap program stored in other registers
• On return, program executes with system privileges
Legacy of RISOS

• First funded project examining vulnerabilities
• Valuable insight into nature of flaws
– Security is a function of site requirements and threats
– Small number of fundamental flaws recurring in many contexts
– OS security not critical factor in design of OSes
• Spurred additional research efforts into detection, repair of vulnerabilities
Program Analysis (PA)
• Goal: develop techniques to find vulnerabilities
• Tried to break problem into smaller, more manageable pieces
• Developed general strategy, applied it to several OSes
– Found previously unknown vulnerabilities
Classification Scheme

• Improper protection domain initialization and enforcement
– Improper choice of initial protection domain
– Improper isolation of implementation detail
– Improper change
– Improper naming
– Improper deallocation or deletion
• Improper validation
• Improper synchronization
– Improper indivisibility
– Improper sequencing
• Improper choice of operand or operation
Improper Choice of Initial Protection Domain
• Initial incorrect assignment of privileges, security and integrity classes
• Example: on boot, protection mode of file containing identifiers of all users can be altered by any user
– Under most policies, should not be allowed
Improper Isolation of Implementation Detail
• Mapping an abstraction into an implementation in such a way that the abstraction can be bypassed
• Example: VMs modulate length of time CPU is used by each to send bits to each other
• Example: Having raw disk accessible to system as ordinary file, enabling users to bypass file system abstraction and write directly to raw disk blocks
Improper Change
• Data is inconsistent over a period of time
• Example: xterm flaw
– Meaning of “/usr/tom/X” changes between access and open
• Example: parameter is validated, then accessed; but parameter is changed between validation and access
– Burroughs B6700 allowed this
Improper Naming
• Multiple objects with same name
• Example: Trojan horse
– loadmodule attack discussed earlier; “bin” could be a directory or a program
• Example: multiple hosts with same IP address
– Messages may be erroneously routed
Improper Deallocation or Deletion
• Failing to clear memory or disk blocks (or other storage) after it is freed for use by others
• Example: program that contains passwords that a user typed dumps core
– Passwords plainly visible in core dump
Improper Validation
• Inadequate checking of bounds, type, or other attributes or values
• Example: fingerd’s failure to check input length
Improper Indivisibility

• Interrupting operations that should be uninterruptible
– Often: “interrupting atomic operations”
• Example: mkdir flaw (UNIX Version 7)
– Created directories by executing privileged operation to create file node of type directory, then changed ownership to user
– On loaded system, could change binding of name of directory to be that of password file after directory created but before change of ownership
– Attacker can change administrator’s password
Improper Sequencing

• Required order of operations not enforced
• Example: one-time password scheme
– System runs multiple copies of its server
– Two users try to access same account
• Server 1 reads password from file
• Server 2 reads password from file
• Both validate typed password, allow user to log in
• Server 1 writes new password to file
• Server 2 writes new password to file
– Should have every read to file followed by a write, and vice versa; not two reads or two writes to file in a row
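Enforcing that read-then-write ordering can be sketched with advisory file locking. This is only illustrative (the historical system predates flock(2), and the function name is invented): holding an exclusive lock across the whole read-modify-write means two servers can never interleave two reads before the corresponding writes.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/file.h>
#include <unistd.h>

/* Read the current one-time password and replace it with the next
 * one, as a single serialized critical section. */
static int update_password(const char *path, const char *next_pw) {
    int fd = open(path, O_RDWR);
    if (fd < 0) return -1;
    if (flock(fd, LOCK_EX) < 0) { close(fd); return -1; }  /* serialize */
    char cur[128];
    ssize_t n = read(fd, cur, sizeof cur - 1);             /* read ...  */
    if (n < 0) n = 0;
    cur[n] = '\0';                   /* current one-time password */
    if (ftruncate(fd, 0) < 0 || lseek(fd, 0, SEEK_SET) < 0 ||
        write(fd, next_pw, strlen(next_pw)) < 0) {         /* ... write */
        flock(fd, LOCK_UN); close(fd); return -1;
    }
    flock(fd, LOCK_UN);              /* release only after the write */
    close(fd);
    return 0;
}
```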
Improper Choice of Operand or Operation
• Calling inappropriate or erroneous instructions
• Example: cryptographic key generation software calling pseudorandom number generators that produce predictable sequences of numbers
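In Python this distinction maps onto the standard `random` module (a predictable Mersenne Twister) versus the `secrets` module (an OS-backed cryptographically secure source); a minimal sketch:

```python
import random
import secrets

# Improper choice of operation: a seeded PRNG yields the same "key"
# every run, and its future output is predictable from observed values.
weak_key = random.Random(1234).getrandbits(128).to_bytes(16, "big")

# Appropriate choice: a cryptographically secure source for key material.
strong_key = secrets.token_bytes(16)
```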
Legacy
• First to explore automatic detection of security flaws in programs and systems
• Methods developed but not widely used– Parts of procedure could not be automated– Complexity– Procedures for obtaining system-independent
patterns describing flaws not complete
NRL Taxonomy• Goals:
– Determine how flaws entered system– Determine when flaws entered system– Determine where flaws are manifested in system
• 3 different schemes used:– Genesis of flaws– Time of flaws– Location of flaws
Genesis of Flaws
• Inadvertent (unintentional) flaws classified using RISOS categories; not shown above
– If most inadvertent, better design/coding reviews needed
– If most intentional, need to hire more trustworthy developers and do more security-related testing
• Intentional flaws:
– Malicious: Trojan horse (nonreplicating or replicating), trapdoor, logic/time bomb
– Nonmalicious: covert channel (storage or timing), other
Time of Flaws
• Development phase: all activities up to release of initial version of software
• Maintenance phase: all activities leading to changes in software, performed under configuration control
• Operation phase: all activities involving patching, not under configuration control
• Time of introduction: development (requirement/specification/design, source code, or object code), maintenance, or operation
Location of Flaw
• Focus effort on locations where most flaws occur, or where most serious flaws occur
• Location classes: software (operating system: system initialization, memory management, process management/scheduling, device management, file management, identification/authentication, other/unknown; support: privileged utilities, unprivileged utilities; application) or hardware
Legacy• Analyzed 50 flaws• Concluded that, with a large enough sample size, an
analyst could study relationships between pairs of classes– This would help developers focus on most likely places, times, and
causes of flaws• Focused on social processes as well as technical details
– But much information required for classification not available for the 50 flaws
Aslam’s Model
• Goal: treat vulnerabilities as faults and develop scheme based on fault trees
• Focuses specifically on UNIX flaws• Classifications unique and unambiguous
– Organized as a binary tree, with a question at each node. Answer determines branch you take
– Leaf node gives you classification• Suited for organizing flaws in a database
Top Level• Coding faults: introduced during software development
– Example: fingerd’s failure to check length of input string before storing it in buffer
• Emergent faults: result from incorrect initialization, use, or application– Example: allowing message transfer agent to forward mail to arbitrary file
on system (it performs according to specification, but results create a vulnerability)
Coding Faults• Synchronization errors: improper serialization of operations, timing
window between two operations creates flaw– Example: xterm flaw
• Condition validation errors: bounds not checked, access rights ignored, input not validated, authentication and identification fails– Example: fingerd flaw
Emergent Faults• Configuration errors: program installed incorrectly
– Example: tftp daemon installed so it can access any file; then anyone can copy any file
• Environmental faults: faults introduced by environment
– Example: on some UNIX systems, any shell with “-” as first char of name is interactive, so find a setuid shell script, create a link to it named “-gotcha”, run it, and you have a privileged interactive shell
Legacy
• Tied security flaws to software faults• Introduced a precise classification scheme
– Each vulnerability belongs to exactly 1 class of security flaws
– Decision procedure well-defined, unambiguous
Comparison and Analysis
• Point of view– If multiple processes involved in exploiting the
flaw, how does that affect classification?• xterm, fingerd flaws depend on interaction of two
processes (xterm and process to switch file objects; fingerd and its client)
• Levels of abstraction– How does flaw appear at different levels?
• Levels are abstract, design, implementation, etc.
xterm and PA Classification
• Implementation level– xterm: improper change– attacker’s program: improper deallocation or
deletion– operating system: improper indivisibility
xterm and PA Classification• Consider higher level of abstraction, where directory is
simply an object– create, delete files maps to writing; read file status, open file maps
to reading– operating system: improper sequencing
• During read, a write occurs, violating Bernstein conditions
• Consider even higher level of abstraction– attacker’s process: improper choice of initial protection domain
• Should not be able to write to directory containing log file• Semantics of UNIX users require this at lower levels
xterm and RISOS Classification
• Implementation level– xterm: asynchronous validation/inadequate
serialization– attacker’s process: exploitable logic error and
violable prohibition/limit– operating system: inconsistent parameter
validation
xterm and RISOS Classification
• Consider higher level of abstraction, where directory is simply an object (as before)– all: asynchronous validation/inadequate
serialization• Consider even higher level of abstraction
– attacker’s process: inadequate identification/authentication/authorization
• Directory with log file not protected adequately• Semantics of UNIX require this at lower levels
xterm and NRL Classification• Time, location unambiguous
– Time: during development– Location: Support:privileged utilities
• Genesis: ambiguous– If intentional:
• Lowest level: inadvertent flaw of serialization/aliasing– If unintentional:
• Lowest level: nonmalicious: other– At higher levels, parallels that of RISOS
xterm and Aslam’s Classification• Implementation level
– attacker’s process: object installed with incorrect permissions• attacker’s process can delete file
– xterm: access rights validation error• xterm doesn't properly validate the file at time of access
– operating system: improper or inadequate serialization error• deletion, creation should not have been interspersed with access, open
– Note: in absence of explicit decision procedure, all could go into class race condition
The Point
• The schemes lead to ambiguity– Different researchers may classify the same
vulnerability differently for the same classification scheme
• Not true for Aslam’s, but that misses connections between different classifications– xterm is race condition as well as others; Aslam
does not show this
fingerd and PA Classification
• Implementation level– fingerd: improper validation– attacker’s process: improper choice of operand
or operation– operating system: improper isolation of
implementation detail
fingerd and PA Classification• Consider higher level of abstraction, where storage space
of return address is object– operating system: improper change– fingerd: improper validation
• Because it doesn’t validate the type of instructions to be executed, mistaking data for valid instructions
• Consider even higher level of abstraction, where security-related value in memory is changing and data executed that should not be executable– operating system: improper choice of initial protection domain
fingerd and RISOS Classification
• Implementation level– fingerd: incomplete parameter validation– attacker’s process: violable prohibition/limit– operating system: inadequate
identification/authentication/authorization
fingerd and RISOS Classification• Consider higher level of abstraction, where storage space
of return address is object– operating system: asynchronous validation/inadequate serialization– fingerd: inadequate identification/authentication/authorization
• Consider even higher level of abstraction, where security-related value in memory is changing and data executed that should not be executable– operating system: inadequate
identification/authentication/authorization
fingerd and NRL Classification
• Time, location unambiguous– Time: during development– Location: support: privileged utilities
• Genesis: ambiguous– Known to be inadvertent flaw– Parallels that of RISOS
fingerd and Aslam Classification
• Implementation level– fingerd: boundary condition error– attacker’s process: boundary condition error
• operating system: environmental fault– If decision procedure not present, could also have been
access rights validation errors
Summary
• Classification schemes requirements– Decision procedure for classifying vulnerability– Each vulnerability should have unique
classification• Above schemes do not meet these criteria
– Inconsistent among different levels of abstraction
– Point of view affects classification
Key Points
• Given large numbers of non-secure systems in use now, unrealistic to expect less vulnerable systems to replace them
• Penetration studies are effective tests of systems provided the test goals are known and tests are structured well
• Vulnerability classification schemes aid in flaw generalization and hypothesis
Chapter 21: Auditing
• Overview• What is auditing?• What does an audit system look like?• How do you design an auditing system?• Auditing mechanisms• Examples: NFSv2, LAFS
What is Auditing?
• Logging– Recording events or statistics to provide
information about system use and performance• Auditing
– Analysis of log records to present information about the system in a clear, understandable manner
Uses
• Describe security state– Determine if system enters unauthorized state
• Evaluate effectiveness of protection mechanisms– Determine which mechanisms are appropriate
and working– Deter attacks because of presence of record
Problems
• What do you log?– Hint: looking for violations of a policy, so
record at least what will show such violations• What do you audit?
– Need not audit everything– Key: what is the policy involved?
Audit System Structure
• Logger– Records information, usually controlled by
parameters• Analyzer
– Analyzes logged information looking for something
• Notifier– Reports results of analysis
Logger
• Type, quantity of information recorded controlled by system or program configuration parameters
• May be human readable or not– If not, usually viewing tools supplied– Space available, portability influence storage
format
Example: RACF
• Security enhancement package for IBM’s MVS/VM
• Logs failed access attempts, use of privilege to change security levels, and (if desired) RACF interactions
• View events with LISTUSERS commands
RACF: Sample Entry
USER=EW125004 NAME=S.J.TURNER OWNER=SECADM CREATED=88.004
DEFAULT-GROUP=HUMRES PASSDATE=88.004 PASS-INTERVAL=30
ATTRIBUTES=ADSP
REVOKE DATE=NONE RESUME-DATE=NONE
LAST-ACCESS=88.020/14:15:10
CLASS AUTHORIZATIONS=NONE
NO-INSTALLATION-DATA
NO-MODEL-NAME
LOGON ALLOWED (DAYS) (TIME)
--------------------------------
ANYDAY ANYTIME
GROUP=HUMRES AUTH=JOIN CONNECT-OWNER=SECADM CONNECT-DATE=88.004
CONNECTS= 15 UACC=READ LAST-CONNECT=88.018/16:45:06
CONNECT ATTRIBUTES=NONE
REVOKE DATE=NONE RESUME DATE=NONE
GROUP=PERSNL AUTH=JOIN CONNECT-OWNER=SECADM CONNECT-DATE=88.004
CONNECTS= 25 UACC=READ LAST-CONNECT=88.020/14:15:10
CONNECT ATTRIBUTES=NONE
REVOKE DATE=NONE RESUME DATE=NONE
SECURITY-LEVEL=NONE SPECIFIED
CATEGORY AUTHORIZATION
NONE SPECIFIED
Example: Windows NT• Different logs for different types of events
– System event logs record system crashes, component failures, and other system events
– Application event logs record events that applications request be recorded
– Security event log records security-critical events such as logging in and out, system file accesses, and other events
• Logs are binary; use event viewer to see them• If log full, can have system shut down, logging disabled, or
logs overwritten
Windows NT Sample Entry
Date: 2/12/2000  Source: Security
Time: 13:03  Category: Detailed Tracking
Type: Success  EventID: 592
User: WINDSOR\Administrator
Computer: WINDSOR
Description: A new process has been created:
New Process ID: 2216594592
Image File Name: \Program Files\Internet Explorer\IEXPLORE.EXE
Creator Process ID: 2217918496
User Name: Administrator
Domain: WINDSOR
Logon ID: (0x0,0x14B4c4)
[would be in graphical format]
Analyzer
• Analyzes one or more logs– Logs may come from multiple systems, or a
single system– May lead to changes in logging– May lead to a report of an event
Examples• Using swatch to find instances of telnet from tcpd logs:
/telnet/&!/localhost/&!/*.site.com/
• Query set overlap control in databases– If too much overlap between current query and past queries, do not answer
• Intrusion detection analysis engine (director)– Takes data from sensors and determines if an intrusion is occurring
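A minimal Python analogue of the swatch rule above (the log lines and the `.site.com` placeholder domain are illustrative, not real data):

```python
import re

def telnet_alert(line):
    # Mirrors /telnet/&!/localhost/&!/*.site.com/: flag telnet
    # connections, except from localhost or hosts under site.com.
    return ("telnet" in line
            and "localhost" not in line
            and not re.search(r"\S+\.site\.com", line))

log = [
    "Jan 10 10:00:01 host tcpd: telnet from outside.example.net",
    "Jan 10 10:00:02 host tcpd: telnet from localhost",
    "Jan 10 10:00:03 host tcpd: telnet from www.site.com",
]
alerts = [l for l in log if telnet_alert(l)]  # only the first line matches
```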
Notifier
• Informs analyst, other entities of results of analysis
• May reconfigure logging and/or analysis on basis of results
Examples
• Using swatch to notify of telnets:
/telnet/&!/localhost/&!/*.site.com/ mail staff
• Query set overlap control in databases– Prevents response from being given if too much
overlap occurs• Three failed logins in a row disable user
account– Notifier disables account, notifies sysadmin
Designing an Audit System• Essential component of security mechanisms• Goals determine what is logged
– Idea: auditors want to detect violations of policy, which provides a set of constraints that the set of possible actions must satisfy
– So, audit functions that may violate the constraints• Constraint pi : action ⇒ condition
Example: Bell-LaPadula• Simple security condition and *-property
– S reads O ⇒ L(S) ≥ L(O)– S writes O ⇒ L(S) ≤ L(O)– To check for violations, on each read and write, must log L(S), L(O),
action (read, write), and result (success, failure)– Note: need not record S, O!
• In practice, done to identify the object of the (attempted) violation and the user attempting the violation
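A sketch of the logging these constraints require; the integer levels and function name are illustrative. Each access records L(S), L(O), the action, and the result:

```python
audit_log = []

def access(l_s, l_o, action):
    # Simple security condition: S reads O  =>  L(S) >= L(O)
    # *-property:                S writes O =>  L(S) <= L(O)
    allowed = (l_s >= l_o) if action == "read" else (l_s <= l_o)
    # S and O themselves need not be recorded to detect a violation,
    # though in practice they are logged to identify it.
    audit_log.append((l_s, l_o, action, "success" if allowed else "failure"))
    return allowed
```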
Implementation Issues• Show non-security or find violations?
– Former requires logging initial state as well as changes• Defining violations
– Does “write” include “append” and “create directory”?• Multiple names for one object
– Logging goes by object and not name– Representations can affect this (if you read raw disks, you’re reading files;
can your auditing system determine which file?)
Syntactic Issues
• Data that is logged may be ambiguous– BSM: two optional text fields followed by two
mandatory text fields– If three fields, which of the optional fields is
omitted?• Solution: use grammar to ensure well-
defined syntax of log files
Example
entry : date host prog [ bad ] user [ “from” host ] “to” user “on” tty
date : daytime
host : string
prog : string “:”
bad : “FAILED”
user : string
tty : “/dev/” string
• Log file entry format defined unambiguously
• Audit mechanism could scan and interpret entries without confusion
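A regex transliteration of the grammar above; the sample lines, hostnames, and the assumed layout of `daytime` are illustrative:

```python
import re

# Named groups correspond to the grammar's nonterminals; the
# bracketed elements (bad, "from" host) become optional groups.
ENTRY = re.compile(
    r"(?P<date>\w+ +\d+ [\d:]+) "     # daytime, e.g. "Sep 16 11:35:28"
    r"(?P<host>\S+) (?P<prog>\S+): "  # host and program name
    r"(?:(?P<bad>FAILED) )?"          # optional failure marker
    r"(?P<user>\S+) "
    r"(?:from (?P<rhost>\S+) )?"      # optional originating host
    r"to (?P<tuser>\S+) on (?P<tty>/dev/\S+)$"
)

m = ENTRY.match("Sep 16 11:35:28 host1 login: FAILED bishop "
                "from remote.example.com to root on /dev/ttyp0")
m2 = ENTRY.match("Sep 16 11:36:02 host1 su: bishop to root on /dev/ttyp0")
```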
More Syntactic Issues
• Context– Unknown user uses anonymous ftp to retrieve
file “/etc/passwd”– Logged as such– Problem: which /etc/passwd file?
• One in system /etc directory
• One in anonymous ftp directory /var/ftp/etc, and as ftp thinks /var/ftp is the root directory, /etc/passwd refers to /var/ftp/etc/passwd
Log Sanitization• U set of users, P policy defining set of information C(U)
that U cannot see; log sanitized when all information in C(U) deleted from log
• Two types of P– C(U) can’t leave site
• People inside site are trusted and information not sensitive to them– C(U) can’t leave system
• People inside site not trusted or (more commonly) information sensitive to them
• Don’t log this sensitive information
Logging Organization
• Top prevents information from leaving site– Users’ privacy not protected from system administrators, other administrative
personnel• Bottom prevents information from leaving system
– Data simply not recorded, or data scrambled before recording
[Diagram (top): logging system → log → sanitizer → users; sanitization as data leaves the site]
[Diagram (bottom): logging system → sanitizer → log → users; sanitization before recording]
Reconstruction
• Anonymizing sanitizer cannot be undone– No way to recover data from this
• Pseudonymizing sanitizer can be undone– Original log can be reconstructed
• Importance– Suppose security analysis requires access to
information that was sanitized?
Issue
• Key: sanitization must preserve properties needed for security analysis
• If new properties added (because analysis changes), may have to resanitize information
– This requires pseudonymous sanitization or the original log
Example• Company wants to keep its IP addresses secret, but wants a
consultant to analyze logs for an address scanning attack– Connections to port 25 on IP addresses 10.163.5.10, 10.163.5.11,
10.163.5.12, 10.163.5.13, 10.163.5.14, 10.163.5.15– Sanitize with random IP addresses
• Cannot see sweep through consecutive IP addresses– Sanitize with sequential IP addresses
• Can see sweep through consecutive IP addresses
Generation of Pseudonyms1. Devise set of pseudonyms to replace sensitive information
• Replace data with pseudonyms• Maintain table mapping pseudonyms to data
2. Use random key to encipher sensitive datum and use secret sharing scheme to share key• Used when insiders cannot see unsanitized data, but outsiders (law
enforcement) need to• Requires t out of n people to read data
Application Logging
• Applications logs made by applications– Applications control what is logged– Typically use high-level abstractions such as:
su: bishop to root on /dev/ttyp0
– Does not include detailed, system call level information such as results, parameters, etc.
System Logging• Log system events such as kernel actions
– Typically use low-level events:
3876 ktrace CALL execve(0xbfbff0c0,0xbfbff5cc,0xbfbff5d8)
3876 ktrace NAMI "/usr/bin/su"
3876 ktrace NAMI "/usr/libexec/ld-elf.so.1"
3876 su RET execve 0
3876 su CALL __sysctl(0xbfbff47c,0x2,0x2805c928,0xbfbff478,0,0)
3876 su RET __sysctl 0
3876 su CALL mmap(0,0x8000,0x3,0x1002,0xffffffff,0,0,0)
3876 su RET mmap 671473664/0x2805e000
3876 su CALL geteuid
3876 su RET geteuid 0
– Does not include high-level abstractions such as loading libraries (as above)
Contrast• Differ in focus
– Application logging focuses on application events, like failure to supply proper password, and the broad operation (what was the reason for the access attempt?)
– System logging focuses on system events, like memory mapping or file accesses, and the underlying causes (why did access fail?)
• System logs usually much bigger than application logs• Can do both, try to correlate them
Design• A posteriori design
– Need to design auditing mechanism for system not built with security in mind
• Goal of auditing– Detect any violation of a stated policy
• Focus is on policy and actions designed to violate policy; specific actions may not be known
– Detect actions known to be part of an attempt to breach security• Focus on specific actions that have been determined to indicate
attacks
Detect Violations of Known Policy
• Goal: does system enter a disallowed state?• Two forms
– State-based auditing• Look at current state of system
– Transition-based auditing• Look at actions that transition system from one state
to another
State-Based Auditing
• Log information about state and determine if state allowed– Assumption: you can get a snapshot of system
state– Snapshot needs to be consistent– Non-distributed system needs to be quiescent– Distributed system can use Chandy-Lamport
algorithm, or some other algorithm, to obtain this
Example
• File system auditing tools– Thought of as analyzing single state (snapshot)– In reality, analyze many slices of different state
unless file system quiescent– Potential problem: if test at end depends on
result of test at beginning, relevant parts of system state may have changed between the first test and the last
• Classic TOCTTOU flaw
Transition-Based Auditing
• Log information about action, and examine current state and proposed transition to determine if new state would be disallowed– Note: just analyzing the transition may not be
enough; you may need the initial state– Tend to use this when specific transitions
always require analysis (for example, change of privilege)
Example
• TCP access control mechanism intercepts TCP connections and checks against a list of connections to be blocked– Obtains IP address of source of connection– Logs IP address, port, and result
(allowed/blocked) in log file– Purely transition-based (current state not
analyzed at all)
Detect Known Violations of Policy
• Goal: does a specific action and/or state that is known to violate security policy occur?– Assume that action automatically violates
policy– Policy may be implicit, not explicit– Used to look for known attacks
Example
• Land attack– Consider 3-way handshake to initiate TCP connection
(next slide)– What happens if source, destination ports and addresses
the same? Host expects ACK(t+1), but gets ACK(s+1).
– RFC ambiguous:
• p. 36 of RFC: send RST to terminate connection
• p. 69 of RFC: reply with an empty packet having current sequence number t+1 and ACK number s+1; but the received packet's ACK number is incorrect, so it repeats this, and the system hangs or runs very slowly, depending on whether interrupts are disabled
3-Way Handshake and Land
Normal:
1. srcseq = s, expects ACK s+1
2. destseq = t, expects ACK t+1; src gets ACK s+1
3. srcseq = s+1, destseq = t+1; dest gets ACK t+1
Land:
1. srcseq = destseq = s, expects ACK s+1
2. srcseq = destseq = t, expects ACK t+1 but gets ACK s+1
3. Never reached; recovery from error in 2 attempted
[Diagram: Source and Destination exchanging SYN(s) ACK(s+1) and SYN(t) ACK(t+1)]
Detection
• Must spot initial Land packet with source, destination addresses the same
• Logging requirement:– source port number, IP address– destination port number, IP address
• Auditing requirement:– If source port number = destination port number and source IP
address = destination IP address, packet is part of a Land attack
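The auditing requirement reduces to one predicate over the logged fields; the function name and flat packet representation are assumptions for illustration:

```python
def is_land_packet(src_ip, src_port, dst_ip, dst_port):
    # A Land packet forges source = destination, so the victim
    # effectively completes the handshake with itself.
    return src_ip == dst_ip and src_port == dst_port
```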
Auditing Mechanisms
• Systems use different mechanisms– Most common is to log all events by default,
allow system administrator to disable logging that is unnecessary
• Two examples– One audit system designed for a secure system– One audit system designed for non-secure
system
Secure Systems
• Auditing mechanisms integrated into system design and implementation
• Security officer can configure reporting and logging:– To report specific events– To monitor accesses by a subject– To monitor accesses to an object
• Controlled at audit subsystem– Irrelevant accesses, actions not logged
Example 1: VAX VMM
• Designed to be a secure production system– Audit mechanism had to have minimal impact– Audit mechanism had to be very reliable
• Kernel is layered– Logging done where events of interest occur– Each layer audits accesses to objects it controls
• Audit subsystem processes results of logging from mechanisms in kernel– Audit subsystem manages system log– Invoked by mechanisms in kernel
VAX VMM Audit Subsystem
• Calls provide data to be logged– Identification of event, result– Auxiliary data depending on event– Caller’s name
• Subsystem checks criteria for logging– If request matcher, data is logged– Criteria are subject or object named in audit table, and
severity level (derived from result)– Adds date and time, other information
Other Issues
• Always logged– Programmer can request event be logged– Any attempt to violate policy
• Protection violations, login failures logged when they occur repeatedly
• Use of covert channels also logged
• Log filling up– Audit logging process signaled to archive log when log
is 75% full– If not possible, system stops
Example 2: CMW
• Compartmented Mode Workstation designed to allow processing at different levels of sensitivity– Auditing subsystem keeps table of auditable events– Entries indicate whether logging is turned on, what type
of logging to use– User level command chaud allows user to control
auditing and what is audited• If changes affect subjects, objects currently being logged, the
logging completes and then the auditable events are changed
CMW Process Control
• System calls allow process to control auditing– audit_on turns logging on, names log file– audit_write validates log entry given as
parameter, logs entry if logging for that entry is turned on
– audit_suspend suspends logging temporarily– audit_resume resumes logging after suspension– audit_off turns logging off for that process
System Calls
• On system call, if auditing on:– System call recorded– First 3 parameters recorded (but pointers not
followed)• How audit_write works
– If room in log, append new entry– Otherwise halt system, discard new entry, or
disable event that caused logging• Continue to try to log other events
Other Ways to Log
• Problem: some processes want to log higher-level abstractions (application logging)– Window manager creates, writes high-level
events to log
– Difficult to map low-level events into high-level ones
– Disables low-level logging for window manager as unnecessary
CMW Auditing
• Tool (redux) to analyze logged events• Converts binary logs to printable format• Redux allows user to constrain printing
based on several criteria– Users– Objects– Security levels– Events
Non-Secure Systems
• Have some limited logging capabilities– Log accounting data, or data for non-security
purposes– Possibly limited security data like failed logins
• Auditing subsystems focusing on security usually added after system completed– May not be able to log all events, especially if
limited kernel modifications to support audit subsystem
Example: Basic Security Module
• BSM enhances SunOS, Solaris security– Logs composed of records made up of tokens
• Token contains information about event: user identity, groups, file system information, network, system call and result, etc. as appropriate
More About Records
• Records refer to auditable events
  – Kernel events: opening a file
  – Application events: failure to authenticate when logging in
• Grouped into audit event classes based on events causing record generation
  – Before log created: tell system what to generate records for
  – After log created: defined classes control which records given to analysis tools
Example Record
• Logs are binary; this is from praudit
header,35,AUE_EXIT,Wed Sep 18 11:35:28 1991, + 570000 msec,
process,bishop,root,root,daemon,1234,
return,Error 0,5
trailer,35
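The printable praudit output above is one record whose lines are tokens, each beginning with the token's type. A minimal sketch of indexing such a record by token type follows; the field layout is inferred from this one example only, not from the full BSM token specification.

```python
# Minimal sketch: index praudit-style lines by token type.
# Field layout is an assumption taken from the example record above.
def parse_praudit(lines):
    record = {}
    for line in lines:
        # drop the trailing comma some tokens carry, then split on commas
        fields = [f.strip() for f in line.strip().rstrip(",").split(",")]
        record[fields[0]] = fields[1:]
    return record

rec = parse_praudit([
    "header,35,AUE_EXIT,Wed Sep 18 11:35:28 1991, + 570000 msec,",
    "process,bishop,root,root,daemon,1234,",
    "return,Error 0,5",
    "trailer,35",
])
```

With this sketch, `rec["header"][1]` yields the event type `AUE_EXIT` and `rec["process"][0]` the audit user `bishop`.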
Auditing File Systems
• Network File System (NFS)
  – Industry standard
  – Server exports file system; client imports it
  – Root of tree being exported called server mount point; place in client file tree where exported file system imported called client mount point
• Logging and Auditing File System (LAFS)
  – Built on NFS
NFS Version 2
• Mounting protocol
  – Client kernel contacts server’s mount daemon
  – Daemon checks client is authorized to mount file system
  – Daemon returns file handle pointing to server mount point
  – Client creates entry in client file system corresponding to file handle
  – Access restrictions enforced
    • On client side: server not aware of these
    • On server side: client not aware of these
File Access Protocol

• Process tries to open file as if it were local
• Client kernel sends file handle for element of path referring to remote file to server’s NFS server using LOOKUP request
• If file handle valid, server replies with appropriate file handle
• Client requests attributes with GETATTR
  – Client then determines if access allowed; if not, denies
• Iterate above three steps until handle obtained for requested file
  – Or access denied by client
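The iteration above can be sketched as a client-side resolution loop. `lookup` and `getattr_` stand in for the LOOKUP and GETATTR RPCs, and the tiny in-memory "server" tables are purely illustrative assumptions; note it is the client, not the server, that decides whether access is allowed.

```python
# Illustrative stand-ins for the server's handle table and attributes.
SERVER_FS = {
    ("root", "home"): "h1",
    ("h1", "alice"): "h2",
    ("h2", "notes.txt"): "h3",
}
ATTRS = {"h1": "rx", "h2": "rx", "h3": "r"}

def lookup(dir_handle, name):
    """LOOKUP RPC: map (directory handle, name) to a file handle."""
    return SERVER_FS.get((dir_handle, name))

def getattr_(handle):
    """GETATTR RPC: return the attributes for a handle."""
    return ATTRS[handle]

def resolve(path, want="r"):
    """Iterate LOOKUP/GETATTR along the path; client enforces access."""
    handle = "root"
    parts = path.split("/")
    for i, name in enumerate(parts):
        handle = lookup(handle, name)
        if handle is None:
            return None                      # no such component
        attrs = getattr_(handle)
        needed = want if i == len(parts) - 1 else "x"
        if needed not in attrs:              # access denied by client
            return None
    return handle
```

For example, `resolve("home/alice/notes.txt")` walks three LOOKUPs and returns the final handle, while asking for write access fails at the client because the file's attributes allow only reading.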
Other Important Details
• NFS stateless
  – Server has no idea which files are being accessed and by whom
• NFS access control
  – Most servers require requests to come from privileged programs
    • Check that source port is 1023 or less
  – Underlying messages identify user
    • To some degree of certainty …
Site Policy
1. NFS servers respond only to authorized clients
2. UNIX access controls regulate access to server’s exported file system
3. No client host can access a non-exported file system
Resulting Constraints
1. File access granted ⇒ client authorized to import file system, user can search all parent directories, user can access file as requested, file is descendant of server’s file system mount point
  • From P1, P2, P3
2. Device file created or file type changed to device ⇒ user’s UID is 0
  • From P2; only UID 0 can do these actions
More Constraints
3. Possession of file handle ⇒ file handle issued to user
  • From P1, P2; otherwise unauthorized client could access files in forbidden ways
4. Operation succeeds ⇒ similar local operation would succeed
  • From P2; mount should fail if requester UID not 0
NFS Operations
• Transitions from secure to non-secure state can occur only when NFS command occurs
• Example commands:
  – MOUNT filesystem
    • Mount the named file system on the requesting client, if allowed
  – LOOKUP dir_handle file_name
    • Search in directory with handle dir_handle for file named file_name; return file handle for file_name
Logging Requirements
1. When file handle issued, server records handle, UID and GID of user requesting it, client host making request
  • Similar to allocating file descriptor when file opened; allows validation of later requests
2. When file handle used as parameter, server records UID, GID of user
  • Was the user using the file handle the one to whom it was issued—useful for detecting spoofs
Logging Requirements
3. When file handle issued, server records relevant attributes of containing object
  • On LOOKUP, attributes of containing directory show whether it can be searched
4. Record results of each operation
  • Lets auditor determine result
5. Record file names used as arguments
  • Reconstruct path names, purpose of commands
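The five requirements can be sketched as a pair of server-side logging hooks plus one audit query. Everything here is a hypothetical illustration: the record field names (`event`, `handle`, `uid`, and so on) and helper names are inventions, not part of any NFS implementation.

```python
import time

AUDIT_LOG = []

def log_issue(handle, uid, gid, client, attrs):
    """Requirements 1 and 3: record handle issuance, requester, attributes."""
    AUDIT_LOG.append({"event": "issue", "handle": handle, "uid": uid,
                      "gid": gid, "client": client, "attrs": attrs,
                      "time": time.time()})

def log_use(handle, uid, gid, op, name, result):
    """Requirements 2, 4, 5: record each use, its result, and file names."""
    AUDIT_LOG.append({"event": "use", "handle": handle, "uid": uid,
                      "gid": gid, "op": op, "name": name,
                      "result": result, "time": time.time()})

def issued_to(handle):
    """Audit check: was this handle issued, and to whom? (spoof detection)"""
    for rec in AUDIT_LOG:
        if rec["event"] == "issue" and rec["handle"] == handle:
            return rec["uid"], rec["client"]
    return None

log_issue("h42", uid=1000, gid=100, client="clientA", attrs="rwx")
log_use("h42", uid=1001, gid=100, op="LOOKUP", name="notes.txt", result="ok")
```

In this run the handle was issued to UID 1000 but used by UID 1001, exactly the mismatch requirement 2 is meant to expose.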
Audit Criteria: MOUNT
• MOUNT
  – Check that MOUNT server denies all requests by unauthorized clients to import file system that host exports
    • Obtained from constraints 1, 4
    • Log requirements 1 (who requests it), 3 (access attributes—to whom can it be exported), 4 (result)
Audit Criteria: LOOKUP
2. Check file handle comes from client, user to which it was issued
  • Obtained from constraint 3
  • Log requirements 1 (who issued to), 2 (who is using)
3. Check that directory has file system mount point as ancestor and user has search permission on directory
  • Obtained from constraint 1
  • Log requirements 2 (who is using handle), 3 (owner, group, type, permissions of object), 4 (result), 5 (reconstruct path name)
LAFS
• File system that records user-level activities
• Uses policy-based language to automate checks for violation of policies
• Implemented as extension to NFS
  – You create directory with lmkdir and attach policy with lattach:
lmkdir /usr/home/xyzzy/project policy
lattach /usr/home/xyzzy/project /lafs/xyzzy/project
LAFS Components
• Name server
• File manager
• Configuration assistant
  – Sets up required protection modes; interacts with name server, underlying file protection mechanisms
• Audit logger
  – Logs file accesses; invoked whenever process accesses file
• Policy checker
  – Validates policies, checks logs conform to policy
How It Works
• No changes to applications
• Each file has 3 associated virtual files
  – file%log: all accesses to file
  – file%policy: access control policy for file
  – file%audit: when accessed, triggers audit in which accesses are compared to policy for file
• Virtual files not shown in listing
  – LAFS knows the extensions and handles them properly
Example Policies

prohibit:0900-1700:*:*:wumpus:exec
  – No-one can execute wumpus between 9AM and 5PM

allow:*:Makefile:*:make:read
allow:*:Makefile:Owner:makedepend:write
allow:*:*.o,*.out:Owner,Group:gcc,ld:write
allow:-010929:*.c,*.h:Owner:emacs,vi,ed:write
  – Program make can read Makefile
  – Owner can change Makefile using makedepend
  – Owner, group member can create .o, .out files using gcc and ld
  – Owner can modify .c, .h files using named editors up to Sep. 29, 2001
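A hypothetical checker for policy lines of this shape is sketched below. The field order (action:time:files:users:programs:operations) is inferred from the examples above, and time-window matching is omitted for brevity; the real LAFS language is richer than this.

```python
from fnmatch import fnmatch

def parse_rule(line):
    """Split a policy line into its six colon-separated fields (assumed order)."""
    action, when, files, users, progs, ops = line.split(":")
    return {"action": action, "when": when,
            "files": files.split(","), "users": users.split(","),
            "progs": progs.split(","), "ops": ops.split(",")}

def matches(rule, fname, user, prog, op):
    """Does this (file, user, program, operation) tuple fall under the rule?"""
    return (any(fnmatch(fname, pat) for pat in rule["files"])
            and (rule["users"] == ["*"] or user in rule["users"])
            and (rule["progs"] == ["*"] or prog in rule["progs"])
            and op in rule["ops"])

make_rule = parse_rule("allow:*:Makefile:*:make:read")
```

Under this sketch, `make` reading `Makefile` matches the rule, while any other program or operation does not, so the policy checker could flag the latter when it scans the %log file against %policy.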
Comparison
• Security policy controls access
  – Goal is to detect, report violations
  – Auditing mechanisms built in
• LAFS “stacked” onto NFS
  – If you access files not through LAFS, access not recorded
• NFS auditing at lower layer
  – So if you use NFS, accesses recorded
Comparison
• Users can specify policies in LAFS
  – Use %policy file
• NFS policy embedded, not easily changed
  – It would be set by site, not users
• Which is better?
  – Depends on goal; LAFS is more flexible but easier to evade. Use both together, perhaps?
Audit Browsing
• Goal of browser: present log information in a form easy to understand and use
• Several reasons to do this:
  – Audit mechanisms may miss problems that auditors will spot
  – Mechanisms may be unsophisticated or make invalid assumptions about log format or meaning
  – Logs usually not integrated; often different formats, syntax, etc.
Browsing Techniques
• Text display
  – Does not indicate relationships between events
• Hypertext display
  – Indicates local relationships between events
  – Does not indicate global relationships clearly
• Relational database browsing
  – DBMS performs correlations, so auditor need not know in advance what associations are of interest
  – Preprocessing required, and may limit the associations DBMS can make
More Browsing Techniques
• Replay
  – Shows events occurring in order; if multiple logs, intermingles entries
• Graphing
  – Nodes are entities, edges relationships
  – Often too cluttered to show everything, so graphing selects subsets of events
• Slicing
  – Show minimum set of log events affecting object
  – Focuses on local relationships, not global ones
Example: Visual Audit Browser

• Frame Visualizer
  – Generates graphical representation of logs
• Movie Maker
  – Generates sequence of graphs, each event creating a new graph suitably modified
• Hypertext Generator
  – Produces page per user, page per modified file, summary and index pages
• Focused Audit Browser
  – Enter node name; displays node, incident edges, and nodes at end of edges
Example Use
• File changed
  – Use focused audit browser
    • Changed file is initial focus
    • Edges show which processes have altered file
  – Focus on suspicious process
    • Iterate through nodes until method used to gain access to system determined
• Question: is masquerade occurring?
  – Auditor knows audit UID of attacker
Tracking Attacker
• Use hypertext generator to get all audit records with that UID
  – Now examine them for irregular activity
  – Frame visualizer may help here
  – Once found, work forward to reconstruct activity
• For non-technical people, use movie maker to show what happened
  – Helpful for law enforcement authorities especially!
Example: MieLog
• Computes counts of single words, word pairs
  – Auditor defines “threshold count”
  – MieLog colors data with counts higher than threshold
• Display uses graphics and text together
  – Tag appearance frequency area: colored based on frequency (e.g., red is rare)
  – Time information area: bar graph showing number of log entries in that period of time; click to get entries
  – Outline of message area: outline of log messages, colored to match tag appearance frequency area
  – Message in text area: displays log entry under study
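The word-frequency idea behind MieLog can be sketched in a few lines: count single words across all entries, then surface the entries containing words rarer than the auditor's threshold (the ones MieLog would color as unusual). This is a simplified illustration, not MieLog's actual algorithm or interface.

```python
from collections import Counter

def rare_entries(entries, threshold=1):
    """Return log entries containing at least one word whose total
    count across the log is at or below the threshold."""
    counts = Counter(word for e in entries for word in e.split())
    return [e for e in entries
            if any(counts[w] <= threshold for w in e.split())]

logs = [
    "sshd accepted login for alice",
    "sshd accepted login for alice",
    "sshd ROOT LOGIN REFUSED",
]
```

Run on the sample log, only the third entry survives: its words appear just once, while the routine entries share high-frequency words.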
Example Use
• Auditor notices unexpected gap in time information area
  – No log entries during that time!?!?
• Auditor focuses on log entries before, after gap
  – Wants to know why logging turned off, then turned back on
• Color of words in entries helps auditor find similar entries elsewhere and reconstruct patterns
Key Points
• Logging is collection and recording; audit is analysis
• Need to have clear goals when designing an audit system
• Auditing should be designed into system, not patched into system after it is implemented
• Browsing through logs helps auditors determine completeness of audit (and effectiveness of audit mechanisms!)
Chapter 22: Intrusion Detection
• Principles
• Basics
• Models of Intrusion Detection
• Architecture of an IDS
• Organization
• Incident Response
Principles of Intrusion Detection
• Characteristics of systems not under attack:
  – User, process actions conform to statistically predictable pattern
  – User, process actions do not include sequences of actions that subvert the security policy
  – Process actions correspond to a set of specifications describing what the processes are allowed to do
• Systems under attack do not meet at least one of these
Example
• Goal: insert a back door into a system
  – Intruder will modify system configuration file or program
  – Requires privilege; attacker enters system as an unprivileged user and must acquire privilege
    • Nonprivileged user may not normally acquire privilege (violates #1)
    • Attacker may break in using sequence of commands that violate security policy (violates #2)
    • Attacker may cause program to act in ways that violate program’s specification (violates #3)
Basic Intrusion Detection
• Attack tool is automated script designed to violate a security policy
• Example: rootkit
  – Includes password sniffer
  – Designed to hide itself using Trojaned versions of various programs (ps, ls, find, netstat, etc.)
  – Adds back doors (login, telnetd, etc.)
  – Has tools to clean up log entries (zapper, etc.)
Detection
• Rootkit configuration files cause ls, du, etc. to hide information
  – ls lists all files in a directory
    • Except those hidden by configuration file
  – dirdump (local program to list directory entries) lists them too
    • Run both and compare counts
    • If they differ, ls is doctored
• Other approaches possible
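The count-comparison check can be sketched by listing a directory through the (possibly trojaned) ls binary and independently through the kernel interface, then comparing. The helper names are illustrative; a real check would also guard against the comparison tool itself being trojaned.

```python
import os
import subprocess

def ls_count(path):
    """Count entries as the ls binary reports them (including . and ..)."""
    out = subprocess.run(["ls", "-a1", path],
                         capture_output=True, text=True)
    return len(out.stdout.splitlines())

def direct_count(path):
    """Count entries via the kernel interface, bypassing ls."""
    return len(os.listdir(path)) + 2   # os.listdir omits "." and ".."

def ls_is_doctored(path):
    """Differing counts suggest ls is hiding entries."""
    return ls_count(path) != direct_count(path)
```

On a clean system the two counts agree; a rootkit's ls that hides its configuration-listed files would make `ls_is_doctored` return True, the anomaly the slide describes.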
Key Point
• Rootkit does not alter kernel or file structures to conceal files, processes, and network connections
  – It alters the programs or system calls that interpret those structures
  – Find some entry point for interpretation that rootkit did not alter
  – The inconsistency is an anomaly (violates #1)
Denning’s Model
• Hypothesis: exploiting vulnerabilities requires abnormal use of normal commands or instructions
  – Includes deviation from usual actions
  – Includes execution of actions leading to break-ins
  – Includes actions inconsistent with specifications of privileged programs
Goals of IDS
• Detect wide variety of intrusions
  – Previously known and unknown attacks
  – Suggests need to learn/adapt to new attacks or changes in behavior
• Detect intrusions in timely fashion
  – May need to be real-time, especially when system responds to intrusion
    • Problem: analyzing commands may impact response time of system
  – May suffice to report intrusion occurred a few minutes or hours ago
Goals of IDS
• Present analysis in simple, easy-to-understand format
  – Ideally a binary indicator
  – Usually more complex, allowing analyst to examine suspected attack
  – User interface critical, especially when monitoring many systems
• Be accurate
  – Minimize false positives, false negatives
  – Minimize time spent verifying attacks, looking for them
Models of Intrusion Detection
• Anomaly detection
  – What is usual, is known
  – What is unusual, is bad
• Misuse detection
  – What is bad, is known
  – What is not bad, is good
• Specification-based detection
  – What is good, is known
  – What is not good, is bad
Anomaly Detection
• Analyzes a set of characteristics of system, and compares their values with expected values; reports when computed statistics do not match expected statistics
  – Threshold metrics
  – Statistical moments
  – Markov model
Threshold Metrics
• Counts number of events that occur
  – Between m and n events (inclusive) expected to occur
  – If number falls outside this range, anomalous
• Example
  – Windows: lock user out after k failed sequential login attempts. Range is (0, k–1).
    • k or more failed logins deemed anomalous
Difficulties
• Appropriate threshold may depend on non-obvious factors
  – Typing skill of users
  – If keyboards are US keyboards, and most users are French, typing errors very common
    • Dvorak vs. non-Dvorak within the US
Statistical Moments
• Analyzer computes standard deviation (first two moments), other measures of correlation (higher moments)
  – If measured values fall outside expected interval for particular moments, anomalous
• Potential problem
  – Profile may evolve over time; solution is to weigh data appropriately or alter rules to take changes into account
Example: IDES
• Developed at SRI International to test Denning’s model
  – Represent users, login session, other entities as ordered sequence of statistics <q0,j, …, qn,j>
  – qi,j (statistic i for day j) is count or time interval
  – Weighting favors recent behavior over past behavior
    • Ak,j is sum of counts making up metric of kth statistic on jth day
    • qk,l+1 = Ak,l+1 – Ak,l + 2^(–rt) qk,l, where t is number of log entries/total time since start, and r is a factor determined through experience
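The update rule above is an exponentially decayed statistic: each step adds the new activity and halves (scaled by r·t) the influence of the old value. A direct sketch:

```python
def update_statistic(q_prev, a_new, a_prev, r, t):
    """One IDES-style update:
        q_{k,l+1} = A_{k,l+1} - A_{k,l} + 2^(-r*t) * q_{k,l}
    r and t are as in the slide (experience-derived factor; log
    entries per unit time since start)."""
    return (a_new - a_prev) + 2.0 ** (-r * t) * q_prev
```

For instance, with r = t = 1 the old statistic is halved each step: starting from q = 8.0 and new activity A rising from 7.0 to 10.0, the update yields 3.0 + 0.5 · 8.0 = 7.0, so older behavior fades while recent behavior dominates.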
Potential Problems
• Assumes behavior of processes and users can be modeled statistically
  – Ideal: matches a known distribution such as Gaussian or normal
  – Otherwise, must use techniques like clustering to determine moments, characteristics that show anomalies, etc.
• Real-time computation a problem too
Markov Model
• Past state affects current transition
• Anomalies based upon sequences of events, and not on occurrence of single event
• Problem: need to train system to establish valid sequences
  – Use known training data that is not anomalous
  – The more training data, the better the model
  – Training data should cover all possible normal uses of system
Example: TIM
• Time-based Inductive Learning
• Sequence of events is abcdedeabcabc
• TIM derives following rules:
  R1: ab→c (1.0)  R2: c→d (0.5)  R3: c→e (0.5)
  R4: d→e (1.0)  R5: e→a (0.5)  R6: e→d (0.5)
• Seen: abd; triggers alert
  – c always follows ab in rule set
• Seen: acf; no alert as multiple events can follow c
  – May add rule R7: c→f (0.33); adjust R2, R3
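The rule-induction step can be sketched by counting which events follow which. For brevity the sketch learns only single-event antecedents (e→a); the real TIM also induces multi-event rules such as ab→c, so treat this as an illustration of the idea rather than TIM itself.

```python
from collections import Counter, defaultdict

def induce_rules(seq):
    """Estimate P(next | previous) from an event sequence."""
    succ = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        succ[prev][nxt] += 1
    return {p: {n: c / sum(cs.values()) for n, c in cs.items()}
            for p, cs in succ.items()}

rules = induce_rules("abcdedeabcabc")

def alert(prev, nxt):
    """Flag an event never seen to follow its predecessor."""
    return nxt not in rules.get(prev, {})
```

On the sample sequence this recovers rules matching the slide, e.g. d→e (1.0) and e→a (0.5); seeing "bd" raises an alert because only c has ever followed b, just as "abd" triggers an alert in the slide.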
Misuse Modeling
• Determines whether a sequence of instructions being executed is known to violate the site security policy
  – Descriptions of known or potential exploits grouped into rule sets
  – IDS matches data against rule sets; on success, potential attack found
• Cannot detect attacks unknown to developers of rule sets
  – No rules to cover them
Example: NFR
• Built to make adding new rules easy
• Architecture:
  – Packet sucker: reads packets from network
  – Decision engine: uses filters to extract information
  – Backend: writes data generated by filters to disk
    • Query backend allows administrators to extract raw, postprocessed data from this file
    • Query backend is separate from NFR process
N-Code Language
• Filters written in this language
• Example: ignore all traffic not intended for 2 web servers:

# list of my web servers
my_web_servers = [ 10.237.100.189 10.237.55.93 ] ;

# we assume all HTTP traffic is on port 80
filter watch tcp( client, dport:80 )
{
    if (ip.dest != my_web_servers)
        return;
    # now process the packet; we just write out packet info
    record system.time, ip.src, ip.dest to www_list;
}
www_list = recorder("log")
Specification Modeling
• Determines whether execution of sequence of instructions violates specification
• Only need to check programs that alter protection state of system
System Traces
• Notion of subtrace (subsequence of a trace) allows you to handle threads of a process, processes of a system
• Notion of merge of traces U, V: traces U and V merged into single trace
• Filter p maps trace T to subtrace T′ such that, for all events ti ∈ T′, p(ti) is true
Example: Apply to rdist
• Ko, Levitt, Ruschitzka defined PE-grammar to describe accepted behavior of program
• rdist creates temp file, copies contents into it, changes protection mask, owner of it, copies it into place
  – Attack: during copy, delete temp file and place symbolic link with same name as temp file
  – rdist changes mode, ownership to that of program
Relevant Parts of Spec
• Specification of chmod, chown says that they can only alter attributes of files that rdist creates
• chown, chmod of symlink violates this rule as M.newownerid ≠ U (owner of file symlink points to is not owner of file rdist is distributing)
Comparison and Contrast
• Misuse detection: if all policy rules known, easy to construct rulesets to detect violations
  – Usual case is that much of policy is unspecified, so rulesets describe attacks, and are not complete
• Anomaly detection: detects unusual events, but these are not necessarily security problems
• Specification-based vs. misuse: spec assumes if specifications followed, policy not violated; misuse assumes if policy as embodied in rulesets followed, policy not violated
IDS Architecture
• Basically, a sophisticated audit system
  – Agent like logger; it gathers data for analysis
  – Director like analyzer; it analyzes data obtained from the agents according to its internal rules
  – Notifier obtains results from director, and takes some action
    • May simply notify security officer
    • May reconfigure agents, director to alter collection, analysis methods
    • May activate response mechanism
Agents
• Obtains information and sends to director
• May put information into another form
  – Preprocessing of records to extract relevant parts
• May delete unneeded information
• Director may request agent send other information
Example
• IDS uses failed login attempts in its analysis
• Agent scans login log every 5 minutes, sends director for each new login attempt:
  – Time of failed login
  – Account name and entered password
• Director requests all records of login (failed or not) for particular user
  – Suspecting a brute-force cracking attempt
Host-Based Agent
• Obtain information from logs
  – May use many logs as sources
  – May be security-related or not
  – May be virtual logs if agent is part of the kernel
    • Very non-portable
• Agent generates its information
  – Scans information needed by IDS, turns it into equivalent of log record
  – Typically, check policy; may be very complex
Network-Based Agents
• Detects network-oriented attacks
  – Denial of service attack introduced by flooding a network
• Monitor traffic for a large number of hosts
• Examine the contents of the traffic itself
• Agent must have same view of traffic as destination
  – TTL tricks, fragmentation may obscure this
• End-to-end encryption defeats content monitoring
  – Not traffic analysis, though
Network Issues
• Network architecture dictates agent placement
  – Ethernet or broadcast medium: one agent per subnet
  – Point-to-point medium: one agent per connection, or agent at distribution/routing point
• Focus is usually on intruders entering network
  – If few entry points, place network agents behind them
  – Does not help if inside attacks to be monitored
Aggregation of Information
• Agents produce information at multiple layers of abstraction
  – Application-monitoring agents provide one view (usually one line) of an event
  – System-monitoring agents provide a different view (usually many lines) of an event
  – Network-monitoring agents provide yet another view (involving many network packets) of an event
Director
• Reduces information from agents
  – Eliminates unnecessary, redundant records
• Analyzes remaining information to determine if attack under way
  – Analysis engine can use a number of techniques, discussed before, to do this
• Usually run on separate system
  – Does not impact performance of monitored systems
  – Rules, profiles not available to ordinary users
Example
• Jane logs in to perform system maintenance during the day
• She logs in at night to write reports
• One night she begins recompiling the kernel
• Agent #1 reports logins and logouts
• Agent #2 reports commands executed
  – Neither agent spots discrepancy
  – Director correlates logs, spots it at once
Adaptive Directors
• Modify profiles, rule sets to adapt their analysis to changes in system
  – Usually use machine learning or planning to determine how to do this
• Example: use neural nets to analyze logs
  – Network adapted to users’ behavior over time
  – Used learning techniques to improve classification of events as anomalous
    • Reduced number of false alarms
Notifier
• Accepts information from director
• Takes appropriate action
  – Notify system security officer
  – Respond to attack
• Often GUIs
  – Well-designed ones use visualization to convey information
GrIDS GUI
[Graph figure omitted: GrIDS display with nodes A–E]
• GrIDS interface showing the progress of a worm as it spreads through network
• Left is early in spread
• Right is later on
Other Examples
• Courtney detected SATAN attacks– Added notification to system log– Could be configured to send email or paging
message to system administrator• IDIP protocol coordinates IDSes to respond
to attack– If an IDS detects attack over a network, notifies
other IDSes on co-operative firewalls; they can then reject messages from the source
Organization of an IDS
• Monitoring network traffic for intrusions
  – NSM system
• Combining host and network monitoring
  – DIDS
• Making the agents autonomous
  – AAFID system
Monitoring Networks: NSM
• Develops profile of expected usage of network, compares current usage
• Has 3-D matrix for data
  – Axes are source, destination, service
  – Each connection has unique connection ID
  – Contents are number of packets sent over that connection for a period of time, and sum of data
  – NSM generates expected connection data
  – Expected data masks data in matrix, and anything left over is reported as an anomaly
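The masking idea can be sketched with a counter keyed by (source, destination, service): traffic accumulates in cells, and any cell the expected-connection profile does not mask is reported. The names and the set-based profile are simplifying assumptions, not NSM's actual data structures.

```python
from collections import Counter

matrix = Counter()

def record_packet(src, dst, service, nbytes):
    """Accumulate traffic in the (source, destination, service) cell."""
    matrix[(src, dst, service)] += nbytes

def anomalies(expected_cells):
    """Cells present in the observed matrix but absent from the profile."""
    return {cell: n for cell, n in matrix.items()
            if cell not in expected_cells}

record_packet("S1", "D1", "smtp", 512)
record_packet("S1", "D2", "telnet", 64)
expected = {("S1", "D1", "smtp")}
```

Here the SMTP connection is masked by the profile, so only the unexpected telnet cell is left over and reported.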
Problem
• Too much data!
  – Solution: arrange data hierarchically into groups
    • Construct by folding axes of matrix
  – Analyst could expand any group flagged as anomalous
[Figure omitted: axis S1 folded into groups (S1, D1) and (S1, D2), with leaves such as (S1, D1, SMTP), (S1, D1, FTP), (S1, D2, SMTP), (S1, D2, FTP)]
Signatures
• Analyst can write rule to look for specific occurrences in matrix
  – Repeated telnet connections lasting only as long as set-up indicates failed login attempt
• Analyst can write rules to match against network traffic
  – Used to look for excessive logins, attempt to communicate with non-existent host, single host communicating with 15 or more hosts
Other
• Graphical interface independent of the NSM matrix analyzer
• Detected many attacks
  – But false positives too
• Still in use in some places
  – Signatures have changed, of course
• Also demonstrated intrusion detection on network is feasible
  – Did no content analysis, so would work even with encrypted connections
Combining Sources: DIDS
• Neither network-based nor host-based monitoring sufficient to detect some attacks
  – Attacker tries to telnet into system several times using different account names: network-based IDS detects this, but not host-based monitor
  – Attacker tries to log into system using an account without password: host-based IDS detects this, but not network-based monitor
• DIDS uses agents on hosts being monitored, and a network monitor
  – DIDS director uses expert system to analyze data
Attackers Moving in Network
• Intruder breaks into system A as alice• Intruder goes from A to system B, and breaks into
B’s account bob• Host-based mechanisms cannot correlate these• DIDS director could see bob logged in over alice’s
connection; expert system infers they are the same user– Assigns network identification number NID to this user
Handling Distributed Data
• Agent analyzes logs to extract entries of interest
  – Agent uses signatures to look for attacks
• Summaries sent to director
  – Other events forwarded directly to director
• DIDS model has agents report:
  – Events (information in log entries)
  – Action, domain
Actions and Domains
• Subjects perform actions
  – session_start, session_end, read, write, execute, terminate, create, delete, move, change_rights, change_user_id
• Domains characterize objects
  – tagged, authentication, audit, network, system, sys_info, user_info, utility, owned, not_owned
  – Objects put into highest domain to which they belong
    • Tagged, authenticated file is in domain tagged
    • Unowned network object is in domain network
More on Agent Actions
• Entities can be subjects in one view, objects in another
  – Process: subject when it changes protection mode of object, object when process is terminated
• Table determines which events sent to DIDS director
  – Based on actions, domains associated with event
  – All NIDS events sent over so director can track view of system
    • Action is session_start or execute; domain is network
Layers of Expert System Model
1. Log records
2. Events (relevant information from log entries)
3. Subject capturing all events associated with a user; NID assigned to this subject
4. Contextual information such as time, proximity to other events
– Sequence of commands showing who is using the system
– Series of failed logins following one another
Top Layers
5. Network threats (combination of events in context)
– Abuse (change to protection state)
– Misuse (violates policy, does not change state)
– Suspicious act (does not violate policy, but of interest)
6. Score (represents security state of network)
– Derived from previous layer and from scores associated with rules
• Analyst can adjust these scores as needed
– A convenience for the user
Autonomous Agents: AAFID
• Distribute director among agents
• Autonomous agent is a process that can act independently of the system of which it is part
• Autonomous agent performs one particular monitoring function
– Has its own internal model
– Communicates with other agents
– Agents jointly decide if these constitute a reportable intrusion
Advantages
• No single point of failure
– All agents can act as director
– In effect, director distributed over all agents
• Compromise of one agent does not affect others
• Agent monitors one resource
– Small and simple
• Agents can migrate if needed
• Approach appears to be scalable to large networks
Disadvantages
• Communications overhead higher, more scattered than for single director
– Securing these can be very hard and expensive
• As agent monitors one resource, need many agents to monitor multiple resources
• Distributed computation involved in detecting intrusions
– This computation also must be secured
Example: AAFID
• Host has set of agents and a transceiver
– Transceiver controls agent execution, collates information, forwards it to monitor (on local or remote system)
• Filters provide access to monitored resources
– Use this approach to avoid duplication of work and system dependence
– Agents subscribe to filters by specifying records needed
– Multiple agents may subscribe to single filter
Transceivers and Monitors
• Transceivers collect data from agents
– Forward it to other agents or monitors
– Can terminate, start agents on local system
• Example: system begins to accept TCP connections, so transceiver turns on agent to monitor SMTP
• Monitors accept data from transceivers
– Can communicate with transceivers, other monitors
• Send commands to transceiver
– Perform high-level correlation for multiple hosts
– If multiple monitors interact with a transceiver, AAFID must ensure the transceiver receives consistent commands
Other
• User interface interacts with monitors
– Could be graphical or textual
• Prototype implemented in Perl for Linux and Solaris
– Proof of concept
– Performance loss acceptable
Incident Prevention
• Identify attack before it completes
• Prevent it from completing
• Jails useful for this
– Attacker placed in a confined environment that looks like a full, unrestricted environment
– Attacker may download files, but gets bogus ones
– Can imitate a slow system, or an unreliable one
– Useful to figure out what attacker wants
– MLS systems provide natural jails
Intrusion Handling
• Restoring system to satisfy site security policy
• Six phases
– Preparation for attack (before attack detected)
– Identification of attack
– Containment of attack (confinement)
– Eradication of attack (stop attack)
– Recovery from attack (restore system to secure state)
– Follow-up to attack (analysis and other actions)
• Discussed in what follows
Containment Phase
• Goal: limit access of attacker to system resources
• Two methods
– Passive monitoring
– Constraining access
Passive Monitoring
• Records attacker's actions; does not interfere with attack
– Idea is to find out what the attacker is after and/or methods the attacker is using
• Problem: attacked system is vulnerable throughout
– Attacker can also attack other systems
• Example: type of operating system can be derived from settings in the TCP and IP packets of incoming connections
– Analyst draws conclusions about source of attack
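The operating-system example can be illustrated with a toy passive fingerprinter keyed on the initial TTL and TCP window size of incoming packets. The signature table here is an invented assumption; real tools such as p0f use far richer signatures.

```python
# Illustrative passive OS fingerprinting: guess an OS family from the
# TTL and TCP window size observed on an incoming SYN. Signatures are
# toy values for illustration only.

SIGNATURES = {
    (64, 5840):   "Linux 2.4/2.6",
    (128, 65535): "Windows XP",
    (255, 4128):  "Cisco IOS",
}

def guess_os(ttl, window):
    # Round the observed TTL up to the nearest common initial value,
    # since routers decrement it at each hop.
    for initial in (64, 128, 255):
        if ttl <= initial:
            return SIGNATURES.get((initial, window), "unknown")
    return "unknown"

assert guess_os(57, 5840) == "Linux 2.4/2.6"   # e.g. 7 hops from a Linux host
assert guess_os(120, 65535) == "Windows XP"
```

Because the analyst only reads fields the attacker's own packets carry, nothing here interferes with the attack, which is the point of passive monitoring.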
Constraining Actions
• Reduce protection domain of attacker
• Problem: if defenders do not know what the attacker is after, the reduced protection domain may still contain what the attacker is after
– Stoll created a document that the attacker downloaded
– Download took several hours, during which the phone call was traced to Germany
Deception
• Deception Tool Kit
– Creates false network interface
– Can present any network configuration to attackers
– When probed, can return wide range of vulnerabilities
– Attacker wastes time attacking non-existent systems while analyst collects and analyzes attacks to determine goals and abilities of attacker
– Experiments show deception is an effective response to keep attackers from targeting real systems
Eradication Phase
• Usual approach: deny or remove access to system, or terminate processes involved in attack
• Use wrappers to implement access control
– Example: wrap system calls
• On invocation, wrapper takes control of process
• Wrapper can log call, deny access, do intrusion detection
• Experiments focusing on intrusion detection used multiple wrappers to terminate suspicious processes
– Example: network connections
• Wrappers around servers log, and do access control on, incoming connections, and control access to web-based databases
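The wrapper idea can be sketched at user level with a Python decorator standing in for a system-call wrapper: on invocation it takes control, logs the call, and can deny access. All names here are invented for illustration.

```python
# User-level analogue of the call wrappers described above. The wrapper
# intercepts each invocation, appends an audit record, enforces a simple
# deny list, and only then forwards the call to the wrapped function.

import functools

audit_log = []

def wrap(denied_args=()):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            audit_log.append((fn.__name__, args))      # log the call
            if args and args[0] in denied_args:        # access control
                raise PermissionError(f"{fn.__name__}{args} denied")
            return fn(*args)                           # forward the call
        return wrapper
    return decorator

@wrap(denied_args=("/etc/shadow",))
def open_file(path):
    return f"contents of {path}"

open_file("/tmp/notes")                # allowed, and logged
try:
    open_file("/etc/shadow")           # logged, then denied
except PermissionError:
    pass
assert audit_log[0] == ("open_file", ("/tmp/notes",))
```

Real system-call wrappers sit in the kernel or a loadable module rather than in the application, but the control flow — intercept, log, decide, forward — is the same.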
Firewalls
• Mediate access to organization's network
– Also mediate access out to the Internet
• Example: Java applets filtered at firewall
– Use proxy server to rewrite them
• Change "<applet>" to something else
– Discard incoming web files with hex sequence CA FE BA BE
• All Java class files begin with this
– Block all files with names ending in ".class" or ".zip"
• Lots of false positives
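The magic-number check is easy to sketch: every Java class file begins with the four bytes 0xCA 0xFE 0xBA 0xBE, so a content-based filter catches renamed class files that a name-based check would miss.

```python
# Content check for the Java class-file magic number. Unlike a filename
# check (".class", ".zip"), this inspects the bytes themselves, so a
# renamed class file is still caught.

JAVA_MAGIC = bytes.fromhex("CAFEBABE")

def looks_like_class_file(data: bytes) -> bool:
    return data[:4] == JAVA_MAGIC

assert looks_like_class_file(bytes.fromhex("CAFEBABE00000034"))
assert not looks_like_class_file(b"GIF89a")
```

The converse problem remains: any non-Java file that happens to start with those four bytes is also discarded, which is one source of the false positives noted above.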
Intrusion Detection and Isolation Protocol
• Coordinates response to attacks
• Boundary controller is a system that can block a connection from entering the perimeter
– Typically firewalls or routers
• Neighbor is a system directly connected
• IDIP domain is a set of systems that can send messages to one another without the messages passing through a boundary controller
Protocol
• IDIP protocol engine monitors connections passing through members of IDIP domains
– If intrusion observed, engine reports it to neighbors
– Neighbors propagate information about attack
– Trace connection, datagrams to boundary controllers
– Boundary controllers coordinate responses
• Usually, block attack, notify other controllers to block relevant communications
Example
• C, D, W, X, Y, Z are boundary controllers
• f launches a flooding attack on A
• Note: after X suppresses traffic intended for A, W begins accepting it, and A, b, a, and W can freely communicate again
[Figure: network diagram with boundary controllers C, D, W, X, Y, Z; hosts a, b, e, f; and target A]
Follow-Up Phase
• Take action external to system against attacker
– Thumbprinting: traceback at the connection level
– IP header marking: traceback at the packet level
– Counterattacking
Counterattacking
• Use legal procedures
– Collect chain of evidence so legal authorities can establish attack was real
– Check with lawyers for this
• Rules of evidence very specific and detailed
• If you don't follow them, expect case to be dropped
• Technical attack
– Goal is to damage attacker seriously enough to stop current attack and deter future attacks
Consequences
1. May harm innocent party
• Attacker may have broken into source of attack or may be impersonating innocent party
2. May have side effects
• If counterattack is flooding, may block legitimate use of network
3. Antithetical to shared use of network
• Counterattack absorbs network resources and makes threats more immediate
4. May be legally actionable
Example: Counterworm
• Counterworm given signature of real worm
– Counterworm spreads rapidly, deleting all occurrences of original worm
• Some issues
– How can counterworm be set up to delete only targeted worm?
– What if infected system is gathering worms for research?
– How do originators of counterworm know it will not cause problems for any system?
• And are they legally liable if it does?
Key Points
• Intrusion detection is a form of auditing
• Anomaly detection looks for unexpected events
• Misuse detection looks for what is known to be bad
• Specification-based detection looks for what is known not to be good
• Intrusion response requires careful thought and planning
Chapter 23: Network Security
• Introduction to the Drib
• Policy Development
• Network Organization
• Availability
• Anticipating Attacks
Introduction
• Goal: apply concepts, principles, mechanisms discussed earlier to a particular situation
– Focus here is on securing the network
– Begin with description of the company
– Proceed to define policy
– Show how policy drives organization
The Drib
• Builds and sells dribbles
• Developing network infrastructure allowing it to connect to the Internet, to provide mail and a web presence for consumers, suppliers, and other partners
Specific Problems
• Internet presence required
– E-commerce, suppliers, partners
– Drib developers need access
– External users cannot access development sites
• Hostile takeover by competitor in progress
– Lawyers, corporate officers need access to development data
– Developers cannot have access to some corporate data
Goals of Security Policy
• Data related to company plans to be kept secret
– Corporate data, such as what new products are being developed, is known on a need-to-know basis only
• When a customer supplies data to buy a dribble, only the people who fill the order can access that information
– Company analysts may obtain statistics for planning
• Lawyers, company officials must approve release of any sensitive data
Policy Development
• Policy: minimize threat of data being leaked to unauthorized entities
• Environment: three internal organizations
– Customer Service Group (CSG)
• Maintains customer data
• Interface between clients and other internal organizations
– Development Group (DG)
• Develops, modifies, maintains products
• Relies on CSG for customer feedback
– Corporate Group (CG)
• Handles patents, lawsuits, etc.
Nature of Information Flow
• Public
– Specs of current products, marketing literature
• CG, DG share info for planning purposes
– Problems, patent applications, budgets, etc.
• Private
– CSG: customer info like credit card numbers
– CG: corporate info protected by attorney privilege
– DG: plans, prototypes for new products, to determine if production is feasible before proposing them to CG
Data Classes
• Public data (PD): available to all
• Development data for existing products (DDEP): available to CG, DG only
• Development data for future products (DDFP): available to DG only
• Corporate data (CpD): available to CG only
• Customer data (CuD): available to CSG only
Data Class Changes
• DDFP → DDEP: as products are implemented
• DDEP → PD: when deemed advantageous to publicize some development details
– For marketing purposes, for example
• CpD → PD: as privileged info becomes public through mergers, lawsuit filings, etc.
• Note: no provision for revealing CuD directly
– This protects privacy of Drib's customers
User Classes
• Outsiders (O): members of the public
– Access to public data
– Can also order, download drivers, send email to company
• Developers (D): access to DDEP, DDFP
– Cannot alter development data for existing products
• Corporate executives (C): access to CpD
– Can read DDEP, DDFP, CuD but not alter them
– Sometimes can make sensitive data public
• Employees (E): access to CuD only
Access Control Matrix for Policy
         O     D      C      E
PD       r     r      r      r
DDEP     —     r      r      —
DDFP     —     r, w   r      —
CuD      w     —      r      r, w
CpD      —     —      r, w   —
r is read right, w is write right
Type of Policy
• Mandatory policy
– Members of O, D, C, E cannot change permissions to allow members of another user class to access data
• Discretionary component
– Within each class, individuals may have control over access to files they own
– View this as an issue internal to each group and not of concern at corporate policy level
• At corporate level, discretionary component is "allow always"
Reclassification of Data
• Who must agree for each?
– C, D must agree for DDFP → DDEP
– C, E must agree for DDEP → PD
– C can do CpD → PD
• But two members of C must agree to this
• Separation of privilege met
– At least two different people must agree to the reclassification
– When appropriate, the two must come from different user classes
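The separation-of-privilege rules above can be sketched as an approval check. The rule encoding and function name are assumptions made for illustration.

```python
# Sketch of the reclassification approval check. Each rule names how many
# distinct approvers from each user class must agree before data moves.

RULES = {
    ("DDFP", "DDEP"): {"C": 1, "D": 1},   # a C and a D must agree
    ("DDEP", "PD"):   {"C": 1, "E": 1},   # a C and an E must agree
    ("CpD",  "PD"):   {"C": 2},           # two different members of C
}

def may_reclassify(src, dst, approvers):
    """approvers: list of (person, user_class) pairs."""
    need = RULES.get((src, dst))
    # Reject unknown transitions and duplicate approvers outright.
    if need is None or len({p for p, _ in approvers}) != len(approvers):
        return False
    for cls, count in need.items():
        if sum(1 for _, c in approvers if c == cls) < count:
            return False
    return True

assert may_reclassify("DDFP", "DDEP", [("carol", "C"), ("dave", "D")])
assert not may_reclassify("CpD", "PD", [("carol", "C")])   # needs two Cs
```

Because every rule demands at least two distinct people, no single individual can move data between classes, which is exactly the separation-of-privilege property claimed above.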
Availability
• Drib is a world-wide multinational corporation
– Does business on all continents
• Imperative that anyone be able to contact the Drib at any time
– Drib places very high emphasis on customer service
– Requirement: Drib's systems be available 99% of the time
• 1% allowed for planned maintenance, unexpected downtime
Consistency Check: Goal 1
• Goal 1: keep sensitive info confidential
– Developers
• Need to read DDEP, DDFP, and to alter DDFP
• No need to access CpD, CuD, as they don't deal with customers or decide which products to market
– Corporate executives
• Need to read, alter CpD, and read DDEP
• This matches the access permissions
Consistency Check: Goal 2
• Goal 2: only employees who handle purchases can access customer data, and only they and the customer can alter it
– Outsiders
• Need to alter CuD, do not need to read it
– Customer support
• Need to read, alter CuD
• This matches the access permissions
Consistency Check: Goal 3
• Goal 3: releasing sensitive info requires corporate approval
– Corporate executives
• Must approve any reclassification
• No one can write to PD, except through reclassification
• This matches the reclassification constraints
Consistency Check: Transitive Closure
         O     D      C      E
PD       r     r      r      r
DDEP     —     r      r      —
DDFP     —     r, w   r      —
CuD      w     —      r      r, w
CpD      —     w      r, w   w
r is read right, w is write right
Interpretation
• From transitive closure:
– Only way for data to flow into PD is by reclassification
– Key point of trust: members of C
– By rules for moving data out of DDEP, DDFP, someone other than a member of C must also approve
• Satisfies separation of privilege
• Conclusion: policy is consistent
Network Organization
• Partition network into several subnets
– Guards between them prevent leaks
[Figure: Internet connects to the outer firewall, which fronts the DMZ (web, mail, and DNS servers); the inner firewall separates the DMZ from the internal network, which contains the corporate data, customer data, and development subnets plus internal mail and DNS servers]
DMZ
• Portion of network separating purely internal network from external network
– Allows control of accesses to some trusted systems inside the corporate perimeter
– If DMZ systems breached, internal systems still safe
– Can perform different types of checks at the internal/DMZ boundary and the DMZ/Internet boundary
Firewalls
• Host that mediates access to a network
– Allows, disallows accesses based on configuration and type of access
• Example: block Back Orifice
– BO allows external users to control systems
• Requires commands to be sent to a particular port (say, 25345)
– Firewall can block all traffic to or from that port
• So even if BO is installed, outsiders can't use it
Filtering Firewalls
• Access control based on attributes of packets and packet headers
– Such as destination address, port numbers, options, etc.
– Also called a packet filtering firewall
– Does not control access based on content
– Examples: routers, other infrastructure systems
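A minimal sketch of a packet filter, assuming a simple rule format: rules match only on header attributes (here source, destination, and destination port), never on content, and the unmatched default is to deny (fail-safe defaults).

```python
# Toy packet-filtering firewall: first matching rule wins; a rule field
# left unspecified (absent) matches anything. Rule format is an
# illustrative assumption, not any particular product's syntax.

def match(rule, pkt):
    return all(rule.get(k) in (None, pkt[k])
               for k in ("src", "dst", "dport"))

def filter_packet(rules, pkt, default="deny"):
    for rule in rules:
        if match(rule, pkt):
            return rule["action"]
    return default                     # fail-safe default

rules = [
    {"dport": 25345, "action": "deny"},                   # e.g. Back Orifice port
    {"dst": "10.0.0.5", "dport": 25, "action": "allow"},  # SMTP to mail host
]
assert filter_packet(rules, {"src": "1.2.3.4", "dst": "10.0.0.5", "dport": 25}) == "allow"
assert filter_packet(rules, {"src": "1.2.3.4", "dst": "10.0.0.5", "dport": 80}) == "deny"
```

Note that nothing here inspects payload bytes; content-based decisions are exactly what distinguishes the proxy firewalls described next.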
Proxy
• Intermediate agent or server acting on behalf of an endpoint without allowing a direct connection between the two endpoints
– So each endpoint talks to the proxy, thinking it is talking to the other endpoint
– Proxy decides whether to forward messages, and whether to alter them
Proxy Firewall
• Access control done with proxies
– Usually bases access control on content as well as source, destination addresses, etc.
– Also called an applications level or application level firewall
– Example: virus checking in electronic mail
• Incoming mail goes to proxy firewall
• Proxy firewall receives mail, scans it
• If no virus, mail forwarded to destination
• If virus, mail rejected or disinfected before forwarding
Views of a Firewall
• Access control mechanism
– Determines which traffic goes into, out of network
• Audit mechanism
– Analyzes packets that enter
– Takes action based upon the analysis
• Leads to traffic shaping, intrusion response, etc.
Analysis of Drib Network
• Security policy: "public" entities are on the outside but may need to access corporate resources
– Those resources provided in DMZ
• No internal system communicates directly with systems on Internet
– Restricts flow of data to "public"
– For data to flow out, must pass through DMZ
• Firewalls, DMZ are "pump"
Implementation
• Conceal all internal addresses
– Make them all on the private 10., 172.16.–172.31., or 192.168. subnets
• Inner firewall uses NAT to map addresses to firewall's address
– Or give each host a non-private IP address
• Inner firewall never allows those addresses to leave internal network
• Easy, as all services are proxied by outer firewall
– Email is a bit tricky …
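The concealment rule can be sketched with the standard Python ipaddress module. The firewall address is an illustrative documentation address, and `is_private` covers the RFC 1918 blocks plus some other reserved ranges.

```python
# NAT-style rewriting sketch: any non-public source address is replaced
# by the firewall's own address before a packet may leave the internal
# network, so internal numbering never leaks.

import ipaddress

FIREWALL_ADDR = "203.0.113.1"   # illustrative public address of the firewall

def rewrite_source(addr):
    """is_private is true for 10/8, 172.16/12, 192.168/16 (RFC 1918),
    among other reserved ranges."""
    if ipaddress.ip_address(addr).is_private:
        return FIREWALL_ADDR
    return addr

assert rewrite_source("10.1.2.3") == FIREWALL_ADDR
assert rewrite_source("172.16.0.9") == FIREWALL_ADDR
assert rewrite_source("8.8.8.8") == "8.8.8.8"
```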
• Problem: DMZ mail server must know an internal address in order to send mail to an internal destination
– Could simply be a distinguished address that causes the inner firewall to forward mail to the internal mail server
• Internal mail server needs to know the DMZ mail server's address
– Same comment applies
DMZ Web Server
• In DMZ so external customers can access it without going onto internal network
– If data needs to be sent to internal network (such as for an order), transmission is made separately and not as part of the transaction
Application of Principles
• Least privilege
– Containment of internal addresses
• Complete mediation
– Inner firewall mediates every access to DMZ
• Separation of privilege
– Going to Internet must pass through inner and outer firewalls and DMZ servers
Application of Principles
• Least common mechanism
– Inner, outer firewalls distinct; DMZ servers separate from inner servers
– DMZ DNS violates this principle
• If it fails, multiple systems affected
• Inner, outer firewall addresses fixed, so they do not depend on DMZ DNS
Outer Firewall Configuration
• Goals: restrict public access to corporate network; restrict corporate access to Internet
• Required: public needs to send, receive email; access web services
– So outer firewall allows SMTP, HTTP, HTTPS
– Outer firewall uses its address for those of the mail and web servers
Details
• Proxy firewall
• SMTP: mail assembled on firewall
– Scanned for malicious logic; dropped if found
– Otherwise forwarded to DMZ mail server
• HTTP, HTTPS: messages checked
– Checked for suspicious components like very long lines; dropped if found
– Otherwise, forwarded to DMZ web server
• Note: web, mail servers are different systems
– Neither is the same as the firewall
Attack Analysis
• Three points of entry for attackers:
– Web server ports: proxy checks for invalid, illegal HTTP, HTTPS requests, rejects them
– Mail server port: proxy checks email for invalid, illegal SMTP requests, rejects them
– Bypass low-level firewall checks by exploiting vulnerabilities in software, hardware
• Firewall designed to be as simple as possible
• Defense in depth
Defense in Depth
• Form of separation of privilege
• To attack a system in the DMZ by bypassing firewall checks, attacker must know internal addresses
– Then can try to piggyback unauthorized messages onto authorized packets
• But the rewriting of DMZ addresses prevents this
Inner Firewall Configuration
• Goals: restrict access to corporate internal network
• Rule: block all traffic except for that specifically authorized to enter
– Principle of fail-safe defaults
• Example: Drib uses NFS on some internal systems
– Outer firewall disallows NFS packets crossing
– Inner firewall disallows NFS packets crossing, too
• DMZ does not need access to this information (least privilege)
• If inner firewall fails, outer one will stop leaks, and vice versa (separation of privilege)
More Configuration
• Internal folks require email
– SMTP proxy required
• Administrators for DMZ need login access
– So, allow SSH through provided:
• Destination is a DMZ server
• Originates at specific internal host (administrative host)
– Violates least privilege, but ameliorated by the above
• DMZ DNS needs to know address of administrative host
– More on this later
DMZ
• Look at servers separately:
– Web server: handles web requests with Internet
• May have to send information to internal network
– Email server: handles email with Internet
• Must forward email to internal mail server
– DNS server
• Provides addresses for systems DMZ servers talk to
– Log server
• DMZ systems log info here
DMZ Mail Server
• Performs address, content checking on all email
• Goal is to hide internal information from outside, but be transparent to inside
• Receives email from Internet, forwards it to internal network
• Receives email from internal network, forwards it to Internet
Mail from Internet
1. Reassemble messages into header, letter, attachments as files
2. Scan header, letter, attachments looking for "bad" content
– "Bad" = known malicious logic
– If none, scan original letter (including attachments and header) for violation of SMTP spec
3. Scan recipient address lines
– Address rewritten to direct mail to internal mail server
– Forward letter there
Mail to Internet
• Like mail from Internet, with two changes:
– Step 2: also scan for sensitive data (like proprietary markings or content, etc.)
– Step 3: changed to rewrite all header lines containing host names, email addresses, and IP addresses of the internal network
• All are replaced by "drib.org" or the IP address of the external firewall
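Step 3's rewriting can be sketched as a header filter. The internal naming convention and regular expression are assumptions for illustration.

```python
# Sketch of outbound header rewriting: any internal host name is replaced
# by the external name "drib.org" so internal topology never leaks in
# mail headers. The ".internal.drib.org" convention is invented here.

import re

INTERNAL = re.compile(r"\b[\w.-]+\.internal\.drib\.org\b")

def rewrite_header(line):
    return INTERNAL.sub("drib.org", line)

assert rewrite_header("Received: from dev1.internal.drib.org") == \
       "Received: from drib.org"
assert rewrite_header("From: alice@drib.org") == "From: alice@drib.org"
```

A production version would also rewrite internal IP addresses (replacing them with the external firewall's address, as the slide says), not just names.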
Administrative Support
• Runs SSH server
– Configured to accept connections only from trusted administrative host in internal network
– All public keys for that host fixed; no negotiation to obtain those keys allowed
– Allows administrators to configure, maintain DMZ mail host remotely while minimizing exposure of host to compromise
DMZ Web Server
• Accepts, services requests from Internet
• Never contacts servers, information sources in internal network
• CGI scripts checked for potential attacks
– Hardened to prevent attacks from succeeding
– Server itself contains no confidential data
• Server is www.drib.org and uses IP address of outer firewall when it must supply one
Updating DMZ Web Server
• Clone of web server kept on internal network
– Called "WWW-clone"
• All updates done to WWW-clone
– Periodically admins copy contents of WWW-clone to DMZ web server
• DMZ web server runs SSH server
– Used to do updates as well as maintenance, configuration
– Secured like that of DMZ mail server
Internet Ordering
• Orders for Drib merchandise from Internet
– Customer enters data, which is saved to a file
– After user confirms order, web server checks format, content of file and then uses public key of system on internal customer subnet to encipher it
• This file is placed in a spool area not accessible to the web server program
– Original file deleted
– Periodically, internal trusted administrative host uploads these files and forwards them to internal customer subnet system
Analysis
• If attacker breaks into web server, cannot get order information
– There is a slight window during which the information of customers still on the system can be obtained
• Attacker can get enciphered files and the public key used to encipher them
– Use of public key cryptography means it is computationally infeasible for attacker to determine private key from public key
DMZ DNS Server
• Supplies DNS information for some hosts to DMZ:
– DMZ mail, web, log hosts
– Internal trusted administrative host
• Not fixed for various reasons; could be …
– Inner firewall
– Outer firewall
• Note: internal server addresses not present
– Inner firewall can get them, so DMZ hosts do not need them
DMZ Log Server
• DMZ systems all log information
– Useful in case of problems, attempted compromise
• Problem: attacker will delete or alter logs if successful
– So log them off-line to this server
• Log server saves logs to file, and also to write-once media
– The latter just in case the log server is compromised
• Runs SSH server
– Constrained in the same way as the server on the DMZ mail server
Summary
• Each server knows only what is needed to do its task
– Compromise will restrict flow of information but not reveal info on internal network
• Operating systems and software:
– All unnecessary features, servers disabled
– Better: create custom systems
• Proxies prevent direct connection to systems
– For all services except SSH from internal network to DMZ, which is itself constrained by source and destination
Internal Network
• Goal: guard against unauthorized access to information
– "read" means fetching a file, "write" means depositing a file
• For now, ignore email, updating of DMZ web server, internal trusted administrative host
• Internal network organized into 3 subnets, each corresponding to a Drib group
– Firewalls control access to subnets
Internal Mail Server
• Can communicate with hosts on subnets
• Subnet may have mail server
– Internal DNS need only know subnet mail server's address
• Subnet may allow mail to go directly to destination host
– Internal DNS needs to know addresses of all destination hosts
• Either satisfies policy
WWW-clone
• Provides staging area for web updates
• All internal firewalls allow access to this
– WWW-clone controls who can put and get what files, and where they can be put
• Synchronized with web pages on server
– Done via internal trusted administrative host
• Used as testbed for changes in pages
– Allows corporate review before anything goes public
– If DMZ web server trashed or compromised, all web pages can be restored quickly
Trusted Administrative Host
• Access tightly controlled
– Only system administrators authorized to administer DMZ systems have access
• All connections to DMZ through inner firewall must use this host
– Exceptions: internal mail server, possibly DNS
• All connections use SSH
– DMZ SSH servers accept connections from this host only
Analysis
• DMZ servers never communicate with internal servers
– All communications done via inner firewall
• Only client access to the DMZ that can come from the internal network is SSH from the trusted administrative host
– Authenticity established by public key authentication
• Only data non-administrative folks can alter are web pages
– Even there, they do not access DMZ
Slide #1-1071
Analysis
• Only data from DMZ is customer orders and email– Customer orders already checked for potential
errors, enciphered, and transferred in such a way that it cannot be executed
– Email thoroughly checked before it is sent to internal mail server
Slide #1-1072
Assumptions
• Software, hardware does what it is supposed to– If software compromised, or hardware does not
work right, defensive mechanisms fail– Reason separation of privilege is critical
• If component A fails, other components provide additional defenses
• Assurance is vital!
Slide #1-1073
Availability
• Access over Internet must be unimpeded– Context: flooding attacks, in which attackers try to
overwhelm system resources
• Example: SYN flood
– Problem: server cannot distinguish legitimate handshake from one that is part of this attack
• Only difference is whether third part of TCP handshake is sent– Flood can overwhelm communication medium
• Can’t do anything about this (except buy a bigger pipe)– Flood can overwhelm resources on our system
• We start here
Slide #1-1074
Intermediate Hosts
• Use routers to divert, eliminate illegitimate traffic– Goal: only legitimate traffic reaches firewall– Example: Cisco routers try to establish
connection with source (TCP intercept mode)• On success, router does same with intended
destination, merges the two• On failure, short time-out protects router resources
and target never sees flood
Slide #1-1075
Intermediate Hosts
• Use network monitor to track status of handshake– Example: synkill monitors traffic on network
• Classifies IP addresses as not flooding (good), flooding (bad), unknown (new)
• Checks IP address of SYN– If good, packet ignored– If bad, send RST to destination; ends handshake, releasing
resources– If new, look for ACK or RST from same source; if seen, change
to good; if not seen, change to bad• Periodically discard stale good addresses
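The classification logic above can be sketched as a small state machine. This is an illustrative sketch, not the actual synkill tool; the class name, method names, and action strings are assumptions:

```python
# Sketch of synkill-style SYN-flood mitigation: classify source IPs as
# good (not flooding), bad (flooding), or new (unknown) and pick a response.

GOOD, BAD, NEW = "good", "bad", "new"

class SynMonitor:
    def __init__(self):
        self.state = {}           # ip -> classification

    def on_syn(self, ip):
        """Return the action to take when a SYN from `ip` is observed."""
        cls = self.state.setdefault(ip, NEW)
        if cls == GOOD:
            return "ignore"       # let the handshake proceed
        if cls == BAD:
            return "send_rst"     # tear down the half-open connection
        return "watch"            # new address: wait for an ACK or RST

    def on_ack_or_rst(self, ip):
        # completing (or resetting) the handshake shows the source is real
        self.state[ip] = GOOD

    def on_timeout(self, ip):
        # no ACK/RST seen from a new address: treat the source as a flooder
        if self.state.get(ip) == NEW:
            self.state[ip] = BAD

    def expire_good(self, ip):
        # periodically discard stale "good" entries
        if self.state.get(ip) == GOOD:
            del self.state[ip]
```

A source that never completes the handshake migrates new → bad, after which its half-open connections are reset immediately.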
Slide #1-1076
Intermediate Hosts
• Problem: intermediate hosts don’t solve the problem!– They move the locus of the problem to the
intermediate system– In Drib’s case, Drib does not control these
systems• So, consider endpoints
Slide #1-1077
Endpoint Hosts
• Control how TCP state is stored– When SYN received, entry in queue of pending
connections created• Remains until an ACK received or time-out• In first case, entry moved to different queue• In second case, entry made available for next SYN
– In SYN flood, queue is always full• So, assure legitimate connections space in queue to some level
of probability• Two approaches: SYN cookies or adaptive time-outs
Slide #1-1078
SYN Cookies
• Source keeps state• Example: Linux 2.4.9 kernel
– Embed state in sequence number– When SYN received, compute sequence
number to be function of source, destination, counter, and random data
• Use as reply SYN sequence number• When reply ACK arrives, validate it
– Must be hard to guess
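A minimal sketch of the idea, assuming an HMAC over the connection parameters stands in for the kernel's "function of source, destination, counter, and random data" (real implementations pack the counter and MSS into specific bits of the 32-bit sequence number):

```python
# SYN-cookie-style scheme: the server keeps no per-connection state; it
# derives its initial sequence number from the connection itself and
# validates the returned ACK by recomputing the cookie.
import hmac, hashlib, os

SECRET = os.urandom(16)   # per-boot random data; must be hard to guess

def syn_cookie(src, dst, counter):
    """Derive the server's reply-SYN sequence number from connection state."""
    msg = f"{src}|{dst}|{counter}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")        # 32-bit sequence number

def validate_ack(src, dst, counter, ack):
    # The client's ACK must acknowledge cookie + 1; recompute, don't store.
    return ack == (syn_cookie(src, dst, counter) + 1) % 2**32
```

Because the cookie is recomputable, a flood of SYNs consumes no queue space; only a completed handshake (a valid ACK) creates state.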
Slide #1-1079
Adaptive Time-Out
• Change time-out time as space available for pending connections decreases
• Example: modified SunOS kernel– Time-out period shortened from 75 to 15 sec– Formula for queueing pending connections changed:
• Process allows up to b pending connections on a port
• a: number of connections completed but awaiting the process
• p: total number of pending connections
• c: tunable parameter
• Whenever a + p > cb, drop the current SYN message
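The drop rule can be stated directly (a sketch; parameter names follow the slide):

```python
# Modified queueing rule for pending connections: with b slots on the port,
# a connections completed but awaiting the process, p pending connections,
# and tunable parameter c, an incoming SYN is dropped whenever a + p > c*b.

def accept_syn(a, p, b, c):
    """Return True if a new SYN may occupy a pending-connection slot."""
    return a + p <= c * b
```

Lowering c reserves headroom in the queue, trading capacity for a higher probability that legitimate connections find space during a flood.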
Slide #1-1080
Anticipating Attacks
• Drib realizes compromise may come through unanticipated means– Plans in place to handle this
• Extensive logging– DMZ log server does intrusion detection on
logs
Slide #1-1081
Against Outer Firewall
• Unsuccessful attacks– Logged, then ignored– Security folks use these to justify budget, train
new personnel• Successful attack against SMTP proxy
– Proxy will start non-standard programs– Anomaly detection component of IDS on log
server will report unusual behavior• Security officers monitor this around the clock
Slide #1-1082
In the DMZ
• Very interested in attacks, successful or not• Means someone who has obtained access to DMZ
launched attack– Some trusted administrator shouldn’t be trusted– Some server on outer firewall is compromised– Software on DMZ system not restrictive enough
• IDS system on DMZ log server looks for misuse (known attacks) to detect this
Slide #1-1083
Ignoring Failed Attacks
• Sounds dangerous– Successful attacker probably tried and failed
earlier• Drib: “So what?”
– Not sufficient personnel to handle all alerts– Focus is on what Drib cares most about
• Successful attacks, or failed attacks where there should be none
Slide #1-1084
Checking the IDS
• IDS allows Drib to add attack signatures and tune parameters to control reporting of events– Experimented to find good settings– Verify this every month by doing manual
checks for two 1-hour periods (chosen at random) and comparing with reported events
Slide #1-1085
Key Points
• Begin with policy• Craft network architecture and security
measures from it• Assume failure will occur
– Try to minimize it– Defend in depth– Have plan to handle failures
Slide #1-1086
Chapter 24: System Security
• Introduction• Policy• Networks• Users• Authentication• Processes• Files• Retrospective
Slide #1-1087
Introduction
• How does administering security affect a system?• Focus on two systems
– DMZ web server– User system in development subnet
• Assumptions– DMZ system: assume any user of trusted administrative
host has authenticated to that system correctly and is a “trusted” user
– Development system: standard UNIX or UNIX-like system which a set of developers can use
Slide #1-1088
Policy
• Web server policy discussed in Chapter 23– Focus on consequences
• Development system policy components, effects
• Comparison
Slide #1-1089
DMZ Web Server:Consequences of Policy
1. Incoming web connections come from outer firewall2. Users log in from trusted administrative host; web pages
also downloaded through it3. Log messages go to DMZ log host only4. Web server may query DMZ DNS system for IP addresses5. Other than these, no network services provided6. Runs CGI scripts
– One writes enciphered data to spool area7. Implements services correctly, restricts access as much as
possible8. Public keys reside on web server
Slide #1-1090
Constraints on DMZ Web Server
WC1 No unrequested network connections except HTTP, HTTPS from outer firewall and SSH from trusted administrative host– Replies to DNS queries from DMZ DNS okay
WC2 User access only to those with user access to trusted administrative host– Number of these users as small as possible– All actions attributed to individual account, not
group or group account
Slide #1-1091
Constraints on DMZ Web Server
WC3 Configured to provide minimal access to system– Transfer of enciphered file to spool area should
not be under web server controlWC4 Software is high assurance
– Needs extensive loggingWC5 Contains as few programs, as little software,
configuration information, and other data as possible– Minimizes effects of successful attack
Slide #1-1092
Development System
• Development network (devnet) background– Firewall separating it from other subnets– DNS server– Logging server for all logs– File servers– User database information servers– Isolated system used to build “base system
configuration” for deployment to user systems– User systems
• What follows applies only to user systems
Slide #1-1093
Devnet User System:Policy Components
1. Only authorized users can use devnet systems; can work on any workstation
2. Sysadmins must be able to access workstations at any time
3. Authorized users trusted not to attack systems4. All network communications except email confidential,
integrity checked5. Base standard configuration cannot be changed6. Backups allow any system to be restored7. Periodic, ongoing audits of devnet systems
Slide #1-1094
Consequences for Infrastructure
• Firewall at boundary enforces network security policy– Changes to network policy made only at firewall– Devnet systems need not be as tightly secured
• No direct access between Internet, devnet systems– Developers who need to do so have separate
workstations connected to commercial ISP– These are physically disconnected from devnet and
cannot be easily reconnected
Slide #1-1095
Consequences for User Systems
DC1 Communications authenticated, enciphered, integrity checked– Consistent naming scheme across systems
DC2 Each workstation has privileged accounts for administrators– Multiple administrative accounts to limit access
to particular privileged functionsDC3 Notion of “audit” or “login” identity
associated with each action– So actions can be tied to individuals
Slide #1-1096
Consequences for User Systems
DC4 Need approval to install program, and must install it in special area
– Separates it from base system softwareDC5 Each workstation protects base system
software from being altered– Best way: keep it on read-only media
DC6 Employee’s files be available continuously– Even if workstation goes down– Same permissions wherever employee accesses
them
Slide #1-1097
Consequences for User Systems
DC7 Workstations store only transient files, so need not be backed up– Permanent files stored on file server,
mounted remotely– Software, kernel on read-only media
DC8 Logging system to hold logs needed– Security officers need access to systems,
network
Slide #1-1098
Procedural Mechanisms
• Some restrictions cannot be enforced by technology– Moving files between ISP workstation, devnet
workstation using a floppy– No technological way to prevent this except by
removing floppy drive• Infeasible due to nature of ISP workstations
– Drib has made procedures, consequences for violating procedures, very clear
Slide #1-1099
Comparison
• Spring from different roles– DMZ web server not a general-use computer– Devnet workstation is
• DMZ web server policy: focus on web server– System provides that service (and supporting services)
only; only administrative users have access as users• Devnet workstation policy: focus on more
complex environment– Software creation, testing, maintenance– Many different users
Slide #1-1100
Networks
• Both systems need appropriate network protections– Firewalls provide much of this, but separation
of privilege says the systems should too• How do administrators configure these?
Slide #1-1101
DMZ Web Server
• Accepts web requests only from the outer firewall
– May allow internal users to access web site for testing
purposes in near future• Configuration file for web server software:
order allow, deny          (evaluate allow, then deny lines)
allow from outer_firewall  (anything outer firewall sends is okay)
allow from inner_firewall  (anything inner firewall sends is okay)
deny from all              (don’t accept anything else)
• Note inner firewall prevents internal hosts from accessing DMZ web server (for now)– If changed, web server configuration will stay same
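The allow-then-deny evaluation resembles Apache-style `order allow, deny` access control. A toy evaluator of that rule (the host names are the slide's placeholders, and the rule structure is an assumption, not the server's actual code):

```python
# "order allow, deny": allow lines are evaluated first; a request that
# matches no allow line falls through to "deny from all" and is rejected.

ALLOWED_HOSTS = {"outer_firewall", "inner_firewall"}

def permit(source_host, allowed=ALLOWED_HOSTS, deny_all=True):
    """Decide whether a connection from `source_host` is accepted."""
    if source_host in allowed:
        return True          # matched an allow line
    return not deny_all      # "deny from all" rejects everything else
```

The default-deny final rule means any host not explicitly listed, inside or outside, is refused.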
Slide #1-1103
DMZ Web Server: SSH Server
• Accepts SSH connections only from authorized users coming in from trusted administrative server– SSH provides per host and per user authentication– Public keys pre-loaded on web server
• Configuration file for ssh server:
allow trusted_admin_server   (connections from admin server okay)
deny all                     (refuse all others)
• Note inner firewall prevents other internal hosts from accessing SSH server on this system– Not expected to change
Slide #1-1104
Availability
• Need to restart servers if they crash– Automated, to make restart quick
• Script:
#!/bin/sh
echo $$ > /var/servers/webdwrapper.pid
while true
do
    /usr/local/bin/webd
    sleep 30
done
• If server terminates, 30 sec later it restarts
Slide #1-1105
DMZ Web Server: Clients
• DNS client to get IP addresses, host names from DMZ DNS– Client ignores extraneous data– If different responses to query, discard both
• Logging client to send log messages to DMZ log server– Log any attempted connections to any port
Slide #1-1106
Devnet Workstation
• Servers:– Mail (SMTP) server
• Very simple: just forwards mail to central devnet mail server
– SSH server– Line printer spooler– Logging server
• All use access control wrappers– Used to restrict connections from within devnet as well
as duplicate firewall restrictions
Slide #1-1107
Access Control Wrappers
• TCP wrappers configured to intercept requests to active ports on workstations– Determines origin (IP address) of request– If okay, allows connection transparently– Log request
• Access controlled by configuration file– Second program examines network requests from
variety of ports– If illicit activity indicated, adds commands to
configuration file to block access requests from that origin
Slide #1-1108
FTP, Web Services in Devnet
• Special server systems– Neither is on any devnet workstation– To make files, pages available place them in special
areas on file server• FTP, Web servers remotely mount these areas and make them
available to the server daemons
• Benefits– Minimizes number of services that devnet workstations
have to run– Minimizes number of systems that provide these
services
Slide #1-1109
Checking Security
• Security officers scan network ports on systems– Compare to expected list of authorized systems
and open ports
• Discrepancies lead to questions
• Security officers attack devnet systems– Goal: see how well they withstand attacks– Results used to change software, procedures to
improve security
Slide #1-1110
Comparison
• Location– DMZ web server: all systems assumed hostile, so server
replicates firewall restrictions– Devnet workstation: internal systems trusted, so
workstation relies on firewall to block attacks from non-devnet systems
• Use– DMZ web server: serve web pages, accept commercial
transactions– Devnet workstation: many tasks to provide pleasant
development environment for developers
Slide #1-1111
Users
• What accounts are needed to run systems?– User accounts (“users”)– Administrative accounts (“sysadmins”)
• How should these be configured and maintained?
Slide #1-1112
DMZ Web Server
• At most 2 users and 1 sysadmin– First user reads (serves) web pages, writes to
web transaction areas– Second user moves files from web transaction
area to commerce transaction spooling area– Sysadmin manages system
Slide #1-1113
User Accounts
• Web server account: webbie• Commerce server account: ecommie• CGI script (as webbie) creates file with ACL, in
directory with same ACL:– ( ecommie, { read, write } )
• Commerce server copies file into spooling area (enciphering it appropriately), then deletes original file– Note: webbie can no longer read, write, delete file
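The handoff can be sketched with an explicit ACL check (illustrative only; the account names follow the slide, and the set-based ACL representation is an assumption):

```python
# The CGI script (running as webbie) creates the transaction file with an
# ACL granting read/write to ecommie only, so webbie itself cannot touch
# the file afterward even though it created it.

def make_transaction_acl():
    """ACL attached to a newly created transaction file and its directory."""
    return {("ecommie", "read"), ("ecommie", "write")}

def allowed(acl, user, op):
    """Access is granted only if (user, op) appears on the ACL."""
    return (user, op) in acl
```

This enforces a one-way handoff: the creator loses access the moment the file exists, so a compromised web server cannot read or alter queued orders.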
Slide #1-1114
Sysadmin Accounts
• One user account per system administrator– Ties actions to individual
• Never log into sysadmin account remotely– Must log into user account, then access sysadmin
account• Supports tying events to individual users• If audit UID not supported, may be more difficult …
• This is allowed from console– Useful if major problems– Three people in room with console at all times
Slide #1-1115
Devnet Workstation
• One user account per developer• Administrative accounts as needed• Groups correspond to projects• All identities consistent across all devnet
workstations– Example: trusted host protocols, in which a
user authenticated to host A can log into host B without re-authenticating
Slide #1-1116
Naming Problems
• Host stokes trusts host navier– User Abraham has account abby on navier– Different user Abigail has account abby on stokes– Now Abraham can log into Abigail’s account without
authentication!• File server: hosts navier, stokes both use it
– User abby has UID 8924 on navier– User siobhan has UID 8924 on stokes– File server determines access based on UID– Now abby can read siobhan’s files, and vice versa
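The UID collision can be shown concretely (an illustrative sketch; the hosts, users, and UID are the slide's example, and the lookup-table representation is an assumption):

```python
# The file server decides access by numeric UID alone, so two distinct
# users who happen to share a UID on different hosts are indistinguishable
# to it: abby on navier and siobhan on stokes both present UID 8924.

UID_TABLE = {
    ("navier", "abby"):    8924,
    ("stokes", "siobhan"): 8924,
}

def fileserver_identity(host, user):
    """The only identity the file server sees is the numeric UID."""
    return UID_TABLE[(host, user)]
```

A central user database (the UINFO system described next) fixes this by making one name-to-UID mapping authoritative for every host.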
Slide #1-1117
UINFO System
• Central repository defining users, accounts– Uses NIS protocol– All systems on devnet, except firewall, use it
• No user accounts on workstations
– Sysadmin accounts present on UINFO system• Also on each devnet workstation to allow sysadmins to fix
problems with workstation accessing UINFO system (and for local restores)
• Enables developers to log in to any devnet workstation
Slide #1-1118
About NIS
• NIS uses cleartext messages to send info
– Violates requirements, as there is no integrity checking
• Not a problem in this context– Nonadministrative info: sent enciphered, integrity-
checked– Administrative (NIS) info: vulnerable to fake answers
• Idea is that a rogue system sends a bogus reply before the UINFO system can
– Not possible from inside, as internal systems are secured
– Not possible from outside, as the firewall will block the message
Slide #1-1119
Comparison
• Differences lie in use of systems– DMZ web server: in area accessible to untrusted users
• Limiting number of users limits damage successful attacker can do
• User info on system, so don’t need to worry about network attacks on that info
• Few points of access– Devnet workstation: in area accessible to only trusted
users• General user access system• Shares user base with other systems• Many points of access
Slide #1-1120
Authentication
• Focus here is on techniques used• All systems require some form
Slide #1-1121
DMZ Web Server
• SSH: cryptographic authentication for hosts– Does not use IP addresses– Reject connection if authentication fails
• SSH: crypto for user; password on failure– Experimenting with smart card systems, so uses PAM
• Passwords: use MD-5 hash to protect passwords– Can be as long as desired– Proactive password checking to ensure they are hard to
guess– No password aging
Slide #1-1122
Devnet Workstation
• Requires authentication as unauthorized people have access to physically secure area– Janitors, managers, etc.
• Passwords: proactively checked– Use DES-based hash for NIS compatibility
• Max password length: 8 chars– Aging in effect; time bounds (min 3d, max 90d)
• SSH: like DMZ web server, except:– root access blocked– Must log in as ordinary user, then change to root
Slide #1-1123
Comparison
• Both use strong authentication– All certificates installed by trusted sysadmins
• Both allow reusable passwords– One uses MD-5, other DES-based hash– One does not age passwords, other does
Slide #1-1124
Processes
• What each system must run– Goal is to minimize the number of these
Slide #1-1125
DMZ Web Server
• Necessary processes:– Web server
• Enough privileges to read pages, execute CGI scripts– Commerce server
• Enough privileges to copy files from web server’s area to spool area; not enough to alter web pages
– SSH server (privileged)– Login server (privileged)
• If a physical terminal or console– Any essential OS services (privileged)
• Page daemon, etc.
Slide #1-1126
Potential Problem
• UNIX systems: need privileges to bind to ports under 1024– Including port 80 (for web servers)– But web server is unprivileged!
• Solution 1: Server starts privileged, opens port, drops privileges
• Solution 2: Write wrapper to open port, drop privilege, invoke web server– The wrapper passes open port to web server
Slide #1-1127
File Access
• Augment ACLs with something like capabilities
• Change process notion of “root directory” to limit access to files in file system
• Example: web server needs to access page– Without change: “/usr/Web/pages/index.html”– After change: “/pages/index.html”
• Cannot refer to “/usr/trans” as cannot name it
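Why changing the root directory confines names can be illustrated by resolving paths the way the confined process would (a sketch of the semantics, not an actual chroot):

```python
# After a root-directory change, every path the process names is resolved
# relative to the new root; nothing outside that subtree can be named,
# even with "../" tricks.
import posixpath

def resolve(new_root, path):
    """Resolve `path` as a process whose root is `new_root` would see it."""
    # Normalize, collapsing ".." components; "/.." at the root stays at "/"
    clean = posixpath.normpath("/" + path.lstrip("/"))
    return new_root.rstrip("/") + clean
```

So a web server with root `/usr/Web` sees `/pages/index.html` as `/usr/Web/pages/index.html`, and no spelling of a path can reach `/usr/trans`.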
Slide #1-1128
Example
• Web server changes root directory to /usr/Web
• Commerce server changes root directory to /usr/trans
• Note “xdir” accessible to both processes
[Figure: directory tree rooted at /, with /usr/Web (containing pages) as the web server’s root, /usr/trans as the commerce server’s root, and xdir accessible from both]
Slide #1-1129
Interprocess Communications
• Web server needs to tell commerce server a file is ready
• Use shared directory– Web server places file with name “trnsnnnn” in
directory (n is digit)– Commerce server periodically checks directory
for files of that name, operates on them– Alternative: web server signals commerce
server to get file using signal mechanism
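The shared-directory handoff can be sketched as follows (the `trnsnnnn` naming pattern follows the slide; the helper names and four-digit width are assumptions):

```python
# Web server deposits files named "trns" + digits into the shared
# directory; the commerce server periodically polls for that pattern.
import os, re

TRANS_NAME = re.compile(r"^trns\d{4}$")

def deposit(directory, seq, data):
    """Web server side: write a transaction file into the shared directory."""
    name = f"trns{seq:04d}"
    with open(os.path.join(directory, name), "w") as f:
        f.write(data)
    return name

def poll(directory):
    """Commerce server side: return transaction files awaiting processing."""
    return sorted(n for n in os.listdir(directory) if TRANS_NAME.match(n))
```

Polling a shared directory avoids any direct channel between the two servers, which is what keeps a compromised web server from talking to the commerce server directly.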
Slide #1-1130
Devnet Workstation
• Servers provide administrative info– Run with as few privileges as possible
• Best: user nobody and group nogroup
– Use master daemon to listen at ports, spawn less privileged servers to service request
– Servers change notion of root directory
• Clients– NIS client to talk to UINFO system– File server client to allow file server access
Slide #1-1131
Devnet Workstation
• Logging mechanism– Records OS calls, parameters, results– Saves it locally, sent to central logging server
• Intrusion detection done; can augment logging as needed• Initially, process start, end, audit and effective UIDs recorded
• Disk space– If disk utilization over 95%, program scans local
systems and deletes all temp files and editor backup files not in use
• Meaning have not been accessed in last 3 days
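The cleanup policy can be sketched as below (the 95% threshold and 3-day idle window follow the slide; the name patterns used to recognize temp and editor-backup files are assumptions):

```python
# When disk utilization exceeds 95%, delete temp and editor-backup files
# that have not been accessed in the last 3 days.
import os, time

THRESHOLD = 0.95
MAX_IDLE = 3 * 24 * 3600          # three days, in seconds

def is_reclaimable(path, now=None):
    now = time.time() if now is None else now
    name = os.path.basename(path)
    looks_temp = name.endswith("~") or name.startswith("tmp")
    idle = now - os.stat(path).st_atime   # time since last access
    return looks_temp and idle > MAX_IDLE

def cleanup(paths, utilization):
    """Remove reclaimable files, but only when the disk is nearly full."""
    if utilization <= THRESHOLD:
        return []
    doomed = [p for p in paths if is_reclaimable(p)]
    for p in doomed:
        os.remove(p)
    return doomed
```

Keying on access time rather than modification time is what makes "not in use" mean "not even read recently."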
Slide #1-1132
Comparison
• DMZ web server: only necessary processes– New software developed, compiled elsewhere– Processes run in very restrictive environment– Processes write to local log, directly to log server
• Devnet workstation: provides environment for developers– More processes for more tasks– Process environment less restrictive to allow sharing,
etc.– Processes write to log server, which does all logging
Slide #1-1133
Files
• Protections differ due to differences in policies– Use physical limits whenever possible, as these
cannot be corrupted– Use access controls otherwise
Slide #1-1134
DMZ Web Server
• System programs, configuration files, etc. are on CD-ROM– If attacker succeeds in breaking in, modifying in-core
processes, then sysadmins simply reboot to recover– Public key for internal commerce server here, too
• Only web pages change– Too often to make putting them on CD-ROM – Small hard drive holds pages, spool areas, temp
directories, sysadmin home directory
Slide #1-1135
Example
• Web server: user webbie– When running, root directory is root of web page
directory, “/mnt/www”– CGI programs owned by root, located in directory
(“/mnt/www/cgi-bin”) mounted from CD-ROM• Keys in “/mnt/www/keys”
– Transaction files in “/mnt/www/pages/trans”• Readable, writable by webbie, ecommie
• Commerce server: user ecommie– Periodically checks “/mnt/www/pages/trans”– Moves files out to “/home/com/transact”
Slide #1-1136
DMZ Web Server
• Everything statically linked– No compilers, dynamic loaders, etc.
• Command interpreter for sysadmin– Programs to start, stop servers– Programs to edit, create, delete, view files– Programs to monitor systems
• No other programs– None to read mail or news, no batching, no web
browsers, etc.
Slide #1-1137
DMZ Web Server
• Checking integrity of DMZ web server– Not done
• If integrity is in question:
– Stop web server
– Transfer all remaining transaction files
– Reboot system from CD-ROM
– Reformat hard drive
– Reload contents of user directories, web pages from
WWW-clone– Restart servers
Slide #1-1138
Devnet Workstation
• Standard configuration for these– Provides folks with needed tools, configurations– Configuration is on bootable CD-ROM
• CD-ROM created on isolated workstation– Changes made to that workstation, then new CD-ROM
created and distributed• Workstations also have hard drive for local
writable storage– Mounted under CD-ROM– Can be wiped if any question of integrity
Slide #1-1139
Devnet Workstation
• Logs on log server examined using intrusion detection systems– Security officers validate by analyzing 30 min worth of
log entries and comparing result to reports from IDS
• Scans of writable media look for files matching known patterns of intrusions– If found, reboot and wipe hard drive– Then do full check of file server
Slide #1-1140
Comparison
• Both use physical means to prevent system software from being compromised– Attackers can’t alter CD-ROMs
• Reloading systems– DMZ web server: save transaction files, regenerate
system from WWW-clone• Actually, push files over to internal network system
– Devnet workstation: just reboot, reformat hard drive• Files on hard drive are transient or replicated (logs)
Slide #1-1141
Comparison
• Devnet workstation: users trusted not to attack it– Any developer can use any devnet workstation– Developers may unintentionally introduce Trojan
horses, etc• Hence everything critical on read-only media
• DMZ web server: fewer trusted users– Self-contained; no mounting files remotely, none of its
files mounted remotely– CD-ROM has minimal web server system augmented
only by additional programs tailored for Drib’s purpose
Slide #1-1142
Summary: DMZ Web Server
• Runs as few services as possible• Keeps everything on unalterable media• Checks source of all connections
– Web: from outer firewall only– SSH: from trusted administrative host only
• Web, commerce servers transfer files via shared directory– They do not directly communicate
Slide #1-1143
Summary: Devnet Workstation
• Runs as few programs, servers as possible– Many more than DMZ web server, though
• Security prominent but not dominant– Must not interfere with ability of developer to do job– Security mechanisms hinder attackers, help find
attackers, and enable rapid recovery from successful attack
• Access from network allowed– Firewall(s) assumed to keep out unwanted users, so
security mechanisms are second line of defense
Slide #1-1144
Key Points
• Use security policy to derive security mechanisms
• Apply basic principles, concepts of security– Least privilege, separation of privilege (defense
in depth), economy of mechanism (as few services as possible)
– Identify who, what you are trusting
Slide #1-1145
Chapter 25: User Security
• Policy• Access• Files, devices• Processes• Electronic communications
Slide #1-1146
Policy
• Assume user is on Drib development network– Policy usually highly informal and in the mind of the
user• Our users’ policy:
U1 Only users have access to their accountsU2 No other user can read, change file without owner’s
permissionU3 Users shall protect integrity, confidentiality,
availability of their filesU4 Users shall be aware of all commands that they enter
or that are entered on their behalf
Slide #1-1147
Access
• U1: users must protect access to their accounts– Consider points of entry to accounts
• Passwords• Login procedure• Leaving system
Slide #1-1148
Passwords
• Theory: writing down passwords is BAD!• Reality: choosing passwords randomly makes
them hard to remember– If you need passwords for many systems, assigning
random passwords and not writing something down won’t work
• Problem: Someone can read the written password• Reality: degree of danger depends on
environment, how you record password
Isolated System
• System used to create boot CD-ROM– In locked room; system can only be accessed
from within that room• No networks, modems, etc.
– Only authorized users have keys• Write password on whiteboard in room
– Only people who will see it are authorized to see it
Multiple Systems
• Non-infrastructure systems: have users use same password– Done via centralized user database shared by all
non-infrastructure systems• Infrastructure systems: users may have
multiple accounts on single system, or may not use centralized database– Write down transformations of passwords
Infrastructure Passwords
• Drib devnet has 10 infrastructure systems, 2 lead admins (Anne, Paul)– Both require privileged access to all systems– root, Administrator passwords chosen randomly
• How to remember? Memorize an algorithm!– Anne: “change case of 3rd letter, delete last char”– Paul: “add 2 mod 10 to first digit, delete first letter”
• Each gets printout of transformed password
Papers for Anne and Paul

Actual password   Anne’s version   Paul’s version
C04cEJxX          C04ceJxX5        RC84cEJxX
4VX9q3GA          4VX9Q3GA2        a2VX9q3GA
8798Qqdt          8798QqDt$        67f98Qqdt
3WXYwgnw          3WXywgnwS        Z1WXYwgnw
feOioC4f          feoioC4f9        YfeOioC2f
VRd0Hj9E          VRD0Hj9Eq        pVRd8Hj9E
e7Bukcba          e7BUkcbaX        Xe5Bukcba
ywyj5cVw          ywYj5cVw*        rywyj3cVw
5iUikLB4          5iUIkLB4m        3JiUikLB4
af4hC2kg          af4HC2kg+        daf2hC2kg
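Anne’s memorized algorithm can be checked against the table. A minimal sketch, assuming the recovery direction is printout-to-actual and that “3rd letter” means the third alphabetic character (both inferred from the table entries; the function name is ours):

```python
# Anne's recovery rule, as inferred from the slide's table: delete the
# last character of the printed version, then flip the case of the 3rd
# alphabetic character to recover the actual password.
def anne_recover(printed):
    s = printed[:-1]                  # delete last char
    count = 0
    for i, ch in enumerate(s):
        if ch.isalpha():
            count += 1
            if count == 3:            # 3rd letter, skipping digits
                return s[:i] + ch.swapcase() + s[i+1:]
    return s

print(anne_recover("C04ceJxX5"))  # -> C04cEJxX (first row of the table)
```

Stealing the printout alone reveals nothing usable; the transformation lives only in Anne’s head.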
Non-Infrastructure Passwords
• Users can pick– Proactive password checker vets proposed password
• Recommended method: passwords based on obscure poems or sayings
– Example: “ttrs&vmbi” from first letter of second, fourth words of each line, putting “&” between them:
He took his vorpal sword in hand:
Long time the manxome foe he sought—
So rested he by the Tumtum tree,
And stood awhile in thought.
Third verse of Jabberwocky, from Through the Looking-Glass
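The construction of the example password can be reproduced directly:

```python
# Building the slide's example password "ttrs&vmbi": first letters of
# the second and fourth words of each line, joined by "&".
verse = [
    "He took his vorpal sword in hand:",
    "Long time the manxome foe he sought—",
    "So rested he by the Tumtum tree,",
    "And stood awhile in thought.",
]
seconds = "".join(line.split()[1][0] for line in verse)  # took, time, rested, stood
fourths = "".join(line.split()[3][0] for line in verse)  # vorpal, manxome, by, in
print(seconds + "&" + fourths)  # -> ttrs&vmbi
```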
Analysis
• Isolated system meets U1– Only authorized users can enter room, read password,
access system• Infrastructure systems meet U1
– Actual passwords not written down– Anne, Paul don’t write down algorithms– Stealing papers does not reveal passwords
• Non-infrastructure systems meet U1– Proactive password checker rejects easy to guess
passwords
Login Procedure
• User obtains a prompt at which to enter name
• Then comes password prompt• Attacks:
– Lack of mutual authentication– Reading password as it is entered– Untrustworthy trusted hosts
Lack of Mutual Authentication
• How does user know she is interacting with legitimate login procedure?– Attacker can have Trojan horse emulate login
procedure and record name, password, then print error message and spawn real login
• Simple approach: if name, password entered incorrectly, prompt for retry differs– In UNIX V6, it said “Name” rather than “login”
More Complicated
• Attack program feeds name, password to legitimate login program on behalf of user, so user logged in without realizing attack program is an intermediary
• Approach: trusted path– Example: to log in, user hits specified sequence of
keys; this traps to kernel, which then performs login procedure; key is that no application program can disable this feature, or intercept or modify data sent along this path
Reading Password As Entered
• Attacker remembers it, uses it later– Sometimes called “shoulder surfing”– Can also read chars from kernel tables, passive
wiretapping, etc.• Approach: encipher all network traffic to defeat
passive wiretapping– Drib: firewalls block traffic to and from Internet,
internal hosts trusted not to capture network traffic– Elsewhere: use SSH, SSL, TLS to provide encrypted
tunnels for other protocols or to provide encrypted login facilities
Noticing Previous Logins
• Many systems print time, location (terminal) of last login– If either is wrong, probably someone has
unauthorized access to account; needs to be investigated
• Requires user to be somewhat alert during login
Untrustworthy Trusted Hosts
• Idea: if two hosts under same administrative control, each can rely on authentication from other
• Drib does this for backups– Backup system logs into workstation as user “backup”
• If password required, administrator password needs to be on backup system; considered unacceptable risk
• Solution: all systems trust backup server
• Requires accurate identification of remote host– Usually IP address– Drib uses challenge-response based on cryptography
Analysis
• Mutual authentication meets U1– Trusted path used when available; other times, system
prints time, place of last login
• Protecting passwords meets U1– Unencrypted passwords only placed on trusted
network; also, system prints time, place of last login
• Trusted hosts meets U1– Based on cryptography, not IP addresses; number of
trusted systems minimal (backup system only)
Leaving the System
• People not authorized to use systems have access to rooms where systems are– Custodians, maintenance workers, etc.
• Once authenticated, users must control access to their session until it ends– What to do when one goes to bathroom?
• Procedures used here
Walking Away
• Procedures require user to lock monitor– Example: X window system: xlock
• Only user, system administrator can unlock monitor
– Note: be sure locking program does not have master override
• Example: one version of lock program allowed anyone to enter “Hasta la vista!” to unlock monitor
Modems
• Terminates sessions when remote user hangs up– Problem: this is configurable; may have to set physical
switch• If not done, next to call in connects to previous user’s session
– Problem: older telephone systems may mishandle propagation of call termination
• New connection arrives at telco switch and is forwarded before termination signal arrives at modem
• Same effect as above
• Drib: no modems connected to development systems
Analysis
• Procedures about walking away meet U1– Screen locking programs required, as is locking doors
when leaving office; failure to do so involves disciplinary action
– If screen locking password forgotten, system administrators can remotely access system and terminate program
• Procedures about modems meet U1– No modems allowed; hooking one up means getting
fired (or similar nasty action)
Files and Devices
• File protection allows users to refine protection afforded their data– Policy component U2 requires this
• Users manipulate system through devices, so their protection affects user protection as well– Policy components U1, U4 require this
Files
• Often different ways to do one thing– UNIX systems: Pete wants to allow Deb to read file
design, but no-one else to do so• If Pete, Deb have their own group, make file owned by that
group and group readable but not readable by others• If Deb only member of a group, Pete can give group ownership
of file to Deb and set permissions appropriately• Pete can set permissions of containing directory to allow
himself, Deb’s group search permission– Windows NT: same problem
• Use ACL with entries for Pete, Deb only:
{ ( Pete, full control ), ( Deb, read ) }
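The effect of that two-entry ACL can be sketched as follows; the dictionary representation and the helper name are our own, not a Windows NT API:

```python
# Sketch of the slide's ACL: only Pete and Deb have entries, so every
# other user is denied by default. "full control" implies every right.
acl = {"Pete": {"full control"}, "Deb": {"read"}}

def allowed(user, right):
    rights = acl.get(user, set())       # absent user -> no rights
    return right in rights or "full control" in rights

print(allowed("Deb", "read"), allowed("Deb", "write"), allowed("Carol", "read"))
# -> True False False
```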
File Permission on Creation
• Use template to set or modify permissions when file created– Windows NT: new directory inherits parent’s
ACL– UNIX systems: identify permissions to be
denied• umask contains permissions to be disabled, so can
say “always turn off write permission for everyone but owner when file created”
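The umask combines with the requested creation mode by clearing every bit set in the mask, which is why a mask of 0o022 yields the familiar 0644 files:

```python
# How a umask interacts with a requested creation mode: bits set in the
# umask are the permissions to be *disabled* on newly created files.
def creation_mode(requested, umask):
    return requested & ~umask

print(oct(creation_mode(0o666, 0o022)))  # -> 0o644 (write off for group, other)
print(oct(creation_mode(0o777, 0o077)))  # -> 0o700 (owner-only access)
```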
Group Access
• Provides set of users with same rights• Advantage: use group as role
– All folks working on Widget-NG product in group widgetng
– All files for that product group readable, writable by widgetng
– Membership changes require adding users to, dropping users from group
• No changes to file permissions required
Group Access
• Disadvantage: use group as abbreviation for set of users; changes to group may allow unauthorized access or deny authorized access– Maria wants Anne, Joan to be able to read movie– System administrator puts all in group maj– Later: sysadmin needs to create group with Maria,
Anne, Joan, and Lorraine• Adds Lorraine to group maj• Now Lorraine can read movie even though Maria didn’t want
her to be able to do so
File Deletion
• Is the name or the object deleted?• Terms
– File attribute table: contains information about file– File mapping table: contains information allowing OS
to access disk blocks belonging to file– Direct alias: directory entry naming file– Indirect alias: directory entry naming special file
containing name of target file
• Each direct alias is alternative name for same file
Rights and Aliases
• Each direct alias can have different permissions– Owner must change access modes of each alias
in order to control access• Generally false
– File attribute table contains access permissions for each file
• So users can use any alias; rights the same
Deletion
• Removes directory entry of file– If no more directory entries, data blocks and
table entries released too– Note: deleting directory entry does not mean
file is deleted!
Example
• Anna on UNIX wants to delete file x, setuid to herself
– rm x works if no one else has a direct alias to it
– Sandra has one, so file not deleted (but Anna’s directory entry is deleted)
• File still is setuid to Anna
• How to do this right:
– Turn off all permissions on file
– Then delete it
• Even if others have direct links, they are not the owners and so can’t change permissions or access file
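The safe order of operations can be sketched with Python’s os module (shown on a throwaway temporary file rather than a real setuid binary):

```python
# The slide's safe-deletion order: revoke every permission bit first,
# then remove the directory entry. Other direct aliases that survive
# now point at a file nobody but the owner can re-enable.
import os
import tempfile

fd, path = tempfile.mkstemp()    # stand-in for the setuid file "x"
os.close(fd)
os.chmod(path, 0)                # step 1: turn off all permissions
os.remove(path)                  # step 2: delete our directory entry
print(os.path.exists(path))      # -> False
```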
Persistence
• Disk blocks of deleted file returned to pool of unused disk blocks
• When reassigned, new process may be able to read previous contents of disk blocks– Most systems offer a “wipe” or “cleaning” procedure
that overwrites disk blocks with zeros or random bit patterns as part of file deletion
– Useful when files being deleted contain sensitive data
Direct, Indirect Aliases
• Some commands act differently on these– Angie executes command to add permission to file to
let Lucy read it– If file name direct alias, works– If file name indirect alias, does it add permission to the
indirect alias or the file itself?• Semantics of systems, commands on systems
differ• Example: on RedHat Linux 7.1, when given indirect alias of
file, chmod changes permissions of actual file, rm deletes indirect alias
Analysis
• Use of ACLs, umask meet U2– Both set to deny permission to”other” and “group” by
default; user can add permissions back
• Group access controls meet U2– Membership in groups tightly controlled, based on least
privilege
• Deletion meets U2– Procedures require sensitive files be wiped when
deleted
Devices
• Must be protected so user can control commands sent, others cannot see interactions
• Writable devices• Smart terminals• Monitors and window systems
Writable Devices
• Restrict access to these as much as possible• Example: tapes
– When process begins writing, ACL of device changes to prevent other processes from writing
– Between mounting of media, process execution, another process can begin writing
– Moral: write protect all mounted media unless it is to be written to
• Example: terminals– Write control sequence to erase screen—send
repeatedly
Smart Terminals
• Has built-in mechanism for performing special functions– Most important one: block send– The sequence of chars initiating block send do not
appear on screen• Write Trojan horse to send command from user’s
terminal• Next slide: example in mail message sent to Craig
– When Craig reads letter, his startup file becomes world writable
Trojan Horse LetterDear Craig,
Please be careful. Someone may ask you to execute
chmod 666 .profile
You shouldn’t do it!
Your friend,
Robert
<BLOCK SEND (-2,18), (-2,18)><BLOCK SEND
(-3,0),(3,18)><CLEAR>
Why So Dangerous?
• With writable terminal, someone must trick user of that terminal into executing command; both attacker and user must enter commands
• With smart terminal, only attacker need enter command; if user merely reads the wrong thing, the attacker’s compromise occurs
Monitors and Window Systems
• Window manager controls what is displayed– Input from input devices– Clients register with manager, can then receive input,
send output through manager• How does manager determine client to get input?
– Usually client in whose window input occurs• Attack: overlay transparent window on screen
– Now all input goes through this window– So attacker sees all input to monitor, including
passwords, cryptographic keys
Access Control
• Use ACLs, C-Lists, etc.• Granularity varies by windowing system• X window system: host name or token
– Host name, called xhost method– Manager determines host on which client runs– Checks ACL to see if host allowed to connect
X Windows Tokens
• Called xauth method– X window manager given random number
(magic cookie)• Stored in file “.Xauthority” in user’s home directory
– Any client trying to connect to manager must supply this magic cookie to succeed
• Local processes run by user can access this file• Remote processes require special set-up by user to
work
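The cookie check itself is a secret comparison, and should be done in constant time; a minimal sketch, assuming the manager holds the cookie in memory (variable and function names are ours):

```python
# Sketch of the xauth-style check: a client must present the manager's
# random magic cookie. Compare in constant time to avoid timing leaks.
import hmac
import secrets

cookie = secrets.token_hex(16)        # manager's randomly chosen magic cookie

def client_may_connect(presented):
    return hmac.compare_digest(presented, cookie)
```

A local client that can read `.Xauthority` passes this check; a remote one fails unless the user has arranged to ship the cookie over.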
Analysis
• Writable devices meet U1, U4– Devnet users have default settings denying all write
access to devices except the user• Smart terminals meet U1, U4
– Drib does not allow use of smart terminals except on systems where all control sequences (such as BLOCK SEND) are shown as printable chars
• Window managers meet U1, U4– Drib uses either xhost or token (xhost by default) on a
trusted network, so IP spoofing not an issue
Process
• Manipulate objects, including files– Policy component U3 requires users to be aware of how
• Copying, moving files• Accidentally overwriting or erasing files• Encryption, keys, passwords• Start-up settings• Limiting privileges• Malicious logic
Copying Files
• Duplicates contents• Semantics determines whether attributes
duplicated– If not, may need to set them to prevent compromise
• Example: Mona Anne copies xyzzy on UNIX system to plugh:
cp xyzzy plugh
– If plugh doesn’t exist, created with attributes of xyzzy except any setuid, setgid discarded; contents copied
– If plugh exists, attributes not altered; contents copied
Moving Files
• Semantics determines attributes• Example: Mona Anne moves xyzzy to /tmp/plugh
– If both on same file system, attributes unchanged– If on different file systems, semantically equivalent to:
cp xyzzy /tmp/plugh
rm xyzzy
Permissions may change …
Accidentally Overwriting Files
• Protect users from themselves• Example: deleting by accident
– Intends to delete all files ending in “.o”; pattern is “*.o”, “*” matching any string
– Should type rm *.o– Instead types rm * .o– All files in directory disappear!
• Use modes to protect yourself– Give –i option to rm to prevent this
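The accident is a shell-expansion effect: the patterns are expanded before rm ever runs. The two command lines expand as follows (fnmatch implements the same glob rules the shell uses):

```python
# Why "rm * .o" deletes everything: the shell expands each word before
# rm runs, so "*" matches every file and ".o" is a separate (probably
# nonexistent) file name -- not part of the pattern.
import fnmatch

files = ["main.c", "main.o", "util.o", "notes.txt"]

print(fnmatch.filter(files, "*.o"))  # what "rm *.o" would remove
print(fnmatch.filter(files, "*"))    # what the "*" in "rm * .o" matches: everything
```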
Encryption
• Must trust system– Cryptographic keys visible in kernel buffers, swap
space, and/or memory– Anyone who can alter programs used to encrypt,
decrypt can acquire keys and/or contents of encrypted files
• Example: PGP, a public key encryption program– Protects private key with an enciphering key (“pass-
phrase”), which user supplies to authenticate file– If keystroke monitor installed on system, attacker gets
pass-phrase, then private key, then message
Saving Passwords
• Some systems allow users to put passwords for programs in files– May require file be read-protected but not use
encryption• Example: UNIX ftp clients
– Users can store account names, host names, passwords in .netrc
– Kathy did so but ftp ignored it– She found file was readable by anyone, meaning her
passwords stored in it were now compromised
Start-Up Settings
• When programs start, often take state info, commands from environment or start-up files– Order of access affects execution
• Example: UNIX command interpreter sh– When it starts, it does the following:
• Read start-up file /etc/profile• Read start-up file .profile in user’s home directory• Read start-up file named in environment variable ENV
– Problem: if any of these files can be altered by untrusted user, sh may execute undesirable commands or enter undesirable state on start
Limiting Privileges
• Users should know which of their programs grant privileges to others– Also the implications of granting these
• Example: Toni reads email for her boss, Fran– Fran knew not to share passwords, so she made a
setuid-to-Fran shell that Toni could use• Bad idea; gave Toni too much power
– On Toni’s suggestion, Fran began to forward to Toni a copy of every letter
• Toni no longer needed access to Fran’s account
Malicious Logic
• Watch out for search paths• Example: Paula wants to see John’s confidential
designs– Paula creates a Trojan horse that copies design files to
/tmp; calls it ls– Paula places copies of this in all directories she can
write to– John changes to one of these directories, executes ls
• John’s search path begins with current working directory– Paula gets her information
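The attack works because command resolution stops at the first directory containing a matching name. A minimal sketch of that lookup (the file-set model is ours, for illustration):

```python
# Why a leading "." in the search path is dangerous: resolution stops
# at the first match, so Paula's Trojan "ls" in the current directory
# shadows the real /bin/ls.
def resolve(cmd, search_path, filesystem):
    for d in search_path:
        candidate = d + "/" + cmd
        if candidate in filesystem:     # first hit wins
            return candidate
    return None

fs = {"./ls", "/bin/ls"}                 # Paula's Trojan plus the real ls
print(resolve("ls", [".", "/bin"], fs))  # -> ./ls  (the Trojan horse runs)
print(resolve("ls", ["/bin"], fs))       # -> /bin/ls  (safe path ordering)
```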
Search Paths
• Search path to locate program to execute• Search path to locate libraries to be
dynamically loaded when program executes• Search path for configuration files• …
Analysis
• Copying, moving files meets U3– Procedures are to warn users about potential problems
• Protections against accidental overwriting and erasing meet U3– Users’ startup files set protective modes on login
• Passwords not being stored unencrypted meets U3– In addition to policy, Drib modified programs that
accept passwords from disk files to ignore those files
Analysis (con’t)
• Publicizing start up procedures of programs meets U3– Startup files created when account created have
restrictive permissions• Publicizing dangers of setuid, giving extra
privileges meets U3– When account created, no setuid/setgid programs
• Default search paths meet U4– None include world writable directories; this includes
symbol for current working directory
Electronic Communications
• Checking for malicious content at firewall can make mistakes– Perfect detectors require solving undecidable problem– Users may unintentionally send out material they should not
• Automated e-mail processing• Failing to check certificates• Sending unexpected content
Automated E-mail Processing
• Be careful it does not automatically execute commands or programs on behalf of other users
• Example: NIMDA worm, embedded in email– When user opens letter, default configuration of mail
passed NIMDA attachment to another program to be displayed
– This executes code comprising worm, thereby infecting system
Failure to Check Certificates
• If certificate invalid or expired, email signed by that certificate may be untrustworthy– Mail readers must check that certificates are valid, or
enable user to determine whether to trust certificate of questionable validity
• Example: Someone obtained certificates under the name of Microsoft– When discovered, issuer immediately revoked both– Had anyone obtained ActiveX applets signed by those
certificates, would have been trusted
Sending Unexpected Content
• Arises when data sent in one format is viewed in another
• Example: sales director sent sales team chart showing effects of proposed reorganization– Spreadsheet also contained confidential information
deleted from spreadsheet but still in the file– Employees used different system to read file, seeing the
spreadsheet data—and also the “deleted” data• Rapid saves often do not delete information, but
rearrange pointers so information appears deleted
Analysis
• Automated e-mail processing meets U4– All programs configured not to execute attachments,
contents of letters• Certificate handling procedures meet U4
– Drib enhanced all mail reading programs to validate certificates as far as possible, and display certificates it could not validate so user can decide how to proceed
• Publicizing problems with risk of “deleted” data meets U4– Also, progams have “rapid saves” disabled by default
Key Points
• Users have policies, although usually informal ones
• Aspects of system use affect security even at the user level– System access issues– File and device issues– Process management issues– Electronic communications issues
Chapter 26: Program Security
• Introduction• Requirements and Policy• Design• Refinement and Implementation• Common Security-Related Programming
Problems• Testing, Maintenance, and Operation• Distribution
Introduction
• Goal: implement program that:– Verifies user’s identity– Determines if change of account allowed– If so, places user in desired role
• Similar to su(1) for UNIX and Linux systems– User supplies his/her password, not target
account’s
Why?
• Eliminate password sharing problem– Role accounts under Linux are user accounts– If two or more people need access, both need role
account’s password
• Program solves this problem– Runs with root privileges– User supplies his/her password to authenticate– If access allowed, program spawns command
interpreter with privileges of role account
Requirements
1. Access to role account based on user, location, time of request
2. Settings of role account’s environment replace corresponding settings of user’s environment, but rest of user’s environment preserved
3. Only root can alter access control information for access to role account
More Requirements
4. Mechanism provides restricted, unrestricted access to role account• Restricted: run only specified commands• Unrestricted: access command interpreter
5. Access to files, directories, objects owned by role account restricted to those authorized to use role account, users trusted to install system programs, root
Threats
• Group 1: Unauthorized user (UU) accessing role accounts1. UU accesses role account as though authorized user2. Authorized user uses nonsecure channel to obtain
access to role account, thereby revealing authentication information to UU
3. UU alters access control information to gain access to role account
4. Authorized user executes Trojan horse giving UU access to role account
Relationships
threat  requirement  notes
1       1, 5         Restricts who can access role account, protects access control data
2       1            Restricts location from where user can access role account
3       3            Restricts change to trusted users
4       2, 4, 5      User’s search path restricted to own or role account; only trusted users, role account can manipulate executables
More Threats
• Group 2: Authorized user (AU) accessing role accounts5. AU obtains access to role account, performs
unauthorized commands6. AU executes command that performs
functions that user not authorized to perform7. AU changes restrictions on user’s ability to
obtain access to role account
Relationships
threat  requirement  notes
5       4            Allows user restricted access to role account, so user can run only specific commands
6       2, 5         Prevent introduction of Trojan horse
7       3            root users trusted; users with access to role account trusted
Design
• Framework for hooking modules together– User interface– High-level design
• Controlling access to roles and commands– Interface– Internals– Storage of access control data
User Interface
• User wants unrestricted access or to run a specific command (restricted access)
• Assume command line interface– Can add GUI, etc. as needed
• Command:
role role_account [ command ]
where
– role_account: name of role account
– command: command to be run (optional)
High-Level Design
1. Obtain role account, command, user, location, time of day• If command omitted, assume command interpreter
(unrestricted access)
2. Check user allowed to access role accounta) at specified location;b) at specified time; andc) for specified command (or without restriction)
If user not, log attempt and quit
High-Level Design (con’t)
3. Obtain user, group information for role account; change privileges of process to role account
4. If user requested specific command, overlay process with command interpreter that spawns named command
5. If user requested unrestricted access, overlay process with command interpreter allowing interactive use
Ambiguity in Requirements
• Requirements 1, 4 do not say whether command selection restricted by time, location– This design assumes it is
• Backups may need to be run at 1AM and only 1AM• Alternate: assume restricted only by user, role; equally
reasonable– Update requirement 4 to be: Mechanism provides
restricted, unrestricted access to role account• Restricted: run only specified commands• Unrestricted: access command interpreter
Level of access (restricted, unrestricted) depends on user, role, time, location
Access to Roles, Commands
• Module determines whether access to be allowed– If it can’t get user, role, location, and/or time,
error; return failure• Interface: controls how info passed between
module, caller• Internal structure: how does module handle
errors, access control data structures
Interface to Module
• Minimize amount of information being passed through interface– Follow standard ideas of information hiding– Module can get user, time of day, location from system– So, need pass only command (if any), role account
name• boolean accessok(rolername, command cmd)
– rname: name of role– cmd: command (empty if unrestricted access desired)– returns true if access granted, false if not (or error)
Internals of Module
• Part 1: gather data to determine if access allowed
• Part 2: retrieve access control information from storage
• Part 3: compare two, determine if access allowed
Part 1
• Required:– user ID: who is trying to access role account– time of day: when is access being attempted
• From system call to operating system– entry point: terminal or network connection– remote host: name of host from which user
accessing local system (empty if on local system)
• These make up location
Part 2
• Obtain handle for access control file– May be called a “descriptor”
• Contents of file is sequence of records:role account
user names
locations from which the role account can be accessed
times when the role account can be accessed
command and arguments
• Can list multiple commands, arguments in 1 record– If no commands listed, unrestricted access
Part 3
• Iterate through access control file
– If no more records
• Release handle
• Return failure
– Check role
• If not a match, skip record (go back to top)
– Check user name, location, time, command
• If any does not match, skip record (go back to top)
– Release handle
– Return success
Storing Access Control Data
• Sequence of records; what should contents of fields be?
– Location: *any*, *local*, host, domain; operators not, or (',')
*local*, control.fixit.com, .watchu.edu
– User: *any*, user name; operators not, or (',')
peter, paul, mary, joan, janis
– Time: *any*, time range
Time Representation
• Use ranges expressed (reasonably) normally
Mon-Thu 9AM-5PM
– Any time between 9AM and 5PM on Mon, Tue, Wed, or Thu
Mon 9AM-Thu 5PM
– Any time between 9AM Monday and 5PM Thursday
Apr 15 8AM-Sep 15 6PM
– Any time from 8AM on April 15 to 6PM on September 15, in any year
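The first range above can be sketched as a small C predicate. This is an illustration written for this note, not the book's code: the function name and the endpoint choice (9AM inclusive, 5PM exclusive) are assumptions, using the `struct tm` day-numbering convention.

```c
#include <time.h>

/* Hypothetical predicate for the range "Mon-Thu 9AM-5PM".
 * wday uses the struct tm convention: 0 = Sunday ... 6 = Saturday.
 * Endpoint choice (an assumption): 9AM inclusive, 5PM (17:00) exclusive. */
int in_mon_thu_9_5(int wday, int hour)
{
    int day_ok  = (wday >= 1 && wday <= 4);   /* Mon .. Thu */
    int hour_ok = (hour >= 9 && hour < 17);   /* 9AM .. 4:59PM */
    return day_ok && hour_ok;
}
```

A real implementation would parse the range string into such bounds and compare against the broken-down current time from localtime().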
Commands
• Command plus arguments shown
/bin/install *
– Execute /bin/install with any arguments
/bin/cp log /var/inst/log
– Copy file log to /var/inst/log
/usr/bin/id
– Run program id with no arguments
• User need not supply path names, but commands used must be the ones with those path names
Refinement and Implementation
• First-level refinement• Second-level refinement• Functions
– Obtaining location– Obtaining access control record– Error handling in reading, matching routines
First-Level Refinement
• Use pseudocode:
boolean accessok(role rname, command cmd);
stat ← false
user ← obtain user ID
timeday ← obtain time of day
entry ← obtain entry point (terminal line, remote host)
open access control file
repeat
  rec ← get next record from file; EOF if none
  if rec ≠ EOF then
    stat ← match(rec, rname, cmd, user, timeday, entry)
until rec = EOF or stat = true
close access control file
return stat
Check Sketch
• Interface right
• Stat (holds status of access control check) false until match made, then true
• Get user, time of day, location (entry)
• Iterates through access control records
– Get next record
– If there was one, sets stat to result of match
– Drops out when stat true or no more records
• Close file, releasing handle
• Return stat
Second-Level Refinement
• Map pseudocode to particular language, system– We’ll use C, Linux (UNIX-like system)– Role accounts same as user accounts
• Interface decisions– User, role ID representation– Commands and arguments– Result
Users and Roles
• May be name (string) or uid_t (integer)
– In access control file, either representation okay
• If bogus name, can't be mapped to uid_t
• Kernel works with uid_t
– So access control part needs to do conversion to uid_t at some point
• Decision: represent all user, role IDs as uid_t
• Note: no design decision relied upon representation of user, role accounts, so no need to revisit any
Commands, Arguments, Result
• Command is program name (string)• Argument is sequence of words (array of
string pointers)• Result is boolean (integer)
Resulting Interface
int accessok(uid_t rname, char *cmd[]);
Second-Level Refinement
• Obtaining user ID• Obtaining time of day• Obtaining location• Opening access control file• Processing records• Cleaning up
Obtaining User ID
• Which identity?
– Effective ID: identifies privileges of process
• Must be 0 (root), so not this one
– Real ID: identifies user running process
userid = getuid();
Obtain Time of Day
• Internal representation is seconds since epoch– On Linux, epoch is Jan 1, 1970 00:00:00
timeday = time(NULL);
Obtaining Location
• System dependent– So we defer, encapsulating it in a function to be
written later
entry = getlocation();
Opening Access Control File
• Note error checking and logging
if ((fp = fopen(acfile, "r")) == NULL) {
    logerror(errno, acfile);
    return(stat);
}
Processing Records
• Internal record format not yet decided
– Note use of functions to delay deciding this

do {
    acrec = getnextacrec(fp);
    if (acrec != NULL)
        stat = match(acrec, rname, cmd, user, timeday, entry);
} while (acrec != NULL && stat != 1);
Cleaning Up
• Release handle by closing file
(void) fclose(fp);
return(stat);
Getting Location
• On login, Linux writes user name, terminal name, time, and name of remote host (if any) in file utmp
• Every process may have associated terminal• To get location information:
– Obtain associated process terminal name– Open utmp file– Find record for that terminal– Get associated remote host from that record
Security Problems
• If any untrusted process can alter utmp file, contents cannot be trusted
– Several security holes came from this
• Process may have no associated terminal
• Design decision: if either is true, return meaningless location
– Unless location in access control file is the *any* wildcard, match fails
getlocation() Outline

hostname getlocation()
myterm ← name of terminal associated with process
obtain utmp file access control list
if any user other than root can alter it then
  return "*nowhere*"
open utmp file
repeat
  term ← get next record from utmp file; EOF if none
  if term ≠ EOF and myterm = term then stat ← true
  else stat ← false
until term = EOF or stat = true
if host field in utmp record = empty then
  host ← "localhost"
else host ← host field of utmp record
close utmp file
return host
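A C rendering of this outline for Linux/glibc might look as follows. It is a sketch under stated assumptions, not the book's code: the stat()-based permission test stands in for the full access-control-list check, and since modern distributions make utmp group-writable (group utmp), this conservative version will often return "*nowhere*" there.

```c
#include <paths.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <utmp.h>

/* Sketch of getlocation() for Linux/glibc (assumptions noted above).
 * Returns a string constant or a pointer to a static buffer;
 * "*nowhere*" means no trustworthy location could be determined. */
const char *getlocation(void)
{
    static char host[UT_HOSTSIZE + 1];
    struct stat st;
    struct utmp *ut;
    const char *result = "*nowhere*";
    const char *myterm = ttyname(STDIN_FILENO);

    if (myterm == NULL)                  /* no associated terminal */
        return "*nowhere*";
    if (strncmp(myterm, "/dev/", 5) == 0)
        myterm += 5;            /* utmp stores "pts/0", not "/dev/pts/0" */

    /* If anyone but root could alter utmp, do not trust its contents */
    if (stat(_PATH_UTMP, &st) < 0 || st.st_uid != 0 ||
        (st.st_mode & (S_IWGRP | S_IWOTH)))
        return "*nowhere*";

    setutent();
    while ((ut = getutent()) != NULL) {
        if (ut->ut_type == USER_PROCESS &&
            strncmp(ut->ut_line, myterm, UT_LINESIZE) == 0) {
            if (ut->ut_host[0] == '\0') {
                result = "localhost";    /* empty remote host field */
            } else {
                strncpy(host, ut->ut_host, UT_HOSTSIZE);
                host[UT_HOSTSIZE] = '\0';
                result = host;
            }
            break;
        }
    }
    endutent();
    return result;
}
```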
Access Control Record
• Consider match routine
– User name is uid_t (integer) internally
• Easiest: require user name to be uid_t in file
• Problems: (1) human-unfriendly; (2) unless binary data recorded, still need to convert
• Decision: in file, user names are strings (names or strings of digits representing integers)
– Location, set of commands are strings internally
• Decision: in file, represent them as strings
Time Representation
• Here, time is an interval
– May 30 means "any time on May 30", or "May 30 12AM-May 31 12AM"
• Current time is integer internally
– Easiest: require time interval to be two integers
– Problems: (1) human-unfriendly; (2) unless binary data recorded, still need to convert
– Decision: in file, time interval represented as string
Record Format
• Here, commands is repeated once per command, and numcommands is number of commands fields

record
  role rname
  string userlist
  string location
  string timeofday
  string commands[]
  …
  string commands[]
  integer numcommands
end record;

• May be able to compute numcommands from record
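One possible C layout for this record; the field names follow the slide, the concrete types are assumptions made for this sketch:

```c
#include <sys/types.h>

/* In-memory form of one access control record (sketch). */
struct acrec {
    uid_t   rname;        /* role account, already mapped to a UID   */
    char   *userlist;     /* comma-separated user names              */
    char   *location;     /* location pattern(s), still string form  */
    char   *timeofday;    /* time interval(s), still string form     */
    char  **commands;     /* one string per command, NULL-terminated */
    int     numcommands;  /* may also be computed from commands[]    */
};
```

Making commands[] NULL-terminated is what would let numcommands be computed from the record instead of stored.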
Error Handling
• Suppose syntax error or garbled record
• Error cannot be ignored
– Log it so system administrator can see it
• Include access control file name, line or record number
– Whether to notify the user, or to tell the user why there is an error, is a different question
• Can just say "access denied"
• If an error message is given, need to include access control file name, line number
– Suggests error, log routines should be part of accessok module
Implementation
• Concern: many common security-related programming problems
– Present management and programming rules
– Use framework for describing problems
• NRL: our interest is technical modeling, not reason for or time of introduction
• Aslam: we want to look at multiple components of vulnerabilities
• Use PA or RISOS; we choose PA
Improper Choice of Initial Protection Domain
• Arise from incorrect setting of permissions or privileges– Process privileges– Access control file permissions– Memory protection– Trust in system
Process Privileges
• Least privilege: no process has more privileges than needed, but each process has the privileges it needs
• Implementation Rule 1:– Structure the process so that all sections
requiring extra privileges are modules. The modules should be as small as possible and should perform only those tasks that require those privileges.
Basis
• Reference monitor
– Verifiable: here, modules are small and simple
– Complete: here, access to privileged resource only possible through privileges, which require program to call module
– Tamperproof: separate modules with well-defined interfaces minimize chances of other parts of program corrupting those modules
• Note: this program, and these modules, are not reference monitors!
– We're approximating reference monitors …
More Process Privileges
• Insufficient privilege: denial of service
• Excessive privilege: attacker could exploit vulnerabilities in program
• Management Rule 1:
– Check that the process privileges are set properly.
Implementation Issues
• Can we have privileged modules in our environment?
– No; this is a function of the OS
– Cannot acquire privileges after start, unless process started with those privileges
• Which role account?
– Non-root: requires separate program for each role account
– Root: one program can handle all role accounts
Program and Privilege
• Program starts with root privileges
• Access control module called
– Needs these privileges to read access control file
• Privileges released
– But they can be reacquired …
• Privileges reacquired for switch to role account
– Because root can switch to any user
• Key points: privileges acquired only when needed, and relinquished once immediate task is complete
Access Control File Permissions
• Integrity of process relies upon integrity of access control file
• Management Rule 2:– The program that is executed to create the
process, and all associated control files, must be protected from unauthorized use and modification. Any such modification must be detected.
Program and File
• Program checks integrity of access control file whenever it runs
• Check dependencies, too– If access control file depends on other external
information (like environment variables, included files, etc.), check them
– Document these so maintainers will know what they are
Permissions
• Set these so only root can alter, move program, access control file
• Implementation Rule 2:– Ensure that any assumptions in the program
are validated. If this is not possible, document them for the installers and maintainers, so they know the assumptions that attackers will try to invalidate.
UNIX Implementation
• Checking permissions: 3 steps
– Check root owns file
– Check no group write permission, or that root is the single member of the group owning the file
• Check list of members of that group first
• Check password file next, to ensure no other users have primary GID the same as the group; these users need not be listed in group file to be group members
– Check no world read, write permission
Memory Protection
• Shared memory: if two processes have access, one can change data other relies upon, or read data other considers secret
• Implementation Rule 3– Ensure that the program does not share
objects in memory with any other program, and that other programs cannot access the memory of a privileged process.
Memory Management
• Don’t let data be executed, or constants change– Declare constants in program as const– Turn off execute permission for data pages/segments– Do not use dynamic loading
• Management Rule 3:– Configure memory to enforce the principle of least
privilege. If a section of memory is not to contain executable instructions, turn execute permission off for that section of memory. If the contents of a section of memory are not to be altered, make that section read-only.
Trust
• What does program trust?
– System authentication mechanisms to authenticate users
– UINFO to map users, roles into UIDs
– Inability of unprivileged users to alter system clock
• Management Rule 4:– Identify all system components on which the
program depends. Check for errors whenever possible, and identify those components for which error checking will not work.
Improper Isolation of Implementation Detail
• Look for errors, failures of mapping from abstraction to implementation
– Usually come out in error messages
• Implementation Rule 4:– The error status of every function must be checked.
Do not try to recover unless the cause of the error, and its effects, do not affect any security considerations. The program should restore the state of the system to the state before the process began, and then terminate.
Resource Exhaustion,User Identifiers
• Role, user are abstractions
– The system works with UIDs
• How is mapping done?
– Via user information database
• What happens if mapping can't be made?
– In one mail server, a default user was returned; so, by arranging that the mapping failed, anyone could have mail appended to any file to which the default user could write
– Better: have program fail
Validating Access Control Entries
• Access control file data implements constraints on access
– Therefore, it's a mapping of abstraction to implementation
• Develop second program using same modules as first
– Prints information in easy-to-read format
– Must be used after each change to file, to verify change does what was desired
– Periodic checks too
Restricting Protection Domain
• Use overlays rather than spawning child
– Overlays replace original protection domain with that of overlaid program
– Programmer closes all open files, resets signal handlers, changes privileges to those of role
– Potential problem: saved UID, GID
• When privileges dropped in the usual way, process can regain them because original UID is saved; this is how privileges are restored
• Use setuid system call to block this; it changes saved UID too
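The permanent switch can be sketched as below (the helper name is an assumption). setuid(), unlike seteuid(), also resets the saved UID when the caller is root, so the old privileges cannot be reacquired; the group must be changed first, while the process still has the privilege to do so.

```c
#include <sys/types.h>
#include <unistd.h>

/* Permanently enter the role account (sketch). */
int switch_to_role(uid_t role_uid, gid_t role_gid)
{
    if (setgid(role_gid) < 0)   /* groups first, while still privileged */
        return -1;
    if (setuid(role_uid) < 0)   /* resets real, effective, and saved UID */
        return -1;
    return 0;
}
```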
Improper Change
• Data that changes unexpectedly or erroneously
• Memory• File contents• File/object bindings
Memory
• Synchronize interactions with other processes
• Implementation Rule 5:– If a process interacts with other processes,
the interactions should be synchronized. In particular, all possible sequences of interactions must be known and, for all such interactions, the process must enforce the required security policy.
More Memory
• Asynchronous exception handlers: may alter variables, state
– Much like a concurrent process
• Implementation Rule 6:
– Asynchronous exception handlers should not alter
any variables except those that are local to the exception handling module. An exception handler should block all other exceptions when begun, and should not release the block until the handler completes execution, unless the handler has been designed to handle exceptions within itself (or calls an uninvoked exception handler).
Buffer Overflows
• Overflow itself is not the problem
• Changes to variables, state caused by the overflow are the problem
– Example: fingerd overflow changes return address to return into stack
• Fix at compiler level: put random number between buffer and return address; check it before return address used
– Example: login program that stored unhashed, hashed password in adjacent arrays
• Enter any 8-char password, hit space 72 times, enter hash of that password, and system authenticates you!
Problem
• Trusted data can be affected by untrusted data
– Trusted data: return address, hash loaded from password file
– Untrusted data: anything user reads
• Implementation Rule 7:– Whenever possible, data that the process trusts and
data that it receives from untrusted sources (such as input) should be kept in separate areas of memory. If data from a trusted source is overwritten with data from an untrusted source, a memory error will occur.
Our Program
• No interaction except through exception handling
– Implementation Rule 5 does not apply
• Exception handling: disable further exception handling, log exception, terminate program
– Meets Implementation Rule 6
• Do not reuse variables used for data input; ensure no buffers overlap; check all array, pointer references; any out-of-bounds reference invokes exception handler
– Meets Implementation Rule 7
File Contents
• If access control file changes, either:
– File permissions set wrong (Management Rule 2)
– Multiple processes sharing file (Implementation Rule 5)
• Dynamic loading: routines not part of executable, but loaded from libraries when program needs them
– Note: these may not be the original routines …
• Implementation Rule 8:– Do not use components that may change between
the time the program is created and the time it is run.
Race Conditions
• Time-of-check-to-time-of-use (TOCTTOU) problem
– Issue: don't want file to change after validation but before access
– UNIX file locking is advisory, so can't depend on it
• How we deal with this:
– Open file, obtaining file descriptor
– Obtain status information using file descriptor
– Validate file access
• UNIX semantics assure this is the same as for the open file object; no change possible
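A minimal sketch of the open-then-validate pattern: the checks run on the file descriptor via fstat(), which describes the open file object itself, so renaming or replacing the path cannot change what will be read. The function name and the specific checks shown are placeholders for the full validation.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Open first, validate via the descriptor (sketch).
 * Returns an open descriptor on success, -1 on failure. */
int open_checked(const char *path)
{
    struct stat st;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    if (fstat(fd, &st) < 0 || !S_ISREG(st.st_mode)) {
        close(fd);          /* not a regular file: reject */
        return -1;
    }
    /* further checks (owner root, restrictive mode) would go here */
    return fd;
}
```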
Improper Naming
• Ambiguity in identifying object
• Names interpreted in context
– Unique objects cannot share names within available context
– Interchangeable objects can, provided they are truly interchangeable
• Management Rule 5:– Unique objects require unique names.
Interchangeable objects may share a name.
Contexts
• Program must control context of interpretation of name
– Otherwise, the name may not refer to the expected object
• Example: loadmodule problem
– Dynamically searched for, loaded library modules
– Executed program ld.so with superuser privileges to do this
– Default context: use "/bin/ld.so" (system one)
– Could change context to use "/usr/anyone/ld.so" (one with a Trojan horse)
Example
• Context includes:
– Character set composing name
– Process, file hierarchies
– Network domains
– Customizations such as search path
– Anything else affecting interpretation of name
• Implementation Rule 9:– The process must ensure that the context in which
an object is named identifies the correct object.
Sanitize or Not?
• Replace context with known, safe one on start-up
– Program controls interpretation of names now
• File names (access control file, command interpreter program)
– Use absolute path names; do not create any environment variables affecting interpretation
• User, role names
– Assume system properly maintained, so no problems
• Host names
– No domain part means local domain
Improper Deallocation, Deletion
• Sensitive information can be exposed if object containing it is reallocated
– Erase data, then deallocate
• Implementation Rule 10:– When the process finishes using a sensitive object
(one that contains confidential information or one that should not be altered), the object should be erased, then deallocated or deleted. Any resources not needed should also be released.
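The erase-then-deallocate rule, sketched in C. The volatile pointer is a precaution added here: a plain memset() immediately before free() may legally be optimized away as a dead store; glibc's explicit_bzero() exists for the same purpose.

```c
#include <stdlib.h>

/* Overwrite a sensitive buffer (sketch). volatile keeps the
 * stores from being optimized away as "dead" writes. */
void erase(void *buf, size_t len)
{
    volatile unsigned char *p = buf;
    while (len--)
        *p++ = 0;
}

/* Erase first, then deallocate. */
void erase_and_free(void *buf, size_t len)
{
    erase(buf, len);
    free(buf);
}
```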
Our Program
• Cleartext password for user
– Once hashed, overwritten with random bytes
• Access control information
– Close file descriptor before command interpreter overlaid
• Because file descriptors can be inherited, and data from corresponding files read
• Log file
– Close log file before command interpreter overlaid
• Same reasoning, but for writing
Improper Validation
• Something not checked for consistency or correctness
– Bounds checking
– Type checking
– Error checking
– Checking for valid, not invalid, data
– Checking input
– Designing for validation
Bounds Checking
• Indices: off-by-one, signed vs. unsigned
• Pointers: no good way to check bounds automatically
• Implementation Rule 11:
– Ensure that all array references access existing elements of the array. If a function that manipulates arrays cannot ensure that only valid elements are referenced, do not use that function. Find one that does, write a new version, or create a wrapper.
Our Program
• Use loops that check bounds in our code
• Library functions: understand how they work
– Example: copying strings
• In C, string is sequence of chars followed by NUL byte (byte containing 0)
• strcpy never checks bounds; too dangerous
• strncpy checks bounds against parameter; danger is not appending terminal NUL byte
– Example: input user string into buffer
• gets reads, loads until newline encountered
• fgets reads, loads until newline encountered or a specific number of characters are read
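A common way to live with the strncpy pitfall noted above is a small wrapper that always terminates (the wrapper name is an assumption; strlcpy plays this role on the BSDs):

```c
#include <string.h>

/* Bounded copy that always NUL-terminates (sketch).
 * strncpy itself appends no NUL when the source fills the buffer. */
void safe_copy(char *dst, size_t dstlen, const char *src)
{
    strncpy(dst, src, dstlen - 1);
    dst[dstlen - 1] = '\0';     /* strncpy may not have done this */
}
```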
Type Checking
• Ensure arguments, inputs, and such are of the right type
– Interpreting floating point as integer, or shorts as longs
• Implementation Rule 12:
– Check the types of functions and parameters.
Compilers
• Most compilers can do this
– Declare functions before use; specify types of arguments, result so compiler can check
– If compiler can't do this, usually other programs can: use them!
• Management Rule 6:
– When compiling programs, ensure that the compiler flags report inconsistencies in types. Investigate all such warnings and either fix the problem or document the warning and why it is spurious.
Error Checking
• Always check return values of functions for errors
– If function fails, and program accepts result as legitimate, program may act erroneously
• Implementation Rule 13:
– Check all function and procedure executions for errors.
Our Program
• Every function call, library call, system call has return value checked unless return value doesn't matter
– In some cases, return value of close doesn't matter, as program exits and file is closed
– Here, only true on denial of access or error
• On success, program overlays another program, and files must be closed before that overlay occurs
Check for Valid Data
• Know what data is valid, and check for it
– Do not check for invalid data unless you are certain all other data will be valid for as long as the program is used!
• Implementation Rule 14:– Check that a variable’s values are valid.
Example
• Program executed commands in a very restrictive environment
– Only programs from list could be executed
• Scanned commands looking for metacharacters before passing them to shell for execution
– Old shell: '`' ordinary character
– New shell: '`x`' means "run program x, and replace `x` with the output of that program"
• Result: you could execute any command
Our Program
• Checks that command being executed matches authorized command
– Rejects anything else
• Problem: can allow all users except a specific set to access a role (keyword "not")
– Added because on one key system, only system administrators and 1 or 2 trainees used it
– Used on that system, but recommended against on all other systems
Handling Trade-Off
• Decision that weakened security made to improve useability– Document it and say why
• Management Rule 7:– If a trade-off between security and other factors
results in a mechanism or procedure that can weaken security, document the reasons for the decision, the possible effects, and the situations in which the compromise method should be used. This informs others of the trade-off and the attendant risks.
Checking Input
• Check all data from untrusted sources
– Users are untrusted sources
• Implementation Rule 15:– Check all user input for both form and
content. In particular, check integers for values that are too big or too small, and check character data for length and valid characters.
Example
• Setting variables while printing
– i contains 2, j contains 21
printf("%d %d%n %d\n%n", i, j, &m, i, &n);
stores 4 in m and 7 in n
• Format string attack
– User string input stored in str, then
printf(str)
User enters "log%n", overwriting some memory location with 3
• If attacker can figure out where that location is, attacker can change the value in that memory location to any desired value
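The %n behavior above can be checked directly (a small demo written for this note, not the book's code); the last line shows the safe idiom for printing user-supplied text:

```c
#include <stdio.h>

/* Reproduce the slide's example: after "2 21" four characters have
 * been printed (stored in *m); after " 2\n", seven (stored in *n). */
void demo_percent_n(int *m, int *n)
{
    int i = 2, j = 21;

    *m = *n = 0;
    printf("%d %d%n %d\n%n", i, j, m, i, n);

    /* Safe way to print user input: it is data, never a format */
    const char *str = "log%n";
    printf("%s\n", str);
}
```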
Designing for Validation
• Some validations impossible due to structure of language or other factors
– Example: in C, can test for NULL pointer, but not for a valid pointer (unless "valid" means "not NULL")
• Design, implement data structures in such a way that they can be validated
• Implementation Rule 16:– Create data structures and functions in such a way
that they can be validated.
Access Control Entries
• Syntax of file designed to allow for easy error detection:

role name
  users comma-separated list of users
  location comma-separated list of locations
  time comma-separated list of times
  command command and arguments
  …
  command command and arguments
endrole

• Performs checks on data as appropriate
– Example: each listed time is a valid time, etc.
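The easy-error-detection claim can be illustrated with a framing check (a sketch written for this note, not the book's code): an entry must open with a role line and close with endrole before any field-level validation runs.

```c
#include <string.h>

/* Check the framing of one entry: first line "role <name>",
 * last line "endrole" (sketch; field checks would follow). */
int entry_framed(const char *lines[], int nlines)
{
    if (nlines < 2)
        return 0;
    if (strncmp(lines[0], "role ", 5) != 0)
        return 0;
    return strcmp(lines[nlines - 1], "endrole") == 0;
}
```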
Improper Indivisibility
• Operations that should be indivisible are divisible
– TOCTTOU race conditions, for example
– Exceptions can break single statements/function calls, etc. into 2 parts as well
• Implementation Rule 17:– If two operations must be performed sequentially
without an intervening operation, use a mechanism to ensure that the two cannot be divided.
Our Program
• Validation, then open, of access control file
– Method 1: do access check on file name, then open it
• Problem: if attacker can write to directory in full path name of file, attacker can switch files after validation but before opening
– Method 2 (program uses this): open file, then before reading from it do access check on file descriptor
• As check is done on open file, and file descriptor cannot be switched to another file unless closed, this provides protection
– Method 3 (not implemented): do it all in the kernel as part of the open system call!
Improper Sequencing
• Operations performed in incorrect order
• Implementation Rule 18:
– Describe the legal sequences of operations on a resource or object. Check that all possible sequences of the program(s) involved match one (or more) legal sequences.
Our Program
• Sequence of operations follows proper order:
– User authenticated
– Program checks access
– If allowed:
• New, safe environment set up
• Command executed in it
• When dropping privileges, note ordinary user cannot change groups, but root can
– Change group to that of role account
– Change user to that of role account
Improper Choice ofOperand or Operation
• Erroneous selection of operation or operand
• Example: su used to access root account
– Requires user to know root password
– If no password file, cannot validate entered password
– One program assumed no password file if it couldn't open it, and gave user root access to fix the problem
• Attacker: open all file descriptors possible, then spawn su; as open file descriptors are inherited, su couldn't open any files, not even the password file
• Improper operation: should have checked whether there was no password file or no available file descriptors
Assurance
• Use assurance techniques
– Document purpose, use of each function
– Check algorithm, calls
• Management Rule 8:– Use software engineering and assurance
techniques (such as documentation, design reviews, and code reviews) to ensure that operations and operands are appropriate.
Our Program
• Granting access
– Only when entry matches all characteristics of current session
• When characteristics match, verify access control module returns true
• Check that when module returns true, program grants access, and when module returns false, denies access
• Consider UID (type uid_t, an unsigned integer)
– Check that it can be treated as an integer
• If comparing signed and unsigned, the signed value is converted to unsigned; check there are no comparisons with negative numbers
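A minimal sketch of the conversion hazard (all function names are illustrative):

```c
#include <limits.h>

/* In a mixed comparison the signed operand is converted to unsigned,
   so a "negative" UID silently becomes a huge positive one. */
unsigned int as_uid(int requested)
{
    return (unsigned int)requested;   /* -1 becomes UINT_MAX */
}

/* A guard like this can never fire once the value is unsigned: */
int is_negative_uid(unsigned int uid)
{
    return uid < 0;   /* always 0: unsigned values are never negative */
}

/* Safe version: check the signed value before it is ever converted. */
int is_negative_requested(int requested)
{
    return requested < 0;
}
```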
Our Program (con’t)
• Consider location
– Check that match routine correctly determines whether location passed in matches pattern in location field of access control entries, and module acts appropriately
• Consider time (type time_t)
– Check module interprets time as a range
– Example: 9AM means 09:00:00—09:59:59, not exactly 09:00:00
• If interpreted as exactly 09:00:00, almost impossible for user to hit exact time, effectively disabling the entry; violates Requirement 4
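The range interpretation can be sketched as follows (in_hour_utc is a hypothetical helper; UTC is used only to keep the example deterministic, while the real module would apply local-time rules):

```c
#include <time.h>

/* Treat an hour specification as the range HH:00:00–HH:59:59:
   any second within the hour matches, not just the exact instant. */
int in_hour_utc(time_t when, int hour)
{
    struct tm tm_buf;
    gmtime_r(&when, &tm_buf);
    return tm_buf.tm_hour == hour;
}
```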
Our Program (con’t)
• Signal handlers
– Signal indicates an error in program, or a request from user to terminate
– Signal should terminate program
– If program tries to recover, continues to run, and grants access to role account, either it continued in face of error, or it overrode user’s attempt to terminate program
• Either way, choice of improper operation
Summary
• Approach differs from using a checklist of common vulnerabilities
• Approach is a design approach
– Apply it at each level of refinement
– Emphasizes documentation, analysis, understanding of program, interfaces, execution environment
– Documentation will help other analysts, or folks moving program to a new system with a different environment
Testing
• Informal validation of design, operation of program
– Goal: show program meets stated requirements
– If requirements drive design and implementation, then testing likely to uncover minor problems
– If requirements ill posed, or change during development, testing may uncover major problems
• In this case, do not add features to meet requirements! Redesign and reimplement …
Process
• Construct environment matching production environment
– If range of environments, need to test in all
• Usually considerable overlap, so not so bad …
– If repeated failures, check developer assumptions
• May have embedded information about development environment—one different from the testing environment!
Steps
• Begin with requirements
– Appropriate?
– Does it solve the problem?
• Proceed to design
– Decomposition into modules allows testing of each module, with stubs taking the place of uncompleted modules
• Then to implementation
– Test each module
– Test interfaces (composition of modules)
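Stub-based module testing can be sketched like this (all names are hypothetical; the module takes its dependency as a function pointer so a stub can be injected in place of an uncompleted access-check module):

```c
#include <stddef.h>
#include <string.h>

/* Module under test, parameterized on its access-check dependency. */
int may_assume_role(const char *user, const char *role,
                    int (*check)(const char *, const char *))
{
    if (user == NULL || role == NULL)
        return 0;                 /* never grant on missing data */
    return check(user, role);
}

/* Stub: canned answers, no real access control logic. */
int check_access_stub(const char *user, const char *role)
{
    return strcmp(user, "testuser") == 0 && strcmp(role, "bin") == 0;
}
```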
Philosophy
• Execute all possible paths of control
– Compare results with expected results
• In practice, infeasible
– Analyze paths, order them in some way
• Order depends on requirements
– Generate test data for each one, check each
• Security testing: also test least commonly used paths
– Usually not as well checked, so vulnerabilities are missed
• First check modules, then check composition
Testing Module
• Goal: ensure module acts correctly
– If it calls functions, it must act correctly regardless of what those functions return
• Step 1: define “correct behavior”
– Done during refinement, when module specified
• Step 2: list interfaces to module
– Use this to execute tests
Types of Tests
• Normal data tests
– Unexceptional data
– Exercise as many paths of control as possible
• Boundary data tests
– Test limits to interfaces
– Example: if string is at most 256 chars, try 255, then 256, then 257 chars
– Example: in our program, try UID of –1 in parameter list
• Is it rejected, or remapped to 2³²–1 or 2¹⁶–1?
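The 255/256/257 pattern can be sketched as follows (copy_name and its limit are hypothetical):

```c
#include <string.h>

enum { NAME_LIMIT = 256 };   /* documented interface limit (example) */

/* Accepts names up to NAME_LIMIT characters; returns 0 on success,
   -1 when the boundary is exceeded. */
int copy_name(char dst[NAME_LIMIT + 1], const char *src)
{
    if (strlen(src) > NAME_LIMIT)
        return -1;
    strcpy(dst, src);
    return 0;
}

/* Boundary-test helper: exercise copy_name with a name of `len` 'a's. */
int try_name_of_length(size_t len)
{
    char src[NAME_LIMIT + 8];
    char dst[NAME_LIMIT + 1];
    if (len >= sizeof src)
        return -1;
    memset(src, 'a', len);
    src[len] = '\0';
    return copy_name(dst, src);
}
```

Testing exactly at, just below, and just above the limit is what distinguishes boundary tests from normal data tests.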
Types of Tests (con’t)
• Exception tests
– How does module handle interrupts, traps?
– Example: send program a signal to cause a core dump, see if passwords are visible in that file
• Random data tests
– Give module data generated randomly
– Module should fail but restore system to safe state
– Example: in one study of UNIX utilities, 30% crashed when given random inputs
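A toy version of such a random-data test (parse_ok and random_test are illustrative; a real harness would drive the actual module):

```c
#include <ctype.h>
#include <stdlib.h>

/* Toy routine under test: accepts only printable ASCII, rejects
   everything else cleanly — it fails, but never crashes. */
int parse_ok(const unsigned char *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!isprint(buf[i]))
            return 0;
    return 1;
}

/* Random-data driver: returns 1 if every run produced a sane answer.
   A crash here would be exactly the failure the UNIX-utilities study
   observed. */
int random_test(unsigned int seed, int runs)
{
    unsigned char buf[64];
    srand(seed);
    for (int r = 0; r < runs; r++) {
        size_t n = (size_t)(rand() % (int)sizeof buf);
        for (size_t i = 0; i < n; i++)
            buf[i] = (unsigned char)(rand() & 0xff);
        int v = parse_ok(buf, n);
        if (v != 0 && v != 1)
            return 0;
    }
    return 1;
}
```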
Testing Composed Modules
• Consider module that calls other modules
• Error handling tests
– Assume called modules violate specifications
– See if this module violates its specification
• Example: logging via mail program
– Program logs connecting host by mail:
mail -s hostname netadmin
– Gets host name by mapping IP address using DNS
– DNS has fake record: hi nobody; rm -rf *; true
– When mail command executed, deletes files
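The underlying fix is to keep untrusted data out of the shell entirely. A sketch (run_logger and its argument list are illustrative; the test uses /bin/true in place of a mailer):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run `prog` with the untrusted host name as a single argv element.
   Because no shell parses the command line, "hi nobody; rm -rf *; true"
   is just an odd-looking argument, not three commands. */
int run_logger(const char *prog, const char *hostname)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        char *argv[] = { "mail", "-s", (char *)hostname,
                         "netadmin", NULL };
        execv(prog, argv);
        _exit(127);           /* exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```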
Testing Program
• Testers assemble program, documentation
• New tester follows instructions to install, configure program, and tries it
– This tester should not be associated with other testers, so they can provide an independent assessment of documentation, correctness of instructions
• Problems may be with documentation, installation program or scripts, or program itself
Distribution
• Place program, documentation in repository where only authorized people can alter it and from where it can be sent to recipients
• Several factors affect how this is done
Factors
• Who can use this program?
– Licensed to organization: tie each copy to the organization so it cannot be redistributed
• How can availability be ensured?
– Physical means: distribute via CD-ROM, for example
• Mail, messenger services control availability
– Electronic means: via ftp, web, etc.
• Ensure site is available
Factors (con’t)
• How to protect integrity of master copy?
– Attacker changing distribution copy can attack everyone who gets it
– Example: tcp_wrappers altered at repository to include backdoor; 59 hosts compromised when they downloaded and installed it
– Damages credibility of vendor
– Customers may disbelieve vendors when warned
Key Points
• Security in programming best done by mimicking high assurance techniques
• Begin with requirements analysis and validation
• Map requirements to design
• Map design to implementation
– Watch out for common vulnerabilities
• Test thoroughly
• Distribute carefully