
Trusted Operating Systems

– What makes an OS “secure” or trustworthy?

– Security principles of a trusted OS.

– How can we provide “assurance” that an OS is trustworthy?

© Most of the material in these slides is taken verbatim from Pfleeger (the textbook)

Trustworthy OS

• An OS is trusted if it provides:
– Memory protection
– General object access control
– User authentication

• In a consistent and effective manner.

• Why trusted OS, why not secure OS?

“Secure” Vs. “Trust”

• The word “secure” reflects a dichotomy:
– Something is either secure or not secure.

• “Trust”, on the other hand, allows for approximation. E.g., trust implies the system meets current security requirements (it cannot speak to the future).

Security Policies

• Before we can determine if an OS is trusted:– We must state a policy.

• Security policy: statement of the security we expect the system to enforce.

– Define formal models that tell us the conditions to assure the policy succeeds.

Simple security policies

• Some policies we already looked at:
– Confidentiality
– Integrity
– Availability

• Some more…
– Military security policy: “need to know”.

Figure 5-1  Hierarchy of Sensitivities (from least to most sensitive).

Military security policy

Figure 5-2  Compartments and Sensitivity Levels.

Compartments in a Military security policy.

Figure 5-3  Association of Information and Compartments.

A single piece of information may belong to multiple compartments.

Terms

• Information falls under different degrees of sensitivity:
– Unclassified to top secret.
– Each sensitivity is assigned a rank. E.g., unclassified has rank 0.

• Need to know: enforced using compartments.
– E.g., a particular project may need to use information which is both top secret and secret. Solution: create a compartment to cover the information in both.
– A compartment may include information across multiple sensitivity levels.

• Clearance: a person seeking access to sensitive information must be cleared. Clearance is expressed as a combination: <rank; compartments>

Dominance relation

• Consider a subject s and an object o.
– s <= o if and only if:
• rank_s <= rank_o, and
• compartments_s ⊆ compartments_o

– E.g., a subject can read an object only if:
• The clearance level of the subject is at least as high as that of the information, and
• The subject has a need to know about all compartments for which the information is classified.

• E.g., information <secret, {Sweden}> can be read by someone with clearance <top_secret, {Sweden}> or <secret, {Sweden}>, but not by <top_secret, {Crypto}>
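The dominance check can be sketched in a few lines of Python (the rank table and the tuple encoding of labels here are illustrative, not from the slides):

```python
# Sketch of the dominance relation for military labels (assumed names).
RANK = {"unclassified": 0, "restricted": 1, "confidential": 2,
        "secret": 3, "top_secret": 4}

def dominates(subject, obj):
    """True if the subject's clearance dominates the object's label:
    rank_o <= rank_s and compartments_o is a subset of compartments_s."""
    rank_s, comps_s = subject
    rank_o, comps_o = obj
    return RANK[rank_o] <= RANK[rank_s] and set(comps_o) <= set(comps_s)

# The slide's example: <secret, {Sweden}> is readable by...
info = ("secret", {"Sweden"})
print(dominates(("top_secret", {"Sweden"}), info))  # True
print(dominates(("secret", {"Sweden"}), info))      # True
print(dominates(("top_secret", {"Crypto"}), info))  # False: wrong compartment
```

Read access is then granted exactly when the subject's clearance dominates the object's label.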

Military policy summary

Enforce both sensitivity requirements (hierarchical in nature) and need-to-know restrictions (non-hierarchical).

Commercial Security Policies

What are some of the needs of a commercial policy?

Figure 5-4  Commercial View of Sensitive Information.

Commercial security policies

Example: Chinese Wall Policy

Addresses the needs of commercial organizations: legal, medical, investment, and accounting firms. Key protection: conflict of interest.

Abstractions: Objects: elementary objects such as files.

Company groups: at the next level, all objects concerning a particular company are grouped together.

Conflict classes: all groups of objects for competing companies are clustered together.

Figure 5-5  Chinese Wall Security Policy.

Simple policy

A person can access information from a company as long as the person has never accessed information from another company in the same conflict class.
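A minimal sketch of this rule, assuming made-up companies and conflict classes:

```python
# Chinese Wall check (hypothetical companies; classes are illustrative).
CONFLICT_CLASS = {
    "BankA": "banks", "BankB": "banks",
    "OilX": "oil", "OilY": "oil",
}

history = {}  # person -> set of companies already accessed

def may_access(person, company):
    """Allowed unless the person has accessed a *different* company
    in the same conflict class; allowed accesses are recorded."""
    cls = CONFLICT_CLASS[company]
    for seen in history.get(person, set()):
        if CONFLICT_CLASS[seen] == cls and seen != company:
            return False
    history.setdefault(person, set()).add(company)
    return True

print(may_access("alice", "BankA"))  # True
print(may_access("alice", "OilX"))   # True: different conflict class
print(may_access("alice", "BankB"))  # False: conflicts with BankA
```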

So far… addressed confidentiality not integrity.

In commercial environments, integrity is important as well.

E.g., suppose a university wants to purchase some equipment. How does it do it?
(i) The purchasing clerk creates an order for supply and sends copies to the supplier and the receiving department.
(ii) The supplier ships the goods to the receiving department. The receiving clerk checks the delivery against the order from (i), signs the delivery form, and forwards it to accounting.
(iii) The accounting clerk compares the invoice with the original order (to check the price and other terms), and only then issues a check.

In the above transaction, the order was important! Why?

Clark Wilson Model

Defines a triple for every operation: <userID, transformationProcedure, {CDIs…}>

userID: the person who can perform the operation.
transformationProcedure: performs only certain operations depending on the data. E.g., writeACheck runs only if the data’s integrity is maintained.
CDIs (constrained data items): data items with certain attributes. E.g., when the receiving clerk sends the delivery form to the accounting clerk, the delivery form has already been “checked” by the receiving clerk. Think of these as “stamps” of approval.

Clark Wilson Model

But such triples, called “well-formed transactions”, are not sufficient: we also need to “separate responsibilities”.

In the Clark Wilson model, separation of duties is accomplished by means of dual signatures.
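A toy sketch of the model, with hypothetical users, procedures, and CDIs; the separation-of-duty check stands in for the dual signatures:

```python
# Toy Clark-Wilson sketch (names are made up for illustration).
# Access triples: (userID, transformation procedure, CDI it may touch).
TRIPLES = {
    ("receiving_clerk", "sign_delivery", "delivery_form"),
    ("accounting_clerk", "write_check", "delivery_form"),
}

def run(user, tp, cdi, log):
    """Only operations authorized by a triple may execute."""
    if (user, tp, cdi) not in TRIPLES:
        raise PermissionError(f"{user} may not run {tp} on {cdi}")
    log.append((user, tp))

def separation_of_duty(log):
    """No single user performed every step of the transaction."""
    users = {u for u, tp in log}
    return len(users) >= 2

log = []
run("receiving_clerk", "sign_delivery", "delivery_form", log)
run("accounting_clerk", "write_check", "delivery_form", log)
print(separation_of_duty(log))  # True
```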

Security Models

While policies tell us what we want…. models tell us formally what conditions we need to enforce in order to achieve a policy.

We study models for various reasons:
(i) Test a particular policy for completeness and consistency.
(ii) Document a policy.
(iii) Help conceptualize and design an implementation.
(iv) Check whether an implementation meets its requirements.

Example models

(i) Bell LaPadula Model: enforces confidentiality.
(ii) Biba Model: enforces integrity.

To understand these, we study a structure called a lattice. A lattice is a partial ordering in which every pair of elements has a least upper bound and a greatest lower bound.

E.g., the military model is a lattice: <secret, {Sweden}> and <secret, {France}> have a least upper bound and a greatest lower bound.
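For the label lattice, the least upper bound takes the maximum rank and the union of compartments, and the greatest lower bound takes the minimum rank and the intersection. A sketch (the rank ordering is assumed):

```python
# LUB and GLB of military labels (rank ordering assumed for illustration).
RANK = ["unclassified", "confidential", "secret", "top_secret"]

def lub(a, b):
    (ra, ca), (rb, cb) = a, b
    # max rank, union of compartments
    return (RANK[max(RANK.index(ra), RANK.index(rb))], ca | cb)

def glb(a, b):
    (ra, ca), (rb, cb) = a, b
    # min rank, intersection of compartments
    return (RANK[min(RANK.index(ra), RANK.index(rb))], ca & cb)

x = ("secret", {"Sweden"})
y = ("secret", {"France"})
print(lub(x, y))  # rank secret, compartments {Sweden, France}
print(glb(x, y))  # rank secret, empty compartment set
```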

Figure 5-6  Sample Lattice.

Bell LaPadula Model for Confidentiality

Tells us what conditions must be met to satisfy confidentiality and implement multi-level security policies (e.g., military policies).

Consider a security system with the following properties:
(i) The system contains a set of subjects S.
(ii) A set of objects O.
(iii) Each subject s in S and each object o in O has a fixed security class (C(s), C(o)). In military security, examples of classes: secret, top secret, etc.
(iv) Security classes are ordered by the <= relation.

Bell LaPadula Model for Confidentiality (2)

Properties:

(1) Simple security property: A subject s may have read access to an object o, only if C(o) <= C(s).

Is this property enough to achieve confidentiality? Why or why not?

Bell LaPadula Model for Confidentiality (2)

Properties:

(2) (* property) – read this as the star property

A subject s who has read access to an object o, may have write access to an object p only if C(o) <= C(p).

Why was this needed?

Figure 5-8  Subject, Object, and Rights.

Need for the two properties: definition of subjects, objects, and access rights. E.g., s can “r” (read) object o.

Figure 5-7  Secure Flow of Information.

Bell LaPadula: read down, write up.
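Both Bell-LaPadula properties can be sketched with integer security classes (an assumption for brevity; real labels also carry compartments):

```python
# Bell-LaPadula checks with integer security classes (sketch).
def may_read(c_subject, c_object):
    # simple security property: read down only
    return c_object <= c_subject

def may_write(c_read_objects, c_target):
    # *-property: may write to p only if every object read satisfies C(o) <= C(p)
    return all(c <= c_target for c in c_read_objects)

SECRET, TOP_SECRET = 3, 4
print(may_read(TOP_SECRET, SECRET))     # True: read down
print(may_read(SECRET, TOP_SECRET))     # False: no read up
print(may_write([SECRET], TOP_SECRET))  # True: write up
print(may_write([TOP_SECRET], SECRET))  # False: would leak downward
```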

Biba Model for Integrity.

Bell LaPadula is only for confidentiality, how about integrity… come up with a policy.

Biba Model for Integrity.

Simple integrity property: Subject s can modify (write) object o only if I(s) >= I(o).

Here I is similar to C, except I is called the integrity class.

Integrity *-property: If subject s has read access to object o with integrity level I(o), s can have write access to object p only if I(o) >= I(p).

Why is the second policy important?
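The Biba checks are the mirror image of Bell-LaPadula; a sketch with integer integrity levels (an assumed encoding):

```python
# Biba integrity checks (dual of Bell-LaPadula; integer levels assumed).
def may_modify(i_subject, i_object):
    # simple integrity property: a subject may only write at or below its level
    return i_subject >= i_object

def may_write_after_read(i_read, i_target):
    # integrity *-property: having read o, write p only if I(o) >= I(p)
    return i_read >= i_target

HIGH, LOW = 2, 1
print(may_modify(HIGH, LOW))            # True
print(may_modify(LOW, HIGH))            # False: low-integrity subject cannot taint
print(may_write_after_read(LOW, HIGH))  # False: low-integrity data cannot flow up
```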

Trusted OS design.

The policies tell us what we want. The models tell us the properties that must be satisfied for the policies to succeed.

Next: designing an OS which is trusted.

Trusted OS design principles.

(i) Principle of least privilege
(ii) Economy of mechanism
(iii) Open design
(iv) Complete mediation
(v) Permission based
(vi) Separation of privilege
(vii) Least common mechanism
(viii) Ease of use

Review: Overview of an Operating System’s Functions.

Figure 5-11  Security Functions of a Trusted Operating System.

Key features of a trusted OS

(i) User identification and authentication (we already studied this).

(ii) Access control:
– Mandatory
– Discretionary
– Role based

(iii) Complete mediation
(iv) Trusted path
(v) Audit
(vi) Audit log reduction
(vii) Intrusion detection

Access control..

• After an OS authenticates a user, it authorizes the user:
– Assign user ids.
– Use the user ids to control access to objects (access control).

UserIDs and Group IDs

• In order to separate users, the OS must first identify them. It does this:
– By authentication (e.g., using passwords).

– After authentication the user becomes a number on the system: called user id.

• In all GPOSes (general purpose Operating Systems), every user has a specific unique user id.

– The administrator (also called root in UNIX) has a user id of 0.

– Exercise in class: access the “/etc/passwd” file in UNIX to see the userids.

• When a user logs in, every program executed by the user starts running with the user id of that user.

• The same userid is used as part of the Access control list to assign permissions to resources.

UserIds and GroupIDs.

• User Ids and Group Ids in UNIX/Linux are stored in the /etc/passwd file.

• An example entry from /etc/passwd file: root:*:0:0:System Administrator:/var/root:/bin/sh

• Windows/UNIX access control policy: A program running with a specific user id can access (read, write, or execute) any resource which that specific user id is permitted to access.

• E.g., if root is editing a document in Microsoft Word, that same Word process can be used to open and edit, say, the password file.
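The entry above splits into seven colon-separated fields, which a few lines of Python make explicit:

```python
# Parsing the /etc/passwd entry shown above (fields are colon-separated).
entry = "root:*:0:0:System Administrator:/var/root:/bin/sh"
user, pw, uid, gid, gecos, home, shell = entry.split(":")
print(user, uid, gid)  # root 0 0 -> the superuser has user id 0
print(home, shell)     # /var/root /bin/sh
```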

UserIDs and Group IDs

• Example:
– Consider a file in a GPOS with associated permissions in UNIX/Linux (simply run “ls -lt” to obtain this output).
– If user “Alice” logs into the computer, can she:
• Open the file passwd in MS Word for reading?
• Open the file passwd in MS Word for writing?

A program can only access a resource if the user executing the program has permissions to access that resource.

-rw-r--r-- 1 root wheel 1932 Jan 13 2006 /etc/passwd

Permissions: the owner (root) can read and write, the group (wheel) can read, and others/world can read.
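To answer the questions above mechanically: Alice is neither root (the owner) nor in group wheel, so the last three characters of the mode string apply to her. A sketch (the helper name is made up):

```python
# Decoding an "ls -l" permission string such as "-rw-r--r--".
def perms(mode_string, who):
    """who: 'owner', 'group', or 'other' -> a set like {'r', 'w'}."""
    start = {"owner": 1, "group": 4, "other": 7}[who]
    return {c for c in mode_string[start:start + 3] if c != "-"}

mode = "-rw-r--r--"
print(perms(mode, "other"))         # {'r'}: Alice can open /etc/passwd for reading
print("w" in perms(mode, "other"))  # False: but not for writing
```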

Minding our language …

• Permissions: access rights associated with a resource. E.g., “read” permission on a file.

• Privileges: access rights given to a program to do a task. E.g., does MS Word executed by Bob have the privileges to write into Alice’s folder?

• Permissions and privileges mean almost the same thing, but are sometimes distinguished.

• Administrator = root = superuser

Authorization.

• There are various ways to authorize.

• They all fall under the area of “access control”, i.e., controlling the access to a certain object (e.g., a file).

Authorization

• Authorization can range from simple rules to very complex fine-grained rules.

• Examples:
– Simple rules:
• User ABC can only access files in folder C:/
• User ABC cannot execute any program for more than 20 minutes.
– Complex rules:
• Military/secret-agency situations: the Vice-President can read information about a secret agent and can share that information with her/his Chief of Staff, but not with the Press.

Authorization (2)

• Authorization in Operating Systems
– Most OSes (Windows, Linux) maintain a data structure (a list) called an Access Control List (ACL).
– ACLs contain a list of users, the resources each user can access, and the set of permissions with which the user can access the resources.
– E.g., consider a file in the following folder: C:\Document and Settings\Bob\493Project.doc

The OS maintains the following information:
(1) Which users can access the file 493Project.doc. E.g., can Bob access the file?
(2) In what way can Bob access the file? E.g., can he read the file, change the contents (write) of the file, or both?

Authorization (3)

• Simple ACL:

User    Resource                Privilege
Alice   C:\My Documents\Alice   read, write
Bob     C:\My Documents\Bob     read, write, execute

Most operating systems also assign “roles” or group users. E.g.,

Bob: Student, SecurityGroup // Bob belongs to the Student and Security groups.
Alice: Faculty, SecurityGroup

Then an ACL entry such as this will give the same permissions to both Bob and Alice:

SecurityGroup   C:\SecurityDocs   read, write
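The group mechanism can be sketched as an ACL lookup that consults both the user's own entries and the entries of every group the user belongs to (names and paths as in the example above; the data structures are illustrative):

```python
# Simple ACL with group entries (users, groups, and paths are illustrative).
GROUPS = {"Bob": {"Student", "SecurityGroup"},
          "Alice": {"Faculty", "SecurityGroup"}}

# Keyed by (resource, principal), where a principal is a user or a group.
ACL = {
    (r"C:\My Documents\Alice", "Alice"): {"read", "write"},
    (r"C:\My Documents\Bob", "Bob"): {"read", "write", "execute"},
    (r"C:\SecurityDocs", "SecurityGroup"): {"read", "write"},
}

def rights(user, resource):
    """Union of the user's own ACL entry and those of the user's groups."""
    granted = set(ACL.get((resource, user), set()))
    for g in GROUPS.get(user, set()):
        granted |= ACL.get((resource, g), set())
    return granted

print(rights("Bob", r"C:\SecurityDocs"))    # read and write, via SecurityGroup
print(rights("Alice", r"C:\SecurityDocs"))  # the same rights, via the same entry
```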

Figure 4-10  Directory Access. Any Issues?

Multiple ways of maintaining access control: protecting files. Simple approach: for each user, maintain a list of accessible files plus permissions.

Figure 4-11  Alternative Access Paths.

One issue: alternative access path

Figure 4-12  Access Control List.

Solution: access control lists – associate with each file the permissions.

Access Control Models

• Access control models. Consider the following two scenarios:
– Scenario 1: In a university, computers are public and accessed by many people. There are many students, and the student body is a rotating population (i.e., students graduate). Assume student Bob writes a song in a file. Should Bob be able to share the song with whomever he wants to?
– Scenario 2: A spy, Alice, creates a secret document about a country’s intention to acquire high-protein tomatoes. Now, should Alice be able to share the info on the tomatoes with whomever she wants to?

In both scenarios, Alice and Bob were the ones who created the file. However, their ability to assign permissions is different.

To capture such different scenarios we use what are called access control models (the different ways to control access to files).

Access control models

• The most commonly used model is called discretionary access control model (or DAC for short).

DAC controls access as follows: the user creating a resource is its owner, and the owner determines the authorized users of that resource.

E.g., when Bob creates a file, Bob can determine who can access that file.
– This is the model on all general purpose Operating Systems (GPOSs) such as Windows and Linux.

• Exercise: Take any existing file on Linux and set its permissions to read only.
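One way to do the exercise programmatically rather than with `chmod` (a temporary file stands in for “any existing file”):

```python
# Make a file read-only for everyone (equivalent to `chmod 444 file`).
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()   # stand-in for an existing file
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)  # 0o444
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o444
os.remove(path)
```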

Access Control Models (2)

• Mandatory Access Control (MAC)

An “administrator” determines authorizations.

Hence, a person who creates a resource is not the owner of the resource and does not determine the authorizations.

Can you think of situations where such a model is applicable?

Role-based access control (RBAC)

• RBAC is a twist on MAC.
– Like MAC, the administrator determines authorizations.
– Every user is assigned different roles. A user is logged into one role at any given time.
– Authorizations are given to roles.

• E.g., consider a faculty member. She/he can be (at the same time):
– Head of the department
– Professor
– Advisor to an ACM student body

• In RBAC each of the three roles is assigned different authorizations. E.g., the Head of the Department can access transcripts of all students; the Advisor to ACM cannot.

• Exercise: how will you implement RBAC on Linux?
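The core of RBAC fits in a few lines; the roles and permissions below are hypothetical:

```python
# RBAC sketch: authorizations attach to roles; a user acts in one role at a time.
ROLE_PERMS = {
    "dept_head": {"read_transcripts", "assign_teaching"},
    "professor": {"enter_grades"},
    "acm_advisor": {"book_meeting_room"},
}
USER_ROLES = {"drsmith": {"dept_head", "professor", "acm_advisor"}}

def check(user, active_role, permission):
    """Only the currently active role's permissions apply."""
    return (active_role in USER_ROLES.get(user, set())
            and permission in ROLE_PERMS.get(active_role, set()))

print(check("drsmith", "dept_head", "read_transcripts"))    # True
print(check("drsmith", "acm_advisor", "read_transcripts"))  # False: wrong role
```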

Access control in UNIX.

(1) Concept of permissions in UNIX
(2) The passwd and group files
(3) Some issues:
(1) What does it mean when a directory is executable?
(2) How can you ensure someone can read a file in a directory but cannot see what other files are stored in the directory?

Next

(1) Capabilities
(1) So far we have seen that the OS maintains all the ACLs (access control lists).
(2) But sometimes we may need the subjects themselves to carry their access rights.
(3) This is where capabilities come in:
(1) Capabilities are unforgeable tokens held by subjects, which present them to obtain services.
(4) Example: Kerberos.

Figure 4-13  Process Execution Domain.

Figure 4-14  Passing Objects to a Subject.

How to design a trusted OS so far … (Summary)

• What we want in terms of security (security policies), and how to represent the security policies that we want enforced:
• Example: the military security model represents the military policy (using the dominance relation).
• To represent conflict of interest in a commercial environment: use the Chinese Wall policy.
• To represent integrity in a commercial environment: use the Clark Wilson model.

• Properties that the implementation must satisfy to successfully implement the policies (security models):

• To implement confidentiality policy in Military, we will need Bell LaPadula model.

• Implementations are vulnerable (e.g., to programming bugs). How do we make them less vulnerable? (Principles of secure design.)
• E.g., least privilege, least common mechanism, etc.

• What features help achieve the principles of secure design?
• Access control, trusted path, audit, intrusion detection.

• Next: Brief introduction to Assurance.

Next: Assurance in trusted OSes.

Now that we have seen:

(i) Security policies

(ii) Security models

(iii) Principles of trusted OS design

we will see how to provide assurance that a specific OS is trusted.

Assurance..

• Ways of convincing others that a model, design, and implementation are correct.

• Specifically,
– What justifies our confidence in the security features of an operating system?
– If someone else has evaluated the system, how have the confidence levels of operating systems been rated?

Assurance.

• During assessment of an OS, we must recognize that:
– Operating systems are used in different environments;
– And, in some applications, less secure operating systems may be acceptable.

• Therefore, we need:
– Ways of determining whether a particular operating system is appropriate for a certain set of needs.

• Previously, we looked at design and process techniques for building confidence in the quality and correctness of a system.

• Now, we explore ways to actually demonstrate the security of an operating system, using techniques such as testing, formal verification, and validation.

Typical vulnerabilities in OSes.

• User interface.
– User interaction is often handled by independent, intelligent hardware subsystems. The human–computer interface often falls outside the security kernel or the security restrictions implemented by an operating system.
– Code to interact with users is often much more complex and much more dependent on the specific device hardware than code for any other component of the computing system.
– User interactions are often character oriented. In the interest of fast data transfer, operating system designers may have tried to take shortcuts by limiting the number of instructions executed by the operating system during actual data transfer. Sometimes the instructions eliminated are those that enforce security policies as each character is transferred.

Vulnerabilities in OS (2)

• Ambiguity in policy
– Trade-off between separation and sharing.
– E.g., processes should have separate memory, but processes must be able to share libraries.

• These trade-offs make it hard to state a policy clearly.

Vulnerabilities in OS (3)

• Incomplete mediation.
– Every access to hardware and other resources must be mediated by the OS (principle of complete mediation).
– But this is not always true.
• E.g., the working of the open, read, and write system calls.

Vulnerabilities in OS (4)

• Generality
– Users can download and install device drivers and other packages.

Exploitations.

• Classic time-of-check to time-of-use (TOCTOU) flaws.
– Race conditions (we have seen this already).
– System call weaknesses.
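The TOCTOU pattern can be shown with an in-memory simulation (no real files involved; the names are made up): the check passes, the state changes, and the use then operates on something the check never saw.

```python
# Shape of a time-of-check-to-time-of-use (TOCTOU) flaw, simulated in memory.
# The check and the use are separate steps, so state can change in between.
files = {"/tmp/report": "harmless"}

def is_safe(path):
    return files.get(path) == "harmless"  # time of check

def open_file(path):
    return files.get(path)                # time of use

path = "/tmp/report"
if is_safe(path):
    files[path] = "secret"   # attacker swaps the file between check and use
    print(open_file(path))   # prints "secret": the check no longer holds
```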

Assurance.

• How can we assure that an OS satisfies a certain policy?

• What would you do?

Assurance.

• Assurance is usually provided using these three techniques:

– Testing– Formal verification– Validation.

Providing assurance: testing..

• Most widely used.

• Realistic: testing is done on the actual code, not on some abstract concept.

• Any limitations?

Providing assurance: testing..

• Limitations:
– Testing can only demonstrate the existence of problems, not their absence!
– Very complex because of numerous internal states.
– Sometimes, for testing, software is interposed or changed to deliver external events, increasing the chance of vulnerabilities.

Penetration testing.

• Also called tiger team analysis or ethical hacking.

Formal verification

• Use mathematical rules to demonstrate that security properties are satisfied.

• E.g., using assertions.

• This is a difficult task in which theorem provers are often used.

Example of formal verification

• Program that finds the smallest number among n numbers, stored in array A[].

• Assertions are statements that must hold true about the program’s variables and values for the program to work correctly. E.g., a precondition here: the input n should be a positive number.

• As part of formal verification we must show that assertion Q follows from assertion P (i.e., Q is true if P is true), then that R is true if Q is true, etc. Can you prove this?
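The slide's example, written out with the assertions P, Q, and R made explicit (a sketch; a theorem prover would discharge these assertions symbolically rather than at run time):

```python
# Find the smallest number in A[], with precondition P, loop invariant Q,
# and postcondition R written as explicit assertions.
def find_min(A):
    assert len(A) > 0           # P: the input is a non-empty array
    m = A[0]
    for i in range(1, len(A)):
        assert m == min(A[:i])  # Q: m is the smallest of the first i elements
        if A[i] < m:
            m = A[i]
    assert m == min(A)          # R: m is the smallest element overall
    return m

print(find_min([7, 3, 9, 1, 4]))  # 1
```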

Validation.

• Validation is a counterpart of verification.
– Verification checks the quality of the implementation.
– Validation makes sure that all the requirements are met.

• To validate code:
– Requirements checking
– Design and code reviews
– System testing

Evaluation

• Who should evaluate security software to provide assurance?
– Most consumers and programmers are not security-aware; usually a third party evaluates.
– How should we evaluate?
– There are many approaches: the U.S. Orange Book, European evaluation criteria, etc.

U.S. Orange Book evaluation

• Official name: Trusted Computer System Evaluation Criteria (TCSEC).

• Actual evaluations are guided and sanctioned by the National Computer Security Center (NCSC) of the National Security Agency (NSA).

Introduction to TCSEC.

• Levels of trust are divided into four divisions: A > B > C > D.

• Each division has multiple subdivisions:
– E.g., Division A has A1, etc. A higher number means tighter security requirements.
– A look at the criteria… (see textbook Table 5-7 or the picture).

TCSEC

• The table’s pattern reveals four clusters of ratings:
– D, with no requirements
– C1/C2/B1, requiring security features common to many commercial operating systems
– B2, requiring a precise proof of security of the underlying model and a narrative specification of the trusted computing base
– B3/A1, requiring more precisely proven descriptive and formal designs of the trusted computing base

How to use TCSEC.

• Suppose you develop an OS and then decide to add a security measure.
– You may qualify for a C1 or C2 rating.
– But you cannot get a B2 rating, as security must be part of the design in order to earn B2.

Understanding each class.

• Class D: Minimal Protection

• This class is applied to systems that have been evaluated for a higher category but have failed the evaluation. No security characteristics are needed for a D rating.

Class C1.

• Class C1: Discretionary Security Protection
• A system evaluated as C1:
• Separates users from data.
• Controls must seemingly be sufficient to implement access limitation, to allow users to protect their own data.
• The controls of a C1 system may not have been stringently evaluated.
• To qualify for a C1 rating, a system must have a domain that includes security functions and that is protected against tampering.
• A keyword in the classification is "discretionary." A user is "allowed" to decide when the controls apply, when they do not, and which named individuals or groups are allowed access.

Class C2

• Class C2: Controlled Access Protection

• A C2 system still implements discretionary access control, although the granularity of control is finer. The audit trail must be capable of tracking each individual's access (or attempted access) to each object.

Class B1

• Class B1: Labeled Security Protection
– Non-discretionary access control must also be available.
– At the B1 level, each controlled subject and object must be assigned a security level. (For class B1, the protection system does not need to control every object.)
– Each controlled object must be individually labeled for security level, and these labels must be used as the basis for access control decisions.
– The access control must be based on a model employing both hierarchical levels and non-hierarchical categories. (The military model is an example: hierarchical levels are unclassified, classified, secret, and top secret; non-hierarchical categories are the need-to-know category sets.)
– The mandatory access policy is the Bell–LaPadula model. Thus, a B1 system must implement Bell–LaPadula controls for all accesses, with user discretionary access controls to further limit access.

Other classes

• Class B2: Structured Protection

• Class B3: Security Domains

• Class A1: Verified Design

The Green Book: German Information Security Agency

• German Green Book
– Produced in West Germany 5 years after the US TCSEC.
– Identified 8 basic security functions deemed sufficient to enforce a broad spectrum of security policies:

The Green Book: German Information Security Agency

• identification and authentication: unique and certain association of an identity with a subject or object

• administration of rights: the ability to control the assignment and revocation of access rights between subjects and objects

• verification of rights: mediation of the attempt of a subject to exercise rights with respect to an object

• audit: a record of information on the successful, or attempted but unsuccessful, exercise of rights

The Green Book: German Information Security Agency

• object reuse: reusable resources reset in such a way that no information flow occurs in contradiction to the security policy

• error recovery: identification of situations from which recovery is necessary and invocation of an appropriate action

• continuity of service: identification of functionality that must be available in the system and what degree of delay or loss (if any) can be tolerated

• data communication security: peer entity authentication, control of access to communications resources, data confidentiality, data integrity, data origin authentication, and non-repudiation

U.S. Combined Federal Criteria

• Successor to the TCSEC

• In response to other efforts.

• Two key notions:
– Protection profile: detail of security needs, produced by the customer.
– Security target (aka target of evaluation): the vendor maps the security needs to what a product provides.

Protection profile

• Rationale– Protection policy and regulations

– Information protection philosophy

– Expected threats

– Environmental assumptions

– Intended use

• Functionality

• Assurance

• Dependencies.

Security Target

• Maps product to the protection profile.

• Discusses:
– Which threats are countered by which features.
– To what degree of assurance, and using which mechanisms.

• Next: Common criteria (combines U.S efforts with Canadians and Europeans).

Common Criteria

• From the PalMe Project (Vetterling):
– “They have two types:
• Security functional requirements
– Requirements on the product.
– E.g. (Table 5-12 in textbook): identification and authentication, security audit, resource utilization, protection of trusted security functions, privacy, communication.
• Security assurance requirements
– Requirements on the process.
– E.g., development, testing, vulnerability assessment, life-cycle support, guidance documents, delivery and operation.

• “The number and strictness of the assurance requirements to be fulfilled depends on the Evaluation Assurance Level.”

Information on this slide verbatim from “Secure Systems Development based on CC” by Vetterling and Wimmel. SIGSOFT 2002.

Process.

• Security target
– Developers have to describe the TOE (target of evaluation) and its boundaries.
– Assets of the TOE and the threats.
– Security objectives corresponding to those threats.
– Countermeasures against threats.
• Specified by the functional requirements and assurance requirements.

Information on this slide verbatim from “Secure Systems Development based on CC” by Vetterling and Wimmel. SIGSOFT 2002.

Figure 5-24  Classes, Families, and Components in Common Criteria.

Figure 5-25  Functionality or Assurance Packages in Common Criteria.

Figure 5-26  Protection Profiles and Security Targets in Common Criteria.

Figure 5-27  Criteria Development Efforts.
