
Security Policies


Page 1: Security Policies

C. Edward Chow

Security Policies

CS591 – Chapter 5.2/5.4 of Security in Computing

Page 2: Security Policies

Goals of Security Policies

A security policy is a statement of the security we expect the system to enforce.

A security policy, also called an information flow policy, prevents unauthorized disclosure of information.

Example: the Privacy Act requires that certain personal data be kept confidential. E.g., income tax return information is available only to the IRS and to legal authorities with a court order. It limits the distribution of documents and information.

Example: military security policy is based on protecting classified information.

Each piece of information is ranked at a particular sensitivity level, such as unclassified, restricted, confidential, secret, or top secret.

Users are cleared at corresponding classification levels.

The policy guards against more sensitive documents being leaked to users with lower clearances.


Page 3: Security Policies

Hierarchy of Sensitivities

Page 4: Security Policies

Discretionary Access Control (DAC)

DAC: a mechanism where a user can set access controls to allow or deny access to an object (Section 5.4).

Also called identity-based access control (IBAC). It is the traditional access control technique implemented by conventional operating systems such as Unix:
– Based on user identity and ownership.
– A program run by a user inherits all privileges granted to that user.
– A program is free to change access to the user's objects.
– Supports only two major categories of users: completely trusted admins and completely untrusted ordinary users.
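
A minimal sketch of the idea (not any particular operating system's API): the object's owner sets the permissions, and the access decision looks only at identity and ownership. Names here are illustrative.

    # Hypothetical identity-based (discretionary) access check: the owner of an
    # object decides who may access it, and the check uses only user identity.
    class DacObject:
        def __init__(self, owner):
            self.owner = owner
            self.acl = {}                      # user -> set of rights

        def set_access(self, requester, user, rights):
            if requester != self.owner:        # only the owner may change the ACL
                raise PermissionError("only the owner may change access")
            self.acl[user] = set(rights)

        def check(self, user, right):
            # Decision is based solely on identity and ownership.
            return user == self.owner or right in self.acl.get(user, set())

    doc = DacObject(owner="alice")
    doc.set_access("alice", "bob", {"read"})
    print(doc.check("bob", "read"))            # True
    print(doc.check("bob", "write"))           # False: bob was not granted write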


Page 5: Security Policies


Problems with DAC

Each user has complete discretion over his objects. What is wrong with that? It is difficult to enforce a system-wide security policy, e.g.:
– A user can leak classified documents to unclassified users.
– Other examples?

Decisions are based only on the user's identity and ownership, ignoring security-relevant information such as:
– The user's role.
– The function of the program.
– The trustworthiness of the program (a compromised program can change access to the user's objects and inherits all permissions granted to the user, especially the root user).
– The sensitivity of the data.
– The integrity of the data.

Only coarse-grained privileges are supported; privilege escalation is unbounded; and the classification of users is too simple (what about more than two categories of users?).


Page 6: Security Policies

Mandatory Access Control (MAC)

MAC: a mechanism where the system controls access to an object and a user cannot alter that access.

Occasionally called rule-based access control. Defined by three major properties:
– An administratively defined security policy.
– Control over all subjects (processes) and objects (files, sockets, network interfaces).
– Decisions based on all security-relevant information.

MAC access decisions are based on labels that contain security-relevant information.


Page 7: Security Policies

What Can MAC Offer?

Supports a wide variety of categories of users in a system. For example, users may have labels (secret, {EUR, US}) and (top secret, {NUC, US}); here a security level is specified by the two-tuple (clearance, category).

Strong separation of security domains.

System, application, and data integrity.

Ability to limit program privileges, confining the damage caused by flawed or malicious software.

Processing pipeline guarantees.

Authorization limits for legitimate users.


Page 8: Security Policies

Mandatory and Discretionary Access Control

The Bell-LaPadula model combines mandatory and discretionary access controls.

"S has discretionary read (write) access to O" means that the access control matrix entry for S and O corresponding to the discretionary access control component contains a read (write) right.

[Access control matrix fragment from the slide: objects A, B, C, D, O across the top; subjects Q, S, T down the side; S's entry under D contains read.]

If the mandatory controls were not present, S would be able to read (write) O.


Page 9: Security Policies

Bell-LaPadula Model

Also called the multi-level model, it was proposed by Bell and LaPadula of MITRE for enforcing access control in government and military applications.

It corresponds to military-style classifications. In such applications, subjects and objects are often partitioned into different security levels.

A subject can only access objects at certain levels determined by its security level. For instance, the following are two typical access specifications: "Unclassified personnel cannot read data at confidential levels" and "Top-Secret data cannot be written into files at unclassified levels."


Page 10: Security Policies

Informal Description

The simplest type of confidentiality classification is a set of security clearances arranged in a linear (total) ordering.

Clearances represent the security levels. The higher the clearance, the more sensitive the information.

A basic confidentiality classification system:

                   Individuals         Documents
Top Secret (TS)    Tamara, Thomas      Personnel Files
Secret (S)         Sally, Samuel       Electronic Mail
Confidential (C)   Claire, Clarence    Activity Log Files
Restricted
Unclassified (UC)  Ulaley, Ursula      Telephone Lists


Page 11: Security Policies

Star Property (Preliminary Version)

Let L(S) = ls be the security clearance of subject S, and let L(O) = lo be the security classification of object O. The classifications are ordered: for all i = 0, ..., k-1, li < li+1.

Simple Security Condition (read down): S can read O if and only if lo ≤ ls and S has discretionary read access to O.

*-Property (star property, write up): S can write O if and only if ls ≤ lo and S has discretionary write access to O.

A TS subject cannot write documents classified lower than TS; this prevents leaking classified information.

No read up; no write down! But how can different groups communicate? (See the Trusted Solaris example later.)
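
A small sketch of these two rules over a linear ordering of levels (the level names and the boolean DAC flags are illustrative, not from the text):

    # Preliminary Bell-LaPadula checks over linearly ordered levels.
    LEVELS = ["UC", "C", "S", "TS"]                  # l0 < l1 < l2 < l3
    rank = {name: i for i, name in enumerate(LEVELS)}

    def can_read(ls, lo, dac_read):
        # Simple security condition (read down): lo <= ls plus discretionary read.
        return rank[lo] <= rank[ls] and dac_read

    def can_write(ls, lo, dac_write):
        # *-property (write up): ls <= lo plus discretionary write.
        return rank[ls] <= rank[lo] and dac_write

    print(can_read("TS", "C", True))    # True: a TS subject may read down
    print(can_write("TS", "C", True))   # False: no write down, so no leak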


Page 12: Security Policies

Secure Flow of Information

Page 13: Security Policies

Basic Security Theorem

Let Σ be a system with a secure initial state σ0, and let T be the set of state transformations.

If every element of T preserves the simple security condition, preliminary version, and the *-property, preliminary version, then every state σi, i ≥ 0, is secure.


Page 14: Security Policies

Categories and Need to Know Principle

Expand the model by adding a set of categories. Each category describes a kind of information.

These categories arise from the "need to know" principle: no subject should be able to read objects unless reading them is necessary for that subject to perform its function.

Example: three categories NUC, EUR, and US.

Each pair of a classification and a set of categories forms a security level, or compartment.

Subjects have clearance at (are cleared into, or are in) a security level. Objects are at the level of (or are in) a security level.


Page 15: Security Policies

Security Lattice

William may be cleared into level (SECRET, {EUR}) and George into level (TS, {NUC, US}). A document may be classified as (C, {EUR}).

Someone with clearance (TS, {NUC, US}) will be denied access to a document with category EUR.


{NUC, EUR, US}

{NUC, EUR} {NUC, US} {EUR, US}

{NUC} {EUR} {US}

Page 16: Security Policies

Dominate (dom) Relation

The security level (L, C) dominates the security level (L', C') if and only if L' ≤ L and C' ⊆ C. The dom relation is false when either condition fails.

Example: George is cleared into security level (S, {NUC, EUR}); DocA is classified as (C, {NUC}); DocB is classified as (S, {EUR, US}); DocC is classified as (S, {EUR}). Then George dom DocA and George dom DocC hold, but George dom DocB does not (US is not in George's category set).
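
The dom check and the George example can be replayed directly; the sketch below uses frozensets for category sets (set inclusion is the <= operator):

    rank = {"UC": 0, "C": 1, "S": 2, "TS": 3}

    def dom(a, b):
        """(L, C) dom (L', C') iff L' <= L and C' is a subset of C."""
        (L, C), (Lp, Cp) = a, b
        return rank[Lp] <= rank[L] and Cp <= C

    george = ("S", frozenset({"NUC", "EUR"}))
    doc_a  = ("C", frozenset({"NUC"}))
    doc_b  = ("S", frozenset({"EUR", "US"}))
    doc_c  = ("S", frozenset({"EUR"}))

    print(dom(george, doc_a))   # True:  C <= S and {NUC} is a subset of {NUC, EUR}
    print(dom(george, doc_b))   # False: US is not in George's category set
    print(dom(george, doc_c))   # True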


Page 17: Security Policies

New Security Condition and *-Property

Let C(S) be the category set of subject S, and let C(O) be the category set of object O.

Simple Security Condition (no read up): S can read O if and only if S dom O and S has discretionary read access to O.

*-Property (no write down): S can write to O if and only if O dom S and S has discretionary write access to O.

Basic Security Theorem: Let Σ be a system with a secure initial state σ0, and let T be the set of state transformations. If every element of T preserves the simple security condition and the *-property, then every state σi, i ≥ 0, is secure.
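
A sketch combining the dom-based mandatory checks with a discretionary access matrix; the matrix entries below are made up for illustration.

    rank = {"UC": 0, "C": 1, "S": 2, "TS": 3}

    def dom(a, b):
        (L, C), (Lp, Cp) = a, b
        return rank[Lp] <= rank[L] and Cp <= C

    labels = {"george": ("S", frozenset({"NUC", "EUR"})),
              "doc_c":  ("S", frozenset({"EUR"}))}
    acm = {("george", "doc_c"): {"read", "write"}}       # hypothetical DAC matrix

    def can_read(s, o):
        # Simple security condition: S dom O and a discretionary read right.
        return dom(labels[s], labels[o]) and "read" in acm.get((s, o), set())

    def can_write(s, o):
        # *-property: O dom S and a discretionary write right.
        return dom(labels[o], labels[s]) and "write" in acm.get((s, o), set())

    print(can_read("george", "doc_c"))    # True:  (S,{NUC,EUR}) dom (S,{EUR})
    print(can_write("george", "doc_c"))   # False: doc_c does not dominate george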


Page 18: Security Policies

Allow Write Down?

Bell-LaPadula allows a higher-level subject to write into a lower-level object that a low-level subject can read.

A subject has a maximum security level and a current security level; the maximum security level must dominate the current security level.

A subject may (effectively) decrease its security level from the maximum in order to communicate with entities at lower security levels.

Example: a Colonel's maximum security level is (S, {NUC, EUR}). She changes her current security level to (S, {EUR}). Now she can create documents at a Major's clearance level of (S, {EUR}).


Page 19: Security Policies

Data General B2 Unix System

Data General B2 Unix (DG/UX) provides mandatory access controls (MAC).

The MAC label is a label identifying a particular compartment.

The initial label (assigned at login time) is the label assigned to the user in a database called the Authorization and Authentication (A&A) Database.

When a process begins, it is assigned the MAC label of its parent (whoever creates it).

Objects are assigned labels at creation. The labels can be explicit or implicit:
– An explicit label is stored as part of the object's attributes.
– An implicit label derives from the parent directory of the object.

IMPL_HI: the least upper bound of all components in the DG/UX lattice has IMPL_HI as its label.

IMPL_LO: the greatest lower bound of all components in the DG/UX lattice has IMPL_LO as its label.


Page 20: Security Policies

Three MAC Regions in DG/UX MAC Lattice

Figure 5-3 The three MAC regions in the MAC lattice (modified from the DG/UX Security Manual [257], p. 4-7, Figure 4-4). TCB stands for "trusted computing base."

Page 21: Security Policies

Accesses with MAC Labels

• Reads up and writes up from users to the Admin Region are not allowed.
• Admin processes sanitize data sent to user processes with MAC labels in the user region.
• System programs are in the lowest region. No user can write to or alter them.
• Only programs with the same label as a directory can create files in that directory.
• The above restriction would prevent compiling (which needs access to /tmp) and mail delivery (which needs access to the mail spool directory).
• Solution: the multilevel directory.


Page 22: Security Policies

Multilevel Directory

A directory with a set of subdirectories, one for each label. These hidden directories are normally invisible to the user.

When a process with label MAC_A creates a file in /tmp, it actually creates the file in a hidden directory under /tmp with label MAC_A. The parent directory of a file in /tmp is the hidden directory, and a reference to the parent directory goes to the hidden directory.

Example: process A with label MAC_A creates /tmp/a, and process B with label MAC_B creates /tmp/a. Each of them performs "cd /tmp/a; cd ..". The system call stat(".", &stat_buffer) then returns a different inode number for each process: the inode number of the respective hidden directory.

Try the "stat" command to display file status.

DG/UX provides dg_mstat(".", &stat_buffer) to translate the current working directory to the multilevel directory.
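
A toy sketch (not the real DG/UX implementation; the hidden-directory naming is invented) of how a multilevel /tmp keeps same-named files from different labels apart:

    import os, tempfile

    def multilevel_path(mld_root, mac_label, name):
        # One hidden subdirectory per label, so /tmp/a created at MAC_A and
        # /tmp/a created at MAC_B land in different places.
        hidden = os.path.join(mld_root, ".mld_" + mac_label)
        os.makedirs(hidden, exist_ok=True)
        return os.path.join(hidden, name)

    tmp = tempfile.mkdtemp()                        # stand-in for /tmp
    path_a = multilevel_path(tmp, "MAC_A", "a")
    path_b = multilevel_path(tmp, "MAC_B", "a")
    open(path_a, "w").close()
    open(path_b, "w").close()

    # Each process's parent directory is its own hidden directory, which is why
    # stat(".") reports a different inode number for each of them.
    print(os.stat(os.path.dirname(path_a)).st_ino !=
          os.stat(os.path.dirname(path_b)).st_ino)  # True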


Page 23: Security Policies


Mounting Unlabeled File System

All files in such a file system need to be labeled. Symbolic links aggravate this problem: does the MAC label of the target of the link control access, or does the MAC label of the link itself? DG/UX uses a notion of inherited labels (called implicit labels) to solve this problem.

The following rules control the way objects are labeled (a sketch of the inheritance rule follows the list).
1. Roots of file systems have explicit MAC labels. If a file system without labels is mounted on a labeled file system, the root directory of the mounted file system receives an explicit label equal to that of the mount point. However, the label of the mount point, and of the underlying tree, is no longer visible, and so its label is unchanged (and will become visible again when the file system is unmounted).
2. An object with an implicit MAC label inherits the label of its parent.
3. When a hard link to an object is created, that object must have an explicit label; if it does not, the object's implicit label is converted to an explicit label. A corollary is that moving a file to a different directory makes its label explicit.
4. If the label of a directory changes, any immediate children with implicit labels have those labels converted to explicit labels before the parent directory's label is changed.
5. When the system resolves a symbolic link, the label of the object is the label of the target of the symbolic link. However, to resolve the link, the process needs access to the symbolic link itself.
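
A small sketch of rules 1 and 2: label lookup walks toward the root until an explicit label is found (the paths and label names are illustrative assumptions).

    explicit_labels = {
        "/":          "IMPL_LO",     # file system roots carry explicit labels (rule 1)
        "/mnt/cdrom": "USER_A",      # an unlabeled file system mounted here takes
                                     # the mount point's label for its root
    }

    def label_of(path):
        # Rule 2: an object with an implicit label inherits its parent's label.
        while path not in explicit_labels:
            parent = path.rsplit("/", 1)[0] or "/"
            if parent == path:
                return None
            path = parent
        return explicit_labels[path]

    print(label_of("/mnt/cdrom/docs/readme.txt"))   # USER_A, inherited implicitly
    print(label_of("/home/chow/notes"))             # IMPL_LO, inherited from "/"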


Page 24: Security Policies


Interesting Case with Hard Links

Let /x/y/z and /x/a/b be hard links to the same object. Suppose y has an explicit label IMPL_HI and a an explicit label IMPL_B. Then the file object can be accessed by a process at IMPL_HI as /x/y/z and by a process at IMPL_B as /x/a/b. Which label is correct? Two cases arise.

Suppose the hard link is created while the file system is on a DG/UX B2 system. Then the DG/UX system converts the target's implicit label to an explicit one (rule 3). Thus, regardless of the path used to refer to the object, the label of the object will be the same.

Suppose the hard link exists when the file system is mounted on the DG/UX B2 system. In this case, the target had no file label when it was created, and one must be added. If no objects on the paths to the target have explicit labels, the target will have the same (implicit) label regardless of the path being used. But if any object on any path to the target of the link acquires an explicit label, the target's label may depend on which path is taken. To avoid this, the implicit labels of a directory's children must be preserved when the directory's label is made explicit; rule 4 does this.

Because symbolic links interpolate path names of files, rather than storing inode numbers, computing the label of a symbolic link's target is straightforward. If /x/y/z is a symbolic link to /a/b/c, then the MAC label of c is computed in the usual way. However, the symbolic link itself is a file, and so the process must also have access to the link file z.


Page 25: Security Policies

Enable Flexible Write in DG/UX

Provide a range of labels called a MAC tuple. A range is a set of labels expressed by a lower bound and an upper bound. A MAC tuple consists of up to three ranges (one for each of the regions in Figure 5-3).

Example: a system has two security levels, TS and S, the former dominating the latter. The categories are COMP, NUC, and ASIA. Examples of ranges are:
– [(S, {COMP}), (TS, {COMP})]
– [(S, {}), (TS, {COMP, NUC, ASIA})]
– [(S, {ASIA}), (TS, {ASIA, NUC})]

The label (TS, {COMP}) is in the first two ranges, and the label (S, {NUC, ASIA}) is in the last two ranges. However, [(S, {ASIA}), (TS, {COMP, NUC})] is not a valid range because (TS, {COMP, NUC}) does not dominate (S, {ASIA}).
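
A sketch that replays the range examples above with the dom relation (two levels S < TS; categories COMP, NUC, ASIA):

    rank = {"S": 0, "TS": 1}

    def dom(a, b):
        (L, C), (Lp, Cp) = a, b
        return rank[Lp] <= rank[L] and Cp <= C

    def valid_range(lower, upper):
        return dom(upper, lower)            # the upper bound must dominate the lower

    def in_range(label, lower, upper):
        return dom(label, lower) and dom(upper, label)

    r1 = (("S", frozenset({"COMP"})), ("TS", frozenset({"COMP"})))
    r2 = (("S", frozenset()),         ("TS", frozenset({"COMP", "NUC", "ASIA"})))
    r3 = (("S", frozenset({"ASIA"})), ("TS", frozenset({"ASIA", "NUC"})))

    ts_comp = ("TS", frozenset({"COMP"}))
    print([in_range(ts_comp, lo, hi) for lo, hi in (r1, r2, r3)])   # [True, True, False]

    bad = (("S", frozenset({"ASIA"})), ("TS", frozenset({"COMP", "NUC"})))
    print(valid_range(*bad))   # False: (TS,{COMP,NUC}) does not dominate (S,{ASIA})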


Page 26: Security Policies


Integrity

Problem area: systems that require data to be changed accurately and in accordance with the rules; disclosure is not a major concern.

Lipner [636] identifies five requirements for preserving data integrity:
1. Users will not write their own programs, but will use existing production programs and databases.
2. Programmers will develop and test programs on a nonproduction system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system.
3. A special process must be followed to install a program from the development system onto the production system.
4. The special process in requirement 3 must be controlled and audited.
5. The managers and auditors must have access to both the system state and the system logs that are generated.

Auditing is the process of analyzing systems to determine what actions took place and who performed them; it uses extensive logging.

These requirements suggest three principles of operation:
– Separation of duty (two different people perform two critical steps).
– Separation of function (programs are not developed on the production system; production data used for development must be sanitized).
– Auditing (commercial systems emphasize recovery and accountability).


Page 27: Security Policies

Different Needs

Commercial firms grant access based on individual needs and have a larger number of categories and a larger number of security levels.

In a military environment, the creation of compartments is centralized; in commercial firms, it is decentralized.

By aggregating distributed innocuous information, one can often deduce sensitive information. The Bell-LaPadula Model lacks the capability to track what questions have been asked.


Page 28: Security Policies

Biba Integrity Model

In 1977, Biba [94] studied the nature of the integrity of systems. He proposed three policies, one of which was the mathematical dual of the Bell-LaPadula Model.

A system consists of a set S of subjects, a set O of objects, and a set I of integrity levels. The levels are ordered:
– The relation < ⊆ I × I holds when the second integrity level dominates the first.
– The relation ≤ ⊆ I × I holds when the second integrity level either dominates or is the same as the first.
– The function min: I × I → I gives the lesser of two integrity levels.
– The function i: S ∪ O → I returns the integrity level of a subject or an object.
– The relation r ⊆ S × O defines the ability of a subject to read an object; the relation w ⊆ S × O defines the ability of a subject to write to an object; and the relation x ⊆ S × S defines the ability of a subject to invoke (execute) another subject.


Page 29: Security Policies

Intuition Behind Model Construction

The higher the level, the more confidence one has that a program will execute correctly (or detect problems with its inputs and stop executing).

Data at a higher level is more accurate, reliable, and trustworthy than data at a lower level.

Integrity labels, in general, are not also security labels. They are assigned and maintained separately, because the reasons behind the labels are different: security labels primarily limit the flow of information, whereas integrity labels primarily inhibit the modification of information.

They may overlap, however, with surprising results.


Page 30: Security Policies

Test Case: Information Transfer Path

Biba tests his policies against the notion of an information transfer path:

Definition 6-1. An information transfer path is a sequence of objects o1, ..., on+1 and a corresponding sequence of subjects s1, ..., sn such that si r oi and si w oi+1 for all i,1≤i≤n.

Intuitively, data in the object o1 can be transferred into the object on+1 along an information flow path by a succession of reads and writes.


Page 31: Security Policies

Low-Water-Mark Policy

Whenever a subject accesses an object, the policy changes the integrity level of the subject to the lower of the subject's and the object's. Specifically:
1. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).
2. If s ∈ S reads o ∈ O, then i'(s) = min(i(s), i(o)), where i'(s) is the subject's integrity level after the read.
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).

Rule 1 prevents writing to a higher (more trusted) level, preventing the implanting of incorrect or false data.

Rule 2 assumes that the subject will rely on the lower-integrity data it reads, so its integrity level should be lowered (the data contaminates the subject and its actions).

Rule 3 prevents a less trusted invoker from controlling the execution of more trusted subjects.
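
A sketch of the three rules with integrity levels as integers (larger = more trusted); the subject and object names are invented:

    integrity = {"editor": 3, "webform_input": 1, "config_file": 2}

    def can_write(subject, obj):
        # Rule 1: s can write o iff i(o) <= i(s).
        return integrity[obj] <= integrity[subject]

    def do_read(subject, obj):
        # Rule 2: after a read, i'(s) = min(i(s), i(o)) -- the subject is
        # contaminated by the less trustworthy data it relied on.
        integrity[subject] = min(integrity[subject], integrity[obj])

    def can_execute(invoker, target):
        # Rule 3: s1 can execute s2 iff i(s2) <= i(s1).
        return integrity[target] <= integrity[invoker]

    print(can_write("editor", "config_file"))   # True: 2 <= 3
    do_read("editor", "webform_input")          # editor drops to integrity level 1
    print(can_write("editor", "config_file"))   # False now: 2 <= 1 fails

The last line also previews the drawback discussed on the next slide: after reading low-integrity data, a subject's level only goes down.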


Page 32: Security Policies

Constrains Information Transfer Path

This low-water-mark policy constrains any information transfer path.

Theorem 6-1. If there is an information transfer path from object o1 ∈ O to object on+1 ∈ O, then enforcement of the low-water-mark policy requires that i(on+1) ≤ i(o1) for all n > 1.

Proof: Without loss of generality, assume that each read and write was performed in the order of the indices of the vertices. By induction, for any 1 ≤ k ≤ n, i(sk) = min{ i(oj) | 1 ≤ j ≤ k } after k reads. As the nth write succeeds, by rule 1, i(on+1) ≤ i(sn). Thus, by transitivity, i(on+1) ≤ i(o1).

This policy prevents direct modifications that would lower integrity labels. It also prevents indirect modification, by lowering the integrity label of a subject that reads from an object with a lower integrity level.

The problem with this policy is that, in practice, subjects change integrity levels. In particular, the level of a subject is nonincreasing, which means that it will soon be unable to access objects at a high integrity level.

How about decreasing the object's integrity level rather than the subject's integrity level?


Page 33: Security Policies

Ring Policy

The ring policy ignores the issue of indirect modification and focuses on direct modification only. This solves the problems described above. The rules are as follows.

1. Any subject may read any object, regardless of integrity levels.

2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).

3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).

The difference between this policy and the low-water-mark policy is simply that any subject can read any object. Hence, Theorem 6-1 holds for this model, too.


Page 34: Security Policies

Biba Model (Strict Integrity Policy)

This model is the dual of the Bell-LaPadula Model, and is most commonly called "Biba's model." Its rules are as follows.
1. s ∈ S can read o ∈ O if and only if i(s) ≤ i(o).
2. s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s).
3. s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1).

Given these rules, Theorem 6-1 still holds, but its proof changes (see Exercise 1). Note that rules 1 and 2 imply that if both read and write are allowed, then i(s) = i(o).

Like the low-water-mark policy, this policy prevents indirect as well as direct modification of entities without authorization. By replacing the notion of "integrity level" with "integrity compartments," and adding the notion of discretionary controls, one obtains the full dual of Bell-LaPadula.
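
A sketch of the strict integrity rules with integrity levels as integers (larger = more trusted):

    def biba_read(i_s, i_o):
        # Rule 1: s can read o iff i(s) <= i(o) -- read only equally or more trusted data.
        return i_s <= i_o

    def biba_write(i_s, i_o):
        # Rule 2: s can write o iff i(o) <= i(s) -- never push data above its producer.
        return i_o <= i_s

    def biba_execute(i_s1, i_s2):
        # Rule 3: s1 can execute s2 iff i(s2) <= i(s1).
        return i_s2 <= i_s1

    # Rules 1 and 2 together allow both read and write only when i(s) == i(o).
    print(biba_read(2, 2) and biba_write(2, 2))   # True
    print(biba_read(1, 3), biba_write(1, 3))      # True False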


Page 35: Security Policies

Example: LOCUS Distributed OS

Pozzo and Gray [817, 818] implemented Biba's strict integrity model on the distributed operating system LOCUS [811].

Goal: limit the execution domain of each program to prevent untrusted software from altering data or other software.

Approach: make the level of trust in software and data explicit. There are different classes of executable programs.

Their credibility ratings (Biba's integrity levels) assign a measure of trustworthiness on a scale from 0 (untrusted) to n (highly trusted), depending on the source of the software.

Trusted file systems contain only executable files with the same credibility level.

Associated with each user (process) is a risk level that starts out set to the highest credibility level at which that user can execute.

Users may execute programs with credibility levels at least as great as the user's risk level.

To execute a program at a lower credibility level, a user must use the run-untrusted command. This acknowledges the risk that the user is taking.


Page 36: Security Policies

Chinese Wall Model

It describes policies that prevent conflicts of interest. Examples:
– In British law, it provides a defense against criminal charges.
– Stock exchanges and investment houses: prevent traders from representing clients with conflicting interests.

Definition 7-1. The objects of the database are items of information related to a company.

Definition 7-2. A company dataset (CD) contains objects related to a single company.

Definition 7-3. A conflict of interest (COI) class contains the datasets of companies in competition.

Let COI(O) represent the COI class that contains object O, and let CD(O) be the company dataset that contains object O. The model assumes that each object belongs to exactly one COI class.


Page 37: Security Policies

CD and COI

Page 38: Security Policies

CW-Simple Security Condition

Consider the temporal element: after accessing Bank of America's portfolio, Anthony should not transfer to work on Citibank's portfolio.

PR(S) is the set of objects that S has read.

CW-Simple Security Condition, Preliminary Version: S can read O if and only if either of the following is true.
1. There is an object O' such that S has accessed O' and CD(O') = CD(O).
2. For all objects O', O' ∈ PR(S) ⇒ COI(O') ≠ COI(O).

Initially, PR(S) = ∅, and the initial read request is assumed to be granted.
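
A sketch of the preliminary condition, tracking PR(S) as the set of company datasets a subject has read (the company names follow the example; the COI classes "banks" and "oil" are assumptions):

    COI = {"BankOfAmerica": "banks", "Citibank": "banks", "ARCO": "oil"}  # CD -> COI class
    PR = {}                                      # PR(S): datasets S has already read

    def cw_can_read(subject, cd):
        history = PR.setdefault(subject, set())
        # Either S has already read in this CD, or nothing read so far shares O's COI class.
        return cd in history or all(COI[prev] != COI[cd] for prev in history)

    def cw_read(subject, cd):
        if cw_can_read(subject, cd):
            PR[subject].add(cd)
            return True
        return False

    print(cw_read("anthony", "BankOfAmerica"))   # True: PR(anthony) starts empty
    print(cw_read("anthony", "Citibank"))        # False: same COI class (banks)
    print(cw_read("anthony", "ARCO"))            # True: a different COI class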


Page 39: Security Policies

Consider Sanitized Data

In practice, companies have information they can release publicly, such as annual stockholders' reports and filings before government commissions. The Chinese Wall model should not consider this information restricted, because it is available to all. Hence, the model distinguishes between sanitized data and unsanitized data; the latter falls under the CW-simple security condition, preliminary version, whereas the former does not. The CW-simple security condition can be reformulated to include this notion.

CW-Simple Security Condition: S can read O if and only if any of the following holds.
1. There is an object O' such that S has accessed O' and CD(O') = CD(O).
2. For all objects O', O' ∈ PR(S) ⇒ COI(O') ≠ COI(O).
3. O is a sanitized object.


Page 40: Security Policies

CW-*-Property

Suppose Anthony and Susan work in the same trading house. Anthony can read objects in Bank of America's CD, and Susan can read objects in Citibank's CD. Both can read objects in ARCO's CD. If Anthony can also write to objects in ARCO's CD, then he can read information from objects in Bank of America's CD and write it to objects in ARCO's CD, and then Susan can read that information; so Susan can indirectly obtain information from Bank of America's CD, causing a conflict of interest. The CW-simple security condition must be augmented to prevent this.

CW-*-Property: A subject S may write to an object O if and only if both of the following conditions hold.
1. The CW-simple security condition permits S to read O.
2. For all unsanitized objects O' that S can read, CD(O') = CD(O).

In the example above, Anthony can read objects in both Bank of America's CD and ARCO's CD. Thus, condition 1 is met. However, assuming that Bank of America's CD contains unsanitized objects (a reasonable assumption), then because Anthony can read those objects, condition 2 is false. Hence, Anthony cannot write to objects in ARCO's CD.
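
A self-contained sketch of the write check on this scenario; "objects S can read" is approximated by the datasets in PR(S), and sanitized data is exempted. The dataset and class names are illustrative.

    COI = {"BankOfAmerica": "banks", "Citibank": "banks", "ARCO": "oil"}
    sanitized = set()                                   # e.g. public annual reports
    PR = {"anthony": {"BankOfAmerica", "ARCO"}}         # Anthony has read both CDs

    def cw_can_read(s, cd):
        history = PR.get(s, set())
        return cd in history or all(COI[c] != COI[cd] for c in history)

    def cw_can_write(s, cd):
        # 1. The CW-simple security condition must permit S to read O.
        if not cw_can_read(s, cd):
            return False
        # 2. Every unsanitized dataset S can read must be CD(O) itself.
        readable_unsanitized = {c for c in PR.get(s, set()) if c not in sanitized}
        return readable_unsanitized <= {cd}

    # Anthony can read unsanitized Bank of America data, so writing into ARCO's
    # CD is denied -- otherwise Susan could read that data via ARCO.
    print(cw_can_write("anthony", "ARCO"))    # False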


Page 41: Security Policies

Take-Grant System

Introduced by Jones [Jo78]; expanded by Lipton and Snyder [Lip77][Sny81].


Page 42: Security Policies

Trusted Solaris

It implements MLS; see the Trusted Solaris user guide. To the right (on the original slide) is a label builder.

Page 43: Security Policies

Workspace and Related Menu

The bottom panel shows the current security label; a menu allows changing roles.


Page 44: Security Policies

Warning when Copying Files

Page 45: Security Policies


History: Security-Enhanced Linux (SELinux)

The National Security Agency (NSA) and Secure Computing Corporation (SCC) set out to provide strong MAC:
– Flexible support for security policies (no single MAC policy can satisfy everyone's security requirements).
– Cleanly separate the security policy logic from the enforcement mechanism.

They developed DTMach and DTOS (Mach-based prototypes) and applied formal methods to validate the security properties of the architecture (high assurance).

They worked with the University of Utah Flux Research Group to integrate the architecture into the Fluke research operating system. Result: the Flask architecture, which supports dynamic security policies.

NSA created SELinux by integrating the Flask architecture into the Linux OS. NAI implemented controls on the procfs and devpts file systems; MITRE and SCC contributed application security policies and modified utility programs.


Page 46: Security Policies

SELinux

Supports separation policies that:
– Enforce legal restrictions on data
– Establish well-defined user roles
– Restrict access to classified data

Containment policies that:
– Restrict web server access to only authorized data
– Minimize damage caused by viruses and malicious code

Integrity policies that protect data and applications from unauthorized modification.

Invocation policies that guarantee data is processed as required.
