
Confidentiality-preserving Proof Theories for Distributed Proof Systems

Kazuhiro Minami
National Institute of Informatics

FAIS 2011

Distributed proving is an effective way to combine information in different administrative domains

• Distributed authorization
  – Make a granting decision by constructing a proof from security policies
  – Examples: DL [Li03], DKAL [Gurevich08], SD3 [Jim01], SecPAL [Becker07], and Grey [Bauer05]
• Data fusion in pervasive environments
  – Infer a user’s activity from sensor data owned by different organizations

Distributed Proof System

• Consists of multiple principals, each of which has a knowledge base and an inference engine
• Constructs a proof by exchanging proofs in a peer-to-peer way
• Supports rules and facts in Datalog with the says operator (e.g., BAN logic)

[Figure: peer-to-peer exchange of quoted facts among principals]

Protecting each domain’s confidential information is crucial

• Each organization in a virtual business coalition needs to protect its proprietary information from the others
• A location server must protect users’ location information with proper privacy policies
• To this end, principals in a distributed proof system could limit access to their sensitive information with discretionary access-control policies

Determining the safety of a system involving multiple principals is not trivial

Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2. What if p2 still derives fact f2?

Problem Statements

• How should we define confidentiality and safety in distributed proof systems?
• Is it possible to derive more facts than a system that enforces confidentiality policies on a principal-to-principal basis?
• If so, is there any upper bound in terms of the proving power of distributed proof systems?

Outline

• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
  – DAC system
  – NE system
  – CE system
• Conclusion

Abstract System Model

• Parameterize a distributed proof system D with a set of inference rules I and a finite set of principals P (i.e., D[P, I])
• Only consider the initial and final states of system D, based on a trusted third-party (TTP) model

Datalog inference rule: f ← q1, …, qn, where each qk in the body is a quoted fact (e.g., qk = (pk says fk))

Reference System D[IS]

[Rule images not recoverable: (COND) and (SAYS)]

• The body of a rule contains a set of quoted facts (e.g., q1 = (p1 says f1))
• All the information is freely shared among principals
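The original rule images did not survive extraction. A plausible reconstruction, assuming the standard quoted-fact reading suggested by the bullets above, is:

(COND): if (f ← q1, …, qn) ∈ KBi and pi says qk holds for every k, then pi says f.
(SAYS): if f ∈ KBj, then pi says (pj says f), for any principal pi.

Here (SAYS) captures the free sharing in the reference system: any principal may learn any other principal’s facts.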

TTP is a fixpoint function that computes the final state of a system

Trusted Third Party (TTP)

[Figure: each principal pi (p1, p2, …, pn) submits its knowledge base KBi to the TTP, which applies the inference rules I and returns fixpointi(KB) to pi]
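To make the TTP model concrete, here is a minimal sketch of the fixpoint computation for the reference system, assuming facts are strings, quoted facts are (principal, fact) pairs, and rules are (owner, head, body) triples; these representations are illustrative choices, not the system’s actual encoding:

```python
def fixpoint(kbs, rules):
    """Pool all knowledge and fire rules until no new quoted fact appears.

    kbs:   dict mapping principal -> set of local facts
    rules: list of (owner, head, body), where body is a list of
           quoted facts (principal, fact)
    Returns the set of derived quoted facts (principal, fact).
    """
    # (SAYS)-style seeding: every local fact f of p becomes the quoted
    # fact (p, f), freely visible to everyone in the reference system.
    derived = {(p, f) for p, facts in kbs.items() for f in facts}
    changed = True
    while changed:
        changed = False
        for owner, head, body in rules:
            # (COND)-style step: if every quoted fact in the body has
            # been derived, the rule's owner can assert the head.
            if all(q in derived for q in body) and (owner, head) not in derived:
                derived.add((owner, head))
                changed = True
    return derived

# Toy run: p1 derives f2 once p0's fact f0 is quoted to it.
kbs = {"p0": {"f0"}, "p1": set()}
rules = [("p1", "f2", [("p0", "f0")])]
print(fixpoint(kbs, rules))  # {('p0', 'f0'), ('p1', 'f2')}
```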

Soundness Requirement

Definition (Soundness)

A distributed proof system D[I] is sound if every fact it proves is also provable in the reference system, i.e., fixpointD[I](KB) ⊆ fixpointD[IS](KB) for every initial state KB.

Any confidentiality-preserving system D[I] should therefore not prove a fact that is not provable with the reference system D[IS].

Outline

• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
  – DAC system
  – NE system
  – CE system
• Conclusion

Confidentiality Policies

• Each principal defines a discretionary access-control policy on its local facts
• Each confidentiality policy is defined with the predicate release(principal_name, fact_name)
• E.g., if Alice is willing to disclose her location to Bob, she could add the policy release(Bob, loc(Alice, L)) to her knowledge base (see the sketch below)
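As a toy illustration of such a policy in a knowledge base (all names hypothetical), the release predicate can be stored as an ordinary fact and consulted before any disclosure:

```python
# Alice's knowledge base: a confidential location fact plus a
# discretionary release policy that authorizes Bob (and only Bob).
alice_kb = {
    ("loc", "Alice", "room101"),           # confidential local fact
    ("release", "Bob", ("loc", "Alice")),  # policy: Bob may learn loc(Alice, L)
}

def may_disclose(kb, requester, predicate):
    """Check the discretionary access-control policy before quoting a fact."""
    return ("release", requester, predicate) in kb

print(may_disclose(alice_kb, "Bob", ("loc", "Alice")))    # True
print(may_disclose(alice_kb, "Carol", ("loc", "Alice")))  # False
```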

Attack Model

• A set A of malicious, colluding principals tries to infer the truth of a confidential fact f in a non-malicious principal pi’s knowledge base KBi

[Figure: the malicious set A inside system D]

• Fact f0 is confidential because no principal in A is authorized to learn its truth
• Fact f1 is NOT confidential because p4 is authorized to learn its truth

Attack Model (Cont.)

[Figure: the malicious set A inside system D]

Malicious principals use only their initial and final states to perform inferences.

Sutherland’s non-deducibility model captures inferences by considering all possible worlds W

Consider two information functions v1: W → X and v2: W → Y.

[Figure: worlds W with a public view v1: W → X and a private view v2: W → Y; an observation x determines the compatible worlds W’ = { w ⎢ v1(w) = x }, whose private images form Y’ ⊆ Y; any private value outside Y’ is marked “This cannot be possible!”]
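In symbols, v2 is non-deducible from v1 when the public view rules out no private value: for every x ∈ v1(W) and every y ∈ v2(W), there exists a world w’ ∈ W with v1(w’) = x and v2(w’) = y.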

Nondeducibility considers information flow between two information functions over system configurations:

• W: the set of initial configurations
• v1: the initial and final states of the malicious principals in set A
• v2: the confidential facts that are actually maintained by the non-malicious principals

[Figure: information flow between v1 and v2]

Safety Definition

We say that a distributed proof system D[P, I] is safe if, for every possible initial state KB, every possible subset of principals A, and every possible subset of confidential facts Q, there exists another initial state KB’ such that

1. v1(KB) = v1(KB’), and
2. Q = v2(KB’).

Condition 1 says the malicious principals in A have the same initial and final local states; condition 2 says the non-malicious principals could possess any subset of confidential facts.
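Compactly: ∀KB ∀A ∀Q ∃KB’ : v1(KB) = v1(KB’) ∧ v2(KB’) = Q. The attackers’ view of KB’ is identical to their view of KB, yet KB’ realizes the arbitrary confidential-fact set Q, so the attackers’ observations reveal nothing about which confidential facts actually hold.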

Outline

• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
  – DAC system
  – NE system
  – CE system
• Conclusion

DAC System D[IDAC]

Enforce confidentiality policies on a principal-to-principal basis

[Rule images not recoverable: (COND) and (DAC-SAYS)]
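A plausible reading of (DAC-SAYS), given the release predicate introduced earlier, is: if f ∈ KBj and release(pi, f) ∈ KBj, then pi says (pj says f). That is, pj quotes a fact to pi only when pj’s own policy authorizes pi; (COND) remains as in the reference system.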

Example Derivations in D[IDAC]

[Figure: an example derivation applying (DAC-SAYS) and then (COND)]

D[P, IDAC] is safe because one principal’s local derivations are invisible to the others

• Let P and A be {p0, p1} and {p1}, respectively

[Figure: two alternative knowledge bases KB0 and KB’0 next to KB1; principal p1 cannot distinguish KB0 from KB’0]

NE System D[INE]

• Introduce a function Ei to represent an encrypted value
• Associate each fact or quoted fact q with an encrypted value e
• Each principal performs inference on encrypted facts (q, e)
• Principals cannot infer the truth of an encrypted fact without decrypting it
• The TTP discards encrypted facts from the final system state

Inference Rules INE

[Rule images not recoverable: (ECOND), (ENC-SAYS), (DEC1), and (DEC2)]

Example Derivations

[Figure: a derivation applying (ENC-SAYS), (ECOND), (ENC-SAYS), (DEC1), (DEC1), (DEC2), and (ECOND) in sequence]

Analysis of System D[INE]

• The strategy we used for the DAC system does not work
• We need to make sure that every malicious principal receives an encrypted fact of the same structure

[Figure: the malicious principals A receive structurally identical encrypted facts derived from KB0]

NE System is Safe

• All the encrypted values must be decrypted in the exact reverse order (illustrated in the sketch below)
• We can therefore collapse a proof of a malicious principal’s fact so that all the confidential facts are mentioned only in non-malicious principals’ rules
• Thus, we can make all the confidential facts invisible to the malicious principals by modifying the non-malicious principals’ rules
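A toy illustration of this last-in-first-out property, using keyed permutations as a stand-in for the NE system’s (unspecified) non-commutative encryption; layers composed in one order can only be peeled in the exact reverse order:

```python
import random

def keyed_perm(key, n):
    """Derive a deterministic permutation of range(n) from a key."""
    idx = list(range(n))
    random.Random(key).shuffle(idx)
    return idx

def apply_perm(perm, data):
    return [data[i] for i in perm]

def invert(perm):
    inv = [0] * len(perm)
    for pos, i in enumerate(perm):
        inv[i] = pos
    return inv

msg = list(b"p0 says f0")
layer1 = keyed_perm("k1", len(msg))   # p1's "encryption"
layer2 = keyed_perm("k2", len(msg))   # p2's "encryption"

cipher = apply_perm(layer2, apply_perm(layer1, msg))  # E2(E1(m))

# Peeling in the exact reverse order recovers the message ...
ok = apply_perm(invert(layer1), apply_perm(invert(layer2), cipher))
# ... while the wrong order almost certainly does not.
bad = apply_perm(invert(layer2), apply_perm(invert(layer1), cipher))
print(bytes(ok))   # b'p0 says f0'
print(bytes(bad))  # scrambled bytes
```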

Conversion Method – Part 1

• Keep collapsing proofs by modifying non-malicious principals’ rules
  – If a proof contains a subsequence of a certain shape [proof fragments not recoverable], replace that subsequence with its collapsed form
• Eventually, all the confidential facts appear only in non-malicious principals’ rules

Conversion Method – Part 2

• Given a set of quoted facts Q that should be in KB’:
• Case 1: (pi says f) is not in Q, but f is in KBi*
  – Remove (pi says f) from the body of every non-malicious principal’s rule
• Case 2: (pi says f) is in Q, but f is not in KBi*
  – Remove every non-malicious principal’s rule whose body contains (pi says f)

CE System D[ICE] is NOT Safe

• An encrypted value can be decrypted in any arbitrary order (see the sketch below)
• Consequently, we cannot collapse a proof as we did for the NE system

[Rule image not recoverable: (CE-DEC)]
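For contrast with the NE sketch above, here is a toy commutative scheme in the SRA / Pohlig–Hellman style (E(m) = m^e mod p); it illustrates why commutativity defeats the reverse-order argument, and is not the CE system’s actual construction:

```python
# Exponents multiply, so encryption/decryption layers commute and can
# be peeled in ANY order -- the property that breaks the NE safety proof.
p = 2**127 - 1  # a Mersenne prime, chosen purely for illustration

def keypair(e):
    # d is the inverse exponent; requires gcd(e, p - 1) == 1
    return e, pow(e, -1, p - 1)

e1, d1 = keypair(65537)
e2, d2 = keypair(257)

m = 424242
cipher = pow(pow(m, e1, p), e2, p)   # encrypt with key 1, then key 2

# Strip key 1's layer FIRST, even though it was applied first:
out = pow(pow(cipher, d1, p), d2, p)
print(out == m)  # True -- decryption order does not matter
```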

Summary

• Developed formal definitions of safety for distributed proof systems based on the notion of nondeducibility
• Showed that the NE system, which derives more facts than the DAC system, is indeed safe
• Showed that the CE system, which extends the NE system with commutative encryption, is unsafe
• The proof system with the maximum proving power therefore lies somewhere between the NE and CE systems

Thank you!