Confidentiality-preserving Proof Theories for Distributed Proof Systems Kazuhiro Minami National Institute of Informatics FAIS 2011




Page 1

Confidentiality-preserving Proof Theories for Distributed Proof Systems

Kazuhiro Minami, National Institute of Informatics

FAIS 2011

Page 2

Distributed proving is an effective way to combine information in different administrative domains

• Distributed authorization
– Make a granting decision by constructing a proof from security policies
– Examples: DL [Li03], DKAL [Gurevich08], SD3 [Jim01], SecPAL [Becker07], and Grey [Bauer05]

• Data fusion in pervasive environments
– Infer a user’s activity from sensor data owned by different organizations

Page 3

Distributed Proof System

• Consists of multiple principals, each comprising a knowledge base and an inference engine
• Constructs a proof by exchanging proofs in a peer-to-peer way
• Supports rules and facts in Datalog with the says operator (e.g., BAN logic)

[Figure: example inference rule in which a premise is a quoted fact]
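The Datalog-with-says setting above can be made concrete with a minimal, illustrative Python encoding. The class and field names here are my own sketch, not the paper's notation: a quoted fact is "p says f", and a rule's head becomes derivable once every quoted fact in its body holds.

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class Quoted:
    """A quoted fact 'p says f': principal p vouches for fact f."""
    principal: str
    fact: str

@dataclass(frozen=True)
class Rule:
    """A Datalog rule: the head is derivable once every quoted
    fact in the body holds."""
    head: Quoted
    body: FrozenSet[Quoted]

# Example: p0 derives f0 once (p1 says f1) and (p2 says f2) both hold.
r = Rule(head=Quoted("p0", "f0"),
         body=frozenset({Quoted("p1", "f1"), Quoted("p2", "f2")}))
```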

Page 4

Protecting each domain’s confidential information is crucial

• Each organization in a virtual business coalition needs to protect its proprietary information from the others

• A location server must protect users’ location information with proper privacy policies

• To this end, principals in a distributed proof system could limit access to their sensitive information with discretionary access-control policies

Page 5

Determining the safety of a system involving multiple principals is not trivial

Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2. What if p2 still derives fact f2?

Page 6

Problem Statement

• How should we define confidentiality and safety in distributed proof systems?
• Is it possible to derive more facts than a system that enforces confidentiality policies on a principal-to-principal basis?
• If so, is there an upper bound on the proving power of distributed proof systems?

Page 7

Outline

• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
– DAC system
– NE system
– CE system

• Conclusion

Page 8

Abstract System Model

• Parameterize a distributed proof system D with a set of inference rules I and a finite set of principals P (i.e., D[P, I])

• Consider only the initial and final states of system D, based on a trusted third-party (TTP) model

Datalog inference rule:

Page 9

Reference System D[IS]

(COND)

(SAYS)

• The body of a rule contains a set of quoted facts (e.g., q1 = (p1 says f1))
• All information is freely shared among principals

Page 10

TTP is a fixpoint function that computes the final state of a system

Trusted Third Party (TTP)

[Figure: principals p1 … pn submit their knowledge bases KB1 … KBn to the TTP, which applies the inference rules I and returns fixpoint1(KB) … fixpointn(KB)]
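The TTP's fixpoint for the reference system D[IS], where all information is freely shared, can be sketched executably. This is an illustrative Python model, not the paper's notation: quoted facts are (principal, fact) pairs, (SAYS) turns each local fact f of pi into (pi says f), and (COND) fires rules until nothing new is derivable.

```python
def ttp_fixpoint(kbs, rules):
    """kbs: {principal: set of local fact names}.
    rules: {principal: [(head_fact, frozenset of (principal, fact) body)]}.
    Returns the set of quoted facts (p says f) derivable in D[IS]."""
    # (SAYS): every local fact becomes a quoted fact visible to all.
    quoted = {(p, f) for p, facts in kbs.items() for f in facts}
    changed = True
    while changed:
        changed = False
        for p, prules in rules.items():
            for head, body in prules:
                # (COND): fire a rule once its whole body is derived.
                if body <= quoted and (p, head) not in quoted:
                    quoted.add((p, head))
                    changed = True
    return quoted

kbs = {"p0": {"f0"}, "p1": set()}
rules = {"p1": [("f1", frozenset({("p0", "f0")}))]}  # p1: f1 :- p0 says f0
```

Because information flows freely in the reference system, p1 derives f1 from p0's fact f0 here.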

Page 11

Soundness Requirement


Definition (Soundness)

A distributed proof system D[I] is sound if

Any confidentiality-preserving system D[I] should not prove a fact that is not provable with the reference system D[IS]

Page 12

Outline

• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
– DAC system
– NE system
– CE system

• Conclusion

Page 13

Confidentiality Policies

• Each principal defines a discretionary access-control policy on its local facts

• Each confidentiality policy is defined with the predicate release(principal_name, fact_name)

• E.g., if Alice is willing to disclose her location to Bob, she could add the policy release(Bob, loc(Alice, L)) to her knowledge base
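A minimal sketch of storing and querying such release policies, assuming a knowledge base is simply a set of tuples (the encoding is illustrative, not the paper's):

```python
def may_release(kb, requester, fact):
    """True iff kb contains a release(requester, fact) policy."""
    return ("release", requester, fact) in kb

# Alice authorizes Bob, and only Bob, to learn her location.
alice_kb = {("release", "Bob", "loc(Alice, L)")}
```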

Page 14

Attack Model

• A set A of malicious, colluding principals tries to infer the truth of a confidential fact f in a non-malicious principal pi’s knowledge base KBi

[Figure: system D with malicious set A]

Fact f0 is confidential because no principal in A is authorized to learn its truth

Fact f1 is NOT confidential because p4 is authorized to learn its truth

Page 15

Attack Model (Cont.)

[Figure: system D with malicious set A]

Malicious principals use only their initial and final states to perform inferences

Page 16

Attack Model (Cont.)

[Figure: system D with malicious set A]

Malicious principals use only their initial and final states to perform inferences

Page 17

Sutherland’s non-deducibility model captures inferences by considering all possible worlds W

Consider two information functions v1: W → X and v2: W → Y.

[Figure: the public view v1: W → X and the private view v2: W → Y over worlds W; observing v1(w) = x restricts the worlds to W′ = { w ⎢ v1(w) = x }; if W′ rules out some private value in Y (“this cannot be possible!”), information leaks from v1 to v2]
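Sutherland's condition can be stated executably for a finite set of worlds. The sketch below is my own encoding: nothing flows from the public view v1 to the private view v2 iff, for every world w and every attainable private value y, some world agrees with w on v1 while taking value y under v2.

```python
from itertools import product

def nondeducible(worlds, v1, v2):
    """Sutherland non-deducibility over a finite world set: the public
    view v1 reveals nothing about the private view v2 iff every observed
    public value is compatible with every attainable private value."""
    ys = {v2(w) for w in worlds}
    for w, y in product(worlds, ys):
        if not any(v1(w2) == v1(w) and v2(w2) == y for w2 in worlds):
            return False
    return True

# Worlds are pairs of independent bits.
worlds = [(a, b) for a in (0, 1) for b in (0, 1)]
```

Observing the first bit reveals nothing about the second, so `nondeducible` holds for v1 = first bit, v2 = second bit; it fails if v1 exposes the whole world.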

Page 18

Nondeducibility considers information flow between two information functions over system configurations

• W: the set of initial configurations
• Function v1: the initial and final states of the malicious principals in set A
• Function v2: the confidential facts that are actually maintained by non-malicious principals

[Figure: information flow from v1 to v2]

Page 19

Safety Definition

We say that a distributed proof system D[P, I] is safe if, for every possible initial state KB, for every possible subset of principals A, and for every possible subset of confidential facts Q, there exists another initial state KB’ such that

1. v1(KB) = v1(KB’), and
2. Q = v2(KB’).

The malicious principals in A have the same initial and final local states

Non-malicious principals could possess any subset of confidential facts

Page 20

Outline

• System model based on a TTP
• Safety definition based on non-deducibility
• Safety analysis
– DAC system
– NE system
– CE system

• Conclusion

Page 21

DAC System D[IDAC]

Enforces confidentiality policies on a principal-to-principal basis

(COND)

(DAC-SAYS)
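The (DAC-SAYS) restriction can be sketched as a fixpoint computation in which principal pi discloses (pi says f) to pj only when pi holds a release(pj, f) policy. The Python encoding below is illustrative, not the paper's notation:

```python
def dac_fixpoint(kbs, releases, rules):
    """kbs: {p: set of local fact names};
    releases: {p: set of (recipient, fact) release policies};
    rules: {p: [(head_fact, frozenset body of (speaker, fact) pairs)]}.
    Returns {p: set of quoted facts (speaker, fact) visible to p}."""
    view = {p: {(p, f) for f in kbs[p]} for p in kbs}
    changed = True
    while changed:
        changed = False
        for pi in kbs:
            # (DAC-SAYS): pi discloses its own facts only to
            # recipients named in a release policy.
            for (sp, f) in list(view[pi]):
                if sp != pi:
                    continue
                for pj in kbs:
                    if (pj, f) in releases.get(pi, set()) and (pi, f) not in view[pj]:
                        view[pj].add((pi, f))
                        changed = True
            # (COND): fire pi's rules over the facts pi can see.
            for head, body in rules.get(pi, []):
                if body <= view[pi] and (pi, head) not in view[pi]:
                    view[pi].add((pi, head))
                    changed = True
    return view

kbs = {"p0": {"f0"}, "p1": set(), "p2": set()}
releases = {"p0": {("p1", "f0")}}                    # p0 releases f0 to p1 only
rules = {"p1": [("f1", frozenset({("p0", "f0")}))]}  # p1: f1 :- p0 says f0
```

In this configuration p1 learns (p0 says f0) and derives f1, while p2, which is not authorized, sees nothing.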

Page 22

Example Derivations in D[IDAC]

(DAC-SAYS)

(COND)

Page 23

D[P, IDAC] is Safe because deviations performed by one principal are transparent to others

Let P and A be {p0, p1} and {p1}, respectively

[Figure: knowledge bases KB0 and KB’0 alongside KB1]

Principal p1 cannot distinguish KB0 and KB’0

Page 24

NE System D[INE]

• Introduce function Ei to represent an encrypted value
• Associate each fact or quoted fact q with an encrypted value e
• Each principal performs inference on an encrypted fact (q, e)
• Principals cannot infer the truth of an encrypted fact without decrypting it
• The TTP discards encrypted facts from the final system state
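The key property used later, that NE encryption layers must come off in exactly the reverse order they were applied, can be illustrated with a toy "onion" encoding. This is purely illustrative (the paper's Ei are actual encryption functions, not tagged tuples):

```python
def encrypt(principal, value):
    """E_i: wrap value in one more layer owned by `principal`."""
    return (principal, value)

def decrypt(principal, value):
    """Strip the outermost layer, but only if `principal` owns it."""
    owner, inner = value
    if owner != principal:
        raise PermissionError(f"{principal} cannot strip {owner}'s layer")
    return inner

# f encrypted by p1, then by p2: p2 must decrypt before p1 can.
e = encrypt("p2", encrypt("p1", "f"))
```

Attempting to strip p1's inner layer first fails, which mirrors the strict reverse-order decryption the NE safety argument relies on.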

Page 25

Inference Rules INE

(ECOND)

(DEC1) (DEC2)

(ENC-SAYS)

Page 26

Example Derivations

(ENC-SAYS)

(ECOND)

(ENC-SAYS)

(DEC1)

(DEC1)

(DEC2)

(ECOND)

Page 27

Analysis of System D[INE]

• The strategy we used for the DAC system does not work
• We need to ensure that every malicious principal receives an encrypted fact of the same structure

[Figure: malicious principals in A each observing encrypted facts derived from KB0]

Page 28

NE System is Safe

• All the encrypted values must be decrypted in the exact reverse order

• We can collapse a proof for a malicious principal’s fact such that all the confidential facts are mentioned only in non-malicious principals’ rules

• Thus, we can make all the confidential facts transparent to the malicious principals by modifying non-malicious principals’ rules

Page 29

Conversion Method – Part 1

• Keep collapsing proofs by modifying non-malicious principals’ rules
– If a proof contains a subsequence

replace that subsequence with

• Eventually, all the confidential facts appear only in non-malicious principals’ rules

Page 30

Conversion Method – Part 2

• Given a set of quoted facts Q that should be in KB’
• Case 1: (pi says f) is not in Q, but f is in KBi*
– Remove (pi says f) from the body of every non-malicious principal’s rule
• Case 2: (pi says f) is in Q, but f is not in KBi*
– Remove all non-malicious principals’ rules whose body contains (pi says f)
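The two cases can be sketched as a single pass over the non-malicious principals' rules. The encoding below is my own: rules are (head, body) pairs and quoted facts are (principal, fact) tuples; `Q` is the target set of quoted facts for KB', and `actual` holds the quoted facts whose fact really lies in KBi*.

```python
def convert_rules(rules, Q, actual):
    """Apply the Part-2 conversion to a list of rules.
    Case 1: q not in Q but truly held  -> delete q from every body.
    Case 2: q in Q but not truly held  -> drop rules mentioning q."""
    out = []
    for head, body in rules:
        # Case 2: the rule's body mentions a quoted fact that should
        # hold in KB' but does not actually hold, so drop the rule.
        if any(q in Q and q not in actual for q in body):
            continue
        # Case 1: remove quoted facts that actually hold but should
        # not hold in KB', so the converted rule no longer needs them.
        body = frozenset(q for q in body if q in Q or q not in actual)
        out.append((head, body))
    return out

rules = [("f", frozenset({("p1", "g")})),   # Case 1 applies to (p1, g)
         ("h", frozenset({("p2", "k")}))]   # Case 2 applies to (p2, k)
Q = {("p2", "k")}
actual = {("p1", "g")}
```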

Page 31

CE System D[ICE] is NOT safe

• An encrypted value can be decrypted in an arbitrary order

• Consequently, we cannot collapse a proof as we did for the NE system

(CE-DEC)

Page 32

Summary

• Develop formal definitions of safety for distributed proof systems based on the notion of nondeducibility

• Show that the NE system, which derives more facts than the DAC system, is indeed safe

• Show that the CE system, which extends the NE system with commutative encryption, is NOT safe

• The proof system with the maximum proving power lies somewhere between the NE and CE systems

Page 33

Thank you!