SECURITY

In Trust We Trust

Keith W. Miller, University of Illinois at Springfield
Jeffrey Voas, National Institute of Standards and Technology
Phil Laplante, Pennsylvania State University

A variation of a well-known logic problem goes like this: You're traveling to Happytown when you arrive at a fork in the road at which identical twin brothers man a tollbooth. You have been told that one brother is a complete liar, and the other is completely honest. But because the brothers are identical, you don't know which one to trust. You also know that you can pose only one question to either of the brothers, and you have no opportunity to test the veracity of a brother's answer a priori.

To take the correct branch of the fork, what question should you ask, and to which brother should you pose it? The answer: ask either brother, "Which branch would your brother tell me to take to reach Happytown?" You then take the branch opposite the one given in the answer. Logically, the solution is relatively simple, but the problem becomes much more difficult if you alter the trust conditions. For example, what if either brother is apt to tell the truth only part of the time?
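
To make the reasoning concrete, here is a minimal sketch in Python. The branch names and function interfaces are our own illustrative assumptions, not part of the puzzle's statement; the sketch checks that the "ask about your brother, then take the opposite" strategy works no matter which brother you happen to ask.

```python
# Minimal sketch of the two-brothers puzzle. Branch names are
# illustrative; suppose the left branch leads to Happytown.
CORRECT, WRONG = "left", "right"

def honest(branch: str) -> str:
    return branch  # reports exactly what is true

def liar(branch: str) -> str:
    return WRONG if branch == CORRECT else CORRECT  # always inverts

def opposite(branch: str) -> str:
    return WRONG if branch == CORRECT else CORRECT

def ask(brother, other) -> str:
    # "Which branch would your brother tell me to take to reach Happytown?"
    return brother(other(CORRECT))

# Whichever brother mans the booth, the opposite of the answer is correct:
assert opposite(ask(honest, liar)) == CORRECT
assert opposite(ask(liar, honest)) == CORRECT
```

Either way the question passes through exactly one liar, so the answer is always wrong, and its opposite is always right.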

To consider this problem of trust in the context of cloud computing, we must first understand the nature of trust. If we reach back far enough, the word "trust" is associated with "true" (www.merriam-webster.com/netdict/trust). That connection makes sense to us today, but a modern understanding of trust also includes an element of risk. If you know something has happened, then you don't need to trust someone else's testimony. For whatever reasons, trust requires us to take a bit of a flier. Trust is less confident than "know," but also more confident than "hope."

Trust is one of a few English words that is both a verb and a noun. We give trust to someone by trusting them. The word "trustworthy" extends the idea, establishing that some entities are likely to be good targets for risk taking, and others are not. Clearly, both the idea and the word originally applied to people, but the idea of "trustworthy computing" is now commonplace. For example, the National Science Foundation funds projects under that title (www.nsf.gov/funding/pgm_summ.jsp?pims_id=503326).

Legally, a trust is "Property that is held by one party, the trustee, for the benefit of another, the beneficiary. Trust also encompasses any relationship in which one acts as a fiduciary or guardian for another" (www.yourdictionary.com/law/trust). This gives us another slant on trust as a thing held by one person for another.

Trust isn't a static property; it must be evaluated perpetually. Someone we trust today could betray us tomorrow. Can we regain trust in that person? Can the level of trust ever be the same?

TRUST AND E-TRUST IN THE INFORMATION AGE

Philosophers have long acknowledged the importance of trust in relationships. For example, the Stanford Encyclopedia of Philosophy includes an extensive entry on trust (http://plato.stanford.edu/entries/trust), complete with a bibliography reflecting that the ethics of trust is an active area of discussion among philosophers. One research area particularly active today is trust mediated electronically—"e-trust," if you will—although that term isn't at all standardized. Issues involving e-trust have broadened and complicated both philosophical and ethical notions of trust.

On the surface, the extension of traditional, face-to-face trust to e-trust seems straightforward, but scholars point out that the two types of encounters are significantly distinct from each other and, therefore, e-trust must be studied separately. One such area is e-commerce, in which establishing trust between buyers and sellers online is a critical issue.

Another intriguing notion is that we consider trust relationships to involve not only humans, but also artificial agents. As computer software becomes more sophisticated, new possibilities for relationships arise between humans, programs, webbots, robots, and the like.

Trusting an entity that isn't human is not particularly novel; after all, it makes sense to trust (or mistrust) a particular bridge or airplane design. But it's novel to consider the idea that an artificial agent might be capable of trusting (or mistrusting) either another artificial agent or a human. Once we allow for that possibility, human face-to-face trust becomes a special case in a much broader range of possibilities (F. Grodzinsky, K. Miller, and M. Wolf, "Toward a Model of Trust and E-Trust Processes Using Object-Oriented Methodologies," Proc. ETHICOMP, 2010; www.ccsr.cse.dmu.ac.uk/conferences/ethicomp/ethicomp2010/ethicomp2010_proceedings.php).

If we represent a human as H, an artificial agent as A, physical (including face-to-face) encounters as P, and electronically mediated encounters as E, there are eight separate kinds of encounters to consider when thinking about trust (the sketch after the list enumerates them):

• HHP, the traditional notion of trust in which one human trusts another based on face-to-face encounters;

• HHE, electronically mediated trust—one human trusting another;

• HAP, a human trusts an artificial agent based on physical encounters;

• HAE, a human trusts an artificial agent based on electronically mediated encounters;

• AHP, an artificial agent trusts a human based on physical encounters;

• AHE, an artificial agent trusts a human based on electronically mediated encounters;

• AAP, an artificial agent trusts another artificial agent based on physical encounters; and

• AAE, an artificial agent trusts another artificial agent based on electronically mediated encounters.
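
One way to see that this taxonomy is exhaustive is to generate it mechanically. The sketch below, our own illustrative encoding rather than anything from the column, enumerates truster type × trustee type × medium and reproduces the eight codes above.

```python
from itertools import product

AGENTS = ("H", "A")  # human, artificial agent
MEDIA = ("P", "E")   # physical, electronically mediated

# Every (truster, trustee, medium) triple is one kind of trust encounter.
encounters = ["".join(triple) for triple in product(AGENTS, AGENTS, MEDIA)]
print(encounters)
# ['HHP', 'HHE', 'HAP', 'HAE', 'AHP', 'AHE', 'AAP', 'AAE']
```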

TRUST IN (AND TRUSTING) DATA

Each of the trust relationships among humans and artificial agents requires a decision-making entity. But we contend that a different kind of e-trust, trust in data, doesn't require such an agent. This concept has yet to be discussed in the growing literature on e-trust, but we consider it vitally important. Trust in data is more like our previous bridge example and less like trust between a human and an artificial agent. There is no expectation of an active electronic trust partner in data, but there is still a risky reliance—in this case based on the stored data's integrity.

Trusting data involves ensuring that it's appropriate for use: accurate, precise, available, and uncorrupted. Which properties matter most depends on the particular user and use, but in each case, users must seek the correct data, then risk that their assessment is accurate.
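
Of those properties, "uncorrupted" is the most mechanically checkable. A minimal sketch, assuming the user kept a digest of the data when it was stored; the file path and digest parameters are illustrative:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Digest a file in chunks so large data sets need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_uncorrupted(path: str, expected_digest: str) -> bool:
    # Catches corruption at rest or in transit; accuracy, precision,
    # and availability still need application-specific checks.
    return sha256_of(path) == expected_digest
```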

Trusting data is an extension of trustworthy computing, but it focuses on the data alone. If we declare that a particular system is trustworthy, we assert the integrity of the software and hardware as well as the processing steps and the data inherent in the process.

WHERE THE DATA RESIDES AFFECTS TRUST

The cloud computing paradigm dramatizes the importance of trust in data. When users risk their data's integrity, where that data resides takes on new importance. We tend to think of "our data" on "our machines." Assuring the contextual integrity of data, its security (including appropriate backups), and its availability traditionally is done in-house, on machines at least someone in our organization can physically touch. Perhaps the data resides on a PC in my office, in which case only I can touch the machine that stores it. Or the data might reside on a shared server, in which case someone in my IT department has control of the machine and data.

When the data is stored in the cloud, however, the situation becomes more complicated. Now we enter the realm of "trust without touch" (J. Zheng et al., "Trust without Touch: Jump-Start Trust with Social Chat," Proc. Conf. Human Factors in Computing Systems, Extended Abstracts, ACM Press, 2001, pp. 293-294). Yes, the data resides on hardware that someone can touch, but the point of cloud computing is that the data's user and owner do not own the hardware on which the data resides. If your data is in the cloud, "trusting your data" has two interacting layers: trust in the nature of the data itself, and trust in the system (remote from you) of hardware, software, and humans who keep that data whole and accessible.

Trusting data when it resides in the cloud is a dramatic example of the importance of a sociotechnical system (D.G. Johnson and J. Wetmore, eds., Technology and Society: Building Our Sociotechnical Future, MIT Press, 2007). Analyzing the technology is only one aspect of a much larger, tightly connected system of people, protocols, and artifacts, a framing that sociologists and others have used for decades. But the complexity of computing systems such as the Internet, and the growing importance of computing in our society, have made the study of sociotechnical systems increasingly important when assessing the impact of a computing innovation.

TRUSTING CLOUD COMPUTING'S COMPLEXITIES

Cloud computing offers a case in point. Trusting in "my data" requires relying on a huge and complicated system of people, things, and agreements, ranging from economic concerns to physical ones such as what the most recent backup of my data will be if disaster strikes. In the case of the brothers at the fork in the road, we need only model two people and one thing (the toll, which could represent the computing cost).

While it would be convenient to alter the rules of the logic game to pretest the brothers' veracity, in cloud computing, do we really have the luxury of pretesting services prior to trusting them, or must we trust them at first blush? Even if we could pretest cloud services, what kinds of questions should we ask, or what test computations should we conduct? We can't assume that services are either completely untrustworthy or completely trustworthy—sometimes a cloud will behave erratically or worse.

One of us knew a US Army soldier who served as an interrogator during the Vietnam War. At that time, the interrogation techniques involved asking a series of n questions to which the correct answers were known. The (n + 1)st question was the one with the unknown answer. If a prisoner answered the first n questions with candor, there was a high likelihood that the (n + 1)st question would also be answered with candor.

In this case, we argue that the probability of a lie being given on the (n + 1)st question (after n truthful responses) is 1/(n + 2). We frame this as a test-failure problem and use the result from Keith Miller and colleagues ("Estimating the Probability of Failure When Testing Reveals No Failures," IEEE Trans. Software Eng., Jan. 1992, pp. 33-43). Can we use a similar technique to assess a cloud's trustworthiness?
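
The 1/(n + 2) figure matches Laplace's rule of succession under a uniform prior on the prisoner's candor rate, which is one reading of the cited test-failure result. A minimal sketch of the arithmetic:

```python
from fractions import Fraction

def p_lie_next(n: int) -> Fraction:
    """After n candid answers and no lies, with a uniform prior on the
    respondent's candor rate, the chance the (n + 1)st answer is a lie
    is 1 / (n + 2): Laplace's rule of succession."""
    return Fraction(1, n + 2)

# Trust accumulates slowly: 1/3 after one candid answer, 1/12 after ten.
for n in (1, 10, 100):
    print(n, p_lie_next(n))
```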

TRUST AS A BINARY?

We'd like to pose a challenge that involves cloud computing: Is it useful to treat trust as a binary? Given that trust can't generally be guaranteed in digital systems or data, are there aspects that we can combine to give us some level of belief in its existence? For example, current cloud systems are advertised to have three nines of reliability (99.9 percent), thus suggesting around nine hours of downtime per year (www.google.com/apps/intl/en/business/infrastructure_security.html). For a 911 telephone system, such reliability would likely not be tolerated, but it might be "good enough" for clouds, unless a particular user desperately needs access to stored data during those downtimes.
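
The downtime arithmetic behind the "around nine hours" figure, as a quick sketch assuming a 365-day year:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(nines: int) -> float:
    """Hours per year a service may be down at a given number of nines."""
    unavailability = 10 ** (-nines)
    return HOURS_PER_YEAR * unavailability

print(annual_downtime_hours(3))  # three nines (99.9%): 8.76 hours/year
print(annual_downtime_hours(5))  # five nines: about 5 minutes/year
```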

So, if a cloud provider could guarantee enforcement of particular security policies, certain levels of fault tolerance, certain rules related to privacy, and how discarded hardware components were decommissioned, a certain number of nines—and certain policies on data integrity and redundancy—might then let users trust their IT needs to the cloud. However, is this enough to address trust without touch? Can we quantify the value of touch with respect to trust?

However, even a once-trusted cloud can be found to be unsafe upon discovery of some breach. One company that provides executive protection and security services to Wall Street firms conducts weekly building sweeps with a bomb-sniffing dog because a building trusted last week can't necessarily be trusted this week. The sweeps are never at the same time or in the same pattern. We probably have to test clouds this way too, sweeping each cloud node with "bomb-sniffing dogs" irregularly but periodically.
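
One way to make such sweeps periodic on average yet unpredictable in detail is to draw the gap between audits from an exponential distribution. This is our own illustrative scheme, not one from the column, and the cadence numbers are arbitrary:

```python
import random

def sweep_times(mean_gap_hours: float, horizon_hours: float, seed: int = 1):
    """Yield audit times with exponentially distributed gaps: a fixed
    average cadence, but no pattern an adversary can anticipate."""
    rng = random.Random(seed)
    t = rng.expovariate(1.0 / mean_gap_hours)
    while t <= horizon_hours:
        yield t
        t += rng.expovariate(1.0 / mean_gap_hours)

# Roughly weekly sweeps of a cloud node over one quarter:
print([round(t) for t in sweep_times(mean_gap_hours=168, horizon_hours=24 * 90)])
```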

But we can't stop there. The complex interactions among clouds create a system of systems. In this case, chain-of-custody trust must be enforced using Byzantine agreements, though this approach must be studied for loading and traffic implications. Data, and our reliance on it, provides a sociotechnical system we must trust, but only so long as we regularly verify that our trust is a reasonable risk.
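
Full Byzantine agreement protocols are beyond a sketch, but their flavor shows in a toy quorum read: with at most f arbitrarily faulty replicas, a value reported by at least f + 1 of them is vouched for by at least one honest replica. This simplification is ours, not a protocol the column prescribes:

```python
from collections import Counter

def quorum_read(responses: list[str], f: int) -> str | None:
    """Accept a replicated value only if more than f replicas agree on it,
    so at least one honest replica (among at most f faulty ones) backs it."""
    value, count = Counter(responses).most_common(1)[0]
    return value if count >= f + 1 else None

# Four replicas, at most one faulty:
print(quorum_read(["a", "a", "a", "b"], f=1))  # 'a'
print(quorum_read(["a", "b", "c", "d"], f=1))  # None: no trustworthy answer
```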

Cloud providers hope that such warranties in their service-level agreements (SLAs) are enough to overcome the touch concern. However, most providers offer little in the way of guarantees in their SLAs, leaving a "consumer beware" label on everything customers purchase.

Trust is essential for human relationships, and e-trust in its various forms is crucial to our relationship with computing. Every instance of trust involves taking a risk and trusting in our machines, data, and sociotechnical systems to hedge against future threats. We think it important to base such trust estimates explicitly on past performance and a clear-eyed assessment of whether or not the systems and people involved are worthy of trust. Bayesian estimation allows for a priori probability as well. This kind of assessment is only possible when we are conscious of the sociotechnical systems in which our computing occurs, and of the importance of safeguarding trust within those systems.

Keith W. Miller is the Schewe Professor in Liberal Arts and Sciences at the University of Illinois at Springfield. Contact him at [email protected].

Jeffrey Voas is a computer scientist at the National Institute of Standards and Technology. Contact him at [email protected].

Phil Laplante is a professor of software engineering in the School of Graduate Professional Studies at Pennsylvania State University. Contact him at [email protected].

Editor: Jeffrey Voas, National Institute of Standards and Technology; [email protected]