
Threats Perspective
Editor: S.W. Smith, [email protected]

Fairy Dust, Secrets, and the Real World


S.W. Smith, Dartmouth College

In an invited talk at a recent security conference, a noted member of our field's old guard derided cryptography as "fairy dust" that we sprinkle on our protocols and designs, hoping it magically will make them secure. While I am not sure I share his conclusions in general, I do share some of his cynicism. Too many of us believe breaking RSA is at least as hard as factoring when it's the other way around: breaking RSA is no harder than factoring (and it might even be easier). Furthermore, when designing security protocols and systems that use these properties, we depend too often on fairy dust: critical properties that we uncritically assume to be unassailable.

One of these critical assumptions is that secrets remain secret. This assumption underlies designs that make basic use of cryptography, such as an e-commerce server armed with an SSL private key, a user's desktop mail client equipped with S/MIME private keys for signed and encrypted email, or even a simple identity card that electronically authorizes its holder to receive some service, such as dormitory or lab access.

We design complex security architectures that begin with the assumption that if an entity uses a particular secret in a certain way, then that entity must be authentic: Only the SSL server www.foo.bar should be using www.foo.bar's private key in an SSL handshake; only Alice's email client should sign with Alice's private key. But making this assumption hold in the real world requires that the actual device that is the server or the mail client store and use the private key, without revealing it to adversaries.

More esoteric architectures use "trustable" entities in attempts to build secure protocols for problems associated with multiple parties and conflicts of interest. For example, flexible electronic auctions might be made secure by having each party use a private channel to inject a bidding strategy into a trusted mediator, which then plays these strategies against each other to reveal the highest bid. Practical private information retrieval might require that a small trusted component participate in the shuffling.

For such applications, the trusted entity building block starts by assuming a secure coprocessor platform—and architectures for secure coprocessors usually start from the assumption that the hardware ensures that critical private keys remain inaccessible to anyone except a well-defined computational entity within an untampered platform. In particular, the machine's owner, with direct access to the hardware, should not be able to get the keys—otherwise, the auctioneer can cheat, and the private information server can spy on the private queries. Indeed, the "trustability" of these trusted third parties critically depends on secrets remaining secret.

Thus, on many levels, it all comes down to keeping secrets. When designing and deploying security architectures and systems, it's easy to think about many other things: protocols, algorithms, key lengths, interoperability with current software, maybe even the user interface. But, at the foundation, something needs to store and use secrets. This storage and usage of secrets needs to be instantiated in the real world, as devices and computational processes. This jump from concept to reality introduces threats that designers can easily overlook. In this article, we look at some of these issues.

Storing secrets

To begin, how do we securely store secrets in a device? Storing them in a general-purpose computing environment is loaded with issues. It is risky to depend on a general-purpose operating system to protect an application's memory, given the historical tendency of OSs—particularly those highly integrated with desktop applications—to provide many avenues of access for malicious code. What does the OS do when it reclaims a page frame, or a disk block? Protection through careful coding also might not work, as CryptLib architect Peter Gutmann (http://online.securityfocus.com/archive/82/297827) recently pointed out, because common development compilers can easily optimize away a careful programmer's attempt to clear vulnerable memory. Further, should adversaries have the opportunity to wander through memory, they can clearly distinguish cryptographic keys as byte strings with significantly higher entropy. Adi Shamir and Nicko van Someren pointed this out in a paper presented at the Financial Cryptography Conference in 1999.
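To make Gutmann's point concrete, here is a minimal C sketch (illustrative only; the signing routine is a made-up stand-in, and the actual behavior depends on the compiler and optimization settings). Because the key buffer is never read after the memset, an optimizer may treat the call as a dead store and remove it; writing through a volatile pointer is one common way to keep the clearing code in place.

#include <stddef.h>
#include <string.h>

/* Hypothetical signing routine, shown only to illustrate the dead-store
 * problem: because `key` is never read after the memset(), an optimizing
 * compiler may legally delete the clearing call, leaving key bytes behind
 * on the stack. */
void sign_with_copy(const unsigned char *long_term_key,
                    const unsigned char *msg, size_t len)
{
    unsigned char key[32];
    memcpy(key, long_term_key, sizeof key);
    /* ... use key and msg to compute a signature ... */
    (void)msg; (void)len;
    memset(key, 0, sizeof key);   /* candidate for dead-store elimination */
}

/* One common countermeasure: write through a volatile pointer, so each
 * store counts as an observable side effect the compiler must preserve. */
void secure_zero(void *p, size_t n)
{
    volatile unsigned char *v = p;
    while (n--)
        *v++ = 0;
}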

Alternatively, instead of a general-purpose machine, we should use a specialized device to hide our secrets. In the 1990s, we witnessed the merging of several such device families: independent personal tokens that users (or other mobile entities) can carry with them; cryptographic accelerators to improve performance by offloading cryptographic computation from a host; and secure coprocessors to improve security by offloading sensitive computation from a host. My own exposure to the need for devices that actually keep secrets at a high level of assurance came from many years of designing applications and architectures for secure coprocessors. (For a thorough overview of architecture and applications of secure coprocessing, see www.research.ibm.com/secure_systems/scop.htm.)

Unfortunately, boundaries among specialized devices blurred because of overlapping requirements: personal tokens needed to be secure against thieves, or perhaps the users themselves; coprocessors and tokens ended up requiring cryptography; and accelerators needed physical security and programmability. (Of course, putting a general-purpose computational environment within an armored box reintroduces the general-purpose issues already discussed. As Gutmann would say, "lather, rinse, repeat.")

Besides, if we're hiding secrets in physical devices, how easy is it to just open them up and take a look? Way back in 1996, Ross Anderson and Markus Kuhn1 showed how easy it was to explore single-chip devices such as smart cards. Exploitable weaknesses included UV radiation to return chips to a "factory mode" where secrets are accessible; fuming nitric acid to remove potting compounds but not the electronics; microprobes and laser cutters; and focused ion beams to modify circuits.

Steve Weingart,2 a longtime hardware security architect and my occasional collaborator at IBM, followed up with a longer laundry list of attacks from his own experience. These extend to larger devices with more elaborate physical protections that try to zeroize sensitive memory before adversaries can reach it.

In addition to various probing approaches, careful machining (including using tools as mundane as sand, water, and hands) can be quite effective in removing barriers without triggering zeroization. Shaped-charge explosives can create plasma lances to separate sensitive memory from its destruction circuitry before the latter can do its work.

Even if the physical protections work, environmental factors such as extremely cold temperatures and radiation can cause SRAM to imprint and safely retain its secrets for an adversary to inspect after an attack. Even long-term storage of the same bits in SRAM can cause such imprinting.

Anderson and Kuhn continue with their work; a recent result involves using flash bulbs and a microscope to probe smart card electronically erasable programmable read-only memory.3 Other ongoing efforts in this spirit include penetrating physical protections in the XBox gaming device (www.xenatera.com/bunnie/proj/anatak/xboxmod.html). Weingart's work, however, focused primarily on defense (as we will discuss shortly). These researchers remain active in the area.

Using secrets

Even if we could hide a secret effectively, our device must occasionally perform operations with it. Using a secret is not a clean, abstract process; it must occur in real physical and computational environments. An adversary can observe and manipulate these environments with surprising effectiveness.

For a classic example of this approach, let's look back to password checking in the Tenex operating system, an early 1970s timesharing system for the PDP-10. At first glance, the number of attempts necessary to guess a secret password is exponential in the password's length. Tenex made it much easier to gain access because it checked a guess one character at a time, and stopped at the first mismatch. By lining up a guess across the boundary between a resident page and a nonresident page and observing whether a page fault occurred when the system checked the guess, an adversary could verify whether a specific prefix of a guess was correct.

The fact that the secret password comparison occurred on a real machine led to an observable property that turned an intractable exponential search into a feasible linear one.
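The heart of the flaw is nothing more than an early-exit comparison. The sketch below is a simplification (the real attack observed page faults rather than timing, and Tenex checked passwords in the kernel), but it shows the shape of the bug and the usual fix: a comparison whose observable behavior does not depend on where the first mismatch occurs.

#include <stddef.h>

/* Early-exit check, in the spirit of the Tenex flaw: how far the loop
 * gets (observable in Tenex via a page fault on the guess's second page)
 * tells the adversary how long a correct prefix is, so the password can
 * be recovered one character at a time -- linear, not exponential, work. */
int check_password_leaky(const char *guess, const char *secret, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (guess[i] != secret[i])
            return 0;                 /* stops at the first mismatch */
    return 1;
}

/* Constant-time variant: always touches every byte, and the result does
 * not reveal where (or whether) an early mismatch occurred. */
int check_password_constant(const char *guess, const char *secret, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(guess[i] ^ secret[i]);
    return diff == 0;
}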

If we fast forward to 1995, the same basic problem—an observable artifact lets an adversary verify the prefix of a guess—emerged for more abstract cryptographic devices.4 Let's consider modular exponentiation with a secret exponent, the core of the RSA cryptosystem.

The time that the modular operation takes depends on the exponent and the operand, and is well understood. Suppose an adversary guessed a secret exponent, calculated the time for that guess on an operand, and then actually measured that time. If only the first k bits of the guess were correct, then the adversary's model would be correct for the first k bits, but wrong for the remainder.

However, over enough samples, the difference between predicted and real times would form a distribution with variance proportional to n − k, so the adversary could confirm the correctness of a guessed k-bit prefix of the secret by calculating the variance. With enough samples, this artifact of the physical implementation of RSA (and other cryptosystems) turns an infeasible exponential search into a feasible linear one. Instantiating the cryptography in the real world leads to threats that do not always show up on a programmer's whiteboard.
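A rough sketch of the statistical argument, in LaTeX, under a simplifying assumption (close in spirit to, but simpler than, Kocher's model): the i-th of the n exponentiation steps takes an independent time t_i with variance sigma^2, T is the measured total time, e is measurement noise, and \hat{T}_k is the adversary's model of the first k steps.

\[
  T \;=\; \sum_{i=1}^{n} t_i \;+\; e ,
  \qquad
  \operatorname{Var}[t_i] \;=\; \sigma^2 .
\]
If the guess matches the first $k$ exponent bits, the model of those $k$ steps
is exact and cancels when subtracted, leaving
\[
  \operatorname{Var}\bigl[\,T - \hat{T}_k\,\bigr] \;\approx\; (n-k)\,\sigma^2 ,
\]
so the variance shrinks by roughly $\sigma^2$ for each additional correct bit.
A wrong bit means an unrelated quantity is subtracted, so the variance grows
instead of shrinking; over enough samples, this difference confirms or rejects
each candidate bit in turn, which is the feasible linear search described above.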

Paul Kocher's timing attack triggered an avalanche of side-channel analysis (www.research.ibm.com/intsec) in the open world, by Kocher and others.

In addition to the time-of-operation approach, physical devices have other observable physical characteristics that depend on hidden secrets. One natural characteristic is power. When complementary metal-oxide semiconductors switch, they consume power; an adversary could measure this consumption and attempt to deduce things about the operation. With simple power analysis, an adversary tries to draw conclusions from a simple power trace. SPA can be quite effective; a co-worker of mine managed to extract a DES key from a commercial device with a single power trace—an initial parity check led to a mix of 56 spikes, some short, some tall, one for each bit. More advanced differential power analysis looks at more subtle statistical correlations between the secret bits and power consumption.
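Why can a single trace be enough? Consider a textbook square-and-multiply loop (a deliberately simplified sketch, not any particular product's code, and with operands assumed small enough that the 64-bit products do not overflow): the multiplication executes only for the 1-bits of the exponent, so a trace that can tell squarings from multiplications reads the secret bits off directly.

#include <stdint.h>

/* Textbook left-to-right square-and-multiply.  The body of the `if`
 * runs exactly when an exponent bit is 1, so the sequence of squarings
 * and multiplications -- visible as distinct shapes in a power trace,
 * or as data-dependent timing -- spells out the secret exponent. */
uint64_t modexp_leaky(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1 % mod;
    for (int i = 63; i >= 0; i--) {
        result = (result * result) % mod;    /* square: every iteration  */
        if ((exp >> i) & 1)
            result = (result * base) % mod;  /* multiply: 1-bits only    */
    }
    return result;
}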

Other types of observable physical properties exploited by researchers in the last few years to reveal hidden secrets include EMF radiation and even the changes in room light from CRT displays. (In the classified world, the use of such emanations from the real-world instantiations of algorithms has fallen under the Tempest program, some of which has become declassified.)

In the attacks described earlier, an adversary learns information by observing physical properties of the device doing the computation. However, an adversary can sometimes learn more by carefully inducing errors in the computation.

In their physical attacks on smart cards, Anderson and Kuhn observed that carefully timed voltage spikes on a processor could disrupt its current instruction. Applying these spikes at critical points during an operation on a secret—such as when comparing the number of DES iterations completed against the number desired—can transform a designer's intended cryptographic operation into one much more amenable to cryptanalysis. (This provided a practical demonstration of speculative work on such differential fault analysis.)

In the last two years, researchers have focused attention on the API level of these devices.5,6 Besides physical properties, instantiation of abstract ideas in the real world also can lead to feature creep. As Mike Bond paraphrases Needham, clean abstract designs tend to become "Swiss Army knives." In particular, cryptographic accelerators have found major commercial application in banking networks: for ATM and credit card processing, devices need to transmit, encode, and verify PINs. However, the accumulation of usage scenarios leads to a transaction-set complexity that permits many clever ways to piece together program calls that disclose sensitive PINs and keys. Jolyon Clulow, in particular, discusses many amusing attacks possible from exploiting error behavior resulting from devious modifications of legitimate transaction requests.

Traditional defenses

How do we build devices that actually retain their secrets? Weingart2 gives a good overview of the multiple defense components:

• tamper evidence—ensuring that tampering causes some observable consequence
• tamper resistance—making it hard to tamper with the device
• tamper detection—having the device able to sense when tampering is occurring
• tamper response—having the device take some appropriate countermeasure

Weingart also presents the operating envelope concept he evolved in his earlier work. If certain aspects of the defense mechanism require certain environmental properties (such as voltage and temperature) to function, then the device should treat departures from that envelope as a tamper event.

Our team put these principles into practice. Our coprocessor architecture paper7 provides a more detailed discussion of a practical defense instantiation that our team designed and built at IBM. We dispensed with tamper evidence, because tamper evidence is worthless without an audit trail; our usage model did not guarantee one. Instead, we focused on resistance (the device was housed in two metal cans filled with resin); detection (the resin contained a conductive mesh, chemically and physically similar to the resin); and response (changes to the mesh—or other detected tamper events—triggered direct zeroization of the sensitive SRAM).

To enforce the operating envelope, we detected anomalies in voltage, temperature, and radiation from the moment the device left the factory. We also ensured that sensitive memory was regularly, randomly inverted to avoid imprinting. The large form factor left room for Faraday cages to protect against electromagnetic radiation and for power circuitry sophisticated enough to protect against SPA and DPA.
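A minimal sketch of the two habits just described: treat any excursion outside the operating envelope (or damage to the mesh) as tamper and zeroize immediately, and keep inverting sensitive SRAM so that no value sits still long enough to imprint. The sensor and memory interfaces below are hypothetical placeholders, not IBM's actual firmware, and the envelope bounds are purely illustrative.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hardware hooks -- placeholders for whatever the real
 * tamper-response circuitry and battery-backed SRAM driver provide. */
extern int  read_voltage_mv(void);
extern int  read_temperature_mc(void);      /* milli-degrees Celsius */
extern bool mesh_intact(void);
extern void zeroize_sram(void);              /* hardware-assisted clear */

/* Operating envelope: outside these bounds the defenses themselves may
 * not work, so departure from the envelope is treated as tamper. */
static bool within_envelope(void)
{
    int mv = read_voltage_mv();
    int mc = read_temperature_mc();
    return mv > 4500 && mv < 5500 &&         /* illustrative bounds */
           mc > -20000 && mc < 85000;
}

void tamper_monitor_step(uint8_t *sram, size_t n, bool *inverted)
{
    if (!mesh_intact() || !within_envelope()) {
        zeroize_sram();                      /* respond before probing */
        return;
    }
    /* Periodically invert sensitive SRAM so long-lived values do not
     * imprint; *inverted records the current sense of the data. */
    for (size_t i = 0; i < n; i++)
        sram[i] = (uint8_t)~sram[i];
    *inverted = !*inverted;
}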

To date, we know of no successful attack on the basic coprocessor platform or on the security configuration software that controls what the box does. We make such claims hesitantly, however, because we cannot prove the absence of flaws, just the absence of successful penetrations up to some point in time.

Starting with protection of secrets, we then made design choices in a product's lifecycle, software configuration, and other hardware protection that actually yield a secure coprocessor platform suitable for some trusted third-party applications.

Unfortunately, this was a somewhat Pyrrhic victory—the primary commercial use of our box was to house cryptographic accelerator software vulnerable to the API-level attacks discussed earlier.

New directions

Secure interaction in the distributed, heterogeneous, mutually suspicious environment that is today's Internet appears to require that we have trustable places to keep and wield secrets. Security designs start out by assuming these places exist. Our quick look at history suggests that it can be risky to assume that current computing technology can live up to this task. So what do we do? How do we keep secrets in the real world?

Looking at the weaknesses of traditional computing technologies, some researchers are exploring fundamentally different approaches. As an example, effectively hiding information in storage bits or circuit structure of a consumer-level chip has been difficult—an adversary always seems to find a way to peel off protections and probe. In response, researchers at MIT have proposed—and prototyped—silicon physical unknown functions8 that use the random delays in circuit elements as a secret. In theory, these delays are byproducts of the manufacturing process, and the only way to measure the delay is to use the circuit. If these hypotheses hold up, this technology could enable some interesting applications. Many researchers (including myself) are looking into this.
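The intended use is a simple challenge-response flow: record responses while the chip is still in trusted hands, then later ask the fielded device to answer a stored challenge that a clone, lacking the same manufacturing-induced delays, should get wrong. The sketch below is hypothetical (puf_response stands in for measuring the delay circuit) and glosses over the error correction that real, noisy silicon PUFs need.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: measures the chip's delay circuit on a given challenge.
 * Only the physical chip can evaluate this function; the verifier
 * records challenge/response pairs before the chip is deployed. */
extern uint32_t puf_response(uint64_t challenge);

struct crp { uint64_t challenge; uint32_t response; };

/* Enrollment: record responses to randomly chosen challenges while the
 * verifier still has trusted access to the device. */
void enroll(struct crp *table, size_t n, const uint64_t *challenges)
{
    for (size_t i = 0; i < n; i++) {
        table[i].challenge = challenges[i];
        table[i].response  = puf_response(challenges[i]);
    }
}

/* Later authentication: ask the device for the response to one stored,
 * never-reused challenge; a clone without the same physical delays
 * should not be able to answer correctly. */
bool authenticate(const struct crp *entry)
{
    return puf_response(entry->challenge) == entry->response;
}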

Other researchers have been moving away from silicon circuits to other physical forms of storage and computation. Ravi Pappu and his colleagues have built physical one-way functions9 from optical epoxy tokens containing tiny glass spheres. The "secret" is how this exact arrangement of spheres scatters laser light sent in from different angles. The researchers believe that determining the arrangement might be possible, but cloning a token, even from this information, is intractable.

Another novel direction is the use of microelectromechanical systems for infosecurity (www.cs.dartmouth.edu/~brd/Research/MEMS/ISTS/mems.html). MEMS are physical devices—levers, gears, springs—that are small: feature size less than 1 micron, and total size typically between 10 and 100 microns. Such small physical devices might not be as susceptible to circuit-based side channels such as EMF and power analysis, and (for the foreseeable future) fabrication cost places a very high threshold for adversaries.

The preceding new approaches start to blur our original problem. Do we want to hide a secret we choose? Is it sufficient to hide a random secret that we cannot choose? Is it sufficient to have a device with a unique property that cannot be physically duplicated?

This blurring indicates another new line of thinking: If technology cannot support our assumption of a trustable place, perhaps we can make do with a weaker assumption. In this direction, researchers have considered alternate trust models, and protocols to support these models.

On a high design level, examples include general multi-party computation, encrypted functions, and threshold cryptography, which transform a sensitive computation into a larger computation among many parties that is more resilient to various trust compromises. Harnessing such transformation in practical ways is an area of ongoing research. Some examples include distributed systems, mobile agent security, and ad hoc PKI. Another new trust model is white-box cryptography (see www.scs.carleton.ca/~paulv for some representative papers): encoding computation in such a way that it can hide its secret even if it runs on an adversary's computer.
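The secret-sharing primitive underneath threshold schemes gives the flavor of this transformation: the secret never needs to exist in any single place. The sketch below shows only the simplest n-of-n case, where all shares are required (a toy illustration, not a full threshold scheme such as Shamir's, which lets any k of n shares reconstruct); any subset missing even one share learns nothing about the secret.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* n-of-n secret splitting (assumes n >= 1): n-1 shares are random, and
 * the last is chosen so that the XOR of all shares equals the secret.
 * Threshold schemes relax "all n" to "any k of n" at the cost of
 * polynomial arithmetic over a finite field. */
void split_secret(const uint8_t *secret, size_t len,
                  uint8_t **shares, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = 0; j < len; j++)
            shares[i][j] = (uint8_t)rand();   /* demo only: use a CSPRNG */

    for (size_t j = 0; j < len; j++) {
        uint8_t acc = secret[j];
        for (size_t i = 0; i + 1 < n; i++)
            acc ^= shares[i][j];
        shares[n - 1][j] = acc;
    }
}

void combine_shares(uint8_t *secret, size_t len,
                    uint8_t *const *shares, size_t n)
{
    for (size_t j = 0; j < len; j++) {
        uint8_t acc = 0;
        for (size_t i = 0; i < n; i++)
            acc ^= shares[i][j];
        secret[j] = acc;
    }
}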

Other recent work tries to bridge these theoretical results with more practical systems views by reducing the role (and required trust) of the trustable place. Some examples include recent work in practical private information retrieval and mobile code security (where a trusted third party is involved only in a small part of the operation), and the commodity model of server-assisted cryptography and privacy protocols (where only partially trusted third parties participate, and do so before the protocol starts).

To some extent, recent commercial efforts to incorporate some hardware security into common desktops (the multi-vendor Trusted Computing Platform Alliance, and also Microsoft's Palladium initiative) fall into this category.10 Of course, given the history of sound ideas proving leaky when they become real devices in the real world, several aspects of these notes could have relevance for these new efforts.

Computation must exist in the physical world. Security designs that require secrets must hide and use them in the real world. Unfortunately, the real world offers more paths to secret storage and more observable computational artifacts than these security designs anticipate. Careful integration of physical defenses and security architecture can sometimes succeed against the adversary class designers consider. However, in the long term, we hope for either a quantum leap in physically defensible technology—or a significant reduction in the properties that designs force us to assume about our computers. It would be nice to have fairy dust that works.

Opening move

In the Threats Perspective department, we want to overcome the temptation to think about security in terms of a narrow set of bad actions against a narrow set of machines. Instead, we'll look at security issues that challenge practitioners working in specialized application domains and technology areas. In particular, we want to expose readers to threat perspectives that may not be readily apparent from typical points of view. Please send contributions and suggestions to me at [email protected].

—S.W. Smith

S.W. Smith is a Security & Privacy task force member.

References
1. R. Anderson and M. Kuhn, "Tamper Resistance—A Cautionary Note," Proc. 2nd Usenix Workshop on Electronic Commerce, Usenix Assoc., 1996, pp. 1–11.
2. S. Weingart, "Physical Security Devices for Computer Subsystems: A Survey of Attacks and Defenses," C.K. Koc and C. Paar, eds., Proc. 2nd Workshop on Cryptographic Hardware and Embedded Systems (CHES 00), Springer-Verlag LNCS 1965, 2000, pp. 302–317.
3. S. Skorobogatov and R. Anderson, "Optical Fault Induction Attacks," to be published in Proc. 4th Workshop on Cryptographic Hardware and Embedded Systems (CHES 02), 2002.
4. P. Kocher, "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems," N. Koblitz, ed., Advances in Cryptology (Crypto 96), Springer-Verlag LNCS 1109, 1996, pp. 104–113.
5. M. Bond and R. Anderson, "API-Level Attacks on Embedded Systems," Computer, vol. 34, no. 10, Oct. 2001, pp. 67–75.
6. J. Clulow, "I Know Your PIN," RSA Conference 2002 Europe, 2002.
7. S.W. Smith and S. Weingart, "Building a High-Performance, Programmable Secure Coprocessor," Computer Networks, vol. 31, Apr. 1999, pp. 831–860.
8. B. Gassend et al., "Silicon Physical Random Functions," Proc. 9th ACM Conf. Computer and Communications Security, ACM Press, 2002, pp. 148–160.
9. R. Pappu et al., "Physical One-Way Functions," Science, vol. 297, Sept. 2002, pp. 2026–2030.
10. P. England and M. Peinado, "Authenticated Operation of Open Computing Devices," L. Batten and J. Seberry, eds., Proc. 7th Australasian Conf. Information Security and Privacy (ACISP 2002), Springer-Verlag LNCS 2384, June 2002, pp. 346–361.

S.W. Smith is currently an assistant professor of computer science at Dartmouth College. Previously, he was a research staff member at IBM Watson, working on secure coprocessor design and validation, and a staff member at Los Alamos National Laboratory, doing security reviews and designs for public-sector clients. He received his BA in mathematics from Princeton and his MSc and PhD in computer science from Carnegie Mellon University. He is a member of ACM, Usenix, the IEEE Computer Society, Phi Beta Kappa, and Sigma Xi. Contact him at [email protected] or through his home page, www.cs.dartmouth.edu/~sws/.
