
Future Generation Computer Systems 16 (2000) 343–349

New paradigms – old paradigms?

Dieter Gollmann
Microsoft Research, 1 Guildhall Street, Cambridge CB2 3NH, UK

Accepted 3 March 1999

Abstract

Cryptography plays a central role in security on the web, as documented by many contributions to this volume. However, sometimes the importance of cryptography is exaggerated. This paper explores the foundations that are needed to provide the basis for cryptographic protection, and for web security in general. We will discuss different paradigms that underpin the use of cryptography and of public-key infrastructures, observing a shift in paradigms as far as the protection requirements are concerned, and a clash of paradigms between communications and computer security. We conclude that assurance, a topic neglected in current discussions of security, is today the weakest link in web security. ©2000 Elsevier Science B.V. All rights reserved.

Keywords: Cryptography; Web security; Access control policies; Public key infrastructures

1. Introduction

‘Security on the web’ is almost inevitably associated with the use of cryptography. On occasion the impression is created that only US export regulations and similar legal impediments prevent the universal implementation of strong security mechanisms on the web. Whilst it is true that cryptography has an important role to play in web security, and that political influences have led to the adoption of ‘weak’ cryptographic algorithms, weak both in fact and even more in public perception, it is nevertheless the case that there are much more fundamental security problems that have to be solved to facilitate security on the web.

These fundamental problems can be grouped into two categories, protection and policy. For example, even the strongest cryptographic algorithm is useless when keys can be compromised easily. This principle is well understood, but the current situation on the web is far from being satisfactory. A personal computer in a private home managed by its owner can hardly be expected to be the kind of tamper resistant device normally envisaged for the storage of sensitive keying material. A web browser that happens to run various security protocols on behalf of a user, and therefore gets access to that user’s private keys, has probably not been designed as a security relevant piece of software.

E-mail address: [email protected] (D. Gollmann).

When protection mechanisms are designed for a new environment, it is only too natural to fall back on components that had proven their value in previous generations of computer systems. This can lead to the mistaken conclusion that these mechanisms are still used to enforce essentially the same policies and counter the same type of threats as in the past. In this respect, cryptography comes with baggage from communications security. Although some issues in web security are typical communications security problems, there are also scenarios where there are no longer ‘trusting’ parties wishing to communicate securely in a ‘hostile’ environment, but ‘hostile’ parties needing a ‘trusted’ environment to communicate. Likewise, computer security comes with baggage from mainframe operating systems, where a system administrator was in (total) control of the entire system and access control policies were typically phrased in terms of user or group identities. On the web, there is no single administrator, user identities are often not very meaningful as a basis for access control decisions, and you may not only wish to protect yourself from code that runs on your machine, you may also wish to protect your code when it runs on a machine managed by someone else.

Unfortunately, some of these problems are much more difficult to solve but receive less attention than topics in cryptography. This paper attempts to redress the balance and steer research on future generations of computer systems towards challenges beyond cryptography. Actually, many of these challenges are not that new at all. Principles for designing secure operating systems have been studied since the 1960s (at least), but current operating systems hardly adhere to them and many lessons learnt a few decades ago have fallen into oblivion. Hence, this paper will also examine how many of the new problems encountered in web security are indeed new, and whether we really need new paradigms to tackle web security, or whether we should rather remember some of the old paradigms when designing security systems for the web.

2. The role of cryptography and how it is changing

Frequently, the benefits of cryptography are explained by referring to the following setting. Alice wants to talk to Bob, but there is an intruder Eve who has access to the communications link between Alice and Bob. Eve can delete, add, and modify messages (Fig. 1). Cryptography allows Alice and Bob to hide the content of messages from Eve (confidentiality), and to detect when messages have been modified (integrity) or inserted (authenticity). In this old paradigm for cryptographic applications, trusting parties use cryptography to communicate ‘securely’ via an untrusted communications system. The threats considered do not include cheating by Alice or Bob. This assumption often underpins the analysis of security protocols. For example, the BAN logic of authentication [3] quite explicitly deals only with protocol executions where Alice and Bob follow the protocol rules and do not disclose secrets. Such an analysis focuses completely on security violations caused by actions of an intruder.

Fig. 1. The old paradigm of communications security.

Fig. 2. Electronic commerce – a new security paradigm.
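To make the old paradigm concrete, the following sketch uses AES-GCM authenticated encryption from the pyca/cryptography package; the algorithm, key handling, and message are illustrative choices, not prescribed by this paper. Eve may modify the ciphertext in transit, but Bob detects the tampering; note that the sketch, like the paradigm itself, assumes neither Alice nor Bob cheats.

```python
# Sketch of the old paradigm: trusting parties, untrusted channel.
# Requires the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=128)  # shared by Alice and Bob only
aead = AESGCM(key)

# Alice encrypts: confidentiality and integrity in one operation.
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"pay Bob 10 pounds", None)

# Eve tampers with the message in transit.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])

# Bob decrypts: the forgery is rejected, the genuine message is recovered.
try:
    aead.decrypt(nonce, tampered, None)
except InvalidTag:
    print("modification detected")            # integrity violation caught
print(aead.decrypt(nonce, ciphertext, None))  # b'pay Bob 10 pounds'
```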

The scenario just described is adequate for web security if sending encrypted email messages is the pinnacle of your ambitions. However, much of the current interest in web security is triggered by applications that can be summarized under the term ‘electronic commerce’. Here, a customer and a merchant engage in a business transaction while a third party, the bank in Fig. 2, may guarantee the validity of the transaction both to customer and merchant.

We now face a new security paradigm very different from the old paradigm of communications security. Distrusting parties need a trusted environment to conduct a business transaction. In this case, the threat model explicitly includes the possibility that a party participating in a protocol run may misbehave. The adversary is no longer an intruder but a fraudulent insider, and the third party guarantees rather than compromises security. (It should not come as a surprise that once the attention shifts to commerce, insider fraud becomes an issue.) Some of the ‘old’ security requirements, like confidentiality of sensitive data in transit, still have a place in electronic commerce. Equally, intruders should not be able to forge orders or payments, but integrity will now be more concerned with fraudulent changes made by the customer or the merchant. Similarly, customer and merchant will be less interested in the identity of their business partner than in the validity of the transaction they are engaged in. In this context, authenticity sometimes refers to the fact that the bank has approved the transaction.

This short discussion illustrates two equally important points. First, a change in application may cause a change in security requirements and thus, a change in the goals a protocol is expected to meet. This change may be obscured by the fact that we are still using the same terms as before. Authentication in a communications protocol may mean something quite different from authentication in an electronic payment protocol.

Secondly, a change in application may correspond to a fundamental change of the threat scenario. So, even if protocol goals remain unchanged, the fact that a protocol has been verified for one particular environment does not mean that it will deliver the same guarantees in any other environment. The public key Needham–Schroeder authentication protocol has become the standard illustration for this phenomenon [8]. In [3], some security properties of this protocol are derived within the framework of the BAN logic under the traditional assumptions of communications security. In [6], the same protocol is analyzed with a different formal tool, and an attack is found. The discovery of this attack is not due to the fact that a formalism was used that provided better insight into the protocol. Rather, the analysis in [6] allows for the possibility that an insider misbehaves. Maybe by accident, as its modeling of the attacker is driven more by the idiosyncrasies of the verification methodology than by a threat analysis, the new analysis deals with threats different from those considered in [3] and [8].
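The insider attack of [6] can be replayed symbolically in a few lines. In the sketch below, ‘encryption’ is a toy model, a tagged tuple that only the named principal can open, and all names are illustrative; the point is the message flow, not real cryptography.

```python
# Symbolic replay of Lowe's insider attack on the Needham-Schroeder
# public-key protocol [6,8]. A toy model: enc() stands for public-key
# encryption, and only the named principal can call dec() on a message.

def enc(to, *payload):        # {payload}_pk(to)
    return ("enc", to, payload)

def dec(who, msg):            # only 'who' can decrypt
    kind, to, payload = msg
    assert kind == "enc" and to == who, "cannot decrypt"
    return payload

# Honest A starts a run with the insider C.
Na = "Na"
m1 = enc("C", Na, "A")                        # A -> C: {Na, A}_pk(C)

# C replays A's nonce towards B, masquerading as A.
(na, a) = dec("C", m1)
m1_forged = enc("B", na, a)                   # C(A) -> B: {Na, A}_pk(B)

# B answers A directly; in the flawed protocol the reply names no responder.
Nb = "Nb"
(na_b, claimed_sender) = dec("B", m1_forged)  # B believes A initiated
m2 = enc("A", na_b, Nb)                       # B -> A: {Na, Nb}_pk(A)

# A sees its own nonce, believes it is still talking to C,
# and helpfully decrypts Nb for the insider.
(_, nb) = dec("A", m2)
m3 = enc("C", nb)                             # A -> C: {Nb}_pk(C)

# C now knows Nb and can complete the run: B attributes it to A.
(nb_leaked,) = dec("C", m3)
print(enc("B", nb_leaked) == enc("B", Nb))    # True: C impersonates A to B
```

Lowe’s fix includes the responder’s name in the second message, {Na, Nb, B}_pk(A); A, believing it is running the protocol with C, would notice the mismatch and abort.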

These observations lead to a first comment on web security. A description of a cryptographic protocol should include a precise description of the protocol goals and of the environment the protocol is intended for. Too often this is not the case. For example, authentication is a particularly multi-faceted term. Hence, it is not self-evident which goals a so-called authentication protocol is supposed to achieve. Even a more precise specification like ‘verify a claimed identity’ may need further explanation. It is by no means clear what constitutes an identity. Section 3 will return to this theme. Confusions about terminology are exacerbated by a tendency to re-use terms with a new meaning when describing security mechanisms in a new application.

In the same vein, the description of an attack should include a precise description of the protocol goals that are violated and of the environment where these violations are feasible. There is a fundamental difference between an attack, viz a protocol execution violating one of the stated security properties within the environment the protocol is designed for, and a protocol execution outside the intended environment that has some unexpected properties. Of course, it is of interest to establish the limitations of a protocol, both with respect to the goals it may achieve and with respect to the environments it can be used in, but establishing limitations is not synonymous with finding attacks.

Whilst this plea for precision is valid in general, it is particularly relevant for web security, which happens to be an area that subsumes a number of different security scenarios, and where there is indeed a shift between paradigms.

3. Access control policies

The Unix permission bits (see e.g. [4]) exemplify the old paradigm of access control. Access rights are expressed in terms of user identities, group identities, and three elementary access operations: read, write, and execute (decoded in the sketch after the list below). The simplicity of this mechanism hides a more complex picture. Parameters that are present and could be used in access control decisions include
1. the identity of the user requesting access to a resource,
2. the identity of the code that is used to access a resource,
3. the location of the user, or of the code.
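As a concrete illustration (standard library Python; the file name is arbitrary), the permission bits can be decoded with the stat module. Everything the old paradigm can express is visible in these few flags, i.e. parameter 1 above; parameters 2 and 3 have no counterpart here.

```python
# Decode classic Unix permission bits: the access decision depends only
# on user/group identity, never on code identity or location.
import os
import stat

st = os.stat("/etc/passwd")   # any existing file will do
mode = st.st_mode

print(stat.filemode(mode))    # e.g. '-rw-r--r--'
print("owner uid:", st.st_uid, "group gid:", st.st_gid)

# The whole policy is nine bits: (owner, group, other) x (read, write, execute).
for who, (r, w, x) in {
    "owner": (stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),
    "group": (stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),
    "other": (stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH),
}.items():
    print(who,
          "r" if mode & r else "-",
          "w" if mode & w else "-",
          "x" if mode & x else "-")
```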

Mainframe operating systems that limit their protection mechanisms to preventing users from violating the integrity of system code and from interfering with each other can do a reasonable job by only employing user identities for access control decisions. Ignoring dial-in connections for the moment, there are only local users, so user location is not such an important security discriminator. Access decisions that depend on whether code is in system space or user space could be viewed as a very elementary case of referring to code location, but they do not require mechanisms that deal with code location in a more general way. Even so, there were operating systems like Digital’s VMS that considered the location (terminal) a user has logged in from, and also the time of login. Even further in the past, the Cambridge Titan operating system used code identity and code location as access control parameters [7].

So, richer access control models can be found even in mainframe operating systems. In web applications, we are well advised to re-examine the utility of all potential access control parameters. Some of the aspects hidden in a mainframe environment are now immediately visible. A user may be local or request access to a resource over the Internet. The identities of local users may still be known, but a web server will also want to give controlled access to remote users that are not known to the system a priori. Access decisions may then depend on the location of the remote user, or on the code the remote user wants to execute.

The new paradigm for access control on the web uses concepts like Internet zones and signed code, putting more emphasis on the source of a request and on the creator of a program than on the identity of the user who runs that program. At the same time, it becomes altogether more difficult to employ user identities. To verify a user identity, the system needs information about the user. This information could be a secret shared between user and system, but this solution implies that the shared secret is already in place when the user’s access request arrives. In a typical web application, this would not be the case. Certificates seem to offer a solution more suitable for the web.

Today, there is substantial research into the design of public-key infrastructures (PKIs) and in the proper use of certificates. A certificate is a data structure signed by a trusted authority binding a name to a cryptographic key. This key is then used to verify that a request was made by the entity indicated by the name in the certificate. Two questions are raised immediately. Who should the trusted authority be? What is indicated by a name? Broadly speaking, there are two sets of answers to these questions, inspired by two different paradigms.
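The sketch below illustrates this minimal notion of a certificate with Ed25519 signatures from the pyca/cryptography package. The name, the ad hoc wire format, and the helper function are hypothetical simplifications; real certificate formats such as X.509 or SPKI carry far more structure, which is exactly where the two paradigms discussed next diverge.

```python
# Toy certificate: a trusted authority binds a name to a public key by
# signing the pair; the bound key then authenticates requests.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def pub_bytes(public_key):
    return public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)

ca_key = Ed25519PrivateKey.generate()          # the trusted authority
alice_key = Ed25519PrivateKey.generate()       # the certified entity

# The 'certificate': name || key, signed by the authority.
binding = b"alice@example.org|" + pub_bytes(alice_key.public_key())
cert_sig = ca_key.sign(binding)

# Relying party: check the authority's signature on the binding ...
ca_key.public_key().verify(cert_sig, binding)  # raises InvalidSignature if forged

# ... then use the bound key to verify that a request came from that name.
request = b"GET /account"
alice_key.public_key().verify(alice_key.sign(request), request)
print("request verified against the certified key")
```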

The X.509 Directory Authentication Framework [5] emerged from a communications paradigm. Names are addresses for messages to be delivered to (Fig. 3). A local name is not of much use as a delivery address as it has to be interpreted by the communications system, and addresses have to be unique globally. The entities operating the communications system are in a reasonable position to vouch for the fact that they associate a particular address (name) with a particular cryptographic key. On the other hand, the entities operating the communications system normally do not vouch for the identity of a user located at a particular address (name). If a person is associated with an address (telephone number), it is the person receiving the bills rather than the only person who can make calls from that number. In summary, a PKI following the communications paradigm binds globally unique addresses to cryptographic keys. This binding can be guaranteed by the entities operating the communications system.

Fig. 3. For the message to be delivered, xxxx has to be a ‘global’ name known outside the local system.

The Internet X.509 public key infrastructure (PKIX) actually tries to go one step further and include the identities of human users in certificates. At first glance, this may seem to be a minor modification, just like sending a letter to a named person rather than to ‘The Occupier’ at a given address. However, in the latter case the communications system can check itself that a message is delivered to the correct address. In the former case, it may need a document issued by another party (e.g. an identity card issued by the police) to make this check.

The Simple Public-Key Infrastructure (SPKI) [9] fits into the paradigm of computer security. Certificates are employed for the purpose of authorisation. Authentication verifies the source of an access request (Fig. 4). Access rights are assigned to keys rather than to individual users. Names are then identifiers used in access control decisions, not user identities. These identifiers are interpreted by the local system and there is no direct need for globally unique identifiers.
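A caricature of this paradigm in a few lines of plain Python (all names and key bytes hypothetical): rights attach to key fingerprints, and ‘names’ are merely local aliases the system chooses for its own bookkeeping.

```python
# SPKI-flavoured sketch: authorization is granted to keys, not to global
# user identities; names are local aliases meaningful only on this system.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

k_anne = fingerprint(b"anne-public-key")    # stand-ins for real key material
k_ci = fingerprint(b"build-bot-key")

acl = {                                     # rights bound directly to keys
    ("repo", "read"): {k_anne, k_ci},
    ("repo", "write"): {k_anne},
}

local_names = {k_anne: "anne", k_ci: "ci"}  # purely local identifiers

def authorize(key_fp: str, resource: str, op: str) -> bool:
    return key_fp in acl.get((resource, op), set())

print(local_names[k_ci], "write:", authorize(k_ci, "repo", "write"))      # False
print(local_names[k_anne], "write:", authorize(k_anne, "repo", "write"))  # True
```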

In this paradigm, neither is there an obvious candidate for the role of trusted authority, nor is there a requirement for such an authority. Binding names to cryptographic keys is the task of the local system in the first place. By allowing some other party to bind names to keys, the security administration on the local system would transfer a part of its tasks to another entity and would rely on that entity for the validity of its access control decisions. Such a deferral of access decisions is still a matter of local security policies (see e.g. [2]). Hence, there is no need for a global PKI. Each local system sets up its own local PKI. Naturally, with such a local binding the communications system could not guarantee to its own users that a binding between name and cryptographic key is valid.

Fig. 4. For the access control decision, yyyy has to be a ‘local’ name known inside the local system.

Access control policies on machines participating in web services will be different from the ‘mainframe’ policies we have become accustomed to. In this sense, web security seems to move to a new paradigm. But, as argued in this section, we are actually returning to old paradigms that had been discarded for some time and are now coming back into their own right. To quote [7] on the Titan operating system:

In particular, it was possible to use the identity of a program as a parameter for access-control decisions as well as, or instead of, the identity of the user, a feature which Cambridge people have ever since regarded as strange to omit.

Nevertheless, the search for the most appropriate security policies for web participants is still an ongoing concern. This search, however, is hampered by a clash in paradigms. Communications and computing may use the same terms and mechanisms but still follow different objectives, as illustrated by our remarks on certificates, and by a heated public discussion on the very nature of certificates (see e.g. [9]). Once more, an overlap in terminology hides a difference in paradigms. It should not come as a surprise if trust management systems encounter problems in describing precisely the properties established by verifying a certificate.

4. Assurance

Security on the web is rooted in the devices that actually run cryptographic protocols or enforce access control on resources available on the web. In the previous two sections we have examined some of the difficulties one has to face when dealing with security policies for the web. Now, we turn to an issue even more crucial. How can we get assurance that our security policies are properly enforced and that our security mechanisms cannot be compromised?

The old paradigms for high assurance systems were developed decades ago.
1. A reference monitor that has to handle every access to a resource. Security relies on the integrity of the reference monitor (a minimal sketch follows this list).
2. Protection modes that allow a machine to switch between user and supervisor mode.
3. Simple architectures that separate security relevant components from the rest of the system can achieve higher assurance.
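
As promised in item 1, a minimal reference monitor sketch in Python (policy, subjects, and resources are all hypothetical): every access to a registered resource passes through one small mediation point, and the security of the whole arrangement rests on the monitor’s integrity and on there being no path around it.

```python
# Minimal reference monitor sketch: one small component mediates every
# access; resources are reachable only through the monitor.

class Log:
    def append(self, entry):
        print("logged:", entry)

class ReferenceMonitor:
    def __init__(self, policy):
        self._policy = policy   # subject -> {(resource, operation)}
        self._objects = {}      # resources live behind the monitor

    def register(self, name, obj):
        self._objects[name] = obj

    def access(self, subject, resource, op, *args):
        # Every request is checked here; there is no other entry point.
        if (resource, op) not in self._policy.get(subject, set()):
            raise PermissionError(f"{subject} may not {op} {resource}")
        return getattr(self._objects[resource], op)(*args)

rm = ReferenceMonitor({"alice": {("log", "append")}})
rm.register("log", Log())

rm.access("alice", "log", "append", "hello")    # permitted
try:
    rm.access("mallory", "log", "append", "x")  # denied by the monitor
except PermissionError as err:
    print(err)
```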

How does this compare to the systems currently in use? As an extreme, consider a user connected to the Internet from a PC. In its original conception, the user of a PC was also its system manager, so PC operating systems did not make a distinction between user and supervisor mode. Such a PC is hardly a ‘personal computer’ when it is connected to the Internet. Any security mechanism has to run on top of the operating system and any service that accepts input from the network has to implement its own security controls. There is no single reference monitor and a flaw in any of the services may allow an attacker to compromise the entire system.

Putting aside the issue of PC security, there still remain considerable obstacles to achieving a reasonable level of assurance. Web browsers manage user keys for cryptographic protection and implement sandboxes for access control. Web browsers are complex pieces of software, but they are now part of the Trusted Computing Base (TCB). Not only is it cumbersome to evaluate complex software to a higher assurance level, it also becomes rather difficult to decide which parts of the system are security relevant. As a related example (it concerns an operating system, not a browser), consider the ‘You are now in France’ attack against Windows NT suggested in [10]. If an installation is told that it is in France, it will automatically disable all cryptographic algorithms that do not comply with the relevant French regulations. Hence, an attacker who can modify the relevant location information can downgrade the cryptographic protection available to users. Maybe surprisingly, location information has become security relevant.

Finally, we mention another new paradigm for systems design that is relevant for security on the web: security services running on untrusted platforms. A security service is software that implements security relevant functionality like access control or cryptographic protection of messages. The apparent advantage of this approach lies in the ease with which functionality can be adapted to new requirements, and the promise that ‘security’ can simply be bolted on to an existing system. Unfortunately, these services offer very little assurance. Security services do not limit the security relevant part of the system. Quite the opposite: any system component invoked by the security service, i.e. the best part of the untrusted platform, becomes security relevant and therefore part of the TCB.

Despite the argument that the old paradigms for building secure systems fail to address current needs [1], there are no obvious new paradigms for the design of secure systems that fit today’s software and hardware architectures while providing the degree of assurance desirable for electronic commerce.

5. Conclusion

We have examined the impact of new paradigms in three areas of web security, cryptography, access control policies, and assurance, proceeding from the easy to the difficult. In the application of cryptography, a new paradigm is encountered where the adversary is no longer an intruder but a fraudulent insider. Adhering to the old paradigm when designing cryptographic protocols for new applications can lead to wrong decisions. Start from a clean slate and forget the past in cryptography!

PKIs present us with a clash between two paradigms. In web security, these two paradigms become closely intertwined and the meeting between computing and communications is not without its pitfalls. Acknowledging the existence of these two paradigms, and of their differences, may lead to a better understanding of the role PKIs can play in web security.

The old paradigms for building secure systems to a higher level of assurance curry little favour today. Attempts to place security services on top of untrusted platforms may please superficially (in the literal meaning of the word), but do not constitute a recipe for achieving the level of assurance appropriate for systems handling sensitive user information. Return to the past in computer security and rediscover the old paradigms!

Today, there is no lack of effort in searching for new security features. There is very little work on assurance directly addressing open/distributed systems in general, or the web in particular. Open/distributed systems strive to present a service layer that hides the underlying systems (operating system and hardware) from the applications. Security services can ‘guarantee’ security, if they are properly implemented and if the attacker kindly agrees not to bypass the services layer. Security relevant information exists at the layers below. If there is insufficient protection at the lower layers, the foundations of the fortress will crumble.

It is a futile exercise to build fortresses on sand. We can either dig deeper, looking for a solid foundation for the security services; the challenge is then to implant security at the layers below while interfering as little as possible with the existing platforms. Or, if we give up hope and decide to abandon fortresses because they are too difficult to build, and live as nomads in cyberspace, we have to carry our valuables with us. This creates a role for security tokens like smart cards, until these are sufficiently flexible so that the problems with existing software architectures recur.

References

[1] B. Blakely, The emperor’s old armour, in: Proceedings of the New Security Paradigms Workshop, September 1996, pp. 2–16.

[2] M. Blaze, J. Feigenbaum, J. Lacy, Decentralized trust management, in: Proceedings of the IEEE Symposium on Security and Privacy, IEEE Computer Soc. Press, Silver Spring, MD, 1996, pp. 164–173.

[3] M. Burrows, M. Abadi, R. Needham, A logic of authentication, DEC Systems Research Center, Report 39, revised February 1990.

[4] S. Garfinkel, G. Spafford, Practical Unix & Internet Security, 2nd ed., O’Reilly & Associates, 1996.

[5] ITU, Information technology – open systems interconnection: the directory: authentication framework, ITU-T Recommendation X.509 (1997 E), June 1997.

[6] G. Lowe, Breaking and fixing the Needham–Schroeder public-key protocol using FDR, in: Proceedings of TACAS, vol. 1055, Springer, Berlin, 1996, pp. 147–166.

[7] R. Needham, Later developments at Cambridge: Titan, CAP, and the Cambridge Ring, Annals of the History of Computing 14 (4) (1992) 57.

[8] R. Needham, M. Schroeder, Using encryption for authentication in large networks of computers, Communications of the ACM 21 (1978) 993–999.

[9] C.M. Ellison, B. Frantz, B. Lampson, R. Rivest, B.M. Thomas, T. Ylonen, SPKI Certificate Theory, Internet Draft, 17 November 1998.


[10] Bugware, http://oliver.efri.hr/~crv/security/bugs/NT/france.html, accessed January 1999.

Dieter Gollmann received his Dipl.-Ing. in Engineering Mathematics (1979) and Dr. Techn. (1984) from the University of Linz, Austria, where he was a Research Assistant in the Department of System Science. He was a Lecturer in Computer Science at Royal Holloway College and later a Scientific Assistant at the University of Karlsruhe, Germany, where he was awarded the ‘venia legendi’ for computer science in 1991. He rejoined Royal Holloway in 1990, initially as a Senior Lecturer in Computer Science, subsequently as a Reader, and finally as a Professor. He was a Visiting Professor at the Technical University of Graz in 1991 and an Adjunct Professor at the Information Security Research Centre, QUT, Brisbane, in 1995. He has contributed to national and European projects in the areas of dependable communications and computing. He has served on the program committees of the major European conferences on computer security (ESORICS) and cryptography (EUROCRYPT), as well as other international conferences in these areas. He has been acting as a Consultant for HP Laboratories (Bristol) and joined Microsoft Research in Cambridge in 1998.