

Nymble: Blocking Misbehaving Users in Anonymizing Networks

Patrick P. Tsang, Apu Kapadia, Member, IEEE, Cory Cornelius, and Sean W. Smith

Abstract—Anonymizing networks such as Tor allow users to access Internet services privately by using a series of routers to hide the client's IP address from the server. The success of such networks, however, has been limited by users employing this anonymity for abusive purposes such as defacing popular websites. Website administrators routinely rely on IP-address blocking for disabling access to misbehaving users, but blocking IP addresses is not practical if the abuser routes through an anonymizing network. As a result, administrators block all known exit nodes of anonymizing networks, denying anonymous access to misbehaving and behaving users alike. To address this problem, we present Nymble, a system in which servers can "blacklist" misbehaving users, thereby blocking users without compromising their anonymity. Our system is thus agnostic to different servers' definitions of misbehavior: servers can blacklist users for whatever reason, and the privacy of blacklisted users is maintained.

Index Terms—anonymous blacklisting, privacy, revocation


1 INTRODUCTION

Anonymizing networks such as Tor [18] route traffic through independent nodes in separate administrative domains to hide a client's IP address. Unfortunately, some users have misused such networks: under the cover of anonymity, users have repeatedly defaced popular websites such as Wikipedia. Since website administrators cannot blacklist individual malicious users' IP addresses, they blacklist the entire anonymizing network. Such measures eliminate malicious activity through anonymizing networks at the cost of denying anonymous access to behaving users. In other words, a few "bad apples" can spoil the fun for all. (This has happened repeatedly with Tor.1)

There are several solutions to this problem, each providing some degree of accountability. In pseudonymous credential systems [14], [17], [23], [28], users log into websites using pseudonyms, which can be added to a blacklist if a user misbehaves. Unfortunately, this approach results in pseudonymity for all users and weakens the anonymity provided by the anonymizing network.

Anonymous credential systems [10], [12] employ group signatures. Basic group signatures [1], [6], [15] allow servers to revoke a misbehaving user's anonymity by

This research was supported in part by the NSF, under grant CNS-0524695, and the Bureau of Justice Assistance, under grant 2005-DD-BX-1091. The views and conclusions do not necessarily reflect the views of the sponsors. Nymble first appeared in a PET '07 paper [24]. This paper presents a significantly improved construction and a complete rewrite and evaluation of our (open-source) implementation (see Section 2.7 for a summary of changes).

• Patrick Tsang, Cory Cornelius and Sean Smith are with the Department of Computer Science, Dartmouth College, Hanover, NH, USA.

• Apu Kapadia is with Indiana University Bloomington, IN, USA.

Manuscript received xxxx.

1. The Abuse FAQ for Tor Server Operators lists several such examples at http://tor.eff.org/faq-abuse.html.en.

complaining to a group manager. Servers must query the group manager for every authentication, however, and the approach thus lacks scalability. Traceable signatures [26] allow the group manager to release a trapdoor that allows all signatures generated by a particular user to be traced; such an approach does not provide the backward unlinkability [30] that we desire, where a user's accesses before the complaint remain anonymous. Backward unlinkability allows for what we call subjective blacklisting, where servers can blacklist users for whatever reason since the privacy of the blacklisted user is not at risk. In contrast, approaches without backward unlinkability must pay careful attention to when and why a user must have all their connections linked, and users must worry about whether their behaviors will be judged fairly.

Subjective blacklisting is also better suited to servers such as Wikipedia, where misbehaviors, such as questionable edits to a webpage, are hard to define in mathematical terms. In some systems, misbehavior can indeed be defined precisely. For instance, double-spending of an "e-coin" is considered a misbehavior in anonymous e-cash systems [8], [13], following which the offending user is deanonymized. Unfortunately, such systems work for only narrow definitions of misbehavior: it is difficult to map more complex notions of misbehavior onto "double spending" or related approaches [32].

With dynamic accumulators [11], [31], a revocation operation results in a new accumulator and public parameters for the group, and all other existing users' credentials must be updated, making the approach impractical. Verifier-local revocation (VLR) [2], [7], [9] fixes this shortcoming by requiring the server (the "verifier") to perform only local updates during revocation. Unfortunately, VLR requires heavy computation at the server that is linear in the size of the blacklist. For example, for a blacklist with 1,000 entries,


each authentication would take tens of seconds,2 a prohibitive cost in practice. In contrast, our scheme takes the server about one millisecond per authentication, several thousand times faster than VLR. We believe these low overheads will incentivize servers to adopt such a solution when weighed against the potential benefits of anonymous publishing (e.g., whistle-blowing, reporting, anonymous tip lines, activism, and so on).3
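To make the comparison concrete, the formula from footnote 2 (3 + 2|BL| pairing operations per VLR authentication) can be turned into a back-of-the-envelope estimate. The 10 ms-per-pairing figure below is an illustrative assumption, not a measured benchmark:

```python
PAIRING_MS = 10  # assumed cost of one pairing operation, in milliseconds

def vlr_auth_time_ms(blacklist_size: int) -> int:
    """Estimated VLR authentication cost: 3 + 2*|BL| pairings (footnote 2)."""
    return (3 + 2 * blacklist_size) * PAIRING_MS

# A 1,000-entry blacklist costs 2,003 pairings, i.e. roughly 20 seconds,
# versus Nymble's roughly constant ~1 ms of symmetric-key work.
print(vlr_auth_time_ms(1000) / 1000, "seconds")  # 20.03 seconds
```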

1.1 Our solution

We present a secure system called Nymble, which provides all the following properties: anonymous authentication, backward unlinkability, subjective blacklisting, fast authentication speeds, rate-limited anonymous connections, and revocation auditability (where users can verify whether they have been blacklisted); it also addresses the Sybil attack [19] to make its deployment practical.

In Nymble, users acquire an ordered collection of nymbles, a special type of pseudonym, to connect to websites. Without additional information, these nymbles are computationally hard to link,4 and hence using the stream of nymbles simulates anonymous access to services. Websites, however, can blacklist users by obtaining a seed for a particular nymble, allowing them to link future nymbles from the same user; those used before the complaint remain unlinkable. Servers can therefore blacklist anonymous users without knowledge of their IP addresses while allowing behaving users to connect anonymously. Our system ensures that users are aware of their blacklist status before they present a nymble, and disconnect immediately if they are blacklisted. Although our work applies to anonymizing networks in general, we consider Tor for purposes of exposition. In fact, any number of anonymizing networks can rely on the same Nymble system, blacklisting anonymous users regardless of their anonymizing network(s) of choice.

1.2 Contributions of this paper

Our research makes the following contributions:

• Blacklisting anonymous users. We provide a means by which servers can blacklist users of an anonymizing network while maintaining their privacy.

• Practical performance. Our protocol makes use of inexpensive symmetric cryptographic operations to significantly outperform the alternatives.

• Open-source implementation. With the goal of contributing a workable system, we have built an open-source implementation of Nymble, which is publicly available.5 We provide performance statistics to show that our system is indeed practical.

2. In Boneh and Shacham's construction [7], computation at the server involves 3 + 2|BL| pairing operations, each of which takes tens of ms (see benchmarks at http://crypto.stanford.edu/pbc).

3. http://www.torproject.org/torusers.html.en

4. Two nymbles are linked if one can infer that they belong to the same user with probability better than random guessing.

5. The Nymble Project. http://www.cs.dartmouth.edu/∼nymble.

Fig. 1. The Nymble system architecture showing the various modes of interaction. Note that users interact with the NM and servers through the anonymizing network.

Some of the authors of this paper have published two anonymous authentication schemes, BLAC [33] and PEREA [34], which eliminate the need for a trusted third party for revoking users. While BLAC and PEREA provide better privacy by eliminating the TTP, Nymble provides authentication rates that are several orders of magnitude faster than BLAC and PEREA (see Section 6). Nymble thus represents a practical solution for blocking misbehaving users of anonymizing networks.

We note that an extended version of this article is available as a technical report [16].

2 AN OVERVIEW OF NYMBLE

We now present a high-level overview of the Nymble system, and defer the complete protocol description and security analysis to subsequent sections.

2.1 Resource-based blocking

To limit the number of identities a user can obtain (called the Sybil attack [19]), the Nymble system binds nymbles to resources that are sufficiently difficult to obtain in great numbers. For example, we have used IP addresses as the resource in our implementation, but our scheme generalizes to other resources such as email addresses, identity certificates, and trusted hardware. We address the practical issues related to resource-based blocking in Section 8, and suggest other alternatives for resources.

We do not claim to solve the Sybil attack. This problem is faced by any credential system [19], [27]; we suggest some promising approaches based on resource-based blocking since we aim to create a real-world deployment.

2.2 The Pseudonym Manager

The user must first contact the Pseudonym Manager (PM) and demonstrate control over a resource; for IP-address blocking, the user must connect to the PM directly (i.e., not through a known anonymizing network), as shown in Figure 1. We assume the PM has knowledge about


Tor routers, for example, and can ensure that users are communicating with it directly.6 Pseudonyms are deterministically chosen based on the controlled resource, ensuring that the same pseudonym is always issued for the same resource.

Note that the user does not disclose what server he or she intends to connect to, and the PM's duties are limited to mapping IP addresses (or other resources) to pseudonyms. As we will explain, the user contacts the PM only once per linkability window (e.g., once a day).

2.3 The Nymble Manager

After obtaining a pseudonym from the PM, the user connects to the Nymble Manager (NM) through the anonymizing network, and requests nymbles for access to a particular server (such as Wikipedia). A user's requests to the NM are therefore pseudonymous, and nymbles are generated using the user's pseudonym and the server's identity. These nymbles are thus specific to a particular user-server pair. Nevertheless, as long as the PM and the NM do not collude, the Nymble system cannot identify which user is connecting to what server; the NM knows only the pseudonym-server pair, and the PM knows only the user identity-pseudonym pair.

To provide the requisite cryptographic protection and security properties, the NM encapsulates nymbles within nymble tickets. Servers wrap seeds into linking tokens, and therefore we will speak of linking tokens being used to link future nymble tickets. The importance of these constructs will become apparent as we proceed.

2.4 Time

Nymble tickets are bound to specific time periods. As illustrated in Figure 2, time is divided into linkability windows of duration W, each of which is split into L time periods of duration T (i.e., W = L·T). We will refer to time periods and linkability windows chronologically as t1, t2, . . . , tL and w1, w2, . . . respectively. While a user's access within a time period is tied to a single nymble ticket, the use of different nymble tickets across time periods grants the user anonymity between time periods. Smaller time periods provide users with higher rates of anonymous authentication, while longer time periods allow servers to rate-limit the number of misbehaviors from a particular user before he or she is blocked. For example, T could be set to 5 minutes and W to 1 day (and thus L = 288). The linkability window allows for dynamism, since resources such as IP addresses can get reassigned and it is undesirable to blacklist such resources indefinitely; it also ensures forgiveness of misbehavior after a certain period of time. We assume all entities are time synchronized (for example, with time.nist.gov via the Network Time Protocol (NTP)), and can thus calculate the current linkability window and time period.
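The window/period bookkeeping above is simple modular arithmetic. A minimal sketch, using the example parameters (T = 5 minutes, W = 1 day, L = 288) and Unix time as the synchronized clock:

```python
import time

T = 5 * 60   # time-period duration T: 5 minutes (the example in the text)
L = 288      # time periods per linkability window
W = L * T    # linkability-window duration W = L*T: one day, in seconds

def current_window_and_period(now: float) -> tuple[int, int]:
    """Map a Unix timestamp to (linkability window w, time period t in 1..L).
    Numbering from 1 follows the t1..tL and w1, w2, ... convention."""
    w = int(now // W) + 1
    t = int((now % W) // T) + 1
    return w, t

w, t = current_window_and_period(time.time())
assert 1 <= t <= L
```

Because every entity computes (w, t) from the shared clock alone, no coordination message is needed to agree on the current period.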

6. Note that if a user connects through an unknown anonymizing network or proxy, the security of our system is no worse than that provided by real IP-address blocking, where the user could have used an anonymizing network unknown to the server.

Fig. 2. The life cycle of a misbehaving user. If the server complains in time period tc about a user's connection in t∗, the user becomes linkable starting in tc. The complaint in tc can include nymble tickets from only tc−1 and earlier.

2.5 Blacklisting a user

If a user misbehaves, the server may link any future connection from this user within the current linkability window (e.g., the same day). Consider Figure 2 as an example: a user connects and misbehaves at a server during time period t∗ within linkability window w∗. The server later detects this misbehavior and complains to the NM in time period tc (t∗ < tc ≤ tL) of the same linkability window w∗. As part of the complaint, the server presents the nymble ticket of the misbehaving user and obtains the corresponding seed from the NM. The server is then able to link future connections by the user in time periods tc, tc+1, . . . , tL of the same linkability window w∗ to the complaint. Therefore, once the server has complained about a user, that user is blacklisted for the rest of the day, for example (the linkability window). Note that the user's connections in t1, t2, . . . , t∗, t∗+1, . . . , tc remain unlinkable (i.e., including those since the misbehavior and until the time of complaint). Even though misbehaving users can be blocked from making connections in the future, the users' past connections remain unlinkable, thus providing backward unlinkability and subjective blacklisting.

2.6 Notifying the user of blacklist status

Users who make use of anonymizing networks expect their connections to be anonymous. If a server obtains a seed for that user, however, it can link that user's subsequent connections. It is of utmost importance, then, that users be notified of their blacklist status before they present a nymble ticket to a server. In our system, the user can download the server's blacklist and verify her status. If blacklisted, the user disconnects immediately.

Since the blacklist is cryptographically signed by the NM, the authenticity of the blacklist is easily verified if the blacklist was updated in the current time period (only one update to the blacklist per time period is allowed). If the blacklist has not been updated in the current time period, the NM provides servers with "daisies" every time period so that users can verify the freshness


of the blacklist ("blacklist from time period told is fresh as of time period tnow"). As discussed in Section 4.3.4, these daisies are elements of a hash chain, and provide a lightweight alternative to digital signatures. Using digital signatures and daisies, we thus ensure that race conditions are not possible in verifying the freshness of a blacklist. A user is guaranteed that he or she will not be linked if the user verifies the integrity and freshness of the blacklist before sending his or her nymble ticket.
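The hash-chain idea behind daisies can be sketched as follows; the chain layout and names here are illustrative assumptions, not the exact construction of Section 4.3.4:

```python
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def hash_iter(x: bytes, n: int) -> bytes:
    for _ in range(n):
        x = h(x)
    return x

L = 288  # time periods per linkability window

# The NM picks a random chain head and signs only the final value ("target")
# once, together with the blacklist. Releasing the daisy for period t then
# proves freshness without a new signature: hashing it t times yields target.
head = os.urandom(32)
target = hash_iter(head, L)

def daisy_for_period(t: int) -> bytes:
    """Daisy the NM hands to the server in time period t (1-based)."""
    return hash_iter(head, L - t)

def verify_freshness(daisy: bytes, t: int) -> bool:
    """User-side check: does this daisy prove freshness for period t?"""
    return hash_iter(daisy, t) == target

assert verify_freshness(daisy_for_period(5), 5)
assert not verify_freshness(daisy_for_period(5), 6)
```

One-wayness of the hash is what makes this safe: a daisy for period t cannot be extended into a daisy for a later period without the NM's chain head.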

2.7 Summary of updates to the Nymble protocol

We highlight the changes to Nymble since our conference paper [24]. Previously, we had proved only the privacy properties associated with nymbles as part of a two-tiered hash chain. Here we prove security at the protocol level. This process gave us insights into possible (subtle) attacks against privacy, leading us to redesign our protocols and refine our definitions of privacy. For example, users are now either legitimate or illegitimate, and are anonymous within these sets (see Section 3). This redefinition affects how a user establishes a "Nymble connection" (see Section 5.5), and now prevents the server from distinguishing between users who have already connected in the same time period and those who are blacklisted, resulting in larger anonymity sets.

A thorough protocol redesign has also resulted in several optimizations. We have eliminated blacklist version numbers, and users do not need to repeatedly obtain the current version number from the NM. Instead, servers obtain proofs of freshness every time period, and users directly verify the freshness of blacklists upon download. Based on a hash-chain approach, the NM issues lightweight daisies to servers as proof of a blacklist's freshness, thus making blacklist updates highly efficient. Also, instead of embedding seeds, on which users must perform computation to verify their blacklist status, the NM now embeds a unique identifier nymble∗, which the user can directly recognize. Lastly, we have compacted several data structures, especially the servers' blacklists, which are downloaded by users in each connection, and report on the various sizes in detail in Section 6. We also report on our open-source implementation of Nymble, completely rewritten as a C library for efficiency.

3 SECURITY MODEL

Nymble aims for four security goals. We provide informal definitions here; a detailed formalism can be found in our technical report [16], which explains how these goals must also resist coalition attacks.

3.1 Goals and threats

An entity is honest when its operations abide by the system's specification. An honest entity can be curious: it attempts to infer knowledge from its own information (e.g., its secrets, state, and protocol communications). An honest entity becomes corrupt when it is compromised by an attacker, and hence reveals its information at the time of compromise, and operates under the attacker's full control, possibly deviating from the specification.

Blacklistability assures that any honest server can indeed block misbehaving users. Specifically, if an honest server complains about a user who misbehaved in the current linkability window, the complaint will be successful and the user will not be able to "nymble-connect," i.e., establish a Nymble-authenticated connection, to the server successfully in subsequent time periods (following the time of complaint) of that linkability window.

Rate-limiting assures any honest server that no user can successfully nymble-connect to it more than once within any single time period.

Non-frameability guarantees that any honest user who is legitimate according to an honest server can nymble-connect to that server. This prevents an attacker from framing a legitimate honest user, e.g., by getting the user blacklisted for someone else's misbehavior. This property assumes each user has a single unique identity. When IP addresses are used as the identity, it is possible for a user to "frame" an honest user who later obtains the same IP address. Non-frameability holds true only against attackers with different identities (IP addresses).

A user is legitimate according to a server if she has not been blacklisted by the server, and has not exceeded the rate limit of establishing Nymble-connections. Honest servers must be able to differentiate between legitimate and illegitimate users.

Anonymity protects the anonymity of honest users, regardless of their legitimacy according to the (possibly corrupt) server; the server cannot learn any more information beyond whether the user behind (an attempt to make) a nymble-connection is legitimate or illegitimate.

3.2 Trust assumptions

We allow the servers and the users to be corrupt and controlled by an attacker. Not trusting these entities is important because encountering a corrupt server and/or user is a realistic threat. Nymble must still attain its goals under such circumstances. With regard to the PM and NM, Nymble makes several assumptions on who trusts whom to be how for what guarantee. We summarize these trust assumptions as a matrix in Figure 3. Should a trust assumption become invalid, Nymble will not be able to provide the corresponding guarantee.

For example, a corrupt PM or NM can violate Blacklistability by issuing different pseudonyms or credentials to blacklisted users. A dishonest PM (resp. NM) can frame a user by issuing her the pseudonym (resp. credential) of another user who has already been blacklisted. To undermine the Anonymity of a user, a dishonest PM (resp. NM) can first impersonate the user by cloning her pseudonym (resp. credential) and then attempt to authenticate to a server; a successful attempt reveals that the user has already made a connection to the server during the time period. Moreover, by studying the


Who      Whom      How                    What
Servers  PM & NM   honest                 Blacklistability & Rate-limiting
Users    PM & NM   honest                 Non-frameability
Users    PM        honest                 Anonymity
Users    NM        honest & not curious   Anonymity
Users    PM or NM  honest                 Non-identification

Fig. 3. Who trusts whom to be how for what guarantee.

complaint log, a curious NM can deduce that a user has connected more than once if she has been complained about two or more times. As already described in Section 2.3, the user must trust that at least the NM or PM is honest to keep the user and server identity pair private.

4 PRELIMINARIES

4.1 Notation

The notation a ∈R S represents an element drawn uniformly at random from non-empty set S. N0 is the set of non-negative integers, and N is the set N0\{0}. s[i] is the i-th element of list s. s||t is the concatenation of (the unambiguous encoding of) lists s and t. The empty list is denoted by ∅. We sometimes treat lists of tuples as dictionaries. For example, if L is the list ((Alice, 1234), (Bob, 5678)), then L[Bob] denotes the tuple (Bob, 5678). If A is a (possibly probabilistic) algorithm, then A(x) denotes the output when A is executed given the input x. a := b means that b is assigned to a.
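A rough reading of the list notation in code (Python stand-ins; the paper's lists are abstract objects, not Python lists):

```python
s = ["a", "b"]
t = ["c"]
concat = s + t                        # s||t: concatenation of lists
L = [("Alice", 1234), ("Bob", 5678)]
# L[Bob] in the dictionary-style notation denotes the whole tuple:
entry = next(p for p in L if p[0] == "Bob")
assert entry == ("Bob", 5678)
```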

4.2 Cryptographic primitives

Nymble uses the following building blocks (concreteinstantiations are suggested in Section 6):

• Secure cryptographic hash functions. These are one-way and collision-resistant functions that resemble random oracles [5]. Denote the range of the hash functions by H.

• Secure message authentication (MA) [3]. These consist of the key generation (MA.KeyGen) and the message authentication code (MAC) computation (MA.Mac) algorithms. Denote the domain of MACs by M.

• Secure symmetric-key encryption (Enc) [4]. These consist of the key generation (Enc.KeyGen), encryption (Enc.Encrypt), and decryption (Enc.Decrypt) algorithms. Denote the domain of ciphertexts by Γ.

• Secure digital signatures (Sig) [22]. These consist of the key generation (Sig.KeyGen), signing (Sig.Sign), and verification (Sig.Verify) algorithms. Denote the domain of signatures by Σ.

4.3 Data structures

Nymble uses several important data structures:

Algorithm 1 PMCreatePseudonym

Input: (uid, w) ∈ H × N
Persistent state: pmState ∈ SP
Output: pnym ∈ P
1: Extract nymKeyP, macKeyNP from pmState
2: nym := MA.Mac(uid||w, nymKeyP)
3: mac := MA.Mac(nym||w, macKeyNP)
4: return pnym := (nym, mac)

Algorithm 2 NMVerifyPseudonym

Input: (pnym, w) ∈ P × N
Persistent state: nmState ∈ SN
Output: b ∈ {true, false}
1: Extract macKeyNP from nmState
2: (nym, mac) := pnym
3: return mac ?= MA.Mac(nym||w, macKeyNP)

4.3.1 Pseudonyms

The PM issues pseudonyms to users. A pseudonym pnym has two components, nym and mac: nym is a pseudo-random mapping of the user's identity (e.g., IP address),7 the linkability window w for which the pseudonym is valid, and the PM's secret key nymKeyP; mac is a MAC that the NM uses to verify the integrity of the pseudonym. Algorithms 1 and 2 describe the procedures for creating and verifying pseudonyms.
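Instantiated with HMAC-SHA256 (an assumption for this sketch; Section 6 discusses the concrete primitives), the two algorithms might look like:

```python
import hashlib
import hmac
import os

def mac(msg: bytes, key: bytes) -> bytes:
    """MA.Mac, instantiated here as HMAC-SHA256."""
    return hmac.new(key, msg, hashlib.sha256).digest()

nymKeyP = os.urandom(32)   # known only to the PM
macKeyNP = os.urandom(32)  # shared between the PM and the NM

def pm_create_pseudonym(uid: bytes, w: int) -> tuple[bytes, bytes]:
    """Algorithm 1 (PMCreatePseudonym): deterministic per (uid, window)."""
    wb = str(w).encode()
    nym = mac(uid + b"|" + wb, nymKeyP)
    return (nym, mac(nym + b"|" + wb, macKeyNP))

def nm_verify_pseudonym(pnym: tuple[bytes, bytes], w: int) -> bool:
    """Algorithm 2 (NMVerifyPseudonym): the NM checks the PM's MAC."""
    nym, m = pnym
    return hmac.compare_digest(m, mac(nym + b"|" + str(w).encode(), macKeyNP))

uid = hashlib.sha256(b"192.0.2.1").digest()  # hashed identity (footnote 7)
pnym = pm_create_pseudonym(uid, 42)
assert nm_verify_pseudonym(pnym, 42)
assert pm_create_pseudonym(uid, 42) == pnym  # same resource, same pseudonym
assert not nm_verify_pseudonym(pnym, 43)     # bound to linkability window w
```

The determinism check mirrors the text: the same resource always yields the same pseudonym within a linkability window.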

4.3.2 Seeds and nymbles

A nymble is a pseudo-random number, which serves as an identifier for a particular time period. Nymbles (presented by a user) across periods are unlinkable unless a server has blacklisted that user. Nymbles are presented as part of a nymble ticket, as described next.

As shown in Figure 4, seeds evolve throughout a linkability window using a seed-evolution function f; the seed for the next time period (seednext) is computed from the seed for the current time period (seedcur) as

seednext = f(seedcur).

The nymble (nymblet) for a time period t is evaluated by applying the nymble-evaluation function g to its corresponding seed (seedt), i.e.,

nymblet = g(seedt).

The NM sets seed0 to a pseudo-random mapping of the user's pseudonym pnym, the (encoded) identity sid of the server (e.g., domain name), the linkability window w for which the seed is valid, and the NM's secret key seedKeyN. Seeds are therefore specific to user-server-window combinations. As a consequence, a seed is useful only for a particular server to link a particular user during a particular linkability window.
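A minimal sketch of this evolution, assuming f and g are SHA-256 with distinct domain-separation tags and HMAC stands in for the keyed pseudo-random mapping that produces seed0 (both assumptions, not the paper's exact instantiation):

```python
import hashlib
import hmac
import os

def f(seed: bytes) -> bytes:
    """Seed-evolution function: seed_next = f(seed_cur)."""
    return hashlib.sha256(b"f|" + seed).digest()

def g(seed: bytes) -> bytes:
    """Nymble-evaluation function: nymble_t = g(seed_t)."""
    return hashlib.sha256(b"g|" + seed).digest()

seedKeyN = os.urandom(32)  # NM's secret key (generated here for the sketch)

def seed0(pnym: bytes, sid: bytes, w: int) -> bytes:
    """Pseudo-random mapping of (pnym, sid, w) under seedKeyN."""
    return hmac.new(seedKeyN, pnym + b"|" + sid + b"|" + str(w).encode(),
                    hashlib.sha256).digest()

# Evolve seeds across a whole linkability window and derive the nymbles.
L = 288
seeds, nymbles = [seed0(b"pnym-bytes", b"wikipedia.org", 7)], []
for t in range(1, L + 1):
    seeds.append(f(seeds[-1]))   # seed_t = f(seed_{t-1})
    nymbles.append(g(seeds[-1])) # nymble_t = g(seed_t)

# Given seed_i, a server can recompute nymble_i .. nymble_L, but the
# one-wayness of f and g keeps all earlier nymbles unlinkable.
```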

7. In Nymble, identities (users' and servers') are encoded into a fixed-length string using a cryptographic hash function.



Fig. 4. Evolution of seeds and nymbles. Given seed_i it is easy to compute nymble_i, nymble_{i+1}, ..., nymble_L, but not nymble*, nymble_1, ..., nymble_{i−1}.

In our Nymble construction, f and g are two distinct cryptographic hash functions. Hence, it is easy to compute future nymbles starting from a particular seed by applying f and g appropriately, but infeasible to compute nymbles otherwise. Without a seed, the sequence of nymbles appears unlinkable, and honest users can enjoy anonymity. Even when a seed for a particular time period is obtained, all the nymbles prior to that time period remain unlinkable.

4.3.3 Nymble tickets and credentials
A credential contains all the nymble tickets for a particular linkability window that a user can present to a particular server. Algorithm 3 describes the procedure for generating a credential upon request. A ticket contains a nymble specific to a server, time period, and linkability window. ctxt is encrypted data that the NM can use during a complaint involving the nymble ticket. In particular, ctxt contains the first nymble (nymble*) in the user's sequence of nymbles, and the seed used to generate that nymble. Upon a complaint, the NM decrypts ctxt to extract the user's seed, evolves it to the appropriate time period, and issues it to the server; nymble* helps the NM recognize whether the user has already been blacklisted.

The MACs mac_N and mac_NS are used by the NM and the server, respectively, to verify the integrity of the nymble ticket, as described in Algorithms 4 and 5. As will be explained later, the NM will need to verify the ticket's integrity upon a complaint from the server.

4.3.4 BlacklistsA server’s blacklist is a list of nymble∗’s correspondingto all the nymbles that the server has complained about.Users can quickly check their blacklisting status at aserver by checking to see whether their nymble∗ appearsin the server’s blacklist (see Algorithm 6).

Blacklist integrity. It is important for users to be able to check the integrity and freshness of blacklists, because otherwise servers could omit entries or present older blacklists and link users without their knowledge. The NM signs the blacklist (see Algorithm 7), along with the server identity sid, the current time period t, the current linkability window w, and target (used for freshness, explained soon), using its signing key signKeyN. As will be explained later, during a complaint procedure, the NM needs to update the server's blacklist, and thus

Algorithm 3 NMCreateCredential

Input: (pnym, sid, w) ∈ P × H × N
Persistent state: nmState ∈ S_N
Output: cred ∈ D
1: Extract macKeyNS, macKeyN, seedKeyN, encKeyN from keys in nmState
2: seed_0 := f(Mac(pnym||sid||w, seedKeyN))
3: nymble* := g(seed_0)
4: for t from 1 to L do
5:   seed_t := f(seed_{t−1})
6:   nymble_t := g(seed_t)
7:   ctxt_t := Enc.Encrypt(nymble*||seed_t, encKeyN)
8:   ticket′_t := sid||t||w||nymble_t||ctxt_t
9:   mac_{N,t} := MA.Mac(ticket′_t, macKeyN)
10:  mac_{NS,t} := MA.Mac(ticket′_t||mac_{N,t}, macKeyNS)
11:  tickets[t] := (t, nymble_t, ctxt_t, mac_{N,t}, mac_{NS,t})
12: return cred := (nymble*, tickets)

Algorithm 4 NMVerifyTicket

Input: (sid, t, w, ticket) ∈ H × N² × T
Persistent state: nmState ∈ S_N
Output: b ∈ {true, false}
1: Extract macKeyN from keys in nmState
2: (·, nymble, ctxt, mac_N, mac_NS) := ticket
3: content := sid||t||w||nymble||ctxt
4: return mac_N ?= MA.Mac(content, macKeyN)

needs to check the integrity of the blacklist presented by the server. To make this operation more efficient, the NM also generates a MAC using its secret key macKeyN (line 3). At the end of the signing procedure, the NM returns a blacklist certificate (line 6), which contains the time period for which the certificate was issued, a daisy (used for freshness, explained soon), mac and sig. Algorithms 8 and 9 describe how users and the NM can verify the integrity and freshness of blacklists.

Blacklist freshness. If the NM has signed the blacklist for the current time period, users can simply verify the digital signature in the certificate to infer that the blacklist is both valid (not tampered with) and fresh (since the current time period matches the time period in the blacklist certificate). To prove the freshness of blacklists every time period, however, the servers would need to get the blacklists digitally signed every time period, imposing a high load on the NM. To speed up this process, we use a hash chain [20], [29] to certify that "blacklist from time period t is still fresh."

For each complaint, the NM generates a new random seed daisy_L for a hash chain corresponding to time period L. It then computes daisy_{L−1}, daisy_{L−2}, ..., daisy_t up to the current time period t by successively hashing the previous daisy to generate the next with a cryptographic hash function h. For example, daisy_5 = h(daisy_6).

As outlined later (in Algorithm 13), target is set to daisy_t. Now, until the next update to the blacklist, the


Algorithm 5 ServerVerifyTicket

Input: (t, w, ticket) ∈ N² × T
Persistent state: svrState ∈ S_S
Output: b ∈ {true, false}
1: Extract sid, macKeyNS from svrState
2: (·, nymble, ctxt, mac_N, mac_NS) := ticket
3: content := sid||t||w||nymble||ctxt||mac_N
4: return mac_NS ?= MA.Mac(content, macKeyNS)

Algorithm 6 UserCheckIfBlacklisted

Input: (sid, blist) ∈ H × B^n, n ∈ N_0
Persistent state: usrState ∈ S_U
Output: b ∈ {true, false}
1: Extract nymble* from cred in usrEntries[sid] in usrState
2: return (nymble* ?∈ blist)

NM need only release daisies for the current time period instead of digitally signing the blacklist. Given a certificate from an older time period and daisy_t for the current time period t, users can verify the integrity and freshness of the blacklist by computing the target from daisy_t.
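The daisy chain of Figure 5 can be sketched directly: daisy_L is random, each earlier daisy is the hash of the next one, and the signed target closes the chain. The domain-separation prefix for h is our own assumption.

```python
import hashlib
import os

L = 288  # time periods per linkability window (Section 6.2's example)

def h(x: bytes) -> bytes:
    return hashlib.sha256(b"daisy|" + x).digest()

# NM side: build the chain once, sign target inside the certificate.
daisy = {L: os.urandom(32)}
for i in range(L - 1, 0, -1):
    daisy[i] = h(daisy[i + 1])          # daisy_i = h(daisy_{i+1})
target = h(daisy[1])                     # value covered by the signature

# User side: applying h i times to daisy_i must reach the signed target.
def is_fresh(daisy_i: bytes, i: int, target: bytes) -> bool:
    x = daisy_i
    for _ in range(i):
        x = h(x)
    return x == target
```

Releasing daisy_i thus certifies freshness with one hash preimage per time period, while only the NM (who knows the later daisies) can move the chain forward.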

4.3.5 Complaints and linking tokens
A server complains to the NM about a misbehaving user by submitting the user's nymble ticket that was used in the offending connection. The NM returns a seed, from which the server creates a linking token, which contains the seed and the corresponding nymble.

Each server maintains a list of linking tokens in a linking-list, and updates each token on the list every time period. When a user presents a nymble ticket, the server checks the nymble within the ticket against the nymbles in the linking-list entries. A match indicates that the user has been blacklisted.
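A minimal sketch of the server's linking-list, mirroring ServerLinkTicket (Algorithm 12) and the per-period evolution in ServerUpdateState (Algorithm 16); the instantiations of f and g are illustrative assumptions, as before.

```python
import hashlib

def f(seed: bytes) -> bytes:              # seed-evolution function
    return hashlib.sha256(b"f|" + seed).digest()

def g(seed: bytes) -> bytes:              # nymble-evaluation function
    return hashlib.sha256(b"g|" + seed).digest()

class LinkingList:
    def __init__(self):
        self.tokens = []                  # list of (seed, nymble) pairs

    def add_token(self, seed: bytes):
        # token built from the seed the NM returns after a complaint
        self.tokens.append((seed, g(seed)))

    def is_linked(self, ticket_nymble: bytes) -> bool:
        # ServerLinkTicket: match the ticket's nymble against all tokens
        return any(nymble == ticket_nymble for _, nymble in self.tokens)

    def next_time_period(self):
        # ServerUpdateState: evolve every token at each period boundary
        self.tokens = [(f(seed), g(f(seed))) for seed, _ in self.tokens]
```

Because each token carries its seed, the server can keep blocking the same user in every later period of the window without further help from the NM.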

4.4 Communication channels
Nymble utilizes three types of communication channels, namely type-Basic, -Auth and -Anon (Figure 6).

We assume that a public-key infrastructure (PKI) such as X.509 is in place, and that the NM, the PM and all the servers in Nymble have obtained a PKI credential from a well-established and trustworthy CA. (We stress that the users in Nymble, however, need not possess a PKI credential.) These entities can thus realize type-Basic and type-Auth channels to one another by setting up a TLS⁸ connection using their PKI credentials.

All users can realize type-Basic channels to the NM, the PM and any server, again by setting up a TLS connection. Additionally, by setting up a TLS connection over the Tor anonymizing network,⁹ users can realize a type-Anon channel to the NM and any server.

8. The Transport Layer Security Protocol Version 1.2. IETF RFC 5246.
9. While we acknowledge the existence of attacks on Tor's anonymity, we assume Tor provides perfect anonymity [21] for the sake of arguing Nymble's own anonymity guarantee.


Fig. 5. Given daisy_i it is easy to verify the freshness of the blacklist by applying h i times to obtain target. Only the NM can compute the next daisy_{i+1} in the chain.

Algorithm 7 NMSignBL

Input: (sid, t, w, target, blist) ∈ H × N² × H × B^n, n ∈ N_0
Persistent state: nmState ∈ S_N
Output: cert ∈ C
1: Extract macKeyN, signKeyN from keys in nmState
2: content := sid||t||w||target||blist
3: mac := MA.Mac(content, macKeyN)
4: sig := Sig.Sign(content, signKeyN)
5: daisy := target
6: return cert := (t, daisy, t, mac, sig)

5 OUR NYMBLE CONSTRUCTION

5.1 System setup
During setup, the NM and the PM interact as follows.

1) The NM executes NMInitState() (see Algorithm 10) and initializes its state nmState to the algorithm's output.
2) The NM extracts macKeyNP from nmState and sends it to the PM over a type-Auth channel. macKeyNP is a shared secret between the NM and the PM, so that the NM can verify the authenticity of pseudonyms issued by the PM.
3) The PM generates nymKeyP by running Mac.KeyGen() and initializes its state pmState to the pair (nymKeyP, macKeyNP).
4) The NM publishes verKeyN in nmState in a way that the users in Nymble can obtain it and verify its integrity at any time (e.g., during registration).

5.2 Server registration
To participate in the Nymble system, a server with identity sid initiates a type-Auth channel to the NM, and registers with the NM according to the Server Registration protocol below. Each server may register at most once in any linkability window.

1) The NM makes sure that the server has not already registered: if (sid, ·, ·) ∈ nmEntries in its nmState, it terminates with failure; otherwise it proceeds.
2) The NM reads the current time period and linkability window as t_now and w_now respectively, and then obtains a svrState by running (see Algorithm 11)
   NMRegisterServer_nmState(sid, t_now, w_now).
3) The NM appends svrState to its nmState, sends it to the server, and terminates with success.
4) The server, on receiving svrState, records it as its state, and terminates with success.


Algorithm 8 VerifyBL

Input: (sid, t, w, blist, cert) ∈ H × N² × B^n × C, n ∈ N_0
Output: b ∈ {true, false}
1: (td, daisy, ts, mac, sig) := cert
2: if td ≠ t ∨ td < ts then
3:   return false
4: target := h^(td−ts)(daisy)
5: content := sid||ts||w||target||blist
6: return Sig.Verify(content, sig, verKeyN)

Algorithm 9 NMVerifyBL

Input: (sid, t, w, blist, cert) ∈ H × N² × B^n × C, n ∈ N_0
Persistent state: nmState ∈ S_N
Output: b ∈ {true, false}
1–6: Same as lines 1–6 in VerifyBL
7: Extract macKeyN from keys in nmState
8: return mac ?= MA.Mac(content, macKeyN)

In svrState, macKeyNS is a key shared between the NM and the server for verifying the authenticity of nymble tickets; t_lastUpd indicates the time period when the blacklist was last updated, which is initialized to t_now, the current time period at registration.

5.3 User registration

A user with identity uid must register with the PM once each linkability window. To do so, the user initiates a type-Basic channel to the PM, followed by the User Registration protocol described below.

1) The PM checks if the user is allowed to register. In our current implementation the PM infers the registering user's IP address from the communication channel, and makes sure that the IP address does not belong to a known Tor exit node. If this is not the case, the PM terminates with failure.
2) Otherwise, the PM reads the current linkability window as w_now, and runs
   pnym := PMCreatePseudonym_pmState(uid, w_now).
   The PM then gives pnym to the user, and terminates with success.
3) The user, on receiving pnym, sets her state usrState to (pnym, ∅), and terminates with success.

5.4 Credential acquisition

To establish a Nymble-connection to a server, a user must provide a valid ticket, which is acquired as part of a credential from the NM. To acquire a credential for server sid during the current linkability window, a registered user initiates a type-Anon channel to the NM, followed by the Credential Acquisition protocol below.

1) The user extracts pnym from usrState and sends the pair (pnym, sid) to the NM.

Type    Initiator        Responder        Link
Basic   –                Authenticated    Confidential
Auth    Authenticated    Authenticated    Confidential
Anon    Anonymous        Authenticated    Confidential

Fig. 6. Different types of channels utilized in Nymble.

Algorithm 10 NMInitState

Output: nmState ∈ S_N
1: macKeyNP := Mac.KeyGen()
2: macKeyN := Mac.KeyGen()
3: seedKeyN := Mac.KeyGen()
4: (encKeyN, decKeyN) := Enc.KeyGen()
5: (signKeyN, verKeyN) := Sig.KeyGen()
6: keys := (macKeyNP, macKeyN, seedKeyN,
7:          encKeyN, decKeyN, signKeyN, verKeyN)
8: nmEntries := ∅
9: return nmState := (keys, nmEntries)

2) The NM reads the current linkability window as w_now. It makes sure the user's pnym is valid: if
   NMVerifyPseudonym_nmState(pnym, w_now)
   returns false, the NM terminates with failure; otherwise it proceeds.
3) The NM runs
   NMCreateCredential_nmState(pnym, sid, w_now),
   which returns a credential cred. The NM sends cred to the user and terminates with success.
4) The user, on receiving cred, creates usrEntry := (sid, cred, false), appends it to its state usrState, and terminates with success.

5.5 Nymble-connection establishment

To establish a connection to a server sid, the user initiates a type-Anon channel to the server, followed by the Nymble-connection Establishment protocol described below.

5.5.1 Blacklist validation

1) The server sends 〈blist, cert〉 to the user, where blist is its blacklist for the current time period and cert is the certificate on blist. (We will describe how the server can update its blacklist soon.)
2) The user reads the current time period and linkability window as t^(U)_now and w^(U)_now, and assumes these values to be current for the rest of the protocol.
3) For freshness and integrity, the user checks if
   VerifyBL_usrState(sid, t^(U)_now, w^(U)_now, blist, cert) = true.
   If not, she terminates the protocol with failure.


Algorithm 11 NMRegisterServer

Input: (sid, t, w) ∈ H × N²
Persistent state: nmState ∈ S_N
Output: svrState ∈ S_S
1: (keys, nmEntries) := nmState
2: macKeyNS := Mac.KeyGen()
3: daisy_L ∈_R H
4: nmEntries′ := nmEntries||(sid, macKeyNS, daisy_L, t)
5: nmState := (keys, nmEntries′)
6: target := h^(L−t+1)(daisy_L)
7: blist := ∅
8: cert := NMSignBL_nmState(sid, t, w, target, blist)
9: svrState := (sid, macKeyNS, blist, cert, ∅, ∅, ∅, t)
10: return svrState

5.5.2 Privacy check

Since multiple connection-establishment attempts by a user to the same server within the same time period can be linkable, the user keeps track of whether she has already disclosed a ticket to the server in the current time period by maintaining a boolean variable ticketDisclosed for the server in her state.

Furthermore, since a user who has been blacklisted by a server can have her connection-establishment attempts linked to her past establishments, the user must make sure that she has not been blacklisted thus far.

Consequently, if ticketDisclosed in usrEntries[sid] in the user's usrState is true, or
   UserCheckIfBlacklisted_usrState(sid, blist) = true,
then it is unsafe for the user to proceed: the user sets safe to false and terminates the protocol with failure.¹⁰

5.5.3 Ticket examination

1) The user sets ticketDisclosed in usrEntries[sid] in usrState to true. She then sends 〈ticket〉 to the server, where ticket is tickets[t^(U)_now] in cred in usrEntries[sid] in usrState.
   Note that the user discloses the ticket for time period t^(U)_now only after verifying blist's freshness for t^(U)_now. This procedure avoids the situation in which the user verifies the current blacklist just before a time period ends, and then presents a newer ticket for the next time period.

2) On receiving 〈ticket〉, the server reads the current time period and linkability window as t^(S)_now and w^(S)_now respectively. The server then checks that:
   • ticket is fresh, i.e., ticket ∉ slist in the server's state.
   • ticket is valid, i.e., on input (t^(S)_now, w^(S)_now, ticket), the algorithm ServerVerifyTicket returns true. (See Algorithm 5.)

10. We note that a nymble-authenticated session may be long-lived, where actions within the session are linkable. It is the establishment of multiple sessions within a time period that is disallowed.

Algorithm 12 ServerLinkTicket

Input: ticket ∈ T
Persistent state: svrState ∈ S_S
Output: b ∈ {true, false}
1: Extract lnkng-tokens from svrState
2: (·, nymble, ···) := ticket
3: for all i = 1 to |lnkng-tokens| do
4:   if (·, nymble) = lnkng-tokens[i] then
5:     return true
6: return false

   • ticket is not linked (in other words, the user has not been blacklisted), i.e.,
     ServerLinkTicket_svrState(ticket) = false.
     (See Algorithm 12.)
3) If any of the checks above fails, the server sends 〈goodbye〉 to the user and terminates with failure. Otherwise, it adds ticket to slist in its state, sends 〈okay〉 to the user and terminates with success.
4) On receiving 〈okay〉, the user terminates with success.

5.6 Service provision and access logging
If both the user and the server terminate with success in the Nymble-connection Establishment protocol described above, the server may start serving the user over the same channel. The server records ticket and logs the access during the session for a potential complaint in the future.

5.7 Auditing and filing for complaints
If at some later time the server desires to blacklist the user behind a Nymble-connection, during the establishment of which the server collected ticket from the user, the server files a complaint by appending ticket to cmplnt-tickets in its svrState.

Filed complaints are batched up and processed during the next blacklist update (described next).

5.8 Blacklist update
Servers update their blacklists for the current time period for two purposes. First, as mentioned earlier, the server needs to provide the user with its blacklist (and blacklist certificate) for the current time period during a Nymble-connection establishment. Second, the server needs to be able to blacklist misbehaving users by processing the newly filed complaints (since the last update).

The procedure for updating blacklists (and their certificates) differs depending on whether complaints are involved. When there is no complaint (i.e., the server's cmplnt-tickets is empty), blacklists stay unchanged; the certificates need only a "light refreshment." When there are complaints, on the other hand, new entries are added to the blacklists and certificates need to be regenerated. Since these updates are certified for integrity and freshness at the granularity of time periods, multiple updates


within a single time period are disallowed (otherwise servers could send users stale blacklists).

Our current implementation employs a "lazy" update: the server updates its blacklist upon its first Nymble-connection establishment request in a time period.

5.8.1 Without complaints
1) The server with identity sid initiates a type-Auth channel to the NM, and sends a request to the NM for a blacklist update.
2) The NM reads the current time period as t_now. It extracts t_lastUpd and daisy_L from the nmEntry for sid in nmState. If t_lastUpd is t_now, the server has already updated its blacklist for the current time period, and the NM terminates the protocol with failure.
3) Otherwise, the NM updates t_lastUpd to t_now. It computes daisy′ := h^(L−t_now+1)(daisy_L) and sends (t_now, daisy′) to the server.
4) The server replaces td and daisy in cert in its svrState with t_now and daisy′ respectively.

5.8.2 With complaints
1) The server with identity sid initiates a type-Auth channel to the NM and sends (blist, cert, cmplnt-tickets) from its svrState as a blacklist update request.
2) The NM reads the current time period as t_now. It runs
   NMHandleComplaints_nmState
   on input (sid, t_now, w_now, blist, cert, cmplnt-tickets). (See Algorithm 15.) If the algorithm returns ⊥, the NM considers the update request invalid, in which case the NM terminates the protocol with failure.
3) Otherwise, the NM relays the algorithm's output (blist′, cert′, seeds) to the server.
4) The server updates its state svrState as follows. It replaces blist and cert with blist||blist′ and cert′ respectively, and sets cmplnt-tickets to ∅. For each seed ∈ seeds, the server creates a token (seed, g(seed)) and appends it to lnkng-tokens. Finally, the server terminates with success.

We now explain what NMHandleComplaints does. The algorithm first checks the integrity and freshness of the blacklist (lines 2–6) and that the NM hasn't already updated the server's blacklist for the current time period. It then checks that all complaints are valid for some previous time period during the current linkability window (lines 7–12). Finally, the algorithm prepares an answer to the update request by invoking NMComputeBLUpdate and NMComputeSeeds (see Algorithm 14) (lines 13–16).

NMComputeBLUpdate (see Algorithm 13) creates new entries to be appended to the server's blacklist. Each entry is either the actual nymble* of the user being complained about, if the user has not been blacklisted already, or a random nymble otherwise. This way, the server cannot learn if two complaints are about the same user, and thus cannot link the Nymble-connections to the

Algorithm 13 NMComputeBLUpdate

Input: (sid, t, w, blist, cmplnt-tickets) ∈ H × N² × B^n × T^m
Persistent state: nmState ∈ S_N
Output: (blist′, cert′) ∈ B^m × C
1: (keys, nmEntries) := nmState
2: (·, macKeyN, seedKeyN, encKeyN, decKeyN, signKeyN, ·) := keys
3: for all i = 1 to m do
4:   (·, ·, ctxt, ·, ·) := cmplnt-tickets[i]
5:   nymble*||seed := Enc.Decrypt(ctxt, decKeyN)
6:   if nymble* ∈ blist then
7:     blist′[i] ∈_R H
8:   else
9:     blist′[i] := nymble*
10: daisy′_L ∈_R H
11: target′ := h^(L−t+1)(daisy′_L)
12: cert′ := NMSignBL_nmState(sid, t, w, target′, blist||blist′)
13: Replace daisy_L and t_lastUpd in nmEntries[sid] in nmState with daisy′_L and t respectively
14: return (blist′, cert′)

Algorithm 14 NMComputeSeeds

Input: (t, blist, cmplnt-tickets) ∈ N × B^n × T^m
Persistent state: nmState ∈ S_N
Output: seeds ∈ H^m
1: Extract decKeyN from keys in nmState
2: for all i = 1 to m do
3:   (t′, nymble, ctxt, ···) := cmplnt-tickets[i]
4:   nymble*||seed := Enc.Decrypt(ctxt, decKeyN)
5:   if nymble* ∈ blist then
6:     seeds[i] ∈_R H
7:   else
8:     seeds[i] := f^(t−t′)(seed)
9: return seeds

same user. NMComputeSeeds (see Algorithm 14) uses the same trick when computing a seed that enables the server to link a blacklisted user.
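The core of this trick can be sketched in a few lines; the function name and the plain-tuple representation of decrypted complaints are illustrative assumptions.

```python
import os

def bl_update_entries(blist, complaints):
    # complaints: (nymble_star, seed) pairs the NM decrypted from each ctxt.
    # Mirrors lines 3-9 of Algorithm 13: a user already on the blacklist
    # gets a fresh random stand-in, so the server cannot tell that two
    # complaints concern the same user.
    entries = []
    for nymble_star, _seed in complaints:
        if nymble_star in blist:
            entries.append(os.urandom(32))    # random, unlinkable filler
        else:
            entries.append(nymble_star)       # first complaint: real nymble*
    return entries
```

The blacklist thus grows by exactly one entry per complaint, so its length reveals only the number of complaints, not the number of distinct blacklisted users.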

5.9 Periodic update
5.9.1 Per time period
At the end of each time period that is not the last of the current linkability window, each registered server updates its svrState by running (see Algorithm 16)
   ServerUpdateState_svrState(),
which prepares the linking-token-list for the new time period. Each entry is updated by evolving the seed and computing the corresponding nymble.

Each registered user sets ticketDisclosed in every usrEntry in usrState to false, signaling that the user has not disclosed any ticket in the new time period.

5.9.2 Per linkability window
At the beginning of each linkability window, all the entities, i.e., the PM, the NM, the servers and the users,


Algorithm 15 NMHandleComplaints

Input: (sid, t, w, blist, cert, cmplnt-tickets) ∈ H × N² × B^n × C × T^m
Persistent state: nmState ∈ S_N
Output: (blist′, cert′, seeds) ∈ B^m × C × H^m
1: Extract t_lastUpd from nmEntries[sid] in nmState
2: b1 := (t_lastUpd < t)
3: b2 :=
4:   NMVerifyBL_nmState(sid, t_lastUpd, w, blist, cert)
5: if ¬(b1 ∧ b2) then
6:   return ⊥
7: for all i = 1 to m do
8:   ticket := cmplnt-tickets[i]; (t′, ···) := ticket
9:   b_{i,1} := (t′ < t)
10:  b_{i,2} := NMVerifyTicket_nmState(sid, t′, w, ticket)
11:  if ¬(b_{i,1} ∧ b_{i,2}) then
12:    return ⊥
13: (blist′, cert′) :=
14:   NMComputeBLUpdate_nmState(sid, t, w, blist, cmplnt-tickets)
15: seeds :=
16:   NMComputeSeeds_nmState(t, blist, cmplnt-tickets)
17: return (blist′, cert′, seeds)

Algorithm 16 ServerUpdateState

Persistent state: svrState ∈ S_S
1: Extract lnkng-tokens from svrState
2: for all i = 1 to |lnkng-tokens| do
3:   (seed, nymble) := lnkng-tokens[i]
4:   seed′ := f(seed); nymble′ := g(seed′)
5:   tokens′[i] := (seed′, nymble′)
6: Replace lnkng-tokens in svrState with tokens′
7: Replace slist (the list of seen tickets) in svrState with ∅

erase their states and start afresh. In other words, the NM and the PM must re-set up Nymble for the new linkability window, and all servers and users must re-register if they still want to use Nymble.

6 PERFORMANCE EVALUATION

We implemented Nymble and collected various empirical performance numbers, which verify the linear (in the number of "entries," as described below) time and space costs of the various operations and data structures.

6.1 Implementation and experimental setup
We implemented Nymble as a C++ library along with Ruby and JavaScript bindings. One could, however, easily compile bindings for any of the languages (such as Python, PHP, and Perl) supported by the Simplified Wrapper and Interface Generator (SWIG), for example. We utilize OpenSSL for all the cryptographic primitives.

We use SHA-256 for the cryptographic hash functions; HMAC-SHA-256 for the message authentication MA; AES-256 in CBC mode for the symmetric encryption Enc; and 2048-bit RSASSA-PSS for the digital signatures


Fig. 7. The marshaled size of various Nymble data structures. The X-axis refers to the number of entries: complaints in the blacklist update request, tickets in the credential, tokens and seeds in the blacklist update response, and nymbles in the blacklist.

Sig. We chose RSA over DSA for digital signatures because of its faster verification speed; in our system, verification occurs more often than signing.

We evaluated our system on a 2.2 GHz Intel Core 2 Duo MacBook Pro with 4 GB of RAM. The PM, the NM, and the server were implemented as Mongrel (Ruby's version of Apache) servers. The user portion was implemented as a Firefox 3 extension in JavaScript with XPCOM bindings to the Nymble C++ library. For each experiment relating to protocol performance, we report the average of 10 runs. The evaluation of data-structure sizes is the byte count of the marshaled data structures that would be sent over the network.

6.2 Experimental results

Figure 7 shows the size of the various data structures. The X-axis represents the number of entries in each data structure: complaints in the blacklist update request, tickets in the credential (equal to L, the number of time periods in a linkability window), tokens and seeds in the blacklist update response, and nymbles in the blacklist. For example, a linkability window of 1 day with 5-minute time periods equates to L = 288.¹¹ The size of a credential in this case is about 59 KB. The size of a blacklist update request with 50 complaints is roughly 11 KB, whereas the size of a blacklist update response for 50 complaints is only about 4 KB. The size of a blacklist (downloaded by users before each connection) with 500 nymbles is 17 KB.

In general, each structure grows linearly as the number of entries increases. Credentials and blacklist update requests grow at the same rate because a credential is a collection of tickets, which is more or less what is sent

11. The security-related tradeoffs for this parameter are discussed in Section 2.4. The performance tradeoffs are apparent in this section (see Figures 7 and 8(a) for credential size and acquisition times).



(a) Blacklist updates take several milliseconds, and credentials can be generated in 9 ms for the suggested parameter of L = 288.


(b) The bottleneck operation of server ticket examination takes less than 1 ms, and validating the blacklist takes the user only a few ms.

Fig. 8. Nymble’s performance at (a) the NM and (b) theuser and the server when performing various protocols.

as a complaint list when the server wishes to update its blacklist. In our implementation we use Google's Protocol Buffers to (un)marshal these structures because it is cross-platform friendly and language-agnostic.

Figure 8(a) shows the amount of time it takes the NM to perform various protocols. It takes about 9 ms to create a credential when L = 288. Note that this protocol occurs only once every linkability window for each user wanting to connect to a particular server. For blacklist updates, the initial jump in the graph corresponds to the fixed overhead associated with signing a blacklist. Executing the blacklist update protocol with 500 complaints takes the NM about 54 ms. However, when there are no complaints, it takes the NM on average less than a millisecond to update the daisy.

Figure 8(b) shows the amount of time it takes the server and user to perform various protocols. These protocols are relatively inexpensive by design, i.e., the amount of computation performed by the users and servers should be minimal. For example, it takes less than 3 ms for a user to execute a security check on a blacklist with 500 nymbles. Note that this figure includes signature verification as well, and hence the fixed-cost overhead exhibited in the graph. It takes less than a millisecond for a server to authenticate a ticket against a blacklist with 500 nymbles. Every time period (e.g., every 5 minutes), a server must update its state and blacklist. Given a linking list with 500 entries, the server will spend less than 2 ms updating the linking list. If the server were to issue a blacklist update request with 500 complaints, it would take less than 3 ms for the server to update its blacklist.

To measure the latency perceived by an authenticating user, we simulated a client authenticating to a server with 500 blacklist entries. We simulated two scenarios, with the PM, NM and server (a) on the local network and (b) on a remote machine (48 ms round-trip time).¹²

On average it took a total of 470 ms for the full protocol on the local network and 2001 ms in the remote case: acquiring a pseudonym (87 ms local; 307 ms remote) and credential (107 ms; 575 ms), acquiring the blacklist and the server checking if the user is blacklisted (179 ms; 723 ms), and finally authenticating (97 ms; 295 ms).¹³

7 SECURITY ANALYSIS

Theorem 1: Our Nymble construction has Blacklistability, Rate-limiting, Non-frameability and Anonymity provided that the trust assumptions in Section 3.2 hold true, and the cryptographic primitives used are secure. □

We summarize the proof of Theorem 1. Please refer to our technical report [16] for a detailed version.

7.1 BlacklistabilityAn honest PM and NM will issue a coalition of c uniqueusers at most c valid credentials for a given server.Because of the security of HMAC, only the NM can issuevalid tickets, and for any time period the coalition hasat most c valid tickets, and can thus make at most cconnections to the server in any time period regardlessof the server’s blacklisting. It suffices to show that if eachof the c users has been blacklisted in some previous timeperiod of the current linkability window, the coalitioncannot authenticate in the current time period k∗.

Assume the contrary: connection establishment k∗ using one of the coalition members' ticket∗ was successful even though the user was blacklisted in a previous time period k′. Since connection establishments k′ and k∗ were successful, the corresponding tickets ticket′ and ticket∗ must be valid. Assuming the security of digital signatures and HMAC, an honest server can always contact an honest NM with a valid ticket, and the NM will successfully terminate during the blacklist update. Since the server blacklisted the valid ticket′ and updates its linking list honestly, ServerLinkTicket will return fail on input ticket∗, and thus connection k∗ must fail, which is a contradiction.
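The linking-list mechanism this argument relies on can be sketched as follows. The hash-based functions f and g below are illustrative stand-ins for the HMAC-based seed-evolution and nymble-extraction functions of the actual construction, and the class name is our own; only the shape of the bookkeeping is meant to match.

```python
import hashlib


def f(seed: bytes) -> bytes:
    """Seed-evolution function, modeled here as a domain-separated hash."""
    return hashlib.sha256(b"f" + seed).digest()


def g(seed: bytes) -> bytes:
    """Nymble-extraction function, modeled here as a domain-separated hash."""
    return hashlib.sha256(b"g" + seed).digest()


class LinkingList:
    """Server-side list of (seed, nymble) pairs for blacklisted users."""

    def __init__(self) -> None:
        self.entries: list[list[bytes]] = []

    def add_complaint(self, seed: bytes) -> None:
        # Blacklist update: store the seed and the nymble it yields.
        self.entries.append([seed, g(seed)])

    def advance_time_period(self) -> None:
        # Evolve every stored seed so future tickets can be linked.
        for e in self.entries:
            e[0] = f(e[0])
            e[1] = g(e[0])

    def link_ticket(self, nymble: bytes) -> bool:
        # ServerLinkTicket: a blacklisted user's ticket matches an entry.
        return any(nymble == e[1] for e in self.entries)


# A blacklisted user's seed at complaint time:
seed_k = hashlib.sha256(b"user seed").digest()
ll = LinkingList()
ll.add_complaint(seed_k)
ll.advance_time_period()            # move to the next time period
next_nymble = g(f(seed_k))          # nymble the user would present next
assert ll.link_ticket(next_nymble)  # the connection is rejected
```

The key point of the proof is visible here: once a valid ticket has been complained about, honest linking-list updates guarantee that every later nymble derived from the same seed chain is recognized.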

12. The remote machine was a 256 slice from Slicehost running Ubuntu 4.09; it provides an indication of performance over the network.

13. For perspective, we note that with SSL disabled, the total times were 65 ms on the local network and 1086 ms in the remote case.


7.2 Non-Frameability

Assume the contrary: the adversary successfully framed honest user i∗ with respect to an honest server in time period t∗, and thus user i∗ was unable to connect in time period t∗ using ticket∗ even though none of his tickets were previously blacklisted. Because of the security of HMAC, and since the PM and NM are honest, the adversary cannot forge tickets for user i∗, and the server cannot already have seen ticket∗; it must be that ticket∗ was linked to an entry in the linking list. Thus there exists an entry (seed∗, nymble∗) in the server's linking list such that the nymble in ticket∗ equals nymble∗. The server must have obtained this entry in a successful blacklist update for some valid ticketb, implying that the NM had created this ticket for some user i.

If i ≠ i∗, then user i's seed0 is different from user i∗'s seed0 so long as the PM is honest, and yet the two seed0 values evolve to the same seed∗, which contradicts the collision-resistance property of the evolution function. Thus we have i = i∗. But, as already argued, the adversary cannot forge i∗'s ticketb, so it must be the case that i∗'s ticketb was blacklisted before t∗, which contradicts our assumption that i∗ was a legitimate user in time period t∗.
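The collision-resistance step of this argument can be made concrete with a small sketch. Here the evolution function is again modeled as a hash (an assumption of this illustration, not the paper's exact HMAC construction): distinct initial seeds remain distinct through every time period, so two different users can never evolve to the same seed∗.

```python
import hashlib


def evolve(seed: bytes) -> bytes:
    """Stand-in for the seed-evolution function f, modeled as SHA-256."""
    return hashlib.sha256(b"evolve" + seed).digest()


# Distinct seed0 values for users i and i*, as issued by an honest PM.
seed_i = hashlib.sha256(b"user i").digest()
seed_i_star = hashlib.sha256(b"user i*").digest()

for _ in range(10):  # evolve through 10 time periods
    seed_i, seed_i_star = evolve(seed_i), evolve(seed_i_star)
    # Collision resistance: distinct chains stay distinct.
    assert seed_i != seed_i_star
```

A collision at any step would be a SHA-256 collision, which contradicts the collision-resistance assumption exactly as in the proof.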

7.3 Anonymity

We show that an adversary learns only that some legitimate user connected or that some illegitimate user's connection failed; i.e., there are two anonymity sets: legitimate users and illegitimate users.

Distinguishing between two illegitimate users. We argue that any two chosen illegitimate users outside the control of the adversary will react indistinguishably. Since all honest users execute the Nymble connection establishment protocol in exactly the same manner up until the end of the Blacklist validation stage (Section 5.5.1), it suffices to show that every illegitimate user will evaluate safe to false, and hence terminate the protocol with failure, at the end of the Privacy check stage (Section 5.5.2).

For an illegitimate user (attempting a new connection) who has already disclosed a ticket during a connection establishment earlier in the same time period, ticketDisclosed for the server will have been set to true, and safe evaluates to false during establishment k∗.

An illegitimate user who has not disclosed a ticket during the same time period must already be blacklisted, meaning the server complained about some previous ticket∗ of the user. Since the NM is honest, the user's nymble∗ appears in some previous blacklist of the server. Since an honest NM never deletes entries from a blacklist, it will appear in all subsequent blacklists, and safe evaluates to false for the current blacklist. Servers cannot forge blacklists or present blacklists for earlier time periods (as otherwise the digital signature would be forgeable, or the hash in the daisy chain could be inverted).

Distinguishing between two legitimate users. The authenticity of the channel implies that a legitimate user knows the correct identity of the server, and thus the boolean ticketDisclosed for the server remains false. Furthermore, UserCheckIfBlacklisted returns false (assuming the security of digital signatures), and safe evaluates to true for the legitimate user.

Now, in the ticket presented by the user, only nymble and ctxt are functions of the user's identity. Since the adversary does not know the decryption key, the CCA2 security of the encryption implies that ctxt reveals no information about the user's identity to the adversary. Finally, since the server has not obtained any seeds for the user, under the random-oracle model the nymble presented by the user is indistinguishable from random and cannot be linked with other nymbles presented by the user. Furthermore, if and when the server complains about a user's tickets in the future, the NM ensures that only one real seed is issued (subsequent seeds corresponding to the same user are random values), and thus the server cannot distinguish between legitimate users for a particular time period by issuing complaints in a future time period.
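The client-side safe predicate on which both halves of the anonymity argument turn can be distilled into a few lines. This is an illustrative simplification of the Privacy check stage, not the implementation; the function name is ours.

```python
def safe_to_connect(ticket_disclosed: bool, user_blacklisted: bool) -> bool:
    """Privacy check: a user proceeds only if neither condition holds.

    ticket_disclosed: has a ticket already been shown to this server
                      in the current time period?
    user_blacklisted: does UserCheckIfBlacklisted find the user's
                      nymble on the server's current blacklist?
    """
    return not ticket_disclosed and not user_blacklisted


# Legitimate user: neither condition holds, so the protocol proceeds.
assert safe_to_connect(False, False) is True
# Both kinds of illegitimate user terminate with failure, and they do so
# at the same protocol stage, which is what makes them indistinguishable.
assert safe_to_connect(True, False) is False
assert safe_to_connect(False, True) is False
```

Because every illegitimate user fails at the same point with a message of the same shape, the adversary observes only membership in one of the two anonymity sets.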

7.4 Across Multiple Linkability Windows

With multiple linkability windows, our Nymble construction still has Accountability and Non-frameability because each ticket is valid for, and only for, a specific linkability window; it still has Anonymity because pseudonyms are an output of a collision-resistant function that takes the linkability window as input.
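The role the linkability window plays in pseudonym generation can be sketched as follows. Using HMAC-SHA256 as the collision-resistant keyed function is an assumption of this sketch, and the key and IP address are illustrative values.

```python
import hashlib
import hmac


def pseudonym(pm_key: bytes, ip_addr: str, window: int) -> bytes:
    """Pseudonyms bind the linkability window: the same IP address yields
    stable pseudonyms within a window but unlinkable ones across windows."""
    msg = ip_addr.encode() + window.to_bytes(4, "big")
    return hmac.new(pm_key, msg, hashlib.sha256).digest()


key = b"\x01" * 32                      # hypothetical PM key
p1 = pseudonym(key, "192.0.2.7", 1)
p2 = pseudonym(key, "192.0.2.7", 2)
assert p1 != p2                          # different windows: unlinkable
assert p1 == pseudonym(key, "192.0.2.7", 1)  # same window: deterministic
```

Because the window number is an input to the function, tickets from one window carry no information usable in another, which is what preserves Anonymity across windows.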

8 DISCUSSION

IP-address blocking. By picking IP addresses as the resource for limiting the Sybil attack, our current implementation closely mimics the IP-address blocking employed by Internet services. There are, however, some inherent limitations to using IP addresses as the scarce resource. If a user can obtain multiple addresses, she can circumvent both nymble-based and regular IP-address blocking. Subnet-based blocking alleviates this problem, and while it is possible to modify our system to support subnet-based blocking, new privacy challenges emerge; a more thorough treatment is left for future work.

Other resources. Users of anonymizing networks would be reluctant to use resources that directly reveal their identity (e.g., passports or a national PKI). Email addresses could provide more privacy, but they provide weak blacklistability guarantees because users can easily create new email addresses. Other possible resources include client puzzles [25] and e-cash, where users are required to perform a certain amount of computation or pay money to acquire a credential. These approaches would limit the number of credentials obtained by a single individual by raising the cost of acquiring credentials.
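A client puzzle in the style of [25] can be sketched as a hash preimage search: the client must find a nonce whose hash with the challenge has a prescribed number of leading zero bits, so solving costs roughly 2^difficulty hash evaluations while verification costs one. The function names and parameters below are illustrative only.

```python
import hashlib
import os


def solve_puzzle(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(challenge || nonce) begins with
    difficulty_bits zero bits. Expected work is about 2**difficulty_bits."""
    nonce = 0
    target = 1 << (256 - difficulty_bits)
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def verify_puzzle(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """One hash suffices to check a claimed solution."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))


challenge = os.urandom(16)
nonce = solve_puzzle(challenge, 12)  # small difficulty for illustration
assert verify_puzzle(challenge, nonce, 12)
```

Tuning difficulty_bits sets the price of each credential: high enough to deter mass acquisition, low enough not to burden legitimate users.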

Server-specific linkability windows. An enhancement would be to support varying T and L for different servers. As described, our system does not support varying linkability windows, but it does support varying time periods. This is because the PM is not aware of


the server the user wishes to connect to, yet it must issue pseudonyms specific to a linkability window. We do note that the use of resources such as client puzzles or e-cash would eliminate the need for a PM, and users could obtain nymbles directly from the NM. In that case, server-specific linkability windows could be used.

Side-channel attacks. While our current implementation does not fully protect against side-channel attacks, we mitigate the risks. We have implemented the various algorithms in a way that their execution time leaks little information that cannot already be inferred from the algorithm's output. Also, since a confidential channel does not hide the size of the communication, we have constructed the protocols so that each kind of protocol message is of the same size regardless of the identity or current legitimacy of the user.
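The fixed-message-size countermeasure can be sketched as a simple length-prefix-and-pad scheme. The message size and helper names here are hypothetical; the point is only that every message of a given kind has an identical on-the-wire length.

```python
MESSAGE_SIZE = 1024  # hypothetical fixed size for one kind of protocol message


def pad_message(payload: bytes, size: int = MESSAGE_SIZE) -> bytes:
    """Length-prefix the payload and pad to a fixed size, so ciphertext
    length reveals nothing about the user's identity or legitimacy."""
    if len(payload) + 4 > size:
        raise ValueError("payload too large for fixed message size")
    header = len(payload).to_bytes(4, "big")
    return header + payload + b"\x00" * (size - 4 - len(payload))


def unpad_message(message: bytes) -> bytes:
    """Recover the original payload from a padded message."""
    n = int.from_bytes(message[:4], "big")
    return message[4 : 4 + n]


short = pad_message(b"ok")
long_ = pad_message(b"x" * 500)
assert len(short) == len(long_) == MESSAGE_SIZE  # identical on the wire
assert unpad_message(short) == b"ok"
```

An observer of the encrypted channel sees only a stream of equal-length messages, leaving timing as the remaining (and separately mitigated) side channel.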

9 CONCLUSIONS

We have proposed and built a comprehensive credential system called Nymble, which can be used to add a layer of accountability to any publicly known anonymizing network. Servers can blacklist misbehaving users while maintaining their privacy, and we show how these properties can be attained in a way that is practical, efficient, and sensitive to the needs of both users and services.

We hope that our work will increase the mainstream acceptance of anonymizing networks such as Tor, which has thus far been completely blocked by several services because of users who abuse their anonymity.

ACKNOWLEDGMENTS

Peter C. Johnson and Daniel Peebles helped in the early stages of prototyping. We are grateful for the suggestions and help from Roger Dingledine and Nick Mathewson.

REFERENCES

[1] G. Ateniese, J. Camenisch, M. Joye, and G. Tsudik. A Practical and Provably Secure Coalition-Resistant Group Signature Scheme. In CRYPTO, LNCS 1880, pages 255–270. Springer, 2000.

[2] G. Ateniese, D. X. Song, and G. Tsudik. Quasi-Efficient Revocation in Group Signatures. In Financial Cryptography, LNCS 2357, pages 183–197. Springer, 2002.

[3] M. Bellare, R. Canetti, and H. Krawczyk. Keying Hash Functions for Message Authentication. In CRYPTO, LNCS 1109, pages 1–15. Springer, 1996.

[4] M. Bellare, A. Desai, E. Jokipii, and P. Rogaway. A Concrete Security Treatment of Symmetric Encryption. In FOCS, pages 394–403, 1997.

[5] M. Bellare and P. Rogaway. Random Oracles Are Practical: A Paradigm for Designing Efficient Protocols. In Proceedings of the 1st ACM Conference on Computer and Communications Security, pages 62–73. ACM Press, 1993.

[6] M. Bellare, H. Shi, and C. Zhang. Foundations of Group Signatures: The Case of Dynamic Groups. In CT-RSA, LNCS 3376, pages 136–153. Springer, 2005.

[7] D. Boneh and H. Shacham. Group Signatures with Verifier-Local Revocation. In ACM Conference on Computer and Communications Security, pages 168–177. ACM, 2004.

[8] S. Brands. Untraceable Off-line Cash in Wallets with Observers (Extended Abstract). In CRYPTO, LNCS 773, pages 302–318. Springer, 1993.

[9] E. Bresson and J. Stern. Efficient Revocation in Group Signatures. In Public Key Cryptography, LNCS 1992, pages 190–206. Springer, 2001.

[10] J. Camenisch and A. Lysyanskaya. An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation. In EUROCRYPT, LNCS 2045, pages 93–118. Springer, 2001.

[11] J. Camenisch and A. Lysyanskaya. Dynamic Accumulators and Application to Efficient Revocation of Anonymous Credentials. In CRYPTO, LNCS 2442, pages 61–76. Springer, 2002.

[12] J. Camenisch and A. Lysyanskaya. Signature Schemes and Anonymous Credentials from Bilinear Maps. In CRYPTO, LNCS 3152, pages 56–72. Springer, 2004.

[13] D. Chaum. Blind Signatures for Untraceable Payments. In CRYPTO, pages 199–203, 1982.

[14] D. Chaum. Showing Credentials without Identification: Transferring Signatures between Unconditionally Unlinkable Pseudonyms. In AUSCRYPT, LNCS 453, pages 246–264. Springer, 1990.

[15] D. Chaum and E. van Heyst. Group Signatures. In EUROCRYPT, pages 257–265, 1991.

[16] C. Cornelius, A. Kapadia, P. P. Tsang, and S. W. Smith. Nymble: Blocking Misbehaving Users in Anonymizing Networks. Technical Report TR2008-637, Dartmouth College, Computer Science, Hanover, NH, December 2008.

[17] I. Damgård. Payment Systems and Credential Mechanisms with Provable Security Against Abuse by Individuals. In CRYPTO, LNCS 403, pages 328–335. Springer, 1988.

[18] R. Dingledine, N. Mathewson, and P. Syverson. Tor: The Second-Generation Onion Router. In USENIX Security Symposium, pages 303–320, Aug. 2004.

[19] J. R. Douceur. The Sybil Attack. In IPTPS, LNCS 2429, pages 251–260. Springer, 2002.

[20] S. Even, O. Goldreich, and S. Micali. On-Line/Off-Line Digital Signatures. In CRYPTO, LNCS 435, pages 263–275. Springer, 1989.

[21] J. Feigenbaum, A. Johnson, and P. F. Syverson. A Model of Onion Routing with Provable Anonymity. In Financial Cryptography, LNCS 4886, pages 57–71. Springer, 2007.

[22] S. Goldwasser, S. Micali, and R. L. Rivest. A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks. SIAM J. Comput., 17(2):281–308, 1988.

[23] J. E. Holt and K. E. Seamons. Nym: Practical Pseudonymity for Anonymous Networks. Internet Security Research Lab Technical Report 2006-4, Brigham Young University, June 2006.

[24] P. C. Johnson, A. Kapadia, P. P. Tsang, and S. W. Smith. Nymble: Anonymous IP-Address Blocking. In Privacy Enhancing Technologies, LNCS 4776, pages 113–133. Springer, 2007.

[25] A. Juels and J. G. Brainard. Client Puzzles: A Cryptographic Countermeasure Against Connection Depletion Attacks. In NDSS. The Internet Society, 1999.

[26] A. Kiayias, Y. Tsiounis, and M. Yung. Traceable Signatures. In EUROCRYPT, LNCS 3027, pages 571–589. Springer, 2004.

[27] B. N. Levine, C. Shields, and N. B. Margolin. A Survey of Solutions to the Sybil Attack. Technical Report 2006-052, University of Massachusetts Amherst, Oct. 2006.

[28] A. Lysyanskaya, R. L. Rivest, A. Sahai, and S. Wolf. Pseudonym Systems. In Selected Areas in Cryptography, LNCS 1758, pages 184–199. Springer, 1999.

[29] S. Micali. NOVOMODO: Scalable Certificate Validation and Simplified PKI Management. In 1st Annual PKI Research Workshop Proceedings, April 2002.

[30] T. Nakanishi and N. Funabiki. Verifier-Local Revocation Group Signature Schemes with Backward Unlinkability from Bilinear Maps. In ASIACRYPT, LNCS 3788, pages 533–548. Springer, 2005.

[31] L. Nguyen. Accumulators from Bilinear Pairings and Applications. In CT-RSA, LNCS 3376, pages 275–292. Springer, 2005.

[32] I. Teranishi, J. Furukawa, and K. Sako. k-Times Anonymous Authentication (Extended Abstract). In ASIACRYPT, LNCS 3329, pages 308–322. Springer, 2004.

[33] P. P. Tsang, M. H. Au, A. Kapadia, and S. W. Smith. Blacklistable Anonymous Credentials: Blocking Misbehaving Users Without TTPs. In CCS '07: Proceedings of the 14th ACM Conference on Computer and Communications Security, pages 72–81, New York, NY, USA, 2007. ACM.

[34] P. P. Tsang, M. H. Au, A. Kapadia, and S. W. Smith. PEREA: Towards Practical TTP-Free Revocation in Anonymous Authentication. In ACM Conference on Computer and Communications Security, pages 333–344. ACM, 2008.


Patrick Tsang is a Ph.D. candidate in the Computer Science department at Dartmouth College. His research covers security and trust in computers and networks, privacy-enhancing technologies, and applied cryptography. He is particularly interested in developing scalable security and privacy solutions for resource-constrained environments. He is a member of the International Association for Cryptologic Research (IACR).

Dr. Apu Kapadia received his Ph.D. from the University of Illinois at Urbana-Champaign and received a four-year High-Performance Computer Science Fellowship from the Department of Energy for his dissertation research. Following his doctorate, Apu joined Dartmouth College as a Post-Doctoral Research Fellow with the Institute for Security Technology Studies (ISTS), and then became a Member of Technical Staff at MIT Lincoln Laboratory. He joined Indiana University Bloomington as an Assistant Professor in the School of Informatics and Computing in 2009. Apu is an active researcher in security and privacy and is particularly interested in anonymizing networks, usable security, security in peer-to-peer and mobile networks, and applied cryptography. He is a member of IEEE and ACM.

Cory Cornelius is a Ph.D. student in the Computer Science department at Dartmouth College. His research interests are security, privacy, networks, and, in particular, the intersection of these fields.

Prof. Sean W. Smith has been active in information security since before there was a Web. Previously a research scientist at Los Alamos National Lab and at IBM, he joined Dartmouth in 2000. Sean has published two books, been granted over a dozen patents, and advised over three dozen Ph.D., M.S., and senior honors theses. Sean was educated at Princeton and CMU, and is a member of Phi Beta Kappa and Sigma Xi.