

conference reports

THANKS TO OUR SUMMARIZERS

SECURITY '06

Madhukar Anand

Tanya Bragin

Kevin Butler

Lionel Litty

Michael Locasto

Bryan D. Payne

Manigandan Radhakrishnan

Micah Sherr

Patrick Traynor

Wei Xu

METRICON 1.0

Dan Geer

NSPW ’06

Matt Bishop

Michael Collins

Carrie Gates

Abe Singer

CONTENTS

91  15th USENIX Security Symposium
112 MetriCon 1.0
116 New Security Paradigms Workshop (NSPW ’06)

Security ’06: 15th USENIX Security Symposium

Vancouver, B.C., Canada
July 31–August 4, 2006

KEYNOTE ADDRESS

The Current State of the War on Terrorism and What It Means for Homeland Security and Technology

Richard A. Clarke, Chairman, Good Harbor Consulting LLC

Summarized by Kevin Butler

Richard Clarke gave an impassioned, scathing indictment of the U.S. government’s policies and practices that electrified the crowd in Vancouver, bringing them to their feet at the conclusion of his keynote speech. Clarke was the former counterterrorism advisor on the U.S. National Security Council for the Bush administration and has advised every administration since Reagan. He brought his 30 years of consulting work to bear on his speech, where he laid out many of the issues facing both the security research community and the populace at large, given the current global situation.

Clarke began by commenting that, nearing the fifth anniversary of the September 11 attacks, we should examine what measures to increase security have been addressed since that time. As Clarke pointed out, five years is a long time: It took less than five years for the Allies to destroy Nazi Germany and Imperial Japan. The day after the 9/11 attacks, President Bush asked Clarke to outline a series of plans to maintain security. Clarke responded with a series of blueprints for going after Al Qaeda while simultaneously addressing vulnerabilities at home. Part of these plans concerned the state of IT security, given the growing dependence of the nation on cyber-based systems. Unfortunately, since that time, little of what had been discussed was implemented, while other, more draconian measures that impinge on civil liberties have instead taken their place.

Much of Clarke’s criticism was aimed at the administration’s handling of the Al Qaeda threat. After 9/11 there was talk about finding the cells and taking out the leadership, when the focus should have also included attacking the root causes of problems: a battle of ideas, where the West displays a better ideology that is more appealing to the Islamic world than that promulgated by Al Qaeda. The results have not been great to this point, as Al Qaeda still exists and has transformed from a hierarchical, pyramid structure to a decentralized organization with dozens of individual groups. In the 36 months after 9/11, Clarke asserted, the number of attacks by groups related to Al Qaeda doubled around the world. Clarke satirically referred to an Arabic satellite TV station established by the U.S. government “that nobody watches” as our entry into the battle of ideas.

He also showed many similarities between the American occupation of Iraq and the French occupation of Algeria, a conflict that the French ultimately lost. By alienating the Iraqi population thanks to the stories from Abu Ghraib, the assaults on Fallujah, and the perception of killing innocent civilians in Iraq (despite what the truth may be, Al Jazeera and Al Arabiya show daily images of Americans killing women and children through aerial and other assaults), we are losing the battle of ideas and convincing the Iraqis that Al Qaeda was right: For its oil, we are taking over a country that did nothing to us. We are almost fulfilling bin Laden’s prophecies. What are the metrics of success in Iraq? Clarke suggested these are the economy and stability of the country. When asked how many innocent civilians had been killed in Iraq, President Bush replied, “Around 30,000”; however, a team from Johns Hopkins did a field survey using internationally accepted metrics and found the number was close to 100,000, not counting the 2,500 dead and 10,000 wounded Americans. The Pentagon admits to having already spent $400 billion on the war, and some estimates put the total price tag at over a trillion dollars. Part of the rationale of the high economic and human cost is that fighting the terrorists “over there” means we won’t be fighting them “here.” This argument is logically fallacious, however; nothing we are doing there prevents them from fighting us here or anywhere else, as evidenced by the attacks on the Madrid train system, attacks on the London subway, and the planned attacks in Toronto. None of the fighting overseas prevented these situations, nor will it prevent future attacks in Canada and the United States.

Clarke argued that more effort should have been spent protecting critical infrastructure and minimizing the possibilities of damage here. An ABC report showed how easy it was to take a backpack onto a train and leave it unattended, the modus operandi of the Madrid attacks. Similarly, 120 chemical plants in the United States store lethal gas, with over a million people in the plume radius. If one such plant were attacked and a plume unleashed, over 17,000 casualties would be expected; however, Congress has debated plant protection for over three years but nothing has been done. This has occurred with issue after issue. Instead, measures that reduce civil liberties—such as the ineffective color-coded threat system—have been implemented by abusing the term “security.” The government engaged in mass wiretapping without any judicial oversight and, combined with other abuses, this has led to the association of security with Big Brother in the minds of the public.

The current situation has implications for security researchers, as many systems were not designed with security in mind, and Clarke asserted that research into securing these systems is necessary. However, DARPA no longer funds many of these activities, and research money is being handled by the Department of Homeland Security through the HSARPA program; that program is severely underfunded, with this year’s budget cut from $16 million to $12 million—Clarke noted that such was the administration’s commitment to cybersecurity research. This view is shared by the president’s own committee, which identified the program as requiring significant new funding, but this has not yet come to pass.

Clarke made some controversial statements describing what regulation should be required. He suggested that identifying best practices for ISPs would make it possible to regulate the service provider industry. He also suggested governmental regulation of critical systems such as electrical power and banking, but he considers the current government ideologically opposed to such measures. The upshot is that many opportunities to focus on real security have been wasted. The best thing for us in the community to do is not to rely on the government, but to unite and come up with ways of solving problems ourselves, as well as staying vigilant for governmental attempts to further erode civil liberties. He ended his address by noting that when people erode civil liberties, privacy, and constitutional guarantees, they need to be reminded of their oath to defend the Constitution against all enemies, lest they become the enemies themselves.

A spirited question period followed. To the question “Is it that the administration is apathetic, malicious, or doesn’t have a clue?” Clarke responded, “Yes.” The administration, in his view, cannot be educated and needs to be replaced. He also reiterated the need to win the battle of ideas by supporting moderate and progressive voices in the Middle East, and he noted the amount of “security theatre” being practiced, where there is a show of security without substance. A particularly resonant point was the need to find and disclose vulnerabilities; as Clarke noted, our real enemies already know our vulnerabilities and security processes, and pretending we won’t talk about them because these enemies will learn about them is very wrong. One of the best defenses against future attacks is an environment of openness and disclosure. Another important point was that we need to accept that risk is present and casualties will occur; the question of an attack is not “if” but, rather, “when.” We should focus on mitigating the effects of the next attacks whenever they may happen, while focusing on ideals such as due process and guaranteed Constitutional rights that are the hallmarks of our civilization: If we cannot preserve those, then what are we fighting for?


AUTHENTICATION

Summarized by Micah Sherr

A Usability Study and Critique of Two Password Managers

Sonia Chiasson, P.C. van Oorschot, and Robert Biddle, Carleton University

Sonia Chiasson presented a usability study of two password managers, PwdHash (USENIX Security ’05) and Password Multiplier (WWW2005). Although both password managers attempt to increase security by shifting the burden of creating and maintaining strong passwords away from the user, the study demonstrated that usability issues in the applications led to incorrect usage of the managers and, in some cases, to the leakage of password information.

Sonia presented the results of a follow-up questionnaire given to members of the focus group. Participants indicated that they did not perceive an improved sense of security. Users were uncomfortable with not knowing the passwords to the sites they visited, and many had misconceptions as to the operation of the managers (e.g., the belief that their passwords were kept in a large and remote database). At the same time, the transparency of the applications often led to false senses of security. For example, users who installed a password manager but failed to activate it falsely believed their passwords were being protected. Sonia concluded by noting that the usability of a system directly impacts its security. Security systems must support users’ conceptions as to how software should operate; they should imbue a sound (and not necessarily transparent) sense of security.
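Both tools derive site-specific passwords rather than storing them, which is the burden-shifting the study examines. As a rough illustration of the general idea (not the exact algorithms used by PwdHash or Password Multiplier), a per-site password can be derived by hashing a master secret together with the site's domain; the helper below is hypothetical:

```python
# Illustrative sketch only: derives a per-site password from a master
# secret and the site's domain, in the spirit of PwdHash/Password
# Multiplier. The real tools use their own key-derivation schemes.
import base64
import hashlib

def site_password(master_secret: str, domain: str, length: int = 12) -> str:
    digest = hashlib.sha256(f"{master_secret}@{domain}".encode()).digest()
    # Base64-encode and truncate so the result fits typical password rules.
    return base64.b64encode(digest).decode()[:length]

print(site_password("correct horse battery", "example.com"))
print(site_password("correct horse battery", "evil-phish.net"))  # differs per site
```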

On the Release of CRLs in Public Key Infrastructure

Chengyu Ma, Beijing University; Nan Hu and Yingjiu Li, Singapore Management University

The talk was given by Yingjiu Li, who presented a method of determining a more nearly optimal schedule for distributing certificate revocation lists (CRLs)—time-stamped lists of certificates that have expired or become compromised or otherwise invalidated. In the first part of his talk, Yingjiu described an empirical study of CRL distributions. By examining CRL release data from VeriSign, he and his coauthors derived a probability distribution function for the distribution of CRLs. They determined that most revocation requests occur within the first few days after a certificate is issued and are less common as the certificate ages.

In the second part of his talk, Yingjiu introduced a new economic model for optimizing the distribution of CRLs. The objective of the model is to minimize the total operational cost of distributing the certificates while maintaining a reasonable degree of security. Yingjiu presented evidence that different types of certificate authorities (CAs)—i.e., start-up CAs and well-established CAs—should adopt different strategies for CRL distribution and that the number of CRL distributions will stabilize after a period of time.
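The trade-off behind such a model can be sketched with invented cost terms: releasing CRLs more often raises distribution cost, while releasing them less often lengthens the window in which a revoked certificate is still accepted. The cost function and constants below are illustrative assumptions of mine, not the parameters or form of the paper's model:

```python
# Illustrative sketch of the release-frequency trade-off, with invented
# cost terms; the paper's actual economic model differs in its details.

def total_cost(releases_per_year: int,
               cost_per_release: float = 500.0,
               expected_revocations: float = 200.0,
               liability_per_day_exposed: float = 3.0) -> float:
    distribution_cost = releases_per_year * cost_per_release
    # Average exposure window is half the interval between releases.
    avg_days_exposed = 365.0 / releases_per_year / 2.0
    exposure_cost = expected_revocations * avg_days_exposed * liability_per_day_exposed
    return distribution_cost + exposure_cost

best = min(range(1, 366), key=total_cost)
print(best, total_cost(best))   # frequency that minimizes this toy cost
```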

Biometric Authentication Revisited: Understanding the Impact of Wolves in Sheep’s Clothing

Lucas Ballard and Fabian Monrose, Johns Hopkins University; Daniel Lopresti, Lehigh University

Lucas Ballard presented a study of biometric authentication, a method of using biological and physiological traits to authenticate humans. In particular, Lucas’s talk focused on exposing weaknesses in both the implementations and evaluation methods for biometric authentication schemes.

The presentation examined the security of biometrics based on the writing of passphrases (i.e., a human is authenticated by comparing his or her passphrase against a corpus of previously supplied handwriting samples). Lucas presented results that show that even novice forgers (students who showed some talent in producing forgeries) can produce forgeries far better than what the biometric systems predicted could be produced.

In the last part of the talk, Lucas described a system for automating forgeries based on a generative model. A synthesis algorithm selects n-grams from a corpus of the target’s handwriting samples. The n-grams are then used to forge a passphrase in the target’s handwriting style. Lucas presented results that show that the generated and forged passphrases perform better than those generated by skilled forgers.
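The selection step can be pictured as a greedy cover of the target passphrase by n-grams the attacker already holds samples of. The toy sketch below is my own simplification over plain text; the actual system works with handwriting strokes and a generative model:

```python
# Toy sketch: greedily cover a target phrase with the longest n-grams
# available in a corpus of captured samples. The real system stitches
# together handwriting strokes, not characters.

def cover_with_ngrams(target: str, corpus: str, max_n: int = 4):
    available = {corpus[i:i + n]
                 for n in range(1, max_n + 1)
                 for i in range(len(corpus) - n + 1)}
    pieces, i = [], 0
    while i < len(target):
        for n in range(max_n, 0, -1):          # prefer the longest available n-gram
            piece = target[i:i + n]
            if piece in available:
                pieces.append(piece)
                i += len(piece)
                break
        else:
            pieces.append(target[i])           # no sample at all: fall back to one char
            i += 1
    return pieces

print(cover_with_ngrams("open sesame",
                        corpus="the quiet mouse opens a sesame seed bun"))
```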

INVITED TALK

Selling Security to Software Developers: Lessons Learned While Building a Commercial Static Analysis Tool

Brian Chess, Fortify Software

Summarized by Tanya Bragin

Brian Chess said that finding security vulnerabilities in source code is like trying to find the proverbial “needle in a haystack.” However, he stressed that well-built, highly configurable tools not only help organizations uncover potential problems but also shape long-term software development policy and its enforcement.

There are several reasons static analysis works well. First, existing software development procedures can easily be modified to employ source code analysis and in doing so allow bugs to be fixed before deployment. Second, there are a limited number of common vulnerability types, so patterns can be used to detect them via static analysis methods. Third, static analysis tools can be built to give uniform coverage over large code bases in reasonable time, as opposed to, for example, those using dynamic analysis, and thus they are practical.
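As a heavily simplified illustration of “patterns for common vulnerability types,” one can scan source lines for calls that frequently indicate buffer overflow or format-string problems. A commercial tool such as Fortify's does far more (data-flow analysis, configurable rule sets, audit workflow), so the snippet below is only a grep-style toy:

```python
# Toy pattern-based scanner: flags calls that commonly indicate buffer
# overflow or format-string problems in C code. Real static analyzers
# model data and control flow; this is only a grep-style illustration.
import re
import sys

RISKY_CALLS = {
    r"\bgets\s*\(": "gets() has no bounds check",
    r"\bstrcpy\s*\(": "strcpy() may overflow the destination buffer",
    r"\bsprintf\s*\(": "sprintf() may overflow; prefer snprintf()",
    r"\bprintf\s*\(\s*[A-Za-z_]": "possible format-string bug (non-literal format)",
}

def scan(path: str):
    with open(path) as source:
        for lineno, line in enumerate(source, start=1):
            for pattern, message in RISKY_CALLS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {message}: {line.strip()}")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        scan(source_file)
```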

Although there have been many academic efforts to develop static analysis techniques, the resulting tools are not appropriate for use in commercial environments, for a number of reasons. First, tools developed in academic settings were not explicitly designed for the scale and variety of environments in which they could be used. Consequently, they generally lack the customizability needed in a complex organization. Second, reporting in such tools tends to be insufficient for enterprise use. Finally, the management interfaces required for tracking defects and activities over time are lacking.

It turns out that these shortcomings are key to the eventual success of a practical static analysis tool. Aside from simply working well, finding important vulnerabilities, not crashing, not hanging, and not having any vulnerabilities of its own, the tool has to be scalable, well-documented, and intuitive to a person who is not an expert in all nuances of software security. The main users of static analysis tools in industry tend to not just be developers themselves, but security auditing departments that outline, monitor, and enforce organization-wide security policies, and Brian stressed the importance of working closely with such teams.

Adopting new technology that fundamentally changes how software should be written is hard and can cause resistance from developers. “Developers are optimizers,” stated Brian, himself having come from that background. “If they think that security is optional, they will optimize it out.” He continued with a couple of amusing examples of excuses he has heard out in the field about why obvious vulnerabilities do not actually make code insecure, such as “I trust the system administrator” or “That code will never be run.” But in reality these vulnerabilities can open unexpected backdoors into critical software, which can significantly damage the company brand, open it up to liability, and cause economic losses.

Teaching developers to think about security is critical, concluded Brian, but we need to enable them to do their job efficiently with the necessary tools. A good analogy is spelling, he offered: We teach everybody to spell, but we still have spell checkers in our word processors. Similarly to how we already have compilers to check for proper programming language syntax, static analysis tools can go a step further and help enforce good programming practices that result in fewer software vulnerabilities and incrementally help developers write good code on their own.

INVITED TALK

Security Vulnerabilities, Exploits, and Attack Patterns: 15 Years of Art, Pseudo-Science, Fun, and Profit

Ivan Arce, Core Security Technologies

Summarized by Madhukar Anand

In conferences such as the USENIX Security Symposium, it is typical to learn about the work and experience of academicians and researchers in computer science. The invited talk by Ivan Arce (CTO of Core Security Technologies) was very unlike this. It offered an insightful, alternate perspective on the evolution of information security. Arce presented his views based on his forays and experiments in designing systems for fun and profit.

Arce, introduced to the world of computers through the Commodore VIC 20, recounted how he saw computers as a toy to experiment with. Consequently, he grew up with a notion of computers as a game rather than a tool. Programming the VICs taught him that computers could be tailored and have many hidden features, and he learned the key difference between an enemy and an adversary. Some of these experiences have shaped his views on the evolution of attacks and security paradigms in the past 15 years.

Evolution of Attacks: In the early 1990s (post–RT Morris worm) there was no Linux, no TCP/IP stack in Windows, and no Web. Security information flowed from technical journals, bulletin boards, and underground publications. For most people outside of the academic world, these underground publications were the main source. This was the time when many home-computer users started turning professional—kids who picked up programming, started exploring their home PCs, and turned to security for jobs. By the mid-1990s, exploits in shell code became common and stack smashing was done for fun and profit. From then on, there has been increasing complexity in software and, subsequently, an increasing sophistication in attacker skills.

Today, the attacker has the ability to hack networks, write viruses, and reverse-engineer software. The trend now, post-2001, is to go directly at a workstation and own the whole network. This works because there are myriad vulnerable applications, it is difficult to implement inventory, it is difficult to deploy and manage countermeasures, desktops are operated by careless and unaware users, and, above all, it represents the law of “minimal effort for maximum profit.”

Arce observed that in our “quest for completeness,” we have too strongly emphasized understanding attacks, rather than vulnerabilities. To illustrate this point, he gave the example of the ssh v1 CRC insertion attack discovered in 1998. The patch distributed to fix this vulnerability was itself found to contain a bug. Therefore, it is important that we focus on getting the basics right rather than going after the obscure and complicated.

Another problem with current systems is that underlying models of systems have not kept up with changing technology. The management, deployment (policies, ACLs, RBAC, authentication tokens, etc.), and generation of security value (AV/IDS signatures, patches, certificates, vulnerability checks, etc.) in an information system are centralized. In contrast, outside of the information world, there has been an evolution of open source software, P2P, mobile code, and technology based on social networking and reputation- and collaboration-based systems. So the current information security technology and business models should not ignore them.

In conclusion, Arce felt that the generation that entered the information security field in the early 1990s has successfully managed to create a market for security. This generation has embraced and promoted open and unmediated discussion about security. He felt that we are better off now than we were 15 years ago (at least, we know where we are wrong), and so he urged everyone to learn from past mistakes, stop reinventing the wheel, and brace ourselves for a new generation in information security.

ATTACKS

Summarized by Patrick Traynor

How to Build a Low-Cost, Extended-Range RFID Skimmer

Ilan Kirschenbaum and Avishai Wool, Tel Aviv University

Ilan Kirschenbaum explained that as radio-frequency identifier (RFID) technology begins to permeate our lives (credit cards, passports, etc.), many in the community have voiced their concerns about the security of such devices. An attacker can, with little effort, skim the data entrusted to these devices from a distance. Unlike with other networking technologies, however, no one has examined the challenges of building a device capable of such a feat. To address these issues, Ilan discussed the process of developing a portable, extended-range ISO 14443-A compliant RFID skimmer. Using a Texas Instruments RFID reader and one of two antennas (a 10x15 cm printed PCB antenna and a 39 cm copper-tube antenna), the researchers were able to develop a leaching device capable of capturing RFID data from a distance of 35 cm, or approximately 3.5 times the advertised operational range. At approximately US$100, such a mechanism is easily within the budget of any dedicated adversary.

Many of the questions addressed the use of a loop antenna, which inherently has poor range. Additionally, because of the size of the copper-tubing antenna, many in the audience wondered about the practicality of using such a device without it being noticed. In its current form, Ilan argued, the device would in fact be extremely effective if the attacker could hide it well before the time of attack (e.g., behind drywall or around a potted plant). Ilan then concluded by mentioning that although the current work was simply a proof of concept, better antenna technology could easily be applied for a modest increase in cost.

Keyboards and Covert Channels

Gaurav Shah, Andres Molina, and Matt Blaze, University of Pennsylvania

Awarded Best Student Paper

Gaurav Shah pointed out that there are countless ways to extract sensitive data from a targeted machine. Most of these techniques, from “shoulder-surfing” to the installation of spyware, are easily detectable with current defenses. Shah discussed the JitterBug, a PS/2 keyboard attachment designed to capture such information. Unlike previous hardware keyloggers, which suffer from the problem of recovering the captured data, the JitterBug delivers compromised data through an interpacket covert channel. Based on the time between two packets, an adversary located somewhere between the legitimate sender and the receiver can decode single bits of information without arousing suspicion on either end of the communication. Gaurav stressed that such attacks are not limited to password stealing. By placing such hardware into machines prior to their distribution, adversaries from rival corporations to watchful governments can easily establish surveillance over a number of targets.
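The channel is easy to picture: the device nudges the delay before each keystroke so that the inter-packet gap falls in the lower or upper half of a small timing window, and a network observer reads one bit per packet from which half it landed in. The window size and encoding below are arbitrary illustrative choices, not the JitterBug's actual parameters:

```python
# Simplified model of a keystroke-timing covert channel: one bit is
# encoded per keystroke by nudging its delay within a timing window.
# Window size and thresholds here are arbitrary illustrative values.
WINDOW_MS = 20.0  # assumed quantization window

def encode(natural_delays_ms, bits):
    """Shift each delay to the low or high half of its window to carry one bit."""
    sent = []
    for delay, bit in zip(natural_delays_ms, bits):
        base = (delay // WINDOW_MS) * WINDOW_MS
        offset = WINDOW_MS * 0.75 if bit else WINDOW_MS * 0.25
        sent.append(base + offset)
    return sent

def decode(observed_delays_ms):
    """Recover bits from where each delay sits inside its window."""
    return [1 if (d % WINDOW_MS) >= WINDOW_MS / 2 else 0 for d in observed_delays_ms]

delays = [112.0, 87.0, 143.0, 95.0, 128.0, 76.0, 151.0, 99.0]
secret = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(delays, secret)) == secret
```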

When asked how the JitterBug determined which data to transmit, Gaurav discussed the use of trigger sequences. For example, attacks targeting user passwords can be activated after “ssh hostname.domain” is typed into the keyboard. Others asked about the use of more robust TCP covert channel encoding methods that have been implemented in software. Gaurav agreed that such encoding mechanisms could be incorporated into the JitterBug and that the use of interpacket spacing was sufficient for a proof of concept. Gaurav concluded with a discussion on the need for error correction and repeated messages in order to account for network jitter and uncertainty. By publishing this work, the authors hoped to draw attention to the need for all devices to be part of the trusted computing base in order for real security to be attained.

Lessons from the Sony CD DRM Episode

J. Alex Halderman and Edward W. Felten, Princeton University

When Sony, the world’s second largest music company, deployed digital rights management (DRM) protection for its music, it also released a number of critical vulnerabilities into the digital landscape. J. Alex Halderman discussed how he and Edward Felten of Princeton University logged the events as the first major rollout of mandatory DRM software occurred. The software, made by First4Internet and SunnComm, works by automatically installing itself on the first insertion of a CD-ROM. Both products installed themselves as rootkits on each client’s machine, in order to prevent their subsequent uninstallation. Unfortunately, because of vulnerabilities in both products, other malicious programs could then exploit the DRM software to gain root control. Sony initially refused to offer uninstallation software, even when presented with the weaknesses discovered by numerous researchers. However, after numerous class action lawsuits and much public pressure, the music distributor eventually provided the tools necessary to rid infected machines of these rootkits.

Questioning by the audience was limited, focusing largely on the response from other members of industry. For example, a number of attendees were interested in whether or not the software by First4Internet and SunnComm has been classified as spyware by antivirus companies. The speaker said that he believed that all major antivirus manufacturers in fact now classify this software as such. Halderman then discussed how the elimination of enabling mechanisms such as AutoRun would go a long way to prevent a repeat of such an incident. In the end, this work highlights the conflict between entities attempting to protect their intellectual property and legitimate users of that content. In an attempt to prevent “unapproved” use of their music, Sony in fact broke into and endangered its clients’ PCs. Because the use of DRM is effectively an attempt to undermine a user’s control of his or her own computing apparatus, its use is fundamentally a security issue.

SOFTWARE

Summarized by Lionel Litty

Milk or Wine: Does Software Security Improve with Age?

Andy Ozment and Stuart E. Schechter, MIT Lincoln Laboratory

Andy Ozment presented the results of a study that examined whether the reporting rate of security vulnerabilities decreases over time. An earlier study by Eric Rescorla had found no evidence that this was the case for the four operating systems he studied. The authors of the study found otherwise by scrutinizing OpenBSD over a period of 7.5 years, starting with version 2.3, the “foundational” version for this study. They carefully examined the 140 security advisories released over that period of time, as well as the OpenBSD source code repository, to determine exactly when a vulnerability was introduced in the source code and when it was fixed. In addition, they estimated the amount of new code introduced by each release of OpenBSD to find out whether there was a correlation with the number of vulnerabilities that were introduced in a release.

Andy reported that a majority of the vulnerabilities found during the past 7.5 years were already present in the foundational version and that the median lifetime of those vulnerabilities was at least 2.6 years. In addition, he showed that the rate at which vulnerabilities were discovered in the foundational version decreased over time, with only half as many vulnerabilities reported during the second half of the time period. Using a reliability growth model, the authors also estimated that 42 vulnerabilities remained in the foundational version. Andy concluded by saying that they found no clear correlation between the number of lines of code added between versions and the number of new vulnerabilities, and that the OpenBSD source code was indeed like wine.

When asked whether he thought the findings would extend to other operating systems that may have less of an emphasis on security, Andy replied that his stab in the dark was that they might, but that the rate of decrease was probably slower. Theo de Raadt asked (via IRC and Dug Song) whether the study took into account the mitigation mechanisms, such as having a nonexecutable stack, added by OpenBSD since the foundational version. Andy said that they had not and that taking them into account would make the picture look even better for OpenBSD.

N-Variant Systems: A Secretless Framework for Security Through Diversity

Benjamin Cox, David Evans, Adrian Filipi, Jonathan Rowanhill, Wei Hu, Jack Davidson, John Knight, Anh Nguyen-Tuong, and Jason Hiser, University of Virginia

Artificial diversity consists of introducing differences in the way software is executed on a machine, for example by randomizing the memory layout. This may foil an attacker who is trying to take control of the machine, as long as the attacker does not know the key used to randomize the memory layout. Benjamin Cox described a new technique that would eliminate this reliance on a secret by running several variants of the same process in parallel, resulting in provable security. For all benign inputs, the variants would behave identically, but for some attacks, the variants would diverge. A monitor would detect the divergence and raise an alarm.

Benjamin discussed two ways to introduce diversity between the variants: partitioning the address space and tagging instructions. When partitioning the address space, the two variants use distinct memory addresses. If an attack relies on the absolute memory address of either code or data, it will cause a memory fault in at least one of the variants. Instruction set tagging adds a tag to each instruction. This tag is checked and removed by a dynamic binary translator before the code is run. Any code injected by the attacker will have an incorrect tag for at least one of the variants, once again raising an alarm. Depending on the workload and the diversification technique used, overhead for the prototype ranges from 17% to more than double the execution time for Apache.
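The monitor can be thought of as lock-step comparison: feed every input to all variants, compare their externally visible behavior, and treat any divergence as an attack. The sketch below models that idea with toy functions standing in for the variants; the real system runs diversified processes under a kernel-level monitor:

```python
# Conceptual model of an N-variant monitor: identical benign behavior,
# divergence on (some) attacks. Variants here are toy functions, not
# actual diversified processes.

class DivergenceAlarm(Exception):
    pass

def monitor(variants, request):
    """Run the request through every variant and compare their outputs."""
    results = [variant(request) for variant in variants]
    if len(set(results)) != 1:
        raise DivergenceAlarm(f"variants disagreed on {request!r}: {results}")
    return results[0]

# Two toy "variants" that agree on benign input but react differently to
# an input abusing an absolute address that is only meaningful in variant A.
def variant_a(req):
    return "fault" if "0xdeadbeef" in req else f"ok:{req}"

def variant_b(req):
    return f"ok:{req}"   # same address is not mapped the same way here

print(monitor([variant_a, variant_b], "GET /index.html"))   # benign: both agree
try:
    monitor([variant_a, variant_b], "GET /0xdeadbeef")       # attack: divergence
except DivergenceAlarm as alarm:
    print("blocked:", alarm)
```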

Ron Jackson questioned whether security was provable for the address space partitioning scheme. He described an attack consisting of only overwriting some of the bits of an address, allowing the attack to be successful in both variants. Benjamin answered that this attack simply fell outside the class of attacks prevented by address space partitioning. Other limitations of the current prototype, left for future work, included handling signals and shared memory.

Taint-Enhanced Policy Enforcement: A Practical Approach to Defeat a Wide Range of Attacks

Wei Xu, Sandeep Bhatkar, and R. Sekar, Stony Brook University

Wei Xu observed that before performing a security-sensitive operation, an application should check that this operation is not under the control of an attacker. This can be done by tracking what parts of an operation are tainted, i.e., originating from untrusted interfaces. Data in memory originating from the network is marked as tainted, and taint is then propagated throughout the execution of the application. To this end, the source code of the application needs to be automatically instrumented to maintain a bitmap of tainted memory locations.

When a security-sensitive operation is performed, a policy-defined check is conducted to make sure the operation is safe. For instance, an application may construct a SQL query based on user input. To prevent SQL injections, a check ensures that only data fields in the query are under user control. Wei suggested that a policy ensuring that no two consecutive tokens are tainted would achieve that goal. Similar policies can be used to defeat other types of attacks, such as cross-site scripting, path-traversal attacks, and command injections. These policies are enabled through the availability of fine-grained taint information. Various low-level optimizations allow the performance overhead of the approach to be below 10% for server applications.
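To see why “no two consecutive tainted tokens” catches classic SQL injection, note that an injected fragment such as ' OR '1'='1 contributes several adjacent tainted tokens, whereas a benign value yields a single tainted literal surrounded by trusted syntax. The character-level check below is my own simplification of that policy, not the authors' compiler instrumentation:

```python
# Simplified illustration of a taint-enhanced policy check for SQL
# injection: reject queries in which two consecutive tokens are tainted.
# Real systems track taint at the byte level via compiler instrumentation.
import re

def build_query(template: str, user_value: str):
    """Return the query text plus a parallel per-character taint map."""
    prefix, suffix = template.split("?", 1)
    text = prefix + user_value + suffix
    taint = [False] * len(prefix) + [True] * len(user_value) + [False] * len(suffix)
    return text, taint

def check_policy(text: str, taint):
    tokens = []
    for match in re.finditer(r"'[^']*'|[A-Za-z_]\w*|[0-9]+|[^\sA-Za-z0-9_]", text):
        span = taint[match.start():match.end()]
        tokens.append(any(span))            # token is tainted if any char is
    for first, second in zip(tokens, tokens[1:]):
        if first and second:
            raise ValueError("policy violation: two consecutive tainted tokens")

template = "SELECT * FROM users WHERE name = '?'"
good, good_taint = build_query(template, "alice")
check_policy(good, good_taint)              # passes: one tainted token

bad, bad_taint = build_query(template, "x' OR '1'='1")
try:
    check_policy(bad, bad_taint)            # injected tokens are adjacent
except ValueError as err:
    print("rejected:", err)
```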

Asked whether one could apply the same approach directly to binaries, thus suppressing the requirement that source code be available, Wei explained that it may be possible but would almost certainly preclude the optimizations they used to keep the performance overhead acceptable. Anil Somayaji asked whether hardware support could help improve performance. Wei answered that it was a possibility that had not been fully explored yet.

INVITED TALK

Signaling Vulnerabilities in Wiretapping Systems

Matt Blaze, University of Pennsylvania

Summarized by Patrick Traynor

With a recent push for the compliance of VoIP systems with the Communications Assistance for Law Enforcement Act (CALEA), many in the community have focused on the technical hurdles of this proposal. In an attempt to understand the security and reliability of these new eavesdropping systems, Matt Blaze and a team of University of Pennsylvania researchers have examined the inner workings of traditional telecommunications surveillance technology. As part of the Trustworthy Network Eavesdropping and Countermeasures (TNEC) project, this group explores the challenges behind building both surveillance-friendly and surveillance-resistant networks. Most important, this work asks the question, “Assuming properly functioning tools, is wiretap evidence trustworthy?”

There are a number of ways in which telecommunications traffic can be intercepted. If direct access to the targeted phone is possible, the eavesdropping party can gain control of the two wires connecting the phone to the network (known as the local loop). Such methods are prone to discovery by the monitored party and are therefore not typically used in law enforcement. This work instead focuses on the use of the loop extender, a wiretapping device placed between the targeted phone and a network switch that allows for monitoring in both traffic analysis (“pen register”) and full content modes. The loop extender works by redirecting a copy of the content from the “target line” to recording devices on a “friendly line.” Because both traffic analysis and content are always sent to the friendly line, the actual information recorded is set via configuration at the recording device.

Blaze and his team began by examining vulnerabilities in the pen register mode. In order to determine which number a target is calling, the recording equipment decodes the series of unique audio tones sent across the line. In order to accommodate a wide range and quality level of products, both the network switching equipment and eavesdropping tools accept a range of analog tones. The ranges accepted by the devices, however, are not the same. Accordingly, Blaze was able to show that sending tones just outside the range of the telecommunications switch (either above or below) but within the range of the loop extender allowed the party being eavesdropped upon to forge the list of dialed numbers recorded in pen register mode.

Blaze then discussed subtle vulnerabilities in the audio recording mechanism used for eavesdropping. The taping of audio content automatically begins at the cessation of the “C-tone,” an audio tone transmitted across a phone line when it is not in use. Accordingly, when the C-tone is again detected, the recording equipment automatically shuts itself off. To prevent sensitive material from being recorded, a targeted party can simply play the C-tone continuously into the receiver. This technique is successful even when the C-tone is played at 1/1000 of its normal volume.

The new CALEA standards fix many of the problems associated with loop extender wiretaps. Eavesdropping now occurs at the switch itself, and signaling has been moved onto a separate channel. Unfortunately, many vendors continue to offer systems that remain compatible with the C-tone shutoff mechanism. Depending on the configuration of these new CALEA-compliant devices, modern eavesdropping systems may also be susceptible to the vulnerabilities discovered for loop extender systems.

The attendees of the talk asked Matt a wide variety of questions. Because of the illegality of wiretapping equipment, many were curious as to how this group was able to perform their work without fear of legal recourse. Matt mentioned that because this research was part of an NSF grant devoted to wiretapping technology, they had implied consent to carry it out. Others were curious about testing of the local loop by the phone companies. Matt said that it was indeed possible for the phone company to detect the deviation of frequencies from the accepted band, but that the wide range of quality across user devices means that such divergence from the standard may only be the result of poor construction. Matt’s talk concluded with a discussion of the practical difficulties facing IP wiretapping efforts.

NETWORK SECURITY

Summarized by Wei Xu

SANE: A Protection Architecture for Enterprise Networks

Martin Casado and Tal Garfinkel, Stanford University; Aditya Akella, Carnegie Mellon University; Michael J. Freedman, Dan Boneh, and Nick McKeown, Stanford University

Martin Casado started his talk by pointing out that traditional techniques for putting access control onto enterprise networks are associated with many problems, such as inflexibility, loss of redundancy, and difficulty in management.

To address these limitations, Martin proposed SANE, a Secure Architecture for the Networked Enterprise. SANE introduces an isolation layer between the network layer and the datalink layer to govern all connectivity within the enterprise. All network entities (such as hosts, switches, and users) in SANE are authenticated. Access to network services is granted in the form of capabilities (encrypted source routes) by a logically centralized server (the Domain Controller) according to high-level declarative access control policies. Each capability is checked at every step in the network.
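One way to picture “capabilities as encrypted source routes checked at every step” is that the Domain Controller seals the approved hop list and every switch verifies the seal (and its own presence on the route) before forwarding. The sketch below collapses SANE's layered, per-switch construction into a single shared HMAC key purely to keep the example short:

```python
# Illustrative capability check, loosely inspired by SANE: the Domain
# Controller authenticates an approved source route; every switch on the
# path verifies it before forwarding. Key handling is simplified to one
# shared secret for brevity; SANE's real construction is layered.
import hashlib
import hmac

DC_KEY = b"demo-shared-secret"  # assumed shared between DC and switches

def issue_capability(src: str, dst: str, route: list[str]) -> dict:
    payload = f"{src}->{dst}:{','.join(route)}".encode()
    tag = hmac.new(DC_KEY, payload, hashlib.sha256).hexdigest()
    return {"src": src, "dst": dst, "route": route, "tag": tag}

def switch_forward(switch_id: str, capability: dict) -> bool:
    payload = (f"{capability['src']}->{capability['dst']}:"
               f"{','.join(capability['route'])}").encode()
    expected = hmac.new(DC_KEY, payload, hashlib.sha256).hexdigest()
    valid = hmac.compare_digest(expected, capability["tag"])
    return valid and switch_id in capability["route"]

cap = issue_capability("hostA", "fileserver", ["sw1", "sw3", "sw7"])
print(switch_forward("sw3", cap))   # True: on the approved route
print(switch_forward("sw9", cap))   # False: not part of this capability
cap["route"].append("sw9")
print(switch_forward("sw9", cap))   # False: tampering breaks the MAC
```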


Martin also presented several important details in SANE (such as connectivity to the DC, establishing shared keys, and establishing topology), as well as the countermeasures that SANE offers for attack resistance and containment. He also mentioned that their prototype implementation showed that SANE could be deployed in current networks with only a few modifications.

PHAS: A Prefix Hijack Alert System

Mohit Lad, University of California, Los Angeles; Dan Massey, Colorado State University; Dan Pei, AT&T Labs—Research; Yiguo Wu, University of California, Los Angeles; Beichuan Zhang, University of Arizona; Lixia Zhang, University of California, Los Angeles

In this talk, Mohit Lad first briefly introduced the BGP (Border Gateway Protocol) and then illustrated the widely reported BGP prefix hijack attack, in which a router originates a route to a prefix but does not provide data delivery to the actual prefix.

After that, Mohit presented the Prefix Hijack Alert System (PHAS). The objective of PHAS is to provide reliable and timely notification to prefix owners when their BGP origin changes, so that prefix owners can quickly and easily detect prefix hijacking events and take prompt action to address the problem. PHAS uses BGP data collectors (especially RouteViews and RIPE) to observe BGP updates. The updates are then provided to the origin monitor, which detects origin changes and delivers email notifications to the affected prefix owners through multipath delivery. According to Mohit, PHAS is lightweight and readily deployable.
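At its core, the detection step is bookkeeping: remember which origin AS each monitored prefix has been announced with and alert the registered owner when an update introduces a new one. The sketch below is a minimal version of that idea; PHAS itself adds time windows to suppress transient changes and uses redundant delivery paths, and the prefix and addresses shown are hypothetical:

```python
# Minimal origin-change detector in the spirit of PHAS: alert a prefix
# owner whenever a BGP update announces the prefix with a previously
# unseen origin AS. Real deployments add time windows and filtering.
from collections import defaultdict

known_origins = defaultdict(set)                 # prefix -> origin ASNs seen so far
owners = {"192.0.2.0/24": "noc@example.net"}     # hypothetical registration

def notify(owner: str, prefix: str, origin: int):
    print(f"ALERT to {owner}: {prefix} now announced by AS{origin}")

def process_update(prefix: str, as_path: list[int]):
    origin = as_path[-1]                         # origin AS is the last hop in the path
    if known_origins[prefix] and origin not in known_origins[prefix]:
        if prefix in owners:
            notify(owners[prefix], prefix, origin)
    known_origins[prefix].add(origin)

process_update("192.0.2.0/24", [3356, 2914, 64500])   # first sighting: baseline
process_update("192.0.2.0/24", [1299, 64500])         # same origin: quiet
process_update("192.0.2.0/24", [7018, 64666])         # new origin: alert fires
```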

Passive Data Link Layer 802.11 Wireless Device Driver Fingerprinting

Jason Franklin, Carnegie Mellon University; Damon McCoy, University of Colorado, Boulder; Parisa Tabriz, University of Illinois, Urbana-Champaign; Vicentiu Neagoe, University of California, Davis; Jamie Van Randwyk, Sandia National Laboratories; Douglas Sicker, University of Colorado, Boulder; Scott Shenker, University of California, Berkeley

Parisa Tabriz gave the first part of this talk, in which she explained the motivation for their work by arguing that the emerging driver-specific exploits were particularly serious for 802.11 wireless devices because of their wide deployment and external accessibility. Device driver fingerprinting can help an attacker launch a driver-specific attack.

Damon McCoy then presented the details of their passive fingerprinting technique, which identifies the wireless device driver running on an IEEE 802.11 compliant device. Their technique is based on the observation that many of the details of the active scanning process are not fully specified in the IEEE 802.11 standard, and hence they are determined by wireless driver authors. As a result, drivers can be distinguished by their unique active scanning patterns. Damon said that they had generated signatures for 17 different wireless drivers, and their evaluation showed that this fingerprinting technique could quickly and accurately fingerprint wireless device drivers in real-world wireless network conditions. Finally, Damon discussed several possible ways to prevent 802.11 wireless driver fingerprinting.
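One simplified way to think about “unique active scanning patterns” is to bin the time gaps between a card's probe requests and compare the resulting histogram with stored driver signatures. The binning, distance metric, and example signatures below are my own illustrative choices, not the paper's features or classifier:

```python
# Toy fingerprint: characterize a driver by the distribution of delays
# between its probe requests and match against known signatures.
# Bins, distance metric, and the example signatures are invented.

def signature(probe_times, bin_ms=50, max_ms=1000):
    gaps = [b - a for a, b in zip(probe_times, probe_times[1:])]
    bins = [0] * (max_ms // bin_ms)
    for gap in gaps:
        index = min(int(gap * 1000) // bin_ms, len(bins) - 1)
        bins[index] += 1
    total = sum(bins) or 1
    return [count / total for count in bins]

def closest_driver(observed_sig, known):
    def distance(sig):
        return sum((a - b) ** 2 for a, b in zip(observed_sig, sig))
    return min(known, key=lambda name: distance(known[name]))

# Hypothetical stored signatures and an observed capture (times in seconds).
known = {
    "driver_a": signature([0.00, 0.10, 0.20, 0.30, 0.40]),
    "driver_b": signature([0.00, 0.45, 0.90, 1.35, 1.80]),
}
capture = [2.00, 2.11, 2.21, 2.30, 2.41]
print(closest_driver(signature(capture), known))   # expected: driver_a
```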

When asked whether the fingerprint of a wireless driver would change in different operating systems or when the driver interacted with different access points, Damon and Parisa answered that the active scanning behavior should depend only on the driver implementation, but they had not looked into these particular problems.

INVITED TALK

Turing Around the Security Problem: Why Does Security Still Suck?

Crispin Cowan, SUSE Linux

Summarized by Bryan D. Payne

Crispin Cowan began this invited talk by saying that “security sucks” more than any other aspect of computing. He backed up this statement by arguing from Turing’s theorem that security is an undecidable problem. In addition, security is harder than correctness. Although correctness is important, security increases programmer burden, since attackers can produce arbitrary input. History has shown that neither money nor even diligent corporate practices are sufficient to solve the problem.

Crispin explained that one approach to security is to use heuristics. This is seen, for example, in static analyzers. The problem with heuristics is that they are not perfect: They cannot analyze all programs, and they produce imperfect results for the programs they can analyze.

As an alternative, Crispin suggests that security professionals use the OODA Loop. Colonel John Boyd (USAF) invented the idea of an OODA Loop. OODA stands for Observation, Orientation, Decision, and Action. The OODA Loop provides a pattern for fighter pilots to use in decision-making. In general, the person with the fastest OODA Loop will win a battle. Crispin spent a large portion of the talk mapping the process of security to the OODA Loop.


The OODA Loop can be mapped to the three critical questions: when, where, what. Crispin walked through many security tools and concepts, mapping each to one of these three questions. He covered topics such as Saltzer and Schroeder’s principles of secure design, syntax checkers, semantic checkers, type-safe languages, kernel enhancements, intrusion detection, intrusion prevention, network access controls, host access controls, capabilities, and more.

In closing, Crispin changed gears and spoke about AppArmor, SUSE’s approach to improving application security. Crispin described the fundamental difference between AppArmor and SELinux: AppArmor uses path names and SELinux uses labels. He also acknowledged that AppArmor is a work in progress; some types of protection are not currently possible.

In the Q&A session, Pete Loscocco, a leader of the SELinux project, suggested that the differences between SELinux and AppArmor are more fundamental than Crispin stated, because SELinux tries to be comprehensive whereas AppArmor has less control. Crispin acknowledged that there are trade-offs and suggested that SELinux achieves higher security but also has a higher cost. Other people asked questions about practical security issues. Why aren’t companies doing more about security? Crispin believes that they are secure enough for their purposes. What’s being done to address the security problems today? Crispin pointed to better programming languages and various forms of isolation. How does one properly extend security from the kernel into the applications? Crispin suggested using discrete programs that cooperate (e.g., Postfix).

PANEL

Major Security Blunders of the Past 30 Years

Matt Blaze, University of Pennsylvania; Virgil Gligor, University of Maryland; Peter Neumann, SRI International Computer Science Laboratory; Richard Kemmerer, University of California, Santa Barbara

Summarized by Madhukar Anand

Matt Blaze: We are going to have more damage because of excess attention to security. For instance, consider an alarm system. It is designed so that the alarm is triggered as quickly as possible. Consequently, any attack that aims at not setting off the alarm is entering into an arms race with the alarm sensor design. Although the alarm system could win such an arms race, it must be noted that the main objective of the attacker is not to defeat the sensor but to break the system as a whole. A simple strategy of setting off the alarm repeatedly would do the trick: The police might be tricked into losing faith in the alarm system.

Another example is the signaling in telephone systems. The objective of the attacker here is to defeat the billing system and make free long distance calls or call unauthorized phone numbers. The protocols are designed by people who did not know they were designing a security function. The only people who are motivated to break into such a system are the bad guys.

The Telnet protocol was designed with an option for encryption (a DES 56-bit key in a 64-bit package). The newer library, to be more compliant with the DES standard, had 8 bits designated as the checksum. If the checksum did not tally, the key would be set to all 0s. Because of this, if there were bit errors, there was only a 1/256 chance of actually using encryption (a random 8-bit checksum matches by chance just one time in 256); in the other 255/256 cases, the key would be all 0s. So this is a counterexample to the principle “Good code is secure code”: Code that checks secure values in this case indeed made the system less secure.

Virgil Gligor: Blunder 1: The Morris worm—a buffer overflow in fingerd. This was the first tangible exploit of a buffer overflow, the first large-scale infection engine, and the first large-scale DDoS attack. Although no lessons were learned for a long time, it did result in the creation of CERT. It also led to the realization that any new technology introduces new vulnerabilities and that vulnerability removal takes 6 to 12 years, creating a security gap. Also, we learned that “security is a fundamental concern of secondary importance.”

Blunder 2: Multilevel Secure Operating System (MLS-OS). This was pushed by the defense establishments in the United States and Europe. However, there was no market for MLS-OSes, as they break off-the-shelf applications and are often hard to use. In fact, the stronger the MLS-OS, the worse the system (e.g., B2 secure Xenix broke all our games). This blunder largely drained most security funding and was a major R&D distraction.

Blunder 3: Failed attempts at fast authenticated encryption (in one pass with a single cryptographic primitive). This blunder was largely academic. Starting with CBC+CRC-32 systems in 1977 and continuing to the NSA dual counter mode + XOR, each successive system was built only to be broken later. Together, they constitute the largest sequence of cryptographic blunders. The upshot of this is the realization that some of the simplest cryptographic problems are hard. The lesson to be learned is to stick to basic system security research.


Richard Kemmerer: Blunder 1: U.S. export controls on cryptography. These controls were based on international traffic in arms regulations. However, they were very difficult to enforce. The lessons learned from the export control episode are that they make sense but change constantly and that the enforcers have no sense of humor—a case in point being the Verification Assessment study (Kemmerer, 1986), which was published with the warning that its export is restricted by the Arms Export Control Act. This hampered the publication of the study and resulted in many copies being stocked and not reaching their readership.

Blunder 2: Kryptonite Evolution 2000 U-Lock. Although the Kryptonite was advertised as the “toughest bike lock,” all it took to break it was a Bic pen. After small slits were cut in the end of the pen’s barrel to ease it in, the lock opened with a single twist.

Blunder 3: Australian Raw Sewage Dump in March 2000. Vitek Boden from Brisbane, Australia, hacked into a local waste management computer system and directed raw sewage into local rivers, into parks, and even onto the landscaping of a Hyatt Regency hotel. Boden had previously been fired from the company that installed the waste management system, and he vandalized the public waterways as an act of revenge. The lessons learned from this were to pay attention to security in SCADA systems and to insider threats. In conclusion, “Beware of gorillas invading your system while you are counting basketball passes,” which was depicted by showing a video where a person in a gorilla suit walks through the scene but is unnoticed by most of the attendees until the second showing of the video.

Peter Neumann: The big picture is that security is complex, it is an end-to-end, system-emergent property, and there is a weakness in depth. We do not really learn from experience, as some of the blunders keep recurring. Security is holistic, so we should not consider it in isolation. For instance, application security is easily undermined by weaknesses in OS security.

Propagation Blunders: The 1980 ARPANET collapse, resulting from a single point of failure in one node, ended up bringing down the entire network. In 1990, there was a half-day when AT&T long lines collapsed, again as a result of a single point of failure. Yet another example of propagation blunders is the repeated power outages from 1967 to August 2003. In some sense, we don’t design our infrastructures and our systems to withstand things as simple as a single point of failure. The standard answer is, “This can never happen.”

Backup and Recovery Blunders (e.g., air traffic backup and recovery failures): In July 2006, there was a power outage in Palmdale, CA, including the air traffic control center. The center automatically switched to backup diesel generators. They worked for about an hour, but then the system that switches the center between commercial and backup power failed and the building went dark. In 1991, three New York airports were closed as backup batteries were accidentally drained. Other examples of such blunders include the Swedish train reservation system failure and the Japanese stock exchange system failure in November 2005. There have been blunders where there was no backup, such as when the New York Public Library lost all its references.

Software Blunders: There have been many types of buffer overflows. Programming language research could help reduce these, though it could also prove to be a hindrance. In fact, Multics was mostly free of buffer overflow vulnerabilities, owing to the use of PL/I as the implementation language: PL/I required an explicit declaration of the length of all strings.

Election Systems Blunder: The requirements of an electronic voting system include end-to-end reliability, integrity, and accountability. By that measure, the Help America Vote Act (HAVA) was a huge blunder: Because none of the electronic paperless systems are auditable, they lack integrity.

The lessons learned are that the individual cases may matter less than the fact that we keep seeing the same types of problems. We need a massive cultural change in how we develop software.

More resources on risks and trustworthy architectures can be found at http://www.csl.sri.com/users/neumann/chats4.html.

INVITED TALK

Aspect-Oriented Programming: Radical Research in Modularity

Gregor Kiczales, University of British Columbia

Summarized by Kevin Butler

Gregor Kiczales gave a practical, code-oriented talk that provided an introduction to aspect-oriented programming (AOP). Kiczales and his team originated the idea of AOP while he was at PARC. The talk focused on what AOP is, what makes it interesting, and how software written with AOP will differ from what has come before. The focus was on the Aspect/J programming language, a seamless extension to Java that is fully supported by the Eclipse open source project and is a model for other AOP tools.

One of the biggest benefits of AOP is its ability to aid code expressiveness: The code looks like the design, and what is going on in terms of operations is clear. As an example, Kiczales pointed to a Java program called JHotDraw, which allows joining of points and lines, setting colors, and other graphical tasks. This can be designed and implemented with object-oriented programming, where shapes become classes; there are also objects for displaying the shapes and for update signaling (e.g., when shapes change). The goals for any programming paradigm are expressiveness, modularity, and abstraction. For JHotDraw, object-oriented programming supports these goals well for shapes and their display, but not for update signaling. The structure of signaling is not localized, clear, or declarative, and the resulting solution is neither modular nor abstract; signaling is clearly not localized, since it cuts across multiple objects. With AOP, it is possible to meet all three goals.

Some of the terminology specific to AOP includes join points, which are defined points in a program's flow (Aspect/J allows for dynamic join points), and pointcuts, which select sets of join points by a predicate that may or may not match. Pointcuts are composed like predicates, and a set of primitive pointcuts is defined in Aspect/J. When a pointcut matches, a specified piece of code, called advice, is run. To return to the drawing example, there is an observer-pattern aspect of the system: certain points in the system's execution change a shape, and after returning from a change, the display is updated. With object-oriented programming, the update calls would be scattered and tangled; with AOP, by contrast, values at the join points are defined, a pointcut can explicitly expose certain values, and advice can use these exposed values. The key to AOP is its ability to deal with crosscutting across multiple concerns. For example, a pointcut in a model can specify a slice of a different part of the world to provide an interface. This could be particularly pertinent to industry: One can walk up to a big system, drop pointcuts into it, and program against the system even if it doesn't have the structure the programmer was expecting. In response to questions about this, Kiczales explained that one of the fundamental differences with AOP is that there is no need to explicitly signal events, as is necessary with OOP; for example, one could take a million lines of already written code and add pointcuts without even having the source code for the original system. When asked whether the goal of AOP was to be able to retrofit features into legacy code, Kiczales was circumspect and careful in his answer. The biggest goal of AOP is to modularize crosscutting concerns, which sometimes can be done for legacy code and sometimes not. Oftentimes, new features are crosscutting concerns.
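A minimal Aspect/J sketch of the pointcut-and-advice mechanics described above, using hypothetical Shape and Display classes rather than JHotDraw's real API: a single aspect picks out state-changing calls on shapes and refreshes the display afterward, so no signaling code appears in the shape classes themselves.

```java
// Illustrative only: hypothetical Shape/Display classes, not JHotDraw's API.
class Shape {
    private int x, y;
    void setPosition(int x, int y) { this.x = x; this.y = y; }
}

class Display {
    static void update(Shape s) { System.out.println("redrawing " + s); }
}

// The crosscutting "update signaling" concern, modularized as one aspect.
aspect DisplayUpdating {
    // Join points of interest: any call to a state-changing setter
    // on Shape or one of its subtypes.
    pointcut change(Shape s): target(s) && call(void Shape+.set*(..));

    // Advice: after returning from a change, refresh the display.
    after(Shape s) returning: change(s) {
        Display.update(s);
    }
}
```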

Kiczales closed with some thoughts on whether it is feasible to go further than Aspect/J. Referencing the book On the Origin of Objects by Brian Cantwell Smith (where objects refer to actual real-world things, not class instances), he talked about perception: how we see the world in different ways (e.g., a glass half empty vs. half full) and yet both grab the same thing. Registration is the process of parsing objects out of the fog of undifferentiated "stuff," and we are constantly registering and reregistering the world. We also possess the ability to process critically: We have multiple routes of reference, such as the ability to exceed causal reach (e.g., describing someone as the person closest to average height in Gorbachev's office) or indexical reference (e.g., the one in front of him); allowing this sort of expressiveness into programming could be similarly useful. Join point models can decompose software in different ways and atomize it into a fog of undifferentiated points, with connections and effects made through registration. Everything could have causal access to the same point but access it in different ways, given the multiple routes of reference.

Much of the Q&A session focused on the security aspects of AOP. For example, if AOP or Aspect/J can touch a JAR file, then all bets are off. Audit logging is a clear example of where AOP is useful, but what other security uses are there? Is ACL checking a potentially good use? Kiczales responded that his group intentionally did not touch on uses for security, leaving that to security professionals, but intimated that there are certainly security problems that could best be attacked with AOP. Gary McGraw commented that this logic can be taken only so far: If something is a security feature, it can probably be done as an aspect, but security is not just a bunch of features. Other questions covered dealing with aspect clashes, against which the best defense is good software engineering and a quality codebase; debugging, which has improved greatly as Aspect/J has matured; and the problem of retrofitting legacy code with access control hooks, which is nontrivial. Kiczales responded to this last point by commenting that no sane person claims one should be able to add modular implementations of crosscutting concerns to a system without having to jigger the source code a little, particularly when dealing with the domain functionality of a system. However, refactoring and some mild editing of the code can make all the difference.

STATIC ANALYSIS FOR SECURITY

Summarized by Lionel Litty

Static Detection of Security Vulnerabilities in Scripting Languages

Yichen Xie and Alex Aiken, Stanford University

Alex Aiken observed that Web applications are increasingly popular and that they present application-level vulnerabilities that are hard to prevent using low-level security mechanisms. Instead, static analysis may prove valuable when hunting for security bugs in such applications. He reported that his group was successful in statically analyzing a scripting language (PHP) to find SQL injection vulnerabilities, despite the difficulties inherent in analyzing scripting languages: dynamic typing, implicit casts, weak scoping, and extensive use of string manipulation functions and hash tables.
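The class of bug the analysis hunts for is the familiar one sketched below, shown in Java with a hypothetical users table purely for illustration (the tool itself analyzes PHP): untrusted input concatenated into query text versus input passed as a bound parameter.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class InjectionSketch {
    // Vulnerable: the request parameter becomes part of the query text,
    // so input such as  x' OR '1'='1  changes the query's structure.
    static void vulnerable(Connection db, String userInput) throws Exception {
        Statement st = db.createStatement();
        st.executeQuery("SELECT * FROM users WHERE name = '" + userInput + "'");
    }

    // Safe: the input is bound as data and can never be parsed as SQL.
    static void safe(Connection db, String userInput) throws Exception {
        PreparedStatement ps =
            db.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, userInput);
        ps.executeQuery();
    }
}
```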

To scale to large applications without making the analysis too imprecise to detect bugs reliably, three levels of abstraction were used: the intrablock, intraprocedural, and interprocedural levels. The result of the symbolic simulation of each code block is summarized before the analysis of each procedure is performed, and the result of that analysis is in turn summarized to perform the interprocedural analysis. Each summary captures exactly the information necessary for the next step of the analysis, which is the key to scaling. Applying this technique to six large applications resulted in the discovery of 105 confirmed vulnerabilities, with only a small number of spurious warnings.

Christopher Kruegel asked how the tool dealt with variable aliasing. Alex answered that the tool does not perform sound tracking of aliasing outside the block level and that this is one way in which the tool is not a verification tool, meaning that it will not guarantee the absence of vulnerabilities. He also highlighted that programmers often do not use the full power of a programming language, which enables the tool to perform well despite making a number of simplifications.

Rule-Based Static Analysis of Network Protocol Implementations

Octavian Udrea, Cristian Lumezanu, and Jeffrey S. Foster, University of Maryland

Octavian Udrea argued that whereas formal methods have been used extensively to check network protocol designs, the actual implementations of these protocols often do not get the same level of attention. He described a tool called Pistachio that addresses this situation by focusing on static verification of a protocol implementation against a high-level specification of the protocol. This manually constructed specification, derived from documentation describing the protocol, consists of a set of rules that the protocol must adhere to.

With the help of a theorem prover, execution of the implementation is then simulated. Developing rules for the SSH and RCP protocols took the authors only seven hours, although these rules did not cover all aspects of the protocols. Implementations of these protocols were then analyzed, and rule violations were found in the LSH implementation of SSH and the Cygwin version of RCP. These violations were checked against a database of known bugs. Of the warnings produced by Pistachio, 38% were spurious, and Pistachio failed to find 5% of known bugs. More information is available at http://www.cs.umd.edu/~udrea/Pistachio.html.

David Wagner asked how the authors were able to estimate the false-negative rate. Octavian explained that this number corresponds to known implementation bugs that Pistachio did not find; it might be inflated by still-unknown bugs in the implementation.

Evaluating SFI for a CISC Architecture

Stephen McCamant, Massachusetts Institute of Technology; Greg Morrisett, Harvard University

Awarded Best Paper

Software-based Fault Isolation (SFI) is a sandboxing technique that confines the effects of running a piece of code by inserting checks before every memory write and every jump, isolating it from the rest of the system. While the technique had been shown to work for RISC architectures, Stephen McCamant showed that it can be applied to a CISC architecture, IA-32, as well. The key challenges in building a prototype, PittSFIeld, were the variable length of IA-32 instructions and their lack of alignment constraints, which potentially allow complex code obfuscation.

These challenges were addressed by forcing instruction alignment via assembly code rewriting and by adding padding. In addition, the authors implemented new optimization techniques and carefully analyzed the performance overhead introduced by SFI, which amounts to a 21% slowdown on the SPEC benchmarks. One reason for the slowdown is increased cache pressure caused by the 75% increase in the size of the binary code that results from padding. Finally, Stephen emphasized that the security of PittSFIeld relies on a small verifier; the authors formally proved, using the ACL2 theorem-proving system, that the verifier's checks ensure confinement. More details can be found at http://pag.csail.mit.edu/~smcc/projects/pittsfield.
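The core confinement check can be conveyed with a toy sketch. This is not PittSFIeld's code (which rewrites IA-32 assembly); it is just the SFI address-masking idea expressed in Java, with a byte array standing in for memory and a hypothetical sandbox region.

```java
// Minimal illustration (not PittSFIeld's actual code) of the SFI idea:
// before every store, the target address is masked so that it can only
// fall inside the sandbox's data region.
public class SandboxSketch {
    // Hypothetical 1 MiB sandbox region at a fixed "address."
    static final int REGION_BASE = 0x00100000;
    static final int REGION_MASK = 0x000FFFFF; // keeps offsets inside 1 MiB

    static final byte[] memory = new byte[REGION_BASE + REGION_MASK + 1];

    // A guarded store: whatever address the untrusted code computes,
    // masking forces it back into [REGION_BASE, REGION_BASE + 1 MiB).
    static void guardedStore(int addr, byte value) {
        int safe = REGION_BASE | (addr & REGION_MASK);
        memory[safe] = value;
    }

    public static void main(String[] args) {
        guardedStore(0xDEADBEEF, (byte) 42); // lands inside the sandbox
        System.out.println("store confined to sandbox region");
    }
}
```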

The audience asked whether the same technique could be performed directly on the assembly code. Stephen answered that it might be possible with a very good disassembler or if additional symbol information is available. Stephen was also asked about the difference between SFI and Control Flow Integrity (CFI). He said that the two approaches are very similar: SFI provides memory isolation, but that can be added to CFI as well. Timothy Fraser mentioned that he knew of failed efforts to implement SFI for IA-32 and commended the authors.

INVITED TALK

Surviving Moore's Law: Security, AI, and Last Mover Advantage

Paul Kocher, Cryptography Research

Summarized by Micah Sherr

Paul Kocher's talk was divided into two parts. In the first half, he explored how complexity can be used to secure systems. He began by examining the effect of Moore's Law on cryptography. On the one hand, increased CPU performance seems to favor the cryptographer: Moving from DES to 3DES, for example, requires only three times the CPU cost while requiring an exponential increase in work from the cryptanalyst. On the other hand, as Paul pointed out, systems are becoming more complex, and added complexity (and lines of source code) raises the number of potential vulnerabilities dramatically. The growth of human intelligence does not necessarily match the growth of system complexity. There is therefore a need to harness complexity to improve security rather than hinder it.

Paul proposes increasing complexity by adding components to systems and connecting those components in a stream. The output of the system is the combination (e.g., the XOR) of all of the components' outputs; if one component fails or is compromised, the overall security of the system does not necessarily falter. More generally, he advocates adding "micro CPUs": independent and isolated execution areas that are specialized for different operations and communicate with each other to produce the final result.
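A minimal sketch of the combination idea, with hypothetical interfaces: each independent component contributes an output block and the blocks are XORed together, so no single compromised component determines the final result on its own.

```java
import java.util.List;

// Hypothetical interface for one independent "micro CPU" contribution.
interface Component {
    byte[] output(int length);
}

class XorCombiner {
    // XOR all component outputs; the result stays unpredictable as long
    // as at least one contributing component remains uncompromised.
    static byte[] combine(List<Component> parts, int length) {
        byte[] result = new byte[length];
        for (Component c : parts) {
            byte[] block = c.output(length);
            for (int i = 0; i < length; i++) {
                result[i] ^= block[i];
            }
        }
        return result;
    }
}
```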

In the second half of the talk, Paul described the current state of security as a tug-of-war between those who protect systems and those who attack them. Paul believes that companies no longer see security as binary and that they have adopted the strategy of minimizing (rather than preventing) the harm done to their systems. The major defense mechanism is the patch, which is applied after an attack but which hopefully reduces the possibility of further attack. On the flip side, the attacker attempts to exploit new and unpatched vulnerabilities. This leads to a stalemate in which attacks are closely followed by fixes. According to Paul, this stalemate is the best-case scenario for the defender: Playing for a stalemate requires less devotion than comprehensive security fixes, stalemates allow defenders to place the blame on others (those who haven't applied the patches), and developing patches is easier than in-house security testing. However, playing for a stalemate leaves the defender with nonoptimal security.

Paul concluded by considering the future of this tug-of-war between defender and attacker. To some degree, the generation of attacks (exploits) and defenses (patches) can be automated. At the same time, the undecidability of certain problems means that there are limits to what automation can accomplish; to some degree, the human element will always be involved in the process.

INTRUSION DETECTION

Summarized by Michael Locasto

SigFree: A Signature-Free Buffer Overflow Attack Blocker

Xinran Wang, Chi-Chun Pan, Peng Liu, and Sencun Zhu, The Pennsylvania State University

Xinran Wang began by illustrating the problems with current approaches to blocking buffer-overflow-based attacks: Such attacks can be previously unseen or delivered by polymorphic malware. Remote buffer overflows are typically driven by network input containing the exploit code, and the key insight of Xinran and his coauthors is that detecting the presence of legal binary code in network flows is one way to recognize such attacks without having to resort to a signature-based scheme.

Xinran described two techniques their Instruction Sequence Analyzer (ISA) uses to build an extended instruction flow graph (EIFG), a construct similar to a control flow graph. Since x86 is a very dense instruction set, instruction sequences can be derived from most binary strings, and the authors propose building an EIFG from the network traffic. Intuitively, the larger the EIFG, the more likely it is that a given binary string is a working x86 program. The first technique for building an EIFG employs pattern detection to provide a frame of reference for the ISA: It detects the push-call instruction sequences that represent a system call.
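Not the authors' ISA implementation, but a toy illustration of the first technique's flavor: scanning a payload for the byte shape of a PUSH imm32 (opcode 0x68) followed immediately by a near CALL (opcode 0xE8), the "push argument, then call" pattern that anchors the analysis.

```java
// Toy illustration only: flag payloads containing a PUSH imm32 (0x68,
// followed by a 4-byte immediate) whose next instruction is a near CALL
// (0xE8). A real analyzer builds and evaluates a full instruction graph.
public class PushCallScan {
    static boolean looksLikePushCall(byte[] payload) {
        for (int i = 0; i + 9 < payload.length; i++) {
            if ((payload[i] & 0xFF) == 0x68 && (payload[i + 5] & 0xFF) == 0xE8) {
                return true; // candidate code region found
            }
        }
        return false;
    }
}
```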

Xinran then outlined a more sophisticated method, Data Flow Anomaly (DFA) detection, which is based on detecting abnormal operations on variables. The motivation for this technique is that a random instruction sequence is full of such data flow anomalies, whereas a useful x86 program (such as one provided by an attacker) is not. This latter technique is also more resistant to polymorphic attacks, since the malware decoder does not necessarily need to make system calls but still must include useful instructions. The authors report fairly good results: No false positives were observed in a "normal" test set, and their techniques detected about 250 attacks and variations. They look forward to using a weighted scheme to frustrate attackers who may know the threshold of useful instructions for the DFA scheme. A follow-up question about how well this technique might apply beyond the x86 instruction set was tabled for offline discussion. An attendee from Cisco asked whether the code for the system would be publicly available, since this type of capability is of widespread interest, but Xinran indicated that the techniques are currently under patent review. Finally, another attendee, from MIT, asked about the nature of the "normal" requests that actually contained fairly long sequences of code; that discussion was taken offline.

Polymorphic Blending Attacks

Prahlad Fogla, Monirul Sharif, Roberto Perdisci, Oleg Kolesnikov, and Wenke Lee, Georgia Institute of Technology

Prahlad discussed the design and implementation of a class of malware capable of dynamically adapting its payload to the characteristics of the surrounding network traffic in order to bypass content-based anomaly sensors such as PayL. The key idea is that the malware is able to sneak past the anomaly detector because purely statistical content anomaly detectors discard both the syntax and the semantics of the normal traffic content.

The malware has two capabilities that allow it to create a variant that will not trip the content anomaly detector. First, it is able to observe a portion of the target system's network traffic. Second, it knows the algorithm the anomaly detector uses to construct its model. After creating an initial, approximate model of the traffic the target system is receiving, the malware creates a variant of itself, using shellcode encryption and padding to match the generated profile within some error bound. In their experiments using PayL as the content anomaly detector, the authors found that 20 to 30 packets were all they needed to learn the target profile. The authors closed by suggesting some possible countermeasures to the polymorphic blending attack, including the higher-cost approach of actually parsing the packet data rather than relying solely on statistical methods; it may also be possible to use multiple independent models or to somehow randomize the implementation of the anomaly detection algorithm. At the end of the presentation, the attendee from Cisco reprised his question about the availability of the system, and Prahlad answered that they share code with some partners of the lab. A second questioner asked for clarification about the practicality of the first countermeasure (deep packet inspection); Prahlad answered that with appropriate support, or in some types of low-traffic environments, such inspection would not be prohibitively expensive.
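A sketch of the statistical idea only (not the authors' attack code, and with no decoder or encryption): estimate a byte-frequency profile from a few observed packets, then append padding drawn from that profile so the payload's 1-gram statistics drift toward the learned model.

```java
import java.io.ByteArrayOutputStream;
import java.util.List;
import java.util.Random;

public class BlendingSketch {
    // Byte-frequency profile estimated from a small sample of packets.
    static double[] learnProfile(List<byte[]> packets) {
        double[] counts = new double[256];
        long total = 0;
        for (byte[] p : packets) {
            for (byte b : p) { counts[b & 0xFF]++; total++; }
        }
        for (int i = 0; i < 256; i++) counts[i] /= Math.max(total, 1);
        return counts;
    }

    // Append padding bytes sampled from the learned distribution.
    static byte[] pad(byte[] payload, double[] profile, int padLen, Random rng) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(payload, 0, payload.length);
        for (int n = 0; n < padLen; n++) {
            double r = rng.nextDouble(), acc = 0;
            int chosen = 0;
            for (int i = 0; i < 256; i++) {
                acc += profile[i];
                if (r <= acc) { chosen = i; break; }
            }
            out.write(chosen);
        }
        return out.toByteArray();
    }
}
```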

Dynamic Application-Layer Protocol Analysis for Network Intrusion Detection

Holger Dreger, Anja Feldmann, and Michael Mai, TU München; Vern Paxson, ICSI/LBNL; Robin Sommer, ICSI

Holger raised the question of how network intrusion detection systems actually know which protocol decoder should be applied to the traffic they observe. Most current systems simply rely on transport-layer port numbers to make this decision. That assumption does not hold for standard services run on nonstandard ports, nor for standard or nonstandard services run on the "incorrect" port.

Holger then used summary statistics from a large body of data to further explicate the problem. The data was a full packet capture over a 24-hour period, totaling some 3.2 TB, 137 million TCP connections, and 6.3 billion packets. The authors ran the Linux netfilter l7 protocol signatures on the traffic trace and focused on HTTP, IRC, FTP, and SMTP. They found that l7's matching was good for the normal case, that is, when a particular protocol appeared on the port it normally uses, but a significant portion of the traffic defied classification or was incorrectly classified.



The key question the authors seek to address is how to distinguish among network services without taking a hint from the port number. Holger presented the design and implementation of a modification to the Bro NIDS that performs dynamic analysis of the observed traffic: The system matches multiple possible protocols in parallel. The system provides the mechanism for doing so; whether to commit to a particular parsing decision is left as a matter of thresholding and policy.

Holger described a number of interesting results, including that most connections on nonstandard ports in their traffic trace were HTTP and that the protocols tunneled therein were mostly P2P services, some raw HTTP, and some FTP traffic. They were able to identify and close some open SMTP relays and HTTP servers that violated the site's security policy and have been able to detect some botnets at one of the authors' sites. Holger closed by indicating that the system will be incorporated into Bro.

The first question focused on how difficult it would be to add a new protocol definition to the system. Holger indicated that it is a moderate amount of work, but Bro already provides a great deal of infrastructure for doing this. The remaining questions focused on the system's ability to handle tunneled traffic in various forms, in particular, how easy it would be for an attacker to craft input that prevents the parallel logic from choosing between two protocols. Holger replied that the system provides the mechanism, and the policy to control it is up to the user or site administrator. Even so, he indicated that the combination of protocol signature matching and dynamic analysis somewhat alleviates this problem.

Behavior-Based Spyware Detection

Engin Kirda and Christopher Kruegel, Technical University Vienna; Greg Banks, Giovanni Vigna, and Richard A. Kemmerer, University of California, Santa Barbara

Engin gave an engaging talk about the techniques used in real spyware and described the authors' system for detecting it. He began by pointing out that spyware is a burgeoning problem and that this type of malware is difficult to identify because it often comes in many different guises and is potentially installed by a trusted but unwary user. Signature-based spyware solutions are often on the losing end of an arms race.

However, spyware often cannot escape its main goal, which is to gather information about a user's behavior and then transmit that information somewhere. This makes anomaly- or behavior-based detection a promising technique for catching spyware that does not yet have a signature.

The authors focus on spyware for Internet Explorer, and Engin said that most spyware registers as a Browser Helper Object (BHO) or toolbar (for all practical purposes, a toolbar is no different from a BHO). One major reason for the BHO's popularity as a spyware vehicle is that high-level user behavior and data are readily available to a BHO, which greatly simplifies the implementation of the spyware component: It can simply use the events, data objects, and services that IE provides to all BHOs. Engin indicated that their techniques are of limited applicability for spyware that does not use COM services; for example, spyware may use other forms of communication, such as issuing GET requests to URLs under the attacker's control.

Engin described their implementation of a "stub" IE instance that intercepts communication between the browser and BHOs. The system combines static and dynamic analysis to drive the training phase: The dynamic analysis records all "interesting" COM calls and provides the starting point for the static analysis, which constructs a control flow graph to see whether information could have been leaked. For all of their 51 test samples (33 malicious/spyware, 18 benign), the false-negative rate was zero, and the false-positive rate improves only as the system moves toward the combination of static and dynamic analysis. One interesting result is that the privacy/P3P plugin acts like spyware according to the authors' behavioral characteristic: It reads a URL and then contacts the Web site. After the talk, an attendee asked whether it would be possible for a malicious BHO to detect that it was being run by the detector version of the browser and adjust its behavior accordingly. Engin said that this situation is similar to a program detecting whether it is running in a VM, and that the countermeasures taken by the spyware might themselves be detectable, but that this is certainly an avenue for future research.

INVITED TALK

DRM Wars: The Next Generation

Ed Felten, Princeton University

Summarized by Tanya Bragin

For a number of years digital rights management (DRM) has been a hot topic in the press and a focus of many research and development efforts. But has there been any significant progress since the Digital Millennium Copyright Act (DMCA) of 1998 bolstered DRM by criminalizing any attempt to circumvent it? And after the public outrage over the Sony rootkit fiasco, what future awaits DRM? Ed Felten is uniquely positioned to answer these questions: He spent many years as a computer scientist in both industry and academia, but in recent years he has been focusing increasingly on legal and policy issues related to technology.

"'Rights Management' is somewhat of an Orwellian term," stated Ed Felten. "It implies that DRM advocates want to manage your rights, not control what you do." But in reality, DRM software, by the very nature of what it tries to accomplish, must restrict users' ability to control their personal computers in order to prevent them from copying restricted content. When the DMCA came out, it was considered reasonable for users to surrender some control in order to protect artists' right to collect revenue for their work. However, it is unclear that technology can deliver on the DRM promise, so some people are starting to question why users must hand over control of their machines to DRM software companies.

According to Ed, the main problem with the assumption that technology can actually protect digital content is the "rip once, infringe everywhere" phenomenon, described in what became known as the "Darknet paper," published by Microsoft Research in 2002 (www.freedom-to-tinker.com/?p=206). Given the ease with which people can participate in peer-to-peer file-sharing networks, only one person needs to defeat DRM for everyone else to have ready access to the unprotected content. This threat model is really hard to defeat, and DRM advocates have gradually become convinced that the problem is intractable in practice.

With this realization, the incentives for deploying DRM software have also changed. Originally, DRM was supposed to provide antipiracy features that benefited artists and ensured compliance of digital content with copyright law. Given the apparent infeasibility of these goals, however, DRM software has come to serve other purposes. First, it allows digital content distributors to perform price discrimination, enabled by DRM's ability to hinder the transfer of content from user to user. Second, it has allowed some organizations, such as Apple, to gain significant market share through platform locking: Apple's combination of the iPod, iTunes, and the Apple Store has created an unrivaled digital music sales franchise.

It is unclear that these new rationales for using DRM continue to benefit artists. For example, Apple publicly published an "interoperability workaround" that allows users to burn protected content to a CD and then rip that content from the CD in an unprotected format. Similarly, price discrimination primarily benefits digital content distributors, because it allows them to make a profit on products they otherwise could not afford to produce. Although price discrimination can also benefit artists and consumers, since it allows certain types of products to come to market, it arguably can be achieved using techniques other than DRM.

Ed expects to see more efforts by companies to use DRM for self-serving purposes while continuing to claim that antipiracy is the ultimate goal. He warned audience members to look out for these troubling developments, since they are likely to hinder interoperability, complicate products, and frustrate consumers. In Ed's opinion, DRM policy should be neutral, neither discouraging nor bolstering its use; given the "inherent clumsiness of DRM," he believes that market forces would ultimately decide against it. For more information about DRM and other legal and policy issues related to technology, visit Ed Felten's blog, "Freedom to Tinker," at www.freedom-to-tinker.com.

SYSTEM ASSURANCE

Summarized by Bryan D. Payne

An Architecture for Specification-Based Detection of Semantic Integrity Violations in Kernel Dynamic Data

Nick L. Petroni, Jr., and Timothy Fraser, University of Maryland; AAron Walters, Purdue University; William A. Arbaugh, University of Maryland

Current systems design makes the kernel a high-value target for attackers. Defenses such as load-time and run-time attestation are valuable, but they are only capable of detecting changes to static data. Nick Petroni described how attacks can exploit this limitation to avoid detection, using the example of manipulating dynamic data structures to hide a process in Linux.

The proposed solution is to create a high-level model of the dynamic data and to use this model to validate the data. This approach, inspired by work from Demsky and Rinard, uses a data structure specification language to model constraints on the data. Nick showed a simple example that would catch the previously mentioned attack. The technique has some limitations, including the possibility of missing short-lived changes and the need for a kernel expert to model the constraints.
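The flavor of such a constraint for the hidden-process example can be shown with a toy cross-view check (hypothetical types, not the authors' specification language): every task visible on the scheduler's run queue must also appear on the kernel's all-tasks list.

```java
import java.util.Set;

public class RunQueueInvariant {
    // Semantic-integrity constraint: a PID that is runnable but absent
    // from the all-tasks list indicates a hidden (unlinked) process.
    static boolean holds(Set<Long> runQueuePids, Set<Long> allTaskPids) {
        return allTaskPids.containsAll(runQueuePids);
    }
}
```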

In the Q&A session, the session chair asked whether people would actually use a parallel language such as this. Nick believes that they would and that companies could provide the specification for binary operating systems such as Windows or Red Hat Linux.

vTPM: Virtualizing the Trusted Platform Module

Stefan Berger, Ramón Cáceres, Kenneth A. Goldman, Ronald Perez, Reiner Sailer, and Leendert van Doorn, IBM T.J. Watson Research Center

Stefan Berger presented an architecture for creating virtualized trusted platform modules (TPMs). Multiple virtual machines cannot share a single hardware TPM, because of the TPM's limited resources and because the TPM has a single owner. In addition, virtual machines can migrate, whereas a TPM is physically tied to a single machine. The problem is that hardware-rooted security is still desirable.

As a solution to this problem, Stefan presented a virtualized TPM (vTPM) that is linked to the hardware TPM through signed certificates. The vTPM is implemented as a software component, but one could also build it into a secure co-processor for improved security properties. Although some challenges related to virtual machine migration remain, Stefan explained how the vTPM architecture allows a TPM to be extended to a virtual environment today.

In the Q&A session, the first question involved hardware requirements. Stefan said that the TPM is a standard and is available from a variety of vendors. To a question about the size of the vTPM implementation, Stefan answered that it is about 250 kbytes per vTPM.

Designing Voting Machines for Verification

Naveen Sastry, University of California, Berkeley; Tadayoshi Kohno, University of California, San Diego; David Wagner, University of California, Berkeley

Naveen Sastry presented techniques for building direct recording electronic (DRE) voting machines that are easier to verify. Current-generation DRE voting machines are built as one large, monolithic system; by building them instead from distinct components with minimal data passing between them, each component becomes easier to verify. Naveen proposed building a separate component for each security-critical function. In addition, he recommended rebooting between voters to ensure that each voter starts with a known good system state.

In the presentation, Naveen discussed the need to select specific security properties and design the system to satisfy them. Specifically, he looked at two security properties: (1) none of a voter's interactions with a DRE may affect a later voter's session, and (2) a ballot may be stored only with the voter's consent to cast it. Compartmentalized implementation techniques were used to demonstrate a system that upholds these properties.

In the Q&A session, one questioner asked about the problem of malicious developers. Naveen answered that this can be viewed as another security property, with the system designed to handle it. Naveen also addressed concerns that a reboot would not clear the system memory by suggesting that one cut power to the system.

WORK-IN-PROGRESS REPORTS

Summarized by Manigandan Radhakrishnan

Exploiting MMS Vulnerabilities to Stealthily Exhaust a Mobile Phone's Battery

Radmilo Racic, Denys Ma, and Hao Chen, University of California at Davis

This attack aims to drain the battery of a victim's cell phone without his or her knowledge. The phone discharges fastest when it is in the Ready state, as opposed to Idle or Standby, so the attack works by keeping the phone in the Ready state as long as possible even when no useful function is being performed. The authors achieve this by exploiting vulnerabilities in the Multimedia Messaging Service (MMS), specifically Packet Data Protocol (PDP) context retention and paging channels. The attack works in two phases. The first phase is the creation of a hit list: a list of mobile devices with their cellular numbers, IP addresses, and model numbers. The second phase is the battery-draining phase, in which repeated UDP/TCP ACK packets are sent to the mobile phone either just before the timer expires or just after it expires (so that the network has to page the mobile device); the timer here is the duration the device waits before dropping from the Ready state to Standby. The experimental results show that the authors were able to discharge cell phone batteries at a rate up to 22 times faster than in the Standby state.
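The timing idea can be sketched as a trivial keepalive loop (the target address, port, and timer value below are all hypothetical): a tiny packet sent just before the inactivity timer would expire keeps the handset's radio in the high-drain Ready state.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch of the timing idea only: the inactivity timeout and the target
// are assumptions, not values from the paper.
public class KeepReadySketch {
    public static void main(String[] args) throws Exception {
        InetAddress phone = InetAddress.getByName(args[0]);
        final long readyTimerMs = 5_000; // assumed Ready-to-Standby timeout
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] ping = new byte[1];
            while (true) {
                // One tiny packet resets the inactivity timer on the handset.
                socket.send(new DatagramPacket(ping, ping.length, phone, 9));
                Thread.sleep(readyTimerMs - 500); // refresh just before expiry
            }
        }
    }
}
```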



Applying Machine-Model-Based Countermeasure Design to Improve Protection Against Code Injection Attacks

Yves Younan, Frank Piessens, and Wouter Joosen, Katholieke Universiteit Leuven, Belgium

Code injection attacks are more prevalent today, and the authors point out that countermeasures are developed in an ad hoc manner; they suggest a more methodical approach to countermeasure development. They present two approaches, the machine model and the meta-model. The machine model is a model of the program's execution environment, including memory locations (stack, global tables, etc.) along with the operations performed on them. The relations between the different components and their interactions are captured using UML (although the authors note that they are working on a better alternative). Such a model not only allows the designer of countermeasures to work at an abstract level but also provides a means to compare and evaluate different countermeasures. The shortcoming of this approach is that the machine model is closely tied to a particular architecture. To overcome this, the authors suggest the meta-model, an abstraction of several machine models that, according to them, provides uniformity when constructing machine models and allows a designer to work out the global principles of a countermeasure independent of a specific platform.

Building a Trusted Network Connect Evaluation Testbed

Jesus Molina, Fujitsu Laboratories of America

Trusted Network Connect (TNC), part of the Trusted Computing Group (TCG) protocols, comprises a set of standards that ensure multivendor interoperability across a wide variety of endpoints, network technologies, and policies. The particular issue this presentation addresses is the composition of secure applications developed by different vendors. The author is building a TNC testbed to test various vendors' products under a variety of policies; the idea is to expose any possible vulnerabilities.

The SAAM Project at UBC

Konstantin Beznosov, Jason Crampton, and Wing Leung, University of British Columbia

Every authorization system has a policy-based decision maker and an enforcement engine. In the model presented in this WiP, the two communicate through message exchange to decide whether a particular authorization request is allowed. If the decision maker fails or is unresponsive, the system either becomes unavailable, because no operations are allowed, or gets breached, because every operation is allowed. To prevent such a situation, the authors propose the Secondary and Approximate Authorization Model (SAAM). SAAM derives approximate authorization decisions from cached primary authorization decisions; the efficiency of a SAAM system depends on the authorization policy of the system. The authors tested their scheme on a system implementing the Bell-LaPadula model and found a 30% increase in the number of authorization requests that can be answered without consulting the primary policy decision maker.

ID-SAVE: Incrementally Deployable Source Address Validity Enforcement

Toby Ehrenkranz, University of Oregon

Routers deployed on the Internet today do not track the source of an IP packet; they base their forwarding decisions only on the destination address of each packet. The authors contend that this has been the root cause of IP spoofing attacks on the Internet and of the unreliability of reverse-path-forwarding (RPF) checks. ID-SAVE works by building "Incoming Tables" of the valid source addresses whose packets should arrive at a given router; a packet not matching a valid entry in the Incoming Table is dropped. The authors reference Li et al.'s work ["Source Address Validity Enforcement (SAVE) Protocol," 2002] and note that their approach shares ideas with SAVE. The novelty of ID-SAVE lies in its approach to deployment in an Internet full of legacy routers that may not support Incoming Tables: It uses a variety of techniques, such as packet marking, neighbor discovery, on-demand updates, blacklists, and packet-driven pushback, to derive benefits even with partial deployment.
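A sketch of the incoming-table check described above (the data structures and names are hypothetical, not the ID-SAVE implementation): each source prefix is associated with the interface it is expected to arrive on, and packets arriving on any other interface are dropped.

```java
import java.util.HashMap;
import java.util.Map;

public class IncomingTableSketch {
    // Maps a source prefix to the interface it is expected to arrive on.
    private final Map<String, Integer> expectedInterface = new HashMap<>();

    void learn(String sourcePrefix, int iface) {
        expectedInterface.put(sourcePrefix, iface);
    }

    boolean accept(String sourcePrefix, int arrivingIface) {
        Integer expected = expectedInterface.get(sourcePrefix);
        // Unknown prefixes fall back to legacy (permissive) behavior here;
        // a real deployment would apply its own policy for partial rollout.
        return expected == null || expected == arrivingIface;
    }
}
```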

Automatic Repair Validation

Michael E. Locasto, Matthew Burnside, and Angelos D. Keromytis, Columbia University

Automated intrusion prevention and self-healing software leave the system administrator a little perplexed, since it is simply not possible for him or her to manually ensure that the fix actually fixes the vulnerability. To address this issue and motivate further research in this direction, the authors propose bloodhound, a system that facilitates automatic verification of fixes by subjecting them to the original input that caused the intrusion. This can be achieved by automatically logging suspicious network traffic and reusing the data to test the fixed software. As the authors pointed out, the challenges of this approach lie in sifting through enormous amounts of network traffic to identify malicious inputs, in indexing and storing the identified inputs, and in ensuring that the replay is an effective guarantee that the problem is fixed. The technique also has time and space costs and must be optimized for both if it is to be efficient.

Secure Software Updates: Not Really

Kevin Fu, Anthony Bellissimo, and John Burgess, University of Massachusetts at Amherst

Software updates are used by a wide variety of applications to fix bugs and patch security vulnerabilities. This WiP took a comprehensive look at the update mechanisms found in some of the most common applications (Mozilla Web browsers, Adobe Acrobat, etc.). The authors found these mechanisms to be heavily dependent on the presence of secure networks for correct functioning, and there were instances of their being vulnerable to man-in-the-middle attacks. Software update mechanisms were also found to be prevalent in embedded devices (typically with limited computing power and battery life) that either operate over insecure networks or cannot support secure computations; the specific examples presented were those used in automobiles, electronic voting machines, and some implanted medical devices. The statement "Help! My heart is infected and is launching a DDoS on my pancreas," used to emphasize the need for secure content distribution, was not only hilarious but also thought-provoking.

Integrated Phishing Defenses

Jeff Shirley and David Evans, University of Virginia

This WiP presented an integrated approach to identifying and thwarting phishing attacks. The proposed system has two parts: (1) identifying suspicious emails and Web sites and (2) preventing users from accessing the phishing sites so identified. The integrated approach to identification combines existing heuristics (link obfuscation and email text contexts) with the authors' own idea of gathering URL popularity measures (derived using common search engines such as MSN and Google). The novelty of their system lies in its use of an HTTP proxy that diverts the user to a warning page when he or she tries to access a URL classified as a phishing site. The authors present experimental evidence that this greatly reduces the false-positive and false-negative rates; they also show that the scheme prevents a user from reaching a phishing site 94% of the time (with an error rate of about 3%).

The Utility vs. Strength Tradeoff: Anonymization for Log Sharing

Kiran Lakkaraju, National Center for Supercomputing Applications, University of Illinois, Urbana-Champaign

Logs contain a variety of information, some of which may be sensitive information about an organization. To facilitate sharing of logs among organizations, the logs need to be sufficiently anonymized to prevent any leakage. This WiP described the FLAIM framework for log anonymization. The authors described a "strength vs. utility" trade-off that governs the extent of anonymization: "Utility" refers to the amount of usable information still present in the anonymized log, and "strength" refers to the extent of anonymization. The noteworthy fact is that the stronger the anonymization, the less useful the log, and vice versa. The FLAIM framework uses multilevel policies coupled with a library of anonymization algorithms to anonymize logs; the anonymization policy is expressed in an XML-based policy language.

Malware Prevalence in the KaZaA File-Sharing Network

Jaeyeon Jung, CSAIL, Massachusetts Institute of Technology

More and more viruses are reported to use peer-to-peer (P2P) file-sharing networks such as KaZaA to propagate. These malware programs disguise themselves as files (e.g., Winzip.exe or ICQ.exe) that are frequently exchanged over P2P networks; they infect the user's host when downloaded and opened and leave copies of themselves in the user's sharing folder for further propagation. The author presented a crawler-based malware detector, called "Krawler," built specifically for the KaZaA network. The crawler has a signature database and looks for files with different file names but signatures matching those known to the crawler; it uses longest-common-substring matching to minimize false positives. Based on these experiments, the author reports that 15% of crawled files were infected by one of 52 different viruses, that 12% of KaZaA hosts were infected, and that 70% of infected hosts appeared in DNS blacklists.



Election Audits

Arel Cordero and David Wagner, University of California, Berkeley; David Dill, Stanford University

Random election audits are a way of ensuring that the election process is fair. When done correctly, these audits can provide objective and measurable confidence in the election process. The difficulty lies in the selection of the samples, because (1) the process for selecting the samples should be transparent and yet (2) the samples should be as random as possible. The use of software for random selection, for example, may produce a random sample but is not a transparent process. To make the selection process fully transparent and to allow public oversight, the authors suggest the use of dice (10-sided ones, to increase the bandwidth). They are wary that the choice of dice may, and probably will, encounter resistance because of the public perception that dice are associated with gambling, and they note that such perceptions have led to superior selection techniques being rejected in the past. They advocate education as the means to shift perception toward acceptance of physical means such as dice and lotteries for random selection.

The Joe-E Subset of Java

Adrian Mettler and David Wagner, University of California, Berkeley

Every process executes with the same set of privileges inside the Java Virtual Machine (JVM); the principle of least authority is not enforced to ensure that each process has just the necessary set of privileges. Joe-E is a subset of Java designed for building secure systems. Joe-E uses capabilities for the enforcement of protection, so a program can perform only those operations for which it has capabilities. Joe-E has the advantage of permitting safely extensible applications, where an application's privileges can be limited to the capabilities granted, thereby simplifying analysis.

Prerendered User Interfaces for Higher-Assurance Electronic Voting

Ka-Ping Yee, David Wagner, and Marti Hearst, University of California, Berkeley; Steven M. Bellovin, Columbia University

It is critical that the software in voting machines be validated. The authors contend that the codebase of commercially available voting machines is not only huge (37,000 lines) but also contains a great deal of code related to the user interface (14,000 lines). Prerendering, that is, generating the user interface before the election, dramatically reduces the amount of security-critical code in the machine, thus reducing the amount and difficulty of software verification required to ensure the correctness of the election result. Moreover, publishing the prerendered user interface as a separate artifact enables public participation in the review, verification, usability testing, and accessibility testing of the ballot.

Fine-Grained Secure Localization for 802.11 Networks

Patrick Traynor, Patrick McDaniel, and Thomas La Porta, The Pennsylvania State University; Farooq Anjum and Byungsuk Kim, Telcordia Technologies

This WiP presented a novel approach to ascertaining a user's location in an unforgeable manner, which is essential when the user's location is a necessary part of user authentication. The authors use a network of access points to transmit cryptographic tokens at various power levels. This location information is further refined by using low-cost radios in the desktop machines present in the vicinity of the location ascertained by the access points. The use of hash functions (as part of the tokens) makes the method resilient against spoofing attacks.

KernelSecNet

Manigandan Radhakrishnan and Jon A. Solworth, University of Illinois at Chicago

In most of today's operating systems, networking authorizations are practically nonexistent; that is, common networking operations are unprivileged, and end-to-end user authentication is done (if at all) on a per-application basis and is never tied to the authorization system of the operating system. The KernelSecNet project aims to provide a unified networking and distributed-system abstraction that is implemented at the operating-system level, allows networking policy specification, and provides automatic encryption of all traffic between hosts. The authors intend to develop one model that is specific to the KernelSec project and another, independent UNIX-based model.

Taking Malware Detection to the Next Level (Down)

Adrienne Felt, Nathanael Paul, David Evans, and Sudhanva Gurumurthi, University of Virginia

This WiP presented a novel technique for malware detection using disk processors. Detection works by having the disk processor monitor sequences of I/O requests and identify sequences that correspond to known malicious actions (essentially, a signature). Implementing detection at this low a layer makes the mechanism immune to most subversion techniques present in the layers above and makes it extremely hard to circumvent. The mechanism also works in isolation from the operating system and can operate even when the host is compromised. The authors mention that the technique is more effective when the disk processor can store more meta-information about a request, for example, whether it is a create-file or a remove-file request.

Data Sandboxing for Confidentiality

Tejas Khatiwala, Raj Swaminathan, and V. N. Venkatakrishnan, University of Illinois at Chicago

The authors present a technique they call "Data Sandboxing," which allows information-flow confidentiality policies to be enforced on different parts of a legacy application to prevent leakage of confidential information. This is especially relevant when the application is a monolithic one that reads and writes both ordinary and confidential media. The technique partitions the application into two parts (each run as a separate program), with each part having its own confidentiality policy: The first program performs operations on public output channels, and its policy does not allow it to read confidential information; the second program is allowed to read confidential information but is not allowed to write to public channels. The authors contend that this partitioning enables them to enforce a confidentiality policy that, taken as a whole, prevents leakage of confidential information from the original program over publicly observable channels.

MetriCon 1.0

Vancouver, B.C., Canada
August 1, 2006

Summary by Dan Geer

[This material was excerpted from the digest found at www.securitymetrics.org.]

MetriCon 1.0 was held on August 1, 2006, as a single-day, limited-attendance workshop in conjunction with the USENIX Association's Security Symposium in Vancouver, British Columbia. The idea was first discussed on the securitymetrics.org mailing list, and an organizing committee was subsequently convened over a lunch at the RSA show in February 2006. There was neither formal refereeing of papers nor proceedings, but both a digest of the meeting and a complete set of presentation materials are available at the www.securitymetrics.org Web site. Andrew Jaquith (Yankee Group) chaired the organizing committee, whose members were Betsy Nichols (ClearPoint Metrics), Gunnar Peterson (Artec Group), Adam Shostack (Microsoft), Pete Lindstrom (Spire Security), and Dan Geer (Geer Risk Services).

KEYNOTE ADDRESS

Resolved: Metrics Are Nifty
Andrew Jaquith, Yankee Group

Resolved: Metrics Are Too Hard
Steve Bellovin, Columbia University

Andrew Jaquith opened MetriCon 1.0 by pointing out that other fields have their bodies of managerial technique and control but digital security does not, and that this has to change. He presented his list of the features of a good metric: It must (1) be consistently measured, (2) be cheap to gather, (3) contain units of measure, (4) be expressed as a number, and (5) be contextually specific. Jaquith argued that this all breaks down to modelers versus measurers: Modelers think about how and why; measurers think about what. He was quick to admit that measurement without models will not ultimately be enough, but "let's get started measuring something, for Heaven's sake."

Steve Bellovin countered with the brittleness of software and thus the infeasibility of security metrics. Beginning with Lord Kelvin's dictum that without measurement your knowledge is "of a meagre and unsatisfactory kind," Bellovin said that the reason we have not made much progress in measuring security is that it is in fact infeasible to measure anything in the world as we now have it. We cannot answer "How strong is it?" in the style of a municipal building code unless we change how we build software. Because defense in the digital world requires perfection and the attacker's effort grows only linearly with the number of defensive layers, this brittleness will persist until we can write self-healing code. His challenge: Show me the metrics that help with this.

Lindstrom argued that Bellovin's reasoning did not show that metrics are impossible but rather that they are necessary. Another attendee asked, "So what if I agree that bugs are universal and it only takes one to fail a system? The issue is: How do we make decisions?" Butler agreed, saying that if it's that hopeless, then why do security at all? Epstein reminded everyone that formally evaluated systems still have bugs, too. Another attendee suggested that we can borrow some ideas from the physical world, especially relative measurements such as "this is safer than that." An attendee said that
