


Communications of the ACM, April 2008, Vol. 51, No. 4

THE PSYCHOLOGY OF SECURITY

Why do good users make bad decisions?

BY RYAN WEST
ILLUSTRATIONS BY SERGE BLOCH

"... [the system] must be easy to use and must neither require stress of mind nor the knowledge of a long series of rules..."
AUGUSTE KERCKHOFFS ON THE DESIGN OF CRYPTOGRAPHIC SYSTEMS (La cryptographie militaire, 1883)

The importance of the user in the success of security mechanisms has been recognized since Auguste Kerckhoffs published his treatise on military cryptography, La cryptographie militaire, over a century ago. In the last decade, there has been a tremendous increase in awareness of, and research in, user interaction with security mechanisms.

Risk and uncertainty are extremely difficult concepts for people to evaluate. For designers of security systems, it is important to understand how users evaluate and make decisions regarding security. The most elegant and intuitively designed interface does not improve security if users ignore warnings, choose poor settings, or unintentionally subvert corporate policies. The user problem in security systems is not just about user interfaces or system interaction.



Fundamentally, it is about how people think about risk, and how that thinking guides their behavior. Basic principles of human behavior govern how users think about security in everyday situations, and they shed light on why users undermine security by accident. This article offers a brief introduction to research on risk, uncertainty, and human decision making as it relates to users making security decisions, and it presents a few key concepts and possibilities for how they may be used to improve users' security behavior.

Non-acceptance of security tools is recognized as a major problem facing the information security world [5]. Research in the usability of security mechanisms has exploded over the last decade, and an excellent trove of research papers is cataloged in the HCISec Bibliography hosted at www.gaudior.net/alma/biblio.html. Among the studies listed there is a mountain of evidence that mechanisms for encryption, authorization, and authentication can be difficult for people to understand or use [1, 9] and that people often fail to recognize security risks or the information provided to cue them [3, 4]. Accordingly, researchers have promoted the need for user-centered design throughout the development process and warn that usability testing a security system only at the end of that process does not guarantee a usable or acceptable result [7, 11, 12].

However, there is more to this than interaction with technology. Human decision making has been a topic of study in the social sciences, from economics to psychology, for over a century. The net sum of that research suggests that individuals are often less than optimal decision makers when it comes to reasoning about risk. At the same time, our decision-making process has predictable, exploitable characteristics. Understanding these principles, and how users come to make decisions about security, suggests places where we can improve the outcomes of those decisions.

Users do not think they are at risk. First of all, people tend to believe they are less vulnerable to risks than others. Most people believe they are better-than-average drivers and that they will live beyond the average life expectancy [6]. People also believe they are less likely than others to be harmed by consumer products. It stands to reason that any computer user starts with the belief that he or she is at less risk from a computer vulnerability than others. It should come as no surprise that, in 2004, a survey from AOL and the National Cyber Security Alliance reported that roughly 72% of home users did not have a properly configured firewall and that only one-third had updated their antivirus signatures within the past week.¹

Even as security measures improve, users will remain at risk. There is evidence that individuals maintain an acceptable degree of risk that is self-leveling, known as risk homeostasis.² Applied to security, this suggests that as users increase their security measures, they are likely to increase their risky behavior. For example, the user who has just installed a personal firewall may be more likely to leave his machine online all the time.


¹ America Online and the National Cyber Security Alliance. AOL/NCSA Online Safety Study, 2004; www.staysafeonline.info/news/safety_study_v04.pdf.
² G.J.S. Wilde. Target Risk 2: A New Psychology of Safety and Health. PDE Publications, Toronto, Ontario, 2001.



Users aren't stupid, they're unmotivated. In social cognition, the term is cognitive miser. Humans have a limited capacity for information processing and routinely multitask. As a result, few tasks or decisions receive our full attention at any given time. To conserve mental resources, we generally favor quick decisions based on learned rules and heuristics. While this type of decision making is not perfect, it is highly efficient: it is quick, it minimizes effort, and the outcome is good enough most of the time. This partially accounts for why users do not reliably read all the relevant text in a display or consider all the consequences of their actions.

Safety is an abstract concept. When evaluating alternatives in making a decision, outcomes that are abstract tend to be less persuasive than outcomes that are concrete [2]. This is key to understanding how users perceive security and make decisions. Often the pro-security choice has no visible outcome and there is no visible threat; the reward for being more secure is that nothing bad happens. Safety in this situation is an abstract concept, which by its nature is difficult for people to evaluate as a gain when mentally comparing costs, benefits, and risks.

Compare the abstract reward (safety) garnered from being more secure against a concrete reward, like viewing an attachment in instant messaging or Web content that requires a browser add-on, and the outcome does not favor security. This is especially true when users do not know their level of risk, or believe they are at less risk than others to start. Returning to the principle of the cognitive miser, the user is also more likely to make a quick decision without considering all of the risks, consequences, and options.

Feedback and learning from security-related decisions. The learning situation created by many common security and risk decisions does not help either. In a typical learning situation, behavior is shaped by positive reinforcement when we do something "right": we do something good, we are rewarded. In the case of security, when the user does something good, the reinforcement is only that bad things become less likely to happen. There is seldom an immediate reward or instant gratification, which can be a powerful reinforcer in shaping behavior.

In another common learning situation, behavior is shaped by punishment when we do something "wrong": we do something bad, we suffer the consequences. In the case of security, when the user does something bad, the consequences may not be immediately evident. They may be delayed by days, weeks, or months, if they come at all. Cause and effect is learned best when the effect is immediate, and the anti-security choice often has no immediate consequences. This makes learning from consequences difficult, except in the case of spectacular disasters.

Evaluating the security/cost trade-off. While the gains from security are generally abstract and the negative consequences stochastic, the cost is real and immediate. Security is integrated into systems in such a way that it usually comes with a price paid in time, effort, and convenience, all valuable commodities to users.

For example, even the simplest case, restricting access to a public share in Microsoft's Windows Vista to a group of users, requires about nine separate steps and six distinct user interfaces (see Table 1). While each step seems small, together they add up to a real cost to users. In deciding what to do, users weigh the cost of the effort against the perceived value of the gain (safety/security) and the perceived chance that nothing bad would happen either way.

Table 1. Nine steps and six UIs are required to set file permissions on a public share in Windows Vista. It takes four steps just to find the settings.

From Windows Explorer (UI #1):
1. Right-click the folder in the public share (invokes UI #2).
2. Click Properties in the context menu (invokes UI #3).
3. Click the Sharing tab (invokes UI #4).
4. Click Share… (invokes UI #5).
5. Enter the user or group name to share with.
6. Click Add (automatically sets the permission level to "Reader," which sets ACEs for Read, Read & Execute, and List Folder Contents).
7. Click Share (invokes UI #6).
8. Click Done (returns to UI #3).
9. Click Close (returns to UI #1).
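The cost asymmetry also suggests a remedy for administrators: scripting removes most of the per-decision effort. As a rough sketch (not from the original article), the nine steps in Table 1 collapse to roughly one call to Windows' built-in icacls tool; the folder path and group name below are hypothetical, and the sketch covers only the permission grant, assuming the folder is already shared:

  import subprocess

  def grant_read_access(folder: str, group: str) -> None:
      # Grant inheritable read-only access ((OI)(CI)R) to a group,
      # roughly the "Reader" level set in step 6 of Table 1.
      subprocess.run(
          ["icacls", folder, "/grant", f"{group}:(OI)(CI)R"],
          check=True,  # raise if icacls reports an error
      )

  # Hypothetical share path and group name, for illustration only.
  grant_read_access(r"C:\Users\Public\ProjectDocs", "ProjectReaders")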

Making trade-offs between risk, losses, and gains. Given that security gains are often intangible, the costs known, and the negative consequences probabilistic, we can look at several known factors at play when people evaluate risks, costs, and benefits.

Users are more likely to gamble for a loss than accept a guaranteed loss. First of all, people react to risk differently depending on whether they think they are primarily gaining something or losing something. Tversky and Kahneman [8] showed that people are more likely to avoid risk when alternatives are presented as gains and to take risks when alternatives are presented as losses. For example, consider the following scenario, where a person has to decide between two options presented as gains:

Scenario 1:
A) Gain $5 at no risk.




B) Gain $10 if a coin toss lands heads up.

When Tversky and Kahneman used a similar scenario, 72% of those surveyed chose the sure bet offered by option A, because there was less risk and the outcome was guaranteed. Now consider a similar scenario presented as a choice between two losses:

Scenario 2:
A) Lose $5, guaranteed.
B) Lose $10 if a coin toss lands heads up.

When Tversky and Kahneman framed their scenario as a choice between losses, 64% of the respondents chose option B. People tended to focus on the chance to lose nothing offered by B, compared to the sure loss guaranteed by option A. Note that in each scenario the two options have the same expected value (a $5 gain or a $5 loss); only the framing differs.

When evaluating a security decision, the negative consequences are potentially greater, of course, but the probability is generally lower and often unknown. Still, the principle holds. When a poor security decision offers only a potential loss, compared to the guaranteed loss of making the pro-security decision, the user may be inclined to take the risk. For example, consider the choice between two losses in a common security decision involving the download and installation of a digital certificate and ActiveX control from an unknown source. In this scenario, the primary goal is to view the Web page content:

Scenario 3:
A) Do not install the digital certificate and ActiveX control from the unknown source and do not view the content of the Web page (fail on the primary goal), guaranteed.
B) Install the digital certificate and ActiveX control from the unknown source, view the Web page (accomplish the primary goal), and take a chance that something bad happens.

As in Scenario 2, some users will take the chance that nothing bad will happen in order to achieve their primary goal, rather than accept the task failure guaranteed by option A. Furthermore, if option B incurs no immediate and obvious negative consequences, the user learns that it is an acceptable decision and is more likely to repeat it in the future. The everyday security decisions end users make, like opening file attachments, are often presented in the form of losses, as in Scenario 3.


Figure 1. Losses carry more value compared to gains when both are perceived as equal. For non-zero values, if the value of a loss (X) equals the value of a gain (Y), then the motivating value of the loss (A) exceeds that of the gain (B). (Adapted from Tversky and Kahneman [8].)



Security as a secondary task. People tend to focus more on the losses that will affect their immediate goal than on the gains when making decisions under time pressure [10]. Users are often called on by the system to make a security decision while they are in the middle of an activity. In these cases, the user is often motivated to get on with the primary task as quickly as possible and, therefore, less likely to make a decision that further interrupts that task. In cases where users are prompted to install software updates, scan a file for viruses before opening it, and so forth, they are less likely to comply when in the middle of another task, especially if in a hurry.

Losses are perceived disproportionately to gains. People do not perceive gains and losses equally. Tversky and Kahneman [8] showed that when individuals perceive a gain and a loss to have the same value, the loss is more motivating in the decision (see Figure 1). In short, a loss of $100 is more adverse than a gain of $100 is attractive to a decision maker. This suggests that while a system designer may consider the cost of a security effort small, the user could perceive that loss as outweighing the larger gain in safety. Put simply, the user must perceive a greater magnitude of gain than of loss.
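This asymmetry can be made concrete with the value function from Tversky and Kahneman's later work on cumulative prospect theory (1992, not reference [8]): roughly v(x) = x^0.88 for gains and v(x) = -2.25(-x)^0.88 for losses. A small sketch, using those commonly cited parameters and ignoring probability weighting for simplicity, reproduces the choice pattern in Scenarios 1 and 2:

  def subjective_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
      # Prospect-theory value function: concave for gains, convex for
      # losses, with losses weighted by the loss-aversion factor lam.
      return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

  # Scenario 1 (gains): the sure $5 beats a 50% chance at $10.
  print(subjective_value(5))          # ~4.12
  print(0.5 * subjective_value(10))   # ~3.79

  # Scenario 2 (losses): gambling on -$10 hurts less than a sure -$5.
  print(subjective_value(-5))         # ~-9.27
  print(0.5 * subjective_value(-10))  # ~-8.53

The sure gain carries more subjective value than the gamble, while the sure loss feels worse than the gamble, matching the 72% and 64% majorities reported above.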

IMPROVING SECURITY COMPLIANCE AND DECISION MAKING

Using the principles at work in security decision making, there are several avenues that may improve user security behavior.

Reward pro-security behavior. There must be a tangible reward for making good security decisions. Some suggest that corporate IT organizations would be encouraged to adopt stronger security practices if insurance companies offered lower premiums to those who protect themselves with certain measures [5]. Likewise, end users must be motivated to take pro-security actions.

Increasing the immediate and tangible reward for secure actions may increase compliance. One form of reward is to see that the security mechanisms are working and that the action the user chose is, in fact, making them safer. This makes safety a visible gain when the user evaluates the gains and losses in a security decision.

A good example is when an antivirus or antispyware product finds and removes malicious code. In these cases, the security application often issues a notification that it has found and mitigated a threat. This is an effective way for a security system to prove its value to the user by showing there was a risk and that the system protected them. Returning to the access control scenario for file sharing, it would likewise be possible to report attempts at unauthorized access to the file owner.

Improve the awareness of risk. As discussed earlier, people often believe they are at less risk compared to others. One way to increase security compliance is to increase users' awareness of the risks they face. This could be achieved through user training and education in general, but should also be built into systems to support specific events.

One classically deficient area in the security of systems is messages and alerts. Security messages often resemble other message dialogs (see Figure 2). As a result, security messages may not stand out as important, and users often learn to disregard them.

To avoid the response-bias problems faced by most message dialogs, security messages should be instantly distinguishable from other message dialogs. They should look and sound very different (see Figure 3). This helps mitigate the blasé attitude with which users attend to the information. Once the message dialog has the user's attention, they are more likely to read and consider the choices given to them.


Figure 2. Can you spot the security message? Message dialogs often look similar enough that no message stands out as more important than the others.

Figure 3. Can you spot the security message? (Part 2) Well-designed security messages have distinct visual and auditory properties that make them stand apart from all other message dialogs and indicate the criticality of the message.
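As a minimal sketch of the idea in Figure 3 (illustrative, not from the article), the prompt below uses Python's standard tkinter toolkit to give a security decision a distinct color, banner, and audible cue; the styling and wording are assumptions:

  import tkinter as tk

  def security_prompt(message: str) -> bool:
      # A dialog that looks and sounds unlike ordinary message boxes.
      root = tk.Tk()
      root.title("SECURITY DECISION")
      root.configure(bg="#8b0000")  # dark-red banner, unlike stock dialogs
      root.bell()                   # audible cue
      tk.Label(root, text="SECURITY WARNING", bg="#8b0000", fg="white",
               font=("Segoe UI", 14, "bold")).pack(padx=20, pady=(15, 5))
      tk.Label(root, text=message, bg="#8b0000", fg="white",
               wraplength=320).pack(padx=20, pady=5)
      allowed = {"value": False}

      def allow():
          allowed["value"] = True
          root.destroy()

      # The pro-security choice is presented as the obvious, easy one.
      tk.Button(root, text="Block (recommended)",
                command=root.destroy).pack(side="left", padx=20, pady=15)
      tk.Button(root, text="Allow anyway",
                command=allow).pack(side="right", padx=20, pady=15)
      root.mainloop()
      return allowed["value"]

Making the safe button the most prominent one also anticipates the point below about reducing the cost of the secure choice.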


Catch corporate security policy violators. Increasing the awareness of risk could also mean increasing the likelihood that a corporate user is caught violating security policy. Having a corporate security policy that is not monitored or enforced is tantamount to having laws but no police. If the security systems have good auditing capabilities and are watched by event-monitoring systems, users who make poor security decisions could be "caught," in a way. This would serve as an immediate negative consequence by itself. Like automated systems at traffic lights that snap pictures and issue violations to drivers who run red lights, users who make poor security decisions could receive automated email notifications of their actions and of the corporate policy or safe computing practice at issue. In general, the best deterrent to breaking the rules is not the severity of the consequences but the likelihood of being caught.
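A sketch of such an automated notice (illustrative; the monitoring hook, addresses, and mail relay are hypothetical), using Python's standard smtplib:

  import smtplib
  from email.message import EmailMessage

  def notify_violation(user_email: str, event: str, policy_url: str) -> None:
      # Send an immediate, automated notice when monitoring flags a
      # policy violation, like a red-light camera issuing a ticket.
      msg = EmailMessage()
      msg["Subject"] = "Security policy reminder"
      msg["From"] = "security-monitor@example.com"  # hypothetical sender
      msg["To"] = user_email
      msg.set_content(
          f"Our monitoring systems flagged this action:\n\n  {event}\n\n"
          f"It conflicts with corporate security policy. See {policy_url} "
          "for the safe computing practice to follow instead."
      )
      with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical relay
          smtp.send_message(msg)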

Reduce the cost of implementing security. Obviously, if users need to take additional steps to increase their level of security, they will be less likely to do so. As the cost of implementing security increases, the overall value of the decision decreases. To accomplish a task, users often seek the path of least resistance that satisfies the primary goal. Making the secure choice the easiest one to implement takes advantage of normal user behavior and gains compliance.

Another way to reduce the cost of security is, of course, to employ secure default settings. Most users never change the default settings of their applications. In this way, one increases the cost, in time and effort, of making non-secure decisions. While good default settings can increase security, system designers must be careful that users do not find an easier way to slip around them. For example, users who are directed by their IT departments to use strong passwords across multiple systems are more likely to write them down [1].
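As a small illustration of the secure-defaults principle (a sketch; the setting names are hypothetical), the zero-effort path below is the safe one, and weakening it requires explicit, auditable effort:

  from dataclasses import dataclass

  @dataclass
  class ShareConfig:
      # Every default is the secure choice, so doing nothing is safe.
      visibility: str = "private"    # not "public"
      permission: str = "read-only"  # not "read-write"
      require_password: bool = True
      link_expiry_days: int = 7      # shared links do not live forever

  safe = ShareConfig()  # the path of least resistance
  risky = ShareConfig(visibility="public", require_password=False)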

CONCLUSION
Core to security on an everyday basis is the compliance of the end user, but how do we get users to make good decisions when they are often the weakest link in the chain? Users must be less motivated to choose anti-security options and more motivated to choose pro-security options. Obviously, no one would suggest training end users with USB devices that deliver an electric shock or a food-pellet reward based on their actions. But, generally speaking, we can increase compliance if we work with the psychological principles that drive behavior.

The ideal security user experience for most users would be none at all. The vast majority would be content to use computers to enrich their lives while taking for granted a perfectly secure and reliable infrastructure that makes it all possible. Security only becomes a priority for many when they have problems with it. For now, and for the foreseeable future, however, users are in the control loop. We must design systems with the understanding that, at some point, the user must make a decision regarding security. The question is, what will they decide?

References
1. Adams, A. and Sasse, M.A. Users are not the enemy. Commun. ACM 42, 12 (Dec. 1999), 40–46.
2. Borgida, E. and Nisbett, R.E. The differential impact of abstract vs. concrete information on decisions. J. Applied Social Psychology 7 (1977), 258–271.
3. Dhamija, R., Tygar, J.D., and Hearst, M. Why phishing works. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, Apr. 22–27, 2006). ACM, New York, 2006, 581–590.
4. Downs, J.S., Holbrook, M., and Cranor, L.F. Behavioral response to phishing risk. In Proceedings of the Anti-Phishing Working Group's 2nd Annual eCrime Researchers Summit (Pittsburgh, PA, Oct. 4–5, 2007). ACM, New York, 2007, 37–44.
5. Greenwald, S.J., Olthoff, K.G., Raskin, V., and Ruch, W. The user non-acceptance paradigm: INFOSEC's dirty little secret. In New Security Paradigms Workshop. ACM, New York, 2004, 35–43.
6. Slovic, P., Fischhoff, B., and Lichtenstein, S. Facts versus fears: Understanding perceived risks. In Judgment under Uncertainty: Heuristics and Biases. D. Kahneman, P. Slovic, and A. Tversky, Eds. Cambridge University Press, New York, 1986, 463–489.
7. Smetters, D.K. and Grinter, R.E. Moving from the design of usable security technologies to the design of useful secure applications. In New Security Paradigms Workshop. ACM, New York, 2002, 82–89.
8. Tversky, A. and Kahneman, D. Rational choice and the framing of decisions. J. Business 59 (1986), 251–278.
9. Whitten, A. and Tygar, J.D. Why Johnny can't encrypt: A usability evaluation of PGP 5.0. In Proceedings of the 8th USENIX Security Symposium (1999). USENIX Association, Berkeley, CA, 169–184.
10. Wright, P. The harassed decision maker: Time pressures, distractions, and the use of evidence. J. Applied Psychology 59 (1974), 555–561.
11. Yee, K.-P. User interaction design for secure systems. In Proceedings of the 4th International Conference on Information and Communications Security. Springer-Verlag, London, 2002.
12. Zurko, M.E. and Simon, R.T. User-centered security. In New Security Paradigms Workshop. ACM, New York, 1996, 27–33.

Ryan West ([email protected]) has conducted academic research in risk and decision making and applied research in areas ranging from medical mistakes to computer security. He currently works as a design researcher at Dell, Inc., Austin, TX.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

© 2008 ACM 0001-0782/08/0400 $5.00

DOI: 10.1145/1330311.1330320


