
Trust Online and the Phishing Problem: why warnings are not enough

M. Angela Sasse (based on work by Iacovos Kirlappos, Katarzyna Krol, Matthew Moroz)

Department of Computer Science & SECReT Doctoral Training Centre, UCL

<event name> 22/09/2011

Outline

• Basics of trust
• 2 lab studies on an anti-phishing tool and security warnings
• … which explain why current signals don’t work
• What can we do?
  – Design
  – Communication to user


What is trust?

Trust is only required in the presence of risk and uncertainty

“… willingness to be vulnerable, based on positive expectations about the actions of others”

M. Bacharach & D. Gambetta 2001. Trust as Type Detection. In: Castelfranchi, C. & Tan, Y. Trust and Deception in Virtual Societies.

Why? Economic Benefits

Ignore these at your peril …

• trust = split-second assessment, rather than thorough risk analysis and assurance

• reliance = after several successful transactions there is no perceived vulnerability, so the assessment takes only a fraction of a split second

How do we decide when to trust?

• People assess the transaction partner’s ability and motivation [Deutsch, 1956]

• We look for cues (trust signals) that indicate these

• This assessment can be based on
  – cognitive elements (rational)
  – affective reactions (pre-cognitive)

[Diagram: the basic trust interaction between trustor and trustee. (1) The trustee emits signals to the trustor. (2a) The trustor takes a trusting action, or (2b) withdraws to an outside option. (3a) The trustee fulfils, or (3b) defects. The risk lies between the trusting action and the trustee’s response.]

Dis-embedding

Interaction is stretched over time and space and involves complex socio-technical systems [Giddens, 1990]

… pervasive in modern societies

(e.g. catalogue shopping)

So – what’s so special about trust online?

• Increased risk
  – Privacy (more data required)
  – Security (open system)
  – Own ability (errors)

• Increased uncertainty
  – Inexperienced with decoding cues
  – Fewer surface cues available
  – Traditional cues no longer useful

J. Riegelsberger, M. A. Sasse & J. D. McCarthy: The Mechanics of Trust. Int. J. of Human-Computer Studies, 2005.

Study 1: phishing

• Passive phishing indicators (Spoofstick etc.) have limited effect
  – Users don’t look at indicators
  – Users don’t know what indicators mean
  – Require users to disrupt their main task
  – Time-consuming and error-prone

R. Dhamija et al.: Why Phishing Works. Procs ACM CHI 2006.
Schechter et al.: The Emperor’s New Security Indicators. Procs IEEE Symposium on Security & Privacy 2007.

Are active anti-phishing tools better?

• Example: SOLID by First Cyber Security

• Traffic-light approach:
  – Passive indicator when no risk exists
  – Becomes active when a risk is identified

Safe Website: Green

• “Extreme Caution”
• Shows up only when the website the users attempt to visit is certainly unsafe
• Presents three options:
  – Redirection to the authentic website (default option)
  – Close the window
  – Proceed to the risky site
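The three options above amount to a decision rule with a safe default. Below is a minimal, self-contained Python sketch of that traffic-light logic, intended only as an illustration of the design pattern: it is not First Cyber Security’s actual code, and the blacklist data, domain names and function names are hypothetical.

```python
from enum import Enum

# Hypothetical blacklist mapping known spoof domains to the genuine site.
KNOWN_SPOOFS = {"bankofamerlca.example": "bankofamerica.example"}

class Verdict(Enum):
    SAFE = "green"      # passive indicator only
    UNKNOWN = "grey"    # passive indicator, no interruption
    PHISHING = "red"    # triggers the active "Extreme Caution" warning

def classify(host: str) -> Verdict:
    """Toy classifier standing in for the tool's real risk assessment."""
    if host in KNOWN_SPOOFS:
        return Verdict.PHISHING
    return Verdict.SAFE if host.endswith(".example") else Verdict.UNKNOWN

def handle_navigation(host: str, user_choice: str = "redirect") -> str:
    """Return the host the browser should actually load."""
    if classify(host) is not Verdict.PHISHING:
        return host                      # never interrupt the primary task
    # Active warning: redirection to the genuine site is the default,
    # so a reflexive "OK" click still lands the user somewhere safe.
    if user_choice == "proceed":
        return host                      # user explicitly accepts the risk
    if user_choice == "close":
        return "about:blank"
    return KNOWN_SPOOFS[host]            # default: redirect to the authentic site

print(handle_navigation("bankofamerlca.example"))  # -> bankofamerica.example
```

Because redirection to the authentic site is the default, even a user who clicks through the dialog without reading it ends up on the genuine website.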

Results – Active Warning

• The “Extreme Caution” window resulted in 17 out of 18 participants visiting the genuine website.
  – Clear information
  – Right timing
  – Context-specific

• A safe default is important.
  – Users clicked “OK” without fully understanding the meaning of the message they had been presented with
  – They were redirected to the genuine website

Results – did they still take risks?

• Tool reduced the number of participants taking risks
• But: some still take risks

Number of participants choosing each payoff:

Potential payoff   Control   SOLID
£10                5         10 (green)
£35-40             12        8 (grey/yellow)
£20                1         0 (grey)

Why do users ignore the recommendation?

• Price = main factor for ignoring the tool

Need and Greed Principle (Stajano & Wilson: Understanding Scam Victims. Comm. ACM, March 2011)

• General advice like “If it is too good to be true, it usually is” doesn’t work

• Participants believe they can rely on their own ability to identify scam websites, and ignore the tool

• Past experience with high false-positives creates a negative attitude towards security indicators

• Cormac Herley: security tools/advice offering a poor cost-benefit will be rejected by users

C. Herley: So Long, And No Thanks for the Externalities. Procs NSPW 2009.

“I know better …”

Other trust cues

• Perceived familiarity (reliance)
• Mentioning other entities – Facebook and Twitter logos
• Ads – “Why would anyone pay to advertise on a dog site?”, mention of charities
• Lots of info, privacy policies, and good design


Symbols of trust

• arbitrarily assigned meaning
• specifically created to signify the presence of trust-warranting properties
• must be difficult to forge (mimicry), with sanctions in the case of misuse
• expensive
  – trustors have to know about their existence and how to decode them
  – trustees need to invest in emitting them and in getting them known

Symptoms of trust

• not specifically created to signal trust-warranting properties – rather, by-products of the activities of trustworthy actors

• e.g. trustworthy online retailer has large customer base, repeat business

• exhibiting symptoms of trust incurs no cost for trustworthy actors, whereas untrustworthy actors would have to invest effort to mimic those signals

Study 2: pdf warnings


[Chart: most common file types in targeted attacks in 2009. Source: F-Secure (2010)]

The experiment

• Two conditions: between-subjects design

• Participant task: reading two articles and evaluating their summaries
  – choosing the first article: no warning
  – choosing the second article: a warning with each article the participants tried


General results

• 120 participants (64 female, mean age 25.7)

Warning type   Downloaded   Refused
Generic        52           8
Specific       46           14
∑              98           22

χ² = 1.391, p = 0.238, df = 1
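As a sanity check on the reported statistic (the slides do not say which analysis tool was used; this is only an illustrative sketch), the 2×2 table above can be fed to a standard chi-square test. SciPy applies Yates’ continuity correction to 2×2 tables by default, which reproduces the reported values.

```python
# Chi-square test of warning type (generic vs specific) against the
# download decision, using the 2x2 table above.
from scipy.stats import chi2_contingency

table = [[52, 8],    # generic warning: downloaded, refused
         [46, 14]]   # specific warning: downloaded, refused

# SciPy's default Yates continuity correction for 2x2 tables matches
# the figures reported on the slide.
chi2, p, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, df = {dof}")
# -> chi2 = 1.391, p = 0.238, df = 1: no significant effect of warning wording
```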

Gender differences

• Women were more cautious and less likely to download an article with a warning

         Download   Refusal
Male     50         6
Female   48         16

χ² = 4.071, p = 0.044, df = 1


Eye-tracking data

• Fixation time in seconds
  – By warning type
    • 6.13 for generic warnings
    • 6.33 for specific warnings
  – By subsequent reaction
    • 6.94 for those who subsequently refused to download
    • 5.63 for those who subsequently downloaded the article


There was no significant difference in fixation length: all participants were fairly attentive to the warning regardless of its text, but they made different decisions.

Hypothetical vs. observed behaviour

Generic warning

                Download   Refusal to download
Hypothetical    52         8
Self-observed   41         19

χ² = 6.039, p = 0.014

Specific warning

                Download   Refusal to download
Hypothetical    46         14
Self-observed   13         47

χ² = 36.31, p < 0.0001

Reasons for ignoring warning

• Desensitisation (55 participants): past experience of false positives


Reasons for ignoring warning

• Trusting the source (29)

“It depends on what the source was, if I was getting it from a dodgy website, I probably wouldn’t download it. But if something was sent to me by a friend or a lecturer or I was downloading it from a library catalogue, I would have opened it anyway.”


Reasons for ignoring warning

• Trusting anti-virus (18)
  “I trusted that the anti-virus on my computer would pick anything up.”

• Trusting PDF (15)
  “I don’t think PDF files can have this kind of harm in them.”

  “It says ‘PDF files can harm your computer’ and I know they can’t.”


Why security warnings don’t work

• Warnings are not reliable and badly designed
  – more noise than signals
  – interrupt users’ primary task
  – pop-ups are associated with adverts and updates = ANNOYING!!!

• Users have misconceptions:
  – about risks and indicators
  – about their own competence


Conclusions: What can be done?

1. Re-design the interaction: eliminate choice, automatically direct users to safe sites

2. More effective trust signalling: develop symptoms of trust and protect symbols better

3. Get rid of useless warnings

4. Better communication about risks, correct misconceptions about trust signals


Good Human Factors – by a security person

1. The system must be substantially, if not mathematically, undecipherable;
2. The system must not require secrecy and can be stolen by the enemy without causing trouble;
3. It must be easy to communicate and remember the keys without requiring written notes; it must also be easy to change or modify the keys with different participants;
4. The system ought to be compatible with telegraph communication;
5. The system must be portable, and its use must not require more than one person;
6. Finally, regarding the circumstances in which such a system is applied, it must be easy to use and must neither require stress of mind nor the knowledge of a long series of rules.

Auguste Kerckhoffs, ‘La cryptographie militaire’, Journal des sciences militaires, vol. IX, pp. 5–38, Jan. 1883, pp. 161–191, Feb. 1883.