Incentive compatibility in data security
Felix Ritchie, ONS (Richard Welpton, Secure Data Service)


Page 1: Incentive compatibility in data security

Incentive compatibility in data security

Felix Ritchie, ONS

(Richard Welpton, Secure Data Service)

Page 2: Incentive compatibility in data security

Overview

• Research data centres

• Traditional perspectives

• A principal-agent problem?

• Behaviour re-modelling

• Evidence and impact

Page 3: Incentive compatibility in data security

Research data centres

• Controlled facilities for access to sensitive data

• Enjoying a resurgence as ‘virtual’ RDCs
  – Exploit benefits of an RDC
  – Avoid physical access problems

• ‘People risk’ key to security

Page 4: Incentive compatibility in data security

The traditional approach

Page 5: Incentive compatibility in data security

Parameters of access

• NSI
  – Wants research
  – Hates risk
  – Sees security as essential

• Researcher
  – Wants research
  – Sees security as a necessary evil

a classic principal-agent problem?

Page 6: Incentive compatibility in data security

NSI perspective

• Be careful

• Be grateful

Page 7: Incentive compatibility in data security

Researcher perspective

• Give me data

• Give me a break!

Page 8: Incentive compatibility in data security

Objectives

V_NSI = U(risk-, Research+) - C(control+)

V_i (researcher_i) = U(research_i+, control-)

risk = R(control-, trust-) < R_min

Research = f(V_i+)
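
A hedged reading of this notation, taking the +/- superscripts to mark the sign of each partial effect (my gloss, not stated on the slide):

\[
V_{\mathrm{NSI}} = U(\mathrm{risk}, \mathrm{Research}) - C(\mathrm{control}),
\qquad U_{\mathrm{risk}} < 0,\; U_{\mathrm{Research}} > 0,\; C' > 0
\]
\[
V_i = U(\mathrm{research}_i, \mathrm{control}),
\qquad \partial V_i / \partial\, \mathrm{research}_i > 0,\; \partial V_i / \partial\, \mathrm{control} < 0
\]
\[
\mathrm{risk} = R(\mathrm{control}, \mathrm{trust}) \le R_{\min},
\qquad R_{\mathrm{control}} < 0,\; R_{\mathrm{trust}} < 0;
\qquad \mathrm{Research} = f(V_i),\; f' > 0
\]

That is, the NSI values research but dislikes risk and the cost of controls; each researcher values their own research and dislikes controls; and both controls and trust reduce risk.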

Page 9: Incentive compatibility in data security

A principal-agent problem?

• NSI:
  – Trust = T(law_fixed) = T(training(law_fixed), law_fixed)
  – Maximise research s.t. maximum risk: Risk = Risk_min

• Researcher:
  – Control = Control_fixed
  – Maximise research
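
As a sketch, the baseline set-up above can be written as two separate optimisation problems (my formalisation; law_fixed is an exogenous parameter):

\[
\text{NSI:}\quad \max_{\mathrm{control}} \; \mathrm{Research}
\quad\text{s.t.}\quad R\bigl(\mathrm{control},\, T(\mathrm{law}_{\mathrm{fixed}})\bigr) \le R_{\min}
\]
\[
\text{Researcher } i:\quad \max \; \mathrm{research}_i
\quad\text{taking}\quad \mathrm{control} = \mathrm{control}_{\mathrm{fixed}} \text{ as given}
\]

Because trust enters only through the fixed legal framework, neither party treats it as something to invest in.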

Page 10: Incentive compatibility in data security

Dependencies

[Diagram: dependency structure linking the choice variables (trust, control) to research_i, V_i, Research and Risk, and ultimately to V_NSI]

Page 11: Incentive compatibility in data security

Consequences: inefficiency?

• NSI
  – Little incentive to develop trust
  – Limited gains from training
  – Access controls focus on deliberate misuse

• Researcher
  – Access controls are a cost of research
  – No incentive to build trust
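
In terms of the baseline model, a one-line way to see this (my reading of the slides above): with trust pinned down by the legal framework,

\[
\mathrm{trust} = T(\mathrm{law}_{\mathrm{fixed}})
\;\Rightarrow\;
\frac{\partial\, \mathrm{risk}}{\partial\, \mathrm{training}} \approx 0
\quad\text{and}\quad
\frac{\partial V_i}{\partial\, \mathrm{trust}} = 0,
\]

so at the margin neither the NSI nor the researcher gains anything from building trust, and controls do all the work.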

Page 12: Incentive compatibility in data security

More objectives, more choices

[Diagram: the same dependency structure, with training and effort added as choice variables alongside trust and control]

Page 13: Incentive compatibility in data security

Intermission: What do we know?

Page 14: Incentive compatibility in data security

Conversation pieces

• Researchers are malicious

• Researchers are untrustworthy

• Researchers are not security-conscious

• NSIs don’t care about research

• NSIs don’t understand research

• NSIs are excessively risk-averse


Page 15: Incentive compatibility in data security

Some evidence

• Deliberate misuse
  – Low credibility of legal penalties
  – Probability of detection more important
  – Driven by ease of use

• Researchers don’t see ‘harm’

• Accidental misuse
  – Security seen as NSI’s responsibility

• Contact affects value

Page 16: Incentive compatibility in data security

Developing true incentive compatibility

Page 17: Incentive compatibility in data security

Incentive compatibility for RDCs

• Align aims of NSI & researcher
  – Agree level of risk
  – Agree level of controls
  – Agree value of research

• Design incentive mechanism for default
  – Minimal reward system
  – Significant punishments

• Bad economics?

Page 18: Incentive compatibility in data security

Changing the message (1): behaviour of researchers

• Aim

– researchers see risk to facility as risk to them

• Message
  – we’re all in this together
  – no surprises, no incongruities
  – we all make mistakes

• Outcome
  – shopping (reporting others’ risky behaviour)
  – fessing (owning up to one’s own mistakes)

Page 19: Incentive compatibility in data security

Changing the message (2): behaviour of NSI

• Aim

  – positive engagement with researchers
  – realistic risk scenarios

• Message
  – research is a repeated game
  – researchers will engage if they know how
  – contact with researchers is of value per se
  – we all make mistakes

• Outcome
  – improved risk tolerance

Page 20: Incentive compatibility in data security

Changing the message (3): clearing research output

• Aim

– clearances reliably good & delivered speedily

• Message
  – we’re human & with finite resources/patience
  – you live with crude measures, but
  – you tell us when it’s important
  – we all make mistakes

• Outcome
  – few repeat offenders
  – high volume, quick response, wide range
  – user input into rules

Page 21: Incentive compatibility in data security

Changing the message (4): VML-SDS transition

• Aim

– get VML users onto SDS with minimal fuss

• Message
  – we’re human & with finite resources/patience
  – don’t ask us to transfer data
  – unless it’s important

• Outcome
  – most users just transfer syntax
  – (mostly) good arguments for data transfer

Page 22: Incentive compatibility in data security

Changing the message: summary

• we all know what we all want

• we all know each other’s concerns

• we’ve all agreed the way forward

• we are all open to suggestions

• we’re all human

Page 23: Incentive compatibility in data security

IC in practice

• Cost
  – VML at full operation c. £150k p.a.
  – Secure Data Service c. £300k
  – Denmark, Sweden, NL €1m-€5m p.a.

• Failures
  – Some refusals to accept objectives
  – VML bookings
  – Limited knowledge/exploitation of research
  – Limited development of risk tolerance

Page 24: Incentive compatibility in data security

Summary

• ‘Them and us’ model of data security is inefficient

• Punitive model of limited effectiveness

• Lack of information causes divergent preferences

• Possible to align preferences directly

• It works!

Page 25: Incentive compatibility in data security

Felix Ritchie

Microdata Analysis & User Support
ONS

Page 26: Incentive compatibility in data security

Objectives

V_NSI = U(risk-, Research+) - C(control+)

V_i (researcher_i) = U(risk-, research_i+, control-)

risk = R(control, trust)

control = C(compliance, trust)

trust = T(training, compliance)
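
Compared with the baseline objectives on slide 8, a sketch of what changes here (my reading): the researcher’s utility now also falls with risk, and trust and control are endogenous, so training and compliance feed through to risk instead of being fixed by law:

\[
\mathrm{risk} = R\Bigl(
\underbrace{C(\mathrm{compliance}, \mathrm{trust})}_{\mathrm{control}},\;
\underbrace{T(\mathrm{training}, \mathrm{compliance})}_{\mathrm{trust}}
\Bigr)
\]

Both the NSI (via training) and the researcher (via compliance) now have instruments that reduce risk, which is what makes the aligned-incentive approach described earlier possible.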