Alex Hutton's Slide Show on Destroying GRC from Security B-Sides
Risk Management
Time to blow it up and start over?
@alexhutton
Meet E.T. Jaynes
Probability Theory: The Logic of Science
Kuhn’s Protoscience A stage in the development of a science that is described by:
• somewhat random fact gathering (mainly of readily accessible data)
• a “morass” of interesting, trivial, irrelevant observations
• A variety of theories (that are spawned from what he calls philosophical speculation) that provide little guidance to data gathering
only the wisest and stupidest of men never change
- Confucius
Destroy GRC
Musings of a Risk Management Deconstructivist
A feeling of disconnect between GRC and Security
let’s talk governance
governance, without metrics & models, is superstition
governance, with metrics & models, describes capability to manage risk
Why does what you execute on and how you execute matter?
governance, without metrics & models, is superstition
governance, with metrics & models, describes capability to manage risk
measurably good governance practices (can/will) reduce risk
measurably good governance is simply a description of capability to manage risk
not sucking eggs at security is a good idea
compliance*, without metrics, is superstition
compliance*, with metrics, is risk management
*(regulatory)
But “GRC” Risk Management:
Find issue, call issue bad, fix issue, hope you don’t find it again...
What is risk?
a. Risk is notional
b. Risk is tangible
Problems with “tangible”
- complex systems, complexity science
- usefulness outside of the very specific
- measurements
- lots of belief statements
How Complex Systems Fail (Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety)
Richard I. Cook, MD, Cognitive Technologies Laboratory, University of Chicago
http://www.ctlab.org/documents/How%20Complex%20Systems%20Fail.pdf
Catastrophe requires multiple failures; single point failures are not enough.
The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents. Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners.
Complex systems contain changing mixtures of failures latent within them.
The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations. Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident. The failures change constantly because of changing technology, work organization, and efforts to eradicate failures.
Complex systems run in degraded mode.
Post-accident attribution to a ‘root cause’ is fundamentally wrong.
All practitioner actions are gambles.
Human expertise in complex systems is constantly changing
Change introduces new forms of failure.
Views of ‘cause’ limit the effectiveness of defenses against future events.
Problems with “notional”
- becomes difficult to extract wisdom; we want a “Gross Domestic Product”
- unable to be defended
- pseudo-scientific
- lots of belief statements
from Mark Curphey’s SecurityBullshit
What is risk?
uses of “risk”
- engineering - complex systems says “no”
- financial - no 110% return on your firewall
- medical - requires data
our standards say:
Find issue, call issue bad, fix issue, hope you don’t find it again...
Managing risk means aligning the capabilities of the organization, and the exposure of the organization with the tolerance of the data owners
- Jack Jones
evidence-based medicine, meet information security
What is evidence-based risk management?
a deconstructed, notional view of risk
Threat Landscape
Controls Landscape
Loss Landscape
Asset Landscape
risk
Threat Landscape
Controls Landscape
Loss Landscape
Asset Landscape
risk
a balanced scorecard?
Threat Landscape
Controls Landscape
Loss Landscape
Asset Landscape
risk
a balanced scorecard?
capability (destroys “g”, introducing quality management & management science elements into infosec)
exposure
change
“compliance” simply becomes a factor of loss landscape and/or operating as a control group for comparative data
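The four landscapes above can be read as a balanced scorecard: per-landscape measurements rolled up into a notional view of risk. As a minimal sketch, assuming a 0–100 scoring scheme and simple averaging (neither of which the talk prescribes; the landscape names are the only part taken from the slides):

```python
# Hypothetical sketch of the four-landscape "balanced scorecard" view of risk.
# The landscape names come from the deck; the 0-100 scores and the averaging
# roll-up are illustrative assumptions, not part of the talk.
from dataclasses import dataclass

@dataclass
class Landscape:
    name: str
    capability: float  # how well we can manage this landscape (assumed 0-100)
    exposure: float    # how exposed the organization is (assumed 0-100)

def scorecard(landscapes: list[Landscape]) -> dict[str, float]:
    """Roll per-landscape scores up into a single notional view."""
    n = len(landscapes)
    return {
        "capability": sum(l.capability for l in landscapes) / n,
        "exposure": sum(l.exposure for l in landscapes) / n,
    }

risk_view = scorecard([
    Landscape("threat", capability=40, exposure=70),
    Landscape("controls", capability=60, exposure=50),
    Landscape("loss", capability=55, exposure=45),
    Landscape("asset", capability=65, exposure=35),
])
print(risk_view)  # {'capability': 55.0, 'exposure': 50.0}
```

The point of the structure is that "g" disappears: governance is no longer a separate checklist, just the measured capability column.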
The Achilles heel again, lack of data
Models and data sharing
Good Lord Of The Dance, something a vendor might actually help you with
Verizon Incident Sharing Framework
it’s open*!
* kinda
Verizon has shared data
- 2009 – over 600 cases
- 2010 – between 1000 & 1400
Verizon is sharing our framework
What is the Verizon Incident Sharing (VerIS) Framework?
- A means to create metrics from the incident narrative
- how Verizon creates measurements for the DBIR
- how *anyone* can create measurements from an incident
- http://securityblog.verizonbusiness.com/wp-content/uploads/2010/03/VerIS_Framework_Beta_1.pdf
What makes up the VerIS framework?
- Demographics
- Incident Classification
- Event Modeling (a4)
- Discovery & Mitigation
- Impact Classification
- Impact Modeling
Cybertrust Security
demographics
- company industry
- company size
- geographic location of the business unit in the incident
- size of security department
incident classification
- agent: what acts against us
- asset: what the agent acts against
- action: what the agent does to the asset
- attribute: the result of the agent’s action against the asset
agent: external, partner, internal
action: hacking, malware, social, physical, misuse, error, environmental
asset: type, function
attribute: confidentiality, availability, integrity, possession, utility, authenticity
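The A4 axes are closed category lists, so they encode naturally as enumerations. A minimal sketch, using only the category values shown on the slides; the `Event` class and the free-text `asset` field are illustrative assumptions about how one might record them, not the framework's own schema:

```python
# Sketch of the VerIS A4 incident-classification axes as Python enums.
# Category values are from the slides; the Event container is an assumption.
from dataclasses import dataclass
from enum import Enum

class Agent(Enum):
    EXTERNAL = "external"
    PARTNER = "partner"
    INTERNAL = "internal"

class Action(Enum):
    HACKING = "hacking"
    MALWARE = "malware"
    SOCIAL = "social"
    PHYSICAL = "physical"
    MISUSE = "misuse"
    ERROR = "error"
    ENVIRONMENTAL = "environmental"

class Attribute(Enum):
    CONFIDENTIALITY = "confidentiality"
    AVAILABILITY = "availability"
    INTEGRITY = "integrity"
    POSSESSION = "possession"
    UTILITY = "utility"
    AUTHENTICITY = "authenticity"

@dataclass
class Event:
    agent: Agent          # what acts against us
    action: Action        # what the agent does to the asset
    asset: str            # what the agent acts against (type/function as text)
    attribute: Attribute  # the result of the action against the asset

# A series of A4 events forms the incident's "attack model":
attack_model = [
    Event(Agent.EXTERNAL, Action.SOCIAL, "end user", Attribute.INTEGRITY),
    Event(Agent.EXTERNAL, Action.MALWARE, "desktop", Attribute.CONFIDENTIALITY),
]
print(len(attack_model))  # 2
```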
the series of events (a4) creates an “attack model”
1 > 2 > 3 > 4 > 5
incident classification: the a4 event model
discovery & mitigation
- incident timeline
- discovery method
- evidence sources
- control capability
- corrective action: most straightforward manner in which the incident could be prevented
- the cost of preventative controls
impact classification
- impact categorization: sources of impact (direct, indirect); similar to iso 27005/FAIR
- impact estimation: distribution for amount of impact
- impact qualification: relative impact rating
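"Distribution for amount of impact" means reporting a range rather than a single loss number. A minimal sketch of that idea via Monte Carlo sampling; the lognormal shape, the parameters, and the percentile reporting are illustrative assumptions, not something VerIS specifies:

```python
# Sketch: estimate impact as a distribution, not a point value.
# The lognormal model and its parameters are assumptions for illustration.
import math
import random

def simulate_impact(median: float, spread: float, trials: int = 10_000) -> list[float]:
    """Sample plausible loss amounts around a practitioner-estimated median."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    return [rng.lognormvariate(math.log(median), spread) for _ in range(trials)]

losses = sorted(simulate_impact(median=50_000, spread=0.8))
p50 = losses[len(losses) // 2]
p95 = losses[int(len(losses) * 0.95)]
# Report percentiles instead of a single "the impact is X" number:
print(f"p50 ~ {p50:,.0f}, p95 ~ {p95:,.0f}")
```

Reporting p50 and p95 instead of one number is what lets the "tangible" conversation with a business owner stay honest about uncertainty.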
[diagram: incident narrative (events 1 > 2 > 3 > 4 > 5) → demographics + incident classification (a4) + discovery & mitigation + impact classification → incident metrics]
[diagram: case studies a–f, each converted to incident metrics (demographics + incident classification (a4) + discovery & mitigation + impact classification), combine into a data set]
[diagram: the data set of cases a–f becomes knowledge & wisdom]
[diagram: the data set of cases a–f feeds threat modeling]
[diagram: the data set of cases a–f feeds impact modeling]
Problems:
Data sharing, incidents, privacy
Failures vs. Successes (where management capability helps)
Talking to the business owner (might still need a “tangible” approach here, but pseudo-actuarial data can help; we still want a GDP)
Successes:
Bridge the gap (IRM becomes tactically actionable based on threat/attack modeling)
(Capability measurements bridged to notional increase/decrease in risk)
(complex system problems addressed by showing multiple sources of causes)
Accurate, notional likelihood
Accurate tangible impact
Requirements:
Data Sets
Models
Technology
Sciences - complexity, management/TQM/Probability/Game Theory, biomimicry...