Building Security Assurance
in Open Infrastructures
Workshop on Public Safety And Security Research
Helsinki, February 2008
Bertrand MARQUET, Bell Labs, Alcatel-Lucent France,
BUGYO & BUGYO Beyond Project Coordinator
2
Manage/improve security in an acceptable range
Losses vs. costs
[Chart: losses, costs and their sum "losses + costs" (in euros) as a function of security deployment]
A simplified drawing to start addressing the problem
The problem: you cannot manage/improve what you cannot measure.
3
The BUGYO project
It proposed a solution to that problem by providing a framework to measure, maintain and document the security assurance of critical ICT-based service infrastructures:
– security assurance by evaluation of security, based on continuous measurement and monitoring.
We defined Security assurance as: “confidence that a system meets its security objective and therefore is cost-effective to protect against potential attacks and threats”
[Concept diagram: measurement provides evaluation; evaluation gives evidence of assurance, giving confidence that services in operation have sufficient countermeasures, no exploitable vulnerabilities, and therefore minimized risks]
4
Core result: a 6-step methodology
1. Model the service: decompose the service to identify assurance-critical components
2. Select metrics: use the metric taxonomy as a checklist to assign normalized metrics
3. Measure: investigate the network by means of the selected metrics, on a component and system level, as modeled
4. Aggregate: aggregate the metric results to derive an assurance level per component and for the service
5. Evaluate: evaluate the assurance status of the service based on the aggregated values and initiate learning
6. Monitor: monitor the assurance level for the service and provide comparison
The methodology distinguishes an OFF LINE process from an IN LINE process.
The security cockpit displays real-time assurance information.
Five levels of assurance express increasing confidence.
A multi-agent platform and a centralized server provide a measurement infrastructure (implemented metrics) and multiple aggregation algorithms.
A formalized process transforms measured raw data into a normalized assurance level.
A dedicated security assurance model provides means to express telco-based service assurance needs.
[Model diagram: a managed DMZ aggregation {AL: 3} composed of a managed WebServer {AL: 3, Metric: 1}, an unmanaged MailServer {AL: 3, Trust: 1}, a managed Firewall {AL: 3, Metric: 2} with metrics RuleReview, PWCheck and NessusReport, a managed Authentication Service {AL: 3}, and unmanaged Apache SSL, HTTP engine and script engine components, linked by weighted aggregation edges (weights 0.1 to 1); the model spans Service, Element, Concept and Component layers]

Confidence increases with the assurance level (1 to 5):
Assurance Level 1: Rudimentary evidence for selected parts
Assurance Level 2: Regular informal evidence for important parts
Assurance Level 3: Frequent informal evidence for important parts
Assurance Level 4: Continuous informal evidence for large parts
Assurance Level 5: Continuous semi-formal evidence for entire system
5
BUGYO Methodology
Methodology: Model the service → Select metrics → Measure → Aggregate → Evaluate → Monitor
Realisation (Model the service): a dedicated security assurance model provides means to express telco-based service assurance needs.
[Model diagram: the service modeled as a weighted aggregation of managed and unmanaged infrastructure objects across Service, Element, Concept and Component layers, as on the previous slide]
6
Model the service: the goals
The main goals of the model are to:
Describe the system under observation
Reduce complexity
Describe the assurance composition
Document the results of measurements
Enable mechanizing the creative part
7
Model the service: fundamental concepts
We use two concepts from systems theory:
hierarchization, so that the system can be modelled with different levels of granularity
black boxes, to represent complex infrastructure objects that can be further investigated on demand.
These two mechanisms allow a fast initial deployment that is stepwise refined during operation.
8
Model the service: the model elements
The model elements:
The infrastructure object: an abstract class, specialized into managed and unmanaged infrastructure objects
The metric: associated many-to-many (* - *) with managed infrastructure objects
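The element types above can be sketched as data structures; this is a minimal illustrative sketch, assuming the class and field names, which are not taken from the project's code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    """A metric attached to a managed object; produces a normalized AL."""
    name: str
    assurance_level: int

@dataclass
class InfrastructureObject:
    """Abstract infrastructure object, specialized below."""
    name: str

@dataclass
class ManagedObject(InfrastructureObject):
    # Many-to-many association with metrics (the * - * in the model)
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class UnmanagedObject(InfrastructureObject):
    # Black box: no metrics, only a trust value
    trust: float = 1.0

web = ManagedObject("WebServer", metrics=[Metric("NessusReport", 3)])
mail = UnmanagedObject("MailServer", trust=1.0)
print(web.metrics[0].assurance_level)  # 3
```

The managed/unmanaged split mirrors the black-box concept from the previous slide: unmanaged objects carry only a trust value until they are investigated further.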
9
Model the service: choose the metrics
The model mentions metrics, attached to infrastructure objects.
The metrics are what compute the assurance value at their level.
The model enables aggregating the values to obtain a value at a higher level, up to the service level.
But what are metrics?
10
BUGYO Methodology (2/5)
Methodology: Model the service → Select metrics → Measure → Aggregate → Evaluate → Monitor
Realisation (Select metrics / Measure): a formalised process transforms measured raw data into a normalised assurance level.
11
How to build a normalized metric?
Security assurance is a measure of confidence.
Common Criteria:
– Scope: a broader scope gives more assurance
– Depth: more details investigated gives more assurance
– Rigor: more formalism gives more assurance
Other criteria:
– Quantity: more evidence gives greater confidence
– Timeliness: more recent versions are bound to find more problems
– Reliability: higher reliability of the collector gives better confidence
[Chart: confidence approaching full confidence as the assurance level rises from 0 to 5]
12
Security assurance taxonomy
We derived the following 5 security assurance levels:
– An odd number of levels makes a medium level possible
– CC has 7 levels (but levels 1 and 7 make little sense in an operational context)
– Only 3 levels would not have provided enough granularity

Level | Definition | Interpretation/Application
Level 1 | Rudimentary evidence for selected parts | Carrier/large enterprise infrastructure service: basic security assurance
Level 2 | Regular informal evidence for important parts | Carrier/large enterprise infrastructure service: medium security assurance
Level 3 | Frequent informal evidence for important parts | Carrier/large enterprise infrastructure service: high assurance
Level 4 | Continuous informal evidence for large parts | Critical infrastructure service security assurance
Level 5 | Continuous semi-formal evidence for entire system | Governmental/defence infrastructure service security assurance
13
Assurance classes
Family
1 2 3 4 5
CLASS SM: Service Model
SM_VU: Absence of relevant vulnerabilities 1 1 2 2 3
SM_OR: Unmanaged/managed objects ratio
1 2 2 3 4
CLASS MC: Metric Construction
MC_SC: Scope 1 2 2 3 4
MC_DE: Depth 1 1 2 2 3
MC_RI: Rigor 1 2 2 2 3
MC_RE: Reliability of metric 1 2 2 2 3
MC_TI: Timeliness 1 2 3 3 3
MC_FR: Frequency 1 2 3 4 4
MC_SA: Stability 1 2 2 2 3
CLASS MM: Maintenance management
MM_PM: Probe maintenance 1 1 2 2 2
MM_OM Infrastructure object model maintenance 1 1 2 2 2
Class Level
14
Some examples of families and classes
We followed the Common Criteria formalism to represent classes and families:
Families are embedded in classes
Each family has a description, dependencies and its components
For instance, the SM class (Service Model) refers to families dealing with the way the service is modelled:

Family | Description
SM_VU: Absence of relevant vulnerabilities | Relevant vulnerabilities should not be present in the controlled system.
SM_OR: Unmanaged/managed object ratio | Fewer unmanaged objects provide higher confidence in the assurance expression.

[Diagram: the SM (Service Model) class containing family SM_VU with components 1-3 and family SM_OR with components 1-4]
15
How to concretely produce an assurance level
At the infrastructure object's level, the metric is a process that gathers raw data from the observed system and derives a normalized assurance level, based on the taxonomy.
Decompose the process to help build metrics based on COTS tools:
– Measuring => produces "raw data": a base measure (ISO 27004 definition)
– Interpreting the base measure => produces a derived measure (ISO 27004 definition)
– Normalising the derived measure => produces a normalized discrete AL
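The three-step decomposition above can be sketched as a small pipeline; this is an illustrative sketch only, where the sample raw data, the thresholds and the function names are assumptions, not part of the BUGYO specification:

```python
def measure() -> list:
    """Measuring: produce raw data, i.e. a base measure (ISO 27004 vocabulary)."""
    # e.g. the raw findings returned by a COTS scanner (sample data)
    return ["vuln-a", "vuln-b"]

def interpret(base_measure: list) -> int:
    """Interpreting: produce a derived measure, here a vulnerability count."""
    return len(base_measure)

def normalise(derived_measure: int) -> int:
    """Normalising: map the derived measure onto a discrete AL (hypothetical bounds)."""
    if derived_measure == 0:
        return 5  # no findings: highest assurance
    if derived_measure <= 2:
        return 3
    return 1      # many findings: lowest assurance

al = normalise(interpret(measure()))
print(al)  # 3 for the two sample findings
```

The point of the decomposition is that only the last step (normalising) needs to know about the assurance taxonomy; the first two steps can be delegated to off-the-shelf tools.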
16
Normalisation: example of transformation
Create an AL2-capable metric based on the Nessus tool:

Scope
AL1: 2 domain plug-ins are used (enumeration of the plug-ins); 1 handcrafted probe is used
AL2: 5 domain plug-ins are used (enumeration of the plug-ins); 1 handcrafted metric is used
Timeliness
AL1: nothing
AL2: the most recent plug-ins are installed (difference in last update made); the latest version of the scanner is installed (difference in age); NMAP in its latest version is used
Frequency
AL1: a scan is performed once a month
AL2: a scan is performed once a week
Result-specific
AL1: max. 4 serious and 10 non-serious vulnerabilities are found
AL2: max. 2 serious and 5 non-serious vulnerabilities are found
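The result-specific rows above can be read as nested bounds; a minimal sketch of that reading, assuming the function name and the fallback-to-zero behaviour, which the slide does not specify:

```python
def result_specific_al(serious: int, non_serious: int) -> int:
    """Map a scan result onto the result-specific AL from the table."""
    if serious <= 2 and non_serious <= 5:
        return 2  # AL2: max. 2 serious and 5 non-serious vulnerabilities
    if serious <= 4 and non_serious <= 10:
        return 1  # AL1: max. 4 serious and 10 non-serious vulnerabilities
    return 0      # too many findings for any assurance claim (assumption)

print(result_specific_al(serious=1, non_serious=3))   # 2
print(result_specific_al(serious=3, non_serious=8))   # 1
print(result_specific_al(serious=6, non_serious=20))  # 0
```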
17
BUGYO Methodology
Methodology: Model the service → Select metrics → Measure → Aggregate → Evaluate → Monitor
Realisation (Aggregate): a multi-agent platform and a centralised server provide a measurement infrastructure (implemented metrics) and multiple aggregation algorithms.
18
Aggregation: algorithm comparison
Proposed operational aggregation algorithms:

Criterion | Min | Max | Weighted sum
Simplicity | Very simple | Very simple | More complex, but does not require powerful computation
Ability to indicate changes | Only below min | Only above max | Indicates any change
Main advantage | Represents exact assurance | Represents best effort of the operator | Can monitor any minor change in the infrastructure
Main constraint | Requires homogeneous metrics and distribution | Requires homogeneous metric levels and distribution | Needs rigorous weight assignment when building the model
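The three algorithms compared above can be sketched in a few lines; the component levels and weights below are illustrative sample values, not taken from the project:

```python
def aggregate_min(levels):
    """Min: represents exact assurance, only reacts below the minimum."""
    return min(levels)

def aggregate_max(levels):
    """Max: represents the operator's best effort, only reacts above the maximum."""
    return max(levels)

def aggregate_weighted(levels, weights):
    """Weighted sum: reacts to any change, but needs rigorous weight assignment."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return round(sum(l * w for l, w in zip(levels, weights)))

levels = [3, 3, 2]            # sample per-component assurance levels
weights = [0.4, 0.4, 0.2]     # sample model weights
print(aggregate_min(levels))                # 2
print(aggregate_max(levels))                # 3
print(aggregate_weighted(levels, weights))  # round(2.8) = 3
```

The example shows the trade-off from the table: min and max hide the change in the third component, while the weighted sum moves (before rounding) as soon as any input moves.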
19
BUGYO Methodology
Methodology: Model the service → Select metrics → Measure → Aggregate → Evaluate → Monitor
Realisation (Evaluate): five levels of assurance express increasing confidence, from Level 1 (rudimentary evidence for selected parts) up to Level 5 (continuous semi-formal evidence for the entire system).
20
Evaluation
The process of comparing the automatically computed real-time assurance levels against expected values (nominal versus actual value).
Aims at supporting the decision maker
Does not only compare the top level (i.e. the service assurance level)
For more advanced analysis, it can rely on two kinds of evaluation rules:
Thresholds
– e.g. check that the assurance value of an IO of major interest does not fall below a predetermined value
Complex
– Manage some "correlation" between interdependent IOs
– Define patterns (groups of IOs) that should follow some evolution rules
– e.g. the mean of 2 IOs is constant but the 2 values are diverging
– e.g. smooth results over time to detect trends
Based on those evaluation rules, raise alerts.
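A threshold rule of the kind described above can be sketched as follows; the object names, the dictionary representation and the alert format are assumptions for illustration:

```python
def evaluate_thresholds(actual: dict, nominal: dict) -> list:
    """Raise an alert for each IO whose actual AL falls below its nominal AL."""
    alerts = []
    for io_name, expected_al in nominal.items():
        if actual.get(io_name, 0) < expected_al:
            alerts.append(f"ALERT: {io_name} below nominal AL {expected_al}")
    return alerts

# Sample real-time values versus expected (nominal) values
actual = {"Firewall": 3, "WebServer": 2, "AuthService": 3}
nominal = {"Firewall": 3, "WebServer": 3, "AuthService": 3}
for alert in evaluate_thresholds(actual, nominal):
    print(alert)  # ALERT: WebServer below nominal AL 3
```

The complex rules (correlation between interdependent IOs, trend smoothing) would build on the same comparison but over histories of values rather than single snapshots.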
21
BUGYO Methodology
Methodology: Model the service → Select metrics → Measure → Aggregate → Evaluate → Monitor
Realisation (Monitor): the security cockpit displays real-time assurance information.
22
Details on the cockpit
Monitoring: real-time indication of the security assurance level, for the service and for infrastructure objects
Measurement: detailed information per metric, on infrastructure objects
Assistance: provides support to maintain the assurance level, generates specific alarms, edits reports
23
Demonstrator scenario: VoIP architecture
[Architecture diagram elements: IMS core, LAN, xDSL, DSLAM, SBC, HSS, AS, I-CSCF, P-CSCF, MRFC, DNS, IP network, WAP]
IMS simulator + IPBX real deployment
BUGYO demonstrator
24
Next step, BUGYO Beyond: kick-off planned for Q3
25
Strategic aspects to address in ICT Security & Trust:
Tools to support risk & trust management
Metrics and tools to measure and improve the effectiveness of infrastructure security
Top-down approach for holistic security improvement
Business-adjusted risk management vs. technology-driven improvement
Replace nonstop crisis response with systematic security improvement
For more information, or to join the BUGYO BEYOND advisory board,
contact me: [email protected]