SplunkLive! Stockholm 2016 - iZettle


Copyright © 2015 Splunk Inc.

Splunk at iZettle
Johannes Löfgren, Head of DevOps


Johannes Löfgren - Head of DevOps and Infrastructure


The challenge…

Show me a PCI-DSS-compliant centralised logging solution in five weeks!


Why Splunk?

• PCI-DSS: Payment Card Industry Data Security Standard

• iZettle’s first PCI-DSS audit was in Q2 2012

• Starting point: local logs on around 10 backend servers

• Before the audit deadline: prove our control of operations and security using our centralised log solution


Starting out

• Daily report email

• Scheduled alerts (email and SMS)

• File integrity monitoring

• Learn basic search skills (a minimal sketch follows)
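A minimal sketch of the kind of basic search these scheduled alerts were built on; the index, sourcetype, and field names here are hypothetical, not from the talk:

index=backend sourcetype=app_log log_level=ERROR earliest=-15m@m latest=@m
| stats count by host

Saved as a scheduled search, this can alert by email or SMS whenever the error count for any host crosses a chosen threshold.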


Starting out

[Diagram: file integrity monitoring and the daily report email]

< 1s: expected result of an automated deploy

90 minutes: further investigation needed
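A sketch of what the file integrity search could look like, assuming Splunk’s built-in file system change monitor (the fschange input, sourcetype fs_notification); the slides do not show the actual setup:

index=main sourcetype=fs_notification earliest=-24h@h
| table _time path action
| sort -_time

Listing recent file changes this way makes it easy to check whether they line up with an automated deploy or need further investigation.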


iZettle expansion

All backend systems logging to Splunk:

2011: one market, monolithic backend, single-location traditional hosting

2013: multiple markets on three continents, distributed backend, hybrid cloud infrastructure


Splunk at iZettle Today

Usage: 50% of total implemented alerts; 80+ users; all backend services log to Splunk

Teams: Support, Security, Development, QA, Operations

Benefits: easy to scale; easy to move; search across multiple services; adapt alert triggers to trends; file integrity monitoring (FIM)


Follow the trend - example

[Chart: weekly and daily trend]


Follow the trend - example

The _internal index tracks logged bytes per source:

earliest=-1h@h latest=@h index=_internal source=*license_usage.log type=Usage s=merchant-reports
| eval MB=b/1024/1024
| stats sum(MB) as last


Follow the trend - example

Run a subsearch for the same hour seven days ago (7 × 24 h = 168 h, so the window -169h@h to -168h@h is the same clock hour one week earlier) and append its result as a column:

earliest=-1h@h latest=@h index=_internal source=*license_usage.log type=Usage s=merchant-reports
| eval MB=b/1024/1024
| stats sum(MB) as last
| appendcols
    [ search earliest=-169h@h latest=-168h@h index=_internal source=*license_usage.log type=Usage s=merchant-reports
      | eval MB=b/1024/1024
      | stats sum(MB) as comparator ]


Follow the trend - example

Calculate the percentage difference and add explanatory labels:

earliest=-1h@h latest=@h index=_internal source=*license_usage.log type=Usage s=merchant-reports
| eval MB=b/1024/1024
| stats sum(MB) as last
| appendcols
    [ search earliest=-169h@h latest=-168h@h index=_internal source=*license_usage.log type=Usage s=merchant-reports
      | eval MB=b/1024/1024
      | stats sum(MB) as comparator ]
| eval percent_change=100*(last/comparator)-100
| rename last as "MB Latest hour", comparator as "MB same hour, 7 days ago"


Follow the trend - example

The _internal index is lightweight to search.

What to do with this?

• Create an alert that triggers on a positive and a negative threshold of percent_change (see the sketch below)

• Generic enough to suit any system
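A minimal sketch of that alert condition, appended to the final search above; the ±50% threshold is an assumed value, not from the slides:

| where abs(percent_change) > 50

Saved as an hourly scheduled search, this fires whenever the latest hour deviates more than 50% in either direction from the same hour a week earlier.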


Key lessons

• Get all your systems into the logging service so you can cross-search them

• Take a generic anomaly approach to alerts

• Make use of what’s already summarised for lightweight searching

• Use dynamic alert thresholds (a sketch follows)
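One possible shape of a dynamic threshold, reusing the license-usage data from the trend example; the 30-day baseline and the two-standard-deviation band are illustrative assumptions, not from the talk:

index=_internal source=*license_usage.log type=Usage s=merchant-reports earliest=-30d@h latest=@h
| bin _time span=1h
| stats sum(eval(b/1024/1024)) as MB by _time
| eventstats avg(MB) as avg_MB stdev(MB) as stdev_MB
| eval upper=avg_MB+2*stdev_MB, lower=avg_MB-2*stdev_MB
| where _time>=relative_time(now(), "-1h@h") AND (MB>upper OR MB<lower)

The alert then fires only when the latest hour falls outside the band, so the threshold follows the trend instead of sitting at a fixed number.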

Thank You