How to Build a Metrics-optimized Software Delivery Pipeline


COMPANY CONFIDENTIAL – DO NOT DISTRIBUTE #APMLive

Building a Metrics-Optimized Pipeline

Andi Grabner, Performance Advocate, @grabnerandi

Andrew Phillips, VP DevOps Strategy, XebiaLabs

Today’s agenda

1. Today’s Unicorn
   • Recap from Velocity & PERFORM 2015
   • Why we don’t have to be like Facebook
2. Why Unicorns Excel?
   • Speed through automation
   • Quality through metrics
3. Metrics and approach
   • Service, business & user metrics
   • Pre-Prod: from integration and load tests
   • Prod: from real users
4. Dynatrace and XL Release by XebiaLabs
   • Building a metric-driven pipeline

Unicorn Status

• 700 deployments / year
• 10+ deployments / day
• 50–60 deployments / day
• A deployment every 11.6 seconds

• Waterfall to agile: 3 years
• 220 apps – 1 deployment per month
• “EVERY manual tester does automation”
• “We don’t log bugs. We fix them.”
• Measures are built in & visible to everyone
• Promote your wins! Educate your peers.
• EVERYONE can do continuous delivery.

CHALLENGE

“Deploy Faster!!”

Fail Faster!

It’s not about blind automation that pushes more bad code through a shiny pipeline.

Example #1: Public website based on SharePoint
• 879! SQL queries
• 8! Missing CSS & JS files
• 340! Calls to GetItemById

Example #2: Migrated to (micro)services
• 26.7s execution time
• 33! Calls to the same web service
• 171! SQL queries through LINQ by this web service – requesting similar data on each call
• Architecture violation: direct access to the DB from frontend logic

Key App Metrics
• # SQL statements (INSERT, UPDATE, DELETE, …)
• # log messages
• # API calls (Hibernate, …)
• # exceptions
• Execution time

Level up from doing . . . this . . . to this!

• SQL
• Object allocation
• Bytes transferred
• Exceptions
• Service calls
• JavaScript

Quality Metrics in Continuous Delivery

What you currently measure:
• # test failures
• Overall duration

What you should measure:
• Execution time per test
• # calls to API
• # executed SQL statements
• # web service calls
• # JMS messages
• # objects allocated
• # exceptions
• # log messages
• # HTTP 4xx/5xx
• Request/response size
• Page load/rendering time
• …
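The per-test measures above can be captured as a small record plus a threshold check. This is a minimal sketch: the TestMetrics class, the THRESHOLDS table, and all limit values are hypothetical illustrations, not a Dynatrace or XL Release API.

```python
from dataclasses import dataclass

@dataclass
class TestMetrics:
    """Architectural metrics captured while one test runs (hypothetical record)."""
    test_name: str
    sql_statements: int     # executed SQL statements
    api_calls: int          # calls to APIs such as Hibernate
    exceptions: int         # exceptions thrown
    execution_time_ms: float

# Hypothetical per-test limits -- in practice you would derive these
# from your own baseline, not hard-code them.
THRESHOLDS = {"testPurchase": TestMetrics("testPurchase", 15, 20, 0, 150.0)}

def violations(measured: TestMetrics) -> list:
    """Return the names of metrics that exceed this test's allowed limits."""
    limit = THRESHOLDS[measured.test_name]
    problems = []
    for field in ("sql_statements", "api_calls", "exceptions", "execution_time_ms"):
        if getattr(measured, field) > getattr(limit, field):
            problems.append(field)
    return problems

run = TestMetrics("testPurchase", sql_statements=75, api_calls=3,
                  exceptions=0, execution_time_ms=230.0)
print(violations(run))  # → ['sql_statements', 'execution_time_ms']
```

The point is that a test can "pass" functionally while still violating architectural limits, which is exactly what the threshold check surfaces.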

Measures from your Tests in Action

Test & monitoring framework results alongside architectural data, build by build:

Build   Test Case      Status    # SQL   # Excep   CPU
17      testPurchase   OK        12      0         120ms
17      testSearch     OK        3       1         68ms
18      testPurchase   FAILED    12      5         60ms
18      testSearch     OK        3       1         68ms
19      testPurchase   OK        75      0         230ms
19      testSearch     OK        3       1         68ms
20      testPurchase   OK        12      0         120ms
20      testSearch     OK        3       1         68ms

Let’s look behind the scenes:
• Build 18: we identified a regression – the exceptions are the probable reason for the failed test.
• Build 19: problem solved functionally, but now we have an architectural regression (75 SQL statements instead of 12).
• Build 20: back to 12 SQL statements – now we have both functional and architectural confidence.
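The build-by-build reasoning above can be automated. This sketch encodes the table and flags both kinds of regression; the "SQL count doubled" heuristic is an assumption for illustration, not a product rule.

```python
# Build-by-build results from the table above: test -> (status, # SQL, # exceptions, CPU ms)
builds = {
    17: {"testPurchase": ("OK", 12, 0, 120), "testSearch": ("OK", 3, 1, 68)},
    18: {"testPurchase": ("FAILED", 12, 5, 60), "testSearch": ("OK", 3, 1, 68)},
    19: {"testPurchase": ("OK", 75, 0, 230), "testSearch": ("OK", 3, 1, 68)},
    20: {"testPurchase": ("OK", 12, 0, 120), "testSearch": ("OK", 3, 1, 68)},
}

def regressions(prev, curr):
    """Compare two builds; flag new functional failures and SQL-count jumps."""
    flags = []
    for test, (status, sql, _exc, _cpu) in curr.items():
        p_status, p_sql, *_ = prev[test]
        if status == "FAILED" and p_status == "OK":
            flags.append((test, "functional regression"))
        elif status == "OK" and sql > 2 * p_sql:  # assumed heuristic: SQL count doubled
            flags.append((test, "architectural regression"))
    return flags

for n in (18, 19, 20):
    print(n, regressions(builds[n - 1], builds[n]))
```

Run against the table, build 18 is flagged as a functional regression, build 19 as an architectural one, and build 20 is clean – matching the story above.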

One goal: deliver better features to customers faster
Two fundamental components: speed + quality

When, and how, should we measure?

Not just in production!

Measure as early as possible in your development and delivery process

Fast feedback is both more effective and cheaper at identifying and fixing problems

Reuse what you already have: convert your automated “functional” tests into architectural, performance and scalability validation tests.

When, and how, shall we measure?

• Build/CI Analysis: service-level metrics
• Integration & Perf Analysis: service-level and business-application metrics
• User Analysis: service-level, business-application and user-experience metrics

Feedback Loop: findings from every stage flow back into development.
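The staged feedback loop can be sketched as an ordered list of quality gates. The stage names come from the slide above; the gate functions and their pass/fail outcomes are invented for illustration.

```python
def run_pipeline(gates):
    """Run stages in order; stop and feed back at the first failing quality gate."""
    for stage, gate in gates:
        if not gate():
            return f"feedback: fix issues found in {stage}"
    return "promoted to production"

# Hypothetical gate results for one pipeline run
gates = [
    ("Build/CI Analysis", lambda: True),             # service-level metrics OK
    ("Integration & Perf Analysis", lambda: False),  # business-app metric regressed
    ("User Analysis", lambda: True),
]
print(run_pipeline(gates))  # stops at Integration & Perf Analysis
```

The design point: each stage only runs once the cheaper, earlier gate has passed, which is what makes early measurement more effective and cheaper.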

OK, OK, …how do I get started?


Metric-Driven Continuous Delivery with Dynatrace & XL Release by XebiaLabs

Orchestrating your delivery pipeline with XL Release
• XebiaLabs XL Release is a pipeline orchestrator
• It allows you to define, execute, track and improve all the tasks in your delivery pipeline
• All = automated + manual, technical + process-oriented
• Insight, visibility and control across people and tools

XL Release and Dynatrace

XL Release by XebiaLabs allows you to integrate Dynatrace quality metrics into your overall Continuous Delivery pipeline:
• Automatically verify architectural quality during your integration testing
• Trigger and review performance monitoring results
• Automatically register releases in Dynatrace to allow for “before/after” comparisons of user behavior
• Deliver better software and close the Continuous Delivery feedback loop!

Quality metrics in your unit & integration tests

Architectural Quality Metrics from Unit & Integration Tests

#1: Analyzing every unit & integration test

#2: Metrics for each test

#3: Detecting regressions based on measures

Unit & integration tests are auto-baselined! Regressions are auto-detected!
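Auto-baselining can be approximated with a simple rolling statistic. This sketch flags a metric that jumps above the mean of recent builds; the 20% tolerance and the sample values are assumptions, not the actual baselining algorithm of any product.

```python
from statistics import mean

def is_regression(history, current, tolerance=0.2):
    """Auto-baseline sketch: flag `current` if it exceeds the historical mean
    of recent builds by more than `tolerance` (20% by default, an assumed value)."""
    baseline = mean(history)
    return current > baseline * (1 + tolerance)

sql_counts = [12, 12, 13, 12]          # # SQL for testPurchase in recent builds
print(is_regression(sql_counts, 75))   # → True  (jump well above baseline)
print(is_regression(sql_counts, 12))   # → False (within normal variation)
```

Because the baseline is learned from history, nobody has to hand-maintain thresholds per test – the pipeline simply flags builds that deviate.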

Build-by-Build Quality View

Build quality overview in Dynatrace & your CI server (e.g. Jenkins)

Beyond unit & integration tests

• Architectural quality metrics from unit & integration tests
• Performance metrics from load tests
• Deployment marker to support production monitoring

Load tests: finding hotspots per test easily

#1: Analyze load testing results by timer name, script name, …
#2: Which TIERS have a problem?
#3: Is it the DATABASE?
#4: How HEALTHY are the JVM and the host?
#5: Do we have any ERRORS?
#6: LOAD vs. RESPONSE time over time?

Load tests: we can compare two runs

Did we get better or worse? DB, web requests, API, …
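A two-run comparison boils down to percent change per category. The categories mirror the slide (DB, web requests, API); the response-time numbers are invented for illustration.

```python
def compare_runs(baseline, current):
    """Percent change per category between two load-test runs
    (positive = slower/worse, negative = faster/better)."""
    return {cat: round(100.0 * (current[cat] - baseline[cat]) / baseline[cat], 1)
            for cat in baseline}

# Hypothetical average response times in ms for two load-test runs
run_a = {"DB": 450.0, "Web Requests": 1200.0, "API": 300.0}
run_b = {"DB": 600.0, "Web Requests": 1100.0, "API": 300.0}
print(compare_runs(run_a, run_b))  # → {'DB': 33.3, 'Web Requests': -8.3, 'API': 0.0}
```

Here the database layer got 33% slower while web requests improved slightly – exactly the kind of "better or worse?" answer the comparison slide is after.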

Don’t stop at Go Live!

Production data: real-user & application monitoring

Physical, Virtual or Containers

A metric-driven Continuous Delivery pipeline

• Architectural quality metrics from unit & integration tests
• Performance metrics from load tests
• Deployment marker to support production monitoring
• Review user behavior through UEM data

Start Here

Level Up with the Dynatrace Free Trial & Personal License: http://bit.ly/dtpersonal

Download XL Release from XebiaLabs

Automate, orchestrate and get visibility into your release pipelines - at enterprise scale!

Start a Free 30-Day Trial Today! http://bit.ly/TryXLRelease

Download the XebiaLabs XL Release Dynatrace Plugin - GitHub

Integrate with all of your existing tools

Putting it all together

Summary

Continuous Delivery = Speed + Quality

3 levels of metrics: service, business and user

Dynatrace can already collect these metrics for you

Get started today: incorporate quality metrics into your release pipeline straight away using tools like XL Release by XebiaLabs

One goal: deliver better features to customers fasterTwo fundamental components: speed + quality

Resources

• Check out the XebiaLabs & Dynatrace blogs and plugins
• Test drive the Dynatrace Personal License: http://bit.ly/dtpersonal
• XL Release guide: http://bit.ly/XLGuide
• Try XL Release: http://bit.ly/TryXLRelease
• XL Release Dynatrace plugin (GitHub): http://bit.ly/XLDynatrace

Thank You!

Time for Q & A

Andrew Phillips, VP DevOps Strategy, XebiaLabs – http://blog.xebialabs.com/
Andi Grabner, @grabnerandi – http://blog.dynatrace.com

Connect with us!
• Participate in our forum: community.dynatrace.com
• Like us on Facebook: facebook.com/dynatrace, facebook.com/xebialabs
• Follow us on LinkedIn: linkedin.com/company/dynatrace, linkedin.com/company/xebialabs
• Follow us on Twitter: twitter.com/dynatrace, twitter.com/xebialabs
• Watch our videos & demos: youtube.com/dynatrace, youtube.com/xebialabs
• Read our blogs: application-performance-blog.com, blog.xebialabs.com
