Strategy for Continuous Testing in iDevOps
Peter Zimmerer
https://www.linkedin.com/in/peter-zimmerer/
http://www.xing.com/profile/Peter_Zimmerer
Siemens AG
#EuroSTARConf
Strategy for Continuous Testing in iDevOps
Peter Zimmerer
Siemens Corporate Technology
Unrestricted © Siemens AG 2018
Contents
Context and motivation
Continuous testing defined
Specifics of iDevOps
Testing areas – What’s new and different?
Prerequisites and preconditions
Mindset by example
Enablers
Practices and experiences
Recommendations
Why & What?
How?
Framework for Development Efficiency and industrial-grade DevOps programs
Definitions (1)
Continuous testing
Design Develop Integrate Test Release Deploy Operate
Continuous feedback on quality
Shift left Shift right
Continuous testing means
• testing over the whole life cycle by shift left and shift right,
i.e. testing continuously from beginning to end
Definitions (2)
Continuous testing
Continuous testing means
Design Develop Integrate Test Release Deploy Operate
Continuous feedback on quality
Shift left Shift right
Automation
• testing over the whole life cycle by shift left and shift right,
i.e. testing continuously from beginning to end
• strategic test automation,
i.e. continuously reuse & adapt & execute (only the) needed tests in an intelligent manner
Adaptive test architecture
Specifics of iDevOps (industrial-grade DevOps)
• Constraints and obstacles w.r.t. (execution) time, resources, consistency, integrity, testability, security,
performance, safety, certification, regulations, standards, etc.
• System of systems with high technical and business complexity, huge variety of business models and
stakeholders
• Hardware, embedded, distributed, multi-versions, configurations, legacy, heterogeneous IoT environments
• QoS guarantees, SLAs: security, safety, durability, lifetime, compatibility, standards, certification, 24/7 operability
• Frequent deployment and update of products and solutions from cloud to embedded edge at run time
without degradation of availability and reliability
• Non-intrusiveness, recoverability, roll-back
• Defect detection, isolation, diagnosis, self-repair
• Test environment provisioning and configuration
• Testing in production: anticipate and control side effects
• Test data
Specifics of iDevOps require new, innovative test approaches
Enterprise IT DevOps: failure accepted with fast repair; best-effort quality
Industry DevOps: fail safe or fail operational with fast repair; assured and certified quality
Testing areas – What’s new and different? (1)
• (Automated) Tests for functionality, integrity, consistency, performance, security, NFRs, regression, etc.
integrated in the whole deployment pipeline: automated tests gating code
• Quality gates explicitly defined and efficiently implemented
→ faster feedback from tests (up to production)
• New test strategy
• New test objects: environment, infrastructure (as code), deployment pipeline
• New test levels, new test types:
shift left, shift right, adapted test automation pyramid and agile testing quadrants
• New test / quality roles: Test (Environment) Architect, DevOps Test Engineer, DevOps Quality Engineer
• New test architectures: adaptive, cloud-based, more simulation, virtual integration, digital twin
• More (Service, Network, OS, etc.) Virtualization, virtual assets
• More API testing, service level testing
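The explicitly defined quality gates mentioned above can be sketched as a small automated check that decides whether a commit may move to the next pipeline stage. This is a minimal illustration; the field names and thresholds are assumptions, not Siemens' actual pipeline code.

```python
# Minimal sketch of an explicit, automated quality gate
# (illustrative names and thresholds).
from dataclasses import dataclass


@dataclass
class StageResult:
    tests_passed: int
    tests_failed: int
    coverage: float         # statement coverage, 0.0 .. 1.0
    security_findings: int  # open findings above the accepted severity


def gate_passes(result: StageResult,
                min_coverage: float = 0.80,
                max_security_findings: int = 0) -> bool:
    """Gate the commit: only fully green, sufficiently covered,
    clean code moves to the next pipeline stage."""
    return (result.tests_failed == 0
            and result.coverage >= min_coverage
            and result.security_findings <= max_security_findings)


# Example: a commit with one failing test is blocked.
blocked = StageResult(tests_passed=120, tests_failed=1,
                      coverage=0.85, security_findings=0)
promoted = StageResult(tests_passed=121, tests_failed=0,
                       coverage=0.85, security_findings=0)
```

Making the gate a plain function keeps it testable itself and lets each stage of the pipeline reuse it with different thresholds.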
Testing areas – What’s new and different? (2)
• New technologies
• Container technologies
• Testing of microservices
• (Consumer-driven) Contract testing: e.g. with Pact (https://github.com/realestate-com-au/pact),
Spring Cloud Contract (https://cloud.spring.io/spring-cloud-contract/)
• Service (integration) testing
• Comprehensive logging and tracing approach for defect isolation and root cause analysis
• New test objects and test environments
• Testing the deployment pipeline: each step individually, integration between steps, failure (e.g. interruption)
between steps, rollback and recovery, state of product and environment after each deployment step
• Testing the infrastructure as code: same types of test automation as product (“just another test pyramid”)
• E.g. with Test Kitchen (http://kitchen.ci/), ServerSpec (http://serverspec.org/)
• Test environment provisioning and configuration: self-service test environments on demand
Toby Clemson, ThoughtWorks: Testing Strategies in a Microservice Architecture
• Installation testing, configuration testing, interoperability testing: automated by infrastructure as code
• Testing in production
• Blue-green deployment, canary releasing, dark launching, feature flags, feature toggling, staging
• A/B testing: controlled experiments in the live system
• Chaos monkey (destructive testing), Failure Friday
• UX testing: real user experience testing and monitoring
• Monitoring (and logging and alerting and analytics) is the new testing
• Test reporting: more direct, better, automated, faster feedback and transparency
• Real-time live dashboards and heatmaps
• E.g. Hygieia delivery pipeline dashboard (https://github.com/capitalone/Hygieia)
• Operational intelligence, big data technologies, data analytics
• Closing the feedback loop across the entire iDevOps life cycle
• Creating actionable feedback based on data analytics
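The consumer-driven contract idea listed above can be sketched without any framework: the consumer publishes its expectations of the provider's response shape, and the provider's build verifies them. In practice a tool such as Pact records and replays this; all names here are illustrative.

```python
# Framework-free sketch of consumer-driven contract testing
# (Pact would normally record and verify this; names are illustrative).

# 1. The consumer publishes its expectation of the provider's response.
consumer_contract = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response_fields": {"id": int, "status": str},
}


# 2. A stand-in for the provider's real request handler.
def provider_handler(path: str) -> dict:
    order_id = int(path.rsplit("/", 1)[-1])
    return {"id": order_id, "status": "shipped"}


# 3. The provider's build verifies the contract against the handler.
def verify_contract(contract: dict, handler) -> bool:
    response = handler(contract["request"]["path"])
    return all(isinstance(response.get(field), expected_type)
               for field, expected_type in contract["response_fields"].items())
```

The key point is the direction of the dependency: the consumer defines the contract, and a breaking change on the provider side fails the provider's own build, before any integrated environment is needed.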
Testing areas – What’s new and different? (3)
Prerequisites and preconditions
• Source code management SCM
• (Binary) Repository management
• Continuous integration (CI)
• Deployment pipeline: continuous deployment (CD)
• Gated check-in
• Automated regression testing
• Culture, collaboration
• Mindset
• Examples of potential conflicts, misalignment, and new priorities
that require a cultural change and different behavior for iDevOps
Mindset by example
Requirements engineering (RE) and features
• More just-in-time approach for requirements, reduce upfront planning processes
• Limit the investment in requirements just to the level of detail required
• Change the focus from
optimizing the system for accuracy in plans
to
optimizing it for throughput of value for the customer
• Prioritize keeping the code base stable over creating new features
• Developers implement requirements / features in smaller batches – no code stays checked out overnight
Mindset by example
Continuous integration (CI)
• Continuous integration (CI) vs. individual branches
• Build and sustain an infrastructure for CI
• Enforce trunk-based development – “minimal branching”
• Developers check in their code to a single trunk on a regular basis (every day, no code left checked out overnight)
instead of allowing it to stay out on branches and in dedicated environments
• Ensure everyone is committed to keeping the build green as their top priority
• Test automation (frameworks, harness, infrastructure) is hooked up to the build as top priority
• No commit of code is done without appropriate automated tests gating code
Mindset by example
Test approach
• Incorporate performance, security, operations, NFRs into the entire software delivery lifecycle
from the beginning (shift left, across silos): “DevTestOps”, “DevPerfSecTestOps”, “DevXXXOps”
• Invest in simulators and virtualization: define, design, create, maintain, sustain, improve
• Invest in design for testability, test architecture, test infrastructure (as code)
• Shift to more frequent test cycles, shift to continuous test cycles
• Testing more frequently on smaller batches of changes
• Shift from the traditional
“cost of quality business cases”
to
“cost to achieve desired test velocity business cases”
Enablers (1)
• Agile + Lean + cross-functional empowered teams
• Value stream mapping of the deployment pipeline (goal: constant flow of high quality)
• Map out the current value stream to identify inefficiencies, constraints, bottlenecks and potentials for
improvement
• Analyze the current status of test activities (manual and automated) within the value stream
• Create a roadmap on how to manage constraints, eliminate (or even better: prevent) bottlenecks and
improve testing within the deployment pipeline
• Test policy and test strategy aligned to business goals of DevOps and continuous delivery
• Test-driven development approaches (TDD, ATDD, BDD, Specification by Example, etc.)
• Executable specifications, shift left
Enablers (2)
• Design for testability
• Software architectures and component designs that facilitate
• implementation of features in smaller batches:
more often check-ins enabled by fast feedback from tests
• early and continuous testing
• easy configurability, deployment, setup, tear down
• more frequent releases without (negative) impact to users
• testing in production without (negative) side effects to users:
e.g. by using test points, test probes, sandboxes, exposure control
• rollback scenarios and recovery
• Alignment between software / system architecture and test architecture
• Balancing between production code and test (infrastructure) code
Key factors of testability
• Control(lability)
• Visibility / observability
to be realized by well-defined
• Control points
• Observation points
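The control and observation points named above can be sketched as designed-in hooks: a control point lets a test force a state deterministically, and an observation point exposes internal behavior without touching production interfaces. The component and its members are hypothetical.

```python
# Sketch of design for testability via control and observation points
# (illustrative component, not from the talk).

class PaymentService:
    def __init__(self):
        self._clock = None          # control point: injectable time source
        self.observed_events = []   # observation point: event trace for tests

    def set_clock(self, clock) -> None:
        """Control point: a test can force 'now' deterministically."""
        self._clock = clock

    def charge(self, amount: float) -> str:
        timestamp = self._clock() if self._clock else 0
        # Observation point: record what happened for later inspection.
        self.observed_events.append(("charge", amount, timestamp))
        return "accepted" if amount > 0 else "rejected"


# In a test, the clock is controlled and the behavior is observed:
service = PaymentService()
service.set_clock(lambda: 1700000000)
status = service.charge(9.99)
```

The same hooks that make unit tests deterministic also support testing in production: a sandboxed clock or traffic source is a control point, and the event trace feeds monitoring.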
Enablers (3)
• Test automation strategy based on an adaptive test architecture
• Automation and scripting to repeatedly build, package, test, and deploy software with limited
human intervention
• Test automation (frameworks, harness, infrastructure) is hooked up to the build:
automated tests gating code
• Testing more frequently on smaller batches of changes, lean regression testing
• Use simulators, emulators, virtualization
• Trunk-based development of test automation code – “minimal branching”
• Test automation pyramid and agile testing quadrants
• To be enforced, extended, and renewed for (consumer-driven) contract testing,
deployment testing, “ops testing”, testing in production, infrastructure testing, etc.
• Right mix of speed, frequency, execution environment (on demand, self service)
Enablers (4)
• Continuous testing means
• Test case selection and prioritization for regression testing
• Execution time of the individual test cases aligned to different test levels and test types and distributed
across the whole deployment pipeline is critical to get fast feedback
• Step 1: Select test cases based on change impact analysis and coverage measures (use code coverage
as a starting point)
• Step 2: Select test cases based on criteria like risk, frequency of usage, failure history, change history,
last execution, duration, etc. that are specified and tracked as attributes of each test case
• Step 3: Select test cases based on AI or ML algorithms (requires well-defined criteria from step 2 as input)
• Example: RETECS (Reinforced Test Case Selection) by Spieker et al.
• strategic test automation,
i.e. continuously reuse & adapt & execute (only the) needed tests in an intelligent manner
Helge Spieker, Arnaud Gotlieb, Dusica Marijan, Morten Mossige: Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration
ISSTA 2017 Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 12-22
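Step 2 above can be sketched as a simple scoring function over the attributes tracked per test case: higher risk, failure rate, and usage raise the priority, while long execution time lowers it so that fast feedback wins ties. The attribute names and weights are illustrative assumptions, not a prescribed formula.

```python
# Sketch of attribute-based test case prioritization (step 2 above);
# weights and attribute names are illustrative assumptions.

WEIGHTS = {"risk": 3.0, "failure_rate": 2.0, "usage": 1.0}


def priority(test: dict) -> float:
    """Higher score = run earlier; a duration penalty prefers quick
    feedback when risk-based scores are otherwise similar."""
    score = sum(weight * test[attr] for attr, weight in WEIGHTS.items())
    return score / (1.0 + test["duration_minutes"] / 10.0)


test_suite = [
    {"name": "login_smoke", "risk": 0.9, "failure_rate": 0.2,
     "usage": 0.9, "duration_minutes": 1},
    {"name": "report_export", "risk": 0.3, "failure_rate": 0.1,
     "usage": 0.2, "duration_minutes": 30},
]
ranked = sorted(test_suite, key=priority, reverse=True)
```

Such a scored ranking is also exactly the kind of well-defined input that step 3's AI/ML approaches (e.g. RETECS) need as features.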
Modeling technique CIViT and framework Cinders (Jan Bosch et al.)
• Overview of test activities and capabilities in a continuous integration system
• Test activities are mapped onto a two-dimensional diagram
• Axes represent the level of system completeness
at which the test operates versus feedback times
• The four areas represent four different types of tests
• F … New functionality (functional requirements)
• L … Legacy functionality
• Q … Quality attributes (quality requirements)
• E … Edge cases (unlikely or weird situations)
and their coloring represents their respective level of coverage
from red (0%) via orange (10% … 90%) to green (>90%)
• Icon border colors represent degree of automation
from red (0%) via orange (10% … 90%) to green (>90%)
CIViT: Continuous Integration Visualization Technique
Daniel Stahl, Jan Bosch: Cinders: The Continuous Integration and Delivery Architecture Framework
Information and Software Technology 83, Nov 2016
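The CIViT coloring rule described above is simple enough to state as code: a percentage (coverage or degree of automation) maps to red below 10%, orange from 10% to 90%, and green above 90%. The boundary handling is my reading of the slide's "red (0%) via orange (10% … 90%) to green (>90%)".

```python
# Sketch of the CIViT color mapping for coverage / degree of automation
# (boundary handling inferred from the slide's ranges).

def civit_color(percentage: float) -> str:
    if percentage < 10:
        return "red"
    if percentage <= 90:
        return "orange"
    return "green"
```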
Practices from Siemens (1)
CIViT diagram axes:
• LEAD TIME: immediate/minutes, hour, day, week, month, once per release
• SCOPE: component, subsystem, partial product, full product, release, customer
CIViT diagram indicates …
• which test types are executed
• how often are they executed
• their coverage
• their degree of automation
Practices from Siemens (2)
• BDD
• Test automation pyramid
• Agile testing quadrant adapted to the domain, e.g. “Deployment”
• New test environment group, new role test (environment) architect
• Defined test architecture based on constraints, driving forces, and quality attributes (deployability, modifiability, testability)
• Test simulators and emulators, test stubs
• The delivery pipeline
• structure is owned and governed
by a dedicated test environment team
content is owned
by the different products / subsystems
Automated tests are your primary line of defense in your aim to release high quality software more frequently
Recommendations (1)
• Realize new testing areas
• Understand and ensure prerequisites and preconditions
• Create and foster an adequate mindset and culture
• Make use of enablers and customize practices – be an active driver, stakeholder, advisor, coach, mentor
• Be aware that a transformation in the area of testing is needed to enable iDevOps
• This requires more than just a face lift in testing
• A lot of the “old” testing stuff is still valid and not obsolete:
e.g. risk-based testing, test design techniques, test patterns, and exploratory testing
• But often there is a stronger focus and higher priority:
e.g. API testing and service level testing
Recommendations (2)
• Integrate testing into the deployment pipeline for all quality attributes, e.g. functionality, security, performance,
compliance, etc.: automated tests gating code
• Regular infrastructure smoke tests and automated deployment test suite (“can be deployed”, “clean
installation is ok”, “can be updated / upgraded from previous versions”) to be reused after each deployment
• Distribute different classes of tests along the deployment pipeline and link them to the quality gates
• The earliest tests are the quickest and easiest to run and give the fastest feedback
• With the slower and more expensive tests occurring further down the pipeline in environments that are
increasingly production-like as the release candidate looks increasingly viable
• Identify needed number of environments as a balance between flexibility / speed and complexity:
e.g. if number increases then consistency of test data might be more difficult
• Tests can be carried out against “silent” features in production to measure quality and impact of changes
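The reusable deployment test suite named above ("can be deployed", "clean installation is ok", "can be updated / upgraded from previous versions") can be sketched as a small driver that runs each check and collects a verdict per check. The deploy/install/upgrade callables are hypothetical stand-ins for real automation against a fresh environment.

```python
# Sketch of an automated deployment test suite to be reused after each
# deployment (deploy/install/upgrade callables are hypothetical stand-ins).

def run_deployment_suite(deploy, clean_install, upgrade_from) -> dict:
    """Run each deployment check and collect one verdict per check,
    so the same suite can run after every deployment."""
    return {
        "can_be_deployed": deploy(),
        "clean_installation_ok": clean_install(),
        "upgrade_from_previous_ok": all(upgrade_from(v)
                                        for v in ("1.8", "1.9")),
    }


# Stubs standing in for real infrastructure automation:
results = run_deployment_suite(
    deploy=lambda: True,
    clean_install=lambda: True,
    upgrade_from=lambda version: True,
)
```

Keeping the suite as data (a verdict per named check) makes it easy to feed into the quality gates and dashboards discussed earlier.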
Test strategy
Recommendations (3)
• Manual tests migrate towards the end of the pipeline, leaving computers to do as much work as possible
before humans have to get involved
• Deployments for manual testing must be coordinated so testers can have a stable environment
• Use self-service on-demand deployments or most recent or most important deployments for manual testing
• Consider running exploratory testing in parallel with other automated and non-automated tasks, minimizing
the wait by placing it at the end of the cycle rather than the start
• Make sure the pipeline is only waiting at the end of manual testing, not the beginning
Manual exploratory testing is not made obsolete
• Track Stability and Throughput at various stages in the process:
code (commit), build, deploy, acceptance test, etc.
• Stability = Change Failure Rate + Failure Recovery Time
• Throughput = Lead Time + Frequency
• IT performance metrics used by Puppet Labs
• Throughput of code
• Deployment frequency (related: percentage of failed deployments)
• Lead time for changes
• Stability of systems
• Mean time to recover (MTTR)
• Change failure rate
Recommendations (4)
Adopt a measurement-based approach and consider the following metrics and KPIs
https://devops.com/metrics-devops
Steve Smith, https://leanpub.com/measuringcontinuousdelivery
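The Stability and Throughput measures listed above can be computed from a plain log of deployments. The record fields and the sample numbers below are illustrative assumptions; the point is that change failure rate, recovery time, and lead time all fall out of the same data.

```python
# Sketch of computing the Stability and Throughput metrics listed above
# from a deployment log (record fields and data are illustrative).

deployments = [
    {"lead_time_h": 20, "failed": False, "recovery_h": 0},
    {"lead_time_h": 48, "failed": True,  "recovery_h": 2},
    {"lead_time_h": 30, "failed": False, "recovery_h": 0},
    {"lead_time_h": 26, "failed": True,  "recovery_h": 4},
]

# Stability = change failure rate + failure recovery time
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
failures = [d for d in deployments if d["failed"]]
mttr_h = sum(d["recovery_h"] for d in failures) / len(failures)

# Throughput = lead time + frequency
lead_time_h = sum(d["lead_time_h"] for d in deployments) / len(deployments)
deploys_per_period = len(deployments)  # assuming the log covers one period
```

Tracking these at each stage (commit, build, deploy, acceptance test) shows where the pipeline loses either stability or throughput.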
Summary
A strategy for continuous testing is the heartbeat of iDevOps
Context and motivation
Continuous testing defined
Specifics of iDevOps
Testing areas – What’s new and different?
Prerequisites and preconditions
Mindset by example
Enablers
Practices and experiences
Recommendations
Why & What?
How?
Peter Zimmerer
Principal Key Expert Engineer
Siemens AG
Corporate Technology
Research and Development for Digitalization and Automation
Software & Systems Innovations
CT RDA SSI
Otto-Hahn-Ring 6
81739 Muenchen
Phone: +49 (89) 636 633509
Fax: +49 (89) 636 45450
Mobile: +49 (172) 4328121
E-mail: [email protected]
Internet
siemens.com/innovation
siemens.com/corporate-technology
Thank You!
Peter Zimmerer