Mistakes We Make and How to Avoid Them – CMGA ACT Meetup, Nov 18th, 2014



DESCRIPTION

This presentation was put together for the CMGA (www.cmga.org.au) meetup in Canberra (ACT), Australia. It's an attempt to share some of my experiences building and delivering systems over the last decade and a half.


Page 1

Mistakes We Make and How to Avoid Them

CMGA ACT Meetup – Nov 18th, 2014

Page 2

Page 3

Let’s Look At The Agile Development Process

Sources – www.cprime.com, www.agileatlas.org

Page 4

Let’s Look At The Waterfall Development Process

Source – www.seguetech.com

Page 5

Try To See The Bigger Picture

Source – www.eeight.com

• It’s all about the customer and the end user

• Performance isn’t a point-in-time process; it’s ongoing and includes activities performed across the entire life of a system

• The technology part of performance is generally the easy, well-defined part; we mostly fail because we ignore the human element, i.e. interaction, experience, expectations, etc.

• Always tie performance outcomes to business outcomes. It helps set the right expectations and drives outcomes the business is willing to fund.

• Focus on buy-in and on selling the value proposition. The technology, capability, people, tools, etc. will all come once you’ve got your stakeholders on board.

• Performance Testing is not Performance Engineering

Page 6

Poorly Defined Non Functional Requirements

Source – cartoonstock.com

• A lack of good NFRs is one of the most common reasons systems fail

• Don’t leave NFRs to the Business Analysts. Take ownership; work with the business and IT to define relevant NFRs.

• Refer to industry standards, leverage past experience and speak to peers. It’s rare that a program needs to reinvent the wheel.

• Follow the SMART approach: Specific, Measurable, Assignable/Achievable, Realistic, Time-related.

• Define NFRs that can be measured and validated at Performance Test. Defining NFRs that can’t be measured or validated is useless, and it happens surprisingly often (a minimal validation sketch follows this list).

• Get teams across the program to buy into the NFRs and own them across their relevant domains.
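
As a minimal sketch of what "measurable and validatable" means in practice, the check below evaluates a hypothetical SMART NFR ("95% of search requests complete within 2.0 seconds") against latencies captured during a test run. All names and numbers are illustrative, not from a real program.

```python
# Hypothetical SMART NFR: "95% of search requests complete within 2.0 s".
# The latency samples below stand in for output from your load-test tool.

def percentile(samples, pct):
    """Return the pct-th percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    index = max(0, round(pct / 100.0 * len(ordered)) - 1)
    return ordered[index]

measured = [0.8, 1.1, 0.9, 1.6, 2.4, 1.2, 0.7, 1.9, 1.3, 1.0]  # illustrative

NFR_P95_SECONDS = 2.0  # the agreed, measurable target
p95 = percentile(measured, 95)

print(f"p95 latency: {p95:.2f}s (target {NFR_P95_SECONDS:.2f}s)")
print("NFR met" if p95 <= NFR_P95_SECONDS else "NFR breached")
```

The point is that the NFR names a metric (p95 latency), a threshold (2.0 s) and a scope (search requests), so a test run can answer "met or not met" unambiguously.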

Page 7

Investing In An Architecture That Doesn’t Scale

Source – www.barrylupton.com

• Vendor benchmarks are delivered on systems with specs and hardware you’ll rarely have access to. Be realistic with your assumptions of scalability.

• Build a PoC and pilot your solution in a sandpit environment. Don’t wait until Performance Test to discover architectural constraints.

• Leverage existing capability, peer reviews and industry experience to validate your solution design.

• Avoid fancy third-party COTS components where possible; stick to simple designs and implementations.

• Make it a point to stay current on anti-patterns. Every development platform has its own set (a classic example follows this list).
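
A classic, platform-agnostic example is the N+1 query anti-pattern: fetch a list, then issue one extra query per row. The sketch below uses Python’s built-in sqlite3 with an invented schema purely for illustration.

```python
# The "N+1 query" anti-pattern, sketched with an invented two-table schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_lines (order_id INTEGER, item TEXT);
    INSERT INTO orders VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO order_lines VALUES (1, 'widget'), (1, 'gadget'), (2, 'sprocket');
""")

# Anti-pattern: one query per order -- N+1 round trips to the database.
for (order_id,) in conn.execute("SELECT id FROM orders"):
    conn.execute("SELECT item FROM order_lines WHERE order_id = ?",
                 (order_id,)).fetchall()

# Better: a single JOIN fetches the same data in one round trip.
rows = conn.execute("""
    SELECT o.id, l.item
    FROM orders o JOIN order_lines l ON l.order_id = o.id
""").fetchall()
```

In memory the difference is invisible; over a network, at production volumes, the per-query round trip dominates response time.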

Page 8

Tools Galore, Give Me Something That Works!!!

Source – www.toondoo.com

• Most clients have a gazillion tools sitting around. Find something that works and implement it.

• Don’t expect an integrated monitoring, diagnostics and reporting solution. Grab the tools you can and get them to work.

• Open-source tools work well for basic systems monitoring, but be aware of their limitations (a basic sampling sketch follows this list).

• When it comes to Performance Testing, identify your requirements and call out the need to invest early on. Performance Testing tools can be really expensive and budget generally ends up being the biggest impediment to their procurement.

• Performance Tools are required across the life cycle of the system – Modelling, Profiling/Diagnostics, Performance Testing, Monitoring & Capacity Management
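
As one example of the open-source option, the sketch below samples CPU, memory and disk utilization every 30 seconds into a CSV for later trending. It assumes the psutil library is installed (pip install psutil); the file name and interval are arbitrary choices.

```python
# Basic utilization sampler, assuming psutil (pip install psutil).
import csv
import time

import psutil

INTERVAL_SECONDS = 30  # fine-grained samples; roll up later as needed

with open("utilization.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        writer.writerow([
            time.strftime("%Y-%m-%d %H:%M:%S"),
            psutil.cpu_percent(interval=1),   # % CPU over a 1 s window
            psutil.virtual_memory().percent,  # % memory in use
            psutil.disk_usage("/").percent,   # % disk space used on /
        ])
        f.flush()
        time.sleep(INTERVAL_SECONDS - 1)      # cpu_percent already slept 1 s
```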

Page 9

Execute SVT In A Scaled-Down Environment

Source – www.glasbergen.com

• Executing performance testing in a scaled-down environment does come with an element of risk

• Unless your SVT environments are 70-80% like production (in terms of configuration and capacity), extrapolating the numbers carries high risk

• Most environments are virtual these days. Watch out for Dev/Test environments, which might have a higher over-provisioning ratio than production; that will again skew the numbers.

• Assuming linear scalability for your application when modelling performance, or when extrapolating from scaled-down environments, can get you into a lot of trouble (see the sketch after this list).

• Keep an eye out for background workload, especially on virtual infrastructure; it can skew your numbers in a test environment.
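
To illustrate why, the sketch below compares naive linear extrapolation with Gunther’s Universal Scalability Law, which adds the contention and coherency penalties that linear models ignore. The coefficients are assumed values for illustration, not measurements from any system.

```python
# Linear extrapolation vs the Universal Scalability Law (USL).

def usl_throughput(n, x1, alpha, beta):
    """Throughput at scale factor n, given single-unit throughput x1."""
    return x1 * n / (1 + alpha * (n - 1) + beta * n * (n - 1))

x1 = 100.0                 # throughput measured at scale factor 1 (assumed)
alpha, beta = 0.05, 0.002  # assumed contention and coherency penalties

for n in (1, 2, 4, 8, 16):
    linear = x1 * n        # what naive linear extrapolation promises
    modelled = usl_throughput(n, x1, alpha, beta)
    gap = 100 * (1 - modelled / linear)
    print(f"n={n:2d}  linear={linear:6.0f}  usl={modelled:6.0f}  gap={gap:4.1f}%")
```

Even with these modest penalties, the model predicts less than half the throughput at 16x scale that linear extrapolation promises.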

Page 10

Read Between The Lines – Cloud Service SLAs

Source – www.truthliesdeceptioncoverups.info

• When building applications that leverage cloud components, be careful and read between the lines. Most vendor SLAs don’t go beyond their data centers.

• If you own performance, you own SLAs end to end, while your vendor only owns SLAs within their data centers. Build resilience into your solution.

• Most cloud vendors charge large sums to performance test using their systems; it’s preferable to bake those requirements into the procurement process.

• If you use stubs in place of third-party components, beware of the performance penalty you are overlooking (a latency-injecting stub sketch follows this list).

• Most services are moving to the cloud, and there can be a significant performance penalty when interacting with multiple cloud services hosted across different locations. Design sensibly.
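
To avoid the zero-latency trap with stubs, one option is to have the stub inject a sampled delay before responding, as sketched below using only the Python standard library. The port, endpoint behaviour and latency distribution are illustrative assumptions, not measurements of any real service.

```python
# A stub for a third-party service that injects realistic latency.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def sampled_latency():
    """Assumed response-time distribution: median ~0.37 s with a long tail."""
    return random.lognormvariate(-1.0, 0.5)

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(sampled_latency())  # simulate the network/service delay
        body = json.dumps({"status": "OK"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```

Fit the distribution to response times observed against the real service, so the stub matches production in delay profile and not just payload.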

Page 11

Trend Infra Utilization Metrics & Extrapolate

Source – www.andertoons.com

• Managing performance of a system goes well beyond performance testing and continues past go-live.

• Trending raw CPU, memory and disk utilization metrics and extrapolating them statistically in isolation is a high-risk approach.

• Focus on key business workload drivers. Map them to the relevant infrastructure workload drivers and build your statistical models on that relationship (see the sketch after this list).

• Capture data at relevant time intervals (30s or less) for modelling and forecasting. Roll the samples up using your own roll-up routines; don’t depend on the system monitoring tools to give you one data point per hour, as those numbers are quite useless.

• Capacity Management doesn’t require expensive tooling. It requires a well-defined approach and some basic modelling techniques any maths grad can pick up.
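
A minimal sketch of the modelling step, assuming numpy is available: fit a least-squares line that maps a business driver (transactions per hour) to an infrastructure driver (average CPU %), then forecast. The data points are invented for illustration and would normally come from your rolled-up samples.

```python
# Driver-based capacity model: business driver -> infrastructure driver.
import numpy as np

# Hourly roll-ups: transactions per hour vs average CPU % (illustrative).
tx = np.array([1000, 2000, 3000, 4000, 5000])
cpu = np.array([12, 22, 31, 43, 52])

slope, intercept = np.polyfit(tx, cpu, 1)  # CPU% ~ slope * tx + intercept

forecast_tx = 8000                         # projected business growth
forecast_cpu = slope * forecast_tx + intercept
print(f"CPU% ~ {slope:.4f} * tx + {intercept:.1f}")
print(f"At {forecast_tx} tx/h, expected CPU ~ {forecast_cpu:.0f}%")
```

Because the model is anchored to the business driver, a growth forecast from the business translates directly into an infrastructure forecast.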

Page 12

Thank You

Source – www.dilbert.com