Test Driven

Want to know the case for Test-Driven Development? Want to know style tips and gotchas for Testing and TDD? Let Alex Chaffee, former Mad Scientist at Pivotal Labs, tell you everything you didn't know you didn't know about testing.

• by Alex Chaffee

• alex@pivotallabs.com

• Pivotal Labs

Test-Driven

• Developers or QA Engineers

• Familiar with JUnit

• Want more detail on Automated Testing in general

• Want to know the case for Test-Driven Development

• Want to know style tips and gotchas for Testing and TDD

Intended Audience

Part I: The Blank Page

Let's test-drive a utility class...

Red-Green-Refactor

• Arrange (set up preconditions)

• Act (call production code)

• Assert (check the results)

3A
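A minimal sketch of the three A's in JUnit, using the hypothetical MySet class that appears throughout this deck:

public void testAddedElementIsContained() {
    // Arrange: set up preconditions
    Set set = new MySet();

    // Act: call the production code
    set.add("Ice Cream");

    // Assert: check the results
    assertTrue(set.contains("Ice Cream"));
}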

• The heart of a unit test

• An Assert is a declaration of truth

• Failed assert -> false (incorrect) behavior

• “assert your postconditions”

• Example:

Set set = new MySet();
set.add("Ice Cream");
assertTrue(set.contains("Ice Cream"));

Assert

• Don't be over-ambitious

• Each test -- especially each new test -- should add one brick to the wall of knowledge

• Code : Brick :: Test : Mortar

• Pick tests (features) in order of growth

One Step At A Time

• A great first test to write

• Input and output are trivial

• Helps you focus on infrastructure and API

The Null Test
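For the set example, the null test might look like this (a sketch; MySet is the hypothetical class under test):

public void testNewSetIsEmpty() {
    Set set = new MySet();
    assertTrue(set.isEmpty());
}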

• When you get stuck on a test, try starting with the assertion(s) and then work your way backwards to the setup

• Start with the assert

assertTrue(set.contains("alex"));

• Then add the code above it

Set set = new MySet();
set.add("alex");

• Helps focus on goal

Assert First

• Start with hardcoded results and wait until later tests to force them to become real

Fake It 'Til You Make It

• Make the code abstract only when you have two or more examples

Triangulate To Abstraction

public void testSum() {
    assertEquals(4, plus(3, 1));
}

---

int plus(int x, int y) {
    return 4;
}

public void testSum() {
    assertEquals(4, plus(3, 1));
    assertEquals(5, plus(3, 2));
}

---

int plus(int x, int y) {
    return x + y;
}

• Before you begin, make a TODO list

• Write down a bunch of operations

• For each operation, list the null test and some others

• Also put down refactorings that come to mind

• Why not write all the tests in code first?

• Could box you in

• Interferes with red-green-refactor

Test List
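For the set example, a test list might start out like this (an illustrative sketch, kept on paper or as comments rather than as code):

// TODO:
// Set: null test -- a new set is empty
// Set: add -- an added element is contained
// Set: add duplicate -- size stays the same
// Set: remove from an empty set -- throws an exception
// Refactor: extract common setup into setUp()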

• aka Don't Be Stupid

• If you really, really, honestly know the right way to implement it, then write it that way

• But keep track of how many times your "obvious implementation" was broken or untested

• Edge cases, off-by-one errors, null handling... all deserve tests, and often the Obvious Implementation doesn’t cover them

Obvious Implementation

Part II: Testing Philosophy

• Unit

• Integration

• Acceptance

• QA

• UI

• Performance

• Monitoring

Automated Testing Layers

• Automated

• Isolated

• Repeatable

• Fast

• Easy to write and maintain

• Focused

• Easy to read

• Robust (opposite: Brittle)

A Good Test Is...

• Someone should be able to understand your class by reading the tests

• Live documentation (better than dead trees)

• “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

– Martin Fowler

Tests Are “Executable Specifications”

Why do you test?

• Prevent bugs

• Regress bugs ("bug repellant")

• Localize defects

• Understand design

• Document (or specify) design

• Improve design

• Support refactorings

• Enable experimentation and change

• Confidence

• Catch errors the language can't

• Long-term sustainability and maintainability

Why do you test?

When do you test?

• Before checkin

• After update

• Before deploy

• While coding

• In the background

When do you test?

• All the time

When do you test?

• Never

• After coding

• During coding

• Before coding

When do you write tests?

Why test first?

• Gets tests written

• Easier than retrofitting tests onto an existing system

• Guarantees 100% test coverage

• In practice, you never have time after the code is written, but you always have the time before

• Go figure :-)

Why test first?

Why test first?

• Reduces scope of production code

• Less scope -> less work

• Proves that your objects have usable interfaces

• more useful methods and fewer useless ones

• Guarantees testability

• Test-last code is often hard to test

• Sustainable Feature Velocity

• Think of tests as examples or specifications

• One trick: write code in a test class, then extract into the production class

"If you can't write a test, then you don't know what the code should do. And what business do you have writing code in the first place when you can't say what it's supposed to do?" - Rob Mee

How can you write tests for code that doesn't exist?

• Simple Rule:

• Test everything that could possibly break

• Depends on definitions of “everything” and “possibly”

• (and “break”)

• This means: don’t test things that couldn’t possibly break

• E.g. Getters and Setters

• Unless you think they could fail!

• Better safe than sorry

• Full-Range Testing (positive, negative, boundary, null, exception)

What to test?

• Personal judgement, skill, experience

• Usually, you start by testing too little, then you let a bug through

• Then you start testing a lot more, then you gradually test less and less, until you let a bug through

• Then you start testing too much again :-)

• Eventually you reach homeostasis

How much to test?

• Not too big, not too small

• Same concept as high coherence, low coupling

Test for "essential complexity"

• Write the tests first

• Design for testability

• Use the front door first

• Communicate intent

• Don't modify the SUT

• Keep tests independent

• Isolate the SUT

• Minimize test overlap

• Minimize untestable code

• Keep test logic out of production code

• Verify one condition per test

• Test separate concerns separately

• Ensure commensurate effort and responsibility

Meszaros' Principles of Test Automation

• Every time you write code, you write tests that exercise it

• That means that if you change the code, and the tests break, you must either

• Change the tests to match the new spec

• Change the code to meet the old spec

• Do not remove the failing tests

• Unless they no longer apply to the new code’s design or API

• Do not work around the failing tests

• Test code is not "wasted" or "extra" -- tests are first-class citizens

• If you feel like they're too much work, examine your process

• Maybe you're writing the wrong tests, or your tests are too brittle, or they're not refactored enough

Tests Are An Extension of Code

• It forces you to really understand the code

• It forces you to really understand the tests

• It forces you to create code that is truly reusable and modular and testable

• “put your money where your mouth is”

• These forces drive you to keep your code and your tests simple and easy to understand

Unit Testing Is Hard

• Need to spend time on infrastructure, fixtures, getting comfortable with TDD

• Business case for TDD: sustainable velocity

• for feature velocity, stability > early oomph

• Famous graph

Test-Driving Is Slower At First

• Test-Driven Development

• Good old-fashioned coding, now with tests!

• Test-Driven Design

• Free your mind and the code will follow

• Quite a lot of overlap, but worth keeping difference in mind

• Lots of XP gurus are all about the Zen, but you don't need to buy into that

• But it's actually pretty cool to try Zen Testing

Two D's

Part III: Advanced Techniques

• Positive Tests

• Exercise normal conditions (“sunny day scenarios”)

• E.g. verify that after adding an element to a set, that element exists in the set

• Negative Tests

• Exercise failure conditions (“rainy day scenarios”)

• E.g. verify that trying to remove an element from an empty set throws an exception

• Boundary Conditions

• Exercise the limits of the system (“cloudy day”)

• E.g. adding the maximum number of elements to a set

• E.g. test 0, -1, maximum, max+1

Full Range Testing
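A sketch of all three kinds for the set example (EmptySetException and MAX_SIZE are hypothetical names):

public void testAddedElementIsContained() {  // positive
    MySet set = new MySet();
    set.add("alex");
    assertTrue(set.contains("alex"));
}

public void testRemoveFromEmptySetThrows() {  // negative
    MySet set = new MySet();
    try {
        set.remove("alex");
        fail("removing from an empty set should throw");
    } catch (EmptySetException e) {
        // expected
    }
}

public void testSetAtMaximumSize() {  // boundary
    MySet set = new MySet();
    for (int i = 0; i < MySet.MAX_SIZE; i++) {
        set.add("element" + i);
    }
    assertEquals(MySet.MAX_SIZE, set.size());
}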

• instead of SetTest.testEmpty

• how about SetTest.testShouldHaveSizeZeroWhenEmpty

• or EmptySetTest.testHasSizeZero

Verbose Test Naming

• Optional first parameter to JUnit asserts is "message"

• Assertion messages can be confusing

• Example:

assertTrue("set is empty", set.isEmpty());

• Does it mean “the set must be empty” or “the test is failing because the set is empty”?

• Solution: should statements

assertTrue("set should be empty", set.isEmpty());

• or even better:

assertTrue("a newly-created set should be empty", set.isEmpty());

Should Statements

• Philosophy: a test is a valid client of an object

• Therefore don't be ashamed of adding a method just because it would make a test easier to write

• Used -> Useful

• Remember, tests are examples of use

Test-Only Methods

• Spend time refactoring your tests

• It'll pay off later, when writing new tests or extending/debugging old ones

• Refactor for readability, not necessarily for removing all duplication

• Different priorities than for production code

• Extract methods

• Shorter lines

• Break up long tests (scenario tests) into several short tests (feature tests)

• One technique: "Refactor production code on green; refactor test code on red."

• for complex cases, break the code, make sure the refactored tests still reveal the breakage, then fix it

Refactor Test Code

• assertEquals(86400, new Day().getSeconds())

• vs.

• assertEquals(60 * 60 * 24, new Day().getSeconds())

• vs.

int secondsPerMinute = 60;
int minutesPerHour = 60;
int hoursPerDay = 24;
assertEquals(secondsPerMinute * minutesPerHour * hoursPerDay, new Day().getSeconds());

Evident Data

• Problem: several axes of variability, combinatorial explosion

• Solution: Loop through a matrix of data in your test, call a "check" function on each row

Matrix Tests
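For instance, reusing the plus() example from Part I (the checkSum helper is hypothetical):

public void testPlusMatrix() {
    int[][] cases = {
        // x, y, expected
        {  0, 0, 0 },
        {  3, 1, 4 },
        {  3, 2, 5 },
        { -1, 1, 0 },
    };
    for (int[] row : cases) {
        checkSum(row[0], row[1], row[2]);
    }
}

private void checkSum(int x, int y, int expected) {
    assertEquals("plus(" + x + "," + y + ") should be " + expected,
            expected, plus(x, y));
}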

• aka "Golden Data Tests"

• Grab the complete output of a routine, put it into the test

• Not amenable to test-driven development

• Effective for large or risky refactorings

• Quite brittle, so often thrown away after the refactoring is done

Characterization Tests
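A minimal sketch, assuming a hypothetical reportGenerator whose current output was captured verbatim and pasted into the test:

public void testReportMatchesCapturedOutput() {
    // Golden data: copied from the trusted output of the current code
    String golden = "Total: 3 items\n"
                  + "Sum: $1,234.00\n";
    assertEquals(golden, reportGenerator.generate());
}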

public void testUnknownCountry() {
    try {
        currencyConverter.getRate("Snozistan");
        fail("Should have thrown an exception for unknown country");
    } catch (UnknownCountryException e) {
        // ok
    }
}

Exception Tests
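If you are on JUnit 4, the expected element of the @Test annotation can express the same test more compactly, though the try/fail/catch idiom above remains useful when you want to assert on the exception's contents:

@Test(expected = UnknownCountryException.class)
public void rateLookupFailsForUnknownCountry() {
    currencyConverter.getRate("Snozistan");
}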

• A pair's job is to keep you focused

• "Wait, let's write a test first."

• "Wait, let's refactor first."

• "Wait, let's discuss this."

• "Can I drive?"

Pair Programming

• One partner writes a test

• The other partner makes it pass and writes the next test

• Repeat

• Good way to get out of a rut, or cure a keyboard hog

Ping-Pong Pairing

• When a defect is reported...

• The first step is to write a (failing) test that reproduces the bug

• Fix the bug by writing code to make the test run successfully

• Verify the bug in the running application

• Add the bug test to the automated suite

• Check in the bugfix code and test

• Now it’s always run – instant regression test!

• "Regression tests are test that you would have written originallly." - Kent Beck

• May also want to write a failing Acceptance Test, but that's optional -- you definitely want a failing unit test

Regression Test

• Often the best thing to do is throw away your work and start again

Do Over

• At the end of the day, write a failing test and leave it there for tomorrow

• Based on writer's trick: start a sentence and leave it unfinished

Leave One For Tomorrow

• Tests are only valuable if they're run all the time

• If they're slow, people will not want to run them all the time

• So keep them fast!

• Difficult quest, but worth it

• Don’t get stuck in molasses!

• Refactor your code to be easier to write fast tests on

• Replace slow tests with (one or more) fast tests that cover the same area

The Need For Speed

Retrofitting

• What to do when you have an existing untested codebase?

• Start small

• Write one test, make it pass, check it in

• Write tests for all new code

• Write tests for all new bugs

• Write tests before attempting refactoring

• Usually easier to write characterization tests (UI/integration)

• But don’t fall into the slow test trap

• Any time all the tests are green, you can check in

• Run all the tests all the time

• Don’t check in until all tests pass

• If you broke “someone else’s” tests, you are responsible for fixing “their” code

• Remember, they are in the room, so go get them if you need help

• Learn to Love the Orb

• ccmenu

Continuous Integration

• Suites are a pain to maintain

• Write code to automatically scan for tests and run them together

• Possible to do in JUnit, but annoying

Automatic Suites

• Matter of preference

• Both are useful at times

Outside-in vs. Inside-out

• Start with domain objects

• Then build tests for the next layer up

Inside-out

• Start with customer story or user interface

• Makes you think like a user

• Tests capture these requirements

• Lower layers implemented with Test Doubles (mocks)

• After you're done, either replace mocks with real objects, or leave them there (perhaps at higher maintenance cost)

Outside-in

• Write a bunch of UI-level tests

• Leave them there while you test-drive inside-out

• When they all pass, you're done

Outside-in design, inside-out development

• A Test Double replaces the "real" instance of an object used by the production code with something suitable for the currently running test, but with the same interface

• Stubs

• Hard-coded values

• Mocks

• Pre-programmed with expectations

• Fail-fast

• Test Doubles in general are often called Mock Objects, so be careful about terminology

• Fakes

• Can store values across calls, but don't really do what the live object would do

• E.g. in-memory database

Test Doubles
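A sketch of a stub, continuing the currency-converter example (the ExchangeRateService interface and the CurrencyConverter constructor are hypothetical):

// The interface the production code depends on
interface ExchangeRateService {
    double getRate(String country);
}

// Stub: hard-coded values, just enough for one test
class StubRateService implements ExchangeRateService {
    public double getRate(String country) {
        return 1.5;
    }
}

public void testConversionMultipliesByRate() {
    CurrencyConverter converter = new CurrencyConverter(new StubRateService());
    assertEquals(150.0, converter.convert(100.0, "Snozistan"), 0.001);
}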

More Test Doubles

• Spies

• Remember what methods were called with what values

• Tests can inspect these lists after code returns

• Saboteurs

• Blow up in ways that would be difficult to make happen for real

• To test what would happen when, e.g., the database goes away, or the disk is full

• Self Shunt

• The test itself declares methods or classes implementing the above, and passes in a pointer to itself
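A sketch of a spy, using the same hypothetical ExchangeRateService:

import java.util.ArrayList;
import java.util.List;

// Spy: remembers what it was called with, for the test to inspect
class SpyRateService implements ExchangeRateService {
    List<String> countriesAskedFor = new ArrayList<String>();

    public double getRate(String country) {
        countriesAskedFor.add(country);
        return 1.5;
    }
}

public void testConverterLooksUpTheRightCountry() {
    SpyRateService spy = new SpyRateService();
    new CurrencyConverter(spy).convert(100.0, "Snozistan");
    assertEquals("Snozistan", spy.countriesAskedFor.get(0));
}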

• Two types:

• DI Frameworks

• Complete Construction

• This is the one I'm talking about

• Pass in dependencies to the constructor (or, if necessary, to setters)

• An object under test will receive references to all external services

• Allows tests to inject Test Doubles at will

• Forces objects to be isolated

• Example: TBD

Dependency Injection
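One possible sketch of complete construction (the slide's own example is TBD), continuing the hypothetical currency-converter example:

// The object receives its dependencies; it never instantiates them itself
class CurrencyConverter {
    private final ExchangeRateService rates;

    public CurrencyConverter(ExchangeRateService rates) {
        this.rates = rates;
    }

    public double convert(double amount, String country) {
        return amount * rates.getRate(country);
    }
}

// Production wiring: new CurrencyConverter(new LiveRateService());
// Test wiring:       new CurrencyConverter(new StubRateService());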

• Changes the language of tests to emphasize that they're specifications or examples

• Replaces "assert" with "should"

EDD/BDD (specs)

• A natural progression of refactoring your test data

• literals

• constants

• local variables

• instance variables (defined in setUp())

• creation methods

• parameterized creation methods or objects ("object mothers")

• Other patterns

• test objects / graphs ("fixtures" or "cast of characters" or "menagerie")

• external fixture files

Fixtures and Object Mothers

Mock Clock
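The trick is to hide the system clock behind an interface so tests can control time directly. A minimal sketch (all names hypothetical):

interface Clock {
    long now();
}

class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

class MockClock implements Clock {
    private long time = 0;
    public long now() { return time; }
    public void advance(long millis) { time += millis; }
}

public void testSessionExpiresAfterThirtyMinutes() {
    MockClock clock = new MockClock();
    Session session = new Session(clock);  // hypothetical class under test
    clock.advance(31 * 60 * 1000L);
    assertTrue(session.isExpired());
}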

Part IV: Test-Driving UI

Part V: Q&A

Thanks!