10‐10‐21
LUND UNIVERSITY
Regression Testing Practices
Per Runeson and Emelie Engström
Software Engineering Research Group LUND UNIVERSITY
Sweden
Regression Testing
"Retesting ... to ensure that the new version of the software has retained the capability of the old version" [Burnstein02]
Multi-dimensional regression test

Figure: regression testing spans three dimensions: Versions (evolution in time), Variants (variation in space), and Levels (views of the system). Coverage items map to test levels as follows: expectations and quality requirements to acceptance test; functional requirements and design to system test; interfaces to integration test; implementation to unit test. Tests are thus repeated across versions, across variants, and across levels, which raises the question of redundancy.
Qualitative survey of regression testing practices
• Focus group discussions: 15 participants from 10 companies
• Questionnaire: 32 respondents from 29 companies
Goals and questions
RQ1 - What is meant by regression testing in industry?
RQ2 - Which problems or challenges related to regression testing exist?
RQ3 - Which good practices on regression testing exist?
Analysis of the results: What?
• Repetitive testing after changes: new configurations, fixes, changed solutions, new hardware, ...
• Find defects or obtain a measure of quality
• The amount and frequency are determined by the assessed risk, the size and type of the change, and the amount of available resources
Analysis of the results: When?
• At different levels: system, integration, unit
• At different stages: as early as possible, as late as possible, continuously
Analysis of the results: How?
• Systematic / ad hoc: complete retest, prioritisation and selection, static and dynamic test suites, change analysis
• Manual / automated
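A minimal sketch of the change-analysis style of selection mentioned above, assuming a coverage mapping from test cases to the files they exercise is available (the mapping, file names, and test names below are illustrative, not from the survey):

```python
# Change-analysis-based test selection: given a coverage map
# (test case -> files it touches), select only the tests that
# exercise at least one changed file.

def select_tests(coverage, changed_files):
    """Return the test cases whose covered files overlap the change set."""
    changed = set(changed_files)
    return sorted(t for t, files in coverage.items() if changed & set(files))

# Illustrative coverage map.
coverage = {
    "test_login":   ["auth.c", "session.c"],
    "test_billing": ["billing.c"],
    "test_report":  ["report.c", "billing.c"],
}

print(select_tests(coverage, ["billing.c"]))  # → ['test_billing', 'test_report']
```

A complete retest runs everything regardless of the change; selection like this trades that safety margin for a smaller, change-focused suite.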
Analysis of the results: Challenges
• Test case selection: change impact assessment, coverage assessment, traceability requirement/test
• Automated vs. manual testing: cost benefit, environment
• Design for testability: dependencies in software, test selection/scoping
Analysis of the results: Good practices
• Focus automation below the user-interface level
• Regression test continuously
• Vary the test focus between different test rounds
• Visualise progress monitoring
• Connect software quality attributes to each test case
Transparent and Efficient Regression Test Selection (TERTS)

1. Identify: context, factors, tools
2. Implement: procedure, tools
3. Evaluate: transparency, efficiency
History-based regression testing
• Historical effectiveness (f_ck / e_ck)
• Previous priority (PR_{k−1})
• Execution history (h_k)
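The three history factors above can be combined into a single priority score per test case. A minimal sketch, assuming a weighted sum; the weights and the exact combination are illustrative assumptions, not the published formula:

```python
def priority(failed, executed, prev_priority, runs_since_last_execution,
             alpha=0.5, beta=0.3, gamma=0.2):
    """Combine three history factors into one score (weights are illustrative).

    - historical effectiveness: fraction of executions that revealed a fault
      (f_ck / e_ck)
    - previous priority: PR_{k-1}
    - execution history h_k: tests not run for many rounds get a boost
    """
    effectiveness = failed / executed if executed else 0.0
    staleness = 1.0 - 1.0 / (1 + runs_since_last_execution)
    return alpha * effectiveness + beta * prev_priority + gamma * staleness

# Rank a small illustrative suite: higher score means run earlier.
suite = {
    "tc_a": priority(failed=3, executed=10, prev_priority=0.4,
                     runs_since_last_execution=0),
    "tc_b": priority(failed=0, executed=10, prev_priority=0.1,
                     runs_since_last_execution=5),
}
ranking = sorted(suite, key=suite.get, reverse=True)
print(ranking)  # → ['tc_a', 'tc_b']
```

The staleness term keeps long-unexecuted tests from being starved forever, while the effectiveness term rewards tests with a record of finding faults.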
Evaluation at Sony Ericsson
Fix-cache regression testing procedure
1. Identify fault-prone files
2. Link files to test cases
3. Recommend test cases based on cache content
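The three steps above can be sketched with a small LRU-style cache of recently fault-fixed files; the cache capacity, file names, and file-to-test mapping are illustrative assumptions:

```python
from collections import OrderedDict

class FixCache:
    """Tiny fix-cache sketch: keep the most recently fault-fixed files
    and recommend the test cases linked to the cached files."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.files = OrderedDict()   # step 1: fault-prone files, LRU order
        self.file_to_tests = {}      # step 2: link files to test cases

    def link(self, filename, tests):
        self.file_to_tests[filename] = tests

    def record_fix(self, filename):
        """A fault was fixed in `filename`: (re)insert it, evicting the oldest."""
        self.files.pop(filename, None)
        self.files[filename] = True
        if len(self.files) > self.capacity:
            self.files.popitem(last=False)

    def recommend(self):
        """Step 3: recommend test cases based on current cache content."""
        return sorted({t for f in self.files for t in self.file_to_tests.get(f, [])})

cache = FixCache(capacity=2)
cache.link("net.c", ["tc_net"])
cache.link("ui.c", ["tc_ui", "tc_smoke"])
cache.link("db.c", ["tc_db"])
for fixed in ["net.c", "ui.c", "db.c"]:   # fixing db.c evicts net.c
    cache.record_fix(fixed)
print(cache.recommend())  # → ['tc_db', 'tc_smoke', 'tc_ui']
```

The cache exploits the observation that recently fixed files tend to be fixed again, so their linked tests are good regression candidates.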
Evaluation at ST-Ericsson
Figure: efficiency (faults per test case) per iteration. RTC = Recommended Test Cases, XRTC = eXecuted Recommended Test Cases, TXTC = Total eXecuted Test Cases.
Conclusions from survey
• Definitions are the same, but practices differ (RQ1)
• Challenges and good practices (RQ2, RQ3) relate to: test selection, general testing issues, management practices, and design issues
Conclusions from TERTS
• Automating regression test selection (RTS) requires good information
• Potential for improving efficiency, but studies are limited
• Transparency is a goal in itself
Ongoing research
• Test Scope Selection
• History Based Testing
• Alignment
Photo: Tom Harris