Real-Time Gray-Box Testing with DT-10
Trinity Technologies
About Gray-Box Testing:
In 1999, Andre C. Coulter of Lockheed Martin Missiles and Fire Control - Orlando published a paper, "Gray-Box Testing Methodology", which positioned Gray-Box Testing as a methodology combining White-Box Testing and Black-Box Testing. In the following year, building on that work, Lockheed Martin further refined the methodology by describing how to perform Gray-Box Testing on a real-time embedded device in a real environment [reference: Graybox Software Testing in the Real World in Real-Time]. The Gray-Box Testing methodology not only uses coverage information to validate the software's correctness and test completeness, but also uses performance analysis to verify that the embedded device meets the real-time system's performance requirements.
From the system's perspective, Black-Box testing scenarios are derived from the system's requirements and design documentation, to check whether the functionality satisfies the system's requirements. Because system-level testing sits at a higher level and does not involve source code, testers only need to understand the requirements and design documentation and execute the test scenarios based on them. This is simple, but the testing does not go deep enough, so it cannot identify issues that are hard to find and pinpoint.
White-Box Testing, commonly practiced as unit testing, is normally written by developers to test the code they have written. Because unit testing focuses on the smallest unit (usually a function), it takes a large amount of time to write and maintain these test cases. With ever-increasing demands on developers to deliver more functionality with less time and fewer resources, unit testing often remains a valued concept rather than an actual practice.
Gray-Box Testing adopts the Black-Box Testing methodology, approaching the system from the perspective of validating functional correctness. At the same time, it borrows from White-Box Testing by using the application's internal logic to design test scenarios, executing the application, and collecting execution-path information along with the external user-interface results. Because Gray-Box Testing does not examine the application's internal logic as thoroughly as White-Box Testing, it is a methodology that balances good efficiency with good testing results.
Description of the Gray-Box Testing Process:
Gray-Box Testing advocates involving testers in the early stages of the SDLC, so there is no need to wait for the source code to mature. The development and testing (QA) teams should collaborate and advance together as the project matures. The following diagram illustrates how Gray-Box Testing comes in at each stage of the SDLC.
Figure 1. Gray-Box Testing in the SDLC
We will now discuss test case design and the challenges of Gray-Box Testing:
Test Case Design:
To design test cases for Gray-Box Testing, you need to understand not only the requirement specifications and design documentation, but also the structure of the entire source code.
First, based on the user's requirements, the development team performs the project requirement analysis and creates the requirement specifications. The testing (QA) team also joins the requirement-analysis discussions and reviews these specifications. By participating in the requirement analysis, the testing (QA) team can amend the requirement documentation with parts that are important but were left out, and at the same time it learns what kind of test scenarios it will need to design to fulfill these requirements.
Next, based on the requirement analysis, the development team designs the system architecture, breaks the system into module components, and defines how the modules communicate with each other. Meanwhile, the testing (QA) team studies the structural layout, the relationships between modules, and each module's inputs and outputs, which prepares the team for designing test cases for each module.
Lastly, the development team develops the source code while the testing (QA) team learns how the functions and source code are executed. Understanding the constants, the variables, and the acceptable boundaries for their values helps the testing (QA) team design the functional test cases. Because some boundary values are defined through macros in header files, the best way for the testing (QA) team to understand them is to understand how the code is executed.
After going through this series of tasks, the testing (QA) team will already have created the test cases needed to fulfill the system's requirements, and once the system is ready, the team can go right into executing them.
Challenges of Gray-Box Testing:
In the previous topic we went over the Gray-Box Testing process, and from it we can see that the difference between the test cases created for Gray-Box Testing and those for traditional Black-Box Testing is that Gray-Box Testing requires additional tasks: understanding the module design from the design documentation, and understanding the modules' interface inputs and outputs through a light reading of the source code. As a result, the test-case design is more complete. From this perspective, the differences between Gray-Box Testing and Black-Box Testing are not large; the major difference lies in test-case execution and assessment. According to Lockheed Martin's description of Gray-Box Testing, from a deployment perspective it faces the following challenges:
1) The need to determine whether test coverage is sufficient
How do you know whether enough tests have been created? How do you justify that nothing has been left uncovered by the test cases? These questions can be answered with test coverage, the industry-standard way of measuring this. When testers design test cases based on the requirements and design documentation, they can easily find out which requirements have been covered by using a test-case-to-requirement matrix (for requirement and test-case traceability). At the same time, testers can use a third-party source-code coverage tool to help them judge whether statement or branch coverage is at a satisfactory level, and to check that the originally defined requirements did not leave anything out.
2) Gray-Box Testing needs detailed logging of the application's execution so that the analysis and debugging process becomes more effective. Gray-Box Testing focuses not only on how the software interacts with other hardware components, but also on how the application itself is executed. With a feasible method for obtaining both kinds of information and analyzing them thoroughly, it becomes very effective at pinpointing and analyzing bugs.
3) Gray-Box Testing includes performance testing of the real-time application. It is very important that an application executed on the target device not only produce correct business logic and function outputs, but also satisfy the expected performance requirements. In 2000, Lockheed Martin added performance measurement on top of the previously defined Gray-Box Testing. To perform these additional tests and assessments on the target device, new requirements had to be added to Gray-Box Testing. From a real-world perspective, testing and assessing the embedded system in this way is very important.
How can DT-10 help users perform Gray-Box Testing more efficiently?
DT-10 is a next-generation dynamic testing tool that can perform long-duration tracing. Its three major functions are coverage analysis, performance testing, and debugging (pinpointing the faulty section of code). In addition, it offers variable tracing and an oscilloscope-style view [which lets the user see how the software interacts with other hardware components], and it assists the user with regression testing (from a performance perspective) and other criteria:
1. DT-10 helps the user obtain statement and branch coverage. Through DT-10's coverage analysis, after the tester finishes executing a use-case scenario, DT-10 provides a thorough coverage report displaying the coverage information.
Figure 2. Coverage report generated by DT-10
If the user wishes to see detailed coverage information for a specific function, the user can double-click on the function and DT-10 will automatically open the function's source code and show which test points have been covered and which have not.
Figure 3. DT-10's integrated, interactive interface correlates all reporting with source code in real time
By analyzing the statement and branch coverage information provided by DT-10, the user can quickly tell whether any use-case scenarios were not exercised during the Gray-Box Testing process. From the source code's perspective, the user can also tell which code lines are covered and which are not, and so spot redundant code. Aside from analyzing the coverage information at the end of a run, DT-10 also provides real-time coverage information.
Using DT-10's real-time coverage
Previously, we saw DT-10 trace the software executing on the target board, gather the test analysis data, and then provide the analysis and detailed coverage report to the user. In addition, DT-10 can also provide real-time coverage. Through real-time coverage, the user can see the coverage information while he or she interacts with the target device. For example, if you push a button on the target device and it triggers some lines of code, you can see the coverage information for those lines in DT-10's window during the execution.
Before you can use DT-10 to gather real-time coverage information, you need to enable the "View Real-time Coverage" option before activating DT-10 to retrieve the gathered data.
Figure 4. Test Report Collection Condition Setting
Once this option is enabled, you can start DT-10 tracing and it will begin gathering coverage information in real time.
Figure 5. Real-time Coverage Information
Now, while DT-10 is still retrieving the coverage information for the handleSensorValue function (currently at 75%), if I push a button on the target device and cause the application to execute the remaining uncovered branch, you can see the coverage increase to 100%.
Figure 6. Real-Time Coverage Information correlating to code execution
Real-time coverage not only lets the user see the source-code execution and coverage information, it also helps the user understand how the application's code lines are executed on the target device.
2. DT-10 helps the user perform performance assessment and testing.
DT-10 can monitor every function's execution time and period, and can also monitor the execution time and period of the interval between any two points. For multi-threaded applications, DT-10 can monitor the CPU load of each thread.
Execution time and period time:
By applying DT-10's tracing to the target device, you can gather every function's maximum, minimum, and average execution times over the entire traced duration.
Figure 7. Execution Time Report
If you wish to look at the detailed execution times for a specific function, say the handleSensorValue function, which was invoked 42,845 times during the trace, you can simply double-click on it in the report column and a detailed execution-time window will pop up.
*NOTE: DT-10 can sort the data automatically when you click on a column name. The left picture is ordered by 'Report No.' and the right picture is ordered from the smallest execution time to the largest.
The pop-up window displays the execution time for each of the 42,845 invocations of this function. In DT-10, you can also set a design time for a specific function; for example, suppose the execution time of handleSensorValue must not exceed 50,000 us. We perform the following configuration in DT-10:
Figure 8. Setting Design Time for Specific Function
After DT-10 analyzes the gathered data, it highlights in red every execution whose time did not meet the expected design value.
When you double-click on one of the listed execution times from the trace, DT-10 displays, in green, the source code and the test-point stack trace corresponding to that particular execution of the function.
Figure 9. Drill Down Detail to Correlate Source Code and Test Point for a Specific Function Execution
In addition, DT-10 can provide a histogram of a function's execution times, where the x-axis represents execution time and the y-axis represents the number of executions. You can see that from 3,011 us to 3,974 us, this DRenderer_Present() function was invoked over 9,000 times. The function was called far less often from 5,901 us to 6,864 us (about 950 invocations), from 6,864 us to 7,827 us (a few hundred invocations), and only a few times from 0 us to 1,085 us and from 8,790 us to 9,753 us. We know this because the minimum and maximum execution times are 121 us and 9,753 us.
Figure 10. Execution Time Histogram View
The reason for viewing this execution-time histogram is to understand the distribution of a specific function's execution times. If we discover that the maximum execution time is 9,753 us while the majority of executions fall between 3,011 us and 3,974 us, then the person in charge of performance analysis needs to investigate why some executions take so much longer than average. This is where you can double-click on the listed execution with the 9,753 us execution time, and DT-10 will open the source code and test-point stack trace so the user can investigate this behavior further.
3. DT-10's Function Trace Report and Function Transition Scope provide graphical views that help the user visualize how different functions interact with one another and how the execution path flows. Below is an example of DT-10's Function Trace Report, in which the user can visualize the application's execution logic.
Figure 11. Sample DT-10 Function Trace Report
Figure 12. Sample Function Transition Scope
By analyzing this report, the user can easily understand the details of all the functions invoked during the application's execution over the entire trace duration.
4. Through DT-10's DTPlanner, the user can configure design/expected values for variables and parameters of the application executing on the target device. When designing test cases for Gray-Box Testing, we create multiple test cases from the module and source-code perspectives. For test cases with input and output values, when we configure an input parameter value for the system, does the output value match our expectation? Whether in traditional Black-Box Testing or in Gray-Box Testing, this has always required human intervention: someone manually checks whether the output matches expectations, or writes assertion functions to verify the output. DT-10's DTPlanner lets us set design/expected values for constants, variables, function execution times, and function period times. While your application runs on the target device, if any value does not match the design value or the set expectation, DT-10 highlights the test point in red or displays a red exclamation mark next to it, indicating that the outcome does not match expectations. This is very helpful when the user wants to verify boundary values automatically, or to apply regression testing to later versions of the source code.
After analyzing the information gathered by DT-10 while tracing the target device, we can draw the following conclusion: aside from setting expected/design values for parameters, you can also set them for the execution time of each function, to make sure the system's performance does not degrade after a software upgrade.
Conclusion:
Gray-Box Testing combines the advantages of Black-Box Testing and White-Box Testing:
- The ability to fulfill the requirements of coverage testing and execution-path coverage testing.
- The ability to pinpoint hard-to-find and hard-to-reproduce defects (bugs) with ease, through test-data analytics that reference the source code's execution flow.
- The ability to create more complete and thorough tests than Black-Box Testing alone, including testing infrequently called functions or use-case scenarios and identifying dead code.
- The ability to assess and validate design requirements.
- The ability to assess and test the functionality's performance via on-target testing and real-time test-data analytics.
By leveraging DT-10 for real-time Gray-Box Testing, development and testing (QA) teams can collaborate better and fulfill their testing requirements with automation, achieving greater efficiency and improved overall project productivity and quality.
References:
André Coulter, "Graybox Software Testing Methodology – Embedded Software Testing Technique", Proceedings of the 18th Digital Avionics Systems Conference, IEEE, 1999, p. 10.A.5-2.
LinZhang Wang, "Model-based Gray-Box Testing: From Art to Engineering", Nanjing University, Sept 24, 2011.
Shi Dongqin, "Software Gray-box Test Technology Research and Application", China Aerospace Science & Industry Corp., 2011.