3. Track the Metrics
Tracking test metrics throughout the test effort is important because it lets the project team see developing trends and provides a historical perspective at the end of the project. Tracking metrics requires effort, but that effort can be minimized through simple automation of the run log (using a spreadsheet or simple database) or through customized reports from a test management or defect tracking system. This underscores the "keep it simple" principle -- the metrics should be simple to track and simple to understand, and tracking them should not burden the test team or test lead.
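As a concrete illustration of automating the run log, the sketch below appends each test execution to a CSV file that a spreadsheet can open directly. The column names and result values are assumptions for illustration, not a prescribed format:

```python
import csv
from datetime import date

# Illustrative run-log columns; adapt these to your project's needs.
FIELDS = ["date", "tester", "test_case_id", "result", "defect_id"]

def log_run(path, tester, test_case_id, result, defect_id=""):
    """Append one test execution record to a CSV run log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:            # new file: write the header row first
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tester": tester,
            "test_case_id": test_case_id,
            "result": result,         # e.g. "pass", "fail", "blocked"
            "defect_id": defect_id,   # blank unless the run raised a defect
        })
```

Because the log is plain CSV, the base and calculated metrics described below can be derived from it with a pivot table or a few lines of script.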
There are several types of metrics to track, including base metrics, calculated metrics, and S-curves:
Base Metrics
Base metrics constitute the raw data gathered by a test analyst throughout the testing effort. These metrics are used to provide project status reports to the test lead and project manager, and also feed into the formulas used to derive calculated metrics. Every project should track the test metrics in Table 1.
There are other base metrics that can and should be tracked, but this list is sufficient for most test teams that are starting a metrics program.
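Since Table 1 is not reproduced here, the counters below are a typical set of base metrics rather than the article's exact list; treat the field names as assumptions. A minimal sketch of the raw counts a test analyst might maintain:

```python
from dataclasses import dataclass

# Assumed base metrics: raw counts updated as testing proceeds.
# Table 1 in the article may define a different or longer list.
@dataclass
class BaseMetrics:
    total_cases: int = 0       # test cases planned for the effort
    executed: int = 0          # test cases run so far
    passed: int = 0            # executed cases that passed
    failed: int = 0            # executed cases that failed
    blocked: int = 0           # cases that could not be run
    defects_raised: int = 0    # defects logged during execution
```

Keeping the base metrics as plain counts like these makes them simple to track and leaves all interpretation to the calculated metrics layer.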
Calculated Metrics
Calculated metrics convert the base metrics data into more useful information. These types of metrics are generally the responsibility of the test lead and can be tracked at many different levels (by module, tester, or project). The calculated metrics in Table 2 are recommended for implementation in all test efforts.
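Table 2 is likewise not reproduced here, so the formulas below are common examples of calculated metrics (execution progress, pass rate, defect density) rather than the article's specific list:

```python
def percent_executed(executed, total_cases):
    """Percent of planned test cases executed so far."""
    return 100.0 * executed / total_cases if total_cases else 0.0

def pass_rate(passed, executed):
    """Percent of executed test cases that passed."""
    return 100.0 * passed / executed if executed else 0.0

def defect_density(defects_raised, executed):
    """Defects raised per executed test case."""
    return defects_raised / executed if executed else 0.0
```

Because each formula takes only base-metric counts as input, the same functions can be applied at any level -- per module, per tester, or for the whole project -- by feeding in the counts for that slice.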
The S-Curve
When charting cumulative test case passes and defects, the graph commonly takes on an "S" shape. This shape is a natural consequence of the testing process, which starts slowly at the beginning of test execution because of environment, application, and data setup issues; picks up pace as testing continues, fewer issues are discovered, and more fixes are released to test; and finishes slowly as the most difficult defects are fixed and lower-priority test cases are executed.
It is useful to include the S-curve as part of a test metrics program because it gives immediate visual feedback on the progress of the test effort and illustrates the risks involved in releasing the application to production. It is helpful to develop two separate graphs, each displaying a theoretical curve to compare against the actual curve. The first graph tracks test case passes and charts the progress of the test effort (Figure 1). The other tracks defects and charts the risk of release (Figure 2). The degree to which the actual curve follows the theoretical curve becomes the basis for measuring test progress and risk of release.
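The article does not prescribe a formula for the theoretical curve; a logistic function is one common way to model it, and the sketch below uses that assumption to generate a theoretical curve and measure how far the actual cumulative passes deviate from it:

```python
import math

def theoretical_s_curve(total_cases, days, midpoint=None, steepness=0.5):
    """Expected cumulative passes per day, modeled as a logistic curve.

    The logistic model and the steepness default are assumptions for
    illustration; fit them to your own historical data if available.
    """
    mid = midpoint if midpoint is not None else days / 2
    return [total_cases / (1 + math.exp(-steepness * (d - mid)))
            for d in range(1, days + 1)]

def progress_gap(actual_cumulative, theoretical):
    """Per-day gap between actual and theoretical cumulative passes.

    Negative values mean the test effort is behind the planned curve.
    """
    return [a - t for a, t in zip(actual_cumulative, theoretical)]
```

Plotting the two series from `theoretical_s_curve` and the run log's actual cumulative passes on one chart reproduces the Figure 1 comparison; doing the same with cumulative defects gives the Figure 2 risk view.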