Plan Your Testing


Dr. Dobb's, April 1999: Features: Plan Your Testing

When a project runs late, testing often gets cut so you can make the scheduled delivery. But what about the inevitable problems that occur once the product is used? Taking time to understand why these problems happen is the first step toward correcting them.

There’s a familiar sequence of events behind this all-too-common scenario: The project manager defines programming tasks, estimates the duration and effort needed to perform them, and schedules them on the project plan. Then, he or she schedules a similar amount of time for testing the programs, slotted just before the delivery date. Then, reality strikes. Coding takes longer than expected, and testing gets pushed back. You also must squeeze in additional time to test the unplanned coding. The delivery date remains the same, however, and customers are left to find the problems that testing should have found.

Real Causes

Let’s analyze why this happens. The development side of the house drives most software projects. As such, the project plan concentrates on development activities. Activities like designing and writing programs are considered important. Other activities, such as testing, often aren’t. When the project crunch occurs, it’s easy to curtail those activities. Besides, the eternally optimistic developer always assumes the code will be right.

Similarly, the project manager likely applies proven planning techniques to the development activities. At a minimum, the project manager will identify every module that must be developed or modified. He or she will determine each module’s required resources and skill levels, and will estimate the effort and duration. Devoting such attention to defining detailed tasks builds confidence in and commitment to them. It would be inconceivable to omit developing a module that’s necessary for the project.

Compare this approach with the far less rigorous way testing is typically planned. At best, project managers will use a rule of thumb to “ballpark” the time needed for testing each module. For example, they assume testing a module will take about the same amount of time as coding it. This method provides some degree of reliability, because the project manager presumably applied sound practices to estimate the coding time.

However, by its nature, a rule-of-thumb estimate is given less credence than a seemingly well-thought-out estimate for coding time. In addition, the module-by-module allocation of time for testing tends to apply to unit testing. It can overlook or shortchange other necessary testing such as integration or load tests. The effect is a diminished commitment to testing estimates.

Sometimes project managers simply lump all testing into one testing task. These lump estimates require even less commitment. Moreover, the time allocated for such lumped testing tasks is often just the time left after developers complete “important” programming tasks. Clearly, in such cases, there is not even a pretense of credibility or commitment to the testing tasks.

Real Effects

When testing is cut short to meet a deadline, whatever bugs testing would have found will afflict users instead. It will cost considerably more to fix these defects than if they were found and fixed before the release. Moreover, all tests are not created equal—as project teams that can’t distinguish and selectively allocate resources to the most important tests will discover. These teams spend most of their test time on less important tests.

In general, the more important tests—such as integration tests, which confirm interfaces between two or more modules—come after you complete unit tests. Load tests, which demonstrate the system’s ability to handle peak volumes, and system tests are usually performed last. The errors these tests reveal are often the most serious, yet they’re the most likely to be crunched.

What’s Needed

To deliver quality projects on time and within budget, you need to reduce the number and severity of defects. Otherwise, the defects necessitate extensive unplanned rework, which in turn increases project cost and duration—and the number of errors that persist after the release. The biggest benefits come from methods that ensure requirements and designs are accurate and complete. However, this article focuses on methods to improve testing the delivered code. To be effective, your method should:

• Identify specific tests that need to be performed so that you can reliably estimate time and resources to carry out those tests.

• Build commitment to ensure the tests are carried out.

• Detect previously undetected causes of major problems.

• Distinguish and prioritize the most important tests.

• Ensure that important tests are run, and run early.

A Solution

My consultancy’s most effective method to achieve these objectives is a special type of test planning we call “proactive” testing, because it lets testing drive development.

There are three key characteristics of how we “proactively” plan tests: First, planning tests before coding; second, planning tests top-down; and third, planning tests as a means to reduce risks.

Planning tests before coding If you create test plans after you’ve written the code, you’re testing that the code works the way it was written, not the way it should have been written. Tests planned prior to coding tend to be more thorough and more likely to detect errors of omission and misinterpretation. It takes the same amount of time to write down test plans no matter when you do it. However, writing them first can save you considerable time down the road.

When test plans already exist, you can often carry out tests more efficiently. First, there’s no delay. You can run tests as soon as the code is ready. Second, having the test plan lets you run more tests in the same amount of time, because you are using your time to run the tests on the plan instead of interrupting your train of thought to find test data.

Moreover, planning tests first can help developers write the code right the first time, thereby reducing development time. For example, take this simple specification: The operator enters a customer number at a particular location; the program looks up the customer in the database and displays the customer name at a specific location. Could a competent programmer code that wrong? Of course. What happens when you add the following information: Customer Number C123 should be displayed as “Jones, John P.” That’s a test case, and it helps the developer code the specification correctly.
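To make that concrete, here is a minimal sketch of a test written before the code (the function name, the in-memory customer table, and the framework-free style are illustrative assumptions, not from the article):

```python
# A unit test written down before the code exists, capturing both the
# spec and the test case: customer C123 must display as "Jones, John P."
# The lookup table stands in for the database (an assumption for this sketch).

CUSTOMERS = {"C123": "Jones, John P."}

def display_customer_name(customer_number):
    """Look up a customer and return the name to display."""
    return CUSTOMERS[customer_number]

def test_display_customer_name():
    # The spec alone ("display the customer name") leaves room for error;
    # the concrete expected value pins the behavior down for the developer.
    assert display_customer_name("C123") == "Jones, John P."

test_display_customer_name()
```

The developer who receives the test case along with the specification knows exactly what output counts as correct before writing a line of code.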

Rework during development accounts for up to 40% of development time. Difficulty translating design specifications into appropriate code is a major cause of rework. Having the test plans along with the specifications can reduce rework, such as fixing and retesting, significantly.

Top-down planning Planning tests top-down means starting with the big picture and systematically decomposing it level by level into its components. This approach provides three major advantages over the more common bottom-up method.

First, systematic decomposition reduces the chance that you will overlook any significant component. Since each lower level simply redefines its parent in greater detail, the process of decomposition forces confirmation that the redefinition is complete. In contrast, it is easy to overlook large and significant components with bottom-up planning.

Second, structuring the test design lets you build and manage the tests more easily and economically. The test structure can reuse and refine the software’s own structure, so you can test it with less effort and less rework. It also lets you apply reviews and automated tools where they will be most effective. Third, top-down planning creates the view you need to allocate resources selectively. That is, once the overall structure is defined, the test planner can decide which areas to emphasize and which to give less attention. My consultancy uses risk analysis at each successive level to drive the test plan down to greater detail. For more important areas, define more tests. Figure 1 illustrates this structure.

Figure 1: Top-Down Test Plan Structure
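One way to sketch such a top-down structure (a hypothetical illustration; the levels, items, and risk ratings are assumptions, not the article's) is as a nested outline in which each level refines its parent and risk drives how much detail sits below each node:

```python
# A top-down test plan sketched as a nested structure.  Each level
# decomposes its parent; the "risk" ratings (illustrative values)
# tell the planner where to add more detail and which items to test first.
master_test_plan = {
    "unit tests": {
        "vendor maintenance": {"risk": "high"},
        "invoice entry":      {"risk": "high"},
    },
    "integration tests": {
        "vendor + invoice workflow": {"risk": "medium"},
    },
    "special tests": {
        "load test":      {"risk": "medium"},
        "usability test": {"risk": "low"},
    },
}

def high_risk_items(plan):
    """Collect the test items marked high risk, to schedule first."""
    found = []
    for level, items in plan.items():
        for name, attrs in items.items():
            if attrs.get("risk") == "high":
                found.append((level, name))
    return found

print(high_risk_items(master_test_plan))
```

Because each node simply redefines its parent in more detail, walking the structure also makes any missing component conspicuous.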

Testing as a means to reduce risks Testing is the primary means for reducing risk. Therefore, test planning starts with risk analysis to identify and prioritize the risks applicable to the particular test level. At each level, and for each test item, ask the following set of questions to identify risks:

• What must be demonstrated to be confident it works?

• What can go wrong to prevent it from working successfully?

• What must go right for it to work successfully?

Those risks define the components that must be tested at the next level down; and the test plan defines a strategy for accomplishing testing so that higher-priority risks are reduced by testing earlier and more often. The objective is first to do the minimum development necessary to test the high-risk items early. Ensure that key software elements will function properly before building the other elements that depend on them. This way, if the tests reveal problems in the high-risk items, you don’t have to throw out or rebuild a lot of software. When you eliminate high risks, you have the time to add and test the lower-risk code more thoroughly.
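As a hedged sketch of how the answers to those questions might drive priorities (the likelihood-times-impact scoring is an assumption, not the article's scheme), each identified risk can be rated and sorted so the riskiest items are coded and tested first:

```python
# Rank identified risks so higher-priority ones are tested earlier and
# more often.  Likelihood and impact on a 1-3 scale are illustrative;
# any consistent rating scheme serves the same purpose.
risks = [
    ("can't find vendor already on file",    2, 3),
    ("duplicate invoices entered",           2, 3),
    ("can't modify vendor data",             1, 2),
    ("awkward workflow causes entry errors", 3, 3),
]

def prioritize(risks):
    """Sort risks by likelihood * impact, highest first (stable sort)."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in prioritize(risks):
    print(f"{likelihood * impact}: {name}")
```

The ranked list then maps directly onto the test plan: the top entries get their own early tests, the bottom entries share later, lighter ones.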

Master Test Planning

The top level is the Master Test Plan. This is a management document equivalent to, and eventually merged into, the project plan. The Master Test Plan defines the set of Detailed Test Plans for unit tests, integration tests, and special tests which, taken together, will ensure the entire project works. Unit tests deal with the smallest executable pieces of code, usually programs or objects. Ordinarily, developers unit test their own code. Integration tests exercise combinations of units and of other integrations. Since work from multiple developers is often involved, someone other than a single developer performs integration tests. Someone other than the developer also typically performs special tests such as load tests, usability tests, and security tests. System tests exercise the complete system end-to-end and are usually performed by someone other than the developer.

While many of you would say you already include such tests in your project plans, I find that “proactive” test planning usually creates a markedly different set of tests from traditional reactive testing.

“Proactive” test planning produces different results. Let’s use an accounts payable system as an example. The system design would include modules for maintaining a vendor file and entering orders, but thorough analysis would identify at least the following project-level risks related to these functions:

• Can’t find the appropriate vendor when the vendor is already on file

• Can’t add the vendor to the file appropriately

• Can’t modify vendor data

• Inadequate workflow leads to errors when you enter an invoice for a new vendor or a vendor whose data must be modified

• Can’t enter or modify necessary invoice data accurately

• Can’t identify multiple vendor numbers assigned to one vendor, or has difficulty selecting the proper vendor number

• Can’t transfer an invoice successfully from one vendor number to another

• Difficulty using the invoice entry or maintenance function, thereby wasting time or increasing errors

• Enters duplicate invoices

• Enters invoices for an unauthorized or discontinued vendor.

Traditional project planning would overlook some of these risks, yet they still could occur. You need to test each risk. Also, you want to split out the higher risks so you can code and test them earlier. For example, you might define transferring an invoice as a high risk warranting a separate unit test. Similarly, you might consider some of these risks, such as the ones involving ease of use and procedural workflow, to be higher-priority because of the large impact they could cause. If you were to build both modules and then discover higher-priority risks, it would probably necessitate significant delay and rework.

You can reduce the impact of these higher-priority risks by performing unit tests on them early. You don’t need to code the entire modules to carry out these unit tests. Instead, code only the portions of the modules necessary to test the high risks. You may also need stubs or extra scaffolding code to test parts of a module. If the risks don’t materialize, you could code the rest of the module. If the risks do show up, you need to redesign the modules before coding; but the total time and effort would be less than if you initially coded the full modules and then recoded them.
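A minimal sketch of that approach (all names here are hypothetical): stand in for the vendor file with a stub so the high-risk invoice-transfer logic can be unit tested before the full modules exist:

```python
# Testing the high-risk "transfer invoice between vendor numbers" logic
# before the full modules are built, using stub scaffolding for the
# vendor file.  The class and data are illustrative, not from the article.

class StubVendorFile:
    """Scaffolding: just enough vendor data to exercise the transfer."""
    def __init__(self):
        self.invoices = {"INV-1": "V100"}   # invoice -> vendor number
        self.vendors = {"V100", "V200"}

    def transfer_invoice(self, invoice, new_vendor):
        """Move an invoice to another vendor number, validating the target."""
        if new_vendor not in self.vendors:
            raise ValueError("unknown vendor number")
        self.invoices[invoice] = new_vendor

def test_transfer():
    f = StubVendorFile()
    f.transfer_invoice("INV-1", "V200")
    assert f.invoices["INV-1"] == "V200"

test_transfer()
```

If this early test exposes a design flaw in the transfer logic, only the stub and a sliver of code need reworking, not two finished modules.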

Note, by the way, that the risk analysis identifies the need to test manual workflow procedures. Traditional testing probably wouldn’t identify such a test because the risk doesn’t depend on program coding. However, it is still a major risk to project success and needs to be tested. Risks that you don’t identify aren’t tested, and they often aren’t coded for appropriately, but they do show up.

Lower-Level Test Plans

For each Detailed Test Plan (unit, integration, and special), ask the same three risk analysis questions: What must be demonstrated to be confident the component works? What can go wrong and prevent it from working? What must go right for it to work? Unit test planning will ensure the following risks are tested:

• Can’t identify multiple vendor numbers assigned to one vendor or select one from several vendor numbers for that vendor.

• Doesn’t identify other vendor numbers for a vendor when you add that vendor.

• Doesn’t display all of the vendor’s numbers when you enter an invoice.

• Doesn’t let you select any of the vendor’s numbers when you enter an invoice.

• Doesn’t let you correct the selected vendor number when you enter an invoice.

You need a Test Design Specification for each of these risks. For example, to satisfy the first Test Design Specification, you would need to demonstrate the following:

• The other vendor numbers are identified for a vendor when you add that vendor.

• The other vendor number is for the same vendor name or address.

• The vendor’s other number doesn’t share vendor name or business address where one vendor is a subsidiary of the other.

• There are no other vendor numbers for the vendor.

• There is one other vendor number for the vendor.

• The vendor has more than one other vendor number.

For each risk, you must have one or more test cases that, taken together, demonstrate that the risk doesn’t occur. Each test case should consist of a specific input and output. You could apply a Test Design Specification to more than one Detailed Test Plan, and a test case to more than one reusable Test Design Specification.
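Concretely (a hypothetical sketch; the vendor data, function, and cases are illustrative), each test case under a Test Design Specification pairs a specific input with its expected output, and the same case can be listed under more than one plan:

```python
# Test cases for the "identify other vendor numbers" design specification,
# each a (description, input, expected output) triple.  The vendor table
# and expected results are illustrative assumptions.
VENDOR_NUMBERS = {"Acme Corp": ["V100", "V350"], "Baker Ltd": ["V200"]}

def other_vendor_numbers(vendor_name, current_number):
    """Return the vendor's other numbers, empty if none."""
    numbers = VENDOR_NUMBERS.get(vendor_name, [])
    return [n for n in numbers if n != current_number]

test_cases = [
    ("no other numbers",   ("Baker Ltd", "V200"), []),
    ("one other number",   ("Acme Corp", "V100"), ["V350"]),
    ("vendor not on file", ("Unknown",   "V999"), []),
]

for description, args, expected in test_cases:
    assert other_vendor_numbers(*args) == expected, description
```

Writing the cases as data rather than prose makes the "specific input and output" requirement checkable and the cases reusable across Detailed Test Plans.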

Planned Testing Return on Investment

Top-down test planning based on pre-coding risk analysis can reduce development time and defects. You can test early to ensure that the highest risks do not occur. Moreover, this plan lets you apply resources selectively, to gain the most “bang for your buck.”

