Scaling Test-Driven Development

Test-driven development is an important agile software development technique that lets you specify and validate functionality at a finely detailed level.


January 03, 2008
URL: http://drdobbs.com/architecture-and-design/scaling-test-driven-development/205207998

Test-driven development (TDD) is a common agile software development technique. Although the name implies that TDD is a testing technique, it is actually a programming practice that has both specification and validation aspects. With TDD, you specify your software in detail on a just-in-time (JIT) basis via executable tests that are run in a regression manner to confirm that the system works to your current understanding of what your stakeholders require. Agile teams have been very successful applying TDD in practice, although a recent study has shown that teams are struggling to scale TDD to more complex situations. This is because TDD, by itself, doesn't scale—you need to apply other techniques to address the concerns that TDD doesn't cover.

TDD is the combination of test-first development (TFD) and refactoring. With TFD, you write a single test and then you write just enough software to fulfill that test. You can do TFD at the requirements level by writing a single customer test, the equivalent of a function test or acceptance test in the traditional world, and at the design level with developer tests. Refactoring is a technique where you make a small change to your existing code to improve its design without changing its semantics. Examples of refactorings include moving an operation up the inheritance hierarchy and renaming an operation in application source code; aligning fields and applying a consistent font on user interfaces; and renaming a column or splitting a table in a relational database. I have an overview of TDD at www.agiledata.org/essays/tdd.html, and the Jolt Award-winning book Test Driven Development: A Practical Guide (Prentice Hall, 2003) by Dave Astels is an even more detailed resource.
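To make the refactoring half of TDD concrete, here is a minimal before-and-after sketch in Java; the Order class and its fields are invented for illustration, and the two versions are shown side by side rather than as a single compilable file. The point is that the change improves the design (an intention-revealing name) while leaving the behavior untouched, so all existing tests still pass.

```java
// --- Before: a vague operation name obscures what the code calculates.
public class Order {
    private double subtotal;
    private double taxRate;

    public double calc() {             // calc what, exactly?
        return subtotal * taxRate;
    }
}

// --- After a rename refactoring: identical semantics, better design.
// Every caller of calc() is updated to calculateTax() in the same step.
public class Order {
    private double subtotal;
    private double taxRate;

    public double calculateTax() {     // intention-revealing name
        return subtotal * taxRate;
    }
}
```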

Figure 1 depicts the process of implementing a new feature via a test-driven approach. It starts when you pull a new feature off your existing work stack to implement. You begin by formulating a customer test that specifies one aspect of the detailed requirements for that feature. For example, let's assume that you are implementing the functionality to calculate the discount for an order and that you already have basic order-handling functionality in place. The first customer test that you would implement might explore what to do when a discount shouldn't be calculated; in this case the customer test would involve a customer who is not "elite" making a small order. To implement this functionality, you might first write a developer test specifying that the discount on an order with no items in it is zero, along with the supporting production code to do that calculation. Once that ran successfully, you'd add a test to verify that the discount for an order with a total of $999.99 is also zero, reflecting the business rule as it was described to you by your stakeholders. You'd then add a test validating that if the order is placed by a customer with a bad payment history, the discount is zero, and so on until you've got the "when not to give a discount" logic implemented.

The next step in implementing the discounting functionality would be to write a customer test that specifies what to do when a good customer places an order between $1000 and $5000 (give them a 1 percent discount) and sufficient developer tests to drive the implementation of the source code. You would proceed iteratively, writing sufficient customer tests and the supporting developer tests and production code to implement the complete feature. In parallel to this test-first programming, you would refactor any relevant production code that you find isn't the best design possible for what you are currently implementing. Once the feature is implemented, you'd pull another off the stack and begin again.


Figure 1: Implementing a single feature via TDD.
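To illustrate, here is what the first few developer tests in that progression might look like as JUnit 4 tests. This is a sketch only: the Order and Customer classes, their constructors, and calculateDiscount() are hypothetical names invented for this example, and in true test-first style they wouldn't exist yet when the first test is written.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical developer tests driving the discount logic described
// above. Each test is written before the production code that makes
// it pass; together they pin down the "when not to give a discount"
// rules and the first discount bracket.
public class DiscountTest {

    @Test
    public void emptyOrderGetsNoDiscount() {
        Order order = new Order(new Customer(true)); // true = good payment history
        assertEquals(0.00, order.calculateDiscount(), 0.001);
    }

    @Test
    public void orderTotalingUnder1000GetsNoDiscount() {
        Order order = new Order(new Customer(true));
        order.addItem("widget", 999.99);
        assertEquals(0.00, order.calculateDiscount(), 0.001);
    }

    @Test
    public void badPaymentHistoryGetsNoDiscount() {
        Order order = new Order(new Customer(false)); // false = bad payment history
        order.addItem("widget", 2500.00);
        assertEquals(0.00, order.calculateDiscount(), 0.001);
    }

    @Test
    public void goodCustomerOrderBetween1000And5000GetsOnePercent() {
        Order order = new Order(new Customer(true));
        order.addItem("widget", 2500.00);
        assertEquals(25.00, order.calculateDiscount(), 0.001); // 1% of $2500
    }
}
```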

Not So Shocking...

TDD is incredibly popular, probably because it works very well in practice—so well that it has a near-religious following within the Agile community. DDJ's 2007 Agile Adoption Survey showed that customer testing is the fifth most valuable practice for agile teams and that developer-level TDD is sixth. When it comes to work products, developer tests are the third most valuable, after working software and source code, and customer tests are the sixth. The point is that there is broad-based support for TDD among Agile developers—that we're actually doing what we claim to be doing.

However, TDD isn't perfect. At the CEE-SET conference in Poznan, Poland in October 2007, a paper written by Maria Sinaalto and Pekka Abrahamsson entitled "Does Test Driven Development Improve the Program Code? Alarming Results from a Comparative Case Study" was presented. This paper summarized a research study into the effectiveness of TDD, and it found that TDD may produce less complex code but that the overall package structure may be difficult to change and maintain. Although other studies into the effectiveness of TDD had looked at code-level issues, and found that TDD helps to increase quality, this was the first one to look at higher-level architectural issues. This new study appears to show that TDD is effective for "design in the small," but that we need another approach for "design in the large." These results aren't shocking at all because the focus of TDD is on detailed specification and validation, not on higher-level issues. Clearly, we need to look beyond TDD for a solution to this problem.

Specification at Scale

Although TDD is great at specifying code at a fine-grained level, tests simply don't scale to address higher-level business process and architectural issues. Agile Model Driven Development (AMDD) enables you to scale TDD through initial envisioning of the requirements and architecture, as well as just-in-time modeling at the beginning of, and during, construction iterations. The principles and practices of the Agile Modeling methodology describe lean strategies for gaining the benefits of modeling, which are to think things through and to communicate ideas, without the pain of excessive documentation.

Yes, I realize that modeling is one of the dirty words in the Agile community, and any discussion of up-front modeling early in the agile lifecycle is tantamount to heresy. Yet, when you ask people in private what they actually do on real-world Agile projects, you discover that the vast majority of us are apparently heretics. DDJ's 2007 Agile Adoption Survey found that 77 percent of all Agile teams do a bit of up-front requirements and architectural envisioning to identify the initial scope and technical strategy respectively. Furthermore, it found that whiteboard modeling is the fourth most valuable practice and that whiteboard sketches are the fourth most valuable work product.

To scale requirements-level TDD, you must recognize that customer tests are very good at specifying the details, but not so good at providing overall context. High-level business process models, conceptual domain models, and use cases are good at doing so, and these work products are often created as part of your initial requirements envisioning and iteration modeling activities. Process descriptions and use cases can easily reference customer tests, or even be written as customer tests that in turn invoke other customer tests, to provide the business context required to understand the overall requirements for your system. The details are still captured in the form of tests, but the models provide the high-level context required to think through critical scoping and planning issues.
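One way to realize the idea of customer tests that invoke other customer tests is a suite-style test that mirrors a use case, as in this minimal JUnit 4 sketch; the use case and the step-level test classes are hypothetical names invented for illustration.

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A hypothetical process-level customer test: the use case "Place an
// Order" is expressed as a suite that invokes the finer-grained
// customer tests covering each of its steps. The suite supplies the
// business context; the details stay in the individual tests.
@RunWith(Suite.class)
@Suite.SuiteClasses({
    SelectItemsTest.class,        // step 1 of the use case
    CalculateDiscountTest.class,  // step 2: the discount rules above
    AcceptPaymentTest.class,      // step 3
    ConfirmShipmentTest.class     // step 4
})
public class PlaceAnOrderUseCaseTest {
    // Intentionally empty: the annotations wire the steps together.
}
```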

Similarly, to scale design-level TDD, you must recognize that developer tests are very finely grained but once again do not provide overall context. High-level architecture sketches created during your Iteration 0 envisioning activities help set your initial technical direction—a little bit of up front design work helps to avoid structural problems cropping up later in the lifecycle. During each construction iteration, you'll do more detailed design modeling, often sketching or using index cards to explore issues that tests aren't well suited for. Highly skilled developers will even use software-based CASE tools to do this sort of work, although the majority of developers will use IDEs such as Eclipse, Visual Studio, or Jazz.

Validation at Scale

The second aspect of TDD is validation, and it too could do with some scaling help. TDD is in effect an approach to confirmatory testing where you validate the system to the level of your understanding of the requirements. This is the equivalent of "smoke testing" or testing against the specification. This is clearly an important thing to do, but it isn't the whole validation picture. The fundamental challenge with confirmatory testing, and hence TDD, is that it assumes that stakeholders actually know and are able to describe their requirements. Yes, iterative approaches clearly increase the chance of this, but there are no guarantees. TDD further compounds the problem by assuming that developers have the skills to actually write and run the tests, skills that can clearly be gained over time but still must be gained.

So, to scale the validation aspects of TDD, you need to add investigative testing practices into your software process. The goal of investigative testing is to explore issues that your stakeholders may not have thought of, such as usability issues, system integration issues, production performance issues, security issues, and a multitude of others.

In my January 2007 column (Agile Testing Strategies, www.ddj.com/development-tools/196603549), I described how Agile teams approach testing, often having a small, independent investigative test team working in parallel to the development team. Your testing environment should simulate your production environment as much as possible. Figure 2 overviews this strategy, showing how the development team deploys the current working build into your testing environment. This should occur at least once an iteration because you want to keep the feedback cycle as short as possible.


Figure 2: Continuous investigative testing.

The independent testers don't need a lot of details; their goal is to break the system in ways that the development team hasn't thought about, so they don't need a detailed requirements specification from you. The only documentation that they might need is a list of changes since the last deployment so that they know what to focus on first, because new defects are most likely to have been introduced by those changes. The investigative testers will use complex, and often expensive, tools to do their jobs and will usually be very highly skilled people.

When the investigative testers find a potential problem, which might be missing functionality or something that doesn't appear to work properly, they write it up as a "change story." Change stories are basically the agile form of a defect report or enhancement request, and many teams use defect-tracking tools such as ClearQuest or Bugzilla to manage them. The development team treats change stories like requirements: they estimate the effort to address the change, ask their project stakeholder(s) to prioritize it accordingly, and then address it when it makes it to the top of their prioritized work-item list. In effect, any potential defect found via investigative testing becomes a known issue that is then specified and validated via TDD. Because TDD is performed in an automated manner to support regression testing, the investigative testers do not need to be as concerned about automating their own efforts and can instead focus on the high-value activity of defect detection.
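To see how a change story folds back into TDD, here is a sketch of a regression test written from a hypothetical change story; the story ID, the reported symptom, and the Order and Customer classes are all invented for illustration. The test is written first, fails against the current build, and passes once the fix is in, after which it runs with every regression run.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical regression test written from a change story reported by
// the investigative testers, e.g. "Discount is applied twice when an
// item is added after the discount has been calculated."
public class ChangeStory1701Test {   // story ID invented for illustration

    @Test
    public void discountIsNotAppliedTwiceAfterAddingAnItem() {
        Order order = new Order(new Customer(true)); // good payment history
        order.addItem("widget", 2500.00);
        order.calculateDiscount();                   // first calculation
        order.addItem("gadget", 100.00);             // order now totals $2600
        assertEquals(26.00, order.calculateDiscount(), 0.001); // 1% of $2600
    }
}
```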

TDD at Scale

Test-driven development (TDD) is an important agile development technique that enables you to specify and validate functionality at a finely detailed level. But it does not enable you to achieve these same goals at a higher level, so you need to adopt other, complementary techniques. The principles of Agile Modeling can be applied to address the high-level specification issues that TDD does not, and an independent test team working with your development team can address the testing concerns that TDD does not.
