Mission-Critical Development with XP & Agile Processes



Julius works at Hewlett-Packard and enjoys solving problems by writing software. He can be contacted at [email protected].


My employer used a large, mission-critical internal application to coordinate the work of many of its teams around the world. Distributed users defined interrelated data, then individually delivered it to a central location. When all data was coherent, defined, and delivered, it was processed by a back-end server. Because of the volume of data, that processing alone could take several days. Once completed, the system's output was distributed to users around the globe and the entire process was repeated.

Alas, the system, developed over a period of several years, was a spaghetti-like tangle of C-based applications and scripts that had grown into an unmanageable piece of software, difficult to improve and maintain. Finally, it became clear that a new, more generic and extensible system was needed: Management decided that replacement was in order. Furthermore, it was decided that the new system would be implemented in Java, and development was assigned to my team. At the outset, we decided to base our development process on Extreme Programming (http://www.extremeprogramming.org/) and adopt agile practices for our daily work. In this article, I describe those practices, starting with the development process itself, then focusing on the two practices that proved the most valuable.

The Development Process

The process we adopted to manage the project was based on team members' reading and understanding of Extreme Programming (XP), our personal experiences, and the need to balance the XP approach with traditional practices. Consequently, we divided the development process into several milestones:

  • Milestone 1: Define requirements and use cases.

  • Milestone 2: High-level design.

  • Milestone 3: Bootstrapping the project.

  • Milestone 4: First iteration with end-user deliverables.

  • Milestone 5: More user deliverables.

  • ...

  • Milestone N: Final user deliverables.

  • Milestone N+1: Final stretch.

  • Milestone N+2: System trial and dry run.

Each milestone defined an expected output and timeframe. No code was written in Milestone 1 or 2, since their objectives were defined as documents. Milestone 3 ensured that we had a development framework supporting Continuous Development, Integration, and Test (CDIT), a policy mandating that integration and testing happen concurrently with software development instead of waiting for the code to be finished and features to be complete.

Starting at Milestone 4, the system had to have some end-user functionality deployed to testers and users. All these milestones were defined in terms of end-user functionality. For example, in Milestone 4, system administrators could issue commands to create a processing cycle and change its properties. Of course, nothing was processed at this point, but it gave users their first exposure to the system. As in classic XP, we could ship our software (with a set of completed features) at any point after Milestone 4.

That was the theory. What we suspected—and what proved to be the case—was that we needed a few more steps. Another step (Milestone N+1: Final stretch) was designed to tie all loose ends, fix bugs, and prepare the system for use. Moreover, we added Milestone N+2: System trial and dry run. As with software of any complexity, users will do things that you can't predict. Therefore, in the dry run, we subjected the system to normal use before using it in production.

Milestone 1: Defining Requirements and Use Cases

At the beginning of the project, we spent a few weeks formally gathering the requirements and defining the basic use cases. Our objective for Milestone 1 was to create documents summarizing these findings. The team conducted extensive discussions with users and created a spreadsheet containing their comments. Each requirement had a simple structure with the fields ID, Requirement, User Visible?, Priority, Affected Users, and Comments.

This document proved extremely valuable later in the development. The key here was for us to understand and document user needs without spending too much time on details that would later change. As a result, we totaled only about 80 individual requirements. We needed a baseline to start with, but assumed that we did not need to know everything up front. We also assumed that our process could adapt to changes introduced later in the project. To further our understanding of the requirements, we created a Use Cases Document, in which each use case contained the fields ID, Use Case, Actors, Purpose, Input, Overview, Cross Ref, and Command. This let us clarify some system behaviors and properly identify different classes of users (actors). The use cases were also translated into a set of user-visible functionalities we called "commands." These commands represented a somewhat formalized functionality that we expected from the system. Of course, command input and behaviors changed during the project's lifetime, as both user and developer understanding matured.

All in all, we spent four weeks on Milestone 1, which in retrospect seems excessive. Half that time would have been sufficient to gain the same understanding of what the system should do.

Milestone 2: High-Level Design

Our objectives for Milestone 2: High-level design included:

  • Create a high-level design to carry us through the project, including a basic outline of subsystems and classes.

  • Define back-end database schema.

  • Define project tools and policies.

Since our project required remote access, we created a typical set of high-level subsystems:

  • Commands client, a thin client relaying requests to the server.

  • Command server for handling requests, threading, and dispatching.

  • Handlers/formatters for implementing commands and how they are presented.

  • Services, the application logic.

  • Domain objects, the application logical objects.

  • Value objects, the stateless/functionless carriers of application information.

  • Database access layer, the encapsulation of database access, ORM.

  • Tools and utilities for logging and other low-level operations.

Within these subsystems, we designed the fundamental classes and the overall flow of the system, and documented them in UML.
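
To make the layering concrete, here is a minimal Java sketch of how a command might flow from handler to service to value object; the class and interface names are hypothetical illustrations, not our actual classes:

// All names here are hypothetical illustrations of the layering.

// Value object: a stateless carrier of application information.
public class CycleInfo {
    private final String id;
    private final String state;
    public CycleInfo(String id, String state) { this.id = id; this.state = state; }
    public String getId()    { return id; }
    public String getState() { return state; }
}

// Service: application logic, sitting on top of the database access layer.
public interface CycleService {
    CycleInfo createCycle(String name);
}

// Handler: implements one user-visible command in terms of a service;
// a separate formatter would render the result for the client.
public class CreateCycleHandler {
    private final CycleService service;
    public CreateCycleHandler(CycleService service) { this.service = service; }
    public String handle(String name) {
        CycleInfo info = service.createCycle(name);
        return "Created cycle " + info.getId() + " [" + info.getState() + "]";
    }
}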

At this point, we also defined project tools and policies and committed ourselves to CDIT. We opted for common code ownership, decided on code standards, agreed to unit test everything and run tests twice a day, and required 100-percent passing of all tests.

The design phase lasted about four weeks, which—in retrospect—seems about the right amount of time. Still, it could have been shortened to two to three weeks. The fundamental design held up very well throughout the project and the policies helped tremendously. Looking back, if I were to change something it would be actually designing classes at this point—we probably spent too much time doing detailed class design and documenting it in UML.

Milestone 3: Bootstrapping the Project

Before writing code to define user features, we decided to bootstrap the development process. Our objective was to have everything ready for daily testing, coding, and CDIT. Source control was prepared and the development directory structure defined. We wrote an Ant build.xml file that contained all fundamental tasks—compile, test, deploy, and create JavaDoc. Most importantly, we set up a designated server to run our daily tests. On that server, we ran a nightly job that:

  • Built the complete system.

  • Deployed the system.

  • Ran all automated tests.

  • Analyzed results and e-mailed failed tests to team members, along with a log of the latest changes in the source-control system.

The really important part was that we did this before writing a single line of application code. Of course, at first there was only a phony test, but the process was in place.
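
A minimal build.xml along these lines might look like the following sketch; the target names and directory layout are illustrative, not our actual buildfile:

<!-- Illustrative sketch only; target names and paths are assumptions. -->
<project name="system" default="test" basedir=".">
  <property name="src"      location="src"/>
  <property name="test.src" location="test"/>
  <property name="build"    location="build"/>

  <target name="compile">
    <mkdir dir="${build}"/>
    <javac srcdir="${src}:${test.src}" destdir="${build}"/>
  </target>

  <target name="test" depends="compile">
    <junit printsummary="yes" haltonfailure="no">
      <classpath path="${build}"/>
      <batchtest>
        <fileset dir="${test.src}" includes="**/*Test.java"/>
      </batchtest>
    </junit>
  </target>

  <target name="deploy" depends="test">
    <!-- copy jars and scripts to the staging server (site specific) -->
  </target>

  <target name="javadoc">
    <javadoc sourcepath="${src}" destdir="docs/api" packagenames="*"/>
  </target>
</project>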

That bootstrapping phase lasted one week and was perhaps the single best investment we made during the project's lifetime.

Milestones 4 to N

Milestones 4 through N were for project development. In each milestone, we defined objectives based on user-visible functionality that testers and end users could exercise. All objectives were customer driven, just as Kent Beck preaches (see Extreme Programming Explained: Embrace Change; Addison-Wesley, 1999).

To be considered "finished," each command or component had to include both unit and functional tests. Our motto was: "Code that does not have tests to prove that it works, does not exist." Tests were developed at the same time as code was written, and all tests were run every day.

Once Milestone 4 was completed, testers/customers started using the system, giving us feedback about its usefulness and robustness. They often came back with new or changing requirements for us to adapt our software and refactor. Clearly, our development was really agile—we had to respond to changes in each milestone. Since we used CDIT, we could deliver our system to users at any point in time with new functionality. During milestones, we sometimes had several deployments to the testers within a single week.

These milestones lasted from two to five weeks, although in retrospect none should have lasted more than two weeks: the team was most efficient and focused when milestones were shorter.

Milestone N+1: Final Stretch

When the system was reasonably complete, we entered what we called the "final stretch." The objective in this milestone was to fix remaining defects and implement all small loose ends.

In theory, when we finished all customer-required functionality, the system should have been ready to ship. However, in practice, it turned out that there were quite a few missing pieces. I suspect that this would be the case in any complicated system and simply should be acknowledged in project planning. To keep focused in that phase, we adopted daily meetings inspired by the SCRUM methodology (http://www.controlchaos.com/). In these meetings, three questions were posed to everyone:

  • What have I done since the last meeting?

  • What am I going to work on next?

  • What keeps me from moving forward; what obstacles am I facing?

All in all, the final stretch lasted about two weeks.

Last Milestone: System Trial and System Dry Run

Since we were developing a mission-critical system, it was necessary to subject it to a dry run. Consequently, we deployed system clients to all users and turned the system on. We processed all the data, and verified that the results were identical to those generated by the old system. Since we trusted that our process produced quality software, we felt good about it and expected only a few issues. However, we still found problems and defects—more, in fact, than we had anticipated. The best part was that we could safely deal with them without introducing new bugs. After fixes or late refactoring, we ran the entire test suite. If it passed, we could redeploy the system with confidence. At the end of the dry run, the system (see Figure 1) was ready to go into production.

Best Practices

Looking back on the project, the practice that made the biggest difference was our approach to testing. Again, before the coding began, we set up a nightly process that ran all available tests, sending results to the entire team. This gave us a framework that would keep the team honest about product quality at all times.

In our Java-based project, we set up two separate but parallel source trees—one for sources and one for tests. We used JUnit (http://www.junit.org/) to write tests and Ant (http://ant.apache.org/) to drive them. All tests—both unit and functional—were written in Java. Since the tests and the sources they tested resided in the same package, the tests could easily access source classes and methods. As the project progressed, we kept adding about 100 nontrivial tests per month and targeted a 100-percent passing rate. Today, in the working system, about 60 percent of the code defines its functionality, and 40 percent is test code.
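
As a minimal sketch, assuming the JUnit 3.x conventions of the time and reusing the hypothetical names from the design sketch earlier, a test in the parallel tree might look like this:

package com.example.cycles;  // same package as the class under test

import junit.framework.TestCase;

// Lives in the test tree but shares the package of CycleService,
// so even package-private classes and methods are reachable.
public class CycleServiceTest extends TestCase {
    public void testCreateCycleAssignsIdAndStartsNew() {
        CycleService service = new InMemoryCycleService(); // hypothetical test double
        CycleInfo info = service.createCycle("Q1-run");
        assertNotNull(info.getId());
        assertEquals("NEW", info.getState());
    }
}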

From the beginning, when developers claimed in meetings that they had finished implementation of a feature, we asked for tests to prove it. Before long, we didn't have to ask because everyone was testing. The benefits of this mandated testing are obvious:

  • Instant gratification that something worked.

  • Better design—easily testable code is always more modular.

  • Showing others how to use the feature.

  • Safe refactoring, thereby maintaining product quality.

In short, we developed a culture in which we could brag about the number of tests. When developers came to work in the morning, they found results from the previous night's tests in their mailboxes. We would examine failing tests and fix them before starting work on new things. The hardest part was that as code changed, old tests had to be updated and maintained. But as the project grew more and more complicated, the value of our strict testing policy grew with it.

Another simple practice was our requirement to write tests when fixing defects that users or testers found. In such cases, we would first write a test that mimicked what the user did, run it, and watch it fail. Once it failed, we went to the code, fixed it, and reran the test. If it passed, we knew the bug had been fixed. Afterwards, we would run the entire test suite to ensure that the fix did not introduce defects in other parts of the system. This was a simple but very effective process.
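
For example, a regression test for a reported parsing defect might look like this sketch (CommandParser and the scenario are hypothetical):

import junit.framework.TestCase;

// Hypothetical regression test: first mimic what the user did and watch
// it fail, then fix the code, rerun it, and finally run the whole suite.
public class CommandParserRegressionTest extends TestCase {
    public void testQuotedArgumentKeepsEmbeddedSpaces() {
        CommandParser parser = new CommandParser();        // hypothetical class
        String[] args = parser.parse("create-cycle \"Q1 run\"");
        assertEquals("Q1 run", args[1]);
    }
}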

Start Testing Today

The simple testing practices I've described would instantly improve any software development process. First, you set up a framework to run tests automatically and make the results visible. Then, as you add new code, you add tests. When you fix defects, you add tests again. Before you know it, you will have a sizable suite that gives you constant feedback about the project's quality.

When we talk to other developers about our approach to testing, a comment we often hear is "But our software is special, so it is hard to test. It is GUI based, requires a database, needs a remote server, and so on." We faced the same problems in our project, but treated them as an integral part of the process. Overcoming problems made our system better designed and more modular. For example, our processing is invoked by requests issued from remote clients. To test it locally, we abstracted a remote interface and created two implementations—a real client and a facade for testing. It required a bit of work, but resulted in better design and a fully testable system. If you are developing in Java, Erik Hatcher and Steve Loughran's Java Development with Ant (Manning Publications, 2003) will help you set up Ant and JUnit for CDIT.
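
In outline, that abstraction looked something like the following sketch; the names are hypothetical, and only the shape matches what we did:

// Hypothetical sketch of the remote/local split; names are illustrative.

// The single interface the client-side code depends on.
public interface RequestChannel {
    String send(String command);
}

// Server-side entry point (hypothetical) that both implementations reach.
public interface CommandDispatcher {
    String dispatch(String command);
}

// Production implementation: relays the request to the remote command server.
public class RemoteRequestChannel implements RequestChannel {
    public String send(String command) {
        // ...open a connection to the command server and forward the request...
        throw new UnsupportedOperationException("remote plumbing omitted");
    }
}

// Test facade: invokes command handling directly, in-process, so functional
// tests can run locally without a server.
public class LocalRequestChannel implements RequestChannel {
    private final CommandDispatcher dispatcher;
    public LocalRequestChannel(CommandDispatcher dispatcher) { this.dispatcher = dispatcher; }
    public String send(String command) {
        return dispatcher.dispatch(command);
    }
}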

Daily Coding: From Peer Reviews to Pair Programming

Early in the project, we decided to adopt a policy of common code ownership, meaning that anyone could change any code as long as tests were passing. We also tried to encourage pair programming, but it didn't seem to catch on. Consequently, we started code reviews, in which we picked a piece of functionality somebody had developed and gave it to the team for review once a week. The owner would determine how much code we looked at, then everyone printed and examined it. The next week, we would meet for an hour and exchange remarks. This practice improved the code quality and also made developers more open to the idea of pair programming. After all, pair programming is like an instant code review by a peer sitting next to you.

After several peer-review sessions, some pair programming started happening spontaneously, and by the end of the project, most of the code was written in pairs.

Conclusion

After nine months of development, the system went into production on schedule. It held together well, with only a few defects in the initial production runs. Thanks to our process, we have complete confidence that we can fix bugs and add features without introducing new problems.

I've been involved in software development for nearly 15 years on projects that vary in scope, customer base, and size. Some of the projects were shrink-wrapped software, others internal tools, and some involved custom in-house solutions. Because we created quality software and had fun doing it, this project was one of the best experiences I've had in developing complicated software.

DDJ

