
CodeStore and Incremental C++


Dr. Dobb's Journal, December 1997

Lee is a researcher at IBM's Thomas J. Watson Research Center. He can be contacted at [email protected].


C++ programmers often complain about slow builds. The main reason for the time it takes to compile and link programs is that, although compilers have become faster, we're still stuck with a 20-year-old separate compilation scheme managed by the make utility. To address this bottleneck, several of us at IBM's Thomas J. Watson Research Center began work to improve "build" technology. Over time, we realized that big improvements would require big changes in C++ compilers and programming environments. We therefore teamed up with our compiler-writing colleagues in IBM's Toronto laboratory to build a C++ development system that became the basis for the edit-compile-debug-browse cycle in an upcoming version of IBM's VisualAge for C++.

The system we developed provides fast builds. An "initial" build, which takes only source code and produces target .EXE and .DLL files, corresponds to a UNIX "make clean all," or Visual C++ "build all." On the other hand, an "incremental" build takes source code, the previous build's target files, and any other state the system chooses to save, then produces the desired target files.

For an initial build, fast means that the total build time is comparable to or better than the best of the current compilers. For incremental builds, build time should be proportional to the impact of the source code changed since the last build. Changing a declaration in a header file should only require compiling affected functions (as opposed to all functions in all source files that include the header file). Changing the body of a noninline function should only require compiling that function; changing the body of an inline function should require compiling all of the function's callers. This kind of behavior would be a major win in moderate to large size projects, where most source files (.cpp files, for instance) tend to include header files.
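To make that granularity concrete, consider the hedged illustration below; the Carlot sources mentioned later are not reproduced at this point in the article, so the class body is invented purely for illustration.

// vehicle.hpp -- an invented fragment, used only to illustrate rebuild granularity
class Vehicle
{
public:
    int wheels() const { return itsWheels; }   // inline: editing this body should
                                                // recompile only wheels()'s callers
    double price() const;                       // noninline: editing its body in the
                                                // .cpp file recompiles only price()
private:
    int itsWheels;
};
// Adding a declaration here (say, void repaint();) should trigger recompilation only
// of the functions actually affected by the change, not of every .cpp file that
// happens to include vehicle.hpp.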

There is drudgery in C++: typing and maintaining multiple copies of essentially the same declaration; organizing header files; arranging header file inclusion to avoid circularities; organizing header files to minimize recompilation; and so forth. Although Listing One, for example, follows the usual rules of thumb for organizing code into files, it doesn't compile. You can fix it by adding forward declarations (class A in B.h, for instance) and moving the inline function definitions into separate header files. But this is a pain, especially in large projects.

Most of this work involves the programmer having to sort declarations for the compiler. This is backwards: Computers can sort. We want to eliminate this drudgery by providing orderless programming: You organize top-level declarations into files in any order. Even the concept that one declaration appears "before" or "after" another declaration is meaningless. You don't need to declare a function before it can be called or define a class before it is used. The code of Listing One can be organized as in Listing Two. There are no forward declarations, no macro guards, and no #includes.

Writing and maintaining compilation dependencies for make and keeping them correct is another kind of drudgery. Although automatic dependency generators exist, they are too slow to use all the time. We eliminate the problem by using a configuration file to specify what to build and letting the system figure out how to build it. Listing Three is a simple configuration file, which specifies compiling and linking C++ source to produce an executable file. We don't say that vehicle.obj depends on vehicle.cpp, vehicle.hpp, and so on. In fact, we make no distinction between header and source files, and the system produces no object files because the target is an executable file.

Conventional Compilation Architecture

An unconventional architecture was required to achieve our goals. A conventional compiler compiles a single C++ source file (that is, a .cpp file) into a single object file (an .obj file). When a source file is compiled, the preprocessor produces a stream of tokens from both the source file and the included header files. Successive stages of the compiler transform and process these tokens, ultimately producing machine code, which is written into the object file. If you enable debugging, the compiler gathers symbol information and stores it in the object file. If you enable browsing, the compiler also produces one or more browse files that contain additional information such as class hierarchy and variable uses. The browsing tools of the programming environment use this information to help you navigate through your code.

This compilation process is repeated once per source file to produce one object file per source file. Then the linker combines the resulting object files to produce the target file. This scheme is inefficient for several reasons:

  • Header files included by multiple source files are processed repeatedly. We need an architecture that avoids repeated processing of header files, without requiring large compilations for simple changes.
  • Changing any code in a header file usually causes many source files to be recompiled because make recompiles all files that include the modified header file. We need a finer-grained analysis.
  • While using debuggers and browsers, programmers typically access only a fraction of the symbol information the compiler produces. Yet the compiler must write to files all information that you might potentially access, which is inefficient in both compile time and disk space. We need an architecture that allows full access to the information needed for debugging and browsing, without the cost of saving information that is never used.
  • Template instantiation is currently either expensive or awkward. We need an architecture that can manage template instantiation on a project-wide basis, while avoiding the expense of creating the same template instance multiple times.

Development System Architecture

The fundamental problems with the conventional architecture are that individual source files are processed independently and the file-level granularity of recompilation is too coarse. We addressed these problems with a project-wide program representation, through which all components of our system access and store information about the user's code; see Figure 1. You can think of the program representation, which we call the "CodeStore," as a symbol table shared by all of the system's components.
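The article doesn't describe the CodeStore's interfaces, but a minimal sketch of the idea, with invented names, might look like this: a single project-wide table of declarations that every tool reads and writes, in place of per-file symbol tables and object files.

#include <map>
#include <string>
#include <vector>

// Invented sketch only: one project-wide table of declarations, shared by the
// compiler, incremental linker, debugger, and browser.
enum class DeclState { Declared, Defined, Invalid };

struct DeclEntry
{
    std::string qualifiedName;      // e.g. "B::f"
    DeclState state;
    std::vector<int> dependents;    // ids of entries that depend on this one
};

class CodeStore
{
public:
    // Enter a name once for the whole project, or find its existing entry.
    int addOrFind(const std::string& name)
    {
        auto it = index.find(name);
        if (it != index.end())
            return it->second;
        int id = static_cast<int>(entries.size());
        entries.push_back({name, DeclState::Declared, {}});
        index.emplace(name, id);
        return id;
    }
    DeclEntry& entry(int id) { return entries[id]; }

private:
    std::map<std::string, int> index;   // qualified name -> entry id
    std::vector<DeclEntry> entries;
};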

Despite its name, the CodeStore does not hold the user's source code. Source code is held in files and the system is responsible for keeping the CodeStore synchronized with the source files, including errors. This is a crucial design point: It means that you can use your existing infrastructure for managing source files -- your favorite editor, source code control system (RCS, PVCS, and the like), search tool (such as grep), and so on. The synchronization process is called "incorporation" because changes in the source files are incorporated into the CodeStore. In fact, you can think of the CodeStore as a cache -- it enables fast processing, but the system can reconstruct the CodeStore's contents from the source files. The CodeStore also persists when you shut down the system.

The information in the CodeStore subsumes the precompiled headers used in many conventional compilers. Moreover, the CodeStore works at declaration granularity, not header-file granularity. With precompiled headers, any change in any header that has been precompiled invalidates the precompiled header; with the CodeStore, only the declarations that are actually changed (and declarations affected by the changes) are invalidated. This means that our system is effective even when headers are undergoing frequent change, as is typical during a development project. Also, the CodeStore does not impose any ordering requirement on the inclusion of header files.

The CodeStore also solves the template instantiation problems prevalent with conventional compilers. Since the CodeStore contains information about the entire target (.EXE or .DLL) being generated, it is relatively straightforward for the system to know which templates need to be instantiated, to instantiate them only once, and to update the instantiations automatically when changes affect the instantiations.

Incorporation is the fundamental process of the system, subsuming make, compilation, and linking. It is the incremental build process. A work queue drives incorporation: The entire process consists of processing items on the queue. The first step of incorporating the code in Example 1 (the text refers to the numeric superscript labels) is to put the configuration file (not shown) on the work queue. The configuration file is processed to determine which source files participate in the project, and each file is placed on the work queue. In this example, there is only one source file. As each source file is taken off the queue, its time stamp is checked to see if it has changed since the last build. If it has, the source file is macro processed and partitioned by brace matching into source regions that roughly correspond to top-level declarations. The tokens of each source region are stored in the CodeStore and the source regions are put on the work queue.

Processing continues by removing the first item -- source region 1 -- from the work queue and attempting to parse it. At position 1, A is recognized as a class and is entered into the CodeStore. When the parse reaches position 2, it fails because B is not known in the CodeStore. The parse is abandoned and source region 1 is put back on the queue. Then, source region 2 is parsed. When position 3 is reached, B is entered into the CodeStore. The declaration of B::f(A&) succeeds at position 4 because A is already known. At position 5, the parse of B is complete and its CodeStore entry is marked as a complete class definition. Next, source region 3 is parsed. The parse fails because NotThere is unknown in the CodeStore, and source region 3 is put back on the queue.

At this point, we have attempted to parse the three source regions. Two of the parses failed, but we made progress since new entries were made in the CodeStore. Therefore, we again attempt to process items on the work queue, starting with source region 1. At position 6, the parser again attempts to add A to the CodeStore as a class; A is already there and from the same source location, so this is not a duplicate definition error. The parse continues with A::f(B&), and this time, it succeeds because B is known in the CodeStore. A::f(B&) is entered in the CodeStore when position 7 is reached and A is marked complete at position 8. We again try to parse source region 3 and it again fails. Incorporation continues until either all items on the queue have been processed successfully or until a complete pass over the queue results in no changes to the CodeStore. In this example, incorporation terminates with an error that NotThere is undeclared.
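A minimal sketch of this retry-until-fixpoint loop, with invented types (the article does not show the real interfaces), might look like the following. The parser callback returns true on a complete parse and may add CodeStore entries even when it ultimately fails, which is what lets a later pass succeed.

#include <deque>
#include <functional>

struct Region { /* tokens of one top-level source region */ };
struct CodeStore { int entryCount = 0; };

// Invented sketch of incorporation: keep attempting to parse queued source regions
// until everything parses or a complete pass makes no progress.
std::deque<Region> incorporate(std::deque<Region> queue, CodeStore& store,
                               const std::function<bool(const Region&, CodeStore&)>& parse)
{
    bool progress = true;
    while (progress && !queue.empty())
    {
        progress = false;
        std::deque<Region> retry;
        while (!queue.empty())
        {
            Region r = queue.front();
            queue.pop_front();
            int before = store.entryCount;
            if (parse(r, store))
                progress = true;                   // region parsed completely
            else
            {
                retry.push_back(r);                // put it back for the next pass
                if (store.entryCount > before)
                    progress = true;               // partial progress still counts
            }
        }
        queue = std::move(retry);
    }
    return queue;   // regions still queued here are errors, e.g. "NotThere is undeclared"
}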

After all source regions are parsed, the usual semantic analysis and code generation can proceed. The unconventional aspect is that these activities are done one function at a time, with the generated code incrementally linked into the target.

Automatic Dependency Management

When a source region is parsed, the system has to determine how that source region has changed (such as a new member added to a class), and how those changes will affect the other parts of the project. The former is the reconciler's job and the latter is the change propagator's job.

The CodeStore contains a dependency graph. Nodes of the graph correspond to entities in the CodeStore like declarations and source regions; a graph arc indicates that one entity depends in some way on another. Suppose the code in Table 1 has been previously incorporated and that the user removes the comment, adding a declaration of B::f(X). Source region 3 will be reparsed because it has changed. When the reconciler determines that B::f(X) has been added, the change propagator will observe that B::f(X) hides A::f(X). Therefore, any nodes that depend on A::f(X) and from which B is visible must be processed. Looking at the dependency graph, the change propagator will put g(B&) on the work queue for processing.
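Table 1 is not reproduced here, but based on the description it presumably has roughly the following shape. This is a hedged reconstruction, not the article's actual code, and it is ordered conventionally so it also compiles outside the orderless system.

class X { };

class A
{
public:
    void f(X) { }
};

class B : public A            // "source region 3" in the walkthrough above
{
public:
    // void f(X) { }          // removing this comment adds B::f(X), which hides A::f(X)
};

void g(B& b) { b.f(X()); }    // depends on A::f(X) today; after the change it would
                              // call B::f(X), so the change propagator requeues g(B&)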

More generally, putting dependency graph nodes on the work queue drives incorporation. When a node of the dependency graph becomes marked "invalid," all of the nodes that depend on the invalid node are put on the queue for reprocessing. For efficiency, the actual dependency graph expresses many different kinds of dependencies (size, layout, call, and so on) and there are many different kinds of processing.
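A hedged sketch of that propagation step, again with invented names: when a node becomes invalid, its dependents are collected and queued for reprocessing. The real graph distinguishes many dependency kinds, which are collapsed into a single edge kind here.

#include <queue>
#include <set>
#include <vector>

// Invented sketch: dependents[n] lists the nodes that depend on node n.
struct DependencyGraph
{
    std::vector<std::vector<int>> dependents;
};

// Collect every node reachable through dependency edges from the changed node.
// In the real system, reprocessing a queued node may itself invalidate further
// nodes; this sketch simply computes the closure eagerly.
std::set<int> nodesToReprocess(const DependencyGraph& g, int invalidNode)
{
    std::set<int> result;
    std::queue<int> pending;
    pending.push(invalidNode);
    while (!pending.empty())
    {
        int n = pending.front();
        pending.pop();
        for (int d : g.dependents[n])
            if (result.insert(d).second)   // queue each dependent only once
                pending.push(d);
    }
    return result;
}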

Efficiency demands that a project-wide program representation like the CodeStore be small. Fortunately, our incremental build scheme makes it possible to discard information and compute it again later. For example, our system does not store parse trees in the CodeStore. A (small) function can be parsed quickly to obtain the parse tree, since there is no need for preprocessing the function's source or processing header files. For example, the IDE provides a "find uses" feature that searches for all uses of a selected declaration in a project. This semantic search uses the dependency graph to find relevant function declarations and parses on the fly to find the desired statements.

Integrated Development Environment

The CodeStore contains information about your program. The Integrated Development Environment (IDE), which uses a workbook metaphor, is organized into sections. Each section consists of one or more pages, each of which consists of one or more panes. Each pane provides a view, and each view consists of objects, which can be selected. Figure 2 shows the Classes Overview page of the project section for a small example project. The essential concept is that an object flows into each pane and a pane then shows a view of that object. The Carlot project flows into the upper-left pane, which provides a Classes view of a project; that is, it lists all of the classes in a project. We've selected the Pickup class from the list of classes and that selected object flows into the top-right and bottom-left panes. The Class Hierarchy view shows the portion of the class hierarchy that contains Pickup. The Details view lists Pickup's bases and members. You can recursively look at the base's details. Here, we've selected the Pickup::price() member function, which flows into the bottom-right pane (again, notice price in the pane's title bar). Since that pane is showing a source view, we see the source code for the selected function.

There are several views of each kind of object. For a class there is a Details view that shows bases and members, a Visible Members view that adds inherited members to the Details view, and a Class Hierarchy view that shows the relevant portion of the class hierarchy. The interface is object oriented, in the sense that the pull-down menu lets you choose any view that makes sense for the kind of object the pane is showing. This paradigm applies everywhere. For example, to understand virtual function overriding, select the Overrides view on a pane that is viewing a function.

The IDE is configurable so you can divide a page into any number of panes, can link the panes together to control the flow of objects, and can select among the views for any kind of object. You can save page configurations, and there are predefined pages for common tasks. Each view presents information either obtained from or computed on the CodeStore.

Acknowledgments

Many people from IBM labs have made essential contributions to the system described here. This was a team effort and it would be inappropriate to acknowledge individually any subset of the team. I thank Mary Anne Buttigieg, Michael Karasick, Kevin Stoodley, David Streeter, and Mike Wulkan for feedback on this article.


Listing One

/* A.h */
#ifndef A_H
#define A_H
#include "B.h"
class A : public B 
{
public:
  B* f() { return new B; }
};
#endif


/* B.h */
#ifndef B_H
#define B_H
#include "A.h"
class B 
{
public:
  A* g() { return new A; }
};
#endif


/* main.cpp */
#include "B.h"
int main()
{
  B b;
  return 0;
}


Listing Two

/* A.h */
class A : public B 
{
public:
  B* f() { return new B; }
};


/* B.h */
class B 
{
public:
  A* g() { return new A; }
};


/* main.cpp */
int main()
{
  B b;
  return 0;
}


Listing Three

option
link(linkwithmultithreadlib),         // Use multi-threaded library
link(linkwithsharedlib)               // Use shared library
   {
   target "carlot.exe"                // Produce an executable
      {
      option
      lang(offsetofnonpodclasses),    // Backward compatibility for old code
      lang(digraph, no),
      includepath("."),
      define("_System", ""),
      lang(nokeyword, "true"),
      lang(nokeyword, "false")
         {
         source type(cpp) "carlot.cpp"  // List of C++ sources
         source type(cpp) "car.hpp"
         source type(cpp) "car.cpp"
         source type(cpp) "truck.hpp"
         source type(cpp) "truck.cpp"
         source type(cpp) "vehicle.hpp"
         source type(cpp) "vehicle.cpp"
         source type(cpp) "vlist.hpp"
         source type(cpp) "vlist.cpp"
         }
      }
   }


DDJ


Copyright © 1997, Dr. Dobb's Journal

