Considering Recursion


By Arch Robison, Dr. Dobb's Journal, March 2000

Arch is the lead developer for KAI's C++ compiler. He can be contacted at robison@kai.com.


Recursion means different things to different people. Sometimes it means solving a problem by breaking it into smaller similar problems, and solving those problems likewise, until the problems become trivial. I have no quarrel with this valuable technique of recursive solution. My objection is to recursive code, where one or more routines manage to call themselves through some chain of calls.

Surely, someone arguing against recursion must be an arch-conservative Fortran-77 programmer. That language forbids recursion. In fact, I was a liberal recursionist functional programmer in my youth. For my master's thesis, I implemented an interpreter for the Backus functional language FP (see my article "Illinois Functional Programming: A Tutorial," BYTE, February 1987). Functional languages often depend strongly upon recursive programming. Despite the fact that FP hints at an escape from the recursive addiction through its higher-level combining forms, I picked up the recursive mania. I even published a few solutions to combinatorial problems in chess (see "Eight Pieces Cannot Cover a Chess Board," by A.D. Robison, B.J. Hafner, and S.S. Skiena, The Computer Journal, December 1989) and group theory (see my "An Improved Rewriting-Number Algorithm," BIT, 1990), both of which depended upon recursive programming.

Once out of school, I migrated to Shell's Bellaire research lab, where numerical loops dominated code. The loop nests were easier to understand than recursive routines. Afterwards, I moved to KAI to work on its C++ compiler. Recursive data structures were ubiquitous and recursion rampant. Code became harder to understand. I saw the light -- recursion is trouble.

The Trouble with Recursion

The fundamental trouble is that recursive code entangles control flow, which hurts readability, reuse, and optimization.

Nested loops are far easier to read than recursion. Loop structures shout, "I am repeating an action"; recursion is more subtle. Loop indentation clarifies structure; recursion cannot be similarly indented. Loops can be anonymous; recursion invariably forces the creation of routine names solely for the sake of recursive calls. Loops are also often trivial to analyze compared with equivalent recursions. For instance, Listing One contrasts one iterative and two recursive versions of an algorithm. The anonymous indented loops of the iterative version make clear that it takes O(n²) time. In contrast, the recursive versions require intense study to determine that they, too, take O(n²) time. The recursive code is reminiscent of spaghetti code with gotos. Indeed, the extra names for the recursive routines play the role of labels for gotos. Recursion also presents ideas backwards. For instance, the second recursive version in Listing One works from back to front, then unwinds in the opposite direction.

Explicit recursion is not reusable. Each time you want to use a recursive pattern, you must reimplement it, which causes maintenance and performance problems. Consider, for instance, the common recursive pattern of traversing a tree. Maintenance suffers because each time requirements dictate a new kind of node, every instance of the recursive pattern may have to be updated. Performance suffers because the optimal way to traverse the tree is not the simplest way, and programmers trade away speed for simplicity. For example, Listing Two shows a simple way and a fast way to recursively traverse a tree in preorder. While the fast way is intriguing to write the first time, it becomes tiresome to write time after time; it would be much better if the fast way were packaged so that it is no less convenient than the simple way. Ironically, the fast way is faster because it replaces recursion to the right with iteration.

Some compilers can optimize the fully recursive tree traversal shown in Listing Two into a partially recursive version. This is because the second recursive call is a tail recursion, which means that the caller immediately returns after the call. In general, this is the only form of recursion that you can expect an industrial compiler to optimize, and only a few do. Even a compiler that optimizes tail recursion is unlikely to make the special case improvements shown in the listing.
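
To make the transformation concrete, here is a sketch of what a tail-recursion optimizer effectively does to the obvious walk of Listing Two: the trailing call on the right subtree becomes a branch back to the top of the routine -- in other words, a loop.

void walk (node* p) {
    while (p) {                      // was: if (p)
        ...process node *p...
        walk(p->left);               // A true recursive call remains.
        p = p->right;                // was: walk(p->right) -- the tail call.
    }
}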

Recursion makes debugging programs with interactive debuggers more difficult. The reason is that the relevant state is spread out over multiple stack frames. A single conceptual variable, say n, is often replicated as a parameter in each recursive call. The nonrecursive alternatives to be discussed tend to encapsulate the relevant state in a single object that can be inspected more easily.

Recursion uses the call stack as an implicit memory resource. It is difficult to recognize when it may be exhausted. Nonrecursive algorithms make memory resources explicit, so exhaustion is easier to detect. Furthermore, nonrecursive algorithms can be much more spartan about using memory, as compilers often do not optimize recursive programs nearly as well as they do iterative ones.

While on the subject of efficiency, note that recursive programs tend to be slower than iterative ones. There is plenty of academic literature on optimizing recursive programs, but in the commercial world, optimizers are costly to build, and the effort goes into optimizing the programs that paying customers write. Most of those customers write loops. For that matter, as previously mentioned, Fortran-77 forbids recursion, and many optimizers are Fortran hand-me-downs. Most modern compilers perform many loop optimizations, but hardly any for recursion. The same bias holds for human maintainers. For example, a human maintainer would instantly consider fusing these two loops:

for (int i=0; i<n; i++) dif[i] = a[i]-b[i];
for (int i=0; i<n; i++) sum[i] = a[i]+b[i];

into the single loop:

for (int i=0; i<n; i++) {
    dif[i] = a[i]-b[i];
    sum[i] = a[i]+b[i];
}

provided that no unfortunate aliases are present. Indeed, fusion here requires only minor editing. Legal fusion of equivalent recursive code is a much tougher scenario for a maintainer to recognize.
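
For comparison, here is a sketch of the same two computations written recursively (the routine names are mine; the arrays are globals, as above). Even in this simple form, recognizing that the pair can legally be fused into one routine takes noticeably more study:

void dif_loop (int i, int n) {
    if (i<n) {
        dif[i] = a[i]-b[i];
        dif_loop(i+1,n);
    }
}
void sum_loop (int i, int n) {
    if (i<n) {
        sum[i] = a[i]+b[i];
        sum_loop(i+1,n);
    }
}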

Causes of Recursion and Some Alternatives

Like many afflictions, recursion can be eradicated once its causes are understood. I'll show some common causes and how to deal with them.

Dynamic Allocation Via Call Stack. Sometimes recursion exists for the sake of exploiting the subroutine call stack for dynamic memory allocation. Listing Three shows recursive and iterative routines for traversing a singly linked list backwards. The recursive routine implicitly allocates storage on the call stack each time it recurses. The iterative routine uses a multipass algorithm that dynamically allocates storage. The recursive routine is simpler, but despite appearances, much less efficient than the iterative version for large lists. The reason is that each recursive call stores much more than just a pointer on the stack, leading to excessive space and time overhead. The iterative version also demonstrates an aforementioned advantage -- it throws an exception when space is exhausted; the recursive version simply crashes if the stack overflows.

Recursive Data Structures. Recursive data structures, notably trees, are probably the most common cause of recursion. The recommended treatments are visitors and iterators. The example problem for this section is to traverse a tree in inorder, which is defined (recursively) as traversing a node's left subtree first, then the node, and then the node's right subtree. The goal is to avoid writing out this recursive definition each time we want to traverse a tree.
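
Written out directly, the recursive definition looks like this (a sketch using the node type from Listing Two and the process_node routine of the later listings). This is exactly the pattern the visitor and iterator let us stop repeating:

void inorder_walk (node* p) {
    if (p) {
        inorder_walk(p->left);       // Left subtree first...
        process_node(*p);            // ...then the node itself...
        inorder_walk(p->right);      // ...then the right subtree.
    }
}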

A visitor (see Design Patterns: Elements of Reusable Object-Oriented Software, by E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Addison-Wesley, 1995) applies a function to each element of a collection. To use the visitor, a client writes the actions to be recursively applied to each element as a function, then hands the function to the visitor, which does the rest of the work.

The advantage of a visitor is that it's easy to implement in almost any language. Listing Four is a bare-bones visitor; judicious use of templates or virtual functions enables beefier visitors (one is sketched below). The implementation of the visitor is partially recursive. It does not cure recursion, but merely controls the disease by encapsulating it within a reusable piece of code. The major benefit is that clients use the recursive pattern without writing explicit recursion, and need not be bothered with implementation details. One such detail is the way the example visitor recurses to the left, but iterates to the right, for the sake of speed. Visitors can be reused, but the reuse can be almost as abstruse as recursive code, because the actions to be performed usually have to be written as a separate function or function object, and the visitor takes control for the entire visitation. Some languages offer conveniences that mitigate the hassle of writing these separate functions: anonymous functions (Scheme), anonymous inner classes (Java), and functional adaptors (C++'s STL). Yet writing these functions often still feels awkward compared to good old loops.
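
For instance, here is a minimal sketch of a templated visitor (the names are illustrative, not part of the listings). Accepting an arbitrary function object instead of a bare function pointer lets a client carry state, such as a count, without resorting to globals:

template<class Function>
void tree_visitor (node* root, Function& f) {
    while (root) {
        tree_visitor(root->left, f); // Recurse to left.
        f(*root);
        root = root->right;          // Iterate to right.
    }
}
struct node_counter {                // Function object with state.
    int count;
    node_counter() : count(0) {}
    void operator()(node&) {++count;}
};
    ...
    node_counter c;
    tree_visitor(root, c);           // Afterwards, c.count is the node count.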

An iterator truly cures recursion by allowing a recursive data structure to be traversed with a simple loop. For example, here's code that iterates over nodes in a binary tree:

for (tree_iterator i(root); !i.is_end(); ++i) {
    ...process node *i...
}

Listing Five is a demonstration implementation of tree_iterator. The iterator traverses a tree in inorder by using a stack to keep track of what subtrees have not yet been visited. For simplicity, the stack has a fixed upper limit. An industrial-strength implementation would dynamically reallocate the stack if it grew too large. The iterator is more complicated to implement than the visitor, but in my experience the big gain in reusability justifies the extra effort if the traversal pattern is used more than a few times. Sometimes it is necessary to propagate attributes either up or down a tree. In such cases, the iterator must also keep track of the attributes of immediate children or ancestors, and provide the client with some way to access the attributes. Such iterators are substantially more complicated to write, but with the help of templates, not an impossible task. Once written, they factor out the recursive propagation pattern.

The visitor and iterator both depend on modern language features to make them practical. The visitor depends upon the ability to pass a function as an argument, and the iterator (to be readable) depends upon concise syntax. Both are much more practical in languages with some form of parametric polymorphism (templates, for example), because their greater initial cost of implementation can be amortized over more reuses. Old languages without these features make either solution quite a pain to use. At the other extreme, languages with coroutines make writing iterators even easier, because the iterator can then be written simply as a visitor that passes control back each time it processes a node. Alas, coroutines rarely exist in industrial programming languages.

Despite the extra complexity, the iterator of Listing Five typically runs faster than the visitor of Listing Four because the iterator's explicit stack holds exactly the information required, whereas the visitor's implicit stack carries the full baggage of subroutine calls. Hence, the iterator has a much smaller cache footprint. Alternative implementations might reverse the advantage: A coroutine-based iterator would have the overhead of the implicit stack, and a visitor can be written with an explicit stack.
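
To illustrate the latter point, here is a sketch of Listing Four's visitor rewritten with an explicit stack (fixed depth limit, as in Listing Five). It visits nodes in the same inorder sequence, but carries only pointers on its stack instead of full subroutine-call frames:

void tree_visitor (node* root, void (*f)(node&)) {
    node* stack[32];                 // Limit on depth of tree.
    node** sp = stack;
    node* p = root;
    while (p || sp!=stack) {
        // March down to leftmost node, pushing skipped nodes on stack.
        for (; p; p=p->left) *sp++ = p;
        p = *--sp;                   // Pop pending node.
        (*f)(*p);
        p = p->right;
    }
}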

Caveat lector: There are two implementation details that greatly affect performance. First, a fast stack is important for both iterators and visitors. A mistake I made in early implementations was to implement the explicit stack exclusively with dynamic memory allocation. If the trees are typically small, it is better to use a small fixed-size array and resort to dynamic allocation only if the stack overflows (a hybrid sketched below). The maximum stack size is related to tree depth, so if the trees are known to be balanced, simply figure the worst case and use a fixed-size array. Second, the performance of some iterators can be greatly improved by advanced C++ optimizers that can assign small objects to registers. Thus, modern optimizers also help tip the scales in favor of iteration.
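
Here is a minimal sketch of such a hybrid stack (the class and its members are mine, for illustration): pushes use the embedded array until it fills, and only then fall back to heap allocation.

class node_stack {
public:
    node_stack() : base(fixed), sp(fixed), end(fixed+FIXED_MAX) {}
    ~node_stack() {if (base!=fixed) delete[] base;}
    void push (node* n) {
        if (sp==end) grow();         // Rare: fall back to heap storage.
        *sp++ = n;
    }
    node* pop() {return *--sp;}
    bool empty() const {return sp==base;}
private:
    void grow() {
        size_t size = end-base;
        node** bigger = new node*[2*size]; // Throws if memory is exhausted.
        for (size_t k=0; k<size; k++) bigger[k] = base[k];
        if (base!=fixed) delete[] base;
        base = bigger;
        sp = base+size;
        end = base+2*size;
    }
    static const int FIXED_MAX = 32;
    node* fixed[FIXED_MAX];          // Fast path for shallow trees.
    node** base;                     // Current storage (fixed or heap).
    node** sp;                       // Top of stack.
    node** end;                      // One past the last slot.
};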

Hierarchical Decomposition

As mentioned, decomposing a problem into smaller similar problems is an excellent solution technique. The trick to solving such problems without recursion is an explicit stack that remembers subproblems yet to be solved, or solutions thereof (see the sketch below). The explicit stack can be generalized to other kinds of containers. When a linked list is used, it is often called a "work list." More general containers allow subproblems to be solved in something other than last-in first-out (LIFO) order.
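
As a minimal sketch, here is a hierarchical decomposition run from an explicit stack: summing the values in a tree. (It assumes the node type of Listing Two carries an integer field named value; sum_tree and the field name are mine.)

#include <stack>
long sum_tree (node* root) {
    long total = 0;
    std::stack<node*> work;          // Subproblems yet to be solved.
    if (root) work.push(root);
    while (!work.empty()) {
        node* p = work.top();
        work.pop();
        total += p->value;
        if (p->left) work.push(p->left);   // Defer the subproblems; any
        if (p->right) work.push(p->right); //   order of solution is fine here.
    }
    return total;
}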

Removing the LIFO constraint makes parallelism much easier -- multiple processors can feed off a single work list. Recursive algorithms often look seductively easy to run in parallel, but in practice, parallel recursive call trees introduce a nested model of parallelism that is much harder to tune for efficiency than the flat parallelism of a work list. And if the parallel programming model requires guessing the stack sizes of processes (a cruel but common feature these days), the unbounded stack consumption of recursive routines makes guessing tricky.

For hierarchical decomposition, the advantage of iteration (via explicit stacks) over recursion is not as big as it is for simple traversal, because hierarchical decompositions tend to be used once, so the extra effort to implement the explicit stacks cannot be amortized across reuse. Nonetheless, explicit stacks have an advantage during debugging because all the relevant state is encapsulated in one structure, rather than being spread out over many stack frames and cluttered with artificial routine parameters.

Mindset

Hang around theoreticians long enough, and you can be fooled into thinking that recursion is simpler to write, read, and analyze than iteration ("To iterate is human, to recurse is divine"). I once gave a talk on recursion versus iteration, and indeed some of the audience from a Scheme stronghold insisted that recursive routines were just as easy to understand as loops. It would appear that mindset is another cause of recursion.

The best way to avoid recursion is prevention, by using iteration-friendly data structures. Adding a few extra links often helps; to wit, the example of traversing a list backwards becomes trivial if the list is doubly linked (see below). Skip lists (see "Skip Lists: A Probabilistic Alternative to Balanced Trees," by W. Pugh, Communications of the ACM, June 1990), for example, are often a good substitute for trees and permit simple iterative traversal. Adding parent links to a tree can also remove the need for recursion. Listing Six is a tree iterator similar to that in Listing Five, except that it uses parent links instead of a stack. There is a speed tradeoff here: The parent links avoid the need for a stack, but may increase the cache footprint enough that the code runs more slowly than the stack version. Threaded trees (see The Art of Computer Programming, Volume 1, by Donald E. Knuth, Addison-Wesley, 1973) are another possible alternative.
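
For instance, assuming each node carries a prev link and the list keeps a tail pointer (neither appears in Listing Three), the backward walk collapses to a loop:

void walk_backward (node* tail) {
    for (node* q = tail; q; q = q->prev)
        process(q);
}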

Living a Reduced-Recursion Life

Abandoning recursion completely would be ridiculous. There are times when recursive routines are the best solution. My point is that recursive solutions have subtle costs that may make an iterative solution more attractive with regard to readability, maintainability, and reuse.

I've eaten my own reduced-recursion cooking. Rampant recursion plagued KAI's Fortran and C optimizer (KAP), and programmers recursed over the same structures in a superfluous multitude of ways. In fairness, KAP dated to the medieval age of procedural languages, which made iterators prohibitively awkward to use, and its implementation language did not allow procedures as parameters, thus ruling out visitors. This underscores my point that language improvements make recursion less attractive. I got to choose C++ as the implementation language for the KAI C++ optimizer, and designed a few fundamental iterators into it. The first-generation KAI C++ optimizer was constrained to use recursive data structures provided as part of a parser from another vendor. The structures are complex, as C++ is complex. Thus, the iterators were nontrivial to write, and some are hundreds of lines long. Nonetheless, the iterators worked well -- there are relatively few recursive routines in the current KAI C++ optimizer. The few remaining routines seem simpler to keep recursive, though they are an extra maintenance hassle every time our vendor invents a new category of tree node.

An unexpected benefit of reducing recursion was that debugging became easier. The iterators have methods that let us print them from a debugger in a concise form that is much clearer than a pile of stack frames.

Having cured recursion with iterators the first time around, I've been working on outright prevention in the next design by eliminating recursive data structures where possible. Whereas the iterators made the first generation merely look loopish, the next version is loopish. Recursion may be divine, but iteration is sensible.

DDJ

Listing One

//-----------------------------------
// Non-recursive version.
//-----------------------------------
   ...
   for (int i=0; i<n; i++)
        for (int j=0; j<n; j++)
            a[i][j] += b[i]*c[j];
   ...
//-----------------------------------
// Recursive #1 (classical tail call)
//-----------------------------------
void inner1 (int i, int j, int n) {  // Forced to name inner loop
    if (j<n) {
        a[i][j] += b[i]*c[j];
        inner1(i,j+1,n);
    }
}
void outer1 (int i, int n ) {        // Forced to name outer loop
    if (i<n) {
        inner1(i,0,n);
        outer1(i+1,n);
    }
}
    ...
    outer1(0,n);                     // Need "root" call.
//-----------------------------------
// Recursive #2 (typical backwards order)
//-----------------------------------
void inner2 (int i, int j, int n) {
    if (j>0) {
        --j;
        inner2(i,j,n);
        a[i][j] += b[i]*c[j];
    }
}
void outer2 (int i, int n) {
    if (i>0) {
        --i;
        inner2(i,n,n);
        outer2(i,n);
    }
}
    ...
    outer2(n,n);


Listing Two

struct node {
    node* left;    // Points to left child.
    node* right;   // Points to right child.
    ...data...
};
//-----------------------------------
// Obvious recursion
//-----------------------------------
void walk (node* p) {
    if (p) {
        ...process node *p...
        walk(p->left);
        walk(p->right);
    }
}
//-----------------------------------
// Faster - mixes recursion and iteration
//-----------------------------------
void walk (node* p) {
    while(p) {
        ...process node *p...
        if (p->left) {
            if (p->right) {
                walk(p->left);       // General case - recurse to left
                p=p->right;          //   and iterate to right.
            } else p=p->left;        // Special case: iterate to left if
                                     //   there is no right subtree.
        } else p=p->right;           // Special case: iterate to right if
                                     //   there is no left subtree.
    }
}


Listing Three

//----------------------------
// Unoptimized obvious recursion
//----------------------------
void walk_backward (node* p) {
    if (p) {
        walk_backward(p->next);
        ...process node *p...
    }
}
//----------------------------
// Iterative version
//----------------------------
void walk_backward (node* p) {
    // Pass 1: determine size of array.
    size_t n=0;
    for (node* q=p; q; q=q->next )
        ++n;
    // Pass 2: fill the array.
    node** array = new node*[n];  // new throws an exception if memory is exhausted.
    size_t k=0;
    for (node* q=p; q; q=q->next)
        array[k++] = q;
    // Pass 3: traverse the array backwards.
    while (k>0)
        process (array[--k]);
    delete[] array;
}


Listing Four

// Uses type node from Listing Two
void tree_visitor (node* root, void (*f)(node&)) {
    while (root) {
        // Recurse to left.
        tree_visitor (root->left, f);
        (*f)(*root);
        // Iterate to right.
        root = root->right;
    }
}
void walk_tree (node* root) {
    tree_visitor(root, process_node);
}


Listing Five

// Uses type node from Listing Two
class tree_iterator {
public:
    tree_iterator( node* n ) : sp(stack) {
        if (n) {
            // March down to leftmost node, pushing skipped nodes on stack.
            for (; node* m = n->left; n=m) *sp++ = n;
            // Queue the leftmost node's right subtree, just as operator++
            // does; otherwise that subtree would be skipped.
            for (node* m = n->right; m; m=m->left) *sp++ = m;
        }
        at = n;
    }
    node& operator*() const {return *at;} // Return reference to current node
    bool is_end() const {return at==NULL;} // Return true if past last node
    void operator++() {
        if (sp==stack) {
            // No more pending nodes.
            at = NULL;
        } else {
            // Pop pending node from stack.
            at = *--sp;
            // March down to leftmost node, pushing skipped nodes on stack.
            for (node* n = at->right; n; n=n->left) *sp++ = n;
        }
    }
private:
    static const int STACK_MAX=32; // Limit on depth of tree.
    node* at;                  // Points to current node, or NULL if at end.
    node** sp;                 // Points to top of stack.
    node* stack[STACK_MAX];    // Stack of pending nodes.
};
void walk_tree( node* n ) {
    for (tree_iterator i(n); !i.is_end(); ++i)
         process_node(*i);
}


Listing Six

// Uses type node similar to that in Listing Two, but with added parent link.
struct node {
    node* left;    // Points to left child.
    node* right;   // Points to right child.
    node* parent;  // Points to parent.  NULL if root.
    ...data...
};
class tree_iterator {
public:
    tree_iterator( node* n ) {
        if (n) {
            // Jump to leftmost node of subtree rooted at n.
            while (node* m = n->left) n=m;
        }
        at = n;
    }
    node& operator*() const {return *at;} // Return reference to current node.
    bool is_end() const {return at==NULL;} // Return true if past last node.
    void operator++() {
        node* m = at;
        node* n=m->right;
        if (n) {
            // Jump to leftmost node of subtree rooted at n.
            while ((m=n->left)!=NULL) n=m;
        } else {
            // March up tree until we march up left link.
            while ((n=m->parent)!=NULL && n->right==m)  m=n;
        }
        at = n;
    }
private:
    node* at;     // Pointer to current node, or NULL if at end.
};
void walk_tree (node* n) {
    for (tree_iterator i(n); !i.is_end(); ++i)
        process_node(*i);
}









