Dr. Dobb's Journal January 1997: Algorithm Alley

The Fibonacci Heap

In September 1996, you saw how a simple tree structure can improve the performance of a resizable-array class. The idea was not to make every operation fast, but to keep the average cost low by making expensive operations infrequent. That same idea appears again this month, as John shows us how a roughly balanced tree of circularly linked lists can be used to implement a very fast heap.

--Tim Kientzle

John is a software development manager at UWI Unisoft Wares Inc. in Victoria, British Columbia and is a part-time graduate student at the University of Victoria. He can be contacted at [email protected].


Heaps are simple, useful containers. A heap allows you to insert elements and extract the least element. Many heap implementations also allow you to decrease an element already in the heap, and there are a variety of less-common operations. Table 1 shows the operations supported by the Fibonacci heap implementation I'm going to discuss in this article.

One of the first uses for a heap was to implement a priority queue. In fact, before researchers began realizing the wealth of algorithms to which a heap is well suited, the heap was actually called a "priority queue." Priority queues are used to keep a dynamic list of tasks of differing priorities. The Insert() operation adds a new job to the queue. The ExtractMin() operation extracts the highest-priority task. If a job suddenly required a higher priority, the DecreaseKey() operation would be used.

A heap can be used to implement a worst-case O(n log n) sorting algorithm, which is the best possible asymptotic bound for a sort that uses only key comparisons. You simply insert all elements into the heap, then call ExtractMin() until the heap is empty. The items will be extracted in ascending order. Heaps can also be used to construct optimal binary trees for Huffman compression, as shown in Mark Nelson's article "Priority Queues and the STL" (Dr. Dobb's Journal, January 1996).
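To make that recipe concrete, here is a minimal heap-sort sketch. It is not part of the article's code; std::priority_queue, ordered with std::greater so that it behaves as a min-heap, stands in for any heap that supports the Insert() and ExtractMin() operations.

#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

int main()
{
    int data[] = { 42, 7, 19, 3, 25 };
    std::priority_queue<int, std::vector<int>, std::greater<int> > heap;

    for (int i = 0; i < 5; i++)        // Insert() phase
        heap.push(data[i]);

    while (!heap.empty()) {            // ExtractMin() phase: ascending order
        printf("%d\n", heap.top());
        heap.pop();
    }
    return 0;
}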

Function          Description
Insert() Inserts a new node into the heap; the new node must contain a key value.
ExtractMin() Returns the node of minimum value after removing it from the heap.
DecreaseKey() Assigns a new, smaller key value to a node; the node may need to be repositioned in the heap so that it is not extracted until no remaining node has a smaller key. A pointer to the node must be given because heaps don't support an efficient search operation.
Union() Creates a new heap by joining two heaps given as input.
Minimum() Returns a reference to the node containing the minimum key value or some indication of what the key value is.
Delete() Deletes any node from the heap. A pointer to the node must be given.
Table 1: Heap operations.

One important algorithm that uses a heap is Dijkstra's shortest-path algorithm. This algorithm finds the least expensive path through a weighted graph (also called a "network").

Dijkstra's algorithm sets the cost of each vertex (except the starting vertex) to infinity and puts all the vertices onto a heap. You then extract the cheapest vertex from the heap -- call it M -- and examine each vertex A adjacent to M. If the cost of M plus the cost of the edge joining M to A is cheaper than the current cost of A (that is, if there's a cheap path to A through M), you create a link from A to M and decrease A's key to represent the new cost. You continue extracting successive nodes until you reach T, the target vertex. The value of T is the cost of the shortest path. The links from T back to the starting vertex indicate the shortest path.

Input:  Graph G, vertices S (start), T (terminate)
Declare:  H (initially empty heap)
1:  For all vertices v
2:     if v == S then v.cost := 0
3:     else v.cost := infinity
4:     Insert v into H
5:  Repeat
6:     M := ExtractMin(H)
7:     For each vertex A attached to M
8:        w := cost of edge from M to A
9:        if (M.cost + w < A.cost)
10:          DecreaseKey(A, M.cost + w)
11:          A.backlink := M
12: Until M = T
13: Output T.cost
14: Output vertices on chain of backlinks from T to S
Figure 1: Dijkstra's shortest-path algorithm.
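Below is a compact C++ sketch of Figure 1; it is illustrative only and not the article's code. A std::set of (cost, vertex) pairs stands in for the heap, with erase-and-reinsert playing the role of DecreaseKey(); the five-vertex graph and its edge weights are made up for the example.

// Dijkstra sketch following Figure 1. The std::set keeps (cost, vertex)
// pairs in order, so begin() is ExtractMin() and erase-then-insert is a
// stand-in for DecreaseKey().
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

int main()
{
    const int N = 5, S = 0, T = 4;     // hypothetical graph: 5 vertices
    const int INF = 1 << 29;
    int w[N][N] = {                    // adjacency matrix; 0 means "no edge"
        { 0, 7, 9, 0, 0 },
        { 7, 0, 3, 2, 0 },
        { 9, 3, 0, 0, 4 },
        { 0, 2, 0, 0, 6 },
        { 0, 0, 4, 6, 0 }
    };

    std::vector<int> cost(N, INF), backlink(N, -1);
    std::set<std::pair<int, int> > heap;          // (cost, vertex)

    cost[S] = 0;
    for (int v = 0; v < N; v++)                   // lines 1-4: insert all vertices
        heap.insert(std::make_pair(cost[v], v));

    int M;
    do {                                          // line 5: Repeat
        M = heap.begin()->second;                 // line 6: ExtractMin()
        heap.erase(heap.begin());
        for (int A = 0; A < N; A++) {             // line 7: each A adjacent to M
            if (w[M][A] != 0 && cost[M] + w[M][A] < cost[A]) {
                heap.erase(std::make_pair(cost[A], A));   // line 10: DecreaseKey()
                cost[A] = cost[M] + w[M][A];
                heap.insert(std::make_pair(cost[A], A));
                backlink[A] = M;                  // line 11
            }
        }
    } while (M != T);                             // line 12: Until M = T

    printf("cost of shortest path: %d\n", cost[T]);       // line 13
    for (int v = T; v != -1; v = backlink[v])              // line 14: backlinks
        printf("  vertex %d\n", v);
    return 0;
}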

As you can see in Figure 1, the DecreaseKey() on line 10 is the most time-consuming operation of the inner loop. Since Dijkstra's algorithm is important in network routing and other applications, it would be nice to find a heap implementation that makes this operation as fast as possible. This is the primary motivation for the Fibonacci heap.

The Fibonacci Heap

Heaps are usually implemented using binary trees, with the property that for every subtree, the root is the minimum item. In 1984, Michael Fredman and Robert Tarjan described a new way to implement heaps. In particular, the Fibonacci heap, or F-heap, is exceptionally fast at the DecreaseKey() and Insert() operations. For a typical binary heap, these operations require logarithmic time, while a Fibonacci heap requires only constant time. Table 2 shows how other operations compare.

There are some caveats, however. First, a Fibonacci heap uses more memory than a binary heap, since each node carries several extra pointers and bookkeeping fields to keep track of the structure. Second, the time bounds for the different operations are based on "amortized analysis." With a binary heap, the stated time is the actual worst-case time of each individual operation. For an F-heap, a particular call to ExtractMin(), for example, might take a very long time because it is doing leftover work from earlier operations. With amortized analysis, this additional work is charged to the operations that caused it.

Behind the Scenes

The Fibonacci heap stores all elements in a collection of circular, doubly linked lists. In addition to the Left and Right pointers used to implement the lists, each node has a Child pointer that allows it to be the parent of another circular list and a corresponding Parent pointer. There is also, of course, the application-specific key value and three variables called Degree, Mark, and Negative Infinity.

At any given time, there is a MinRoot pointer that points to the smallest item on the top list. In addition, if any item has a child list, that item is smaller than anything on the child list. Insertion is simple: You add the new item beside the current MinRoot and possibly adjust MinRoot if the new item is smaller. This always takes constant time.
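As a rough sketch of what this looks like in code (field names follow the description above; the real FibHeapNode and FibHeap classes in the source add virtual comparison functions and other bookkeeping), insertion amounts to splicing the new node into the circular root list:

// Illustrative only: per-node layout and constant-time insertion.
struct FHNode {
    FHNode *Left, *Right;     // circular, doubly linked sibling list
    FHNode *Parent, *Child;   // Child points into this node's child list
    int     Key;              // application-specific key (an int here)
    short   Degree;           // number of direct children
    short   Mark;             // set when this node has lost a child
    short   NegInfinity;      // forces the node below every real key (Delete)
};

// Insert: splice the new node in beside MinRoot, then update MinRoot if the
// new key is smaller. No other work is done, so the cost is constant.
void Insert(FHNode *&MinRoot, FHNode *x)
{
    x->Parent = x->Child = 0;
    x->Degree = x->Mark = x->NegInfinity = 0;
    if (MinRoot == 0) {                // empty heap: x becomes its own circle
        x->Left = x->Right = x;
        MinRoot = x;
        return;
    }
    x->Left = MinRoot;                 // splice x in to the right of MinRoot
    x->Right = MinRoot->Right;
    MinRoot->Right->Left = x;
    MinRoot->Right = x;
    if (x->Key < MinRoot->Key)         // new minimum?
        MinRoot = x;
}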

The Union() operation is used by other heap operations. It combines two heaps by linking the top-level lists into a single circular list, much the way that two soap bubbles join at a point then expand to form one larger bubble. Again, this linkage can always be done in constant time.
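Continuing the node sketch above, a hypothetical Union() of two root lists needs only to cut each circle open beside its minimum, cross-link the two openings, and keep whichever minimum pointer is smaller:

// Join two circular root lists in constant time (sketch, not the article's code).
FHNode *Union(FHNode *minA, FHNode *minB)
{
    if (minA == 0) return minB;        // one heap empty: nothing to splice
    if (minB == 0) return minA;

    FHNode *aRight = minA->Right;
    FHNode *bRight = minB->Right;
    minA->Right = bRight;              // cross-link the two circles
    bRight->Left = minA;
    minB->Right = aRight;
    aRight->Left = minB;

    return (minB->Key < minA->Key) ? minB : minA;
}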

The ExtractMin() operation is the real workhorse of the Fibonacci heap. After extracting the current MinRoot (and unioning its child list into the top level), the entire top-level list is traversed, both to find the new MinRoot and to rearrange the entire heap into a more efficient structure. Each node has a Degree variable that indicates the number of direct children of that node. The heap is rearranged so that no two nodes on the top list have the same degree. One interesting property is that the total number of descendants of any node will be about 2^Degree. For example, suppose you insert 12 nodes into a Fibonacci heap. The heap then looks like a single circular list. After you extract the minimum element, there will be 11 nodes left. Three of these will be on the root list, with degrees of 0, 1, and 3. The node of degree three would be the root of a subtree with a total of eight nodes.

The detailed operation of ExtractMin() involves traversing the top-level loop and keeping an array with one element for each possible degree value; see Listing One. As you traverse the loop, you put a pointer to that element into the array. If there's already an element of that degree, you add the larger element to the child list of the smaller, which increases the degree of the smaller. This smaller element is then put back into the array, which may require repeating the process. Figure 2 shows how this works.

DecreaseKey() specifies a node in the heap and a new, smaller key value. If the new key is smaller than the parent's key, the node must be moved up in the heap. The Fibonacci heap simply removes the node from its current parent (reducing the parent's degree) and inserts it into the root list, where it may become the new MinRoot. The Fibonacci heap then does some additional work to try to maintain the overall balance of the heap; see Listing Two.

The Fibonacci heap relies on maintaining a certain rough balance to its structure, and moving many items from deep in the heap to the top can upset that balance. This is the purpose of the Mark field. Whenever a child is moved out of a parent during a DecreaseKey() operation, the parent's Mark field is set. If the Mark field is already set, it indicates a second child being lost, so the parent is also moved to the top. This "cascading cut" must then consider the next node up, which is now losing a child. This process stops when it reaches a node that is not Marked, or when it reaches a node on the root list (a node that has no parent).

To see why DecreaseKey() achieves constant amortized time even with cascading cuts, consider a Fibonacci heap with N elements and suppose you perform N DecreaseKey() operations. Although any single operation could move a large number of subtrees, the total number of subtrees moved is proportional to N. Each DecreaseKey() moves the decreased node itself, which accounts for N moves. A cascading cut then walks up the chain of ancestors; each ancestor visited is either unmarked, in which case it is marked and the cut stops, or already marked, in which case it is cut and moved to the root list. A node must be marked before it can be moved this way, and each DecreaseKey() marks at most one node, so over N operations there are at most N marking visits and at most N cut-and-move visits. Adding the N direct moves, there are at most about 3N node visits for N DecreaseKey() operations.

Although the full analysis requires more complex methods to account for mixing the various operations, this gives you some idea of how DecreaseKey() can achieve constant amortized time.

The Delete() operation is actually remarkably simple. Each node has a special Negative Infinity flag that forces it to compare as smaller than any other node. To delete a node, you effectively decrease its key to negative infinity, then extract it. Since ExtractMin() is O(lg n), so is Delete().

A Fix for the Expandable Binary Heap

I've used the same basic design to build a dynamic binary heap. This is an unusual thing to do. Usually, binary heaps are declared to be of a certain size and don't grow or shrink. An expandable binary heap must be constructed with great care to ensure that the Insert() and ExtractMin() operations don't degrade to O(n) behavior. Specifically, you must grow the heap at most lg n times, doubling the heap array size each time an expansion is necessary. This ensures that at most 1 + 2 + 4 + ... + n = 2n - 1 nodes are copied into new arrays, so the amortized cost of expansion is constant per insertion. Likewise, the heap array should be reduced to half its size only when it is one-quarter full. If it were halved at precisely the point when it becomes half full, a sequence of alternating Insert() and ExtractMin() operations right at that boundary could cause an expansion or a shrink at every operation.
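Here is a small sketch of that grow/shrink policy. It is not the article's code; the struct and function names are made up, and the capacity is assumed to start at some power of two greater than zero.

#include <cstdlib>
#include <cstring>

struct DynHeap {
    int *a;         // heap array
    int  size;      // number of elements currently stored
    int  capacity;  // current allocated length
};

static void Resize(DynHeap *h, int newCapacity)
{
    int *b = (int *) malloc(newCapacity * sizeof(int));
    memcpy(b, h->a, h->size * sizeof(int));
    free(h->a);
    h->a = b;
    h->capacity = newCapacity;
}

void GrowIfFull(DynHeap *h)            // call before each Insert()
{
    if (h->size == h->capacity)
        Resize(h, 2 * h->capacity);    // double: at most lg n expansions total
}

void ShrinkIfSparse(DynHeap *h)        // call after each ExtractMin()
{
    if (h->capacity > 4 && h->size <= h->capacity / 4)
        Resize(h, h->capacity / 2);    // halve only at one-quarter full
}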

How to Use the Code

The source code for this Fibonacci heap implementation (available electronically; see "Availability," page 3) defines two classes: FibHeapNode and FibHeap. The FibHeap class provides the heap operations such as Insert() and ExtractMin(). It is intended to be used as is; no subclassing is required. The FibHeapNode class should be subclassed to store your particular data and to redefine the virtual functions used by FibHeap.

The file FIBTEST.CPP shows how this subclassing should be done. In particular, when overriding the assignment, equality, and less-than functions, you must call the corresponding protected base-class function first. For example, FHN_Cmp() handles the Negative Infinity test, so you should not do your own test if the base class function indicates a meaningful value.
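The following is a hypothetical sketch of that kind of subclass. The virtual-function signature and the return convention assumed for FHN_Cmp() are guesses based on the description above; consult FIBTEST.CPP and the class headers for the real ones. The pattern to copy is: call the protected base-class helper first, and only compare your own keys if the result isn't already decided by a Negative Infinity flag.

// Hypothetical subclass sketch -- signatures are assumptions, not the real API.
class IntNode : public FibHeapNode {
    int Key;                                  // application-specific data
public:
    IntNode(int k = 0) : Key(k) {}
    int GetKey() const { return Key; }

    virtual int operator <(FibHeapNode& other)    // assumed signature
    {
        int cmp = FHN_Cmp(other);             // Negative Infinity test first
        if (cmp != 0)                         // base class already decided
            return cmp < 0 ? 1 : 0;
        return Key < ((IntNode&) other).Key;  // ordinary key comparison
    }
};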

Binary Heap versus F-Heap

The F-heap provides a great enhancement to Dijkstra's algorithm and to other algorithms that can use DecreaseKey() effectively. The big surprise, though, is that the F-heap is even competitive on regular heap algorithms such as heap sorting. The test programs generate random test data and compare the output against qsort() to make sure the heap is operating correctly. To estimate the overall time complexity, you can compare the times for 2048-element and 1024-element data sets. If the process is O(n log n), the ratio should be about 2.2 to 1, since (2048 x 11)/(1024 x 10) = 2.2. The test program averages a ratio of 2.17 for a binary heap and 2.07 for a Fibonacci heap. The lower ratio reflects the fact that more of the F-heap operations -- including Insert() and DecreaseKey() -- run in constant amortized time, so their total cost grows only linearly. In addition, the Fibonacci heap test runs about 15 percent faster than the binary heap test. Even if you remove the DecreaseKey() operations from the test, the Fibonacci heap is still over 10 percent faster. For algorithms like Dijkstra's, the F-heap can be two to ten times faster on a 1024-vertex graph.

References

Cormen, T., C. Leiserson, and R. Rivest. Introduction to Algorithms. Cambridge, MA: MIT Press, 1990.

Fredman, M. and Tarjan, R. "Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms." Journal of the Association for Computing Machinery, vol. 34, no. 3, July 1987.

DDJ

Listing One

//=====================================================================
// ExtractMin() - O(n) worst-case actual; O(lg n) amortized
//=====================================================================
FibHeapNode *FibHeap::ExtractMin()
{
FibHeapNode *Result;
FibHeap *ChildHeap = NULL;

// Remove minimum node and set MinRoot to next node
     if ((Result = Minimum()) == NULL)
          return NULL;
     MinRoot = Result->Right;
     Result->Right->Left = Result->Left;
     Result->Left->Right = Result->Right;
     Result->Left = Result->Right = NULL;

     NumNodes --;
     if (Result->Mark) {
         NumMarkedNodes --;
         Result->Mark = 0;
     }
     Result->Degree = 0;

// Attach child list of Minimum node to the root list of the heap
// If there is no child list, then do no work
     if (Result->Child == NULL) {
         if (MinRoot == Result)
             MinRoot = NULL;
     }
// If MinRoot==Result then there was only one root tree, so the root list is
// the child list of that node (NULL if this is the last node in the list)
     else if (MinRoot == Result)
         MinRoot = Result->Child;
// If MinRoot is different, then the child list is pushed into a new temporary
// heap, which is then merged by Union() onto the root list of this heap.
     else {
         ChildHeap = new FibHeap();
         ChildHeap->MinRoot = Result->Child;
     }
// Complete the disassociation of the Result node from the heap
     if (Result->Child != NULL)
         Result->Child->Parent = NULL;
     Result->Child = Result->Parent = NULL;
// If there was a child list, then we now merge it with rest of the root list
     if (ChildHeap)
         Union(ChildHeap);
// Consolidate heap to find new minimum and do reorganize work
     if (MinRoot != NULL)
         _Consolidate();
// Return the minimum node, which is now disassociated with the heap
// It has Left, Right, Parent, Child, Mark and Degree cleared.
     return Result;
}
//====================================================================
// Consolidate(). Internal function that reorganizes heap as part of an 
// ExtractMin(). We must find new minimum in heap, which could be anywhere 
// along the root list. The search could be O(n) since all nodes could be on
// the root list. So, we reorganize the tree into more of a binomial forest 
// structure, and then find the new minimum on the consolidated O(lg n) sized 
// root list, and in the process set each Parent pointer to NULL,
// and count the number of resulting subtrees.
// After a list of n inserts, there will be n nodes on the root list,
// so the first ExtractMin() will be O(n) regardless of whether or not we 
// consolidate. However, the consolidation causes subsequent ExtractMin() 
// operations to be O(lg n). Furthermore, the extra cost of the first 
// ExtractMin() is covered by the higher amortized cost of Insert(), which is 
// the real governing factor in how costly the first ExtractMin() will be.
//=====================================================================
void FibHeap::_Consolidate()
{
FibHeapNode *x, *y, *w;
FibHeapNode *A[1+8*sizeof(long)]; // 1+lg(n)
int  I=0, Dn = 1+8*sizeof(long);
short d;

// Initialize the consolidation detection array
     for (I=0; I < Dn; I++)
          A[I] = NULL;
// We need to loop through all elements on root list.  When a collision of 
// degree is found, the two trees are consolidated in favor of the one with 
// the lesser element key value. We first need to break the circle so that we 
// can have a stopping condition (we can't go around until we reach the tree 
// we started with because all root trees are subject to becoming a child 
// during the consolidation).
     MinRoot->Left->Right = NULL;
     MinRoot->Left = NULL;
     w = MinRoot;

     do {
        x = w;
        d = x->Degree;
        w = w->Right;
// We need another loop here because the consolidated result
// may collide with another large tree on the root list.
        while (A[d] != NULL) {
             y = A[d];
             if (*y < *x)
                 _Exchange(x, y);
             if (w == y) w = y->Right;
             _Link(y, x);
             A[d] = NULL;
             d++;
        }
        A[d] = x;
     } while (w != NULL);
// Now we rebuild the root list, find the new minimum, set all root list 
// nodes' parent pointers to NULL and count the number of subtrees.
     MinRoot = NULL;
     NumTrees = 0;
     for (I = 0; I < Dn; I++)
          if (A[I] != NULL)
              _AddToRootList(A[I]);
}
//=====================================================================
void FibHeap::_AddToRootList(FibHeapNode *x)
{
     if (x->Mark) NumMarkedNodes --;
     x->Mark = 0;
     NumNodes--;
     Insert(x);
}
//==== Node y is moved from the root list to become a subtree of node x. ====
void FibHeap::_Link(FibHeapNode *y, FibHeapNode *x)
{
// Remove node y from root list
     if (y->Right != NULL)
        y->Right->Left = y->Left;
     if (y->Left != NULL)
         y->Left->Right = y->Right;
     NumTrees--;
// Make node y a singleton circular list with a parent of x
     y->Left = y->Right = y;
     y->Parent = x;
// If node x has no children, then list y is its new child list
     if (x->Child == NULL)
        x->Child = y;
// Otherwise, node y must be added to node x's child list
     else {
         y->Left = x->Child;
         y->Right = x->Child->Right;
         x->Child->Right = y;
         y->Right->Left = y;
     }
// Increase the degree of node x because it's now a bigger tree
     x->Degree ++;
// Node y has just been made a child, so clear its mark
     if (y->Mark) NumMarkedNodes--;
     y->Mark = 0;
}


Listing Two

//=====================================================================
// DecreaseKey() - O(lg n) actual; O(1) amortized
// The O(lg n) actual cost stems from the fact that the depth, and
// therefore the number of ancestor parents, is bounded by O(lg n).
//=====================================================================
int FibHeap::DecreaseKey(FibHeapNode *theNode, FibHeapNode& NewKey)
{
FibHeapNode *theParent;

     if (theNode == NULL || *theNode < NewKey)
         return NOTOK;
     *theNode = NewKey;

     theParent = theNode->Parent;
     if (theParent != NULL && *theNode < *theParent) {
         _Cut(theNode, theParent);
         _CascadingCut(theParent);
     }
     if (*theNode < *MinRoot)
         MinRoot = theNode;
     return OK;
}
//==== Remove node x from the child list of its parent node y ====
void FibHeap::_Cut(FibHeapNode *x, FibHeapNode *y)
{
     if (y->Child == x)
         y->Child = x->Right;
     if (y->Child == x)
         y->Child = NULL;
     y->Degree --;

     x->Left->Right = x->Right;
     x->Right->Left = x->Left;

     _AddToRootList(x);
}
//=====================================================================
// Cuts each node in parent list, putting successive ancestor nodes on the
// root list until we either arrive at the root list or until we find an
// ancestor that is unmarked. When a mark is set (which only happens during
// a cascading cut), it means that one child subtree has been lost; if a
// second subtree is lost later during another cascading cut, then we move
// the node to the root list so that it can be re-balanced on the next
// consolidate.
//=====================================================================
void FibHeap::_CascadingCut(FibHeapNode *y)
{
FibHeapNode *z = y->Parent;

     while (z != NULL) {
         if (y->Mark == 0) {
             y->Mark = 1;
             NumMarkedNodes++;
             z = NULL;
         }
         else {
             _Cut(y, z);
             y = z;
             z = y->Parent;
         }
     }
}



Copyright © 1997, Dr. Dobb's Journal
