
Strict Fibonacci heap



Strict Fibonacci heap
Type: Heap
Invented: 2012
Invented by: Gerth S. Brodal, George Lagogiannis, and Robert E. Tarjan
Space complexity: O(n)

Time complexities in big O notation (all worst case)
Function       Worst case
Insert         Θ(1)
Find-min       Θ(1)
Delete-min     O(log n)
Decrease-key   Θ(1)
Merge          Θ(1)

In computer science, a strict Fibonacci heap is a priority queue data structure with worst case time bounds equal to the amortized bounds of the Fibonacci heap. Along with Brodal queues, strict Fibonacci heaps belong to a class of asymptotically optimal data structures for priority queues.[1] All operations on strict Fibonacci heaps run in constant time except delete-min, which is necessarily logarithmic. This is optimal, because any priority queue can be used to sort a list of elements by performing insertions and delete-min operations. Strict Fibonacci heaps were invented in 2012 by Gerth Stølting Brodal, George Lagogiannis, and Robert E. Tarjan.[2]

Structure

A strict Fibonacci heap is a single tree satisfying the minimum-heap property.

The nodes in the heap obey the following rules:

  • All nodes are either active (colored white) or passive (colored red).
  • The root is passive.
  • For any node, the active children lie to the left of the passive children.

We also make the following definitions:

  • An active root is an active node with a passive parent.
  • A passive linkable node is a passive node where all its descendants are passive (a passive node with no children is considered to be linkable).
  • The rank of an active node is the number of active children it has.
  • The loss of an active node is the number of children it has lost, including active and passive children.
  • The loss of an active root is always 0 (similar to how ordinary Fibonacci heap roots are always unmarked).

To facilitate root degree reduction (described below), the passive linkable children of the root always lie to the right of the passive non-linkable children.
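The following sketch (in Python, with illustrative names not taken from the paper) shows one possible node record for the definitions above. A real implementation would also carry fix-list and queue pointers, and would replace the plain boolean with the shared active flag described under Implementation below.

    class Node:
        def __init__(self, key, active=False):
            self.key = key
            self.active = active     # True = active (white), False = passive (red)
            self.parent = None
            self.children = []       # left to right; active children before passive ones
            self.rank = 0            # number of active children (meaningful for active nodes)
            self.loss = 0            # number of children lost (meaningful for active nodes)

        def is_active_root(self):
            # An active node whose parent is passive.
            return self.active and self.parent is not None and not self.parent.active

        def is_passive_linkable(self):
            # A passive node all of whose descendants are passive, per the definition above.
            return not self.active and all(c.is_passive_linkable() for c in self.children)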

Invariants

Intuitively, active nodes are the only nodes which have some kind of order, whilst passive nodes do not have any particular order. This is governed by the following structural invariant:

  • For an active node, the ith rightmost active child (zero-indexed) satisfies rank + loss ≥ i.

This invariant leads to similar structures found in binomial heaps and ordinary Fibonacci heaps. For example, a subtree composed entirely of active nodes with no loss is a binomial tree.
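As a small illustration, the structural invariant for a single node could be checked as follows, assuming the hypothetical Node sketch above and the rank + loss ≥ i reading of the invariant:

    def satisfies_structure_invariant(node):
        """Return True if the active children of `node` satisfy the invariant:
        the i-th rightmost active child (zero-indexed) has rank + loss >= i."""
        if not node.active:
            return True
        active_children = [c for c in node.children if c.active]
        for i, child in enumerate(reversed(active_children)):   # i = 0 is the rightmost
            if child.rank + child.loss < i:
                return False
        return True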

The logarithmic bound of the delete-min operation is guaranteed by several additional invariants. A queue of all non-root nodes is also maintained.

In the following, let the real number R be defined as R = 2 log₂ n + 6, where n is the number of nodes in the heap, and log₂ denotes the binary logarithm.

  • The total loss in the heap is at most R + 1.
  • The total number of active roots is at most R + 1.
  • The degree of the root is at most R + 3.
  • For an active node with loss 0, its degree is bounded by , and for all other nodes, .
  • (Corollary) The degree of any non-root node is at most . (Proof: let = 0 in the previous invariant.)

Transformations

The following four transformations are intended to restore the above invariants after a priority queue operation has been performed. All transformations can be performed in constant time, which is possible by maintaining an auxiliary data structure to track candidate nodes (see the section on implementation).

Active root reduction

Let x and y be active roots with equal rank, and assume key(x) ≤ key(y). Link y to x and increase the rank of x by 1. If the rightmost child of x is passive, link it to the root.
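A sketch of this transformation in terms of the hypothetical Node class above. The link helper is an assumed convenience that keeps active children to the left of passive children; fix-list bookkeeping and the ordering of the root's linkable children are omitted.

    def link(child, parent):
        """Detach `child` from its current parent and make it a child of `parent`,
        keeping active children to the left of passive children."""
        if child.parent is not None:
            child.parent.children.remove(child)
        child.parent = parent
        if child.active:
            parent.children.insert(0, child)   # leftmost
        else:
            parent.children.append(child)      # rightmost

    def active_root_reduction(x, y, root):
        """x and y are active roots of equal rank; the one with the larger key is
        linked below the other, whose rank then grows by one."""
        if y.key < x.key:
            x, y = y, x                        # ensure key(x) <= key(y)
        link(y, x)                             # y is no longer an active root
        x.rank += 1
        if x.children and not x.children[-1].active:
            link(x.children[-1], root)         # move a passive rightmost child to the root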

Root degree reduction

Let x, y, and z be the three rightmost passive linkable children of the root. Detach them all from the root. Sort them such that key(x) ≤ key(y) ≤ key(z). Change x and y to be active. Link z to y, y to x, and x to the root. As a result, x becomes an active root with rank 1 and loss 0. The rank and loss of y are set to 0.

The net change of this transformation is that the degree of the root node decreases by 2. As a side effect, the number of active roots increases by 1. However, this operation is always applied in conjunction with active root reduction.
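A corresponding sketch, again using the hypothetical Node and link helpers above:

    def root_degree_reduction(root):
        """Detach the three rightmost passive linkable children of the root, sort
        them by key, activate the two smallest and chain them back under the root."""
        linkable = [c for c in root.children if c.is_passive_linkable()]
        if len(linkable) < 3:
            return False
        x, y, z = sorted(linkable[-3:], key=lambda n: n.key)    # key(x) <= key(y) <= key(z)
        for node in (x, y, z):
            root.children.remove(node)
            node.parent = None
        x.active = y.active = True                              # z stays passive
        x.rank = x.loss = 0
        y.rank = y.loss = 0
        link(z, y)                  # z becomes the only (passive) child of y
        link(y, x)                  # y becomes the only (active) child of x
        x.rank = 1
        link(x, root)               # x is now an active root with rank 1 and loss 0
        return True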

One node loss reduction

Let x be an active non-root with loss at least 2. Link x to the root, thus turning it into an active root and resetting its loss to 0. Let the original parent of x be y. Then y must be active, since otherwise x would previously have been an active root, and thus could not have had positive loss. The rank of y is decreased by 1. If y is not an active root, increase its loss by 1.

Overall, the total loss decreases by 1 or 2. As a side effect, the root degree and the number of active roots increase by 1, making this less preferable than two node loss reduction, but still a necessary operation.
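A sketch using the same hypothetical helpers, where root is the root of the heap:

    def one_node_loss_reduction(x, root):
        """x is an active non-root with loss >= 2; turning it into an active root
        resets its loss, while its old parent y records the lost child."""
        y = x.parent                 # y is active, otherwise x would already be an active root
        link(x, root)
        x.loss = 0                   # x is now an active root
        y.rank -= 1                  # y lost an active child
        if not y.is_active_root():
            y.loss += 1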

Two node loss reduction

Let x and y be active nodes with equal rank and loss equal to 1, and let z be the parent of y. Without loss of generality, assume that key(x) ≤ key(y). Detach y from z, and link y to x. Increase the rank of x by 1 and reset the loss of x and y from 1 to 0.

z must be active, since y had positive loss and could not have been an active root. Hence, the rank of z is decreased by 1. If z is not an active root, increase its loss by 1.

Overall, the total loss decreases by 1 or 2.
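A sketch using the same hypothetical helpers:

    def two_node_loss_reduction(x, y):
        """x and y are active nodes of equal rank, each with loss 1; the one with
        the larger key is linked below the other and both losses are reset."""
        if y.key < x.key:
            x, y = y, x              # ensure key(x) <= key(y)
        z = y.parent                 # z is active, since y had positive loss
        link(y, x)                   # detach y from z and link it to x
        x.rank += 1
        x.loss = y.loss = 0
        z.rank -= 1                  # z lost an active child
        if not z.is_active_root():
            z.loss += 1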

Summary

The table below presents the effect of these transformations on the three main quantities we wish to minimise.

Effect of transformations
                          Root degree   Total loss   Active roots
Active root reduction     +0 or +1      0            −1
Root degree reduction     −2            0            +1
One node loss reduction   +1            −1 or −2     +1
Two node loss reduction   0             −1 or −2     0

However, when deciding which transformations to perform, we consider only the worst case effect of these operations, for simplicity. The two types of loss reduction are considered to be the same operation, thus, to perform a 'loss reduction' means to attempt each type of loss reduction in turn.

Worst case effect of transformations
                          Root degree   Total loss   Active roots
Active root reduction     +1            0            −1
Root degree reduction     −2            0            +1
Loss reduction            +1            −1           +1

Implementation

Finding candidate nodes

The invariant-restoring transformations rely on being able to find candidate nodes in constant time. This means that we must keep track of active roots with the same rank, nodes with loss 1 of the same rank, and nodes with loss at least 2. There are several ways to do this.

The original paper by Brodal et al. used a fix-list and a rank-list.[2]

The fix-list is divided into four parts:

  1. Active roots ready for active root reduction – active roots with at least 1 partner of the same rank. Nodes with the same rank are kept adjacent.
  2. Active roots not yet ready for active root reduction – active roots that are the only ones of their rank.
  3. Active nodes with loss 1 that are not yet ready for loss reduction – the only active nodes with loss 1 for that rank.
  4. Active nodes that are ready for loss reduction – This includes active nodes with loss 1 that have a partner of the same rank, and active nodes with loss at least 2, which do not need partners to be reduced. Nodes with the same rank are kept adjacent.

Active nodes with loss 0 are not included in the fix-list.

The rank-list is a doubly linked list containing information about each rank, to allow nodes of the same rank to be partnered together in the fix-list.

For each node representing rank r in the rank-list, we maintain:

  • A pointer to the first active root in the fix-list with rank r. If no such node exists, this is NULL.
  • A pointer to the first active node in the fix-list with rank r and loss 1. If no such node exists, this is NULL.
  • Pointers to the nodes representing ranks r − 1 and r + 1, to facilitate the incrementation and decrementation of ranks.

The fix-list and rank-list require extensive bookkeeping, which must be done whenever a new active node arises, or when the rank or loss of a node is changed.
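A sketch of one rank-list record under these conventions (field names are illustrative, not from the paper); the fix-list itself would be a doubly linked list of nodes partitioned into the four parts described above, with nodes of equal rank kept adjacent.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RankListRecord:
        """One record of the rank-list."""
        active_root: Optional["Node"] = None      # first active root of this rank in the fix-list, or None
        loss_one: Optional["Node"] = None          # first loss-1 active node of this rank in the fix-list, or None
        inc: Optional["RankListRecord"] = None     # record for the next higher rank
        dec: Optional["RankListRecord"] = None     # record for the next lower rank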

Shared pointers

The merge operation changes all of the active nodes of the smaller heap into passive nodes. This can be done in constant time by using a shared pointer. Instead of a boolean flag, each active node has a pointer to an active flag object containing a boolean value. For passive nodes, it does not matter which active flag object they point to (any flag whose value is false will do), because it is never required to change passive nodes to active en masse.
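A minimal sketch of this trick; the class and field names are illustrative, and a node is taken to be active exactly when the flag it points to holds true.

    class ActiveFlag:
        """Shared flag object: every active node of a heap points to its heap's
        single ActiveFlag, so flipping one boolean passivates all of them at once."""
        def __init__(self):
            self.value = True

    # Hypothetical use:
    #   node.flag = heap.active_flag       # set when a node is made active
    #   node.flag.value                    # read to test whether the node is active
    #   heap.active_flag.value = False     # passivate every active node of this heap in O(1)
    # To make a single node passive, point it at any flag whose value is False instead.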

Operations

Merge

Let H1 and H2 be strict Fibonacci heaps, where H1 is the smaller of the two. If either is empty, return the other and we are done. Assuming H1 and H2 are both non-empty, merging their fix-lists and rank-lists would take logarithmic time. Instead, we throw away the structure of the smaller heap by discarding its fix-list and rank-list, and converting all of its nodes into passive nodes. This can be done in constant time, with a shared pointer, as shown above. Link H1 and H2, letting the root with the smaller key become the parent of the other. Let Q1 and Q2 be the queues of H1 and H2 respectively. The combined queue is formed by concatenating Q1 and Q2 and adding x, where x is the root with the larger key (which is now a non-root node).

The only possible structural violation is the root degree. This is solved by performing one active root reduction and one root degree reduction, if each transformation is possible; a sketch of the whole operation follows the table below.

                          Root degree   Total loss   Active roots
State after merge         +1            0            0
Active root reduction     +1            0            −1
Root degree reduction     −2            0            +1
Total                     0             0            0
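The following sketch puts the pieces together. It assumes a heap object with illustrative fields root, size, active_flag and queue (a list of non-root nodes), reuses the link helper from the sketches above, and assumes that node.active is read through the node's flag as described in the Shared pointers section; the concluding reductions and fix-list handling are omitted.

    def merge(h1, h2):
        """Sketch of meld."""
        if h1.root is None:
            return h2
        if h2.root is None:
            return h1
        small, large = (h1, h2) if h1.size <= h2.size else (h2, h1)
        small.active_flag.value = False              # all nodes of the smaller heap become passive
        if small.root.key <= large.root.key:
            winner, loser = small, large
        else:
            winner, loser = large, small
        link(loser.root, winner.root)                # root with the larger key becomes a child
        winner.size = small.size + large.size
        winner.active_flag = large.active_flag       # active nodes can only come from the larger heap
        # One possible ordering of the combined queue; the losing root joins it as a non-root node.
        winner.queue = small.queue + [loser.root] + large.queue
        # Followed by one active root reduction and one root degree reduction, if possible (omitted).
        return winner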

Insert

Insertion can be considered a special case of the merge operation. Create a new strict Fibonacci heap containing a single passive node and an empty queue, then merge it with the main heap.

Find-min

Due to the minimum-heap property, the node with the minimum key is always at the root.

Delete-min

If the root is the only node in the heap, we are done by simply removing it. Otherwise, search the children of the root to find the node x with the minimum key. Set the new root to x; if x was active, make it passive, whereupon all of its active children become active roots. Since x is now the root, move all of its passive linkable children to the right, and link all remaining children of the old root to x. Finally, the invariants are restored by applying loss reductions, active root reductions, and root degree reductions; since O(log n) of these constant-time transformations suffice, delete-min runs in O(log n) time overall.
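A partial sketch under the same assumptions (boolean-flag Node, list-based queue); the queue processing and the invariant-restoring reductions are only indicated.

    def delete_min(heap):
        """Partial sketch of delete-min."""
        old_root = heap.root
        if not old_root.children:
            heap.root = None
            return old_root
        x = min(old_root.children, key=lambda c: c.key)    # child with the minimum key
        old_root.children.remove(x)
        x.parent = None
        if x.active:
            x.active = False        # all active children of x become active roots
            x.rank = x.loss = 0
        heap.root = x
        heap.queue.remove(x)        # x is no longer a non-root node
        # (a real implementation keeps the queue as a doubly linked list for O(1) removal)
        # Keep active children leftmost and passive linkable children rightmost.
        x.children.sort(key=lambda c: (not c.active, c.is_passive_linkable()))
        for child in list(old_root.children):
            link(child, x)          # adopt the remaining children of the old root
        # Followed by O(log n) loss reductions, active root reductions and
        # root degree reductions, each taking O(1) time (omitted).
        return old_root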

Decrease-key

Let x be the node whose key has been decreased. If x is the root, we are done. Otherwise, detach the subtree rooted at x, and link it to the root. If the key of x is smaller than the key of the root, swap their keys.

Up to two structural violations may have occurred. Firstly, the degree of the root has increased (unless x was already a child of the root). The second violation may have occurred when x was detached from its parent y.

  • If x was previously an active root, then moving x from being a child of y to a child of the root does not create any additional active roots, nor does it increase the loss of any node.
  • If x is passive and y is passive, then there are no extra violations.
  • If x is passive and y is active, then the loss of y increases by 1, which may violate the total loss invariant.
  • If both x and y are active (and x was not an active root), then the loss of y increases by 1, and an extra active root is created (by linking x to the root).

In the worst case, all three quantities (root degree, total loss, active roots) increase by 1.

Performing one loss reduction, followed by six active root reductions and four root degree reductions in alternation, is sufficient to fix all structural violations, although not all of these will be required; a sketch of the operation follows the table below.

                              Root degree   Total loss   Active roots
State after decrease-key      +1            +1           +1
Loss reduction (×1)           +1            −1           +1
Active root reduction (×6)    +6            0            −6
Root degree reduction (×4)    −8            0            +4
Total                         0             0            0
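A sketch of decrease-key under the same assumptions as the earlier sketches, with the concluding reductions omitted.

    def decrease_key(heap, x, new_key):
        """Sketch of decrease-key."""
        x.key = new_key
        root = heap.root
        if x is root:
            return
        y = x.parent
        was_active_root = x.is_active_root()
        link(x, root)                            # detach the subtree rooted at x, link it to the root
        if x.key < root.key:
            x.key, root.key = root.key, x.key    # swap keys (the old root key is still a lower bound for x's subtree)
        if x.active and not was_active_root:
            y.rank -= 1                          # y was active and lost an active child
            if not y.is_active_root():
                y.loss += 1
        elif not x.active and y.active and not y.is_active_root():
            y.loss += 1                          # y lost a passive child
        # Followed by one loss reduction, then up to six active root reductions and
        # four root degree reductions in alternation (omitted).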

Proof of bounds

The performance of the delete-min operation depends on the degree of the root. Unlike ordinary Fibonacci heaps, the number of structural violations in a strict Fibonacci heap is kept bounded. By applying the pigeonhole principle, structural violations can be detected and fixed in constant time.

Considerations

Although they are theoretically optimal, strict Fibonacci heaps are not practical to use. Despite their simpler structure, experiments show that in practice strict Fibonacci heaps perform more slowly than the more complicated Brodal queues, and also more slowly than ordinary Fibonacci heaps.[3][4]

Summary of running times

Here are time complexities[5] of various heap data structures. The abbreviation am. indicates that the given complexity is amortized, otherwise it is a worst-case complexity. For the meaning of "O(f)" and "Θ(f)" see Big O notation. Names of operations assume a min-heap.

Operation find-min delete-min decrease-key insert meld make-heap[a]
Binary[5] Θ(1) Θ(log n) Θ(log n) Θ(log n) Θ(n) Θ(n)
Skew[6] Θ(1) O(log n) am. O(log n) am. O(log n) am. O(log n) am. Θ(n) am.
Leftist[7] Θ(1) Θ(log n) Θ(log n) Θ(log n) Θ(log n) Θ(n)
Binomial[5][9] Θ(1) Θ(log n) Θ(log n) Θ(1) am. Θ(log n)[b] Θ(n)
Skew binomial[10] Θ(1) Θ(log n) Θ(log n) Θ(1) Θ(log n)[b] Θ(n)
2–3 heap[12] Θ(1) O(log n) am. Θ(1) Θ(1) am. O(log n)[b] Θ(n)
Bottom-up skew[6] Θ(1) O(log n) am. O(log n) am. Θ(1) am. Θ(1) am. Θ(n) am.
Pairing[13] Θ(1) O(log n) am. o(log n) am.[c] Θ(1) Θ(1) Θ(n)
Rank-pairing[16] Θ(1) O(log n) am. Θ(1) am. Θ(1) Θ(1) Θ(n)
Fibonacci[5][17] Θ(1) O(log n) am. Θ(1) am. Θ(1) Θ(1) Θ(n)
Strict Fibonacci[18][d] Θ(1) Θ(log n) Θ(1) Θ(1) Θ(1) Θ(n)
Brodal[19][d] Θ(1) Θ(log n) Θ(1) Θ(1) Θ(1) Θ(n)[20]
  1. ^ make-heap is the operation of building a heap from a sequence of n unsorted elements. It can be done in Θ(n) time whenever meld runs in O(log n) time (where both complexities can be amortized).[6][7] Another algorithm achieves Θ(n) for binary heaps.[8]
  2. ^ a b c For persistent heaps (not supporting decrease-key), a generic transformation reduces the cost of meld to that of insert, while the new cost of delete-min is the sum of the old costs of delete-min and meld.[11] Here, it makes meld run in Θ(1) time (amortized, if the cost of insert is) while delete-min still runs in O(log n). Applied to skew binomial heaps, it yields Brodal-Okasaki queues, persistent heaps with optimal worst-case complexities.[10]
  3. ^ Lower bound of Ω(log log n),[14] upper bound of O(2^(2√(log log n))).[15]
  4. ^ a b Brodal queues and strict Fibonacci heaps achieve optimal worst-case complexities for heaps. They were first described as imperative data structures. The Brodal-Okasaki queue is a persistent data structure achieving the same optimum, except that decrease-key is not supported.

References

  1. ^ Brodal, Gerth Stølting; Okasaki, Chris (November 1996). "Optimal purely functional priority queues". Journal of Functional Programming. 6 (6): 839–857. doi:10.1017/S095679680000201X. ISSN 0956-7968.
  2. ^ a b Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012-05-19). "Strict fibonacci heaps". Proceedings of the forty-fourth annual ACM symposium on Theory of computing. STOC '12. New York, NY, USA: Association for Computing Machinery: 1177–1184. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5.
  3. ^ Mrena, Michal; Sedlacek, Peter; Kvassay, Miroslav (June 2019). "Practical Applicability of Advanced Implementations of Priority Queues in Finding Shortest Paths". 2019 International Conference on Information and Digital Technologies (IDT). Zilina, Slovakia: IEEE. pp. 335–344. doi:10.1109/DT.2019.8813457. ISBN 9781728114019. S2CID 201812705.
  4. ^ Larkin, Daniel; Sen, Siddhartha; Tarjan, Robert (2014). "A Back-to-Basics Empirical Study of Priority Queues". Proceedings of the Sixteenth Workshop on Algorithm Engineering and Experiments: 61–72. arXiv:1403.0252. Bibcode:2014arXiv1403.0252L. doi:10.1137/1.9781611973198.7. ISBN 978-1-61197-319-8. S2CID 15216766.
  5. ^ a b c d Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms (1st ed.). MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
  6. ^ a b c Sleator, Daniel Dominic; Tarjan, Robert Endre (February 1986). "Self-Adjusting Heaps". SIAM Journal on Computing. 15 (1): 52–69. CiteSeerX 10.1.1.93.6678. doi:10.1137/0215004. ISSN 0097-5397.
  7. ^ a b Tarjan, Robert (1983). "3.3. Leftist heaps". Data Structures and Network Algorithms. pp. 38–42. doi:10.1137/1.9781611970265. ISBN 978-0-89871-187-5.
  8. ^ Hayward, Ryan; McDiarmid, Colin (1991). "Average Case Analysis of Heap Building by Repeated Insertion" (PDF). J. Algorithms. 12: 126–153. CiteSeerX 10.1.1.353.7888. doi:10.1016/0196-6774(91)90027-v. Archived from the original (PDF) on 2016-02-05. Retrieved 2016-01-28.
  9. ^ "Binomial Heap | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-09-30.
  10. ^ a b Brodal, Gerth Stølting; Okasaki, Chris (November 1996), "Optimal purely functional priority queues", Journal of Functional Programming, 6 (6): 839–857, doi:10.1017/s095679680000201x
  11. ^ Okasaki, Chris (1998). "10.2. Structural Abstraction". Purely Functional Data Structures (1st ed.). pp. 158–162. ISBN 9780521631242.
  12. ^ Takaoka, Tadao (1999), Theory of 2–3 Heaps (PDF), p. 12
  13. ^ Iacono, John (2000), "Improved upper bounds for pairing heaps", Proc. 7th Scandinavian Workshop on Algorithm Theory (PDF), Lecture Notes in Computer Science, vol. 1851, Springer-Verlag, pp. 63–77, arXiv:1110.4428, CiteSeerX 10.1.1.748.7812, doi:10.1007/3-540-44985-X_5, ISBN 3-540-67690-2
  14. ^ Fredman, Michael Lawrence (July 1999). "On the Efficiency of Pairing Heaps and Related Data Structures" (PDF). Journal of the Association for Computing Machinery. 46 (4): 473–501. doi:10.1145/320211.320214.
  15. ^ Pettie, Seth (2005). Towards a Final Analysis of Pairing Heaps (PDF). FOCS '05 Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science. pp. 174–183. CiteSeerX 10.1.1.549.471. doi:10.1109/SFCS.2005.75. ISBN 0-7695-2468-0.
  16. ^ Haeupler, Bernhard; Sen, Siddhartha; Tarjan, Robert E. (November 2011). "Rank-pairing heaps" (PDF). SIAM J. Computing. 40 (6): 1463–1485. doi:10.1137/100785351.
  17. ^ Fredman, Michael Lawrence; Tarjan, Robert E. (July 1987). "Fibonacci heaps and their uses in improved network optimization algorithms" (PDF). Journal of the Association for Computing Machinery. 34 (3): 596–615. CiteSeerX 10.1.1.309.8927. doi:10.1145/28869.28874.
  18. ^ Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E. (2012). Strict Fibonacci heaps (PDF). Proceedings of the 44th symposium on Theory of Computing - STOC '12. pp. 1177–1184. CiteSeerX 10.1.1.233.1740. doi:10.1145/2213977.2214082. ISBN 978-1-4503-1245-5.
  19. ^ Brodal, Gerth S. (1996), "Worst-Case Efficient Priority Queues" (PDF), Proc. 7th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 52–58
  20. ^ Goodrich, Michael T.; Tamassia, Roberto (2004). "7.3.6. Bottom-Up Heap Construction". Data Structures and Algorithms in Java (3rd ed.). pp. 338–341. ISBN 0-471-46983-1.


