TSLQueue: An Efficient Lock-Free Design for Priority Queues
Paper in proceedings, 2021
Priority queues are fundamental abstract data types, often used to manage limited resources in parallel systems. Typical parallel priority queue implementations proposed in the literature are based on heaps or skip lists. In recent literature, skip lists have been shown to be the most efficient design choice for implementing priority queues. Although numerous intricate implementations of skip-list-based queues have been proposed, their performance is constrained by the high number of global atomic updates per operation and by high memory consumption, both of which are proportional to the number of sub-lists in the queue. In this paper, we propose an alternative approach to designing lock-free linearizable priority queues that significantly improves memory efficiency and throughput by reducing the number of global atomic updates and the memory consumption compared to skip-list-based queues. To achieve this, our new design combines two structures: a search tree and a linked list, forming what we call a Tree Search List Queue (TSLQueue). The leaves of the tree are linked together to form a linked list of leaves, with a head node as the access point. Analytically, an insert or delete operation on a skip-list-based queue requires O(log n) global atomic updates in the worst case, where n is the size of the queue, whereas a TSLQueue insert or delete operation requires only 2 or 3 global atomic updates, respectively. In terms of memory consumption, the TSLQueue requires O(n) space, compared to a worst case of O(n log n) for a skip-list-based queue of the same size, making the TSLQueue more memory efficient. We experimentally show that the TSLQueue significantly outperforms the best previously proposed skip-list-based queues with respect to throughput performance.
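To make the combined structure concrete, the following C++ sketch shows one possible node layout consistent with the abstract: an external search tree whose leaves hold the queue elements and are additionally threaded into a singly linked list, with a head pointer as the access point for deletions. The field names (key, left, right, next, head, is_leaf) and the layout itself are illustrative assumptions, not the paper's actual implementation.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical node layout (assumed for illustration): internal nodes
// route searches through an external binary search tree, while leaves
// hold the actual elements and are linked into a list whose head is the
// single access point for deletions.
struct Node {
    uint64_t key = 0;                     // priority (leaves) or routing key (internal nodes)
    std::atomic<Node*> left{nullptr};     // tree children, used by internal nodes
    std::atomic<Node*> right{nullptr};
    std::atomic<Node*> next{nullptr};     // leaf-list successor, used by leaves
    bool is_leaf = false;
};

struct TSLQueue {
    Node* root = nullptr;                 // entry point for tree searches (insert)
    std::atomic<Node*> head{nullptr};     // entry point of the leaf list (delete-minimum)
};
```

Under this reading, a delete-minimum operation starts from head and advances along the leaf list without searching the tree, which is consistent with the small constant number of global atomic updates per operation stated above; the actual synchronization protocol is the one described in the paper, not this sketch.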
Lock-freedom
External trees
Linked lists
Priority queues
Shared data structures
Concurrency
Skip lists
Performance scalability