Update: Continued in this post.
I recently came across the need for an incremental sorting algorithm, and started to wonder how to do it optimally.
The incremental sorting problem is described here, and is an “online” version of partial sort. That is, if you have already sorted k elements, you should be able to quickly sort k+1 elements, and so on.
Incremental sorts can be useful for a number of cases:
- You want sorted items, but you don’t know how many elements you’ll need.
- This could often happen when you are filtering the resulting sequence, and you don’t know how many items will be filtered out.
- You are streaming the sequence, so even though you want the whole sequence, you want the first elements as quickly as possible.
We’ll see how branch misprediction and other constant factors can make the naive asymptotically optimal version far slower than a practical implementation.
Measuring the performance of an incremental sort is a bit more involved than for a full sort, because for a given k we care about how fast we reach each element. To keep it simple, I’ll keep n fixed at 10 million, and measure the time taken to reach each element. The input range is a vector of random integers.
Perhaps the simplest possible solution is pre-sorting the entire range, and then just using plain iterators to iterate over the sorted range.
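In code, the pre-sorting approach might look something like this (a minimal sketch; the class name is my own invention):

```cpp
#include <algorithm>
#include <vector>

// Pre-sorting "incremental" sort: all the work happens up front in the
// constructor, and iteration afterwards is just plain pointer chasing.
template <typename T>
class PreSorter {
public:
    explicit PreSorter(std::vector<T> data) : data_(std::move(data)) {
        std::sort(data_.begin(), data_.end());  // O(n log n), paid immediately
    }
    typename std::vector<T>::const_iterator begin() const { return data_.begin(); }
    typename std::vector<T>::const_iterator end() const { return data_.end(); }

private:
    std::vector<T> data_;
};
```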
Unsurprisingly, the performance of this algorithm is virtually independent of k, as all the work is done at the start. This is obviously not a good solution, but it’s a nice reference point, especially when k approaches n.
To avoid doing all the sorting work up front, the next logical step is trying to use something like std::partial_sort to do the work. The idea is to start by partially sorting a small, constant-sized part of the range and remember how much of the range is sorted. When you run out of sorted elements, do another partial sort.
Since a partial sort must look at all the remaining elements of the range, we can’t keep partially sorting small ranges; that would lead to quadratic complexity. Instead, we multiply the number of elements to sort each time by some constant, giving us a logarithmic number of partial sorts.
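A sketch of this strategy might look as follows (the class name, the growth factor of two, and the minimum chunk size of 16 are my own choices, not necessarily what the benchmarked version used):

```cpp
#include <algorithm>
#include <cstddef>

// Incremental sort built on repeated std::partial_sort calls over a
// geometrically growing prefix.
template <typename It>
class PartialSorter {
public:
    PartialSorter(It first, It last)
        : first_(first), last_(last), sorted_(first) {}

    // Ensure that the elements [first, first + k) are in sorted order.
    void sort_to(std::size_t k) {
        std::size_t n = static_cast<std::size_t>(last_ - first_);
        k = std::min(k, n);
        if (first_ + k <= sorted_) return;  // already sorted far enough
        std::size_t have = static_cast<std::size_t>(sorted_ - first_);
        // Grow the sorted prefix geometrically so that the total number of
        // partial sorts stays logarithmic.
        std::size_t want = std::min(n, std::max({k, have * 2, std::size_t{16}}));
        // Everything before sorted_ is already in place and no larger than
        // anything after it, so only the tail needs sorting.
        std::partial_sort(sorted_, first_ + want, last_);
        sorted_ = first_ + want;
    }

private:
    It first_, last_, sorted_;
};
```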
This method is a huge improvement for small k, but it performs worse for k around 1 million and above.
Partial Sorting (ms)
Since each call to the std::partial_sort algorithm costs O(n log k), the O(log n) calls to std::partial_sort have a total worst-case running time of O(n log² n). This means that in total we are doing up to a factor of log n more work than we should; time to fix this.
In a paper from 2006, Paredes and Navarro describe an algorithm which uses an incremental version of quicksort to achieve the optimal expected bound of O(n + k log k) for incremental sorting.
This method keeps a stack of partition points for the range. Each partition point is an iterator to a position in the range where everything before the position is smaller than everything after. The stack is sorted such that the top of the stack is the position closest to the start of the range.
This stack has the wonderful property that when you are looking for the next element in the sorted range, and your current position is not equal to the top of the stack, the next element must be between the current position and the top of the stack.
If the distance to the next partition point is large, you cut it in half with std::partition, and push the new partition point on the stack. If the distance is small, we can sort the small range with std::sort, and pop the element from the stack. A simple version is shown here:
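(This is my own rendering of the scheme just described; the class name, the cutoff constant, and the middle-element pivot choice are assumptions, not the exact benchmarked code.)

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Incremental quicksort: a stack of partition points, refined lazily as the
// caller consumes elements in sorted order.
template <typename It>
class IncrementalQuickSorter {
    static constexpr std::ptrdiff_t kCutoff = 16;  // below this, just std::sort

public:
    IncrementalQuickSorter(It first, It last) : pos_(first), sorted_(first) {
        stack_.push_back(last);  // the end of the range is always a partition point
    }

    // Returns an iterator to the next element in sorted order.
    // Precondition: not all elements have been consumed yet.
    It next() {
        if (pos_ < sorted_) return pos_++;  // still inside a sorted run
        while (stack_.back() - pos_ > kCutoff) {
            // Cut the range up to the nearest partition point and push the
            // pivot's final position onto the stack.
            stack_.push_back(partition(pos_, stack_.back()));
        }
        // The remaining range is small: sort it and pop the partition point.
        std::sort(pos_, stack_.back());
        sorted_ = stack_.back();
        stack_.pop_back();
        return pos_++;
    }

private:
    // Lomuto-style partition around the middle element; returns the pivot's
    // final position. Everything before it is smaller, everything from it
    // onwards is greater or equal.
    static It partition(It first, It last) {
        std::iter_swap(first + (last - first) / 2, last - 1);
        auto pivot = *(last - 1);
        It store = first;
        for (It it = first; it != last - 1; ++it)
            if (*it < pivot) std::iter_swap(it, store++);
        std::iter_swap(store, last - 1);
        return store;
    }

    It pos_, sorted_;
    std::vector<It> stack_;
};
```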
Looking at the results for this algorithm, it’s clear that the total amount of work done when k = n is now optimal, because the time to sort the whole range is now as fast as a single call to std::sort. This has the benefit that you don’t need to worry about switching over to std::sort if you know that you’ll need a large part of the result; you can just use the incremental version anyway.
Simple Quick Sorting (ms)
| k | Partial Sorter | Pre-Sorter | Simple Quick Sorter |
The downside of this algorithm, though, is that it spends considerably more time than std::partial_sort coming up with the first few elements. If I need between 1 and 5000 elements, partial sorting is still the way to go by the looks of it. Why is that?
Number of comparisons
While std::partial_sort is only guaranteed to do O(n log k) comparisons, a typical implementation will do closer to n comparisons for small k and random data. This is because the implementation typically makes a max-heap of the first k elements, and then walks through the rest of the range, inserting elements if they are smaller than the current max.
When k is small, the elements in the heap will quickly become much smaller on average than most elements in the rest of the range, so insertions into the heap become less frequent as the range is processed. That means that for most of the elements in the range, the algorithm will just do a single comparison to verify that the element should not be in the heap.
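The typical heap-based scheme looks roughly like this (an illustration of the idea, not the actual libstdc++ or libc++ code):

```cpp
#include <algorithm>

// Heap-based partial sort: keep a max-heap of the k smallest elements seen
// so far, then sort the heap at the end.
template <typename It>
void heap_partial_sort(It first, It middle, It last) {
    std::make_heap(first, middle);  // max-heap of the first k elements
    for (It it = middle; it != last; ++it) {
        // For random data and small k this test is almost always false once
        // the heap holds small elements, so most elements cost exactly one
        // comparison.
        if (*it < *first) {
            std::pop_heap(first, middle);      // current max to middle - 1
            std::iter_swap(middle - 1, it);    // replace it with the new element
            std::push_heap(first, middle);     // restore the heap invariant
        }
    }
    std::sort_heap(first, middle);  // final O(k log k) step
}
```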
The incremental quick sort implementation, on the other hand, does a few more comparisons to find the first element. The first partition operation does n comparisons to cut the range in two, the next partition takes n/2 comparisons to cut the first half down to a quarter, and so on. In total, it does about n + n/2 + n/4 + … ≈ 2n comparisons to find the first element, twice that of a good partial sort.
The difference in performance is much more than the difference in comparisons can explain, however, which brings us to the next point.
A low number of comparisons is not the only benefit of partial sort; it also has the benefit that most of the comparisons give the same result. This makes for a very fast inner loop, because it avoids branch mispredictions.
For a partition operation that cuts the range in half, the number of branch mispredictions is at its peak: about every other comparison will be a misprediction. To verify that this was a big contributing factor, I tested std::partition with pivot elements that cut the range at various percentiles to see the effect. I also compared with two hand-written loops (one with normal branching and one branchless) to see how the effects added up.
We see that partition speed is heavily impacted by branch mispredictions; they appear to account for most of the difference in speed.
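The two hand-written loops could look roughly like this (a sketch using a simple count-below-pivot loop as the stand-in, since it has the same comparison pattern as a partition scan):

```cpp
#include <cstddef>

// Branching version: the comparison compiles to a conditional branch, which
// mispredicts often when the pivot sits near the median of the data.
std::size_t count_below_branchy(const int* data, std::size_t n, int pivot) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (data[i] < pivot) ++count;
    return count;
}

// Branchless version: the comparison result is consumed as an integer, so
// the compiler can emit setcc/cmov instead of a branch.
std::size_t count_below_branchless(const int* data, std::size_t n, int pivot) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        count += static_cast<std::size_t>(data[i] < pivot);
    return count;
}
```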
Skewed Incremental Quick Sort
After looking at the branch misprediction measurements, I realized that a balanced partition operation could not be the basis for an optimal incremental sort, but a skewed one could. If we skew the first partitions towards the beginning of the range, we get both fewer comparisons and branch mispredictions. The skew comes with a cost, however, as there is more work to do in total if we consistently get very skewed partitions. To counteract this effect, only the first partitions should be skewed. As we iterate further into the range, the partitions should be made less skewed, so we get about the right amount of work.
In the implementation of skewed incremental quicksort, I skewed the partitions by sampling a large number of pivots, sorting them, and choosing the pivot corresponding to the desired amount of skew.
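The pivot selection might be sketched like this (the function name, sample size of 32, and the exact meaning of the skew parameter are my own assumptions):

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Pick a skewed pivot: sample candidate positions, sort them by value, and
// return the one at the requested percentile of the sample.
template <typename It, typename Rng>
It choose_skewed_pivot(It first, It last, double skew, Rng& rng) {
    constexpr std::size_t kSamples = 32;
    std::uniform_int_distribution<std::ptrdiff_t> dist(0, (last - first) - 1);
    std::vector<It> sample;
    sample.reserve(kSamples);
    for (std::size_t i = 0; i < kSamples; ++i)
        sample.push_back(first + dist(rng));
    // Order the sampled positions by the values they point at.
    std::sort(sample.begin(), sample.end(),
              [](It a, It b) { return *a < *b; });
    // skew = 0.5 behaves like a median-of-samples pivot; smaller values skew
    // the partition towards the beginning of the range.
    std::size_t idx = static_cast<std::size_t>(skew * (kSamples - 1));
    return sample[idx];
}
```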
Skewed Quick Sorting (ms)
| k | Partial Sorter | Pre-Sorter | Simple Quick Sorter | Skewed Quick Sorter |
By combining the incremental quick sort algorithm with smart pivot selection, we can get an incremental sorting algorithm that is as fast as partial sort for small k, and as fast as a full sort when k approaches n.
This is my first real blog post, so comments and suggestions are welcome!