I have frequently seen the question ‘How do I push an element to the front of a C++ vector?’. Usually the correct answer is to use deque instead, and that’s the end of it. But what if you also want contiguous storage? In this post I present a simple implementation of a double-ended vector,
devector, and compare the performance to vector, deque, and a circular buffer.
std::vector is typically implemented as a memory buffer of a certain size, which is partially filled up with elements, beginning at the start of the buffer. The distance from the last element to the end of the buffer is the spare capacity of the vector, and determines how many elements we can insert before we need to reallocate. The fact that we only have spare capacity at the back of the vector means that we can only insert efficiently at the back (or close to it).
The most important change we need to make to allow for fast insertion at both ends is to allow the
devector to have spare capacity at both ends. This means we keep a pointer to the start of the elements, which can be different from the start of the underlying buffer. As long as there is spare capacity at both ends of the buffer, we can insert at both ends quickly.
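The layout described above can be sketched as a handful of pointers. This is a hypothetical illustration of the idea, not the actual cppwhiletrue implementation; all names are my own.

```cpp
#include <cstddef>

// Sketch of a devector's memory layout: unlike std::vector, the first
// element need not sit at the start of the allocated buffer, so there can
// be spare capacity on both sides.
template <typename T>
struct devector_layout {
    T* buffer;     // start of the allocated buffer
    T* first;      // first element; may be past the start of the buffer
    T* last;       // one past the last element
    T* buffer_end; // one past the end of the allocated buffer

    std::size_t size() const        { return last - first; }
    std::size_t capacity() const    { return buffer_end - buffer; }
    std::size_t front_spare() const { return first - buffer; }
    std::size_t back_spare() const  { return buffer_end - last; }
};
```

As long as both front_spare() and back_spare() are nonzero, a push at either end is just a pointer decrement or increment plus a construction.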
Reaching the end
Inevitably, we will use up all spare capacity on one side of the buffer, and then how do we handle the next insertion? For
std::vector this is trivial, when we reach the back of the buffer we have to reallocate, because there is never room at the front end. For
devector, however, we have a few options when the other side of the buffer still has spare capacity:
- Move elements one step toward the other side. This is equivalent to how front insertions are handled in std::vector, and is very slow. Using this strategy would mean that the devector devolves to O(n) insertions at one end.
- Move elements several steps towards the other side. This is more interesting, because of its nice asymptotic properties. The initial insert is still slow, but the following inserts will be fast again. If we move the elements such that the remaining spare capacity is equal on both sides, we only need to do this O(log n) times before the buffer is completely full. This is because each time we fill up on one side and move elements, the spare capacity is cut in half. Since we start with O(n) spare capacity after the previous reallocation, it takes O(log n) moves to get down to a constant amount of spare capacity.
- Reallocate, and place the elements in the middle of a new buffer. This is fast for a growing devector, because we don't waste time moving elements back and forth in the same small buffer. It is bad for memory usage, though, because we reallocate to a larger buffer even though we haven't filled up the current one. It is also bad if we are pushing elements to one side and popping them from the other, because then reaching the end doesn't mean we need more space; it just means the elements should be moved.
The best option is a combination of 2 and 3. When there is still a lot of room left in the buffer, we should move elements toward the middle, and not reallocate straight away. However, as the buffer fills up, the gains from moving elements are diminishing, while the cost is increasing. Moving n elements O(log n) times takes O(n log n) work, and amortized over the O(n) intervening inserts we get O(log n) time per element, which is unacceptable. A nice solution is to move elements if the size-to-capacity ratio is less than some limit r, and reallocate if the ratio is larger.
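The decision at the end of the buffer can be sketched in a few lines. The specific values of the limit and growth factor below are illustrative assumptions, not taken from the post's code.

```cpp
#include <cstddef>

constexpr double realloc_limit = 0.9; // r: reallocate above this fill ratio
constexpr double growth_factor = 2.0; // g: capacity multiplier on reallocation

enum class action { move_to_middle, reallocate };

// When one side has run out of spare capacity, choose between recentring
// the elements within the current buffer and allocating a larger one.
action on_full_side(std::size_t size, std::size_t capacity) {
    double ratio = static_cast<double>(size) / capacity;
    return ratio < realloc_limit ? action::move_to_middle : action::reallocate;
}
```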
I will now show that this tactic leads to amortized constant time inserts. After the previous reallocation, the
devector is r/g full, where g is the growth factor used for reallocations. For example, if the limit r is 90% and the growth factor is 2, the
devector will be approximately 45% full after a reallocation. The spare capacity right after the reallocation is 1 - r/g (55% in the example). If every move operation cuts the spare capacity in two, it takes log2((1 - r/g) / (1 - r)) move operations before we reach the limit again and should do a new reallocation. Since r and g are constants, the number of move operations is constant. Moving n elements O(1) times takes O(n) work, and since it is amortized over the O(n) inserts between reallocations, we get amortized O(1) time per insert.
Adapting to the input
It's great to have amortized O(1) insertion at both ends, but the constant factors can still be high, as we must move all elements several times between each reallocation. The problem is that no matter how the
devector is used, we treat insertions at both ends as equally likely, so we try to keep equal amounts of spare capacity on both ends. What if we assumed that previous behaviour predicts future behaviour? That's often a reasonable assumption. If we keep track of how many insertions are done at each end, we can give more spare capacity to the active end, thus cutting down on (or totally getting rid of) the move operations. As long as we always give some spare capacity to both ends, the analysis above still holds.
For example, if we have reached the back end of the buffer after a series of
push_backs, and the buffer is 70% full, an unadaptive tactic would give 15% spare capacity to each side. We can then
push_back 15% more elements until we run out of spare capacity at the back again. The buffer is now 85% full, and we need to move the elements again (assuming the reallocation limit is 90%).
If we use an adaptive tactic, we could give the front 5% spare capacity, and give 25% to the back. Then we can
push_back 25% more before reaching the end. Now the buffer is 95% full, and we go straight to reallocation, without an additional move operation in between.
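One way to implement the adaptive split is to divide the free space in proportion to the recent insertion counts, while always reserving at least one slot per side so the amortized analysis above still applies. This is a sketch under those assumptions, not the post's actual code.

```cpp
#include <cstddef>
#include <utility>

// Divide `spare` free slots between the front and back ends, proportionally
// to how many insertions each end has seen. Both ends always get some spare
// capacity (when possible), so the amortized bound is preserved.
std::pair<std::size_t, std::size_t>
split_spare(std::size_t spare, std::size_t front_inserts, std::size_t back_inserts) {
    std::size_t total = front_inserts + back_inserts;
    if (total == 0 || spare < 2)
        return {spare / 2, spare - spare / 2};   // no history: split evenly
    std::size_t front = spare * front_inserts / total;
    if (front == 0)     front = 1;               // never starve the front
    if (front == spare) front = spare - 1;       // never starve the back
    return {front, spare - front};
}
```

In the 70%-full example above, a pure push_back workload would send nearly all of the 30% free space to the back, matching the 5%/25% split described in the text.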
The code for
devector as well as the performance measurements is available at github/cppwhiletrue. There are probably still bugs in the code, but I think it's correct enough for the measurements to be meaningful. I apologize for the non-portable time measurement code, but it was the simplest way to get high resolution timing with g++/mingw.
All results reported are the median of 51 runs. The performance tests are run on an Intel Core i7-4600U @2.1GHz. I report results from both the MSVC compiler and g++/mingw. The
circular data structure used in the tests is a slightly adapted
Push back
In this test, I measured the time taken to
push_back random integers into each of the structures. The performance of
devector looks good for back insertion, only beaten by the libstdc++ version of
std::deque. The difference between libstdc++ and msvc
std::deque is likely due to the small node size used by msvc.
integer push back - g++/mingw (ns)
integer push back - msvc (ns)
Push front
In this test, I measured the time taken to
push_front random integers into each of the structures. The performance of
devector also looks good for front insertion, again only beaten by the libstdc++ version of std::deque.
integer push front - g++/mingw (ns)
integer push front - msvc (ns)
Push back / Pop front
In this test, I first pushed a number of elements to each structure, and then measured the time taken to alternately
push_back and pop_front random integers. This test emulates the steady state operation of a queue.
integer push/pop - g++/mingw (ns)
integer push/pop - msvc (ns)
If you are looking for a
std::vector-like structure with the ability to insert/erase at both ends, or need a double-ended queue structure with contiguous storage, I think a double-ended vector is a good alternative.
I think a nice use case is if you need to interact with low-level APIs that take contiguous ranges, and you don't want to pay the cost of copying the elements to a
std::vector before you call the API.
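To make the use case concrete, here is a small example of handing a contiguous buffer to a C-style API (std::fwrite). std::vector stands in for the devector, since the point is the same: data()/size() go straight to the API, whereas a std::deque's elements would first have to be copied into a temporary contiguous buffer.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Write a contiguous range of ints to a file with a single fwrite call.
// Returns the number of elements written.
std::size_t write_ints(const char* path, const int* data, std::size_t n) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return 0;
    std::size_t written = std::fwrite(data, sizeof(int), n, f);
    std::fclose(f);
    return written;
}
```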
Update: When I wrote this post, I was sure someone had already made similar things, as the basic idea is pretty simple. Here are a couple of examples (please point me to others):