C++ requires many algorithms to be implemented in amortized constant time. I was always curious what this meant for vector insertion. To get started I first wrote an article about amortized time in general. Now I’d like to apply that to appending vector items.

> 23.3.6.1-1 A vector is a sequence container that supports random access iterators. In addition, it supports (amortized) constant time insert and erase operations at the end;

In this analysis I’m looking only at end insertion, which is usually done with the C++ `vector.push_back` function.

### Cost of copying

To do complexity analysis we need to know what is being counted. Often we count the number of comparisons, such as with trees, but with an unordered vector that is silly. Maybe we should include memory allocation, or is that constant anyway? What is the cost model, shall I assume a “random-access machine”? The standard only gives us this description:

> 23.2.1-1 All of the complexity requirements in this Clause are stated solely in terms of the number of operations on the contained objects. [ Example: the copy constructor of type `vector<vector<int>>` has linear complexity, even though the complexity of copying each contained `vector<int>` is itself linear. -- end example ]

Let’s just focus on the cost of copying data; this should be acceptable under that definition. For common use it is also the most interesting cost, since it is usually the dominant one.

Given a type `T`, we define the cost of an operation as the number of times an object of type `T` is copied. That is, each time an item is copied it costs 1.

### What is a vector?

It’s also helpful to know what a `vector` actually is. It has a current number of items and a total capacity to store more. The requirements also state:

> The elements of a vector are stored contiguously, meaning that if `v` is a `vector<T, Allocator>` … then it obeys the identity `&v[n] == &v[0] + n` for all `0 <= n < v.size()`.

A pseudo-structure for the vector can be:

```
vector {
    T* store
    int size
    int capacity
}
```

`capacity` is the total space available in `store`. `size` is the current number of items in `store` that are actually defined. The remaining slots are just undefined memory.

### Two insertion scenarios

When we insert a new item into a vector one of two things can happen:

- `size < capacity`: The item is copied to `store[size]` and `size` increments. Cost of `1`.
- `size == capacity`: New space must be allocated first. The old items must be copied to the new store. Then the new item is added. Cost of `size + 1`.

That second cost is clearly linear, not the constant time the standard requires.

This is where amortized cost comes in. We aren’t interested in the worst case, but the average case. We don’t always have to resize the vector; that is the purpose of having extra capacity. The amortized cost is the average cost over many insertions. There must be some way to get this cost to be constant.

### How many resizes

Let’s first try the approach of increasing the capacity by $c$ each time, some constant amount. This means that every $c$ inserts will require a resize. Call the resize operation $R_i$, so at $R_1$ we have $c$ items in the list, at $R_2$ we have $2c$ items, at $R_3$ we have $3c$ items. For completeness, at $R_0$ we have $0$ items.

Operation $R_i$ requires that $R_{i-1}$ has already happened. We can only insert one item at a time into the list, thus no resize operations can be skipped. The total cost of this sequence of resizes can be expressed as this recursive relation:

$$T(R_i) = T(R_{i-1}) + i \cdot c \qquad T(R_0) = 0$$

Reducing to a single closed form equation that gives us:

$$T(R_k) = \sum_{i=1}^{k} i \cdot c = \frac{c \cdot k(k+1)}{2}$$

We need a form related to $n$, the number of insert operations we perform in total. This is easy, since $k = n/c$; just assume integer math to avoid needing a floor operation. In addition to the resizing cost we need the individual insert cost, 1 for each item. So the total cost of the sequence of $n$ insertions is:

$$T(n) = \frac{c \cdot \frac{n}{c}\left(\frac{n}{c}+1\right)}{2} + n = \frac{n^2}{2c} + \frac{3n}{2}$$

To get the amortized cost we divide by the total number of operations:

$$\frac{T(n)}{n} = \frac{n}{2c} + \frac{3}{2}$$

Strip out the constants and we’re left with an amortized cost of $O(n)$. Thus any constant increment $c$ results in linear amortized time, not the desired constant time.

### Geometric size

Let’s move on to another option for resizing: start with 1 capacity and double it on each resize. To simplify the math we’ll use indexing starting at 0. The $R_0$ operation needs to move 1 item, $R_1$ moves 2 items, and the $R_i$ operation moves $2^i$ items. The total cost of the resizes up to $R_k$ is:

$$T(R_k) = \sum_{i=0}^{k} 2^i$$

Basic geometric series reduce very nicely:

$$T(R_k) = 2^{k+1} - 1$$

At this point we could swallow the cost of 1 copy and drop the $-1$ part. Given the exponential term before it, the constant 1 isn’t relevant. For thoroughness I’ll leave it in.

Relate back to $n$, where $n = 2^k$, and add in the individual inserts:

$$T(n) = 2 \cdot 2^k - 1 + n = 2n - 1 + n = 3n - 1$$

Divide by $n$, the number of inserts, to get the amortized cost:

$$\frac{T(n)}{n} = 3 - \frac{1}{n}$$

The constant 3 is the largest term, so our amortized cost of insert is $O(1)$. Success! That’s the amortized constant time the standard requires.

### The intuitive solution

That 3 represents a concrete number of operations: how many times each item will be copied. To be amortized constant time it could have been 2 or 4; it doesn’t matter. Indeed, if we were off by just 1 in our definition of $n$ it would have been either of those. So let’s think about what those 3 copies actually are.

The first copy is clear: when you first insert an element it has to be copied into the `store` memory. That leaves us with two copies.

Think of the structure of `store` at the moment a resize happens. It starts with `size == capacity`; it’s full, that’s why it’s being resized. After the resize it ends up at `size == capacity/2`, as the capacity has doubled. The second half of the `store` is empty.

We know this second half will eventually be filled. Once filled, all items have to be copied somewhere new. This means that for each item inserted, 2 copies happen: the item itself will need to be copied somewhere new, as well as one item from the first half. This relation holds regardless of how many resizings have been done. Each resize always results in one full half and one empty half.

If we think in terms of the accounting method, that means insertion costs 3 credits. The first credit pays for the item’s immediate insertion. The second credit pays for its eventual copy to a new `store`. The third credit pays for copying an existing item to the new `store`. The doubling in capacity at each resize ensures these new items always have enough credit to cover the copying cost.