Memory Allocation

Intel® Threading Building Blocks (Intel® TBB) provides two memory allocator templates that are similar to the STL template class std::allocator. These two templates, scalable_allocator<T> and cache_aligned_allocator<T>, address critical issues in parallel programming: scalable_allocator<T> addresses scalability, so that threads allocating and deallocating memory concurrently do not slow each other down by contending for a single shared pool, and cache_aligned_allocator<T> addresses false sharing, which occurs when two threads access different words that happen to share the same cache line.

Use the class cache_aligned_allocator<T> to allocate memory aligned on cache line boundaries. Two objects allocated by cache_aligned_allocator<T> are guaranteed not to share a cache line, and therefore not to have false sharing. If an object is allocated by cache_aligned_allocator<T> and another object is allocated some other way, there is no such guarantee. The interface to cache_aligned_allocator<T> is identical to std::allocator, so you can use it as the allocator argument to STL template classes.

The following code shows how to declare an STL vector that uses cache_aligned_allocator for allocation:

std::vector<int, tbb::cache_aligned_allocator<int> > v;

Tip

The functionality of cache_aligned_allocator<T> comes at some cost in space, because it must allocate at least one cache line’s worth of memory, even for a small object. So use cache_aligned_allocator<T> only if false sharing is likely to be a real problem.
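The no-false-sharing guarantee described above applies to objects that come from separate calls to the allocator. The following minimal sketch of that case is not from the original text; it assumes the classic header name tbb/cache_aligned_allocator.h, and the two counters and loop counts are purely illustrative.

#include "tbb/cache_aligned_allocator.h"
#include <thread>

int main() {
    tbb::cache_aligned_allocator<long> alloc;

    // Two separate allocations: each starts on its own cache line,
    // so the threads below can update their counters without false sharing.
    long* a = alloc.allocate(1);
    long* b = alloc.allocate(1);
    *a = 0;
    *b = 0;

    std::thread t1([a]{ for (int i = 0; i < 1000000; ++i) ++*a; });
    std::thread t2([b]{ for (int i = 0; i < 1000000; ++i) ++*b; });
    t1.join();
    t2.join();

    alloc.deallocate(a, 1);
    alloc.deallocate(b, 1);
    return 0;
}

Had the two counters instead been adjacent elements of a single allocation, they could still share a cache line; the guarantee covers only distinct allocations.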

The scalable memory allocator incorporates McRT technology developed by Intel's PSL CTG team.
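
Because this section names scalable_allocator<T> but does not show it in code, here is a minimal sketch, not from the original text, that uses it as the allocator of an STL container so that the container's memory comes from the scalable memory allocator. It assumes the header tbb/scalable_allocator.h shipped with Intel TBB; the container contents are illustrative.

#include "tbb/scalable_allocator.h"
#include <vector>

int main() {
    // Reallocations triggered by push_back go through the scalable memory
    // allocator, which is designed so that many threads allocating at once
    // do not contend on a single global heap lock.
    std::vector<int, tbb::scalable_allocator<int> > data;
    for (int i = 0; i < 1000; ++i)
        data.push_back(i);
    return 0;
}

Programs that use scalable_allocator<T> must also link against the TBB scalable memory allocator library (tbbmalloc) in addition to the main TBB library.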