Some others in the community might do a much better job of explaining this than I can (and they might give you better advice), but you have many options here. The first things to consider, I think, are how much your allocation lifetimes vary and how much your allocation sizes vary. I say this because if you can guarantee certain constraints, like uniform lifetimes or uniform sizes, the problem of allocation becomes much simpler.
If you have a case in which some operation needs allocations with uniform lifetimes but non-uniform sizes, you have a few options.
1. Memory Block Linked List
This case is particularly good when you tend to iterate through your allocations linearly. A good example is allocating, then traversing, an abstract syntax tree, since the traversal tends to happen in the same order as the allocation. In this case, you might have a couple of structures that look something like this:
struct MemoryBlock
{
    int size;
    int alloc_position;
    void *memory;
    struct MemoryBlock *next;
};

struct Allocator
{
    struct MemoryBlock *active;
};
Each MemoryBlock is pretty much just a stack. When you run out of space, you allocate a new one, forming a linked list of memory blocks over time. When you are done, you can traverse the linked list and release each block. Alternatively, if you don't need to free memory during runtime, you can let the operating system release all of your memory at process exit, and never free anything yourself.
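To make that concrete, here is a minimal sketch of the allocation path, using the MemoryBlock and Allocator structs above. The helper name allocator_push, the 4096-byte default block size, and the use of malloc for the blocks themselves are my own choices to keep the sketch self-contained (error handling omitted):

#include <stdlib.h>

#define DEFAULT_BLOCK_SIZE 4096

static void *allocator_push(struct Allocator *allocator, int size)
{
    struct MemoryBlock *block = allocator->active;

    // Chain a new block if there is no active block yet, or the active one
    // does not have enough room left for this allocation.
    if (!block || block->alloc_position + size > block->size)
    {
        int block_size = size > DEFAULT_BLOCK_SIZE ? size : DEFAULT_BLOCK_SIZE;
        struct MemoryBlock *new_block = malloc(sizeof(struct MemoryBlock));
        new_block->size = block_size;
        new_block->alloc_position = 0;
        new_block->memory = malloc(block_size);
        new_block->next = block; // keep older blocks reachable for release later
        allocator->active = new_block;
        block = new_block;
    }

    // Bump the position within the active block, like pushing onto a stack.
    void *result = (char *)block->memory + block->alloc_position;
    block->alloc_position += size;
    return result;
}

Note that new blocks are linked newest-first here; if you also want to walk allocations across blocks in the order they were made, you can keep a pointer to the first block and link the blocks forward instead.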
Also in the category of "uniform lifetimes, variable sizes", you have:
2. Virtual Memory Linear Arena
In this case, pages of physical memory are not actually reserved until your program touches them. The operating system simply gives your program a section of virtual address space, and when you read or write those pages, the operating system backs them with physical memory. This is done through operating system calls like VirtualAlloc.
In this case, you can treat the reserved virtual memory as a single MemoryBlock in the previous example, where to allocate, you just bump a pointer forward.
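As a rough Windows-flavored sketch of that idea: reserve a large range of address space up front, then commit pages only as the bump pointer crosses into them. The Arena struct, the arena_init / arena_push names, and the hard-coded 4 KB page size are illustrative assumptions (reserve_size is assumed to be page-aligned), not from any particular library:

#include <windows.h>
#include <stdint.h>

struct Arena
{
    uint8_t *base;    // start of the reserved address range
    size_t reserved;  // total address space reserved up front
    size_t committed; // bytes currently backed by committed pages
    size_t position;  // current allocation offset (the bump pointer)
};

static int arena_init(struct Arena *arena, size_t reserve_size)
{
    // Reserve address space only; no pages are committed yet.
    arena->base = (uint8_t *)VirtualAlloc(0, reserve_size, MEM_RESERVE, PAGE_NOACCESS);
    arena->reserved = reserve_size;
    arena->committed = 0;
    arena->position = 0;
    return arena->base != 0;
}

static void *arena_push(struct Arena *arena, size_t size)
{
    if (arena->position + size > arena->reserved)
    {
        return 0; // out of reserved address space
    }

    // Commit more pages when the bump pointer moves past what is committed.
    if (arena->position + size > arena->committed)
    {
        size_t commit_size = (arena->position + size) - arena->committed;
        commit_size = (commit_size + 4095) & ~(size_t)4095; // round up to 4 KB pages
        if (!VirtualAlloc(arena->base + arena->committed, commit_size, MEM_COMMIT, PAGE_READWRITE))
        {
            return 0;
        }
        arena->committed += commit_size;
    }

    void *result = arena->base + arena->position;
    arena->position += size;
    return result;
}

Releasing everything can then be a single VirtualFree(arena->base, 0, MEM_RELEASE), and resetting the arena is just setting position back to zero.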
In cases where you cannot guarantee uniform lifetimes, but you can guarantee uniform sizes, you have the following option:
3. Memory Pool
In this case, you allocate a big block of memory that is subdivided into equally-sized chunks. You maintain a linked list of the free chunks, usually called the "free list", of which you only need to store the head. At initialization time, every chunk in the memory block should be on the free list. When you need to allocate, you grab the head of the free list (some chunk of memory), set the head to that chunk's next pointer, and hand the chunk to the caller.
When you need to free a chunk of memory, you set that chunk's next pointer to the current head of the free list, and make the chunk the new head.
In this case, when a memory chunk is free, you treat it as a node in a linked list:
struct MemoryChunk
{
    struct MemoryChunk *next;
};
But when a memory chunk is allocated, the caller treats that memory however it wants (no longer aliasing it as a MemoryChunk).
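Putting those two operations together, a pool along these lines might look like the following sketch. The Pool struct and the pool_init / pool_alloc / pool_free names are illustrative; the MemoryChunk struct is repeated from above so the sketch stands on its own:

#include <stddef.h>
#include <stdint.h>

struct MemoryChunk
{
    struct MemoryChunk *next;
};

struct Pool
{
    struct MemoryChunk *free_list; // head of the free list
};

// Threads every chunk of the backing memory onto the free list.
static void pool_init(struct Pool *pool, void *memory, size_t chunk_size, size_t chunk_count)
{
    uint8_t *bytes = (uint8_t *)memory;
    pool->free_list = 0;
    for (size_t i = 0; i < chunk_count; i += 1)
    {
        struct MemoryChunk *chunk = (struct MemoryChunk *)(bytes + i * chunk_size);
        chunk->next = pool->free_list;
        pool->free_list = chunk;
    }
}

// Pops the head of the free list; returns 0 if the pool is exhausted.
static void *pool_alloc(struct Pool *pool)
{
    struct MemoryChunk *chunk = pool->free_list;
    if (chunk)
    {
        pool->free_list = chunk->next;
    }
    return chunk;
}

// Pushes a chunk back onto the front of the free list.
static void pool_free(struct Pool *pool, void *memory)
{
    struct MemoryChunk *chunk = (struct MemoryChunk *)memory;
    chunk->next = pool->free_list;
    pool->free_list = chunk;
}

The one constraint is that chunk_size must be at least sizeof(struct MemoryChunk), i.e. one pointer, so a free chunk has room to hold its next link.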
Now, in the last case, you have variable sizes and variable lifetimes.
4. Variable-Sized Memory Pool
In this case, you subdivide a pre-allocated block of memory into chunks of several fixed sizes. For example, you might have chunks of 16 bytes, 32 bytes, 64 bytes, 128 bytes, 256 bytes, and so on. If an allocation call requires something bigger than the maximum chunk size, you can make an operating system call to reserve pages for it directly. You manage each size class in the same way as the memory pool discussed above.
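As a rough sketch of just the allocation path, assuming the Pool type and pool_alloc from the previous example, routing a request to a size class might look like this (the size classes, the size_class_pools array, and the OS-level fallback comment are all illustrative):

#include <stddef.h>

#define SIZE_CLASS_COUNT 5
static const size_t size_classes[SIZE_CLASS_COUNT] = { 16, 32, 64, 128, 256 };
static struct Pool size_class_pools[SIZE_CLASS_COUNT]; // one pool per chunk size

static void *sized_alloc(size_t size)
{
    // Find the smallest size class that fits the request.
    for (int i = 0; i < SIZE_CLASS_COUNT; i += 1)
    {
        if (size <= size_classes[i])
        {
            return pool_alloc(&size_class_pools[i]);
        }
    }
    // Bigger than the largest size class: go straight to the operating
    // system (VirtualAlloc, mmap, or similar) for dedicated pages.
    return 0; // placeholder for the OS-level path
}

Freeing has to route the pointer back to whichever pool (or operating system allocation) it came from, which is part of what makes this case more involved.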
In the variable-size, variable-lifetime case, things can get complicated. Also keep in mind that, for simplicity, I didn't discuss alignment, which is something you might want to look into if you are writing an allocator.
I am of the opinion that you should always simplify your problem wherever you can. If you can somehow reduce a problem to one where you have uniform lifetimes or uniform sizes, even if some amount of memory is wasted, that is preferable to a complex, slow system that doesn't waste any memory.