Memory: Allocators
Injecting the Allocator
-
One of Zig's core principles is that there are no hidden memory allocations.
-
It's a sharp contrast to what you'll find in C, where memory is allocated with the standard library's malloc function.
-
In C, if you want to know whether or not a function allocates memory, you need to read the source and look for calls to malloc.
-
-
The advantage of injecting the allocator isn't just explicitness, it's also flexibility.
-
std.mem.Allocator is an interface which provides the alloc, free, create and destroy functions, along with a few others.
-
-
If you're building a library, then it's best to accept an std.mem.Allocator and let users of your library decide which allocator implementation to use. Otherwise, you'll need to choose the right allocator yourself.
-
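As a minimal sketch of this injection pattern (the repeatByte function below is made up for illustration), a library function takes the allocator as a parameter, and the caller both chooses the implementation and owns the result:

const std = @import("std");

// Hypothetical library function: it never picks an allocator itself.
// The caller injects one and owns the returned memory.
fn repeatByte(allocator: std.mem.Allocator, byte: u8, times: usize) ![]u8 {
    const buf = try allocator.alloc(u8, times);
    @memset(buf, byte);
    return buf;
}

test "caller chooses the allocator" {
    const allocator = std.testing.allocator;
    const result = try repeatByte(allocator, 'a', 5);
    defer allocator.free(result); // freed by the caller, with the same allocator
    try std.testing.expectEqualStrings("aaaaa", result);
}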
Related notes when using std.json :
-
Caio: "is it a good idea to only use 1 allocator across the whole game? I don't know if this is even possible, but purely talking in terms of a centralized way of allocating and deallocating memory"
-
yes. you might end up wanting two allocators (one for general memory that you manage the lifetime of, and a separate arena allocator that frees all its memory every frame)
-
-
Caio: "seems like the function is returning a huge bag of unwanted data. I mean, all I actually want is the obj, as it is inside of it that the json data is stored. Is there a way to only return the obj and not have a leak?
-
"if memory is allocated within the function, then you need a way for the caller to free it. so with std.json.parseFromSlice(), that's by calling .deinit() on the returned value. that's a pretty common pattern."
-
Caio: "if memory is allocated inside a function, return the object allocated".
-
-
"For a json parsing, the Parsed(T) includes an ArenaAllocator and the value - that's it. The ArenaAllocator holds all the memory for everything inside value".
-
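A small sketch of that pattern with std.json.parseFromSlice; the Config struct and JSON string below are made up for illustration:

const std = @import("std");

// Hypothetical target type for the parse.
const Config = struct {
    port: u16,
    host: []const u8,
};

test "Parsed(T) owns all the parsed memory" {
    const allocator = std.testing.allocator;
    const json = "{\"port\": 5882, \"host\": \"localhost\"}";

    // parseFromSlice returns a Parsed(Config): an ArenaAllocator plus the value.
    const parsed = try std.json.parseFromSlice(Config, allocator, json, .{});
    // One call frees everything inside value, including the bytes behind host.
    defer parsed.deinit();

    try std.testing.expectEqual(@as(u16, 5882), parsed.value.port);
    try std.testing.expectEqualStrings("localhost", parsed.value.host);
}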
Areas of memory
-
These areas are conceptual; the OS and the executable are what enforce them.
-
Global space.
-
Stack.
-
Heap.
Global Space
-
The first is global space, which is where program constants, including string literals, are stored.
-
All global data is baked into the binary, fully known at compile time (and thus runtime) and immutable.
-
This data exists throughout the lifetime of the program, never needing more or less memory.
-
Aside from the impact it has on the size of our binary, this isn't something we need to worry about at all.
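For instance (a tiny made-up snippet), a string literal lives in this global read-only space; the constant below is just a slice pointing into the binary:

const std = @import("std");

// "hello" is baked into the binary's read-only data. `greeting` is a
// compile-time known, immutable []const u8 slice pointing at it.
const greeting: []const u8 = "hello";

pub fn main() void {
    std.debug.print("{s} ({d} bytes)\n", .{ greeting, greeting.len });
}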
Stack Allocator
-
Explanation of Allocators, focusing on Linear Allocators (Stack).
-
Does not talk about the different types of allocators, only the Linear (Stack).
-
Provides a good visualization of how the Stack is used in functions.
-
The video is good, but not very detailed.
-
-
Advantages :
-
The call stack is amazing because of the simple and predictable way it manages data (by pushing and popping stack frames).
-
Automatically handled by the compiler.
-
Very fast allocation and cleanup.
-
-
Constraints :
-
Fixed total memory.
-
"You are not allowed to store GBs of memory on the stack, for example".
-
-
Fixed size.
-
Fixed lifetimes.
-
Data has a lifetime tied to its place on the call stack (see the sketch after this list).
-
-
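The Point struct and makePoint function below are made up for illustration; the sketch shows why stack data has to be returned by value rather than by pointer: the frame that holds it is popped on return.

const std = @import("std");

// Stack data is returned BY VALUE: `result` lives in makePoint's frame and is
// copied into the caller's frame when the function returns. Returning &result
// instead would hand back a pointer into a popped frame (a dangling pointer).
const Point = struct { x: i32, y: i32 };

fn makePoint(x: i32, y: i32) Point {
    const result = Point{ .x = x, .y = y };
    return result; // copied out of the frame, not a pointer into it
}

pub fn main() void {
    const p = makePoint(3, 4);
    std.debug.print("({d}, {d})\n", .{ p.x, p.y });
}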
Heap Allocator
-
Useful for data that has to live beyond the rigid boundaries of function scopes.
-
We can create memory at runtime with a runtime-known size and have complete control over its lifetime.
-
It has no built-in life cycle, so our data can live for as long or as short as necessary. And that benefit is its drawback: it has no built-in life cycle, so if we don't free data, no one will.
-
You can allocate memory in an HTTP handler and free it in a background thread, two completely separate parts of the code.
-
-
Everything we've seen so far has been constrained by requiring an upfront size. Arrays always have a compile-time known length (in fact, the length is part of the type). All of our strings have been string literals, which have a compile-time known length.
-
Furthermore, the two types of memory management strategies we've seen, global data and the call stack, while simple and efficient, are limiting. Neither can deal with dynamically sized data and both are rigid with respect to data lifetimes.
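A minimal sketch of that limitation being lifted (the length here comes from the executable's path, purely for illustration): the size is only known at runtime, so the buffer has to come from an allocator.

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // The length is only known at runtime (here: the executable's path length),
    // so a fixed-size array won't do - the buffer has to come from the heap.
    var args = try std.process.argsWithAllocator(allocator);
    defer args.deinit();
    const exe_name = args.next().?;
    const n = exe_name.len;

    const buf = try allocator.alloc(u8, n);
    defer allocator.free(buf);
    @memset(buf, '*');
    std.debug.print("{s}\n", .{buf});
}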
Strategies: Heap Allocation
-
Different Allocator Strategies in Zig .
-
At the time, GPA (DebugAllocator) did not exist.
-
The conclusion of the video was that "This is a developing area, but Zig is doing very well here, because it has no default allocator and forces you to think about allocator choice."
-
It wasn’t discussed which allocator to use in each case; it was only about strategies.
-
It’s strange how this area still feels so "new".
-
Page Allocator ("using syscalls")
-
std.heap.page_allocator
-
Allocates a whole page of memory each time we ask for some memory.
-
Whenever this allocator makes an allocation, it will ask your OS for entire pages of memory; an allocation of a single byte will likely reserve multiple kibibytes.
-
As asking the OS for memory requires a system call, this is also extremely inefficient for speed.
-
Very simple, very dumb, very wasteful.
-
Disadvantages :
-
"This is the base of most allocators, but it's not what people use directly".
-
Very slow, since it uses syscalls; "massive slow in your program".
-
Wasteful.
-
It doesn’t think in terms of bytes, but in pages (typically 4 KiB).
-
-
-
Examples :
const std = @import("std"); fn main() !void { const allocator = std.heap.page_allocator; const memory = try allocator.alloc(u8, 100); // we allocate 100 bytes as a `[]u8`. defer allocator.free(memory); // defer is used in conjunction with a free - this is a common pattern for memory management in Zig. } -
Construction? :
// Illustrative sketch (from the talk), not the real std implementation:
// the mmap/munmap arguments are elided.
const PageAllocator = struct {
    pub fn alloc(self: *@This(), size: u32) ![]u8 {
        const mem = std.os.mmap( // slow (syscall)
            alignForward(size, page_size),
        ) catch {
            return error.OutOfMemory;
        };
        return mem[0..size];
    }

    pub fn free(self: *@This(), mem: []u8) void {
        return std.os.munmap(mem);
    }
};
FixedBufferAllocator ("Bump Allocator")
-
std.heap.FixedBufferAllocator.init(...)
-
Is an allocator that allocates memory into a fixed buffer and does not make any heap allocations.
-
Uses a fixed buffer to get its memory, doesn’t ask memory from the kernel.
-
It will give you the error OutOfMemory if it has run out of bytes.
-
Advantages :
-
Very fast allocation.
-
Control lifetime via buffer.
-
-
Disadvantages :
-
Fixed total memory.
-
Cannot free individual memory.
-
"There is no data structure, it only stores the last memory index. Therefore, you can’t deallocate memory in the middle of this region."
-
"Maybe it’s possible to deallocate the last allocation."
-
"It’s possible to clear the whole buffer."
-
free and destroy will only work on the last allocated/created item (think of a stack).
-
Freeing the non-last allocation is safe to call, but won’t do anything.
-
-
-
-
When to use :
-
This is useful when heap usage is not wanted, for example, when writing a kernel.
-
It may also be considered for performance reasons.
-
"If you don’t care about expandable memory, you should use FixedBufferAllocator, as it’s simply faster."
-
"Probably the fastest you’ll ever get."
-
-
Examples :
const std = @import("std"); fn main() !void { var buffer: [1000]u8 = undefined; var fba = std.heap.FixedBufferAllocator.init(&buffer); const allocator = fba.allocator(); const memory = try allocator.alloc(u8, 100); defer allocator.free(memory); }
ArenaAllocator ("Bump Allocator with expandable memory")
-
std.heap.ArenaAllocator.init(...)
-
It’s the place where you store all data that share the same lifetime.
-
Takes in a child allocator and allows you to allocate many times and only free once. Use in combination with another allocator.
-
Here, .deinit() is called on the arena, which frees all memory.
-
Using allocator.free in this example would be a no-op (i.e., it does nothing).
-
-
Advantages :
-
Very fast allocation.
-
Expandable total memory.
-
Manual lifetime.
-
"Arena = One Lifetime".
-
-
Very simple way of avoiding leaks.
-
-
Disadvantages :
-
Cannot free individual memory.
-
"This ends up being useful for cases like Linked Lists, for example, since it allows freeing the entire list’s memory at once without traversing it."
-
Disclaimer: "Don’t use Linked Lists, use arrays. Arrays are much faster nowadays."
-
-
-
When to use :
-
Commonly used in some places, but the problem of not being able to "free individual memory" can be annoying in some cases.
-
-
Examples :
const std = @import("std"); fn main() !void { var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator); defer arena.deinit(); const allocator = arena.allocator(); _ = try allocator.alloc(u8, 1); _ = try allocator.alloc(u8, 10); _ = try allocator.alloc(u8, 100); }
DebugAllocator (formerly GeneralPurposeAllocator (GPA))
-
std.heap.DebugAllocator(.{}){}
-
"In debug builds, use DebugAllocator, formerly known as GPA. In release builds, use std.heap.smp_allocator."
-
"DebugAllocator and smp_allocator are both backed by page_allocator, which requests more memory from the operating system when it runs out."
-
Advantages :
-
Designed for safety over performance, but may still be many times faster than page_allocator.
-
This is a safe allocator that can prevent double-free, use-after-free, and detect leaks.
-
Safety checks and thread safety can be turned off via its configuration struct.
-
-
Thread-safe allocator.
-
Gets some memory first and manages buckets of memory to reduce the number of allocations.
-
-
Uses :
-
Can serve as your application's main allocator. For many programs, this will be the only allocator needed.
-
-
Example :
const std = @import("std"); const httpz = @import("httpz"); pub fn main() !void { // create our general purpose allocator var gpa = std.heap.GeneralPurposeAllocator(.{}){}; // get an std.mem.Allocator from it const allocator = gpa.allocator(); // pass our allocator to functions and libraries that require it var server = try httpz.Server().init(allocator, .{.port = 5882}); var router = server.router(); router.get("/api/user/:id", getUser); // blocks the current thread try server.listen(); }-
(2025-03-27)
var debugAllocator = std.heap.DebugAllocator(.{}){};
// std.heap.DebugAllocator(.{}) creates the DebugAllocator TYPE with configuration (.{});
// the trailing {} instantiates it, so debugAllocator is an instance of DebugAllocator(...).
// It's important that debugAllocator is a VAR: with CONST, the .allocator() call below
// fails to compile, because it needs a mutable pointer to the allocator's state.

const allocator = debugAllocator.allocator();
// The allocator() method is used to obtain the allocator.
// allocator is of type std.mem.Allocator.

defer {
    _ = debugAllocator.deinit();
    // Calling .deinit() on the debugAllocator matters: it runs the leak check and
    // releases the allocator's internal bookkeeping.
    // Interestingly, the page_allocator does not require this.
}
-
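On why .deinit() matters here: for the debug allocator it runs the leak check and returns a status (std.heap.Check, .ok or .leak, in recent Zig versions; treat the exact names as an assumption) that you can act on. A small sketch:

const std = @import("std");

pub fn main() !void {
    var debug_allocator = std.heap.DebugAllocator(.{}){};
    defer {
        // deinit() reports whether anything allocated through this
        // allocator was never freed.
        const status = debug_allocator.deinit();
        if (status == .leak) @panic("memory leak detected");
    }
    const allocator = debug_allocator.allocator();

    const data = try allocator.alloc(u8, 16);
    defer allocator.free(data); // remove this line and the panic above fires
}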
What is this: GeneralPurposeAllocator(.{}){}?
-
std.heap.GeneralPurposeAllocator is a function, and since it uses PascalCase, we know it returns a type.
-
.{} is a struct initializer with an implicit type. What’s the type and where are the fields? The type is std.heap.general_purpose_allocator.Config, though it isn’t directly exposed like this, which is one reason we aren’t explicit. No fields are set because the Config struct defines defaults, which we’ll be using.
-
This is a common pattern with configuration / options.
-
-
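A sketch of being explicit instead of relying on the defaults in .{} (the .safety and .thread_safe field names are assumed from the allocator's Config struct; check your Zig version):

const std = @import("std");

pub fn main() !void {
    // Passing explicit options instead of relying on the defaults in .{}.
    // Field names are assumed here (.safety, .thread_safe); the defaults
    // depend on the build mode.
    var gpa = std.heap.GeneralPurposeAllocator(.{
        .safety = true,
        .thread_safe = false,
    }){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const buf = try allocator.alloc(u8, 32);
    defer allocator.free(buf);
}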
SMP Allocator
-
std.heap.smp_allocator (mentioned above as being backed by page_allocator).
-
Suggested for Release builds.
Testing Allocator
-
std.testing.allocator
-
About :
-
This is a special allocator that only works in tests and can detect memory leaks.
-
Currently, it’s implemented using the GeneralPurposeAllocator with added integration in Zig’s test runner, but that’s an implementation detail.
-
The important thing is that if we use std.testing.allocator in our tests, we can catch most memory leaks.
-
-
In your code, use whatever allocator is appropriate.
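A minimal sketch of the testing allocator in action; deleting the defer line below makes zig test report the leak:

const std = @import("std");

test "testing allocator catches leaks" {
    const allocator = std.testing.allocator;

    const buf = try allocator.alloc(u8, 10);
    // Comment this defer out and the test runner prints a
    // "memory leaked" error with a stack trace.
    defer allocator.free(buf);

    @memset(buf, 0);
    try std.testing.expectEqual(@as(usize, 10), buf.len);
}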
Discussion: Slab Allocator
-
Similar to the Arena Allocator.
-
Advantages :
-
You can manually free memory.
-
-
Disadvantages :
-
Allocations have fixed sizes.
-
Metadata storage is wasteful.
-
Discussion: General Purpose Allocator
-
About :
-
This type of allocator was discussed in this Zig talk from June 2020. There was no GPA yet, so everything discussed in the video and the section below is speculative.
-
Not sure if DebugAllocator / GPA is related to this concept.
-
Still, the strategic discussion is interesting.
-
-
Free lists :
-
Advantages :
-
You can manually free memory.
-
-
Disadvantages :
-
Allocations have a minimum size.
-
Very slow.
-
Memory Fragmentation.
-
"Worse performance the longer your program is running".
-
"There’s no way to defragment your memory, as there are pointers going everywhere and you can’t really track them down".
-
-
// Illustrative sketch (from the talk): Node, remove() and prepend() are not shown.
const FreeListAllocator = struct {
    root: ?*Node,

    fn find(self: *@This(), size: u32) ?[]u8 {
        var iter = self.root;
        while (iter) |node| : (iter = node.next) {
            if (node.size == size) {
                self.remove(node);
                return node.buffer();
            }
        }
        return null;
    }

    pub fn free(self: *@This(), mem: []u8) void {
        const node = Node.init(mem);
        self.prepend(node);
    }
};
-
-
Free lists with size buckets :
-
This mostly solves the fragmentation problem, since allocations have fixed sizes ("mitigated, not all gone").
-
Advantages :
-
You can manually free memory.
-
-
Disadvantages :
-
Allocations have a fixed size.
-
Cache pressure.
-
"You’ll probably have cache misses if you’re allocating sporadically".
-
If everything is allocated at once, there might not be cache misses, but if allocations happen occasionally, cache misses will likely occur.
-
This makes sense when you consider that although fragmentation is avoided, this solution ends up spreading allocations that happen after deallocations.
-
"This is really bad".
-
-
-
init(), deinit(), create(), destroy()
-
For slices: use alloc and free.
-
For single items: use create and destroy.

const std = @import("std");
const expect = std.testing.expect;

test "allocator create/destroy" {
    const byte = try std.heap.page_allocator.create(u8);
    defer std.heap.page_allocator.destroy(byte);

    byte.* = 128;
    try expect(byte.* == 128);
}
Warnings
Double Free
const std = @import("std");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
const allocator = gpa.allocator();
var arr = try allocator.alloc(usize, 4);
allocator.free(arr);
allocator.free(arr);
std.debug.print("This won't get printed\n", .{});
}
-
In the case of a double free, we’ll get a hard crash.
Memory Leak
const std = @import("std");
const Allocator = std.mem.Allocator;

// allocLower (defined elsewhere) returns a newly allocated, lower-cased copy of name.
fn isSpecial(allocator: Allocator, name: []const u8) !bool {
    const lower = try allocLower(allocator, name);
    return std.mem.eql(u8, lower, "admin");
}
-
The memory created in allocLower is never freed.
-
Not only that, but once isSpecial returns, it can never be freed. Once isSpecial returns, we lose our only reference to the allocated memory, the lower variable. The memory is gone until our process exits.
-
Damn.
-
-
Our function might only leak a few bytes, but if it's a long-running process and this function is called repeatedly, it will add up and we'll eventually run out of memory.
-
Memory leaks can be insidious. It isn’t just that the root cause can be difficult to identify. Really small leaks or leaks in infrequently executed code can be even harder to detect.
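The fix follows the ownership rule from earlier in these notes: free the memory before losing the last reference to it. A sketch of the corrected function (allocLower still assumed to be defined elsewhere):

fn isSpecial(allocator: Allocator, name: []const u8) !bool {
    const lower = try allocLower(allocator, name);
    // Free before returning: after the comparison we no longer need lower,
    // and this is our last chance to release it.
    defer allocator.free(lower);
    return std.mem.eql(u8, lower, "admin");
}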