116 - Texture Atlas Overhaul

April 25, 2021

After a few weeks in a row of barely getting any Akigi work done while trying to tie off other unrelated loose ends, I got back on task and started to get warmed up this week.

What started off in 112 as a work stream to gradually introduce a runtime texture atlas allocation strategy, replacing the current process of creating atlases in an offline asset compilation step, has now evolved into ripping off the bandaid all at once and moving all of my texture atlas usage over to runtime allocated textures.

This will involve refactoring the texture related parts of my asset compilation process now that I no longer need to generate texture atlases at asset compile time.

One of the main reasons for moving to runtime allocation is that it is much more space efficient. It isn't possible to know at asset compile time which textures will be needed at any given moment, since that depends on where a player is in the world and which other players are around.

So, by dynamically downloading only the textures that we need and placing them into atlases at runtime, we ensure that we're only downloading and buffering textures that are actually being used.

In the future I'll implement a deallocation strategy to free up space for textures that have not been used in a while. I'll also at some point think through defragmentation strategies.

The reason that I'm migrating everything now instead of gradually over time, as I originally planned, is that doing it all at once means I won't have to maintain and deal with the old code in the meantime.


On the CPU side I'm using lightweight representations of texture atlases in order to keep track of which textures are placed in which atlases.

/// Keeps track of textures allocated on the GPU during runtime, along with the used and free space
/// within each texture atlas.
/// This allows us to use a few textures on the GPU to store many different textures.
/// The VirtualTextureAtlasesResource serves as a 2d focused allocator.
pub struct VirtualTextureAtlasesResource {
    next_atlas_id: VirtualTextureAtlasId,
    virtual_atlases: HashMap<VirtualTextureAtlasId, VirtualTextureAtlas>,
    // TODO: Perhaps HashMap<SubTextureId, (VirtualTextureAtlasId, SubtextureLocation)>
    //  to avoid the extra indirection when looking up subtextures.
    // TODO: Yeah, this will need other information such as the GroupId of the texture so that
    //  we can place it with other group mates in the future.
    subtexture_to_atlas: HashMap<SubTextureId, VirtualTextureAtlasId>,
}

No actual texture data is stored in the VirtualTextureAtlasesResource, just the sizes of the atlases and placed subtextures.
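To make that bookkeeping concrete, here is a minimal sketch of the lookup path from a subtexture to its atlas and placement. The plain `u32` ids, the `SubtextureLocation` fields, and the `AtlasIndex`/`locate` names are all my own simplifications for illustration, not the engine's real types:

```rust
use std::collections::HashMap;

// Hypothetical simplified ids for this sketch; the engine's real newtypes differ.
type SubTextureId = u32;
type VirtualTextureAtlasId = u32;

/// Pixel offset of a subtexture within its atlas.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct SubtextureLocation {
    pub x: u32,
    pub y: u32,
}

/// Minimal two-map sketch of the lookup path: subtexture -> atlas -> location.
pub struct AtlasIndex {
    pub subtexture_to_atlas: HashMap<SubTextureId, VirtualTextureAtlasId>,
    pub locations_by_atlas:
        HashMap<VirtualTextureAtlasId, HashMap<SubTextureId, SubtextureLocation>>,
}

impl AtlasIndex {
    /// Two lookups today; a flattened map (as in the TODO above) would make
    /// this a single lookup at the cost of duplicating the atlas id per entry.
    pub fn locate(&self, sub: SubTextureId) -> Option<(VirtualTextureAtlasId, SubtextureLocation)> {
        let atlas = *self.subtexture_to_atlas.get(&sub)?;
        let loc = *self.locations_by_atlas.get(&atlas)?.get(&sub)?;
        Some((atlas, loc))
    }
}
```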

A VirtualTextureAtlas looks like this:

pub type BoxedVirtualTextureAtlasAllocator = Box<dyn VirtualTextureAtlasAllocator + Send + Sync>;

/// Corresponds to a texture on the GPU.
pub struct VirtualTextureAtlas {
    size: u32,
    allocator: BoxedVirtualTextureAtlasAllocator,
    mipmapped: bool,
}
Different texture atlases are able to use different allocation strategies by using a different VirtualTextureAtlasAllocator under the hood.

For example, if an atlas is meant to hold textures that will always be the same size you might use a much simpler allocator than if you need to be able to allocate and deallocate textures of unpredictable sizes.
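As a sketch of what such a simpler allocator might look like: since every subtexture is the same size, free space can just be a grid of identical cells handed out in order. The trait method signature here is my own assumption, not the engine's real `VirtualTextureAtlasAllocator` trait:

```rust
// Assumed trait shape for this sketch, not the engine's real trait.
trait VirtualTextureAtlasAllocator {
    /// Try to reserve a width x height region, returning its (x, y) offset in the atlas.
    fn allocate(&mut self, width: u32, height: u32) -> Option<(u32, u32)>;
}

/// Trivial allocator for atlases whose subtextures are all one fixed size:
/// slots live on a grid and are handed out left-to-right, top-to-bottom.
struct FixedSizeGridAllocator {
    atlas_size: u32,
    cell_size: u32,
    next_cell: u32,
}

impl VirtualTextureAtlasAllocator for FixedSizeGridAllocator {
    fn allocate(&mut self, width: u32, height: u32) -> Option<(u32, u32)> {
        // Only requests matching the fixed cell size are supported by this strategy.
        if width != self.cell_size || height != self.cell_size {
            return None;
        }
        let cells_per_row = self.atlas_size / self.cell_size;
        if self.next_cell >= cells_per_row * cells_per_row {
            return None; // Atlas is full.
        }
        let x = (self.next_cell % cells_per_row) * self.cell_size;
        let y = (self.next_cell / cells_per_row) * self.cell_size;
        self.next_cell += 1;
        Some((x, y))
    }
}
```

A general-purpose allocator would instead need to track arbitrary free rectangles and handle fragmentation, which is exactly the complexity this strategy sidesteps.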

Let Go Engine will ship with a few commonly useful allocators, but anyone can implement the VirtualTextureAtlasAllocator trait themselves for more custom approaches.

There are a number of complex cases that need to be handled by the runtime texture allocation logic.

For example, some textures need to be in the same atlas as other textures. One example of this is with physically-based rendering textures, where you'll typically want your base color, roughness, metallic and normal textures all in the same atlas.
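One way to handle that grouping requirement is to key atlas selection off a group id: the first texture of a group picks (or creates) an atlas, and later group mates reuse it. This is a hedged sketch with made-up `u32` ids and a hypothetical `choose_atlas` helper, not the engine's actual placement logic:

```rust
use std::collections::HashMap;

// Hypothetical ids for this sketch; the engine's real types differ.
type GroupId = u32;
type VirtualTextureAtlasId = u32;

/// Choose an atlas for a new subtexture. If any group mate (e.g. the base color
/// map of a PBR material whose roughness map we are now placing) was already
/// placed, reuse its atlas so all of the group's maps can share UV coordinates.
fn choose_atlas(
    group: GroupId,
    group_to_atlas: &mut HashMap<GroupId, VirtualTextureAtlasId>,
    next_atlas_id: &mut VirtualTextureAtlasId,
) -> VirtualTextureAtlasId {
    *group_to_atlas.entry(group).or_insert_with(|| {
        // First member of this group: claim a fresh atlas id.
        let id = *next_atlas_id;
        *next_atlas_id += 1;
        id
    })
}
```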

Well, a more modern approach is to use texture arrays for PBR textures and skip the atlas entirely, but I am currently supporting WebGL until WebGPU is enabled in Chrome by default, so I need to deal with a bit more complexity until then.


As part of the work on how textures are handled, I'm also moving towards generating all mip levels myself at asset compile time and then downloading all of them at runtime.

WebGL allows you to automatically generate mipmaps, but modern graphics APIs do not. So while I'm in the mode of improving how textures are handled, I decided to take care of generating my own mipmaps. This is actually fairly simple: I just need to repeatedly resize the texture, halving its dimensions each time.
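The halving loop determines the dimensions of each mip level. A small sketch of that chain (the `mip_chain` helper is mine, for illustration), clamping each dimension at 1 so non-square textures bottom out at 1x1:

```rust
/// Compute the (width, height) of every mip level down to 1x1 by repeatedly
/// halving, mirroring the resize-in-a-loop approach described above.
fn mip_chain(mut width: u32, mut height: u32) -> Vec<(u32, u32)> {
    let mut levels = vec![(width, height)];
    while width > 1 || height > 1 {
        // Each level is half the previous, clamped so a dimension never hits zero.
        width = (width / 2).max(1);
        height = (height / 2).max(1);
        levels.push((width, height));
    }
    levels
}
```

Each of these levels would then be produced by resizing the source image down to the computed size.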

In this first implementation I will just serialize all of the PNGs for a texture's mip levels into one file that gets downloaded at runtime, but in the future I will need a smarter approach, since depending on a user's settings they might not need the most detailed mip levels.

I'll worry about that later though; handling it shouldn't have much of an impact on the overall design.

Other Notes / Progress

  • Learning some Swift in order to build the iOS portion of an application that I'm licensing to another company. I'm expecting that picking up a new modern language will end up making me a better Rust programmer since I will be exposed to even more programming ideas and approaches.

  • Getting started on adding support for coalescing freed bins to rectangle-pack. This will power runtime deallocation within Akigi.

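The core of that coalescing work is recognizing when two freed rectangles can be merged back into one larger free region. A hedged sketch of that check (my own `FreeRect`/`try_merge` names, not rectangle-pack's actual API), covering the two cases where rectangles share a full edge:

```rust
/// A free rectangle within an atlas.
#[derive(Clone, Copy, PartialEq, Debug)]
struct FreeRect {
    x: u32,
    y: u32,
    w: u32,
    h: u32,
}

/// Merge two free rectangles into one when they share a full edge. Repeatedly
/// applying this after deallocations lets large allocations fit again later.
fn try_merge(a: FreeRect, b: FreeRect) -> Option<FreeRect> {
    // Horizontally adjacent with identical vertical extent.
    if a.y == b.y && a.h == b.h && a.x + a.w == b.x {
        return Some(FreeRect { x: a.x, y: a.y, w: a.w + b.w, h: a.h });
    }
    // Vertically adjacent with identical horizontal extent.
    if a.x == b.x && a.w == b.w && a.y + a.h == b.y {
        return Some(FreeRect { x: a.x, y: a.y, w: a.w, h: a.h + b.h });
    }
    None
}
```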
Next Journal Entry

By the next journal entry I plan to finish implementing runtime texture allocation and deallocation, and be a good way through removing my existing compile time texture atlas code in favor of instead preparing the textures to be runtime allocated.

Well Wishes,