054 - Light at the End of the Tunnel

February 16, 2020

In the previous journal entry I wrote about how I was just getting back into the swing of things after a two-week vacation in Nigeria.

I'm back in stride. This week was full of progress on the refactoring of the game client, a journey that I started on in 052.

This time a week ago I felt a bit overwhelmed by all of the details of improving the client's code architecture, but since then I've laid out data structures and tests for all of the main pieces and gotten a good sense of how everything should fit together.

So we're coasting now - just writing more tests and filling in the implementations until all of the pieces are fully in place.

It's looking like about 1-2 more weeks of work on this refactoring and then we can dive back into working on actual gameplay.

So for now there aren't many player-facing things to talk about. Instead I'll write about some of the code changes that I worked on over the last week.

Terrain Loading and Rendering

I spent most of Monday researching and planning out how we'd handle terrain.

The world is too large to be able to load up all of the terrain data at once so I needed to think through both the rendering of the terrain and how we'd dynamically load terrain data as needed.

I landed on a path forwards where every chunk of terrain data has an ID, and when we need a new chunk of terrain we insert its ID into a HashSet<TerrainChunkIdentifier>.

Our AssetLoaderSystem will see these and then call the ClientSpecificAssetLoaderFn which will load the terrain chunk and eventually insert it into the LoadedAssets struct.

All of our asset loading / unloading will be powered by a handful of data structures and coordinated by the AssetLoaderSystem - I'm already seeing the benefits of the decoupling compared to how things looked before this refactoring.
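
Roughly, the moving pieces look something like this. This is just a sketch - the coordinate fields, the NeededTerrainChunks wrapper and the placeholder TerrainChunk are illustrative, not the real definitions:

use std::collections::{HashMap, HashSet};

/// Identifies a chunk of terrain in the world (the coordinate fields here are illustrative).
#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]
pub struct TerrainChunkIdentifier {
    chunk_x: u32,
    chunk_z: u32,
}

/// Placeholder for the loaded terrain data (heightmap, textures, etc.).
#[derive(Debug)]
pub struct TerrainChunk;

/// Chunks that some system has requested but that haven't been loaded yet.
/// The AssetLoaderSystem drains this set, calls the ClientSpecificAssetLoaderFn for each
/// identifier and eventually inserts the loaded chunk into LoadedAssets.
#[derive(Debug, Default)]
pub struct NeededTerrainChunks(pub HashSet<TerrainChunkIdentifier>);

/// Assets that have finished loading, keyed by their identifier.
#[derive(Debug, Default)]
pub struct LoadedAssets {
    terrain_chunks: HashMap<TerrainChunkIdentifier, TerrainChunk>,
}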

There's still work left to do to get all of the new tests passing - but the system looks solid so there isn't much more creative thought left for this bit of the workstream.

Input Event Processor System

I took several passes at figuring out the data structures and responsibilities for the InputEventProcessorSystem and eventually landed on something that felt smooth.

Raw input comes in from the client, then we look at the current World state and convert the input into a NormalizedEvent, then apply that NormalizedEvent in order to change the World state.

/// When the user types or clicks or moves their mouse or enters any other sort of input we create
/// an `InputEvent`.
///
/// Every frame the `InputEventProcessorSystem` iterates through the input events and translates
/// them into `NormalizedEvent`s.
///
/// TODO: NormalizedEvent does not feel like the right name
#[derive(Debug, PartialEq, Clone)]
pub enum NormalizedEvent {
    // ... snippet ...
    /// Push a character to the pending chat message
    PushChatChar(char),
    /// Remove one character from the pending chat message
    RemoveChatChar,
    // ... snippet ...
}
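
To make the pipeline concrete, the translation step ends up looking roughly like this. The InputEvent variants and the chat_box_open check below are stand-ins for illustration, not the real types:

// A sketch of the raw-input-to-NormalizedEvent translation. `InputEvent`'s variants and
// `World::chat_box_open` are illustrative.
fn normalize(input_event: &InputEvent, world: &World) -> Option<NormalizedEvent> {
    match input_event {
        // While the chat box is open, typed characters get pushed onto the pending message.
        InputEvent::CharacterTyped(character) if world.chat_box_open() => {
            Some(NormalizedEvent::PushChatChar(*character))
        }
        // Backspace removes the most recently typed character.
        InputEvent::BackspacePressed if world.chat_box_open() => {
            Some(NormalizedEvent::RemoveChatChar)
        }
        // ... other raw inputs map to other `NormalizedEvent`s ...
        _ => None,
    }
}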

I'm happy with how this piece shaped up after a lot of early struggles.

User Interface Elements

I simplified the implementation for user interface event handling - making interactive UI elements will be a breeze going forwards.

There isn't too much of a UI in the game yet so I'm sure what I landed on will change - but things are now isolated and well tested enough that I anticipate future changes being much, much easier to accomplish.

The UiElement enum currently looks like this.

// Comments omitted for brevity

#[derive(Debug)]
pub enum UiElement {
    TexturedQuad {
        ui_quad: UiQuad,
        event_handlers: EventHandlers,
    },
    Text {
        section: OwnedVariedSection,
        event_handlers: EventHandlers,
    },
}

#[derive(Debug, Default)]
pub struct EventHandlers {
    onclick_inside: Option<EventHandler>,
    onclick_outside: Option<EventHandler>,
    onmouseover: Option<EventHandler>,
    onmouseout: Option<EventHandler>,
}

#[derive(Debug)]
pub struct EventHandler {
    event: NormalizedEvent,
    captures: bool,
}
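
For example, wiring a clickable piece of text up to an event might look something like the snippet below. The SendChatMessage variant and the send_button_text_section helper are hypothetical - they're only here to show the shape of the API:

// Hypothetical example - `SendChatMessage` and `send_button_text_section()` don't exist
// yet; they just illustrate how an interactive element gets assembled.
let send_button = UiElement::Text {
    section: send_button_text_section(),
    event_handlers: EventHandlers {
        onclick_inside: Some(EventHandler {
            event: NormalizedEvent::SendChatMessage,
            captures: true,
        }),
        ..Default::default()
    },
};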

Whenever we render the game we update a cache of the most recent Vec<UiElement>s that were rendered.

/// A resource used for handling the user interface
#[derive(Debug, Default)]
pub struct UserInterface {
    /// Once per tick the RenderSystem determines all of the user interface elements that need to be
    /// rendered.
    ///
    /// We store the most recent batch of ui element data here.
    ///
    /// By knowing what is currently being displayed we can interpret mouse clicks and other
    /// user input events accordingly.
    latest_ui_batch: Vec<UiElement>,
}

Then in the InputEventProcessorSystem we can iterate through the Vec<UiElement> to see if we need to trigger any of their event handlers.

for ui_element in sys_data.user_interface.latest_ui_batch() {
    if let Some(ui_elem_bounds) = ui_element.bounds(&sys_data.text_renderer) {
        // ... snippet ...

        let pointer_inside_elem = ui_elem_bounds.contains(ui_coord);

        if !pointer_inside_elem {
            if let Some(onclick) = ui_element.event_handlers().onclick_outside() {
                let event = onclick.event().clone();
                normalized_events.push(event);

                click_captured = click_captured || onclick.captures();
            }
        }

        // ... snippet ...
    }
}

The WebGL Renderer

Before we started the refactor our WebGL renderer was a module inside of the web-client crate (every client has its own crate that calls the game-app crate with some ClientSpecificResources).

I created a new crate renderer-webgl dedicated to the WebGlRenderer. It has only a few dependencies and doesn't know anything about the game.

There's now a renderer-core crate that provides a Renderer trait and RenderJob<'a> struct.

/// Provides functionality useful for rendering the game
pub trait Renderer {
    /// Render to the render target given the provided `RenderJob`
    fn render(&mut self, render_job: RenderJob);

    /// Get the most recent frame's pixels.
    ///
    /// Currently used in our test suite - but could be used in the future to help players take
    /// screenshots.
    fn read_pixels_rgba(&self) -> Vec<u8>;
}

The RenderJob<'a> holds just enough data for a Renderer to know what to render, and nothing more.

Every frame in the RenderSystem the game-app creates a RenderJob<'a> then uses it to call the ClientSpecificRenderFn.
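
I won't paste the whole struct here, but its general shape is something along these lines - the field names below are a guess based on how the struct gets used, not the exact definition:

/// A rough sketch of the data a `RenderJob` carries. The field names are illustrative.
pub struct RenderJob<'a> {
    /// The clear color for this frame.
    clear_color: [f32; 3],
    /// The portion of the render target to draw into.
    viewport: Viewport,
    /// Textures that need to be buffered onto the GPU before drawing.
    textures_to_create: HashMap<TextureAtlasId, TextureToBufferOntoGpu<'a>>,
    /// The user interface quads to draw this frame.
    textured_quad_jobs: Vec<UiQuadRenderJob>,
}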

The piece that I'm most excited about is the new renderer-test crate.

The renderer-test crate allows us to test our renderers in a graphics API agnostic way.
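
At its core it's just a loop over a list of RenderTests: create a renderer, render the test's RenderJob, read the pixels back and compare them against the expected pixels. A sketch of that loop, with signatures that may differ slightly from the real crate:

/// A sketch of renderer-test's core loop - `all_render_tests`, the `viewport_dimensions`
/// accessor and the `RenderTestResult::new` constructor are illustrative.
pub fn test_renderer<CreateRenderer, ReportResults>(
    create_renderer: &CreateRenderer,
    report_results: &ReportResults,
) where
    CreateRenderer: Fn(u32, u32) -> Box<dyn Renderer>,
    ReportResults: Fn(Vec<RenderTestResult>),
{
    let mut results = vec![];

    for test in all_render_tests() {
        // Create a renderer sized to the test's viewport and render the test's job.
        let (width, height) = test.render_job.viewport_dimensions();
        let mut renderer = create_renderer(width, height);
        renderer.render(test.render_job);

        // The test passes if the rendered pixels exactly match the expected pixels.
        let was_successful = renderer.read_pixels_rgba() == test.expected_rgba_pixels;
        results.push(RenderTestResult::new(test.description, was_successful));
    }

    report_results(results);
}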

So the renderer-webgl crate has renderer-test as a dev dependency and a single test that looks like this.

/// Run all of the render tests on the WebGlRenderer
#[wasm_bindgen_test]
fn render_tests() {
    init_console();

    let renderer_creator = |viewport_width, viewport_height| {
        let canvas = create_canvas(viewport_width, viewport_height).unwrap();
        let gl = get_webgl(&canvas).unwrap();

        Box::new(WebGlRenderer::new(gl).unwrap()) as Box<dyn Renderer>
    };

    let test_results_fn = |render_test_results| {
        for render_test_result in render_test_results {
            append_test_result_to_dom(render_test_result).unwrap();
        }
    };

    renderer_test::test_renderer(&renderer_creator, &test_results_fn);
}

fn append_test_result_to_dom(render_test_result: RenderTestResult) -> Result<(), JsValue> {
    // ... snippet ...

    let test_case_description_div = document().create_element("div")?;
    test_case_description_div.set_attribute("style", "font-size: 24px; font-weight: bold;")?;
    test_case_description_div.set_inner_html(render_test_result.description());

    let passed = document().create_element("label")?;
    passed.set_inner_html(" (passed)");
    passed.set_attribute("style", "color: rgb(50, 205, 50)")?;

    let failed = document().create_element("label")?;
    failed.set_inner_html(" (FAILED)");
    failed.set_attribute("style", "color: rgb(255, 0, 0)")?;

    match render_test_result.was_successful() {
        true => test_case_description_div.append_child(&passed)?,
        false => test_case_description_div.append_child(&failed)?,
    };

    // ... snippet ...
}

Future renderers would have their own similar version of the above test - but instead of creating a WebGlRenderer they'd create their own renderer.

Because of the beautiful libraries that are wasm-bindgen, wasm-bindgen-test, web-sys and wasm-pack, I'm able to test the WebGlRenderer in Chrome, Firefox and Safari and optionally view the tests headfully in a browser if I need to visualize what happened.

Right now there are only two test cases - but I'm super excited to see this evolve as I continue to add to the new renderer.

Here's how it looks when tests are failing:

wasm-pack test --chrome -- -p renderer-webgl

[Screenshot: WebGL Renderer test visualizer] Created a way to visualize the test suite without having to write any JavaScript or HTML. All thanks to wasm-bindgen, wasm-bindgen-test, web-sys and wasm-pack.


And here's what it looks like when tests are passing.

[Screenshot: WebGL Render tests passing] Took all of Sunday but finally got the instanced rendering of quads working in WebGL. That green feels so satisfying.


Adding a new test is as simple as creating a new RenderTest and pushing it to a Vec<RenderTest> - so the test suite should evolve quite nicely.

/// Verify that we properly render a ui quad from an RGB texture
pub fn rgb8_ui_quad_test() -> RenderTest {
    let viewport_width = 25;
    let viewport_height = 40;

    let mut render_job = RenderJob::new([0., 0., 0.], viewport(viewport_width, viewport_height));

    let quad_width = 15;
    let quad_height = 20;

    let textured_quad_render_job = UiQuadRenderJob::new(
        TextureAtlasId::Zero,
        ViewportSubsection::new(5, 10, quad_width, quad_height),
        TextureAtlasSubBounds::new([0.1875, 0.875], [0.8125, 0.125]),
        0,
    );

    let texture_to_create = TextureToBufferOntoGpu::new(
        16,
        TextureFormat::RGB_8,
        textured_quad_subtexture().to_rgb().to_vec(),
    );

    render_job.insert_texture_to_create(TextureAtlasId::Zero, texture_to_create);
    render_job.push_textured_quad_job(textured_quad_render_job);

    let expected_rgba_pixels = rgba_ui_quad_test_expected().to_rgba().to_vec();

    RenderTest {
        description:
            "Renderer should render a textured quad using a sub-section of a specified texture",
        render_job,
        expected_rgba_pixels,
    }
}

One low point while working on the WebGlRenderer was spending all of Sunday afternoon and early evening trying to render a single textured quad.

I was using instanced rendering for the first time and felt motivated to figure out why it wasn't working.

I eventually figured it out - and got a reminder of how much time can be wasted when you're dealing with browser APIs and don't have the type safety that Rust usually spoils you with.

The issue was that I was constructing a js_sys::Float32Array with a pointer to a Vec<f64>.

I converted it into a Vec<f32> and things immediately worked.

Due to my inexperience with instanced rendering I was looking all over the place to figure out what could've been going wrong, so on the bright side I learned a lot about instancing after combing through code for around six hours straight.

// ... snippet ...

// FIXME: Remove allocations
let mut vertices: Vec<f32> = vec![]; // This was previously a `Vec<f64>` since I hadn't specified the type :(

let memory_buffer = wasm_bindgen::memory()
    .dyn_into::<WebAssembly::Memory>()
    .unwrap()
    .buffer();

// Divide the byte offset by 4 since `Float32Array` indices are in units of `f32`, not bytes.
let data_location = vertices.as_ptr() as u32 / 4;

// View the wasm linear memory as f32s and slice out just our vertex data.
let data_array = js_sys::Float32Array::new(&memory_buffer)
    .subarray(data_location, data_location + vertices.len() as u32);

// FIXME: Only buffer if the vertices > than the most we've ever buffered.
// Otherwise we should update the subbuffer
gl.buffer_data_with_array_buffer_view(GL::ARRAY_BUFFER, &data_array, GL::DYNAMIC_DRAW);

// ... snippet ...

// One quad is a 4 vertex triangle strip - draw it `quad_count` times using instancing.
self.angle_instanced_arrays.draw_arrays_instanced_angle(
    GL::TRIANGLE_STRIP,
    0,
    4,
    quad_count,
);

Overall Thoughts on the Week

I'm seeing the light at the end of the tunnel. Things are starting to fall right into place and pieces are beginning to get re-used across systems. Feeling proud of the direction.

When this refactor is finished I'm hoping that I'll be able to add functionality to the client at a truly special pace.

Other progress

Pushing for an alpha release

Akigi's first commit was on March 8, 2016.

I'm setting the official alpha release date as Thursday, April 9, 2020 - 4 years, 4 weeks and 4 days after the initial commit.

This should be a nice forcing function to finish up all of this behind the scenes work and start making progress on real gameplay.

Akigi isn't really a game yet - it's effectively just a lot of technology that has the potential to make it easy to build and maintain a game.

Let's change that!

Next Week

More work to do on the client-side refactoring.

There's stuff left on everything that I wrote about in this journal entry, as well as some work to do on our asset build step.

I can see the end though. A week or two until we're off this train.


Cya next time!

- CFN