082 - Placing Scenery in Editor

August 30, 2020

In 079 I kicked off The Tooling Saga, a multi-week stretch where I will write several different editor tools that will make it easier to add gameplay to Akigi.

This journal entry marks three weeks into the saga and, after quite a bit of implementation work, the first major milestone of this tooling journey.

I've implemented the basics of being able to add scenery into the world, enabling me to point and click in order to add new non-interactive objects into the game.

Here's a short video demonstrating the new scenery placement tool:

Note that the game editor uses the Metal 3D graphics API when running on macOS. My crates/renderer-metal is not yet passing the crates/renderer-test test suite, so a few things such as mip-mapping and shadows don't currently work in the editor. This means that the game in the editor looks a little bit different from the game in the browser. I'll fix that in the future.

The user interface for placing scenery is still rough around the edges. I've made some strides towards making it easier to add new interfaces, but there is still work to be done. In the video I'm pressing Tab to switch between play and edit mode, then Shift + A to open the list of scenery to place.

Mouse Terrain Intersection

In order to place scenery in the world I need to know what part of the world is being clicked on.

This amounts to starting with the mouse's position in screen space, then converting from there to normalized device coordinates, then to eye space and finally to world space.
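
Here's a minimal sketch of that chain of conversions. This isn't the engine's actual code: it uses glam as a stand-in for the camera crate's math types and assumes an OpenGL-style -1..1 clip space (Metal and WebGPU use 0..1 for depth).

use glam::{Mat4, Vec3, Vec4};

/// Turn a mouse position in screen space into a world-space ray.
///
/// `screen` and `viewport` are in pixels and `view_proj` is the camera's combined
/// view-projection matrix.
fn mouse_ray(screen: (f32, f32), viewport: (f32, f32), view_proj: Mat4) -> (Vec3, Vec3) {
    // Screen space -> normalized device coordinates (-1..1 on both axes, y flipped).
    let ndc_x = 2.0 * screen.0 / viewport.0 - 1.0;
    let ndc_y = 1.0 - 2.0 * screen.1 / viewport.1;

    // NDC -> eye space -> world space, by running the inverse view-projection matrix
    // over a point on the near plane and a point on the far plane.
    let inverse = view_proj.inverse();
    let near = inverse * Vec4::new(ndc_x, ndc_y, -1.0, 1.0);
    let far = inverse * Vec4::new(ndc_x, ndc_y, 1.0, 1.0);
    let near = near.truncate() / near.w;
    let far = far.truncate() / far.w;

    // The ray starts on the near plane and points towards the far plane.
    (near, (far - near).normalize())
}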

I've had code that handles this conversion inside the game crate for a couple of years now, since it's used for many different player interactions with the game world.

Now that we have the editor, this code needed to live in a re-usable place, so I abstracted it out, along with the rest of the camera-related code, into a crates/camera library in the engine's Cargo workspace.

The editor already had a winit event loop which gives it the coordinates of the mouse, but in order to place scenery while playing the game it also needed access to the game's camera.

I ended up creating an EditableApp trait which exposes this information. My longer term vision with this trait is that any future applications made using this engine can simply implement the trait and will then be fully compatible with the editor.

Here's how the trait looks so far:

/// Allows a game to run in the editor.
///
/// Different methods give the editor the ability to do things such as:
///
/// - Query the running game application instance for information
///
/// - Hot-update assets in the running game to enable things such as hot-reloading/insertion of new
///   scenery
///
/// # Guidelines for Exposing Information from App -> Editor
///
/// We seek to minimize the amount of information that we expose in order to keep the editor
/// de-coupled from application specific details. The less there is going on, the easier it will
/// be to use the editor for other applications in the future.
///
/// So, we should be sure that there is no other way to get the information that we need before
/// exposing it to the editor from the app.
///
/// For example, instead of exposing the TerrainResource to the editor, we simply moved it to
/// another crate so that the editor can maintain its own instance directly.
///
/// This separation will prevent editor-focused functionality from leaking into application crates.
///
/// Methods in this module should contain explanations as to why exposing the information
/// from the app was the best approach.
pub trait EditableApp {
    /// Run a tick of the application's simulation.
    ///
    /// For a game this is commonly referred to as a "game tick".
    fn run_tick(&mut self);

    /// Exposed because it would be difficult for the editor to maintain its own version of the
    /// game's camera since the camera simulation can be influenced by misc game state factors such
    /// as not being able to move the camera while in a cut scene.
    fn camera(&self) -> Camera;

    /// Push to the queue of unprocessed raw user input.
    /// A game will typically drain this queue once per frame.
    fn push_input_event(&mut self, input_event: InputEvent);

    /// Push a message from the game server -> game client.
    ///
    /// This might be from a real server that the editor is running, or a synthetic message that
    /// the editor created.
    fn push_server_to_client_message(&mut self, bytes: &[u8]);

    /// Insert scenery into a terrain chunk.
    fn insert_scenery(
        &mut self,
        terrain_chunk_id: TerrainChunkId,
        scenery_id: u16,
        scenery: SceneryPlacement,
    );

    /// Used for down-casting within unit/integration tests.
    fn as_any(&self) -> &dyn std::any::Any;
}
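
To give a feel for how the trait gets used, here is a rough sketch of a winit loop driving an EditableApp. The to_input_event conversion is made up for illustration, and the real editor loop does quite a bit more than this.

use winit::event::Event;
use winit::event_loop::{ControlFlow, EventLoop};

/// Drive an `EditableApp` from a winit event loop.
fn run_editor<A: EditableApp + 'static>(mut app: A, event_loop: EventLoop<()>) -> ! {
    event_loop.run(move |event, _, control_flow| {
        *control_flow = ControlFlow::Poll;

        match event {
            Event::WindowEvent { event, .. } => {
                // Translate the raw winit event into the game's InputEvent type
                // (hypothetical helper) and push it onto the app's input queue.
                if let Some(input_event) = to_input_event(&event) {
                    app.push_input_event(input_event);
                }
            }
            Event::MainEventsCleared => {
                // Step the game's simulation once per frame.
                app.run_tick();
            }
            _ => {}
        }
    })
}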

With both the camera and mouse coordinate information available to the editor, combined with the work over the past couple of weeks refactoring the terrain implementation and giving the editor access to its own instance of the TerrainResource, the editor can now work out the coordinates where the mouse intersects the terrain, a prerequisite for all of the upcoming editor tools.
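
Conceptually, once the editor has a world-space ray it can march along it and compare against the terrain's height until the ray crosses the surface. A simplified sketch, where height_at stands in for whatever height lookup the TerrainResource actually exposes:

use glam::Vec3;

/// Step along a world-space ray until it dips below the terrain.
fn ray_terrain_intersection(
    origin: Vec3,
    direction: Vec3,
    height_at: impl Fn(f32, f32) -> f32,
) -> Option<Vec3> {
    const STEP: f32 = 0.25;
    const MAX_DISTANCE: f32 = 500.0;

    let mut traveled = 0.0;
    while traveled < MAX_DISTANCE {
        let point = origin + direction * traveled;

        // Once the sample point is below the terrain we have an approximate hit.
        // A real implementation would refine between the last two samples.
        if point.y <= height_at(point.x, point.z) {
            return Some(point);
        }

        traveled += STEP;
    }

    None
}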

Play Mode, Place Mode

My approach to building out the editor is to throw in what I need without worrying too much about the editor's user experience, knowing that my strict test-driven development practice will allow me to easily move and refactor things over time as I learn more about what a good editor experience should feel like.

I've also been reading the documentation of other editors to get some insight into how others have handled things that I'm thinking about or working on.

I also learned a good bit from the excellent talk Creating a Tools Pipeline for Horizon: Zero Dawn.


Before I started working on The Tooling Saga it was already possible to play Akigi in the game editor.

So, since I already have a view into the game world in editor, I decided to start by making the new tools work while playing the game in editor.

This way I wouldn't need to spend any time thinking about how to render the world in a more editor friendly way just yet.

Plus, I know that I will always want to be able to edit the game while playing it, since that lets me visualize the world in exactly the way that a player will while I am editing, increasing the likelihood that I design things just-right.

Right now I can press Tab to toggle between Play mode and Place Scenery mode in a game pane within the editor.
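
The mode itself is easy to picture as a small enum that the Tab key flips back and forth. This is just an illustration; the editor's real types may be organized differently.

/// Which mode a game pane in the editor is in.
#[derive(Debug, Copy, Clone, PartialEq)]
enum GamePaneMode {
    /// Input is forwarded to the running game as if the editor weren't there.
    Play,
    /// Clicks on the terrain place the currently selected scenery.
    PlaceScenery,
}

impl GamePaneMode {
    /// Pressing Tab flips the pane between playing the game and placing scenery.
    fn toggled(self) -> GamePaneMode {
        match self {
            GamePaneMode::Play => GamePaneMode::PlaceScenery,
            GamePaneMode::PlaceScenery => GamePaneMode::Play,
        }
    }
}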

User Interfaces

I have a custom user-interface system for the engine.

One of my favorite features is the enum-based event system which makes adding event handlers to UI elements feel lightweight and re-usable.

I'll dive more deeply into the design of this system sometime, but here are a couple of quick snippets.

Right now UI elements (quads or groups of text characters) use an EventHandlers struct to register different events.

pub struct EventHandlers<N, I: KeyboardInputHandler<N>> {
    onclick_inside: Option<MouseInputHandler<N>>,
    onclick_outside: Option<MouseInputHandler<N>>,
    onmouseover: Option<MouseInputHandler<N>>,
    onmouseout: Option<MouseInputHandler<N>>,
    ontouchmove: Option<MouseInputHandler<N>>,
    on_char_or_key_received: Option<I>,
}

#[derive(Debug)]
pub struct MouseInputHandler<N> {
    event: N,
    captures: bool,
}

The generic N type is any type that the application wants to use to describe events that occur. This would typically be an enum.

The game and editor each have their own event enum with data structures specific to their own needs.

Here's the event enum for the editor. It's fairly small since the editor is still young.

/// When the user types or clicks or moves their mouse or enters any other sort of input we create
/// an `InputEvent`.
///
/// Every frame the `InputEventProcessorSystem` iterates through the input events and translates
/// them into `NormalizedEvent`s.
///
/// TODO: NormalizedEvent does not feel like the right name
#[derive(Debug, PartialEq, Clone)]
pub enum NormalizedEvent {
    /// Indicates that nothing should occur.
    /// This is useful when you have a UIElement that you want to capture clicks but not do
    /// anything else.
    DoNothing,
    /// Forward an input event to the game's runtime
    PushInputEventToGame(InputEvent),
    /// Set the size of the framebuffer that backs the editor window
    SetFullWindowFramebufferSize(PhysicalSize),
    /// Resize the ViewportResource
    SetViewportResourceSize(LogicalSize),
    /// See [`PlaceScenery`] for documentation.
    PlaceScenery(PlaceScenery),
    /// Set the location of the screen pointer (mouse)
    SetScreenPointer(Option<LogicalCoord>),
    /// Set the mode for the game pane with the provided ID.
    SetGamePaneMode(SetGamePaneMode),
    /// Set the renderable ID selector for a game pane.
    SetRenderableIdSelector(TabAndPaneId, Option<RenderableIdSelectorUi>),
    /// Set the RenderableId used when placing scenery.
    SetPlaceSceneryRenderableId(TabAndPaneId, RenderableId),
    /// Push char input to the renderable ID selector
    PushKeyToRenderableIdSelector(TabAndPaneId, CharOrVkey),
}

I made some improvements to the user interface code this week.

This mainly came down to adding a new field to the EventHandlers struct shown above for handling keyboard input, as well as a new trait for converting that raw keyboard input into a NormalizedEvent.

/// A type that can be converted into a NormalizedEvent when given some keyboard input.
///
/// Useful for UI elements that handle key/character presses
pub trait KeyboardInputHandler<N> {
    /// Create a NormalizedEvent based on the inputted key.
    fn to_normalized_event_with_key(&self, key: CharOrVkey) -> N;
}
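
For example, the search field in the scenery selector needs to forward key presses to the editor. An implementor might look something like the sketch below; the handler struct here is made up, not the editor's exact type.

struct RenderableIdSelectorKeyHandler {
    /// The game pane whose renderable ID selector should receive the key.
    tab_and_pane: TabAndPaneId,
}

impl KeyboardInputHandler<NormalizedEvent> for RenderableIdSelectorKeyHandler {
    fn to_normalized_event_with_key(&self, key: CharOrVkey) -> NormalizedEvent {
        // Forward the raw key press to the renderable ID selector's search filter.
        NormalizedEvent::PushKeyToRenderableIdSelector(self.tab_and_pane.clone(), key)
    }
}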

I also added the beginnings of a GridLayout struct that powers UI layouts for lists and grids.

Here's an example call site:

let layout = GridLayout::new(
    // Map a slot index in the grid to the ID of the element that belongs in that
    // slot, or `None` once there are no more entries.
    |idx| {
        if idx == 0 {
            Some(RenderableSelectorUiIdKind::SearchFilterText)
        } else if idx < renderables.len() as u32 + 1 {
            Some(RenderableSelectorUiIdKind::RenderableName(idx as _))
        } else {
            None
        }
    },
    sel.coord(),
    FirstRowColumnOffset::new(5, 5),
    GridWidthHeight::new(
        GridSize::OffsetFromFurthestItemEdge(5),
        GridSize::OffsetFromFurthestItemEdge(5),
    ),
    RowColumnLimit::new(OneOrMore::new_unlimited(), OneOrMore::one()),
    // Per-item width and height.
    // FIXME: Use RenderableListEntry::new().render() to get the size.
    //  This would properly factor in text size, instead of hard coding.
    (|_id| 100, |_id| 30),
    // Spacing between adjacent items.
    (|_id1, _id2| 0, |_id1, _id2| 5),
);

// ... snipped ...

for (idx, item) in layout.enumerate() {
    // ... snipped ...
}

I also made unique IDs for all user interface elements (quads and text sections) mandatory. Right now I am only using this to query for UI elements in my unit tests, but I can imagine that in the future being able to find a specific element could be useful at runtime.

The game-app and editor each have their own UserInterfaceResource and thus each have their own ID enum.

Here's the game's enum:

/// Uniquely identifies a UI element in the app
///
/// TODO: Deny unused variants
#[derive(Debug, Hash, Eq, PartialEq, Ord, PartialOrd, Copy, Clone)]
#[allow(missing_docs)]
pub enum AppUiId {
    OverheadHitpoints(OverheadHitpointsUiId),
    OverheadDamage(OverheadDamageUiId),
    OverheadText(OverheadTextUiId),
    SkillCard(SkillCardUiId),
    InventoryItem(InventoryItemUiId),
    SidePanelTopButton(SidePanelTopButtonUiId),
    SkillsPanelBackground,
    InventoryPanelBackground,
    LoadingText,
    InteractionMenuBackground,
    InteractionMenuEntry(usize),
    DialogueOverlayBackground,
    DialogueOverlaySpeakerText,
    DialogueOverlaySpeakerName,
    DialogueOverlayResponseText(usize),
    Compass,
    PendingChatMessageText,
    RecentMessagesPanelText(usize),
    BottomPanelBackground,
}

And here's the UserInterfaceResource that they both make separate use of:

pub struct UserInterfaceResource<N, I: KeyboardInputHandler<N>, Id: Hash> {
    latest_ui_batch: HashMap<ViewportId, HashMap<Id, UiElement<N, I>>>,
    pressed_keys: HashMap<VirtualKeyCode, PressedKey>,
}
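
As a rough illustration of the test-side querying mentioned above: because every element carries a unique ID, a test can pull a specific element straight out of the latest UI batch. The ui_element method and the test helpers below are hypothetical rather than the engine's real API.

impl<N, I: KeyboardInputHandler<N>, Id: Hash + Eq> UserInterfaceResource<N, I, Id> {
    /// Look up a UI element from the most recent batch by its unique ID.
    fn ui_element(&self, viewport: ViewportId, id: &Id) -> Option<&UiElement<N, I>> {
        self.latest_ui_batch.get(&viewport)?.get(id)
    }
}

#[test]
fn compass_is_rendered() {
    let app = create_test_game_app();

    // The unique ID lets the test assert that a specific element exists without
    // caring about where it ended up on screen.
    assert!(app
        .user_interface_resource()
        .ui_element(MAIN_VIEWPORT, &AppUiId::Compass)
        .is_some());
}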

Going Forwards

As you can see from the video above, this is a not-so-polished first pass at placing scenery.

Over time I'll use one- or two-hour stints here and there to add polish, convenience, and better visual representations of things whenever I either run into an inconvenience or just have some inspiration to make things look and feel a bit nicer.

The game and the editor share the same underlying user interface implementation, so any time I make progress on the editor's interface I'm also making progress on what's possible in the game's interface, and vice versa.

Other Notes / Progress

I'm excited about maintaining both the editor and the game this early in the project.

Having to make use of a fair bit of functionality in two different places is leading to much more informed and flexible implementations and code organization.

I'm feeling my usual excitement about how things will look one, five or ten years from now. I feel happy about the fact that over time it gets easier and easier to add to the code base.

A good maintainability score so far, I would say! (I mostly give credit to the monstrous duo of Rust and test-driven development here.)

Next Week

Right now I can place scenery, but there isn't yet a user interface or backend implementation for moving the scenery around. I'm going to put the scenery editing work on pause, though, and switch gears, since I feel like mixing things up a bit.

This week I'll be working on sculpting terrain. I don't yet have a plan for the pull requests that I'll be submitting over the course of this implementation, so I'll start the week by jotting down a rough sequence of planned PRs and then dive right into implementation.


Cya next time!

- CFN