071 - Quests and Tests

June 14, 2020

Coming out of last week my goals were to add more polish to the Tutor of War mini-quest and introduce the Tutor of Navigation mini-quest.

This would mean that half of the initial four tutors were in place - leaving the rest of June for adding the other two tutors as well as continuing to improve the game's rendering and UI.

I spent the first half of the week beefing up the quest testing infrastructure, then on Friday and Saturday I added the Tutor of Navigation quest and cleaned up the dialogue for the Tutor of War.

There's a lot of room to grow in my dialogue writing, but I'm already seeing some improvement, so I just have to keep at it. I still need to create new models for the different characters.

Testing Quests

Before this week my approach to testing quests looked something like this:

/// Squash the mosquitos behind his house
fn squash_mosquitos(game: &GameThread, player: &mut ConnectedPlayer) -> Result<(), anyhow::Error> {
    player.start_interactive_sequence_with_display_name(TUTOR_OF_WAR_DISPLAY_NAME)?;
    game.tick(1);

    player.assert_current_sequence_node_id(TUTOR_OF_WAR_GO_BACK_AND_GET_MOSQUITOS_NPC);

    player.auto_attack_target(player.find_ent_id_by_name("Mosquitos"))?;
    game.tick_until(|| player.has_ent_with_name("Squashed Mosquito"), 20);

    player.pickup_entity(EntLookup::DisplayName("Squashed Mosquito"))?;
    game.tick(1);

    player.start_interactive_sequence_with_display_name(TUTOR_OF_WAR_DISPLAY_NAME)?;
    game.tick_until(|| player.is_in_dialogue(), 20);

    player.assert_current_sequence_node_id(TUTOR_OF_WAR_HERES_MY_SEAL_NPC);

    Ok(())
}

I would use the ConnectedPlayer test struct to connect to the game server, which runs in a separate thread, and then play the game headlessly.
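In case it's useful, here's a rough sketch of that pattern. This is not Akigi's actual code - the names and channel plumbing are invented - but it shows the shape of it: the game loop runs on its own thread and the test feeds it commands.

use std::sync::mpsc;
use std::thread;

// Invented names - a minimal sketch of running a game loop on its own
// thread so that tests can drive it headlessly.
enum GameCommand {
    Tick(u32),
    Shutdown,
}

struct GameThread {
    commands: mpsc::Sender<GameCommand>,
    handle: thread::JoinHandle<()>,
}

impl GameThread {
    fn new() -> Self {
        let (commands, receiver) = mpsc::channel();

        let handle = thread::spawn(move || {
            while let Ok(command) = receiver.recv() {
                match command {
                    // Advance the game simulation n ticks.
                    GameCommand::Tick(n) => {
                        for _ in 0..n { /* simulate one tick */ }
                    }
                    GameCommand::Shutdown => break,
                }
            }
        });

        GameThread { commands, handle }
    }

    fn tick(&self, n: u32) {
        self.commands.send(GameCommand::Tick(n)).unwrap();
    }

    fn shutdown(self) {
        let _ = self.commands.send(GameCommand::Shutdown);
        let _ = self.handle.join();
    }
}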

This approach worked well as a starting point - but the big issue was that as I added more branching to the quests and dialogue, they would become harder and harder to test.

I want the dialogue and decisions in the game to be branching and consequential, but I can't spend my time writing boilerplate tests for every possible path. So, I needed an easier way to test these branches.

To move in that direction I introduced two new bits of testing infrastructure this week: interactive sequence graph smoke tests and randomized quest completion tests.

Interactive Sequence Graph Smoke Tests

I've added a new test function that iterates over every interactive sequence graph (so far there are two) and runs different assertions on them.

Different assertions guarantee different things: that every node in the graph is pointed to (no unreachable dialogue), that nodes which give you items can only be reached once (to prevent duplication bugs), and that a certain ratio of nodes carry a meaningful consequence, such as changing how other characters in the game perceive you - along with a whole host of other checks.

Here's one example:

[Image: Player choice smoke test - an example assertion from one of our smoke tests.]

Analyzing things like the ratio of consequential responses doesn't suddenly make my game dialogue amazing, but I'm finding that it pushes me back to the drawing board to think about how to give the player more choice.
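For a rough picture of what these checks look like, here's a simplified sketch. The graph API, the method names, and the specific threshold are all invented for illustration - they are not the real Akigi code.

/// Simplified sketch of a graph smoke test. The graph API and the
/// specific ratio threshold are invented for illustration.
#[test]
fn interactive_sequence_graph_smoke_tests() {
    for graph in all_interactive_sequence_graphs() {
        // Every node must be pointed to by some path from the start node,
        // otherwise it's dead dialogue that no player will ever see.
        for node in graph.nodes() {
            assert!(
                graph.is_reachable_from_start(node.id()),
                "Unreachable node {:?} in graph {:?}",
                node.id(),
                graph.id()
            );
        }

        // Nodes that hand out items must not be reachable more than once,
        // or a player could loop back through them and duplicate the item.
        for node in graph.nodes().filter(|node| node.gives_item()) {
            assert!(!graph.is_reachable_more_than_once(node.id()));
        }

        // Require a minimum ratio of player responses that carry a
        // meaningful consequence, such as changing how an NPC perceives you.
        let responses = graph.player_response_nodes().count();
        let consequential = graph
            .player_response_nodes()
            .filter(|node| node.has_consequence())
            .count();
        assert!(consequential as f32 >= responses as f32 * 0.25);
    }
}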

Over time, as I add more interactive sequences, I'll continue to beef up these smoke tests. That should keep minimizing the odds of introducing bugs while giving me the flexibility to write significantly branching dialogue that actually impacts your gameplay experience.

Randomized Quest Completion Tests

Some quest steps involve talking to an NPC.

Testing that used to look like this:

/// Receive the tattered tunic now that you've defeated the vicious bunny
fn receive_tattered_tunic(
    game: &GameThread,
    player: &mut ConnectedPlayer,
) -> Result<(), anyhow::Error> {
    player.advance_dialogue_tick_once_after_each(
        &[
            TUTOR_OF_WAR_LAUGHING_AT_WEAKNESS_NPC,
            TUTOR_OF_WAR_WHAT_WAS_BAR_OVER_BUNNY_HEAD_PLAYER,
            TUTOR_OF_WAR_INTRODUCE_HITPOINTS_1_NPC,
            TUTOR_OF_WAR_INTRODUCE_HITPOINTS_2_NPC,
            TUTOR_OF_WAR_ACKNOWLEDGE_UNDERSTANDING_OF_HITPOINTS_PLAYER,
            TUTOR_OF_WAR_NOTICE_LOW_HITPOINTS_NPC,
            TUTOR_OF_WAR_GIVE_TATTERED_TUNIC_NPC,
        ],
        game,
    );

    assert!(player.has_item_with_icon_and_quantity(&IconName::TatteredTunic, 1));
    player.assert_quest_step(QuestId::TutorOfWar, 30);

    player.advance_dialogue_tick_once_after_each(&[END_OF_CONVO], game);

    Ok(())
}

I would manually specify the responses to choose and then assert that things worked properly.

Instead I now have a function that will randomly choose responses until some condition is met:

#[test]
fn tutor_of_navigation_quest() -> Result<(), anyhow::Error> {
    let game = GameThread::new(comps());

    // TODO: Add a way to assert that certain nodes are always visited during a quest completion.
    //  This allows us to make sure that it isn't possible to skip over certain nodes due to a
    //  misconfigured graph.
    //  Catches cases where we set poor criteria on a start node

    let new_player = game.connect_player_tick_until_in_world(NEW_USER_ID)?;
    let completed_tutor_of_war =
        game.connect_player_tick_until_in_world(COMPLETED_TUTOR_OF_WAR_USER_ID)?;

    for player in vec![new_player, completed_tutor_of_war] {
        player.random_walk_graph_until(
            InteractiveSequenceGraphId::tutor_of_navigation,
            TUTOR_OF_NAV_LOOKUP,
            || player.quest_step_eq(QuestId::TutorOfNavigation, 65535),
            || {
                if let Some(iseq) = player.maybe_iseq_main_action() {
                    assert_ne!(iseq.node_id(), Some(FALLBACK_BUG_START_NODE_ID));
                }
            },
            Duration::from_secs(1),
            &game,
        );

        assert!(player.has_item_with_icon_and_quantity(&IconName::TutorOfNavigationSeal, 1));
    }

    game.shutdown()
}

This doesn't completely automate quest testing, as I still need to manually write bits such as solving the squash_mosquitos step in the Tutor of War quest, but it does chop down the amount of code needed to test a quest quite a bit.

Another nice piece of it is that I'm now testing lots of different branches, not just one.

Since responses are chosen at random, running this enough times gives me confidence that all of the paths you can choose still lead to the correct final destination.

I can also gain confidence that things like ending and restarting a conversation, or logging out during a quest, don't impact your ability to complete it, by having my ConnectedPlayers randomly decide to disrupt themselves in those ways.
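Here's a sketch of how that self-disruption might be layered into the random walk. The hook and the ConnectedPlayer methods it calls are hypothetical, not the real API.

use rand::Rng;

// Hypothetical sketch: between randomly-chosen dialogue responses the
// test player occasionally disrupts itself, proving that the quest can
// still be completed afterwards.
fn maybe_disrupt(
    player: &mut ConnectedPlayer,
    rng: &mut impl Rng,
    game: &GameThread,
) {
    match rng.gen_range(0..100u32) {
        // ~5% of the time, walk away mid-conversation and start over.
        0..=4 => {
            player.end_conversation();
            game.tick(1);
        }
        // ~5% of the time, log out and then back in mid-quest.
        5..=9 => {
            player.disconnect();
            player.reconnect_tick_until_in_world(game);
        }
        // Otherwise, keep choosing random responses.
        _ => {}
    }
}

Seeding the random number generator per run would also make a failing walk reproducible.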

All in all we have a good foundation for fully automating our quest testing, but there is still more work to do.

I want to leverage some of the approaches behind our autonomous NPCs so that tests can automatically figure out how to complete quests when given enough information.

This would help eliminate the need to write my own test code for steps like squash_mosquitos. The automated test could simply deduce what needs to happen in order to advance, and then do so while still adding as much randomization to its approach as possible.

All of this will evolve over the coming months and years. I'm just focused on making one improvement at a time.

Other Notes / Progress

  • Started paying for Datadog. Bought a package for 5 million log events per month for a little under $10. This lets me log the duration of every single game tick (a timing sketch follows these notes). In the future I'll set up alerting so that I know right away if a game tick ever exceeds a certain duration.

  • Introducing some quest quality assertions gave me the boundaries I needed to be creative. By requiring a certain ratio of nodes to carry consequence and offer choice, I found myself going back to the drawing board to hone and condense the dialogue each time. Over the years this should help me get better at crafting more interesting dialogue with fewer nodes.

  • Beefed up the asset compilation process to allow it to downsize UI elements from their source files (a downsizing sketch also follows these notes). Needed this because I want to store the compass as a 1024x1024 PSD, but I only need it at around 128x128 in the final atlas. I ended up just adding a simple metadata file and making sure that our asset compiler used it.

  • More progress on the Metal renderer. It can now render meshes without any lighting. This week I'll add in the physically-based lighting model. When the Metal renderer gets up to speed in the coming weeks I'll be able to start creating some world editor tools so that I can pick up the pace with creating Akigi's world.
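Related to the Datadog note above, here's a minimal sketch of the kind of per-tick timing log I'm describing. The World type is just a stand-in for the real game state, and the exact log shape is an assumption.

use std::time::Instant;

// Stand-in for the real game state.
struct World;

impl World {
    fn tick(&mut self) { /* advance the simulation one step */ }
}

// Time each tick and emit one structured log line that a log forwarder
// can ship to a service such as Datadog.
fn run_tick(world: &mut World) {
    let start = Instant::now();
    world.tick();
    let elapsed_ms = start.elapsed().as_secs_f64() * 1000.0;
    println!(r#"{{"event":"game_tick","duration_ms":{:.3}}}"#, elapsed_ms);
}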
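And for the asset-downsizing note, a sketch of how a sidecar metadata file could drive the resize. The file format, the atlas_size field, and the use of the image crate are all assumptions for illustration, not Akigi's actual asset compiler.

use image::imageops::FilterType;
use std::path::Path;

// Hypothetical sketch: if `compass.png` has a sibling `compass.meta`
// containing a line like `atlas_size = 128`, downsize the image before
// packing it into the texture atlas.
fn compile_ui_texture(source: &Path) -> anyhow::Result<image::DynamicImage> {
    let img = image::open(source)?;

    let target_size = std::fs::read_to_string(source.with_extension("meta"))
        .ok()
        .and_then(|meta| {
            meta.lines()
                .find_map(|line| line.strip_prefix("atlas_size = "))
                .and_then(|value| value.trim().parse::<u32>().ok())
        });

    Ok(match target_size {
        Some(size) => img.resize_exact(size, size, FilterType::Lanczos3),
        None => img,
    })
}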

Next Week

The primary focus for this week is to add the Tutor of Eats.

I'll also continue working on the renderer-metal crate to close the gap between it and the renderer-webgl crate.

I'll also work on improving the user interface (my daily Photoshop practice is giving me some confidence!) and on adding the skills interface.

I'm excited to be getting closer and closer to a pace of releasing new gameplay every week. It really does feel like a foundation is crystallizing below me.


Cya next time!

- CFN