AI and dialogue using node graphs and custom language

One of the significant milestones I've reached since the beginning of March was the implementation of a system I call Convo, along with a rudimentary NPC AI that allows NPCs to move/wander and do things randomly, if their base purpose allows it.

To achieve the implementation, two fundamental concepts had to be developed. The first is SNTX, and the second is the usage of TGF to express nodal networks that are interpreted at runtime.

SNTX

SNTX is a procedural markup language designed to be injected into dicts. The resulting dict keys are procedurally looked up to arrive at a resulting value.
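To make the idea concrete, here is a minimal sketch of what "markup injected into dicts" could look like. The key patterns mirror examples that appear later in this post (`==topic`, `~property`); the exact grammar and the `lookup` helper are assumptions for illustration, not the real SNTX implementation.

```python
# Hypothetical sketch: dialogue data lives in a flat dict whose keys carry
# SNTX-style markup, and the runtime builds keys procedurally to find values.
convo = {
    "==intro -1 ~text": "Wha? Oh, hi. You must be the new recruit.",
    "==intro -1 ~choices": "worried",
    "++worried ~text": "Worried",
}

def lookup(d, topic, prop, index=-1):
    """Procedurally assemble a key and look it up (returns None if absent)."""
    return d.get(f"=={topic} {index} ~{prop}")

print(lookup(convo, "intro", "text"))     # the intro line
print(lookup(convo, "intro", "choices"))  # "worried"
```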

SNTX is a significant upgrade to the TalkDialogue system that I developed in 2015. The difference lies in the robustness of handling Conditions, as well as clarity. The idea behind the TalkDialogue and SNTX markups was the ability to author a dialogue tree using a text file. By and large this has been possible, but the branching nature of dialogue trees makes writing everything down in one linear text file confusing. It was for this reason that I turned to yEd to visualise the dialogue tree.

Conditions

One of the important aspects of SNTX is the concept of Conditions. Conditions are simply asking: is this node valid? If a node has a Condition, the Condition must be True in order for the node to be processed by the system.

Conditions check 3 things:

  • Accomps – a global dict that represents arbitrary ‘accomplishments’.
  • State – the NPC’s ‘state’ variable, which is essentially a CSV string and can comprise any number of string tokens describing its state.
  • INV – the Player’s Inventory, which can be searched for a particular item, in a particular quantity.
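As a hedged sketch of how a Condition might be evaluated: the token prefixes below are inferred from the example `?@met_player,$!zak_quest_rejected` later in this post, reading `@` as an Accomps test, `$` as a State-token test, and `!` as negation. INV checks are omitted, and the real grammar may well differ.

```python
# Assumed Condition grammar: "?<token>,<token>,..." where '@name' tests the
# global Accomps dict, '$name' tests the NPC's CSV State, and '!' negates.
def check_condition(cond, accomps, npc_state_csv):
    state_tokens = {t.strip() for t in npc_state_csv.split(",")}
    for token in cond.lstrip("?").split(","):
        scope, name = token[0], token[1:]
        negate = name.startswith("!")
        if negate:
            name = name[1:]
        if scope == "@":
            present = name in accomps
        elif scope == "$":
            present = name in state_tokens
        else:
            raise ValueError(f"unknown scope {scope!r}")
        if present == negate:   # token fails: absent (or present but negated)
            return False
    return True

accomps = {"met_player": True}
print(check_condition("?@met_player,$!zak_quest_rejected",
                      accomps, "wandering,worried"))  # True
```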

Doers

The other side of Conditions are Doers. Doers set Accomps or an NPC’s State. Within the Convo (and Astrip) systems, a node is capable of executing a command telling it either to add/remove/modify a key in the Accomps, or to amend an NPC’s State. It also allows transferring items from NPC to Player, and vice versa.

With Doers and Conditions combined, I can script an interactive storyline.
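A Doer state command might work along these lines, going by the `~dostate::+met_player` example later in the post: `+` appends a token to the NPC's CSV State, `-` removes one. This is an assumed convention, sketched for illustration.

```python
# Hedged sketch of a Doer applying a state command such as "+met_player":
# '+' adds the token to the NPC's CSV state string, '-' removes it.
def do_state(npc_state_csv, command):
    tokens = [t for t in npc_state_csv.split(",") if t]
    op, name = command[0], command[1:]
    if op == "+" and name not in tokens:
        tokens.append(name)
    elif op == "-" and name in tokens:
        tokens.remove(name)
    return ",".join(tokens)

print(do_state("wandering", "+met_player"))  # wandering,met_player
print(do_state("wandering,worried", "-worried"))  # wandering
```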

In fact, I have completed a sample quest using all these systems as an early-stage trial for the prototype. I’m happy to say that it also involves having to kill a Robot.

yEd and TGF

yEd is a very capable diagramming application. It has become my weapon of choice because it is one of the very few programs that allow TGF export. TGF (Trivial Graph Format) is a super-lightweight nodal graph format that, when coupled with a well-thought-out markup, can solve a large number of data relationship issues.
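TGF really is trivial: node lines of the form `id label`, a separator line containing `#`, then edge lines of the form `source target label`. A minimal parser fits in a few lines (the sample labels below reuse this post's SNTX notation):

```python
# Minimal TGF (Trivial Graph Format) parser: nodes ("id label"), a "#"
# separator line, then edges ("source target label"); labels are optional.
def parse_tgf(text):
    nodes, edges = {}, []
    in_edges = False
    for line in text.strip().splitlines():
        line = line.strip()
        if line == "#":
            in_edges = True
            continue
        if not line:
            continue
        if not in_edges:
            node_id, _, label = line.partition(" ")
            nodes[node_id] = label
        else:
            src, rest = line.split(" ", 1)
            dst, _, label = rest.partition(" ")
            edges.append((src, dst, label))
    return nodes, edges

nodes, edges = parse_tgf("""1 ==intro -1 ~text:: Hello
2 ++worried ~text:: Worried
#
1 2 choice""")
print(nodes["1"])  # ==intro -1 ~text:: Hello
print(edges)       # [('1', '2', 'choice')]
```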

My usage of yEd and TGFs began with simply creating nodes labelled as SNTX keys. I drew edges that served as annotations of their relationships, though they didn’t actually define the relationships. The exception is Choices: for every Topic, a line can be drawn to a Choice, which is then recognised by the TGF2Convo converter tool (explained later).

A Convo graph. ‘choice’-labelled edges are the only edges that are processed in the TGF.

The above image shows Topics (yellow), Choices (green), Doers (cyan), ChoiceGroups (pink), Entry (white).

The labels in the nodes reflect directly what is written in SNTX. Written as TGF, it looks something like:

```
4 ==intro -1 ~text:: [The man seems to be so worried that he hardly notices you when you come up to talk.]\nWha? Oh, hi. You must be the new recruit. My name's Zak.
5 ->::intro
6 ++worried ~text:: Worried
7 ~dostate::+met_player
8 -> ?@met_player,$!zak_quest_rejected::intro_hi_again
9 ==intro_hi_again -1 ~text::Hi, again.
10 ==worried -1 ~text:: I've lost the Sub-Rail pass that I was issued with. My team leader is going to kill me.
11 ++subrailpass ~text:: Sub-Rail Pass
12 ==subrailpass -1 ~text:: I don't know where I might have dropped it. I swear it was in my pocket.
13 ++quest ?@!questaccepted ~text:: [...]
```

This is translated using a Python script called TGF2Convo, and the output looks like this:

````
# EntryTopic: Entry for cond -----
->::intro

# EntryTopic: Entry for cond ?@met_player,$!zak_quest_rejected -----
-> ?@met_player,$!zak_quest_rejected::intro_hi_again

# Topic: intro -----------------------

# ~text
==intro -1 ~text::[The man seems to be so worried that he hardly notices you when you come up to talk.]\nWha? Oh, hi. You must be the new recruit. My name's Zak.

# ~choices
==intro -1 ~choices::worried

# ~dostate
==intro -1 ~dostate::+met_player

> Choice
```yml
# Choice: worried -----------------------
# ~text
++worried ~text:: Worried

# Choice: subrailpass -----------------------
# ~text
++subrailpass ~text:: Sub-Rail Pass
```
````

The TGF2Convo tool emits Markdown syntax so that, opened in a Markdown viewer, the output is easier to understand.

Ultimately, the dict is populated with these values, and the runtime uses the dict to determine the path of the dialogue.

However, this is not my ideal approach. I had developed SNTX before using TGF. Since using TGF for the AI, I have realised that utilising TGF fully would be a better way to go for dialogues too, but this requires a reworking of the Convo system. That might be done at the end of the prototype phase.

AI and TGF

After the Convo system and SNTX were developed, I had to jump into AI. At that point I had two choices: I could go and write the AI in C2 as events, or I could attempt something much harder, ultimately more flexible, and platform-agnostic. I chose the latter.

When looking at what I wanted to do with the AI, I decided that I just needed a very simple system of controlling the actions and movements of NPCs (not the enemy NPCs). I outlined the requirements of what it would take for an NPC (Zak) who has lost something and is wandering around a given area.

First, there is movement: the NPC should be able to use waypoints. Second, the NPC should have some randomness in how it uses waypoints. Third, the NPC should have random wait times. Fourth, the NPC should have random animation.

With all that in mind I went into the specifics of what components need to exist to make that happen. I needed:

  • ability to set NPC variables
  • ability to query NPC variables from within the AI graph
  • ability to initiate a ‘move’ (pathfinding) command from the AI graph to the runtime
  • likewise, the ability to ‘wait’
  • the ability to stop
  • the ability to have AI graph randomly choose between named choices
  • the ability to respond, via event handlers, to events fired by the runtime
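The requirements above could be sketched as a small NPC controller along these lines. The class shape, field names, and logging are hypothetical; in the actual system these commands are issued from the AI graph to the runtime.

```python
# Hedged sketch of a graph-driven NPC controller covering the requirements:
# settable variables, waypoint moves, and random waits. Illustrative only.
import random

class NPC:
    def __init__(self, waypoints):
        self.vars = {}              # variables the AI graph can set/query
        self.waypoints = waypoints
        self.log = []               # stands in for commands sent to the runtime

    def move_to(self, index):
        # In the real system this would kick off a pathfinding 'move' command.
        self.log.append(("move", self.waypoints[index % len(self.waypoints)]))

    def wait(self, lo, hi):
        # Random wait time, per requirement four.
        self.log.append(("wait", random.uniform(lo, hi)))

zak = NPC(waypoints=[(0, 0), (4, 2), (7, 1)])
zak.vars["mood"] = "worried"
zak.move_to(random.randrange(3))    # random waypoint use
zak.wait(1.0, 3.0)
```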
A portion of Zak’s AI graph

The AI at work. Zak is the guy with red pants. 🙂

Event handlers

When I started developing the AI, I would click on Zak to talk to him. A ‘Talk’ icon would appear, but Zak kept on moving. Though I could have effectively paused the game so that I could properly select the icon before Zak walked away, I thought it would be better to tell the AI to stop Zak.

That’s when event handlers came into the picture, which also brought forth a host of different possibilities. For example, I created an event handler called onastrip, which is called when you try to initiate an interaction. In the AI graph, this event handler is connected to make Zak stop. If the icons are ‘aborted’, the event onastripend is fired, and the AI graph is wired to make Zak resume where he left off. When a Convo is initiated, onconvo is fired, and this stops him, too. When the Convo is ended, onconvoend is fired.

If a waypoint is missing, there is a wpmissing event that fires, which allows the AI to adjust itself in order to get a proper waypoint index.
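The dispatch pattern behind this could look something like the sketch below: the graph registers handlers by name, the runtime fires events, and an event with no handler is simply ignored. The registration decorator and handler bodies are my own illustration, not the actual Convo/AI code.

```python
# Hedged sketch of runtime-to-AI-graph event dispatch. Events with no
# registered handler (e.g. onconvoend here) are silently ignored.
handlers = {}

def on(event):
    """Register a function as the handler for a named event."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register

def fire(event, *args):
    if event in handlers:       # an event is only acted on if a handler exists
        handlers[event](*args)

@on("onastrip")
def stop_zak():
    print("Zak stops")

@on("onastripend")
def resume_zak():
    print("Zak resumes where he left off")

fire("onastrip")      # Zak stops
fire("onconvoend")    # no handler wired: nothing happens
```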

The idea of ‘event handlers’ also gradually slid into other aspects of the AI graph, where ‘events’ are triggered as the result of an operation (a Doer). For example, a choose node chooses between any number of events to fire. As long as that event handler is present, it is a valid event.

Example of a `choose` node. `0`, `1`, and `2` are not really numbers; they are event names, and when `choose` picks an event, the handler should be named `onchosen <event_name>`.
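A `choose` node in this scheme might reduce to something like the following: pick one event name at random and fire the matching `onchosen <event_name>` handler if one exists. The handler table and return values are illustrative assumptions.

```python
# Hedged sketch of a `choose` node: randomly pick an event name and invoke
# the "onchosen <event_name>" handler, if present.
import random

handlers = {
    "onchosen 0": lambda: "wander",
    "onchosen 1": lambda: "wait",
    "onchosen 2": lambda: "play animation",
}

def choose(event_names, handlers, rng=random):
    name = rng.choice(event_names)
    handler = handlers.get(f"onchosen {name}")
    return handler() if handler else None

print(choose(["0", "1", "2"], handlers))  # one of: wander / wait / play animation
```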

There is an immense satisfaction in working this way. There’s definitely a lot more work involved, but it brings the systems I’m working on to a higher level of flexibility while still remaining within the realm of my understanding, since I’m the one developing it.

Although I don’t know how much of the AI, in particular, will make it through to the Unity alpha, it is undoubtedly a very useful piece of development because it informs me of the kinds of behaviours I may need: which aspects to simplify, and which aspects need more complex behaviour.

 
