
Prototyping – What have I learned?

This post is for technical reference.

Consolidation

If there’s one thing I need to consider seriously, it’s how to consolidate the data that the systems use.

The main data holders are Accomps, INV, TINV, and Entity State.

The problem with Entity State, as I mentioned in one dev video, is whether it can be persistent. If it can, then it is a valid data holder. If not, we nix it.

Accomps spelling mistakes

Creating and referencing Accomps was an issue because an Accomp could be created and referenced anywhere, and spelling mistakes could occur in the references between any of the systems. I don’t think there’s a real workaround for this except to organise the Accomps properly.

Accomps master list

If the Accomps are created in a nodal graph, they may be easier to spot within the yEd interface. However, Accomps may be ‘declared’ in any document, so it is useful to have a master Accomps document to reference from. But this needs to be constantly updated. If a naming convention is strictly used in the creation of Accomps, then a script could be made to crawl all the documents to see where Accomps exist and update the master list.
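
If such a convention existed (say, a hypothetical acc_ prefix), a small crawler could build the master list automatically. A minimal sketch, with the prefix, folder, and file extension all assumed:

```python
import re
from pathlib import Path

# Hypothetical convention: every Accomp identifier starts with "acc_".
ACCOMP_PATTERN = re.compile(r"\bacc_[a-z0-9_]+\b")

def build_master_list(doc_root):
    """Crawl all game data documents and map each Accomp to the files that mention it."""
    master = {}
    for path in Path(doc_root).rglob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for accomp in ACCOMP_PATTERN.findall(text):
            master.setdefault(accomp, set()).add(path.name)
    return master

if __name__ == "__main__":
    for accomp, sources in sorted(build_master_list("gamedata").items()):
        print(accomp, "->", ", ".join(sorted(sources)))
```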

INV and TINV

INV was queried in the Condition (the # in SNTX notation). This is fine for now, but TINV should also be supported. In this case, the TINV is that of self, and TINV is always persistent.

Systems that reference data

Each of these data holders is queried and set in all of the systems:

  • Convo
  • AI
  • Astrip
  • Trade (not implemented in the prototype)

How to consolidate?

  • Convo should be re-designed to use the TGF format. This means connections should be taken into consideration, and perhaps edge labels should identify the relationships between nodes.
  • Conditions using TGF could be represented in nodes rather than in the SNTX notation.
  • Astrip is currently only expressed in SNTX notation. Astrip should be converted to TGF, too. Obviously, this requires a complete redesign of the Astrip nomenclature, but since a graph is the goal, we can use branching for Conditions and connect to the same nodal directives of dofunc, doaccomps, etc.
  • AI uses TGFs fully, but lacks dofuncs and doaccomps and dostate. AI uses in-game ‘DoFunc’ functions in order to accomplish specific things, which may or may not involve executing dofunc, doaccomps, or even dostate (though that’s not really being used due to the aforementioned persistence issue). But I can see where it may be useful to issue a directive through the AI.
    • When the Fixers were haxed, it was the AI that executed a special (but generic) DoFunc that added an Accomp saying it was haxed. It did this under the onhaxed event handler.
    • The improvement I see in this is to mirror the handling of the event to the Astrip. See below.

Re-purpose SNTX

SNTX’s heavy lifting is mostly in the areas of:

  • Entity reference
  • Conditions

Entity reference is something like ==poxingalley.bin. In a TGF, there are no node types, so referencing an entity would still require a keyword to represent the type of node being processed, so I think prefixes like the == symbol would still need to exist.

Conditions are like ?$!entry_desc_shown. Like entity referencing, this needs to remain to identify the node as a condition. However, unlike the original SNTX notation, which features the entity reference and condition (and directive) in one line, the nodal graph will split entity reference, conditions, and directives as separate nodes, which makes for a more readable graph, and makes it possible to branch and re-use other subgraphs.

The image above shows a possible way to go about it. The entity is identified first, then contextualised immediately with the relevant action operating on it (look). Then the Condition is checked. Edges branch from that Condition check with labels 0, 1, and -1 denoting False, True, and Nominal (Default). Nominal means that the directive will fire regardless of the Condition. This -1 edge may not be necessary, however, because it should be possible to connect the ++look node to the ~doaccomps node directly and have the parser grok it.

So the SNTX notation of ==, ++, ?, and ~ stays the same, but is re-purposed in TGF to directly indicate the type of node being processed. Also, we have the added 0, 1, and -1 edge labels.
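
As a rough illustration only (the node contents and edge labels below are placeholders, not the final nomenclature), the look subgraph described above might be authored in TGF along these lines:

```
1 ==poxingalley.bin
2 ++look
3 ?$!entry_desc_shown
4 ~doaccomps
5 ~dofunc
#
1 2
2 3
3 4 1
3 5 0
```

Each node carries its SNTX prefix to declare its type; the lines after the # are edges, with 1 and 0 labelling the True and False branches out of the Condition node.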

Astrip and event handlers

This is a narrative tool, not a generic one, so we’re operating upon set entities, not spawned ones.

Why not deal with event handlers in Astrip just as we do in AI? When an event is called in the AI, it is handled in the context of the AI. And as mentioned above regarding Fixers modifying the Accomps, it seems untidy to do that there. What if, then, we query for an event handler in the Astrip document and then process the directive from there?

Here, the ++ symbol abstractly represents an ‘actionable’ context, so it makes sense that it can also be used as an event handler name.

So, again, the narrative construction for Accomps (in particular) is done in the Astrip, rather than the AI, which helps consolidate the actions to one place.
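
A hedged sketch of what an event handler might look like inside an Astrip TGF, reusing ++ as the handler node (the entity and directive names here are illustrative only):

```
1 ==poxingalley.fixer
2 ++onhaxed
3 ~doaccomps
#
1 2
2 3
```

This way, the AI’s onhaxed event would simply be forwarded to the Astrip, and the Accomps bookkeeping stays in the Astrip document.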

Namespace

The above reminds me about namespaces, names and types, and how these must be designed carefully. The namespace of e.g. poxingalley.fixer refers, firstly, to the name of the entity. However, it is possible that the name doesn’t exist in the scene. When that happens, the type of the entity is queried instead. In this case there would be a match.

This is used to create entity references by type rather than name, and so some care must be taken to name the entities (their Astrip names, that is) uniquely from their types.
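
A minimal sketch of that name-then-type lookup, assuming each scene entity exposes a name (the Astrip name) and a type:

```python
def resolve_entity(reference, scene_entities):
    """Resolve e.g. 'poxingalley.fixer': match by Astrip name first, then fall back to type."""
    _, _, key = reference.rpartition(".")          # e.g. 'fixer'
    by_name = [e for e in scene_entities if e["name"] == key]
    if by_name:
        return by_name
    # Name not present in the scene, so query the type instead.
    return [e for e in scene_entities if e["type"] == key]
```

Which is exactly why the entities must be named uniquely from their types, or the fallback branch will grab something unintended.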

Accomps scripting, Narrative scripting, Triggers

Introduced at a later stage was Accomps scripting, which monitors the Accomps and then executes a directive. However, this wasn’t terribly useful.

It was more useful to shift the work to Astrip since Astrip handles a lot of interaction. I’ve already talked about Player interaction using Astrip, but Astrip handles Triggers, too, which are the primary handlers of the narrative. So if TGF is implemented, Triggers could be easier to monitor and handle, especially since the Accomps keywords would be in their respective documents.

Convo improvements

In addition to using TGF for Convo, there are certain workflows that need to be addressed.

  • Convo should have a way to sort which Choice comes first
  • Feedback from tester: default SPACE to advance the conversation instead of clicking on the Choice. In some cases a [...] is presented to the Player, and the SPACE bar could be used to click on this implicitly.
  • The above could be improved in a way that makes the Convo navigable by keyboard, so that a selection halo appears on the active Choice, and the SPACE bar (or ENTER) may be used to select the haloed Choice. This is also in line with the first item of having the ability to sort Choices, so that the most ‘obvious’ Choice is haloed first.
  • The Convo should feature the ability to not cycle back to a Choice that has already been chosen in the same Convo session. This requires keeping track of the chosen Topics for any given session (a minimal sketch of that bookkeeping follows below). This could have been implemented in the prototype, but due to other things needing to be done, it wasn’t.
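
A minimal sketch of that per-session bookkeeping, assuming Topics are identified by string ids:

```python
class ConvoSession:
    """Tracks which Topics have already been chosen during one Convo session."""

    def __init__(self):
        self.chosen = set()

    def choose(self, topic_id):
        self.chosen.add(topic_id)

    def available(self, topic_ids):
        """Filter out Choices already taken so the Convo doesn't cycle back to them."""
        return [t for t in topic_ids if t not in self.chosen]
```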

Unique and non-unique items

In the prototype, unique and non-unique items were delineated for the purpose of figuring out how to make them arrange themselves into icons in the inventory. However, the implementation was not totally complete. Unique items should not have any ‘quantity’, but this was not enforced in the prototype; e.g. assigning TINV bin:#1x1,shokgun=1 yields a numeral above the Shokgun icon. In the real game, unique and non-unique items should be enforced, especially in regards to how items are counted.
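
A minimal sketch of that enforcement at the point where a TINV assignment is applied; the set of unique item ids is hypothetical:

```python
# Hypothetical master data: item ids flagged as unique never carry a quantity.
UNIQUE_ITEMS = {"shokgun", "haxbox", "cbomb"}

def add_item(inventory, item_id, quantity=1):
    """Add an item to an inventory dict, enforcing that unique items never stack."""
    if item_id in UNIQUE_ITEMS:
        if item_id in inventory:
            raise ValueError(f"unique item '{item_id}' already present")
        inventory[item_id] = None          # no quantity, so no numeral over the icon
    else:
        inventory[item_id] = inventory.get(item_id, 0) + quantity
```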

AI vs FSM

So far, the event handler system I’ve created works well with what I’ve required it to do. In fact, I don’t want to overcomplicate the AI, but I am still investigating whether an FSM might be tidier.

CX and ENX integration to INV

CX and ENX are not integrated as INV items. I’m wondering whether this is needed. First, in the prototype, CX cannot be obtained in any way except through Merchant Trade. The reasoning is that CX is an electronic currency, so you can’t really ‘loot’ CX. But if it so happens that there’s a narrative justification for it, then CX should be lootable. On the other hand, I could introduce a ‘credit booster‘ item which loads the CX attribute just like powercaps load ENX.

Robot LOS reaction time

The introduction of an AI-based ‘downtimer’ created an apparent random delay in the reaction time of Robots when they wanted to shoot or provide some reaction. This seemed to be a desirable effect. It also improved performance by not hitting the AI each tick.

Downtimers and Uptimers

Downtimers and Uptimers were a specific AI feature that the game engine connected to. When an AI variable prefixed with downtimer or uptimer was created, the engine would update its value every 0.25s. If it were a downtimer, it would subtract 0.25; if it were an uptimer, it would add. Uptimers didn’t feature in any AI at all because downtimers didn’t need to check against a custom value. Downtimers raised an event called ondowntimer when they reached 0 or below.

In Unity, I think it may be possible for the AI to instruct the creation of a Timer class. This Timer class would then raise events when it expires. The AI can configure the Timer class for other special purposes if need be.
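
An engine-agnostic sketch of that Timer idea (in Unity it would likely be a C# coroutine or MonoBehaviour; the callback name here is a placeholder):

```python
class DownTimer:
    """Counts down in fixed ticks and raises a callback once when it expires."""

    TICK = 0.25  # seconds per update, matching the prototype's downtimer interval

    def __init__(self, duration, on_expired):
        self.remaining = duration
        self.on_expired = on_expired
        self.expired = False

    def update(self):
        """Call once per tick; fires on_expired when the timer reaches 0 or below."""
        if self.expired:
            return
        self.remaining -= self.TICK
        if self.remaining <= 0:
            self.expired = True
            self.on_expired()

# e.g. an AI node could set up a 1.5 s reaction delay before a Robot fires:
# timer = DownTimer(1.5, on_expired=lambda: print("ondowntimer"))
```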

Options for stealth: dive and roll

Dive and roll, as in Crusader, gives a good option for dashing between openings.

  • Could be a roll for success against detection
  • Could be always success if not within attack FOV, even if within nominal FOV
  • May have a noise penalty (Agility roll) so that the Robot may be attracted to face the area.
  • Has a cooldown, so you can’t keep on using it.

Options for stealth: shadows

Shadows, if nothing else, should be implemented. Shadows enable the Player to hide better.

Dynamic lighting may, or may not play a part in this, though I think it may be too complicated to do so.

Options for combat

Some combat options for a more aggressive game style could be added.

  • Grenades were planned but not implemented.
  • Grenades are of 2 types: lobbed and discus. The discus type can be positioned around corners. The lobbed grenades can only be thrown overhead.
  • Although area effect was implemented in the prototype, I locked out the weapon that used it.

Reconsideration of ActionStrip user-friendliness

This refers to how obvious interactibility should be for scene elements.

  • Should we mouseover the element before the Astrip is valid (like the current implementation)?
  • Should we display all interactables on the SPACE keypress and then have the Player move the mouse and the Astrip icons pop up dynamically based on the mouseover?

I received feedback about this:

  • Mouseover should bring up a default interaction icon.
    • NPC – talk, or if not applicable, look.
    • Scene elements – search, if searchable, or look.
    • Robots – none, as they are attackable.
  • When LMB after mouseover, then default action is done.
  • If long LMB after mouseover, then potentially more options are displayed.
  • RMB over mouseover does nothing, as this is the fire button.

Area look-ahead, limited or unlimited range

This is the MMB look-ahead feature. Perhaps the MVS or at least the Longsense module could make a comeback so that it’s possible to modify this feature. Right now the look-ahead is unlimited, but this may not sit well. Not sure.

Shock effectivity

Shock is very effective, actually. The Shokgun feels like it’s not meant to ‘kill’ Robots, but just to shock them enough to get away, which has a nice feel to it.

Help tooltips

What are the tooltips that can help introduce the gameplay mechanics?

Pickups

Pickups are necessary especially in regards to the bomb placement. The prototype did not implement TMX-originated pickups for simplicity, though this should be implemented in the game.

Attack FOV vs Nominal FOV

This refers to the FOV needed by a Robot to attack. Let’s say this is the fire cone of the weapon. The Nominal FOV refers to the actual sighting FOV. A Robot might see you because of a high Nominal FOV, but until it faces you within its Attack FOV, it is unable to shoot.
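
A minimal sketch of the two-cone check, assuming 2D direction vectors and full cone angles in degrees (the FOV values are illustrative):

```python
import math

def within_fov(facing, to_target, fov_degrees):
    """True if the target direction lies inside a cone of the given full angle."""
    angle = math.degrees(math.atan2(to_target[1], to_target[0]) -
                         math.atan2(facing[1], facing[0]))
    angle = (angle + 180) % 360 - 180          # normalise to [-180, 180)
    return abs(angle) <= fov_degrees / 2

def robot_senses(facing, to_player, nominal_fov=180, attack_fov=60):
    """Returns (can_see, can_shoot): seeing uses the Nominal FOV, shooting the Attack FOV."""
    can_see = within_fov(facing, to_player, nominal_fov)
    can_shoot = can_see and within_fov(facing, to_player, attack_fov)
    return can_see, can_shoot
```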

Cooldown/heat-up period for certain actions/items

  • The C-Bomb required some time to set
  • The Haxbox required some time to set as well as a cooldown period before it could be used again.
  • Glitters had a duration
  • New movements, such as diving/rolling and dash may also have a cooldown period
  • Meds or dope could be restricted

C-Band GUI redesign, more icons, less bulk

More action icons were put along the C-Band, causing it to expand horizontally. This made the interface bulkier than I originally intended. More icons would be added to include the use of dope and potentially other actions, so this redesign is necessary.

Move on Intended Action location before action

The prototype featured moving to an exit tile when an exit element was clicked. But this behaviour did not extend to the other actions, such as talking or searching.

Removed or unimplemented features

  • Map Layers, MVS, Stacker (removed)
  • Poxbox (unimplemented)
  • Dope use (unimplemented)
  • Armour mechanic (unimplemented)
  • Nixing (unimplemented)
  • SCAMs (unimplemented)
  • Merchant price adjusters (unimplemented)
  • Confuse effect (unimplemented)

Player concept design

The new narrative might see the Player’s backstory as an engineer. The current Player design doesn’t look like an engineer or anything particularly ‘technical’.

Re-evaluate Powercaps charge and Meds healing

The powercaps seemed to recharge weakly, while meds seemed to heal a great deal. This needs some thought.

Robot positional persistence vs movement in the background

This one is a tough one. Should Robots be virtually moving around? I think this would be a rather heavy overhead. Perhaps it could be faked: before a given time threshold, Robots persist in their locations. After a certain game time, their positions can be moved to different locations within a given radius, giving the impression that they have moved there while the Player was out of the area.
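
A minimal sketch of that faking, assuming tile coordinates, a hypothetical time threshold, and a wander radius (a real version would also clamp to walkable tiles):

```python
import random

def fake_offscreen_position(robot_pos, elapsed_game_time, threshold=300.0, radius=4):
    """Keep the Robot where it was until the threshold passes, then nudge it within
    a radius to fake background movement while the Player was out of the area."""
    if elapsed_game_time < threshold:
        return robot_pos
    x, y = robot_pos
    return (x + random.randint(-radius, radius),
            y + random.randint(-radius, radius))
```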

Spawning, random TINV, random attributes?

This was only partially worked out in the prototype. The spawned entity drew from a fixed TINV db. In the real game, the spawned entities should be able to randomise their inventory.

Save games

Save games are easy in C2, but I don’t think I’ll have the same ease in Unity. I think the first thing that must be taken care of is the ability to save games in Unity.

Debugging requirements

Most of the debugging requirements involve the checking of Accomps (conditions) as the Player progresses.

  • Ability to configure inventory, Accomps, during run-time using presets.
  • Ability to configure location of Player as well using presets.
  • The abovementioned configuration should reside in one preset system.

 


Prototyping – A Killing House

After six months (almost seven!) I can say that I’ve reached what is effectively the ‘end of the road’ insofar as the Citizen prototype is concerned.

I’d like to recap what I am trying to accomplish in this prototype.

I am very aware of my inexperience; I’ve never done a game before, and so I don’t know what works and what doesn’t. I only have my own taste and sensibility for what it’s worth. Though I believe (or hope) I can pull it off, I don’t know what it really is, so this prototype has been an effort to put all the ideas that I’ve come up with into one package. I thought to myself that creating systems for a game is relatively straightforward. But can I integrate any number of those systems with each other, and have them work? If I can manage that in a prototype, then I would have the blueprint for an actual game.

I created this prototype with that in mind. And having experienced the troubles of illogical inconsistencies in systems and bugs of a generic sort, it feels like I’m about to ‘ship’ a game without the danger of flagellation from Steam reviews.

This is my version of the Killing House, where I rehearse and train for the game I’m going to make. I’m using live rounds and making it as realistic as possible because I want to, as much as possible, be prepared for the task up ahead. I put as much effort into it because if I can get this to a level that I’m happy with, then it’s quite possible that I can pull it off when I make the final game in Unity.

Making the prototype itself was gruelling enough. I don’t remember burning so much midnight oil since I was a fledgling 3D artist more than a decade ago. Not every idea or implementation made it to the end. But none of those aborted/abandoned things was wasted, given the experience of trying to make them.

For example, the game narrative script that I wrote at the beginning is now completely different. However, it served as a springboard for identifying and designing the systems I need to have in place, so it was still a profitable and necessary first step.

There were other ideas, like the ‘Multi-Vision System’, ‘Stacker’, and ‘Map Layers’, that I dreamed up that had to be thrown away due to being either too oblique, too complicated, or just plainly stupid.

Also, I had created and rendered NPC assets which were not used in the final version of the prototype due to a change in narrative.

While there are clear advantages to working in Unity — certain bugs and limitations that I’ve encountered in C2 will no longer be relevant — I’m aware that I’m facing a world of hurt in Unity-land, too. Naturally, I will take advantage of what Unity has to offer as well, such as a full 3D environment (something which I’m comfortable with), dynamic lighting, and a robust pathfinding system, among others. All the while, I must remember to keep it simple. It’s possible that I may have to dumb down the game even more in Unity, to keep it within my capabilities.


Sights…


Towards the end of development, I started working on an introductory cinematic, which was a fun thing to do. It wasn’t intended to accurately represent all the details of the story but only to give a good idea of what it’s all about. Here are some screenshots:






Here are some additional screenshots demonstrating how it looks.

Sample environment. On-screen interfaces (bottom, upper-left, upper-right)
Sample inventory screen. Item descriptions, icons.
Sample dialogue panel (Convo). Portrait.

I posted a gameplay sampler which demonstrates some of the systems in action.


… and sounds!


I’ve also cut up some audio (re-learning Cubase again!) to use with the prototype. I may post a complete playthrough in the future, perhaps after some of my friends play it first themselves and hopefully give some feedback.

 

Prototype sampler # 1

So, finally, I’ve completed my last major developmental milestone and I think that deserves a post. 🙂

The above video simply shows the playthrough of some of the game mechanics. The quest has not been fully written yet, and though all of the game mechanics are working, they’re not readily apparent without some introductions. The video is mainly to see how adventure and combat are blending as one piece.


I’ve slaved away on numerous aspects to accomplish all the major milestones I had set out to do. There were a few additions to these, but they were minor changes, and all part of the iterative process of figuring out the closest gameplay mechanics I wanted to implement in Unity.

Though my work isn’t done yet — there are still UI issues I need to sort out — and there are still some niggling bugs present in the prototype, it is largely playable. By ‘playable’ I mean you can run around, talk to people, shoot Robots, and get shot back. You can plant a bomb, blow it up, and you can blow yourself up in the process as well. You can ‘pox’ a powerlet to get energy, and you can buy and use meds to heal yourself. Frankly, a few months ago I didn’t think I could end up saying all this in one paragraph.

Most of the joy, and fear, of this prototype has been the implementation of a bespoke AI graph framework. It’s a joy because it actually works; it’s a fear because it sometimes feels too deep for me to always grasp its innards when some things don’t go right.

I’ve gone through mounds of halved/quartered A4 to-do sheets with heaps of orange highlighter marks signifying all the big and small tasks or goals I needed done. There are so many disparate systems working that if I didn’t have a calendar tracking my progress, I wouldn’t be able to grasp what I myself had accomplished.

For example, here’s a quick run-down of the aspects.

  • Asset creation.
    • I’ve heavily used Janus to break out animated sequences. Using FORFILEs — a Janus looping construct that iterates through the lines of a file — creating an animated character, such as the Player character, was simple, as I needed only to set up one angle and let Janus break out all the other 15 directions. Variable frame ranges for a particular animation were also taken care of using the same principle.
    • Janus was an important cog in the making of the prototype because of the amount of iterations for the scenes. An element would sometimes become designated as an interactable element, which had to be split from the main scene and rendered separately.
    • NPC/Robot portraits had a separate animation and render, and specifically had to go through post-processing.
  • Tiled was used in making the maps, and Rex’s TMX Importer was used to carry that information into C2. I had to make some modifications to the TMX Importer to enable the retrieval of the Tiles and Objects image source. Tiled enabled me to experiment and implement concepts by introducing certain datatypes for the engine’s use, which informs how I may implement the maps in Unity.
    • This had to be balanced with Game Data Documents, which are composed of text-based files of varying structures. These Data Documents are the immutable attributes used by the systems. In the beginning, the data would come from different sources; one would be defined in the TMX, while others were defined in a CSV table. As I progressed, I refined the categorisation of data.
  • The in-game Inventory system was one hell of an undertaking. The Inventory system is connected to the Trade system, which is further split into two variants: the Container system and the Merchant system; the former simulates the ability to store items in ‘containers’, and the latter simulates buy/sell transactions with NPCs. Merchant data, like price, buy/sell limitations, and price adjusters, are tied to tables and the NPC entity as defined in the TMX.
  • While the code related to movement was entirely specific to C2, I nevertheless had to overcome its issues to get a working prototype. Pathfinding needed some optimisations, and behaviours related to the physicality of entities needed to be coded in relation to the established movement behaviours. This aspect will largely be replaced by Unity’s navmesh, in addition to a target grid overlay that I may custom-build myself.
  • The Action Strip (a.k.a. Astrip) system — the method for interacting with elements in the game — was developed to be authored using text files (like most systems in the game). It serves as the hub for all ‘adventure’ interactions. It was also designed to be generic so that the display of interaction results can be tweaked directly from the text file. For example, a ‘look’ action at an object may initiate a display of a description, or the narrative box, or initiate a dialogue, or anything else that has been allowed in the engine.
  • The Convo system was another early development. Some additional Python code was necessary to convert the authored .graphml files (using yEd) into a Markdown format (for readability in a text editor). However, the development of the AI graph framework proved that the Convo system was inferior, though both used node graphs. Although the Convo system has not yet been upgraded to use the same (or similar) framework of AI, this would eventually be done when the port to Unity is made.
    • The Convo system could be called by the Astrip system.
    • The Convo system also allows implicit trade of items. For example, if through speaking with an NPC it gives you an item to be used, the Convo system communicates with the Inventory system and places the item in the Player’s inventory.
  • The AI system used the TGF format to represent a nodal graph. Then an in-game parser and callback/event handler framework handled the execution of the AI graph on a per-Robot basis.
    • The AI system is connected to other systems, such as the Inventory, the Trade, Convo (dialogue system) and of course, NPCs/Robots themselves.
    • Using the AI system, a Robot can accost you to do a contraband check, which was one of the first implementations of the AI (even before combat).
    • The AI can contextualise its own dialogue with the Player, changing it from a contraband check to an arrest, for example.
  • Lookups for gameplay values, such as hit-chance and the effect of skills on gameplay, were done using a non-linear interpolation accomplished with Open Office Calc’s cell formulas. This allowed me to tweak lookup values using functions as opposed to doing it individually, per cell! The application conveniently exports to CSV directly, so no other intermediate process was needed to get it into C2.
  • The Combat system is closely tied with the AI and is comprised of many factors, a few of which include:
    • Alert level behaviour of Robots; certain actions at a certain alert level mean different things to Robots. For example, running or crouching is OK when the Alert Level is 0. But when the Alert Level is 1, running or crouching is interpreted as suspicious and the Player will be fired upon.
    • Behaviour of Robots differ from one another. Some guard, some patrol, some check for contraband.
    • Offensive component
      • Player accuracy skill
      • Weapon attributes such as range, max_range, and weapon dropoff (weapon damage and chance to hit are affected)
      • Rate of fire
      • Dual-wielding of weapons
      • Crouching increases accuracy
      • Bomb placement and detonation
      • Shock effect; certain weapons may stun a Robot for a period of time.
      • GMAC system, which is a modifier on top of a typical random number generator.
    • Defensive component
      • Use of cover for defence
      • Crouching reduces profile, increases Player defence against being hit
      • Running increases Player defence against being hit, but only if running perpendicular to the Robot.
    • Stealth component
      • Crouching behind low obstacles for stealth
      • Noise level when running; Robot hears you!
      • Glitters are Electronic Counter-Measures and make the Player invisible for a short period of time.
    • Hacking powerlets to get more energy, and the associated success rates, and the penalties for failure
    • And others that are too lengthy to include, but you get the idea…

Normally, a prototype is small, and its gameplay represents the root of what the game is about. Sometimes, a prototype is created to determine whether a gameplay idea works or not, or whether people like it enough.

But I built the prototype as a technical reconnoitre of what I’m going to come up against. You can say I was also trying to form a beachhead at the same time. I don’t know if people would like it, but I can’t be dissuaded either way; I’ve gone this far solely on the excitement of taking a childhood game to my present.

But a prototype is also made to present the gameplay as clearly as possible, so that if the prototype is fun to play, then the real thing would be as fun, if not more fun, to play. The problem I have with Citizen is that it is an adventure as much as it is a shoot-em-up game. The fun in 2400 AD, Fallout, or Shadowrun, for example, is the fact that it is an adventure. But I find it difficult to express the full adventure by doing a half-adventure. I think that’s due to my lack of experience writing for games. At the same time, I think that I’ve been focused so much on the technical aspects that I’ve not really dug as deep as I should into the potential of the narrative. I’ve been working on the framework on which I hope to base an adventure story (of which I have a first draft already), and I think that this prototype, as it stands, should just be seen as the prototype for the framework.

More to come.