
Prototyping – What have I learned?

This post is for technical reference.


If there’s one thing I need to consider seriously, it’s how to consolidate the data that the systems use.

The main data holders are Accomps, INV, TINV, and Entity State.

The problem of Entity State, as I mentioned in one dev video, is whether Entity State can be persistent. If it can, then it is a valid data holder. If not, we nix it.

Accomps spelling mistakes

Creating and referencing Accomps was an issue because an Accomp could be created and referenced anywhere and spelling mistakes could occur in the referencing between any of the systems. I don’t think there’s a real workaround for this except to organise the Accomps properly.

Accomps master list

If the Accomps are created in a nodal graph, they may be easier to spot within the yEd interface. However, Accomps may be ‘declared’ in any document. Thus it is useful to have a master Accomps document to reference from. But this needs to be constantly updated. If a naming convention is strictly used in the creation of Accomps, then a script could be made to crawl all the documents to see where Accomps exist and update the master list.
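As a sketch of that crawler idea (assuming a hypothetical naming convention where every Accomp is prefixed acc_, and that the documents are plain text), something like this could rebuild the master list:

```python
import re
from pathlib import Path

# Hypothetical convention: every Accomp identifier is written as acc_<name>.
ACCOMP_RE = re.compile(r"\bacc_[a-z0-9_]+\b")

def crawl_accomps(root, extensions=(".txt", ".tgf")):
    """Scan every document under root; map each Accomp to the files it appears in."""
    master = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in extensions:
            continue
        for name in ACCOMP_RE.findall(path.read_text(encoding="utf-8")):
            master.setdefault(name, set()).add(path.name)
    return master
```

The resulting dict doubles as the master list and records every document each Accomp is referenced from, which also surfaces spelling mistakes as one-off entries that appear in only a single file.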


INV was queried in the Condition (# in SNTX notation). This is fine for now, but TINV should also be supported. In this case, the TINV refers to self, and TINV is always persistent.

Systems that reference data

Each of these data holders is queried and set in all of the systems:

  • Convo
  • AI
  • Astrip
  • Trade (not implemented in the prototype)

How to consolidate?

  • Convo should be re-designed to use the TGF format. This means connections should be taken into consideration, and perhaps edge labels should identify the relationship between nodes.
  • Conditions using TGF could be represented as nodes rather than in SNTX notation.
  • Astrip is currently expressed only in SNTX notation, and should be converted to TGF, too. Obviously, this requires a complete redesign of the Astrip nomenclature, but since a graph is the goal, we can use branching for Conditions and connect to the same nodal directives of dofunc, doaccomps, etc.
  • AI uses TGFs fully, but lacks dofuncs and doaccomps and dostate. AI uses in-game ‘DoFunc’ functions in order to accomplish specific things, which may or may not involve executing dofunc, doaccomps, or even dostate (though that’s not really being used due to the aforementioned persistence issue). But I can see where it may be useful to issue a directive through the AI.
    • When the Fixers were haxed, it was the AI that executed a special (but generic) DoFunc that added an Accomps saying it was hacked. It did this under the onhaxed event handler.
    • The improvement I see in this is to mirror the handling of the event to the Astrip. See below.

Re-purpose SNTX

SNTX’s heavy lifting is mostly in the areas of:

  • Entity reference
  • Conditions

Entity reference is something like ==poxingalley.bin. In a TGF, there are no node types, so referencing an entity would still require a keyword to represent the type of node being processed, so I think prefixes like the == symbol would still need to exist.

Conditions are like ?$!entry_desc_shown. Like entity referencing, this needs to remain to identify the node as a condition. However, unlike the original SNTX notation, which features the entity reference and condition (and directive) in one line, the nodal graph will split entity reference, conditions, and directives as separate nodes, which makes for a more readable graph, and makes it possible to branch and re-use other subgraphs.

The image above shows a possible way to go about it. The entity is identified first, then contextualised immediately with the relevant action that operates on it (look). Then the Condition is checked. Edges branch from that Condition check with labels 0, 1, and -1, denoting False, True, and Nominal (Default). Nominal means that the directive will fire regardless of the Condition. This -1 edge may not be necessary, however, because it should be possible to connect the ++look node directly to the ~doaccomps node and have the system grok it.

So the SNTX notation of ==, ++, ?, and ~ is still the same, but is re-purposed in TGF to directly indicate the type of node being processed. Also, we have the added 0, 1, and -1 edge labels.
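A minimal sketch of reading those re-purposed prefixes off TGF node labels (the function name and prefix table are my own illustration, not existing code):

```python
# Classify a TGF node label by its re-purposed SNTX prefix.
# Per the notation above: == entity, ++ action, ? condition, ~ directive.
PREFIXES = {"==": "entity", "++": "action", "?": "condition", "~": "directive"}

def node_type(label):
    """Return (kind, payload) for a node label, longest prefix first."""
    for prefix, kind in sorted(PREFIXES.items(), key=lambda p: -len(p[0])):
        if label.startswith(prefix):
            return kind, label[len(prefix):]
    return "unknown", label
```

The 0/1/-1 edge labels would then live on the edges between a condition node and its downstream directive nodes, rather than on the nodes themselves.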

Astrip and event handlers

This is a narrative tool, not a generic one, so we’re operating upon set entities, not spawned ones.

Why not deal with event handlers in Astrip just as we do in AI? When an event is called in the AI, it is handled in the context of the AI. And as mentioned above regarding Fixers modifying the Accomps, it seems untidy to do it that way. What if, then, we query for an event handler in the Astrip document and then process the directive from there?

Here, the ++ symbol abstractly represents an ‘actionable’ context, so it makes sense that it can also be used as an event handler name.

So, again, the narrative construction for Accomps (in particular) is done in the Astrip, rather than the AI, which helps consolidate the actions to one place.


The above reminds me about namespaces, names and types, and how this must be designed carefully. The namespace of e.g. poxingalley.fixer refers, firstly, to the name of the entity. However, it is possible that the name doesn’t exist in the scene. When that happens, then the type of the entity is queried. In this case there would be a match.

This is used to create entity references to type rather than name, and so some care must be taken to name (Astrip name, that is) the entities uniquely from their type.

Accomps scripting, Narrative scripting, Triggers

Introduced at a later stage was Accomps scripting, which monitors the Accomps and then executes a directive. However, this wasn’t terribly useful.

It was more useful to shift the work to Astrip since Astrip handles a lot of interaction. I’ve already talked about Player interaction using Astrip, but Astrip handles Triggers, too, which are the primary handlers of the narrative. So if TGF is implemented, Triggers could be easier to monitor and handle, especially since the Accomps keywords would be in their respective documents.

Convo improvements

In addition to using TGF for Convo, there are certain workflows that need to be addressed.

  • Convo should have a way to sort which Choice comes first
  • Feedback from tester: default SPACE to advance the conversation instead of clicking on the Choice. In some cases a [...] is presented to the Player, and the SPACE bar could be used to click on this implicitly.
  • The above could be improved in a way to make the Convo navigable by keyboard, so that a selection halo appears on the active Choice, and the SPACE bar (or ENTER) may be used to select the haloed Choice. This is also in line with the first item of having the ability to sort Choices, so that the most ‘obvious’ Choice is haloed first.
  • The Convo should feature the ability to not cycle back to a Choice that has already been chosen in the same Convo session. This requires keeping track of the chosen Topics for any given session. This could have been implemented in the prototype, but due to other things that needed doing, it wasn’t.

Unique and non-unique items

In the prototype, unique and non-unique items were delineated for the purpose of figuring out how to arrange them into icons in the inventory. However, the implementation was not totally complete. Unique items should not have any ‘quantity’, but this was not enforced in the prototype; e.g. assigning TINV bin:#1x1,shokgun=1 yields a numeral above the Shokgun icon. In the real game, the distinction between unique and non-unique items should be enforced, especially in regards to how items are counted.


So far, the event handler system I’ve created works well with what I’ve required it to do. In fact, I don’t want to overcomplicate the AI, but I am still investigating whether FSM might be tidier.

CX and ENX integration to INV

CX and ENX are not integrated as INV items. I’m wondering whether this is needed. First, in the prototype, CX cannot be obtained in any way except through Merchant Trade. The reasoning is that CX is an electronic currency, so you can’t really ‘loot’ CX. But if it so happens that there’s a narrative justification for it, then CX should be lootable. On the other hand, I could introduce a ‘credit booster’ item which loads the CX attribute just like powercaps load ENX.

Robot LOS reaction time

The introduction of an AI based ‘downtimer’ introduced an apparent random delay in the reaction time of Robots when they wanted to shoot or provide some reaction. This seemed to be a desirable effect. And it also made performance better by not hitting the AI each tick.

Downtimers and Uptimers

Downtimers and Uptimers were a specific AI feature that the game engine hooked into. When an AI variable prefixed with downtimer or uptimer was created, its value would update every 0.25s. If it were a downtimer, it would subtract 0.25; if it were an uptimer, it would add. Uptimers didn’t feature in any AI at all because downtimers didn’t need to check against a custom value. Downtimers raised an event called ondowntimer when a downtimer reached 0 or below.

In Unity, I think it may be possible for the AI to instruct a creation of a Timer class. This Timer class would then raise events when it expires. The AI can configure the Timer class for other special purposes if need be.
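A minimal sketch of that Timer idea (the class and names are mine, not an existing Unity API), assuming the game loop ticks it every 0.25s:

```python
# Sketch: a Timer that counts down by a fixed interval per tick and raises an
# event (callback) when it expires, mirroring the old downtimer's ondowntimer.
class Timer:
    def __init__(self, duration, on_expired, interval=0.25):
        self.remaining = duration
        self.interval = interval
        self.on_expired = on_expired
        self.expired = False

    def tick(self):
        """Advance the timer by one interval; called from the game loop."""
        if self.expired:
            return
        self.remaining -= self.interval
        if self.remaining <= 0:
            self.expired = True
            self.on_expired()
```

An uptimer-style variant would add the interval instead and compare against a configured threshold, which is the special-purpose configuration mentioned above.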

Options for stealth: dive and roll

Dive and roll, as in Crusader, gives the Player a good option to dash between openings.

  • Could be a roll for success against detection
  • Could be always success if not within attack FOV, even if within nominal FOV
  • May have a noise penalty (Agility roll) so that the Robot may be attracted to face the area.
  • Has a cooldown, so you can’t keep on using it.

Options for stealth: shadows

Shadows, if nothing else, should be implemented. Shadows enable the Player to hide better.

Dynamic lighting may, or may not play a part in this, though I think it may be too complicated to do so.

Options for combat

Some combat options for a more aggressive game style could be added.

  • Grenades were planned but not implemented.
  • Grenades are of 2 types: lobbed and discus. The discus type can be positioned around corners. The lobbed grenades can only be thrown overhead.
  • Although area effect was implemented in the prototype, I locked out the weapon that used it.

Reconsideration of ActionStrip user-friendliness

This refers to how obvious interactibility should be for scene elements.

  • Should we mouseover the element before the Astrip is valid (like the current implementation)?
  • Should we display all interactables on the SPACE keypress and then have the Player move the mouse and the Astrip icons pop up dynamically based on the mouseover?

I received feedback about this:

  • Mouseover should bring up a default interaction icon.
    • NPC – talk, or if not applicable, look.
    • Scene elements – search, if searchable, or look.
    • Robots – none, as they are attackable.
  • When LMB after mouseover, then default action is done.
  • If long LMB after mouseover, then potentially more options are displayed.
  • RMB over mouseover does nothing, as this is the fire button.

Area look-ahead, limited or unlimited range

This is the MMB look-ahead feature. Perhaps the MVS or at least the Longsense module could make a comeback so that it’s possible to modify this feature. Right now the look-ahead is unlimited, but this may not suit well. Not sure.

Shock effectivity

Shock is very effective, actually. The Shokgun feels like it’s not meant to ‘kill’ Robots, but just to shock them enough to get away, which has a nice feel to it.

Help tooltips

What are the tooltips that can help introduce the gameplay mechanics?


Pickups are necessary especially in regards to the bomb placement. The prototype did not implement TMX-originated pickups for simplicity, though this should be implemented in the game.

Attack FOV vs Nominal FOV

This refers to the FOV needed by a Robot to attack. Let’s say this is the fire cone of the weapon. The Nominal FOV refers to the actual sighting FOV. A Robot might see you because of a high Nominal FOV, but until it faces you within its Attack FOV, it is unable to shoot.

Cooldown/heat-up period for certain actions/items

  • The C-Bomb required some time to set
  • The Haxbox required some time to set as well as a cooldown period before it could be used again.
  • Glitters had a duration
  • New movements, such as diving/rolling and dash may also have a cooldown period
  • Meds or dope could be restricted

C-Band GUI redesign, more icons, less bulk

More action icons were put along the C-Band, causing it to expand horizontally. This made the frame bulkier than I originally intended. More icons would be added to include the use of dope and potentially other actions, so this redesign is necessary.

Move on Intended Action location before action

The prototype featured moving to an exit tile when an exit element was clicked. But this was not reflected in the other actions, such as talking or searching.

Removed or unimplemented features

  • Map Layers, MVS, Stacker (removed)
  • Poxbox (unimplemented)
  • Dope use (unimplemented)
  • Armour mechanic (unimplemented)
  • Nixing (unimplemented)
  • SCAMs (unimplemented)
  • Merchant price adjusters (unimplemented)
  • Confuse effect (unimplemented)

Player concept design

The new narrative might see the Player’s backstory as an engineer. The current Player design doesn’t look like an engineer or anything particularly ‘technical’.

Re-evaluate Powercaps charge and Meds healing

Powercaps seemed to recharge weakly, while Meds seemed to heal a great deal. This needs some thought.

Robot positional persistence vs movement in the background

This one is a tough one. Should Robots be virtually moving around? I think this is an overhead that’s rather hard to justify. Perhaps it could be faked: before a given time threshold, Robots persist in their locations. After a certain game time, their position can be moved to a different location within a given radius, giving the impression that they have moved there while the Player was out of the area.

Spawning, random TINV, random attributes?

This was only partially worked out in the prototype. The spawned entity drew from a fixed TINV db. In the real game, the spawned entities should be able to randomise their inventory.

Save games

Save games are easy in C2, but I don’t think I’ll have the same ease in Unity. I think the first thing that must be taken care of is the ability to save games in Unity.

Debugging requirements

Most of the debugging requirements involve the checking of Accomps (conditions) as the Player progresses.

  • Ability to configure inventory, Accomps, during run-time using presets.
  • Ability to configure location of Player as well using presets.
  • The abovementioned configuration should reside in one preset system.



Workflow: Interaction triggers

In the RND test project one of the most important systems I developed was the interaction trigger system.

This system is simply a method of binding an action (ie “Interact”) and a specifier, then wrapping them to make a ‘broadcast signal’.

This broadcast signal is then sent. Because the broadcast signal can optionally contain a ‘target’, only those matching the target description can be made to respond to the signal.

The importance of a system like this is the ability to make level-specific scripts. I’ll give a test case from the RND project.

  • In Tiled, a marker is created with a name. This is the trigger name, which can be anything as long as it can be uniquely identified.
  • In C2, a ‘On GridMove reach target’ action is bound so that it wraps the reaching of the tile with the trigger name of the marker it has reached.
  • On reach target, the trigger is sent to a BroadcastTrigger function, which accepts the trigger name, and the intended target of the trigger, if any. The target is comma-delimited, so multiple targets can be specified.
  • The BroadcastTrigger function looks at the targets, tokenises them, and then sets the ‘receivedtrigger’ variable of each of the instances that are able to accept triggers. It applies the trigger only to the targets specified, or to all instances if no target was specified.
  • Note that a family called f_trigger_receiver was made and the receivedtrigger variable is called ‘f_receivedtrigger’ in order that BroadcastTrigger can efficiently send it to those concerned.
  • In the level-specific script, the intended target is waiting for its specific f_receivedtrigger to change. BroadcastTrigger would have changed it.
  • When it does, it fires off the events there.
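The BroadcastTrigger steps above can be sketched like this (the receivers stand in for the f_trigger_receiver family, modelled here as plain dicts):

```python
# Sketch of BroadcastTrigger: tokenise the comma-delimited target string and
# set f_receivedtrigger on each matching receiver, or on all receivers when
# no target is specified.
def broadcast_trigger(trigger_name, trigger_target, receivers):
    """receivers: list of dicts with 'f_name' and 'f_receivedtrigger' keys."""
    targets = [t.strip() for t in trigger_target.split(",") if t.strip()]
    for r in receivers:
        if not targets or r["f_name"] in targets:
            r["f_receivedtrigger"] = trigger_name
```

The level-specific script then only has to watch its own f_receivedtrigger for a change, exactly as described above.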

In addition to the trigger, level-specific behaviours are specified, and can override the default AI of any object. This is important because the scripting is done in a separate event sheet (ie logic) and not predefined in the main logic.

Now, other actions are bound, as needed, to the BroadcastTrigger. For example, in the RND project, the On reach target trigger condition was the first one I implemented. But quickly afterwards, it was easy enough to bind the TalkToNPC function, or the InteractWithNPC function to the broadcast.

Of course, the trigger name changed. In the TalkToNPC trigger, the trigger name was "talk "&cmover.name, in which the ‘talk’ keyword was appended with the actual variable name of the NPC that was talked to. The name of the NPC talked to was embedded in the signal, and no target was specified because the logic was that either the player or the game world was the receiver. But it is also possible, or even more beneficial, to put the recipient of the ‘talk’ action into the trigger target, as I did with the next implementation.

I implemented an ‘InteractWithNPC’ action in the same way, but included the recipient of the ‘interact’ action as the target. In the level script, the intent was to add to the Accomps to keep track of who had been interacted with.

BroadcastTrigger is just a concept, but it seems to be a very flexible one, as I am currently using it to design a generic kind of interaction behaviour between a single ‘Useitem’ action and a host of different possible objects, each with their varying results. This is why BroadcastTrigger is useful: behaviours are defined in the event sheet, and can be contextual as well as part of the main logic.


Thoughts on triggers

On the RND test, here are some thoughts on triggers.

Triggers are broadcast by a function. Triggers may have a targeted object/instance. In order to target any potential object, they’re put into a Family, which I’ll refer to as f_trigger_receiver (f_tr, for short).

There are 2 parts to triggers. The ‘main’ logic, and the ‘map’ logic. The main logic handles generic logic of triggers.

Main logic


The Family for all trigger receivers. Requires f_name, and f_receivedtrigger variables. f_name is the name of the entity which a trigger will use to refer to this instance. f_receivedtrigger is the string identifying the trigger that has been sent out.

On player GridMove reach target

Fired every time the player moves into a tile. This queries if a trigger area was stepped on.

Also, the player must have a current_trigger variable which keeps track of the trigger area it is on at any given time. This prevents re-triggering when the trigger area covers adjacent tiles. It also makes it possible to find out whether the player has stepped out of a trigger.
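A sketch of that current_trigger bookkeeping (names are mine; the trigger area under the new tile is passed in as a string, or None when there is no area):

```python
# Sketch: track the trigger area the player currently stands on, so a trigger
# spanning adjacent tiles fires once on entry, and stepping out is detectable.
def step_on_tile(player, area_name):
    """Return the enter/exit events produced by moving onto a tile."""
    events = []
    if area_name != player.get("current_trigger"):
        if player.get("current_trigger"):
            events.append(("exited", player["current_trigger"]))
        if area_name:
            events.append(("entered", area_name))
        player["current_trigger"] = area_name
    return events
```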


A function which handles the send off to f_trigger_receiver. It accepts a trigger_name, and a trigger_target. The trigger_name is the identifier of the trigger. The trigger_target is a comma-delimited string that identifies the objects/instances that the trigger will be sent to. The f_trigger_receiver family is used in order to go across different object types.

Other interactions

Any other interaction deemed worthy of a trigger only has to call the BroadcastTrigger function and feed it an object that can accept a trigger.

The RND test, for example, had broadcast an NPC interaction generically by feeding it trigger_name="npctalk", trigger_target="npc1". Then the trigger was broadcast only on npc1 and processed accordingly.

There are no ‘global’ triggers (ie triggers must always have a target). If a ‘global’-like trigger is needed, it might be better to use the player’s mover token as that, since it’s as global as you’re going to get.

Map logic

Map logic refers to the map/room-specific stuff.

Typically, the triggers for a particular room are stored in a separate event sheet (which I call scripts).

Time triggers

I put the time triggers in the map because it’s more specific to the map/mission. I still call BroadcastTrigger, but the trigger_name is specific to the map, of course.

Time triggers include ‘per-tick’ or any kind of time-related triggers.


Some instances need to init themselves before going into play. For example, a waypoint traveller needs to init the first waypoint index. This is done using the post_tmx boolean check, which is basically a switch that tells that the TMX has been completely read, and all objects have been created (and thus referenceable).

Other triggers and functions

Any other kind of triggers, whether they’re from FSM or TOWT, can be put in the map logic script. In the RND test, I’ve put in unique FSM states (eg “reachpath”) to put it in a special state so that the rest of AI can contextualise itself.

Map-specific functions are put here as well.





Workflow: Test project RnD 2017 03 25

I’ve been testing a lot of concepts (some old, some new) with a test project and this post is about what I’ve learned, and what else needs to be explored.

CSVToDictionary and AJAX

It’s easier to maintain a separate text file for populating lookup dicts. Use the AJAX object to read the text and then CSVToDictionary to populate the dict. Remove the double-quote marks when using a text file. This makes it easier to read.
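A rough Python equivalent of what that CSVToDictionary step does with such a quote-free, two-column text file (an illustration of the idea, not the plugin’s actual parsing):

```python
# Sketch: parse a "key,value" text file (no quote marks) into a lookup dict,
# skipping blank or malformed lines.
def csv_to_dictionary(text):
    lookup = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "," not in line:
            continue
        key, value = line.split(",", 1)
        lookup[key.strip()] = value.strip()
    return lookup
```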

Newlines in text files

When extracting text using AJAX, newlines might be necessary, but escape characters do not seem to be recognised automatically. Therefore, I ended up writing escape characters in the text anyway, and processed them (using search-replace) during extraction.
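That search-replace step amounts to something like this (assuming the literal two-character sequence \n is what appears in the text file):

```python
# Replace literal backslash-n sequences with real newlines after extraction.
def expand_newlines(raw):
    return raw.replace("\\n", "\n")
```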


Containers have been extremely useful especially in terms of debugging messages. For every object I need to debug, a debug Text object is created, and querying the instance of the object will always point to the same objects of the container. No additional picking is necessary. This is probably the most important aspect of my testing.


There are no real enumerations in C2, but simply assigning a constant number to a recognisable variable name is good enough. For example, in the case where the z-layer of a logical position needs to be identified by keyword, I use Z_TILE=0, Z_WALLS=1, etc.


Although a topic unto itself, the main takeaway from doing AI is how triggers are set up in Tiled and how C2 is set up to respond to them.

There are area triggers, which are set up in Tiled. These are positional, and in the test project, they included a ‘facing’ property, which meant that the trigger is fired only when the player is facing a certain direction. The trigger’s name is the string that will end up being called by C2. I opted to use the ‘name’ attribute in Tiled instead of relegating it to a property because it’s clearer to see the object name in the Tiled viewport.

Some triggers are set up in C2, especially other kinds of interactions. For example, talking to an NPC will yield a trigger specific to the interaction.

The C2 trigger itself is tied to a particular entity, whether it is another NPC or some other object. That object is responsible for keeping track of the global trigger calls, and what is relevant to itself. For example, if a certain trigger is called 3 times, the object must keep track that it has heard those 3 triggers, and act accordingly. As such, two variables are meant to store AI-specific data: scriptmem and scriptmem_float. The scriptmem variable is meant for strings, and scriptmem_float is for float values. For example, scriptmem_float was used as a generic timer (for waiting). On the other hand, scriptmem was used to store how many times the AI has heard a specific trigger, by checking and appending keywords onto the string.
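The scriptmem keyword-appending trick can be sketched like this (the delimiter and names are my own choice):

```python
# Sketch: scriptmem stores heard triggers as appended keywords; counting the
# occurrences tells the AI how many times it has heard a given trigger.
def hear_trigger(scriptmem, trigger_name):
    return scriptmem + trigger_name + ";"

def times_heard(scriptmem, trigger_name):
    return scriptmem.count(trigger_name + ";")
```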

Another important thing about AI is the switch between scripted AI and ‘nominal’ AI. Nominal AI is pre-programmed in C2: if no scripts are directing the AI, it follows a certain logic (which also depends on the type of AI it is). Two things needed to happen. First, the AI needed to know which AI it was allowed to switch to, and this was put in a C2 variable called ai. For example, one agent was assigned ai=script,see. This allowed the agent to switch to ‘script’ mode, but also allowed state changes to occur when the C2 ‘see player’ trigger was fired. A ‘hear player’ trigger also existed, but because it was not included in the variable, the agent did not respond to hearing, only seeing, and only triggers involving scripts. This ai variable assignment is first done in Tiled, and then propagated to the agent during TMX load.

In addition to the ai variable, the agent had to be put into an FSM state called “script” when it is in scripted AI mode, which allows the system to distinguish which part of the AI sequence it is in. The C2 events which constitute the AI for that agent must consider other FSM states, like “idle”, which is often the ending state after a move.

AI is a bigger topic and I will delve into it more when needed.


I find, more and more, that groups are quite useful not only for organising and commenting, but also for allowing simpler conditional actions to be done in-game. The only example I have is the deactivation of user input if a particular state is ongoing. This makes it trivial to block input rather than having to check state conditions all through the user input events.

SLG movement cost

SLG movement cost functions can actually be quite simple. At first I thought it needed to accommodate many aspects, but in the end, despite the relatively complex requirement of the test AI, pathfinding, at most, needed only to query the LOS status of a tile. Impassability was bypassed by excluding impassable tiles from the MBoard, making pathfinding simpler and, I think, faster.

It’s also probably best to name SLG movement cost functions with the following convention: <char> <purpose> path. Eg: “npc evade path”, or “npc attack path”, whereby in the “npc evade path” the NPC avoids LOS, and “npc attack path” does not avoid LOS at all.
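As a sketch of the two cost functions under that convention (assuming the only query needed is whether a tile is inside the player’s LOS):

```python
# "npc evade path": tiles inside the player's LOS are heavily penalised
# (but still passable). "npc attack path" ignores LOS entirely.
# Impassable tiles are never costed at all, since they're excluded
# from the MBoard.
def npc_evade_path_cost(tile, tiles_in_player_los):
    return 100 if tile in tiles_in_player_los else 1

def npc_attack_path_cost(tile, tiles_in_player_los):
    return 1
```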

Orthogonal and Isometric measurements

I’ve researched and learned a lot about how to transfer orthogonal measurements to isometric values. The positional values were simple enough, but the real progress was in computing angles.

Here is a list of important considerations when dealing with isometric stuff:

  • Converting a C2 object angle (orthogonal space) to an isometric angle (OrthoAngle2IsoAngle). This is used to draw a line in isometric view if that line’s angle was the same as in orthogonal view.
  • Converting an angle depicted in isometric view to orthogonal space (IsoAngle2OrthoAngle). This is used to determine what an angle would look like when viewed from top-down. When you measure the angle between two points, it’s not truly the angle when viewed in orthogonal space because the isometric view is skewing things. IsoAngle2OrthoAngle allows the reverse computation so that, for example, LOS could be determined for a particular point.
  • Converting orthogonal XY positions to logical XY (OXY2LXY). There is no Board function that allows this, mainly because this is a peculiarity of the way Tiled positions objects on the Board. The positioning of objects is written in orthogonal space, but when Rex’s SquareTx projection is set to isometric, then all measurements become isometric. Thus this function is needed to fill that gap.
  • Movement Board and Graphics Board versions of OXY2LXY. This is required because SquareTx for each Board is different, and thus the logical positions will yield a different location.
  • Computing isometric distance to orthogonal distance. This measures two points in isometric space and gives out the distance as though you were looking from above. This is useful in determining the distance between foreground and background objects. This uses a SquareTx as a point of reference for the width/height ratio, but can use the MBoard or GBoard, because they are assumed to have the same ratio.
  • Computing snap angle (Angle2SnapeAngle). Looks at the object’s angle, and finds the nearest angle to snap to (assuming 8 directions). This is required so that the proper animation is set.
  • Converting MBoard logical positions to GBoard logical positions and vice-versa (MLXY2GLXY, GLXY2MLXY). This is very important as it is able to relate the MBoard to the GBoard. Because the MBoard has smaller cell sizes, querying the logical positions of the MBoard using bigger GBoard logical positions will always yield the top-left cell of the MBoard.
  • Convert GridMove direction to C2 angle (GetGridMoveDirection). The GridMove values are quite different. This function converts it for use with other things, like animation, or other function related to facing, which use the C2 angle, or snap angle.
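A sketch of the angle conversions above (the snake_case names are mine; a 2:1 isometric projection is assumed, i.e. y is scaled by 0.5):

```python
import math

RATIO = 0.5  # assumed 2:1 isometric projection: tile height is half its width

def ortho_angle_to_iso_angle(deg):
    """OrthoAngle2IsoAngle: squash the y component by the projection ratio."""
    r = math.radians(deg)
    return math.degrees(math.atan2(math.sin(r) * RATIO, math.cos(r)))

def iso_angle_to_ortho_angle(deg):
    """IsoAngle2OrthoAngle: the reverse computation."""
    r = math.radians(deg)
    return math.degrees(math.atan2(math.sin(r) / RATIO, math.cos(r)))

def angle_to_snap_angle(deg, directions=8):
    """Snap to the nearest of 8 facings (45-degree steps)."""
    step = 360 / directions
    return (round(deg / step) * step) % 360
```

Note how a 45-degree orthogonal angle becomes roughly 26.6 degrees in isometric view, which is exactly the skew the conversions are there to account for.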


The last part of this post is about LOS. I’ve already written about some aspects of this. But the main path of the research lay in the following:

  • A Line-of-sight behaviour is applied to the player.
  • The player’s facing angle is taken as orthogonal.
  • An LOS field-of-view is defined (eg 90 degrees) for the player.
  • At a given angle (facing_direction), left-side and right-side fov lines are drawn based on the defined fov (90 degrees). Note that these lines are virtually drawn orthogonally.
  • The left and right lines are then converted to isometric angles.
  • Because the left and right lines have been transformed, the difference between these two angles has changed. This new difference is the new LOS field-of-view.
  • The center between these two lines is the LOS center line. The player’s LOS is rotated towards the center.
  • With the new LOS field-of-view, and a new center, this corresponds to an isometric LOS based off a 90-degree LOS when viewed orthogonally.
  • The facing_direction mentioned above bears special mention. When a player clicks on tile in-game, he is actually picking with a view that he is viewing it in isometric view. Therefore, the facing_direction is an isometric angle, which must be converted to an orthogonal angle. It is only then that the left and right fov lines can be properly oriented, because they, in their turn, will be converted back to isometric after the computation is done.
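The steps above can be sketched end-to-end (again assuming a 2:1 projection; names are mine). At an isometric facing of 0 a 90-degree orthogonal cone narrows to about 53 degrees, and at an isometric facing of 90 it widens to about 127:

```python
import math

def ortho_to_iso(deg):
    r = math.radians(deg)
    return math.degrees(math.atan2(math.sin(r) * 0.5, math.cos(r)))

def iso_to_ortho(deg):
    r = math.radians(deg)
    return math.degrees(math.atan2(math.sin(r) / 0.5, math.cos(r)))

def iso_los(facing_direction_iso, fov=90):
    """Return (center, fov) of the LOS cone, both as isometric angles."""
    # The facing came from an isometric pick, so convert to orthogonal first.
    facing_ortho = iso_to_ortho(facing_direction_iso)
    # Draw the side lines orthogonally, then convert each to isometric.
    left = ortho_to_iso(facing_ortho - fov / 2)
    right = ortho_to_iso(facing_ortho + fov / 2)
    new_fov = (right - left) % 360      # the transformed field-of-view
    center = (left + new_fov / 2) % 360  # rotate the LOS towards this center
    return center, new_fov
```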

What needs to be explored

ZSorting seems to take a lot of cpu time (~50%), and I’m wondering whether there is a way I can optimise this. So far, the best solution I’ve come up with is to use On GridMove as a condition for sorting. But I think the most ideal way is to localise the sorting around the areas where movement is taking place.