Boundary cropping and imagepoints

Once upon a time I wrote C2SpriteManager to solve the problem of cropping sprites and supplying some positional metadata alongside those sprites, most especially their pivot or origin points.

Cropping of sprites is necessary not only for the benefit of in-game rendering, but also to optimise the constant importing of files from the DCC.

The basic issue is as follows, taking this sprite into consideration:

Raw render

In the raw render, the sprite’s origin point is exactly at the centre of the image (i.e. 0.5, 0.5).

The reason why the raw render is not pre-cropped is that this is a 3d render scene and the camera setup is templated to render a series of animations.

And the reason why it is not more closely cropped is that it is more efficient to set up a generic “one-size-fits-all” rendering template with enough canvas room for the character to move than to discover that it lacks room and have to apply the template retrospectively, or to maintain multiple templates for different characters/props.

That decided, I had to consider the boundary-cropped image:

Cropped image

In a cropped image, the origin is near the bottom (i.e. 0.46, 1.008)

Cropped image, different frame

And in another frame of the same sequence the origin has changed again (i.e. 0.37, 0.97)

Yet in the raw render, the origin point is consistently at the centre, because the image resolution remains the same.

The problem is that we need to position every sprite frame accurately on the transform we use to move the character; if we left the frames as they are, the character would jump around.

In any 2d rendering engine, sprites are anchored to a reference transform. In Godot and C2, it is the upper-left corner. In Unity, I believe this can be changed to anchor at the centre.

But in any case, using anchors will not do. Unless the sprite were close-cropped and had a consistent resolution, the varying widths and heights would throw the registration off.

This is where cropping imagepoints comes in. In cropping the image, we also crop the imagepoints file. We start simply, using the origin, which is always 0.5, 0.5 (centre).

If we determine the boundary (bounding box) of the image, how do we get the centre point using the new boundaries?

canvas_bounding_box = list(image.getbbox())
...
    with open(crop_ip_file, 'w') as cimp:
        for ln in lines:
            line = ln.strip()
            ip_name, ip_x, ip_y = line.split('\t')

            # convert ip ratio to pixel
            spx = float(ip_x) * image_size[0]
            spy = float(ip_y) * image_size[1]

            # pixel offset of the point from the bbox's top-left corner
            diff_w = spx - canvas_bounding_box[0]
            diff_h = spy - canvas_bounding_box[1]

            # dimensions of the cropped image
            new_w = canvas_bounding_box[2] - canvas_bounding_box[0]
            new_h = canvas_bounding_box[3] - canvas_bounding_box[1]

            # re-express the point as a ratio of the crop
            new_ratio_x = diff_w / new_w
            new_ratio_y = diff_h / new_h

            cimp.write(f'{ip_name}\t{new_ratio_x}\t{new_ratio_y}\n')

So, by taking an input centre value (the ratio x and y values of the point on the image), we get new ratio values for the new crop.
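The same maths as the loop above, condensed into a standalone helper. This is a sketch with hypothetical names, not part of C2SpriteManager itself:

```python
def remap_imagepoint(ip_ratio, image_size, bbox):
    """Re-express an imagepoint ratio (relative to the full canvas)
    as a ratio relative to the cropped bounding box.
    bbox is (left, upper, right, lower), as returned by PIL's getbbox()."""
    # imagepoint ratio -> pixel position on the full canvas
    spx = ip_ratio[0] * image_size[0]
    spy = ip_ratio[1] * image_size[1]
    # crop dimensions
    new_w = bbox[2] - bbox[0]
    new_h = bbox[3] - bbox[1]
    # pixel offset from the bbox's top-left corner, over the crop's dimensions
    return ((spx - bbox[0]) / new_w, (spy - bbox[1]) / new_h)
```

For a 100×100 canvas with a bbox of (10, 20, 60, 90), the centre point (0.5, 0.5) lands at (0.8, ~0.43) in the crop.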

But then we need to apply this new ratio during the game render, so we have to know the cropped width/height and apply an offset based on the ratio.

var current_frame_width = get_current_frame_width()
var current_frame_height = get_current_frame_height()
hotspot_offset.x = current_frame_width * hotspot_position_ratio[0]
hotspot_offset.y = current_frame_height * hotspot_position_ratio[1]
position.x = -hotspot_offset.x
position.y = -hotspot_offset.y

How to get imagepoint positions in 3d

I can take this further by supplying any point in the image that is significant, and turn it into an actual position in the game engine. For example, I use imagepoint files as a way to determine where the muzzle point of the weapon is, because that’s where I will spawn my bullets.

But then we are headed into a discussion of how to convert a 3d point to screenspace. If you’re looking for someone to explain it to you like an 8-year-old, you’ve come to the right place, because that’s about the level of my maths!

First, I used this as my guide. It was too adult, but with its help and no small miracle, I figured it out. Wanting to spare others from having to grow up mathematically, here is the rundown first; I’ll go into simpler details afterwards:

  • Get the item’s world position
  • Get the camera’s matrix. This is basically the orientation of the camera. I’m using a 3×3 matrix, by the way.
  • Do a transform operation (more on that below) on the item’s world position using the world-to-camera matrix.

Camera matrix

In most 3d apps, you’re able to get the Right, Up, Forward vectors of a given item, which are the direction vectors for each axis. When I was much younger it was useful to visualise the axis gizmo arrows. For example, the Right vector is where the X axis arrow is pointing; the Forward vector is where the Z axis arrow is aiming.

In LW:

get_item_vmatrix: item
{
	vmatrix[1] = item.getWorldRight(Scene().currenttime);
	vmatrix[2] = item.getWorldUp(Scene().currenttime);
	vmatrix[3] = item.getWorldForward(Scene().currenttime);
	return (vmatrix);
}

Transform operation

A transform operation multiplies the position by the matrix, like this:

transform:  a,  m
{
	for ( i =1; i <= 3; i++ )
	{
		b[ i ] = a.x * m[ 1 , i ] +
			 a.y * m[ 2 , i ] +
			 a.z * m[ 3 , i ];
	}                     
	ret = <b[1],b[2],b[3]>;
	return(ret);
}

Flattened, it looks like:

res.x = a.x * m[ 1 , 1 ] + a.y * m[ 2 , 1 ] + a.z * m[ 3 , 1 ]

res.y = a.x * m[ 1 , 2 ] + a.y * m[ 2 , 2 ] + a.z * m[ 3 , 2 ]

res.z = a.x * m[ 1 , 3 ] + a.y * m[ 2 , 3 ] + a.z * m[ 3 , 3 ]

Where a is the original item, and m is the camera matrix.

Even in simpler terms:

  • The new position X is the result of adding these together:
    • The original item’s position X multiplied by the camera matrix’s Right vector’s X component
    • The original item’s position Y multiplied by the camera matrix’s Up vector’s X component
    • The original item’s position Z multiplied by the camera matrix’s Forward vector’s X component
  • In the same way, the new position Y is the result of adding:
    • The original item’s position X multiplied by the camera matrix’s Right vector’s Y component
    • The original item’s position Y multiplied by the camera matrix’s Up vector’s Y component
    • The original item’s position Z multiplied by the camera matrix’s Forward vector’s Y component
  • And I don’t have to spell everything out….
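The flattened formulas translate to just a few lines; here is the same transform as a Python sketch (note the indices are 0-based here, unlike the LScript):

```python
def transform(a, m):
    """Transform point a = (x, y, z) by the 3x3 matrix m, given as three
    row vectors (Right, Up, Forward), mirroring the flattened formulas."""
    return tuple(
        a[0] * m[0][i] + a[1] * m[1][i] + a[2] * m[2][i]
        for i in range(3)
    )
```

With an identity matrix (an unrotated camera), the point comes back unchanged, which is a handy sanity check.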

The actual result

What you get is the transformation of that point to camera space, which is all still in 3d, that is, 3d units.

Because I wanted the ratio of that point as seen through the orthographic camera, it was a matter of just getting the resolution and then using my trusty remapping function.

cam_size = find_cam_size(camera)/2;
nmin = 0;
nmax = 1;
omin_x = -cam_size;
omax_x = cam_size;
omin_y = cam_size;
omax_y = -cam_size;

value = pos.x;
result_x = remap(omin_x, omax_x, nmin, nmax, value, nil);
value = pos.y;
result_y = remap(omin_y, omax_y, nmin, nmax, value, nil);

remap: omin, omax, nmin, nmax, value, limit
{
	oldrange = omax - omin;
	oratio = (value - omin) / oldrange;
	newrange = nmax - nmin;
	result = (newrange*oratio) + nmin;
	// info(result);
	if(limit)
	{
		if(result > nmax)
			result = nmax;
		else if(result < nmin)
			result = nmin;
	}
	return(result);
}
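The remap function translates directly to Python; a minimal sketch of the same idea, not the engine code:

```python
def remap(omin, omax, nmin, nmax, value, limit=False):
    """Linearly remap value from [omin, omax] to [nmin, nmax].
    The optional clamp assumes nmin < nmax."""
    ratio = (value - omin) / (omax - omin)
    result = (nmax - nmin) * ratio + nmin
    if limit:
        result = max(nmin, min(nmax, result))
    return result
```

With a camera half-size of 5, a camera-space X of 2.5 remaps to 0.75; feeding the same value through the flipped Y range (5 to -5) gives 0.25, which is what produces the top-left-origin screen ratio.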

Using generalised direction for facing direction

Using the input for facing direction is great when moving. But not so great when releasing input.

The problem shows up when we use WASD, travel diagonally (e.g. the W+D keys), and then release. If both keys were released simultaneously within input thresholds, the last input vector would be the same as the travel direction.

But if one key was released a bit earlier or later, the last input vector would be only one of the component vectors. Hence you get a jarring result where the character moves diagonally, then stops and faces a slightly different direction.

To overcome this, a generalised direction (or vector) is tracked while the character is moving and updated every half-second (or less). This direction is used only when the character has stopped moving and needs to choose which direction to face.
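A minimal sketch of that tracking logic (the class and method names are my own invention, not the game’s code; the half-second interval is taken from the text):

```python
class FacingTracker:
    """Tracks a generalised facing direction while the character moves."""
    UPDATE_INTERVAL = 0.5  # seconds between updates (assumed)

    def __init__(self):
        self.general_dir = (0.0, 0.0)
        self._elapsed = 0.0

    def update(self, dt, velocity):
        # Only track while actually moving
        if velocity == (0.0, 0.0):
            return
        self._elapsed += dt
        if self._elapsed >= self.UPDATE_INTERVAL:
            self._elapsed = 0.0
            self.general_dir = velocity

    def facing_on_stop(self):
        # Consulted only once the character has stopped moving
        return self.general_dir
```

Because the stored direction lags the raw input by up to half a second, releasing one diagonal key slightly early no longer snaps the facing to a single axis.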

Handling gaps in MoveArea

I realised recently that the MoveArea system was severely limited in respect to passageways from one area to the next (i.e. doorways). For example, in this image:

The room has only one entry way. The reason why I couldn’t just create another one to the side was that I would have to split the room into segments. And if I did that, it would come out like this:

I’d have two separate movement areas, and no movement area at all at the centre of the room. The area worked on closed polygons whilst the border worked on individual segments. But for convenience, I didn’t want to draw twice.

What I thought I needed to do was to specify a vert that served as a gap, meaning that no segment should connect to or from it. But how to identify it? I thought of using vertex indices and the BorderLine naming convention, but that was too cumbersome, and not very visible in the editor.

So what I opted to do was use a tag that indicates a vert is a gap:

I added an in-editor label thanks to this tip.

This needs to be parented under the border line itself to indicate that this gap belongs to that area.

MoveArea computes the closest vert for that gap and breaks individual segments from connecting to and from that gap:

However, the Area is still intact, because it doesn’t really recognise the gap; it just continues on connecting the verts to close the polygon.

move_and_slide fix with Safe Margin

The problem I’ve been having:

The red lines are the move_and_slide computed vectors. They go haywire when they reach the corner

The solution I’ve found is to increase the Safe Margin on the KinematicBody2D.

Set to 10. Originally set to 0.08.

With this description I think I found it:

Extra margin used for collision recovery in motion functions (see move_and_collide(), move_and_slide(), move_and_slide_with_snap()).

If the body is at least this close to another body, it will consider them to be colliding and will be pushed away before performing the actual motion.

A higher value means it’s more flexible for detecting collision, which helps with consistently detecting walls and floors.

https://docs.godotengine.org/en/stable/classes/class_kinematicbody2d.html?highlight=safe_margin#class-kinematicbody2d-property-collision-safe-margin

MoveArea Part 2

MoveArea, something I introduced in the previous post, is the system wrapped in an asset that allows me to draw lines (Line2D) to make a map. But it has grown quite quickly into something a bit bigger than that. It has to consider the 3 elements of movement, line-of-sight (LOS), and bullet-blocking (cover).

These are the aspects of the MoveArea and related systems.

  • Multi-height/deck – Characters are able to move from one deck to another, changing movement area, bullet and LOS blocking configuration.
  • Movement, bullet, LOS blocking – each Line2D can be configured to independently block these aspects.
  • The blocking types can be switched independently for each Line2D.
  • Vertical LOS blocking.
  • Thickness of Line2Ds is respected.
  • Joint mode of Line2Ds is respected.

MoveArea line creation

The Line2D shapes are converted to solid polygons, which are then called BorderLines.

BorderLines may be tagged (by naming convention) to define the purpose/s they serve.

The naming convention is:

{deck-level}-{blocking-type}={name}

Examples of naming convention:

This blocks all. BorderArea created for it.

    1-MLB=DeckA


This blocks LOS only. No BorderArea created.

    1-L=Bush

This blocks Bullets only. No BorderArea created.

    1-B=Glass


This blocks movement only. BorderArea created for it.

    2=DeckB
    2-M=DeckB
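A Python sketch of how the naming convention could be parsed (the actual tool is GDScript; defaulting a missing blocking type to movement-only is my assumption, based on `2=DeckB` blocking movement only):

```python
def parse_border_name(line_name, default_blocking='M'):
    """Parse the '{deck-level}-{blocking-type}={name}' convention
    into (deck_level, blocking_type, name)."""
    head, _, name = line_name.partition('=')
    deck, _, blocking = head.partition('-')
    # No explicit blocking type: assume movement-only ('M')
    return int(deck), blocking or default_blocking, name
```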

Movement blockers

When a movement blocker is drawn, MoveArea does the following:

  • Creates a StaticBody2D because Player is a KinematicBody2D using move_and_slide.
  • Creates a StaticBody2D to serve as a BorderArea. BorderAreas are placed in a special channel called MOVEAREA_CHECK_COLLISION_CHANNEL and its purpose is to determine if a Character has moved from one area to another. Characters, when they are created/spawned, must acquire the current MoveArea instance and register themselves, so that a ‘checker’ can be made for them in the BorderAreas.
  • Creates an extra Area2D object and groups it under “VBulletBlockers” and “VLOSBlockers”. This is used for vertical LOS, explained later.
  • It uses the specified deck level to put it in the appropriate collision layer bit.

Bullet and LOS blockers

When either a bullet or LOS blocker is drawn or specified, MoveArea does the following:

  • Creates an Area2D object, draws the polygons.
  • Groups the object into the “BulletBlockers” or “LOSBlockers” group.
  • It uses the specified deck level to put it in the appropriate collision layer bit.

There’s an arbitrary limit of 8 deck-levels for now.

LOS

The LOS system is composed of:

  • LOSTransmitter
  • LOSReceiver

These are attached to any Character that is going to need LOS capabilities. The Player also uses LOS, but with the same purpose as the robot enemies.

In any case, the LOS components belong to LOS_COLLISION_CHANNEL (e.g. bit 28) and it’s in that channel that LOS collisions are processed.

LOS by field-of-view

Enemy robots have a field-of-view that is always querying whether the Player is within it. When the Player is within the FOV, LOS is activated and rays are cast.

LOS exclusions/exceptions

Because all LOS components query the same collision channel, I’ve opted to use exclusions via groups. When MoveArea processes LOSBlockers and VLOSBlockers, it puts them into groups suffixed with the deck-level it detected them in. For example:

VLOSBlockers-1
VLOSBlockers-2
LOSBlockers-1
LOSBlockers-3

The nodes are cached inside the LOSTransmitter, organised by deck-level so that it immediately knows which blockers are in which deck.

When Player is within the Robot’s FOV, LOS is enabled, and those exclusions are added/updated so that only the blockers that are on the same level as the LOSTransmitter will be considered by the raycast.
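That per-deck cache amounts to a simple bucketing step; a hedged Python sketch (the real cache lives in the GDScript LOSTransmitter):

```python
def blockers_by_deck(group_names):
    """Bucket group names like 'LOSBlockers-2' by their deck-level suffix,
    so a transmitter can immediately find the blockers on its own deck."""
    decks = {}
    for name in group_names:
        base, _, deck = name.rpartition('-')
        decks.setdefault(int(deck), []).append(base)
    return decks
```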

Vertical LOS

Vertical LOS presented some issues, which this image can help explain.

The blue line signifies an open edge on Deck 2, from which the Player can look out and down.

The green line signifies a wall in Deck 1, where LOS is being blocked.

In the image below, Robot is at the higher deck, and the Player has moved closer to the virtual wall. This hides the Player from a vertical LOS point-of-view.

If the Player moved further away from the Deck 2 edge, he’ll be seen.

This was done by measuring the distance of the viewer to the edge against the distance of the edge to the target (i.e. the Player). By assuming a certain height for the viewer and the target (no geometrical accuracy here, folks!), I could compute the minimum distance from the edge beyond which everything became visible; anything under that distance was invisible.
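The measurement can be sketched with similar triangles. Every number here (eye height, target height, deck drop) is a placeholder assumption, just as in the original:

```python
def target_visible_over_edge(d_viewer_to_edge, d_edge_to_target,
                             viewer_eye_height=1.6, target_height=1.6,
                             deck_drop=3.0):
    """The sight line from the viewer's eye grazes the deck edge and
    descends at viewer_eye_height / d_viewer_to_edge per horizontal unit;
    the target is visible once its top reaches above that line."""
    if d_viewer_to_edge <= 0:
        return True  # standing on the edge: looking straight down
    drop_per_unit = viewer_eye_height / d_viewer_to_edge
    # height of the sight line above the lower deck's floor at the target
    sightline_height = deck_drop - drop_per_unit * d_edge_to_target
    return target_height >= sightline_height
```

So the closer the target hugs the wall under the edge (small `d_edge_to_target`), the higher the sight line passes over it, and it stays hidden.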

Reverse raycasting to ensure vertical LOS

There was still a problem with this. The Robot picked its closest edge and measured from there. But this comes up wrong when you have decks at the same level, as illustrated in this image.

The Player is on Deck 1, the Robot is on Deck 2 Right, and there is a Deck 2 Left, which blocks its sight to the Player. When the ray is cast it picks up the Deck 2 Right’s edge, measuring the distance from there, which is wrong.

Instead, it should be picking the edge closest to the Player. I reversed the direction of the cast, and the 2 small white line marks indicate both results.

It is usually sufficient to compute VLOS from the nearest edge, though I noticed that checking both yields more expected results.

How to cast multiple times with Raycast2D

As an aside, there is a way to cast with Raycast2D multiple times within a frame. In the case above, I had to set a new position for the caster, and then set a new ray direction. This involved a transformation and a ray update.

Thus when moving the Raycast2D, you must use force_update_transform()

extends Raycast2D
...
set_position(new_position)
force_update_transform()

And then a new direction needs to be cast:

set_cast_to(new_ray_direction)
force_raycast_update()

Mind the original settings.

Bullets, BulletBlockers, VBulletBlockers

BulletBlockers stop Bullets as long as the Bullet is in the same deck level as the blocker.

It is the Bullets (Area2D) that use their area_entered signal to detect whether they’ve entered a BulletBlocker node.

Bullets’ vertical blocking uses the same idea as VLOS.

However, at this time, though I am getting the expected results when the Player is shooting from above, it doesn’t work so well when the Player is below.

Bits and bytes and everything nice.

Some funcs for bitmask.

This one’s derived from the docs.

static func get_bit_mask(layers):
	"""
	Given a list of layers to be enabled, find the mask decimal value.
	Decimal - Add the results of 2 to the power of (layer to be enabled - 1).
	(2^(1-1)) + (2^(3-1)) + (2^(4-1)) = 1 + 4 + 8 = 13
	"""
	var result = 0
	for lyr in layers:
		# pow() returns a float, so shift bits instead to keep the mask an int
		result += 1 << (lyr - 1)
	
	return result

Binary to decimal. Binary is expressed as a string.

static func binary_to_decimal(binary):
	""" Convert a binary number expressed as String, to an decimal integer. """
	var s = 0
	for b in binary:
		s = (s * 2) + int(b)
	return s

Decimal to binary. The option to reverse has more to do with how I’d want to modify the binary in a way that’s easy to see. The reverse_string function is provided below.

static func decimal_to_binary(d, reverse=false):
	""" Convert decimal to binary (in string form). If reverse is true, then it outputs the proper binary form. If not, then it will output the binary bits in a way that can be modified and then you can reverse it afterwards and pass it to binary_to_decimal() """
	var r = ''
	while d > 0:
		r = '%s%s' % [r, str(d % 2)]
		d = int(d / 2)
	
	if reverse:
		return reverse_string(r)
	else:
		return r

static func reverse_string(input_string):
	# Build the reversed string directly; inverting an Array of
	# characters (as before) would return an Array, not a String
	var reversed = ''
	for c in input_string:
		reversed = c + reversed
	return reversed

Simple one: find the indices that are enabled in a binary value. This is useful to know which collision layers/masks are enabled or not.

static func find_enabled_bit_index(binary_value):
	""" 
	Given a binary value ordered ascending, get the indices which have their bits enabled (1)
	"""
	var return_ndxs = []
	# NB: base-1
	for ndx in range(len(binary_value)):
		if binary_value[ndx] == '1':
			return_ndxs.append(ndx+1)
	return return_ndxs

Enable surrounding bits. This function was purposefully written to enable the collision layers/masks that were surrounding a given one. The application here is for the enemy line-of-sight; if enemy is on Layer 3, for example, its LOS is enabled for Layers 2 and 4 (it can see one level up and down).

static func enable_surrounding_bits(binary_value, indexes, width=1):
	""" Enable the surrounding bits of the indexes of the binary value. The width defines how many indices to spread out. Expects binary values to be left-right, but returns it in reverse to be used directly by binary_to_decimal()"""
	var a_binary_value = []
	var ret_binary_value = ''
	for c in binary_value:
		a_binary_value.append(c)
	
	for ndx in indexes:
		var new_ndxs = get_surrounding_bit_indexes(ndx, width)
		for nn in new_ndxs:
			# Convert to base-0 for list manip
			var bn = nn - 1
			# If within range of the list, then just set the bit
			# (<= because nn is base-1; nn == len is still a valid index)
			if nn <= len(a_binary_value):
				a_binary_value[bn] = '1'
			else:
				# Pad with zeroes up to the new index, then set it
				for n in range(len(a_binary_value), bn+1):
					if n != bn:
						a_binary_value.append('0')
					else:
						a_binary_value.append('1')
	a_binary_value.invert()
	for c in a_binary_value:
		ret_binary_value += c
	return ret_binary_value


static func get_surrounding_bit_indexes(index, width):
	"""
	Get the surrounding indices starting from `index` and up to `width` on both +/- sides. Assumes base-1
	"""
	var ret_ndxs = [index]
	for w in range(1, width+1):
		var u_index = index + w
		var v_index = index - w
		ret_ndxs.append(u_index)
		# Base-1: index 0 is out of range, so only keep indices >= 1
		if v_index >= 1:
			ret_ndxs.append(v_index)
	return ret_ndxs
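For comparison, the same enemy-LOS idea (a layer plus its neighbours) can be computed directly on integers; a Python sketch, not the project’s code:

```python
def mask_with_neighbors(layer, width=1, max_layers=8):
    """Decimal mask enabling `layer` plus `width` layers either side.
    Layers are base-1, clamped to 1..max_layers (the 8-deck limit)."""
    mask = 0
    for lyr in range(max(1, layer - width), min(max_layers, layer + width) + 1):
        mask |= 1 << (lyr - 1)
    return mask
```

An enemy on Layer 3 with width 1 gets layers 2, 3 and 4: a mask of 2 + 4 + 8 = 14.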

Working solution for multi-height-level movement with move_and_slide

I think I have a working solution for multi-height/deck movement.

I use the words “height” or “deck” rather than “level” or “platform” to be clear that “level” actually means “scene”, and “platform” is not what the game is.

In the above image the Player starts out in the Blue Zone (layer 1). As it walks along the top edge, it intersects with the Yellow Zone (layer 2), but it isn’t affected by the collision.

It’s only when it walks up the slope back up to the top edge where the Yellow Zone’s collision takes into effect.

This works by switching the collision layer of the Mover to the collision layer of the current area it is on.

Sounds simple, but it was far from that.

The first pass

In this post I opted for CollisionPolygon2Ds in Segment build mode for drawing areas of general movement, rather than using Solids. The reason was ease of use and debugging down the line.

However, Segment CollisionPolygon2Ds are closed polygons. Once the Mover was inside the polygon there was no exit. The only way to get out was to disable the collision for the entire area. So I needed a method to detect the intention to get out of the area and into a new one.

This was the first pass at the problem, which, for the sake of completeness, I will delineate. It involved this odd set of logic in order to make it work:

  • If you (the Mover) are not touching any area boundaries, remain in the collision layer of the current area.
  • If you are touching a boundary of an area, first determine how many areas you are currently on.
    • If you are on a single area when you touch a boundary, this means you are still in your original area. You have simply touched the neighbouring boundary. Remain in the collision layer you are currently on.
    • If you happen to find yourself in two areas when you touch a boundary, then the boundary you have touched belongs to the area you wish to leave. Therefore, switch your collision layer to the other area’s.

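The bullet logic above, condensed into a Python sketch (the names and the shape of `areas_under_mover` are hypothetical, purely for illustration):

```python
def decide_layer(current_layer, touching_boundary, areas_under_mover):
    """First-pass rule: switch layers only when touching a boundary while
    overlapping two areas. `areas_under_mover` maps area name -> layer."""
    if not touching_boundary or len(areas_under_mover) < 2:
        # Not at a boundary, or still within a single area: stay put
        return current_layer
    # Overlapping two areas at a boundary: the touched boundary belongs
    # to the area we are leaving, so adopt the other area's layer
    for layer in areas_under_mover.values():
        if layer != current_layer:
            return layer
    return current_layer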
The logic is simple, albeit awkward-looking, yet determining which area we were on was tricky. As this was related to collisions (move_and_slide), I needed to know if a total overlap occurred, not just a partial one.

However, another problem with Segment CollisionPolygon2Ds is that they don’t register an overlap if you are completely within the shape. They only register overlaps when you are intersecting an edge.

I needed a Solid polygon for these checks, so through a script, I created a copy of the Segment polygons as Solids. They had to be placed in a different layer; by virtue of being Solids, they would impede all movement.

But the next logical problem was that the Mover couldn’t query the Solids if they were on another layer. This might have been solved by constraining a duplicate of the Mover to the Solids’ layer, but I didn’t do this because I didn’t want yet another script.

Instead, I opted to raycast to four positions around the Mover’s circular perimeter. All four points needed to be in the same area for the Mover to be considered over that area.

The next Segment

When I finally solved it, I was still bothered by its hackiness. I searched the web to find out whether Godot had any concept of an open polygon. I remembered playing around with Line2D, so I knew a line primitive was available. But what I was missing was how it related to collision.

Then I came upon this Godot proposal, which mentioned SegmentShape2D, which in the documentation reads:

Class: SegmentShape2D

Inherits: Shape2D < Resource < Reference < Object

    Segment shape for 2D collisions.

I reckoned this to be what I was looking for. So I read the discussion on the proposal, where Marcel Admiraal suggested:

You could iterate through your points and use the CollisionObject2D API to add multiple SegmentShape2Ds to your CollisionObject2D.

Indeed, working my way through the documentation and the class hierarchies, and using the Editor to help me visualise the results, I was able to create the collision lines. This had to be done through script, however, as I saw no (easy) way to draw SegmentShape2Ds contiguously.

MoveArea

Thus, the movement area system is called MoveArea, and is encapsulated in a scene with an attached script.

The MoveArea scene is dragged to a scene to make up the level. Line2Ds are created, and they are named in a specific way, e.g.

1-RoomA
1-RoomB
2-RoomA-Floor2
3-RoomA-Floor3
2-RoomB-Floor2

The first number indicates the collision layer that will be occupied by this area. The rest is just the name. I decided to use the naming convention for data, so I don’t have to add another script.

The script iterates through all of the Line2Ds and creates segment collisions for each edge; these are called BorderLines.

For querying area overlaps, I create CollisionPolygon2Ds in Solid build mode from the Line2D points, and place them in a high-index collision layer. These are called BorderAreas.

A single circular Area2D shape (reference circle) is created under the MoveArea, in the same layer as the BorderAreas. This circle’s responsibility is to indicate which BorderAreas it is overlapping. It is done through a static function in MoveArea, which moves the circle to the desired spot and queries for overlaps; as a static function, it is meant to be called by anyone who has acquired the MoveArea instanced in the scene.

Then simpler logic prevails: knowing accurately which BorderArea the Mover is on, the Mover switches to the collision layer based on the collision index indicated by the name.

Slicing PoolStringsArray

Slicing is another thing that reminds me that GDScript is not Python. In the first place, GDScript doesn’t have Python’s slicing sugar:

s_my_list = my_list[0:-3]

But it does have .slice()

s_my_list = my_list.slice(0, -3)

The problem is that this only works with generic Arrays (and PoolByteArrays, too). I’m not sure why, but let someone else request it.

Thus:

var line_name = line.get_name()
# Splitting Strings results in PoolStringArray, which doesn't have the slice method, so we convert it into a generic Array first
# https://docs.godotengine.org/en/stable/classes/class_array.html?highlight=array%20slice
var s_line_name = Array(line_name.split('-')).slice(0, -2)

# Then in order to use .join() we have to turn it back to PoolStringArray
return PoolStringArray(s_line_name).join('-')

Collision layers and masks

This has always been tricky to remember so here are some notes.

First, the reference link.

Collision Layers refer to the indices that a particular collision object belongs to.

Collision Masks refer to the indices that this particular collision object will check for collisions.

But the trick, really, is to always ask, “Who is looking?”

Area2D

Area2D has a method called get_overlapping_bodies/get_overlapping_areas. When you call these, consider Area2D to be the one looking for a collision at its Mask value.

If you call Area2D.get_overlapping_areas you might get other Area2D nodes, and the caller is the Looker, and the other Area2Ds are Lookees, so they need to be in their proper layers.

StaticBody2D and move_and_slide()

With respect to move_and_slide, a StaticBody2D used as an obstacle is a Looker.

It doesn’t matter what layer it’s in, but its Mask must be pointing to the Layers of the Area2D/CollisionObject2D nodes that it’s supposed to block.

Possibly more notes to follow…

move_and_slide issues, acceleration and current velocity

In a previous entry I had an issue with move_and_slide which I had solved by adjusting the max_slides parameter. This worked OK for certain angles, but it didn’t work for others, so I needed to solve it more robustly.

It came down to the simple issue of the sprites changing their animation based on the last known velocity (last_velocity). This variable was updated every tick with a delta between the current and last position. As you may imagine, the vectors there are pretty small when it’s moving. And when the physics evaluated the resulting positions, it went nuts; basically it was oversensitive.

I figured that if I could stabilise last_velocity then the sprites would behave better. For it wasn’t the actual movement itself that was the problem, but the sprites.

I think I managed to do it by using the move_and_slide()‘s return velocity to drive the last_velocity variable. Unlike my previous implementation, move_and_slide correctly indicates the intended movement, which is really all I needed.