Propagating Velocities through Animation Systems

Created on June 1, 2023, 11:17 a.m.

Here is something that is almost always a headache for animation systems: resets and teleports.

The issue is that when we want to reset a character to a new state or pose, or teleport the character to a different location, there are usually a whole host of other systems that need to either be informed, or find a way to deal with this change - from physics, cloth, animation and gameplay to rendering.

Many game engines have a hodge-podge of solutions: some of these sub-systems try to auto-detect when teleports or resets have occurred, while others use clamping or other heuristics to limit the worst-case artefacts that can occur. Few game engines I've worked with have a dedicated code path for elegantly handling teleports and resets, and they are a very common cause of bugs and glitches.

A core part of the problem is that many systems in the game need to record some state related to the character each frame (such as the positions and velocities of joints). Having this data just suddenly change without notice when the character teleports or resets is difficult to deal with.

For example, if the character suddenly switches pose how is a cloth system possibly meant to react well? What it really needs is to be provided a new initial starting state that matches the character's new pose. But few designers want to deal with this level of complexity - they just want their character to move to a new location, or to start playing a new animation.

Animation systems themselves are big sufferers of this - because there are many things we do in animation that require state and/or velocities of some kind. Most often these velocities are computed by keeping track of the pose from the previous frame and using finite difference. This is used for things like motion warping, inverse kinematics, inertialization, physics based animation, and even motion blur and temporal anti-aliasing.

And there are several big issues that storing this previous pose and doing finite-differencing everywhere introduces:

  1. Any velocity computed via finite difference will not be valid during a frame where the character has teleported or been reset to a new pose. Unless we know when to discard these bad finite differences, we are sure to see visual glitches.
  2. Even if we are told or can detect when the reset or teleport happened, we need to wait one more frame before we can do the finite difference computation - leaving us one frame without a valid velocity where we still need to do something. Assuming zero velocity is often not enough and can produce glitches when characters are reset into fast moving animations.
  3. If we find a way to handle whole-character resets or teleports, we still need to be careful how and when we compute the finite difference, as animations coming from certain animation systems can sometimes be very jerky. For example: blending very quickly between two different animations, or applying IK and popping the foot up and out of the terrain. Such large velocities can be unstable - in particular if they end up feeding back into the physics or animation system in any way. And all of this doesn't even begin to tackle problems like LOD switches or out-of-view optimizations.
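
To make the first issue concrete, here is a minimal sketch (finite_diff being a hypothetical helper on a single scalar channel, not part of any of the code below) of the kind of velocity spike a teleport produces under naive finite differencing:

```cpp
// Naive finite difference velocity of a single scalar channel, such
// as one component of a character's position
float finite_diff(float curr, float prev, float dt)
{
    return (curr - prev) / dt;
}

// At 100fps, a character moving at 1 unit/s gives a sensible velocity
// of about 1 unit/s, but a 2 unit teleport between two frames shows up
// as a bogus velocity of around 200 units/s unless we know to discard
// the difference taken across the teleport frame
```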

There is an alternative approach, which I have not seen used very often, but which can potentially solve many of these issues. The idea is to (as much as possible) propagate velocities through the animation system itself. The goal is to build a system which can directly evaluate the velocity of the character at any point in time, even on frames where the character has been teleported or reset and we have no history of poses. Not just that, but also to have control over how these velocities are computed - meaning we can compensate for the places we know we will break them, such as quick blends and instant pose corrections.

And although there can be a performance cost to doing this, a little bit of tricky maths involved, and some edge cases where you need to fall back to finite differencing, I believe this approach can produce systems that are a lot more robust in the long term.

So let's start at the beginning, sampling animation data:




Animation Sampling

If we are sampling our animation data at continuous times, chances are we are already looking up at least two consecutive frames to blend, so computing the local joint velocities comes almost for free: we can take the finite difference between these two frames. (Here we assume the animation data itself is clean and does not contain teleports, resets, sudden jumps, or other pops.)

Given a basic pose representation such as the following, which essentially just consists of arrays of different properties associated with each joint:

struct pose
{    
    array1d<vec3> pos; // Positions
    array1d<quat> rot; // Rotations
    array1d<vec3> scl; // Scales
    array1d<vec3> vel; // Linear Velocities
    array1d<vec3> ang; // Angular Velocities
    array1d<vec3> svl; // Scalar Velocities
};

Sampling an animation at a continuous time via standard linear interpolation looks something like this:

void op_sample_linear(
    pose& output,                 // Output Pose
    const slice2d<vec3> anim_pos, // Animation Positions
    const slice2d<quat> anim_rot, // Animation Rotations
    const slice2d<vec3> anim_scl, // Animation Scales
    const float time,             // Time
    const float dt)               // Delta Time between Frames
{
    int st = (int)(time / dt);
    int s0 = clamp(st + 0, 0, anim_pos.rows - 1); // Previous Frame Index
    int s1 = clamp(st + 1, 0, anim_pos.rows - 1); // Next Frame Index
    float alpha = fmod(time / dt, 1.0f);          // Interpolation Value
    
    for (int j = 0; j < output.pos.size; j++)
    {
        output.pos(j) = lerp(anim_pos(s0, j), anim_pos(s1, j), alpha);
        
        output.vel(j) = (anim_pos(s1, j) - anim_pos(s0, j)) / dt;
        
        output.rot(j) = quat_slerp_shortest(
            anim_rot(s0, j), anim_rot(s1, j), alpha);
            
        output.ang(j) = quat_to_scaled_angle_axis(
            quat_abs(quat_mul_inv(anim_rot(s1, j), anim_rot(s0, j)))) / dt;
        
        output.scl(j) = eerp(anim_scl(s0, j), anim_scl(s1, j), alpha);
        
        output.svl(j) = log(anim_scl(s1, j) / anim_scl(s0, j)) / dt;
    }
}

(Note: if quat_to_scaled_angle_axis or eerp look unfamiliar, you might want to look at some of my previous articles on computing both angular velocities and scalar velocities.)
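
In case those are unfamiliar, here are minimal sketches of both (assuming a w-first unit quaternion, and writing eerp on a single scalar rather than component-wise on a vec3 as it is used above):

```cpp
#include <cmath>

struct vec3 { float x, y, z; };
struct quat { float w, x, y, z; };

// Exponential interpolation: the equivalent of lerp for scales,
// performed in log space, so that e.g. eerp(0.5f, 2.0f, 0.5f) == 1.0f
float eerp(float a, float b, float alpha)
{
    return expf((1.0f - alpha) * logf(a) + alpha * logf(b));
}

// Convert a (w-first, unit) quaternion to a scaled-angle-axis vector:
// the rotation axis scaled by the rotation angle in radians
vec3 quat_to_scaled_angle_axis(quat q)
{
    float len = sqrtf(q.x*q.x + q.y*q.y + q.z*q.z);
    
    if (len < 1e-8f)
    {
        // Small angle approximation
        return { 2.0f * q.x, 2.0f * q.y, 2.0f * q.z };
    }
    
    float angle = 2.0f * atan2f(len, q.w);
    return { angle * q.x / len, angle * q.y / len, angle * q.z / len };
}
```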

If we are willing to sample four poses instead of two we can be even fancier and compute our pose, and its velocity, via cubic interpolation. This is particularly nice for what we are doing in this article as it gives us velocities that change smoothly even between frames.
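
As a sketch of the idea on a single scalar channel (assuming uniformly spaced frames, and estimating the velocities at the two middle frames with central differences), cubic Hermite interpolation gives us both a value and a smoothly varying velocity:

```cpp
#include <cmath>

// Catmull-Rom style cubic interpolation of a single scalar channel,
// given four consecutive frames x0..x3 sampled dt apart, evaluated at
// interpolation value t in [0, 1] between x1 and x2
float cubic_value(float x0, float x1, float x2, float x3, float t, float dt)
{
    float v1 = (x2 - x0) / (2.0f * dt); // Estimated velocity at x1
    float v2 = (x3 - x1) / (2.0f * dt); // Estimated velocity at x2
    float a = 2.0f*t*t*t - 3.0f*t*t + 1.0f;
    float b = (t*t*t - 2.0f*t*t + t) * dt;
    float c = -2.0f*t*t*t + 3.0f*t*t;
    float d = (t*t*t - t*t) * dt;
    return a*x1 + b*v1 + c*x2 + d*v2;
}

// Derivative of the above with respect to time - the smoothly
// changing velocity we get "for free" from cubic sampling
float cubic_velocity(float x0, float x1, float x2, float x3, float t, float dt)
{
    float v1 = (x2 - x0) / (2.0f * dt);
    float v2 = (x3 - x1) / (2.0f * dt);
    float a = (6.0f*t*t - 6.0f*t) / dt;
    float b = 3.0f*t*t - 4.0f*t + 1.0f;
    float c = (-6.0f*t*t + 6.0f*t) / dt;
    float d = 3.0f*t*t - 2.0f*t;
    return a*x1 + b*v1 + c*x2 + d*v2;
}
```

Notice how these Hermite basis functions are the same ones that will appear again later in the cubic inertializer.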

Here is what the local joint rotational velocities, sampled via cubic interpolation, look like visualized on a character:


Forward & Backward Kinematics

Given velocities sampled in the local space, we can compute global space velocities by propagating things down the hierarchy. Here is what computing forward kinematics looks like when we account for angular and scalar velocities:

void op_fk(
    pose& global,
    const pose& local,
    const slice1d<int> parents)
{
    for (int i = 0; i < parents.size; i++)
    {
        if (parents(i) == -1)
        {
            global.pos(i) = local.pos(i);
            global.rot(i) = local.rot(i);
            global.scl(i) = local.scl(i);
            global.vel(i) = local.vel(i);
            global.ang(i) = local.ang(i);
            global.svl(i) = local.svl(i);
        }
        else
        {
            // Ensure parent global transforms have already been computed
            assert(parents(i) < i);
            
            vec3 par_pos = global.pos(parents(i));
            quat par_rot = global.rot(parents(i));
            vec3 par_scl = global.scl(parents(i));
            vec3 par_vel = global.vel(parents(i));
            vec3 par_ang = global.ang(parents(i));
            vec3 par_svl = global.svl(parents(i));
            
            // Position
            global.pos(i) = quat_mul_vec3(par_rot, local.pos(i) * par_scl) + 
                par_pos;
            
            // Rotation
            global.rot(i) = quat_mul(par_rot, local.rot(i));
            
            // Scale
            global.scl(i) = local.scl(i) * par_scl;
            
            // Linear Velocity
            global.vel(i) = quat_mul_vec3(par_rot, local.vel(i) * par_scl) + 
                par_vel + 
                cross(par_ang, quat_mul_vec3(par_rot, local.pos(i) * par_scl)) +
                quat_mul_vec3(par_rot, local.pos(i) * par_scl * par_svl);
            
            // Angular Velocity
            global.ang(i) = quat_mul_vec3(par_rot, local.ang(i)) + par_ang;
            
            // Scalar Velocity
            global.svl(i) = local.svl(i) + par_svl;
        }
    }
}

The computation of the position, rotation, and scale here are identical to what you would do in normal forward kinematics, while the velocity computations are the additional parts you might not have seen before.

Here, the trickiest part is the computation of the linear velocity, which needs to account for two things: "leverage" - the fact that a parent joint's angular velocity will contribute to the linear velocity when propagated to the child - and "expansion" - the fact that a parent joint's scalar velocity will contribute to the linear velocity along the direction of translation.
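
The "leverage" term is the familiar rigid body relationship v = ω × r: a child at world space offset r from its parent picks up linear velocity from the parent's angular velocity ω. A tiny sanity check of just this term in isolation (leverage_velocity is a hypothetical helper, not part of the functions above):

```cpp
struct vec3 { float x, y, z; };

// Standard vector cross product
vec3 cross(vec3 a, vec3 b)
{
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}

// Linear velocity picked up by a point at world space offset r from a
// parent rotating with angular velocity ang (the "leverage" term).
// A point one unit along x from a parent spinning at one radian per
// second about z moves at one unit per second along y.
vec3 leverage_velocity(vec3 ang, vec3 r)
{
    return cross(ang, r);
}
```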

Below I've set up a comparison between velocities computed like this, and those computed via finite difference with the previous frame in the world space. Those computed by finite difference are shown in blue, while those computed via the previous forward kinematics function are shown in red.

The finite difference itself is computed as follows:

void op_finite_diff(
    pose& output,
    const pose& input_curr,
    const pose& input_prev,
    const float dt)
{
    for (int i = 0; i < output.pos.size; i++)
    {
        output.vel(i) = (input_curr.pos(i) - input_prev.pos(i)) / dt;      
        
        output.ang(i) = quat_to_scaled_angle_axis(quat_abs(
            quat_mul_inv(input_curr.rot(i), input_prev.rot(i)))) / dt;
            
        output.svl(i) = log(input_curr.scl(i) / input_prev.scl(i)) / dt;      
    }
}

As hoped, the results are visually almost identical (in fact, I had to offset the lines to avoid them drawing pixel-perfectly on top of each other).

What about going in the other direction?

If we have velocities in the world space, and we want to convert these into velocities in the local space, we can do so by re-arranging the previous equations:

void op_bk(
    pose& local,
    const pose& global,
    const slice1d<int> parents)
{
    for (int i = 0; i < parents.size; i++)
    {
        if (parents(i) == -1)
        {
            local.pos(i) = global.pos(i);
            local.rot(i) = global.rot(i);
            local.scl(i) = global.scl(i);
            local.vel(i) = global.vel(i);
            local.ang(i) = global.ang(i);
            local.svl(i) = global.svl(i);
        }
        else
        {
            vec3 par_pos = global.pos(parents(i));
            quat par_rot = global.rot(parents(i));
            vec3 par_scl = global.scl(parents(i));
            vec3 par_vel = global.vel(parents(i));
            vec3 par_ang = global.ang(parents(i));
            vec3 par_svl = global.svl(parents(i));
            
            // Position
            local.pos(i) = quat_inv_mul_vec3(par_rot, global.pos(i) - par_pos) / 
                par_scl;
            
            // Rotation
            local.rot(i) = quat_inv_mul(par_rot, global.rot(i));
            
            // Scale
            local.scl(i) = global.scl(i) / par_scl;
            
            // Linear Velocity
            local.vel(i) = (quat_inv_mul_vec3(par_rot, global.vel(i) - par_vel - 
                cross(par_ang, global.pos(i) - par_pos)) - 
                quat_inv_mul_vec3(par_rot, global.pos(i) - par_pos) * par_svl) / 
                par_scl;
            
            // Angular Velocity
            local.ang(i) = quat_inv_mul_vec3(par_rot, global.ang(i) - par_ang);
            
            // Scalar Velocity
            local.svl(i) = global.svl(i) - par_svl;
        }
    }
}

Below you can see a comparison between velocities computed in the world space via finite difference and put through this backward kinematics function, and the local space rotational velocities sampled from our animation data. As before, propagated velocities are in red while finite difference velocities are in blue.

The results are again practically identical when compared to what we get via finite difference.

(Aside: the way we are dealing with object scales here matches how Unreal deals with them. If your animation system does not use object scales you can simply remove these parts from the equations by assuming scale is always one, and scalar velocity is always zero.)


Kineforms

These two forward and backward kinematics equations actually tell us something more general.

They tell us how to operate on kinematic objects which consist of "a position, rotation, scale and their velocities". If this type of object already has a common name please tell me, but for the chance of coining a cute term, I'm going to call these objects from now on "kineforms", or "kforms" for short.

struct kform
{
    vec3 pos = vec3(0, 0, 0);     // Position
    quat rot = quat(1, 0, 0, 0);  // Rotation
    vec3 scl = vec3(1, 1, 1);     // Scale
    vec3 vel = vec3(0, 0, 0);     // Linear Velocity
    vec3 ang = vec3(0, 0, 0);     // Angular Velocity
    vec3 svl = vec3(0, 0, 0);     // Scalar Velocity
};

Our forward kinematics equation tells us that if we have a kineform moving in its own local reference frame, but which is attached to another kineform, what the resulting kineform will be.

Similarly, the backward kinematics equation tells us how, given a kineform, to compute its relative kineform with respect to another kineform.

So the first is a kind of multiplication or composition operation for kineforms, while the second is like an inverse multiplication (a.k.a division) operation. Here is what multiplication looks like:

kform kform_mul(kform v, kform w)
{
    kform out;
    out.pos = quat_mul_vec3(v.rot, w.pos * v.scl) + v.pos;
    
    out.rot = quat_mul(v.rot, w.rot);
    
    out.scl = w.scl * v.scl;
    
    out.vel = quat_mul_vec3(v.rot, w.vel * v.scl) + v.vel + 
            cross(v.ang, quat_mul_vec3(v.rot, w.pos * v.scl)) +
            quat_mul_vec3(v.rot, w.pos * v.scl * v.svl);
    
    out.ang = quat_mul_vec3(v.rot, w.ang) + v.ang;
    
    out.svl = w.svl + v.svl;
    
    return out;
}

And here is division:

kform kform_div(kform v, kform w)
{
    kform out;
    out.pos = quat_inv_mul_vec3(v.rot, w.pos - v.pos) / v.scl;
    
    out.rot = quat_inv_mul(v.rot, w.rot);
    
    out.scl = w.scl / v.scl;
    
    out.vel = (quat_inv_mul_vec3(v.rot, w.vel - v.vel - 
        cross(v.ang, w.pos - v.pos)) - 
        quat_inv_mul_vec3(v.rot, w.pos - v.pos) * v.svl) / v.scl;
    
    out.ang = quat_inv_mul_vec3(v.rot, w.ang - v.ang);
    
    out.svl = w.svl - v.svl;

    return out;
}

Notice how these are identical to what we had in our forward kinematics and backward kinematics functions. We can also use this division operation to work out how to get the "inverse" of a kineform by setting the left-hand side to the identity kineform:

kform kform_inv(kform v)
{
    kform out;
    out.pos = quat_inv_mul_vec3(v.rot, -v.pos) / v.scl;
    
    out.rot = quat_inv(v.rot);
    
    out.scl = 1.0f / v.scl;
    
    out.vel = (quat_inv_mul_vec3(v.rot, -v.vel + cross(v.ang, v.pos)) + 
        quat_inv_mul_vec3(v.rot, v.pos) * v.svl) / v.scl;
    
    out.ang = quat_inv_mul_vec3(v.rot, -v.ang);
    
    out.svl = -v.svl;
    return out;
}

Given these general equations for composing kineforms, we can use them to transform points, velocities, directions, and all sorts of other things just by stripping out the unused parts of the object or equations:

vec3 kform_mul_dir(kform v, vec3 d)
{
    return quat_mul_vec3(v.rot, d);
}

vec3 kform_mul_pos(kform v, vec3 pos)
{
    return quat_mul_vec3(v.rot, pos * v.scl) + v.pos;
}

vec3 kform_mul_vel(kform v, vec3 pos, vec3 vel)
{
    return quat_mul_vec3(v.rot, vel * v.scl) + v.vel + 
        cross(v.ang, quat_mul_vec3(v.rot, pos * v.scl)) +
        quat_mul_vec3(v.rot, pos * v.scl * v.svl);
}

By deriving all the different operators we need, we can slowly replace code that works only on static transforms, with code that also propagates velocities. Kineforms themselves are effectively a superset of positions, rotations, and scales. By setting any of the individual components to the identity we can emulate practically any other transformation we might be interested in with or without velocities.
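
For example, here is a sketch of a couple more operators derived in the same way (the minimal vec3, quat, and kform definitions here are stand-ins for whatever your engine provides, and kform_inv_mul_pos and kform_mul_ang are names of my own invention):

```cpp
#include <cmath>

struct vec3 { float x, y, z; };
struct quat { float w, x, y, z; };

vec3 operator+(vec3 a, vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
vec3 operator-(vec3 a, vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
vec3 operator*(vec3 a, vec3 b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }
vec3 operator/(vec3 a, vec3 b) { return { a.x / b.x, a.y / b.y, a.z / b.z }; }
vec3 operator*(float s, vec3 a) { return { s * a.x, s * a.y, s * a.z }; }

vec3 cross(vec3 a, vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Rotate a vector by a unit quaternion
vec3 quat_mul_vec3(quat q, vec3 v)
{
    vec3 u = { q.x, q.y, q.z };
    vec3 t = cross(u, v);
    return v + 2.0f * (q.w * t + cross(u, t));
}

// Rotate a vector by the inverse of a unit quaternion
vec3 quat_inv_mul_vec3(quat q, vec3 v)
{
    return quat_mul_vec3({ q.w, -q.x, -q.y, -q.z }, v);
}

struct kform
{
    vec3 pos = { 0, 0, 0 };    // Position
    quat rot = { 1, 0, 0, 0 }; // Rotation
    vec3 scl = { 1, 1, 1 };    // Scale
    vec3 vel = { 0, 0, 0 };    // Linear Velocity
    vec3 ang = { 0, 0, 0 };    // Angular Velocity
    vec3 svl = { 0, 0, 0 };    // Scalar Velocity
};

// Transform a local space position by a kineform
vec3 kform_mul_pos(kform v, vec3 pos)
{
    return quat_mul_vec3(v.rot, pos * v.scl) + v.pos;
}

// Inverse of kform_mul_pos: recover the local position of a point
// given in the kineform's space (note the division by scale)
vec3 kform_inv_mul_pos(kform v, vec3 pos)
{
    return quat_inv_mul_vec3(v.rot, pos - v.pos) / v.scl;
}

// Transform an angular velocity: rotated, plus the kineform's own
// angular velocity
vec3 kform_mul_ang(kform v, vec3 ang)
{
    return quat_mul_vec3(v.rot, ang) + v.ang;
}
```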

This will be very useful as we will see a bit later! For now, back to animation systems...


Two Way Blend

If we want to blend two poses and we have a fixed blend weight that is not changing over time, then the velocities of the joints in the blended animation are going to simply be the linear interpolation of the velocities of the two input poses:

void op_blend_static(
    pose& output,
    const pose& lhs,
    const pose& rhs,
    const float alpha)
{
    for (int i = 0; i < output.pos.size; i++)
    {
        output.pos(i) = lerp(lhs.pos(i), rhs.pos(i), alpha);
        output.rot(i) = quat_slerp_shortest(lhs.rot(i), rhs.rot(i), alpha);
        output.scl(i) = eerp(lhs.scl(i), rhs.scl(i), alpha);
        output.vel(i) = lerp(lhs.vel(i), rhs.vel(i), alpha);
        output.ang(i) = lerp(lhs.ang(i), rhs.ang(i), alpha);
        output.svl(i) = lerp(lhs.svl(i), rhs.svl(i), alpha);
    }
}

When the blend weight is changing over time, we need to take this change into account as the actual blend itself introduces additional movement. Consider a blend between a stationary animation of leaning to the left and a stationary animation of leaning to the right - even though the two animations themselves are not moving, the blend will introduce movement of the body from left to right.

If we want to account for this velocity, we need the velocity of the blend weight itself. For a cross-fade blend this can be found by taking the derivative of the function that is producing the blend weight (e.g. the derivative of a smoothstep function or similar).

float smoothstep(float x)
{
    return 3*x*x - 2*x*x*x;
}

float smoothstep_dx(float x)
{
    return 6*x - 6*x*x;
}

float smoothstep_blend_alpha(
    float blend_time,
    float blend_duration,
    float eps = 1e-8f)
{
    float x = clampf(blend_time / maxf(blend_duration, eps), 0.0f, 1.0f);    
    return smoothstep(x);
}

float smoothstep_blend_alpha_dx(
    float blend_time,
    float blend_duration,
    float blend_dt,
    float eps = 1e-8f)
{
    float x = clampf(blend_time / maxf(blend_duration, eps), 0.0f, 1.0f);
    return smoothstep_dx(x) * (blend_dt / maxf(blend_duration, eps));
}

Once we have this blend weight velocity, we multiply it by the difference between the two poses to get the additional velocity introduced by the blend:

void op_blend(
    pose& output,
    const pose& lhs,
    const pose& rhs,
    const float alpha,
    const float alpha_vel,
    const float dt)
{
    for (int i = 0; i < output.pos.size; i++)
    {
        output.pos(i) = lerp(lhs.pos(i), rhs.pos(i), alpha);
        output.rot(i) = quat_slerp_shortest(lhs.rot(i), rhs.rot(i), alpha);
        output.scl(i) = eerp(lhs.scl(i), rhs.scl(i), alpha);
        
        output.vel(i) = lerp(lhs.vel(i), rhs.vel(i), alpha) + 
            (rhs.pos(i) - lhs.pos(i)) * (alpha_vel / dt);
        
        output.ang(i) = lerp(lhs.ang(i), rhs.ang(i), alpha) +
            quat_to_scaled_angle_axis(quat_abs(
                quat_mul_inv(rhs.rot(i), lhs.rot(i)))) * (alpha_vel / dt);
            
        output.svl(i) = lerp(lhs.svl(i), rhs.svl(i), alpha) +
            log(rhs.scl(i) / lhs.scl(i)) * (alpha_vel / dt);
    }
}

As you can see, this extra velocity is added to our linear interpolation of the two animations' velocities to get the final result.

Here is a comparison between velocities computed in the world space via finite difference as before, and those propagated through sampling, blending, and forward kinematics.

Depending on the down-stream application of these velocities, we might want to give people control over whether they actually want to apply the additional velocity we get due to the changing blend weight. As mentioned previously, quickly blending two very different poses can produce a large velocity which we might actually want to ignore for things like physically based animation - even when it better reflects the true movement of the visual character.


Additive Animations

Additive animations (i.e. animations representing some offset we apply to the playing animation) have similar equations to the two way blend. For a fixed additive weight we simply add the velocities of the additive animation, scaled by the weight, to the velocities of the input animation.

void op_additive_inplace_static(
    pose& output,       // Pose to apply the additive to
    const pose& add,    // Additive pose
    const float alpha,  // Weight for additive
    const float dt)
{
    for (int i = 0; i < output.pos.size; i++)
    {
        quat add_rot = quat_from_scaled_angle_axis(
            alpha * quat_to_scaled_angle_axis(quat_abs(add.rot(i))));
        
        output.pos(i) = alpha * add.pos(i) + output.pos(i);
        
        output.rot(i) = quat_mul(add_rot, output.rot(i));
        
        output.scl(i) = exp(alpha * log(add.scl(i))) * output.scl(i);
        
        output.vel(i) = output.vel(i) + alpha * add.vel(i);
        
        output.ang(i) = quat_mul_vec3(add_rot, output.ang(i)) + 
            alpha * add.ang(i);
        
        output.svl(i) = output.svl(i) + alpha * add.svl(i);
    }
}

Note here how we need to transform the angular velocity of the base pose by our additive rotation. This is because our additive is being applied on the left hand side during the quaternion multiplication. If we apply our additive on the right hand side then we need to transform the additive angular velocity instead.

void op_additive_local_inplace_static(
    pose& output,       // Pose to apply the additive to
    const pose& add,    // Additive pose
    const float alpha)  // Weight for additive
{
    for (int i = 0; i < output.pos.size; i++)
    {
        quat add_rot = quat_from_scaled_angle_axis(
            alpha * quat_to_scaled_angle_axis(quat_abs(add.rot(i))));
        
        quat cur_rot = output.rot(i);

        output.pos(i) = alpha * add.pos(i) + output.pos(i);
        
        output.rot(i) = quat_mul(cur_rot, add_rot);
        
        output.scl(i) = exp(alpha * log(add.scl(i))) * output.scl(i);
        
        output.vel(i) = output.vel(i) + alpha * add.vel(i);
        
        output.ang(i) = output.ang(i) + 
            quat_mul_vec3(cur_rot, alpha * add.ang(i));
        
        output.svl(i) = output.svl(i) + alpha * add.svl(i);
    }
}

For a weight that is changing over time, we need to know the rate of change of the weight and multiply it by the additive animation to account for this change:

void op_additive_inplace(
    pose& output,          // Pose to apply the additive to
    const pose& add,       // Additive pose
    const float alpha,     // Weight for additive
    const float alpha_vel, // Rate of change for the weight
    const float dt)        // Delta time
{
    for (int i = 0; i < output.pos.size; i++)
    {
        quat add_rot = quat_from_scaled_angle_axis(
            alpha * quat_to_scaled_angle_axis(quat_abs(add.rot(i))));
        
        output.pos(i) = alpha * add.pos(i) + output.pos(i);
        
        output.rot(i) = quat_mul(add_rot, output.rot(i));
        
        output.scl(i) = exp(alpha * log(add.scl(i))) * output.scl(i);

        output.vel(i) = output.vel(i) + alpha * add.vel(i) + 
            add.pos(i) * (alpha_vel / dt);
        
        output.ang(i) = quat_mul_vec3(add_rot, output.ang(i)) + 
            alpha * add.ang(i) +
            quat_to_scaled_angle_axis(quat_abs(add.rot(i))) * (alpha_vel / dt);
            
        output.svl(i) = output.svl(i) + alpha * add.svl(i) +
            log(add.scl(i)) * (alpha_vel / dt);
    }
}

And here is how it looks compared to velocities calculated via finite difference:

Like before, there is very little difference between the two.


Inertialization

Offset-based inertialization is really just a kind of additive animation which fades out to zero over time. So like before, we just need to add the velocities of this additive animation to account for them. In this case, we get them from the derivative of our inertialization function (such as a spring, or cubic function) itself. Here I've implemented and optimized a basic cubic inertializer function:

void op_inertialize(
    pose& output,               // Output Animation
    const pose& offset,         // Additive Offset
    const pose& input,          // Input Animation
    const float blend_time,     // Current blend time
    const float blend_duration, // Total blend duration
    const float eps = 1e-8f)
{
    // Scale and clamp blend time to be between 0 and 1
    float t = clampf(blend_time / (blend_duration + eps), 0.0f, 1.0f);
    
    // Compute coefficients determined by cubic decay
    float w0 = 2.0f*t*t*t - 3.0f*t*t + 1.0f;
    float w1 = (t*t*t - 2.0f*t*t + t) * blend_duration;
    float w2 = (6.0f*t*t - 6.0f*t) / (blend_duration + eps);
    float w3 = 3.0f*t*t - 4.0f*t + 1.0f;
  
    for (int i = 0; i < output.pos.size; i++)
    {
        // Compute decayed offset
        vec3 off_pos = w0 * offset.pos(i) + w1 * offset.vel(i);
        vec3 off_vel = w2 * offset.pos(i) + w3 * offset.vel(i);
        quat off_rot = quat_from_scaled_angle_axis(
            w0 * quat_to_scaled_angle_axis(offset.rot(i)) + w1 * offset.ang(i));
        
        vec3 off_ang = w2 * quat_to_scaled_angle_axis(offset.rot(i)) + 
            w3 * offset.ang(i);
        vec3 off_scl = exp(w0 * log(offset.scl(i)) + w1 * offset.svl(i));
        vec3 off_svl = w2 * log(offset.scl(i)) + w3 * offset.svl(i);
        
        // Apply decayed offset additively
        output.pos(i) = input.pos(i) + off_pos;
        output.vel(i) = input.vel(i) + off_vel;        
        output.rot(i) = quat_mul(off_rot, input.rot(i));
        output.ang(i) = quat_mul_vec3(off_rot, input.ang(i)) + off_ang;
        output.scl(i) = input.scl(i) * off_scl;
        output.svl(i) = input.svl(i) + off_svl;
    }
}

And here is how it looks with our standard finite-difference comparison.

Again, very little visual difference.


Dead Blending

Similarly, dead-blending is really just a normal two-way blend. Assuming we can get the velocity of our extrapolated joint transforms, it doesn't require special treatment either:

void op_extrapolate_decayed(
    pose& output,
    const pose& input,
    const float dt,
    const float halflife,
    const float eps = 1e-8f)
{
    // Compute amount of extrapolation and velocity decay factor
    float y = 0.69314718056f / (halflife + eps);
    float alpha = (1.0f - fast_negexpf(y * dt)) / (y + eps);
    float dalpha = fast_negexpf(y * dt);

    for (int i = 0; i < input.pos.size; i++)
    {   
        if (i == 0)
        {
            // Don't extrapolate root
            output.pos(i) = input.pos(i);
            output.rot(i) = input.rot(i);
            output.scl(i) = input.scl(i);
            output.vel(i) = input.vel(i);
            output.ang(i) = input.ang(i);
            output.svl(i) = input.svl(i);
        }
        else
        {
            // Extrapolated state
            output.pos(i) = input.pos(i) + input.vel(i) * alpha;
            output.rot(i) = quat_mul(quat_from_scaled_angle_axis(
                input.ang(i) * alpha), input.rot(i));
            output.scl(i) = input.scl(i) * exp(input.svl(i) * alpha);
            
            // Decay velocities
            output.vel(i) = input.vel(i) * dalpha;
            output.ang(i) = input.ang(i) * dalpha;
            output.svl(i) = input.svl(i) * dalpha;
        }
    }
}
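As an aside, the `alpha` used above can be understood as the integral of the exponential velocity decay factor: integrating `exp(-y*s)` from `s = 0` to `s = dt` gives `(1 - exp(-y*dt)) / y`. We can sanity-check this numerically (again, this snippet is not from the article):

```cpp
#include <math.h>

// Analytic amount of extrapolation accumulated over dt when the
// velocity decays with rate y (matching alpha above, minus the eps).
float extrapolation_alpha(float y, float dt)
{
    return (1.0f - expf(-y * dt)) / y;
}

// The same quantity computed by numerically integrating the decayed
// velocity scale exp(-y*s) from s=0 to s=dt using the midpoint rule.
float extrapolation_alpha_numerical(float y, float dt, int steps = 10000)
{
    float h = dt / steps, total = 0.0f;
    for (int i = 0; i < steps; i++)
    {
        float s = (i + 0.5f) * h;
        total += expf(-y * s) * h;
    }
    return total;
}
```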

void op_dead_blending(
    pose& output,
    pose& extrapolated_pose,
    const pose& transition_pose,
    const pose& input,
    const float extrapolation_halflife,
    const float blend_time,
    const float blend_duration,
    const float dt)
{
    // Extrapolate transition pose forward in time
    op_extrapolate_decayed(
        extrapolated_pose,
        transition_pose,
        blend_time,
        extrapolation_halflife);
    
    // Compute blend factor
    float blend_alpha = smoothstep_blend_alpha(
        blend_time, blend_duration);
        
    // Compute blend factor velocity
    float blend_alpha_vel = smoothstep_blend_alpha_dx(
        blend_time, blend_duration, dt);
    
    // Apply Blend
    op_blend(
        output,
        extrapolated_pose,
        input,
        blend_alpha,
        blend_alpha_vel,
        dt);
}
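The `smoothstep_blend_alpha` and `smoothstep_blend_alpha_dx` helpers used above aren't shown in this snippet. A minimal sketch of how they might look - assuming a standard smoothstep over the blend duration, and a finite-difference derivative over the timestep `dt` - is the following:

```cpp
#include <math.h>

float clampf(float x, float lo, float hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

// Blend factor rising smoothly from 0 to 1 over the blend duration.
float smoothstep_blend_alpha(
    float blend_time, float blend_duration, float eps = 1e-8f)
{
    float t = clampf(blend_time / (blend_duration + eps), 0.0f, 1.0f);
    return 3.0f*t*t - 2.0f*t*t*t;
}

// Velocity of the blend factor, here approximated by a finite
// difference over dt (the analytic derivative
// (6*t - 6*t*t) / blend_duration would work equally well).
float smoothstep_blend_alpha_dx(
    float blend_time, float blend_duration, float dt)
{
    return (smoothstep_blend_alpha(blend_time, blend_duration) -
            smoothstep_blend_alpha(blend_time - dt, blend_duration)) / dt;
}
```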

And this is what it looks like in action:


Procedural Animation

If we are applying any procedural animation to the character, we want these methods to also adjust the velocities as appropriate. One relatively straightforward way to do this is to write our procedural animation functions in terms of the kineform objects defined earlier.

For example, here is one way we might implement a basic procedural look-at function normally:

void op_lookat_static(
    pose& local,
    const pose& global,
    const vec3 target_pos,
    const vec3 fwd = vec3(0,1,0))
{
    // Find the target look-at position local to the head
    vec3 tar_pos = quat_inv_mul_vec3(global.rot(Bone_Head), 
        target_pos - global.pos(Bone_Head));
    
    // Find the rotation between the target and the forward dir
    quat diff = quat_between(fwd, normalize(tar_pos));
    
    // Rotate the head to face toward the target
    local.rot(Bone_Head) = quat_mul(local.rot(Bone_Head), diff);
}

And here is the same function, but using kineforms and therefore accounting for velocities:

void op_lookat(
    pose& local,
    const pose& global,
    const vec3 target_pos,
    const vec3 target_vel,
    const vec3 fwd = vec3(0,1,0))
{
    // Find the target look-at position local to the head
    kform tar_pos = kform_inv_mul(global(Bone_Head), 
        kform_from_pos_vel(target_pos, target_vel));

    // Find the rotation between the target and the forward dir
    kform diff = kform_rot_ang_between(
        kform_from_pos_vel(fwd, vec3()), 
        kform_pos_vel_normalize(tar_pos));

    // Rotate the head to face toward the target
    kform head_rot = kform_mul(local(Bone_Head), diff);
    local.rot(Bone_Head) = head_rot.rot;
    local.ang(Bone_Head) = head_rot.ang;
}

This of course relies on defining a few more functions for kform objects:

kform kform_from_pos_vel(vec3 pos, vec3 vel)
{
    kform out;
    out.pos = pos;
    out.vel = vel;
    return out;
}

kform kform_pos_vel_normalize(kform d)
{
    kform out = d;
    out.pos = normalize(d.pos);
    out.vel = normalize_dx(d.pos, d.vel);
    return out;
}

kform kform_rot_ang_between(kform dir0, kform dir1)
{
    quat rot = quat_between(dir0.pos, dir1.pos);
    quat drot = quat_between_dx(dir0.pos, dir0.vel, dir1.pos, dir1.vel);
    
    kform out;
    out.rot = rot;
    out.ang = quat_delta_to_angular_velocity(rot, drot);
 
    return out;
}

Which in turn means we need derivatives of some functions such as normalize and quat_between:

float sqrtf_dx(float x, float dx, float eps=1e-8f)
{
    return dx / (2.0f * sqrtf(x) + eps);
}

vec3 cross_dx(vec3 v, vec3 dv, vec3 w, vec3 dw)
{
    return cross(dv, w) + cross(v, dw);
}

float dot_dx(vec3 v, vec3 dv, vec3 w, vec3 dw)
{
    return dot(dv, w) + dot(v, dw);
}

vec3 normalize_dx(vec3 v, vec3 dv, float eps=1e-8f)
{
    return (dv - normalize(v) * dot(dv, normalize(v))) / 
        (length(v) + eps);
}

quat quat_between_dx(vec3 p, vec3 dp, vec3 q, vec3 dq)
{
    vec3 c = cross(p, q);
    vec3 dc = cross_dx(p, dp, q, dq);
 
    float a = sqrtf(dot(p, p) * dot(q, q)) + dot(p, q);
    
    float da = sqrtf_dx(
        dot(p, p) * dot(q, q), 
        dot_dx(p, dp, p, dp) * dot(q, q) + 
        dot_dx(q, dq, q, dq) * dot(p, p)) + dot_dx(p, dp, q, dq);
    
    return quat_normalize_dx(
        quat(a, c.x, c.y, c.z), 
        quat(da, dc.x, dc.y, dc.z));
}
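The `quat_normalize_dx` used at the end follows the same pattern as `normalize_dx`, just applied to the four quaternion components. A self-contained sketch (using a minimal stand-in `quat` struct, since the article's own type isn't shown here) might look like this:

```cpp
#include <math.h>

// Minimal stand-in for the article's quat type (w, x, y, z).
struct quat { float w, x, y, z; };

// Derivative of quaternion normalization: for q(t) with derivative dq,
// d/dt (q / |q|) = (dq - qn * dot(dq, qn)) / |q|  where qn = q / |q|.
quat quat_normalize_dx(quat q, quat dq, float eps = 1e-8f)
{
    float len = sqrtf(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z) + eps;
    quat qn = { q.w / len, q.x / len, q.y / len, q.z / len };
    float d = dq.w*qn.w + dq.x*qn.x + dq.y*qn.y + dq.z*qn.z;
    return {
        (dq.w - qn.w * d) / len,
        (dq.x - qn.x * d) / len,
        (dq.y - qn.y * d) / len,
        (dq.z - qn.z * d) / len };
}
```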

And here is what the angular velocities of the character look like using this look-at function when compared to finite-differencing:


Finite Difference

For any procedural adjustments whose velocities we can't easily derive, or which we can't write in terms of kforms, we can use a form of finite differencing.

In many cases, there is still no need to keep around the pose from the previous frame. If we have the velocities of all the inputs to the system, we can use those to compute a finite-difference approximation of the output in a single frame, by evaluating the procedural adjustment function twice - once on the current input, and once on the input extrapolated forward (or backward) in time.

For example, here is how we might do this for a simple two-joint IK system where we have the velocity of the IK target.

void op_two_bone_ik(
    pose& local_curr,
    pose& local_temp,
    pose& global_curr,
    pose& global_temp,
    const slice1d<int> bone_parents,
    const vec3 heel_pos,
    const vec3 heel_vel,
    const float delta = 0.01f)
{
    op_extrapolate_prev(local_temp, local_curr, delta);
    
    op_fk_static(global_curr, local_curr, bone_parents);
    op_fk_static(global_temp, local_temp, bone_parents);
    
    op_two_bone_ik_static(local_curr, global_curr, heel_pos);
    op_two_bone_ik_static(local_temp, global_temp, heel_pos - delta * heel_vel);
    
    op_finite_diff_inplace(local_curr, local_temp, delta);
}
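The helpers `op_extrapolate_prev` and `op_finite_diff_inplace` aren't shown here, but the core pattern is easy to see on a single scalar channel. This is a hypothetical reduction (the `adjust` function is just a stand-in for any procedural adjustment): step the input backward by its velocity, evaluate twice, and finite-difference the results.

```cpp
#include <math.h>

// Stand-in for an arbitrary procedural adjustment function.
float adjust(float x) { return x * x; }

// Given an input x with known velocity dx, recover the output velocity
// by evaluating the adjustment at the current input and at the input
// extrapolated backward in time by delta, then finite-differencing.
void adjust_with_velocity(
    float& out, float& out_vel,
    float x, float dx,
    float delta = 0.01f)
{
    float curr = adjust(x);
    float prev = adjust(x - delta * dx);
    out = curr;
    out_vel = (curr - prev) / delta;
}
```

For `adjust(x) = x*x` this recovers approximately `2*x*dx`, the true chain-rule velocity.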

And here is the comparison to the world-space finite difference velocities computed from the previous frame:

Naturally this is a little more expensive since it requires two evaluations of the function. But that might be a price worth paying if the function is cheap and it saves us from managing additional state.


Conclusion

Handling velocities in animation systems is definitely a little more difficult than not handling them - but I hope in this article I have given you a sense of how that difficulty can be managed: first by providing derivations for lots of the things that are commonly done in animation systems, and second by showing how lots of the heavy lifting can be done for us if we write our logic in terms of "kineforms" which take velocities into account naturally.

By rebuilding things based on these primitives we can go quite far - and even when resorting to finite differencing for more complex procedural adjustments, if we are willing to evaluate things twice, there are often better ways to do it than storing a history of poses.

Like the requirement for motion vectors in rendering, building an animation system that explicitly deals with velocities adds significant up-front complexity. But at the end of the day, even though we try to hide it, careful consideration of velocities is something both the programmers and the users of these systems - animators and technical animators - always end up having to face eventually, one way or another.

Perhaps being up front about this fact will be better in the long run than pretending these issues don't exist, and trying to debug all the weird and wonderful glitches that ignoring them brings. I also hope that having velocities easily available throughout the animation system could encourage new approaches and techniques, in the same way the use of motion vectors started with things like motion blur, but is now used for temporal super resolution and DLSS.

Either way, I hope this article has been interesting and as always thanks for reading.


Source Code

All the source code for this article can be found here.

Disclaimer: historically, whenever I've written a long article with a lot of mathematical derivations, even when I try my best to check things, there have tended to be at least one or two mistakes - so please treat everything you read here with some caution. And, please do send me any corrections you find! They are hugely appreciated.
