Closed-Form 3x3 Matrix Decompositions

08/08/2024

There's a really interesting bit of mathematics in this paper by Huancheng Lin, Floyd M. Chitalu, and Taku Komura that I'm certain will have passed many graphics people by. It's a closed-form solution for both polar decomposition and singular value decomposition of 3x3 matrices (and they also provide derivations for 2x2 matrices).

Both of these decompositions are incredibly useful and turn up all the time in graphics, such as in shape matching, finite element simulation, skinning, and many other places. I was excited to learn about this result because I'd not come across a closed-form solution before (let me know if you knew of one already).

I'd always assumed some kind of iterative method, such as Müller's method for polar decomposition or the power iteration method for SVD, was required. Having a closed-form solution is nice because it makes it easier to take gradients, which is the original reason for the derivation in this paper. As a bonus, this closed-form solution is faster (according to their benchmarks), and always returns a rotation component with a positive determinant, meaning it never flips sides, which is very important for finite element simulation.

If you just want the code, feel free to skip to the bottom of this article. But I find the derivation interesting too, because it uses quite a few unexpected matrix identities. Here is my understanding of how it works:


Polar Decomposition

Given a matrix \( \mathbf{M} \) which we want to factor into a symmetric stretch part \( \mathbf{S} \) and an orthonormal rotation part \( \mathbf{R} \) such that \( \mathbf{M} = \mathbf{R} \mathbf{S} \), this method works by first computing \( \mathbf{R} \) and then computing \( \mathbf{S} \) via \( \mathbf{S} = \mathbf{R}^\intercal \mathbf{M} \). It starts by defining \( f \) as the trace of the matrix you get when you multiply \( \mathbf{M}^\intercal \) by \( \mathbf{R} \), i.e. \( f = \text{tr}(\mathbf{M}^\intercal \mathbf{R}) \).
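
To get a feel for what \( f \) represents, we can substitute \( \mathbf{M} = \mathbf{R} \mathbf{S} \) and use the fact that \( \mathbf{R}^\intercal \mathbf{R} = \mathbf{I} \):

\begin{align*} f = \text{tr}(\mathbf{M}^\intercal \mathbf{R}) = \text{tr}(\mathbf{S}^\intercal \mathbf{R}^\intercal \mathbf{R}) = \text{tr}(\mathbf{S}) \end{align*}

In other words, \( f \) is simply the trace of the stretch part \( \mathbf{S} \), i.e. the sum of its eigenvalues.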

Then, via another matrix identity, we can write \( \mathbf{R} \) in terms of the partial derivatives of \( f \) with respect to \( \mathbf{M} \), i.e. \( \mathbf{R} = \frac{\delta f}{\delta \mathbf{M}} \), or in other words \( \mathbf{R} = \frac{\delta \text{tr}(\mathbf{M}^\intercal \mathbf{R})}{\delta \mathbf{M}} \). The problem we have now is that \( \mathbf{R} \) appears on both sides of the equation, so there is no way for us to use this equation to compute \( \mathbf{R} \) (for now)!
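
If you are wondering how this identity can hold when \( \mathbf{R} \) itself depends on \( \mathbf{M} \), consider perturbing \( \mathbf{M} \) a little:

\begin{align*} \delta f = \text{tr}(\delta \mathbf{M}^\intercal \mathbf{R}) + \text{tr}(\mathbf{M}^\intercal \delta \mathbf{R}) = \text{tr}(\delta \mathbf{M}^\intercal \mathbf{R}) + \text{tr}(\mathbf{S} \mathbf{R}^\intercal \delta \mathbf{R}) \end{align*}

Since \( \mathbf{R}^\intercal \delta \mathbf{R} \) is antisymmetric (which follows from differentiating \( \mathbf{R}^\intercal \mathbf{R} = \mathbf{I} \)) and \( \mathbf{S} \) is symmetric, the second trace is always zero, so only the term involving \( \mathbf{R} \) survives.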

The key breakthrough from Lin et al. is that they show that (when \( \mathbf{M} \) is a 3x3 matrix) there exists a different way to compute \( f \) which does not involve \( \mathbf{R} \). More specifically, they show that \( f \) is also equal to one of the roots of the following polynomial:

\begin{align*} x^4 - 2 A x^2 - 8 C x - A^2 + 2 B &= 0 \end{align*}

Where \( A \), \( B \), and \( C \) are defined as follows:

\begin{align*} A &= \text{tr}(\mathbf{M}^\intercal \mathbf{M}) \\ B &= \| \mathbf{M}^\intercal \mathbf{M} \|^2 \\ C &= \text{det}(\mathbf{M}) \\ \end{align*}

Here \( A \) and \( B \) are the first two so-called Cauchy-Green Invariants of \( \mathbf{M} \), and \( C \) is the square root of the third Cauchy-Green Invariant (up to sign, since the determinant of \( \mathbf{M} \) can be negative).
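
One way I find helpful for making sense of this polynomial is to write \( A \), \( B \), and \( C \) in terms of the singular values \( \sigma_1 \), \( \sigma_2 \), \( \sigma_3 \) of \( \mathbf{M} \) (taking \( \text{det}(\mathbf{M}) \geq 0 \) for simplicity):

\begin{align*} A &= \sigma_1^2 + \sigma_2^2 + \sigma_3^2 \\ B &= \sigma_1^4 + \sigma_2^4 + \sigma_3^4 \\ C &= \sigma_1 \sigma_2 \sigma_3 \\ \end{align*}

With these substitutions you can verify that the four roots of the quartic are exactly \( \sigma_1 + \sigma_2 + \sigma_3 \), \( \sigma_1 - \sigma_2 - \sigma_3 \), \( -\sigma_1 + \sigma_2 - \sigma_3 \), and \( -\sigma_1 - \sigma_2 + \sigma_3 \), the first of which is \( f = \text{tr}(\mathbf{S}) \). This is also why the code further down can recover the individual singular values by averaging pairs of roots.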

This gives a closed-form solution for \( f \) which does not involve \( \mathbf{R} \), which in turn means we also get a closed-form solution for the partial derivative of \( f \) with respect to \( \mathbf{M} \) which does not involve \( \mathbf{R} \). As shown in the paper, the partial derivative of \( f \) with respect to \( \mathbf{M} \) when using this equation can be written, via the chain rule, in terms of partial derivatives involving \( A \), \( B \), and \( C \):

\begin{align*} \mathbf{R} = \frac{\delta f}{\delta \mathbf{M}} = \frac{\delta f}{\delta A} \frac{\delta A}{\delta \mathbf{M}} + \frac{\delta f}{\delta B} \frac{\delta B}{\delta \mathbf{M}} + \frac{\delta f}{\delta C} \frac{\delta C}{\delta \mathbf{M}} \end{align*}

The partial derivatives of \( A \), \( B \), and \( C \) with respect to \( \mathbf{M} \) are then given as follows:

\begin{align*} \frac{\delta A}{\delta \mathbf{M}} &= 2 \mathbf{M} \\ \frac{\delta B}{\delta \mathbf{M}} &= 4 \mathbf{M} \mathbf{M}^\intercal \mathbf{M} \\ \frac{\delta C}{\delta \mathbf{M}} &= \text{adj}(\mathbf{M})^\intercal \\ \end{align*}

where \( \text{adj}(\mathbf{M})^\intercal \) can be more easily computed in the 3x3 case by taking the cross products of the columns of \( \mathbf{M} \).
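
As a quick sanity check of the first of these identities: \( A \) is just the sum of the squares of the entries of \( \mathbf{M} \), so differentiating with respect to each entry individually gives back twice that entry:

\begin{align*} A = \text{tr}(\mathbf{M}^\intercal \mathbf{M}) = \sum_{ij} \mathbf{M}_{ij}^2 \qquad \frac{\delta A}{\delta \mathbf{M}_{ij}} = 2 \mathbf{M}_{ij} \end{align*}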

Then, the partial derivatives of the function \( f \) with respect to \( A \), \( B \), and \( C \) can be defined as follows:

\begin{align*} \frac{\delta f}{\delta A} &= \frac{2f^2 + 2A}{4f^3 - 4Af - 8C} \\ \frac{\delta f}{\delta B} &= \frac{-2}{4f^3 - 4Af - 8C} \\ \frac{\delta f}{\delta C} &= \frac{8f}{4f^3 - 4Af - 8C} \\ \end{align*}
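
These come from implicitly differentiating the quartic. If we write it as \( g(f, A, B, C) = f^4 - 2 A f^2 - 8 C f - A^2 + 2 B = 0 \), then, for example:

\begin{align*} \frac{\delta f}{\delta A} = -\frac{\delta g / \delta A}{\delta g / \delta f} = -\frac{-2f^2 - 2A}{4f^3 - 4Af - 8C} = \frac{2f^2 + 2A}{4f^3 - 4Af - 8C} \end{align*}

and similarly for \( B \) and \( C \). Notice that the shared denominator is just the derivative of the quartic evaluated at \( f \), which is why the code below only computes it once.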

Putting it all together it looks like this:

// Partial derivative of matrix determinant (adj(M)^T)
mat3 mat3_ddet(const mat3 M)
{
    vec3 da = cross(M.c1(), M.c2());
    vec3 db = cross(M.c2(), M.c0());
    vec3 dc = cross(M.c0(), M.c1());
    return mat3_from_cols(da, db, dc);
}

// The squared Frobenius norm of a matrix (|| M ||^2)
float mat3_frobenius_squared(const mat3 M)
{
    return M.xx*M.xx + M.xy*M.xy + M.xz*M.xz +
           M.yx*M.yx + M.yy*M.yy + M.yz*M.yz +
           M.zx*M.zx + M.zy*M.zy + M.zz*M.zz;
}

// Returns the real root with the largest magnitude which solves the cubic
// equation of the form x^3 + a*x^2 + b*x + c = 0
static inline float cubic_max_abs_root(float a, float b, float c)
{
    float q = (a*a - 3.0f*b) / 9.0f;
    float r = (2.0f*a*a*a - 9.0f*a*b + 27.0f*c) / 54.0f;
    
    if (r*r < q*q*q) 
    {
        // Three Real Roots
        float t = acosf(clampf(r / sqrtf(q*q*q), -1.0f, 1.0f));
        float x0 = -2.0f * sqrtf(q) * cosf((t             ) / 3.0f) - a / 3.0f;
        float x1 = -2.0f * sqrtf(q) * cosf((t + 2.0f * PIf) / 3.0f) - a / 3.0f;
        float x2 = -2.0f * sqrtf(q) * cosf((t - 2.0f * PIf) / 3.0f) - a / 3.0f;
        return fabsf(x0) >= fabsf(x1) && fabsf(x0) >= fabsf(x2) ? x0 :
               fabsf(x1) >= fabsf(x2) ? x1 : x2;
    }
    else
    {
        // One Real Root
        float e = powf(sqrtf(r*r - q*q*q) + fabsf(r), 1.0f / 3.0f);
        e = r > 0.0f ? -e : e;
        float f = e == 0.0f ? 0.0f : q / e;
        return (e + f) - a / 3.0f;
    }
}

// Computes tr(M^T R) and the singular values from A, B and C
float mat3_f_trace_cg(vec3& s, float A, float B, float C)
{
    // Compute polynomial coefficients
    float b = -2.0f * A;
    float c = -8.0f * C;
    float d = -A*A + 2.0f * B;
    
    // Find root with largest magnitude using cubic resolvent coefficients 
    float y = cubic_max_abs_root(-b, -4.0f*d, -c*c + 4.0f*b*d);

    // Find quadratics for each pair of quartic roots
    float q1, p1, q2, p2;
    
    float D = y*y - 4.0f*d;
    if (D < 1e-10f)
    {
        float D2 = maxf(-4.0f * (b - y), 0.0f);
        q1 = q2 = y * 0.5f;
        p1 = +sqrtf(D2) * 0.5f;            
        p2 = -sqrtf(D2) * 0.5f;            
    }
    else
    {
        q1 = (y + sqrtf(D)) * 0.5f;
        q2 = (y - sqrtf(D)) * 0.5f;
        p1 = (-c) / (q1 - q2);
        p2 = (+c) / (q1 - q2);
    }

    // Find first two roots
    float D01 = maxf(p1*p1 - 4.0f*q1, 0.0f);
    float x0 = (-p1 + sqrtf(D01)) * 0.5f;
    float x1 = (-p1 - sqrtf(D01)) * 0.5f;
    
    // Find second two roots
    float D23 = maxf(p2*p2 - 4.0f*q2, 0.0f);
    float x2 = (-p2 - sqrtf(D23)) * 0.5f;
    float x3 = (-p2 + sqrtf(D23)) * 0.5f;
    
    // Singular Values
    s.x = (x0 + x3) * 0.5f;
    s.y = (x1 + x3) * 0.5f;
    s.z = (x2 + x3) * 0.5f;
    
    // return trace root
    return x3;
}

// Computes the polar decomposition and singular values of a matrix M using the 
// closed-form solution
void mat3_polar(mat3& R, mat3& S, vec3& s, const mat3 M)
{
    float A = mat3_frobenius_squared(M);
    float B = mat3_frobenius_squared(mat3_transpose_mul(M, M));
    float C = mat3_det(M);
    
    float f = mat3_f_trace_cg(s, A, B, C);
    float denom = 4.0f*f*f*f - 4.0f*A*f - 8.0f*C;
    
    if (fabsf(denom) < 1e-10f)
    {
        R = mat3_eye();
        S = M; 
        return;
    }        
    
    float dfdA = (2.0f*f*f + 2.0f*A) / denom;
    float dfdB = -2.0f / denom;
    float dfdC = (8.0f*f) / denom;

    mat3 dAdM = 2.0f * M;
    mat3 dBdM = 4.0f * mat3_mul(M, mat3_transpose_mul(M, M));
    mat3 dCdM = mat3_ddet(M);
    
    R = dfdA*dAdM + dfdB*dBdM + dfdC*dCdM;
    S = mat3_transpose_mul(R, M); 
}
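
Before moving on, here is the kind of quick sanity check I like to use for this sort of function. This is just a minimal sketch - it assumes the mat3 type supports subtraction in the same way it supports the addition and scalar multiplication used above, and the test matrix is completely arbitrary:

// Quick sanity check for mat3_polar: R should be orthonormal
// and the two factors should reconstruct the input matrix
void mat3_polar_check()
{
    mat3 M = mat3_from_cols(
        vec3( 1.0f, 0.2f, 0.0f),
        vec3(-0.3f, 2.0f, 0.1f),
        vec3( 0.0f, 0.5f, 0.7f));

    mat3 R, S;
    vec3 s;
    mat3_polar(R, S, s, M);

    // R^T R should be close to the identity
    assert(mat3_frobenius_squared(mat3_transpose_mul(R, R) - mat3_eye()) < 1e-6f);

    // R S should be close to M
    assert(mat3_frobenius_squared(mat3_mul(R, S) - M) < 1e-6f);
}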

You may have noticed that from mat3_polar we don't just get \( \mathbf{R} \) and \( \mathbf{S} \) - we also get a variable called s which will contain the singular values of \( \mathbf{M} \), which we can denote using \( \Sigma \). These come for free when we solve the quartic, and will come in very handy as we will see next...


Singular Value Decomposition

From the polar decomposition we can compute the SVD. If we denote the SVD of \( \mathbf{M} \) as \( \mathbf{M} = \mathbf{U} \Sigma \mathbf{V}^\intercal \), we can use the identity \( \mathbf{S} = \mathbf{V} \Sigma \mathbf{V}^\intercal \) to solve for \( \mathbf{V} \), and then trivially \( \mathbf{U} = \mathbf{R} \mathbf{V} \).
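
To see why \( \mathbf{U} = \mathbf{R} \mathbf{V} \) works we can simply substitute:

\begin{align*} \mathbf{M} = \mathbf{R} \mathbf{S} = \mathbf{R} \mathbf{V} \Sigma \mathbf{V}^\intercal = (\mathbf{R} \mathbf{V}) \Sigma \mathbf{V}^\intercal = \mathbf{U} \Sigma \mathbf{V}^\intercal \end{align*}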

To solve \( \mathbf{S} = \mathbf{V} \Sigma \mathbf{V}^\intercal \) for \( \mathbf{V} \) we essentially need to find the eigenvectors and eigenvalues of \( \mathbf{S} \), as the eigenvectors will correspond to the columns of the matrix \( \mathbf{V} \), while the eigenvalues will correspond to the singular values \( \Sigma \).

Given we already have the singular values \( \Sigma \) from the polar decomposition, and because \( \mathbf{S} \) is symmetric, there exists a nice closed-form solution for extracting the eigenvectors \( \mathbf{V} \). The basic idea is that for an eigenvalue \( \sigma \), every row of \( \mathbf{S} - \sigma \mathbf{I} \) is orthogonal to the corresponding eigenvector, so the cross product of two independent rows recovers it.

Since this method is much better known and well studied I won't go through the derivation here, but in my case I found this paper to be a really useful resource when it came to the actual implementation.

// Returns true if a matrix is symmetric within some tolerance
bool mat3_is_sym(const mat3 M, const float tolerance = 1e-4f)
{
    return 
        fabsf(M.xy - M.yx) < tolerance &&
        fabsf(M.zx - M.xz) < tolerance &&
        fabsf(M.yz - M.zy) < tolerance;
}

// For a symmetric matrix returns the first eigen vector given the first eigen value
vec3 mat3_sym_evec0(const mat3 M, const float eval0)
{
    assert(mat3_is_sym(M));
    
    vec3 row0 = vec3(M.xx - eval0, M.xy, M.xz);
    vec3 row1 = vec3(M.yx, M.yy - eval0, M.yz);
    vec3 row2 = vec3(M.zx, M.zy, M.zz - eval0);
    
    vec3 r0xr1 = cross(row0, row1);
    vec3 r0xr2 = cross(row0, row2);
    vec3 r1xr2 = cross(row1, row2);
    
    float d0 = dot(r0xr1, r0xr1);
    float d1 = dot(r0xr2, r0xr2);
    float d2 = dot(r1xr2, r1xr2);
    
    if (d0 == 0.0f && d1 == 0.0f && d2 == 0.0f)
    {
        // No valid eigen vectors
        return vec3(1.0f, 0.0f, 0.0f);
    }
    else
    {
        // Return largest candidate
        return d0 >= d1 && d0 >= d2 ? r0xr1 / sqrtf(d0) :
               d1 >= d0 && d1 >= d2 ? r0xr2 / sqrtf(d1) :
                                      r1xr2 / sqrtf(d2); 
    }
}

// For a symmetric matrix returns the second eigen vector given the first eigen vector 
// and the second eigen value
vec3 mat3_sym_evec1(const mat3 M, const vec3 evec0, const float eval1)
{
    assert(mat3_is_sym(M));

    vec3 u = fabsf(evec0.x) > fabsf(evec0.y) ?
        vec3(-evec0.z, 0.0f, +evec0.x) / sqrtf(evec0.x * evec0.x + evec0.z * evec0.z) :
        vec3(0.0f, +evec0.z, -evec0.y) / sqrtf(evec0.y * evec0.y + evec0.z * evec0.z);
    
    vec3 v = cross(evec0, u);
    
    float m00 = dot(u, mat3_mul_vec3(M, u)) - eval1;
    float m01 = dot(u, mat3_mul_vec3(M, v));
    float m11 = dot(v, mat3_mul_vec3(M, v)) - eval1;

    if (fabsf(m00) >= fabsf(m11))
    {
        if (maxf(fabsf(m00), fabsf(m01)) <= 0.0f)
        {
            return u;
        }
        
        if (fabsf(m00) >= fabsf(m01))
        {
            m01 /= m00;
            m00 = 1.0f / sqrtf(1.0f + m01 * m01);
            return m01 * m00 * u - m00 * v;
        }
        else
        {
            m00 /= m01;
            m01 = 1.0f / sqrtf(1.0f + m00 * m00);
            return m01 * u - m00 * m01 * v;
        }        
    }
    else
    {
        if (maxf(fabsf(m11), fabsf(m01)) <= 0.0f)
        {
            return u;
        }
        
        if (fabsf(m11) >= fabsf(m01))
        {
            m01 /= m11;
            m11 = 1.0f / sqrtf(1.0f + m01 * m01);
            return m11 * u - m01 * m11 * v;
        }
        else
        {
            m11 /= m01;
            m01 = 1.0f / sqrtf(1.0f + m11 * m11);
            return m11 * m01 * u - m01 * v;
        }        
    }
}

// Find the eigen vectors from the eigen values of a symmetric 3x3 matrix
mat3 mat3_sym_evecs_from_evals(const mat3& M, const vec3 evals)
{
    // Assert matrix is symmetric
    assert(mat3_is_sym(M));
    
    // If the matrix is already diagonal the eigenvectors are the coordinate axes
    if (M.xy * M.xy + M.xz * M.xz + M.yz * M.yz <= 0.0f)
    {
        return mat3_eye();
    }
    
    // Compute Eigen Vectors from Eigen Values
    if (mat3_det(M) >= 0.0f)
    {
        vec3 evec0 = mat3_sym_evec0(M, evals.x);
        vec3 evec1 = mat3_sym_evec1(M, evec0, evals.y);
        vec3 evec2 = cross(evec1, evec0);
        return mat3_from_cols(evec0, evec1, evec2);
    }
    else
    {
        vec3 evec2 = mat3_sym_evec0(M, evals.z);
        vec3 evec1 = mat3_sym_evec1(M, evec2, evals.y);
        vec3 evec0 = cross(evec2, evec1);
        return mat3_from_cols(evec0, evec1, evec2);
    }
}

// Compute SVD using polar decomposition and symmetric eigenvector computation
void mat3_svd(mat3& U, vec3& s, mat3& V, const mat3 M)
{
    mat3 R, S;
    mat3_polar(R, S, s, M);
    
    V = mat3_sym_evecs_from_evals(S, s);
    U = mat3_mul(R, V);
}
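
And again, here is a little sanity check for the full decomposition. As before this is just a sketch - it assumes a mat3_transpose helper and matrix subtraction exist alongside the other functions used in this article:

// Quick sanity check for mat3_svd: U Sigma V^T should reconstruct M
void mat3_svd_check()
{
    mat3 M = mat3_from_cols(
        vec3(0.5f, -1.0f,  0.2f),
        vec3(0.8f,  0.3f, -0.4f),
        vec3(0.1f,  0.0f,  1.5f));

    mat3 U, V;
    vec3 s;
    mat3_svd(U, s, V, M);

    // Build Sigma as a diagonal matrix from the singular values
    mat3 Sigma = mat3_from_cols(
        vec3( s.x, 0.0f, 0.0f),
        vec3(0.0f,  s.y, 0.0f),
        vec3(0.0f, 0.0f,  s.z));

    // Reconstruct M from the decomposition and compare
    assert(mat3_frobenius_squared(mat3_mul(mat3_mul(U, Sigma), mat3_transpose(V)) - M) < 1e-6f);
}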

And with that we have closed-form solutions for both polar and singular value decompositions of 3x3 matrices. What a neat bit of mathematics! You can find the code from the original paper here, and the full code from this article here.

As with all detailed numerical and mathematical code it is easy to make mistakes, so please let me know if you find any errors or edge cases that are not handled properly and I will update the code.


Appendix: Numerical Stability

Because both of these methods involve solving polynomials they are inherently not very numerically stable, and lose a lot of accuracy as the values in the matrix get larger (whenever you see both divisions and cubes close to each other you know you are in for trouble).

For this reason it often makes sense to divide the matrix we are decomposing by the magnitude of its largest component to put everything in a reasonable range before we run the decomposition. Here is how we might make this small addition:

// Computes the largest absolute value in the matrix
float mat3_max_abs(const mat3 M)
{
    return maxf(maxf(maxf(maxf(maxf(maxf(maxf(maxf(
        fabsf(M.xx), fabsf(M.xy)), fabsf(M.xz)),
        fabsf(M.yx)), fabsf(M.yy)), fabsf(M.yz)), 
        fabsf(M.zx)), fabsf(M.zy)), fabsf(M.zz));
}

// More numerically stable version of mat3_polar
void mat3_polar_stable(mat3& R, mat3& S, vec3& s, const mat3 M)
{
    float scale = mat3_max_abs(M);
    if (scale == 0.0f)
    {
        R = mat3_eye();
        S = M;
        s = vec3();
        return;
    }
    
    mat3_polar(R, S, s, M / scale);
    S = scale * S;
    s = scale * s;
}

// More numerically stable version of mat3_svd
void mat3_svd_stable(mat3& U, vec3& s, mat3& V, const mat3 M)
{
    float scale = mat3_max_abs(M);
    if (scale == 0.0f)
    {
        U = mat3_eye();
        s = vec3();
        V = mat3_eye();
        return;
    }
    
    mat3 R, S;
    mat3_polar(R, S, s, M / scale);
    
    V = mat3_sym_evecs_from_evals(S, s);
    U = mat3_mul(R, V);
    s = scale * s;
}