Mesh Skinning

4 months, 1 week ago
Edited by
pythno
on Oct. 29, 2020, 7:53 p.m.
Reason: Initial post
I tried to implement mesh skinning and got stuck in bone-space (or mesh-space, I'm not quite sure, actually).

The model I use is built in Blender. I export it as glTF and import that glTF into a custom tool that uses AssImp to load the glTF data into my own format. AssImp is row-major but I use column-major matrices; I account for that when converting the AssImp matrices into my own mat4 type.

My tool basically puts all the vertices/indices into an array, followed by another array containing the bones' mOffsetMatrix. mOffsetMatrix is the matrix stored with each bone that converts from mesh-space to bone-space, according to the doc: http://assimp.sourceforge.net/lib_html/structai_bone.html

Also, I export the skeleton's node hierarchy. AssImp stores this as a tree, so each node contains a pointer array to its child nodes, if it has any. For my own format I flatten this node tree into an array (nothing against flattening curves, by the way, but for this I think the array was more suitable). See the image below for how this looks in practice:

Each entry in the flattened Node-Array contains two indices. The first one is an index into the Node-Array itself, identifying the parent node. The second index tells us where to find the mOffsetMatrix for this particular node within the Bone-Array. When importing the model in my own model format I also create an "Animation-Array" and initialize each element with a 4x4 identity matrix.
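For concreteness, here is a sketch of one such flattened entry (field names as used in the code below), together with an invariant the flat array needs so that a single top-down pass works:

```c
#include <assert.h>
#include <stdint.h>

/* One entry of the flattened Node-Array as described above: */
typedef struct {
    int      parent_index; /* index into the Node-Array itself; -1 means root */
    uint32_t bone_index;   /* where this node's mOffsetMatrix lives in the Bone-Array */
} Node;

/* An assumption worth checking after flattening: a single top-down pass
   over the array only works if every parent comes before its children. */
static int parents_come_first(const Node *nodes, uint32_t count)
{
    for (uint32_t i = 0; i < count; ++i)
        if (nodes[i].parent_index >= (int)i)
            return 0;
    return 1;
}
```

The skeleton-update loop later on reads the parent's matrix before writing the child's, so it silently relies on exactly this parents-first ordering.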

Since I do not care about keyframes for now, I use this array to do "programmed animation". So rather than using the animation keyframes exported by Blender into the glTF file to pose my model, I simply update this array by hand every frame. In the main loop I update, for example, the upper right arm:

```c
mat4 arm_rot_y = rotate_y(arm_r_rot_y); /* arm_r_rot_y gets incremented by some amount every frame */
md_mesh.animation_matrices[14] = arm_rot_y;
```

I happen to know that index 14 is the bone of the upper right arm. This is not pretty, but since I am not sure where my mistake is and I do not know where the code is going yet, I hesitate to write something "generic". Okay, so the Animation-Array of "md_mesh" is updated.

Next I call a function named update_skeleton:

```c
update_skeleton(&md_mesh);
```

This function now goes through the Node-Array to update the skeleton:

```c
void update_skeleton(MdMesh * mesh)
{
    Node * skeleton_nodes = mesh->skeleton_nodes;
    uint32_t node_count = mesh->node_count;

    for (uint32_t i = 0; i < node_count; ++i) {
        Node * node = &(skeleton_nodes[i]);
        int parent_index = node->parent_index;
        uint32_t bone_index = node->bone_index;

        mat4 local_transform = mesh->animation_matrices[bone_index];
        mat4 offset_mat = mesh->bones[bone_index].offset_matrix;

        if (parent_index >= 0) {
            uint32_t parent_bone_index = skeleton_nodes[parent_index].bone_index;
            mat4 parent_mat = mesh->tmp_matrices[parent_bone_index];
            mesh->tmp_matrices[bone_index] = mat4_x_mat4(parent_mat, local_transform);
        } else {
            mesh->tmp_matrices[bone_index] = local_transform;
        }
    }
}
```

I loop through the Node-Array ("skeleton_nodes"), and if the node has a parent I apply the parent matrix to the current local transform. The local transforms are all the identity matrix except for our upper right arm, which we just updated with a rotation matrix. If there is no parent (parent index = -1), just the local transform is written. I use tmp_matrices so I don't overwrite the Animation-Array (animation_matrices) itself. tmp_matrices is, finally, the mat4 array that gets sent to the vertex shader.

Okay, I know more stuff has to be done, but let's see what this gives us:

Alright, so the child-nodes of the upper right arm inherit the rotation. That seems to work.

I applied a rotation around the y-axis. I use a right-handed system in view-space: Y is up, Z points into the viewer's face (ouch), and X points to the right. In Blender, Z is up, but during export to glTF the "+Y up" option is enabled, so the model displays just fine in my renderer.

The arm should rotate around its shoulder socket, but it rather rotates around its parent, spine2. At least it looks like it. So rather than just applying the hard-coded rotation to the Animation-Array, I am converting to bone-space first:

```c
mat4 arm_offset = md_mesh.bones[14].offset_matrix;
md_mesh.animation_matrices[14] = mat4_x_mat4(arm_rot_y, arm_offset);
```

This gives me the following result:

Oh well, that looks... interesting.

Anyway, the arm, or to be more precise here, the upper right arm's "shoulder", appears to rotate around the mesh's origin. (I put the mesh origin at its feet in Blender, by the way.) Also, the arm is rotated. So I am not sure whether this mOffsetMatrix got us from bone- to mesh-space now, or vice versa. Well, if I assume we are in bone-space now, let's go back to mesh-space by applying the inverse of the mOffsetMatrix:

```c
md_mesh.animation_matrices[14] = mat4_x_mat4(mat4_inverse(arm_offset), mat4_x_mat4(arm_rot_y, arm_offset));
```

That gives the following output:

So at least the arm found its correct position again. Furthermore, it rotates around its "shoulder".

But why is the rotation around the x-axis now? The only explanation I have is that in Blender the local rotation axis of the upper right arm's bone is, in fact, the Y-axis:

[Blender screenshot of the bone's local axes]

So this is actually a "good" thing (I think). I want to be able to animate the character using bone transforms and not relative to view-space.

However, it still does not seem quite right. The arm now looks a bit distorted. In fact, if I apply a translation matrix after doing the rotation, it looks as if the further the vertices are away from the model, the more scaling is applied:

```c
mat4 trans = mat4_identity();
trans = translate(trans, (vec3){ 0, .5, 0 });
trans = mat4_x_mat4(trans, arm_rot_y);
md_mesh.animation_matrices[14] = mat4_x_mat4(mat4_inverse(arm_offset), mat4_x_mat4(trans, arm_offset));
```

Also, again, I applied the translation in the *positive* Y direction and not the negative X. In Blender, the positive Y of the upper right arm in local bone-space points to the left. This is another hint that the transformations I am doing are in local bone-space after applying the arm_offset matrix.

Phew, that was a lot. As you can see, I am not always quite sure which space I am in ;) This writeup is already the best result I could get after swapping matrices and trying many other things. Most of the time it would simply trash the whole model, which sometimes also looked cool but was definitely very incorrect.

Maybe someone who reads this will immediately notice a very obvious mistake I am making, but I have gotten to a point where I don't see the forest for the trees anymore. That is why I am asking for some advice now. Any thoughts about this are highly appreciated.

Take care of yourselves and stay healthy.

Cheers! Pythno.

I don't have experience with that subject, but the first thing I would try (apart from taking a break and coming back to the problem fresh) would be to verify your assumptions with a simpler case.

Create a cube with one joint/bone and animate it using the bone. Make sure everything is actually correct, and not just looks correct. Draw the three axes at the bone location (visualization is really important). Rotate, move, scale, and combine those, and verify the result. Make sure that left is left and that left is what you expected to happen. Also try to figure out which space you need to be in (and how to get to it) to do what you want, as this seems to be the issue here.

Then add a child bone and a second cube, and make sure the hierarchy is correct and behaves correctly.

One thing you need to make sure of is the order of the matrix multiplications, as it matters a lot. Both in the sense of the order in which you need to apply the transforms, and also in making sure that what you think is the right-hand side (rhs) and what you think is the left-hand side (lhs) actually are those in the implementation. Each time I work with matrices, that gets me.
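A tiny self-contained check of that point (a hypothetical sketch; 2D homogeneous matrices are enough): translate-after-rotate and rotate-after-translate put the translation in different places.

```c
#include <assert.h>

/* 2D homogeneous 3x3 matrices, column-major (m[col*3 + row]);
   just enough machinery to show that A*B != B*A for transforms. */
typedef struct { float m[9]; } mat3;

static mat3 m3_mul(mat3 a, mat3 b) { /* returns a * b */
    mat3 r = {{0}};
    for (int c = 0; c < 3; ++c)
        for (int row = 0; row < 3; ++row)
            for (int k = 0; k < 3; ++k)
                r.m[c * 3 + row] += a.m[k * 3 + row] * b.m[c * 3 + k];
    return r;
}

/* a rotation by +90 degrees and a translation along x */
static mat3 m3_rot90(void)     { mat3 r = {{ 0, 1, 0,  -1, 0, 0,  0, 0, 1 }}; return r; }
static mat3 m3_transx(float t) { mat3 r = {{ 1, 0, 0,   0, 1, 0,  t, 0, 1 }}; return r; }
```

Multiplying the same two matrices in the two possible orders leaves the translation column at (1, 0) in one case and rotates it to (0, 1) in the other, which is exactly the kind of silent mix-up that wrecks a skeleton.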

Maybe you can work out the theory, share it here to see if it's correct (hopefully someone comfortable with the subject will check it), and then do the implementation. If you could share full reproduction code, that would probably help.

Also this is about skeletal animation, not mesh skinning (at this point) unless the arm is part of the main model and is skinned to arm bones.


If you are hesitating about making it generic, you can organize the code as three separable steps to make the last parts simple and reusable.

* Assigning bone orientation using canned animation, physics or inverse kinematics. (in the game or a reusable humanoid system)

* Automatically enforce hierarchy on translation without affecting orientation. (helper function in library)

* Rendering of model based on bone-space to model-space transforms. (core engine)

Then it's easy to reuse the non-hierarchical rendering part while changing how bone locations are generated. You can also let game logic query the transform into and out of a bone without traversing a stack of multiplications when placing an item in a hand. If you use rag-dolls from physics or inverse kinematics based on targets, you might already have implemented constraints and model-space locations in another equation.

This separation makes it easy to render pre-defined global bone transforms for testing rendering in isolation. Then each game can change a few hundred lines of math to switch between animation styles with fast experimentation by just outputting the absolute orientation of each bone. You will most likely end up with a mix of rag-doll physics for death, balance for running, inverse kinematics for grabbing things and canned animations for gestures to make everything look right.

I suspect that one of the bones has a bad transform which accumulates because the hierarchy logic is not separated from the rendering, which makes it a bigger problem to tackle at once. Printing the determinant of each bone's final orientation should shed light on the scaling issues. It could also be a total bone weight exceeding 100% or unused bone indices having ill-defined floating-point values in the skinning.
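The determinant check might look something like this, assuming column-major 4x4 bone matrices where only the upper-left 3x3 carries orientation and scale (names are illustrative):

```c
#include <assert.h>
#include <math.h>

/* Determinant of the upper-left 3x3 block of a column-major 4x4
   (m[col*4 + row]). For a pure rotation this is exactly 1; any other
   value means scaling crept in (negative means a reflection). */
static float bone_orientation_det(const float m[16])
{
    return m[0] * (m[5] * m[10] - m[9] * m[6])
         - m[4] * (m[1] * m[10] - m[9] * m[2])
         + m[8] * (m[1] * m[6]  - m[5] * m[2]);
}
```

Printing this value for every entry of the final matrix array sent to the shader immediately separates "wrong pose" bugs (determinant still 1) from "broken math" bugs (determinant drifting away from 1).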


Mesh Skinning

3 months, 3 weeks ago
mrmixer

Also this is about skeletal animation, not mesh skinning (at this point) unless the arm is part of the main model and is skinned to arm bones.

I guess both are closely related, aren't they? If the mesh skinning doesn't work, then there is no way to animate the model correctly.

Dawoodoz

Printing the determinant of each bone's final orientation should shed light on the scaling issues. It can also be a total bone weight exceeding 100% or unused bone indices having ill defined floating-point values in the skinning.

Good suggestion! I checked all the bone weights by writing a debugging function, and all weights sum up to 1.0.
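For reference, such a check might look like this, assuming the common four-weights-per-vertex layout (the names here are illustrative, not my actual data structures):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

#define WEIGHTS_PER_VERTEX 4 /* assumed layout: 4 bone weights per vertex */

/* Returns 1 if every vertex's bone weights sum to 1.0 (within a small
   tolerance), 0 as soon as one vertex is over- or under-weighted. */
static int weights_sum_to_one(const float *weights, size_t vertex_count)
{
    for (size_t v = 0; v < vertex_count; ++v) {
        float sum = 0.0f;
        for (int i = 0; i < WEIGHTS_PER_VERTEX; ++i)
            sum += weights[v * WEIGHTS_PER_VERTEX + i];
        if (fabsf(sum - 1.0f) > 1e-4f)
            return 0;
    }
    return 1;
}
```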

However, checking the determinants after the transformations revealed the problem: they were clearly not 1 for the affected matrices, meaning some scaling was involved.

It turns out that my math library had a bug: the inverse was calculated incorrectly. My assumptions about the matrix multiplications doing the transformations were correct, though:

AssImp's mOffsetMatrix field in each bone contains the inverse model-space transformation of that bone. That means multiplying a vertex (which is in model-space) by mOffsetMatrix puts it into the coordinate system of that bone. Then we can do all sorts of transformations (which are now relative to the bone), and in the end, multiplying by the inverse of mOffsetMatrix takes us back to model-space.

Well, glad I could figure this out. And thanks for your suggestions!

Take care

- pythno