
D3D but RH Coords? - Z Wonkiness in Perspective Projection

I ripped out all my calls to DirectX Math and started hand-writing my matrices for great learning, but I tried to use right-handed coordinates (z decreases into the screen) and built my math library around column vectors (and post-multiplication, I guess?).

Anyone know how I'm supposed to make that work with D3D?

I ran into z-dimension issues. Basically everything was backwards, inside out, or wouldn't render at all (behind the camera). After much effort it renders approximately how I would expect, but I feel like I've jumped the rails and maybe ended up back in a left-handed coordinate system anyway -- it looks like decreasing a vertex's z coord pulls it closer to the camera, but increasing the camera's z moves it further away... I'm in a hand-shaking coordinate system.

When switching from DirectX Math (row vectors, pre-multiplication, left-handed) I had to make the following changes (the state-related ones are roughly sketched in code after the list):

  • switch the rasterizer's FrontCounterClockwise to true
  • pass vertex indices counter-clockwise instead of clockwise
  • switch DepthFunc to the GREATER comparison
  • clear the depth buffer to 0.0f instead of 1.0f
  • stop negating the camera position when creating the view matrix
  • use nearZ/zRange instead of farZ/zRange in the perspective matrix
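
Roughly, assuming D3D11 (the struct fields and calls here are real D3D11, but device, context, and dsv are placeholders for whatever was created during setup), the render-state changes boil down to:

    #include <d3d11.h>

    // Hypothetical helper; pass in the device/context/depth-stencil view you already have.
    void ApplyFlippedDepthAndWinding(ID3D11Device* device, ID3D11DeviceContext* context,
                                     ID3D11DepthStencilView* dsv,
                                     ID3D11RasterizerState** outRaster,
                                     ID3D11DepthStencilState** outDepth)
    {
        D3D11_RASTERIZER_DESC rd = {};
        rd.FillMode = D3D11_FILL_SOLID;
        rd.CullMode = D3D11_CULL_BACK;
        rd.FrontCounterClockwise = TRUE;              // CCW triangles are now front faces
        device->CreateRasterizerState(&rd, outRaster);

        D3D11_DEPTH_STENCIL_DESC dsd = {};
        dsd.DepthEnable = TRUE;
        dsd.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
        dsd.DepthFunc = D3D11_COMPARISON_GREATER;     // flipped depth test
        device->CreateDepthStencilState(&dsd, outDepth);

        // Clear depth to 0.0f instead of 1.0f so GREATER has something to beat.
        context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH, 0.0f, 0);
    }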

Without that last change (nearZ/zRange instead of farZ/zRange) my z dimension was inverted or something after the perspective divide. My projection matrix looks like this now (where n and f are the near/far clip depths):

float fovyRads = gmath::DegsToRads(fovy);
float sinFov = gmath::Sin(fovyRads * 0.5f);
float cosFov = gmath::Cos(fovyRads * 0.5f);
float tanHalf = sinFov/cosFov;
float halfH = n * tanHalf;
float halfW = halfH * wOverH;

float zRange = (f-n);
float depth = -n/zRange;
float zOffset = n*f/zRange;

// NOTE: column major
gmath::m4x4 perspM = {
    1.0f/halfW, 0.0f,       0.0f,    0.0f, 
    0.0f,       1.0f/halfH, 0.0f,    0.0f, 
    0.0f,       0.0f,       depth,   1.0f, 
    0.0f,       0.0f,       zOffset, 0.0f 
};

So... something seems screwy. I can post more of my maths if anyone is interested, but otherwise just let me know if something glaring jumps out.



I'm not good with all that (and I've never used DirectX), but I don't think there should be a need to change settings for it to work. Some of the things you listed seem to conflict with each other (FrontCounterClockwise and the index winding order).

What you can do to verify what is happening is manually compute two vertex positions to see if the result is what you expect, and if it isn't, find where the transformation gets messed up.

You may also want to verify that the normalized device coordinates are what you expect (on DirectX I think the default is 0 at the near plane and 1 at the far plane on z, but I'm not sure), and that vector and matrix multiplications do what you expect in the shaders.
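
As a quick way to check that convention (D3D does expect z in [0, 1] after the divide), here is a minimal CPU-only sketch; n and f are arbitrary values, and the z terms are the ones DirectXMath's right-handed projection builds, f/(n-f) and n*f/(n-f), with clip.w = -z:

    #include <cstdio>

    int main()
    {
        float n = 0.1f, f = 100.0f;            // arbitrary near/far clip planes
        float a = f / (n - f);                 // z scale
        float b = n * f / (n - f);             // z offset

        // In right-handed view space, points in front of the camera have negative z,
        // and the projection puts -z into clip.w.
        const float zs[2] = { -n, -f };
        for (float z : zs)
        {
            float clipZ = a * z + b;
            float clipW = -z;
            printf("view z = %9.3f  ->  ndc z = %f\n", z, clipZ / clipW);
        }
        return 0;                              // expect 0.0 at the near plane, 1.0 at the far plane
    }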

When you need to make an algorithm match a reference implementation, start by simplifying it into a basic math formula that can be evaluated entirely on the CPU and printed to the terminal for small examples. You can skip the depth division and only take coordinates from model space to the vertex shader output, which is the last coordinate system before projection, rasterization, interpolation, and pixel shading. That coordinate is four-dimensional: X and Y are scaled by the focal length and aspect ratio, Z is normalized by the near and far clip planes for the depth buffer, and W is the value X and Y get divided by for projection.

You can set up a brute-force test with the Direct3D math in one function and your own right-handed math in another, then perform vertex transforms on the CPU to assert that the vertex shader's output will always be the same. Give a left-handed version of the model and transform matrices to the Direct3D version and the right-handed equivalent to your own module. If both return exactly the same output for many different camera angles, focal lengths, et cetera, there should be no difference to the hardware. Just make sure you can perform the operation with pen and paper and conclude that it makes sense, before checking that the computer comes up with the same solution.
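
Something like this for the reference side, as a rough sketch (the camera, clip planes, and test vertex are arbitrary numbers; the DirectXMath calls are real, and the spot for the custom gmath path is left as a TODO since only you know that API):

    #include <DirectXMath.h>
    #include <cstdio>
    using namespace DirectX;

    int main()
    {
        // Reference path: DirectXMath right-handed view + projection (row vectors, v * M).
        XMVECTOR eye   = XMVectorSet(0.0f, 2.0f, 5.0f, 1.0f);
        XMVECTOR focus = XMVectorSet(0.0f, 0.0f, 0.0f, 1.0f);
        XMVECTOR up    = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

        XMMATRIX view = XMMatrixLookAtRH(eye, focus, up);
        XMMATRIX proj = XMMatrixPerspectiveFovRH(XMConvertToRadians(60.0f),
                                                 16.0f / 9.0f, 0.1f, 100.0f);

        XMVECTOR vertex = XMVectorSet(1.0f, 1.0f, -1.0f, 1.0f);   // model == world for simplicity
        XMVECTOR clip   = XMVector4Transform(vertex, XMMatrixMultiply(view, proj));

        printf("DXM clip: % .5f % .5f % .5f % .5f\n",
               XMVectorGetX(clip), XMVectorGetY(clip),
               XMVectorGetZ(clip), XMVectorGetW(clip));

        // TODO: build the same view/projection with the custom math (column vectors, M * v)
        // from the same camera and clip planes, transform the same vertex, and print it here.
        // The four numbers should match to within floating-point noise.
        return 0;
    }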

Personally, I would just stick with left handed coordinates when working with Direct3D, because the math is only going to become more complicated when doing advanced features while following tutorials.

Thanks, folks. I think reworking it all with a simpler scenario with DirectX functions in parallel and outputting results through various stages to console/debug text would be super helpful. That makes sense.

Well, I managed to match the numerical results from the DirectX RH functions through world, view, and clip space, and it's still rendering as though I'm looking from behind (a vertex with higher z is rendered deeper into the scene). Also, objects seem stretched in the z dimension way more than I'd expect: 1 unit in z appears to go much further than 1 unit in x or y.

Switching over to the DirectX RH functions gives me the same result.

I may try switching back to a left handed coordinate system... but I kind of want to figure out what I'm doing wrong. Might try working out what I expect the results to be and then solving backwards to figure out what my projection matrix needs to be.

EDIT:

Just thinking... for my z-dimension scaling issue, I wonder if I'm missing some kind of "pixels per unit" scaling in the z dimension, and maybe it's handled more reasonably in x/y because the aspect-ratio multiplication and perspective division just work out.


Replying to robert.childress (#26275)

Make sure to compare the working left-handed system with the right-handed system. If you just switch to right-handed, the depth will be negated, so the near clip, far clip, and depth buffer comparison have to be flipped. The model must also be expressed in a right-handed coordinate system to make the change.
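
For example, converting the model itself is just mirroring it across the XY plane (a minimal sketch with a hypothetical Vertex struct); the side effect to remember is that the mirror reverses every triangle's winding, so the index order or cull mode has to flip along with it:

    struct Vertex { float x, y, z; };

    // Mirror the mesh across the XY plane to express it in the opposite handedness.
    void FlipHandedness(Vertex* verts, int count)
    {
        for (int i = 0; i < count; ++i)
            verts[i].z = -verts[i].z;
        // Note: this reverses the winding of every triangle, so flip the index
        // order (or the cull mode) to match.
    }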


Replying to robert.childress (#26282)

A problem I had was that I never really had a working implementation but didn't realize it, because I was rendering quads all at the same z depth, so it looked OK. It only got weird when I decided to switch everything to cubes.

I ripped out all my rendering into a separate project and simplified it to isolate the problem (here: https://github.com/subnuminal/d3d-example). I think I've found the major problems, which were:

  • building vertices in world-space coordinates instead of thinking of my geometry as centered around the origin and translated into world space by the modelToWorld transform
  • inconsistent treatment of matrices with regard to row-/column-major format

I still have to slam my changes back into my main project and see what havoc is unleashed, but now there doesn't appear to be any glaring weirdness. The only thing I had to do to "translate" from the RH coordinate system was to negate the row of the view matrix that influences the z position, e.g.:

    // negate third row in view matrix
    view.elems[2][0] *= -1.0f;
    view.elems[2][1] *= -1.0f;
    view.elems[2][2] *= -1.0f;
    view.elems[2][3] *= -1.0f;
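
If I'm understanding why that works: negating that row flips view-space z, so points in front of the camera come out with positive depth, which is what the left-handed-style projection and depth test expect. A tiny plain-float check, with the camera at the origin looking down -Z so the right-handed view matrix is just the identity before the tweak:

    #include <cstdio>

    int main()
    {
        // Right-handed view matrix for a camera at the origin looking down -Z,
        // with the third row negated as above.
        float view[4][4] = {
            { 1.0f, 0.0f,  0.0f, 0.0f },
            { 0.0f, 1.0f,  0.0f, 0.0f },
            { 0.0f, 0.0f, -1.0f, 0.0f },   // the negated row
            { 0.0f, 0.0f,  0.0f, 1.0f },
        };

        float p[4]   = { 0.0f, 0.0f, -5.0f, 1.0f };   // a point 5 units in front of the camera
        float out[4] = {};
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c)
                out[r] += view[r][c] * p[c];           // column-vector convention: M * p

        printf("view-space z = %f\n", out[2]);         // prints +5: depth is positive again
        return 0;
    }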

As long as everything else is just normal and consistent then it seems to work. We'll see!

Big thanks to d7samurai's reference implementation here which helped me with the first thing: https://gist.github.com/d7samurai/261c69490cce0620d0bfc93003cd1052

And the ryg blog was helpful in understanding matrices a bit better:

Try rendering a model with the X, Y, and Z axes explicitly drawn as arrows, so that you have a reference for documenting your coordinate system. Otherwise you may find bugs that just move from one place to another without being fixed, due to opposing implementations at each end. Row-major and column-major bugs can look confusingly similar to inverses, so test with translation, scaling, rotation, and shear to make sure everything works as expected.
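
To see why translation (and shear) matter for catching that, here is a throwaway DirectXMath comparison (the angle and offset are arbitrary): transposing a pure rotation gives you its inverse, so a row-major/column-major mix-up hides in rotation-only tests, while a translation exposes it immediately.

    #include <DirectXMath.h>
    #include <cstdio>
    using namespace DirectX;

    static void Print(const char* label, FXMMATRIX m)
    {
        XMFLOAT4X4 f;
        XMStoreFloat4x4(&f, m);
        printf("%s\n", label);
        for (int r = 0; r < 4; ++r)
            printf("  % .3f % .3f % .3f % .3f\n", f.m[r][0], f.m[r][1], f.m[r][2], f.m[r][3]);
    }

    int main()
    {
        XMMATRIX rot   = XMMatrixRotationY(XMConvertToRadians(30.0f));
        XMMATRIX trans = XMMatrixTranslation(3.0f, 0.0f, 0.0f);

        Print("rotation transposed:", XMMatrixTranspose(rot));
        Print("rotation inverted:  ", XMMatrixInverse(nullptr, rot));      // same numbers

        Print("translation transposed:", XMMatrixTranspose(trans));
        Print("translation inverted:  ", XMMatrixInverse(nullptr, trans)); // visibly different
        return 0;
    }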

Adding some coordinate arrows is a GREAT idea. I'll need those at some point anyway.

So far everything is rendering as I expect, though my orthographic projection still needs some tweaking. Actually I have to retool a bunch of things now that I have a better grasp of how this works.

But this brings me so much joy: https://www.twitch.tv/videos/1484596693

