
Anti-aliasing in fragment shader

I finally got AA working for primitives in the fragment shader! I've wanted to add this for a while, ever since I migrated my software renderer to a proper GPU-driven one, but I was a bit surprised to find a relative lack of resources on doing it in the fragment shader as opposed to more typical solutions like supersampling, MSAA, FXAA, etc. However, there is one part of it that I still don't quite understand: why it works.

My approach is basically:

  1. determine whether the fragment/pixel lies within some range of an edge
  2. if it does, calculate alpha with a smoothstep based on the pixel's position within that range

Here is the shader (Metal): https://gist.github.com/cfloisand/64c5965b6babf6915cf001cef0f3a922

(All coordinates are in NDC.)

The part in question is the edge testing. To test whether a pixel is within range of an edge, I take the dot product of the pixel's position (relative to the edge) with the perpendicular to that edge. Naturally, the length of the edge affects the value of the dot product, so I store a per-edge threshold, edgeMaximas, in the Edges data that I pass to the fragment shader; it is calculated as the length of the edge divided by half the height of the render target.

e.g.

edges.edgeMaximas[0] = length(v1 - v0) / (renderTarget.height / 2.f);
edges.edgeMaximas[1] = length(v2 - v1) / (renderTarget.height / 2.f);
...
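Roughly, the edge test in the fragment function then looks like this (a simplified sketch with placeholder names, not the exact code from the gist):

#include <metal_stdlib>
using namespace metal;

// Hypothetical layout; the real one is in the gist above.
struct Edges {
    float2 origins[3];     // one endpoint of each edge, in NDC
    float2 perps[3];       // un-normalized perpendicular of each edge, pointing inward
    float  edgeMaximas[3]; // range used for the edge test/blend
};

struct FragmentIn {
    float4 clipPos [[position]];
    float2 ndcPos;         // NDC position interpolated from the vertex shader
};

fragment float4 aa_triangle_fragment(FragmentIn in [[stage_in]],
                                     constant Edges &edges [[buffer(0)]])
{
    float alpha = 1.f;
    for (int i = 0; i < 3; ++i) {
        // Dot product with the un-normalized perpendicular: proportional to the
        // pixel's distance from edge i, scaled by that edge's length.
        float d = dot(in.ndcPos - edges.origins[i], edges.perps[i]);
        if (d < edges.edgeMaximas[i]) {
            // Within range of the edge: ramp alpha from 0 at the edge up to 1
            // at the edgeMaximas threshold.
            alpha = min(alpha, smoothstep(0.f, edges.edgeMaximas[i], d));
        }
    }
    return float4(1.f, 1.f, 1.f, alpha);
}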

Why does this calculation work? Specifically, why divide by half the height of the render target? I must admit, I arrived at this through experimentation (and maybe a bit of intuition... :) ), but I'm not confident I know why it works. I've tested it at various resolutions and with various sizes of triangles, and it all works!

As you can see, the results are very good (better than I expected!):

[attached images: aa_triangle.png, aa_triangle_zoomed.png]

Don't bother using the output target width/height for this.

Instead, use fwidth().

Assuming you have access to texture coordinates in your shader (or anything else that is uniformly interpolated in screen space across the polygon/screen), you can get the scale from uv units to screen pixels with 1.0/fwidth(uv). It will handle rotations and scaling of the polygon too.

Then you can use this value to know how many pixels you are away from your target location, and interpolate/blend when that distance is less than 1 (meaning the target only partially covers the pixel).

Usually you use it together with smoothstep or a similar function. Here's an example of how it is typically used: https://rubendv.be/posts/fwidth/

fwidth(uv) is cheap to compute, but for higher accuracy you may need to use something like length(float2(dFdx(uv), dFdy(uv))). Most of the time you won't notice the difference, especially for 1px-wide borders.
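For example, something along these lines (a made-up Metal fragment shader just to show the pattern, not code from this thread):

#include <metal_stdlib>
using namespace metal;

struct FragmentIn {
    float4 clipPos [[position]];
    float2 uv;    // interpolated across the polygon
};

fragment float4 aa_circle(FragmentIn in [[stage_in]])
{
    // Signed distance to a circle of radius 0.4 in uv space
    // (negative inside, positive outside), measured in uv units.
    float dist = length(in.uv - 0.5f) - 0.4f;

    // fwidth(dist) is roughly how much dist changes across one pixel, so dividing
    // by it converts the distance into pixel units; no render-target size needed.
    float pixels = dist / fwidth(dist);

    // Blend over roughly one pixel around the edge.
    float alpha = 1.f - smoothstep(0.f, 1.f, pixels);
    return float4(1.f, 1.f, 1.f, alpha);
}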



I didn't know about these functions. Thanks!

It felt a bit weird passing in those values I came up with, but then again, I'm a dabbler when it comes to shader/graphics programming. :)

I struggled a bit at first to make this work using fwidth(), but now I've got it, and I have the same results as before. The important thing is that I needed to normalize the perpendicular vector when calculating the dot product, so the edge test isn't dependent on the length of the edge. Although I'm not passing uv to the fragment function, the position is interpolated, so I ended up using length(fwidth(pos)), where pos is in NDC space.
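So the per-edge test ends up being something like this (a simplified sketch, not my exact shader):

#include <metal_stdlib>
using namespace metal;

// Called from the fragment function for each edge. With the perpendicular
// normalized, d is a real distance in NDC units, and length(fwidth(ndcPos))
// is roughly the size of one pixel in NDC, so the blend range no longer
// depends on the edge length or the render-target resolution.
static float edge_alpha(float2 ndcPos, float2 edgeOrigin, float2 edgePerp)
{
    float pixel = length(fwidth(ndcPos));                    // ~1 pixel, in NDC units
    float d = dot(ndcPos - edgeOrigin, normalize(edgePerp));
    return smoothstep(0.f, pixel, d);                        // blend over ~1 pixel
}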



fwidth(pos) is fine, as long as pos is "varying" (in old GLSL terminology), i.e. interpolated across the polygon. It does not need to be an actual texcoord; if it's the NDC position of the pixel, that's fine. You can imagine it as an imaginary texcoord of a polygon stretching across the whole screen, and the same logic applies :)

