# Engine Work: Sky shading pass

Hi everyone. For the past couple of weeks I have been mostly working on Monter’s movement system, but it’s still heavily WIP so I can’t write about it just yet. What I did finish though is a simple sky shading pass. It turned out quite nice because it gels well with Monter’s simple low-poly art style. So I’m going to write about that instead.

## Considering how to draw a sky

Skyboxes have been a popular choice for many years. The technique packs a pre-rendered skydome into a cubemap, which is a texture with six faces. A cube mesh is then passed to the renderer with the cubemap texture applied.

However, most skybox images on the internet go for a realistic look, and they still come off pretty crappy. Viewers can easily tell they are fake because they are static images. Since Monter is going for a minimalistic look, I decided to render the sky procedurally; a simple hemispherical gradient will do.

## Setting it up

So instead of having some preset geometry represent the sky, I want to project rays out from the viewer and render the sky that way. It gives me finer control over what color to put at each pixel. This sky shading pass happens before anything else, so anything that's rendered in the scene will override the sky pixels.

The first thing to do is to generate a view ray for each pixel on the screen. To do that, I first pass two triangles to the vertex shader to act as my view plane, covering the entire screen. Here's a visualization of the fullscreen quad, made out of two triangles, in OpenGL's normalized device coordinates:

I’m taking advantage of OpenGL’s rendering pipeline here by letting it rasterize these two primitives. After they get rasterized into a bunch of fragments (or pixels), I get to decide how to shade them.
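The fullscreen quad's vertex data can be sketched like this (the array name and the plain two-float layout are illustrative, not Monter's actual vertex format):

```c
/* Two triangles covering the whole screen in normalized device
   coordinates, where x and y both range from -1 to 1. */
static const float FullscreenQuad[] = {
    /* first triangle */
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
    /* second triangle */
    -1.0f, -1.0f,
     1.0f,  1.0f,
    -1.0f,  1.0f,
};
```

Uploaded once to a vertex buffer, this is all the geometry the pass needs; no projection or model transform is involved since the positions are already in NDC.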

For each pixel, I can take its coordinate on the screen and use it to project a view ray out through the view plane. (Note: the following explanation uses a left-handed coordinate system.)

First, I compute the view rays at the four corners of the screen. Here's an illustration:

Imagine the camera as a single point viewing the sky through a plane. To know how big this plane is, we need to set the distance between the camera and the plane, called the depth. I set it to 1 here; the depth can actually be any value, because the rays will be normalized anyway. The other important parameter is the FOV (field of view), an angle in radians that represents how wide the eye can see. With the FOV and depth in hand, the dimensions of the view plane follow from the equations above.

With the four corner view rays computed, we can obtain the view ray at any pixel in between just by linearly interpolating the corner rays. Luckily, we can again let the graphics hardware do this for us. Lastly, the interpolated rays have to be normalized to unit vectors.

After that, we have a set of view rays covering the entire screen, but they all point in the +Z direction. To point these rays in the player's view direction, I multiply them by the inverse of the rotation component of the view matrix, which converts them from view space to world space.
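Because the rotation part of the view matrix is orthonormal, its inverse is just its transpose, so no general matrix inversion is needed. A sketch of applying it to a ray (row-major 3x3; layout and names are illustrative):

```c
typedef struct { float x, y, z; } v3;

/* Multiply a ray by the transpose of the view rotation R (row-major
   3x3). Since R is orthonormal, the transpose is the inverse, so this
   takes the ray from view space back to world space. */
static v3 ViewToWorld(const float R[9], v3 Ray)
{
    return (v3){
        R[0] * Ray.x + R[3] * Ray.y + R[6] * Ray.z,
        R[1] * Ray.x + R[4] * Ray.y + R[7] * Ray.z,
        R[2] * Ray.x + R[5] * Ray.y + R[8] * Ray.z,
    };
}
```

In the shader this is typically a single `mat3` multiply with the transposed (or pre-inverted) rotation uploaded as a uniform.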

As I said before, I’d like to shade the skydome as a gradient that interpolates along the hemisphere hovering above the ground.

Say we project each view ray out into the skydome and it intersects the dome at point P. The height of P determines the pixel's color: the closer P is to the ground, the whiter the pixel, and vice versa. (It doesn't have to be white and blue; any two extreme colors will do, one for the top of the skydome and one for the bottom.)

Since I normalized the view rays, I know the Y component of each ray is in the range [-1, 1], which maps directly to the percentage value of the lerp() function. Furthermore, I can raise the percentage value to an exponent for finer control over the gradient transition.

A sky gradient after some tweaking:

## Drawing the sun

Adding a sun to the sky shading pass is pretty simple now that everything is set up. All we need is the angle difference between the current view ray and a ray from the viewer to the sun, which we can measure with a dot product. If the dot product is bigger than a certain threshold, we shade the pixel as part of the sun; otherwise it's part of the sky.

A cone for capturing the sun on the sky:

Here’s the sun drawn in the sky:

It works, but if you look closely, the edge of the sun is quite jagged.

Here's the code that draws the sun with a jagged edge:

```glsl
vec3 SkyColor = ComputeSkyColor();
if (dot(ViewRay, SunRay) > 0.999)
{
    SkyColor = SunColor;
}
```

This is equivalent to doing:

```glsl
vec3 SkyColor = mix(ComputeSkyColor(), SunColor, step(0.999, dot(ViewRay, SunRay)));
```

Here we can replace step() with smoothstep(), so that the transition zone between the sun and the sky is interpolated. This essentially anti-aliases the edge by blurring it.

```glsl
vec3 SkyColor = mix(ComputeSkyColor(), SunColor, smoothstep(0.998, 0.999, dot(ViewRay, SunRay)));
```
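For reference, smoothstep() clamps its input to the edge interval and then applies a cubic Hermite curve; an equivalent C sketch:

```c
/* C equivalent of GLSL's smoothstep(): remap X into [0, 1] between
   Edge0 and Edge1, clamp, then apply the Hermite curve 3t^2 - 2t^3. */
static float Smoothstep(float Edge0, float Edge1, float X)
{
    float T = (X - Edge0) / (Edge1 - Edge0);
    if (T < 0.0f) T = 0.0f;
    if (T > 1.0f) T = 1.0f;
    return T * T * (3.0f - 2.0f * T);
}
```

Unlike step(), the curve has zero slope at both edges, so the blend eases in and out instead of snapping.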

As a result, we get a much fuzzier sun with no jaggies:

Finally, combining the sky shading pass with the bloom effect:

I'd say it looks good enough for a first pass. :)
Oliver Marsh:
Thanks so much for these type of blog posts, they are very interesting. Game is looking great!
Chen:
You are welcome. Thank you for the kind compliments Oliver :)
pragmatic_hero:
Do you use hardware PCF for shadowmaps?
In earlier screenshots it looked like you don't. Why is that?
Chen:
Hi pragmatic hero, sorry for the late reply. I'm not aware of any existing hardware PCF, so I just implemented a version of PCF by hand. My current shadow map filter implementation is PCF with randomly rotated Poisson disk samples. It makes the shadow edge noisy instead of blocky. Could you tell me about the hardware PCF that you mentioned?
pragmatic_hero:

Hardware PCF essentially means using a sampler2DShadow with depth comparison enabled, instead of sampling a regular sampler2D and doing the depth comparison and linear interpolation yourself.

So when doing texture(shadowmap, vec3(x, y, depth_value_to_compare_to)), this will do N samples (usually 4) and return a float value depending on how many samples pass the depth comparison (samples_passed / N, e.g. 0, 0.25, 0.5, 0.75, 1), with linear interpolation between those. Supposedly that is implemented in hardware.

The randomized disk sampling looked quite noisy, so I was wondering why you'd opted for that.