Learning the concept behind BRDF and how it's used in rendering.
&learningjam2024
Back on the PBR demo. Went off on a tangent and implemented my own local "Shadertoy", so that I can use my preferred text editor without any additional copy-pasting. I've written a blog post about it.
https://unlitart.com/Blog/#PBR%20Demo%20WIP%20and%20Writing%20Offline%20Shadertoy
&learningjam2024
As an optimization I incorporated some ray tracing into the ray marching routine.
First we trace a bounding box covering the objects, and also trace the ground plane. If we miss the box there's no need to ray march anything, and the result of the plane trace is returned.
In the case of a box hit, the ray march commences from the entry point until either the minimum distance or the exit point has been reached. In the latter case, the plane trace is returned as well.
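Roughly, the routine looks like this (a Python sketch for illustration; the actual demo is GLSL, and the AABB slab test, plane trace, and placeholder sphere SDF here are my own stand-ins, not the demo code):

```python
import math

def intersect_aabb(ro, rd, bmin, bmax):
    """Slab test: returns (t_enter, t_exit) or None on a miss."""
    t0, t1 = -math.inf, math.inf
    for o, d, lo, hi in zip(ro, rd, bmin, bmax):
        if abs(d) < 1e-9:
            if o < lo or o > hi:
                return None
            continue
        ta, tb = (lo - o) / d, (hi - o) / d
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)
    if t0 > t1 or t1 < 0.0:
        return None
    return max(t0, 0.0), t1

def intersect_ground(ro, rd, height=0.0):
    """Analytic trace of the plane y == height; returns t or None."""
    if abs(rd[1]) < 1e-9:
        return None
    t = (height - ro[1]) / rd[1]
    return t if t > 0.0 else None

def scene_sdf(p):
    # Placeholder object: a unit sphere at the origin.
    return math.sqrt(sum(c * c for c in p)) - 1.0

def trace(ro, rd, eps=1e-4, max_steps=128):
    """Trace the bounding box first; ray march only inside it,
    falling back to the ground-plane trace on a miss or exit."""
    plane_t = intersect_ground(ro, rd)
    span = intersect_aabb(ro, rd, (-1, -1, -1), (1, 1, 1))
    if span is None:
        return ('plane', plane_t) if plane_t else ('sky', None)
    t, t_exit = span
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(ro, rd))
        d = scene_sdf(p)
        if d < eps:
            return ('object', t)
        t += d
        if t > t_exit:  # left the box: no object can be hit anymore
            break
    return ('plane', plane_t) if plane_t else ('sky', None)
```

The point of the box test is that the expensive SDF loop only ever runs between the entry and exit points, and rays that miss the box entirely skip it altogether.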
Unsure about the performance gain. In standard Shadertoy editor mode it seems to be over 10% (according to Psensor readout) but less than that in full screen. At the very least I got a clean trace of the ground plane now.
&learningjam2024
So I managed to implement ray tracing (in Shadertoy). Gets expensive really fast, but I'm quite pleased with the results. Rough materials are still too noisy, and I think the low sample count plays the major part, unfortunately.
Sampling 4 times per sub-pixel, with the sample count halved for each bounce; 2x2 AA and 2 bounces. The last bounce always samples the environment light.
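If I'm reading my own budget right (treating "halved for each bounce" as each hit branching into half as many child rays, which is an assumption on my part), the per-pixel ray count works out like this:

```python
def rays_per_pixel(aa=2, primary=4, bounces=2):
    """Total rays per pixel, assuming each hit branches into half as
    many samples as its parent (my reading of the halving scheme)."""
    total = 0
    alive = aa * aa * primary  # primary rays: 2x2 AA, 4 per sub-pixel
    n = primary
    for _ in range(bounces + 1):
        total += alive
        n = max(1, n // 2)
        alive *= n             # each surviving ray branches into n children
    return total
```

Under that reading the settings above come out to 80 rays per pixel, which explains why it gets expensive so fast.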
&learningjam2024
Seems like I've addressed the bugs I had. Also found an updated version of the paper about sampling the microfacet normal.
https://jcgt.org/published/0007/04/01/paper.pdf
Currently doing 8 samples per sub-pixel with 2x2 MSAA, which effectively makes it 32 samples per pixel. The variance on rough materials is too high for my liking, but it seems like it could be improved with a more sensible lighting scheme.
&learningjam2024
Making progress! All the pieces in place and more or less working.
Next I'll be investigating the black spots that are most apparent in low-roughness materials. Also, reflections are scaled up a lot on flat, low-roughness surfaces (like the floor). I suspect that my sampling is biased in some way.
(The funky environment light is for testing purposes)
Found a very good introductory blog post about importance sampling that cleared up some confusion I had.
https://patapom.com/blog/Math/ImportanceSampling/
Also found a newer paper going into further detail about the sampling scheme I'm using. Haven't read it yet.
https://jcgt.org/published/0007/04/01/paper.pdf
Currently sampling 32 times per pixel.
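For anyone following along, here's a toy example of the core idea that blog post cleared up for me: divide each sample by its pdf, and pick a pdf shaped like the integrand to kill variance. The integral below is the hemisphere cosine integral (which equals π); all names are my own:

```python
import math, random

def estimate_uniform(n, rng):
    """Monte Carlo estimate of the integral of cos(theta) over the
    hemisphere (= pi), with uniform directions (pdf = 1 / 2*pi)."""
    total = 0.0
    for _ in range(n):
        cos_t = rng.random()  # uniform in cos(theta) = uniform hemisphere
        total += cos_t / (1.0 / (2.0 * math.pi))
    return total / n

def estimate_cosine(n, rng):
    """Same integral with cosine-weighted samples (pdf = cos(theta)/pi):
    integrand/pdf is constant, so the variance drops to zero."""
    total = 0.0
    for _ in range(n):
        cos_t = math.sqrt(rng.random())  # cosine-weighted cos(theta)
        total += cos_t / (cos_t / math.pi)
    return total / n
```

Both estimators converge to π, but the uniform one needs many samples while the cosine-weighted one is exact with any count; for a rough BRDF the pdf can't match the integrand perfectly, so some variance always remains.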
&learningjam2024
Here's the demo in its current state, still using direct lighting.
&learningjam2024
The PBR demo is taking its sweet time because I decided that I wanted to reflect the environment (just a procedural sky sphere for now) and thus needed to learn about importance sampling, so most of the time dedicated to this project has been spent reading a bunch, again. It's been the most difficult subject so far, for various reasons, but things are starting to converge in my mind.
Tonight I wrote a program to test a function for generating micro-normals as presented in this paper: https://hal.science/hal-01509746/document
Seems to be working. Next is to get the other parts of the routine working.
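For reference, this is the micro-normal (visible-normal) sampling routine as given in the updated JCGT version of the paper, ported to Python for illustration (the demo itself is GLSL; the `normalize` helper and test values are my own):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def sample_ggx_vndf(Ve, ax, ay, u1, u2):
    """Sample a visible GGX micro-normal (Heitz, JCGT 2018 listing).
    Ve is the view direction in tangent space (z = surface normal)."""
    # Transform the view direction to the hemisphere configuration
    Vh = normalize((ax * Ve[0], ay * Ve[1], Ve[2]))
    # Build an orthonormal basis around Vh
    lensq = Vh[0] * Vh[0] + Vh[1] * Vh[1]
    if lensq > 0.0:
        inv = 1.0 / math.sqrt(lensq)
        T1 = (-Vh[1] * inv, Vh[0] * inv, 0.0)
    else:
        T1 = (1.0, 0.0, 0.0)
    T2 = (Vh[1] * T1[2] - Vh[2] * T1[1],
          Vh[2] * T1[0] - Vh[0] * T1[2],
          Vh[0] * T1[1] - Vh[1] * T1[0])  # cross(Vh, T1)
    # Sample a disk, warped toward the visible half
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    t1 = r * math.cos(phi)
    t2 = r * math.sin(phi)
    s = 0.5 * (1.0 + Vh[2])
    t2 = (1.0 - s) * math.sqrt(1.0 - t1 * t1) + s * t2
    # Reproject the disk sample onto the hemisphere
    t3 = math.sqrt(max(0.0, 1.0 - t1 * t1 - t2 * t2))
    Nh = tuple(t1 * T1[i] + t2 * T2[i] + t3 * Vh[i] for i in range(3))
    # Transform back to the ellipsoid configuration
    return normalize((ax * Nh[0], ay * Nh[1], max(0.0, Nh[2])))
```

A quick sanity check is that the returned micro-normal is unit length and always in the upper hemisphere.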
!til &learningjam2024
Just finished processing this paper. I found it very informative, but also hard to follow since the math notation made the equations look the same. My notes are basically a condensed version of the whole text, including all 131 equations, whose variables I'll review and color-code.
https://jcgt.org/published/0003/02/03/paper.pdf
&learningjam2024
Recently finished reading through the paper [PM] I mentioned in my last post. It certainly clarified some things, and it turns out my summary has some inaccuracies in it.
Until I get around to addressing that, I'd like to mention that equation (3), which I got from the Disney paper [DS], also has to be multiplied by a "normalization" term, shown at the top of page 25 (eq. 3, the left fraction on the right-hand side). This wasn't made clear to me until it was mentioned in [PM].
This is my usual experience with reading papers of this kind. Often I need multiple sources in order to fill in the gaps of my understanding.
Testing in my Shadertoy demo, the specular highlight (D-term) now behaves more or less as I would expect. I still get aliasing at the penumbra, which appears to be due to the G-term (the one handling microfacet shadowing), so that's what I'll be working on next.
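To illustrate what such a normalization buys: a properly normalized NDF integrates to 1 against the cosine term. Here's the standard GGX/Trowbridge-Reitz D as a sketch (generic GGX with its alpha^2/pi constant, not necessarily the exact eq. (3) variant from the Disney notes):

```python
import math

def d_ggx(n_dot_h, alpha):
    """GGX / Trowbridge-Reitz normal distribution, including the
    alpha^2 / pi normalization so that the integral of
    D(h) * (n.h) over the hemisphere equals 1."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

Without the constant out front, the distribution would reflect more or less energy than arrives, which is exactly the kind of subtle error that only shows up once you compare against a reference.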
!til &learningjam2024 Linking to the paper I mentioned in the recap stream. The best resource on the subject I've found so far.
https://blog.selfshadow.com/publications/s2013-shading-course/hoffman/s2013_pbs_physics_math_notes.pdf
!til &learningjam2024 Got started on implementing a material demo in Shadertoy, but realized that it's going to take its time, so a brief summary will have to suffice. I've put it up for download on my site.
https://unlitart.com/download/HMN_LearningJam2024_BRDF_Summary.pdf
Big thanks to the HMN team for arranging this event. To be honest the concept didn't excite me too much when it was announced but it turned out to be a great opportunity to check off something from the mental bucket list.
!til &learningjam2024 Currently reading through this paper.
https://media.disneyanimation.com/uploads/production/publication_asset/48/asset/s2012_pbs_disney_brdf_notes_v3.pdf
The scope has shifted from understanding BRDFs to learning about material models based on the microfacet model, which are nowadays commonly employed in both games and films, and are usually what's referred to as "physically based".
As mentioned before, a BRDF simply denotes a function that returns reflectance based on view and light directions, so to learn anything of substance we need to look at the models themselves. The time has come for me to finally study lighting. I'm thankful to this jam for giving me the incentive.
!til &learningjam2024 The concept of a BRDF turned out simple enough. It's a function that returns the amount of light being reflected, given a direction pointing towards the viewer and a direction pointing towards the light source. Whether these are vectors or angles relative to the surface normal seems to vary between sources.
A BRDF is meant to be a self-contained piece of the rendering algorithm that can be swapped at will. Basically, it's an abstraction.
In practice a BRDF implements some material model. Apparently it's common to combine one for diffuse and one for specular. Any old material model you might have heard of (Lambert, Phong, etc.) counts as a BRDF. Nowadays BRDFs are usually based on the microfacet model, both in games and film.
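To make the abstraction concrete, here's a minimal sketch (my own toy code, not from the linked course notes): Lambert's diffuse BRDF is just albedo/π, and the renderer evaluates whichever BRDF function is plugged in against the incoming light and the cosine term:

```python
import math

def brdf_lambert(albedo, wi, wo, n):
    """Lambert diffuse BRDF: constant reflectance, independent of
    the light (wi) and view (wo) directions."""
    return tuple(a / math.pi for a in albedo)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(brdf, albedo, wi, wo, n, light_radiance):
    """Single-light reflection: Lo = f(wi, wo) * Li * max(0, n.wi).
    The brdf argument is the swappable abstraction."""
    f = brdf(albedo, wi, wo, n)
    cos_i = max(0.0, dot(n, wi))
    return tuple(fc * li * cos_i for fc, li in zip(f, light_radiance))
```

Swapping `brdf_lambert` for a Phong or microfacet function, with the same signature, is exactly the "swapped at will" property described above.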
This is currently my primary source, which I'm still reading through.
https://boksajak.github.io/files/CrashCourseBRDF.pdf