
Re PBRT

Wednesday, January 15th, 2020

Since Diana mentioned PBRT...

Back when I was in high school, around 2006, a friend of mine, already working for an architectural company at the time, showed me what he was doing for money: architectural visualization with 3ds Max and V-Ray. It blew my mind: not in the sense that you could make a nice picture yourself, but that a computer could produce a nice picture based on some very rough meshes, lights, and textures. How the hell does it work [1]?!

A few days later, I was hectically downloading and trying to grok stuff from Ke-Sen Huang's webpage [2], which was the #1 resource for learning about "photorealistic" rendering [3]. Or reading the PhD thesis of Eric Veach [4], trying to educate myself in probability theory (Monte-Carlo methods) and all the integrals in parallel; there were also some more focused books (IIRC "Photon Mapping and Irradiance Caching") and a general introductory book for newbs (something in the vein of "Everything you need to know about raytracing") - but by then I had scraped enough from the papers to not be interested in newb materials anymore. Almost at the same time, I had the Yafaray source code on my worktable - if I can call it that; the only thing I knew about programming at the time was what source code in Assembly, BASIC, and C looks like, so I had to figure out WTF C++ even was by reading Yafaray's source. I learned a lot, at some point even provided them windoze builds of their renderer [5], did some other small tricks... Still, in the end I understood that I knew too little about math and programming to meaningfully work on it, so I stopped looking at other people's renderers or trying to build a full-blown renderer myself [6]. Instead, I focused more on math, physics, and programming - which motivated a move to FreeBSD at that time, as a better programming environment than windows. This was my bridge to *nix systems. Also my first interaction with IRC, in #yafaray.

Where does PBRT figure in all of this? It doesn't. I was looking for the first edition of the book everywhere I could, but it was first scanned and dropped on the Internet around 2009-2010, when I was already at university, with much less time and, honestly speaking, interest in the field - and going through PBRT requires a lot of dedication. So I kind of looked through it, admired the things I would have been doing if I'd had the book back in 2006, and closed it. It really is the bible of PBR, and it is satisfying in that after going through it you get a rather advanced renderer, which you can also extend - something you can't really do with toy raytracers. OTOH, it does not seem to cover GPU-based techniques, so it is not clear to me how well it has aged.

For learning about the PBR aspects of CG, Glassner's book must be really outdated. The lighting part of PBR is (was?) driven by an artificial separation into direct and indirect lighting. Direct lighting is the easy half: for each point, sample the lights, you're either in shadow or not, apply the inverse cosine law [7], and you're mostly done. Indirect lighting was the tricky part, and originally it was driven by efficiency requirements - rendering time depends on the number of Monte-Carlo samples needed to even out the noise - so the algorithm had to be hand-picked depending on the scene, the complexity of the lighting, etc. Indoor scenes at that time would typically use Photon Mapping + Irradiance Caching (which required quite some hand-tuning in open-source renderers), while outdoor scenes worked better with Path Tracing (where you could quickly connect indirect paths to the light source, aka the Sun). The Glassner book is from 1994, while Photon Mapping is from 1995. I guess it presents radiosity, then? I don't know what the state of the direct-indirect lighting divide is now; it created ugly special cases for specular surfaces, so I would assume it got killed in the end [8].
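
For the record, a minimal sketch of what "sample the lights" means, in the standard Monte-Carlo form (notation mine, not Glassner's or PBRT's) - the "inverse cosine law" of footnote 7 is, I believe, really the cosine foreshortening term paired with the inverse-square distance falloff:

    \[
    L_d(x,\omega_o) \approx \frac{1}{N}\sum_{k=1}^{N}
      \frac{f_r(x,\omega_k,\omega_o)\, L_e(y_k \to x)\, V(x,y_k)\,
            \cos\theta_x \cos\theta_{y_k}}
           {\lVert x - y_k \rVert^{2}\, p(y_k)}
    \]

Here the $y_k$ are points sampled on the light sources with density $p$, $V$ is the binary shadow test ("you're either in the shadow or not"), and the $\cos\theta / r^2$ pair is the geometry term; for a point light the sum collapses to a single shadow ray.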

A big shift started to happen around 2008: first, GPGPU support (CUDA and OpenCL) started appearing in renderers; second, Moore's law was still in effect, so abundant computational power made the field more and more physical. Some renderers started modeling light in terms of wavelengths rather than RGB values. Of course, while it allowed some cool demos with caustics, in practical scenes it slowed the renderer down by a fair percentage. But it was done to simplify the life of designers - no longer did they have to add a light with color RGB (255,250,250) and intensity 15... What? Could just as well be 15 "parrots" (arbitrary units). Now they could add a 6500 K, 1 lumen blackbody source. The same went for materials - with more focus on modeling than faking, also for the benefit of actual designers [9]. Re opacity, in the physical world this would call for something like volumetric rendering, but I have no clue how it is done in production now. The same with rendering algorithms: instead of tweaking Photon Mapping parameters every 20 mins to remove light leaks in the corners, you could just set up the scene according to physical parameters and wait an hour until the noise evens out.
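
To make the "6500 K blackbody" concrete: a spectral renderer evaluates emission from Planck's law instead of a hand-picked RGB triple. A minimal C++ sketch (the sampling step and output format are mine, not from any particular renderer):

    #include <cmath>
    #include <cstdio>

    // Spectral radiance of an ideal blackbody (Planck's law), in
    // W * sr^-1 * m^-2 per meter of wavelength; lambda in meters, T in kelvin.
    double planck(double lambda, double T) {
        const double h  = 6.62607015e-34; // Planck constant, J*s
        const double c  = 2.99792458e8;   // speed of light, m/s
        const double kB = 1.380649e-23;   // Boltzmann constant, J/K
        return (2.0 * h * c * c) / std::pow(lambda, 5.0)
             / (std::exp(h * c / (lambda * kB * T)) - 1.0);
    }

    int main() {
        // Emission of a 6500 K source across the visible range, every 40 nm -
        // roughly what the renderer integrates instead of RGB (255,250,250).
        for (double nm = 380.0; nm <= 740.0; nm += 40.0)
            std::printf("%.0f nm: %.3e\n", nm, planck(nm * 1e-9, 6500.0));
        return 0;
    }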

As far as shapes are concerned, a lot of thought went into algorithms and lookup structures for ray-mesh intersections, which had to be as efficient as possible. From rather vague memory I can tell that shapes would typically get converted to triangles - for easier SIMDfication and GPUfication. While at least the educational renderers did support spheres and cylinders, I never educated myself in more advanced stuff like NURBS. But re higher-level modeling, can I assume that Eulora's Foxy comes from MakeHuman?
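
For flavor, the kind of kernel all that effort revolves around: a sketch of the classic Moeller-Trumbore ray-triangle test (a textbook version, not lifted from any particular renderer) - this is the inner loop that gets SIMDfied and GPUfied:

    #include <array>
    #include <cmath>
    #include <optional>

    using Vec3 = std::array<double, 3>;

    static Vec3 sub(const Vec3& a, const Vec3& b) {
        return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }
    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
    }
    static double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Moeller-Trumbore: distance t along the ray to triangle (v0,v1,v2),
    // or nothing on a miss.
    std::optional<double> intersect(const Vec3& orig, const Vec3& dir,
                                    const Vec3& v0, const Vec3& v1, const Vec3& v2) {
        const double eps = 1e-9;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p = cross(dir, e2);
        double det = dot(e1, p);
        if (std::fabs(det) < eps) return std::nullopt; // ray parallel to triangle
        double inv = 1.0 / det;
        Vec3 s = sub(orig, v0);
        double u = dot(s, p) * inv;                    // barycentric u
        if (u < 0.0 || u > 1.0) return std::nullopt;
        Vec3 q = cross(s, e1);
        double v = dot(dir, q) * inv;                  // barycentric v
        if (v < 0.0 || u + v > 1.0) return std::nullopt;
        double t = dot(e2, q) * inv;                   // hit distance
        if (t > eps) return t;
        return std::nullopt;
    }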

I also can't but mention that the rendering equation (the one that describes radiance transfer) made its author a sort of Lenin-Shannon of computer graphics. Lenin, because ~99% of the works started with "The goal is to solve the rendering equation", or at least cited his work. Shannon, because while the rendering equation is all good, it is quite far away from anything practical - quite similar to Shannon's law in telecommunications.
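
For reference, the equation in question (Kajiya, 1986): outgoing radiance at a point is its emission plus the BRDF-weighted integral of incoming radiance over the hemisphere,

    \[
    L_o(x,\omega_o) = L_e(x,\omega_o)
      + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,
        (\omega_i \cdot n)\, \mathrm{d}\omega_i
    \]

One line to state, decades to approximate - hence the Shannon comparison.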

I would expect that more people built production-quality renderers in 2008-2020 than new operating systems. So looking back at it now, I can say that at least when I tuned in, this whole field was definitely alive, much more so than systems research [10].

  1. You see? The specialization arose immediately, for I never cared about modeling, ever. Rendering was a rabbit hole big enough to fall into and get lost. Also, I did not care in the slightest about real-time rendering; please keep this in mind when reading the rest of the post.
  2. Funny that I can even remember this name after ~12 years.
  3. "Physically-based rendering" was the better term that I learned only later.
  4. Called "the only PhD thesis in CS that gets read regularly".
  5. Had to figure out a strange windows-related build failure.
  6. All of my attempts failed due to a general lack of education at that time. Everybody builds a simple raytracer, this is true, but there is a big gap between a simple raytracer and what would fly in production. I was more interested in the latter.
  7. Am I remembering this right?
  8. A read-up on bidirectional path tracing (BDPT) would be required to resolve this statement.
  9. Though IIRC correct interaction at the borders of refractive media (glass-liquid-ice-liquid-glass) got figured out only ~2018.
  10. The one that was pronounced dead in 2002.