I initially planned to talk about my reluctance to code lighting tricks in the last progress report. But this part did not quite fit within the “progress report” style, so I’m posting it here on its own.

Why do I care about lighting so much?

*clears throat*

Everything we see, which accounts for most of what we *perceive*, since we’ve evolved as such vision-based creatures, is the result of the interaction of light with matter. Each and every 3D scene we draw, be it for a game engine, an animated movie, a final rendering from 3D Studio Max… whatever, tries to simulate the observer’s perceived light as best it can. Simulate, as in, *trick* the observer into thinking that the screen is actually displaying the light from a scene in the real world. Which it isn’t. What you see in the real world is the result of a massive flow of photons reaching your eyes, itself a tiny subset of an ultra-massive, uncountable amount of photons tracing their paths all over the place and bouncing off matter.

The physical equations behind light’s interaction with matter are well understood nowadays, and may in fact seem quite simple to people who, unlike me, aren’t too afraid of algebra. But this doesn’t help with the issue that computers simply cannot reach the amount of computation needed to simulate each of the photons bouncing around in a simple closed room at a given moment, let alone in the outside world where you may take a peek at the sky, let alone at a rate of 25 hertz or more, as is required for an interactive 3D scene in a game. Perfect computation of all light in a scene at interactive rates just *can’t be done*, and I suspect it never will be, even with the most futuristic supercomputer available.
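To get a feel for the scale involved, here is a back-of-the-envelope count. The numbers are order-of-magnitude assumptions (a single 100 W source radiating entirely as green light), but they give an idea of how hopeless per-photon simulation is:

```python
# Rough photon budget for a single light source.
# Assumptions: 100 W emitted entirely at ~550 nm (green light).
h = 6.626e-34        # Planck constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 550e-9  # metres

photon_energy = h * c / wavelength            # ~3.6e-19 J per photon
photons_per_second = 100 / photon_energy      # ~3e20 photons each second
photons_per_frame = photons_per_second / 25   # at 25 Hz: ~1e19 per frame
```

Even a single modest light source emits on the order of ten billion billion photons per frame, each of which may bounce several times, which is far beyond what any per-photon simulation could hope to track.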

So what we use in interactive engines is a whole set of tricks to compute far less than the path of each photon (including each bounce off matter) from its light source to your eye, while still providing a plausible approximation of it. The first trick in the book is the decomposition of the scene into a manageable number of polygons, representing only the visible surfaces of solid objects. When I say polygons, it is in fact almost always triangles (the simplest “simple polygon”). Any surface, no matter how smooth or irregular, can indeed be well approximated by a set of sufficiently small triangles.
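That idea is easy to see in its simplest 2D form. Here is a toy sketch of mine (nothing from any real engine): approximate a disc by a fan of triangles around its centre; the more triangles, the closer the total area gets to the exact value:

```python
import math

def fan_area(n):
    # Area of a regular n-gon inscribed in the unit circle,
    # built as a fan of n triangles around the centre.
    # Each triangle has area 0.5 * sin(2*pi/n).
    return 0.5 * n * math.sin(2 * math.pi / n)

# The approximation error shrinks as triangles get smaller:
for n in (8, 64, 512):
    print(n, abs(math.pi - fan_area(n)))
```

With 8 triangles the area is off by about 0.3; with 512 it is off by less than 0.0001. The same principle holds for 3D surfaces tessellated into small triangles.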

The second trick, going hand in hand with triangle decomposition, is rasterization. Rasterization doesn’t intersect each photon in a scene with every visible triangle, but turns the problem the other way around: it enforces that each direct light path from a given triangle to the observer’s eye lands at a fixed, well-known, very quickly computed position on the observer’s screen, covering this pixel, that pixel, and so on. It is one of the most powerful simplifications used for solving the light equation, and is used in almost every 3D game engine to date. To my knowledge, it is even used to some extent in some “offline” renderers for 3D movie production, such as Pixar’s PRMan.
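Here is a minimal sketch of that “turn the problem around” step: a toy software rasterizer using the classic edge-function coverage test. The triangle coordinates and grid size are made up for illustration, and real rasterizers add many refinements (fill rules, sub-pixel precision, depth), but the core question is the same: which pixels does this triangle cover?

```python
def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p):
    # positive when p is to the left of the directed edge a -> b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    # For each pixel centre, test whether it lies inside the triangle.
    # Note we never cast any ray: we just ask, per triangle,
    # which screen pixels it covers.
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5
            w0 = edge(bx, by, cx, cy, px, py)
            w1 = edge(cx, cy, ax, ay, px, py)
            w2 = edge(ax, ay, bx, by, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.append((x, y))
    return covered

# A counter-clockwise triangle on a 16x16 pixel grid:
pixels = rasterize([(1, 1), (14, 2), (7, 13)], 16, 16)
```

A real GPU does this massively in parallel, but the coverage test itself is this cheap, which is why rasterization scales to millions of triangles per frame.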

An approach that doesn’t use rasterization is ray tracing: for each pixel on the screen, the path of a photon (or a group of photons in a cone) is shot in reverse from the observer’s eye into the scene, testing whether each ray intersects the scene geometry. Where a ray intersects geometry (along with how it interacts with the “matter” represented by that geometry) determines the final colour of the pixel from which it was cast. Ray tracing to this day has mostly been used in image processing without an interactive framerate requirement, by 3D renderers that are allowed to take whatever time they wish (or at least much more than a fraction of a second) to draw a given scene. The dream of ray tracing at interactive rates is nowadays just about to become a reality, using the latest advances in GPU parallel-processing power, and taking advantage of complex scene-organization structures that reduce the number of geometric elements against which to test intersection with a given ray.
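The core of that per-pixel loop might look like this in a toy ray tracer: one ray per pixel, tested against a single hard-coded sphere. All coordinates here are made-up assumptions, and there is no shading or bouncing, just hit or miss:

```python
import math

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2
    # for the nearest t > 0, or return None on a miss.
    ox, oy, oz = origin; dx, dy, dz = direction; cx, cy, cz = center
    fx, fy, fz = ox - cx, oy - cy, oz - cz
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (fx*dx + fy*dy + fz*dz)
    c = fx*fx + fy*fy + fz*fz - radius*radius
    disc = b*b - 4.0*a*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0*a)
    return t if t > 0 else None

def render(width, height):
    # One ray per pixel, shot from the eye at the origin through a
    # screen plane at z = -1; '#' where the ray hits the sphere.
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map pixel (i, j) to [-1, 1] x [-1, 1] on the screen plane.
            u = (i + 0.5) / width * 2 - 1
            v = 1 - (j + 0.5) / height * 2
            t = hit_sphere((0, 0, 0), (u, v, -1), (0, 0, -3), 1.0)
            row += "#" if t is not None else "."
        rows.append(row)
    return rows

image = render(16, 16)
```

Notice that each ray must be tested against the geometry (here, one sphere; in a real scene, millions of triangles), which is exactly why those scene-organization structures, such as bounding-volume hierarchies, matter so much.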

But to date, most engines still use rasterization. The simplification it provides is great, but since it’s a simplification, there are things it doesn’t accurately describe. For example, rasterization assumes that light travels in a straight line when it determines that this piece of the scene geometry should be seen from that position on the screen. In fact, light doesn’t really travel in a straight line: its path bends as photons bounce all around, even when going through air. Rasterization holds plausible for most situations, but when light should be perceived as if it went through air with high temperature variations, such as above a fire, or as if refracted at the boundary between water and air, plausible physical light rendering requires a lot of other tricks to overcome the shortcomings of that first rasterization trick. Shadows are also something that requires more effort to get right: remember, rasterization skips the light-geometry intersection part altogether, so how do we know that the light coming from the sun, or from that torch, was not occluded before reaching the triangle we’re evaluating there? Well, it turns out we don’t know. And adding shadows thus requires some more tricks…
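To make the occlusion problem concrete, here is a sketch of the “shadow ray” test a ray tracer gets almost for free, and which rasterization has to fake with extra passes (shadow maps, shadow volumes…). From the shaded point, march toward the light and check whether anything blocks the way; the sphere and positions are made up for illustration:

```python
import math

def sphere_blocks(origin, target, center, radius):
    # Cast a "shadow ray" from origin toward target (the light) and
    # report whether the sphere occludes it before the light is reached.
    dx, dy, dz = (t - o for t, o in zip(target, origin))
    length = math.sqrt(dx*dx + dy*dy + dz*dz)
    dx, dy, dz = dx/length, dy/length, dz/length  # normalize direction
    fx, fy, fz = (o - c for o, c in zip(origin, center))
    b = 2.0 * (fx*dx + fy*dy + fz*dz)
    c = fx*fx + fy*fy + fz*fz - radius*radius
    disc = b*b - 4.0*c
    if disc < 0:
        return False
    t_hit = (-b - math.sqrt(disc)) / 2.0
    # Occluded only if the hit lies strictly between the point and the light
    # (the small epsilon avoids self-intersection at the surface).
    return 1e-6 < t_hit < length

# A sphere at the origin blocks the light for a point directly behind it...
in_shadow = sphere_blocks((0, 0, -5), (0, 0, 5), (0, 0, 0), 1.0)
# ...but not for a point off to the side.
lit = not sphere_blocks((4, 0, -5), (4, 0, 5), (0, 0, 0), 1.0)
```

A rasterizer has no such per-point visibility query toward the light; it has to reconstruct that information indirectly, and each reconstruction is one more trick.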

tricks.

tricks.

tricks tricks tricks tricks tricks.

So many tricks all over the place they make me sick.

I find the ray-tracing approach so simple, pure and elegant compared to the convoluted tricks required by rasterization that I even considered taking the ray-tracing path for NeREIDS (against our usual triangles or even voxel grids). Who knows, maybe if NeREIDS ships on 2020 hardware, it would make no difference, performance-wise? :p

But despite the large coding effort and computing overhead incurred by those tricks, rasterization is still many times faster than ray tracing for a plausible rendering of most scenes. So I took the rasterization road again. And yet I never got myself in the mood to code most lighting tricks into a rasterization engine, even very widespread ones such as those used for shadows. Yeah, despite all those past projects where I messed around with 3D rendering, I have yet to write any code for cast shadows. I’m a coward like that.

So I’m quite anxious about it, but I’ll get there 🙂

I guess if you’re interested in all this stuff, a master would cover things better than I could. So here is John Carmack, speaking (at length) about light and the history of rendering techniques:

Oh come on! Use Direct3D and stop wasting time on this.

http://en.wikipedia.org/wiki/Windows_Advanced_Rasterization_Platform

You have the right to enjoy coding a *game*. (And I want to see it one day!)

Well I am using Direct3D.