Posted: 8 January 2011
Tags: sampling
Computing a picture is basically sampling the world to reconstruct a signal (the color information). The question is: how many samples do you need? Claude Shannon answered in 1949: a signal can be reconstructed exactly if it is sampled at least at twice its maximum frequency. This minimum sampling rate is called the Nyquist rate. What happens if your sampling frequency is below that threshold? You get geometric, color, or temporal aliasing (jaggies, Moiré patterns, the wagon-wheel effect), which is visually disturbing.
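A tiny numerical sketch of what under-sampling does (the function names and frequencies here are illustrative, not from the post): sample a 7 Hz sinusoid at only 8 samples per second, well below the 14 Hz that Shannon's theorem requires, and the samples become indistinguishable from those of a 1 Hz sinusoid (up to sign).

```python
import math

def sample(freq_hz, rate_hz, n):
    """Sample a sinusoid of frequency freq_hz at rate_hz, n samples."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

high = sample(7.0, 8.0, 8)  # under-sampled 7 Hz signal
low = sample(1.0, 8.0, 8)   # a 1 Hz signal at the same rate

# The 7 Hz tone aliases onto the 1 Hz tone (mirrored in sign):
assert all(abs(h + l) < 1e-9 for h, l in zip(high, low))
```

Once the samples are taken, no reconstruction filter can tell the two signals apart; the high-frequency energy has irrevocably "folded" into the low frequencies.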
In the real world, geometric details range from kilometers to sub-millimeters, light comes from all directions, and there are abrupt geometric or color transitions, all of which push the Nyquist rate to very high values. Theoretically, you would need zillions of samples to remove aliasing. However, consider that your eye has fewer sensors than the CCD of the average cellphone camera yet never ever aliases: it seems Mother Nature has found a good solution. In 1983, Yellott had a decisive interview with a monkey [1].
Monkey eye cone distribution 
At first sight, the cones only look like random dots. However, the frequency analysis of the spatial distribution is much more revealing.
Fourier transform 
This is a blue noise spectrum (or Poisson disk distribution) which has two interesting properties:
- there is no low-frequency energy (the empty ring around the spike): there is always a minimum distance between samples.
- there is an equal amount of energy in the higher frequencies (the white ring): samples are noisy but uniformly distributed.
Thanks to its cone distribution, our eye converts aliasing energy into high-frequency noise, to which our visual system is less sensitive. You cannot remove aliasing, but you can mitigate its impact on the quality of the final image. Therefore, a good sample distribution should strive to mimic a Poisson disk distribution pattern.
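The simplest way to approximate such a pattern is "dart throwing": draw candidates uniformly and reject any that land too close to an accepted sample. A minimal sketch (this is illustrative code, not the sampler used by XRT):

```python
import random

def dart_throwing(n, min_dist, max_tries=10000, seed=1):
    """Approximate Poisson disk samples in the unit square by rejection."""
    rng = random.Random(seed)
    samples = []
    tries = 0
    while len(samples) < n and tries < max_tries:
        tries += 1
        x, y = rng.random(), rng.random()
        # Accept only if the candidate keeps the minimum distance
        # to every sample accepted so far.
        if all((x - sx) ** 2 + (y - sy) ** 2 >= min_dist ** 2
               for sx, sy in samples):
            samples.append((x, y))
    return samples

pts = dart_throwing(64, 0.08)
```

Note that dart throwing may stop before reaching the requested count once the square fills up, which is precisely the control-over-sample-count problem mentioned below.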
However, generating a Poisson disk distribution in an efficient manner, with accurate control of the number of samples, is still an open problem. Only approximating distributions are available. Armed with these bits of theoretical knowledge, we can now explain the behaviour of various sample distributions^{1}.
A uniform distribution satisfies the minimum distance requirement but fails to distribute the aliasing energy evenly. To make matters worse, it concentrates that energy in a coherent manner, producing repeating patterns. Conversely, a random distribution distributes the aliasing energy evenly but fails to respect the minimum distance requirement: the result is low-frequency noise, which is distracting to the eye.
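The two extremes are trivial to generate, which is why they are so tempting. A sketch, with hypothetical helper names, of both point sets over the unit square:

```python
import random

def uniform_grid(n_per_side):
    """Regular grid: fixed spacing, but perfectly coherent positions."""
    step = 1.0 / n_per_side
    return [((i + 0.5) * step, (j + 0.5) * step)
            for i in range(n_per_side) for j in range(n_per_side)]

def pure_random(n, seed=1):
    """Independent random samples: no minimum-distance guarantee at all."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

grid = uniform_grid(4)  # 16 regularly spaced samples
rand = pure_random(16)  # 16 unconstrained samples, clumps allowed
```

The grid's samples all sit at the same offsets, so any aliasing they produce repeats coherently across the image; the random set can place two samples arbitrarily close together.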
A stratified jittered distribution does a better job with respect to the minimum distance requirement but does not completely fulfill it: there is no way to prevent two samples from adjacent strata from coming too close. Most of the low-frequency noise is eliminated, but not all. Finally, low-discrepancy^{2} distributions try to obtain stratified samples while avoiding clumps. They exhibit the best results.
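Stratified jittering is one random sample per cell of an n × n grid; a minimal sketch (illustrative, not XRT's sampler):

```python
import random

def jittered(n_per_side, seed=1):
    """One uniformly jittered sample inside each cell of an n x n grid."""
    rng = random.Random(seed)
    step = 1.0 / n_per_side
    return [((i + rng.random()) * step, (j + rng.random()) * step)
            for i in range(n_per_side) for j in range(n_per_side)]

samples = jittered(4)  # 16 samples, exactly one per stratum
```

Every stratum is covered, which kills most of the low-frequency noise, but two samples in adjacent cells can still land right next to the shared cell boundary.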
There are many low-discrepancy distributions (Halton, Hammersley, Van der Corput, Sobol', …). The next XRT release will use a (0,2) sequence^{3} to sample things like area lights, soft shadows, glossiness, or translucency.
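The common building block of these sequences is the radical inverse: mirror the digits of the sample index around the radix point. Here is a sketch of the base-2 version (the Van der Corput sequence) and a Hammersley-style 2D point set built from it; this is a related illustration, not the actual (0,2) sequence implementation:

```python
def radical_inverse_base2(i):
    """Mirror the bits of i around the binary point:
    1 -> 0.1b = 0.5, 2 -> 0.01b = 0.25, 3 -> 0.11b = 0.75, ..."""
    result, f = 0.0, 0.5
    while i > 0:
        if i & 1:
            result += f
        i >>= 1
        f *= 0.5
    return result

def hammersley(n):
    """2D low-discrepancy point set: (i/n, radical inverse of i)."""
    return [(i / n, radical_inverse_base2(i)) for i in range(n)]

print([radical_inverse_base2(i) for i in range(4)])  # [0.0, 0.5, 0.25, 0.75]
```

Successive indices flip the most significant bit first, so consecutive samples land in opposite halves of the domain, then opposite quarters, and so on: stratification without clumping.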
As a concrete example, let's compare how an area light shadow is rendered when sampled with a random distribution and with a (0,2) sequence.


The area light is sampled with only 4 samples to highlight sampling efficiency problems. Even without zooming in, it is obvious that the shadows surrounding the dresser are noisier with the random sampler. Quality matters!
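For context, a soft shadow is just a Monte Carlo estimate of the fraction of the area light visible from the shaded point. This toy sketch (not XRT's code; `occluded` is a hypothetical stand-in for a real shadow-ray test) shows why 4 samples are so noisy:

```python
import random

def shadow_estimate(occluded, n_samples, seed=1):
    """Estimate the visible fraction of a unit-square area light by
    testing occlusion at n random (u, v) positions on the light."""
    rng = random.Random(seed)
    hits = sum(occluded(rng.random(), rng.random()) for _ in range(n_samples))
    return 1.0 - hits / n_samples  # visible fraction in [0, 1]

# Toy occluder blocking the left half of the light (true answer: 0.5):
half_blocked = lambda u, v: u < 0.5

print(shadow_estimate(half_blocked, 4))     # coarse, noisy estimate
print(shadow_estimate(half_blocked, 1024))  # converges toward 0.5
```

With 4 samples the estimate can only take the values 0, 0.25, 0.5, 0.75, or 1, so neighboring pixels jump between very different shadow intensities; a well-distributed sample pattern pushes that error into less objectionable high-frequency noise.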