3.0 plans

Posted: 18 Jan 2013 22:15
Tags: roadmap

Here is what I plan for version 3.0. Please bear in mind that roadmaps are not set in stone and may evolve if need be.

Open Shading Language integration

Open Shading Language (OSL) is quite trendy at the moment (see this post from Larry Gritz). Let me explain my perception of the pros and cons of OSL.

Given that XRT already supports a shading language, why move to something else? The answer is that the current implementation has a number of limitations:

  • to compute derivatives, a shader is executed at three different locations: P+dPdx, then P+dPdy and finally P (this is how BMRT implemented derivatives). Through some clever tricks implemented within the SL library, the computed values are retained and the derivatives are assembled at the P location. There are two drawbacks: even if only one derivative is needed, the entire shader is run thrice (except for light loops, which are evaluated once), and if a derivative depends on another derivative, the computed value is 0 because second-order derivatives would require even more shader executions. OSL sidesteps both problems with automatic differentiation (see the sketch after this list).
  • the compiler is not able to take advantage of shader parameters that are constant for a whole primitive and prune execution accordingly. For instance, in the following construct, which is extremely common, the condition is re-evaluated on every shader execution even though its value never changes across the primitive.
if (texturename != "")
{
   ....
}
  • ray differentials are not propagated, which makes it impossible to filter textures or to refine subdivision surfaces.
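
To make the derivative point concrete, here is a minimal sketch of forward-mode automatic differentiation with dual numbers, the kind of machinery OSL relies on instead of re-running the shader at P+dPdx and P+dPdy. All names are illustrative and this is not OSL's actual implementation; it only shows the principle.

#include <cmath>
#include <cstdio>

// A value bundled with its partial derivatives along x and y.
struct Dual {
    float val;
    float dx;
    float dy;
};

// Product rule: d(ab) = a*db + b*da.
Dual operator*(Dual a, Dual b) {
    return { a.val * b.val,
             a.val * b.dx + b.val * a.dx,
             a.val * b.dy + b.val * a.dy };
}

// Chain rule: d(sin a) = cos(a) * da.
Dual dsin(Dual a) {
    float c = std::cos(a.val);
    return { std::sin(a.val), c * a.dx, c * a.dy };
}

int main() {
    // s carries its own derivatives, so a single evaluation yields the
    // value and both partials; no second or third shader run is needed,
    // and second-order derivatives could be tracked the same way.
    Dual s = { 0.5f, 1.0f, 0.0f };   // ds/dx = 1, ds/dy = 0
    Dual f = dsin(s * s);            // f = sin(s^2)
    std::printf("f = %f, df/dx = %f, df/dy = %f\n", f.val, f.dx, f.dy);
    return 0;
}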

OSL brings answers to all these issues through automatic differentiation, derivative tracking and just-in-time compilation (based on LLVM tools). These are quite powerful incentives for integrating this terrific piece of software. However, there are also some design decisions or unimplemented features that make me uneasy:

  • lights are modeled as emissive surfaces which means only area lights are available.
  • closures (a rough equivalent to RSL light loops) are built into the renderer, although a future version of OSL may support implementing them in OSL itself.

As a result, renderers that integrate OSL must provide their own implementations of point lights, directional lights and closures. This is a clear loss of flexibility compared to a full-fledged shading language.

I guess that the main reason for this is that there are deadlines in the real world and that you sometimes have to deliver intermediate releases before the final product to allow people to work. Nevertheless, it does not compromise a major design goal of OSL, which is to relieve shader writers as much as possible from the intricacies of rendering.

I have not been able to find much information on the latest evolutions of RSL but, from what I gathered here, the Pixar folks have chosen to preserve flexibility but to require more from shader writers, hence the need for a more complex interface with the renderer. For instance, PRMan 17.0 is now also a path tracer. I have quite limited knowledge of path tracing, but I understand that "importance sampling" (in plain words, put your samples where your light is) is key to performance and image quality. Therefore, an "efficient" shader has to provide information on how it must be "importance sampled".
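
Here is a minimal sketch of what "put your samples where your light is" means for a diffuse surface: draw directions with a cosine-weighted density instead of uniformly, so fewer samples are wasted at grazing angles where the cosine term kills the contribution. This is illustrative only, not PRMan's actual shader interface.

#include <cmath>

struct Vec3 { float x, y, z; };

const float kPi = 3.14159265f;

// Cosine-weighted direction in the local frame around the normal,
// returning pdf = cos(theta) / pi so the Monte Carlo estimator
// f(dir) * cos(theta) / pdf stays unbiased.
Vec3 sampleCosineHemisphere(float u1, float u2, float* pdf) {
    float r    = std::sqrt(u1);
    float phi  = 2.0f * kPi * u2;
    float cosT = std::sqrt(1.0f - u1);   // r^2 + cosT^2 = 1: unit vector
    *pdf = cosT / kPi;
    return { r * std::cos(phi), r * std::sin(phi), cosT };
}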

I do not feel like going backwards in terms of features. So, I could either try to extend OSL for my needs - but can it still be called OSL? - or keep on supporting RSL while reusing the OSL infrastructure - how about a public spec, Pixar? In any case, I will first try to match the existing XRT capabilities (while learning more about LLVM) before moving on to advanced topics such as global illumination.

True displacements with shaders

Actually, this is completely uncharted territory for me. There are known solutions for scanline renderers but, when it comes to raytracing, the computer graphics literature does not help much. Despite displacement being an important feature in a renderer (guess why commercial renderers are so tight-lipped on the matter), only a handful of papers deal with the subject:

1. PHARR, M., AND HANRAHAN, P. 1996. Geometry caching for ray-tracing displacement maps. In Eurographics Rendering Workshop 1996, pp. 31–40 (paper link).
2. SMITS, B., SHIRLEY, P., AND STARK, M. 2000. Direct ray tracing of displacement mapped triangles. In Proc. Eurographics Workshop on Rendering Techniques 2000, pp. 307–318 (paper link).
3. CHRISTENSEN, P. H., LAUR, D. M., FONG, J., WOOTEN, W. L., AND BATALI, D. 2003. Ray differentials and multiresolution geometry caching for distribution ray tracing in complex scenes. In Eurographics 2003 (paper link).
4. STOLL, G., MARK, W., DJEU, P., WANG, R., AND ELHASSAN, I. 2006. Razor: an architecture for dynamic multiresolution ray tracing. Technical Report 06-21, Department of Computer Science, University of Texas at Austin (paper link).
5. HANIKA, J., KELLER, A., AND LENSCH, H. 2010. Two-level ray tracing with reordering for highly complex scenes. In Proceedings of Graphics Interface 2010, pp. 145–152 (paper link). Also known as the "Rayes" algorithm.

The lack of any decent implementation in open-source renderers is probably a good measure of the problem's difficulty:

  • LuxRender implements a very limited subset of displacement shading: textures are used to displace subdivision surfaces along the normal (see the sketch after this list). Although a very respectable effort, it is far from what I want to achieve.
  • I believe there is a similar feature in Blender Cycles but, according to its lead developer, Brecht Van Lommel, it is far from satisfactory.
  • Psychopath is an on-going effort to implement the "Rayes" algorithm.
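
For the record, here is a minimal sketch of the LuxRender-style displacement mentioned above: each vertex of a tessellated surface is pushed along its normal by a scalar height. The types and names are illustrative placeholders, not LuxRender's actual code, and the height would come from a texture lookup at the vertex's (u, v) in a real renderer.

struct Vec3 { float x, y, z; };

// P' = P + scale * height * N: displacement restricted to the
// normal direction, which is exactly what makes it so limited.
Vec3 displace(Vec3 P, Vec3 N, float height, float scale) {
    return { P.x + scale * height * N.x,
             P.y + scale * height * N.y,
             P.z + scale * height * N.z };
}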

I too feel very attracted to the "Rayes" algorithm, maybe because its paper seems the easiest to understand. The OpenSubdiv project from Pixar will likely be an asset for XRT.

Path tracing

Although I sometimes wonder how people can accept render times of many hours for a single (noisy) picture, I want to give it a try just for the sake of curiosity. The concepts are quite difficult to master but the good news is that it is a very hot topic in computer graphics. There are so many papers that the jury is still out on what an effective solution looks like. Look, for instance, at the myriad of algorithms implemented in the Mitsuba renderer.

To help newbies, there are lots of tiny projects that implement global illumination algorithms:

  • smallpt, a path tracer
  • smallpvt, a volumetric path tracer
  • smallppm, an implementation of "Progressive photon mapping"
  • gpusppm, an implementation of "Stochastic progressive photon mapping"
  • smallVCM, an implementation of the "Vertex connection and merging" algorithm (a combination of progressive photon mapping and bidirectional path tracing)
  • githole repositories, another collection of global illumination algorithms

None of them is really designed for performance. Their only aim is to provide a pedagogical starting point.
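
To give a taste of what those projects contain, here is a minimal sketch of the recursion at the heart of a smallpt-style path tracer: at each bounce, add the surface emission and recurse along one sampled direction, terminating with Russian roulette. The intersect and sampleBRDF stubs are hypothetical stand-ins for a real renderer's services, not any project's actual API.

#include <cstdlib>

struct Vec3 { float x, y, z; };
struct Hit  { bool ok; Vec3 P, emission, albedo; };

// Hypothetical stand-ins for scene intersection and BRDF sampling.
Hit  intersect(Vec3, Vec3)          { return { false, {}, {}, {} }; }
Vec3 sampleBRDF(const Hit&, Vec3 d) { return d; }  // cosine-weighted, pdf folded in

// One radiance estimate along a ray: emission plus one sampled bounce.
Vec3 radiance(Vec3 origin, Vec3 dir, int depth) {
    Hit h = intersect(origin, dir);
    if (!h.ok) return { 0, 0, 0 };              // ray escaped the scene

    // Russian roulette after a few bounces bounds the expected path
    // length while keeping the estimator unbiased.
    const float survive = 0.8f;
    if (depth > 3 && std::rand() / (float)RAND_MAX > survive)
        return h.emission;
    float w = (depth > 3) ? 1.0f / survive : 1.0f;

    Vec3 newDir = sampleBRDF(h, dir);
    Vec3 Li = radiance(h.P, newDir, depth + 1);
    return { h.emission.x + w * h.albedo.x * Li.x,
             h.emission.y + w * h.albedo.y * Li.y,
             h.emission.z + w * h.albedo.z * Li.z };
}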

Volume rendering

My intent is to introduce concepts from the "Production Volume Rendering" book into XRT.
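
As a first taste, here is a minimal sketch of one of the book's core concepts: estimating transmittance through a heterogeneous volume by ray marching the optical depth and applying the Beer-Lambert law. The density function is a stand-in for a real volume shader, not anything from the book's code.

#include <cmath>
#include <cstdio>

// Transmittance over [t0, t1] by midpoint-rule ray marching.
float transmittance(float t0, float t1, int steps,
                    float (*density)(float)) {
    float dt  = (t1 - t0) / steps;
    float tau = 0.0f;                     // accumulated optical depth
    for (int i = 0; i < steps; ++i)
        tau += density(t0 + (i + 0.5f) * dt) * dt;
    return std::exp(-tau);                // Beer-Lambert law
}

int main() {
    // Hypothetical density: a smooth bump along the ray.
    float T = transmittance(0.0f, 1.0f, 64,
                            [](float t) { return 4.0f * t * (1.0f - t); });
    std::printf("transmittance = %f\n", T);
    return 0;
}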

Deformation blur

You have been raytracing too much when you dream of path-traced displaced deformed motion-blurred polygons …

More RenderMan compliance

  • blobby support
  • RIB 3.3 support
  • Python binding for the RI interface
