At SIGGRAPH 2017, NVIDIA is demonstrating OptiX 5.0, its AI-assisted ray-tracing toolkit, which the company claims can give NVIDIA’s DGX workstation a rendering capability equivalent to 150 CPU-based servers. OptiX 5.0 could boost the productivity of graphic designers who use ray-tracing, and eventually this tech might even trickle down to more cost-effective GPUs and software solutions.
That is a bold claim indeed, but let’s step back and look at how AI-assisted ray-tracing works and why it would be so much faster. Typically, much of the time (and cost) spent producing ray-traced graphics goes into the creative process, during which designers build models, materials, and scenes. Along the way, they repeatedly ray-trace the work in progress to see how it looks. This happens thousands of times, and the number of iterations often correlates with the quality of the final work.
Ray-tracing is a very compute-intensive rendering technique, and if you want to “preview” a scene without doing a complete render, the trade-off is normally to accept a higher level of noise (see above image, left side). The noise appears because not enough rays have been traced, so the final lighting for many pixels has not yet been calculated. Sometimes a noisy image is “OK” from a preview perspective, but it is never ideal. Sometimes you just need everything to look as close to final as possible.
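For intuition, the noise in an unconverged render behaves like the error of any Monte Carlo average: it shrinks only with the square root of the sample count, so halving the noise costs four times the rays. The toy C++ sketch below is purely illustrative (it is not OptiX code, and the random “radiance” stand-in is an assumption), but it shows the 1/sqrt(N) falloff that makes fully converged renders so expensive:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Toy Monte Carlo estimate of a single pixel's brightness.
// The standard error of the average falls as 1/sqrt(N), so halving
// the visible noise requires roughly 4x as many rays.
int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> sample(0.0, 1.0);

    for (int n : {16, 64, 256, 1024, 4096}) {
        double sum = 0.0, sumSq = 0.0;
        for (int i = 0; i < n; ++i) {
            // Stand-in for tracing one ray and fetching its radiance.
            double radiance = sample(rng) * sample(rng);
            sum   += radiance;
            sumSq += radiance * radiance;
        }
        double mean     = sum / n;
        double variance = sumSq / n - mean * mean;
        // Noise (standard error of the pixel estimate) shrinks as 1/sqrt(n).
        std::printf("%5d rays: estimate %.4f, noise ~ %.4f\n",
                    n, mean, std::sqrt(variance / n));
    }
}
```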
With the help of AI, the rendering engine can look at noise patterns and estimate what the final image might look like. This is not unlike reconstructing a very pixelated face in a low-resolution photo after having looked at a whole lot of faces. The computer, having seen millions of noise patterns, is simply much better equipped to estimate which one the current image most resembles and what the de-noised result should look like.
Another way of thinking about it is to have the AI build a police sketch of the final de-noised image, based on a rough description (the noisy image). It remains an estimate, but if it is good enough, it can give an excellent feel for the real thing. That is what “previews” are about.
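In practice, OptiX 5.0 exposes this denoiser as a built-in post-processing stage named “DLDenoiser”. The sketch below is a minimal illustration, assuming the OptiX C++ wrapper and hypothetical buffer names and entry-point index; a real renderer can also feed optional albedo and normal buffers to improve the estimate:

```cpp
#include <optixu/optixpp_namespace.h>

// Minimal sketch: launch a cheap, noisy render, then run NVIDIA's
// built-in deep-learning denoiser over it. Buffer handles and the
// entry-point index are illustrative assumptions.
void renderWithDenoiser(optix::Context context,
                        optix::Buffer  noisyBeauty,    // float4 noisy render
                        optix::Buffer  denoisedOutput, // float4 result
                        unsigned width, unsigned height)
{
    // Create the built-in deep-learning denoiser stage.
    optix::PostprocessingStage denoiser =
        context->createBuiltinPostProcessingStage("DLDenoiser");

    // Wire up the noisy input and the output target.
    denoiser->declareVariable("input_buffer")->set(noisyBeauty);
    denoiser->declareVariable("output_buffer")->set(denoisedOutput);

    // "blend" mixes original and denoised pixels (0 = fully denoised).
    denoiser->declareVariable("blend")->setFloat(0.0f);

    // A command list sequences the noisy launch and the denoise pass.
    optix::CommandList commands = context->createCommandList();
    commands->appendLaunch(0, width, height); // entry point 0: the renderer
    commands->appendPostprocessingStage(denoiser, width, height);
    commands->finalize();
    commands->execute();
}
```

The “blend” variable is handy for previews: raising it above zero keeps some of the raw noise visible, so artists can still judge how converged the underlying render actually is.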
That AI denoising process is a lot faster than ray-tracing to convergence, and it provides remarkable results if the NVIDIA demos are a good reflection of the average use case. We will see how designers in the field judge it, but the idea and the demos are quite impressive. I am not sure how it holds up temporally (if you try creating animated scenes), but this is not meant to be a “final render” solution.
NVIDIA should not have any problem finding clients, because previews are never fast enough, and this could potentially save design companies a ton of money. There’s real value here. Because designers can preview faster, they can try more things and better tune existing work within the same budget. Shrinking design budgets and schedules also becomes a real possibility. And there is even more AI-driven graphics work coming from NVIDIA Research.