August 25, 2021
When you have done everything you can to speed up rendering on the compute cores, what else can you do? The two GPU makers, NVIDIA and AMD, both turned to machine learning, or AI. Intel has been taking the same approach with CPUs, and it's also bringing its AI-based Open Image Denoise to its upcoming GPUs.
At the virtual SIGGRAPH 2021, CPU maker Intel announced that Intel Open Image Denoise is set for integration with Arnold, a leading ray-traced renderer from Autodesk. Open Image Denoise was originally released at SIGGRAPH in 2018. The technology consists of “an open source library of high-performance, high-quality denoising filters for images rendered with ray tracing. Intel Open Image Denoise is part of the Intel oneAPI Rendering Toolkit and is released under the permissive Apache 2.0 license,” according to Intel's announcement.
The CPU maker has made numerous attempts to jostle its way into the GPU-dominated rendering segment. (For more, read about Intel Larrabee.) The latest campaign centers on Intel's upcoming Xe GPUs. To understand Intel's strategy, we spoke with Jim Jeffers, Intel’s Senior Director and Senior Principal Engineer of Advanced Rendering and Visualization Architecture.
Are there specific types of denoising functions (or specific types of models) in which the CPU-based denoising approach works better than the GPU-based approach? Or vice versa?
In most cases, the AI-denoising capabilities of Intel Open Image Denoise are software-based, so the improvements come from software optimizations regardless of the hardware used. Across vendor implementations, however, you can generally take shortcuts that make denoising faster but lower quality. If you set a maximum time allocation for denoising, you may be making a design decision that limits visual quality. Intel Open Image Denoise is implemented for Quality First, speed next. The question then becomes, “How fast can you make this high-quality implementation run on a CPU or GPU?” Intel’s perspective is that image fidelity is king, and we will work hard to make it fast for a plethora of use cases, from final-frame rendering for movies to real-time rendering for games.
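The quality-versus-time trade-off Jeffers describes can be illustrated with a toy sketch. This is not OIDN's algorithm (which uses a trained neural network); the box filter, the flat scanline, and the noise level are all hypothetical, chosen only to show that a time-capped filter leaves residual noise while a quality-first filter keeps working toward the clean result:

```python
import random

def box_blur(pixels, passes):
    """Repeatedly average each pixel with its neighbors (edges clamped)."""
    out = list(pixels)
    n = len(out)
    for _ in range(passes):
        out = [(out[max(i - 1, 0)] + out[i] + out[min(i + 1, n - 1)]) / 3
               for i in range(n)]
    return out

def rmse(a, b):
    """Root-mean-square error between two scanlines."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

random.seed(7)
clean = [1.0] * 256                                 # an idealized converged scanline
noisy = [p + random.gauss(0, 0.1) for p in clean]   # Monte Carlo-style noise

quick   = box_blur(noisy, 1)   # time-capped: one pass, noise remains
patient = box_blur(noisy, 8)   # quality-first: keep filtering

print(f"raw      RMSE: {rmse(noisy, clean):.4f}")
print(f"1 pass   RMSE: {rmse(quick, clean):.4f}")
print(f"8 passes RMSE: {rmse(patient, clean):.4f}")
```

The RMSE drops with each additional pass, which is the point of the design decision: a denoiser built around a time budget must stop at "quick," while a quality-first denoiser is allowed to reach "patient," and the engineering effort then goes into making that high-quality path fast on whatever hardware is available.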
DE readers are primarily CAD users who might use rendering to create realistic product shots (for example, using digital images for marketing and design review in automotive). How is Open Image Denoise relevant to these usages?
Intel’s oneAPI Rendering Toolkit has been used by Bentley Motors to digitize its interior and exterior design options. Bentley has rendered over 1.7 million images, delivering over 1 billion options to customers for each vehicle Bentley offers, in a cost-efficient, optimized process. Bentley’s car configurator is built on Intel OSPRay, Intel Open Image Denoise and Intel Embree, all part of the Intel oneAPI Rendering Toolkit. See these references for more details: Bentley Motors Shares How Intel Accelerates its Car Configurator [2:31] | Demo [6:58]
Is there an estimated timeframe for when Xe GPUs will reach the market? Does that mean Open Image Denoise works on CPU as well as GPU cores? Or will it be limited only to Xe GPU cores?
Xe HPG microarchitecture-based Alchemist GPUs, formerly known as DG2, will appear in products in the first quarter of 2022. Intel Open Image Denoise will deliver the same highest-fidelity results on those and future Xe GPUs as it currently delivers on Intel CPUs.
The Open Image Denoise API is part of the in-progress provisional oneAPI Specification v1.1, targeting industry ratification by the end of 2021. That means we encourage our industry partners to support this open specification. It is quite possible for someone in the ecosystem, or NVIDIA itself, to port the open source Open Image Denoise implementation to a CUDA version, or, where SYCL is supported, to AMD GPUs using tools from Codeplay or the in-development hipSYCL effort. An existing example of such portability is that Open Image Denoise has already been ported to, and runs natively on, Apple's Arm-based M1 processors.
Since GPUs tend to have more computing cores than CPUs, is GPU-based rendering more cost-effective than the CPU-exclusive approach?
Customers who use Intel Open Image Denoise today value Quality First. Autodesk’s Arnold renderer is beta testing Intel Open Image Denoise, adding it alongside its existing OptiX integration and its own non-AI denoiser, simply called Denoise. Arnold’s head of software development recently categorized the use models as follows:
- Denoise for maximum final-frame quality, but it can take hours to complete;
- Intel Open Image Denoise for close-to-Denoise quality but much faster, on the order of seconds;
- OptiX for high frame rate preview, not suitable for final frame rendering.
So the conclusion is that for a large-scale rendering job, Intel Open Image Denoise likely provides a cost advantage: you don’t need to purchase an expensive GPU, and even when you do, the faster GPU-based result often doesn’t meet film-quality standards.
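One way to see why denoising changes the cost equation: Monte Carlo noise falls only as 1/sqrt(samples per pixel), so halving noise by brute-force sampling quadruples render time, while a denoiser (per the "order of seconds" characterization above) does it almost for free. A back-of-envelope sketch, where the baseline spp, per-sample cost, and denoise time are all hypothetical numbers chosen for illustration:

```python
def brute_force_spp(base_spp, noise_reduction):
    """Samples per pixel needed to cut noise by `noise_reduction` via
    sampling alone, since Monte Carlo error scales as 1/sqrt(spp)."""
    return base_spp * noise_reduction ** 2

base_spp = 256       # hypothetical baseline samples per pixel
sec_per_spp = 1.5    # hypothetical seconds per spp for the whole frame
denoise_sec = 5.0    # hypothetical denoise pass, "order of seconds"

# Option A: quadruple the sampling to halve the noise
spp_a = brute_force_spp(base_spp, 2)           # 1024 spp
time_a = spp_a * sec_per_spp                   # 1536 s

# Option B: keep 256 spp and run a denoise pass instead
time_b = base_spp * sec_per_spp + denoise_sec  # 389 s

print(f"brute force: {time_a:.0f} s, render + denoise: {time_b:.0f} s")
```

Under these assumed numbers the denoised path is roughly 4x cheaper per frame, which is why a high-quality denoiser matters more than raw core counts for large batch-rendering jobs, whether the cores are in a CPU or a GPU.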
About the Author
Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.