November 1, 2013
When it comes to rendering and visualization, fast is never fast enough. So how do you go ever faster?
The old answer was to buy the fastest processors and the most and fastest RAM you can afford. And that still holds in most cases: Grab a speedy Intel Core i7 processor (or better yet, several of them), coupled with vast quantities of DDR3 RAM, and you can work at a speed that would have made your head explode just a few years back.
Consider KeyShot, which I reviewed back in the February issue. KeyShot is a CPU-only renderer that is very, very fast when running on any semi-current computer. It will give you decent-looking renders in a few seconds. Give it plenty of RAM and plenty of CPU cores, and watch it fly.
Good Ol’ CPUs
In fact, says Intel Workstation Segment Manager Wes Shimanek, “most of the high-end ray tracing algorithms out there are being developed on CPUs, because it gives them access to immense amounts of memory and immense amounts of compute capacity.
“Autodesk, for example, has a number of rendering technologies,” Shimanek continues. “But their most advanced ray tracing technologies, based on VRED, are all CPU-based. The features and functions you get with a CPU-based ray tracer are superior.”
If your CPU isn’t up to the task, you can always add an Intel Xeon Phi coprocessor, which provides as many as 61 cores and up to 16GB of RAM for more than a teraflop of performance.
“The same software that runs on your CPU runs over on the Xeon Phi coprocessor,” says Shimanek. “From a software developer’s perspective, that’s great: Write it once, and it runs on multiple platforms. The CPU and coprocessor both look and smell just like Intel X86 architecture. This gives you greater flexibility.”
What’s in the Cards?
Caustic’s ray tracing acceleration boards speed things up by moving the grunt work of tracing rays onto specially designed processors. Not to be confused with the graphics processing units (GPUs) found on many graphics cards, Caustic’s ray tracing units (RTUs) are designed for ray tracing. By using the system CPU and keeping textures in system memory, the boards can render larger scenes than most GPU boards, while consuming significantly less power. (Editor’s Note: For more about Caustic’s boards, see “Seeing Ray Tracing in a Different Light” in the April issue of DE).
GPU-based cards can make for some screaming-fast renders but, as hinted at above, a key limitation is that GPU-based rendering requires all of your scene’s geometry and texture information to be loaded onto the GPU card itself, to avoid passing chunks of data across the painfully slow system bus.
In response, NVIDIA’s new Quadro K6000 packs 12GB of RAM onto a single card—triple that of the previous flagship card, the K5000. It’s not just more RAM, of course. With 2,880 Compute Unified Device Architecture (CUDA) cores on tap and 288GB/second of memory bandwidth, the K6000 will push more than 2 billion triangles per second onto the screen. But the RAM itself is monumentally useful for loading really large scenes.
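To get a feel for how that memory budget gets spent, here is a rough back-of-envelope estimate. The byte counts and texture sizes below are illustrative assumptions, not figures from NVIDIA or any particular renderer:

```python
# Rough estimate of the GPU memory a scene might need.
# Per-vertex layout, vertex sharing and texture sizes are assumptions.

def scene_vram_gb(triangles, textures, texture_mpixels=4, bytes_per_pixel=4):
    """Approximate VRAM footprint in GB: indexed geometry plus uncompressed textures."""
    bytes_per_vertex = 32          # position, normal, UV (assumed layout)
    bytes_per_index = 3 * 4        # three 32-bit indices per triangle
    vertices = triangles * 0.6     # assume substantial vertex sharing in the meshes
    geometry = vertices * bytes_per_vertex + triangles * bytes_per_index
    texture_bytes = textures * texture_mpixels * 1_000_000 * bytes_per_pixel
    return (geometry + texture_bytes) / 1e9

# A 40-million-polygon vehicle with 200 4-megapixel uncompressed textures:
print(f"{scene_vram_gb(40_000_000, 200):.1f} GB")   # prints about 4.4 GB
```

Even under these generous assumptions, a 40-million-polygon vehicle with a couple hundred uncompressed textures lands in the 4GB to 5GB range: comfortable on a 12GB card, but a tight squeeze on boards with only a few gigabytes of on-board memory.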
No More Data Preparation?
“Because engineering data has back sides, substrates, fasteners, harnesses, cables and hoses, that data is extremely heavy,” says Ryan Schnackenberg, head of design and engineering solutions at RTT. “In the past, we have relied heavily on a process flow called data preparation.”
Schnackenberg recalls how it could be the work of half a day, a day or even more, to organize, tessellate, delete, simplify and otherwise prepare CAD data to fit into available RAM, and to allow the application to manipulate it at a reasonable frame rate.
“Now that we’re seeing more advances in GPU technology, we see a move away from data preparation,” he adds.
At the recent SIGGRAPH 2013, RTT wanted to show some completely unprepared data from the Nissan Motor Co. The “data” was, in fact, an entire Nissan Pathfinder, comprising more than 40 million polygons. Historically, Schnackenberg says, the team would have had to strip quite a bit out of the model to create something that would work in near real-time. But by using a 12GB K6000 card, RTT was able to load the entire car into its DeltaGen application with very little preparation. The result was a scene with high visual fidelity, running on a workstation in real time, with full global illumination.
Plus, there are advantages to leaving all the fiddly bits intact rather than stripping them out. “In a design and engineering scenario,” says Schnackenberg, “designers want to peel the layers from the onion; they want to see the wires and the hoses.”
The less you have to change the model along the way, the easier the lifecycle management process is, he adds: “We try to re-leverage the CAD data throughout its lifecycle. As it goes to manufacturing and then to launch, you can reuse the content again for sales and marketing, online configurators, kiosks, etc.”
Remote Clusters
Another technology on display at SIGGRAPH was a cluster of 232 NVIDIA GPUs. “The K6000 does an awesome job,” says Andrew Cresci, NVIDIA’s vertical marketing general manager. “But what a cluster does is nothing short of astonishing. It’s like having a photograph you can move around in 3D.”
It can be used for rendering, or it can be used for NASTRAN or computational fluid dynamics (CFD). It doesn’t have to be a massive cluster, either, Cresci notes: “A high-end workstation these days can have three or four of our GPUs bolted into it.”
In fact, the cluster at SIGGRAPH was actually running hundreds of miles away, in Santa Clara. “We were remoting down to Anaheim,” says Cresci, “running off a workstation at the trade show with full interactivity. In the last nine months or so, remoting has really become commercially deployable for the first time.”
Renderings of a rotor (top) and a Ferrari (bottom) from Lagoa, and of an aircraft interior from RTT.
By having a centralized compute capacity next to the file server on a high-speed link, you can load data onto the cluster very quickly. The central computer does the heavy lifting. Your access is a relatively low-bandwidth H.264 video stream.
“The visualization comes up quickly because the data loads incredibly fast,” says Cresci. “You get very high performance computing on the central cluster, and you can access it from anywhere. The ability of guys to sit in their office and have access to this phenomenal capability, without long load times and crunching times, is huge. And if you go out to a supplier site, you just log on and everything comes up.
“The data never leaves your corporate site. All you’ve got is a video stream so there’s no IP risk.”
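To put numbers on that trade-off, consider a small illustrative calculation. The dataset size, link speed and stream bitrate below are assumptions chosen for the example, not figures from NVIDIA’s demo:

```python
# Illustrative comparison: shipping the dataset vs. streaming rendered frames.
# The dataset size, link speed and stream bitrate are assumptions.

dataset_gb = 20        # assumed size of a prepared visualization dataset
wan_mbps = 50          # assumed office-to-datacenter link speed
h264_mbps = 15         # assumed bitrate of an interactive 1080p H.264 stream

transfer_minutes = dataset_gb * 8_000 / wan_mbps / 60
print(f"Copying the dataset out: roughly {transfer_minutes:.0f} minutes up front")
print(f"Streaming the session instead: a steady ~{h264_mbps} Mbit/s, and the data stays put")
```

Under those assumptions, moving the data would cost nearly an hour before the first pixel appears; streaming the session costs a modest, constant bitrate and the dataset never leaves the building.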
DeltaGen 12 Recently Released
RTT has released an upgrade of its flagship 3D visualization software, DeltaGen 12. According to the company, the new release includes more than 60 enhancements and new features, among them artistic compositing and post-production tools such as motion blur, lens flare, sun shafts, render passes and flexible real-time light and shadow settings. A new camera model supports a real-photography approach, with settings for sensor size, focal length and shutter speed. The release also saves time through performance increases and greater reusability of data, the company says, reducing time-to-asset via faster offline rendering and real-time ray tracing, including global illumination (GI). The company also introduced RTT Xplore DeltaGen, a multi-touch navigation tool that simplifies presentation set-up and improves scene interaction. RTT DeltaGen 12 for Teamcenter combines 3D visualization with PLM; the new version comes with an enhanced metadata concept, quicker update processes and customized queries for searching, filtering and sorting.
On a smaller scale, NVIDIA has a sort of cluster-in-a-box called the visual computing appliance (VCA). Plug it in, hook it to the Internet/intranet, and install your software. Now you can run, say, SolidWorks from your desk, from home, from the conference room or from the offices of your outside suppliers.
“You get staggering performance with SolidWorks and RealView,” says Cresci.
Cloud Rendering
If this approach is appealing but you don’t want to build and maintain your own cluster, you can always go to the cloud. Autodesk 360, for example, is a cloud rendering service currently available for AutoCAD, Revit and Fusion.
“We’re trying to shift people from just rendering for presentation purposes, when the design is finished, to thinking of rendering as a process that occurs throughout the design,” says John Hutchinson, senior software architect and SWD manager. “We spent a good deal of time validating [our renderer’s] accuracy from a photometric standpoint. The renderer is accurate, so using it early on in the process can inform many aspects of the design.”
With AutoCAD and Revit, the application runs and stores data locally, then shoots a compressed soup of triangles to the cloud for rendering—obfuscating your intellectual property.
“This is shifting as more of our products become ‘cloudified,’” says Hutchinson. “Fusion 360 is an example of that. While you do a desktop install, all the data is stored in the cloud. The storage is much closer to the compute in that scenario.”
While Autodesk 360 is tied to a handful of Autodesk applications, Lagoa’s cloud-based renderer will import from a variety of file formats to render in your browser window.
“You can load a full CATIA file of a BMW 3 series car, and it will render in less than a minute,” says Lagoa Vice President Chris Williams. “Every detail—even the stitching inside the car is modeled in 3D.”
To accomplish this, Williams says, “you have to build a lot of infrastructure, from things like decimation of geometry, compression routines to stream rendering, components to make a browser-based component run like an application, version control and asset management. We’ve got a very beefy infrastructure on the back end that dynamically scales.”
Lagoa’s approach presents other possibilities. “We can deliver our platform not only as an application, but also as a set of APIs,” Williams says, referring to application programming interfaces. “About a third of our inbound interest today is people looking to improve the visual experience in a web environment. Most of these are web configurators.”
With a few hundred lines of code, you can give your customers, both internal and external, access to your cars, shoes or desktop speakers, letting them repaint and resurface them and view the results in full 3D, in real-time.
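A configurator built on such an API really can be compact. As a rough sketch of the idea, the Python snippet below drives a hypothetical cloud rendering service; the service URL, endpoints, parameters and scene ID are invented for illustration and are not Lagoa’s actual API:

```python
# Hypothetical sketch of a configurator back end driving a cloud renderer.
# The service URL, endpoints and parameters are invented for illustration;
# they are not Lagoa's (or anyone's) actual API.
import requests

API = "https://render.example.com/v1"     # hypothetical rendering service
SCENE = "desktop-speaker-v3"              # hypothetical scene asset ID

def recolor_and_render(part, color, out_path="preview.png"):
    """Apply a material change to one part, then fetch an updated rendered frame."""
    requests.post(f"{API}/scenes/{SCENE}/materials",
                  json={"part": part, "baseColor": color},
                  timeout=30).raise_for_status()
    frame = requests.get(f"{API}/scenes/{SCENE}/render",
                         params={"width": 1280, "height": 720},
                         timeout=120)
    frame.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(frame.content)

recolor_and_render(part="grille", color="#c0392b")
```

Put a web front end on top that calls the same endpoints from the browser, and you have the kind of “few hundred lines of code” configurator Williams describes.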
“Five or 10 years ago, if you’d come out with this really sophisticated rendering engine, it would have had a hard time getting traction,” Williams concludes. “But what we’re seeing in the market is that people have become more sophisticated, and they’re looking for more.”
Contributing Editor Mark Clarkson is DE’s expert in visualization, computer animation, and graphics. His newest book is Photoshop Elements by Example. Visit him on the web at MarkClarkson.com or send e-mail about this article to [email protected].