NVIDIA GTC 2014: The Dawn of Pascal; the Rise of the Machines

At NVIDIA's GPU Technology Conference (GTC, March 24-27, San Jose, California), the self-driving Audi Connect upstaged even NVIDIA's enigmatic CEO Jen-Hsun Huang. The autonomous vehicle drove itself onto the stage, providing the big finish to Huang's keynote. But the Audi's presence may have had a purpose greater than the wow factor: Huang suggested the GPU would play a crucial role in machine learning.

As he stepped up to deliver his keynote address to the GPU faithful in San Jose's McEnery Convention Center, Huang quipped, “A good friend said [GTC] is like the Woodstock of computational mathematicians. I hope it turns out the same way.”

For the past several years, NVIDIA has worked to redefine the GPU's identity. The company's message: The graphics processor is not just for fueling the blood, gore, and explosions in video games and movies. Bunched together, GPUs have sufficient firepower to tackle large-scale problems that affect humanity, from accurate weather simulation to DNA sequencing. In the era of the Internet of Things (IoT), that means automatically parsing visual cues to make decisions.

“The CPU is designed for low-latency single-threaded performance; the GPU is designed for high-throughput massively parallel performance,” Huang said, drawing a contrast. The yin-yang dichotomy makes the pair “a perfect combo,” in his words. “These two processors can work harmoniously because of CUDA [NVIDIA's GPU programming platform],” he added.
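To make that division of labor concrete, here is a minimal sketch of the pairing: the CPU handles the serial control flow, while the GPU runs one lightweight thread per data element. The kernel, buffer size, and launch configuration are arbitrary illustrations, not anything NVIDIA showed at GTC.

```cuda
// Minimal sketch of the CPU/GPU split Huang describes: the CPU (host)
// orchestrates, while the GPU (device) runs thousands of threads in parallel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                      // 1M elements (arbitrary)
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // CPU: low-latency control flow. GPU: high-throughput parallel math.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    printf("kernel finished\n");
    return 0;
}
```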

“In 2010, the number one focus of GTC was high-performance computing (HPC). The supercomputing crowd first adopted [GPU computing] because the problems in front of them were too large to solve otherwise,” Huang reflected. “In 2012, in addition to supercomputing, we saw energy exploration, life sciences, molecular dynamics simulation, genomics ... This year, the topics range from big data analytics to machine learning and computer vision.”

The bottleneck is inter-processor communication. Splitting a big computing job into smaller jobs for parallel processing requires a coordinated attack by all processing cores, which invariably increases chip-to-chip chatter, a detriment to performance. NVIDIA's fix is NVLink, a chip-to-chip communication technology that the company claims outperforms PCIe transfers by 5X to 12X. Huang said, “The software can adopt that interface very easily,” but the proof will have to come from software vendors; in the case of engineering and design, that means the simulation and rendering developers who support GPU acceleration.
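The kind of traffic NVLink targets already exists in CUDA today as peer-to-peer copies between GPUs, which currently travel over PCIe. The sketch below, which assumes a machine with at least two CUDA-capable GPUs and an arbitrary 256 MB buffer, shows the transfer an NVLink-class interconnect would accelerate.

```cuda
// Sketch of the chip-to-chip traffic NVLink targets: copying a buffer
// directly between two GPUs. On 2014 hardware this travels over PCIe;
// the claimed 5X-12X gain would apply to transfers like this one.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count < 2) { printf("this sketch needs two GPUs\n"); return 0; }

    const size_t bytes = 256u * 1024 * 1024;   // 256 MB test buffer
    float *buf0, *buf1;

    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Enable direct peer access where the topology allows it.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    // The GPU-to-GPU copy that a faster interconnect would speed up.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf0);
    cudaSetDevice(1);
    cudaFree(buf1);
    return 0;
}
```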

The company also plans to tackle the bandwidth issue with what it describes as 3D packaging, literally stacking chips on top of one another. “We’re gonna—for the first time—build chips on top of other chips, and pile heterogeneous chips on top of one wafer,” Huang said. This method lets the GPU have a larger memory capacity without expanding its size or demanding additional power, Huang explained.

Both NVLink and 3D packaging are part of Pascal, NVIDIA's next-generation GPU, which will measure one-third the size of a PCIe card. It's “a supercomputer no bigger than the size of two credit cards,” as Huang described it. He expected the dramatic increase in memory and computation speed to fuel what some might call artificial intelligence (AI): “computers that almost appear to think, computers that not only do the same thing every single time but are programmed to learn, programs that get smarter as more data is presented,” in his words.

For mobile computing, NVIDIA is also pushing the boundaries with the release of Jetson TK1, a developer kit capable of 326 GFLOPS. Cheekily, the company is pricing the hardware, with its 192 CUDA cores, at $192. The small form factor is easy to fit into other devices, which led Huang to speculate it would drive “walking robots, driving robots, swimming robots, stationary robots, robot cars of all kinds.” The kit comes preloaded with NVIDIA's VisionWorks, capable of detecting and recognizing basic objects and geometry.
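For reference, the headline figure lines up with simple arithmetic if one assumes a GPU clock of roughly 850 MHz (an assumption for illustration, not a number quoted here): 192 cores × 2 floating-point operations per clock (one fused multiply-add each) × 0.85 GHz ≈ 326 GFLOPS.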

The hardware power increase, at least in theory, is sufficient to accommodate gesture recognition, computer vision, and AI-like robotic behaviors, but it’s difficult to predict how swiftly software developers will be able to incorporate the new features into their code. If and when they do, we may begin to see a generation of software that lets you design and simulate product behaviors with your fingers, arms, and eyes instead of a mouse and a keyboard. If Huang’s prediction comes true, you may even be able to simulate robots autonomously making decisions—before the robots are built.

About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
