October 5, 2017
NVIDIA and its systems partners Dell EMC, Hewlett Packard Enterprise, IBM and Supermicro unveiled more than 10 servers featuring NVIDIA Tesla V100 GPU (graphics processing unit) accelerators, advanced GPUs based on the Volta architecture and built for AI and other compute-intensive workloads.
NVIDIA V100 GPUs, which deliver more than 120 teraflops of deep learning performance per GPU, are designed to meet the computing demands of AI deep learning training and inference, high-performance computing, accelerated analytics and other workloads, the company reports.
Drawing on the AI computing capabilities of NVIDIA’s latest GPUs, Dell EMC, HPE, IBM and Supermicro are bringing a broad range of multi-GPU V100 systems, in various configurations, to the global market.
V100-based systems announced include the following:
- Dell EMC: The PowerEdge R740 supporting up to three V100 GPUs for PCIe, the PowerEdge R740XD supporting up to three V100 GPUs for PCIe and the PowerEdge C4130 supporting up to four V100 GPUs for PCIe or four V100 GPUs for NVIDIA NVLink interconnect technology in an SXM2 form factor;
- HPE: HPE Apollo 6500 supporting up to eight V100 GPUs for PCIe and HPE ProLiant DL380 systems supporting up to three V100 GPUs for PCIe;
- IBM: The next generation of IBM Power Systems servers based on the POWER9 processor will incorporate multiple V100 GPUs and apply the latest generation NVLink interconnect technology; and
- Supermicro: Products supporting the new Volta GPUs include the 7048GR-TR workstation for all-around GPU computing performance, the 4028GR-TXRT, 4028GR-TRT and 4028GR-TR2 servers designed for deep learning applications, and the 1028GQ-TRT server built for applications such as advanced analytics.
V100 GPUs are supported by NVIDIA Volta-optimized software, including CUDA 9.0 and the newly updated deep learning SDK (TensorRT 3, DeepStream SDK and cuDNN 7), as well as all major AI frameworks.
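For readers working with the CUDA stack described above, applications can confirm they are running on a Volta-class part at runtime: the CUDA runtime reports each device's compute capability, and Volta (including Tesla V100) reports 7.0. The following minimal sketch, assuming a machine with the CUDA 9 toolkit and at least one NVIDIA GPU installed, queries that property; it is an illustration, not part of the announcement.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Enumerate CUDA-capable devices visible to the runtime.
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Volta parts such as the Tesla V100 report compute capability 7.0.
        std::printf("GPU %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
        if (prop.major >= 7) {
            std::printf("  Volta-class or newer: Tensor Cores available "
                        "for mixed-precision deep learning.\n");
        }
    }
    return 0;
}
```

Compiled with `nvcc` from the CUDA 9 toolkit, this prints one line per installed GPU; frameworks such as TensorFlow and PyTorch perform an equivalent capability check internally before enabling Volta-specific code paths.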
For more info, visit NVIDIA.
Sources: Press materials received from the company.