NVIDIA DGX Cloud Aims to Supercharge Generative AI Training

Tool launched to supercharge generative AI training, company says.

DGX Cloud is an AI supercomputing service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for generative AI. Image courtesy of NVIDIA.


NVIDIA DGX Cloud, which delivers tools that can help turn nearly any company into an AI company, is now broadly available, with thousands of NVIDIA GPUs online on Oracle Cloud Infrastructure, as well as on NVIDIA infrastructure located in the U.S. and U.K.

Unveiled at NVIDIA’s GTC conference in March, DGX Cloud is an AI supercomputing service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for generative AI and other groundbreaking applications.

Generative AI could add more than $4 trillion to the economy annually, turning business knowledge across many of the world’s industries into next-generation AI applications, according to recent estimates by global management consultancy McKinsey.

Nearly every industry can benefit from generative AI. Software companies are using it to develop AI-powered features and applications, while others are using DGX Cloud to build AI factories and digital twins of assets.

Dedicated AI Supercomputing

DGX Cloud instances provide dedicated infrastructure that enterprises rent on a monthly basis, ensuring customers can quickly develop large, multi-node training workloads without waiting for accelerated computing resources.

This approach to AI supercomputing removes the complexity of acquiring, deploying and managing on-premises infrastructure. By pairing NVIDIA DGX AI supercomputing with NVIDIA AI Enterprise software, DGX Cloud makes it possible for businesses everywhere to access their own AI supercomputer through a web browser.

AI Supercomputing and Software in a Browser

Each instance of DGX Cloud features eight NVIDIA 80GB Tensor Core GPUs for 640GB of GPU memory per node. A high-performance, low-latency fabric ensures workloads can scale across clusters of interconnected systems, allowing multiple instances to act as one massive GPU. High-performance storage is integrated into DGX Cloud to provide a complete solution.
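To illustrate the multi-node scaling described above, a training job that spans several eight-GPU nodes is typically started with a standard distributed launcher. The sketch below uses PyTorch's `torchrun`; the node count, hostname, port, and script name are illustrative assumptions, not details from the article or from DGX Cloud documentation.

```shell
# Hypothetical launch across 2 interconnected 8-GPU nodes (16 workers total).
# Run this command on node 0 (which also hosts the rendezvous endpoint):
torchrun --nnodes=2 --nproc_per_node=8 \
         --rdzv_backend=c10d --rdzv_endpoint=node0.example.com:29500 \
         train.py

# Run the identical command on node 1. The c10d rendezvous joins the
# processes on both nodes into a single process group, so the low-latency
# fabric lets the 16 GPUs cooperate on one training run.
```

Each worker process then reads its rank and world size from the environment that `torchrun` sets up, which is what allows multiple instances to act as one large pool of GPU memory.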

Enterprises manage and monitor DGX Cloud training workloads using NVIDIA Base Command Platform software. The platform provides a connected user experience across DGX Cloud and on-premises NVIDIA DGX supercomputers, so enterprises can combine resources when needed.

Learn more about how to get started with DGX Cloud.

Sources: Press materials received from the company and additional information gleaned from the company’s website.



About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
