NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure

The NVIDIA AI foundry service uses three elements—NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services. Businesses can deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications. Image courtesy of NVIDIA.


NVIDIA has introduced an artificial intelligence foundry service for the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models, the company reports. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

“Enterprises need custom models to perform specialized skills trained on the proprietary DNA of their company—their data,” says Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s AI foundry service combines our generative AI model technologies, LLM training expertise and giant-scale AI factory. We built this in Microsoft Azure so enterprises worldwide can connect their custom model with Microsoft’s world-leading cloud services.”

“Our partnership with NVIDIA spans every layer of the Copilot stack—from silicon to software—as we innovate together for this new age of AI,” says Satya Nadella, chairman and CEO of Microsoft. “With NVIDIA’s generative AI foundry service on Microsoft Azure, we’re providing new capabilities for enterprises and startups to build and deploy AI applications on our cloud.”

NVIDIA’s AI foundry service can be used to customize models for generative AI-powered applications across industries, including enterprise software. Once ready to deploy, enterprises can use a technique called retrieval-augmented generation (RAG) to connect their models with their enterprise data and access new insights.
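To make the RAG step concrete, the minimal Python sketch below retrieves the enterprise documents most similar to a query and prepends them to the prompt before calling a model. The embed and generate callables are hypothetical stand-ins for whatever embedding and text-generation endpoints a given deployment exposes; nothing here is a documented NVIDIA or Azure API.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# `embed` and `generate` are hypothetical stand-ins for an embedding model
# and an LLM endpoint; they are not NVIDIA or Azure APIs.
from typing import Callable, List
import numpy as np

def build_index(docs: List[str], embed: Callable[[str], np.ndarray]) -> np.ndarray:
    """Embed each enterprise document once and stack the vectors into an index."""
    return np.vstack([embed(d) for d in docs])

def retrieve(query: str, docs: List[str], index: np.ndarray,
             embed: Callable[[str], np.ndarray], k: int = 3) -> List[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    q = embed(query)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str, docs: List[str], index: np.ndarray,
           embed: Callable[[str], np.ndarray],
           generate: Callable[[str], str]) -> str:
    """Prepend the retrieved context to the prompt before calling the model."""
    context = "\n\n".join(retrieve(query, docs, index, embed))
    prompt = f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {query}"
    return generate(prompt)
```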

Curated, Optimized Models

Customers using the NVIDIA foundry service can pick from several NVIDIA AI Foundation models, including a family of NVIDIA Nemotron-3 8B models hosted in the Azure AI model catalog. Developers can also access the Nemotron-3 8B models on the NVIDIA NGC catalog, as well as community models such as Meta's Llama 2, optimized by NVIDIA for accelerated computing and also coming soon to the Azure AI model catalog.

The Nemotron-3 8B family, with 8 billion parameters, includes versions tuned for different use cases and offers multilingual capabilities for building custom enterprise generative AI applications.
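As a rough illustration of how an application might call one of these hosted models, the sketch below posts a chat-style request to a deployed endpoint. The endpoint URL, header names, and payload schema are assumptions for illustration only; the actual catalog deployment defines its own interface and credentials.

```python
# Hypothetical call to a hosted chat-completion endpoint for a deployed model.
# The URL, headers, and payload schema are assumptions, not a documented
# NVIDIA or Azure API; set MODEL_ENDPOINT and MODEL_API_KEY for your deployment.
import os
import requests

ENDPOINT = os.environ.get("MODEL_ENDPOINT", "https://example.invalid/v1/chat/completions")
API_KEY = os.environ.get("MODEL_API_KEY", "")

payload = {
    "messages": [
        {"role": "system", "content": "You are a multilingual enterprise assistant."},
        {"role": "user", "content": "Summarize this report in two sentences: ..."},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```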

NVIDIA DGX Cloud Now Available 

NVIDIA DGX Cloud AI supercomputing is available today on Azure Marketplace. It offers instances that customers can rent and scale to thousands of NVIDIA Tensor Core GPUs, and it comes with NVIDIA AI Enterprise software, including NeMo, to speed LLM customization.
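As a loose illustration of the kind of parameter-efficient customization such tooling performs, the sketch below attaches LoRA adapters to a small open model with Hugging Face PEFT and runs a brief training pass on placeholder text. This is a generic stand-in for LLM customization, not NVIDIA NeMo's API; the base model name and dataset are placeholders.

```python
# Illustrative LoRA fine-tuning sketch using Hugging Face PEFT as a generic
# stand-in for LLM customization; this is not NVIDIA NeMo's API.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "gpt2"  # placeholder; swap in the base model you are customizing
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters so only a small set of weights is trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Tiny in-memory dataset standing in for proprietary enterprise text.
texts = ["Example proprietary document one.", "Example proprietary document two."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda x: tok(x["text"], truncation=True, max_length=128),
    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```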

About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
