Nvidia says its mission is to democratize AI and make it accessible to all businesses. For a price just under six figures, the chipmaker is therefore opening access to its DGX SuperPod machines, which consist of 20 or more DGX systems. Nvidia has joined forces with NetApp to launch its cloud-based Base Command Platform, which will provide access to SuperPods from $90,000 per month starting this summer.
NetApp provides flash storage and manages customer accounts, while Nvidia owns the equipment, which is housed in colocation provider Equinix’s data centers. “The objective is to allow customers to access this powerful supercomputer, the SuperPod, on a simple rental basis, to experience it, to work on it and, from there, to acquire their own SuperPod or to move into large-scale AI, for example in the public cloud,” says Manuvir Das, head of enterprise computing at Nvidia.
“This creates a true hybrid model, where the customer has a single interface to submit jobs and perform all of their AI work. That interface can be used for their own on-premises SuperPod equipment or for GPU instances in the cloud, but it’s the same experience in both cases.” The executive adds that the minimum offered to customers will be three or four DGX A100 machines grouped in a cluster, rather than the full 20 machines.
Certification extended to Arm-based CPUs
Nvidia said customers using Base Command can deploy artificial intelligence workloads on AWS SageMaker, with support for Google Cloud coming soon. For customers who can’t afford a SuperPod, Nvidia is opening up the technology inside it, allowing system makers to “take all of the individual parts that were designed inside the DGX”.
At the Computex event, Asus, Dell Technologies, Gigabyte, QCT and Supermicro announced new systems using BlueField-2 data processing units (DPUs). “BlueField DPUs move infrastructure tasks from the CPU to the DPU, making more server CPU cores available to run applications, increasing server and data center efficiency,” says the company.
“The DPU places a ‘computer in front of the computer’ for each server, providing separate and secure infrastructure provisioning isolated from the server’s application domain. This enables agentless workload isolation, security isolation, storage virtualization, remote management and telemetry on virtualized and bare-metal servers.” Servers using BlueField-2 are expected to appear later this year, and “several” of them will be certified by Nvidia once specifications are formalized.
Finally, Nvidia indicated that its certification will be extended to include Arm-based CPUs, with servers expected to arrive in 2022. “Increasingly, servers are equipped with a CPU, a GPU and a DPU. A lot of the computational work is done on the GPU, while the DPU manages and secures the network, providing security capabilities like firewalls,” says Manuvir Das.