Chipmaker Nvidia raised eyebrows at its annual developer conference on May 11, 2017, with its introduction of the Nvidia GPU Cloud.
Despite the name, Nvidia is not entering the commercial cloud business to compete with Microsoft Azure, Amazon Web Services, or Google Compute Engine.
Instead, Nvidia is leveraging its position as a provider of high-speed, high-powered computing muscle to serve as a sort of cloud-based system integrator for customers who need access to massive computing power on a recurring or temporary basis.
The Nvidia GPU Cloud is an integrated portal in which developers select among computing frameworks – for example, Caffe2, Theano, TensorFlow, Microsoft Cognitive Toolkit, MXNet, or PyTorch. The user then specifies which version of the framework and libraries to include, along with the number and type of GPU instances to provision.
Nvidia GPU Cloud then creates an instance of a high-performance distributed virtual system to solve the user's computing problem. The underlying hardware may be the customer's own, Amazon Web Services, Microsoft Azure, or Nvidia's Saturn V supercomputer.
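To make the workflow concrete, here is a minimal sketch of what such a provisioning request might look like. This is purely illustrative: the class, field names, and backend labels are hypothetical and do not reflect Nvidia's actual API.

```python
from dataclasses import dataclass

# Hypothetical job specification -- names are illustrative, not Nvidia's API.
SUPPORTED_FRAMEWORKS = {"Caffe2", "Theano", "TensorFlow",
                        "Microsoft Cognitive Toolkit", "MXNet", "PyTorch"}
SUPPORTED_BACKENDS = {"on-premises", "aws", "azure", "saturn-v"}


@dataclass
class GpuCloudJob:
    framework: str          # e.g. "TensorFlow"
    framework_version: str  # e.g. "1.1"
    gpu_type: str           # e.g. "P100"
    gpu_count: int          # number of GPU instances to provision
    backend: str            # where the virtual system is instantiated

    def validate(self) -> None:
        """Reject specs outside the supported framework/backend lists."""
        if self.framework not in SUPPORTED_FRAMEWORKS:
            raise ValueError(f"unsupported framework: {self.framework}")
        if self.backend not in SUPPORTED_BACKENDS:
            raise ValueError(f"unsupported backend: {self.backend}")
        if self.gpu_count < 1:
            raise ValueError("need at least one GPU")

    def describe(self) -> str:
        """Human-readable summary of the requested instance."""
        return (f"{self.framework} {self.framework_version} on "
                f"{self.gpu_count}x {self.gpu_type} ({self.backend})")


job = GpuCloudJob("TensorFlow", "1.1", "P100", 8, "aws")
job.validate()
print(job.describe())  # TensorFlow 1.1 on 8x P100 (aws)
```

The point of the sketch is the separation of concerns the article describes: the user states *what* to run (framework, version, GPU count), and the service decides *where* to instantiate it.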
The Saturn V, which Nvidia says is the world’s 28th fastest supercomputer, will be reserved for an elite, approved list of researchers – including Nvidia’s own engineers and scientists.
Nvidia will serve as a broker and demand aggregator for retail cloud services, not as a competitor to them. The GPU Cloud will collect the latest versions of AI software stacks and development frameworks, then deploy them on the specified hardware infrastructure. Nvidia hinted that the list of cloud partners could grow to include players beyond Amazon and Microsoft.