In addition to the new GPU for Google Cloud, NVIDIA is also introducing its Accelerator-Optimised VM A2 instance family to the cloud service provider. As per the company’s official statement, the A2 VM instances are capable of delivering different levels of performance that can accelerate workloads across CUDA-powered machine learning training, inference, data analytics, and High-Performance Computing (HPC). As a quick recap, NVIDIA officially announced its A100 Deep Learning supercomputing GPU back in May this year. The GPU is, technically, NVIDIA’s first Ampere-based GPU, and is built on a 7nm process from TSMC. Specs-wise, the GPU houses 40GB of HBM2 memory and, when linked with multiple A100s over NVLink, can sustain interconnect speeds of up to 600GB/s.
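For those who want to try the A2 family once it reaches their Google Cloud project, an instance can be provisioned from the command line. The following is a minimal sketch, assuming the `a2-highgpu-1g` machine type (a single A100), a zone where A2 VMs are offered, and a Deep Learning VM image family (the names here are illustrative placeholders, not an official recommendation):

```shell
# Sketch: provision a single-A100 A2 VM on Google Cloud.
# Assumes gcloud is installed and authenticated, and that the
# a2-highgpu-1g machine type is available in the chosen zone.
gcloud compute instances create my-a100-vm \
    --zone=us-central1-a \
    --machine-type=a2-highgpu-1g \
    --image-family=common-cu110 \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE
```

GPU-attached VMs cannot be live-migrated, hence the `TERMINATE` maintenance policy; larger A2 shapes scale the same pattern up to multiple attached A100s.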
Speaking of performance, the A100 achieves up to 312 TFLOPS in TF32 training, 19.5 TFLOPS in FP64 HPC, and 1248 TOPS in INT8 inference operations. Google Cloud also announced that it would bring support for NVIDIA’s A100 GPU to its Kubernetes Engine, Cloud AI Platform, and other Google Cloud services in the future. (Source: NVIDIA, Google Cloud Blog)