The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world's highest performing elastic data centres for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data centre platform, A100 provides up to 20X higher performance over the prior NVIDIA Volta generation. A100 can efficiently scale up or be partitioned into seven isolated GPU instances with Multi-Instance GPU (MIG), providing a unified platform that enables elastic data centres to dynamically adjust to shifting workload demands.
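The partitioning idea can be illustrated with a toy scheduler. This is only a conceptual sketch, not the real MIG interface (on actual hardware, MIG is configured through nvidia-smi or NVML); the `MigGpu` class and its methods are invented here purely for illustration.

```python
# Toy model of MIG on an A100 80GB: up to seven isolated 10GB instances,
# each running one workload at a time. NOT the real MIG API -- a sketch
# of the partitioning/elasticity idea only.
from dataclasses import dataclass, field

MAX_INSTANCES = 7      # A100 80GB supports up to 7 MIG instances
INSTANCE_MEM_GB = 10   # each instance gets a 10GB slice

@dataclass
class MigGpu:
    busy: dict = field(default_factory=dict)  # instance slot -> workload name

    def launch(self, workload: str) -> int:
        """Assign a workload to the first free instance, or raise."""
        for slot in range(MAX_INSTANCES):
            if slot not in self.busy:
                self.busy[slot] = workload
                return slot
        raise RuntimeError("all MIG instances busy")

    def release(self, slot: int) -> None:
        self.busy.pop(slot, None)

gpu = MigGpu()
a = gpu.launch("inference-svc")   # takes slot 0
b = gpu.launch("notebook")        # takes slot 1
gpu.release(a)                    # free slot 0 again
c = gpu.launch("batch-job")       # reuses the freed slot 0
print(c, len(gpu.busy))
```

Because each instance is isolated, a freed slice can immediately be handed to a different tenant or workload, which is the elasticity the paragraph above describes.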
NVIDIA A100 Tensor Core technology supports a broad range of math precisions, providing a single accelerator for every workload. The latest generation A100 80GB doubles GPU memory and debuts the world's fastest memory bandwidth at 2 terabytes per second (TB/s), speeding time to solution for the largest models and most massive datasets.
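One of those precisions, TF32, keeps FP32's 8-bit exponent (so the same dynamic range) but shortens the mantissa to 10 bits. A minimal sketch of that numeric effect, emulated in pure Python by clearing the low 13 of FP32's 23 mantissa bits (simple truncation is assumed here for illustration; real hardware rounds):

```python
# Emulate the TF32 format: FP32 range (8-bit exponent), 10-bit mantissa.
# We model it by zeroing the low 13 mantissa bits of an IEEE-754 float32.
import struct

def tf32_truncate(x: float) -> float:
    # Reinterpret as 32-bit float, clear the 13 low mantissa bits, reinterpret back.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_truncate(3.141592653589793))  # mantissa shortened to 10 bits
print(tf32_truncate(1e38))               # still representable: FP32 range is kept
```

The second call is the point: 1e38 overflows FP16 but fits TF32, which is why TF32 can often stand in for FP32 without the range management FP16 training needs.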
A100 is part of the complete NVIDIA data centre solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC catalogue. Representing the most powerful end-to-end AI and HPC platform for data centres, it allows researchers to deliver real-world results and deploy solutions into production at scale.
Specification:
FP64: 9.7 TFLOPS
FP64 Tensor Core: 19.5 TFLOPS
FP32: 19.5 TFLOPS
Tensor Float 32 (TF32): 156 TFLOPS
BFLOAT16 Tensor Core: 312 TFLOPS
FP16 Tensor Core: 312 TFLOPS
INT8 Tensor Core: 624 TOPS
GPU Memory: 80GB HBM2e
GPU Memory Bandwidth: 1,935GB/s
Max Thermal Design Power (TDP): 300W
Multi-Instance GPU: Up to 7 MIGs @ 10GB
Form Factor: PCIe
Interconnect:
NVIDIA NVLink Bridge for 2 GPUs: 600GB/s
PCIe Gen4: 64GB/s
Server Options: Partner and NVIDIA Certified Systems with 1-8 GPUs
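To put the two interconnect figures above in perspective, a back-of-envelope calculation of the time to move the GPU's full 80GB of memory over each link (decimal gigabytes and ideal sustained bandwidth assumed, so real transfers will be somewhat slower):

```python
# Compare transfer time for 80GB of data over the two interconnects
# listed in the spec: NVLink bridge (600GB/s) vs PCIe Gen4 (64GB/s).
MEM_GB = 80
NVLINK_GBPS = 600   # NVLink bridge for 2 GPUs
PCIE4_GBPS = 64     # PCIe Gen4

t_nvlink = MEM_GB / NVLINK_GBPS   # seconds over NVLink
t_pcie = MEM_GB / PCIE4_GBPS      # seconds over PCIe Gen4
print(f"NVLink: {t_nvlink:.3f} s, PCIe Gen4: {t_pcie:.3f} s, "
      f"ratio: {NVLINK_GBPS / PCIE4_GBPS:.1f}x")
```

The roughly 9x gap is why the NVLink bridge matters for two-GPU workloads that exchange large activations or model shards frequently.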