A100 Workload Benchmark

Benchmark your AI or HPC workload on the new NVIDIA A100 systems with AMD EPYC processors

Modern compute-intensive workloads in AI (Artificial Intelligence) training and inference benefit massively from increased GPU performance. To support these workloads, Aberdeen is offering workload benchmarking sessions on our new NVIDIA HGX-based GPX servers.

These systems pair NVIDIA A100 GPUs with AMD EPYC CPUs, the world’s highest-performing x86 server CPU[i], to minimize bottlenecks between the host CPUs and the GPU accelerators.

The result is faster time to insight and improved ROI. In fact, this pairing provides the world’s fastest memory bandwidth (over 2 TB/s) to run the largest models and datasets.[ii]

Each system is pre-loaded with key AI and ML software tools such as the following (a quick check that the stack sees the GPUs appears after the list):

  • TensorFlow
  • Python
  • PIP
  • CUDA
  • cuDNN
  • PyTorch
  • Keras
  • Jupyter
  • Anaconda
  • NGC Containers
  • R
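
Before starting a full benchmark, it can help to confirm that the pre-loaded frameworks actually see the A100 GPUs. The sketch below is a minimal example of such a check; it assumes PyTorch and TensorFlow are importable from the default Python environment listed above, so adjust it to whichever framework you plan to benchmark with.

  # Minimal sketch: confirm the pre-loaded frameworks can see the A100 GPUs.
  # Assumes PyTorch and TensorFlow are installed in the default Python
  # environment listed above; adjust to the framework you actually benchmark.
  import torch
  import tensorflow as tf

  # PyTorch: report each CUDA device and its name (e.g. an A100-SXM4 variant).
  print("PyTorch CUDA available:", torch.cuda.is_available())
  for i in range(torch.cuda.device_count()):
      print(f"  cuda:{i} ->", torch.cuda.get_device_name(i))

  # TensorFlow: enumerate the GPUs visible to the runtime.
  gpus = tf.config.list_physical_devices("GPU")
  print("TensorFlow GPUs:", [gpu.name for gpu in gpus])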

Ideal Workloads:

  • Natural Language Processing
  • Machine Learning
  • Deep Learning
  • Predictive Analytics
  • Business Intelligence
  • Image processing
  • Object detection/recognition/classification
  • Computer vision
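
To make the deep learning and computer vision entries above concrete, the sketch below shows a single mixed-precision training step of the kind the A100’s Tensor Cores accelerate. The small convolutional model, batch size, and random data are placeholders, not a reference benchmark; substitute your own model and dataset during a benchmarking session.

  # Minimal sketch: one mixed-precision training step using PyTorch AMP.
  # The model, batch size, and random data are placeholders for your own
  # workload; only the AMP pattern itself is the point.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  device = torch.device("cuda")
  model = nn.Sequential(
      nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
      nn.Linear(64, 10),
  ).to(device)
  optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
  scaler = torch.cuda.amp.GradScaler()        # scales the loss for FP16 stability

  images = torch.randn(64, 3, 224, 224, device=device)   # stand-in image batch
  labels = torch.randint(0, 10, (64,), device=device)

  optimizer.zero_grad()
  with torch.cuda.amp.autocast():             # convolutions/matmuls run in FP16/TF32
      loss = F.cross_entropy(model(images), labels)
  scaler.scale(loss).backward()
  scaler.step(optimizer)
  scaler.update()
  print("loss:", loss.item())

Under autocast, the convolutions and matrix multiplies can run on the A100’s Tensor Cores, which is where most of the speedup for these workloads comes from.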

Our platform offers the performance of leading GPU-accelerated server appliances without the high total cost of ownership (TCO) of deploying thousands of servers, and without the locked-in design and vendor commitment of the NVIDIA® DGX A100.

If you are looking to improve performance for highly parallel workloads like those listed above, consider benchmarking your workload on an NVIDIA A100-powered GPX system and seeing what it can do for you.

[i] MLN-016: Results as of 01/28/2021 using SPECrate®2017_int_base. The AMD EPYC 7763’s estimated score of 798 is higher than the 717 scored by the current highest 2P server with an AMD EPYC 7H12.

Read Full Report on SPEC.org >

OEM published score(s) for EPYC may vary. SPEC®, SPECrate® and SPEC CPU® are registered trademarks of the Standard Performance Evaluation Corporation. See SPEC.org for more information >


[ii] Read More on NVIDIA.com >

Option 1

Lochness QN4-22E2-4NVLINK
  • Dual AMD EPYC 7532 32-Core 2.4GHz CPUs
  • NVIDIA® HGX™ A100 - 4x A100 GPUs - 160GB total GPU memory (4x 40GB)
  • 2TB 3200MHz ECC Memory
  • 2x 3.84TB U.2 NVMe PCIe 4.0 SSDs
  • 4x Mellanox ConnectX-6 VPI 200Gb/s InfiniBand

Option 2

Lochness QN6-42E2-8NVLINK
  • Dual AMD EPYC 7742 64-Core 2.25GHz CPUs
  • NVIDIA® HGX™ A100 - 8x A100 GPUs - 320GB total GPU memory (8x 40GB)
  • 2TB 3200MHz ECC Memory
  • 4x 3.84TB U.2 NVMe PCIe 4.0 SSDs
  • 8x Mellanox ConnectX-6 VPI 200Gb/s InfiniBand
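
Either configuration can be driven as a single multi-GPU node. As an illustrative sketch only, the snippet below uses PyTorch DistributedDataParallel over NCCL (which communicates across the NVLink-connected A100s) to spread a toy workload over all 4 or 8 GPUs. The tiny linear model, the random batch, and the bench.py filename are placeholders; launch it with torchrun and replace the toy model with your own workload.

  # Minimal sketch: spread a benchmark across all A100s in one node with
  # PyTorch DistributedDataParallel. The tiny model and random batch are
  # placeholders for your own workload.
  # Launch (if saved as bench.py):
  #   torchrun --nproc_per_node=4 bench.py    # use 8 on the 8-GPU system
  import os
  import torch
  import torch.distributed as dist
  import torch.nn as nn
  from torch.nn.parallel import DistributedDataParallel as DDP

  dist.init_process_group("nccl")              # NCCL handles GPU-to-GPU traffic
  local_rank = int(os.environ["LOCAL_RANK"])   # set per process by torchrun
  torch.cuda.set_device(local_rank)

  model = DDP(nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
  optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

  x = torch.randn(32, 1024, device="cuda")     # stand-in batch per GPU
  loss = model(x).square().mean()
  loss.backward()                              # gradients all-reduced across GPUs
  optimizer.step()

  if dist.get_rank() == 0:
      print("world size:", dist.get_world_size(), "loss:", loss.item())
  dist.destroy_process_group()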

Questions, Feedback, or Support

Our Technical Support staff is standing by to help you troubleshoot issues, answer technical questions, locate product manuals, request an RMA, set up an on-site service visit, and much more.