Furthermore, there was little difference in the A100's performance whether it was used in conjunction with Arm processors, x86 processors, or in Azure instances.
And for edge applications, Nvidia's Orin – admittedly a pre-production sample – was the top performer in five of the six edge AI tests.
Orin is designed for use in robotics, autonomous vehicles and other autonomous systems.
The MLPerf Inference results showed Orin is up to five times faster than the previous generation (Xavier) and up to 2.3 times more energy efficient.
It also outperformed the Qualcomm Snapdragon 865 on the edge benchmarks.
Dave Salvator, Nvidia's senior product manager for AI inference and cloud, pointed to two of the company's advantages. First, a single A100 can run all six MLPerf tests simultaneously, thanks to its ability to be partitioned into as many as seven independent instances. Second, improvements to the software Nvidia provides have boosted performance by around 50% in just over a year.