Nvidia's full stack — including the Cuda-X AI and HPC libraries, GPU-accelerated AI frameworks and software development tools such as PGI compilers with OpenACC support — will be available for Arm CPUs in addition to the existing support for x86 and Power.
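For context, OpenACC lets developers mark ordinary C or Fortran loops with compiler directives that the PGI compilers turn into GPU offload code; with the support announced here, the same source could be built unchanged on Arm host CPUs driving Nvidia GPUs. A minimal sketch of what such code looks like (a generic saxpy loop using standard OpenACC syntax, not code taken from Nvidia's stack):

```c
/* Minimal OpenACC example: the "parallel loop" directive asks an
   OpenACC-capable compiler (such as PGI's pgcc with -acc) to offload
   the loop to an attached GPU. The same source also compiles as
   plain C if the directive is ignored. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* saxpy: y = a*x + y, offloaded to the GPU via OpenACC */
    const float a = 3.0f;
    #pragma acc parallel loop copyin(x) copy(y)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);  /* expect 5.0 */
    return 0;
}
```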
The company already uses Arm technology in some of its system-on-chip (SoC) products aimed at the portable gaming, autonomous vehicle, robotics and embedded AI computing markets.
"Supercomputers are the essential instruments of scientific discovery, and achieving exascale supercomputing will dramatically expand the frontier of human knowledge," said Nvidia founder and chief executive Jensen Huang.
Arm chief executive Simon Segars said: "Arm is working with our ecosystem to deliver unprecedented compute performance gains and exascale-class capabilities to Arm-based SoCs.
"Collaborating with Nvidia to bring Cuda acceleration to the Arm architecture is a key milestone for the HPC community, which is already deploying Arm technology to address some of the world's most complex research challenges."
The news was welcomed by several players in the HPC community.
"Both Nvidia and Arm leverage technologies that offer high performance computing customers greater levels of energy efficiency. Nvidia's support for Arm complements our latest developments on the HPE Apollo 70, an Arm-based, purpose-built HPC system, and now, Nvidia GPU-enabled," said HPE vice-president and general manager of HPC and AI, Bill Mannel.
Cray president and chief executive Peter Ungaro said, "We are excited to partner with Nvidia to help realise this vision in our supercomputers by leveraging their Cuda and Cuda-X HPC and AI software stack to the Arm platform and integrating it closely with our Cray system management and programming environment (compilers, libraries and tools) already enabled to support Arm processors across our XC and future Shasta supercomputers."
Riken Centre for Computational Sciences director and Tokyo Institute of Technology professor Satoshi Matsuoka said: "We have been a pioneer in using Nvidia GPUs on large-scale supercomputers for the last decade, including Japan's most powerful ABCI supercomputer. At Riken R-CCS, we are currently developing the next-generation, Arm-based, exascale Fugaku supercomputer and are thrilled to hear that Nvidia's GPU acceleration platform will soon be available for Arm-based systems."
In other news, Nvidia has revealed its DGX SuperPOD supercomputer, said to be the world's 22nd fastest.
Built in three weeks by connecting 96 DGX-2H systems (1,536 V100 Tensor Core GPUs in total) with Mellanox interconnect technology, the 9.4 petaflops machine is designed for training neural networks for self-driving cars.
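As a rough sanity check on those figures (a back-of-envelope sketch using only the numbers quoted above, not Nvidia-published per-GPU benchmarks):

```c
/* Back-of-envelope check of the DGX SuperPOD figures: each DGX-2H
   node carries 16 V100 Tensor Core GPUs, so 96 nodes give the 1,536
   GPUs cited, and 9.4 petaflops spread across that count works out
   to roughly 6 teraflops per GPU on the benchmark behind that figure. */
#include <stdio.h>

int main(void) {
    const int nodes = 96;              /* DGX-2H systems in the pod */
    const int gpus_per_node = 16;      /* V100 GPUs per DGX-2H */
    const double system_pflops = 9.4;  /* quoted system performance */

    int total_gpus = nodes * gpus_per_node;
    double tflops_per_gpu = system_pflops * 1000.0 / total_gpus;

    printf("Total GPUs: %d\n", total_gpus);
    printf("Approx. per-GPU share: %.1f teraflops\n", tflops_per_gpu);
    return 0;
}
```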
Five of the ten fastest supercomputers on the latest (June 2019) Top500 list use Nvidia GPUs, as do eight of the top ten on the Green500 list, which re-ranks the Top500 by energy efficiency, measured in flops per watt.