The UALink initiative aims to create an open standard for AI accelerators to communicate more efficiently.
The first UALink specification, version 1.0, will connect up to 1,024 accelerators within a single AI computing pod over a low-latency network.
The specification allows for direct data transfers between the memory attached to accelerators.
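UALink is a hardware interconnect, not a software API, so there is no public programming interface to show yet. Purely as a conceptual illustration, the toy Python sketch below models the two headline ideas from the announcement: a pod capped at 1,024 accelerators, and direct transfers between the memory attached to accelerators without staging data through a host. All class and function names here are invented for illustration.

```python
# Conceptual sketch only: UALink defines a hardware fabric, not a software
# API. This toy model illustrates a pod capped at 1,024 accelerators and a
# direct memory-to-memory transfer between accelerators, with no host hop.

MAX_POD_SIZE = 1024  # pod limit per the UALink 1.0 specification


class Accelerator:
    """Hypothetical accelerator with its own attached memory."""

    def __init__(self, accel_id):
        self.accel_id = accel_id
        self.memory = {}  # attached accelerator memory, keyed by buffer name


class Pod:
    """Hypothetical AI computing pod of UALink-connected accelerators."""

    def __init__(self):
        self.accelerators = []

    def add(self, accel):
        # Enforce the 1,024-accelerator pod limit from the 1.0 spec.
        if len(self.accelerators) >= MAX_POD_SIZE:
            raise ValueError("UALink 1.0 pod is limited to 1,024 accelerators")
        self.accelerators.append(accel)

    @staticmethod
    def direct_transfer(src, dst, buffer_name):
        # Copy a buffer directly from one accelerator's memory to
        # another's, modelling a transfer over the fabric.
        dst.memory[buffer_name] = src.memory[buffer_name]


pod = Pod()
a, b = Accelerator(0), Accelerator(1)
pod.add(a)
pod.add(b)
a.memory["weights"] = [0.1, 0.2, 0.3]
Pod.direct_transfer(a, b, "weights")
print(b.memory["weights"])  # → [0.1, 0.2, 0.3]
```

Again, this is only a mental model of the topology and data path; the real specification operates at the level of memory load/store semantics in hardware.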
“The work being done by the companies in UALink to create an open, high-performance and scalable accelerator fabric is critical for the future of AI,” said Forrest Norrod, AMD executive vice president and general manager of the Data Centre Solutions Group, in a statement.
“Together, we bring extensive experience in creating large-scale AI and high-performance computing solutions that are based on open standards, efficiency, and robust ecosystem support. AMD is committed to contributing our expertise, technologies, and capabilities to the group as well as other open industry efforts to advance all aspects of AI technology and solidify an open AI ecosystem.”
According to Tom's Guide, AMD, Broadcom, Google, Intel, Meta, and Microsoft each develop their own AI accelerators (Broadcom designs chips for Google), Cisco produces networking chips for AI, and HPE builds servers. These companies want to standardise the infrastructure around their chips as much as possible, which is why they formed the UALink Consortium.
UALink was also developed to compete with Nvidia's proprietary NVLink. Since Nvidia already has its own interconnect, it sees no reason to join UALink.
In a statement, the companies said UALink will give OEMs, IT professionals, and system integrators a pathway to integration, flexibility, and scalability in their AI-connected data centres.