The UALink Consortium has released its first GPU interconnect specification.

Now available for public download, the 200G 1.0 specification offers up to 200Gbps throughput per lane, linking up to 1,024 accelerators per pod.


Founded in May 2024 to establish an open industry standard that lets AI accelerators communicate more effectively and efficiently, the UALink Consortium positions UALink as an alternative to Nvidia's proprietary NVLink, which currently supports up to 576 GPUs per pod.

Current consortium members include AMD, Intel, Meta, Hewlett Packard Enterprise, Amazon Web Services (AWS), Apple, Cisco, Google, Lightmatter, Microsoft, and Synopsys. According to the group, UALink offers low-latency, high-bandwidth interconnection at the same raw speed as Ethernet, while its use of a "significantly smaller die area for the link stack" lowers total cost of ownership (TCO).

In December 2024, Synopsys unveiled a UALink IP solution designed to support the newly released specification; it is scheduled to be available in the second half of 2025.

Additionally, photonic computing company Lightmatter, which joined the consortium at the start of 2025, said it would contribute its Passage interconnect to help the consortium standardize advanced interconnect solutions for large numbers of AI accelerators.

“With the release of the UALink 200G 1.0 Specification, the UALink Consortium’s member companies are actively building an open ecosystem for scale-up accelerator connectivity,” said Peter Onufryk, UALink Consortium president. “We are excited to witness the variety of solutions that will soon be entering the market and enabling future AI applications.”
