Leading tech companies including Intel, Google, Microsoft and Meta are forming a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of components that link AI accelerator chips in data centers.
Announced on Thursday, the UALink Promoter Group, whose members also include AMD (but not Arm), Hewlett Packard Enterprise, Broadcom, and Cisco, is proposing a new industry standard for connecting the AI accelerator chips that are being installed in a growing number of servers. Broadly defined, an AI accelerator is any chip, from GPUs to custom-designed solutions, that speeds up the training, fine-tuning, and execution of AI models.
“The industry needs open standards that allow us to move forward quickly, in an open format that lets multiple companies add value across the ecosystem,” Forrest Norrod, general manager of data center solutions at AMD, said during a press conference on Wednesday. “The industry needs standards that allow us to innovate at a rapid pace without being locked into a single company.”
Version 1 of the proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators (GPUs only) in a single computing “pod.” (The group defines a pod as one or more racks within a server.) Based on “open standards” such as AMD's Infinity Fabric, UALink 1.0 will enable direct loads and stores between memory connected to AI accelerators, improving speeds while reducing data transfer latency compared to existing interconnect specifications, according to the UALink Promoter Group.
Image credit: UALink Promoter Group
The group said it will establish a consortium, the UALink Consortium, in the third quarter of 2024 to oversee future development of the UALink specification. UALink 1.0 will be available to companies that join the consortium around the same time, with a higher-bandwidth updated specification, UALink 1.1, scheduled for release in the fourth quarter of 2024.
The first UALink products are expected to be released “within the next few years,” Norrod said.
Notably absent from the group's membership list is Nvidia, the largest maker of AI accelerators, with an estimated 80% to 95% of the market. Nvidia declined to comment for this story, but it's clear why the chipmaker isn't committed to UALink.
First, Nvidia already offers its own proprietary interconnect technology, NVLink, for connecting GPUs inside data center servers, and the company probably wouldn't be too keen on supporting a spec based on rival technologies.
Add to that the fact that Nvidia is operating from an extremely powerful and influential position.
In Nvidia's most recent fiscal quarter (Q1 2025), its data center revenue, which includes sales of its AI chips, grew more than 400% year over year. If Nvidia continues on its current trajectory, it is on track to overtake Apple as the world's second-most valuable company this year.
So, simply put, Nvidia doesn't have to cooperate if it doesn't want to.
As for Amazon Web Services (AWS), the only public cloud giant not contributing to UALink, it may be biding its time as it continues to build out its various in-house accelerator hardware efforts. It's also possible that AWS, which dominates the cloud services market, doesn't see much strategic sense in taking on Nvidia, which supplies many of the GPUs it offers to its customers.
AWS did not respond to TechCrunch's request for comment.
Indeed, outside of AMD and Intel, the biggest beneficiaries of UALink appear to be Microsoft, Meta, and Google, which have collectively poured billions of dollars into Nvidia GPUs to power their clouds and train their ever-growing AI models. All are looking to move away from a vendor that has a worryingly dominant hold on the AI hardware ecosystem.
Google has its own custom chips for training and running AI models, its TPUs, as well as a custom CPU, Axion; Amazon has multiple AI chip families; Microsoft entered the fray last year with Maia and Cobalt; and Meta is improving its own accelerator lineup.
Meanwhile, Microsoft and its close collaborator OpenAI are reportedly planning to spend at least $100 billion on supercomputers for training AI models, outfitted with future versions of the Cobalt and Maia chips. Something will be needed to link those chips together, and it may well be UALink.