
Tech Giants Form UALink Promoter Group to Drive Data Center AI Connectivity

Group Developing Open Industry Standard for a High-Performance, Low-Latency Interconnect for Next-Generation AI Accelerators in Data Centers

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for scale-up AI systems in data centers.

Called Ultra Accelerator Link (UALink), the proposed standard will enable AI accelerators to communicate more effectively, and this initial group will define and establish it as an open industry standard. By building the interconnect on open standards, UALink will give system OEMs, IT professionals and system integrators a pathway to easier integration, greater flexibility and scalability in their AI-connected data centers.

The Promoter Group companies bring extensive experience creating large-scale AI and HPC solutions based on open standards, efficiency and robust ecosystem support.

Driving Scale-Up for AI Workloads

As the demand for AI compute grows, a robust, low-latency and efficient scale-up network that can easily add computing resources to a single instance becomes critical. Creating an open, industry-standard specification for scale-up capabilities will help establish an open, high-performance environment for AI workloads.

This is where UALink and an open industry specification become critical: standardizing the interface for AI, machine learning, HPC and cloud applications across the next generation of AI data centers and implementations. The group will develop a specification defining a high-speed, low-latency interconnect for scale-up communication between accelerators and switches in AI computing pods.

The 1.0 specification will enable the connection of up to 1,024 accelerators within an AI computing pod and allow for direct loads and stores between the memory attached to accelerators, such as GPUs, in the pod. The Promoter Group will establish the UALink Consortium, which is expected to be incorporated in Q3 of 2024. The 1.0 specification is expected to be available in Q3 of 2024 and will be made available to companies that join the Ultra Accelerator Link (UALink) Consortium.
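To make the load/store semantics concrete, the following C sketch models a pod as a set of accelerators whose attached memory is addressable by (accelerator index, offset), so any accelerator can read or write a peer's memory directly rather than staging data through a host. This is a minimal conceptual illustration under assumed names and sizes (pod_t, pod_load, pod_store, MEM_WORDS_PER_ACCEL), not the UALink wire protocol or any vendor API; only the 1,024-accelerator pod limit comes from the announcement.

/*
 * Conceptual sketch only: models the pod-wide direct load/store semantics
 * described above. It is NOT the UALink protocol or a real accelerator API;
 * memory sizes, names and structures are illustrative assumptions.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define POD_ACCELERATORS 1024        /* 1.0 spec: up to 1,024 accelerators per pod */
#define MEM_WORDS_PER_ACCEL 4096     /* illustrative per-accelerator memory size */

/* Memory attached to one accelerator (e.g., a GPU's HBM), modeled as a word array. */
typedef struct {
    uint64_t mem[MEM_WORDS_PER_ACCEL];
} accelerator_t;

/* A pod: the set of accelerators reachable over the scale-up fabric. */
typedef struct {
    accelerator_t *accel;
    int count;
} pod_t;

/* Direct store: one accelerator writes a word into another accelerator's memory. */
static void pod_store(pod_t *pod, int target, uint32_t offset, uint64_t value) {
    pod->accel[target].mem[offset] = value;
}

/* Direct load: one accelerator reads a word from another accelerator's memory. */
static uint64_t pod_load(pod_t *pod, int target, uint32_t offset) {
    return pod->accel[target].mem[offset];
}

int main(void) {
    pod_t pod = { calloc(POD_ACCELERATORS, sizeof(accelerator_t)), POD_ACCELERATORS };

    /* Accelerator 0 writes directly into accelerator 1023's memory ... */
    pod_store(&pod, 1023, 42, 0xC0FFEE);

    /* ... and any peer can load that word back without a host-mediated copy. */
    printf("accel 1023, word 42 = 0x%llx\n",
           (unsigned long long)pod_load(&pod, 1023, 42));

    free(pod.accel);
    return 0;
}

The point of the model is the addressing style: a single pod-wide scheme in which remote accelerator memory is accessed with ordinary load/store operations, which is what distinguishes a scale-up interconnect from message-passing over a scale-out network.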

