
Tachyum 8 AI Zettaflops Blueprint to Solve OpenAI Capacity Limitation

Tachyum®, creator of Prodigy®, the world’s first Universal Processor, today released a white paper that shows how Tachyum’s customers can build new HPC/AI supercomputer data centers that far exceed not only the performance of existing supercomputers but also the performance targets for next-generation systems.

Built from the ground up for the highest performance and efficiency, Prodigy’s revolutionary new architecture enables supercomputers to be deployed in fully homogeneous environments, simplifying development, deployment and maintenance. The solution is ideally suited for OpenAI, cloud providers such as Microsoft Azure, CoreWeave and Ori, and research facilities that need AI data centers but today lack a system architecture capable of serving all interested customers.

Developed by Tachyum’s world-class systems, solutions and software engineering teams, the Prodigy-enabled supercomputer, commissioned by a U.S. company this year, delivers an unprecedented 50 exaflops of IEEE double-precision (64-bit) floating-point performance and 8 zettaflops of AI training performance for large language models.
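For context on the two headline numbers, the quick arithmetic below computes the ratio between the quoted AI-training and FP64 figures. Reading the gap as the effect of lower-precision AI formats is an inference for illustration, not a statement from the release.

```python
# Ratio between the quoted AI-training and FP64 throughput figures.
fp64_flops = 50e18   # 50 exaflops, IEEE double precision (FP64)
ai_flops = 8e21      # 8 zettaflops of AI-training performance

print(f"AI-to-FP64 throughput ratio: {ai_flops / fp64_flops:.0f}x")
# Prints 160x; attributing the gap to lower-precision AI arithmetic is an
# inference, not something the release states.
```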

For the supercomputer solution referenced in the white paper, Prodigy is deployed in a custom 46RU rack with a liquid-cooled reference design. Each rack supports 33 four-socket 1U servers, for a total of 132 Prodigy processors per rack. The racks have a modular architecture and can be combined into a two-rack cabinet to optimize floor space.
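As a rough illustration of the rack-level arithmetic above, this minimal sketch derives the processor counts per rack and per two-rack cabinet from the figures quoted in the release. The total rack count of the 50 EF / 8 ZF system is not given here, so TOTAL_RACKS and the implied per-processor rate it produces are purely hypothetical placeholders.

```python
# Rack-level arithmetic from the figures quoted in this release.
SERVERS_PER_RACK = 33      # 1U four-socket servers per 46RU rack
SOCKETS_PER_SERVER = 4     # Prodigy processors per server
RACKS_PER_CABINET = 2      # two racks combine into one cabinet

procs_per_rack = SERVERS_PER_RACK * SOCKETS_PER_SERVER   # 132, as stated
procs_per_cabinet = procs_per_rack * RACKS_PER_CABINET   # 264

print(f"Prodigy processors per rack:    {procs_per_rack}")
print(f"Prodigy processors per cabinet: {procs_per_cabinet}")

# Hypothetical only: the release does not state the system's rack count.
TOTAL_RACKS = 1000                      # assumption for illustration
system_fp64_exaflops = 50.0             # quoted FP64 figure
per_proc_fp64_tflops = system_fp64_exaflops * 1e6 / (TOTAL_RACKS * procs_per_rack)
print(f"Implied FP64 per processor (if {TOTAL_RACKS} racks): "
      f"{per_proc_fp64_tflops:.0f} TFLOPS")
```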

Tachyum’s HPC/AI software stack provides a complete software environment for Prodigy-family HPC/AI deployments, delivering full support for all aspects of HPC/AI clusters, from low-level firmware to complete HPC/AI applications, and incorporating leading-edge software environments for networking and storage. Tachyum’s software team has already integrated the HPL LINPACK package and other HPC software, with AI software expected to run on the Prodigy FPGA prototype soon.

“After our announcement of the purchase order we received this year, we attracted a lot of attention from other interested parties, several of them large organizations, looking to build a similar scale system for their AI applications and workloads,” said Dr. Radoslav Danilak, founder and CEO of Tachyum. “Prodigy’s system architecture fits well into a wide range of deployments, including those that need data center scale-out once the infrastructure for it is already in place. The scale of machines enabled by this new HPC/AI supercomputer data center likely will determine who will win the fight for compute and AI supremacy in the world.”

Prodigy provides the high performance required for both cloud and HPC/AI workloads within a single architecture. As a Universal Processor offering utility for all workloads, Prodigy-powered data center servers can seamlessly and dynamically switch between computational domains. By eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization, Prodigy significantly reduces CAPEX and OPEX while delivering unprecedented data center performance, power, and economics. Prodigy integrates 192 high-performance custom-designed 64-bit compute cores to deliver up to 4x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest-performing GPU for HPC, and up to 6x for AI applications.

Tachyum’s latest white paper follows previous releases detailing how 4-bit Tachyum AI (TAI) and 2-bit-effective-per-weight (TAI2) formats can be used for large language model (LLM) quantization without accuracy degradation, reducing the cost of LLMs by up to 100x and bringing them to the mainstream.
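To make the quantization arithmetic concrete, the sketch below compares the weight-storage footprint of a hypothetical 70B-parameter model at 16-bit, 4-bit, and 2-bit effective bits per weight. The model size and the resulting ratios are illustrative assumptions only; the 100x cost-reduction figure above covers more than weight memory alone.

```python
# Weight-storage arithmetic for different bits-per-weight encodings.
# The 70B-parameter model is a hypothetical example, not from the release.
PARAMS = 70e9        # illustrative LLM parameter count
BASELINE_BITS = 16   # FP16 reference encoding

def weight_gigabytes(params: float, bits_per_weight: float) -> float:
    """Gigabytes needed to store `params` weights at `bits_per_weight` each."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("FP16 baseline", 16),
                   ("4-bit (TAI-like)", 4),
                   ("2-bit effective (TAI2-like)", 2)]:
    gb = weight_gigabytes(PARAMS, bits)
    shrink = BASELINE_BITS / bits
    print(f"{name:28s}: {gb:6.1f} GB of weights ({shrink:.0f}x vs. FP16)")
```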

Those interested in reading the “Tachyum 50EF/8ZF Datacenter Can Solve OpenAI and Other Problems” white paper can download a copy at https://www.tachyum.com/resources/whitepapers/2023/12/12/tachyum-prodigy-universal-processor-enabling-50-ef—8-ai-zf-supercomputers-in-2025/.

