
Cerebras Triples Industry-Leading Inference Performance, Sets New Record

Cerebras Inference delivers 2,100 tokens/second for Llama 3.2 70B — 16x the performance of the fastest GPUs and 68x faster than hyperscale clouds

Today, Cerebras Systems, the pioneer in high-performance AI compute, smashed its previous industry record for inference, delivering 2,100 tokens/second on Llama 3.2 70B. This is 16x faster than any known GPU solution and 68x faster than hyperscale clouds, as measured by Artificial Analysis, a third-party benchmarking organization. Moreover, Cerebras Inference serves Llama 70B more than 8x faster than GPUs serve Llama 3B, delivering an aggregate 184x advantage (8x faster on a model 23x larger). By providing Instant Inference for large models, Cerebras is unlocking new AI use cases powered by real-time, higher-quality responses, chain-of-thought reasoning, more interactions, and higher user engagement.

“The world’s fastest AI inference just got faster. It takes graphics processing units an entirely new hardware generation (two to three years) to triple their performance. We just did it in a single software release,” said Andrew Feldman, CEO and co-founder of Cerebras. “Early adopters and AI developers are creating powerful AI use cases that were impossible to build on GPU-based solutions. Cerebras Inference is providing a new compute foundation for the next era of AI innovation.”

From global pharmaceutical giants like GlaxoSmithKline (GSK) to pioneering startups like Audivi, Tavus, Vellum, and LiveKit, Cerebras is eliminating AI application latency with 60x speed-ups:

  • GSK: “With Cerebras’ inference speed, GSK is developing innovative AI applications, such as intelligent research agents, that will fundamentally improve the productivity of our researchers and drug discovery process,” said Kim Branson, SVP of AI and ML, GSK.
  • LiveKit: “When building voice AI, inference is the slowest stage in your pipeline. With Cerebras Inference, it’s now the fastest. A full pass through a pipeline consisting of cloud-based speech-to-text, 70B-parameter inference using Cerebras Inference, and text-to-speech, runs faster than just inference alone on other providers. This is a game changer for developers building voice AI that can respond with human-level speed and accuracy,” said Russ d’Sa, CEO of LiveKit.
  • Audivi AI: “For real-time voice interactions, every millisecond counts in creating a seamless, human-like experience. Cerebras’ fast inference capabilities empower us to deliver instant voice interactions to our customers, driving higher engagement and expected ROI,” said Seth Siegel, CEO of Audivi AI.
  • Tavus: “We migrated from a leading GPU solution to Cerebras and reduced our end-user latency by 75%,” said Hassan Raza, CEO of Tavus.
  • Vellum: “Our customers are blown away with the results! Time to completion on Cerebras is hands down faster than any other inference provider, and I’m excited to see the production applications we’ll power via the Cerebras inference platform,” said Akash Sharma, CEO of Vellum.

Cerebras is gathering the Llama community at Llamapalooza NYC, a developer event that will feature talks from Meta, Hugging Face, LiveKit, Vellum, LaunchDarkly, Val.town, Haize Labs, Crew AI, Cloudflare, South Park Commons, and Slingshot.

Cerebras Inference is powered by the Cerebras CS-3 system and its industry-leading AI processor, the Wafer Scale Engine 3 (WSE-3). Unlike graphics processing units, which force customers to make trade-offs between speed and capacity, the CS-3 delivers best-in-class per-user performance while sustaining high throughput. The massive size of the WSE-3 enables many concurrent users to benefit from blistering speed. With 7,000x more memory bandwidth than the Nvidia H100, the WSE-3 solves generative AI’s fundamental technical challenge: memory bandwidth. Developers can easily access the Cerebras Inference API, which is fully compatible with the OpenAI Chat Completions API, making migration seamless with just a few lines of code.
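For developers weighing a migration, that compatibility means the switch can be as small as pointing an existing OpenAI client at the Cerebras endpoint. The following is a minimal illustrative sketch in Python; the base URL, model identifier, and API-key placeholder are assumptions rather than confirmed values, so consult the Cerebras documentation for the exact ones:

    # Minimal sketch: calling Cerebras Inference through the OpenAI Python SDK.
    # The base_url and model name below are assumptions; check the Cerebras docs.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",  # assumed Cerebras endpoint
        api_key="YOUR_CEREBRAS_API_KEY",        # placeholder credential
    )

    response = client.chat.completions.create(
        model="llama3.2-70b",  # assumed identifier for the Llama 70B model
        messages=[
            {"role": "user", "content": "Explain wafer-scale inference in one sentence."}
        ],
    )
    print(response.choices[0].message.content)

Because the request and response shapes follow the Chat Completions format, existing prompts, streaming handlers, and retry logic would typically carry over unchanged.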

