CoreWeave Raises $221M Series B Funding

  • Magnetar Capital is leading the equity round, with contributions from NVIDIA, former GitHub CEO Nat Friedman and former Apple executive Daniel Gross
  • The explosion of generative AI technology and LLMs is the driving force behind the demand for the specialized services that CoreWeave offers
  • Funding to support infrastructure expansion for compute-intensive workloads and development of new U.S.-based data centers

CoreWeave (“the Company”), a specialized cloud provider built for large-scale GPU-accelerated workloads, today announced it has secured $221 million in Series B funding. The round was led by Magnetar Capital (“Magnetar”), a leading alternative asset manager, with contributions from NVIDIA, and rounded out by Nat Friedman and Daniel Gross.

The latest funding will be used to further expand CoreWeave’s specialized cloud infrastructure for compute-intensive workloads — including artificial intelligence and machine learning, visual effects and rendering, batch processing and pixel streaming — to meet the explosive demand for generative AI technology. This strategic focus has allowed CoreWeave to offer purpose-built, customized solutions that can outperform larger, more generalized cloud providers. The new capital will also support U.S.-based data center expansion with the opening of two new centers this year, bringing CoreWeave’s total number of North American data centers to five.

“CoreWeave is uniquely positioned to power the seemingly overnight boom in AI technology with our ability to innovate and iterate more quickly than the hyperscalers,” said CoreWeave CEO and co-founder Michael Intrator. “Magnetar’s strong, continued partnership and financial support as lead investor in this Series B round ensures we can maintain that momentum without skipping a beat. Additionally, we’re thrilled to expand our collaboration with the team at NVIDIA. NVIDIA consistently pushes the boundaries of what’s possible in the field of technology, and their vision and guidance will be invaluable as we continue to scale our organization.”

NVIDIA recently released its highest-performance data center GPU, the NVIDIA H100 Tensor Core GPU, along with the NVIDIA HGX H100 platform. CoreWeave announced at the NVIDIA GTC conference in March that its HGX H100 clusters are live and currently serving clients such as Anlatan, the creators of NovelAI. In addition to HGX H100, CoreWeave offers more than 11 NVIDIA GPU SKUs, interconnected with the NVIDIA Quantum InfiniBand in-network computing platform and available to clients on demand and via reserved instance contracts.

Investor Perspectives on $221M Series B Round

“AI has reached an inflection point, and we’re seeing accelerated interest in AI computing infrastructure from startups to major enterprises,” said Manuvir Das, Vice President of Enterprise Computing at NVIDIA. “CoreWeave’s strategy of delivering accelerated computing infrastructure for generative AI, large language models and AI factories will help bring the highest-performance, most energy-efficient computing platform to every industry.”

“With the seemingly limitless boundaries of AI applications and technologies, the demand for compute-intensive hardware and infrastructure is higher than it’s ever been,” said Ernie Rogers, Magnetar’s chief operating officer. “CoreWeave’s innovative, agile and customizable product offering is well-situated to service this demand and the company is consequently experiencing explosive growth to support it. We are proud to collaborate with NVIDIA in supporting CoreWeave’s next phase of growth as it continues to bolster its already strong positioning in the marketplace.”
