VMware Unveils VMware Private AI to Accelerate a New Generation of Apps
Expands Collaboration with NVIDIA to Build Generative AI Platform Running on VMware Cloud Infrastructure
VMware Explore 2023 — Today at VMware Explore 2023, VMware, Inc. (NYSE: VMW) introduced new Private AI offerings to drive enterprise adoption of generative artificial intelligence and tap into the value of trusted data. Private AI is an architectural approach that balances the business gains from AI with the practical privacy and compliance needs of an organization.
To make Private AI a reality for enterprises and fuel a new wave of AI-enabled applications, VMware announced:
- VMware Private AI Foundation with NVIDIA, extending the companies’ strategic partnership to ready enterprises that run VMware’s cloud infrastructure for the next era of generative AI.
- VMware Private AI Reference Architecture for Open Source to help customers achieve their desired AI outcomes by supporting best-in-class open source software (OSS) technologies today and in the future.
VMware Private AI brings compute capacity and AI models to where enterprise data is created, processed, and consumed, whether that is in a public cloud, an enterprise data center, or at the edge. With these new offerings, VMware is helping customers gain the flexibility and control required to power a new generation of AI-enabled applications that will help dramatically increase worker productivity, ignite transformation across major business functions, and drive economic impact. A McKinsey report estimates generative AI could add up to $4.4 trillion annually to the global economy.1
A multi-cloud environment is the foundation for this new class of AI-powered applications because it makes private yet highly distributed data easier to harness. VMware’s multi-cloud approach provides enterprises with greater choice and flexibility where AI models are built, customized with an enterprise’s private data, and consumed, while still enabling required security and resiliency across any environment.
“The remarkable potential of generative AI cannot be unlocked unless enterprises are able to maintain the privacy of their data and minimize IP risk while training, customizing, and serving their AI models,” said Raghu Raghuram, CEO, VMware. “With VMware Private AI, we are empowering our customers to tap into their trusted data so they can build and run AI models quickly and more securely in their multi-cloud environment.”
Enterprises today face a hard choice when it comes to generative AI. They can take advantage of public AI models to build their generative AI applications, accepting the attendant risks of data exposure and uncertain training sources, or they can attempt a “do-it-yourself” model, a strategy that lacks cost-efficiency and time-to-value. VMware AI Labs developed VMware Private AI specifically to solve this problem.
“AI has traditionally been built and designed by data scientists, for data scientists,” said Chris Wolf, vice president of VMware AI Labs. “With the introduction of these new VMware Private AI offerings, VMware is making the future of AI serve everyone in the enterprise by bringing the choice of compute and AI models closer to the data. Our Private AI approach benefits enterprise use cases ranging from software development and marketing content generation to customer service tasks and pulling insights from legal documents.”
VMware Private AI Foundation with NVIDIA To Help Enterprises Become AI Ready
VMware Private AI Foundation with NVIDIA, comprising a set of integrated AI tools, will empower enterprises to run proven models trained on their private data in a cost-efficient manner and will enable these models to be deployed in data centers, on leading public clouds, and at the edge. VMware Private AI Foundation with NVIDIA will integrate VMware’s Private AI architecture, built on VMware Cloud Foundation, with NVIDIA AI Enterprise software and accelerated computing. The turnkey offering will provide customers with the accelerated computing infrastructure and cloud infrastructure software they need to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization. VMware Private AI Foundation with NVIDIA will be supported by Dell Technologies, Hewlett Packard Enterprise (HPE) and Lenovo. Read the joint VMware and NVIDIA press release here.
An Interconnected and Open Ecosystem Supports Customers’ AI Strategies
VMware Private AI Reference Architecture for Open Source integrates innovative OSS technologies to deliver an open reference architecture for building and serving OSS models on top of VMware Cloud Foundation. At VMware Explore, VMware is showcasing collaborations with leading companies from across the AI value chain:
- Anyscale: VMware is bringing the widely adopted open source Ray unified compute framework to VMware Cloud environments. Ray on VMware Cloud Foundation enables data scientists and MLOps engineers to scale AI and Python workloads on their existing compute footprint instead of defaulting to the public cloud (see the brief sketch following this list).
- Domino Data Lab: VMware, Domino Data Lab and NVIDIA have teamed up to provide a unified analytics, data science, and infrastructure platform that is optimized, validated, and supported, purpose-built for AI/ML deployments in the financial services industry.
- Global Systems Integrators: VMware is working with leading GSIs such as Wipro and HCL to help customers realize the benefits of Private AI by building and delivering turnkey solutions that combine VMware Cloud with AI partner ecosystem solutions.
- Hugging Face: VMware is collaborating with Hugging Face to help launch SafeCoder today at VMware Explore. SafeCoder is a complete commercial code assistant solution built for the enterprise that includes service, software and support. VMware is utilizing SafeCoder internally and publishing a reference architecture with code samples to enable the fastest possible time-to-value for customers when deploying and operating SafeCoder on VMware infrastructure. Read the full launch blog here.
- Intel: VMware vSphere/vSAN 8 and Tanzu are optimized with Intel’s AI software suite to take advantage of the new built-in AI accelerators on the latest 4th Gen Intel® Xeon® Scalable processors.
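The following is a minimal, self-contained sketch of the Ray programming model that the Anyscale collaboration above refers to; it runs locally and does not show how a Ray cluster would be provisioned on VMware Cloud Foundation. The function name and workload are hypothetical placeholders.

```python
import ray

# Start (or connect to) a Ray runtime; with no arguments this launches a local instance.
ray.init()

@ray.remote
def score_batch(batch):
    # Hypothetical stand-in for a model-inference or feature-engineering step.
    return [x * 2 for x in batch]

# Fan the batches out as parallel Ray tasks and gather the results.
batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
results = ray.get([score_batch.remote(b) for b in batches])
print(results)  # [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
```

The same script can scale from a laptop to a multi-node cluster without code changes, which is the portability the collaboration highlights.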
In addition, VMware is announcing a new VMware AI Ready program, which will connect ISVs with tools and resources needed to validate and certify their products on VMware Private AI Reference Architecture. The program will be available to ISVs focused on ML and LLM Ops, data and feature engineering, developer tools for AI, and embedded AI applications. This new program is expected to be live by the end of 2023.
Intelligent Assist Infuses Generative AI Into VMware’s Multi-Cloud Offerings
VMware is introducing Intelligent Assist, a family of generative AI-based solutions trained on VMware’s proprietary data to simplify and automate all aspects of enterprise IT in a multi-cloud era. The Intelligent Assist features will be seamless extensions of the investments enterprises have made in VMware Cross-Cloud Services and will be built upon VMware Private AI. VMware products with Intelligent Assist are expected to include:
- VMware Tanzu with Intelligent Assist (Tech Preview) will address the challenges of multi-cloud visibility and configuration by allowing users to conversationally request and refine changes to their enterprise’s cloud infrastructure.
- Workspace ONE with Intelligent Assist (Tech Preview) will empower users to create high-quality scripts using natural language prompts for a faster and more efficient script-writing experience.
- NSX+ with Intelligent Assist (Tech Preview) will allow security analysts to more quickly and accurately determine the relevance of security findings and effectively remediate threats.
Customer and Partner Quotes
“As we continue transforming Las Vegas into a world class city of the future, we are leveraging generative AI to further our innovation in public safety, citizen engagement, and transportation,” said Michael Sherwood, chief innovation and technology officer, City of Las Vegas. “VMware is a critical component of our multi-cloud infrastructure upon which these generative AI initiatives are built. With growing demands and limited resources, we rely heavily on the real-time data analytics and scalable solutions VMware provides to improve outcomes which have a positive impact on both residents and visitors.”
“As an imaging and IoT company that embraces multi-cloud, one of our most significant generative AI challenges has been ensuring data accuracy, privacy and security when working with large language models,” said Vishal Gupta, chief information and technology officer, Lexmark. “We require Lexmark’s team of data scientists to have the infrastructure to innovate flexibly, while also protecting our proprietary data. Working with VMware means we are confident that our AI initiatives can scale seamlessly and with trust, and our talent can therefore entirely focus on delivering business impact and solving problems for customers.”
“VMware and its customers are some of the most innovative companies out there, and they all want to know how they can infuse AI into their products,” said Robert Nishihara, co-founder and CEO, Anyscale, developer of Ray, the widely adopted open source machine learning framework for Python. “But given the rapid growth of this nascent field, companies are struggling to stay at the forefront of AI while also scaling, productizing, and iterating quickly. Because Ray can run anywhere – on any cloud provider, on-premises, on your laptop – and VMware’s customers run everywhere, it’s a natural collaboration to make it easier for companies to accelerate their business using generative AI.”