
Pinecone announced the launch of its Serverless Vector Database

Pinecone serverless is now generally available for mission-critical workloads 

More than 20,000 organizations built with Pinecone serverless during its public preview

Pinecone, a leading vector database company, today launched Pinecone serverless into general availability. The state-of-the-art vector database designed to make generative artificial intelligence (AI) accurate, fast, and scalable is now ready for mission-critical workloads.

“Businesses are already building delightful and knowledgeable AI products with Pinecone,” said Edo Liberty, founder and CEO of Pinecone. “After making these products work in the lab, developers want to launch these products to thousands or millions of users. This makes considerations like operating costs, performance at scale, high availability and support, and security matter a lot. This is where Pinecone serverless shines, and why it’s the most trusted vector database for production applications.”

Confidently moving forward with AI

Pinecone serverless has been battle-tested through rapid adoption over the course of four months in public preview. More than 20,000 organizations have used it to date. Large, critical workloads with billions of vectors are also running with select customers, making up part of the collective 12 billion embeddings already indexed on the new architecture. Serverless users, large and small, include organizations like Gong, Help Scout, New Relic, Notion, TaskUs, and You.com. With Pinecone serverless, these organizations are eliminating significant operational overhead, reducing costs by up to 50x, and building more accurate AI applications at scale.

Making AI knowledgeable

Pinecone research shows that the most effective way to improve the quality of generative AI results and reduce hallucinations – unintended, false, or misleading information presented as fact – is to use a vector database for Retrieval-Augmented Generation (RAG). A detailed study from AI consulting firm Prolego supports the finding that RAG significantly improves the performance of large language models (LLMs). For example, compared with the well-known GPT-4 LLM alone, GPT-4 with RAG and sufficient data cuts the frequency of unhelpful answers in half on the “faithfulness” metric, even for information the LLM was trained on. Moreover, the more data that becomes available for context retrieval, the more accurate the results become.
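
To make the pattern concrete, the sketch below shows the basic RAG loop with the Pinecone Python client and an OpenAI embedding model: embed the question, retrieve the most relevant records from the index, and pass them to the LLM as context. The index name and metadata field are hypothetical placeholders, not part of the announcement.

```python
# Minimal RAG sketch: retrieve supporting context from Pinecone, then ground the LLM's answer in it.
# The index name ("docs-index") and the "text" metadata field are hypothetical placeholders.
from pinecone import Pinecone
from openai import OpenAI

pc = Pinecone(api_key="PINECONE_API_KEY")
index = pc.Index("docs-index")
oai = OpenAI()

question = "How do I rotate my API keys?"

# 1. Embed the question.
q_vec = oai.embeddings.create(model="text-embedding-3-small", input=question).data[0].embedding

# 2. Retrieve the most relevant chunks from the vector database.
results = index.query(vector=q_vec, top_k=5, include_metadata=True)
context = "\n\n".join(match.metadata["text"] for match in results.matches)

# 3. Ask the LLM to answer using only the retrieved context.
answer = oai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```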

Making AI easy and affordable with the best database architecture

Pinecone serverless is architected from the ground up to provide low-latency, always-fresh vector search over unrestricted data sizes at low cost, making generative AI easily accessible.

The separation of reads from writes, and of storage from compute, in Pinecone serverless significantly reduces costs for all types and sizes of workloads. First-of-their-kind indexing and retrieval algorithms enable fast and memory-efficient vector search from object storage without sacrificing retrieval quality.
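
For readers who want to see what this looks like in practice, creating and writing to a serverless index with the Pinecone Python client is roughly as follows; the index name, dimension, cloud, and region below are placeholder choices, not recommendations.

```python
# Rough sketch of creating a serverless index; name, dimension, cloud, and region are placeholders.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="PINECONE_API_KEY")

pc.create_index(
    name="example-index",
    dimension=1536,                  # must match the embedding model's output size
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)

index = pc.Index("example-index")
index.upsert(vectors=[("doc-1", [0.1] * 1536, {"text": "hello world"})])
```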

Introducing Private Endpoints

Security, privacy, and compliance are paramount for businesses as they fuel artificial intelligence with more and more data. Today, Pinecone is unveiling Private Endpoints in public preview to help ensure customer data handling meets these demands, as well as governance and regulatory requirements.

Private Endpoints support direct and secure data plane connectivity from an organization’s virtual private cloud (VPC) to their Pinecone index over AWS PrivateLink, an Amazon Web Services (AWS) offering that provides private connectivity between VPCs, supported AWS services, and on-premises networks without exposing traffic to the public Internet.
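
As a rough illustration of the AWS side of that setup, the boto3 call below creates an interface VPC endpoint toward a PrivateLink service; the service name, VPC, subnet, and security group IDs are hypothetical placeholders, and the actual Pinecone service name comes from the Pinecone console or documentation.

```python
# Sketch of provisioning an AWS PrivateLink interface endpoint with boto3.
# All identifiers below are hypothetical placeholders; use the values Pinecone and your AWS account provide.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                                 # your VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE",   # Pinecone-provided service name
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```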

Building with the AI Stack 

To make building AI applications as simple as possible, Pinecone serverless is launching with a growing number of partner integrations. Companies in Pinecone’s recently announced partner program can now let their users seamlessly connect with and use Pinecone directly inside those users’ coding environments. These companies include Anyscale, AWS, Confluent, LangChain, Mistral, Monte Carlo, Nexla, Pulumi, Qwak, Together.ai, Vectorize, and Unstructured. Pinecone is also working with service integrator partners like phData to help joint customers onboard to Pinecone serverless.
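
As one example of what these integrations look like in code, the snippet below uses the langchain-pinecone package to search an existing index through LangChain; the index name and embedding model are placeholder choices.

```python
# Example of the LangChain integration mentioned above; index name and embedding model are placeholders.
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = PineconeVectorStore.from_existing_index("example-index", embedding=embeddings)

# Retrieve the documents most similar to a query.
docs = vectorstore.similarity_search("How does billing work?", k=3)
for doc in docs:
    print(doc.page_content)
```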

Get started with Pinecone serverless for free, today.

Read the launch announcement blog and learn more about Private Endpoints for Pinecone serverless.

Customer Quotes

  • Jacob Eckel, VP, R&D Division Manager, Gong
    “Pinecone serverless isn’t just a cost-cutting move for us; it is a strategic shift towards a more efficient, scalable, and resource-effective solution.”
  • Luis Morales, VP of Engineering, Help Scout
    “At Help Scout, Pinecone’s scalable, serverless architecture is crucial for powering AI innovation and delighting customers. It enables our engineering teams to seamlessly integrate new features, pushing the boundaries of customer support. With Pinecone, we’re setting the pace in a vibrant tech landscape.”
  • Peter Pezaris, Chief Strategy and Design Officer, New Relic
    “With New Relic AI, our generative AI assistant, engineers can use natural language to explore vast amounts of telemetry and access our all-in-one observability platform. By adding Pinecone vector databases for semantic search and RAG to our unified platform, we have enriched the data set our users can draw insights from and introduced new features that help engineers take action on data faster. Pinecone aligns with our vision to democratize data accessibility for all engineers, and we’re excited to uncover more potential with its new serverless architecture.”
  • Akshay Kothari, Co-Founder and COO, Notion
    “Notion is leading the AI productivity revolution. Our launch of a first-to-market AI feature was made possible by Pinecone serverless. Their technology enables our Q&A AI to deliver instant answers to millions of users, sourced from billions of documents. Best of all, our move to their latest architecture has cut our costs by 60%, advancing our mission to make software toolmaking ubiquitous.”
  • Manish Pandya, SVP of Digital Transformation, TaskUs
    “Pinecone has transformed our customer service operations, enabling us to achieve unprecedented levels of efficiency and customer satisfaction. We are prioritizing its serverless architecture to support our diverse portfolio of AI products across multiple regions. With our scale and ambitions, Pinecone is an integral component of our TaskGPT platform.”
  • Bryan McCann, CTO & Co-Founder, You.com
    “No other vector database matches Pinecone’s scalability and production readiness. We are excited to explore how Pinecone serverless will support the growth of our product capabilities.”
