
Pinecone working with AWS to solve GenAI hallucination challenges

Integration with Amazon Bedrock can help enterprises overcome the greatest challenge in bringing reliable GenAI applications to market

Pinecone, the vector database company providing long-term memory for artificial intelligence (AI), announced an integration with Amazon Bedrock, a fully managed service from Amazon Web Services (AWS) for building GenAI applications. The announcement means customers can now drastically reduce hallucinations and accelerate the go-to-market of Generative AI (GenAI) applications such as chatbots, assistants, and agents.

The Pinecone vector database is a key component of the AI tech stack, helping companies address hallucinations, one of the biggest challenges in deploying GenAI solutions. It lets them store, search, and retrieve the most relevant and up-to-date information from company data and send that context to Large Language Models (LLMs) with every query. This workflow is called Retrieval Augmented Generation (RAG), and with Pinecone it helps search and GenAI applications deliver relevant, accurate, and fast responses to end users.
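To illustrate the pattern, here is a minimal RAG sketch in Python. The embedding, retrieval, and generation helpers are hypothetical placeholders rather than the actual Pinecone or Amazon Bedrock SDK calls; it only shows the shape of the workflow described above.

```python
# Minimal RAG sketch. All helpers below are hypothetical placeholders,
# not the Pinecone or Amazon Bedrock SDKs.
from typing import List


def embed_query(text: str) -> List[float]:
    """Placeholder: convert the user's question into a vector embedding."""
    raise NotImplementedError


def vector_search(query_vector: List[float], top_k: int = 5) -> List[str]:
    """Placeholder: fetch the top-k most relevant passages from the vector database."""
    raise NotImplementedError


def call_llm(prompt: str) -> str:
    """Placeholder: send the assembled prompt to the chosen LLM."""
    raise NotImplementedError


def answer(question: str) -> str:
    # 1. Embed the user's question.
    query_vector = embed_query(question)
    # 2. Retrieve the most relevant, up-to-date passages from company data.
    passages = vector_search(query_vector, top_k=5)
    # 3. Ground the LLM by sending the retrieved context along with the question.
    context = "\n".join(passages)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```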

Amazon Bedrock is a serverless platform that lets users select and customize the right models for their needs, then integrate and deploy them using popular AWS services such as Amazon SageMaker.

Pinecone’s integration with Amazon Bedrock allows developers to quickly and effortlessly build streamlined, factual GenAI applications that combine Pinecone’s ease of use, performance, cost-efficiency, and scalability with their LLM of choice. Pinecone’s enterprise-grade security and its availability on the AWS Marketplace allow developers in enterprises to bring these GenAI solutions to market significantly faster.

“We’ve already seen a large number of AWS customers adopting Pinecone,” said Edo Liberty, Founder & CEO of Pinecone. “This integration opens the doors to even more developers who need to ship reliable and scalable GenAI applications… yesterday.”

“With generative AI, customers have the ability to reimagine their applications, create entirely new customer experiences, and improve overall productivity,” said Atul Deo, general manager, Amazon Bedrock at AWS. “Latest personalization techniques like Retrieval Augmented Generation (RAG) have the ability to deliver more accurate generative AI responses that make the most of pre-existing knowledge but can also process and consolidate that knowledge to create unique, context-aware answers, instructions, or explanations in human-like language rather than just summarizing the retrieved data. This integration of Amazon Bedrock and Pinecone will help customers streamline their generative AI application development process by helping deliver relevant responses.”

“We have AI applications in AWS and tens of billions of vector embeddings in Pinecone,” said Samee Zahid, Director of Engineering, Chipper Cash. “Connecting the two in a simple, serverless API is a game-changer for our development velocity.”

The integration will be generally available by the end of the 4th quarter for all Amazon Bedrock and Pinecone users. Review the latest blog post for more information on the integration.

