Developers can now natively use the Elastic vector database to store and search Cohere’s new int8 text embeddings
Elastic (NYSE: ESTC), the company behind Elasticsearch®, today announced that the Elasticsearch open Inference API now supports Cohere’s text embedding models. This includes native Elasticsearch support for efficient int8 embeddings, which optimize performance and reduce memory costs for semantic search across the large datasets common in enterprise scenarios.
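For illustration, the following is a minimal Python sketch of how a developer might register a Cohere embedding endpoint through the Elasticsearch open Inference API and then embed a query with it. The cluster URL, credentials, and the inference id "cohere-int8" are assumptions, and parameter names follow the 8.13-era inference API, so details may differ across versions:

```python
# Sketch: wiring Cohere's Embed v3 model into Elasticsearch via the open
# Inference API. All connection details below are placeholder assumptions.
import requests

ES_URL = "https://localhost:9200"   # assumed local Elasticsearch cluster
AUTH = ("elastic", "changeme")      # placeholder credentials

# Create an inference endpoint backed by Cohere's Embed v3 model, requesting
# compressed int8 embeddings instead of full-precision floats.
resp = requests.put(
    f"{ES_URL}/_inference/text_embedding/cohere-int8",
    json={
        "service": "cohere",
        "service_settings": {
            "api_key": "<COHERE_API_KEY>",     # your Cohere API key
            "model_id": "embed-english-v3.0",
            # Quantized output; some versions spell this value "byte".
            "embedding_type": "int8",
        },
    },
    auth=AUTH,
    verify=False,  # local dev cluster with a self-signed certificate only
)
resp.raise_for_status()

# Generate an int8 embedding for a query string via the new endpoint.
resp = requests.post(
    f"{ES_URL}/_inference/text_embedding/cohere-int8",
    json={"input": "What is semantic search?"},
    auth=AUTH,
    verify=False,
)
print(resp.json())
```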
With this integration, Elasticsearch developers can experience immediate performance gains, including up to 4x memory savings and up to 30% faster search, without impacting search quality.
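The memory figure follows from the quantization itself: storing each vector component as a one-byte int8 instead of a four-byte float32 cuts the raw vector footprint by a factor of four. A back-of-the-envelope check, assuming a hypothetical 1024-dimensional embedding and a corpus of 10 million documents (both figures are illustrative assumptions, not from the announcement):

```python
# Rough memory comparison of float32 vs. int8 vector storage.
DIMS = 1024           # assumed embedding dimension
N_DOCS = 10_000_000   # assumed corpus size

float32_bytes = N_DOCS * DIMS * 4  # 4 bytes per float32 component
int8_bytes = N_DOCS * DIMS * 1     # 1 byte per int8 component

print(f"float32: {float32_bytes / 2**30:.1f} GiB")    # ~38.1 GiB
print(f"int8:    {int8_bytes / 2**30:.1f} GiB")       # ~9.5 GiB
print(f"savings: {float32_bytes / int8_bytes:.0f}x")  # 4x
```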
“We’re excited to collaborate with Elastic to bring state-of-the-art search solutions to enterprises,” said Jaron Waldman, chief product officer at Cohere. “Elasticsearch delivers strong vector retrieval performance on large datasets, and their native support for Cohere’s Embed v3 models with int8 compression helps unlock gains in performance, efficiency, and search quality for enterprise-grade deployments of semantic search and retrieval-augmented generation (RAG).”
“Developers who want to build more intuitive and accurate semantic search experiences for enterprise use cases need to look at Elasticsearch and Cohere,” said Shay Banon, founder & chief technology officer at Elastic. “Innovation is rarely insular, and our work with the great team at Cohere showcases how we bring developers the best of both worlds. The Cohere and Elastic communities now have great models to generate embeddings with support for inference workloads and seamless integration into the leading search and analytics platform that has invested in creating the best vector database.”
Support for Cohere embeddings is available in preview in Elastic 8.13, with general availability planned for an upcoming Elasticsearch release.