Anthropic’s Claude 3 Model Now Available in the Tabnine AI Code Assistant Through Amazon Bedrock
Tabnine, the originators of the AI code assistant category, today announced that Anthropic’s Claude 3 model is now available as one of the large language models (LLMs) that power Tabnine’s AI-enabled software development tools, accessed via the API directly from Amazon Bedrock.
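For context, the sketch below shows what a direct request to Claude 3 through Amazon Bedrock looks like. It illustrates the generic Bedrock Runtime API rather than Tabnine’s internal implementation, and the model ID, region, and prompt are placeholder assumptions; check the Bedrock console for the models enabled in a given account.

```python
# Illustrative only: a direct single-turn call to Claude 3 on Amazon Bedrock
# using boto3. Not Tabnine's code; model ID and region are assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": "Write a docstring for a function that reverses a string."}
        ],
    }),
)

# The response body is a stream of JSON; the generated text sits in content[0].
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```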
The collaboration between Tabnine and Anthropic, a leading AI safety and research company, will boost engineering velocity, code quality, and developer happiness. Tabnine has built a specialized AI platform that supports code generation, code explanations, automated generation of documentation and tests, code quality, and more. Tabnine’s deep expertise in leveraging and optimizing LLMs for development tasks, combined with the Claude 3 model, delivers one of the highest-performing code assistants available.
“This integration with Anthropic will help developers deliver better code, faster, and that’s what Tabnine, as creator of the AI code assistant category, has always tried to make possible. Our technology is the furthest along in this category, and we’re able to unleash even more innovation by working with innovators such as Anthropic,” said Brandon Jung, vice president, ecosystem and business development at Tabnine. “We’re making it as simple as possible for enterprises to benefit from new models as they emerge, safely and securely, so development teams can have the best in generative AI at their fingertips.”
Benefits include:
- Optimized Development Tools: Access the full capabilities of Tabnine with Claude 3, including code generation, code explanations, documentation generation, AI-created tests, and more.
- AI Recommendations Tailored to Each Team: Tabnine’s users get accurate and development team-specific AI interactions and recommendations. This is delivered by combining context from locally available data in integrated development environments (IDEs) with Tabnine’s understanding of the full codebase of a user’s company.
- Enhanced Flexibility: Instantly select and switch models to best fit project requirements without having to change AI tools. Tabnine is designed specifically for software development use cases and tailors its interaction with each LLM to deliver the highest-quality AI assistance and recommendations.
“We’re excited to make the Claude 3 model family available to developers using Tabnine as a way to power their AI code assistant solution,” said Jamie Neuwirth, revenue leader at Anthropic. “The Claude 3 models are designed to offer industry-leading options across intelligence and speed, all while prioritizing AI safety. We look forward to seeing the innovative solutions that Tabnine users build and put into production.”
Earlier in May, Tabnine launched the switchable models capability for Tabnine Chat, which lets users switch the model that powers Tabnine Chat in real time. To ensure that each LLM performs at its best, Tabnine refines the prompts and the context provided to each model so that it returns the best possible answers to a user’s software development queries. This combination of Tabnine’s engineering and the performance of the underlying model maximizes the value users get from each LLM. With today’s announcement, Tabnine is not only expanding the number of state-of-the-art models available in Tabnine Chat, but also helping users future-proof their AI investments by avoiding lock-in to any single underlying LLM.
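As a rough illustration of what model switching can look like at the API level (a hypothetical sketch, not Tabnine’s implementation), the Claude 3 family on Bedrock shares the same request shape, so swapping the underlying model is a one-line change of the model identifier; the model IDs below are the publicly documented Bedrock identifiers at the time of writing.

```python
# Hypothetical sketch: the same request body works for any Claude 3 variant on
# Bedrock, so switching models means changing only the modelId.
import json

import boto3

CLAUDE_3_MODELS = {
    "haiku": "anthropic.claude-3-haiku-20240307-v1:0",
    "sonnet": "anthropic.claude-3-sonnet-20240229-v1:0",
    "opus": "anthropic.claude-3-opus-20240229-v1:0",
}

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the chosen Claude 3 model on Bedrock."""
    response = bedrock.invoke_model(
        modelId=CLAUDE_3_MODELS[model],
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]


# Switching models is just a different key; the calling code is unchanged.
print(ask("haiku", "Write a unit test for a function that reverses a string."))
```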
By working with Anthropic, Tabnine further demonstrates its commitment to bringing new, state-of-the-art models to Tabnine Chat as soon as they become available in the market.
To help users choose an underlying model, Tabnine provides transparency into the behaviors and characteristics (e.g., security and performance) of each available model, so teams can decide which is best for each unique use case.