
Appen launches solution for enterprises to customize LLMs

The new platform capabilities empower businesses to accelerate the development of their AI applications

Appen Limited (ASX: APX), a leading provider of high-quality data for the AI lifecycle, announced the launch of new platform capabilities that will support enterprises in customizing large language models (LLMs).

The solution supports internal teams leveraging generative AI within the enterprise. Through a common, consistent process now available in Appen’s AI Data Platform, a user can take an LLM from use case to production. The steps include:

  • Model selection: Appen’s platform connects directly to any model, enabling you to evaluate existing models, test new models, and conduct comprehensive benchmarking.
  • Data preparation: High quality data is critical to accurate and trustworthy AI. Appen’s annotation platform enables the preparation of datasets for vectorization and Retrieval-Augmented Generation (RAG).
  • Prompt creation: To effectively validate model performance, a set of custom prompts is required for each use case. Appen’s platform enables you to connect with your internal experts or our global crowd to create custom prompts for model evaluation.
  • Model optimization: Appen’s platform streamlines the process of capturing human feedback for model evaluation. Our platform includes templates for human evaluation, A/B testing, model benchmarking and other custom workflows to inspect performance throughout your RAG process.
  • Safety assurance: Appen’s platform and Quality Raters help ensure that your models are safe to deploy. We have detailed workflows and teams to support red teaming that identifies toxicity, brand-safety risks, and potential harms.
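The model-optimization step above centers on capturing human feedback through A/B testing. As a rough illustration only, the sketch below shows what a minimal A/B preference-collection loop might look like; the model functions and the rater response are stand-ins invented for this example, not Appen platform APIs.

```python
import random
from collections import Counter

def model_a(prompt):
    # Stand-in for an existing production model (hypothetical).
    return f"A: {prompt}"

def model_b(prompt):
    # Stand-in for a candidate fine-tuned model (hypothetical).
    return f"B: {prompt}"

def collect_preference(prompt, out_a, out_b):
    # Stand-in for a human rater choosing the better response;
    # in practice this judgment would come from internal subject
    # matter experts or a managed crowd, not random choice.
    return random.choice(["A", "B"])

def ab_test(prompts, seed=0):
    """Run both models over a prompt set and tally rater preferences."""
    random.seed(seed)
    votes = Counter()
    for p in prompts:
        winner = collect_preference(p, model_a(p), model_b(p))
        votes[winner] += 1
    return votes

if __name__ == "__main__":
    votes = ab_test(["What is RAG?", "Summarize this policy."] * 50)
    print(votes)  # tally of 100 votes split between 'A' and 'B'
```

In a real deployment, `collect_preference` would be replaced by a routing step that sends each prompt/response pair to human evaluators, and the resulting tallies would feed benchmarking dashboards or further fine-tuning.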

Appen’s new capabilities offer enterprises a way to incorporate proprietary data and collaborate with internal subject matter experts to refine LLM performance for enterprise-specific use cases, all within a single platform. Companies can deploy solutions on-premises, in the cloud, or in hybrid environments, balancing LLM accuracy, complexity, and cost-effectiveness.

“Generative AI has created significant opportunities for enterprise innovation,” said Appen CEO, Ryan Kolln. “However, the challenge that enterprises are facing is how to ensure that their LLM-enabled applications are accurate and trustworthy. Appen has been at the forefront of human-AI collaboration for over 25 years, and I’m excited that we can now bring our products and expertise to enterprises looking to build accurate and trustworthy LLM-enabled applications.”

For almost three decades, Appen has excelled in collecting and preparing large volumes of high-quality data with global reach, exactly the data required to train large language models and produce accurate, consistent outputs. Appen’s new capabilities give enterprises the flexibility to leverage Appen’s crowd-curated data while tapping into their own proprietary data and human expertise for optimal LLM output.

If you’re interested in learning more about Appen’s new capabilities, please visit our website at Appen.com or contact an AI Specialist.

