
Writer AI Large Language Models Achieve Top Scores on Stanford HELM

Benchmarks reinforce Palmyra as the enterprise-ready LLM, with the transparency and accuracy needed for enterprise generative AI use cases

Writer, the leading generative AI platform for enterprises, announced today that Palmyra, its family of large language models (LLMs), has achieved top benchmark scores from Stanford’s Holistic Evaluation of Language Models (HELM), demonstrating its leadership in the generative AI field.

In key benchmark tests, Palmyra outperformed models from OpenAI, Cohere, Anthropic, and Microsoft, as well as prominent open-source models such as Falcon-40B and LLaMA-30B.

HELM is a benchmarking initiative by Stanford University’s Center for Research on Foundation Models that evaluates prominent language models across a wide range of scenarios. Palmyra excelled in tests that evaluated a model’s ability to understand knowledge and answer natural language questions accurately.

  • Palmyra ranked first in several important tests, scoring 60.9% on Massive Multitask Language Understanding (MMLU), 89.6% on BoolQ, and 79.0% on NaturalQuestions.
  • Palmyra ranked second in two additional key tests, with 49.7% on Question Answering in Context (QuAC) and 61.6% on TruthfulQA.

The HELM results validate Palmyra’s proficiency in comprehending knowledge, drawing inferences, and accurately answering open-ended, context-based questions posed in natural language. These scores highlight Palmyra’s ability to complete advanced tasks, making it well suited to a wide range of enterprise use cases.

“We are thrilled to see Writer Palmyra at the top of these benchmarks,” said Waseem AlShikh, Writer co-founder and chief technology officer. “Our models have demonstrated their breadth of knowledge comprehension and ability to accurately answer questions in natural language – all with an efficient-sized model that doesn’t exceed 43 billion parameters. These results offer further proof that the Writer generative AI platform is the enterprise-ready choice for organizations looking to accelerate growth, increase productivity, and align brand.”

In a world where LLMs are increasingly undifferentiated, training data, duration, and methodology make a big difference. Unlike other model families, Palmyra is trained on high-quality formal writing and has a deep vertical focus, with industry-specific models for healthcare and financial services. The models are transparent and auditable rather than black box, built so data stays private, and can be self-hosted. Given that Palmyra LLMs don’t exceed 43 billion parameters, these latest rankings further demonstrate that smaller, more efficient, and more accessible models can still deliver superior results.
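Because the smaller Palmyra checkpoints are openly published on Hugging Face (see the resources below), they can be downloaded and run on an organization’s own infrastructure. The following is a minimal sketch of self-hosted inference using the Hugging Face transformers library; the model ID Writer/palmyra-base and the prompt are illustrative choices, not a prescribed setup.

```python
# Minimal sketch: self-hosted inference with a publicly released Palmyra checkpoint.
# Assumes the transformers, torch, and accelerate packages are installed;
# "Writer/palmyra-base" is used here as an illustrative model ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Writer/palmyra-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory use on GPU; use float32 on CPU
    device_map="auto",          # let accelerate place weights on available devices
)

prompt = "Question: What does Stanford HELM evaluate?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding keeps the example deterministic.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running the checkpoint in-process like this keeps prompts and outputs on local infrastructure, which is the property the self-hosting claim above refers to.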

See Writer Palmyra resources here:

  • Palmyra results on HELM
  • Hugging Face MMLU & TruthfulQA results
  • Palmyra LLMs on Hugging Face
  • Whitepaper: Becoming Self-Instruct

Comparison of Writer and closed models

                      Cohere    Claude    Text Davinci-003    ChatGPT    Writer
BoolQ                 85.6%     81.5%     88.1%               73.9%      89.6%
MMLU                  45.2%     48.1%     56.9%               59.8%      60.9%
Natural Questions     76.0%     68.6%     77.0%               63.7%      79.0%

Results from HELM. Models used for testing: Cohere Command beta (52.4B), Anthropic-LM v4-s3 (52B), OpenAI text-davinci-003, gpt-3.5-turbo-0301, and Writer Palmyra-X.

Comparison of Writer and open-source models

             MMLU     TruthfulQA
Palmyra-X    60.9%    61.6%
Falcon-40B   57.0%    41.7%
LLaMA-30B    56.8%    42.3%

