
HackerRank Introduces New Benchmark to Assess Advanced AI Models

Industry Leader in Software Development Skills Assessment Introduces Real-World Benchmark of AI Software Development Capabilities

HackerRank, the Developer Skills Company, today introduced its new ASTRA Benchmark. ASTRA, which stands for Assessment of Software Tasks in Real-World Applications, is designed to evaluate how well advanced AI models, such as ChatGPT, Claude, and Gemini, perform tasks across the entire software development lifecycle.

The ASTRA Benchmark consists of multi-file, project-based problems designed to mimic real-world coding tasks. Its aim is to measure the correctness and consistency of an AI model's coding ability on practical applications.

“With the ASTRA Benchmark, we’re setting a new standard for evaluating AI models,” said Vivek Ravisankar, co-founder and CEO of HackerRank. “As software development becomes more human + AI, it’s important that we have a very good understanding of the combined abilities. Our experience pioneering the market in assessing software development skills makes us uniquely qualified to assess the abilities of AI models acting as agents for software developers.”

A key highlight from the benchmark showed that OpenAI's o1 was the top performer, while Claude 3.5 Sonnet produced more consistent results.

Key features of ASTRA Benchmark include:

  • Diverse skill domains: The current version includes 65 project-based coding questions, primarily focused on front-end development. These questions are categorized into 10 primary coding skill domains and 34 subcategories.
  • Multi-file project questions: To mimic real-world development, ASTRA’s dataset includes an average of 12 source code and configuration files per question as model inputs. This results in an average of 61 lines of solution code per question.
  • Model correctness and consistency evaluation: To provide a more precise assessment, ASTRA prioritizes comprehensive metrics such as average scores, average pass@1, and median standard deviation (see the sketch after this list).
  • Wide test case coverage: ASTRA’s dataset contains an average of 6.7 test cases per question, designed to rigorously evaluate the correctness of implementations.
  • Benchmark Results: For a full report and analysis of the initial benchmark results, please visit hackerrank.com/ai/astra.
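
To make those metrics concrete, here is a minimal sketch of how average score, pass@1, and median standard deviation could be computed from repeated model runs. HackerRank has not published ASTRA's exact scoring code in this release, so the function name, the data layout, and the 0-to-1 per-attempt scores below are illustrative assumptions, not the benchmark's actual implementation.

```python
from statistics import mean, median, pstdev

def astra_style_metrics(scores_per_question):
    """Aggregate per-question attempt scores into ASTRA-style metrics.

    scores_per_question[q][i] is the fraction of test cases passed by
    attempt i on question q (0.0 to 1.0). Hypothetical layout: the
    release does not specify how ASTRA stores its results.
    """
    # Average score: mean attempt score per question, averaged over questions.
    avg_score = mean(mean(attempts) for attempts in scores_per_question)

    # pass@1 with n attempts per question: the standard unbiased estimator
    # reduces to the fraction of attempts that pass every test case.
    pass_at_1 = mean(
        sum(1 for s in attempts if s == 1.0) / len(attempts)
        for attempts in scores_per_question
    )

    # Consistency: the median, across questions, of the standard deviation
    # of attempt scores. Lower values mean less run-to-run variation.
    median_std = median(pstdev(attempts) for attempts in scores_per_question)

    return avg_score, pass_at_1, median_std

# Example: three questions, four scored attempts each.
scores = [
    [1.0, 1.0, 0.8, 1.0],
    [0.5, 0.6, 0.5, 0.4],
    [1.0, 0.0, 1.0, 1.0],
]
avg, p1, med_std = astra_style_metrics(scores)
print(f"average score={avg:.3f}  pass@1={p1:.3f}  median stdev={med_std:.3f}")
```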

Ravisankar added, “By open sourcing our ASTRA Benchmark, we’re offering the AI community the opportunity to run their models against a high-quality, independent benchmark. This supports the continued advancement of AI while fostering more collaboration and transparency in the AI community to ensure the integrity of new models.”

For more information about HackerRank’s ASTRA Benchmark, contact rafik@hackerrank.com.

GlobeNewswire

GlobeNewswire is one of the world's largest newswire distribution networks, specializing in the delivery of corporate press releases, financial disclosures, and multimedia content to the media, the investment community, individual investors, and the general public.
