Application Security

Contrast Security releases Assess Feature

The code security leader has introduced a new capability within its Secure Code Platform to address prompt injection, the top entry of the OWASP Top 10 for LLMs

Contrast Security (Contrast), the code security platform built for developers and trusted by security, today announced it will extend its market-leading application security testing (AST) platform to support testing of Large Language Models (LLMs) from OpenAI. In this first release, Contrast rules help teams that are developing software using the OpenAI application programming interface (API) set to identify and mitigate weaknesses that could expose an organization to prompt injection vulnerabilities: i.e., attacks involving injection of a prompt that deceives the application into executing unauthorized code.

Prompt injection was identified as the top risk for LLM applications by the just-released OWASP 10 Top for Large Language Model Applications project. Contrast has continued to support OWASP’s mission to improve Application Security (AppSec): In fact, Contrast’s Chief Product Officer Steve Wilson led the 400-person volunteer team that created the OWASP Top 10 for LLMs.

“As project lead for the new OWASP Top 10 for LLMs, I can say our group looked deeply at many attack vectors against LLMs. Prompt Injection repeatedly rose to the top of the list in our expert group voting for the most important vulnerability,” said Wilson. “Contrast is the first security solution to respond to this new industry standard list by delivering this capability. Organizations can now identify susceptible data flows to their LLMs, providing security with the visibility needed to identify risks and prevent unintended exposure.”

According to the OWASP Top 10 for LLMs, a prompt injection vulnerability allows an attacker to craft inputs that can manipulate the operation of a trusted LLM. This results in the LLM acting as a “confused deputy” on behalf of the attacker. Given the high degree of trust usually associated with an LLM’s output, the manipulated responses may go unnoticed and may even be trusted by the user, allowing the attack to potentially poison search results, deliver incorrect or malicious responses, produce malicious code, circumvent content filters, or to leak sensitive data.  Prompt injections can be introduced via various avenues, including websites, emails, documents or any other data source that an LLM might rely on.

Contrast is ideal for identifying all types of injection accurately, including this new form of AI prompt injection. Contrast uses runtime security to monitor actual application behavior and detect vulnerabilities, rather than scanning source code or simulating attacks. This approach is fast, easy and highly accurate, ensuring that developers are instantly notified of issues and provided all the information they need to correct problems. User input sent through OpenAI’s official Python API to an LLM in a Python agent-instrumented application triggers the prompt injection rule.

To learn more about how Contrast Assess provides the first security solution created to identify potential prompt injection vulnerabilities, please visit our website or schedule a demo.

Visit AITechPark for cutting-edge Tech Trends around AI, ML, Cybersecurity, along with AITech News, and timely updates from industry professionals!

Related posts

New Relic Announces General Availability of Vulnerability Management

Business Wire

TrustInSoft Announces Bug-Free IoT Application Security Test

PR Newswire

Penta Security to Showcase WAPPLES at GITEX Technology Week 2021

PR Newswire