Backslash Research Team performs GPT-4 developer simulation exercise to identify security blindspots in LLM-generated code
Backslash Security, a modern application security solution that leverages deep reachability analysis for enterprise AppSec and product security teams, today announced the findings of a GPT-4 developer simulation exercise designed and conducted by the Backslash Research Team to identify security issues in LLM-generated code. The Backslash platform offers several core capabilities that address growing security concerns around AI-generated code, including open source code reachability analysis and phantom package visibility.
According to Gartner, 63% of organizations are currently piloting or deploying AI code assistants. Because these assistants are so simple to use, AI-generated code will dramatically accelerate new code development. However, the technology also introduces a diverse range of potential vulnerabilities and security challenges.
To explore the security gaps in AI-generated code from a developer's perspective, the Backslash Research Team designed and ran a series of developer simulation exercises using GPT-4. The results revealed critical security blindspots in AI-generated code and its use of third-party open source software (OSS):
- Some LLMs can recommend vulnerable OSS packages because they are ‘frozen in time’: Many LLMs are trained on static datasets with a fixed knowledge cutoff and therefore never see subsequent patch releases. As a result, the OSS package recommendations they generate may be outdated, pointing to older package versions whose security vulnerabilities have since been fixed in newer releases.
- ‘Phantom’ packages can introduce unseen reachable risks: LLM-generated code can pull in indirect OSS packages that developers are not aware of. Developers often have little to no visibility into, or control over, these “phantom” packages, which can introduce outdated, vulnerable code into production.
- Seemingly safe code-snippet outputs can create an illusion of trust: In the team's experiments, the same prompt led GPT-4 to generate different recommendations, occasionally suggesting vulnerable package versions. These outputs often, but not always, include disclaimers or instructions regarding the package version. Such inconsistency may lead developers to treat all AI-generated code as reliable, introducing significant product security risks for large development teams that deploy high volumes of code daily.
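The phantom-package finding above can be illustrated with a minimal sketch (the function names and sample data here are hypothetical, not part of the Backslash platform): compare the top-level modules a code snippet actually imports against the packages declared in its manifest, and flag anything undeclared.

```python
import ast

def declared_packages(requirements_text):
    """Parse a requirements.txt-style manifest into a set of package names."""
    names = set()
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        # Strip version specifiers like ==, >=, ~=, etc.
        for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
            if sep in line:
                line = line.split(sep, 1)[0]
                break
        names.add(line.strip().lower())
    return names

def imported_packages(source_text):
    """Collect top-level module names imported by a Python source snippet."""
    modules = set()
    for node in ast.walk(ast.parse(source_text)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                modules.add(alias.name.split(".")[0].lower())
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0].lower())
    return modules

def phantom_packages(source_text, requirements_text,
                     stdlib=frozenset({"os", "sys", "json"})):
    """Imports used by the code but neither declared in the manifest nor stdlib."""
    return imported_packages(source_text) - declared_packages(requirements_text) - stdlib
```

For example, if an LLM-generated snippet imports both `requests` and `urllib3` but the manifest only declares `requests`, the sketch flags `urllib3` as a phantom dependency the developer never explicitly chose.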
As AI-generated code continues to gain momentum, security issues stemming from the unintended use of outdated OSS packages in application code will become increasingly prominent. The Backslash platform, built first and foremost with security in mind for application security teams focused on software composition analysis (SCA) and static application security testing (SAST), offers several capabilities that address AI-generated code security concerns associated with open source software:
- In-depth reachability analysis: The Backslash platform’s unique approach to SCA analysis assesses the reachability of OSS vulnerabilities, enabling AppSec and product security teams to pinpoint the risks that are reachable and exploitable, and prioritize the genuine threats.
- Phantom package visibility: Extending beyond traditional SCA, Backslash can identify and assess phantom package risks. The platform detects packages that are used by the code but not declared in manifest files, and determines whether those packages are reachable and the level of risk they pose.
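The reachability idea behind the first capability can be sketched in miniature (this is a toy illustration under stated assumptions, not Backslash's actual analysis): a vulnerability in an imported package only matters if the vulnerable API is actually invoked, so the check walks the AST looking for a call to a specific module function rather than merely an import.

```python
import ast

def calls_vulnerable_function(source_text, module, func):
    """Return True if the source actually calls module.func
    (directly, via an import alias, or via a from-import)."""
    tree = ast.parse(source_text)
    aliases = set()   # names the vulnerable module is bound to
    direct = set()    # names the vulnerable function is bound to via from-imports
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for a in node.names:
                if a.name == module:
                    aliases.add(a.asname or a.name)
        elif isinstance(node, ast.ImportFrom) and node.module == module:
            for a in node.names:
                if a.name == func:
                    direct.add(a.asname or a.name)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            f = node.func
            # Attribute call: yaml.load(...) or alias.load(...)
            if (isinstance(f, ast.Attribute) and f.attr == func
                    and isinstance(f.value, ast.Name) and f.value.id in aliases):
                return True
            # Bare call after "from yaml import load"
            if isinstance(f, ast.Name) and f.id in direct:
                return True
    return False
```

With this sketch, a file that imports `yaml` but only ever calls `yaml.safe_load` would not be flagged for a hypothetical `yaml.load` advisory, while a file that calls `yaml.load` would be. The design point mirrors the platform's claim: reachability lets teams prioritize exploitable risks instead of every reported CVE.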
“The way we create code is rapidly changing, and that means the way that we secure code must also change. AI-generated code offers immense possibility, but also introduces an entirely new scale of security challenges – and application security teams now bear the burden of securing an unprecedented volume of potentially vulnerable code due to the sheer speed of AI-enabled software development,” said Shahar Man, co-founder and CEO of Backslash Security. “Our research shows that securing open source code is more critical than ever before due to product security issues being introduced by AI-generated code that is associated with OSS.”
Book a demo with Backslash Security to see its OSS Reachability Analysis and Phantom Package Visibility capabilities in action.