
Mindgard’s new tool shields businesses from GenAI data breaches

Mindgard's new AI data loss prevention module combines stronger security with streamlined compliance, reducing the risk of reputational damage for businesses integrating LLMs into their offerings

Mindgard, the cybersecurity startup specialising in protecting businesses’ AI, GenAI, and LLM solutions, today launched a new module tailored specifically for data loss prevention (DLP). The new offering enables organisations to minimise business and reputational risk from data loss whilst leveraging the productivity benefits of third-party LLM and GenAI services such as ChatGPT and Microsoft Copilot.

The breakneck pace of AI evolution has elevated governance and security into urgent concerns. Businesses face reputational risks from both directions: failing to integrate LLMs into their products and services fast enough to keep up with competitors, and leaving themselves exposed to data loss from unmonitored use of third-party GenAI solutions. AI systems process vast amounts of data, which can be mishandled intentionally or accidentally, leading to identity theft, financial fraud, and other abuse. In 2023, ChatGPT experienced a significant data breach caused by a bug in an open-source library, exposing users’ personal information and chat titles.

Dr. Peter Garraghan, CEO/CTO of Mindgard and Professor at Lancaster University, said: “Many companies, racing to keep up with competitors in today’s GenAI arms race, are focused on rapidly getting LLM services deployed into their organisations without fully understanding the security implications of data that is at risk within these AI implementations. Decision makers must ensure that their data controllers are using and developing AI in a way that fully complies with legal requirements, and comprehensively protects their organisations from AI-related cyber threats.”

Mindgard’s platform already helps customers manage AI security risks, ranging from data poisoning to model theft, across internal AI systems and third-party models. The new module adds protection against the three major data loss threats facing AI systems: outbound risk, external attackers compromising internal models, and ecosystem risk. 

Dr. Garraghan continued: “Mindgard is the only provider that comprehensively manages all of these risks within a single platform. With interconnected AI systems, a compromise anywhere in the value chain can expose vulnerabilities. Mindgard provides visibility and control for all integrated AI components across the system stack.”

The new module allows customers to holistically monitor, detect, and report data-loss risks across their LLM and GenAI usage. Granular AI data access controls can be configured flexibly to organisational needs while limiting insider risk from rogue employees.
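To make the idea of granular, policy-based controls concrete, the sketch below shows what a generic outbound DLP check on prompts bound for a third-party LLM might look like: simple pattern detectors combined with a per-role policy that decides whether a prompt should be blocked. It is purely illustrative; the names, patterns, and policy structure are assumptions for this example and do not describe Mindgard's actual product or API.

```python
# Illustrative sketch only: a generic outbound DLP check for prompts sent to a
# third-party LLM. All names here (Policy, check_prompt, the regex patterns)
# are hypothetical and are not Mindgard's API.
import re
from dataclasses import dataclass, field

# Simple detectors for common categories of sensitive data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

@dataclass
class Policy:
    role: str                                   # e.g. "engineer", "contractor"
    blocked_categories: set = field(default_factory=set)

def check_prompt(prompt: str, policy: Policy) -> dict:
    """Report which sensitive categories appear in the prompt and whether
    the caller's policy requires the request to be blocked."""
    findings = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    blocked = any(f in policy.blocked_categories for f in findings)
    return {"findings": findings, "blocked": blocked}

if __name__ == "__main__":
    contractor = Policy(role="contractor",
                        blocked_categories={"email", "credit_card"})
    result = check_prompt("Summarise the invoice for jane.doe@example.com",
                          contractor)
    print(result)  # {'findings': ['email'], 'blocked': True}
```

In practice, a production DLP layer would cover far more data categories, inspect responses as well as prompts, and log findings for compliance reporting; the per-role policy object here simply mirrors the "granular access controls" idea described above.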

This approach stands apart from existing AI compliance solutions, allowing organisations to develop or consume AI services without compromising their security posture. Mindgard anticipates strong demand as more countries and states enact AI regulations over the coming years.
