Cyber Security

UK Government publishes new AI security guidelines

AI cyber security startup Mindgard, a Lancaster University spin-out, has provided key recommendations to the UK Department for Science, Innovation and Technology.

The British government has published a new collection of research reports on the cyber security of AI, drawing on sources from the private and public sectors. It includes a broad set of recommendations for organisations prepared by Mindgard, the report’s only startup contributor. This report, together with the new draft Code of Practice on cyber security governance, was created in response to the Chinese cyberattack on the Ministry of Defence earlier this year, and is aimed specifically at directors and business leaders in the public and private sectors.

The Department for Science, Innovation and Technology (DSIT) commissioned Mindgard to conduct a systematic study identifying recommendations for addressing cyber security risks to Artificial Intelligence (AI). Mindgard’s contributions focused specifically on identifying and mapping vulnerabilities across the AI lifecycle. Titled Cyber Security for AI Recommendations, the Mindgard report describes 45 distinct technical and general recommendations for addressing cyber security risks in AI.

The first type of recommendation proposed by Mindgard is technical. This technology-focused approach mitigates cyber security risks by altering the software, hardware, data, or network access of the computer system that runs the AI. It can also involve altering the AI model itself, encompassing adjustments to training methodologies, pre-processing techniques, and model architecture. Collectively, these measures reduce a system’s vulnerability when it is exposed to an AI cyber attack.

Equally important are general recommendations, which are conceptual frameworks for mitigating cyber security risks in AI. These recommendations promote ‘security hygiene’ by establishing organisational practices, company policies, governance, and security measures. Among them are:

  • Managing legal and regulatory requirements involving AI
  • Stakeholder engagement
  • Creating an organisational AI program / secure development program
  • Implementing controls to limit unwanted model behaviour
  • Creating and documenting AI project requirements
  • Conducting red teaming and risk analysis

Other key contributors included Grant Thornton UK LLP, Manchester Metropolitan University, and IFF Research. Thanks to their combined efforts, the government report identified a number of key areas for improvement around legal and regulatory requirements, stakeholder engagement, controls to limit unwanted model behaviour, and documentation. The accompanying literature review also identified 23 distinct security vulnerabilities in AI, based on detailed analysis of previous attacks. With the exception of one security incident, all of the studied attacks used some form of adversarial machine learning to achieve their goals.
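To make the idea of adversarial machine learning concrete, here is a minimal toy sketch (not from the Mindgard report, and far simpler than real attacks): a small, bounded change to an input flips the decision of a hand-written linear classifier. The weights, inputs, and perturbation budget below are illustrative assumptions.

```python
# Toy adversarial-evasion sketch against a simple linear classifier.
# Real adversarial ML attacks target neural networks, but the core idea --
# a small, targeted input perturbation that flips the model's output --
# is the same.

w = [1.0, -2.0]   # illustrative model weights
b = 0.5           # illustrative bias

def predict(x):
    """Return class 1 if w.x + b > 0, else class 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(score > 0)

def sign(v):
    return (v > 0) - (v < 0)

x = [2.0, 0.5]    # legitimate input, classified as class 1
eps = 0.6         # per-feature perturbation budget

# FGSM-style step: nudge each feature against the sign of its weight,
# the direction that most quickly lowers the classifier's score.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # the perturbed input flips the label
```

For a linear model the gradient of the score with respect to the input is just the weight vector, which is why stepping against `sign(w)` suffices; attacks on neural networks compute that gradient by backpropagation instead.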

Beyond its research work, Mindgard offers a platform that takes a distinctive approach to managing AI security risks, from data poisoning to model theft. Its modules protect against outbound risk, external attackers compromising internal models, and ecosystem risk.

Dr. Peter Garraghan, CEO/CTO of Mindgard and Professor at Lancaster University, said: “Research has always been fundamental to Mindgard’s work and mission. Directing that research towards initiatives that strengthen cybersecurity and address the weaknesses of proprietary AI in its current iteration on a national level is a responsibility and a privilege.”

About Mindgard

Mindgard is a deep-tech startup specialising in cyber security for companies working with AI, GenAI, and LLMs. Mindgard was founded in 2022 at the world-renowned Lancaster University and is now based in London, UK. It has raised €3.5 million in funding, backed by leading investors like IQ Capital and Lakestar. Mindgard’s primary product – born from eight years of rigorous R&D in AI security – offers an automated platform for comprehensive security testing, red teaming, and rapid detection/response for AI, GenAI, and LLMs. Learn more at

