
Human and AI Relationships Will Improve with Explainability

Max Heinemeyer sheds light on how explainability can strengthen the relationship between human security teams and AI. He offers insights into the different levels of AI decision-making, from NLP to threat detection.

Cybersecurity is not a human-scale problem. Digital environments, and the requirements to secure them, are too complex for humans to navigate successfully every time, yet that is exactly what is needed. An attacker only has to succeed once; defenders have to succeed every time.

There is too much data in modern organizations for humans to keep up with and anticipate every cyber-attack. Manually sifting through all of an organization’s security logs and writing static detections neither works nor scales. Cybersecurity practitioners need augmentation.

With AI fast becoming a non-negotiable in security workflows, Explainable Artificial Intelligence (XAI) will be more important than ever before – for cyber professionals and executives alike.

Cybersecurity practitioners are skeptical by trade. To trust a system they work with, they need to understand it. AI and human teams must work together to defend against increasingly sophisticated attackers wielding increasingly sophisticated technology. While AI breakthroughs can help security teams optimize performance, those teams cannot rely on advanced mathematical algorithms alone. Humans must have agency and control over their systems and understand how AI impacts them.

Focus on XAI will increase, in sharp contrast with the concept of a “black box.” In cybersecurity, a black box is a system that can be viewed in terms of its inputs and outputs, without knowledge of its internal workings. These outputs are often produced without explanation, and with security top-of-mind for company boards, today’s teams need to be able to convey AI’s expected impacts, potential biases, and actions.

XAI flips that on its head, ensuring that security professionals can be taken ‘under the hood’ of the black box and understand the choices made by the technology (and, in this case, specifically by the AI). XAI produces detailed reasoning to clarify an AI’s decisions. To build the trust required, humans must stay in control and understand an AI engine’s decision-making process. This is not about questioning every decision, but about being able to drill into the decision-making when necessary. That ability is crucial when investigating cyber incidents and confidently evaluating how to act.

In today’s world, merely knowing that something happened is not enough; security teams must understand how and why. You cannot identify a vulnerability or the root cause of an exploit without understanding how a cyber-attacker got through the defenses, or why the AI was able to contain the threat. So, how can XAI be approached in cybersecurity? It is all about granting humans access to the underlying input at various levels and explaining what has been done throughout the process. It is about getting understandable answers.

XAI makes the underlying data available to the human team, where possible and safe to do so, presented in plain English and often supported by visualizations or other tools. These processes and methods, which allow human users to comprehend and trust the results produced by machine learning, must be at the forefront of Security Operations Centers (SOCs).

XAI provides insight into the different levels of decision-making, from high-level abstractions close to the output down to low-level abstractions closer to the input. When AI is programmed to explain the why behind the micro-decisions it makes daily, human teams are empowered to make the macro-decisions that impact the business at large and require human context.
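As an illustration of drilling into one such micro-decision, the sketch below pairs a generic anomaly detector (a scikit-learn IsolationForest trained on hypothetical network-traffic features) with a simple attribution step: each feature of a flagged event is neutralized in turn, and the amount the anomaly score recovers is treated as that feature’s share of the blame. This is a minimal, assumed approach for illustration only, not any vendor’s actual engine.

```python
# Minimal sketch: explaining an anomaly verdict by per-feature perturbation.
# Feature names and numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
FEATURES = ["bytes_out_mb", "new_external_ips", "failed_logins", "off_hours_conns"]

# Baseline of mostly benign device behaviour (hypothetical distribution).
normal = rng.normal(loc=[20, 2, 1, 3], scale=[5, 1, 1, 2], size=(500, 4))
model = IsolationForest(random_state=0).fit(normal)

def explain(event):
    """Attribute the anomaly score to each feature by replacing it with its
    baseline median and measuring how much the score recovers."""
    base_score = model.decision_function(event.reshape(1, -1))[0]
    medians = np.median(normal, axis=0)
    attributions = []
    for i, name in enumerate(FEATURES):
        probe = event.copy()
        probe[i] = medians[i]  # neutralize one feature at a time
        new_score = model.decision_function(probe.reshape(1, -1))[0]
        attributions.append((name, new_score - base_score))  # recovery = blame
    return sorted(attributions, key=lambda kv: kv[1], reverse=True)

# A suspicious event: heavy outbound transfer to many new external IPs.
event = np.array([480.0, 37.0, 2.0, 14.0])
print(f"anomaly score: {model.decision_function(event.reshape(1, -1))[0]:.3f}")
for name, blame in explain(event):
    print(f"  {name:>18}: {blame:+.3f}")
```

Ranking the features this way gives an analyst a starting point (“the outbound volume and new external IPs drove this verdict”) rather than an unexplained score.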

One example is using natural language processing (NLP) in threat data analysis. When combined with sophisticated AI threat detection and response, NLP can help make sense of the data and autonomously ‘write’ complete reports. These reports would explain the entire attack step-by-step, from the earliest stages through the attack’s progression to the remediation actions taken. In some cases, NLP can also be applied to existing frameworks, such as the commonly used MITRE ATT&CK framework, to express the findings in a way that adds value even to seasoned security analysts’ workflows.
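A minimal sketch of what such report generation might look like is below: hypothetical detections are ordered into a timeline, mapped to MITRE ATT&CK technique IDs, and rendered as a plain-English narrative. The detection names, the mapping, and the events themselves are illustrative assumptions, not a real product’s output.

```python
# Minimal sketch: turning raw detections into a plain-English incident report
# with MITRE ATT&CK references. Detection names and mapping are hypothetical.
from dataclasses import dataclass

ATTACK_MAP = {
    "password_spray":   ("T1110", "Brute Force"),
    "new_admin_login":  ("T1078", "Valid Accounts"),
    "dns_exfiltration": ("T1048", "Exfiltration Over Alternative Protocol"),
}

@dataclass
class Detection:
    timestamp: str
    device: str
    name: str
    detail: str

def write_report(detections):
    lines = ["Incident summary (auto-generated):", ""]
    for i, d in enumerate(sorted(detections, key=lambda d: d.timestamp), 1):
        tid, tname = ATTACK_MAP.get(d.name, ("T????", "Unmapped"))
        lines.append(f"{i}. {d.timestamp} on {d.device}: {d.detail} "
                     f"(ATT&CK {tid}: {tname})")
    lines += ["", "Recommended action: isolate the affected device and reset "
                  "any credentials observed in the steps above."]
    return "\n".join(lines)

events = [
    Detection("2023-04-02T01:14Z", "laptop-17", "password_spray",
              "120 failed logins across 30 accounts"),
    Detection("2023-04-02T01:32Z", "laptop-17", "new_admin_login",
              "successful admin login from a previously unseen IP"),
    Detection("2023-04-02T02:05Z", "laptop-17", "dns_exfiltration",
              "4.8 GB of TXT-record queries to a rare domain"),
]
print(write_report(events))
```

Even this toy version shows the value of anchoring the narrative to a shared framework: an analyst can jump from the generated step straight to the relevant ATT&CK technique page.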

NLP may even be able to outline the hypotheses behind a cyber-attack, conveying the ‘how’ in addition to the ‘what.’ Not only does this break down the threat detection and its proportionate response in a simple, digestible way, but it could also inform teams about how to stop such threats from happening again.

Yet it is not just security leaders who recognize the importance of XAI; regulators also realize that the way AI is trained can itself pose risks. AI is usually trained on large, sensitive datasets, which may be shared across teams, organizations, and regions, complicating regulation and compliance. To make life as easy as possible for organizations and regulators alike when navigating these complex issues, we must implement XAI across the board, in the interest of transparency, objectivity, and, ultimately, building AI resilience.

Organizations should also leverage AI to materially benefit human teams, making them stronger, more efficient, and more robust. If biases or inaccuracies emerge in the algorithms, organizations must be able to rely on XAI to identify where those biases formed and how to mitigate them, in addition to understanding the processes behind the AI’s decisions.
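One simple, assumed way XAI tooling might surface such a bias is to slice alert outcomes by device group and compare false-positive rates, as in the hypothetical sketch below; the group names and triage log are invented for illustration.

```python
# Minimal sketch: flagging a possible model bias by comparing false-positive
# rates across device groups. All data here is hypothetical.
from collections import defaultdict

# (device_group, model_flagged, analyst_confirmed) triage records.
triage_log = [
    ("engineering", True, True),  ("engineering", True, False),
    ("engineering", False, False), ("finance", True, False),
    ("finance", True, False),     ("finance", True, True),
    ("finance", True, False),     ("guest_wifi", False, False),
]

stats = defaultdict(lambda: {"flagged": 0, "false": 0})
for group, flagged, confirmed in triage_log:
    if flagged:
        stats[group]["flagged"] += 1
        if not confirmed:
            stats[group]["false"] += 1

for group, s in stats.items():
    fpr = s["false"] / s["flagged"]
    print(f"{group:>12}: {s['flagged']} alerts, {fpr:.0%} false positives")
# A group with a disproportionate false-positive rate is a starting point
# for drilling into the features driving those decisions.
```

A skewed rate for one group does not prove bias on its own, but it tells the team exactly where to apply the kind of per-decision explanation described above.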

With this identification and optimization, AI becomes a true force for good, helping to eliminate, rather than propagate, existing challenges for human teams. For AI algorithms to truly bolster security defenses, the humans behind the AI need to be able to understand its decisions, and explainability is what makes that possible.

