AITech Interview with Sebastian Gierlinger, VP of Engineering at Storyblok

Sebastian Gierlinger discusses AI-driven cybersecurity risks, human error, and strategies to safeguard organizations from evolving threats.

Sebastian, can you start by sharing your background and what led you to your current role as VP of Engineering at Storyblok?

My journey in the tech industry began with a deep interest in software development and a passion for creating innovative solutions. Over the years, I have held various roles in engineering and management, which have provided me with a broad perspective on technology and its applications. 

Before joining Storyblok, I worked with several startups and established companies, focusing on building scalable and secure software solutions. My experience in these diverse environments has been instrumental in shaping my approach to engineering and leadership. With Storyblok, I was drawn to the company’s vision of transforming content management and the opportunity to lead a talented team in driving this innovation forward.

With 72% of CISOs concerned that AI solutions may lead to security breaches, what are your thoughts on the potential risks generative AI poses to cybersecurity?

The concerns of CISOs are well-founded, as generative AI introduces new dimensions of risk in cybersecurity. AI systems, especially those that can generate content, can be exploited to create highly convincing phishing emails and social engineering attacks. The sophistication of these AI-generated attacks makes them harder to detect with traditional security measures, and the ability of AI to automate and scale such attacks dramatically expands the threat landscape. To mitigate these risks, it is crucial to enhance our detection and response mechanisms, utilizing AI and machine learning to identify anomalies and suspicious activities that may indicate AI-driven cyber threats.
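To make that anomaly-detection idea concrete, here is a minimal sketch of flagging unusual email or login activity with an unsupervised model. It uses scikit-learn's IsolationForest; the feature set and example values are invented for illustration and are not drawn from any specific product or from Storyblok's tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a feature vector extracted from an email or login event,
# e.g. [links_count, recipient_count, hour_of_day, sender_domain_age_days].
# These baseline rows represent known-good traffic (values are made up).
baseline_events = np.array([
    [1, 1, 9, 2000],
    [0, 1, 10, 1500],
    [2, 3, 14, 3000],
    [1, 2, 11, 2500],
])

# Fit on normal activity so that deviations stand out as potential threats.
model = IsolationForest(contamination=0.05, random_state=42).fit(baseline_events)

# A suspicious event: many links, mass recipients, 3 a.m., brand-new domain.
new_event = np.array([[14, 120, 3, 2]])
print(model.predict(new_event))  # -1 means the model treats the event as anomalous
```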

How do you see generative AI being used to scale and automate cyber-attacks, and why is it difficult to identify these AI-driven incidents?

Generative AI can automate the creation of malicious content at an unprecedented scale, making it easier for attackers to launch widespread campaigns with minimal effort. This includes generating realistic phishing emails, fake news, and even malicious code. The challenge in identifying AI-driven incidents lies in their sophistication and variability. Unlike traditional attacks that may follow predictable patterns, AI-generated attacks can constantly evolve and adapt, making them harder to detect with standard rule-based security systems. Advanced threat detection tools that utilize AI themselves are needed to keep up with these evolving threats.

In what ways can generative AI be utilized to create malicious content such as phishing emails and social engineering attacks?

Generative AI can produce highly realistic and personalized phishing emails by analyzing vast amounts of publicly available data about potential targets. This allows attackers to craft messages that are more likely to deceive recipients into divulging sensitive information. Similarly, AI can generate fake social media profiles or impersonate trusted contacts, enhancing the effectiveness of social engineering attacks. The ability to produce high-quality, contextually relevant content at scale means that these AI-generated threats can bypass many traditional security filters designed to catch generic phishing attempts.

Given the difficulty in differentiating AI-generated content from human-built content, how should cybersecurity professionals approach this challenge?

Cybersecurity professionals need to adopt a multi-layered approach. This includes employing advanced machine learning models to detect subtle anomalies that may indicate AI involvement. Continuous monitoring and behavioral analysis can help identify unusual patterns that differ from typical human interactions. Additionally, educating users about the potential risks and signs of AI-generated attacks can improve vigilance and reduce the likelihood of successful phishing attempts. Collaboration and information sharing within the cybersecurity community are also vital to stay ahead of emerging threats.
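As one illustration of the machine-learning layer described above, the sketch below trains a toy text classifier to score emails for phishing likelihood. The four example emails and their labels are fabricated for demonstration; a real system would need a large labeled corpus and many additional signals beyond the raw text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked, verify your password immediately at this link",
    "Urgent: confirm your payroll details to avoid suspension",
    "Team meeting moved to 3pm, agenda attached",
    "Monthly report is ready for review in the shared folder",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus a linear classifier, wrapped in a single pipeline.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(emails, labels)

suspect = ["Please verify your credentials now to keep your account active"]
print(classifier.predict_proba(suspect))  # per-class probabilities for the new message
```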

Are current cybersecurity measures adequate, and what specific measures do you believe are most effective against AI-driven attacks?

While current cybersecurity measures provide a foundation, they need to be enhanced to effectively counter AI-driven attacks. Key measures include advanced threat detection, where AI and machine learning are used to detect and respond to threats in real time, and behavioral analytics, which monitors user behavior to identify deviations that may indicate compromised accounts. Zero Trust Architecture is also important: it implements a model in which verification is required for every access request, regardless of its origin.
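To illustrate the Zero Trust principle in code, here is a minimal sketch of a web service that verifies a signed token on every incoming request, regardless of where it originates. It assumes Flask and PyJWT; the secret handling, route, and token contents are simplified placeholders rather than a production setup.

```python
import jwt  # PyJWT
from flask import Flask, request, abort

app = Flask(__name__)
SECRET = "replace-with-a-managed-secret"  # placeholder; use a secrets manager in practice

@app.before_request
def verify_every_request():
    # Zero Trust: no request is trusted because of its network location;
    # every call must present a valid, correctly signed token.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    try:
        jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        abort(401)

@app.route("/reports")
def reports():
    # Only reached when the token check above has passed.
    return {"status": "ok"}
```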

Keeping staff informed about the latest threats and best practices to mitigate human error is also a key measure in reducing the threat of AI-driven cyber attacks, as is Multi-Factor Authentication (MFA), which adds an extra layer of security to verify user identities.
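As a concrete example of the MFA layer mentioned above, the following sketch shows time-based one-time-password (TOTP) verification with the pyotp library. The user name, issuer, and enrollment flow are placeholders; the point is simply that a second factor is checked at login in addition to the password.

```python
import pyotp

# One secret per user, generated at enrollment and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user scans this provisioning URI with an authenticator app.
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

def verify_second_factor(user_code: str) -> bool:
    # Reject the login unless the six-digit code matches the current time window.
    return totp.verify(user_code)

print(verify_second_factor(totp.now()))  # True when the submitted code is current
```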

How does human error play a role in data breaches related to generative AI tools, and what steps can organizations take to minimize this risk?

Human error is a significant factor in data breaches, especially with the increasing use of generative AI tools. Employees might inadvertently share sensitive information with AI tools or fall victim to AI-generated phishing attacks. Organizations can minimize this risk by educating employees on the risks associated with AI tools and best practices for data protection.

Implementing strict access controls to limit exposure of sensitive information, continuously monitoring AI tool usage, and conducting regular audits to ensure compliance with security policies are also crucial. Establishing clear guidelines on the acceptable use of AI tools within the organization can further help mitigate these risks.
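One lightweight way to enforce such guidelines is to redact sensitive values before a prompt ever reaches an external AI tool. The sketch below is a hypothetical, regex-based filter; the patterns and the sample prompt are invented, and a real deployment would combine this with the access controls, monitoring, and audits described above.

```python
import re

# Hypothetical redaction patterns; a real deployment would extend this list.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive values before a prompt leaves the organization."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = ("Summarize this: contact jane.doe@corp.com, "
          "card 4111 1111 1111 1111, key sk-abc123def456ghi789")
print(redact(prompt))  # the masked text is what would be sent to the AI tool
```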

What are the major risks associated with employees using generative AI tools like ChatGPT and sharing sensitive business information?

The major risks include data leakage, where sensitive business information could be inadvertently shared with AI tools that may not have adequate security measures. Additionally, employees may receive AI-generated phishing messages that are difficult to distinguish from legitimate communications, leading to phishing and social engineering attacks.

AI tools can also produce inaccurate or misleading information, leading to poor decision-making and potential misinformation. Sharing sensitive data with AI tools might violate data protection regulations, resulting in compliance violations. Organizations should implement strict usage policies and ensure that employees are aware of the potential risks to mitigate these issues.

How do you foresee state actors potentially leveraging generative AI for cyber-attacks, and what types of attacks might they prioritize?

State actors could use generative AI to conduct sophisticated cyber espionage and disinformation campaigns. They might prioritize attacks that disrupt critical infrastructure, targeting power grids, financial systems, and communication networks. Espionage activities could involve stealing sensitive information from government agencies and private enterprises. Disinformation campaigns would focus on spreading false information to influence public opinion and destabilize societies. Additionally, state actors could introduce malicious code into essential software systems to cause widespread disruption through sabotage.

In terms of best practices, businesses should implement advanced security measures by using AI-driven security solutions to detect and respond to threats. Regular training and awareness programs are essential to educate employees about the risks and best practices for using AI tools. Establishing strong access controls to limit access to sensitive information and ensuring robust authentication methods are also critical.  Continuous monitoring of AI tool usage and keeping an eye on any suspicious activity will help in early detection of potential threats. 

Sebastian Gierlinger

VP of Engineering at Storyblok

Sebastian is Storyblok’s VP of Engineering and an experienced developer, team builder, and leader with over 10 years of experience in CMS systems, with security, performance, and usability as his primary focus points. Outside of work, Sebastian is passionate about organizing community events for developers and is always up for a good cup of coffee.
