Interview

AITech Interview with Maria Cardow, Chief Information Officer, LevelBlue

Addressing the root causes of shadow AI and unintentional insider threats means prioritizing human-centric design over traditional software silos.

Maria, you’ve often said, “There are no tech problems, only people problems.” That’s a powerful statement. Can you share what inspired this philosophy and how it’s shaped your leadership approach at LevelBlue?

What I’ve learned over the years is that the attack surface is largely people. Even our most complex technical issues eventually come down to humans — their decisions, their understanding, and their habits. Technology can usually be explained and resolved through guidelines, standards, or best practices. Human behavior isn’t that simple. We can tell people not to click a link, but then they get an email that says it’s really important, and they still click it. Most of the time, it’s not malicious intent; it’s simply not knowing any better.

In my own leadership, I’ve learned to approach cybersecurity problems with human solutions. The moment you add people to the loop, you introduce greater variability. That’s what makes the work fascinating. My goal is to create environments where people feel informed, empowered, and part of the security process, not simply managed by it.

In your experience, what are the most common misconceptions organizations have about cybersecurity being purely a technological challenge rather than a human one?

The biggest misconception is that technology can solve everything if we just buy the right tool. Organizations understand human risk on paper, but in practice, it still gets the least investment and attention. People think of human error as something to train out of existence, but the real work is building your architecture with the understanding that you're not protecting bits and bytes; you're protecting people.

When you treat cybersecurity as purely technical, you lose sight of how people actually work. People will find ways to get their jobs done even if that means taking shortcuts or using tools outside the official process. If leaders don’t take the time to understand what drives those decisions, they’ll keep treating symptoms rather than addressing the root causes. That’s why integrating security thinking early in planning, in design, in how work really happens, is so important.

You’ve emphasized that insider risks are frequently unintentional. What are some real-world behaviors or scenarios that typically lead to these kinds of risks?

In many cases, internal security incidents are caused by the errors or negligence of employees or others with authorized access to the organization's systems and networks. You don't have to plan a cybercrime to cause an incident; sometimes, one accidental click is all it takes.

One of the biggest dangers of insider threats is how easily they go unnoticed. Since the people involved often use valid credentials, they don't immediately raise red flags. It might be someone who falls for a phishing email, skips a protocol, or misconfigures a system without realizing the impact. We're also seeing more situations where users, such as contractors, vendors, or former employees, have legitimate access that has been hijacked.

Insider threats aren’t just technical failures; they reflect human dynamics, outdated processes, and gaps in security infrastructure. Protecting against them means pairing strong tools with well-prepared teams and creating a culture where people understand their role in keeping the organization secure.

Many cybersecurity strategies focus heavily on tools and systems. How can leaders recalibrate their approach to put human behavior and culture at the center of their security framework?

It starts with realizing that technology problems often require human solutions. Leaders need to make sure they’re building their security architecture with people in mind. 

Too often, organizations bring security in at the end. The earlier you have people thinking about architecture and security together, the more naturally security gets built into the process, and the better your outcomes will be. It's in those early conversations that you figure out how people are going to interact with the technology and where friction might appear.

Culture is another piece. A resilient organization is one where everyone understands their role in protecting it and feels safe raising concerns. That kind of culture doesn’t happen through one-time training; it happens when leaders make security a shared value that runs through the business, not a checklist.

How does organizational structure—especially siloed teams—contribute to weak spots in security posture, and what steps can leaders take to bridge these gaps effectively?

Silos are one of the biggest hidden risks in security. When teams don’t talk to each other, it’s easy for things to fall through the cracks. One group assumes another is handling something, and no one realizes the gap until there’s a problem.

Leaders have to be intentional about breaking down those silos. That means creating more opportunities for teams to connect early in the process. It also helps when leaders stay close to what’s happening on the ground. You can’t lead security from behind a desk. You need to understand what tools people are using, where they’re running into obstacles, and what they need to do their jobs well. The more you understand that day-to-day reality, the better you can align teams and eliminate those gaps before they turn into risks.

With AI now transforming how we work, what new human vulnerabilities or biases do you believe companies need to pay closer attention to?

Shadow AI is a growing problem in the workforce. For example, an employee may use personal devices or AI tools like ChatGPT on their phones because the approved tools don't meet their needs. While it may seem harmless at first, it can create significant risk, such as data leakage or legal and regulatory liabilities.

When people turn to workarounds, it's often a signal that security isn't aligned with how they actually work. When it comes to shadow AI, leaders must first acknowledge that there is a problem, then put themselves in their employees' shoes to understand why they're using it. Much of what employees are trying to do is well within the bounds of approval if you ensure they have the right tools.

How can CIOs and CISOs foster a stronger culture of collaboration and psychological safety, where employees feel empowered to engage in cybersecurity rather than intimidated by it?

In cybersecurity, we have to think outside the box. Threat actors are constantly finding new angles, so defenders have to do the same. That means encouraging diverse thinking, making it safe to share ideas, and giving people the room to approach problems in unexpected ways.

Leaders set that tone. The best ones are willing to admit when they don’t have all the answers and surround themselves with people who bring different perspectives. Collaboration and trust are at the core of any strong security culture. People need to feel comfortable speaking up when they see something that doesn’t look right, asking questions, or admitting when they’ve made a mistake.

There’s also a real need to bridge the gap between leadership and technical teams. Five years ago, the C-suite often responded to what flowed up from the teams closest to the work. Now, a lot of conversations are dictated from the top down, which means important details can get missed. Part of a CIO’s responsibility is to illuminate those unknowns and make sure the organization is asking the right questions.

In developing more resilient and security-aware cultures, what role does continuous learning and behavioral reinforcement play?

There’s a community aspect to cybersecurity that’s easy to overlook. It’s important to remember we’re all working toward the same goal: protecting people and organizations. An organization with a resilient culture is one where everyone understands their role in that effort and takes accountability for it.

Leaders play a big part in shaping that mindset. When they create a culture where security is part of everyday work, it stops feeling like a checklist and starts becoming a shared value. That means encouraging people to practice safe online behaviors, to speak up when something looks off, and to keep learning as threats evolve.

Regular, role-based cybersecurity training is also key. Everyone needs different tools and awareness depending on what they do, and training programs have to evolve along with the threat landscape. Continuous learning keeps people informed, confident, and ready. 

Looking ahead, what will distinguish the next generation of cybersecurity leaders—those capable of managing both technological complexity and human dynamics?

The next generation of cybersecurity leaders will need to be translators. They’ll have to understand technology deeply, but also know how to connect it to people, process, and purpose. It’s not enough to be technical; you have to be able to explain risk and resilience in a way that makes sense to everyone in the organization.

Cybersecurity will always be complex, but leading through that complexity requires empathy, communication, and curiosity. The leaders who succeed will be those who can inspire trust both in their people and in the systems that protect them.

Maria Cardow

Chief Information Officer, LevelBlue

As the Chief Information Officer of LevelBlue, Maria Cardow leads global technology strategy and transformation initiatives, aligning cybersecurity operations with broader business objectives. She’s played a pivotal role in LevelBlue’s recent acquisitions of Aon’s Cybersecurity and Consulting Business, Trustwave, and Cybereason, managing security integrations and keeping a pulse on potential security gaps throughout the process. Prior to LevelBlue, she held senior leadership roles at top firms, including Goldman Sachs, Citadel, Credit Suisse, Merrill Lynch, and CIBC, where she successfully guided large-scale technology modernization, infrastructure optimization, and security transformation programs. Maria has been instrumental in building high-performing, diverse global technology teams that deliver measurable business results through improved efficiency, resilience, and innovation.

AI TechPark

Artificial Intelligence (AI) is penetrating the enterprise in an overwhelming way, and the only choice organizations have is to thrive through this advanced tech rather than be deterred by its complications.
