Shift from static software to active governance. Learn why treating AI like a new hire is the only way to ensure accountability and security.
Joel Burleson-Davis, as we begin, how has your path to becoming CTO at Imprivata shaped the way you interpret the evolving role of AI inside modern organizations?
I’ve spent my career building high-performing teams and technologies in high-stakes environments, first at SecureLink and now at Imprivata, across industrials, high tech, and healthcare. It’s taught me that the more powerful or impactful the capability, the more intentional and forward-leaning the governance has to be. I see AI the same way: it’s powerful and impactful, so introducing it should lead with governance, with intentional scope and clear value. Within that framework, you can let the new, powerful technology move you as fast as is safe.
What makes AI adoption fundamentally different from integrating traditional enterprise technologies?
AI adoption is fundamentally different because organizations won’t just use it like any other tool; they’ll work with it. Traditional enterprise tech is normally deterministic and static; AI systems learn, evolve, and adapt, and even when they don’t, they’re still not deterministic. We’re already seeing AI agents that can, like employees, make autonomous decisions and take actions, which turns AI into a dynamic participant in workflows, not a passive system in the background. That shift means governance, training, and oversight can’t be bolted on later. They have to be designed in from day one.
Explain the risks organizations face when they treat AI systems as independent tools rather than entities requiring structured guidance.
Treating AI systems as independent tools invites real organizational risk. For example, when you point AI at initiatives that touch large volumes of user data (personal information, access patterns, behavioral signals) without strong controls, you’re inviting a future breach. Over-reliance on automation adds another layer of risk: AI can streamline a lot of work, but critical decisions still need human judgment in the loop or as a backstop, particularly if you expect human-level accountability. Otherwise, you end up with black-box outcomes and potential errors like denied access, missed threats, and other failures that directly impact people and security, with no real ownership or accountability.
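To make that backstop concrete, here is a minimal sketch of one way to route high-risk AI decisions to a person rather than letting them auto-execute. The names and threshold (`Decision`, `risk_score`, `HUMAN_REVIEW_THRESHOLD`) are illustrative assumptions, not any specific product’s API:

```python
# Hypothetical human-in-the-loop backstop: the AI may act autonomously on
# low-risk decisions, but anything above a risk threshold is queued for a
# person. All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 0.7  # assumed cutoff; tuned per use case

@dataclass
class Decision:
    action: str        # e.g. "grant_access"
    subject: str       # who or what the decision affects
    risk_score: float  # 0.0 (benign) .. 1.0 (critical), from the model

def route(decision: Decision) -> str:
    """Auto-execute low-risk decisions; escalate the rest to a human."""
    if decision.risk_score >= HUMAN_REVIEW_THRESHOLD:
        return f"QUEUED FOR HUMAN REVIEW: {decision.action} -> {decision.subject}"
    return f"AUTO-EXECUTED: {decision.action} -> {decision.subject}"

print(route(Decision("grant_access", "contractor-42", risk_score=0.9)))
print(route(Decision("reset_password", "employee-7", risk_score=0.2)))
```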
How does framing AI as something that must be “trained” shift expectations around early deployment and long-term management?
Framing AI as something you have to train resets expectations: it’s not plug-and-play; it’s a long-term, lifecycle commitment. Like employees, AI systems need onboarding, training, feedback, and supervision. This means planning for longer initial deployments, clear checkpoints, and human oversight wired into every stage of the AI lifecycle.
What types of supervision are essential to prevent AI from taking autonomous actions that fall outside intended boundaries?
You can’t just “set and forget” AI and hope it stays in bounds. You need supervision across the whole lifecycle, from training data integrity to post-deployment monitoring, to ensure AI operates within its defined limits. The practical way to do that is with a real AI governance program that effectively identifies and addresses AI-driven risks. A strong program puts privacy and security first and unites a cross-functional group from HR, legal, security, and other key teams. That diversity allows organizations to monitor and evaluate every AI use case across departments, understand the risk each introduces, and put safeguards in place before it impacts people or operations. And once that is in place, you have to execute the oversight program, just as if you were launching a new division with a set of new employees.
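One way to make “monitor and evaluate every AI use case” tangible is a shared risk register that the cross-functional group reviews before anything ships. This is a hypothetical sketch of such a register, not a description of any particular company’s program; every field name is an assumption:

```python
# Hypothetical AI use-case risk register: every deployment gets an
# accountable owner, a risk rating, and the safeguards agreed on before
# launch. Fields and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    owner: str                      # an accountable human, not a team alias
    departments: list[str]
    risk_level: str                 # "low" | "medium" | "high"
    safeguards: list[str] = field(default_factory=list)
    approved: bool = False

register = [
    AIUseCase(
        name="resume screening assistant",
        owner="hr-director",
        departments=["HR", "Legal"],
        risk_level="high",
        safeguards=["human review of every rejection", "quarterly bias audit"],
    ),
]

# Simple governance gate: nothing ships without safeguards and sign-off.
for uc in register:
    ready = uc.approved and uc.safeguards
    print(f"{uc.name}: {'cleared' if ready else 'blocked pending review'}")
```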
How should teams design oversight processes that mirror the mentorship and monitoring given to new employees?
Teams should design AI oversight much the way they onboard and support new employees: with structured onboarding, supervision, and guardrails. That starts with defined responsibilities and controlled permissions, then continues with auditable activity and regular reviews against expected outcomes, just as employees receive role-based access and regular performance reviews. On top of that, you need strong safeguards around data privacy, automated decision-making, and regulatory compliance so the AI doesn’t unintentionally expose sensitive data, make unchecked mistakes, or drift into policy violations.
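As a concrete illustration of “controlled permissions” plus “auditable activity,” here is a minimal sketch of role-scoped action allowlists with an audit trail for an AI agent. The roles and actions are assumptions made up for this example:

```python
# Hypothetical role-based guardrails for an AI agent: each agent role gets
# an explicit allowlist of actions, and every attempt is logged for later
# review, mirroring role-based access for employees. Names are illustrative.
from datetime import datetime, timezone

ALLOWED_ACTIONS = {  # assumed roles and actions, defined per deployment
    "helpdesk_agent": {"reset_password", "unlock_account"},
    "triage_agent": {"classify_ticket", "route_ticket"},
}

audit_log: list[dict] = []

def attempt(role: str, action: str, target: str) -> bool:
    """Allow the action only if the role's allowlist permits it; log everything."""
    allowed = action in ALLOWED_ACTIONS.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role, "action": action, "target": target,
        "allowed": allowed,
    })
    return allowed

attempt("helpdesk_agent", "reset_password", "employee-7")  # permitted
attempt("helpdesk_agent", "delete_account", "employee-7")  # denied, but logged
for entry in audit_log:
    print(entry)
```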
When AI errors occur, what governance mechanisms ensure clarity around responsibility and prevent accountability gaps?
When AI systems make mistakes, you still need a clear answer to a simple question: who’s on the hook for this? The best AI deployments won’t be judged by how sophisticated the models are, but by how well leaders understand what the AI is doing, why it’s doing it, and who’s responsible when something goes wrong. That requires strong governance frameworks that establish explicit ownership across every stage of the AI lifecycle. From there, access data and usage analytics help tighten accountability by showing, in real time, who’s using which AI capabilities, how often, and for what purposes.
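A minimal sketch of what “who’s using which AI capabilities, how often, and for what purposes” could look like over an audit log. The event shape is an assumption for illustration; real telemetry would be richer:

```python
# Hypothetical usage analytics over an AI audit log: count, per user and
# capability, how often each feature is invoked and for what stated purpose.
from collections import Counter

events = [
    {"user": "alice", "capability": "summarize", "purpose": "ticket triage"},
    {"user": "alice", "capability": "summarize", "purpose": "ticket triage"},
    {"user": "bob", "capability": "draft_email", "purpose": "customer reply"},
]

usage = Counter((e["user"], e["capability"], e["purpose"]) for e in events)
for (user, capability, purpose), count in usage.items():
    print(f"{user} used {capability} {count}x for {purpose}")
```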
How can organizations reinforce a culture where AI outcomes—positive or negative—are owned rather than dismissed as system failures?
To establish AI accountability across the organization, leaders must treat AI as part of the team, not as a black box that gets blamed when something breaks. That means shifting the mindset from “the system failed” to “we designed, trained, and governed this, so we own the outcome.” When something goes wrong, teams should approach it like a post-incident review: what did the model do, what in our data, prompts, or guardrails allowed it, and how do we improve both the AI and the surrounding process? That kind of response culture reinforces shared ownership and closes the gaps in accountability.
What are the most important early signals that an AI system is drifting from expected behavior and needs intervention?
Early signs of AI drift usually show up in how it behaves day to day. Sudden spikes in denied access, anomalous approvals, or inconsistent classifications are all strong signals the model is straying from its trained parameters and requires intervention. A rise in manual overrides is another key indicator. If humans are constantly stepping in, they’re quietly compensating for declining accuracy. Together, these patterns provide an early chance to intervene before small deviations become significant operational or security issues.
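As an illustration of watching override rates, here is a minimal drift check that flags when the manual-override rate jumps above a trailing baseline. The threshold, window, and data are assumptions; a real system would also track denials and classification consistency:

```python
# Hypothetical drift signal: compare the current period's manual-override
# rate against a trailing baseline and flag a spike. Values are illustrative.
from statistics import mean

# Fraction of AI decisions humans overrode, one value per week (assumed data).
weekly_override_rates = [0.04, 0.05, 0.04, 0.06, 0.05, 0.13]

SPIKE_FACTOR = 2.0  # assumed: alert if the current rate doubles the baseline

baseline = mean(weekly_override_rates[:-1])
current = weekly_override_rates[-1]

if current > SPIKE_FACTOR * baseline:
    print(f"ALERT: override rate {current:.0%} vs baseline {baseline:.0%}; investigate drift")
else:
    print("Override rate within normal range")
```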
For leaders preparing long-term AI strategies, what mindset is necessary to balance innovation, ethical guardrails, and operational accountability?
Leaders building long-term AI strategies must adopt a dual mindset: one that leans into AI’s transformative upside and another that insists on disciplined governance and oversight. It’s crucial to recognize that AI isn’t just another neat thing in the enterprise toolkit; it’s a dynamic teammate that’s always learning, evolving, and adapting. Transparency, auditability, and accountability must remain non-negotiable priorities for sustainable, secure AI implementation.
“AI is no longer a tool you deploy — it’s a capability you manage. To unlock its full potential, leaders must build the same guardrails, accountability, and oversight they expect from their human workforce. Organizations that treat AI as a governed member of the team, not an autonomous system, will see the safest and most sustainable innovation.”

Joel Burleson-Davis
Chief Technology Officer, Imprivata
As Chief Technology Officer at Imprivata, Joel is responsible for building, delivering, and evolving Imprivata’s suite of cybersecurity products including Privileged Access Management, Privacy Monitoring, and Identity Governance solutions. Prior to Imprivata, Joel was Chief Technical Officer at SecureLink, where he led technology and operational strategy and execution across Product Development, Quality Assurance, IT and Cybersecurity Operations, Compliance, and Customer Success. Additionally, Joel is an established thought leader across mission-critical industries, with expert insights published in Infosecurity Magazine, MedCity News, InformationWeek, Healthcare IT News and more.
