Interview

AITech Interview with Patrick Harding, Chief Product Architect at Ping Identity

Moving beyond static credentials to dynamic, real-time authentication for autonomous systems. Learn why AI agents require first-class identity.

Patrick, you’ve had an extensive career shaping the evolution of digital identity. What initially drew you to this field, and how has your perspective evolved with the rise of AI?

I’ve always been fascinated by the intersection of technology and trust, specifically how systems recognize who or what they’re dealing with, and how that recognition enables secure interaction. When I first entered the identity space, the focus was on managing human users and applications. The parameters were relatively fixed. With AI, those boundaries have blurred. We’re no longer just authenticating people or devices; we’re authenticating autonomous systems that make decisions, learn, and even act on our behalf. The rise of agentic AI forces us to rethink identity as a dynamic, continuously verified construct rather than a static credential. It’s an evolution from “who are you?” to “who are you right now, and what are you capable of doing?”

As enterprises begin integrating agentic AI into their operations, what fundamental shifts do you see happening in the way identity is defined and managed?

The biggest shift is moving from identity as a point-in-time assertion to identity as an ongoing relationship. Traditional systems authenticate a user once and assume that trust persists. With AI agents, identity must reflect the context, including the model version running, the data it has access to, and how its behavior has evolved. Enterprises will need identity systems that are event-driven and adaptive, continuously evaluating risk and intent as conditions change. This is a foundational change – AI identities must be as dynamic as the agents themselves.

You’ve spoken about treating AI agents as “first-class identities.” What does that concept entail, and why is it such a crucial change in mindset for organizations?

Treating AI agents as first-class identities means giving them the same rigor, visibility, and governance that we apply to human or service accounts. Today, many organizations deploy AI tools under shared credentials or broad API keys, essentially treating them as anonymous helpers. That approach breaks down once these agents start making independent decisions. A first-class identity enables fine-grained control: we can define what an agent is authorized to do, track its behavior, and revoke its access instantly if something goes wrong. Without that shift, enterprises risk losing accountability in their AI ecosystems.

How does dynamic governance differ from traditional identity frameworks, and what makes it essential when managing AI systems that continuously learn and evolve?

Traditional identity frameworks assume static relationships – a user joins, gets a role, and maybe changes departments. AI systems don’t follow that model. They evolve by learning new skills, ingesting new data, and interacting in unpredictable ways. Dynamic governance adds continuous oversight. It uses telemetry and policy feedback loops to adapt entitlements and risk thresholds in real time. Instead of a quarterly access review, governance becomes a living process that adjusts as the AI’s behavior changes. It’s the only viable way to maintain control when your “users” are self-modifying systems.

Managing the credential lifecycle for AI agents sounds complex. What processes or safeguards should enterprises implement to ensure responsible provisioning and retirement of these identities?

Enterprises should think in terms of cradle-to-grave identity management for AI. That starts with secure provisioning, assigning unique credentials and scoped permissions when the agent is deployed. Those credentials must be rotated and updated automatically as the AI model or its dependencies evolve. Equally important is retirement. When an AI agent is decommissioned or replaced, its credentials must be revoked and its data lineage captured. Automating that lifecycle prevents orphaned identities and reduces exposure from forgotten or reused access keys.
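The cradle-to-grave lifecycle described above can be sketched in code. This is an illustrative model only, not Ping Identity's implementation; the `AgentCredentialRegistry` class and its method names are hypothetical, standing in for whatever secrets-management or IAM service an enterprise actually uses.

```python
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    """Hypothetical scoped credential issued to one AI agent."""
    agent_id: str
    token: str
    scopes: list[str]
    expires_at: datetime
    revoked: bool = False

class AgentCredentialRegistry:
    """Sketch of cradle-to-grave credential management for AI agents:
    unique, scoped, short-lived credentials that are rotated and retired."""

    def __init__(self, ttl: timedelta = timedelta(hours=1)):
        self.ttl = ttl
        self._creds: dict[str, AgentCredential] = {}

    def provision(self, agent_id: str, scopes: list[str]) -> AgentCredential:
        # Secure provisioning: unique token, explicit scopes, built-in expiry.
        cred = AgentCredential(agent_id, secrets.token_urlsafe(32), scopes,
                               datetime.now(timezone.utc) + self.ttl)
        self._creds[agent_id] = cred
        return cred

    def rotate(self, agent_id: str) -> AgentCredential:
        # Rotation: invalidate the old token, reissue with the same scopes.
        old = self._creds[agent_id]
        old.revoked = True
        return self.provision(agent_id, old.scopes)

    def retire(self, agent_id: str) -> None:
        # Retirement: revoke and remove so no orphaned identity lingers.
        cred = self._creds.pop(agent_id)
        cred.revoked = True

    def is_valid(self, agent_id: str, token: str) -> bool:
        cred = self._creds.get(agent_id)
        return (cred is not None and not cred.revoked
                and cred.token == token
                and datetime.now(timezone.utc) < cred.expires_at)
```

In practice the registry would sit behind a vault or IAM service, but the shape is the same: every agent gets its own short-lived, scoped credential, and retirement is as explicit an operation as provisioning.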

Real-time verification is becoming increasingly important. How does continuous authentication strengthen trust and accountability in AI-driven environments?

Continuous authentication ensures that trust isn’t static. In AI environments, it’s not enough to know who initiated an action; you must also know whether the actor is still trustworthy at that moment. By continuously monitoring behavior, model integrity, and context, we can detect deviations that signal compromise or malfunction. It’s the equivalent of constantly checking that an AI agent is still behaving within expected bounds, strengthening both security and auditability.
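One minimal way to model that "still trustworthy at this moment" check is a trust score that each observed action updates: in-profile behavior slowly restores trust, out-of-profile behavior sharply erodes it. The class below is a hypothetical sketch of that idea (the names, weights, and threshold are illustrative assumptions, not a real product's algorithm).

```python
class ContinuousAuthMonitor:
    """Illustrative continuous-authentication model: every observed action
    either matches the agent's expected behavior profile (trust recovers
    slowly) or deviates from it (trust is cut sharply)."""

    def __init__(self, threshold: float = 0.5):
        self.trust = 1.0          # start fully trusted after initial auth
        self.threshold = threshold

    def observe(self, action: str, allowed_actions: set[str]) -> None:
        if action in allowed_actions:
            self.trust = min(1.0, self.trust + 0.05)  # slow recovery
        else:
            self.trust *= 0.5     # sharp penalty for out-of-profile behavior

    def is_trusted(self) -> bool:
        # Authorization decisions re-check this on every request,
        # not just at session start.
        return self.trust >= self.threshold
```

The asymmetry is deliberate: trust should be slow to earn and quick to lose, so a single anomaly drops the agent near the threshold and repeated anomalies force re-verification or revocation.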

What role do entitlement policies play in defining the boundaries of what AI agents are allowed to do, and how can they prevent misuse or overreach?

Entitlement policies act as the operational “fence line” for AI agents. They define what data an agent can access, which systems it can interact with, and under what conditions. Well-designed policies not only prevent overreach, but also encode ethical and compliance boundaries. By embedding these rules directly into the identity framework, enterprises can ensure that even the most autonomous agents operate within acceptable limits.
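A minimal sketch of such a fence line is a deny-by-default entitlement check, where each entitlement names a resource, a set of actions, and a contextual condition. The `Entitlement` structure and resource names below are hypothetical, chosen only to show how conditions can encode compliance boundaries such as business-hours access.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Entitlement:
    resource: str                  # e.g. "crm:customer-records" (hypothetical)
    actions: frozenset[str]        # e.g. {"read"}
    # Contextual guard evaluated per request; defaults to always-true.
    condition: Callable[[dict], bool] = field(default=lambda ctx: True)

def is_permitted(entitlements: list[Entitlement], resource: str,
                 action: str, ctx: dict) -> bool:
    """Deny by default: permit only if an entitlement explicitly matches
    the resource, the action, and the request context."""
    return any(e.resource == resource
               and action in e.actions
               and e.condition(ctx)
               for e in entitlements)

# Example policy: an agent may read customer records only during business hours.
business_hours_read = Entitlement(
    resource="crm:customer-records",
    actions=frozenset({"read"}),
    condition=lambda ctx: 9 <= ctx["hour"] < 17,
)
```

Because the default is denial, an agent with no matching entitlement can do nothing, and every widening of its authority is an explicit, auditable policy change.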

Revocation speed seems critical in mitigating AI-related risks. How can enterprises design systems that enable immediate response when an agent behaves unpredictably or maliciously?

Speed is everything in AI risk mitigation. Once an agent starts producing harmful outputs or misusing data, you have seconds to act. Enterprises need automated kill-switch mechanisms tied directly to the identity layer. That means the ability to revoke credentials and cut off API access in real time – no human approval loop required. Coupled with anomaly detection and behavioral analytics, this enables instantaneous containment and audit logging.
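A kill switch tied to the identity layer can be as simple as the sketch below: an anomaly signal revokes access and writes an audit record in the same step, with no approval loop in the path. The class and signal names are illustrative assumptions; in production the anomaly signal would come from behavioral analytics and the revocation would propagate to token issuers and API gateways.

```python
class IdentityKillSwitch:
    """Sketch of automated containment: an anomaly report immediately
    revokes the agent's access and records an audit entry, with no
    human approval loop between detection and containment."""

    def __init__(self):
        self.revoked: set[str] = set()
        self.audit_log: list[dict] = []

    def report_anomaly(self, agent_id: str, detail: str) -> None:
        self.revoked.add(agent_id)  # cut off access in the same step
        self.audit_log.append({"agent": agent_id, "reason": detail})

    def has_access(self, agent_id: str) -> bool:
        # Every API gateway / token check consults this before serving.
        return agent_id not in self.revoked
```

The key design choice is ordering: revoke first, investigate second. Reinstating a wrongly flagged agent is cheap; unwinding the damage from a compromised one is not.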

With multiple states pushing forward with AI-specific legislation, how should enterprises align their identity strategies to stay compliant and future-ready?

The regulatory landscape is evolving quickly, and identity is at the center of most AI governance frameworks. Enterprises should design their identity systems with traceability, consent management, and accountability built in. By linking every AI action to a verifiable identity and maintaining transparent audit trails, organizations can demonstrate compliance regardless of how state or federal laws evolve. Think of identity as the backbone of compliance for AI.

Looking ahead, how do you envision the relationship between identity and AI evolving? Will identity management ultimately become the central pillar of secure AI adoption?

Identity will be the organizing principle that binds AI ecosystems together. As agentic AI becomes more autonomous, we’ll need a common fabric of trust to govern interactions between humans, systems, and intelligent agents. Identity management provides that fabric. It’s how we define accountability, enforce policy, and maintain transparency. In that sense, the future of AI security isn’t just about smarter models, it’s about smarter identity.

Quote by Author: It’s important to recognize that every identity – human or AI – needs to be treated with the same level of caution. The key is to anticipate threats, not just react to them.

Patrick Harding

Chief Product Architect at Ping Identity

Patrick Harding, Chief Product Architect at Ping Identity, leads innovation, architecture, and identity standards. With 25+ years in security and identity, he helped develop SCIM for automating identity data exchange. He has served on the boards of the Information Card Foundation and Open Identity Exchange and represents Ping on the Open Wallet Foundation. His experience spans financial services, travel, blockchain, and security consulting.

