Interview

AITech Interview with Joel Rennich, VP of Product Management at JumpCloud

Learn how AI influences identity management in SMEs, balancing security advancements with ethical concerns.

Joel, how have the unique challenges faced by small and medium-sized enterprises influenced their adoption of AI in identity management and security practices?

So we commission a biannual small to medium-sized enterprise (SME) IT Trends Report that looks specifically at the state of SME IT. The most recent version shows how quickly AI has impacted identity management and highlights that SMEs are somewhat ambivalent as they look at AI. IT admins are excited and aggressively preparing for it—but they also have significant concerns about AI’s impact. For example, nearly 80% say that AI will be a net positive for their organization, 20% believe their organizations are moving too slowly on AI initiatives, and 62% already have AI policies in place, which is pretty remarkable considering all that IT teams at SMEs have to manage. But SMEs are also pretty wary about AI in other areas. More than six in ten (62%) agree that AI is outpacing their organization’s ability to protect against threats, and nearly half (45%) are worried about AI’s impact on their job. I think this ambivalence reflects the challenges SMEs face in evaluating and adopting AI initiatives – with smaller teams and tighter budgets, they don’t have the training and staff their enterprise counterparts have. But I don’t think it’s unique to SMEs. Until AI matures a little, it can feel more like a distraction than a benefit.

Considering your background in identity, what critical considerations should SMEs prioritize to protect identity in an era dominated by AI advancements?

I think caution is probably the key consideration. A couple of suggestions for getting started:

Data security and privacy should be the foundation of any initiative. Put robust data protection measures in place, like encryption, secure access controls, and regular security audits, to safeguard against breaches. Also, make sure you’re adhering to existing data protection regulations like GDPR, and keep abreast of impending regulations in case new controls need to be implemented to avoid penalties and legal issues.

When integrating AI solutions, make sure they’re from reputable sources and are secure by design. Conduct thorough risk assessments and evaluate their data handling practices and security measures. And for firms working more actively with AI, research and use legal and technical measures to protect your innovations, like patents or trademarks.

With AI, it’s even more important to use advanced identity and access management (IAM) solutions so that only authorized individuals have access to sensitive data. Multi-factor authentication (MFA), biometric verification, and role-based access controls can significantly reduce the risk of unauthorized access. Continuous monitoring systems can help identify and thwart AI-related risks in real time, and having an incident response plan in place can help mitigate any security breaches.
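To make that concrete, here is a minimal sketch of how role-based access control and step-up MFA can work together. The role names, resources, and mfa_verified flag are hypothetical illustrations for this interview, not JumpCloud functionality or any particular product’s API.

```python
# Minimal sketch: role-based access control plus a step-up MFA requirement
# for sensitive resources. Roles, resources, and the mfa_verified flag are
# hypothetical examples, not a real product's API.

ROLE_PERMISSIONS = {
    "admin":    {"hr_records", "billing", "audit_logs"},
    "engineer": {"source_code", "build_pipeline"},
    "support":  {"ticket_queue"},
}

SENSITIVE_RESOURCES = {"hr_records", "billing", "audit_logs"}

def authorize(role: str, resource: str, mfa_verified: bool) -> bool:
    """Allow access only if the role grants the resource, and require a
    completed MFA challenge before touching sensitive resources."""
    allowed = resource in ROLE_PERMISSIONS.get(role, set())
    if allowed and resource in SENSITIVE_RESOURCES and not mfa_verified:
        return False  # step-up authentication still outstanding
    return allowed

# Non-sensitive access works without MFA; sensitive access requires it.
print(authorize("engineer", "source_code", mfa_verified=False))  # True
print(authorize("admin", "hr_records", mfa_verified=False))      # False
print(authorize("admin", "hr_records", mfa_verified=True))       # True
```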

Lastly, but perhaps most importantly, make sure that the AI technologies are used ethically, respecting privacy rights and avoiding bias. Developing an ethical AI framework can guide your decision-making process. Train employees on the importance of data privacy, recognizing phishing attacks, and secure handling of information. And be prepared to regularly update (and communicate!) security practices given the evolving nature of AI threats.

AI introduces both promises and risks for identity management and overall security. How do you see organizations effectively navigating this balance in the age of AI, particularly in the context of small to medium-sized enterprises?

First off, integrating AI has to involve more than just buzzwords – and I’d say that we still need to wait until AI accuracy is better before SMEs undertake too many AI initiatives. But at the core, teams should take a step back and ask, “Where can AI make a difference in our operations?” Maybe it’s enhancing customer service, automating compliance processes, or beefing up security. Before going all in, it’s wise to test the waters with pilot projects to get a real feel of any potential downstream impacts without overcommitting resources.

Building a security-first culture—this is huge. It’s not just the IT team’s job to keep things secure; it’s everybody’s business. From the C-suite to the newest hire, SMEs should seek to create an environment where everyone is aware of the importance of security, understands the potential threats, and knows how to handle them. And yes, this includes understanding the role of AI in security, because AI can be both a shield and a sword.

AI for security is promising as it’s on another level when it comes to spotting threats, analyzing behavior, and monitoring systems in real time. It can catch things humans might miss, but again, it’s VITAL to ensure the AI tools themselves are built and used ethically. AI for compliance also shows a lot of promise. It can help SMEs stay on top of regulations like GDPR or CCPA, not only to avoid fines but also to build trust and reputation.

Because there are a lot of known unknowns around AI, industry groups can be a good source of information sharing and collaboration. There’s wisdom and strength in numbers, and a real benefit in shared knowledge. It’s about being strategic, inclusive, ethical, and always on your toes. It’s a journey, but with the right approach, the rewards can far outweigh the risks.

Given the challenges in identity management across devices, networks, and applications, what practical advice can you offer for organizations looking to leverage AI’s strengths while addressing its limitations, especially in the context of password systems and biometric technologies?

It’s a surprise to exactly no one that passwords are often the weakest security link. We’ve talked about ridding ourselves of passwords for decades, yet they live on. In fact, our recent report just found that 83% of organizations use passwords for at least some of their IT resources. So I think admins in SMEs know well that despite industry hype around full passwordless authentication, the best we can do for now is to have a system to manage them as securely as possible. In this area, AI offers a lot. Adaptive authentication—powered by AI—can significantly improve an org’s security posture. AI can analyze things like login behavior patterns, geo-location data, and even the type of device being used. So, if there’s a login attempt that deviates from the norm, AI can flag it and trigger additional verification steps or step-up authentication. Dynamic layers of security that adapt based on context are far more robust than static passwords alone.
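As an illustration only, here is a toy risk-scoring sketch for that kind of adaptive authentication. The signals, weights, and thresholds are hypothetical; a real system would learn them from login telemetry rather than hard-code them.

```python
# Toy adaptive-authentication sketch: score the risk of a login attempt
# from contextual signals and decide whether to allow it, require step-up
# verification, or block it. Weights and thresholds are invented for
# illustration.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    known_device: bool       # device previously enrolled by this user
    usual_country: bool      # geo-location matches recent history
    usual_hours: bool        # login time falls within the typical window
    impossible_travel: bool  # distance/time since last login is implausible

def risk_score(attempt: LoginAttempt) -> float:
    """Accumulate risk from contextual signals; higher is riskier."""
    score = 0.0
    if not attempt.known_device:
        score += 0.3
    if not attempt.usual_country:
        score += 0.3
    if not attempt.usual_hours:
        score += 0.1
    if attempt.impossible_travel:
        score += 0.5
    return score

def decide(attempt: LoginAttempt) -> str:
    """Map the risk score to an action."""
    score = risk_score(attempt)
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up"  # e.g. prompt for an additional MFA factor
    return "block"

print(decide(LoginAttempt(True, True, True, False)))    # allow
print(decide(LoginAttempt(False, False, True, False)))  # step-up
print(decide(LoginAttempt(False, False, False, True)))  # block
```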

Biometric technologies offer a unique, nearly unforgeable means of identification, whether through fingerprints, facial recognition, or even voice patterns. Integrating AI with biometrics makes them much more precise because AI algorithms can process complex biometric data quickly, improve the accuracy of identity verification processes, and reduce the chances of both false rejections and false acceptances. Behavioral biometrics can analyze typing patterns, mouse or keypad movements, and navigation patterns within an app for better security. AI systems can be trained to detect pattern deviations and flag potential security threats in real time. The technical challenge here is to balance sensitivity and specificity—minimizing false alarms while ensuring genuine threats are promptly identified.
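To give a rough sense of that sensitivity-versus-specificity trade-off, here is a toy sketch that compares a session’s keystroke timing against an enrolled baseline using a simple z-score. The timing data and threshold are invented; production behavioral-biometrics systems use far richer features and models.

```python
# Toy behavioral-biometrics check: flag a session whose average keystroke
# cadence deviates too far from a user's enrolled baseline. The data and
# threshold are illustrative only.

import statistics

def build_baseline(intervals_ms):
    """Mean and standard deviation of inter-keystroke intervals (ms)
    collected during enrollment."""
    return statistics.mean(intervals_ms), statistics.stdev(intervals_ms)

def is_anomalous(session_intervals_ms, baseline, z_threshold=3.0):
    """Flag the session if its average cadence is too far from baseline.
    Lowering z_threshold raises sensitivity (catches more deviations) at
    the cost of specificity (more false alarms)."""
    mean, stdev = baseline
    session_mean = statistics.mean(session_intervals_ms)
    z = abs(session_mean - mean) / stdev if stdev else 0.0
    return z > z_threshold

baseline = build_baseline([110, 120, 115, 130, 125, 118])  # enrollment data
print(is_anomalous([112, 119, 127, 121], baseline))        # False: matches baseline
print(is_anomalous([220, 240, 260, 230], baseline))        # True: very different cadence
```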

A best practice with biometrics is to employ end-to-end encryption for biometric data, both at rest and in transit. Implement privacy-preserving techniques like template protection methods, which convert biometric data into a secure format that protects against data breaches and ensures that the original biometric data cannot be reconstructed.
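As a simple illustration of the encryption-at-rest piece (not of template protection itself, which transforms the template so the original can never be reconstructed), here is a hedged sketch using the third-party Python cryptography package to encrypt a stored template. Key management is elided; in practice the key would live in a KMS or HSM, never next to the data.

```python
# Minimal sketch: encrypt a biometric template before it is written to
# storage, using symmetric encryption from the `cryptography` package
# (pip install cryptography). This protects the stored bytes if the
# database leaks; it is not a substitute for template protection schemes.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetched from a KMS/HSM
cipher = Fernet(key)

raw_template = b"\x01\x02\x03..."           # placeholder feature vector
stored_blob = cipher.encrypt(raw_template)  # what actually goes in the database

# The verification service decrypts just-in-time, in memory only.
recovered = cipher.decrypt(stored_blob)
assert recovered == raw_template
```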

AI and biometric technologies are constantly evolving, so it’s necessary to keep your systems updated with the latest patches and software updates. 

How has the concept of “identity” evolved in today’s IT environment with the influence of AI, and what aspects of identity management have remained unchanged?

Traditionally, identity in the workplace was very much tied to physical locations and specific devices. You had workstations, and identity was about logging into a central network from these fixed points. It was a simpler time when the perimeter of security was the office itself. You knew exactly where data lived, who had access, and how that access was granted and monitored.

Now it’s a whole different ballgame. This is actually at the core of what JumpCloud does. Our open directory platform was created to securely connect users to whatever resources they need, no matter where they are. In 2024, identity is significantly more fluid and device-centered. Post-pandemic, and with the rise of mobile technology, cloud computing, and now the integration of AI, identities are no longer tethered to a single location or device. SMEs need employees to be able to access corporate resources from anywhere, at any time, using a combination of different devices and operating systems—Windows, macOS, Linux, iOS, Android. This shift necessitates a move from a traditional, perimeter-based security model to what’s often referred to as a zero-trust model, where every access transaction needs to have its own perimeter drawn around it.

In this new landscape, AI can vastly improve identity management in terms of data capture and analysis for contextual approaches to identity verification. As I mentioned, AI can consider the time of access, the location, the device, and even the behavior of the user to make real-time decisions about the legitimacy of an access request. This level of granularity and adaptiveness in managing access wasn’t possible in the past.

However, some parts of identity management have stayed the same. The core principles of authentication, authorization, and accountability still apply. We’re still asking the fundamental questions: “Are you who you say you are?” (authentication), “What are you allowed to do?” (authorization), and “Can we account for your actions?” (accountability). What has changed is how we answer these questions. We’re in the process of moving from static passwords and fixed access controls to more dynamic, context-aware systems enabled by AI.

In terms of identity processes and applications, what is the current role of AI for organizations, and how do you anticipate this evolving over the next 12 months?

We’re still a long way from the Skynet-type AI future that we’ve all associated with AI since The Terminator. For SMEs, AI accelerates a shift away from traditional IT management to an approach that’s more predictive and data-centric. At the core of this shift is AI’s ability to sift through vast, disparate data sets, identifying patterns and predicting trends; from an identity management standpoint, its power is in preempting security breaches and fraudulent activities. It’s tricky, though, because you have to balance promise and risk, like legitimate concerns about data governance and the protection of personally identifiable information (PII). When tapping AI’s capabilities, we need to ensure that we’re not overstepping ethical boundaries or compromising on data privacy. Go slow, and be intentional.

Robust data management frameworks that comply with evolving regulatory standards can protect the integrity and privacy of sensitive information. But keep in mind that no matter how much AI automates processes, there’s still a critical need for human oversight. The reality is that AI, at least in its current form, is best utilized to augment human decision-making, not replace it. As AI systems grow more sophisticated, organizations will require workers with specialized skills and competencies in areas like machine learning, data science, and AI ethics.

Over the next 12 months, I anticipate we’ll see organizations doubling down on these efforts to balance automation with ethical consideration and human judgment. SMEs will likely focus on designing and implementing workflows that blend AI-driven efficiencies with human insight, but they’ll have to be realistic based on available budget, hours, and talent. And I think we’ll see an increased push toward upskilling existing personnel and recruiting specialized talent.

For IT teams, I think AI will get them closer to eliminating tool sprawl and help centralize identity management, which is something we consistently hear that they want. 

When developing AI initiatives, what critical ethical considerations should organizations be aware of, and how do you envision governing these considerations in the near future?

As AI systems process vast amounts of data, organizations must ensure these operations align with stringent privacy standards and don’t compromise data integrity. Organizations should foster a culture of AI literacy to help teams set realistic and measurable goals, and ensure everyone in the organization understands both the potential and the limitations of AI technologies.

Organizations will need to develop more integrated and comprehensive governance policies around AI ethics that address:

How will AI impact our data governance and privacy policies? 

What are the societal impacts of our AI deployments? 

What components should an effective AI policy include, and who should be responsible for managing oversight to ensure ethical and secure AI practices?

Though AI is evolving rapidly, there are solid efforts from regulatory bodies to establish frameworks, working toward regulations for the entire industry. The White House’s National AI Research and Development Strategic Plan is one such example, and businesses can glean quite a bit from that. Internally, I’d say it’s a shared responsibility. CIOs and CTOs can manage the organization’s policy and ethical standards, Data Protection Officers (DPOs) can oversee compliance with privacy laws, and ethics committees or councils can offer multidisciplinary oversight. I think we’ll also see a move toward involving more external auditors who bring transparency and objectivity.

In the scenario of data collection and processing, how should companies approach these aspects in the context of AI, and what safeguards do you recommend to ensure privacy and security?

The Open Worldwide Application Security Project (OWASP) has a pretty exhaustive list and guidelines. For a guiding principle, I’d say be smart and be cautious. Only gather data you really need, tell people what you’re collecting, why you’re collecting it, and make sure they’re okay with it. 

Keeping data safe is non-negotiable. Security audits are important to catch any issues early. If something does go wrong, have a plan ready to fix things fast. It’s about being prepared, transparent, and responsible. By sticking to these principles, companies can navigate the complex world of AI with confidence.

Joel Rennich

VP of Product Management at JumpCloud 

Joel Rennich is the VP of Product Strategy at JumpCloud, residing in the greater Minneapolis, MN area. He focuses primarily on the intersection of identity, users, and the devices that they use. While Joel has spent most of his professional career focused on Apple products, at JumpCloud he leads a team focused on device identity across all vendors. Prior to JumpCloud, Joel was a director at Jamf, helping to make Jamf Connect and other authentication products. In 2018, Jamf acquired Joel’s startup, Orchard & Grove, which is where Joel developed the widely used open source software NoMAD. Installed on over one million Macs across the globe, NoMAD allows macOS users to get all the benefits of Active Directory without having to be bound to it. Joel also developed other open source software at Orchard & Grove, such as DEPNotify and NoMAD Login. Over the years, Joel has been a frequent speaker at a number of conferences, including WWDC, MacSysAdmin, MacADUK, the Penn State MacAdmins Conference, Objective by the Sea, FIDO Authenticate, and others, in addition to user groups everywhere. Joel spent over a decade working at Apple in Enterprise Sales and started the website afp548.com, which was a mainstay of Apple system administrator education during the early years of Mac OS X.
