AITech Interview with Kiranbir Sodhia, Senior Staff Engineering Manager at Google

Explore expert advice for tech leaders and organizations on enhancing DEI initiatives, with a focus on the ethical development and deployment of AI technologies.

Kiranbir, we’re delighted to have you at AI TechPark. Could you please share your professional journey with us, highlighting key milestones that led to your current role as a Senior Staff Engineering Manager at Google?

I started as a software engineer at Garmin and then at Apple. As I grew my career at Apple, I wanted to help and lead my peers the way my mentors had helped me. I also had an arrogant epiphany about how much more I could get done if I had a team of people just like me. That led to my first management role at Microsoft.

Initially, I found it challenging to balance my desire to have my team work my way with prioritizing their career growth. Eventually, I was responsible for a program where I had to design, develop, and ship an accessory for the HoloLens in only six months. I was forced to delegate and let go of specific aspects, and I realized I was getting in the way of progress.

My team was delivering amazing solutions I never would have thought of. I realized I didn’t need to build a team in my image. I had hired a talented team with unique skills. My job now was to empower them and get out of their way. This realization was eye-opening and humbled me.

I also realized the skills I used for engineering weren’t the same skills I needed to be an effective leader. So I started focusing on being a good manager. I learned from even more mistakes over the years and ultimately established three core values for every team I lead:

  1. Trust your team and peers, and give them autonomy.
  2. Provide equity in opportunity. Everyone deserves a chance to learn and grow.
  3. Be humble.

Following my growth as a manager, Microsoft presented me with several challenges and opportunities to help struggling teams. These teams moved into my organization after facing cultural setbacks, program cancellations, or bad management. Through listening, building psychological safety, providing opportunities, identifying future leaders, and refusing to indulge egos, I helped turn them around.

Helping teams become self-sufficient has defined my goals and career in senior management. That led to opportunities at Google where I could use those skills and my engineering experience.

In what ways have you personally navigated the intersection of diversity, equity, and inclusion (DEI) with technology throughout your career?

Personally, as a Sikh, I rarely see people who look like me in my city, let alone in my industry. At times, I have felt alone. I’ve asked myself: what will colleagues think and see the first time we meet?

I’ve been aware of the need to represent my community well, so nobody holds a bias against those who come after me. I feel the need to prove not just myself but my community, while feeling grateful for the Sikhs who broke barriers so I didn’t have to be the first. When I started looking for internships, I considered changing my name. When I first worked on the HoloLens, I couldn’t wear it over my turban.

These experiences led me to want to create a representative workplace that focuses on what you can do rather than what you look like or where you came from. A workplace that lets you be your authentic self. A workplace where you create products for everyone.

Given your experience, what personal strategies or approaches have you found effective in promoting diversity within tech teams and ensuring equitable outcomes?

One lesson I learned early in my career about making our recruiting pipeline more representative was patience. One of my former general managers shared a statistic he called the rule of halves:

  • 32 applications submitted
  • 16 resumes reviewed by the hiring manager
  • 8 candidates interviewed over an initial phone screen
  • 4 candidates in final onsite interviews
  • 2 offers given
  • 1 offer accepted

His point was that if you review applications in order, you will likely find a suitable candidate within the first thirty or so applications. To ensure you have a representative pipeline, you have to leave the role open to accept more applications, and you get to decide which applications to review first.
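The rule of halves above is simple enough to sketch as a funnel calculation; the stage names follow the list in the interview:

```python
# "Rule of halves" hiring funnel: each stage retains roughly half
# of the candidates from the stage before it.
stages = [
    "applications submitted",
    "resumes reviewed by the hiring manager",
    "initial phone screens",
    "final onsite interviews",
    "offers given",
    "offers accepted",
]

count = 32
for stage in stages:
    print(f"{count:>2} {stage}")
    count //= 2  # halve the pool at each stage
```

Starting from 32 applications, the loop prints the halving sequence 32, 16, 8, 4, 2, 1, ending with a single accepted offer, which matches the funnel above.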

Additionally, when creating job requisitions, prioritize what’s important for the company and not just the job. What are the skills and requirements in the long term? What skills are only necessary for the short term? I like to say, don’t just hire the best person for the job today, hire the best person for the team for the next five years. Try to screen in instead of screening out.

To ensure equitable outcomes, I point to my second leadership value, equity in opportunity. The reality of any team is that there might be limited high-visibility opportunities at any given time. For my teams, no matter how well someone delivered in the past, the next opportunity and challenge are given to someone else. Even if others might complete it faster, everyone deserves a chance to learn and grow. 

Moreover, we can focus on moving far, not just fast, when everyone grows. When this is practiced and rewarded, teams often find themselves being patient and supporting those currently leading efforts. While I don’t fault individuals who disagree, their growth isn’t more important than the team’s.

From your perspective, what advice would you offer to tech leaders and organizations looking to strengthen their DEI initiatives, particularly in the context of developing and deploying AI technologies?

My first piece of advice for any DEI initiative is to be patient. You won’t see changes in one day, so focus on seeing changes over time. That means not giving up early, with leaders giving their teams more time to recruit and interview rather than threatening to claw back positions if a vacancy isn’t filled.

Ultimately, AI models are only as good as the data they are trained on. Leaders need to think about the quality of the data. Do they have enough? Is there bias? Is there data that might help remove human biases? 

How do biased AI models perpetuate diversity disparities in hiring processes, and what role do diverse perspectives play in mitigating these biases in AI development?

Companies that already lack representation risk training their AI models on the skewed data of their current workforce. For example, Harvard Business Review, among other outlets, has reported that women tend to apply to a job only when they meet 100% of the required qualifications, while men apply when they meet just 60%. Suppose a company’s model was built on the skills and qualifications of its existing employees, some of which might not even be relevant to the role. In that case, it might discourage or screen out qualified candidates who don’t possess the same skill set.

Organizations should absolutely use data from current top performers but should be careful not to include irrelevant data. For example, how employees answer specific interview questions and perform actual work-related tasks is more relevant than their alma mater. They can fine-tune this model to give extra weight to data for underrepresented high performers in their organization. This change will open up the pipeline to a much broader population because the model looks at the skills that matter.
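As a minimal sketch of that weighting idea: train a screening model only on job-relevant signals, then give extra sample weight to examples from underrepresented high performers. Everything below is illustrative and assumed, not from the interview: the synthetic data, the feature names, the flag, and the doubled weight are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, synthetic features: a work-sample score and a
# structured-interview score. Proxies such as alma mater are
# deliberately excluded from the feature set.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Hypothetical flag marking underrepresented high performers; their
# positive examples get double weight during training (assumed value).
flag = rng.random(200) < 0.2
w = np.where(flag & (y == 1), 2.0, 1.0)

# Weighted logistic regression fit via plain gradient descent.
theta = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ theta))        # predicted probabilities
    grad = X.T @ (w * (p - y)) / w.sum()        # weighted gradient
    theta -= 0.5 * grad

accuracy = np.mean(((X @ theta) > 0) == (y == 1))
```

The doubled weight nudges the decision boundary toward correctly classifying the flagged positives; the same pattern is what a `sample_weight` argument does in common ML libraries.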

In your view, how can AI technologies be leveraged to enhance, rather than hinder, diversity and inclusion efforts within tech organizations?

Many organizations already have inherent familiarity biases. For example, they might prefer recruiting from the same universities or companies year after year. While it’s important to acknowledge that bias, it’s also important to remember that recruiting is challenging and competitive, and those avenues have likely yielded good candidates consistently and with less effort.

However, if organizations want to recruit better candidates, it makes sense to broaden their recruiting pool and leverage AI to make this more efficient. Traditionally, broadening the pool meant more effort in selecting a good candidate. But if you step back and focus on the skills that matter, you can develop various models to make recruiting easier. 

For example, biasing the model toward the traditional schools you recruit from doesn’t provide new value. However, if you collect data on successful employees and how they operate and solve problems, you could develop a model that helps evaluate candidates in interviews to determine their relevant skills. This doesn’t just open doors to new candidates and create new pipelines; it also strengthens the quality of recruiting from existing pipelines.

Then again, reinforcing the same skills could remove candidates with unique talent and out-of-the-box ideas that your organization doesn’t know it needs yet. The strategy above doesn’t necessarily promote diversity in thought.

As with any model, you must be careful to really know and understand what problem you’re solving and what success looks like, and to define both without bias.

In what specific ways do you believe AI can be utilized to identify and address systemic barriers to gender equality and diversity in tech careers?

When we know what data to collect and which data matter, we understand where we introduce bias, where we put in less effort, and where we miss gaps. For example, the HBR study I mentioned, which indicated women felt they needed 100% of the skills to apply, also debunked the idea that confidence was the deciding factor: men and women cited a lack of confidence as the reason not to apply at roughly equal rates. The reality was that people were unfamiliar with the hiring process and with which skills were actually considered. So our understanding and biases come into play even when we’re trying to remove bias!

An example I often use for AI is medical imaging. A radiologist regularly looks at MRIs. However, their ability to detect an anomaly could be affected by multiple factors. Are they distracted or tired? Are they in a rush? While AI models may have other issues, they aren’t susceptible to these factors. Moreover, continuous training of AI models means revisiting previous images and diagnoses to improve further because time isn’t a limitation. 

I share this example because humans make mistakes and form biases. Our judgment can be clouded on a specific day. If we focus on ensuring these models don’t inherit our biases, then we remove human judgment and error from the equation. This will ideally lead to hiring the mythical “best” candidate objectively and not subjectively.

As we conclude, what are your thoughts on the future of AI in relation to diversity and inclusion efforts within the tech sector? What key trends or developments do you foresee in the coming years?

I am optimistic that a broader population will have access to opportunities that focus on their skills and abilities versus their background and that there will be less bias when evaluating those skills. At the same time, I predict a bumpy road. 

Teams will need to reevaluate what’s important to perform the job and what’s helpful for the company, and that’s not always easy to do without bias. My hope is that in an economy of urgency, we are patient in how we approach improving representation and that we are willing to iterate rather than give up.

Kiranbir Sodhia

Senior Staff Engineering Manager at Google

Kiranbir Sodhia, a distinguished leader and engineer in Silicon Valley, California, has spent over 15 years at the cutting edge of the AI, AR, gaming, mobile app, and semiconductor industries. His expertise extends beyond product innovation to transforming tech teams within top companies. At Microsoft, he revitalized two key organizations, consistently achieving top workgroup health scores from 2017 to 2022, and similarly turned around two teams at Google, where he also successfully mentored leaders for succession. Kiranbir’s leadership is characterized by a focus on fixing cultural issues, nurturing talent, and fostering strategic independence, with a mission to empower teams to operate independently and thrive.
