
Responsible AI use demands greater diversity

Louise Lunn, Vice President, Global Analytics Delivery, FICO, discusses how to get more women involved in analytics 

Flick through the applications on your phone and you will see artificial intelligence is already entwined with our day-to-day lives. From entertainment to finance, the apps and websites we use are learning from us every day, digesting information, and using it to make decisions and offer tailor-made solutions.

Spotify’s knowledge of the music you love comes from AI, Netflix recommends new movies and TV shows based on your past viewing choices, and suitable leasing options in car dealerships are produced by code that quickly analyses the applicant’s details and financial position.

AI in financial services

The adoption of AI was widespread before the pandemic, particularly in financial services, but it has since accelerated, with organisations of all sizes examining their scope for machine learning. Our clients, and the industry as a whole, are demanding the data and digital tools to support AI.

In the last 15 months, more and more businesses have been investing in AI tools, but we have not seen a corresponding rise in the importance placed on diversity or responsibility. Organisations are increasingly leveraging AI to automate key processes that – in some cases – are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable, and diverse AI model governance.

A recent FICO study of 100 C-level analytics and data executives found that the most commonly reported improvement was an increase in resources allocated to AI projects over the past 12 months (49%), followed by team productivity (46%) and the predictive power of AI models (41%). However, only 39% have prioritised increased resources for AI governance during model development, and just 28% have prioritised ongoing AI model monitoring and maintenance.

For financial services, the real benefits of AI are time savings and improved customer service. What sets organisations and brands apart from each other is quickly getting the right offer to the customer at the right time. Early in the pandemic, for example, when payment holidays were first announced, banks were struggling to meet the increased call centre demand. With AI automation, we can complete those tasks at scale.

Why we need diverse teams building AI

As with any new operational branch, organisations need a team to manage how AI will be deployed, measured and managed. The businesses that are doing this in the right way are looking to build diverse teams.

AI, in its simplest form, is a program capable of performing a task that requires intelligence. It learns from data, and datasets are prone to bias, no matter how balanced or universal they may appear. To counteract this bias, organisations need greater diversity in the teams building AI.

These teams will make better AI-based models because they will be better at spotting potential bias, both in the data and in the results. Working with people from a wide range of backgrounds will also drive creative thinking and innovation.

Handling AI bias

Further results from FICO’s study found only 38% have data bias mitigation steps built into model development processes. However, evaluating the fairness of model outcomes is the most popular safeguard in the business community today, with 59% of respondents saying they do this to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias and half (50%) say they have a codified mathematical definition for data bias and actively check for bias in unstructured data sources.
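Evaluating the fairness of model outcomes, the most popular safeguard above, can start very simply. As a minimal illustrative sketch – not a description of FICO's own tooling, and using entirely hypothetical data and group labels – one common check compares a model's approval rates across demographic groups (the demographic parity gap):

```python
def approval_rate(outcomes):
    """Fraction of positive (approved) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in approval rate between any two groups,
    plus the per-group rates for inspection."""
    rates = {g: approval_rate(o) for g, o in outcomes_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = approved) for two illustrative groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved
}

gap, rates = demographic_parity_gap(decisions)
print(f"approval rates: {rates}")
print(f"parity gap: {gap:.3f}")  # flag for review if above a chosen threshold
```

A large gap does not prove the model is unfair on its own, but it is the kind of signal a monitoring process can surface for a diverse review team to investigate.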

Combating AI model bias is essential, but many enterprises have yet to operationalise it effectively. According to our survey, 80% of AI-focused executives are struggling to establish processes that ensure responsible AI use.

Women in AI

Women are underrepresented across science, technology, engineering, and maths (STEM) fields. While the gap is widest in engineering, we still need to get more women into AI, and I believe this begins with education and raising awareness of AI at an early age.

We need to start conversations about the qualifications required for STEM-based positions. Women should explore these courses, and also consider building skills in programming, statistics, big data technologies, and architecture frameworks.

Communication and problem solving are also key in AI, as are hands-on opportunities. I did a placement unit at university, and it gave me great insight into the field and what I would need to work in it.

Another key ability required to be successful in analytics, and which demonstrates the need for diversity in the field, is intellectual curiosity. AI needs people who will challenge and change how things are done. And this is best achieved with a mix of vantage points. A group of people from the same demographic, who were brought up in similar ways, and attended a select group of universities, will ask the same questions and approach problems from the same angle. A diverse group will see the many ways to solve the problem, and work together to find the best solution, whether it is a hybrid of multiple opinions or a single viewpoint.

AI is an exciting and important field to work in. Some of the world's biggest problems will be fixed by AI, and that alone illustrates the need for diversity in the field: to solve the world's big problems, the team must reflect the world, not a select group of it.

At FICO, we have women working in analytics throughout the company, and a network called Women @ FICO that supports women playing a bigger role in the major decisions made at the business. Our work reflects a bigger trend toward diversity in AI and business decisions. GCHQ, the British intelligence service, published an editorial in the Financial Times where its Director said the teams that develop AI must be as diverse as possible to reduce the risk of bias.

Similarly, the recent ‘Coded Bias’ documentary on Netflix concluded with the need for more diverse teams to create AI code that is fair and representative of the wider population.

If the topic of AI excites you, I recommend joining networks on LinkedIn that explore AI's applications. TED Talks are a great source of knowledge and inspiration, and there is so much available on social media. Those interested can follow industry leaders online and actively engage with this energetic and supportive community. As the pandemic continues to increase the financial services industry's dependence on AI, it is essential we develop diverse AI teams and build responsible AI models. Organisations can make life-changing decisions for their customers, but to create AI code that can do this fairly, it must be written by a group that reflects the diversity of those lives.

