
What Does Responsible AI Mean to Me?

Wilson Pang, CTO at Appen, reflects on the role of ethics in artificial intelligence

The dialogue on responsible artificial intelligence (AI) continues to expand as the applicability of AI broadens across many facets of our daily lives. Responsible AI is a weighted topic, one that means many different things to different people. But it’s also vital that we have the conversation.

As a short answer to “What does Responsible AI mean to me?” I tend to think of responsible AI as the ethical and transparent creation and use of AI in a way that works for everyone. All of us in the AI ecosystem must dedicate our efforts to exploring how to deploy AI responsibly, and which factors to monitor when building AI with an ethical approach.

In my experience at Appen, where I work with a multitude of clients in supporting their AI data and deployment needs, I’ve developed my own definition of responsible AI. I’ve gained an understanding of where responsible AI is needed most, what can go wrong when it’s not factored in, and what companies can do to make sure they’re using an ethical approach through their model build and beyond.

Where Responsible AI Matters Most

Where does an ethical approach factor into building AI? There are several key touchpoints where responsible decision-making plays a meaningful role. First, pay attention to how your model is intended to perform and verify that it actually performs as expected. If a model delivers unexpected results, its decisions can carry unintended consequences.

Even post-deployment is a critical period to monitor. Model drift occurs when external conditions change to the point of affecting the accuracy of the AI model’s predictions. The model must be retrained regularly on fresh data to account for drift and to ensure it continues to produce the expected outcomes.
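
To make this concrete, here is a minimal Python sketch of what such a drift check might look like. The function names, the duck-typed model object, and the five-percent tolerance are all hypothetical, not a prescribed implementation:

```python
# A minimal sketch of post-deployment drift monitoring (hypothetical
# names and thresholds). Assumes you periodically collect freshly
# labeled examples and recorded the model's accuracy at launch.

def accuracy(model, examples):
    """Fraction of labeled (features, label) pairs predicted correctly."""
    correct = sum(1 for features, label in examples
                  if model.predict(features) == label)
    return correct / len(examples)

def needs_retraining(model, fresh_examples, baseline_accuracy, tolerance=0.05):
    """Flag the model for retraining on fresh data when its accuracy on
    recent examples falls more than `tolerance` below its launch accuracy."""
    current = accuracy(model, fresh_examples)
    return (baseline_accuracy - current) > tolerance
```

In practice, a check like this would run on a schedule and alert a human reviewer, rather than trigger retraining blindly.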

The second touchpoint is the need for the AI we build to support fairness and inclusion. Humans used to make the decisions that are now automated, and they were responsible for weighing the factors that make a decision fair. With AI replacing us as the decision-makers, it’s imperative to train AI to consider the same factors a human would have.

The third touchpoint is building AI through an ethical lens. AI teams must ask crucial questions, such as: what are the limitations of my model? How will those limitations influence the results? Consider the consequences of what your model can do, and of what it can’t. In the data preparation stage, ensure you have permission to use the data and the proper safeguards in place to protect it.

A final touchpoint that doesn’t often receive the attention it deserves is the treatment of the people who build AI. I refer not just to the technical teams, but more broadly to include the contracted crowd of workers who annotate and prepare training data. These individuals make up a significant portion of the AI workforce. Organizations must adopt fair pay practices and equitable working conditions for these contributors, or be sure to partner with data providers that are committed to these principles. At Appen, for instance, we implemented a Crowd Code of Ethics to support our global network of contributors.

What Can Go Wrong without Responsible AI?

It’s the question every responsible organization should ask before building and deploying an AI solution: what could go wrong? I find that three major consequences can occur when companies don’t focus on responsible AI, each with its own degree of severity.

Customer Experience

If a model performs poorly (either not as intended or with unexpected results), it’s likely to impact customer experience. Imagine, for example, a company that uses AI to determine who’s to be issued a credit card. Let’s further imagine that one minority group was under-represented in the data used to train the model, with the unfortunate effect that the model denies credit cards at a greater rate to members of that minority group than to others. In this case, the lack of an ethical lens affects the customer experience negatively, most likely leading to lower revenue.

Reputation

If we take our previous credit card example, we can project that the company’s reputation may end up being questioned. This happens in one of two ways: the customers vocalize their poor experience with the brand, or the source of the error comes to light. In either case, the brand suffers.

Illegality

AI built without attention to ethics can break laws. In industries like housing, education, and employment, where there’s a direct impact on the public, business operations are strictly regulated. For example, companies aren’t permitted by law to discriminate against people based on gender, race, or sexual orientation. Without intending to, companies may build AI models that violate anti-discrimination laws, as in our credit card example. Of course, no one builds AI with the intention of breaking the law, but it does happen.

In any of these cases, the ramifications for the company and the affected persons can be significant. That raises the question: what can organizations do to approach AI development from an ethical standpoint?

A Responsible AI Journey Starts Here

Responsible AI requires using an ethical lens throughout all phases of a project, from training data preparation to model build to deployment. I break down the actions organizations should pursue into four key areas:

Model Performance

AI teams measure model performance via confidence thresholds, prediction accuracy, and more. What many fail to include, however, is an ethics-based metric. How does your model perform from an ethical perspective? To answer this question, I recommend examining error rates for minority groups. Are these on par with the error rates for non-minority groups? If not, why?

Before beginning the model build phase, decide how you will measure the ethical performance of your model. Track these measurements during the model build, deployment, and post-deployment.
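
As a rough illustration, here is a minimal sketch of one such ethics-based measurement: computing error rates per demographic group rather than one blended number. The group labels and data here are invented for the example:

```python
from collections import defaultdict

def error_rates_by_group(predictions, labels, groups):
    """Error rate per demographic group, so gaps between groups are
    visible instead of being averaged away in one overall number."""
    errors, totals = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        errors[group] += int(pred != label)
    return {g: errors[g] / totals[g] for g in totals}

# An overall error rate of 1/3 here hides that group "B" sees errors
# at twice that rate while group "A" sees none.
print(error_rates_by_group(
    predictions=[1, 0, 1, 1, 0, 1],
    labels=[1, 0, 1, 0, 1, 1],
    groups=["A", "A", "A", "B", "B", "B"],
))  # {'A': 0.0, 'B': 0.666...}
```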

Data Balance

Typically, companies care about the quality of their data and the speed of obtaining that data. What you need to evaluate as well, though, is data representation. Do you have data across all of the classes you want represented, including all minority groups? If not, you may need to seek out new datasets or create synthetic data to capture these classes.

This concern must be tackled head-on. Some teams believe that by removing data related to age, race, nationality, and so on, they’re safe from bias. The opposite is more likely to be true: you may not realize you still have data that’s highly correlated with these demographic factors, and those proxy variables will influence your model’s predictions. For that reason, monitor bias and the balance of your data in all cases. I highly recommend a human-in-the-loop approach in which a diverse group of people provides ground-truth checks on accuracy throughout the model training and retraining process.
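
A representation check can start as simply as counting examples per group and class before training begins. The sketch below is hypothetical; the record structure and field names are illustrative, not a standard API:

```python
from collections import Counter

def representation_report(records, group_key="group", label_key="label"):
    """Count training examples per (group, label) slice so that
    under-represented slices surface before the model is trained."""
    counts = Counter((r[group_key], r[label_key]) for r in records)
    for (group, label), n in sorted(counts.items()):
        print(f"group={group!r}, label={label!r}: {n} examples")
    return counts

# Hypothetical training records: group "B" contributes no "approved"
# examples at all, a gap to fill with new or synthetic data.
representation_report([
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "approved"},
    {"group": "A", "label": "denied"},
    {"group": "B", "label": "denied"},
])
```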

Team Diversity

When building AI, a responsible organization will ensure two things: that the project team is diverse and that the team has a responsibility-focused mindset. Let’s address the first item. Yes, it can be difficult to recruit a diverse group of people who have the technical prowess to build your model while also representing your end-users. When a team falls short of that ideal, ethics-based measurements play an even more critical role in mitigating any bias the team may inadvertently introduce into the model.

To achieve success, your AI team must view the model from an ethical perspective and with fairness in mind. They need to ask the hard questions, such as: what’s the potential harm this model could cause even if it performs well? What if it doesn’t perform well? In examining data, they must question whether they’re obtaining that data in an ethical way. Recruiting a team that will ask the right ethical questions from start to finish will further the mission of building responsible AI.

Transparency

Transparency is imperative. As an example, let’s say you’re someone who’s up for parole and is waiting for a judge to decide whether to grant it to you. Let’s also say that this judge is basing his or her decisions on an AI model’s predictions of whether you’re likely to be a repeat offender. If you’re not granted parole, wouldn’t you demand to know how the algorithm came to that decision? Here, we’re getting at the issue of explainability.

The ability to explain your model is often very important to your end-user, especially when that model is making decisions of great consequence.

Prior to building AI, consider the importance of explainability. In several cases, I’ve seen clients choose a model that performs with somewhat lower accuracy but is more explainable than its counterpart. Selecting your model and documenting the model build process are both decisions that should be made with transparency in mind.
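
As a toy illustration of that trade-off, a simple linear scorer like the hypothetical one below can report exactly which features drove a decision, something a marginally more accurate black box often cannot. The features, weights, and threshold are invented for the example:

```python
# Hypothetical weights for an illustrative, fully explainable credit
# scorer; a real model would learn these from data.
WEIGHTS = {"income": 0.4, "credit_history_years": 0.35, "open_accounts": -0.1}
THRESHOLD = 0.5

def score_with_explanation(applicant):
    """Return a decision plus the per-feature contributions behind it,
    so the outcome can be explained to the affected person."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    return decision, contributions

decision, why = score_with_explanation(
    {"income": 0.9, "credit_history_years": 0.2, "open_accounts": 3.0}
)
print(decision, why)  # "deny", with each feature's contribution listed
```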

Looking Ahead

In the data we use, the models we build, the teams we select, and the societal impact of our solutions, responsible AI needs to be at the forefront of our minds. This conversation matters not just to the organizations deploying AI, but also to the people who build it and the people who use it. In other words, it matters to everyone. If we can all commit to taking an ethical approach to AI from model build to deployment, and beyond, we are together one step closer to building AI that works for all of us.
