AI Ethics Part 1: Understanding and Identifying Biases in Our Algorithms

Kshira Saagar

  • Chief Data Officer, Latitude Financial Services

Kshira Saagar (the K is silent, as in "knight") is currently the Chief Data Officer of Latitude Financial Services and has spent 22.3% of her life helping key decision makers and CxOs make smarter decisions using data. She strongly believes that every organization can become truly data-driven. Outside of work, Kshira devotes a lot of time to advancing data literacy initiatives for high school and undergraduate students.

Algorithms based on AI (artificial intelligence) are becoming an essential part of modern businesses. As we begin to rely more on AI to make critical decisions in our organizations, we must ensure those decisions are made ethically and free from unfair biases. There is a need for responsible AI systems that make transparent, explainable, and accountable decisions, built with a conscious understanding of the various AI and data biases that can undermine them.

This article explores the different forms of AI bias and ways to understand and identify them in our algorithms and other decision support tools. It should be noted that AI bias can lead to injustice, which in some situations may amount to unlawful discrimination or other forms of illegality. Governing algorithmic bias is now more relevant than ever for organizations, because machine learning has commoditized the ability to deliver hyper-personalized customer experiences at scale, in the form of recommendation engines, credit decisioning tools and more.

What is AI Bias?

AI bias describes systematic and repeatable errors in a digital system that lead to unfair results, such as favoring one arbitrary group of users over others. Biases can manifest due to a variety of factors – the design of the algorithm, the poor collection of inaccurate data or, even worse, unfair and biased people exploiting the decision systems. AI bias is prevalent across all the systems in our modern society that rely on "artificial intelligence" for decision support, ranging as broadly as search engines, social media platforms, credit decisioning, law enforcement and recruitment.

It should be noted that AI bias can be both intentional and unintentional – it is a common myth that bias can only be intentional, and this myth leads to incorrect or inadequate control measures. For example, a credit scoring algorithm that recommends loan approval for one set of users but denies loans to another set of nearly identical users based on unrelated financial criteria is a biased algorithm, if this behavior can be repeated consistently. Both intentional and unintentional AI biases can be interpreted as illegal and discriminatory.
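
To make the example concrete, here is a minimal sketch of such a repeatability test in Python. The model, data and column names are hypothetical, and a scikit-learn-style .predict() interface is assumed: we score pairs of applications that are identical except for a single protected attribute and flag any decision that flips.

```python
import pandas as pd

def counterfactual_check(model, applicants: pd.DataFrame, protected_col: str):
    """Flag applicants whose decision flips when only the protected
    attribute changes -- a simple repeatability test for the bias
    described above. Assumes a binary protected attribute and a
    fitted classifier with a scikit-learn-style .predict() method."""
    groups = applicants[protected_col].unique()
    assert len(groups) == 2, "this sketch assumes a binary attribute"
    # Build counterfactual twins by swapping the protected attribute.
    flipped = applicants.copy()
    flipped[protected_col] = flipped[protected_col].map(
        {groups[0]: groups[1], groups[1]: groups[0]}
    )
    original = model.predict(applicants)
    counterfactual = model.predict(flipped)
    # Rows where the decision changes are candidate instances of bias.
    return applicants[original != counterfactual]
```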

Understanding and identifying AI biases

Bias can be introduced into an algorithm in several ways. The three main types of bias are:

1. Dataset bias – Inaccuracies or systematic errors in the dataset used to train the algorithm, namely:

  • Historical bias is the pre-existing bias in the world that has seeped into our data.
  • Representation bias results from incorrectly sampling a population when creating a dataset (a quick check is sketched after this list).
  • Measurement bias occurs when biased proxy (indirect) measures are used in place of the true variables of interest.
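
To illustrate the representation point above, here is a minimal sketch that compares each group's share of a dataset with its share of the real population; the population figures and column name are assumed, hypothetical values.

```python
import pandas as pd

# Hypothetical population shares (e.g., from census data); assumed values.
POPULATION_SHARES = {"group_a": 0.48, "group_b": 0.52}

def representation_gap(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Difference between each group's share of the dataset and its
    share of the population; large gaps suggest representation bias."""
    dataset_shares = df[group_col].value_counts(normalize=True)
    expected = pd.Series(POPULATION_SHARES)
    # Groups absent from the dataset count as a 0% share.
    return dataset_shares.reindex(expected.index, fill_value=0) - expected
```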

2. Modeling bias – Improper algorithm design and the use of unexplainable techniques, namely:

  • Evaluation bias occurs when an algorithm is evaluated with inappropriate metrics or unrepresentative benchmark data (a per-group evaluation sketch follows this list).
  • Aggregation bias occurs when a one-size-fits-all algorithm is applied to groups of people whose underlying behavior actually differs.
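
One simple guard against the evaluation bias mentioned above is to report metrics per group rather than as a single aggregate. A minimal sketch, with hypothetical column names:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def per_group_accuracy(y_true, y_pred, groups) -> pd.Series:
    """Accuracy computed separately for each group; a single aggregate
    number can hide poor performance on under-represented groups."""
    frame = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": groups})
    return frame.groupby("group").apply(
        lambda g: accuracy_score(g["y_true"], g["y_pred"])
    )
```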

3. Human confirmation bias – Even when an algorithm makes unbiased predictions, a human reviewer may introduce their own biases in choosing whether to accept or override its output.

A human reviewer could overrule a fair result based on their own systemic bias. An example might be: "I know this demographic, and they never do well. So the algorithm must be wrong."

At a fundamental level, prejudice is inherently present in the world around us and encoded in our society. While we cannot address real-world bias directly, we can take steps to eliminate bias from our data, our models, and our human review process.

Main risks associated with biased use of AI

Although algorithms serve to make decision-making smarter and more relevant for individuals and organizations, their use also carries significant risks. Chief among them is the risk of developing an erroneous understanding of individuals, and the risks that stand out are:

  • Risk of errors that unfairly disadvantage people because of their gender, race, age, or other characteristics – errors that could be interpreted as propagating systemic bias. When AI systems produce unfair results, this can sometimes meet the technical legal definition of unlawful discrimination; regardless of the strict legal position, such bias can disproportionately affect people who are already disadvantaged in society. Risks of harm must also be considered in the context in which they arise – the consequences of unfair outcomes are more severe where equal access to an essential right or service is at stake. (A simple disparate impact check is sketched after this list.)
  • Risk of opaque consumer targeting and price discrimination. The Australian Competition and Consumer Commission (ACCC) describes these types of consumer harm as "risks related to increased consumer profiling" and "discrimination and exclusion".
  • Risk of loss of consumer trust and engagement, resulting from the limited consumer choice and control that unfair algorithms create, and from the perception that organizations use such algorithms.
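
To illustrate how the discrimination risk above can be monitored, here is a minimal sketch of a disparate impact ratio. The informal "four-fifths rule" threshold used here is a common heuristic, not a legal test, and all names are hypothetical.

```python
import pandas as pd

def disparate_impact_ratio(approved: pd.Series, groups: pd.Series) -> float:
    """Ratio of the lowest group's approval rate to the highest group's.
    `approved` holds 1 for a favorable decision, 0 otherwise. Values
    below ~0.8 (the informal "four-fifths rule") are often treated as
    a red flag worth investigating, not as proof of discrimination."""
    approval_rates = approved.groupby(groups).mean()
    return float(approval_rates.min() / approval_rates.max())
```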

Principles for responsible AI systems: REAL

Every organization that embarks on the path of building its machine learning and AI muscle must not only resolve to build responsible AI systems that follow the four principles of good governance (reproducibility, explainability, accountability and learnability, i.e. REAL), but also create forums to keep their teams and stakeholders accountable for the results of these algorithms.

Reproducible – The input data and the algorithm process must be reproducible at any time and be built using a standardized enterprise-wide architecture for operationalization.

Explainable – Algorithm results should be explainable to technical and non-technical users and withstand logical and legal scrutiny.

Accountable – Algorithms must be built within the appropriate governance parameters for bias and fairness, ensuring that a "human in the loop" exists to validate results.

Learnable – Algorithms must be able to learn and relearn through a safe feedback loop, while continuously monitoring drifts/changes in the expected behavior of the model (one common drift statistic is sketched below).
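
As an illustration of such drift monitoring, here is a minimal sketch of the Population Stability Index (PSI), one commonly used drift statistic. The bin count and the ~0.2 alert threshold are conventional rules of thumb, not prescriptions from this article.

```python
import numpy as np

def population_stability_index(train_scores: np.ndarray,
                               live_scores: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the score distribution at training time and the
    live score distribution; values above ~0.2 are commonly read as
    significant drift (a rule of thumb). Live scores falling outside
    the training range are ignored here for simplicity."""
    edges = np.histogram_bin_edges(train_scores, bins=bins)
    train_counts, _ = np.histogram(train_scores, bins=edges)
    live_counts, _ = np.histogram(live_scores, bins=edges)
    # Convert counts to proportions, clipping to avoid division by zero.
    train_pct = np.clip(train_counts / train_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))
```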

What's next?

Biases in our algorithms not only pose a significant threat to customers and others about whom decisions are made, but also create legal risks and liabilities for organizations. Ensuring the fairness of our AI systems will be an ongoing process, with a combination of technology, process and people changes necessary for success.

In the next article, we will look at the people-, technology- and process-oriented methods for mitigating this bias in our algorithms, along with concrete examples that anyone can implement in their organization for MYOD – Make Your Organization Data-Driven, in the true sense of the word.

Keywords: digital marketing, data-driven marketing, machine learning, data analytics, artificial intelligence (AI)
