Taming AI algorithms – finance without bias

When the computer says “no”, is it doing so for the right reasons?

Human bias creeps too easily into AI technology

This is the question that financial regulators are increasingly asking, concerned about possible biases in automated decision-making.

The Bank of England and the Financial Conduct Authority (FCA) have both pointed out that new technologies could negatively affect lending decisions, and the Competition and Markets Authority (CMA) is examining the impact of algorithms on competition, so this is a topic that should have broad governance implications. So much so that the European Banking Authority (EBA) has asked whether the use of artificial intelligence (AI) in financial services is “socially beneficial”.

However, as consumers increasingly expect one-click loan and mortgage approvals, and with some estimates suggesting that AI applications could save businesses over $400 billion, there are many incentives for banks, for example, to adopt this technology eagerly.

But if bias in financial decisions is “the biggest risk from using data-driven technology,” as the findings of the Centre for Data Ethics and Innovation’s AI Barometer report suggest, then what is the answer?

Algorithmovigilance.

In other words, financial services companies can systematically monitor the algorithms their computers use to assess customer behaviors, credit referencing, anti-money laundering and fraud detection, and decisions regarding loans and mortgages, to ensure that their responses are correct and appropriate.
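By way of illustration only (this is not a method prescribed by any of the regulators mentioned), a minimal Python sketch of one such check might compare approval rates across customer groups and flag an algorithm whose outcomes diverge beyond a chosen threshold; the group labels, decisions and 0.8 threshold below are hypothetical.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def outcome_ratio(rates):
    """Ratio of the lowest to the highest group approval rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions logged by a lending model: (group label, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
print(rates)                      # roughly {'A': 0.67, 'B': 0.33}
if outcome_ratio(rates) < 0.8:    # 0.8 is an illustrative threshold, not a rule
    print("Flag for human review: approval rates diverge across groups.")
```

In practice, such a check would sit alongside many others; the point is that the monitoring becomes systematic and repeatable rather than ad hoc.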

Algorithmovigilance is needed because human bias too easily creeps into AI technology, making it vulnerable to often unrecognized social, economic and systemic trends that lead to discrimination – both explicit and implicit.

The problem is that the datasets that companies compile and feed to AI and machine learning (ML) systems are often not only incomplete, outdated and incorrect, but also skewed – unintentionally (though perhaps sometimes not) – by the inherent prejudices and presumptions of those who assemble them.

This means that the analysis and conclusion of a system can be anything but objective. The old IT adage of “garbage in, garbage out” still applies.
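To make the point concrete (the figures and tolerance below are invented purely for illustration), even a simple check of whether a training set reflects the population it is meant to describe can surface this kind of skew:

```python
def representation_gaps(training_counts, population_shares):
    """Gap between each group's share of the training data and its share
    of the applicant population (positive = over-represented)."""
    total = sum(training_counts.values())
    return {
        group: training_counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Hypothetical figures: record counts per group in a training set,
# and each group's share of the real applicant population.
training_counts = {"A": 8000, "B": 1500, "C": 500}
population_shares = {"A": 0.60, "B": 0.25, "C": 0.15}

for group, gap in representation_gaps(training_counts, population_shares).items():
    if abs(gap) > 0.05:  # illustrative tolerance only
        print(f"Group {group} is mis-represented by {gap:+.0%} in the training data.")
```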

And when it comes to training an ML algorithm, just like with a child, unchecked bad habits repeat themselves and become entrenched.

Thus, as long as humans are at least partially involved in making lending decisions, there is potential for discrimination.

Ethical AI should be a priority

Designing AI and ML systems that perform in accordance with all legal, social, and ethical standards is clearly the right thing to do. And going forward, financial services firms will be under pressure to ensure they are fully transparent and compliant.

Those who fall behind or don’t make it a priority can find themselves facing significant lawsuits, fines and long-term reputational damage.

Trust has become the currency of our times, an extremely valuable asset that is hard to earn and easily lost if an organization’s ethical behavior (doing the right thing) and competence (keeping promises) are called into question.

There is a problem when people feel they are on the wrong side of inexplicable decisions that they cannot challenge, because the “AI black box” means a bank cannot explain them and regulators, who often lack the technical expertise, cannot scrutinize them.

How big is this problem?

No one is quite sure. However, the National Health Service (NHS) in England, with a first-of-its-kind pilot study of algorithmic impact assessments in health and care services, can give us an idea.

With a third of companies already using AI to some degree, according to IBM’s 2021 Global AI Adoption Index, more and more senior executives are going to have to think long and hard about how to protect their clients against prejudice, discrimination and a culture of assumption.

Create transparent systems

If we are to enter a world where AI and ML systems actually work as intended, senior leaders must engage in algorithmovigilance, ensuring it is seamlessly integrated into existing corporate processes and governance frameworks and then supported by ongoing monitoring and evaluation, with immediate corrective action taken where necessary.

Thus, organizations should ensure that personnel working with data or building machine learning models focus on developing models that are free of implicit and explicit biases. And since there is always a risk of drifting towards discrimination, the training of systems should be seen as ongoing, including monitoring and managing the response of particular algorithms to changing market conditions.
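A sketch of what that ongoing monitoring could look like follows; the group labels, baseline rates, recent batch and 0.10 tolerance are all hypothetical, and a real programme would track far more than approval rates.

```python
def group_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def drift_alerts(baseline, recent_decisions, tolerance=0.10):
    """Groups whose recent approval rate has drifted from its baseline
    by more than the tolerance."""
    return [
        f"{g}: baseline {baseline[g]:.2f} -> recent {rate:.2f}"
        for g, rate in group_rates(recent_decisions).items()
        if g in baseline and abs(rate - baseline[g]) > tolerance
    ]

# Hypothetical baseline rates recorded at model validation, plus a batch
# of recent production decisions: (group label, approved?)
baseline = {"A": 0.60, "B": 0.58}
recent_batch = [("A", True), ("A", True), ("A", True), ("A", False), ("A", False),
                ("B", True), ("B", False), ("B", False), ("B", False)]

for alert in drift_alerts(baseline, recent_batch):
    print("Review needed:", alert)
```

The value lies less in any particular metric than in wiring such checks into production, so that drift triggers review and retraining rather than going unnoticed.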

For those who have not yet fully understood what needs to be done, how could they move forward?

Creating an internal AI center of excellence can be a good start. Having subject matter experts in one place allows for focus and a more centralized approach, which helps build momentum by focusing on solving high-value, low-complexity problems to deliver quickly demonstrable returns.

Certainly, banks and financial institutions should draw on the best-practice examples shared by regulators to uncover biases in their systems that they may not be aware of.

And so we come to the crux of the matter: our fundamental relationship with technology. Although artificial intelligence and machine learning systems have transformative potential, we must not forget that they exist to serve a purpose and are not an end in themselves.

Removing unintended biases from the equation will be a multi-layered challenge for financial institutions, but one that must be met if they are to put “malicious algorithms” back in their box.

Sharon D. Cole