The Benefits of Responsible and Explainable AI

Across industries, organizations of all sizes are increasing their use of AI to make business decisions. Some of these decisions carry far greater weight for society than others. If a tool doesn’t write perfect copy (see ChatGPT) or an app doesn’t recommend the best clothes for a person’s body, the user experience suffers, but society is barely affected. On the other hand, if AI makes mistakes when selecting the time to pick crops, determining who gets loans from a bank, or predicting where someone will commit a crime, the technology (and its decisions) can wreak havoc on people’s lives.

This is where responsible AI comes into play. According to Accenture, “Responsible AI is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society — allowing companies to engender trust and scale AI with confidence.”

Governmental and industry bodies are attempting to rein in ‘The Wild West’ of AI. Increasing regulatory scrutiny, such as the Blueprint for an AI Bill of Rights in the U.S. and the AI Act in the EU, is an early signal that AI will soon need to be explainable. Explainable algorithms help organizations understand how their AI makes its decisions. For example, among financial institutions (FIs), practices like Model Risk Management (MRM) focus on explaining the AI employed in decision-making across numerous processes, such as financial crime compliance, loan approvals, and product offer eligibility. With explainable and responsible AI, you reduce a wide range of business risks, including litigation, compliance failures, fines, penalties, and damage to brand reputation.

Black Box vs Glass Box

In many cases, AI acts like a black box: it ingests data, then provides an output or takes an action without any information or explanation of how or why that output was produced. That’s not a problem when the results match people’s expectations. But when the results seem counter-intuitive or incorrect, people start questioning their AI models and lose trust in the outputs.

Glass box AI stands in sharp contrast to black box models. Glass box AI, also known as ‘explainable AI’ or ‘XAI’ for short, brings clarity to AI by making models more understandable to humans. Thanks to XAI, technical and non-technical people alike can view model outputs with enough context to understand what they mean. That knowledge makes it much easier for users to decide how (or if) they should use the model outputs. In this way, glass box AI enables human-in-the-loop decision processes that lead to trustworthy final decisions.
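To make the contrast concrete, the sketch below shows one common form of glass box AI: a simple, interpretable model whose per-feature contributions explain each individual decision. This is a minimal illustration in Python, not WorkFusion’s implementation; the loan-approval features, data, and example applicant are hypothetical.

```python
# A minimal glass-box sketch: a linear model whose per-feature contributions
# explain each individual decision. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["debt_to_income", "years_at_job", "prior_defaults"]

# Tiny synthetic training set: 1 = approve, 0 = decline (illustrative only)
X = np.array([
    [0.2, 8, 0],
    [0.9, 1, 2],
    [0.4, 5, 0],
    [0.8, 2, 1],
    [0.3, 6, 0],
    [0.7, 1, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

applicant = np.array([0.6, 3, 1])
prob_approve = model.predict_proba(applicant.reshape(1, -1))[0, 1]

# Each coefficient * feature value is that feature's additive contribution
# to the log-odds, so the decision can be read, not just accepted.
print(f"P(approve) = {prob_approve:.2f}")
for name, coef, value in zip(features, model.coef_[0], applicant):
    print(f"  {name}: contribution to log-odds = {coef * value:+.2f}")
```

The point is not the specific model: any approach that surfaces which inputs drove a decision, and by how much, gives reviewers and regulators something concrete to evaluate.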

Explainable AI

While it’s true that AI has dramatically improved in performance and credibility in recent years, many recent AI advances still arrive in products and services, ChatGPT and DALL-E included, that rely on black-box models. This has slowed adoption of AI, particularly because black-box models do not support a robust approach to MRM, making it a challenge for FIs to accept AI-based products and services.

By contrast, WorkFusion’s Digital Workers align with MRM by providing explainability around the AI that drives their decisions. For example, in the world of financial crime compliance, such as AML (anti-money laundering), Digital Workers leverage XAI to resolve 98% of the alerts output by sanctions-screening software such as FircoSoft, leaving only 2% of alerts needing a final review by humans. Not only does that free people up for higher-value work; AI-driven intelligent automation that uses human-in-the-loop review as a second set of eyes also delivers high accuracy rates and further explainability. A simplified triage flow is sketched below.
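Here is a minimal human-in-the-loop triage sketch in Python. It is not WorkFusion’s actual API; the Alert fields, the triage function, and the confidence threshold are hypothetical. The idea is simply that alerts with a confident, explained decision are auto-resolved, while everything else is escalated to a human reviewer.

```python
# A simplified human-in-the-loop triage sketch (not WorkFusion's actual API):
# alerts the model can resolve with a confident, explained decision are
# auto-closed; everything else is escalated to a human reviewer.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    score: float              # model confidence that the alert is a false positive
    explanation: list[str]    # human-readable reasons behind the score

AUTO_CLOSE_THRESHOLD = 0.95   # hypothetical confidence cutoff

def triage(alerts: list[Alert]) -> tuple[list[Alert], list[Alert]]:
    """Split alerts into auto-resolved and human-review queues."""
    auto_resolved, needs_review = [], []
    for alert in alerts:
        if alert.score >= AUTO_CLOSE_THRESHOLD and alert.explanation:
            auto_resolved.append(alert)   # decision and reasons logged for auditors
        else:
            needs_review.append(alert)    # second set of human eyes
    return auto_resolved, needs_review

alerts = [
    Alert("A-1001", 0.99, ["name match is a common surname", "country mismatch"]),
    Alert("A-1002", 0.62, ["partial match to sanctioned entity"]),
]
resolved, review = triage(alerts)
print(f"auto-resolved: {len(resolved)}, escalated to humans: {len(review)}")
```

Because every auto-closed alert carries its reasons, the same record that drives the decision also serves as the audit trail MRM reviewers expect.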

Responsible AI based on XAI can help your organization in three critical ways:

  1. Reduce your risk in line with MRM and other risk reduction processes.
  2. Engender trust for your AI practices and prevent reputational risk.
  3. Enhance product efficacy across a broader cross-section of the population.

The Time Is Now for AI in Banking and Finance

Most FIs simply do not have the resources to keep pace with financial criminals, or with the volume of alerts generated by potentially suspicious activity. This is why organizations like the U.S. Office of Foreign Assets Control (OFAC) have recognized the need for FIs to use AI to combat fraud and financial crime. In fact, just this past fall, OFAC issued guidance encouraging the use of AI to manage sanctions risks that could arise in the context of instant payments.

It has become clear that we have reached a tipping point in the financial services sector: the debate over whether banks and other financial institutions should pursue advanced technologies such as Intelligent Automation, AI, and machine learning has shifted from “if” to “when, how, and on what scale?” The right path forward is to incorporate these technologies in a way that both streamlines your financial crime compliance operations and makes your decisions and actions explainable and defensible to regulators.

For additional insights into the responsible and explainable AI delivered by WorkFusion’s Digital Workers, explore the rest of the WorkFusion blog.
