Navigating AI’s Inflection Point in AML Compliance: A Conversation Hosted by ALPS with WorkFusion

AI has reached an inflection point in AML/BSA compliance. Now more than ever, regulators want financial institutions (FIs) to innovate and find new ways to perform these long-standing processes. The AML Act of 2020 made this abundantly clear, especially when coupled with FinCEN’s Innovation Initiative and the recent release of the Wolfsberg Group’s principles on artificial intelligence (AI) and machine learning (ML).

Banks, credit unions, and lenders should be using innovative approaches, like AI, to combat money laundering and terrorist financing. For those organizations looking to add AI to alleviate AML/BSA compliance challenges, it does not have to be a “big bang approach,” advises Grant Vickers, WorkFusion’s Head of Financial Crimes Strategy.

Recently, Grant joined Samuel Peret, Managing Director at ALPS (a B2B marketplace of innovative technologies and professional services, and a new WorkFusion strategic partner), to discuss AI and how it is impacting FinCrime operations teams, from sanctions and KYC to regulatory compliance.

Starting small and getting quick wins is a solid approach to implementing AI regardless of your bank’s size. Grant continued, “Your large mega banks that have an Innovation Center and a ton of budget and resources to implement these things, they typically don’t even start with The Big Bang. Most financial institutions, big or small, start with one area. They get a win in that area; they build internal momentum; they get other people on board; they get regulator buy-in and then they scale from there.”

Here are some highlights from the conversation.

Breaking the “Rules”: Advanced AML Problems Need Modern Problem-Solving Capabilities

AI is core to solving a host of AML challenges, including staffing shortages, a deluge of sanctions alerts, and KYC onboarding and periodic refresh. These problems are difficult and require the advanced learning and problem-solving capabilities that AI offers.

Rules-based automation only gets you so far. It’s good for menial tasks, but not for anything more complicated. Take sanctions screening as an example. “AI allows you to read the unstructured text in a payment message and make a decision on that. And when we say unstructured text, that can be a comment field in a payment message. There could be an infinite number of variations that a person may write in this comment field. AI can read all that information, understand it, and then make a decision on it. You can’t do that with rules,” said Grant.
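To make the contrast concrete, here is a minimal, hypothetical sketch (not WorkFusion’s actual screening logic): an exact-match rule misses a spelling variation buried in a payment comment field, while even a simple similarity score still surfaces it. A production system would use trained models rather than `difflib`, but the principle is the same.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist entry, for illustration only.
SANCTIONED = ["IVAN PETROV"]

def rule_match(comment: str) -> bool:
    """Rules-based check: flags only exact substring hits."""
    text = comment.upper()
    return any(name in text for name in SANCTIONED)

def fuzzy_score(comment: str) -> float:
    """Similarity-based check: compare every short word window in the
    free-text comment against the watchlist, tolerating variations."""
    words = comment.upper().split()
    best = 0.0
    for i in range(len(words)):
        for j in range(i + 1, min(i + 3, len(words)) + 1):
            candidate = " ".join(words[i:j])
            for name in SANCTIONED:
                best = max(best, SequenceMatcher(None, name, candidate).ratio())
    return best
```

The exact rule returns `False` for a comment like `"payment re: ivan petr0v consulting"`, while the similarity score still exceeds 0.85; a trained model extends the same idea to far richer variation in free text.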

“By nature, AML is an investigatory function,” Grant explained. “It’s researching. It’s spending eight hours to review one high-risk customer for KYC with very complex operating procedures that might be 300 pages long. Or it’s spending 45 minutes on a single transaction monitoring alert where you have to navigate five systems and six sources to write a narrative. That is really difficult to do. And that’s why we need this next generation level of advanced learning.”

What About Generative AI?

You can’t discuss AI today without including Generative AI.

“ChatGPT, generative AI, and large language models (LLMs) are being looked at to be brought into our solutions,” said Grant. “There’s a lot of interest in using these types of models. I think probably the most direct correlation, or the most direct use case is narrative generation.”

He continued, “In the AML space, you must write a lot of narratives. Whether it’s writing a memo for a high-risk customer refresh or writing a narrative for a transaction monitoring investigation. Or writing and ensuring you have the who, what, where, why, when, and how as part of your SAR. There are a lot of narratives being written and, if we go back to the rules-based approach, that’s where things start to fail. You can have a rule that might be able to bring some data in to help an analyst, but it can’t write a narrative. So now we have digital investigators that will be able to do these types of things (using LLMs).”
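The narrative-generation idea can be sketched simply (function and field names here are hypothetical, not a WorkFusion API): the who/what/where/when/why/how facts an investigator gathers are assembled into a first draft, the kind of draft an LLM would then refine into polished prose.

```python
def draft_sar_narrative(facts: dict) -> str:
    """Assemble a first-draft narrative from the who/what/where/when/
    why/how facts gathered for a SAR. A plain template stands in here
    for the LLM that would smooth and expand the wording."""
    return (
        f"On {facts['when']}, {facts['who']} conducted {facts['what']} "
        f"at {facts['where']}. The activity is considered suspicious "
        f"because {facts['why']}; it was carried out {facts['how']}."
    )
```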

Responsible AI

Organizations, especially banks, need to validate and trust that AI is doing what it is intended to do. Without question, responsible AI is going to become increasingly important.

“I’ll use a training wheel analogy,” said Grant. “Nobody’s ever going to deploy AI and just let it run and cross their fingers and hope for the best. They’re going to first allow it to run with training wheels and supervise it and monitor it and test it until they get to a place of comfort. To be specific, if we have a customer that is using one of our Digital Workers, the first thing that we will do is run their data through a test environment and our SMEs will look at the output of those models and understand how they’re performing against historical work that the bank’s human operators have done. So, there’s a comparison in what WorkFusion is doing and comparing that quality to what your human operators have historically done.”

“We want to make sure there are no mistakes,” he continued. “Once we’ve done that test, then we will help the bank tune confidence thresholds accordingly. Every decision that is made has mathematical support as to why that decision was made. Ultimately, when a decision is made, we can look back into the audit trail of the Digital Worker and say that the AI was extremely confident that it was accurate in this decision and that it was not going to make an error. We set that threshold for the bank based on their own risk tolerance, and going forward that Digital Worker will never make a decision if it is not confident.”
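The threshold mechanism Grant describes can be sketched in a few lines (a simplified illustration, not WorkFusion’s implementation): the model’s decision is accepted only when its confidence clears the bank’s configured threshold; anything below that is routed to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # e.g. "dismiss" or "escalate"
    confidence: float  # model's confidence in that label
    auto: bool         # True if decided without human review

def route(label: str, confidence: float, threshold: float = 0.95) -> Decision:
    """Accept the model's decision only above the bank's threshold;
    otherwise hand the alert to a human reviewer."""
    if confidence >= threshold:
        return Decision(label, confidence, auto=True)
    return Decision("human_review", confidence, auto=False)
```

A bank with a lower risk tolerance simply raises `threshold`, trading automation rate for certainty.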

From a broader industry perspective, an organization adopting AI-enabled technology will need to go through Model Risk Management (MRM) and risk-rate its models, which then informs how often those models should be supervised. It is a very quantitative process, and customers can choose their own risk tolerance.
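For instance, the risk rating produced by MRM might map to a supervision cadence along these lines (the tiers and intervals below are hypothetical, for illustration only):

```python
# Hypothetical mapping from MRM risk rating to re-validation cadence.
REVIEW_CADENCE_DAYS = {"high": 30, "medium": 90, "low": 365}

def review_interval(risk_rating: str) -> int:
    """Days between model reviews for a given risk rating; an
    unrecognized rating defaults to the strictest cadence."""
    return REVIEW_CADENCE_DAYS.get(risk_rating.lower(), 30)
```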

Operationalizing AI Doesn’t Have to Be Difficult

Getting AI operationalized can be difficult, but it doesn’t have to be. A successful AI implementation has many building blocks: numerous models and components must be built and integrated to work together. The key to overcoming a challenge like this is a pre-packaged product.

WorkFusion’s AI Digital Workers are different from RPA-based automation tools. The Digital Workers are pre-programmed, pre-trained digital embodiments of the operations analysts needed to perform AML and sanctions compliance at financial institutions. They provide a blend of cognitive thinking, intelligent document processing (IDP), automation, and role-specific training—all operating in unison. They deliver the scalability, consistent quality, and speed necessary to eliminate sanctions alert and KYC backlogs. And because each Digital Worker can automate an entire role in AML/Sanctions/KYC compliance and immediately alleviate staffing challenges, they quickly place any FI into good standing with regulators while boosting operational efficiency.

Click here to meet our AI Digital Workers and request a demo. You can watch the full webinar here.
