How to Boost Automation Rates Using Targeted AI Plus an LLM

Maximizing automation rates is the key to cost-efficient compliance operations at banks, FinTechs and other financial services businesses. That drive is what has made WorkFusion AI Digital Workers popular: they leverage AI, ML and multiple other technologies to maximize automation, reducing manual work, enhancing quality, increasing speed, and expanding compliance team capacity. So, it's little wonder that existing and prospective customers want to know how we can automate even more using the latest that technology has to offer – AI and LLMs (large language models) in combination.

In this post, we explain how targeted AI plus an LLM can improve automation rates and what types of improvement levels you can expect to gain from this powerful combination.

To demonstrate the potential automation rates you can achieve, we will use two real examples – the first from a FinTech customer and the second from a traditional bank customer. Before we dive in, it's important to understand the context for automation. Traditional banks have a low risk appetite, many compliance operations personnel, and a desire for straight-through processing only when confidence levels are extremely high. FinTechs also desire minimal risk. However, they tend to have fewer compliance operations personnel, leading them to seek a higher rate of straight-through processing for their transactions. As a result, FinTechs strive for higher automation rates. Both traditional banks and FinTechs boost their automation rates when combining AI and LLMs.

Example 1: AI plus an LLM at the FinTech

This FinTech customer uses the AI Digital Worker Evelyn to perform KYC (know your customer) checks to ensure that the FinTech is not doing business with risky individuals or organizations. When the FinTech's screening tool sends an alert to the compliance team about potential adverse news concerning a prospective or existing client, Evelyn leverages "her" AI ensemble model to determine whether the adverse media alert is a false alarm (a false positive) or a legitimate sign of risk. In any case where Evelyn is 90% certain that the alert is a false positive, she automatically dispositions the alert as "False Positive," provides a written explanation, and moves on to the next alert for review.
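The threshold logic described above can be sketched in a few lines. This is a minimal illustration, assuming the ensemble model exposes a false-positive probability per alert; the function and constant names here are illustrative, not WorkFusion's actual API.

```python
FALSE_POSITIVE_THRESHOLD = 0.90  # the 90% confidence bar described above

def disposition(fp_confidence: float) -> str:
    """Disposition an alert given the model's false-positive confidence.

    fp_confidence is the ensemble model's estimated probability
    that the alert is a false alarm (illustrative name).
    """
    if fp_confidence >= FALSE_POSITIVE_THRESHOLD:
        # Confident false alarm: close automatically with a written explanation.
        return "False Positive"
    # Not confident enough: hold the alert for further review.
    return "Needs Review"
```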

Under this approach, Evelyn has been able to automatically disposition 70% of all alerts for the FinTech. Said another way, the FinTech obtains straight-through processing (STP) for 70% of all alerts at a 90% confidence level for accuracy. Yet, the company wanted even higher automation rates without increasing risk. They sought to avoid hiring additional personnel as their business grew and knew an even higher STP rate could help them achieve that goal. So, to gain higher automation while maintaining the 90% confidence level, WorkFusion added an LLM to the mix.

How it works 

With an already-impressive 70% automation rate for alert disposition, there were still 30% of alerts that could potentially be dispositioned automatically and avoid human review. This 30% of alerts gets fed into an LLM for a second opinion. To gain the most informed opinion possible from the LLM, Evelyn is pre-loaded with targeted prompts that help the LLM assess the alert details. Prompts include such questions as "Who is the subject of the article?", "How do you know that they are the subject?", and "How do you know their activity is a crime?"

After the extensive prompting, the LLM's assessment becomes the answer, and that answer can only be one of two things. It can be an assessment of "False Positive," in which case Evelyn dispositions the alert as "False Positive" and provides a detailed explanation from her review plus that of the LLM. Alternatively, the LLM can provide an assessment of "True Positive." For all true positives, Evelyn automatically escalates the alert (with a full narrative) to a human for review.
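The two-stage flow above can be sketched as follows: the ensemble model handles high-confidence alerts first, and only the remainder goes to the LLM for a second opinion. The LLM call is a stub, the prompt wording is paraphrased from the post, and all names are illustrative assumptions rather than WorkFusion's actual implementation.

```python
from typing import Callable, List

# Paraphrased examples of the targeted prompts described in the post.
PROMPTS = [
    "Who is the subject of the article?",
    "How do you know that they are the subject?",
    "How do you know their activity is a crime?",
]

def review_alert(
    fp_confidence: float,
    llm_assess: Callable[[List[str]], str],
    threshold: float = 0.90,
) -> str:
    """Route an alert through the two-stage AI + LLM review (illustrative)."""
    # Stage 1: the ensemble model auto-dispositions high-confidence false positives.
    if fp_confidence >= threshold:
        return "False Positive"
    # Stage 2: ask the LLM for a second opinion using the targeted prompts.
    verdict = llm_assess(PROMPTS)  # expected: "False Positive" or "True Positive"
    if verdict == "False Positive":
        return "False Positive"
    # True positives are escalated, with a full narrative, to a human reviewer.
    return "Escalate to Human"
```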

Using this targeted AI + LLM approach, the FinTech has been able to reach a sustained automation rate of 95%. That means 95% STP for all alerts, with only 5% requiring human review.

Example 2: AI plus an LLM at the traditional bank

This bank uses the AI Digital Worker Tara to perform transaction screening alert review, wherein Tara automates the dispositioning of transaction alerts to protect the bank from processing payments associated with sanctioned individuals and organizations. As with Evelyn in the FinTech example, Tara leverages "her" AI ensemble model to determine whether alerts are false positives or true positives. Yet, unlike the FinTech, this traditional bank uses a two-person checking system, also called a 4-eye check, and requires a confidence level of 99%. The bank wanted to maximize STP only for the first person in the two-person system, always leaving a second human in the loop to ensure ultimate confidence in all decisions.

Under this approach, Tara has been able to automatically disposition 50% of all alerts previously destined for the first checker. Said another way, the bank achieved straight-through processing (circumventing the first human reviewer) for 50% of all alerts at a 99% confidence level.
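The 4-eye routing described above can be sketched as below: the digital worker stands in for the first checker only at very high confidence, and a second human always signs off. This is a hedged illustration under stated assumptions; the names are invented for the sketch and do not reflect the bank's actual systems.

```python
from typing import List

FIRST_CHECK_THRESHOLD = 0.99  # the bank's 99% confidence requirement

def route_alert(fp_confidence: float) -> List[str]:
    """Return the reviewer sequence for a transaction-screening alert."""
    if fp_confidence >= FIRST_CHECK_THRESHOLD:
        # Tara acts as the first checker; the second human still reviews.
        return ["digital_worker", "human_second_checker"]
    # Below threshold: both checks are performed by humans.
    return ["human_first_checker", "human_second_checker"]
```

Note that in both branches the second reviewer is human, matching the bank's requirement that a person always remains in the loop.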

Once the bank added the use of an LLM to the process, the automation rate climbed from 50% to 62% while maintaining the 99% confidence level.

Note that the LLM process works exactly the same way as it does at the FinTech, using targeted prompts developed by WorkFusion.

Choice exists for selecting an LLM

As the use of LLMs gains mainstream traction across industries, users are finding that certain LLMs perform better for different use cases. For this reason, WorkFusion now enables all customers to test a variety of LLMs (including ChatGPT, LLaMA, Gemini, Mistral, and Claude) before establishing a standard operating procedure.

These are exciting times for compliance teams seeking to maximize automation without increasing risk. To learn more, visit
