Lane Cipriani – TIQK CTO and Co-Founder; Imogen Maley – TIQK Product Manager, Risk & Compliance; Amir Shareghi Najar – TIQK Lead Data Scientist

Our vision at TIQK is to automate compliance in organisations

Over the past couple of years the market has ridden waves of hype and disillusionment about the potential for Artificial Intelligence (AI) to transform the way organisations assess and improve regulatory compliance.

At TIQK we tried — and initially failed — to build a pure AI regulatory technology (RegTech) system that audits written financial advice.

Since then, we’ve come to believe that the use of AI in compliance isn’t a simple Pass-or-Fail question. We’ve learnt that a blend of AI and more traditional approaches is the best way to augment human experts in the audit process.

This article is a more technical dive into the problems we’re trying to solve; where we’ve found AI works well — and where it doesn’t; and those compliance tasks that we believe will continue to rely on human expertise for some time yet.

A problem worth solving

Every year Australian financial advisers write millions of financial advice documents called Statements of Advice (SoAs) for their clients.

The financial services businesses they are licensed through (Australian Financial Services Licensees, or AFSLs) don’t have a clear measure of the risk in those documents. These SoA files are complex, lengthy, subject to legislation and regulations, and require expert skills to audit. Industry practice is to audit only 5-10% of all SoAs produced.

The recent Australian Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry has raised the profile of RegTech as a way to minimise further legal, financial, brand, and community impact from bad financial advice.

Risk in the financial advice process

The financial advice process in Australia runs from the adviser preparing an SoA for a client, to the client receiving the advice, to a compliance auditor reviewing a sample of SoAs after the fact.

The areas of risk in this process include:

  1. The financial adviser editing the SoA before it is sent to the client, potentially creating inconsistencies, omissions, or errors;
  2. It can be physically impossible for human experts to manually review every SoA – yet their organisation is liable for any compliance issues; and
  3. A large amount of time can pass between the client receiving an SoA and when an auditor has capacity to review it.

Auditing SoAs is an interesting challenge

Auditing an SoA means more than searching for regulatory hot words.

As an official record of the advice process, an SoA is a complex, self-contained document that contains a number of related pieces of information: the client’s circumstances, goals, and objectives; the strategies and products the adviser recommends; and the statements and disclosures required by law.

From a technical perspective it presents a number of challenges:

  • The information inside the file is written in both structured (tables, lists) and unstructured (free text) formats;
  • The SoA may mention more than one person, e.g. a married couple, and the analysis must keep track of the information that applies to each person;
  • Every financial services business uses a different SoA template and writing style;
  • Financial advisers can (and do) deviate from the SoA template to personalise it;
  • An SoA can be hundreds of pages long; and
  • Even minor inconsistencies, errors and omissions can lead to severe consequences.

Manually auditing a single SoA is expensive and time-consuming. Manually auditing thousands is near-impossible.

As technologists, it’s tempting to think about reducing risk and increasing the efficiency of audits by removing humans from the process.

However, we believe that financial advisers will play an important role in the advice process for the foreseeable future. They build trusted relationships and develop a nuanced understanding of their clients’ needs, converting them into actions and better outcomes. Their communication and interpersonal skills are often their most important qualities. They are in the best position to determine the mix of software and human-curated strategies to achieve those outcomes.

So we’re left with an interesting technical challenge: how to accurately and efficiently audit SoA files that are edited by imperfect humans.

AI, in a nutshell

Artificial Intelligence (AI) technologies have been around since the 1950s, passing through periods of excitement and disillusionment, and of hype and misinformation. AI is a broad umbrella term that covers many technologies. Some of the most well-known include:

Natural Language Processing (NLP)

Technologies that extract useful information from, understand, and generate spoken and written language.
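
To make this concrete, here’s a minimal sketch of the kind of extraction NLP enables, using the open-source spaCy library. This is an illustration only, not a description of TIQK’s internal stack:

```python
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

text = ("We recommend that Jane Citizen contribute $10,000 per year "
        "to her superannuation fund.")

# Named-entity recognition: pull out the people and amounts mentioned.
for ent in nlp(text).ents:
    print(ent.text, ent.label_)  # e.g. "Jane Citizen PERSON", "10,000 MONEY"
```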

Machine learning

Systems that can be trained with historical data to classify information, and to make assessments, predictions, and recommendations.

This area also includes neural networks and deep learning, which mimic the way the brain processes information and are effective at complex tasks like recognising handwriting and objects in photographs.
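
As a toy illustration of “training on historical data”, here’s a minimal scikit-learn classifier that learns labels from a handful of hand-labelled snippets. The snippets, labels, and task are entirely invented, and a real system would need far more data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical, human-labelled snippets (invented for illustration).
snippets = [
    "Fees and charges are disclosed in section 7.",
    "This product replaces your existing fund.",
    "No fee information was provided.",
    "Replacement product costs are compared below.",
]
labels = ["disclosure", "replacement", "disclosure", "replacement"]

# Train on the historical examples, then classify unseen text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(snippets, labels)

print(model.predict(["Replacement product fees are compared."]))
```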

Our first attempt at AI failed

The TIQK Minimum Viable Product (MVP) we launched in 2017, the one we tell people about, was really our second attempt.

AI works well with unstructured information like that found in regulations and SoAs. Combined with low-cost access to cloud-based AI services, we thought solving the SoA audit problem would be straightforward.

We were wrong.

A product must solve a problem for its target audience. At TIQK our target audience is auditors and compliance experts. This is an exacting and risk-averse group of people, and our first experiments with an AI-only system taught us that:

  • Probabilities aren’t helpful. We used approaches like machine learning classification and multivariate regression analysis that produced results with a confidence score, e.g. 78% confidence that text [x] is compliant with the law. Our test audiences told us that anything other than black-or-white results just creates uncertainty and more work for them (the sketch after this list contrasts the two styles of answer).
  • It didn’t nail the basics. AI is great at certain types of analysis but using it on the wrong tasks can produce uncertain (or worse, incorrect) results — like checking if a financial adviser’s license is correct and active on the date that they produced an SoA. If you can’t convince your audience that you can do the basics well, they won’t trust you with more complex tasks.
  • Lack of traceability. AIs tend to operate as black boxes that don’t justify their outcomes. You can demonstrate world-class AI training and test processes, but compliance experts really like traceable decisions, i.e. understanding the who, what, where, and why of each risk that the system flags. It’s even more important when you’re asking them to put their faith in a new technology.
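
To make the contrast concrete, here’s an illustrative sketch (all values and checks invented) of the probabilistic answer our ML prototype gave versus the black-or-white, traceable answer auditors asked for:

```python
# Our early ML prototype's style of answer: a confidence score the
# auditor still had to interpret.
ml_result = {"check": "fee disclosure present", "confidence": 0.78}

# The style of answer auditors asked for: black-or-white and traceable.
def licence_active(start: str, end: str, soa_date: str) -> bool:
    """Hard rule: was the adviser's licence active on the SoA date?"""
    return start <= soa_date <= end  # ISO dates compare correctly as strings

rule_result = {
    "check": "licence active on SoA date",
    "result": licence_active("2017-01-01", "2019-12-31", "2018-06-30"),
    "why": "licence period 2017-01-01 to 2019-12-31 covers 2018-06-30",
}

print(ml_result)
print(rule_result)
```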

Back to the drawing board

We decided to put our AI efforts on the shelf and rebuild our MVP as a traditional expert system. These are systems that make decisions using hard-coded rules; the big consulting firms have been exploring them for tax auditing since the 1990s.

Expert systems are fast, accurate, and you can verify the logic behind every result. An expert system of 550+ rules still runs at the core of every TIQK SoA audit. It has consistently surprised us with how effectively and accurately it can automate many SoA audit tasks (a simplified sketch follows the list below), like:

  • Regulatory “checklists”, e.g. testing that the SoA contains correct license information, statements, and disclosures;
  • Asset Allocation Variance tests;
  • Product Replacement tests;
  • When the SoA lists more than one person (e.g. a married couple), keeping track of which information and advice relates to which person; and
  • Even extending it to perform the world’s first Best Interest Duty test for SoA files — something that we were told would be impossible to automate.
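
For a feel of how such rules work, here’s a highly simplified sketch of a rule engine running two of the checks above: a licence check and an asset allocation variance test. The field names, thresholds, and data are all invented for illustration; this is not TIQK’s implementation:

```python
from typing import Callable

# A rule takes the extracted SoA data and returns (passed, reason).
Rule = Callable[[dict], tuple[bool, str]]

def licence_check(soa: dict) -> tuple[bool, str]:
    """Regulatory checklist item: licence active on the SoA date."""
    lic = soa["licence"]
    ok = lic["start"] <= soa["date"] <= lic["end"]
    return ok, f"licence {lic['number']} active on {soa['date']}: {ok}"

def allocation_variance_check(soa: dict) -> tuple[bool, str]:
    """Flag if the recommended asset allocation strays more than
    10 percentage points from the client's target profile."""
    worst = max(abs(soa["recommended"][k] - soa["target"][k])
                for k in soa["target"])
    return worst <= 10, f"max allocation variance: {worst} points"

RULES: list[Rule] = [licence_check, allocation_variance_check]

soa = {
    "date": "2018-06-30",
    "licence": {"number": "001234", "start": "2017-01-01", "end": "2019-12-31"},
    "target": {"equities": 60, "bonds": 40},
    "recommended": {"equities": 75, "bonds": 25},
}

for rule in RULES:
    passed, reason = rule(soa)
    print("PASS" if passed else "FLAG", "-", reason)
```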

And interestingly, the data analytics that the expert system produces have proven popular with compliance teams, enabling them to pinpoint risk hotspots and trends on a scale not previously possible.

One technology to do it all: a pipe dream

Unfortunately, expert systems have their limitations:

  • They are time-consuming to build: you have to code every rule for every possible combination of compliance test, written-language style, and data point extracted from the SoA file; and
  • They are poor at dealing with variations in data and unexpected scenarios, e.g. financial advisers who change the way they write advice over time.

More formally (thanks, Richard Susskind): it’s challenging to build an expert system that is simultaneously transparent (it can explain its reasoning), heuristic (it can reason using both formal and informal knowledge), and flexible (its underlying knowledge can be easily modified as needed).

So we decided to revisit our use of AI, this time adding it on top of our expert system. It turns out this is not an unusual approach: hybrid systems that combine rule-based engines with machine learning are common across industry.

Even better, all the effort we put into building the expert system wasn’t wasted. By manually documenting the various ways that financial advice is presented in SoAs, we ended up in the perfect position to build and train an AI system that can understand and assess risk.

Challenge accepted: detecting cookie-cutter advice

Mira is the name of our AI. It is designed to conduct auditing tasks that are challenging for both human experts and expert systems.

One of the most common requests we’d heard from compliance teams was to automate the difficult task of detecting “cookie-cutter” financial advice in SoAs. This is when a financial adviser lists the same client goals and objectives (or recommends the same financial strategies) in multiple SoAs for different clients. It is one of the Key Risk Indicators listed by the industry regulator (ASIC) and is challenging to do manually because it requires expert skills to review every SoA produced by every adviser.

Mira leverages the data points and compliance results captured by the expert system in every SoA audit, uses NLP to understand the client goals and recommended strategies, and applies machine learning to accurately classify them against every financial adviser, even if the adviser changes the way they write over time to avoid detection.

When similar goals (or recommended strategies) are detected in multiple SoAs produced by an adviser, Mira flags it as an issue for a human auditor to investigate.
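
As an illustration of the idea (not Mira’s actual models, which aren’t public), here’s a minimal sketch that flags near-duplicate client goals across an adviser’s SoAs using generic TF-IDF cosine similarity:

```python
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# "Client goals" text extracted from three SoAs by the same adviser.
goals = [
    "Grow retirement savings and reduce fees over 10 years.",
    "Grow your retirement savings and reduce fees over ten years.",
    "Preserve capital and generate income in retirement.",
]

vectors = TfidfVectorizer().fit_transform(goals)
sims = cosine_similarity(vectors)

THRESHOLD = 0.7  # invented cut-off for "suspiciously similar"
for i, j in combinations(range(len(goals)), 2):
    if sims[i, j] >= THRESHOLD:
        print(f"FLAG: SoA {i} and SoA {j} list near-identical goals "
              f"(similarity {sims[i, j]:.2f}): refer to a human auditor")
```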

Next up: testing appropriateness of advice

We can adapt those same expert system + Mira technologies to automate another complex audit task: testing for alignment between a client’s circumstances, goals and objectives, and the financial strategies and products recommended by their financial adviser.

In other words, is the advice presented by the adviser appropriate for the client?

A misalignment between the underlying data in any of these components in an SoA is flagged for further investigation by a human auditor.
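
As a hypothetical illustration of one narrow alignment check, the sketch below tests whether each recommended product’s risk level fits the client’s stated risk tolerance. The risk scale, field names, and data are all invented:

```python
# Invented ordinal risk scale for illustration.
RISK_SCALE = {"conservative": 1, "balanced": 2, "growth": 3, "aggressive": 4}

client = {"name": "Jane Citizen", "risk_tolerance": "balanced"}
recommendations = [
    {"product": "Diversified Index Fund", "risk": "balanced"},
    {"product": "Leveraged Equity Fund", "risk": "aggressive"},
]

tolerance = RISK_SCALE[client["risk_tolerance"]]
for rec in recommendations:
    if RISK_SCALE[rec["risk"]] > tolerance:
        # Misalignment between client circumstances and recommendation:
        # flag for a human auditor rather than auto-failing the SoA.
        print(f"FLAG: {rec['product']} (risk '{rec['risk']}') exceeds "
              f"{client['name']}'s '{client['risk_tolerance']}' tolerance")
```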

Can technology replace an auditor yet?

Today we automate a number of mundane and complex tasks in the SoA audit process.

We predict that it will soon be possible to automate common workflows like pre-vet (reviewing financial advice before it is given to the client, usually done with new financial advisers or those under a remediation plan) and standard annual hindsight reviews.

However, there are some areas where today’s technology is no match for a human expert:

  • A comprehensive understanding of the client: Advisers build an understanding of clients over time and in many ways: paper and online forms, face-to-face, handwritten notes, phone calls, emails, and more. That data is not always well structured or unified in back-office systems. An auditor will often consider both the stated (explicit) and unstated (implied) needs of the client from all these sources when deciding if the financial advice provided is appropriate for the client.
  • ‘Intuitive’ assessments: Auditors often test if an adviser has a reasonable basis for the recommendations they’ve made. Creating an automated reasonableness test seems like a large yet solvable problem. You might ingest legislation, regulator scenarios, complaints data, tribunal outcomes, codes of practice, organisation policies, and more to generate an AI model capable of accurately assessing whether a piece of advice is reasonable. We’ve seen promises of systems like this, but today, and for most organisations, this remains firmly in the domain of human experts.
  • Coaching and development: The International Standard for Compliance Management Systems highlights the importance of measuring and improving competence through education, training, mentoring, and work experience. Systems like TIQK can be used to drive more efficient, personalised adviser training plans, but right now we still rely on compliance experts to design and deliver great learning and development strategies.
  • Moving from observations to recommendations: A natural extension for platforms like TIQK is to generate predictions of future risk based on historical observations and to recommend actions to mitigate risk. AI technologies are well suited to this. However, designing and running predictive models and recommendation engines requires exceptional data and significant domain knowledge – and results can vary wildly with even minor inconsistencies in the underlying models and data. And there’s an open question about an AI’s role and liability for any recommendations it makes…

Conclusion

There’s a worthwhile problem to solve in efficiently auditing written financial advice in Australia, for government, financial institutions, and the community alike.

Despite the hype (and disillusionment), our experience shows that AI combined with more traditional technologies — and human experts — can transform the way organisations assess and improve regulatory compliance.

Today, there are a number of tasks in the financial advice audit process that only a human can do. But that gap is closing fast.

We believe that compliance experts are becoming increasingly comfortable delegating both mundane and more complex audit tasks to technology, in turn freeing them to focus on intuitive analysis, coaching and development, and strategy.