The Good and Bad of AI in Financial Services

The rise of artificial intelligence (AI) has brought about a range of reactions in the financial services industry, from confusion and worry to excitement. As I interact with other data scientists, fraud analysts and risk managers, the benefits and risks of AI and its subset, Generative AI (GenAI), have become a common topic of discussion. One observation from these conversations is the conflation of terms like Machine Learning (ML), AI and GenAI.

I started working as a data scientist in the late 1990s. At that time, logistic regression was the predominant method in credit risk and fraud analysis[1]. AI typically referred to models that updated themselves. In a fraud use case, for example, customers reported fraud outcomes for a consumer or account on a timely basis, the model automatically re-optimized on a recurring schedule to incorporate that feedback into the algorithm, and the retrained model could then flag similar schemes as high risk.

While these models existed, they were rare because of the additional regulatory and governance requirements placed on statistical models in lending. All other models, which lacked this automated feedback loop, were associated with ML and involved human developers and tight oversight of the model creation process.

Defining the Definitions

The definition of AI has become broader with time. According to Columbia University School of Engineering, AI models “mimic and go beyond human capabilities to provide information or automatically trigger actions without human interference.”

Terms like “without human interference” introduce some ambiguity. Does the involvement of a human coder or review panel count as human interference, making the model non-AI? Even the most complex computer-created models could be generated with paper, pencil and a calculator, given enough time. However, key elements of statistical model development include (1) variable selection, (2) variable binning, (3) performance classification, (4) weights optimization and (5) calibration, and any or all of these can be automated. By that measure, even a logistic regression conforms to the AI definition if it includes at least one computer-determined step.
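
To make the point concrete, here is a minimal sketch, assuming scikit-learn and a purely synthetic dataset, of a logistic regression in which two of the five steps above (variable binning and weights optimization) are computer-determined rather than hand-crafted; the parameters and data are illustrative only.

```python
# Minimal sketch (scikit-learn, synthetic data): even a logistic regression can
# delegate scorecard steps to the machine -- here, binning and weight fitting.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5_000, n_features=10, random_state=42)

model = Pipeline([
    # (2) variable binning: cut points chosen by the algorithm, not an analyst
    ("bins", KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="quantile")),
    # (4) weights optimization: coefficients fit numerically from the data
    ("logit", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("Estimated bad-outcome probability:", model.predict_proba(X[:1])[0, 1])
```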

Accordingly, our organization has its own prescribed definition: “Machine-based systems which infer solutions to set tasks and have a degree of autonomy.” Given this definition, which is consistent with Columbia University’s, it is safe to say that computer-derived models are well established in financial services. Logistic regression, gradient boosting, random forests and neural nets all satisfy the AI requirement.

AI Subcategories

We’ve already defined AI and ML. GenAI, a subset of AI, has received a lot of press of late. While it feels new, it won’t surprise you to hear from this data scientist that its core elements have been around in some form for a while.

To best understand GenAI, it’s helpful to define a few categories of AI.

GenAI is a natural extension of a larger area of AI known as “Natural Language Processing” (NLP) and of “Large Language Models” (LLMs). These language models have been in use for years: in sentiment analysis of product or restaurant reviews, to autocomplete internet searches, to drive chatbots, to categorize unstructured data in attribute development, or to parse text fields in police reports.
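
As a simple illustration of this pre-GenAI style of NLP, the sketch below (assuming scikit-learn and a handful of invented review texts and labels) trains a bag-of-words sentiment classifier of the kind that has run in production for years.

```python
# Toy sketch (scikit-learn, invented reviews): classic bag-of-words sentiment
# analysis -- an established NLP use case that long predates GenAI.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["great service, fast approval", "hidden fees, terrible support",
           "friendly staff and easy app", "app keeps crashing"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

sentiment = make_pipeline(TfidfVectorizer(), LogisticRegression())
sentiment.fit(reviews, labels)
print(sentiment.predict(["slow support and extra fees"]))  # likely [0]
```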

What’s novel about GenAI is that it adds a bit of creativity and randomness that better replicates human conversation in virtually every language on the globe. While older versions of LLM/NLP interpreted unstructured text, GenAI has mastered the creation of meaningful text. While more traditional analytic methods provide the same outputs if run twice with the same inputs, GenAI will provide different yet typically consistent outputs if run twice with the same prompts.
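
The toy sketch below (not any vendor’s implementation) illustrates the distinction: a deterministic scoring function returns the same output every time it sees the same input, while generative decoding samples from a probability distribution over next tokens, so the same prompt can yield different answers from run to run.

```python
# Toy illustration of deterministic scoring vs. generative sampling.
import numpy as np

def deterministic_score(features):
    # Traditional analytic model: same inputs -> same output, every run.
    return sum(features) * 0.1

def sample_next_token(probs, temperature=1.0, rng=None):
    # Generative decoding: draw the next token from a (temperature-scaled)
    # probability distribution, so repeated runs can differ.
    rng = rng or np.random.default_rng()
    tokens = list(probs)
    logits = np.log(np.array([probs[t] for t in tokens])) / temperature
    p = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(tokens, p=p)

next_token_probs = {"approve": 0.5, "review": 0.3, "decline": 0.2}  # hypothetical
print(deterministic_score([1, 2, 3]), deterministic_score([1, 2, 3]))  # identical
print([sample_next_token(next_token_probs) for _ in range(5)])         # varies
```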

Extractive AI refers to algorithms designed to extract data points from a training dataset. These models, often referred to as ML models, can extract elements or categories from a dataset without an explicit performance tag – commonly referred to as unsupervised models. Alternatively, they can be built using a specific objective function that categorizes some outcome as desirable (non-fraud) vs. undesirable (fraud).

If run with the same parameters on the same samples, extractive models will typically extract the same information. GenAI, on the other hand, lets users interpret and create contextually relevant content based on provided structured or unstructured inputs and a pre-trained notion of the information available to it[2].
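
Here is a minimal sketch of that reproducibility, assuming scikit-learn and synthetic transaction features: an unsupervised (extractive) clustering model run twice on the same sample with the same parameters extracts identical cluster assignments.

```python
# Minimal sketch (scikit-learn, synthetic data): extractive/unsupervised models
# are reproducible when the sample, parameters and random seed are held fixed.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
transactions = rng.normal(size=(1_000, 4))  # synthetic transaction features

run_1 = KMeans(n_clusters=3, random_state=7, n_init=10).fit_predict(transactions)
run_2 = KMeans(n_clusters=3, random_state=7, n_init=10).fit_predict(transactions)
print("Identical cluster assignments:", np.array_equal(run_1, run_2))  # True
```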

One of the more important impacts of GenAI is that it has democratized AI and analytics, which essentially means that anyone with an internet connection can use GenAI to solve problems that require insight or creativity. OpenAI (ChatGPT), Google (Gemini, previously known as Bard), Anthropic (Claude) and others created these now well-received software packages. A simple internet search generates myriad examples.

Historically, algorithm development at banks and other providers and vendors required an advanced degree in data science, statistics, computer science or a similar field to incorporate statistical models into workflows. GenAI essentially makes algorithm development available to anyone, even those with limited training. With the right prompts, it can ostensibly create excellent starters for marketing campaigns, internal documentation or externally facing content for use in engagements like chatbots or risk assessments.
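
As a hedged sketch of what that looks like in practice, the example below assumes the OpenAI Python client; the model name and prompt are illustrative only, and any draft it produces would still need the human oversight and documentation discussed below.

```python
# Hedged sketch (assumes the OpenAI Python client and an API key in the
# environment); the model name and prompt are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You draft internal model documentation."},
        {"role": "user", "content": "Draft an outline for documenting a new fraud score."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)  # a starter draft, not a finished artifact
```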

This democratization brings risks in a regulated industry like financial services. Given regulatory regimes, it is important to maintain adherence to several guidelines.[3] Many of these regulations include specific requirements for statistically derived models. Moreover, as government entities across the globe consider and pass AI legislation, these rules will add further requirements for safe and sound lending.

Do These Models Require Human Oversight? Yes. Yes, They Do

It’s important to demonstrate that analytically developed products are empirically derived and statistically sound. This means they are fair, accurate, transparent, well documented and free of bias. Decentralized application of statistical techniques like GenAI will bring risks if not incorporated into an institution’s evolving regulatory processes. While GenAI has facilitated the generation of these models across an organization, it does not excuse the institution from these regulatory requirements. Indeed, no regulation has carve-outs for GenAI relative to other approaches.

There are numerous opportunities and risks in using AI in general, and GenAI in particular. Data science combined with AI, for example, can function as a force multiplier, a concept from military applications in which a combination of factors accomplishes more than any single factor could alone. Just as physical mining replaced picks and shovels with earth-moving equipment, data mining can replace frequencies and univariate analyses with tools that multiply analytic capability and insight.

Force-multiplying statistical models can automate complex processes, scaling the ability to create customized deliverables that serve specific needs or insights. These models are powerful, but despite these benefits, it is also vital to closely monitor the strengths and weaknesses of AI models with respect to accuracy, bias and other elements.
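
A minimal sketch of the force-multiplier idea, assuming scikit-learn and synthetic data: a single gradient-boosting fit surfaces a multivariate ranking of predictive features that would take many one-variable-at-a-time frequency analyses to approximate by hand.

```python
# Minimal sketch (scikit-learn, synthetic data): one model fit replaces many
# manual univariate analyses by ranking features on learned importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=5_000, n_features=15, n_informative=5, random_state=1)
gbm = GradientBoostingClassifier(random_state=1).fit(X, y)

ranked = sorted(enumerate(gbm.feature_importances_), key=lambda kv: kv[1], reverse=True)
for idx, importance in ranked[:5]:
    print(f"feature_{idx}: {importance:.3f}")  # top multivariate drivers
```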

It’s also imperative that models are grounded in the context for which they are applied. For instance, in a small community experiencing a widespread but locally isolated health issue, does it make more sense to use algorithms trained on the treatments available at a community hospital or at a leading global research hospital? GenAI and AI models are far more likely to be built on data from the latter institution type, creating some concerns if implemented at smaller community hospitals.

Responsible AI is vital to the successful implementation of AI. LexisNexis® Risk Solutions has developed a set of practical guidelines and checkpoints for developing and supporting products and services under a Responsible AI framework, and has established strategic guardrails for the permissible use of data science. Our organization is part of RELX – collectively, we adhere to the RELX Responsible AI Principles. While these principles are not replacements for sound regulatory support, they define what good looks like and provide differentiation, especially in regulated markets.

As AI regulation becomes increasingly likely, I’ve generally found that adherence to these principles puts us in a better position to respond to customer product inquiries. By applying this higher standard internally, we can be sure that our approach and use of AI meets external standards from the outset.

In short, we believe that for all analytically derived solutions we should:

  • Consider the real-world impact of solutions on people.
  • Take action to prevent the creation or reinforcement of unfair bias.
  • Explain how our solutions work.
  • Create accountability through human oversight.
  • Respect privacy and champion robust data governance.

Risks

Like any good thing, these tools are susceptible to being used for nefarious purposes, and one significant concern is their use by bad actors. Just as AI could be used to help cure a major disease like cancer, there is concern that it could also be used to create diseases. In the fraud world, these tools have already made phishing campaigns and deepfakes more convincing.

It’s important to remember that AI is not the problem here – it’s bad actors using AI to execute and often streamline their attacks. Phishing and deepfakes aren’t new; they are just more convincing when employing AI. We, as an industry, must be more mindful of these kinds of risk signals before over-responding to a threat.

There is no silver bullet anti-fraud solution anymore. For fraud detection workflows to be effective, we must take a multilayered defense approach. While it’s simple enough to simulate a piece of personally identifiable information (PII), it is much more challenging to simulate a full identity. It is vital to cast the net over the entire customer journey so that faking multiple identity elements becomes harder still.

While anomalies in PII such as name, address, social security number and date of birth are still important elements of a fraud detection strategy, it’s equally important to include digital (mobile identity, email identity) and behavioral elements in fraud assessments. I like to think of this as a data moat around an identity: it is far harder for a fraudster to simulate multiple interlocking channels of information about an identity than to fake a single identity element.
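
The simplified sketch below illustrates the data-moat idea with invented field names and weights (not any production scoring logic): each independent layer a fraudster fails to fake raises the overall risk score, so beating one check is never enough.

```python
# Simplified sketch (invented fields and weights): layering PII, digital and
# behavioral signals so a single simulated element cannot defeat the assessment.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    pii_consistent: bool        # name / address / SSN / DOB agree across sources
    device_recognized: bool     # mobile identity previously linked to the customer
    email_tenure_years: float   # age of the email identity
    behavior_typical: bool      # session behavior matches the customer's history

def layered_risk_score(s: IdentitySignals) -> float:
    """Each missing or inconsistent layer adds risk; no single layer decides."""
    score = 0.0
    score += 0.0 if s.pii_consistent else 0.30
    score += 0.0 if s.device_recognized else 0.25
    score += 0.0 if s.email_tenure_years >= 2 else 0.20
    score += 0.0 if s.behavior_typical else 0.25
    return score

applicant = IdentitySignals(True, False, 0.5, True)
print("Layered fraud risk:", layered_risk_score(applicant))  # 0.45 -> route to review
```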

Summary

AI is here and it isn’t going anywhere. It’s been in use in financial services for decades and it’s important to appreciate the capabilities and limitations of these approaches. It is equally important to be a steward of the data, the solution and your customers. As we use AI to build data moats around consumers, these techniques can decrease friction for good actors, while increasing friction for bad actors – which is the ultimate goal for effective application of AI for fraud detection.

[1] There were exceptions, most notably HNC’s application of neural nets in the transaction fraud space; HNC has since been acquired by FICO.

[2] Generative AI is more than capable of using live inputs as a learning dataset. When unconstrained, these sorts of models have historically been “chatbot”-type elements that struggle to differentiate fact from fiction in the inputs without human oversight.

[3] Examples from the US include the Fair Credit Reporting Act, the Equal Credit Opportunity Act, the Gramm-Leach-Bliley Act and Unfair, Deceptive, or Abusive Acts or Practices (UDAAP) rules. There are similar regulations in many countries and regions across the globe.
