
Gen AI and Its Impact on Fraud and Identity Verification

By James Bruni, GBG IDology

The technology used to combat identity theft and limit fraud is constantly evolving. Unfortunately, bad actors also use emerging technologies to create new opportunities for fraud. Generative AI (Gen AI) has already had a massive impact on the threat landscape and is set to create more complications in the future.

Deloitte predicts that Gen AI could push fraud losses in the US to $40 billion by 2027, up from $12.3 billion in 2023. A recent global fraud study by GBG IDology found that executives ranked generative AI as the biggest trend in identity verification over the next three to five years.

Gen AI presents the fintech industry with a new threat because it allows fraudsters to easily fabricate realistic synthetic identities, text and email messages, and other documents used to carry out fraudulent activities. The key to mitigating this risk is to get ahead of it with awareness, knowledge, and strategic planning that minimize its impact.

The Rise of Generative AI Threats in 2024

Since ChatGPT’s release in late 2022, the Gen AI market has boomed, and more people than ever have access to AI and machine learning tools. We are still in the early adoption stages of this technology, and more sophisticated applications appear daily.

Deepfakes and other fraud technologies are available on the dark web for small fees, giving more threat actors access to sophisticated AI tools. Gen AI also makes identity theft and the creation of synthetic identities far more scalable, with the potential to expand the threat landscape exponentially. The next several years will introduce new Gen AI applications, making it a top priority for the industry to evolve its fraud detection and identity verification methods.

Threat Vectors Influenced by Generative AI

Gen AI uses advanced machine learning algorithms to generate highly convincing outputs, often indistinguishable from human-created content. It can rapidly produce human-like text, realistic images, and even deepfake videos at scale. This capability lets fraudsters easily craft believable synthetic identities, phishing emails, and texts, making their schemes harder to detect.

Here are some of the most common types of fraud influenced by Gen AI.

More Accurate Synthetic Identities

The ability to combine real data with fake, AI-generated data creates a much higher risk of synthetic identity fraud (SIF). GBG IDology’s recent fraud report found that 51% of companies have seen SIF increase or stay the same, while 39% are unsure whether it has increased. This is why 45% of companies are worried about generative AI’s ability to create more accurate synthetic identities, and 74% are concerned about the potential for SIF to increase.

In the past, one limiter on SIF was the amount of information needed to create a believable identity. Now fraudsters can generate personal histories, photos, and social media profiles with little effort, building realistic synthetic identities over months or years.

Increased Volume of Phishing and Smishing

According to the FBI, there were 21,832 reports of business email compromise in 2022, with total losses of $2.7 billion. Deloitte estimates that generative AI could drive those losses to $11.5 billion by 2027.

Phishing involves scammers sending emails designed to get the recipient to click a malicious link, reveal sensitive information, or transfer money. Smishing (SMS phishing) is a similar social engineering attack carried out over text messages. While many people have learned to spot warning signs in suspicious emails and texts, Gen AI can mimic the writing style of trusted individuals and eliminate those telltale flaws. Research shows that since ChatGPT was released, phishing emails have grown significantly in linguistic complexity, volume of text, punctuation, and sentence length.
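Those surface-level shifts are measurable. As a minimal illustration, assuming nothing beyond the Python standard library (no vendor API), the sketch below computes a few of the features named above from an email body. These are weak heuristic signals only; real phishing detection combines many more inputs (headers, URLs, sender reputation) with trained models.

```python
import re
from dataclasses import dataclass

@dataclass
class TextFeatures:
    char_count: int
    sentence_count: int
    avg_sentence_length: float  # words per sentence
    punctuation_density: float  # punctuation marks per 100 characters

def extract_features(body: str) -> TextFeatures:
    # Split on sentence-ending punctuation; ignore empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    punct = sum(body.count(p) for p in ",;:!?'\"()-")
    return TextFeatures(
        char_count=len(body),
        sentence_count=len(sentences),
        avg_sentence_length=len(words) / max(len(sentences), 1),
        punctuation_density=100 * punct / max(len(body), 1),
    )

# Example: a terse lure versus a longer, more polished AI-style message.
print(extract_features("Your account is locked. Click here now."))
print(extract_features(
    "Dear valued customer, during a routine review we identified activity "
    "that requires verification; please confirm your details promptly, as "
    "access may otherwise be suspended within 24 hours."
))
```

Tracking how these distributions drift over time, rather than scoring any single message, is what surfaces the post-ChatGPT shift the research describes.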

Deepfakes and Voice Deception

Gen AI can be used to create deepfakes that mimic the face or voice of trusted individuals in highly realistic videos, images, or voice messages. GBG IDology’s fraud study found an increase in the use of deepfakes for fraud across all industries, with gaming (46%), retail (43%), and banking (42%) the most affected.

Voice deception is an especially threatening tactic that can give fraudsters access to secure systems or enable financial theft and other forms of social engineering. According to McAfee, 77% of respondents targeted by an AI voice clone lost money.

Using AI for Identity Verification and Fraud Management

Gen AI presents significant challenges and greatly expands the threat landscape. However, AI and machine learning solutions can also help mitigate these risks. Just as AI is used to increase the volume and sophistication of fraud, fintech leaders can use AI to scale and automate risk management.

AI can quickly uncover suspicious activity by processing large volumes of data to identify patterns and by detecting tampered or fake documents. Velocity alerts generate real-time notifications when patterns or activity known to be suspicious appear. As fraudsters increase their use of Gen AI, even more detectable patterns emerge in their attacks.
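To make the velocity-alert pattern concrete, here is a minimal sketch under stated assumptions: all names, thresholds, and the event feed are hypothetical, not any actual GBG IDology API. It counts recent events per identity in a sliding time window and flags an identity whose volume spikes.

```python
from collections import defaultdict, deque
from time import time

WINDOW_SECONDS = 300  # look-back window (illustrative value)
MAX_EVENTS = 5        # alert above this many events per window (illustrative)

# In-memory event log per identity; production systems would use a shared store.
_events: dict[str, deque] = defaultdict(deque)

def record_event(identity_id: str, timestamp: float | None = None) -> bool:
    """Record one event; return True if this identity trips the velocity alert."""
    now = timestamp if timestamp is not None else time()
    window = _events[identity_id]
    window.append(now)
    # Drop events that have aged out of the look-back window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_EVENTS

# Example: a burst of account-opening attempts tied to one identity.
for i in range(7):
    if record_event("identity-123", timestamp=1000.0 + i):
        print(f"velocity alert on attempt {i + 1}")
```

In practice the key would usually extend beyond the identity itself (device fingerprint, IP address, payment instrument), and thresholds would be tuned per activity type rather than fixed.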

While AI can streamline fraud management and help fast-track trusted identities, human oversight is still essential. Human fraud experts bring transparency, accountability, and continuous improvement to AI, strengthening machine learning models and increasing protection.

The threat landscape will continue to move quickly as we head into 2025. AI offers new efficiencies and the ability to monitor much larger data sets and a broader range of documents. This will allow businesses to better combat the continuously evolving use of generative AI for fraud and identity theft.

About the Author: James Bruni, Managing Director, Identity and Fraud

James Bruni is the Managing Director at GBG IDology, a market leader in the Americas delivering a comprehensive suite of identity verification, AML/KYC compliance, and fraud protection solutions that provide the intelligence businesses need to establish customer trust, protect against fraud, and drive revenue. Bruni oversees GBG IDology’s growth and innovation throughout the Americas while ensuring strategic alignment with global teams and enabling customers to seamlessly and securely verify identities anywhere in the world. Throughout his 25-year career, he has developed a reputation for building high-performing teams and strong cultures that drive top- and bottom-line results.
