Artificial Intelligence Finance Fintech Guest Posts Security Trading

Addressing the Cybersecurity Challenges in Finance’s Adoption of AI Agents

The adoption of AI agents has become prevalent in various industries, including finance. AI agents can operate intelligently and autonomously, such as customer service agents or financial trading bots that analyze markets and execute trades without human input. However, they can also create cybersecurity and compliance challenges. The finance industry needs the tools to enable agentic AI securely and how to use it safely so they can harness all of the upside.

How Agentic AI is transforming the financial services industry

An AI agent is a software program or system that can solve problems, make decisions or perform tasks independently, typically by imitating human mental functions. These agents use AI techniques such as natural language processing, computer vision and machine learning to interact with their surroundings and accomplish the goals they were created for. AI agents are embedded into our daily lives in apps like Salesforce, Microsoft Office, ServiceNow, OpenAI, etc., and also can be custom-built by business users across the enterprise.

Companies are increasingly enabling business users to use and build AI agents via low-code development platforms so that even employees with no development background can create agents they need for improved work productivity, using specific on-the-job expertise.

Read More on Fintech : Global Fintech Interview with Scott Weller, CTO at EnFI

In the finance industry, Agentic AI can handle repetitive, data-intensive processes that optimize workflows to increase productivity and reduce human error. Financial institutions can also create “personalized robo-advisors or adaptive asset management systems that adjust strategies in real-time based on market changes and customer preferences.” One example we’re seeing is financial trading bots, used by firms like Goldman Sachs and Moody’s.

Dangers of Agentic AI

One aspect of AI agents is that they allow business users to develop apps and process data without going through the traditional IT/development process. However, this can increase the possibility that these apps could be misconfigured by the users, which introduces additional challenges.

Agents are helpful because they can access corporate data and perform tasks autonomously. Going back to our previously-mentioned examples of how these are being used in this sector:

Moody’s has developed 35 agents, some for project management and others for financial analysis. They have access to data and research; they receive specific instructions and their own “personalities.” Goldman Sachs uses AI agents to analyze market trends and autonomously perform trades based on algorithms. Agentic AI can also be used to monitor financial transactions in real-time to detect fraud, as PayPal does.

When an organization gives access to a lot of data and can then do many things with that data outside the purview of human decision-making process, it has a lot of risk.  Here’s a glimpse of the scope of the problem: Business users within the typical large organization build almost 80,000 agents, copilots, apps and bots. About 11,000 of them have access to sensitive data; which makes for a vast attack surface. Moreover, standard code scanning AppSec and CI/CD pipeline tools cannot detect these apps. If security teams don’t know which apps and resources hold or can access sensitive data, they can’t secure them.

Gartner notes that by 2028, “at least 15% of daily business decisions will be made autonomously through agentic AI—up from 0% in 2024.” Yet the analyst firm also warns that by that time, 25% of enterprise breaches will be tied to AI agent abuse.

Zenity research has found that nearly 62% of AI agents and apps at typical large organizations have at least one security violation. When building an agent, a business user is responsible for configuring:

  • What knowledge the agent has (data, corporate or otherwise)
  • What actions the agent performs (such as underwriting a loan, operating a trade or handling customer tickets
  • What triggers the agent responds to: Agents respond not just to text prompts from users in chats but also things like changes in databases, incoming emails, new calendar invitations and so on

 Businesses need to be aware of these four significant risks:

  1. Prompt injection – As just mentioned, some agents are at risk for prompt injection due to human innovation as attackers try to jailbreak an agent or get it to do things outside the bounds of its original design.
  2. Data leaks – Low- or no-code development and AI assistance enable people with no development experience to create apps, data flows and automations that access financial and other sensitive data. This fact, plus the security issues already noted, makes data leakage a genuine concern.
  3. Complex platforms – Because there are many development platforms, each with its own language, it’s hard to keep track of the varied automations, apps, copilots and GPTs that are developed.
  4. Managing vulnerabilities – Security teams must be able to find and abate vulnerabilities among tens, hundreds or thousands of apps. Often, this isn’t the case.

How to use Agentic AI safely

Financial services organizations that use AI agents must realign their security strategy to avoid these risks. To enable business-led innovation via the secure use of AI agents, you need a system that watches and profiles all AI agents that the enterprise is using. The system needs to find and address threats like hidden instructions, indirect prompt injection attacks, least privilege violations and more. It must also stop those threats as AI agents are developed, via proactive risk reduction and implementation of AI Security Posture Management (AISPM) controls.

The first step in your AI agent security strategy is observability for agents, followed by identifying the composition of how each is built. Next, layer in detection and response, and then automated enforcement, which prevents data leaks and limits access to sensitive data via the implementation of security policies and playbooks. All of this is meant towards building an insider threat model program for Agents as you do for humans.

Move forward securely

The benefits of Agentic AI are clear – the dangers, not so much. It’s precisely because they are hidden away from the watchful eyes of your security team that you need to enact a security strategy specifically for AI agents. Let’s face it: if your agent is useful, it is also risky. Review the abovementioned risks and then look at your current security program to ensure it aligns with best practices. Then, your financial organization will be able to move forward confidently into the emerging era of Agentic AI.

Catch more Fintech Insights : Global FinTech Interview with Steve Cover, CTO, iPipeline

[To share your insights with us, please write to psen@itechseries.com ]

Related posts

Nakisa Introduces Nakisa Lease Administration for Lessor Accounting

Fintech News Desk

ZorroSign Welcomes Silicon Valley Leader as Strategic Advisor and Acting Chief Growth Officer

Fintech News Desk

Get Paid to Trade on BitXmi.com

Fintech News Desk
1