Report reveals that while the majority of companies are leveraging AI, governance continues to lag, leaving them vulnerable.
AuditBoard, the AI-powered global platform for connected risk, transforming audit, risk, and compliance, announced the findings of its latest research into how global risk teams are navigating the dawning era of ubiquitous artificial intelligence (AI) technology. Key findings in the report, From blueprint to reality: Execute effective AI governance in a volatile landscape, reveal that while many companies have drafted policies, few have embedded AI governance into their organizations' operational fabric, leaving them susceptible to unforeseen risks.
Organizations across industry sectors are racing to integrate generative and machine learning tools into their core business processes, seeking productivity gains and competitive advantages. But this momentum has triggered a parallel challenge: managing the associated risks. The resulting policy-practice gap is emerging as a new risk frontier, rooted in executional uncertainty, cultural fragmentation, and misaligned ownership. To gain a clearer picture of the challenges inherent in that gap, AuditBoard, in partnership with Panterra Research, surveyed over 400 GRC and audit professionals across the United States, Canada, Germany, and the United Kingdom, finding:
- Overconfidence, in this context, becomes a risk in itself: While 92 percent of respondents said they are confident in their visibility into third-party AI use, only two-thirds of organizations report conducting formal, AI-specific risk assessments for third-party models or vendors. That leaves roughly one in three firms relying on external AI systems without a clear understanding of the risks they may pose. When companies assume they have control, they’re less likely to invest in proactive auditing, centralized model inventories, or employee education. And when vulnerabilities surface, they’re often caught off guard, leading to downstream consequences.
- Policies are not enough: 86 percent of respondents said their organization is aware of upcoming AI regulations and those already in force. However, only 25 percent of respondents said they have a fully implemented AI governance program. Many have policies in place or in development, but few have made the leap from documentation to disciplined execution.
- Barriers to AI governance are cultural, not technical: Respondents identified the leading obstacles to AI governance as lack of clear ownership (44 percent), insufficient internal expertise (39 percent), and resource constraints (34 percent). Fewer than 15 percent said the main problem was a lack of tools. Policy tells the organization what should happen, but culture and structure determine whether it happens.
“This report validates the critical need for a more integrated, operational approach to AI risk,” said Michael Rasmussen, CEO of GRC Report. “AuditBoard’s expertise in aligning audit, risk, and compliance functions makes them well-equipped to provide the framework and tools necessary for companies to move from policy creation to impactful AI governance.”
“AI governance today is a test of execution, not awareness,” said Rich Marcus, Chief Information Security Officer at AuditBoard. “This report confirms that the most persistent AI governance challenges are clarity, ownership, and alignment. Organizations that treat governance as a core capability, not a compliance box-checking exercise, will be better positioned to manage risk, build trust, and respond to a rapidly evolving regulatory landscape.”