
Modernising the Regulatory Monitoring of Artificial Intelligence in Finance

October 10, 2025

The FSB calls for smarter, faster, and more connected supervisory data systems

The Financial Stability Board’s October 2025 report, Monitoring Adoption of Artificial Intelligence and Related Vulnerabilities in the Financial Sector, marks a decisive shift in tone. Rather than debating whether to monitor AI in finance, it asks a more practical question: how can oversight become faster, more consistent, and more data-driven?

AI adoption across the financial system is accelerating, but the methods used to track it remain largely manual and fragmented. Regulators collect information through irregular surveys, qualitative outreach, and ad-hoc supervisory exchanges. These approaches reveal how financial institutions are experimenting with AI, yet they struggle to capture where the real vulnerabilities lie — in third-party dependencies, model governance, cyber exposure, and market concentration.


Fragmented monitoring in need of modernisation

Current supervisory practices were built for a slower era of technology change. Definitions of AI vary across jurisdictions, and data collection often focuses on headline adoption rates rather than risk intensity or systemic relevance. The result is an incomplete picture: authorities know that AI is spreading through financial services, but they lack the structured data to assess how that diffusion might affect stability.

The FSB’s analysis concludes that AI monitoring needs to evolve from descriptive surveys to integrated, repeatable, and machine-readable processes capable of generating early-warning signals and supporting international comparison.


Building a smarter data architecture

The report sets out a series of practical design principles — a blueprint for what the next generation of AI monitoring should look like:

Simplify and standardise data collection
Shorter, clearer questionnaires and consistent definitions reduce cost and make results comparable across institutions and jurisdictions. Alignment with the OECD and EU AI Act taxonomies is encouraged.

Share data across supervisory bodies
Coordination between prudential, conduct, cyber, and data-protection authorities avoids duplication and allows each to see the wider systemic picture.

Embed AI questions into existing frameworks
Instead of creating parallel reporting systems, supervisors can integrate AI metrics into existing operational-risk, third-party-risk, or model-risk reporting — ensuring proportionality and efficiency.

Increase timeliness through digital and public data sources
High-frequency indicators — such as patent filings, job postings, or cyber-incident trackers — can complement formal supervisory data, providing a more current view of AI activity and potential vulnerabilities.
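As a rough illustration of how such public signals could be combined, the sketch below normalises several high-frequency series and blends them into a single composite activity indicator. The series names and weights are entirely hypothetical, not drawn from the report:

```python
def composite_index(signals: dict[str, list[float]],
                    weights: dict[str, float]) -> list[float]:
    """Blend several high-frequency series, each min-max normalised to
    [0, 1], into one weighted composite indicator per period."""
    def normalise(series: list[float]) -> list[float]:
        lo, hi = min(series), max(series)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in series]

    norm = {name: normalise(s) for name, s in signals.items()}
    periods = len(next(iter(signals.values())))
    total = sum(weights.values())
    return [sum(weights[n] * norm[n][t] for n in norm) / total
            for t in range(periods)]

# Hypothetical monthly series: AI-related patent filings, job postings,
# and reported cyber incidents at financial institutions.
signals = {
    "patents":   [10, 12, 18, 25],
    "jobs":      [100, 140, 160, 220],
    "incidents": [2, 3, 3, 5],
}
weights = {"patents": 0.3, "jobs": 0.5, "incidents": 0.2}
index = composite_index(signals, weights)  # rises from 0.0 to 1.0
```

In practice a supervisor would use far richer transformations, but even this simple weighted blend shows how timely public data can supplement slower formal reporting.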

Adopt digital reporting standards
The FSB’s Format for Incident Reporting Exchange (FIRE) is highlighted as a model for machine-readable, cross-border information exchange on AI-related cyber or operational events.

Create structured registries of critical AI services and providers
Mapping which vendors and models underpin critical financial functions enables targeted supervision of concentration, substitutability, and systemic risk.
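The report does not prescribe a schema for such registries, but the idea can be sketched as a simple structured record plus a concentration query. All field and entity names below are illustrative assumptions, not taken from the FSB report or any supervisory standard:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical registry entry: field names are illustrative only.
@dataclass
class AIServiceEntry:
    provider: str                 # vendor supplying the model or service
    model: str                    # model or product identifier
    function: str                 # critical financial function it underpins
    firms_using: list[str] = field(default_factory=list)
    substitutable: bool = False   # are alternative providers readily available?

def concentration_by_provider(registry: list[AIServiceEntry]) -> Counter:
    """Count firm-level dependencies per provider: a crude proxy
    for the concentration risk the FSB wants mapped."""
    counts: Counter = Counter()
    for entry in registry:
        counts[entry.provider] += len(entry.firms_using)
    return counts

registry = [
    AIServiceEntry("VendorA", "LLM-X", "client onboarding", ["Bank1", "Bank2"]),
    AIServiceEntry("VendorA", "LLM-X", "fraud screening", ["Bank3"]),
    AIServiceEntry("VendorB", "RiskNet", "credit scoring", ["Bank2"],
                   substitutable=True),
]
print(concentration_by_provider(registry))  # VendorA: 3, VendorB: 1
```

Even a minimal structure like this makes questions about substitutability and single points of failure answerable by query rather than by survey.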

Leverage AI to monitor AI
The same technologies used by firms can help regulators enhance their own monitoring — from anomaly detection in reporting data to natural-language analysis of disclosures.
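As a toy illustration of that idea, a supervisor could screen reported AI metrics for institutions that deviate sharply from the peer group. The z-score approach and all figures below are a sketch of the concept, not a method the report specifies:

```python
import statistics

def flag_outliers(reported: dict[str, float],
                  threshold: float = 1.5) -> list[str]:
    """Flag institutions whose reported metric (e.g. share of
    AI-dependent processes) lies more than `threshold` standard
    deviations from the peer-group mean."""
    values = list(reported.values())
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [name for name, v in reported.items()
            if abs(v - mean) / stdev > threshold]

# Hypothetical reported shares of AI-dependent processes.
reports = {"BankA": 0.12, "BankB": 0.15, "BankC": 0.11,
           "BankD": 0.14, "BankE": 0.95}
print(flag_outliers(reports))  # ['BankE']
```

A flagged value might indicate genuine concentration of AI use, or simply a misreported figure; either way it is exactly the kind of early-warning signal the FSB has in mind.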


From observation to orchestration

Collectively, these measures sketch an emerging digital architecture for AI oversight: one built on standardised definitions, shared infrastructures, and continuous data flows rather than intermittent surveys. The direction of travel is toward supervisory telemetry — where data on AI use, risk incidents, and third-party dependencies can be analysed in near real time.

The FSB’s message is pragmatic but urgent: financial authorities must modernise the way they monitor AI if they are to keep pace with the technology itself. In practice, that means investing not only in analytical capability, but in the data pipelines, common standards, and inter-agency coordination that will turn fragmented insights into a coherent global view of AI-related risk.


Our First Thoughts

The FSB’s report reads as both diagnosis and design brief. It confirms what many market participants already suspect — that oversight of AI remains patchy — while outlining a credible roadmap for progress.

What stands out is the implicit call for supervisory digitalisation: embedding telemetry, registries, and data-exchange standards into the core of regulatory infrastructure. Yet the report also hints at a growing opportunity for the technology and RegTech community.

Supervisors will need new tools — data pipelines, model registries, semantic tagging systems, cross-jurisdiction dashboards — to operationalise the indicators the FSB describes. Vendors capable of providing transparent, auditable, and interoperable solutions could play a pivotal role in shaping how the next generation of AI oversight is built.

In that sense, the FSB’s blueprint is more than guidance for regulators — it’s an open invitation for innovators to help design the data architecture of financial supervision itself.
