Why AI Adoption Surveys Don’t Reflect Reality in Financial Services
March 31, 2026
A closer look at what surveys actually measure – and why adoption, usage, and impact are often conflated
Machine Learning in market risk is sometimes framed as a modelling upgrade – better non-linear fit and faster recalibration. In practice, the differentiator is not simply the algorithm. It is the control evidence that surrounds it.
Market risk models influence limits and, for internal-model banks, regulatory capital. When ML techniques enter Value-at-Risk, Expected Shortfall or stress frameworks, they stop being “interesting” and become something supervisors expect to be explainable, testable and governed.
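As a concrete anchor for what these frameworks compute, here is a minimal sketch of historical-simulation Value-at-Risk and Expected Shortfall from a daily P&L series. The function name and the toy data are illustrative, not from the source; production implementations add weighting schemes, scaling and backtesting.

```python
import numpy as np

def historical_var_es(pnl, alpha=0.99):
    """Historical-simulation VaR and Expected Shortfall.

    pnl: daily P&L observations (gains positive, losses negative).
    alpha: confidence level, e.g. 0.99 for 99%.
    Returns (var, es) expressed as positive loss amounts.
    """
    losses = -np.asarray(pnl, dtype=float)   # flip sign: work with losses
    losses_sorted = np.sort(losses)          # ascending order
    # index of the alpha-quantile of the empirical loss distribution
    k = int(np.ceil(alpha * len(losses))) - 1
    var = losses_sorted[k]
    # ES: average loss in the tail at or beyond the VaR quantile
    es = losses_sorted[k:].mean()
    return var, es

# Toy example: simulated daily P&L in currency units
rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1_000_000, size=500)
var99, es99 = historical_var_es(pnl, alpha=0.99)
```

Because ES averages the tail beyond the VaR cutoff, it is always at least as large as VaR on the same data – one reason supervisors favour it as the capital measure.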
Three risks now define the bargain: complexity that must be justified, lifecycle governance, and concentration effects – in techniques and third parties.
ML can widen a familiar gap: model richness versus controllability. You may gain predictive power, but lose clarity about drivers, stability and boundary conditions.
Supervisory expectations are moving in a clear direction: added complexity must be defensible. This translates into questions that teams cannot answer with accuracy metrics alone.
In effect, the assessment becomes as much about credibility as performance. If the model cannot be understood well enough to be challenged, it is hard to rely on for limits-relevant decisions.
Traditional market risk governance is model-centric: documentation, independent validation, change control, periodic review. ML forces the risk lens wider because behaviour is shaped by data pipelines, feature engineering and retraining logic as much as by the model code.
Risk migrates into the production system: data pipelines, feature engineering and retraining logic all shape model behaviour, yet often sit outside traditional model documentation.
This is where “dynamic” models become difficult. A system that adapts on the fly may be operationally attractive, but it complicates the question of what was approved, what changed, and whether the institution can demonstrate ongoing compliance with its own standards of control.
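One way to evidence "what was approved, what changed" is an append-only version log keyed by hashes of the model's artifacts, so every retrain produces a new, explicitly approvable record. A minimal sketch – all class, function and field names here are hypothetical, assuming JSON-serialisable parameter and feature definitions:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

def artifact_hash(obj) -> str:
    """Deterministic fingerprint of a model artifact (params, feature set, schema)."""
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

@dataclass
class ModelVersion:
    model_id: str
    params_hash: str
    feature_set_hash: str
    approved_by: str = None   # set only by an explicit approval step
    approved_at: str = None

class ModelRegistry:
    """Append-only log: every retrain registers a new version; approval is explicit."""

    def __init__(self):
        self.log = []

    def register(self, model_id, params, features) -> ModelVersion:
        # A retrain with changed params/features yields new hashes, hence a new record
        v = ModelVersion(model_id, artifact_hash(params), artifact_hash(features))
        self.log.append(v)
        return v

    def approve(self, version: ModelVersion, approver: str):
        version.approved_by = approver
        version.approved_at = datetime.now(timezone.utc).isoformat()

    def unapproved_changes(self, model_id):
        """Versions running ahead of governance: registered but never signed off."""
        return [v for v in self.log
                if v.model_id == model_id and v.approved_by is None]
```

The design choice is the point: a model that retrains itself still leaves a hash trail, so "what was approved" and "what is currently live" can be compared mechanically rather than reconstructed after the fact.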
As ML spreads, concentration risks become harder to ignore.
The first is behavioural. If firms converge on similar model families and similar representations of market states, risk measurement and decisioning can become more correlated. That is not collusion; it is normalisation. When the same assumptions dominate, blind spots can align.
The second is operational. ML pipelines deepen dependencies on specialised data, tooling and compute. Third parties may sit inside model development, validation, or the delivery of critical inputs. The risk is not only interruption – it is evidencing what an external dependency did, when it did it, and how it affected model behaviour.
For market risk leaders, this makes third-party risk management part of model-risk management. Contingency plans, exit options and minimum in-house understanding become prerequisites for a credible internal-model framework.
Market risk is where ML meets the sharp edge of accountability. The prize is clear – richer modelling – but the price is control: explainability, lifecycle governance and a credible approach to concentration.
Firms that deploy ML successfully are likely to differentiate less through exotic techniques and more through disciplined evidence: traceability end to end, robust change governance, and clear decision rights when models adapt. That is what will make ML deployable, not merely possible.