Why AI Adoption Surveys Don’t Reflect Reality in Financial Services
March 31, 2026
A closer look at what surveys actually measure, and why adoption, usage, and impact are often conflated
Treasury analytics have always existed. Machine learning changes the proposition when it becomes client-facing decision support – particularly in FX exposure management and cash forecasting, where decisions are frequent, outcomes are measurable, and timing matters.
The attraction is clear: better forecasts can reduce hedging costs, improve liquidity planning, and make guidance more responsive to real commercial activity. The implementation burden is less obvious. Once ML influences client outcomes, the bar shifts from “useful model” to “operable service”: integration, monitoring, disclosure, and escalation.
Three implementation hurdles determine whether treasury ML scales beyond pilots.
The first hurdle is practical: putting the model output where treasurers already work.
In an FX workflow, ML usually sits mid-stream – turning transaction and market data into exposure estimates and suggested hedge parameters. In cash forecasting, it sits earlier – reconciling operational feeds into a forward view that can drive funding decisions and limit checks.
Implementation succeeds when the output is:
- delivered inside the systems treasurers already use, not in a separate tool;
- timely enough to act on before the exposure or funding decision closes;
- explainable on demand, so a recommendation can be questioned quickly; and
- logged alongside overrides and realised outcomes.
That last point matters. If the model cannot be questioned quickly, it will not be used under pressure. And without capture of overrides and outcomes, firms have no credible way to improve the service – or to prove it is delivering value.
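One way to make the capture of overrides and outcomes concrete is a per-recommendation record that stores what the model suggested, what the treasurer actually did, and the realised result. A minimal sketch follows; all field and function names here are illustrative, not a real vendor or bank schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class HedgeDecisionRecord:
    """One client-facing recommendation and what actually happened to it.

    Hypothetical schema for illustration: it pairs the model's suggestion
    with the executed decision so overrides are observable, not anecdotal.
    """
    recommendation_id: str
    issued_at: datetime
    model_version: str
    suggested_hedge_ratio: float        # what the model proposed
    executed_hedge_ratio: float         # what the treasurer actually did
    override_reason: Optional[str]      # free text captured at decision time
    realised_cost_bps: Optional[float]  # filled in once the outcome is known

    @property
    def was_overridden(self) -> bool:
        # Any material gap between suggestion and execution counts as an override.
        return abs(self.suggested_hedge_ratio - self.executed_hedge_ratio) > 1e-9


def override_rate(records: list[HedgeDecisionRecord]) -> float:
    """Share of recommendations the desk did not follow."""
    if not records:
        return 0.0
    return sum(r.was_overridden for r in records) / len(records)
```

With records like these accumulating, a firm can compare realised costs on followed versus overridden recommendations – which is exactly the feedback loop needed to improve the service and to prove it is delivering value.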
The second hurdle appears when the recommendation crosses the client boundary. Internally, a bank can rely on policy and escalation to manage model risk. Externally, the question becomes: who owns the decision, and what does the client believe they are buying?
Implementation teams need to translate that into explicit service design:
- who owns the final decision – the bank or the client;
- what is disclosed about the model’s role in each recommendation; and
- how escalation works when the client disputes or overrides an output.
This is not a legal footnote. It shapes adoption. If the service feels like a black box, clients default to familiar heuristics. If it feels like fragile automation, relationship teams will avoid relying on it during volatility – precisely when clients most want support.
The third hurdle is operational. Client-facing ML needs the disciplines of a production service: controlled releases, incident response and support workflows.
Treasury makes this hard because data is heterogeneous. Cash forecasts rely on ERP and payments feeds; FX exposure relies on sales and settlement flows. Small upstream changes can shift model behaviour. A “good model” can still produce a bad client outcome if inputs degrade silently.
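A simple guard against inputs degrading silently is to track summary statistics on each upstream feed (row counts, null rates, total notional) and flag values that drift far from their trailing history. The sketch below uses a plain k-sigma rule; the function name and threshold are assumptions for illustration, not a prescribed standard.

```python
import statistics


def feed_anomaly(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag a feed metric (row count, null rate, total notional) that sits
    more than k standard deviations from its trailing history.

    Illustrative sketch: real pipelines would also check schema changes,
    arrival times, and per-entity breakdowns.
    """
    if len(history) < 5:
        return False  # not enough history to judge
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    if sd == 0:
        # A perfectly flat history: any deviation at all is suspicious.
        return latest != mean
    return abs(latest - mean) > k * sd
```

The point is not the statistics but the placement: the check runs before the model, so a broken ERP export blocks or flags the forecast instead of quietly distorting it.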
Successful implementations treat the ML service as a governed production system:
- versioned, controlled releases with a rollback path;
- monitoring of input feeds and output confidence, so silent degradation is caught;
- incident response and support workflows with named owners; and
- a documented fallback policy for when the model cannot be trusted.
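That governance can be expressed directly in the serving path: the model output is gated behind input-health checks, and the service degrades to a documented static policy rather than serving a silently bad number. A minimal sketch, with all names and the fallback value invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative documented fallback policy, not a recommendation.
DEFAULT_HEDGE_RATIO = 0.5


@dataclass
class ServedRecommendation:
    value: float
    source: str                   # "model" or "fallback"
    model_version: Optional[str]  # None when the fallback policy applies
    note: str


def serve_hedge_ratio(model_output: Optional[float],
                      inputs_healthy: bool,
                      model_version: str = "v1.3.0") -> ServedRecommendation:
    """Gate the model output behind input-health checks.

    If inputs are degraded or the model produced nothing, fall back to the
    static policy and say so explicitly, so downstream users and support
    teams can see which regime produced the number.
    """
    if model_output is None or not inputs_healthy:
        return ServedRecommendation(
            DEFAULT_HEDGE_RATIO, "fallback", None,
            "inputs degraded; static policy applied, incident raised")
    return ServedRecommendation(model_output, "model", model_version, "ok")
```

Tagging every served value with its source and model version is what later makes incidents explainable to a client: the firm can say which releases and which regime produced which recommendations.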
A recent example makes the point. Citi and Ant International announced a pilot combining Citi’s fixed-rate FX hedging approach with Ant International’s AI forecasting model, stating that an airline client achieved meaningful hedging cost savings in live transactions. Whether that scales may depend less on algorithm choice and more on the service wrapper: workflow integration, confidence monitoring, and clear accountability for when recommendations are wrong.
Market-facing treasury is turning ML from internal optimisation into an external capability. The practical test is simple: can the bank operate the model as a client service through volatility, data breaks and scrutiny?
In our research, the winners are unlikely to be those with the cleverest models. They will be the firms that do the unglamorous work: integration into controls, measurable feedback loops, and an operating model that can explain, support and, when necessary, override the system.