Why AI Adoption Surveys Don’t Reflect Reality in Financial Services
March 31, 2026
A closer look at what surveys actually measure - and why adoption, usage, and impact are often conflated
Many custodians now have AI pilots in place, from document-reading tools for corporate actions to machine learning models that prioritise breaks. Yet few have fully translated these experiments into scaled, production-grade capabilities that genuinely change operating models. The barrier is rarely imagination. It is the practical difficulty of moving from proofs of concept to integrated, controlled services across multi-region hubs, legacy platforms, and demanding clients.
Institutions are turning to AI, data, and outsourcing to build more scalable and resilient investment operations, but change must also be sequenced carefully to avoid destabilising core servicing. This article outlines a pragmatic roadmap: establishing the data and event foundation, sequencing use cases and controls, and aligning operating model and ecosystem decisions.
AI in custody starts with the integrity of event and position data. Corporate actions remain one of the most complex unstructured data problems in post-trade, with multiple sources, formats, and late-breaking updates. Before scaling AI, firms need to clarify golden sources for key fields, rationalise event taxonomies, and ensure that cash and securities records are consistently aligned across systems.
Implementation-wise, this often means building a shared event store that sits between external feeds and internal books of record, with clear ownership, change-management processes, and data-quality metrics. Language models and machine learning components for cleansing and enrichment can be introduced as services around this store, but the governance of the data model remains a human responsibility. Without this groundwork, even well-designed AI tooling will amplify inconsistencies rather than reduce them.
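As a concrete illustration of this pattern, the sketch below shows a minimal shared event store that keeps one golden record per corporate-action event, resolves conflicts between feeds by a source-priority ranking, and tracks a simple data-quality metric. All names here (the `EventRecord` fields, the feed names, the CAEV-style event code) are illustrative assumptions, not a real schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventRecord:
    event_id: str                   # issuer event identifier
    source: str                     # originating feed, e.g. "SWIFT", "vendor_a"
    event_type: str                 # normalised taxonomy value, e.g. "DVCA"
    pay_date: Optional[str] = None
    version: int = 1                # incremented on late-breaking updates

class EventStore:
    """Keeps one golden record per event, ranked by source priority."""

    # Illustrative golden-source ranking: lower number wins a conflict.
    SOURCE_PRIORITY = {"SWIFT": 0, "vendor_a": 1, "vendor_b": 2}

    def __init__(self) -> None:
        self._golden: dict[str, EventRecord] = {}
        self.conflicts = 0          # simple data-quality metric

    def ingest(self, rec: EventRecord) -> EventRecord:
        current = self._golden.get(rec.event_id)
        rank = self.SOURCE_PRIORITY.get(rec.source, 99)
        if current is None:
            self._golden[rec.event_id] = rec
        elif rank <= self.SOURCE_PRIORITY.get(current.source, 99):
            # Higher-priority source overwrites; keep a version trail.
            rec.version = current.version + 1
            self._golden[rec.event_id] = rec
        else:
            # Lower-priority update is logged, not silently merged.
            self.conflicts += 1
        return self._golden[rec.event_id]

store = EventStore()
store.ingest(EventRecord("EVT1", "vendor_b", "DVCA", pay_date="2026-04-10"))
golden = store.ingest(EventRecord("EVT1", "SWIFT", "DVCA", pay_date="2026-04-12"))
print(golden.source, golden.pay_date)  # SWIFT 2026-04-12
```

The point of the sketch is the separation of concerns: ML cleansing and enrichment services would sit around `ingest`, while the priority ranking and the conflict metric stay under explicit human governance.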
With a more stable data backbone in place, the next step is to select and phase AI use cases. Our research work in asset and wealth management finds firms focusing early on reconciliation, data checks, and workflow optimisation, where benefits are tangible and risk relatively contained. Custodians can adopt a similar approach in servicing, prioritising areas such as event comparison, narrative cleansing, client notification drafting, and break prediction.
Each use case should be framed as a production service from day one, even if volumes are initially limited. That means defining model owners, specifying input data, setting performance thresholds, and agreeing how exceptions and overrides will be handled. Control functions need to be involved early, not as an approval gateway at the end. When pilots demonstrate value, scaling should proceed in measured stages: expanding client segments, extending to additional event types, and only then moving towards more autonomous agent behaviour. This incremental pattern helps protect day-to-day operations while still delivering visible improvement.
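One way to make "production service from day one" concrete is to wrap even a pilot model behind a service boundary that names its owner, enforces a performance threshold, and routes low-confidence cases to human review rather than automating them silently. The break-prediction logic below is a deliberately trivial stand-in, and every name in it is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    model_owner: str        # accountable owner, agreed with control functions
    min_confidence: float   # threshold below which humans decide

def predict_break(trade: dict) -> tuple[bool, float]:
    # Stand-in for a real model: flags trades whose booked amounts differ.
    mismatch = abs(trade["our_amount"] - trade["their_amount"])
    confidence = 0.95 if mismatch == 0 or mismatch > 100 else 0.55
    return mismatch > 0, confidence

def route(trade: dict, cfg: ServiceConfig) -> str:
    is_break, confidence = predict_break(trade)
    if confidence < cfg.min_confidence:
        return "human_review"   # explicit exception path, not silent automation
    return "auto_flag" if is_break else "auto_pass"

cfg = ServiceConfig(model_owner="recs-product-owner", min_confidence=0.8)
print(route({"our_amount": 100.0, "their_amount": 100.0}, cfg))  # auto_pass
print(route({"our_amount": 100.0, "their_amount": 350.0}, cfg))  # auto_flag
print(route({"our_amount": 100.0, "their_amount": 130.0}, cfg))  # human_review
```

Scaling in measured stages then becomes a matter of widening what the router is allowed to automate, for instance by lowering `min_confidence` per event type only after the human-review queue has validated the model's behaviour there.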
The final part of the roadmap concerns people, processes, and partners. AI-native custody operations cut across traditional boundaries between technology, operations, risk, and client-facing teams. Practical implementation therefore requires new roles and routines: product owners for AI-enabled services, joint working groups between operations and data science, and training for frontline staff who will supervise or explain AI-supported decisions to clients.
Firms also face strategic choices on whether to build, buy, or partner. Some may choose to source document intelligence or reconciliation AI from specialist vendors or infrastructure providers; others may embed capabilities within broader outsourcing relationships. Whatever mix is chosen, contracts and service levels should reflect the reality that AI components will evolve over time, requiring transparent change processes, access to telemetry, and clear incident-handling expectations.
Moving custody towards AI-native operations will depend on sustained, disciplined implementation. Firms that invest in a robust event and data foundation, sequence use cases carefully, and align operating model choices with their risk appetite will be better placed to scale AI safely.
Over the coming years, competitive differentiation in custody is likely to hinge on the ability to deliver reliable, explainable automation in high-friction servicing areas while meeting intensifying regulatory expectations on resilience and third-party risk.