Market Risk ML Now Has a Supervisory Price
January 9, 2026
MRM component of our capital markets machine learning series
As custodians and administrators begin to embed AI into servicing workflows, the risk map of custody is being redrawn. Corporate actions interpretation, proxy voting, and income allocation are now potential sites for model error, hallucinated outputs, and unnoticed mis-routing of instructions. At the same time, regulators are tightening expectations on operational resilience, third-party risk, and governance over critical services, with custody and fund servicing singled out as systemically important.
The question is no longer whether AI will enter custody, but how to contain its failure modes. This article looks at three clusters of risk: interpretation risk in issuer-to-investor communication, control and accountability when agents act on behalf of clients, and the concentration risks that arise as AI-heavy platforms intermediate a growing share of global servicing.
The heart of custody risk lies in the fidelity of communication from issuer to investor. Corporate actions, tax disclosures, and proxy materials are often long, ambiguous, and inconsistent across sources. AI systems that summarise prospectuses, compare agent messages, or generate client-friendly notices can reduce workload but also introduce new failure modes if key conditions, dates, or eligibility criteria are misinterpreted. Banks and infrastructures piloting automation of corporate actions announcements have already highlighted how data completeness and standardisation remain persistent challenges.
From a risk perspective, this means model validation must extend beyond accuracy scores on historic data. Testing needs to include rare but high-impact events, cross-jurisdictional tax nuances, and the reconciliation of multiple official sources. Controls such as dual-sourcing, human challenge for complex events, and clear error-handling playbooks become as important as the AI itself. If misinterpretation is caught only at the point of a client complaint or a post-event break, the damage is already done.
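As a minimal sketch of what such validation might look like, the Python below scores an extraction model per event type rather than in aggregate, and applies a dual-sourcing check that escalates any event where official sources disagree. The event types, field names, and escalation logic are illustrative assumptions, not a production design.

```python
from collections import defaultdict

# Hypothetical evaluation records: each case carries the event type, jurisdiction,
# the model's extracted key terms, and the values taken from two official sources.
test_cases = [
    {"event": "mandatory_cash_dividend", "jurisdiction": "US",
     "model": {"ex_date": "2026-02-10", "rate": "0.42"},
     "source_a": {"ex_date": "2026-02-10", "rate": "0.42"},
     "source_b": {"ex_date": "2026-02-10", "rate": "0.42"}},
    {"event": "scheme_of_arrangement", "jurisdiction": "DE",
     "model": {"ex_date": "2026-03-01", "rate": "1.00"},
     "source_a": {"ex_date": "2026-03-01", "rate": "1.00"},
     "source_b": {"ex_date": "2026-03-02", "rate": "1.00"}},
]

def evaluate(cases):
    """Report accuracy per event type and escalate cases the AI should not decide alone."""
    per_event = defaultdict(lambda: {"hits": 0, "total": 0})
    escalations = []
    for case in cases:
        if case["source_a"] != case["source_b"]:
            # Dual-sourcing control: disagreement between official sources routes the
            # event to human review; it is excluded from the accuracy denominator.
            escalations.append((case["event"], case["jurisdiction"], "source_mismatch"))
            continue
        per_event[case["event"]]["total"] += 1
        if case["model"] == case["source_a"]:
            per_event[case["event"]]["hits"] += 1
        else:
            escalations.append((case["event"], case["jurisdiction"], "model_error"))
    return per_event, escalations

scores, review_queue = evaluate(test_cases)
for event, s in scores.items():
    print(f"{event}: {s['hits']}/{s['total']} correct")
print("escalations:", review_queue)
```

Reporting by event type rather than overall accuracy is the point: a model can score well on routine cash dividends while failing on the rare, high-impact events that drive losses.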
A second risk cluster emerges when agentic systems move from suggesting actions to initiating them. In corporate actions and proxy servicing, it is tempting to let agents propose default elections, chase missing instructions, and even submit decisions based on pre-agreed rules. The operational benefits are clear, but so are the questions: who is accountable when an agent mis-applies a mandate, and what evidence exists to reconstruct its reasoning?
Regulators are sharpening expectations around outsourcing and critical third parties, with policy statements emphasising that boards remain responsible for risks arising from technology providers and automated services. For AI-enabled custody, this means that agent behaviour must be observable, explainable, and subject to challenge. Log trails should show which inputs were used, which policies applied, and where human approvals were obtained. Risk teams will need to define clear boundaries on what can be fully automated, what requires dual control, and what must remain firmly in human hands, particularly where client instructions, voting rights, or tax positions are at stake.
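One hedged illustration of what such a log trail and automation boundary could look like is sketched below: each agent action is recorded with the inputs used, the policy applied, and any human approval, and actions above the automation boundary are blocked without an approver. The action categories, control levels, and mandate identifier are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative automation boundaries: each action type maps to the control level
# it requires. The categories are assumptions, not a regulatory standard.
AUTOMATION_POLICY = {
    "chase_missing_instruction": "fully_automated",
    "apply_default_election": "dual_control",
    "submit_proxy_vote": "human_only",
}

@dataclass
class AgentDecisionRecord:
    """One reconstructable entry in the agent's decision log."""
    action: str
    account: str
    inputs_used: list
    policy_applied: str
    control_level: str
    approved_by: Optional[str]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_action(action, account, inputs_used, approver=None):
    """Enforce the automation boundary, then return an auditable decision record."""
    level = AUTOMATION_POLICY.get(action, "human_only")
    if level != "fully_automated" and approver is None:
        raise PermissionError(f"{action} requires human approval ({level})")
    return AgentDecisionRecord(
        action=action,
        account=account,
        inputs_used=inputs_used,
        policy_applied="standing_mandate_v3",  # hypothetical mandate identifier
        control_level=level,
        approved_by=approver,
    )

# A fully automated chase is logged; a default election must carry an approver.
print(record_action("chase_missing_instruction", "ACC-001",
                    ["MT564 ref 123", "client mandate"]))
print(record_action("apply_default_election", "ACC-002",
                    ["MT564 ref 456"], approver="ops_supervisor_7"))
```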
Finally, AI intensifies an existing concern: concentration of operational risk in a small number of custodians and market infrastructures. As institutions rely on a few global providers for safekeeping, corporate actions processing, and proxy services, any failure in those AI-heavy platforms can have market-wide impact.
Custodians are already under pressure to demonstrate cyber resilience, contingency planning, and the robustness of their operating models. When AI is woven into core servicing, outages or model failures may propagate more quickly and be harder to diagnose. Third-party risk frameworks will need to extend beyond traditional service level metrics to include questions such as: how are AI components monitored and updated, what rollback mechanisms exist, and how are clients informed of material model changes? Supervisory attention to operational resilience and critical third parties suggests that regulators will increasingly expect detailed evidence on these points, not general assurances.
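The sketch below illustrates, under assumed names, how those questions might translate into controls: a simple registry that tracks the live version of each AI component, keeps prior versions available for rollback, and flags material changes for client notification. It is an illustration of the questions above, not a description of any vendor's actual control framework.

```python
# A minimal sketch of a model registry for AI components embedded in servicing.
class ServicingModelRegistry:
    def __init__(self):
        self._versions = {}   # component -> list of deployed versions
        self._live = {}       # component -> currently live version

    def deploy(self, component, version, material_change=False):
        """Record a new deployment and flag material changes for client notification."""
        self._versions.setdefault(component, []).append(version)
        self._live[component] = version
        if material_change:
            # Hook for the client-communication process a third-party risk
            # framework would require; a print stands in for the real workflow.
            print(f"NOTIFY CLIENTS: material change to {component} -> {version}")

    def rollback(self, component):
        """Restore the previous version when the live model misbehaves."""
        history = self._versions.get(component, [])
        if len(history) < 2:
            raise RuntimeError(f"no prior version of {component} to roll back to")
        history.pop()                      # discard the faulty release
        self._live[component] = history[-1]
        print(f"ROLLBACK: {component} restored to {history[-1]}")
        return history[-1]

registry = ServicingModelRegistry()
registry.deploy("corporate_actions_summariser", "2025.4")
registry.deploy("corporate_actions_summariser", "2026.1", material_change=True)
registry.rollback("corporate_actions_summariser")
```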
AI in custody and asset servicing layers new model, operational, conduct, and systemic risk over already complex workflows. Interpretation errors, opaque agent behaviour, and concentration in a small number of AI-enabled platforms could all undermine trust if not controlled.
For financial institutions, this means treating AI components as risk objects in their own right: inventorying them, subjecting them to rigorous validation, and ensuring that contractual and supervisory expectations are reflected in day-to-day controls. As regulatory frameworks on operational resilience, outsourcing, and model risk continue to evolve, custodians that can demonstrate credible governance over AI-automated servicing will be better placed to win mandates and withstand future scrutiny.
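A closing sketch of what treating AI components as risk objects might look like in practice: a simple inventory with owners, criticality, and a check for overdue revalidation. The field names and the twelve-month cycle are illustrative assumptions, not a prescribed standard.

```python
from datetime import date

# Hypothetical inventory of AI components used in asset servicing.
AI_INVENTORY = [
    {"component": "prospectus_summariser", "owner": "asset_servicing_ops",
     "criticality": "high", "last_validated": date(2025, 3, 1)},
    {"component": "proxy_agent", "owner": "proxy_services",
     "criticality": "high", "last_validated": date(2026, 1, 5)},
]

REVALIDATION_DAYS = 365  # assumed annual review cycle

def overdue_validations(inventory, as_of):
    """Return components whose last validation is older than the review cycle."""
    return [item["component"] for item in inventory
            if (as_of - item["last_validated"]).days > REVALIDATION_DAYS]

print(overdue_validations(AI_INVENTORY, as_of=date(2026, 4, 1)))
```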