Why AI Adoption Surveys Don’t Reflect Reality in Financial Services
March 31, 2026
A closer look at what surveys actually measure – and why adoption, usage, and impact are often conflated
Personalisation is meant to make systems smarter – more relevant, efficient, and responsive.
But what happens when it becomes too good at predicting what we already like, do, or believe?
Across finance, personalisation technologies are moving from simple recommendations to adaptive systems that shape workflows, portfolios, and risk decisions. Yet in doing so, many are developing a blind spot – a quiet tendency to repeat and reinforce familiar patterns.
This is what we call the sameness loop – when systems learn so effectively from past preferences that they stop genuinely learning at all.
In social media, we recognise this as the “filter bubble.”
In financial systems, it’s subtler – a pattern of contextual narrowing that erodes diversity of thought and action.
Personalisation engines in investment, credit, and operations are typically designed to optimise for accuracy and efficiency. But these very goals make them converge towards what’s already known and validated. Each click, trade, or approval reinforces the past.
The outcome: smarter systems that keep rediscovering the same answers.
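The reinforcement dynamic above can be illustrated with a toy simulation – a hypothetical sketch, not any real recommendation engine. Each time a recommended category is acted on, its weight grows, so the engine drifts toward whatever it already favoured (the asset categories and increment size here are arbitrary assumptions):

```python
import random

def recommend(weights):
    """Pick an item in proportion to its learned preference weight."""
    items = list(weights)
    total = sum(weights.values())
    r = random.uniform(0, total)
    cum = 0.0
    for item in items:
        cum += weights[item]
        if r <= cum:
            return item
    return items[-1]

random.seed(42)
weights = {"equities": 0.5, "bonds": 0.5, "fx": 0.5, "commodities": 0.5}

# Each recommendation that gets acted on reinforces its own weight:
# every click, trade, or approval feeds straight back into the model.
for _ in range(500):
    choice = recommend(weights)
    weights[choice] += 1.0

share = {k: v / sum(weights.values()) for k, v in weights.items()}
print(share)  # shares started equal; self-reinforcement typically skews them
```

The loop never sees anything it did not itself recommend, which is precisely how "each click, trade, or approval reinforces the past."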
Three forces drive this narrowing effect – and together they create a false sense of comfort.
Short-term metrics – Systems optimise for engagement, conversion, or accuracy, not for curiosity or diversity.
Governance inertia – Exploration and experimentation look risky compared to stable, compliant behaviour.
Data feedback loops – Training data derived from prior outputs turns the model into its own echo chamber.
And crucially – it feels right. Familiar data and stable outputs create an illusion of reliability.
The system seems to “understand us” while quietly trimming away difference.
Over time, this contextual narrowing trap makes exploration feel like error rather than insight.
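The data-feedback-loop force in particular can be sketched numerically. In this hypothetical example, a "model" (here just a fitted mean and standard deviation) is repeatedly retrained on its own filtered outputs; the 0.9 damping factor is an assumed stand-in for ranking, filtering, or approval steps that favour outputs close to the model's own centre:

```python
import random
import statistics

random.seed(0)

# Start from a diverse "market" of observations.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

def fit(sample):
    """A stand-in 'model': just the sample mean and standard deviation."""
    return statistics.mean(sample), statistics.stdev(sample)

def generate(mu, sigma, n):
    """The model's outputs, mildly regressed toward its own mean
    (an assumed effect of filtering/ranking before data re-enters training)."""
    return [random.gauss(mu, sigma * 0.9) for _ in range(n)]

mu, sigma = fit(data)
for _ in range(10):
    data = generate(mu, sigma, 1000)  # retrain on prior outputs
    mu, sigma = fit(data)

print(round(sigma, 3))  # well below the original 1.0: diversity has quietly collapsed
```

Each generation looks locally stable, which is the point: nothing flags the shrinking variance unless someone measures it.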
Just as large language models can overfit to a user’s tone or structure, financial personalisation systems can overfit to institutional habits – a form of stylistic overfitting.
They become fluent at repeating the firm’s preferred patterns: consistent, polished, and reassuring – but gradually less creative and less analytically useful.
The sameness loop can appear almost anywhere personalisation or adaptive automation is deployed – in investment recommendations, credit decisioning, or operational workflows.
Wherever it appears, it shares the same DNA: the feedback loops that make systems “learn” also make them self-referential. They validate what they already know – and suppress what they don’t.
For institutions that depend on continual learning, the cost is strategic as well as operational.
Over time, the institution becomes expert at repeating itself – smarter at being the same.
Sameness is typically not intended but arises from context mismanagement.
Adaptive systems keep layering new logic onto old assumptions until the frame itself becomes invisible.
The discipline is to decide when to retain, when to reset, and when to reframe.
This triad helps systems stay adaptive rather than habitual, keeping curiosity alive within structure.
Design for exploration, not just optimisation
Reserve part of every recommendation or workflow for “off-pattern” alternatives.
Track novelty and diversity as real performance metrics.
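Reserving a share of slots for off-pattern alternatives can be as simple as an epsilon-style exploration rule. This is a minimal sketch under assumed inputs – the item names, the 20% reservation, and the `novelty` metric are all illustrative choices, not a prescribed implementation:

```python
import random

random.seed(7)

KNOWN = ["core equity fund", "house bond strategy"]            # validated patterns
OFF_PATTERN = ["frontier markets note", "volatility overlay"]  # hypothetical alternatives

EXPLORE_SHARE = 0.2  # reserve roughly 20% of slots for off-pattern items (a policy choice)

def build_slate(n_slots):
    """Fill a recommendation slate, diverting a fixed share to off-pattern items."""
    slate = []
    for _ in range(n_slots):
        if random.random() < EXPLORE_SHARE:
            slate.append(random.choice(OFF_PATTERN))
        else:
            slate.append(random.choice(KNOWN))
    return slate

def novelty(slate):
    """Share of slots occupied by off-pattern items: a crude diversity metric."""
    return sum(item in OFF_PATTERN for item in slate) / len(slate)

slate = build_slate(1000)
print(round(novelty(slate), 2))  # hovers near the reserved share
```

The point is less the mechanism than the measurement: once `novelty` is reported alongside accuracy, exploration becomes a tracked performance dimension rather than an error to be optimised away.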
Govern for diversity
Treat sameness as a cognitive risk. Add it to risk registers alongside bias and drift. Conduct periodic “diversity audits” of model outputs and decision patterns.
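One concrete form a diversity audit could take is tracking the entropy of decision outputs over time – a falling trend signals growing sameness. The quarterly decision counts below are invented for illustration:

```python
import math
from collections import Counter

def shannon_entropy(decisions):
    """Entropy (in bits) of the decision distribution: lower = more sameness."""
    counts = Counter(decisions)
    total = len(decisions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Two hypothetical quarters of a credit model's decisions.
q1 = ["approve"] * 50 + ["refer"] * 30 + ["decline"] * 20
q2 = ["approve"] * 85 + ["refer"] * 10 + ["decline"] * 5

# A falling entropy trend is a sameness signal worth logging in the risk register.
print(round(shannon_entropy(q1), 3), round(shannon_entropy(q2), 3))
```

Here the second quarter's outputs are far more concentrated, so its entropy is markedly lower – exactly the kind of drift a periodic audit would surface alongside bias and model-drift checks.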