Why AI Adoption Surveys Don’t Reflect Reality in Financial Services
March 31, 2026
A closer look at what surveys actually measure – and why adoption, usage, and impact are often conflated
Distinctive Insights’ “Big Ideas” series explores how AI – from Machine Learning and Large Language Models to emerging Agentic systems – could plausibly reshape strategy, governance, and advantage across financial services. Each article builds on key thematic ideas, using foresight rather than prediction to help senior leaders think through where control, value, and differentiation might migrate as these capabilities mature.
Private markets remain a craft business under pressure
Founder access is still brokered through networks; diligence is patchy under time constraints; LPs want clearer evidence without slowing decisions.
Meanwhile, digital exhaust from product telemetry, platform interactions, and third-party data is compounding faster than most teams can parse. Compliance expectations rise as outreach scales. Everyone says they are “data-driven”, yet few can show how the data truly moves the needle.
AI could act as a forcing function
ML can surface non-obvious patterns in deal flow; language models compress sector landscaping and documentation; agentic systems coordinate compliant outreach at machine speed. These capabilities don’t settle debates – they reframe where scarce advantage sits.
The following themes sketch plausible trajectories for how this forcing function could play out as AI matures. They are prompts for judgement, not predictions – a way to examine where authority, value, and control may migrate next.
The centre of gravity in origination may shift from who you know to what your models can reliably see. ML can correlate conversion signals across fragmented data – product usage blips, hiring telemetry, supply-chain breadcrumbs – revealing patterns that humans routinely miss.
Signal ownership becomes strategic. Firms able to generate or access proprietary signal streams could out-originate larger rivals whose data access is shallower.
Culture will need to evolve. Relationship craft still matters, but the agenda for those conversations may increasingly be set by what the signals suggest, not by who picked up the phone.
Search is commoditising. As language models begin to sweep entire sectors, compressing landscaping and document triage, the scarce thing becomes original interpretation – the thesis that explains why a corner of the market is mispriced and how it might inflect.
Teams that codify their thinking will benefit most. Ontologies, source hygiene, and promptable research frames allow language models to assemble evidence consistently while analysts spend time on tension points, not boilerplate. The memo becomes a living artefact – crisp, linkable, and auditable.
But sameness is a risk. If everyone asks similar models similar questions, outputs converge. Distinctive theses may rely on integrating firm-specific data, contrarian priors, and deliberate exploration paths that avoid the obvious summaries.
The strategic pivot is subtle: are you adding analysts, or are you investing in the intellectual property of your theses – the reusable patterns that make originality repeatable?
Speed to founder contact is increasingly a systems problem. Agentic AI can orchestrate compliant outbound contact – drafting tailored notes, checking conflicts, scheduling meetings, logging interactions, and managing information rights.
Outreach becomes continuous, precise, and explainable. Knowledge graphs ensure that every touch enriches context, while guardrails keep policies intact.
The human layer doesn’t disappear; it is redeployed to judgement calls and sensitive conversations. The machine handles cadence, targeting, and record-keeping – and does not forget.
LPs are asking for clearer lines from assumption to decision. As language models and tooling begin to log sources, rationales, and scenario branches, committees gain a shared, inspectable trail – not just a conclusion but how the team got there.
This could centralise underwriting authority where evidence governance is strongest. Reproducible memos, consistent factor definitions, and challenge logs make decisions reviewable without paralysing speed. Disagreement becomes an asset when the trail captures alternative paths.
The posture will require humility. Structured reasoning can create a false sense of certainty; models can over-read patterns. The point isn’t mechanistic objectivity – it’s defensible judgement, with the telemetry to show why and when it changed.
The decision ahead is cultural: do you want discretion that is hard to evidence, or procedural authority built on reasoning trails that anyone – especially LPs – can interrogate?
Across these themes, one thread stands out – AI doesn’t settle private markets’ debates so much as it sharpens them. Signals challenge relationships; originality pushes back against automated synthesis; outreach becomes orchestration; committees trade folklore for auditable reasoning; economics tilt toward what actually predicts. The firms that win may be those that treat data rights, research IP, and reasoning telemetry as core assets – and align culture, compliance, and technology accordingly.
Two ideas to take into your next leadership discussion. Where do you already possess proprietary signal, thesis frameworks, or reasoning trails that could compound with modest investment? And if your current advantage is thin in those areas, which partnership or build paths would let you earn the right – credibly and compliantly – to move first when the next great founder appears?