AI Everywhere, But Where’s The ROI? How Banks Are Investing Without a Clear Payoff
Banks are doubling down on AI, yet the measurable return is landing in pockets rather than across the enterprise. Here’s a grounded look at what’s actually working, where the smart money is shifting, and how leaders are redefining ROI in a rapidly maturing landscape.
Freya Scammells
AI Practice Lead
freya.scammells@caspianone.co.uk
Banks are moving fast on AI, from internal copilots to supervised agents, yet few leaders can say with confidence what the true return on investment looks like. In 2024 and 2025, most institutions shifted from sandbox experiments to supervised pilots, focused on governed, internal deployments that keep sensitive data in‑house. That pattern will continue, but the tension remains: leaders feel they have to be in the game, even when the business case is still taking shape.
From my conversations with teams across investment and retail banking, the real story is this: AI adoption is accelerating, budgets are being reshuffled to fund it, and governance is catching up. The payoff, however, is uneven and often measured in time saved rather than revenue created. That is not a failure; it is a signal to rethink where and how we measure AI ROI.
What AI use cases are banks prioritising, and why?
Most banks are now deploying internal, governed assistants that sit over proprietary knowledge bases, pitches, deal materials, policies and procedures. The aim is to compress cycle times, improve accuracy and make teams more effective in day‑to‑day work, especially in pitching and deal preparation across the front office and coverage teams. Where last year was heavy on concepts and proofs for these types of projects, this year is about supervised pilots in real workflows. Fully autonomous agents are a stretch in highly regulated environments, so banks are keeping humans in the loop while they validate value and manage risk.
While getting these projects off the ground is becoming a reality, efficiency rather than revenue is driving the early measures of whether a tool is working. Internal automation, document summarisation, search, software‑engineering productivity, compliance reporting and customer‑service triage lead the way here. Surveys across financial services point to average productivity uplifts of around 20 percent in software development and service functions, which explains the focus on cost and cycle‑time reduction.
Cost savings are also on the table in risk and fraud analytics, where AI‑powered detection aims to minimise lost revenue. As deepfakes and synthetic identities rise to become a board‑level concern, banks are leaning on AI to identify potential AML, KYB and fraud cases. The dual reality is obvious: AI is both a new threat vector and a promising detection layer, which is why many teams prioritise risk use cases with clearer regulatory urgency. Evidence points in the same direction. McKinsey, BCG and PwC all highlight internal automation, customer experience and risk controls as the near‑term value pools, with agentic systems emerging but still supervised and scoped.
Why is AI ROI hard to measure in financial institutions, and is it slowing adoption?
Most benefits accrue as time and quality, not immediate revenue. Banks are harvesting time saved, fewer handoffs, faster pitch prep and better policy retrieval. Those gains are real, but they require baselines, adoption metrics and attribution; without them they remain anecdotal, and that makes classic ROI calculations harder. Several analyses warn that dabbling in many small use cases without end‑to‑end process change leads to limited returns, a state of pilot purgatory and diluted business cases. The lesson: anchor AI in business strategy, pick fewer, bigger bets, and wire end‑to‑end measurement in from day one.
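To make "time saved" bankable rather than anecdotal, the baselines and adoption metrics described above can be wired into a simple model. The sketch below is purely illustrative: every input (user counts, adoption rates, hours saved, costs) is a hypothetical assumption, not a benchmark from this article, and any real programme would need measured pre‑deployment baselines behind each number.

```python
# Illustrative sketch: turning "time saved" into a defensible first-year ROI figure.
# All numbers are hypothetical assumptions, not benchmarks.

def ai_roi(users: int,
           adoption_rate: float,              # share of users actively using the tool
           hours_saved_per_user_week: float,  # vs. a measured pre-deployment baseline
           loaded_hourly_cost: float,         # fully loaded cost of an employee hour
           annual_run_cost: float,            # licences, inference, support
           initial_investment: float) -> float:
    """First-year ROI as a ratio: (benefit minus run cost) over initial investment."""
    working_weeks = 48  # assumed working weeks per year
    annual_benefit = (users * adoption_rate * hours_saved_per_user_week
                      * working_weeks * loaded_hourly_cost)
    return (annual_benefit - annual_run_cost) / initial_investment

# Hypothetical example: 2,000 users, 60% adoption, 2 hours saved per user per week,
# £55/hour loaded cost, £1.5m annual run cost, £3m initial investment.
roi = ai_roi(2000, 0.6, 2.0, 55.0, 1_500_000, 3_000_000)
print(f"First-year ROI: {roi:.0%}")  # prints "First-year ROI: 161%"
```

The point of the exercise is less the arithmetic than the discipline: each parameter forces a baseline and an adoption measurement to exist before the money is spent.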
Leaders openly acknowledge capability gaps, with research showing that only a small minority of firms are able to generate value at scale while most still struggle to move AI pilots into full production. Rather than slowing adoption, this is pushing organisations to redirect investment toward high‑ROI, core processes where outcomes can be measured with clearer scorecards.
At the same time, regulation is shaping progress. As the EU AI Act phases in, credit‑scoring systems fall under high‑risk classifications, and supervisors are actively mapping these requirements against existing banking rules. This is steering institutions toward more prudent, well‑governed deployment, especially for internal, non‑high‑risk use cases. The result is an AI landscape where adoption isn’t slowing at all; it’s becoming more targeted, more disciplined, and ultimately healthier for AI ROI.
Are banks investing out of strategic necessity, competitive FOMO, or genuine problem‑solving?
The answer is all three but weighted toward necessity and problem‑solving. Competitive dynamics are intensifying. Analysts suggest AI may compress cost bases, reshape customer behaviour, and even erode profit pools for those slow to move, so sitting out is not an option.
The pressure to improve efficiency is intensifying as retail banks contend with slowing revenue growth and rising fixed costs, all while digital competitors operate with far leaner cost‑to‑income ratios. This gap is driving the acceleration of automation and the rapid adoption of internal copilots as a controllable lever to bring operational costs back in line.
Similarly, senior leaders are becoming increasingly explicit about the stakes; banking CEOs recognise that gaining an AI advantage requires accepting some level of risk, prompting a shift away from scattered, tactical experiments and toward more targeted, enterprise‑wide strategies.
In my view, this is why budgets are being reallocated, not because of competitive FOMO, but because leaders see a greater strategic risk in ignoring AI than in moving forward carefully with the right guardrails.
Where will AI deliver the most realistic value in the next 12–18 months?
Here are five themes I’m consistently hearing from leaders and seeing play out across real bank AI programmes right now.
1. Internal productivity with governed copilots
Expect measurable gains in policy retrieval, knowledge search, meeting and document workflows, code assist and analytics self‑service. Case studies already show banks deploying Microsoft Copilot, Copilot Studio and proprietary assistants for secure, in‑house value.
2. Customer service augmentation, not full autonomy
AI‑assisted agents will handle routine inquiries and next‑best actions, with human oversight for complex or sensitive interactions, delivered through bank apps and contact centres.
3. Fraud, AML and identity
As deepfake and synthetic identity risk climbs, AI‑powered detection and verification will be prioritised. Reports point to rising consumer anxiety, industrialised fraud patterns and the need for layered controls.
4. Middle and back‑office automation
Collections, reconciliations, KYC refresh and policy control rooms are fertile ground for supervised agents that lower cost per case and compress cycle times. Significant cost reductions are forecast when these processes are re‑engineered end-to-end.
5. Early revenue impact in advice and personalisation
While fully autonomous trading agents are not ready for broad release in regulated contexts, banks will use AI to hyper‑personalise next‑best‑offer, improve digital sales conversion and support relationship managers with timely insights. Analysts project revenue uplifts for banks that scale these patterns in a controlled way.
Why the ROI conversation feels messy, and how to fix it
A lot of teams set ROI targets after they have already spent the money: they lead with technology exploration, then scramble to attribute value later. The order should be reversed. Start with the humans who will use the system, ask which tasks drain energy and time, and design the workflow change around that, with the metrics set up front.
The paradox of last year’s AI spend is instructive: billions went into proofs, yet few could show a bankable return. The institutions pulling ahead are those treating AI as a programme of end‑to‑end workflow reinvention, tightly coupled to business strategy and measured as they go.
Banks can prove AI ROI, but it demands fewer initiatives, tighter alignment to real workflows, real baselines, and an adoption plan that treats humans as the centre of the system. If you do that, you will have the evidence to keep investing, even when budgets get tight. If you do not, you will have experiments, not outcomes.
About Caspian One and How We Help
Caspian One’s AI Practice helps financial institutions bridge the gap between AI ambition and AI impact by providing specialist AI talent with real financial‑market experience, not generalists or experimenters. Our teams understand trading logic, regulatory constraints, risk sensitivity and the operational realities of financial markets, ensuring AI initiatives are built for measurable outcomes rather than stalled proofs‑of‑concept.
Led by AI & Data expert Freya Scammells, we support banks investing in AI for trading, risk, compliance and automation by aligning talent to business value, delivery requirements and governance expectations. By embedding practitioners who can navigate complex systems and regulated environments, we help institutions scale AI safely, effectively and with confidence.
Disclaimer: This article is based on publicly available, AI-assisted research and Caspian One’s market expertise as of the time of writing; written by humans. It is intended for informational purposes only and should not be considered formal advice or specific recommendations. Readers should independently verify information and seek appropriate professional guidance before making strategic hiring decisions. Caspian One accepts no liability for actions taken based on this content. © Caspian One, 2026. All rights reserved.