AI in Finance: No Longer Stalling, Now Scaling
Six months on from our ‘AI Adoption in Financial Services’ April report, adoption has surged… but without the right talent, firms risk turning acceleration into chaos instead of ROI.
Exclusive Update: September 2025 publication, authored by Freya Scammells - Head of AI Practice
Back in April, we argued that AI adoption in financial services was stalling. The promise was clear, but the execution was faltering... tangled in legacy tech, compliance hurdles, and a lack of the right talent. Six months on, the landscape looks very different.
AI in finance is no longer stuck in pilot mode. It’s accelerating.
85% of financial firms now use AI at scale, up from just 30% in 2023
Half have deployed their first generative AI applications, with another 28% going live before year-end
Top global banks have rolled out AI tools to more than 800,000 employees - two-thirds of their combined workforce
Measurable returns are showing up: nearly 70% of firms report revenue growth of at least 5% from AI; over 60% report cost reductions
Generative AI now drives more than half of all new banking AI projects
Generative AI has gone from experiment to essential. In H1 2025, more than half of new bank AI projects included generative capabilities, and customer-facing deployments are finally emerging. Wells Fargo’s Gemini-powered assistant and Commerzbank’s AI avatar “Ava” are just the start of a new wave.
But scaling hasn’t made things simpler. Agentic AI (systems capable of semi-autonomous decision-making) is already in pilot at nine of the world’s top 50 banks. Regulators are watching closely. The EU AI Act begins to bite in 2026, treating most financial AI as “high-risk,” while US and UK supervisors are urging proactive governance, bias checks, and human oversight. Compliance is catching up, and firms that aren’t ahead of it risk reputational and regulatory blowback.
And then there’s the people problem.
Technology is no longer the bottleneck; talent is.
78% of CFOs say skills gaps are a major barrier to AI progress
Only 54% of experienced finance professionals feel equipped to use AI, compared to 89% of students entering the field
Projects staffed with domain-aware AI talent deliver 80% faster than those led by generic data scientists
New roles are appearing - prompt engineers, AI product managers, governance officers - but the demand is far outpacing supply. Diversity remains a concern: 73% of women in finance want to build AI skills, but only 24% expect to rely heavily on AI at work, highlighting an inclusion gap that can’t be ignored.
The lesson is blunt: AI success in finance doesn’t come from technology alone. It comes from having the right people in the right seats.
That was our conclusion in April, and it holds even more weight now. AI adoption is no longer stalling, it’s scaling fast. But scaling without the specialist talent to implement, govern, and integrate these systems is a recipe for stalled ROI.
For senior leaders, the opportunity - and the risk - has never been clearer. The firms that invest in domain-savvy AI talent today will convert acceleration into advantage. The rest will find themselves falling behind, with half-built projects and rising regulatory exposure.
AI in finance is now a people story. The question isn’t whether you’ll adopt it, but whether you have the team that can make it deliver. So what steps will you take next?
Click ‘Contact Freya’ to arrange a conversation with our AI Practice Lead, Freya Scammells, at a time that suits you.
Or, use the dropdowns below to discover more information on these key topics; scroll down to access the original report (April publication).
-
When we published our April report, we highlighted Gartner’s forecast that by late 2025, over 70% of financial institutions would be using AI at scale - up from just 30% in 2023. That sounded ambitious at the time. Six months later, the reality has outpaced the prediction.
Recent surveys show 85% of financial firms are now actively applying AI across functions from fraud detection to IT operations and customer engagement (RGP, 2025). This isn’t about proof-of-concept experiments anymore, it’s enterprise-wide rollouts.
The investment side tells the same story. The AI market in financial services remains on track to grow fivefold this decade, from £28.93 billion in 2024 to £143.56 billion by 2030, at a CAGR of 30.6% (Market Research Future, 2025). Within that, generative AI is the fastest-growing segment, projected to jump from £1.67 billion to £12.1 billion by 2030, a CAGR of nearly 39% (MRFR, 2025).
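For readers who want to sanity-check those growth rates, both follow from the standard compound-annual-growth-rate formula. A minimal Python sketch, using the figures quoted above:

```python
# Illustrative check of the growth figures quoted above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

print(f"Overall AI-in-finance market: {cagr(28.93, 143.56, 6):.1%}")  # ~30.6%
print(f"Generative AI segment:        {cagr(1.67, 12.10, 6):.1%}")    # ~39.1%
```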
And the projects are no longer “innovation theatre.” In H1 2025, the top 50 global banks doubled their number of live AI use cases compared to late 2024, and more than half of those 173 new projects included generative AI (Evident, 2025). Generative AI has gone from hype to mainstream: it now accounts for roughly 25% of all new AI use cases in finance (Stanford AI Index, 2025).
The returns are starting to show up. NVIDIA’s State of AI in Financial Services 2025 survey found that nearly 70% of financial firms reported revenue growth of at least 5% from AI, with a growing subset seeing 10–20% gains, while over 60% reported cost reductions of 5% or more (NVIDIA, 2025). The most profitable use cases? Algorithmic trading and portfolio optimisation (25% of firms), closely followed by customer experience enhancements (21%).
Generative AI adoption in particular has surged: 50% of surveyed financial institutions had already deployed their first generative AI application, and a further 28% expected to go live within six months - meaning that by the end of 2025, around 78% of firms will be running GenAI in production (NVIDIA, 2025).
The scale is striking. According to Evident, ten of the world’s largest banks have rolled out AI tools to more than 800,000 employees, around two-thirds of their combined workforce (Evident, 2025). That shift - from AI as the remit of innovation labs to something embedded in daily workflows - is the clearest signal yet that adoption isn’t stalling, it’s accelerating.
And the forecasts ahead remain bullish. Analysts expect 85% of financial institutions will have fully integrated AI into operations by the end of 2025, up from 45% just three years ago (Gartner, 2025). Global investment in AI (especially generative AI) continues to rise, with funding in the space topping $33.9 billion in 2024, up 19% year-on-year (Stanford AI Index, 2025).
The picture is clear: AI in financial services has shifted from “potential” to “present tense.” The pace of adoption has accelerated, the investment is flowing, and firms are beginning to see tangible ROI.
-
If AI adoption is accelerating, regulation is racing to catch up. Since April, we’ve seen the clearest signals yet of how supervisors intend to oversee AI in financial services. The message is blunt: innovation is welcome, but firms will be held accountable.
Europe: The AI Act moves from paper to practice
The EU’s AI Act, which entered into force in August 2024, is now in its implementation phase. Most requirements kick in by August 2026, giving firms less than a year to prepare (Norton Rose Fulbright, 2024). Some rules bite earlier: the ban on “unacceptable risk” AI systems (like social scoring or manipulative algorithms) has applied since 1 February 2025 (European Commission, 2024).
For finance, the implications are significant. The Act treats most credit scoring, trading algorithms, and risk models as “high-risk AI”, demanding strict standards for data quality, transparency, and human oversight. Enforcement will come through financial supervisors - the EBA, ESMA, and national regulators - meaning AI oversight will sit alongside prudential and conduct regulation (European Commission, 2024). Firms that haven’t already mapped their AI use cases to the Act’s risk categories should start now. The deadline may look distant, but compliance programmes, audit trails, and governance structures take time to build.
United States: SEC warns against “gambling with trust”
The SEC has not introduced formal AI rules yet, but in March 2025 it held a landmark AI in Finance Roundtable with industry leaders. The tone was clear: financial institutions must govern AI proactively, before regulation forces their hand.
SEC officials highlighted that AI can create “gaps in existing frameworks” but stopped short of proposing prescriptive new rules. Instead, they called on firms to maintain AI inventories, conduct risk classification, and create cross-functional AI governance councils (SEC, 2025). One commissioner warned that proceeding without proper controls is “gambling with trust and compliance.”
The roundtable also flagged new risks: the rise of agentic AI (semi-autonomous agents that can act independently), already being piloted in several banks, and a spike in AI-driven fraud (deepfakes and automated social engineering). Supervisors made it clear that existing rules (from suitability to consumer protection) will be applied to AI systems, even if no AI-specific regulation exists yet.
United Kingdom: Sandbox over statute
The UK has taken a different line. The Financial Conduct Authority (FCA) insists it “does not need new rules for AI in financial services”, arguing that current frameworks (Senior Managers Regime, Consumer Duty, GDPR) are sufficient (FCA, 2025).
Instead, the FCA is leaning on innovation-friendly supervision. It launched an AI Live Testing sandbox (described as a “supercharged sandbox”) starting October 2025, allowing firms to trial AI solutions under regulatory oversight (FCA, 2025). Regulators will observe, guide, and stress-test models in areas like fraud prevention and financial inclusion. The FCA’s Chief Data & AI Officer, Jessica Rusu, framed it as a competitive edge: by avoiding premature, rigid rules, the UK hopes to stay attractive to innovation.
The Bank of England’s Financial Policy Committee is also examining systemic risks, such as herd behaviour if too many firms rely on similar models, but the overall stance is principle-based, not prescriptive.
Global moves: Convergence on principles
Beyond Europe, the US, and UK, regulators are closing gaps elsewhere. Singapore’s MAS updated its AI governance principles for banking. Canada continues to refine its proposed Artificial Intelligence and Data Act, which would cover automated decision systems in finance. And global bodies (IOSCO, the Basel Committee, and the G20) are coordinating on AI principles around transparency, accountability, and resilience.
What this means for financial institutions
The direction of travel is obvious. AI in finance will be regulated... the only question is how soon and how strict. Europe’s approach is rules-heavy, the US is leaning into oversight, and the UK is offering sandbox-style engagement. But all expect firms to act now:
Inventory and classify AI systems - understand where you’re exposed (a minimal sketch of an inventory record follows this list)
Build explainability and bias testing into your models
Create governance structures that link compliance, risk, and business leaders
Plan for auditability - regulators want documentation, not just results
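What might “inventory and classify” look like in practice? Below is a minimal sketch of one possible inventory record; the schema, field names, risk tiers, and example entry are our own illustrative assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Simplified mirror of the EU AI Act's risk categories (illustrative)
    PROHIBITED = "unacceptable risk"
    HIGH = "high-risk"        # e.g. credit scoring, trading risk models
    LIMITED = "limited risk"  # transparency obligations only
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One entry in a firm-wide AI inventory (hypothetical schema)."""
    name: str
    owner: str                # accountable business owner
    use_case: str
    risk_tier: RiskTier
    human_oversight: bool     # is a human in the loop?
    bias_tested: bool
    last_audit: str           # ISO date of the most recent model audit
    documentation: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="retail-credit-scorer-v3",
        owner="Head of Retail Credit Risk",
        use_case="consumer credit scoring",
        risk_tier=RiskTier.HIGH,
        human_oversight=True,
        bias_tested=False,
        last_audit="2025-06-30",
        documentation=["model card", "fairness audit (pending)"],
    ),
]

# Surface anything a supervisor would ask about first:
gaps = [r.name for r in inventory
        if r.risk_tier is RiskTier.HIGH
        and not (r.human_oversight and r.bias_tested)]
print("High-risk systems with control gaps:", gaps or "none")
```

Even a simple register like this gives compliance, risk, and business leaders a shared view of where the firm is exposed - and it is the raw material regulators will expect to see documented.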
In 2023–24, the excuse was “regulation is unclear.” By late 2025, that line won’t hold. The regulators’ patience is running out, and firms that don’t get ahead of governance risk being caught flat-footed.
-
In April, much of AI in finance still lived in pilots and proofs-of-concept. Six months on, the dial has shifted. Banks are no longer testing if AI works, they’re proving how far it can scale.
Generative AI: From labs to live clients
Generative AI has moved centre-stage. In H1 2025, the top 50 global banks doubled their number of live AI use cases compared to late 2024, and over half of those 173 new projects leveraged generative AI (Evident, 2025).
Customer-facing deployments are finally materialising:
Wells Fargo integrated Google’s Gemini 2.0 into its “Fargo” assistant, making it a genuinely conversational mobile banking tool (Evident, 2025)
Commerzbank launched “Ava,” an AI avatar on its digital platform, answering customer queries in natural language and personalising account support (Evident, 2025)
Until recently, most firms kept generative AI internal - summarising research, writing code snippets, or handling employee queries. These examples show a turning point: banks are starting to trust generative AI with client-facing roles. That said, the majority of projects still keep a human in the loop. Out of 97 generative AI use cases catalogued in 2025, only 15 are fully client-facing without human oversight (Evident, 2025).
Agentic AI: The next frontier and a new risk
The buzzword of mid-2025 is agentic AI: systems that can take autonomous actions and chain tasks together. According to Evident, nine of the world’s top 50 banks have agentic AI pilots (Evident, 2025).
Examples include JPMorgan Chase and BNY Mellon, both experimenting with AI agents for internal operations: think workflow orchestration and automated reporting. While these systems promise efficiency, regulators have already flagged concerns. At the SEC’s March 2025 roundtable, “agentic AI” was singled out as a governance risk: firms need audit trails, controls, and fallback mechanisms before letting autonomous agents loose (SEC, 2025).
Enterprise scale: AI in the hands of thousands
The most striking shift is scale. Ten leading global banks have now equipped more than 800,000 employees (roughly two-thirds of their combined workforce) with AI tools (Evident, 2025). This isn’t AI tucked away in an innovation lab; it’s embedded in day-to-day work across sales, trading, risk, IT, and operations.
Front-office functions are leading the charge. Over two-thirds of scaled “AI wins” reported in 2025 have been in trading, investment, or IT/security - the areas where efficiency and revenue gains are easiest to prove (Evident, 2025). For example:
Trading desks are deploying AI to flag client opportunities and optimise order routing
IT teams are using AIOps (AI for IT operations) to predict outages and resolve incidents before they hit
Governance and MLOps: The “last mile” challenge
One of the clearest lessons of 2025 is that success depends on operations, not just algorithms. Firms that raced ahead with generative AI often hit bottlenecks when moving from proof-of-concept to production.
M&T Bank introduced a continuous credit monitoring system using explainable AI, ensuring outputs could be audited and understood by regulators and credit officers alike (American Banker, 2025)
Goldman Sachs revealed how it has embedded AI across five strategic areas - from developer productivity to trader surveillance - each paired with a governance framework and retraining cycle (Goldman Sachs, 2025)
These examples highlight a shift: firms are investing as much in governance, MLOps, and data infrastructure as in the models themselves. NVIDIA’s 2025 survey noted that the share of firms citing budget constraints dropped by 50% year-on-year - not because firms are spending less, but because boards are now willing to allocate serious capital to AI factories, platforms, and data pipelines (NVIDIA, 2025).
The stumbles: Lessons learned
Not every initiative has gone smoothly. Several European banks have had to pause generative AI wealth management pilots after models produced flawed financial advice. Others shelved AI-driven lending tools due to concerns about bias and explainability under fair lending rules. These cases underline why many firms are still cautious about exposing clients directly to AI decisions.
Legacy integration also remains a drag. Even with APIs and microservices, connecting modern AI systems to decades-old core banking technology is a costly, multi-year journey. And resilience is now a regulatory focus: supervisors want firms to show how they’d handle an AI outage or failure without disrupting client services.
The bottom line
Between April and September 2025, AI in finance has moved from promise to production. The case studies prove it: real deployments at scale, measurable ROI, and clear lessons learned. But they also show that technology alone doesn’t guarantee success. The firms leading the pack are those that invested in the less glamorous side of AI: governance, MLOps, and the people to run them.
The rest risk finding themselves with flashy demos, but no durable advantage.
-
The biggest truth about AI in finance hasn’t changed since April: technology doesn’t deliver results, people do. What has changed is just how stark the talent gap has become.
The widening skills gap
Finance leaders are increasingly blunt about it. In June, 78% of CFOs said skills gaps are a significant barrier to achieving their AI objectives (OneStream, 2025). At the same time, 86% of finance professionals expect to use AI regularly in their careers, but organisations are struggling to recruit or train the talent to meet that demand (OneStream, 2025).
There’s also a generational divide:
Among veteran finance professionals, only 54% feel equipped to use AI
For younger professionals, that rises to 63%
Among finance students entering the industry, it jumps again to 89% (OneStream, 2025)
The message is clear: tomorrow’s workforce is ready for AI... today’s isn’t. Unless firms act now to reskill, they risk an internal capability gap that will stall adoption.
Domain expertise makes the difference
We argued in April that generalist data scientists often fail in finance. The data since then has only reinforced the point. According to Goldman Sachs, projects led by AI specialists with financial domain expertise deliver 80% faster than those staffed by generalists (Goldman Sachs, 2024).
This isn’t about raw technical skill. It’s about context:
Quants who understand portfolio constraints as well as neural networks
Engineers who know what “low latency” means on a trading desk
Governance leads who can translate the EU AI Act into model design choices
Without that fluency, projects end up in compliance dead-ends or never make it out of the lab.
New roles, new demand
As adoption scales, new specialist roles have come into focus:
Prompt Engineers – now one of the fastest-growing AI job categories, projected to grow 33% annually through 2030 (MRFR, 2025). In finance, they’re optimising prompts for GenAI assistants, legal document parsers, and client-facing bots
AI Product Managers – straddling data science, business, and user experience to ensure AI solves real problems
MLOps Specialists – building the pipelines that get models from pilot to production and keep them reliable
AI Governance Officers – dedicated to explainability, bias audits, and compliance, often reporting to the C-suite
These aren’t “nice-to-have” hires anymore; they’re core to scaling AI responsibly.
Inclusion gaps that can’t be ignored
The skills shortage isn’t just about numbers. It’s about diversity. In the OneStream study, 73% of women in finance said they want to develop AI skills, but only 24% expect to rely heavily on AI in their roles, compared to 40% of men (OneStream, 2025).
Men also report higher day-to-day use of AI (71% vs 61%). That signals a confidence and access gap - not a lack of interest. If firms don’t address it, they’ll cut themselves off from a huge pool of motivated talent.
How firms are responding
The frontrunners aren’t just throwing money at the market. They’re building talent pipelines:
Morgan Stanley launched an internal AI Academy in mid-2025 to upskill advisors and analysts after rolling out its GPT-powered wealth assistant (Financial Times, 2025)
UK banks are partnering with Multiverse and other providers to deliver AI apprenticeships, rotating staff through data teams to build practical fluency
Several global firms have introduced rotation programmes where employees spend six months embedded with AI specialists before returning to their roles
In other words, they’re growing their own.
The takeaway
The numbers may tell us AI adoption is accelerating, but the real bottleneck is still people. The gap isn’t just technical - it’s contextual, cultural, and a matter of inclusion.
The institutions that win will be the ones that hire differently, not just more: domain-savvy engineers, compliance-aware data scientists, and cross-functional product leaders - and the ones that build reskilling pathways for today’s workforce while making AI literacy a baseline for tomorrow.
-
The picture since April is clear: AI in financial services is no longer stalling, it’s accelerating. Adoption rates are climbing, generative AI is mainstreaming, and measurable ROI is finally emerging. But acceleration alone doesn’t equal advantage. For senior leaders, the challenge is turning speed into sustainable results.
Three pillars: technology, talent, trust
The firms pulling ahead share a common approach: they treat AI as a strategic programme anchored on three pillars.
Technology – Modern infrastructure, data pipelines, and AI “factories” that move models from pilot to production at scale. Budgets for this have grown; NVIDIA reports the share of firms citing AI budget constraints dropped by 50% year-on-year as boards commit serious investment (NVIDIA, 2025)
Talent – The decisive factor. Projects with domain-savvy AI talent deliver 80% faster than those staffed by generalists (Goldman Sachs, 2024). CFOs overwhelmingly agree that skills gaps are now the single biggest barrier to adoption (OneStream, 2025)
Trust – Governance and compliance embedded from day one. With the EU AI Act’s high-risk requirements coming into force by August 2026 (European Commission, 2024), and the SEC and FCA warning firms to self-govern now (SEC, 2025; FCA, 2025), explainability, bias testing, and audit trails are non-negotiable
Ignore any of these three pillars and acceleration will collapse back into stalling.
Lessons for the C-suite
For financial services leaders, the insights since April translate into concrete actions:
Benchmark your AI maturity – If 85% of firms are deploying AI and you’re not, that’s a competitive gap. If others have AI in the hands of 50,000 staff and you have a team of 50, you need to understand why
Make talent a board-level agenda item – Treat AI hiring and reskilling as strategically as regulatory compliance. Build pathways for today’s employees to gain AI fluency and recruit for hybrid skills, not just technical brilliance
Embed governance into delivery – Don’t wait until 2026 to scramble for compliance. Build explainability, bias monitoring, and documentation into every AI system now. Regulators will expect it
Pick visible wins – Focus AI investment on areas with clear ROI and business value: faster onboarding, fraud reduction, or trader productivity. Quick, measurable results build internal momentum and justify further spend
Maintain a client-centric lens – Whether deploying AI in trading, lending, or customer support, keep the focus on fairness and service quality. Trust lost through one AI misstep can outweigh years of efficiency gains
The strategic conclusion
In April, we argued AI adoption was stalling because firms were missing one thing: the right people. By September, adoption has accelerated, but the conclusion hasn’t changed. Talent remains the differentiator.
Technology is scaling. Regulation is taking shape. But the firms converting acceleration into advantage are the ones who invested in the people (domain-aware engineers, governance experts, MLOps specialists) who can make AI work inside the messy, regulated realities of financial services.
For senior stakeholders, the next 12–18 months are pivotal. Get your AI strategy right now - with the right teams in place - and you’ll lead the next wave. Get it wrong, and you’ll be playing catch-up in a market that’s moving faster than ever.
Disclaimer
This September 2025 update has been prepared using credible, publicly available sources alongside Caspian One’s internal market expertise. While every effort has been made to ensure accuracy and reliability at the time of publication, the content is provided for informational purposes only and does not constitute formal advice or a specific recommendation regarding AI adoption, regulatory compliance, or hiring strategies.
The AI landscape, associated regulations, and talent markets within financial services are developing rapidly. Readers should independently verify current standards, market conditions, and applicable regulatory requirements before making strategic decisions or investments. Caspian One accepts no liability for any actions taken based on the information provided herein.
Any case studies, scenarios, or ROI figures referenced are illustrative in nature and do not guarantee future outcomes.

AI Adoption in Finance Is Stalling. Here’s What Needs to Change…
Unpack the real reasons AI adoption is stalling in finance: talent gaps, compliance pressure, legacy tech - and why most hiring strategies miss the mark. Discover how a smarter, market-specific approach unlocks AI that actually delivers!
Published April 2025
By Caspian One - AI Practice

[Executive Brief]
Artificial Intelligence (AI) is rapidly transforming financial services, but the majority of institutions are struggling to turn promise into performance.
Despite increased investment and board-level urgency, AI adoption at scale remains slow, expensive, and underwhelming in terms of ROI.
The underlying issue isn’t the technology - it’s the talent.
This research explores the true state of AI adoption in financial markets, spotlighting the systemic barriers that prevent progress: talent shortages, regulatory pressure, legacy systems, and cultural inertia.
Crucially, we examine why traditional AI hiring models often fail in finance, and how a smarter, industry-specific approach to talent can unlock scalable, compliant, and high-impact AI solutions.
Grounded in learnings from McKinsey, BCG, Deloitte, EY, PwC, and others, and supported by insight from Freya Scammells, AI Practice Lead at Caspian One, this paper offers a practical roadmap for financial firms looking to future-proof their AI capability through better-aligned talent strategies.
The Real Reason AI Projects Fail…
AI is no longer a futuristic promise - it’s a boardroom priority.
By late 2025, over 70% of financial institutions will be utilising AI at scale, up from just 30% in 2023 (Gartner). From algorithmic trading and fraud detection to risk modelling and compliance automation, AI has the potential to enhance decision-making, improve efficiency, and reduce cost across virtually every area of the business.
Yet for all the promise, reality has fallen short.
According to Deloitte’s Financial AI Adoption Report (2024), only 38% of AI projects in finance meet or exceed ROI expectations, and over 60% of firms report significant implementation delays.
What’s going wrong?
At Caspian One, we believe the answer is clear: most financial institutions are hiring the wrong people.
This whitepaper unpacks that disconnect, using real data and market examples to help institutions understand where and why their AI strategies are stalling - and what they can do to change that.
“It’s not a question of whether AI can deliver value - it’s whether you have the right people who can deliver AI in your world. That means people who understand both the technology and the regulatory, operational, and cultural realities of finance.”
- Freya Scammells, Head of Caspian One’s AI Practice
[Adoption Trends and Barriers]
A Market in Acceleration
Investment in AI across financial services has surged. McKinsey’s Global AI Survey (2024) reported that 58% of financial institutions directly attribute revenue growth to AI - primarily through enhanced trading performance, predictive risk management, and automation of operational processes.
AI-enabled fraud detection is already making a measurable difference.
Projections suggest that AI-based fraud systems will save global banks over £9.6 billion annually by 2026. Banks using advanced AI models report fraud detection accuracy exceeding 90%, reducing operational loss and boosting consumer confidence.
At the same time, the commercial viability of AI is clear: BCG (2024) notes that institutions adopting AI with specialist teams see up to 60% efficiency gains and 40% cost reductions in areas such as onboarding, compliance, and settlement.
The overall AI market in finance is projected to grow from £28.93 billion in 2024 to £143.56 billion by 2030, reflecting a compound annual growth rate (CAGR) of 30.6%. Within that, generative AI - a fast-emerging subset focused on content creation, automation, and data synthesis - is expected to grow even faster, rising from £1.67 billion to £12.10 billion over the same period, at a CAGR of 39.1%.
But Most AI Projects Still Underperform
Despite these promising headline figures, deeper analysis reveals significant challenges:
Only 29% of financial institutions report that AI has delivered meaningful cost savings, indicating many initiatives still fail to achieve significant operational efficiency (Boston Consulting Group, 2024)
65% of financial institutions experience implementation delays averaging 14 months, primarily driven by shortages in specialised AI talent who understand the intricacies of the financial sector (EY, Financial Services CTO Survey, 2024)
Institutions leveraging finance-specialised AI talent report significantly higher success rates and ROI - with domain-experienced AI specialists achieving implementation nearly 80% faster than generalist counterparts (Goldman Sachs, AI Talent Insights, 2024)
The lesson is stark: general AI investment doesn’t guarantee AI success. The differentiator is the type of talent embedded in these initiatives.

Why Traditional AI Hiring Fails in Finance: The Generalist Problem
There’s a growing tendency to hire highly technical machine learning (ML) engineers from big tech or research backgrounds. While these individuals excel in model-building, many lack understanding of financial systems, compliance mandates, or operational constraints.
Goldman Sachs (2024) reinforces this point:
“We found that AI specialists familiar with finance produced successful outcomes 79% faster than generalists. This difference translates directly to millions in saved investment and faster realisation of returns.”
“We’ve seen countless projects stall because firms hired AI experimenters - not implementers. The talent gap isn’t just technical - it’s contextual.”
- Freya Scammells, Head of Caspian One’s AI Practice
Here’s what typically happens when teams hire for technical skill without sector alignment:
1. AI models are built to optimise accuracy - not regulatory explainability
2. Projects are delayed by compliance rewrites and legal bottlenecks
3. ML teams don’t understand trading logic, portfolio constraints, or real-world risk
4. Infrastructure isn’t designed for low-latency, real-time financial environments
The result?
High-cost initiatives that never make it out of the lab.
[Barrier 1]
Specialist Talent Shortages
The World Economic Forum (2024) reports that 73% of financial services leaders cite AI talent scarcity as a critical barrier to progress. However, this isn’t just a case of not having enough people - it’s a matter of not having the right kind of people.
What financial institutions need are not just highly skilled technologists, but professionals who understand the nuance of applying AI within complex, regulated, and high-stakes environments. The roles most acutely affected include:
Machine Learning Engineers
These professionals are responsible for developing the core algorithms that drive AI solutions across trading, risk, and operational platforms.
In financial services, their work requires more than theoretical knowledge - it demands fluency in building low-latency, high-throughput models that operate in real time and within tightly governed environments.
They must optimise for performance under constraints such as market volatility, regulatory limits, and risk exposure. Familiarity with trading architecture, time-series data, and algorithmic model tuning is essential.
Quantitative AI Researchers
Often sitting at the intersection of quant finance and machine learning, these experts design and refine models that drive predictive analytics, portfolio optimisation, and alpha generation.
They apply advanced statistical techniques, deep learning, and reinforcement learning methods, but always within the context of financial markets.
Their domain knowledge allows them to assess model outputs not just for accuracy, but for economic significance, interpretability, and regulatory compliance. This role is vital in ensuring that AI adds meaningful business value, not just technical novelty.
MLOps Engineers
MLOps (Machine Learning Operations) specialists ensure that AI systems are not only built, but deployed, monitored, and maintained at scale.
In finance, this means integrating models into production environments where uptime, traceability, and explainability are non-negotiable. MLOps engineers build and manage robust pipelines for data ingestion, model versioning, and real-time monitoring.
They also play a critical role in governance - making sure that AI models perform consistently, don’t drift, and remain compliant with evolving internal and external controls.
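To make “drift” concrete: one widely used monitoring check is the Population Stability Index (PSI), which compares the distribution of live model scores against a validation-time baseline. A minimal sketch follows - the synthetic data and thresholds are illustrative, not policy.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index - a common drift metric for scored portfolios.
    Illustrative rule of thumb: <0.1 stable, 0.1-0.25 watch, >0.25 drifted."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # capture the full range
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)         # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 10_000)   # scores at validation time
live = rng.normal(0.55, 0.12, 10_000)     # scores in production
print(f"PSI = {psi(baseline, live):.3f}") # flags the shift for investigation
```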
NLP Specialists
Natural Language Processing is increasingly central to financial services - powering everything from regulatory compliance automation and contract analysis, to market sentiment analysis and AI assistants for internal operations.
NLP specialists in this space require more than language model expertise - they need an understanding of financial language, documentation formats, and the implications of extracting insights from sensitive or regulated content.
Their work directly impacts risk mitigation, reporting accuracy, and client communications.
AI Governance Specialists
As the regulatory landscape tightens, institutions must design AI systems that are not just effective - but also explainable, auditable, and fair. AI governance professionals bring expertise in compliance frameworks such as the EU AI Act, SEC guidance, and FCA rules.
They work closely with risk, legal, and compliance teams to ensure that model development adheres to ethical standards and avoids reputational, operational, or legal risk.
Their role is increasingly central in helping firms establish trust in AI across internal stakeholders and external regulators alike.
[Barrier 2]
Regulatory Complexity and Compliance Risk
As artificial intelligence becomes more embedded in financial decision-making, regulators across the globe are moving quickly to ensure that these systems are safe, fair, and accountable.
The forthcoming EU AI Act (2025) is the most comprehensive regulatory framework to date, and its implications for financial services are far-reaching. Under the Act, firms deploying high-risk AI systems - such as those involved in trading, credit scoring, or fraud detection - must comply with strict obligations around transparency, documentation, risk mitigation, and human oversight. Non-compliance could result in penalties of up to 7% of global annual turnover, posing a significant financial and reputational risk.
Alongside the EU, the SEC has issued new guidance focused on the use of AI in investment advice, algorithmic trading, and client communications - placing greater emphasis on the explainability and auditability of automated decisions. In the UK, the FCA has increased scrutiny of algorithmic trading platforms and AI-driven risk models, particularly those that impact market integrity or consumer outcomes.
In this environment, financial institutions are no longer simply encouraged - but increasingly obliged - to ensure their AI systems are:
Transparent & Explainable:
Capable of being understood by stakeholders, including regulators, clients, and internal governance teams. Black-box models with opaque decision logic are increasingly untenable in high-risk contexts.
Auditable:
Institutions must be able to evidence how decisions were made, what data was used, and whether appropriate controls were in place. This demands traceable workflows, version-controlled models, and structured governance processes. A minimal sketch of such a decision record follows this list.
Bias-Free & Fair:
Regulators expect firms to identify, monitor, and mitigate algorithmic bias. This includes regular fairness audits, scenario testing, and alignment with ethical AI frameworks.
Traceable & Accountable:
Every component of an AI system - data sources, model architecture, performance metrics, and decision logs - must be documented and accessible for audit and oversight purposes.
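As a concrete illustration of auditability, here is a minimal sketch of a tamper-evident decision record. The field names, example values, and hashing approach are our own illustrative assumptions, not a prescribed standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict, reviewer: str | None) -> dict:
    """Build a tamper-evident record of one automated decision.
    Illustrative only - a real system would write to append-only storage."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to an exact build
        "inputs": inputs,                # or a hash/reference for sensitive data
        "output": output,
        "reviewer": reviewer,            # accountable human, if one was in the loop
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()  # tamper-evidence
    return record

entry = log_decision(
    "credit-scorer", "3.2.1",
    inputs={"applicant_ref": "A-1029", "features_digest": "9f2c..."},
    output={"decision": "refer to officer", "score": 0.41},
    reviewer="credit.officer@bank.example",
)
```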
This complexity means compliance cannot be treated as an afterthought.
Instead, it needs to be integrated from the ground up, embedded into hiring, infrastructure, and development processes.
As PwC (2024) puts it:
“AI compliance isn’t optional. Institutions require governance specialists who understand both the models and the laws that govern them.”
Many firms are now recognising that the lack of AI governance talent is as much a barrier to AI adoption as infrastructure or investment. Without experienced professionals capable of aligning innovation with regulatory expectations, AI projects risk becoming not just ineffective - but non-compliant and unrealisable.
[Barrier 3]
Legacy Infrastructure and Technical Debt
While AI is often viewed as a cutting-edge solution, its success is intrinsically tied to the maturity of the environment it operates in. In financial services - particularly in established Tier 1 banks - legacy infrastructure remains one of the most significant barriers to scalable AI adoption.
Modern AI systems depend on:
Cloud-native architectures capable of supporting distributed training and scalable deployment
High-throughput data pipelines to manage the volume and velocity of financial data
Real-time feedback loops to facilitate continuous learning, monitoring, and model refinement
Yet many institutions are still working within ecosystems built a decade - or more - ago.
Core platforms often consist of monolithic applications, tightly coupled data sources, and outdated technology stacks that lack the flexibility needed for AI integration. According to EY’s Financial Services CTO Survey (2024):
68% of CTOs cited legacy systems as the most significant obstacle to AI adoption
AI initiatives commonly experience delays of 12–18 months due to compatibility challenges with existing infrastructure
Projects that reach deployment often do so with limited scalability or automation, undermining their long-term value
“AI can’t create value in isolation. If it can’t plug into your architecture, run in real time, or feed back into your business systems, it remains an academic exercise.”
- Freya Scammells, Head of Caspian One’s AI Practice
The Hidden Cost of Technical Debt
Legacy environments create a cascade of operational and strategic challenges:
Data silos limit access to the clean, structured, and relevant data required for model training
Manual workflows inhibit the integration of AI into existing processes
Security and compliance concerns delay cloud migration and AI tool adoption
Lack of observability and automation increases the cost of model monitoring, versioning, and governance
Even where AI pilots are successful in isolated test environments, institutions often struggle to transition from proof-of-concept to production due to friction with existing systems.
Modernisation without disruption: addressing legacy challenges doesn’t necessarily require a wholesale rebuild. Many institutions are adopting incremental strategies, such as:
Deploying AI solutions in containerised or hybrid-cloud environments
Using middleware and orchestration tools to bridge old and new systems
Building modular AI components that can integrate with existing processes via APIs (see the sketch after this list)
Phasing cloud migration to minimise operational risk while improving flexibility
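To illustrate the API-bridging pattern above: the sketch below wraps a stand-in model behind a REST endpoint so legacy systems can call it over HTTP. FastAPI, the route name, and the toy scoring logic are illustrative choices, not recommendations.

```python
# A hypothetical "fraud-score" microservice; run with e.g.:
#   uvicorn fraud_service:app
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="fraud-score-service")

class Transaction(BaseModel):
    amount: float
    merchant_category: str
    country: str

def score(txn: Transaction) -> float:
    # Stand-in for a real model; returns a toy risk score
    return min(1.0, txn.amount / 10_000)

@app.post("/v1/fraud-score")
def fraud_score(txn: Transaction) -> dict:
    # The legacy core calls this endpoint over HTTP, so the model can be
    # versioned and redeployed independently of the mainframe.
    return {"risk_score": score(txn), "model_version": "0.1-demo"}
```

The design point: the mainframe never needs to know what framework the model runs on - it just sees a stable HTTP contract.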
However, this transition requires both technical leadership and specialist AI infrastructure talent - professionals who understand not just the engineering, but how to navigate organisational complexity and risk sensitivity within financial institutions.
Without this, even the most sophisticated AI models are likely to remain stuck on the shelf - underused, untrusted, and ultimately, unscalable.

A Smarter Approach to AI Hiring in Finance
To overcome the systemic barriers outlined above, financial institutions must reframe how they approach AI capability building. The solution isn’t simply to hire more - it’s to hire differently.
The firms seeing meaningful returns from AI are those that build blended teams with domain-specific expertise baked in from day one. According to research from McKinsey, Deloitte, and Goldman Sachs, successful AI transformation in finance hinges on three specialist profiles.
Caspian One’s AI Practice is built around precisely these needs, curating a network of highly specialised professionals with both technical depth and financial fluency.
Would you prefer to speak directly with our AI Lead about smarter AI hiring?
If you’re navigating the complexities of building or scaling AI teams in financial services, sometimes the most efficient next step is a conversation.
Freya, our AI Lead, works closely with financial institutions to solve real-world talent challenges - balancing technical expertise with sector-specific insight. Whether you’re at the early stages of adoption or refining an established strategy, she’s available to explore how a more intelligent, commercially aligned acquisition model could work for you.
Financial AI Engineers
These are AI practitioners fluent in finance - not just Python.
They understand trading desks, portfolio theory, and risk controls. They build models that can survive scrutiny from compliance teams and work within the real-world constraints of latency, capital requirements, and regulatory boundaries.
“You don’t want someone learning what a swap is halfway through your quant project.”
MLOps & AI Infrastructure Specialists
Without MLOps, even the best models will fail in production.
These professionals:
Design CI/CD pipelines for machine learning (see the promotion-gate sketch below)
Maintain real-time monitoring and feedback loops
Ensure reproducibility, scalability, and fault tolerance in financial systems
Firms investing in MLOps experience significantly shorter deployment cycles and greater model reliability (EY, 2024).
“AI isn’t just a science problem - it’s an engineering one. If you can’t deploy it, you can’t scale it.”
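To make the CI/CD point concrete, here is a minimal sketch of a “promotion gate” - a test that must pass before a candidate model replaces the production model. The file paths, metric names, and thresholds are all illustrative assumptions.

```python
# Hypothetical layout: a training job writes candidate metrics, and the
# current production ("champion") metrics sit alongside them.
import json
import pathlib

CANDIDATE = pathlib.Path("artifacts/candidate_metrics.json")
CHAMPION = pathlib.Path("artifacts/champion_metrics.json")

def test_candidate_may_be_promoted():
    cand = json.loads(CANDIDATE.read_text())
    champ = json.loads(CHAMPION.read_text())
    # Block promotion if predictive power regresses materially...
    assert cand["auc"] >= champ["auc"] - 0.005
    # ...or if fairness or latency budgets are breached
    assert cand["approval_rate_gap"] <= 0.02
    assert cand["p99_latency_ms"] <= 50
```

Run under pytest in the deployment pipeline, a failing gate stops the release; in practice the thresholds would be set and owned by model risk management.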
AI Governance & Compliance Experts
With regulations tightening globally, firms need professionals who can embed responsible AI practices at every stage of the pipeline - model design, deployment, monitoring, and audit.
These specialists understand how to:
Detect and mitigate bias (see the sketch after this list)
Ensure explainability for high-risk use cases
Align AI pipelines with FCA, SEC, and EU AI Act standards
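One simple example of a bias check is the approval-rate gap between groups (the demographic parity difference). The sketch below uses toy data; a real audit would combine several fairness metrics with legal and domain review.

```python
import numpy as np

def approval_rate_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between two groups
    (demographic parity difference) - one simple check among many."""
    return abs(approved[group == 0].mean() - approved[group == 1].mean())

# Toy data: approved = 1/0 lending decisions, group = protected attribute
rng = np.random.default_rng(1)
approved = rng.integers(0, 2, 1_000)
group = rng.integers(0, 2, 1_000)
print(f"Approval-rate gap: {approval_rate_gap(approved, group):.3f}")
# A governance process would flag gaps above an agreed tolerance for review.
```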
According to PwC (2024), early-stage involvement of compliance experts can reduce the likelihood of regulatory breaches by over 70%.
Strategic Recommendations and Conclusion
Financial institutions are no longer debating whether to invest in AI - they’re now grappling with how to make that investment pay off. The key lies not in technology alone, but in who is trusted to design, build, and scale these solutions.
Here’s how institutions can shift towards AI strategies that deliver sustained, measurable value:
1. Prioritise Sector-Specific Talent
Generic AI expertise is no longer sufficient. Financial services firms should prioritise:
Hiring AI professionals with direct experience in finance
Seeking cross-functional understanding of both technical and regulatory constraints
Embedding domain knowledge into every stage of the AI lifecycle
This alignment improves time-to-value, model performance, and stakeholder confidence.
2. Integrate Governance from Day One
Don’t wait until deployment to think about compliance. Embed AI governance from the outset by:
Employing specialists with regulatory and ethical AI experience
Ensuring all models are explainable, auditable, and compliant
Aligning model outputs with industry-specific transparency standards
This reduces risk, accelerates approval processes, and builds internal trust in AI.
3. Build Infrastructure That Supports Scale
AI success isn’t only about talent - it also requires systems that can support deployment at scale. That means:
Investing in cloud-native tools and scalable data platforms
Breaking down legacy silos that slow down model integration
Hiring MLOps experts to operationalise AI workflows securely and efficiently
Without the right infrastructure, even world-class talent can’t deliver sustainable AI adoption.
4. Rethink the Role of Experimentation
Innovation is vital, but finance is not a research lab. Many firms fall into the trap of endless pilots that never translate into production. The smarter approach:
Define clear business outcomes before model development
Focus on practical applications with measurable ROI
Employ talent capable of bridging strategy and execution
“The AI conversation in finance needs to shift from possibility to practicality. That starts with hiring people who know how to make AI work - not just make it interesting.”
- Freya Scammells, Head of Caspian One’s AI Practice
In conclusion…
AI is undeniably reshaping financial services - but turning that potential into real-world results hinges on one critical factor: the right talent.
Institutions that continue relying on generalist AI hires or research-heavy teams often find themselves facing prolonged delays, rising costs, and mounting compliance risk.
By contrast, firms that prioritise finance-specific AI expertise - practitioners who understand both the technology and the regulatory and operational realities of the sector - are able to move faster, embed trust, and realise measurable value.
Caspian One’s dedicated AI Practice was built with this in mind.
Led by industry specialist Freya Scammells, the practice addresses the core capability gaps most institutions face today - from AI governance and MLOps to financial engineering and real-time model deployment.
Every professional in our network combines technical depth with domain fluency, ensuring AI is not just delivered but delivered effectively, securely, and in line with business and regulatory priorities.
As the pace of AI adoption accelerates, the institutions that lead will be those who build teams capable of executing with precision, speed, and accountability. Strategic partnerships with specialist providers - those who understand the nuances of both AI and finance - are becoming not just advantageous, but essential.
Disclaimer
This report is based on research from credible, publicly available sources and Caspian One’s internal market expertise as of the time of writing. While every effort has been made to ensure the accuracy, reliability, and completeness of the content, this document is intended for informational purposes only and should not be interpreted as formal advice or a specific recommendation regarding AI adoption or hiring strategies.
The AI landscape - along with the associated regulatory frameworks and talent markets within financial services - is developing rapidly. Readers are encouraged to independently verify current standards, regulations, and market conditions before making strategic decisions or investments. Caspian One accepts no liability for any actions taken based on the information provided herein.
Any case studies, scenarios, or ROI figures included in this report are illustrative and do not represent guaranteed outcomes. Organisations should conduct their own due diligence and seek relevant professional guidance before implementing any AI-related initiatives. For consistency all currencies have been converted to UK GBP with exchange rates as-of April 2025.
This report is the intellectual property of Caspian One and was produced in March 2025 and published in April 2025. All rights reserved.
-
AI Practice at Caspian One (2025)
Caspian One AI Practice: Specialist AI Talent for Financial Markets.
https://www.caspianone.com/ai-practice
Accenture (2024)
Accenture Financial AI Report 2024: Maximizing Value from AI Investments in Banking and Capital Markets.
https://www.accenture.com/financial-ai-report-2024
Boston Consulting Group (2024)
AI in Financial Services: Unlocking Efficiency and Value through Specialized Talent.
https://www.bcg.com/publications/2024/ai-financial-services-specialist-talent
Deloitte (2024)
Financial AI Adoption Report 2024: Expectations vs. Reality.
https://www.deloitte.com/financial-ai-adoption-report-2024
EY (2024)
Financial Services CTO Survey 2024: AI Integration and Legacy System Challenges.
https://www.ey.com/financial-services-cto-survey-2024
Gartner (2023-2025)
Gartner Hype Cycle for Artificial Intelligence in Banking and Investment Services, 2024.
https://www.gartner.com/ai-hype-cycle-banking-2024
Goldman Sachs (2024)
Goldman Sachs AI & Talent Market Insights Report, 2024.
https://www.goldmansachs.com/insights/ai-talent-market-2024
LinkedIn Talent Insights (2024)
Financial Sector AI Talent Trends Report 2024.
https://business.linkedin.com/talent-solutions/ai-talent-trends-2024
McKinsey & Company (2024)
Global AI Survey: State of AI in Financial Services 2024.
https://www.mckinsey.com/business-functions/mckinsey-digital/global-ai-survey-2024
PwC (2024)
Navigating AI Compliance: A Guide to Upcoming Regulations in Financial Services.
https://www.pwc.com/financial-ai-compliance-guide-2024
World Economic Forum (WEF, 2024)
The Future of Jobs Report 2024: AI Talent and Skill Gaps in Finance.
-
Capgemini Research Institute (2024)
AI in Capital Markets: Accelerating Digital Transformation.
https://www.capgemini.com/ai-capital-markets-2024
CFA Institute (2024)
AI and Data Science in Investment Management: Opportunities and Challenges.
https://www.cfainstitute.org/research/ai-data-science-investment-2024
European Commission (2024)
EU AI Act Documentation: Guidelines, Compliance, and Implementation.
https://ec.europa.eu/digital-strategy/ai-act-2024
IBM Institute for Business Value (2024)
The State of AI in Banking and Financial Markets.
https://www.ibm.com/business/value/ai-financial-markets-2024
KPMG (2024)
Artificial Intelligence Regulatory Horizon Report 2024.
https://home.kpmg/ai-regulatory-report-2024
Oliver Wyman (2024)
AI in Financial Risk Management: Driving Value from Machine Learning.
https://www.oliverwyman.com/ai-financial-risk-2024
SEC.gov (2024)
SEC Guidelines for AI-Driven Trading and Investment Advisory (2024 update).