Research Explainer · Cambridge CCAF (2026)
Finance has gone all-in on AI, but the supervisors watching it have not
A 628-organisation, 151-jurisdiction survey finds 81% of financial firms now using AI, while regulators trail on adoption, data collection and the supervisory tools needed to keep up.
Published April 2026
81% of surveyed financial firms are adopting AI at some level, but only 14% see it as transformational to strategy
40% vs 20% share of industry vs regulators reporting advanced AI adoption (Scaling or Transforming)
52% of industry respondents are already piloting or deploying agentic AI, with deployment expected to reach 81% by 2030
69% of all respondents use OpenAI as a foundation model provider, with three providers serving over 80% of industry
The execution gap behind the headline number
Five years on from the 2020 CCAF-WEF AI report, the Cambridge Centre for Alternative Finance has gone back into the field with a much wider net: 628 organisations across 151 jurisdictions, split between fintechs, traditional financial institutions, AI vendors, and 130 central banks and supervisors. The headline is striking. 81% of financial firms now report some form of AI adoption.
Look one layer down and the picture gets more honest. Only 14% of industry respondents describe AI as transformational to their strategy. Forty per cent are at the Scaling or Transforming stage, double the rate among regulators (20%), but the bulk of the sector is still piloting or exploring. AI adoption has gone broad without going deep.
The split between fintechs and incumbents is sharp. Fintechs are more than three times as likely to have reached the Transforming stage (19% vs 6%), and lead on agentic AI adoption by 12 percentage points. Traditional banks lead on time-series forecasting and unsupervised learning, the long-established machine learning techniques that depend on years of clean internal data.
AI adoption maturity: industry races ahead, regulators lag
CCAF (2026), based on Figure 1.0. Industry n=352, regulators n=130. 'Advanced' covers Scaling plus Transforming stages.
Active AI adoption by technology category
CCAF (2026), Figure 1.4. Active adoption covers Piloting, Scaling or Transforming. Fintech n=203, traditional FIs n=149, regulators n=130.
Foundation model concentration across the sector
CCAF (2026), Figure 4.7. Combined responses across industry, vendors and regulators (n≈615). Multi-select question.
What AI is actually doing inside banks
The deployment map is conservative. Four of the top five financial services AI use cases are back-office: process automation (79%), data visualisation (75%), software engineering (75%) and data and knowledge management (69%). The leading front-office use case is AI-powered customer support at 74%, with fraud detection (57%) and credit risk modelling (54%) heading the risk and compliance list.
What the report calls a structural finding is that current AI is mostly improving execution, not reconfiguring business models. The exception is among more mature adopters, where 51% are piloting or deploying entirely new AI-powered financial products, against 28% of less mature firms. Profitability follows a similar pattern: 62% of firms spending more than USD 100,000 a year on AI have reached advanced maturity, and 62% of that group report higher profitability. Fintechs again outperform on this measure, with 56% reporting profitability gains versus 34% of traditional FIs.
Productivity gains are widely felt. They are reportedly highest in technology, data and product (79%), back office (75%) and front office (69%). But 55% of industry and 63% of regulators say measuring AI's actual value is difficult, rising to 76% among the largest financial institutions. The sector has accepted that AI helps; it has not yet built the instrumentation to prove how much.
A concentrated supply chain that finance now leans on
Most organisations are not building AI from scratch. 63% of industry and 65% of regulators run internal workflows on top of external foundation models. At the time of the survey, OpenAI led across every group, used by 76% of industry, 48% of regulators and 33% of vendors. Google followed, then Anthropic. Three providers serve more than 80% of industry between them.
The cloud picture is just as concentrated. AWS leads industry (46%) and vendors (55%), Azure leads among the regulators that use cloud (39%), and 46% of regulators report using no cloud at all. Traditional FIs still lean far more heavily on on-premises deployment than fintechs (39% vs 23%), a divide visible since the 2020 report. The BIS contribution to the report frames this concentration as an emerging structural feature: a small group of US-headquartered firms now sit across multiple layers of the AI supply chain at once, from chips to applications.
DeepSeek's rise is one of the more interesting subplots. The open-weight Chinese model was used by 15% of industry respondents within months of public release, with adoption highest in Sub-Saharan Africa (21%) and Asia-Pacific (18%). Lower cost and openly accessible weights appear to be doing what closed proprietary models cannot: lowering the entry barrier in lower-resource markets.
Where the risks live, and who is watching
Stakeholders broadly agree on the top two risks: data privacy and protection (cited by 73% of all respondents) and model hallucinations (69%). After that, priorities split. Regulators worry most about cyber and operational resilience (59%) and model explainability (56%). Industry worries most about losing human oversight as automation scales (55%). Vendors worry most about data integrity and model drift (54%) and bias (43%).
On accountability, the gap is wide. Regulators (38%) most often place primary responsibility for AI-related harm on the regulated financial institution. Only 18% of industry and 16% of vendors agree. Industry and vendors prefer shared, joint or case-by-case arrangements. As more autonomous, agentic systems get deployed via third-party vendors, that disagreement becomes a real supervisory problem rather than a theoretical one.
The numbers on supervisory capacity are arguably the most important in the report. Only 24% of regulators currently collect data on AI adoption levels. Just 18% collect data on third-party AI dependencies. Only 5% collect data on potential discrimination, exclusion or systemic bias, despite 50% of regulators flagging algorithmic bias as a top-five risk. 65% of industry firms do not currently monitor their own models for bias either. The combined effect is an oversight system that is broadly aware of the risks but has not yet built the data infrastructure to see them.
Looking to 2030: agentic AI, AGI and an unsettled regulatory frontier
The forward-looking picture in the report is more dramatic than the present one. Agentic AI is expected to jump from 24% deployment today to 81% by 2030, the largest single increase across any AI category. Half of industry respondents and 51% of vendors expect artificial general intelligence to be meaningfully achieved by 2030, even though only 9% rank AGI in their current top-five risks. The disconnect is pragmatism, not denial: today's pressing problems are data leaks, hallucinations and cyber threats, not a hypothetical superintelligence.
On competition, the sector has revised its view sharply. In 2020, 42% of respondents thought the market status quo would prevail. Today only 8% do. Around half of industry respondents now expect more competitive or balanced market dynamics, and over a fifth flag genuine winner-takes-all concentration risk. Vendors are the most bullish on disruption; regulators are the most cautious, with 52% saying it is too early to tell.
On international cooperation, regulators are cautiously optimistic, with 48% expecting it to improve from a difficult starting point. The report's closing observation is harder: AI deployment in the private sector is currently outpacing the supervisory frameworks and the technical capacity needed to oversee it. The case for stronger AI governance has broad support across all three stakeholder groups. The investment in supervisory tools, training and data collection that would make that governance real has not yet been made at the scale the ambition requires.
The bottom line
AI has become the operating assumption of modern finance, not an experiment. But 81% adoption masks a sector where most firms are still at the piloting stage, where three foundation model providers dominate the supply chain, and where supervisors lack the data infrastructure to monitor the risks they have already named. The gap between what regulators know about AI risk and what they can actually see in their data systems is the defining tension the report leaves unresolved.
Reference
Cambridge Centre for Alternative Finance (2026). The 2026 Global AI in Financial Services Report: Adoption, impact and risks. University of Cambridge Judge Business School, in partnership with BIS, IMF, WEF, IDB, CGAP and AMF, with support from the UK FCDO. https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/2026-global-ai-in-financial-services-report/