Research · VC & Analyst Reports

Research sweep · standard · 2025 – present

Enterprise LLM Vendor Selection and Consumption Models

Enterprise LLM vendor selection and consumption patterns (April 2025–present): how companies choose between OpenAI, Anthropic, Google, hyperscaler-hosted model access, and direct API relationships; what decision metrics they use across availability, quality, price, governance, and SLAs; and how adoption differs by company size, workload criticality, and real-time versus offline use cases

  • financial
  • frontier
  • academic
  • vc
  • substack

Synthesised 2026-04-13

Narrative

Menlo Ventures' July 2025 survey of 150 technical leaders marks a decisive market inflection: Anthropic captured 32% of enterprise LLM production workloads, displacing OpenAI (25%, down from 50% in 2023) and Google (20%). Enterprise LLM spend more than doubled from $3.5 billion in late 2024 to $8.4 billion by mid-2025, roughly a 2.4x increase. Yet vendor switching remains rare at 11% annually, with 66% of teams upgrading within their existing vendor: strong switching costs coexist with high model-upgrade velocity.

Code generation emerged as the killer app, with Claude capturing a 42% developer share, more than double OpenAI's 21%, reshaping workload-specific vendor positioning. Closed-source models now account for 87% of enterprise workloads, up from 81%, while open-source declined to 13% as performance gaps widened. Meanwhile, 37% of enterprises deploy five or more specialized AI models, pursuing workload-based portfolio strategies that minimize vendor lock-in and maximize ROI. Gartner's forward guidance reinforces this trend: by 2027, organizations will use small, task-specific models at three times the volume of general-purpose LLMs, and value will accrue to platforms that orchestrate workloads across diverse model portfolios, routing routine tasks to efficient domain-specific models.

On consumption channels, cloud-based managed APIs (OpenAI, AWS Bedrock, Google Cloud, Azure) dominate for rapid adoption, while enterprises in healthcare, finance, and government increasingly favor on-premise deployments for full data governance and regulatory compliance. This bifurcation reflects a core tension in enterprise decision-making: immediate time-to-value via direct API access versus long-term control and compliance via owned infrastructure.
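The portfolio-orchestration pattern described above, routing routine tasks to cheap domain-specific models and reserving frontier models for complex work, can be sketched as a cost-aware router. Model names, prices, and the complexity heuristic below are illustrative assumptions, not figures from any cited source.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # illustrative USD prices, not vendor quotes
    capability: int            # 1 = narrow/task-specific, 3 = frontier

# Hypothetical three-tier portfolio mirroring the multi-model strategy above.
PORTFOLIO = [
    Model("small-domain-model", 0.0004, 1),
    Model("mid-tier-model", 0.003, 2),
    Model("frontier-model", 0.015, 3),
]

def required_capability(task: str) -> int:
    """Toy heuristic: longer or multi-step prompts need more capable models."""
    if "step" in task or len(task) > 500:
        return 3
    if len(task) > 120:
        return 2
    return 1

def route(task: str) -> Model:
    """Pick the cheapest model in the portfolio that clears the capability bar."""
    needed = required_capability(task)
    eligible = [m for m in PORTFOLIO if m.capability >= needed]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)
```

In production, the complexity heuristic would be replaced by a learned classifier or an explicit per-workload policy, but the economics are the same: every request answered by the small model avoids the frontier-model price premium.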


Sources

v1 · 2025 Mid-Year LLM Market Update: Foundation Model Landscape + Economics · Menlo Ventures · 2025-07
  Landmark VC-backed survey of 150 technical leaders quantifying enterprise LLM consumption: Anthropic 32% (up from niche), OpenAI 25% (down from 50%), Google 20%. Documents the $3.5B→$8.4B spending surge, closed-source dominance (87%), and code generation as the killer app (Claude 42% vs. OpenAI 21%).

v2 · Gartner Predicts That by 2030, Performing Inference on an LLM With 1 Trillion Parameters Will Cost GenAI Providers Over 90% Less Than in 2025 · Gartner · 2026-03
  Cost-trajectory forecast shaping vendor selection logic: 90% inference cost reduction by 2030, but overall costs rising due to a surge in token consumption. Emphasizes portfolio orchestration across small domain-specific models versus commodity frontier models as a strategic imperative.

v3 · Gartner Predicts by 2027, Organizations Will Use Small, Task-Specific AI Models Three Times More Than General-Purpose Large Language Models · Gartner · 2025-04
  Predicts a 3:1 volume shift toward task-specific over general-purpose LLMs by 2027, driven by accuracy and cost. Recommends multi-model portfolio strategies with RAG and fine-tuning, implying vendors must compete on specialization and integration, not monolithic capability.

v4 · Enterprise LLM Spend Hits $8.4B as Anthropic Tops OpenAI · AI TechPark (amplifying Menlo Ventures data) · 2025-08
  Replicates Menlo findings with added vendor-switching insight: only 11% of teams switch providers annually; 66% upgrade within their vendor; 23% make no changes. Documents market consolidation and sticky dynamics despite rapid share shifts.

v5 · Responsible Innovation: A Strategic Framework for Financial LLM Integration · Academic/Industry (multi-institutional) · 2025
  Six-step governance decision framework for regulated sectors (finance, healthcare). Maps selection criteria beyond performance: compliance frameworks, ROI justification, data governance, risk management. Critical for high-stakes workload segments.

v6 · Large Language Model Evaluation in 2025: Smarter Metrics That Separate Hype from Trust · TechRxiv (peer-reviewed preprint) · 2025
  Documents the evolution of enterprise LLM evaluation metrics (2020–2025): semantic accuracy, latency, explainability, adversarial robustness, fairness. Emphasizes production trade-offs (latency vs. benchmark score) that shape procurement decisions beyond leaderboard rankings.

v7 · Buy versus Build an LLM: A Decision Framework for Governments · Academic/Policy (arXiv) · 2026-02
  Cites Menlo Ventures data (88% market share held by Anthropic, OpenAI, and Google) as evidence of concentration. Frames a buy-vs-build decision tree (diversification, talent, ecosystem maturity) that mirrors commercial vendor-selection trade-offs.

v8 · A Cost-Benefit Analysis of On-Premise Large Language Model Deployment: Breaking Even with Commercial LLM Services · Academic (arXiv) · 2025-08
  Quantifies on-prem versus cloud-API economics: breakeven analysis for open-source models (Llama, Qwen) against commercial services (OpenAI, Anthropic, Google). Evaluates data privacy, switching costs, and long-term TCO drivers shaping consumption-model selection.

v9 · LLM in Enterprise: A Complete Guide · TrueFoundry (practitioner/infrastructure vendor) · 2026-01
  Contrasts on-premise (governance, control, compliance) with cloud-managed (OpenAI, AWS Bedrock, Google, Azure) consumption models. Documents the operational shift from experimentation to production: governance, observability, and billing controls as decision criteria.

v10 · Emerging Patterns for Building LLM-Based AI Agents · Gartner · 2025
  Gartner research on agentic AI architecture patterns and vendor capabilities. Relevant to workload-specific selection (agents vs. chat vs. retrieval), multi-step orchestration, and vendor-specific tool-use maturity.
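The on-premise breakeven question examined in v8 reduces to simple arithmetic: amortized fixed costs against the marginal per-token saving versus a commercial API. All figures below are placeholder assumptions for illustration, not numbers from the cited paper.

```python
def breakeven_tokens(hardware_usd: float, monthly_ops_usd: float, months: int,
                     api_usd_per_m_tokens: float,
                     onprem_usd_per_m_tokens: float) -> float:
    """Monthly token volume (in millions) above which on-prem is cheaper
    over the given horizon. Assumes API price exceeds on-prem marginal cost."""
    fixed = hardware_usd + monthly_ops_usd * months
    marginal_saving = api_usd_per_m_tokens - onprem_usd_per_m_tokens
    total_tokens_m = fixed / marginal_saving  # millions of tokens over horizon
    return total_tokens_m / months            # millions of tokens per month

# Placeholder inputs: a $200k GPU cluster, $5k/month operations,
# 36-month horizon, $10 per 1M tokens via API vs. $1 per 1M on-prem.
per_month_m = breakeven_tokens(200_000, 5_000, 36, 10.0, 1.0)
```

With these assumptions the crossover sits near 1.2 billion tokens per month: below that volume the managed API is cheaper, which is consistent with the narrative's split between API-first adopters and high-volume regulated enterprises going on-premise.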
