Research · Blogs & Independent Thinkers
Research sweep · deep · 1995–2026
Compounding Waves — How Each Tech Era Built the Substrate, and the Skills, for the Next
The compounding economic logic of three successive technology waves from January 1995 to May 2026: internet disintermediation of distribution, software-defined platforms and cloud infrastructure, and the current AI/agentic systems wave. The synthesis examines the technical, economic, and human-skills dependencies that make each wave a precondition for the next, the new categories of work each wave created, and whether the relationship is best understood as cumulative compounding or as externalised costs harvested by later layers.
- financial
- academic
- blogs
- vc
Synthesised 2026-05-11
Narrative
The dominant analytical frame across independent blogs is Ben Thompson's Stratechery, which provides the most coherent through-line across all three waves. His 2015 Aggregation Theory piece identified how zero marginal distribution costs rewired industry power during wave one; his 2024 AI and enterprise piece traces how Salesforce's SaaS model bridged wave one's internet infrastructure to wave two's cloud platforms; and his 2026 Agents Over Bubbles piece argues that wave three breaks the aggregation model itself by reintroducing marginal compute costs at scale. The structural logic is explicit: Amazon could fund Anthropic because AWS generated the capital and the infrastructure, and Anthropic can train frontier models because AWS built the hyperscale compute layer. That is a three-generation compounding chain made concrete in a single corporate dependency graph.
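Thompson's marginal-cost claim can be sketched as a stylised profit function (an illustrative sketch; the notation is ours, not Stratechery's). Under wave-one and wave-two aggregation, serving an additional user costs approximately nothing, so profit is revenue minus fixed costs:

```latex
\pi_{\text{agg}} = p\,q - F, \qquad \frac{\partial C}{\partial q} \approx 0
```

Agentic AI reintroduces a per-query inference cost $c > 0$, so margins no longer improve without bound as usage scales:

```latex
\pi_{\text{AI}} = p\,q - c\,q - F = (p - c)\,q - F
```

On this reading, the structural boundary between wave two and wave three is simply whether $\partial C / \partial q$ is negligible.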
Simon Willison's Weblog provides the practitioner counterpart. His annual LLM reviews (2023, 2024, 2025) document the 100x inference cost drop between 2022 and 2025 and the diffusion of GPT-4-class capability to 18 separate organisations, empirically tracing the commoditisation dynamic that Thompson theorises. Willison's 2026 Jevons paradox framing — that cheaper cognition generates more cognitive demand rather than less — is the clearest independent statement of the compounding dividend thesis, while his detailed attention to training data provenance and model limitations keeps the analysis honest about what open-web corpora actually contain and what happens when they thin.
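Willison's Jevons framing has a standard textbook statement, sketched here with illustrative notation that is not taken from his posts. If the price elasticity of demand for inference exceeds one, a price drop raises total spend. With isoelastic demand $q(p) = k\,p^{-\varepsilon}$, total expenditure is

```latex
E(p) = p \cdot q(p) = k\,p^{1-\varepsilon}
```

so for $\varepsilon > 1$, $E$ rises as $p$ falls: a 100x drop in inference price then grows, rather than shrinks, aggregate spending on machine cognition. Whether $\varepsilon > 1$ actually holds for cognitive work is the empirical crux of the compounding dividend thesis.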
On the labour-economics side, Substack writers engaging with Acemoglu, Autor, and the Frey–Osborne tradition form a coherent cluster. A February 2026 Great Leadership post applies the displacement-productivity-new-task-creation trifecta to current AI vacancy data and finds that the balance varies sharply by task type: routine cognitive tasks show clearly net-negative displacement while strategic and creative roles show job growth. Tom Bewick's Substack engages the newer Gans–Goldfarb O-ring automation model, which challenges the task-separability assumption underlying most exposure indices and suggests that automating easy tasks concentrates human effort on harder bottleneck tasks — a finding that cuts against simple displacement narratives without fully validating the compounding dividend story.
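The O-ring intuition Bewick engages can be written as a Kremer-style multiplicative production function (an illustrative sketch; the notation is ours, not Gans–Goldfarb's). Output over $n$ tasks is

```latex
Y = A \prod_{i=1}^{n} q_i
```

where $q_i$ is the quality with which task $i$ is performed. Because the terms multiply rather than add, automating the easy tasks pushes their $q_i$ toward 1 and concentrates marginal returns on the remaining human-performed bottleneck tasks: the skill premium shifts to whoever can raise the lowest $q_i$, rather than disappearing. This is why the model cuts against exposure indices that treat tasks as separable.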
The externalised-harvest framing receives its clearest independent articulation in the Augmented Mind Substack, which characterises the AI platform stack as a structural repeat of wave-two platform dependency: cheap initial access to build user and developer lock-in, followed by pricing power once dependency is established. LessWrong contributors add the data exhaustion dimension, with multiple posts converging on an estimated 2026–2032 window when general-purpose internet training data runs dry — directly implicating wave-one open-web content (Common Crawl, Reddit, GitHub, Stack Overflow) as a finite and now-contested substrate, not an infinitely renewable externality.
Sources
| ID | Title | Outlet | Date | Significance |
|---|---|---|---|---|
| b1 | Aggregation Theory | Stratechery (Ben Thompson) | 2015-07 | Foundational framing of how internet-era zero marginal distribution costs restructured industry power from supply control to demand aggregation, directly explaining the economic logic of wave one's disintermediation. |
| b2 | Enterprise Philosophy and the First Wave of AI | Stratechery (Ben Thompson) | 2024-08 | Traces technology waves from mainframe digitisation through SaaS to AI, examining how enterprise adoption patterns and job displacement repeat structurally across each transition, with Salesforce as the hinge between wave one and two. |
| b3 | Agents Over Bubbles | Stratechery (Ben Thompson) | 2026-03 | Argues that agentic AI is not a bubble because each agent multiplies compute demand rather than substituting for it, and that the economic imperative to deploy agents will drive both workforce restructuring and hyperscaler capex compounding. |
| b4 | AI Integration and Modularization | Stratechery (Ben Thompson) | 2024-06 | Applies Christensen and Coase integration logic to the AI stack, showing why value accumulates at integrated layers rather than modular ones and explaining why Google's chip-to-model vertical stack differs structurally from AWS's marketplace approach. |
| b5 | AI and the Big Five | Stratechery (Ben Thompson) | 2023-01 | Maps which incumbent hyperscalers are positioned to capture AI value and which are threatened, drawing the explicit parallel between cloud infrastructure incumbency and AI layer incumbency. |
| b6 | AI Promise and Chip Precariousness | Stratechery (Ben Thompson) | 2025-04 | Traces the semiconductor dependency chain from Silicon Valley's founding through TSMC to current AI compute, arguing that wave three's binding constraint is geopolitical control of chip fabrication rather than software. |
| b7 | The End of Aggregation Theory and AI Economics (homepage synthesis) | Stratechery (Ben Thompson) | 2026-03 | Synthesises the claim that AI reintroduces marginal costs and ends the zero-marginal-cost era that Aggregation Theory described, marking the structural boundary between wave two and wave three economics. |
| b8 | Stuff we figured out about AI in 2023 | Simon Willison's Weblog | 2023-12 | First-hand practitioner account of the LLM breakthrough year, documenting the open-web training corpus as the substrate of wave-three capability and flagging the epistemic uncertainty around what models actually learn. |
| b9 | Things we learned about LLMs in 2024 | Simon Willison's Weblog | 2024-12 | Annual review documenting the 100x inference price drop, the proliferation of GPT-4-class models to 18 labs, and the transition toward agentic patterns, providing empirical evidence for the compounding cost-reduction dynamic of wave three. |
| b10 | 2025: The year in LLMs | Simon Willison's Newsletter (Substack) | 2026-01 | Introduces the Jevons paradox framing for AI knowledge work — cheaper cognition generates more demand for cognitive tasks rather than less — directly engaging the compounding-versus-displacement debate. |
| b11 | What if LLMs are mostly crystallized intelligence? | LessWrong | 2025 | Argues that frontier model capability growth is bottlenecked by domain-specific data quality and volume, with general-purpose internet data estimated to run dry by 2026–2032, making wave-one open-web content a finite and depletable input. |
| b12 | The next wave of model improvements will be due to data quality | LessWrong | 2025-06 | Identifies real-world deployment feedback (from Operator and Codex usage signals) as the next load-bearing training data source, framing the shift from static open-web corpora to dynamic synthetic-and-interaction data as a structural transition. |
| b13 | The New AI Infrastructure Stack | Medium (Devansh / Machine Learning Made Simple) | 2025-06 | Characterises the ASIC–CXL–Optical I/O triad as an interlocking dependency chain where adopting one layer forces adoption of the next, providing a concrete hardware-level illustration of wave-internal compounding. |
| b14 | A Simple Explainer of Acemoglu's Simple Macroeconomics of AI | Causal Inference (Substack) | 2025-04 | Unpacks Acemoglu's NBER 2024 model projecting TFP gains of 0.55–0.71% from AI over the next decade under baseline assumptions, grounding the productivity-paradox debate in a tractable framework and cross-referencing Autor's task model. |
| b15 | The Future of Employment in the Age of Artificial Intelligence | Substack (José Luis Chávez Calva) | 2025-04 | Synthesises Acemoglu–Restrepo displacement, productivity, and new-task-creation effects against empirical vacancy data showing kinks in software employment post-2022, testing whether wave three is following wave-two job-creation patterns. |
| b16 | The Transition Is The Crisis: A DEEP Dive on AI, Jobs and The Future Of Work Over the Next 5 Years | Great Leadership (Substack) | 2026-02 | Applies the Acemoglu–Restrepo task-based model to current AI layoff data, documenting that routine cognitive task displacement is already net negative while creative and strategic roles show net job growth, testing the 'this time is different' thesis directly. |
| b17 | Daron Acemoglu on AI and Jobs | Center for Humane Technology (Substack) | 2024-05 | Acemoglu argues that automation since the 1980s has created two structural inequality tiers — capital vs labour, and task-commanding vs task-displaced workers — framing the AI wave as an amplifier of an already-running dynamic. |
| b18 | Acemoglu and Johnson on the Past and Future of Work | The One Percent Rule (Substack) | 2024-12 | Reviews 'Power and Progress' and its core argument that technological benefit distribution depends on institutional power structures, not on the technology itself, providing the historical context for the externalised-harvest framing. |
| b19 | Beyond AI Apocalypse as '47% of Jobs at Risk' | Tom Bewick (Substack) | 2026-02 | Critiques Frey–Osborne and engages Gans–Goldfarb 2026 O-ring automation model, arguing that automating easy tasks concentrates human effort on harder bottleneck tasks, shifting the skill premium rather than eliminating it. |
| b20 | The AI Disintermediation Panic is Unfounded | Playing FTSE (Substack) | 2026-01 | Argues the market is mispricing incumbents by conflating 'AI creates new competition' with 'AI destroys incumbent value overnight,' drawing a direct parallel to earlier waves where incumbents adapted rather than collapsed. |
| b21 | The Death of AI Extraction: Architecting Your Sovereign Exit | Augmented Mind (Substack) | 2025-05 | Frames the AI platform stack as a repeat of the wave-two platform dependency playbook — cheap access, capture dependency, raise prices — arguing that AI labs are converting wave-one open-web content into a proprietary subscription layer. |
| b22 | The 'vast uncertainty' of AI and jobs — David Autor | The Next Wave Futures (WordPress / Andrew Curry) | 2024-02 | Synthesises Autor's 2024 NBER paper showing 60% of US employment is in categories invented post-1940, and documents his explicit uncertainty about whether wave three will follow the same new-task-creation pattern. |
| b23 | AI 2027: What Superintelligence Looks Like | LessWrong | 2025-04 | Detailed scenario analysis of synthetic training data loops and agent-generated research, examining whether wave three can bootstrap its own training data supply and break free of wave-one corpus dependency. |
| b24 | Is the Internet Different? (critique of Aggregation Theory) | Stratechery (Ben Thompson) | 2020-10 | Documents the academic and legal pushback on Aggregation Theory, including Tim Wu's critique, providing a rigorous counterpoint that grounds the internet-disintermediation claim in contested rather than settled economics. |
| b25 | Inside the AI Buildout Wave: How Infrastructure Is Becoming the New Battleground | Defiance ETFs (Substack) | 2025-11 | Documents the Magnificent Seven's $21.1 trillion share of a $60 trillion S&P 500 market cap as evidence of how hyperscaler concentration compounds across technology waves, and identifies power, chip fabrication, and cooling as the next binding physical constraints. |