Xing (2026)
AI inference tokens are becoming a commodity, and someone has designed the futures contract
This paper argues that the tokens consumed by large language models share the economic properties of electricity and carbon credits, and proposes a standardized futures contract that would let enterprises hedge their compute costs.
- 62%–78% reduction in enterprise compute-cost volatility from token-futures hedging across all simulated scenarios
- 40× decline in GPT-4-level inference prices from early 2023 to early 2025 (from $60 to under $1.50 per million output tokens)