Material basis of AI

Energy, chips, water

2023-03-23 — 2025-10-07

Wherein the economics of foundation models is examined, and the disproportionate energy and water demands of large-scale training, including data-centre cooling and emissions accounting, are described.

agents
AI safety
bounded compute
collective knowledge
distributed
economics
edge computing
extended self
faster pussycat
incentive mechanisms
innovation
language
machine learning
neural nets
NLP
technology
UI
when to compute
It’s complicated, important, and political, and extremely interesting. TODO.
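For orientation, the usual first-order operational accounting multiplies accelerator-hours by average device power and the facility’s PUE to get energy, then scales that energy by grid carbon intensity to get CO2e and by a water-usage-effectiveness figure to get cooling water. Below is a minimal sketch of that arithmetic; every parameter value is an assumed placeholder, not a measurement, and the sketch omits embodied (manufacturing) emissions entirely.

    # First-order operational footprint of a training run: a minimal sketch.
    # All parameter values are assumed placeholders, not measurements.

    def training_footprint(gpu_hours: float,
                           gpu_power_kw: float = 0.4,         # assumed average draw per accelerator, kW
                           pue: float = 1.2,                  # assumed power usage effectiveness of the facility
                           grid_kgco2e_per_kwh: float = 0.4,  # assumed grid carbon intensity, kg CO2e/kWh
                           wue_l_per_kwh: float = 1.8):       # assumed water usage effectiveness, litres/kWh
        """Return (energy_kwh, emissions_kg_co2e, water_litres)."""
        energy_kwh = gpu_hours * gpu_power_kw * pue
        emissions_kg = energy_kwh * grid_kgco2e_per_kwh
        # WUE is applied to total facility energy here for simplicity.
        water_l = energy_kwh * wue_l_per_kwh
        return energy_kwh, emissions_kg, water_l

    # Illustrative only: a made-up 1,000-accelerator cluster running for 30 days.
    energy, co2e, water = training_footprint(gpu_hours=1_000 * 24 * 30)
    print(f"{energy:,.0f} kWh, {co2e / 1000:,.1f} t CO2e, {water / 1000:,.0f} m^3 water")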

1 Incoming
