Gradient descent at scale

Practical implementation of large optimisations

2021-07-14 — 2025-05-26

functional analysis
machine learning
model selection
optimization
premature optimization
statmech

This is an area into which large firms are ploughing hundreds of billions of dollars per year, which means that research moves faster than I can track as a side-hustle. Do not depend upon this page being up to date or comprehensive.


There are many ways we can try to expand a neural model beyond the limits of a single GPU. The details are complicated, but Lilian Weng has explained it all for us: see How to Train Really Large Models on Many GPUs?. Jeremy Jordan is also good; see Training extremely large neural networks across thousands of GPUs.

1 General compute node provisioning

See practical cloud ml.

2 Basic multi-GPU

DDP and other multi-GPU techniques at medium scale (e.g. 100 GPUs). Various frameworks provide their own solutions; for example, PyTorch Lightning brings basic DDP to PyTorch. There are also framework-agnostic options, most prominently Horovod.
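
To fix ideas, here is a minimal sketch of the vanilla PyTorch DDP pattern, assuming launch via `torchrun --nproc_per_node=4 train.py`; the linear model and random dataset are toy placeholders, not a recommendation.

```python
# Minimal PyTorch DDP sketch; one process per GPU, launched by torchrun.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    dist.init_process_group("nccl")                 # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])      # set by torchrun
    torch.cuda.set_device(local_rank)

    # Toy regression problem; swap in your own model and dataset.
    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    model = DDP(torch.nn.Linear(32, 1).cuda(local_rank), device_ids=[local_rank])

    sampler = DistributedSampler(dataset)           # shards data across ranks
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for epoch in range(3):
        sampler.set_epoch(epoch)                    # reshuffle per epoch
        for x, y in loader:
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(
                model(x.cuda(local_rank)), y.cuda(local_rank))
            loss.backward()                         # DDP all-reduces gradients here
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```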

2.1 Horovod

Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet (source).

With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premise or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve.
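
To give a flavour of the "few lines of Python" claim, here is a sketch of the usual Horovod + PyTorch recipe as I understand it (initialize, pin one GPU per process, wrap the optimizer, broadcast initial state); the model, data, and learning rate are illustrative placeholders.

```python
# Minimal Horovod + PyTorch sketch (launch with e.g. `horovodrun -np 4 python train.py`).
import torch
import horovod.torch as hvd

hvd.init()                                          # one process per GPU
torch.cuda.set_device(hvd.local_rank())             # pin this process to its GPU

# Toy model and data; in a real job the data would be sharded per rank.
model = torch.nn.Linear(32, 1).cuda()
data = torch.randn(1024, 32).cuda()
target = torch.randn(1024, 1).cuda()

# Scale the learning rate with the number of workers (the usual Horovod advice).
opt = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt, named_parameters=model.named_parameters())

# Make sure all workers start from the same initial state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(opt, root_rank=0)

for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(data), target)
    loss.backward()                                 # gradients all-reduced by the wrapper
    opt.step()
```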

3 Extremely large models

I have little to say here for now, but I need to remember various terms for later use: ZeRO and other methods of efficient parallel/sharded gradient descent that enable very large models on a fixed GPU memory budget.
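
As a memory aid, here is a rough sketch of how ZeRO is typically switched on through DeepSpeed's config dictionary; the stage, batch size, and optimizer settings are illustrative only, and the toy model stands in for something much larger.

```python
# Rough DeepSpeed ZeRO sketch (launch with e.g. `deepspeed train.py`).
import torch
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "gradient_accumulation_steps": 1,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    # Stage 1 shards optimizer state, stage 2 also gradients, stage 3 also parameters.
    "zero_optimization": {"stage": 2},
}

model = torch.nn.Sequential(                     # toy model; substitute your own
    torch.nn.Linear(512, 2048), torch.nn.ReLU(), torch.nn.Linear(2048, 512)
)

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for step in range(100):
    x = torch.randn(8, 512, device=engine.device, dtype=torch.half)
    loss = engine(x).float().pow(2).mean()       # dummy loss for illustration
    engine.backward(loss)                        # handles loss scaling and sharded grads
    engine.step()
```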

4 Hyperparameter tuning

I think there are a couple of attempts to scale hyperparameter tuning successfully. The one I noticed was μP (Yang, Hu, Babuschkin, et al. 2022), which uses a kind of "natural parameterization" so that hyperparameters tuned on small models transfer to large ones.
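
A hedged sketch of the μTransfer workflow as I understand Microsoft's `mup` package follows; the widths, learning rate, and model architecture are placeholders, and the exact API may differ from what is shown here.

```python
# Sketch of the μP / μTransfer recipe with the `mup` package.
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam

class MLP(nn.Module):
    def __init__(self, width=128, d_in=32, d_out=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, width), nn.ReLU(),
                                  nn.Linear(width, width), nn.ReLU())
        # MuReadout replaces the final nn.Linear so the output layer scales correctly.
        self.head = MuReadout(width, d_out)

    def forward(self, x):
        return self.head(self.body(x))

base = MLP(width=64)       # "base" width that defines the parameterization
delta = MLP(width=128)     # a second width so mup can infer which dims scale
target = MLP(width=4096)   # the big model we actually want to train

set_base_shapes(target, base, delta=delta)   # rescale init and per-layer multipliers

# Tune the learning rate on a small width, then reuse it at the target width.
opt = MuAdam(target.parameters(), lr=3e-4)
```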

More recently, Muon has made some interesting progress (Bernstein et al. 2020; Bernstein and Newhouse 2024a, 2024b; Large et al. 2024; Yang, Simon, and Bernstein 2023).

I’m interested in their Newton-Schulz iterations, and more generally the modula system for metrized deep NN training.
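
For my own reference, this is the quintic Newton-Schulz iteration Muon uses to approximately orthogonalize a gradient matrix, following the published Muon implementation (minus its bfloat16 cast); the coefficients are the tuned values from that code.

```python
# Quintic Newton-Schulz iteration for approximate orthogonalization, as used in Muon.
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    a, b, c = 3.4445, -4.7750, 2.0315           # tuned quintic coefficients
    X = G / (G.norm() + eps)                    # normalize so the spectrum lies in the basin
    transpose = G.size(0) > G.size(1)
    if transpose:                               # work in the short-and-fat orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X     # X_{k+1} = a X_k + b A X_k + c A^2 X_k
    return X.T if transpose else X
```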

5 Incoming

  • Training extremely large neural networks across thousands of GPUs.

  • How to Train Really Large Models on Many GPUs?

  • Colossal-AI Overview

    Colossal-AI is designed to be a unified system providing an integrated set of training skills and utilities to the user. It includes common training utilities such as mixed-precision training and gradient accumulation, as well as an array of parallelism strategies including data, tensor, and pipeline parallelism. Tensor parallelism is optimized with different multi-dimensional distributed matrix-matrix multiplication algorithms, and several pipeline-parallelism methods allow the user to scale their model across nodes efficiently. More advanced features such as offloading are covered in detail in the tutorial documentation.

6 References

Bernstein, and Newhouse. 2024a. “Old Optimizer, New Norm: An Anthology.”
———. 2024b. “Modular Duality in Deep Learning.”
Bernstein, Vahdat, Yue, et al. 2020. “On the Distance Between Two Neural Networks and the Stability of Learning.”
Higham. n.d. “Matrix Procrustes Problems.”
Kumar, Bradbury, Young, et al. 2020. “Exploring the Limits of Concurrency in ML Training on Google TPUs.” arXiv:2011.03641 [Cs].
Large, Liu, Huh, et al. 2024. “Scalable Optimization in the Modular Norm.”
Liu, Liu, Gore, et al. 2025. “Neural Thermodynamic Laws for Large Language Model Training.”
Rajbhandari, Rasley, Ruwase, et al. 2020. “ZeRO: Memory Optimizations Toward Training Trillion Parameter Models.” arXiv:1910.02054 [Cs, Stat].
Rajbhandari, Ruwase, Rasley, et al. 2021. “ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.” arXiv:2104.07857 [Cs].
Rasley, Rajbhandari, Ruwase, et al. 2020. “DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters.” In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. KDD ’20.
Ren, Rajbhandari, Aminabadi, et al. 2021. “ZeRO-Offload: Democratizing Billion-Scale Model Training.” arXiv:2101.06840 [Cs].
Tang, Gan, Awan, et al. 2021. “1-Bit Adam: Communication Efficient Large-Scale Training with Adam’s Convergence Speed.” arXiv:2102.02888 [Cs].
Yang, Hu, Babuschkin, et al. 2022. “Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer.”
Yang, Simon, and Bernstein. 2023. “A Spectral Condition for Feature Learning.”
Zhang, and He. 2020. “Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping.” Advances in Neural Information Processing Systems.