Gradient descent at scale

Practical implementation of large optimisations



There are many ways we can try to expand a neural model beyond the limits of a single GPU, and the details get complicated quickly. Fortunately Lilian Weng has surveyed them for us: see How to Train Really Large Models on Many GPUs?.

General compute node provisioning

See practical cloud ml.

Basic multi-GPU

DDP and other multi-GPU techniques.

Various frameworks have their own solutions for this; e.g. PyTorch Lightning brings basic DistributedDataParallel (DDP) to PyTorch. There are also framework-agnostic options, most prominently Horovod.
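Whatever the framework, the core trick behind data-parallel training is the same: each worker computes gradients on its own shard of the mini-batch, the gradients are averaged across workers (an allreduce), and every replica applies the identical update so the copies stay in sync. A dependency-free toy sketch of that loop (workers simulated sequentially in one process, no real GPUs or collectives involved):

```python
# Toy data-parallel SGD: several "workers" each hold a replica of the
# parameter, compute a gradient on their own shard of the data, then
# an allreduce averages the gradients before a synchronous update.
# Model: fit y = w * x by least squares; loss = mean((w*x - y)^2).

def local_gradient(w, shard):
    """Gradient of the MSE loss wrt w on this worker's shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(values):
    """Stand-in for a ring-allreduce: average one value per worker."""
    return sum(values) / len(values)

def train(shards, w=0.0, lr=0.01, steps=200):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # parallel in reality
        g = allreduce_mean(grads)                       # synchronise workers
        w -= lr * g                                     # identical update everywhere
    return w

# Data drawn from y = 3x, striped across 4 workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
print(train(shards))  # converges near 3.0
```

Because every replica sees the same averaged gradient, the replicas never drift apart; the frameworks below differ mainly in how efficiently they implement that allreduce and how little code you must change to get it.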

Horovod

Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. (source)

With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premise or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve.
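Those "few lines of Python code" look roughly like the following for PyTorch. The hvd.* calls follow Horovod's documented PyTorch API, but this is an untested sketch; the import guard lets it degrade gracefully on a machine without Horovod installed.

```python
# Sketch of the handful of lines Horovod adds to an existing PyTorch
# training script (per Horovod's published PyTorch examples).
def horovodify(model, optimizer):
    """Wrap a model/optimizer pair for Horovod data parallelism.
    Returns the wrapped optimizer, or None if Horovod is unavailable."""
    try:
        import horovod.torch as hvd
        import torch
    except ImportError:
        return None  # no Horovod here; fall back to single-process training

    hvd.init()                                    # start the worker processes
    if torch.cuda.is_available():
        torch.cuda.set_device(hvd.local_rank())   # pin one GPU per process
    # Average gradients across workers at each optimizer.step():
    optimizer = hvd.DistributedOptimizer(
        optimizer, named_parameters=model.named_parameters())
    # Start all replicas from rank 0's weights and optimizer state:
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    hvd.broadcast_optimizer_state(optimizer, root_rank=0)
    return optimizer

# Launched with e.g.:  horovodrun -np 4 python train.py
```

The rest of the training loop is unchanged, which is the selling point: the same script runs on one GPU or hundreds.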

Extremely large models

Little to say here for now, but some terms I need to remember for later use: ZeRO (Rajbhandari et al. 2020) and other methods of efficient parallel/sharded gradient descent that enable much larger models within a fixed per-GPU memory budget; see also ZeRO-Offload (Ren et al. 2021), ZeRO-Infinity (Rajbhandari et al. 2021) and the DeepSpeed system that implements them (Rasley et al. 2020).
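To make "fixed GPU memory budget" concrete: in mixed-precision Adam training, each of ψ parameters costs roughly 16 bytes of model state (2 for fp16 weights, 2 for fp16 gradients, 12 for the fp32 master weights, momentum and variance), and ZeRO's three stages shard progressively more of that across the N data-parallel workers. A small calculator following the accounting in Rajbhandari et al. (2020):

```python
def zero_per_gpu_bytes(psi, n, stage):
    """Approximate per-GPU model-state memory under ZeRO
    (Rajbhandari et al. 2020), for mixed-precision Adam.
    psi: parameter count, n: data-parallel degree, stage: 0..3,
    where stage 0 is plain data parallelism (full replication)."""
    params, grads, opt_states = 2 * psi, 2 * psi, 12 * psi
    if stage >= 1:
        opt_states /= n   # ZeRO-1: shard optimizer states
    if stage >= 2:
        grads /= n        # ZeRO-2: also shard gradients
    if stage >= 3:
        params /= n       # ZeRO-3: also shard parameters
    return params + grads + opt_states

GB = 1e9
psi, n = 7_500_000_000, 64   # 7.5B parameters on 64 GPUs
for stage in range(4):
    print(stage, round(zero_per_gpu_bytes(psi, n, stage) / GB, 1))
# Reproduces the paper's worked example: 120.0, 31.4, 16.6, 1.9 GB.
```

So full ZeRO-3 sharding shrinks the per-GPU model state by a factor of N, at the cost of extra communication to regather shards when they are needed.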

References

Kumar, Sameer, James Bradbury, Cliff Young, Yu Emma Wang, Anselm Levskaya, Blake Hechtman, Dehao Chen, et al. 2020. “Exploring the Limits of Concurrency in ML Training on Google TPUs.” arXiv:2011.03641 [cs], November. http://arxiv.org/abs/2011.03641.
Rajbhandari, Samyam, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. “ZeRO: Memory Optimizations Toward Training Trillion Parameter Models.” arXiv:1910.02054 [cs, Stat], May. http://arxiv.org/abs/1910.02054.
Rajbhandari, Samyam, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. 2021. “ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.” arXiv:2104.07857 [cs], April. http://arxiv.org/abs/2104.07857.
Rasley, Jeff, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. “DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters.” In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3505–6. KDD ’20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3394486.3406703.
Ren, Jie, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, and Yuxiong He. 2021. “ZeRO-Offload: Democratizing Billion-Scale Model Training.” arXiv:2101.06840 [cs], January. http://arxiv.org/abs/2101.06840.
Tang, Hanlin, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, and Yuxiong He. 2021. “1-Bit Adam: Communication Efficient Large-Scale Training with Adam’s Convergence Speed.” arXiv:2102.02888 [cs], June. http://arxiv.org/abs/2102.02888.
Zhang, Minjia, and Yuxiong He. 2020. “Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping.” Advances in Neural Information Processing Systems 33: 14011–23. http://arxiv.org/abs/2010.13369.
