Gradient descent at scale

Practical implementation of large-scale optimisation



There are many ways to scale a neural model beyond the limits of a single GPU. The details are, obviously, complicated, but Lilian Weng has explained it all for us; see How to Train Really Large Models on Many GPUs?.

General compute node provisioning

See practical cloud ml.

Basic multi-GPU

DDP (distributed data-parallel training) and other multi-GPU techniques: each worker holds a full replica of the model, and gradients are averaged across workers at every step.

Various frameworks offer their own solutions for this, e.g. PyTorch Lightning wraps basic DDP for PyTorch. There are also framework-agnostic options, most prominently Horovod.
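To fix ideas, here is a minimal sketch of plain PyTorch DDP on a single node, assuming a launch via `torchrun --nproc_per_node=4 train.py` (which sets the RANK/LOCAL_RANK/WORLD_SIZE environment variables); the tiny linear model and random data are stand-ins for a real workload.

```python
# Minimal single-node DDP sketch; launch with: torchrun --nproc_per_node=4 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)   # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])          # gradient sync via all-reduce
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # In a real job each rank would see a different data shard via DistributedSampler.
    for _ in range(10):
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()      # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```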

Horovod

Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. (source).

With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premise or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve.
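For comparison, a minimal sketch of the same kind of training loop using Horovod's PyTorch API, assuming a launch via `horovodrun -np 4 python train.py`; the model, learning rate, and data are again placeholders.

```python
# Minimal Horovod sketch; launch with: horovodrun -np 4 python train.py
import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())   # pin each process to one GPU

model = torch.nn.Linear(128, 10).cuda()   # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3 * hvd.size())  # scale lr by worker count

# Wrap the optimizer so gradient averaging happens via ring all-reduce,
# and make sure every worker starts from identical state.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for _ in range(10):
    x = torch.randn(32, 128, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```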

Extremely large models

Little to say here for now, but I need to remember various terms for later use: ZeRO and other methods of efficient parallel/sharded gradient descent, which enable much larger models for a fixed GPU memory budget (Rajbhandari et al. 2020, 2021; Rasley et al. 2020; Ren et al. 2021).
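As a reminder of what this looks like in practice, here is a hedged sketch of enabling ZeRO stage 2 via DeepSpeed (Rasley et al. 2020; Rajbhandari et al. 2020). The config keys follow DeepSpeed's JSON schema (stage 2 shards optimizer state and gradients; stage 3 additionally shards parameters), but the tiny model, batch size, and learning rate are placeholders, and a real run would be started with the `deepspeed` launcher.

```python
# Hedged DeepSpeed/ZeRO sketch; a real job would be launched with: deepspeed train.py
import torch
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},   # shard optimizer state and gradients
}

model = torch.nn.Linear(128, 10)   # stand-in for a model that actually needs sharding

# deepspeed.initialize returns an engine that handles partitioning,
# mixed precision, and gradient accumulation internally.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 128, device=engine.device)
y = torch.randint(0, 10, (8,), device=engine.device)
loss = torch.nn.functional.cross_entropy(engine(x), y)
engine.backward(loss)   # engine handles loss scaling and gradient partitioning
engine.step()
```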

References

Kumar, Sameer, James Bradbury, Cliff Young, Yu Emma Wang, Anselm Levskaya, Blake Hechtman, Dehao Chen, et al. 2020. “Exploring the Limits of Concurrency in ML Training on Google TPUs.” arXiv:2011.03641 [cs], November.
Rajbhandari, Samyam, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. “ZeRO: Memory Optimizations Toward Training Trillion Parameter Models.” arXiv:1910.02054 [cs, stat], May.
Rajbhandari, Samyam, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. 2021. “ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.” arXiv:2104.07857 [cs], April.
Rasley, Jeff, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. “DeepSpeed: System Optimizations Enable Training Deep Learning Models with Over 100 Billion Parameters.” In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3505–6. KDD ’20. New York, NY, USA: Association for Computing Machinery.
Ren, Jie, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, Minjia Zhang, Dong Li, and Yuxiong He. 2021. “ZeRO-Offload: Democratizing Billion-Scale Model Training.” arXiv:2101.06840 [cs], January.
Tang, Hanlin, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, and Yuxiong He. 2021. “1-Bit Adam: Communication Efficient Large-Scale Training with Adam’s Convergence Speed.” arXiv:2102.02888 [cs], June.
Zhang, Minjia, and Yuxiong He. 2020. “Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping.” Advances in Neural Information Processing Systems 33: 14011–23.
