There are many ways to scale a neural model beyond the limits of a single GPU, and the details get complicated quickly. Fortunately, Lilian Weng has explained it all for us: see How to Train Really Large Models on Many GPUs?.
General compute node provisioning
See practical cloud ml.
Basic multi-GPU
DDP and other multi-GPU techniques.
Various frameworks have their own solutions for this; e.g. pytorch-lightning brings basic DDP to PyTorch. There are also framework-agnostic options, most prominently Horovod.
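For reference, here is a minimal sketch of what plain PyTorch DDP looks like without Lightning, assuming launch via `torchrun --nproc_per_node=<n_gpus> train_ddp.py`; the model and data are toy placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE in the environment
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)  # stand-in for a real network
    model = DDP(model, device_ids=[local_rank])         # gradients are all-reduced across ranks
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(100):                                 # toy training loop
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()                                  # backward triggers the all-reduce
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```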
Horovod
Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. (source).
With Horovod, an existing training script can be scaled up to run on hundreds of GPUs in just a few lines of Python code. Horovod can be installed on-premise or run out-of-the-box in cloud platforms, including AWS, Azure, and Databricks. Horovod can additionally run on top of Apache Spark, making it possible to unify data processing and model training into a single pipeline. Once Horovod has been configured, the same infrastructure can be used to train models with any framework, making it easy to switch between TensorFlow, PyTorch, MXNet, and future frameworks as machine learning tech stacks continue to evolve.
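Those "few lines" look roughly like the sketch below, assuming a PyTorch script launched with `horovodrun -np <n_gpus> python train_hvd.py`; the model and data here are toy placeholders.

```python
import torch
import horovod.torch as hvd

hvd.init()                                   # 1. initialise Horovod
torch.cuda.set_device(hvd.local_rank())      # 2. pin each process to one GPU

model = torch.nn.Linear(128, 10).cuda()      # stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3 * hvd.size())

# 3. wrap the optimizer so gradients are averaged across workers via ring-allreduce
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# 4. start all workers from the same weights and optimizer state
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for _ in range(100):                          # toy training loop
    x = torch.randn(32, 128, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```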
Extremely large models
Little to say here for now, but I want to record various terms for later use: ZeRO and other methods of efficient parallel/sharded gradient descent, which enable much larger models for a fixed GPU memory budget. A rough DeepSpeed/ZeRO sketch follows the links below.
- Advanced GPU Optimized Training
- ZeRO & DeepSpeed: New system optimizations enable training models with over 100 billion parameters
- Fit More and Train Faster With ZeRO via DeepSpeed and FairScale
- Model Parallelism and Big Models · Issue #8771 · huggingface/transformers
- Training 10x Larger Models and Accelerating Training on a Single GPU with ZeRO-Offloading
- microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
- Exploring the limits of Concurrency in ML Training on Google TPUs, Kumar et al. (2020) (BERT in 23s on a TPU-4096; "We view the current competition in language understanding as a modern-day Space Race, with competing organizations assembling both giant machines and giant models in the quest for an Artificial General Intelligence breakthrough.")
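As promised above, here is a rough sketch of plugging ZeRO-style sharding into a training loop via DeepSpeed, assuming launch with `deepspeed train_zero.py`; the config values and the toy model/data are illustrative, not tuned recommendations.

```python
import torch
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 32,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    # ZeRO stage 2 shards optimizer states and gradients across data-parallel ranks;
    # stage 3 additionally shards the parameters themselves.
    "zero_optimization": {"stage": 2},
}

model = torch.nn.Sequential(                  # stand-in for a much larger network
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10))

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

for _ in range(100):                          # toy training loop
    x = torch.randn(32, 1024, device=model_engine.device, dtype=torch.half)
    y = torch.randint(0, 10, (32,), device=model_engine.device)
    loss = torch.nn.functional.cross_entropy(model_engine(x), y)
    model_engine.backward(loss)               # DeepSpeed handles loss scaling and sharded grads
    model_engine.step()
```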