Got good behaviour from a million-parameter model? Want to see if stuff gets weirder as we hit a billion parameters? Turns out it does!
Brief links on the theme of scaling in the extremely large model / large data limit, and what that does to the behaviour of the models. A new front in the complexity, and/or statistical mechanics, of statistics.
As to how to scale up these models in practice, see distributed gradient descent.
Side note: The better lesson
Sutton's famous bitter lesson:
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.
Lots of people declaim this one, e.g. On the futility of trying to be clever (the bitter lesson redux).
The better lesson is:
The biggest lesson that can be read from 70 years of AI research is that a lot of the good ideas that did not require massive compute budget have already been published by smart people who did not have GPUs, so we need to leverage our technological advantage if we want to get cited.
Research, and indeed predictive analytics, is a competitive market, and advice about relative advantage needs strategic context. But it does not sound as profound if we phrase it that way, eh?
Big transformers
One fun result comes from Transformer language models. An interesting observation way back in 2020 was that there seemed to be an unexpected trade-off whereby you can go faster by training a bigger network. Indeed, there is a whole family of observations in this vein trying to pin down the actual scaling behaviour.
nostalgebraist summarises Henighan et al. (2020) and Kaplan et al. (2020):
L(D): information
OpenAI derives a scaling law called L(D). This law is the best you could possibly do, even with arbitrarily large compute/models, if you are only allowed to train on D data points.
No matter how good your model is, there is only so much it can learn from a finite sample. L(D) quantifies this intuitive fact (if the model is an autoregressive transformer).
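For concreteness, L(D) is a simple power law in dataset size. Here is a minimal Python sketch of that functional form; the constants are the approximate fitted values I recall from Kaplan et al. (2020), so treat them as indicative rather than exact.

```python
# Data-limited scaling law from Kaplan et al. (2020): L(D) = (D_c / D) ** alpha_D.
# The constants below are approximate fitted values quoted from memory;
# illustrative only, not authoritative.
def loss_data_limited(d_tokens: float, d_c: float = 5.4e13, alpha_d: float = 0.095) -> float:
    """Best achievable test loss (nats/token) with a dataset of d_tokens tokens,
    no matter how large the model or compute budget."""
    return (d_c / d_tokens) ** alpha_d

# e.g. roughly 2.8 nats/token at 1B tokens, falling only slowly as the dataset grows
print(loss_data_limited(1e9))
```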
L(C): budgeting
OpenAI derives another scaling law, called L(C). This is the best you can do with compute C, if you spend it optimally.
What does optimal spending look like? Remember, you can spend a unit of compute on
- a bigger model (N), or
- training the same model for longer (S)
…In the compute regime we are currently in, making the model bigger is way more effective than taking more steps.
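To illustrate why the summary says bigger beats longer, here is a rough sketch (not the authors' code) of how the compute-optimal allocation scales. The exponents are approximately the ones reported in Kaplan et al. (2020), quoted from memory, so take them as indicative.

```python
# Compute-optimal allocation per Kaplan et al. (2020), roughly:
# optimal model size N grows like C**0.73 while serial steps S grow like C**0.03
# (most of the remaining budget goes to larger batches). Exponents are
# approximate values quoted from memory; illustrative only.
def optimal_allocation(compute_multiplier: float, n_exp: float = 0.73, s_exp: float = 0.03) -> dict:
    """How much bigger a model, and how many more serial steps, an optimally
    spent compute budget scaled by compute_multiplier buys you."""
    return {
        "model_size_factor": compute_multiplier ** n_exp,
        "serial_steps_factor": compute_multiplier ** s_exp,
    }

# e.g. 100x more compute buys roughly 29x more parameters but only ~1.1x more steps
print(optimal_allocation(100.0))
```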
Controversy! The scaling laws have since been revised.
Incoming
- Zhang et al. (2020) (how do NNs learn from language as n increases?)
- DeepSpeed Compression: A composable library for extreme compression and zero-cost quantization (targeting large language models)