I do not have much to say here right now.
Making neural models small
Obviously, if your target model is a neural net, one important step is making it as small as possible, in the sense of having as few parameters as possible.
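A minimal sketch of one route to fewer effective parameters, assuming PyTorch and using magnitude pruning; the toy architecture and the 30% pruning fraction are illustrative only, not a recommendation.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model, stand-in for whatever you actually want to shrink.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
print("parameters:", sum(p.numel() for p in model.parameters()))

# Zero out the 30% smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the mask into the weights
```

Note that unstructured pruning like this only makes the weight tensors sparse; realising a smaller memory footprint or faster inference then depends on sparse storage/kernels or on structured pruning.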
Low precision nets
There is another sense of small: using 16-bit float, fixed-point, or even single-bit arithmetic, so that the numbers involved are compact. TBD.
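A minimal sketch of two such routes, assuming PyTorch; the toy model is illustrative, and the half-precision cast typically needs a GPU (or a recent PyTorch build) to actually run fast.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# 1. Cast weights to 16-bit floats; halves the storage per parameter.
model_fp16 = model.half()

# 2. Post-training dynamic quantisation of Linear layers to 8-bit integers.
model_int8 = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
y = model_int8(torch.randn(1, 784))  # runs on CPU with int8 weights
```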
Training one learning algorithm to solve several problems simultaneously. Probably needs its own page.
Other frameworks convert to the intermediate ONNX format, which can be run on microcontrollers, although I suspect with higher overhead. A minimal export sketch follows the list below.
- Deploying on the Edge With ONNX
- Introducing ONNX Runtime mobile: a reduced size, high performance package for edge devices
- Deploying a PyTorch MobileNetV2 Classifier on the Intel Neural Compute Stick 2
- jomjol/AI-on-the-edge-device implements an image AI network on an ESP32 device
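The promised sketch: exporting a PyTorch model to ONNX and running it with ONNX Runtime. Assumes the `onnx` and `onnxruntime` packages are installed; the file name and input shape are illustrative.

```python
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(784, 10)).eval()
dummy = torch.randn(1, 784)

# Trace the model and write it out in the ONNX interchange format.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# Load and run it with ONNX Runtime, independent of PyTorch.
sess = ort.InferenceSession("model.onnx")
outputs = sess.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)  # (1, 10)
```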
I do not like this term, because it tends to imply that we care especially about some kind of centre-edge distinction, which we only do sometimes. It also tends to imply that large NN models in data centres are the default type of ML. Chris Mountford's Hasn't AI Been the Wrong Edgy for Too Long?, mentioned in the comments, riffs on this harder than I imagined, though.↩︎