I do not have much to say here right now.
Making neural models small
Obviously, if your target model is a neural net, one important step is making it as small as possible, in the sense of having as few parameters as possible.
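One way to get there is pruning. Here is a minimal sketch using PyTorch's built-in `torch.nn.utils.prune` utilities; the toy model and the 30% pruning fraction are my own arbitrary choices:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A hypothetical toy model.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero the 30% of weights with smallest L1 magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # fold the pruning mask into the weights

total = sum(p.numel() for p in model.parameters())
nonzero = sum((p != 0).sum().item() for p in model.parameters())
print(f"{nonzero}/{total} parameters remain non-zero")
```

Note that unstructured pruning like this only zeroes weights; the tensor shapes, and hence the naive memory footprint, are unchanged unless you follow up with sparse storage or structured pruning.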
Low precision nets
There is another sense of small: using 16-bit float, fixed-point, or even single-bit arithmetic, so that the numbers involved are compact. TBD.
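Pending that, a minimal sketch of one such scheme, post-training dynamic quantization in PyTorch, in which weights are stored as 8-bit integers and activations are quantized on the fly at inference time (the toy model is hypothetical):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Weights are stored as int8; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 784)
print(quantized(x).shape)  # same interface, smaller weights
```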
Training one learning algorithm to solve several problems simultaneously. Probably needs its own page.
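In the meantime, a minimal sketch of one common construction for this, a shared trunk with a separate output head per task, trained on a summed loss (all layer sizes, task names, and the naive equal loss weighting here are hypothetical choices):

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared trunk, one head per task (a hypothetical toy example)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.head_digits = nn.Linear(128, 10)  # task 1: classification
        self.head_angle = nn.Linear(128, 1)    # task 2: regression

    def forward(self, x):
        h = self.trunk(x)
        return self.head_digits(h), self.head_angle(h)

net = MultiTaskNet()
x = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))
angles = torch.randn(32, 1)

logits, pred_angle = net(x)
# Sum the per-task losses; weighting them well is itself a research problem.
loss = nn.functional.cross_entropy(logits, labels) \
     + nn.functional.mse_loss(pred_angle, angles)
loss.backward()
```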
Browser ML in particular has some quirks.
Other frameworks convert to the ONNX intermediate format, which can be run on microcontrollers, although I suspect with higher overhead.
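A minimal sketch of that round trip, assuming PyTorch as the source framework and onnxruntime as a desktop-class runtime; a microcontroller would use a leaner ONNX runtime, but the exported artifact is the same:

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(784, 10))
model.eval()

# Export to the ONNX interchange format.
torch.onnx.export(model, torch.randn(1, 784), "model.onnx",
                  input_names=["input"], output_names=["output"])

# Run it back with onnxruntime; other ONNX-capable runtimes work the same way.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
(output,) = session.run(None, {"input": np.random.randn(1, 784).astype(np.float32)})
print(output.shape)
```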
jomjol/AI-on-the-edge-device implements an image-recognition neural network on an ESP32 device.
Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.
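A minimal sketch of that pipeline, assuming the classic Relay API (TVM has since been moving toward its newer Relax stack) and reusing the hypothetical model.onnx from the sketch above:

```python
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Reuse the ONNX file from the export sketch above.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 784)})

# Compile for a generic CPU; retargeting at e.g. ARM or an accelerator
# means changing the target string, not the model.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

module = graph_executor.GraphModule(lib["default"](tvm.cpu()))
module.set_input("input", np.random.randn(1, 784).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```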
Minifying neural nets
Compiling neural nets
- openvinotoolkit/openvino: OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference (see the sketch after this list)
- apache/tvm: Open deep learning compiler stack for CPU, GPU and specialized accelerators
- tiny-dnn/tiny-dnn: header-only, dependency-free deep learning framework in C++14 (defunct)
- pytorch/glow: Compiler for Neural Network hardware accelerators in AOT mode
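For a taste of the first of those, a minimal sketch using OpenVINO's Python API (I am assuming the post-2023 package layout, which has changed across releases), again consuming the hypothetical model.onnx from earlier:

```python
import numpy as np
import openvino as ov  # assumption: the 2023+ package layout

core = ov.Core()
model = core.read_model("model.onnx")        # ONNX is read directly
compiled = core.compile_model(model, "CPU")  # or "GPU", "AUTO", ...

result = compiled({"input": np.random.randn(1, 784).astype(np.float32)})
print(result[compiled.output(0)].shape)
```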
The minimalist tiny-dnn is a C++14 implementation of tools for deep learning. It targets deep learning on limited-compute embedded systems and IoT devices.
I do not like this term, because it tends to imply that we care especially about some kind of centre-edge distinction, which we only do sometimes. It also tends to imply that large NN models in data centres are the default type of ML. Chris Mountford’s Hasn’t AI Been the Wrong Edgy for Too Long?, mentioned in the comments, riffs on this harder than I imagined, though.↩︎