# Neural Nets

- **Learning summary statistics** (2022-11-24)
- **Gradient flows**: infinitesimal optimization (2020-01-30 – 2022-11-02)
- **Deep learning as a dynamical system** (2018-08-13 – 2022-10-30)
- **The edge of chaos**: Computation, evolution, competition and other pastimes of faculty (2016-12-01 – 2022-10-30)
- **Pytorch**: #torched (2018-05-04 – 2022-10-23)
- **Generative art with language+diffusion models** (2022-09-16 – 2022-10-20)
- **Neural tangent kernel** (2020-12-09 – 2022-10-14)
- **Multi-objective optimisation** (2021-07-14 – 2022-10-10)
- **Automatic programming** (2016-10-14 – 2022-10-07)
- **Statistics and ML in python** (2015-04-27 – 2022-10-06)
- **Neural denoising diffusion models**: Denoising diffusion probabilistic models (DDPMs), score-based generative models, generative diffusion processes, neural energy models… (2021-11-11 – 2022-09-24)
- **Score matching** (2021-11-11 – 2022-09-23)
- **Ensemble Kalman methods for training neural networks**: Data assimilation for network weights (2022-09-20)
- **Gradient descent, first-order, stochastic**: a.k.a. SGD, as seen in deep learning (2020-01-30 – 2022-09-05)
- **Causal inference in highly parameterized ML** (2020-09-18 – 2022-09-02)
- **Transformer networks**: The transformer-powered subtitle recommendation for this article was “Our most terrifyingly effective weapon against the forces of evil is our ability to laugh at them.” (2017-12-20 – 2022-08-23)
- **Technological singularities**: Incorporating hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, nerd raptures and so forth (2016-12-01 – 2022-08-10)
- **Neural nets with implicit layers**: Also, declarative networks, bi-level optimization and other ingenious uses of the implicit function theorem (2020-12-08 – 2022-08-09)
- **Nonparametrically learning dynamical systems** (2018-08-13 – 2022-08-06)
- **Learning Gamelan** (2016-04-05 – 2022-08-05)
- **Neural net attention mechanisms**: On brilliance through selective ignorance (2017-12-20 – 2022-08-05)
- **Neural learning for spatiotemporal systems** (2020-09-16 – 2022-07-28)
- **Gradient descent, Newton-like** (2019-02-05 – 2022-07-25)
- **Overparameterization in large models**: Improper learning, benign overfitting, double descent (2018-04-04 – 2022-05-27)
- **Machine learning and statistics in Julia** (2019-11-27 – 2022-05-27)
- **Graph neural nets** (2020-09-16 – 2022-05-18)
- **Implementing neural nets** (2016-10-14 – 2022-05-18)
- **Probabilistic neural nets**: Bayesian and other probabilistic inference in overparameterized ML (2017-01-11 – 2022-04-07)
- **Reservoir Computing** (2022-03-28)
- **Generative flow** (2022-03-07)
- **Differentiable learning of automata** (2016-10-14 – 2022-02-19)
- **Neural nets for “implicit representations”** (2021-01-21 – 2022-02-01)
- **Neural nets with basis decomposition layers** (2021-03-09 – 2022-02-01)
- **Here’s how I would do art with machine learning if I had to** (2016-06-06 – 2022-02-01)
- **Running neural nets backwards** (2022-01-29)
- **Garbled highlights from NeurIPS 2021** (2021-11-05 – 2021-12-15)
- **Gradient descent, Newton-like, stochastic** (2020-01-23 – 2021-12-09)
- **Ensembling neural nets**: Monte Carlo (2020-12-14 – 2021-11-25)
- **Convolutional neural networks** (2017-11-10 – 2021-11-21)
- **Neural nets for “implicit representations”** (2021-01-21 – 2021-11-16)
- **Deep generative models** (2020-12-10 – 2021-11-11)
- **Random neural networks** (2017-02-17 – 2021-10-12)
- **Regularising neural networks**: Generalisation for street fighters (2017-02-12 – 2021-09-24)
- **Economics of automation**: When do the robots come for my job? (2021-09-20)
- **Recurrent neural networks** (2016-06-16 – 2021-09-06)
- **Neural network activation functions** (2017-01-12 – 2021-08-02)
- **Learning summary statistics** (2020-04-22 – 2021-07-15)
- **Multi-task ML** (2021-07-14)
- **ML on small devices**: Putting intelligence on chips small enough to be in disconcerting places (2016-10-14 – 2021-07-13)
- **Tensorflow**: The framework to use for deep learning if you groupthink like Google (2016-07-11 – 2021-07-07)
- **ML Koans**: Passing through the NAND-gate (2021-06-23)
- **Infinite width limits of neural networks** (2020-12-09 – 2021-05-11)
- **Compressing neural nets**: Pruning, compacting and otherwise fitting a good estimate into fewer parameters (2016-10-14 – 2021-05-07)
- **ML benchmarks and their pitfalls**: On marginal efficiency gain in paperclip manufacture (2020-08-16 – 2021-04-13)
- **Memory in machine learning** (2021-03-03)
- **Statistical mechanics of statistics** (2016-12-01 – 2021-01-06)
- **Why does deep learning work?** Are we in the pocket of Big VRAM? (2017-05-30 – 2020-12-14)
- **Garbled highlights from NeurIPS 2020** (2020-09-17 – 2020-12-11)
- **Big data ML best practice** (2020-09-16 – 2020-09-21)
- **Dimensionality reduction**: Wherein I teach myself, amongst other things, feature selection, how a sparse PCA works, and decide where to file multidimensional scaling (2015-03-22 – 2020-09-11)
- **Neural nets**: Designing the fanciest usable differentiable loss surface (2016-10-14 – 2020-09-09)
- **Learning of manifolds**: Also topological data analysis; other hip names to follow (2014-08-19 – 2020-06-23)
- **Deep fakery** (2020-06-15)
- **Teaching computers to write music** (2016-06-06 – 2020-03-25)
- **Learnable indexes and hashes** (2018-01-12 – 2020-02-18)
- **Gradient descent, higher order** (2019-10-26)
- **Entity embeddings** (2017-04-01)
- **Garbled highlights from NIPS 2016** (2016-12-05 – 2017-02-03)
- **Pattern machine** (2011-06-27 – 2015-11-24)