# Neural Nets

- **Pytorch**: #torched (2018-05-04 – 2021-12-15)
- **Garbled highlights from NeurIPS 2021** (2021-11-05 – 2021-12-15)
- **Causal inference in highly parameterized ML** (2020-09-18 – 2021-12-10)
- **Gradient descent, Newton-like, stochastic** (2020-01-23 – 2021-12-09)
- **Overparameterization**: a.k.a. improper learning (2018-04-04 – 2021-12-08)
- **Ensembling neural nets**: Monte Carlo (2020-12-14 – 2021-11-25)
- **Convolutional neural networks** (2017-11-10 – 2021-11-21)
- **Technological singularities**: Incorporating hard AI take-offs, game-over high scores, the technium, deus-ex-machina, deus-ex-nube, nerd raptures and so forth (2016-12-01 – 2021-11-18)
- **Neural nets for “implicit representations”** (2021-01-21 – 2021-11-16)
- **Neural diffusion models** (2021-11-11)
- **Deep generative models** (2020-12-10 – 2021-11-11)
- **Probabilistic neural nets**: Bayesian and other probabilistic inference in overparameterized ML (2017-01-11 – 2021-11-03)
- **Graph neural nets** (2020-09-16 – 2021-10-27)
- **Random neural networks** (2017-02-17 – 2021-10-12)
- **Neural net attention mechanisms**: On brilliance through selective ignorance (2017-12-20 – 2021-10-01)
- **Regularising neural networks**: Generalisation for street fighters (2017-02-12 – 2021-09-24)
- **Economics of automation**: When do the robots come for my job? (2021-09-20)
- **Implementing neural nets** (2016-10-14 – 2021-09-20)
- **Neural nets with implicit layers**: Also, declarative networks (2020-12-08 – 2021-09-07)
- **Recurrent neural networks** (2016-06-16 – 2021-09-06)
- **Neural network activation functions** (2017-01-12 – 2021-08-02)
- **Here’s how I would do art with machine learning if I had to** (2016-06-06 – 2021-07-26)
- **Learning summary statistics** (2020-04-22 – 2021-07-15)
- **Multi-task ML** (2021-07-14)
- **ML on small devices**: Putting intelligence on chips small enough to be in disconcerting places (2016-10-14 – 2021-07-13)
- **Tensorflow**: The framework to use for deep learning if you groupthink like Google (2016-07-11 – 2021-07-07)
- **ML Koans**: Passing through the NAND-gate (2021-06-23)
- **Infinite width limits of neural networks** (2020-12-09 – 2021-05-11)
- **Compressing neural nets**: Pruning, compacting and otherwise fitting a good estimate into fewer parameters (2016-10-14 – 2021-05-07)
- **ML benchmarks and their pitfalls**: On marginal efficiency gain in paperclip manufacture (2020-08-16 – 2021-04-13)
- **Neural nets with basis decomposition layers** (2021-03-09)
- **Memory in machine learning** (2021-03-03)
- **Causal inference in the continuous limit** (2021-02-17)
- **Statistical mechanics of statistics** (2016-12-01 – 2021-01-06)
- **Why does deep learning work?** Are we in the pocket of Big VRAM? (2017-05-30 – 2020-12-14)
- **Garbled highlights from NeurIPS 2020** (2020-09-17 – 2020-12-11)
- **Nonparametrically learning dynamical systems** (2018-08-13 – 2020-12-08)
- **Big data ML best practice** (2020-09-16 – 2020-09-21)
- **Dimensionality reduction**: Wherein I teach myself, amongst other things, feature selection, how a sparse PCA works, and decide where to file multidimensional scaling (2015-03-22 – 2020-09-11)
- **Neural nets**: Designing the fanciest usable differentiable loss surface (2016-10-14 – 2020-09-09)
- **Statistics and ML in python** (2015-04-27 – 2020-08-27)
- **Learning of manifolds**: Also topological data analysis; other hip names to follow (2014-08-19 – 2020-06-23)
- **Deep fakery** (2020-06-15)
- **Learning Gamelan** (2016-04-05 – 2020-04-06)
- **Deep learning as a dynamical system** (2018-08-13 – 2020-04-02)
- **Nonparametrically learning spatiotemporal systems** (2020-09-16 – 2020-04-02)
- **Teaching computers to write music** (2016-06-06 – 2020-03-25)
- **Learnable indexes and hashes** (2018-01-12 – 2020-02-18)
- **Gradient descent, first-order, stochastic**: a.k.a. SGD, as seen in deep learning (2020-01-30 – 2020-02-07)
- **Gradient descent, higher order** (2019-10-26)
- **Automatic programming** (2016-10-14 – 2019-09-11)
- **Differentiable learning of automata** (2016-10-14 – 2019-09-11)
- **Gradient descent, Newton-like** (2019-02-05 – 2019-09-03)
- **Entity embeddings** (2017-04-01)
- **Garbled highlights from NIPS 2016** (2016-12-05 – 2017-02-03)
- **Pattern machine** (2011-06-27 – 2015-11-24)