Successor to Lua’s torch. Evil twin to Google’s Tensorflow. Intermittently ascendant over Tensorflow amongst researchers, if not in industrial use.

They claim certain fancy applications are easier in pytorch’s dynamic graph construction style, which resembles (in outcome if not implementation details) the dynamic styles of jax, most julia autodiffs, and tensorflow in “eager” mode.

PyTorch has a unique [sic] way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe and CNTK have a static view of the world. One has to build a neural network, and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch. [… Pytorch] allows you to change the way your network behaves arbitrarily with zero lag or overhead.

Of course the overhead is not truly zero; rather they have shifted the user overhead baseline down a little compared to tensorflow. Discounting their hyperbole, it still provides relatively convenient autodiff.

The price we pay is that they have chosen different names and calling conventions for all the mathematical functions I use than either tensorflow or numpy, who already chose different names than one another (for no good reason as far as I know), so there is pointless friction in swapping between these frameworks. Presumably that is a tactic to engineer a captive audience? Or maybe just bad coordination. idk.

Getting started

An incredible feature of pytorch is its documentation, which is clear and consistent and somewhat comprehensive. That is hopefully no longer a massive advantage over Tensorflow whose documentation was garbled nonsense when I was using it.

Going faster

Andrej Karpathy posted a good quick tutorial on optimizing your PyTorch code. Quick summary:

  • DataLoader has bad default settings, tune num_workers > 0 and default to pin_memory = True
  • use torch.backends.cudnn.benchmark = True to autotune cudnn kernel choice
  • max out the batch size for each GPU to amortize compute
  • do not forget bias=False in weight layers before BatchNorms, it’s a noop that bloats model
  • use for p in model.parameters(): p.grad = None instead of model.zero_grad()
  • careful to disable debug APIs in prod (detect_anomaly/profiler/emit_nvtx/gradcheck)
  • use DistributedDataParallel not DataParallel, even if not running distributed
  • careful to load balance compute on all GPUs if variably-sized inputs or GPUs will idle
  • use an apex fused optimizer (default PyTorch optim for loop iterates individual params, yikes)
  • use checkpointing to recompute memory-intensive compute-efficient ops in bwd pass (eg activations, upsampling,...)
  • use @torch.jit.script, e.g. esp to fuse long sequences of pointwise ops like in GELU
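A few of those tips in runnable form. The toy model and dataset here are placeholders of my own; treat this as a sketch, not a benchmark:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins so the snippet is self-contained.
dataset = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))

# Tuned DataLoader: pin memory for faster host-to-GPU copies;
# in real training also set num_workers > 0 (0 here keeps the sketch portable).
loader = DataLoader(dataset, batch_size=64, num_workers=0,
                    pin_memory=torch.cuda.is_available())

# Let cuDNN autotune its kernel choice (pays off when input shapes are fixed).
torch.backends.cudnn.benchmark = True

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for x, y in loader:
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    # Cheaper than model.zero_grad(): frees the grads instead of zero-filling.
    for p in model.parameters():
        p.grad = None
```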

Custom Functions

There is (was?) some bad advice in the manual:

nn exports two kinds of interfaces — modules and their functional versions. You can extend it in both ways, but we recommend using modules for all kinds of layers, that hold any parameters or buffers, and recommend using a functional form for parameter-less operations like activation functions, pooling, etc.

Important missing information:

If my desired loss is already just a composition of existing functions, I don’t need to define a Function subclass.

And: the given options are not an either/or choice but two things you need to do in concert. A better summary would be:

  • If you need to have a function which is differentiable in a non-trivial way, implement a Function
  • If you need to bundle a Function with some state or updatable parameters, additionally wrap it in a nn.Module
  • Some people claim you can also create custom layers using plain python functions. However, these don’t work as layers in an nn.Sequential model at time of writing, so I’m not sure how to take this advice.
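A minimal sketch of that recipe: a Function with a hand-written backward, then an nn.Module wrapper adding a learnable parameter so it slots into nn.Sequential. All the names here are mine, not from any library:

```python
import torch
from torch import nn

class Cube(torch.autograd.Function):
    """y = x**3 with an explicitly implemented derivative, 3 * x**2."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 3

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * 3 * x ** 2

class ScaledCube(nn.Module):
    """Bundle the Function with a learnable parameter, per the recipe above."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(()))

    def forward(self, x):
        return self.scale * Cube.apply(x)

x = torch.tensor(2.0, requires_grad=True)
ScaledCube()(x).backward()
# dy/dx = scale * 3 * x**2 = 12 at x = 2, scale = 1
```

torch.autograd.gradcheck is the standard way to verify such a hand-written backward against finite differences.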

Recurrent nets

It’s just as well that it’s easy to roll your own recurrent nets, because the default implementations are bad:

The default RNN layer is heavily optimised using cuDNN, which is sweet, but for some complicated technical reason I do not give an arse about, you only have a choice of 2 activation functions, and neither of them is “linear”.

Fairly sure this is no longer true. However, the default RNNs are still a little weird, and assume a 1-dimensional state vector. A DIY approach might fix this. Recent pytorch includes JITed RNN which might even make this DIY style performant. I have not used it.
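Rolling your own is not much code. A sketch of a recurrent cell with linear (identity-activation) state dynamics, which the built-in nn.RNN does not offer; the class and argument names are all mine:

```python
import torch
from torch import nn

class LinearRNNCell(nn.Module):
    """A DIY recurrent cell with *linear* state dynamics."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.in2h = nn.Linear(input_size, hidden_size)
        self.h2h = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x_seq, h=None):
        # x_seq has shape (time, batch, input_size).
        if h is None:
            h = x_seq.new_zeros(x_seq.shape[1], self.h2h.in_features)
        outputs = []
        for x_t in x_seq:  # plain python loop over time steps
            h = self.in2h(x_t) + self.h2h(h)  # no nonlinearity applied
            outputs.append(h)
        return torch.stack(outputs), h

cell = LinearRNNCell(3, 5)
out, h = cell(torch.randn(7, 2, 3))  # out: (7, 2, 5), h: (2, 5)
```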

Curve interpolation and ODEs

torchdiffeq has much ODE stuff.

Generic interpolation in xitorch

xitorch (pronounced “sigh-torch”) is a library based on PyTorch that provides differentiable operations and functionals for scientific computing and deep learning. xitorch provides analytic first and higher order derivatives automatically using PyTorch’s autograd engine. It is inspired by SciPy, a popular Python library for scientific computing.

NB, works in only one index dimension.

Logging and profiling

Leveraging tensorflow’s handy diagnostic GUI, tensorboard: Now native, via torch.utils.tensorboard. See also the PyTorch Profiler documentation.
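The profiler itself needs no extra dependencies; a quick CPU-only sketch:

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(128, 128)
x = torch.randn(32, 128)

# Profile a few forward passes; add ProfilerActivity.CUDA on a GPU machine.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    for _ in range(3):
        model(x)

# Table of the most expensive ops, sorted by total CPU time.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```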

Easier: just use lightning if that fits the workflow.

Also I have seen visdom promoted? This pumps graphs to a visualisation server. Not pytorch-specific, but seems well-integrated.

Further generic profiling and logging at the NN-in-practice notebook.

Visualising network graphs

Fiddly. The official way is via ONNX.

conda install -c ezyang onnx pydot # or
pip install onnx pydot

Then one can use various graphical model diagram tools.

brew install --cask netron # or
pip install netron
brew install graphviz

pytorchviz and tensorboardX also support visualising pytorch graphs.

pip install git+
from torchviz import make_dot
y = model(x)
make_dot(y, params=dict(model.named_parameters()))

Is the GPU working?

>>> import torch

>>> torch.cuda.is_available()

>>> torch.cuda.device_count()

>>> torch.cuda.current_device()

>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>

>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'

Hessians and other 2nd-order-ish derivatives


kazukiosawa/asdfghjkl: ASDL: Automatic Second-order Differentiation (for Fisher, Gradient covariance, Hessian, Jacobian, and Kernel) Library

The library is called "ASDL", which stands for Automatic Second-order Differentiation (for Fisher, Gradient covariance, Hessian, Jacobian, and Kernel) Library. ASDL is a PyTorch extension for computing 1st/2nd-order metrics and performing 2nd-order optimization of deep neural networks.

Not sure who to cite for this? Used in Daxberger et al. (2021) but Kazuki Osawa is not an author on those papers and they clearly authored the code.

backpack (Dangel, Kunstner, and Hennig 2019)

Provided quantities include:

  • Individual gradients from a mini-batch
  • Estimates of the gradient variance or second moment
  • Approximate second-order information (diagonal and Kronecker approximations)

Motivation: Computation of most quantities is not necessarily expensive (often just a small modification of the existing backward pass where backpropagated information can be reused). But it is difficult to do in the current software environment.

Documentation mentions the following capabilities: an estimate of the variance, the Gauss-Newton diagonal, and the Gauss-Newton KFAC.

f-dangel/backpack: BackPACK - a backpropagation package built on top of PyTorch which efficiently computes quantities other than the gradient.


amirgholami/PyHessian: PyHessian is a Pytorch library for second-order based analysis and training of Neural Networks (Yao et al. 2020):

PyHessian is a pytorch library for Hessian based analysis of neural network models. The library enables computing the following metrics:

  • Top Hessian eigenvalues
  • The trace of the Hessian matrix
  • The full Hessian Eigenvalues Spectral Density (ESD)
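For small problems, plain pytorch already computes exact Hessians via torch.autograd.functional, no extra library needed; the dedicated libraries above matter when the network is too big for that:

```python
import torch
from torch.autograd.functional import hessian

def f(x):
    # A simple quadratic: f(x) = x0**2 + 3 * x0 * x1
    return x[0] ** 2 + 3 * x[0] * x[1]

H = hessian(f, torch.tensor([1.0, 2.0]))
# H should be [[2., 3.], [3., 0.]]
```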


Lightning

Lightning is a common training/utility framework for Pytorch.

Lightning is a very lightweight wrapper on PyTorch that decouples the science code from the engineering code. It’s more of a style-guide than a framework. By refactoring your code, we can automate most of the non-research code.

To use Lightning, simply refactor your research code into the LightningModule format (the science) and Lightning will automate the rest (the engineering). Lightning guarantees tested, correct, modern best practices for the automated parts.

  • If you are a researcher, Lightning is infinitely flexible, you can modify everything down to the way .backward is called or distributed is set up.
  • If you are a scientist or production team, lightning is very simple to use with best practice defaults.

Why do I want to use lightning?

Every research project starts the same, a model, a training loop, validation loop, etc. As your research advances, you’re likely to need distributed training, 16-bit precision, checkpointing, gradient accumulation, etc.

Lightning sets up all the boilerplate state-of-the-art training for you so you can focus on the research.

These last two paragraphs constitute a good introduction to the strengths and weaknesses of lightning: “Every research project starts the same, a model, a training loop, validation loop” stands in opposition to “Lightning is infinitely flexible”. An alternative description with different emphasis: “Lightning can handle many ML projects that naturally factor into a single training loop, but does not help so much for other projects.”

If my project does have such a factorisation, Lightning is extremely useful and will do all kinds of easy parallelisation, natural code organisation and so forth. But if I am doing something like posterior sampling, or nested iterations, or optimisation at inference time, I find myself spending more time fighting the framework than working with it.

If I want the generic scaling up, I might find myself trying one of the generic solutions like Horovod.

Compare and contrast: ignite?

Lightning tips

Like python itself, much messy confusion is involved in making everything seem tidy and obvious.

The Trainer class is hard to understand because it is an object defined across many files and mixins with confusing names.

One useful thing to know is that a Trainer has a model member which contains the actual LightningModule that I am training.

If I subclass ModelCheckpoint then I feel like the on_save_checkpoint method should be called as often as _save_model, but it is not. TODO: investigate this.

on_train_batch_end does not get access to anything output by the batch AFAICT, only the epoch-end callback gets the output argument filled in. See the code comments.

Probabilistic programming

There is a lot to say here. For me at least, probabilistic programming is the killer app of pytorch. Various frameworks do clever probabilistic things, notably pyro.

Stochastic gradients

There is some stochastic gradient infrastructure in pyro, in the sense of differentiation through integrals: both classic score-function methods and reparameterisations, and probably others. See, e.g., Storchastic (van Krieken, Tomczak, and Teije 2021).
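The reparameterisation flavour is already exposed in plain pytorch via torch.distributions: rsample draws through a differentiable transform so gradients flow back to the distribution’s parameters. A quick sketch (the sample size and tolerance are arbitrary choices of mine):

```python
import torch

mu = torch.tensor(0.5, requires_grad=True)
dist = torch.distributions.Normal(mu, 1.0)

# rsample uses the reparameterisation trick (z = mu + sigma * eps),
# so gradients flow through the sample back to mu.
z = dist.rsample((10000,))
loss = (z ** 2).mean()  # Monte Carlo estimate of E[z^2] = mu^2 + 1
loss.backward()
# d E[z^2] / d mu = 2 * mu = 1.0; mu.grad should be close to that.
```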


Catalyst

I think this fills a similar niche to lightning? The Catalyst homepage blurb seems to hit the same notes as lightning with a couple of sweeteners - e.g. it claims to support jax and tensorflow.

See also source and blogposts such as this one.


One can hack the backward gradient to impose regularising penalties, but why not just use one of the pre-rolled ones by Szymon Maszke?
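The “hack the backward gradient” trick is a one-liner with tensor hooks. A hypothetical example of my own, clamping a gradient as a crude regulariser:

```python
import torch

w = torch.randn(5, requires_grad=True)

# A hook that rewrites the gradient during the backward pass -- here, clamping it.
w.register_hook(lambda grad: grad.clamp(-0.1, 0.1))

loss = (w ** 2).sum()  # the raw gradient would be 2 * w, possibly large
loss.backward()
# w.grad is now clamped into [-0.1, 0.1]
```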


I am thinking especially of audio. Not too bad. Keunwoo Choi has some beautiful examples, e.g. Inverse STFT, Harmonic Percussive separation.

Today we have torchaudio, or alternatively, from Dorien Herremans’ lab, nnAudio (Source), which is similar but has fewer dependencies.

Einstein convention

Einstein convention is supported by pytorch as torch.einsum.
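For example, a matrix product and a batched trace in einsum notation:

```python
import torch

A = torch.randn(3, 4)
B = torch.randn(4, 5)

# Matrix product in Einstein notation: C_ik = sum_j A_ij * B_jk
C = torch.einsum("ij,jk->ik", A, B)

# Batched trace: sum over the repeated index of each matrix in the batch.
M = torch.randn(10, 4, 4)
traces = torch.einsum("bii->b", M)
```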

Einops (Rogozhnikov 2022) is more general. It is not specific to pytorch, but the best tutorials are for pytorch:

Note that there was a hyped similar project, Tensor Comprehensions in PyTorch (see announcement), which apparently compiled the operations to CUDA kernels. It seems discontinued.


KeOps

The KeOps library lets you compute reductions of large arrays whose entries are given by a mathematical formula or a neural network. It combines efficient C++ routines with an automatic differentiation engine and can be used with Python (NumPy, PyTorch), Matlab and R.

It is perfectly suited to the computation of kernel matrix-vector products, K-nearest neighbors queries, N-body interactions, point cloud convolutions and the associated gradients. Crucially, it performs well even when the corresponding kernel or distance matrices do not fit into the RAM or GPU memory. Compared with a PyTorch GPU baseline, KeOps provides a x10-x100 speed-up on a wide range of geometric applications, from kernel methods to geometric deep learning.

fastai

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes:

  • A new type dispatch system for Python along with a semantic type hierarchy for tensors
  • A GPU-optimized computer vision library which can be extended in pure Python
  • An optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code
  • A novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training
  • A new data block API


Like other deep learning frameworks, there is some basic NLP support in pytorch; see torchtext.

flair is a commercially-backed NLP framework.


I do not do much NLP but if I did I might use the helpful utility functions in AllenNLP.

Outside of NLP there is a system of params and registrable which is very handy for defining various experiments via easy JSON config. Pro-tip from Alasdair Tran: use YAML; it is even nicer because it can handle comments.

That is handy, but beware, AllenNLP is a heavy dependency because it imports megabytes of code that is mostly about NLP, and some of that code has fragile dependencies. Perhaps the native pytorch lightning YAML config is enough.

allenai/allennlp: An open-source NLP research library, built on PyTorch.


Kornia

Kornia is a differentiable computer vision library for pytorch. It includes such niceties as differentiable image warping via the grid_sample thing.
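A sketch of grid_sample doing (here trivially, the identity) differentiable warping; real warps perturb the grid, and gradients flow through both image and grid. This assumes a recentish pytorch for meshgrid’s indexing argument:

```python
import torch
import torch.nn.functional as F

img = torch.arange(16.0).reshape(1, 1, 4, 4)  # (N, C, H, W)

# An identity sampling grid in normalised [-1, 1] coordinates.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 4), torch.linspace(-1, 1, 4), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # (N, H, W, 2), (x, y) order

warped = F.grid_sample(img, grid, align_corners=True)
# With the identity grid, the output reproduces the input.
```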


Memory leaks

Apparently you use normal python garbage collector analysis.

A snippet that shows all the currently allocated Tensors:

import torch
import gc
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
    except Exception:
        pass

See also the usual python debugging. NB VS Code has integrated pytorch debugging support.


Multiprocessing

The default cluster modes of python behave weirdly for pytorch tensors and especially gradients. Pytorch has its own clone of python’s multiprocessing, torch.multiprocessing. See Multiprocessing best practices — PyTorch 1.12 documentation.


Baydin, Atılım Güneş, Lei Shao, Wahid Bhimji, Lukas Heinrich, Lawrence Meadows, Jialin Liu, Andreas Munk, et al. 2019. “Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale.” In arXiv:1907.03382 [Cs, Stat].
Charlier, Benjamin, Jean Feydy, Joan Alexis Glaunès, François-David Collin, and Ghislain Durif. 2021. “Kernel Operations on the GPU, with Autodiff, Without Memory Overflows.” Journal of Machine Learning Research 22 (74): 1–6.
Cheuk, Kin Wai, Kat Agres, and Dorien Herremans. 2019. “nnAUDIO: A Pytorch Audio Processing Tool Using 1d Convolution Neural Networks,” 2.
Dangel, Felix, Frederik Kunstner, and Philipp Hennig. 2019. “BackPACK: Packing More into Backprop.” In International Conference on Learning Representations.
Daxberger, Erik, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. 2021. “Laplace Redux — Effortless Bayesian Deep Learning.” In arXiv:2106.14806 [Cs, Stat].
Immer, Alexander, Matthias Bauer, Vincent Fortuin, Gunnar Rätsch, and Mohammad Emtiyaz Khan. 2021. “Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning.” arXiv:2104.04975 [Cs, Stat], June.
Immer, Alexander, Maciej Korzepa, and Matthias Bauer. 2021. “Improving Predictions of Bayesian Neural Nets via Local Linearization.” In International Conference on Artificial Intelligence and Statistics, 703–11. PMLR.
Krieken, Emile van, Jakub M. Tomczak, and Annette ten Teije. 2021. “Storchastic: A Framework for General Stochastic Automatic Differentiation.” In arXiv:2104.00428 [Cs, Stat].
Le, Tuan Anh, Atılım Güneş Baydin, and Frank Wood. 2017. “Inference Compilation and Universal Probabilistic Programming.” In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 54:1338–48. Proceedings of Machine Learning Research. Fort Lauderdale, FL, USA: PMLR.
Lezcano Casado, Mario. 2019. “Trivializations for Gradient-Based Optimization on Manifolds.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc.
Rogozhnikov, Alex. 2022. “Einops: Clear and Reliable Tensor Manipulations with Einstein-Like Notation,” 21.
Smith, Daniel G. A., and Johnnie Gray. 2018. “Opt_einsum - A Python Package for Optimizing Contraction Order for Einsum-Like Expressions.” Journal of Open Source Software 3 (26): 753.
Yao, Zhewei, Amir Gholami, Kurt Keutzer, and Michael Mahoney. 2020. “PyHessian: Neural Networks Through the Lens of the Hessian.” In arXiv:1912.07145 [Cs, Math].
