# Automatic differentiation

Getting your computer to tell you the gradient of a function, without resorting to finite difference approximation, or coding an analytic derivative by hand. We usually mean this in the sense of automatic forward or reverse mode differentiation, which is not, as such, a symbolic technique, but symbolic differentiation gets an incidental look-in, and these ideas do of course relate.

Infinitesimal/Taylor series formulations, the related dual number formulations, and even fancier hyperdual formulations. Reverse-mode, a.k.a. Backpropagation, versus forward-mode etc. Computational complexity of all the above.

There are many ways to do automatic differentiation, and I won’t attempt to comprehensively introduce the various approaches. This is a well-ploughed field; there is much good material out there already with fancy diagrams and the like. Symbolic, numeric, dual/forward, backwards mode… Notably, you don’t have to choose between them: you can, e.g., use forward differentiation to calculate an expedient step in the middle of backward differentiation.
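To make the dual-number formulation concrete, here is a minimal forward-mode sketch in plain Python (the `Dual` class and `derivative` helper are illustrative names of mine, not any library’s API). Overloading `+` and `*` so that the coefficient of $$\epsilon$$ obeys the product rule is all it takes to differentiate polynomial code:

```python
class Dual:
    """A dual number a + b*eps with eps**2 == 0; b carries the derivative."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # the product rule falls out of (a + b*eps)(c + d*eps) with eps**2 = 0
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    """Forward-mode derivative of f at x: seed the eps-part with 1."""
    return f(Dual(x, 1.0)).eps

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
print(derivative(f, 5.0))             # 32.0
```

One forward pass yields the derivative along one input direction, which is why forward mode is cheap for few inputs and expensive for the many-parameters-one-loss case.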

You might want to do this for ODE quadrature, or sensitivity analysis, or optimisation, whether batch or SGD, especially in neural networks, matrix factorisations, variational approximation etc. This is not news these days, but it took a stunningly long time to become common after its inception in the… 1970s? See, e.g. Justin Domke, who claimed automatic differentiation to be the most criminally underused tool in the machine learning toolbox. (That escalated quickly.) See also a timely update by Tim Vieira.

There is a beautiful explanation of the basics of reverse mode by Sanjeev Arora and Tengyu Ma. See also Mike Innes’ hands-on introduction, or his terse, opinionated introductory paper, Innes (2018). There is a well-established terminology for sensitivity analysis discussing adjoints, e.g. Steven Johnson’s class notes, and his references.

## Terminology zoo

Too many words mean the same thing here, or broad terms get quirky specialised uses. Needs some disambiguation.

## Who invented backpropagation?

There is an adorable cottage industry in arguing about who first applied reverse-mode autodiff to networks. See, e.g. Schmidhuber’s blog post, Griewank (2012) and Schmidhuber (2015), a reddit thread and so on.

🏗

## Forward- versus reverse-mode

🏗
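Pending a proper write-up, a minimal illustration of reverse mode in plain Python, micrograd-style (all names are mine, illustrative only): each operation records its parents and local derivatives, and `backward()` sweeps the graph in reverse topological order, accumulating the chain rule. One backward pass gives the gradient with respect to *all* inputs, which is why reverse mode wins for scalar losses over many parameters:

```python
class Var:
    """Scalar node in a computation graph; .grad is filled in by backward()."""
    def __init__(self, val, parents=()):
        # parents: sequence of (parent_node, local_derivative) pairs
        self.val, self.grad, self.parents = val, 0.0, parents

    def __add__(self, other):
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.val * other.val, [(self, other.val), (other, self.val)])

    def backward(self):
        # reverse topological sweep, accumulating chain-rule products
        topo, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v.parents:
                    visit(p)
                topo.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(topo):
            for p, local in v.parents:
                p.grad += local * v.grad

x, y = Var(3.0), Var(4.0)
z = x * y + x * x        # dz/dx = y + 2x = 10, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)    # 10.0 3.0
```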

TaylorSeries.jl is an implementation of high-order automatic differentiation, as presented in the book by Tucker (2011). The general idea is the following.

The Taylor series expansion of an analytical function $$f(t)$$ with one independent variable $$t$$ around $$t_0$$ can be written as

$f(t) = f_0 + f_1 (t-t_0) + f_2 (t-t_0)^2 + \cdots + f_k (t-t_0)^k + \cdots,$ where $$f_0=f(t_0)$$, and the Taylor coefficients $$f_k = f_k(t_0)$$ are the $$k$$th normalized derivatives at $$t_0$$:

$f_k = \frac{1}{k!} \frac{{\rm d}^k f} {{\rm d} t^k}(t_0).$

Thus, computing the high-order derivatives of $$f(t)$$ is equivalent to computing its Taylor expansion. […] Arithmetic operations involving Taylor series can be expressed as operations on the coefficients.
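As a sketch of that last point (illustrative code of mine, not TaylorSeries.jl’s API): multiplying two truncated Taylor series is just a Cauchy product on their coefficient lists, so high-order derivatives of products come for free:

```python
from math import factorial

def taylor_mul(f, g):
    """Cauchy product: the kth coefficient of f*g is sum_j f_j * g_{k-j}."""
    n = min(len(f), len(g))
    return [sum(f[j] * g[k - j] for j in range(k + 1)) for k in range(n)]

exp_coeffs = [1 / factorial(k) for k in range(6)]   # exp(t) around t0 = 0
sq = taylor_mul(exp_coeffs, exp_coeffs)             # exp(t)**2 = exp(2t)
print(sq[:4])   # coefficients of exp(2t), i.e. 2**k / k!
```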

## Symbolic differentiation

If you have already calculated the symbolic derivative, you can of course use this as a kind of automatic derivative. It might even be faster.

Calculating symbolic derivatives can itself be automated. Symbolic math packages such as Sympy, MAPLE and Mathematica can all do actual symbolic differentiation, which is different again, but sometimes leads to the same thing. I haven’t tried Sympy or MAPLE, but Mathematica’s support for matrix calculus is weak, and since I usually need matrix derivatives, this particular task has not been automated for me.
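For intuition about what such packages do under the hood, a toy symbolic differentiator over expression trees takes only a few lines (a sketch with made-up conventions, nothing like Sympy’s actual internals):

```python
def d(expr, var):
    """Symbolic derivative in a tiny expression language:
    an expression is a number, a variable name, or ('+'|'*', left, right)."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':
        return ('+', d(a, var), d(b, var))
    if op == '*':  # product rule
        return ('+', ('*', d(a, var), b), ('*', a, d(b, var)))
    raise ValueError(op)

def ev(expr, env):
    """Evaluate an expression tree against a variable environment."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    return ev(a, env) + ev(b, env) if op == '+' else ev(a, env) * ev(b, env)

e = ('*', 'x', ('+', 'x', 3))        # x*(x+3), derivative 2x + 3
print(ev(d(e, 'x'), {'x': 2.0}))     # 7.0
```

Note that the unsimplified derivative tree grows with every rule application; this expression swell is the classic argument for forward/reverse autodiff over naive symbolic differentiation.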

## In implicit targets

Long story. For use in, e.g. Implicit NN.

A beautiful explanation can be found in Blondel et al. (2021).
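The core idea: if $$x^\star(\theta)$$ solves $$F(x, \theta) = 0$$, the implicit function theorem gives $$\partial x^\star/\partial\theta = -(\partial F/\partial x)^{-1}\,\partial F/\partial\theta$$ at the solution, so we never differentiate through the solver’s iterations. A scalar sketch (function names and the particular fixed-point problem are mine, chosen for illustration):

```python
from math import tanh, cosh

def F(x, theta):
    # residual whose root defines x*(theta): F(x*(theta), theta) = 0
    return x - tanh(theta + 0.5 * x)

def solve(theta, x=0.0):
    # crude fixed-point iteration for the root (a contraction here)
    for _ in range(200):
        x = tanh(theta + 0.5 * x)
    return x

def dx_dtheta(theta):
    """Implicit function theorem: dx/dtheta = -F_theta / F_x at the root."""
    x = solve(theta)
    s = 1.0 / cosh(theta + 0.5 * x) ** 2   # sech^2, the derivative of tanh
    F_x, F_theta = 1.0 - 0.5 * s, -s
    return -F_theta / F_x

# sanity check against a finite difference pushed through the whole solver
theta, h = 0.7, 1e-6
fd = (solve(theta + h) - solve(theta - h)) / (2 * h)
print(dx_dtheta(theta), fd)  # the two agree closely
```

The same formula with Jacobians instead of scalars is what makes differentiating deep-equilibrium layers and optimisation layers tractable.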

To do: investigate Benoît Pasquier’s F-1 Method.

This package implements the F-1 algorithm […] It allows for efficient quasi-auto-differentiation of an objective function defined implicitly by the solution of a steady-state problem.

## In ODEs

A trick in automatic differentiation which happens to be useful in differentiating likelihoods (or other functions) of time-evolving systems, e.g. Errico (1997). Probably needs to spin off into its own notebook. See Kidger, Chen, and Lyons (2020); Kidger et al. (2020); Li et al. (2020); Rackauckas et al. (2018); Stapor, Fröhlich, and Hasenauer (2018); Cao et al. (2003).
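The simplest of these tricks is forward sensitivity analysis: augment the ODE with the sensitivity $$s = \partial x/\partial\theta$$, which satisfies $$\dot s = (\partial f/\partial x)\,s + \partial f/\partial\theta$$, and integrate both together. A hand-rolled Euler sketch for $$\dot x = -\theta x$$ (illustrative code, not any library’s API):

```python
from math import exp

def sensitivity(theta, x0=1.0, T=1.0, n=100_000):
    """Euler-integrate dx/dt = -theta*x together with its sensitivity
    s = dx/dtheta, which obeys ds/dt = -theta*s - x with s(0) = 0."""
    dt, x, s = T / n, x0, 0.0
    for _ in range(n):
        # tuple assignment so both updates use the old (x, s)
        x, s = x + dt * (-theta * x), s + dt * (-theta * s - x)
    return x, s

x, s = sensitivity(0.5)
# analytic solution: x(T) = x0*exp(-theta*T), dx/dtheta = -T*x0*exp(-theta*T)
print(x, s)
```

Adjoint (reverse) sensitivity methods do the analogous thing backwards in time, which is cheaper when there are many parameters and one loss.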

## Hessians in neural nets

We are getting better at estimating second-order derivatives in yet more adverse circumstances. For example, see the pytorch Hessian tools.
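One cheap route to Hessian-vector products without ever materialising the Hessian: push forward-mode dual numbers through a gradient function (forward-over-reverse, if that gradient itself came from backprop). A hand-rolled sketch in which an analytic gradient stands in for a reverse-mode one (all names are mine, illustrative only):

```python
class Dual:
    """a + b*eps with eps**2 == 0; b carries a directional derivative."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def grad_f(x, y):
    # hand-coded gradient of f(x, y) = x*x*y + y*y, standing in for a
    # reverse-mode gradient; the Hessian rows are [2y, 2x] and [2x, 2]
    return (2 * x * y, x * x + 2 * y)

def hvp(grad, point, v):
    """Hessian-vector product H @ v: seed the inputs' eps-parts with v
    and read the eps-parts of the gradient."""
    duals = [Dual(p, vi) for p, vi in zip(point, v)]
    return [g.eps for g in grad(*duals)]

print(hvp(grad_f, (1.0, 2.0), (1.0, 0.0)))  # first Hessian column: [4.0, 2.0]
```

This is the trick behind `jvp`-over-`grad` style Hessian-vector products in jax and pytorch: cost is a small constant multiple of one gradient, independent of dimension.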

## Software

In decreasing order of relevance to me personally.

### jax

jax (python) is a successor to classic python autograd.

JAX is Autograd and XLA, brought together for high-performance machine learning research.

I use it a lot; see jax.

### Pytorch

Another neural-net style thing like tensorflow, but with dynamic graph construction as in autograd.

### Julia

Julia has an embarrassment of different methods of autodiff (homoiconicity and introspection make this comparatively easy), and the comparative selling points of each are not always clear.

Anyway, there is enough going on there that it needs its own page. See Julia Autodiff.

### Tensorflow

Not a fan, but it certainly does work. See Tensorflow. FYI there is an interesting discussion of its workings in the tensorflow Jacobians feature-request ticket.

### Aesara

Aesara at a Glance

Aesara is a Python library that allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays. It can use GPUs and perform efficient symbolic differentiation.

This is a fork of the original Theano library that is being maintained by the PyMC team.

• A hackable, pure-Python codebase
• Extensible graph framework suitable for rapid development of custom symbolic optimizations
• Implements an extensible graph transpilation framework that currently provides compilation to C and JAX JITed Python functions
• Built on top of one of the most widely-used Python tensor libraries: Theano

Aesara combines aspects of a computer algebra system (CAS) with aspects of an optimizing compiler. It can also generate customized C code for many mathematical operations. This combination of CAS with optimizing compilation is particularly useful for tasks in which complicated mathematical expressions are evaluated repeatedly and evaluation speed is critical. For situations where many different expressions are each evaluated once Aesara can minimize the amount of compilation/analysis overhead, but still provide symbolic features such as automatic differentiation.

### taichi

Taichi is a physics-simulation-and-graphics oriented library with clever compilation to various backends, embedded in python:

As a data-oriented programming language, Taichi decouples computation from data organization. For example, you can freely switch between arrays of structures (AOS) and structures of arrays (SOA), or between multi-level pointer arrays and simple dense arrays. Taichi has native support for sparse data structures, and the Taichi compiler effectively simplifies data structure accesses. This allows users to compose data organization components into complex hierarchical and sparse structures. The Taichi compiler optimizes data access.

We have developed 10 different differentiable physical simulators using Taichi, for deep learning and robotics tasks. Thanks to the built-in reverse-mode automatic differentiation system, most of these differentiable simulators are developed within only 2 hours. Accurate gradients from these differentiable simulators make controller optimization orders of magnitude faster than reinforcement learning.

I wouldn’t use this any longer. A better-supported near drop-in replacement is jax which is much faster and better documented.

### autograd

Autograd

can automatically differentiate native Python and Numpy code. It can handle a large subset of Python’s features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives. It uses reverse-mode differentiation (a.k.a. backpropagation), which means it can efficiently take gradients of scalar-valued functions with respect to array-valued arguments. The main intended application is gradient-based optimization.

AFAICT deprecated in favour of jax.

autograd-forward mingles forward-mode differentiation in, to calculate Jacobian-vector products and Hessian-vector products for scalar-valued loss functions, which is useful for classic optimization.

Andrej Karpathy’s teaching library micrograd is a 50 line scalar autograd library from which you can learn cool things.

### Enzyme

Applying differentiable programming techniques and machine learning algorithms to foreign programs requires developers to either rewrite their code in a machine learning framework, or otherwise provide derivatives of the foreign code. This paper presents Enzyme, a high-performance automatic differentiation (AD) compiler plugin for the LLVM compiler framework capable of synthesizing gradients of statically analyzable programs expressed in the LLVM intermediate representation (IR). Enzyme synthesizes gradients for programs written in any language whose compiler targets LLVM IR including C, C++, Fortran, Julia, Rust, Swift, MLIR, etc., thereby providing native AD capabilities in these languages. Unlike traditional source-to-source and operator-overloading tools, Enzyme performs AD on optimized IR. […] Packaging Enzyme for PyTorch and TensorFlow provides convenient access to gradients of foreign code with state-of-the-art performance, enabling foreign code to be directly incorporated into existing machine learning workflows.

Sounds great but I suspect that in practice there is still a lot of work required to make this go.

### Theano

Mentioned for historical accuracy.

Theano (Python) supports autodiff as a basic feature and had a massive user base, although it is now discontinued in favour of other options. See Aesara for a direct successor, and jax/pytorch/tensorflow for some more widely used alternatives.

### CasADi

A classic is CasADi (Python, C++, MATLAB)

a symbolic framework for numeric optimization implementing automatic differentiation in forward and reverse modes on sparse matrix-valued computational graphs. It supports self-contained C-code generation and interfaces state-of-the-art codes such as SUNDIALS, IPOPT etc. It can be used from C++, Python or Matlab

[…] CasADi is an open-source tool, written in self-contained C++ code, depending only on the C++ Standard Library.

Documentation is sparse; you should probably read the source or the published papers to understand how well it will fit your needs and, e.g., which arithmetic operations it supports.

It might be worth it for features such as graceful support for 100-fold nonlinear composition. It also includes ODE sensitivity analysis (differentiating through ODE solvers), which predates lots of fancypants “neural ODEs”. The price you pay is a weird DSL that you must learn, and, unlike many of its trendy peers, no GPU support.

### KeOps

File under least squares, autodiff, GPs, pytorch.

The KeOps library lets you compute reductions of large arrays whose entries are given by a mathematical formula or a neural network. It combines efficient C++ routines with an automatic differentiation engine and can be used with Python (NumPy, PyTorch), Matlab and R.

It is perfectly suited to the computation of kernel matrix-vector products, K-nearest neighbors queries, N-body interactions, point cloud convolutions and the associated gradients. Crucially, it performs well even when the corresponding kernel or distance matrices do not fit into the RAM or GPU memory. Compared with a PyTorch GPU baseline, KeOps provides a x10-x100 speed-up on a wide range of geometric applications, from kernel methods to geometric deep learning.

### ADOL-C

Another classic. ADOL-C is a popular C++ differentiation library with Python bindings. Looks clunky from Python but tenable from C++.

### ceres solver

ceres-solver, (C++), the google least squares solver, seems to have some good tricks, mostly focussed on least-squares losses.

### audi

autodiff, which is usually referred to as audi for the sake of clarity, offers light automatic differentiation for MATLAB. I think MATLAB now has a whole deep learning toolkit built in which surely supports something natively in this domain.

### algopy

allows you to differentiate functions implemented as computer programs by using Algorithmic Differentiation (AD) techniques in the forward and reverse mode. The forward mode propagates univariate Taylor polynomials of arbitrary order. Hence it is also possible to use AlgoPy to evaluate higher-order derivative tensors.

A speciality of AlgoPy is the possibility to differentiate functions that contain matrix functions such as +, -, *, /, dot, solve, qr, eigh, cholesky.

Looks sophisticated, and indeed supports differentiation elegantly; but not so actively maintained, and the source code is hard to find.

## References

Andersson, Joel A. E., Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. 2019. Mathematical Programming Computation 11 (1): 1–36.
Arya, Gaurav, Moritz Schauer, Frank Schäfer, and Christopher Vincent Rackauckas. 2022. In.
Baydin, Atilim Gunes, and Barak A. Pearlmutter. 2014. arXiv:1404.7456 [Cs, Stat], April.
Baydin, Atilim Gunes, Barak A. Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. 2015. arXiv:1502.05767 [Cs], February.
———. 2018. Journal of Machine Learning Research 18 (153): 1–43.
Baydin, Atılım Güneş, Barak A. Pearlmutter, and Jeffrey Mark Siskind. 2016. arXiv:1611.03777 [Cs, Stat], November.
Blondel, Mathieu, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, and Jean-Philippe Vert. 2021. arXiv:2105.15183 [Cs, Math, Stat], October.
Bolte, Jérôme, and Edouard Pauwels. 2020. In Advances in Neural Information Processing Systems. Vol. 33.
Cao, Y., S. Li, L. Petzold, and R. Serban. 2003. SIAM Journal on Scientific Computing 24 (3): 1076–89.
Carpenter, Bob, Matthew D. Hoffman, Marcus Brubaker, Daniel Lee, Peter Li, and Michael Betancourt. 2015. arXiv Preprint arXiv:1509.07164.
Charlier, Benjamin, Jean Feydy, Joan Alexis Glaunès, François-David Collin, and Ghislain Durif. 2021. Journal of Machine Learning Research 22 (74): 1–6.
Dangel, Felix, Frederik Kunstner, and Philipp Hennig. 2019. In International Conference on Learning Representations.
Errico, Ronald M. 1997. Bulletin of the American Meteorological Society 78 (11): 2577–92.
Fike, Jeffrey, and Juan Alonso. 2011. In 49th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition. Orlando, Florida: American Institute of Aeronautics and Astronautics.
Fischer, Keno, and Elliot Saba. 2018. arXiv:1810.09868 [Cs, Stat], October.
Gallier, Jean, and Jocelyn Quaintance. 2022. Algebra, Topology, Diﬀerential Calculus, and Optimization Theory For Computer Science and Machine Learning.
Giles, Mike B. 2008. In Advances in Automatic Differentiation, edited by Christian H. Bischof, H. Martin Bücker, Paul Hovland, Uwe Naumann, and Jean Utke, 64:35–44. Berlin, Heidelberg: Springer Berlin Heidelberg.
Gower, R. M., and A. L. Gower. 2016. Mathematical Programming 155 (1-2): 81–103.
Griewank, Andreas. 2012. Documenta Mathematica, 12.
Griewank, Andreas, and Andrea Walther. 2008. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. 2nd ed. Philadelphia, PA: Society for Industrial and Applied Mathematics.
Hu, Yuanming, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, and Frédo Durand. 2020. In ICLR.
Hu, Yuanming, Tzu-Mao Li, Luke Anderson, Jonathan Ragan-Kelley, and Frédo Durand. 2019. ACM Transactions on Graphics 38 (6): 1–16.
Innes, Michael. 2018. arXiv:1810.07951 [Cs], October.
Ionescu, Catalin, Orestis Vantzos, and Cristian Sminchisescu. 2016. arXiv.
Jatavallabhula, Krishna Murthy, Ganesh Iyer, and Liam Paull. 2020. In 2020 IEEE International Conference on Robotics and Automation (ICRA), 2130–37. Paris, France: IEEE.
Johnson, Steven G. 2012. “Notes on Adjoint Methods for 18.335,” 6.
Kidger, Patrick, Ricky T Q Chen, and Terry Lyons. 2020. “‘Hey, That’s Not an ODE’: Faster ODE Adjoints with 12 Lines of Code.” In, 5.
Kidger, Patrick, James Morrill, James Foster, and Terry Lyons. 2020. arXiv:2005.08926 [Cs, Stat], November.
Laue, Soeren, Matthias Mitterreiter, and Joachim Giesen. 2018. In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 2750–59. Curran Associates, Inc.
Launay, Julien, Iacopo Poli, François Boniface, and Florent Krzakala. 2020. In Advances in Neural Information Processing Systems, 33:15.
Li, Xuechen, Ting-Kam Leonard Wong, Ricky T. Q. Chen, and David Duvenaud. 2020. In International Conference on Artificial Intelligence and Statistics, 3870–82. PMLR.
Maclaurin, Dougal, David Duvenaud, and Ryan Adams. 2015. In Proceedings of the 32nd International Conference on Machine Learning, 2113–22. PMLR.
Mogensen, Patrick K., and Asbjørn N. Riseth. 2018. Journal of Open Source Software 3 (24): 615.
Moses, William, and Valentin Churavy. 2020. Advances in Neural Information Processing Systems 33.
Neidinger, R. 2010. SIAM Review 52 (3): 545–63.
Neuenhofen, Martin. 2018. arXiv:1801.03614 [Cs], January.
Pasquier, B, and F Primeau. 2019. SIAM Journal on Scientific Computing, 10.
Rackauckas, Christopher, Yingbo Ma, Vaibhav Dixit, Xingjian Guo, Mike Innes, Jarrett Revels, Joakim Nyberg, and Vijay Ivaturi. 2018. arXiv:1812.01892 [Cs], December.
Rall, Louis B. 1981. Automatic Differentiation: Techniques and Applications. Lecture Notes in Computer Science 120. Berlin ; New York: Springer-Verlag.
Revels, Jarrett, Miles Lubin, and Theodore Papamarkou. 2016. arXiv:1607.07892 [Cs], July.
Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. Nature 323 (6088): 533–36.
Schmidhuber, Juergen. 2015. Scholarpedia 10 (11): 32832.
Schüle, Maximilian, Frédéric Simonis, Thomas Heyenbrock, Alfons Kemper, Stephan Günnemann, and Thomas Neumann. 2019.
Stapor, Paul, Fabian Fröhlich, and Jan Hasenauer. 2018. bioRxiv, February, 272005.
Tucker, Warwick. 2011. Validated numerics: a short introduction to rigorous computations. Princeton: Princeton University Press.
Yao, Zhewei, Amir Gholami, Kurt Keutzer, and Michael Mahoney. 2020. In arXiv:1912.07145 [Cs, Math].
