Genetic programming
2015-12-21 — 2023-08-02
Wherein a nature-inspired method, in which programs are evolved by selection and recombination, is recounted; its application, notably to symbolic regression problems, is noted; and historical and theoretical asides are offered.
A nature-inspired approach to computing that mimics evolution to evolve code. This method has fallen out of favour lately because it is typically not as good in practice as backprop (e.g. Brauer et al. 2002). The kinds of problems it seems like it might solve, notably symbolic regression, have alternatives that do pretty well, such as neural automata, neural transformers, or Bayesian symbolic regression (Jin et al. 2020).
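To fix ideas, here is a minimal sketch of tree-based genetic programming applied to symbolic regression. Everything in it (the function and terminal sets, the mutation-only variation operator, the target function, all parameter values) is my own illustrative choice, not drawn from any of the cited works; recombination, which swaps subtrees between two parents, works analogously to the subtree-replacement mutation shown here.

```python
# Minimal tree-based genetic programming for symbolic regression (illustrative).
import math
import operator
import random

random.seed(0)

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
LEAVES = ["x", 1.0, 2.0]  # terminal set: the variable and two constants

def random_tree(depth=3):
    """Grow a random expression tree: ('op', left, right) nodes, leaf terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, xs, ys):
    """Sum of squared errors; non-finite values are penalised to infinity."""
    err = sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys))
    return err if math.isfinite(err) else float("inf")

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Target: y = x^2 + x, recoverable as ('+', ('*', 'x', 'x'), 'x').
xs = [i / 4 for i in range(-8, 9)]
ys = [x * x + x for x in xs]

pop = [random_tree() for _ in range(200)]
for generation in range(30):
    # Truncation selection with elitism: keep the 50 fittest, refill by mutation.
    pop.sort(key=lambda t: fitness(t, xs, ys))
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]
best = min(pop, key=lambda t: fitness(t, xs, ys))
```

The point of the sketch is the shape of the loop (random program trees, a fitness function, variation, selection), not the particular parameter values, which are uncalibrated.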
Nonetheless, there is some interesting theory here, some interesting history, and it might possibly be the right tool for some jobs.
Hence, this notebook.
To consider: connection of evolutionary learning to adversarial learning, connections to optimisation theory, particle filters, importance sampling…
Most of the interesting action is happening at evolutionary strategies, a specific approach that takes evolution-like methods to training high-dimensional objects like NNs, and which is more competitive with backprop than genetic programming is with other methods for symbolic regression. It also reproduces the many details of biological evolution less slavishly than genetic programming does; it is more a clever Monte Carlo method for optimisation than a direct analogue of evolution by messy mutation and selection of genomes.
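The "clever Monte Carlo method" reading can be made concrete with a toy sketch: a basic evolution strategy estimates a search gradient by perturbing the parameters with Gaussian noise and reward-weighting the perturbations, then ascends that estimate. The objective, the antithetic (mirrored) sampling trick, and all parameter values below are my own illustrative choices, not from the source.

```python
# Minimal evolution-strategies sketch on a one-dimensional toy objective.
import random

random.seed(1)

def reward(w):
    # Toy objective to maximise; its optimum is at w = 3.
    return -(w - 3.0) ** 2

w, sigma, lr, n = 0.0, 0.1, 0.02, 50
for step in range(300):
    eps = [random.gauss(0.0, 1.0) for _ in range(n)]
    # Monte Carlo estimate of d/dw E[reward(w + sigma * eps)], using
    # mirrored samples +eps / -eps to reduce variance.
    grad = sum((reward(w + sigma * e) - reward(w - sigma * e)) * e
               for e in eps) / (2 * n * sigma)
    w += lr * grad  # ascend the estimated search gradient
```

No genomes, no crossover: the "population" is just a batch of perturbations used to estimate a gradient, which is what makes the method scale to high-dimensional parameter vectors.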
TBC, maybe.
