Genetic programming

2015-12-21 — 2023-08-02

Wherein a nature-inspired method is recounted, in which programs are evolved by selection and recombination, is applied notably to symbolic regression problems, and is examined with historical and theoretical notes.

agents
distributed
optimization
probabilistic algorithms

A nature-inspired approach to computing that mimics biological evolution to evolve code. This method has fallen out of favour lately because it is typically not as good in practice as backprop (e.g. Brauer et al. 2002). The kinds of problems it seems like it might solve, such as symbolic regression, have alternatives that do pretty well, like neural automata, neural transformers, or Bayesian symbolic regression (Jin et al. 2020).

Nonetheless, there is some interesting theory here, some interesting history, and it might possibly be the right tool for some jobs.

Hence, this notebook.
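To make the symbolic-regression setting concrete, here is a toy tree-based genetic program in the spirit of Koza (1992): expression trees are mutated and recombined, and truncation selection keeps the lowest-error trees. Everything here — the function set, the selection scheme, the bloat cap, the hyperparameters — is an illustrative sketch, not any particular library's API.

```python
import math
import random

# Toy tree-based GP for symbolic regression: recover f(x) = x^2 + x from
# samples. Trees are nested tuples (op, left, right); leaves are "x" or a
# float constant.

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_tree(depth=3):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.uniform(-2.0, 2.0)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, xs, ys):
    """Mean squared error on the training samples; lower is better."""
    try:
        err = sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    except OverflowError:
        return float("inf")
    return err if math.isfinite(err) else float("inf")

def size(tree):
    return 1 if not isinstance(tree, tuple) else 1 + size(tree[1]) + size(tree[2])

def mutate(tree):
    """Replace one randomly chosen subtree with a fresh random subtree."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def crossover(a, b):
    """Graft (a subtree of) b into a at a random position."""
    if not isinstance(a, tuple) or random.random() < 0.3:
        return b if not isinstance(b, tuple) else random.choice(b[1:])
    op, left, right = a
    if random.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

def evolve(xs, ys, pop_size=120, generations=25, max_size=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, xs, ys))
        survivors = pop[: pop_size // 4]  # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(survivors):
            child = mutate(crossover(random.choice(survivors), random.choice(survivors)))
            # crude bloat control: discard oversized trees
            children.append(child if size(child) <= max_size else random_tree())
        pop = survivors + children
    return min(pop, key=lambda t: fitness(t, xs, ys))

random.seed(0)
xs = [i / 10 for i in range(-20, 21)]
ys = [x ** 2 + x for x in xs]
best = evolve(xs, ys)
print("best MSE:", round(fitness(best, xs, ys), 4))
```

The size cap is doing real work here: unconstrained crossover lets trees grow without bound ("bloat"), one of the classic practical headaches of GP.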

To consider: connections of evolutionary learning to adversarial learning, optimisation theory, particle filters, and importance sampling.

Most of the interesting action is happening at evolutionary strategies, a family of methods that takes evolution-like approaches to training high-dimensional objects such as neural nets, and which is more competitive with backprop than genetic programming is with other methods for symbolic regression. Evolutionary strategies also reproduce the many details of biological evolution less slavishly than genetic programming does: they are less a direct analogue of evolution by mutation, recombination, and selection of genomes than a clever Monte Carlo method for optimization.
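The "clever Monte Carlo method" framing can be made concrete with a minimal natural-evolution-strategies-flavoured sketch: perturb a parameter vector with Gaussian noise, weight the perturbations by fitness, and step along the resulting gradient estimate — no backprop required. The step sizes, population size, and baseline trick below are illustrative choices, not a canonical recipe.

```python
import random

# ES as Monte Carlo optimization: the fitness-weighted average of Gaussian
# perturbations estimates the gradient of the Gaussian-smoothed objective.

def es_step(theta, f, sigma=0.1, lr=0.02, pop=100):
    """One ES update on parameter vector theta, maximizing f."""
    n = len(theta)
    samples = []
    for _ in range(pop):
        eps = [random.gauss(0.0, 1.0) for _ in range(n)]
        reward = f([t + sigma * e for t, e in zip(theta, eps)])
        samples.append((reward, eps))
    baseline = sum(r for r, _ in samples) / pop  # variance reduction
    grad = [0.0] * n
    for r, eps in samples:
        for i in range(n):
            grad[i] += (r - baseline) * eps[i]
    # Monte Carlo estimate of grad_theta E[f(theta + sigma * eps)]
    return [t + lr * g / (pop * sigma) for t, g in zip(theta, grad)]

def f(x):
    """Toy objective with its maximum at x = 3."""
    return -(x[0] - 3.0) ** 2

random.seed(1)
theta = [0.0]
for _ in range(200):
    theta = es_step(theta, f)
print("theta:", [round(t, 3) for t in theta])
```

Note that nothing here looks much like genomes or recombination; the population exists purely to reduce the variance of a gradient estimate.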

TBC, maybe.

1 Tools

2 References

Atkinson, Subber, and Wang. 2019. “Data-Driven Discovery of Free-Form Governing Differential Equations.” In.
Bown, and Lexer. 2006. “Continuous-Time Recurrent Neural Networks for Generative and Interactive Musical Performance.” In Applications of Evolutionary Computing. Lecture Notes in Computer Science 3907.
Brauer, Holder, Dries, et al. 2002. “Genetic Algorithms and Parallel Processing in Maximum-Likelihood Phylogeny Inference.” Molecular Biology and Evolution.
Collins. 2002. “Experiments with a New Customisable Interactive Evolution Framework.” Organized Sound.
Floreano, and Mattiussi. 2008. Bio-Inspired Artificial Intelligence: Theories, Methods, and Technologies (Intelligent Robotics and Autonomous Agents).
Genetic Programming. 2000.
Harper. 2010. “The Replicator Equation as an Inference Dynamic.”
Jefferson, Collins, Cooper, et al. 1992. “The Genesys System: Evolution as a Theme in Artificial Life.” In Proceedings of Second Conference on Artificial Life.
Jin, Fu, Kang, et al. 2020. “Bayesian Symbolic Regression.” arXiv:1910.08892 [Stat].
Koza. 1992. Genetic Programming: On the Programming of Computers by Means of Natural Selection (Complex Adaptive Systems).
Lehman. 2007. “Evolution Through the Search for Novelty.”
Lehman, Gordon, Jain, et al. 2022. “Evolution Through Large Models.”
Lehman, and Stanley. 2011. “Abandoning Objectives: Evolution Through the Search for Novelty Alone.” Evolutionary Computation.
Levy. 1993. Artificial Life: A Report from the Frontier Where Computers Meet Biology.
Mitchell. 1996. An Introduction to Genetic Algorithms.
Mitchell, Hraber, and Crutchfield. 1993. “Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations.” arXiv:adap-Org/9303003.
Poli, Langdon, and McPhee. 2008. A Field Guide to Genetic Programming.
Poli, Vanneschi, Langdon, et al. 2010. “Theoretical Results in Genetic Programming: The Next Ten Years?” Genetic Programming and Evolvable Machines.
Shalizi. 2009. “Dynamics of Bayesian Updating with Dependent Data and Misspecified Models.” Electronic Journal of Statistics.
Stanley. 2007. “Compositional Pattern Producing Networks: A Novel Abstraction of Development.” Genetic Programming and Evolvable Machines.
Vanchurin, Wolf, Katsnelson, et al. 2021. “Towards a Theory of Evolution as Multilevel Learning.”
Whitley, Starkweather, and Bogart. 1990. “Genetic Algorithms and Neural Networks: Optimizing Connections and Connectivity.” Parallel Computing.
Zhang, Lehman, Stanley, et al. 2023. “OMNI: Open-Endedness via Models of Human Notions of Interestingness.”