Symbolic regression
March 14, 2023 — December 23, 2023
Fajardo-Fontiveros et al. (2023):
[C]onsider a dataset \(D=\left\{\left(y_i, \mathbf{x}_i\right)\right\}\), with \(i=1, \ldots, N\), generated using the closed form model \(m^*\), so that \(y_i=m^*\left(\mathbf{x}_i, \theta^*\right)+\epsilon_i\) with \(\theta^*\) being the parameters of the model, and \(\epsilon_i\) a random unbiased observation noise drawn from the normal distribution with variance \(s_\epsilon^2\). […] [T]he question we are interested in is: Assuming that \(m^*\) can be expressed in closed form, when is it possible to identify it as the true generating model among all possible closed-form mathematical models, for someone who does not know the true model beforehand? Note that our focus is on learning the structure of the model \(m^*\) and not the values of the parameters \(\theta^*\), a problem that has received much more attention from the theoretical point of view. Additionally, we are interested in situations in which the dimension of the feature space \(\mathbf{x} \in \mathbb{R}^k\) is relatively small (compared to typical feature spaces in machine learning settings), which is the relevant regime for symbolic regression and model discovery.
That paper is particularly interesting for its connection to the statistical mechanics of statistics.
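To make the quoted setup concrete, here is a minimal sketch of data generated according to that model; the particular \(m^*\), parameter values, and noise level below are my own illustrative choices, not from the paper:

```python
# Data generated per the quoted setup: y_i = m*(x_i, theta*) + eps_i,
# with Gaussian noise of variance s_eps^2. The task is to recover the
# *structure* of m*, not just theta*. m* here is an illustrative toy choice.
import numpy as np

rng = np.random.default_rng(42)
N, k, s_eps = 200, 2, 0.1          # small feature dimension, per the quote
theta_star = (1.5, -0.7)

def m_star(x, theta):
    """Closed-form generating model (illustrative)."""
    a, b = theta
    return a * x[:, 0] * np.exp(b * x[:, 1])

X = rng.uniform(-1.0, 1.0, size=(N, k))
y = m_star(X, theta_star) + rng.normal(0.0, s_eps, N)
# A symbolic regressor sees only (y, X) and must search the space of
# closed-form expressions for the structure of m_star.
```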
In practice, symbolic regression often seems to boil down to sparse regression over a dictionary of interpretable (i.e. mathematical) features, as in the sketch below.
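A hedged sketch of that recipe, using a hand-built dictionary of mathematical terms and an off-the-shelf Lasso; the candidate terms, constants, and target function here are illustrative, not any particular library's symbolic-regression API:

```python
# Sparse regression over a dictionary of symbolic candidate terms: fit a
# sparse linear model, then read the surviving terms off as the formula.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, 400)
# True model (unknown to the fitter): y = 0.5 x^2 + 3 sin(x) + noise
y = 0.5 * x**2 + 3.0 * np.sin(x) + rng.normal(0.0, 0.1, x.shape)

# Interpretable feature dictionary: each column is a human-readable term.
names = ["x", "x^2", "x^3", "sin(x)", "cos(x)", "exp(x)"]
Theta = np.column_stack([x, x**2, x**3, np.sin(x), np.cos(x), np.exp(x)])

fit = Lasso(alpha=0.05).fit(Theta, y)
model = " + ".join(f"{c:.2f}*{n}" for c, n in zip(fit.coef_, names)
                   if abs(c) > 1e-3)
print("recovered:", model)   # e.g. something close to 0.50*x^2 + 2.9*sin(x)
```

The sparsity penalty does the "symbolic" work here: whichever dictionary terms survive the shrinkage are the recovered formula.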
1 SINDy et al
Symbolic regression + system identification = symbolic system identification. SINDy (Sparse Identification of Nonlinear Dynamics; Brunton et al.) is the canonical example: regress estimated time derivatives of the state onto a library of candidate terms, keeping only a sparse subset.
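A minimal from-scratch sketch of that recipe, using sequentially thresholded least squares (SINDy's original optimizer) on a toy damped oscillator; the system, library, threshold, and helper names are my own choices, not pysindy's API:

```python
# SINDy-style sketch: simulate a damped oscillator, then recover its
# equations by sequentially thresholded least squares on a polynomial library.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, z):  # true dynamics: x' = y, y' = -2x - 0.1y
    x, y = z
    return [y, -2.0 * x - 0.1 * y]

t = np.linspace(0.0, 20.0, 2000)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0], t_eval=t)
Z = sol.y.T                              # states, shape (n, 2)
dZ = np.gradient(Z, t, axis=0)           # numerical time derivatives

# Candidate library: [1, x, y, x^2, x*y, y^2]
names = ["1", "x", "y", "x^2", "x*y", "y^2"]
Theta = np.column_stack([np.ones(len(t)), Z[:, 0], Z[:, 1],
                         Z[:, 0]**2, Z[:, 0] * Z[:, 1], Z[:, 1]**2])

def stlsq(Theta, dz, threshold=0.05, iters=10):
    """Sequentially thresholded least squares: refit, zero small terms, repeat."""
    xi = np.linalg.lstsq(Theta, dz, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], dz, rcond=None)[0]
    return xi

for k, lhs in enumerate(["x'", "y'"]):
    xi = stlsq(Theta, dZ[:, k])
    terms = [f"{c:+.3f}*{n}" for c, n in zip(xi, names) if c != 0.0]
    print(lhs, "=", " ".join(terms))   # should recover y and -2x - 0.1y
```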