A.k.a. the *auxiliary method*.
At the moment I am mostly using the sub-flavour of this called Approximate Bayesian Computation, so that notebook is rather more developed.
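Since ABC comes up here, a minimal rejection-ABC sketch may help fix ideas. Everything below is my own illustrative toy (the Gaussian model, the sample-mean summary statistic, the tolerance `epsilon`), not any particular library's API: draw parameters from the prior, simulate, and keep draws whose simulated summary lands close to the observed summary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy setup: infer the location theta of a Gaussian.
observed = rng.normal(loc=1.5, scale=1.0, size=100)

def simulate(theta, size=100):
    # Generative model we can sample from (pretend the likelihood is intractable).
    return rng.normal(loc=theta, scale=1.0, size=size)

def summary(x):
    # Summary statistic: here simply the sample mean.
    return x.mean()

s_obs = summary(observed)

# ABC rejection: keep prior draws whose simulated summary lands
# within epsilon of the observed summary.
epsilon = 0.05
accepted = []
while len(accepted) < 500:
    theta = rng.uniform(-5.0, 5.0)  # draw from a flat prior
    if abs(summary(simulate(theta)) - s_obs) < epsilon:
        accepted.append(theta)

posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior conditioned on the summary statistic rather than the full data, which is why the choice of summary matters so much in practice.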

In the (older?) frequentist framing you can get through an undergraduate program in statistics without simulation-based inference ever arising. However, I am pretty sure it is essential for economists and ecologists.

Quoting Cosma:

> […] your model is too complicated for you to appeal to any of the usual estimation methods of statistics. […] there is no way to even calculate the likelihood of a given data set \(x_1, x_2, \dots, x_t \equiv x_{1:t}\) under parameters \(\theta\) in closed form, which would rule out even numerical likelihood maximization, to say nothing of Bayesian methods […] Yet you can simulate; it seems like there should be some way of saying whether the simulations look like the data. This is where indirect inference comes in […] Introduce a new model, called the “auxiliary model”, which is mis-specified and typically not even generative, but is easily fit to the data, and to the data alone. (By that last I mean that you don’t have to impute values for latent variables, etc., etc., even though you might know those variables exist and are causally important.) The auxiliary model has its own parameter vector \(\beta\), with an estimator \(\hat{\beta}\). These parameters describe aspects of the distribution of observables, and the idea of indirect inference is that we can estimate the generative parameters \(\theta\) by trying to match those aspects of observations, by trying to match the auxiliary parameters.
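That recipe can be sketched in a few lines. The toy model below is my own hypothetical choice (a sinh-transformed Gaussian, which is trivial to simulate but skewed and awkward to write a likelihood for), and the auxiliary model is a deliberately misspecified Gaussian fit by moments; \(\hat{\theta}\) is chosen so that the auxiliary fit to simulations matches the auxiliary fit to the data:

```python
import numpy as np

# Hypothetical generative model: observe sinh(theta + Z), Z standard normal.
# Easy to simulate; pretend the likelihood is unavailable.
def simulate(theta, size=2000, seed=0):
    z = np.random.default_rng(seed).normal(size=size)
    return np.sinh(theta + z)

def fit_auxiliary(x):
    # Auxiliary model: a misspecified Gaussian, fit by moments.
    # beta-hat = (sample mean, sample std) -- easy to fit, not generative truth.
    return np.array([x.mean(), x.std()])

theta_true = 0.8
data = simulate(theta_true, seed=0)
beta_obs = fit_auxiliary(data)

# Indirect inference: pick theta so the auxiliary parameters of the
# simulations match those of the data. Fixing the simulation seed
# (common random numbers) keeps the objective smooth in theta.
def objective(theta):
    sims = simulate(theta, seed=1)
    return float(np.sum((fit_auxiliary(sims) - beta_obs) ** 2))

thetas = np.linspace(-3.0, 3.0, 601)
theta_hat = thetas[np.argmin([objective(t) for t in thetas])]
```

The grid search is just for transparency; in practice one would hand `objective` to a proper optimizer, and weight the moment discrepancies by an appropriate covariance matrix as in the econometrics literature.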

Aaron King’s lab at the University of Michigan has shaped a lot of this research. One wonders whether the optimal summary statistic can be learned from the data. Apparently yes.

I gather the pomp R package does some simulation-based inference, but I have not checked in for a while, so there might be broader and/or fresher options.

