One could read Sebastian Ruder’s NN-style introduction to “transfer learning”. NN people like to think about this in a particular way, which I like because of the diversity of out-of-the-box ideas it invites, and which I dislike because it is sloppy.
To me it seems natural to consider learning well-factored causal graphical models, with everything else an approximation to that. The reason this is a hot topic in neural nets, I suspect, is that it is convenient for massive, low-human-effort neural networks to ignore graphical structure and still get predictively good results from regressions on observational data. Recovering causal consistency from a black-box model is even more tedious than from a classical one. It also fits the social conventions of neural network research to reinvent methods to fix such problems without reference to previous conventions, for better and for worse.
One thing the machine learning setup gives us is an additional emphasis: external validity, the traditional framing, asks whether the model you have learnt is still useful on new data. The transfer learning setup invites us to consider whether we can transfer some of the computational effort from learning on one dataset to learning on a new dataset, and if so, how much. Maybe that is a useful insight?
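The computational-reuse point can be sketched numerically. This is my own toy example, not anything from Ruder: a hand-rolled logistic regression is fit on a source domain, then warm-started on a small covariate-shifted target domain, reusing the source solution as initialisation so that a few gradient steps suffice. All variable names and the data-generating process are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared "true" mechanism: the same weights generate labels in both domains.
w_true = np.array([2.0, -1.0])

def make_data(n, x_shift):
    # Covariate shift: the input distribution moves, the mechanism does not.
    X = rng.normal(loc=x_shift, scale=1.0, size=(n, 2))
    p = 1 / (1 + np.exp(-X @ w_true))
    y = (rng.random(n) < p).astype(float)
    return X, y

def fit(X, y, w0, steps=200, lr=0.1):
    # Plain batch gradient descent on the logistic loss.
    w = w0.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def log_loss(X, y, w):
    p = 1 / (1 + np.exp(-X @ w))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

X_a, y_a = make_data(500, x_shift=0.0)  # source domain, plenty of data
X_b, y_b = make_data(50, x_shift=1.5)   # target domain, few samples

w_source = fit(X_a, y_a, np.zeros(2))

# Transfer: warm-start on the target from the source solution,
# spending only a handful of gradient steps there.
w_warm = fit(X_b, y_b, w_source, steps=10)
w_cold = fit(X_b, y_b, np.zeros(2), steps=10)

print(log_loss(X_b, y_b, w_warm), log_loss(X_b, y_b, w_cold))
```

Under this (cherry-picked) setup the warm start reaches lower target loss for the same target-domain compute, which is one concrete reading of "transferring computational effort".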
Standard graphical models
We can just try some basic graphical model technology and see how far we get. If the right independences are enforced, presumably we are doing something not too far from learning a transferable model? Or, if we work out that the necessary parameters are not identifiable, then we discover that we cannot in fact learn a transferable model, right? (But maybe we can learn a somewhat transferable model?) I guess the key weakness here is that graphical models will miss some types of transferability, specifically, independences that hold only for particular values of the nodes (context-specific independence), so this might be less powerful.
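The context-specific-independence point can be made concrete with a toy conditional probability table (my own invented example): Y depends on Z only when X takes one value. A DAG must still draw the edge Z → Y, so the graph alone cannot express this, even though the table can.

```python
# Toy CPD: Y depends on Z only when X = 0; when X = 1, Y ignores Z.
# A DAG must keep the edge Z -> Y regardless, so the graph structure
# cannot represent this context-specific independence.
p_y1_given_xz = {
    (0, 0): 0.9,  # P(Y=1 | X=0, Z=0)
    (0, 1): 0.2,  # P(Y=1 | X=0, Z=1)  -> Z matters when X=0
    (1, 0): 0.5,  # P(Y=1 | X=1, Z=0)
    (1, 1): 0.5,  # P(Y=1 | X=1, Z=1)  -> Z is irrelevant when X=1
}

def depends_on_z(x):
    # Does changing Z change the conditional distribution of Y in context x?
    return p_y1_given_xz[(x, 0)] != p_y1_given_xz[(x, 1)]

print(depends_on_z(0))  # True: Z influences Y in the context X=0
print(depends_on_z(1))  # False: Y is independent of Z given X=1
```

A transferable mechanism might live only in one such context, which is exactly what edge-level independence tests would miss.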
Is this the fundamental problem of human operation in the world?
See anthropic principles.
salad is a library to easily set up experiments using current state-of-the-art techniques in domain adaptation. It features several recent approaches, with the goal of enabling fair comparisons between algorithms and transferring them to real-world use cases.
To facilitate the development of ML models that are robust to real-world distribution shifts, our ICML 2021 paper presents WILDS, a curated benchmark of 10 datasets that reflect natural distribution shifts arising from different cameras, hospitals, molecular scaffolds, experiments, demographics, countries, time periods, users, and codebases.
“Meta” (a.k.a. transfer?) learning in pytorch. I’m not actually sure. TBC.