TBD.
This Māori gentleman (name unspecified) from the 1800s demonstrates artful transfer learning from the western fashion domain
One could read Sebastian Ruder’s NN-style introduction to “transfer learning”. NN people like to think about this in a particular way, which I like because of the diversity of out-of-the-box ideas it invites, and which I dislike because it is sloppy.
For me it seems natural to consider learning well-factored causal graphical models containing the necessary interaction effects as the platonic ideal here, with everything else an approximation to that. The reason this is a hot topic in neural nets, I suspect, is that it is convenient for massive, low-human-effort neural networks to ignore graphical structure: ignoring that structure gets predictively good results from regressions on observational data, and then leads us into strife when the situation changes. Recovering causal consistency in a black-box model is even more tedious than in a classical one. Also, it fits the social conventions of neural network research to reinvent methods to fix such problems without reference to previous conventions, for better and worse.
I am often confused by how surprised we are about the difficulties of transferring models between domains, and by how continual the flow of new publications on this theme is; e.g. Google AI Blog: How Underspecification Presents Challenges for Machine Learning.
One thing the machine learning set-up gives us is an additional emphasis: external validity, the statistical framing, would ask whether the model you have learnt is still useful on new data. The transfer learning set-up invites us to consider whether we can transfer some of the computational effort from learning on one dataset to learning on a new dataset, and if so, how much. Maybe that is a useful insight?
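As a crude illustration of “transferring computational effort”, here is a minimal fine-tuning sketch in PyTorch, entirely my own construction: reuse a backbone trained on the source dataset, freeze it, and fit only a small head on the new dataset. The backbone architecture, the saved-weights file, and `target_loader` are all hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical backbone assumed to have been trained on the source dataset.
backbone = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)
# e.g. backbone.load_state_dict(torch.load("source_backbone.pt"))

# Freeze the backbone: the effort spent learning these representations
# on the source data is the part we hope transfers.
for p in backbone.parameters():
    p.requires_grad = False

# A fresh, small head is all we fit on the target dataset.
head = nn.Linear(128, 2)
model = nn.Sequential(backbone, head)

optimiser = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def finetune(target_loader, epochs=5):
    """target_loader: a hypothetical DataLoader over the new domain."""
    for _ in range(epochs):
        for x, y in target_loader:
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimiser.step()
```

Whether freezing everything except the head is the right amount of transfer is, of course, exactly the question at issue.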
This also connects to semi-supervised learning and fairness, argue (Schölkopf, Bernhard et al. 2012; Schölkopf 2019).
From a different angle again, but possibly with the same underlying idea, we could argue that interaction effects are probably what we want to learn.
Standard graphical models
We can just try some basic graphical model technology and see how far we get. If the right independences are enforced, presumably we are doing something not too far from learning a transferable model? Or, if we work out that the necessary parameters are not identifiable, then we discover that we cannot in fact learn a transferable model, right? (But maybe we can learn a somewhat transferable model?) I guess the key weakness is that graphical models will miss some types of transferability, specifically independences that hold only for particular values of the nodes, so this might be less powerful.
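A toy simulation of the intuition, my own construction rather than anything from the literature: a structural model where the mechanism P(Y|X) is invariant across domains but the marginal of X shifts. A model fit along the causal edge transfers; a model that leans on the source-domain marginal of Y does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, x_mean):
    """Structural model X -> Y with invariant mechanism Y = 2X + noise.
    Only the marginal distribution of X differs between domains."""
    x = rng.normal(loc=x_mean, scale=1.0, size=n)
    y = 2.0 * x + rng.normal(scale=0.5, size=n)
    return x, y

# Source domain: X centred at 0. Target domain: X centred at 3.
x_src, y_src = simulate(2000, x_mean=0.0)
x_tgt, y_tgt = simulate(2000, x_mean=3.0)

# Model A: regress along the causal edge, Y on X.
slope, intercept = np.polyfit(x_src, y_src, deg=1)

# Model B: ignore the structure and predict the source-domain mean of Y.
y_bar = y_src.mean()

mse_causal = np.mean((y_tgt - (slope * x_tgt + intercept)) ** 2)
mse_marginal = np.mean((y_tgt - y_bar) ** 2)
print(f"MSE on target, causal-mechanism model: {mse_causal:.2f}")   # ≈ noise variance
print(f"MSE on target, source-mean model:      {mse_marginal:.2f}")  # much larger
```

The caveat from above still applies: if the transferable structure lives in context-specific independences rather than in the graph, this kind of argument will not see it.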
External validity in policy
See also anthropic principles, science for policy.
I have lots of ideas about policy for the world and I think that some of the ideas are good because of some mix of scientific research and personal experience.1 So let us suppose that I am broadly sympathetic to some policy instrument (state ownership of power utilities? diversity quotas in hiring? etc.) because I have seen it work in the past. The question is, how universally should I be in favour of that policy? How do I find out what circumstances make these policy instruments achieve my desired outcomes?

Here is one that arose in my workplace recently: presumably a diversity quota requiring a certain percentage of the workforce be, say, women, would be pointless in a society with perfect gender equality, and ineffectual in a society which has failed to train any women at all with the required skills. Most societies will not be at either of those extremes, but what is the range of gender inequity over which hiring quotas would be a useful policy intervention? What other predictors will change their effectiveness? Such a policy is not a good idea in and of itself, but rather in a particular context. In my observation, burying that essential context is common in policy debates.
Rather than seeking universal policy prescriptions, it is worth asking how context-specific a policy is, and constantly checking whether it applies here.
Tools
Salad
salad is a library to easily set up experiments using the current state-of-the-art techniques in domain adaptation. It features several recent approaches, with the goal of being able to run fair comparisons between algorithms and transfer them to real-world use cases.
WILDS
WILDS: A Benchmark of in-the-Wild Distribution Shifts
To facilitate the development of ML models that are robust to real-world distribution shifts, our ICML 2021 paper presents WILDS, a curated benchmark of 10 datasets that reflect natural distribution shifts arising from different cameras, hospitals, molecular scaffolds, experiments, demographics, countries, time periods, users, and codebases.
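To give a flavour, loading one of the benchmark datasets looks roughly like the sketch below, based on my reading of the WILDS README; check the current docs, since the API may have changed since I looked.

```python
# pip install wilds
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader, get_eval_loader

# Camelyon17 histopathology benchmark: distribution shift across hospitals.
dataset = get_dataset(dataset="camelyon17", download=True)

# In-distribution training split...
train_data = dataset.get_subset("train")
train_loader = get_train_loader("standard", train_data, batch_size=16)

# ...and an out-of-distribution test split from held-out hospitals.
test_data = dataset.get_subset("test")
test_loader = get_eval_loader("standard", test_data, batch_size=16)
```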
Meta
“Meta” (a.k.a. transfer?) learning in PyTorch. I’m not actually sure. TBC.
References
Realistically, I copied some of these ideas from my acquaintances, but maybe even those ideas have the same sort of empirical basis. Let us optimistically assume so for now 🤞.↩︎