Inductive biases
Few-shot learning, learning fast weights, learning to learn
2025-06-15 — 2025-06-15
Wherein neural network architectures are treated as inductive dispositions to be quantified by how they make learning target phenomena easier, and parallels with human cognitive architectures are outlined.
functional analysis
how do science
meta learning
model selection
optimization
statmech
I’m not sure precisely what this means, or whether anyone is, but I take it to mean something like quantifying how much a given architecture makes it “easier” to learn about the phenomena of interest. In NNs this is a practical engineering discipline, but it might also be an interesting lens on human cognition.
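To make “easier to learn” slightly more concrete, here is a minimal toy sketch, my own illustration rather than anything canonical, assuming PyTorch and an invented task: we measure an architecture’s inductive bias crudely as the test accuracy it reaches from a fixed, small budget of examples and gradient steps. A 1-D convolutional net, whose weight sharing matches the translation structure of the data, should do better on that budget than an unstructured MLP.

```python
# Toy sketch (illustrative only): score an architecture's inductive bias by the
# test accuracy it reaches from a fixed small budget of data and optimisation steps.
import torch
import torch.nn as nn

torch.manual_seed(0)


def make_data(n, length=32):
    """Binary classification: does a fixed 3-tap bump appear anywhere in the signal?"""
    x = 0.1 * torch.randn(n, 1, length)
    y = torch.randint(0, 2, (n,))
    pos = torch.randint(0, length - 3, (n,))
    bump = torch.tensor([1.0, 2.0, 1.0])
    for i in range(n):
        if y[i] == 1:
            x[i, 0, pos[i]:pos[i] + 3] += bump
    return x, y


def fit(model, x, y, steps=300):
    """Train for a fixed step budget; the budget is part of the 'ease' measure."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


x_train, y_train = make_data(64)    # deliberately few training examples
x_test, y_test = make_data(2000)

# Unstructured baseline: a plain MLP over the flattened signal.
mlp = nn.Sequential(nn.Flatten(), nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
# Structured alternative: weight sharing + pooling encode translation invariance.
cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(8, 2),
)

for name, model in [("MLP", mlp), ("CNN", cnn)]:
    fit(model, x_train, y_train)
    print(name, "test accuracy:", accuracy(model, x_test, y_test))
```

The point is not the specific numbers but the protocol: hold the data and compute budget fixed, vary the architecture, and read off how much of the work the built-in structure does for you.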