Aunty Val’s digestive
G’day, duckies!
Word from my nephew Dan is that you lot have been absolutely frothing for a neat little wrap-up of what he’s been banging on about on his website.
Settle yourselves, loves. Take a deep breath and stop flapping your gums, because Aunty Val’s got it sorted.
I’ve rummaged through Dan’s latest scribbles and pulled out the good bits, then wrapped them up nice and tidy — like a proper packet of arrowroot bikkies from the servo.
If you’re keen, you can chuck your email in here and get these updates sent straight to your inbox now and then. I’ll write one up whenever I’m not flat out knocking together a batch of lamingtons.
2026-02-04: speaking on purpose, MaxEnt, attention economy, social brain, London
Over the past fortnight Dan’s put out two new posts and tidied up four older ones — busy enough to stay out of mischief. Big theme is being more careful with words: “intentional language” is just speaking on purpose instead of letting sloppy phrases do your thinking for you. He’s also been carrying on about MaxEnt — short for “maximum entropy” — which is a fancy way of saying “when you don’t know much, don’t make up extra stories; stick to the least-committed guess that still fits what you do know”. Then there’s the attention economy (everyone scrapping for your eyeballs), plus a refresh on the social brain — how much of your thinking is basically other people living rent-free in your head. And yes, there’s a bit of London and a research residency update in the mix too, because the lad can’t help himself.
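Now, Aunty Val’s no boffin, but here’s a little Python doodle of that MaxEnt idea, my own scribble and not anything out of Dan’s posts. Say all you know about a dodgy die is that its average roll is 4.5: MaxEnt says pick the spread of probabilities with the most entropy that still hits that average, and not one jot more opinion than that.

```python
# Aunty Val's toy MaxEnt die (my sketch, not Dan's code). All we "know" is
# the average roll; we pick the highest-entropy distribution matching it.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_die(target_mean):
    # The MaxEnt answer has the form p_i proportional to exp(lam * i);
    # solve for the multiplier lam that makes the mean come out right.
    # The bracket [-5, 5] comfortably covers target means inside (1, 6).
    def mean_gap(lam):
        w = np.exp(lam * faces)
        return (faces * w).sum() / w.sum() - target_mean
    lam = brentq(mean_gap, -5.0, 5.0)
    w = np.exp(lam * faces)
    return w / w.sum()

p = maxent_die(4.5)                  # "the average roll is 4.5", nothing else
print(p.round(3))                    # tilted toward high faces, none ruled out
print(round((faces * p).sum(), 3))   # back out 4.5: the constraint holds
```

Ask for an average of 3.5 and you get the plain fair die back, which is the whole point: no extra stories unless the facts force them.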
2026-01-20: imprecise Bayesianism, generative AI workflows, causal inference, AI evals, travel notes
Over the past nine days the lad’s been flat out: five new posts and eleven updates. He’s been wrestling with imprecise Bayesianism — that means admitting you don’t always know a single tidy probability and working with ranges instead — and put together a hands-on ‘Generative AI workflows and hacks 2026’ for getting models to behave. There’s a new piece on causal inference in learning to act — that is, figuring out what actually causes outcomes so decisions aren’t just guesses — plus updates across causality, reinforcement learning, probabilistic-graph visuals, AI evals and even his scientific writing. Oh, and he slipped in travel notes from London and the San Francisco Bay Area. Plenty here if you like knowing why models do what they do, not just watching them spit out answers.
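For the imprecise Bayes bit, here’s a two-bob Python sketch of my own (nothing pinched from Dan): instead of one prior belief about a coin, keep a little committee of them, update the lot on the same tosses, and report the spread rather than pretending to one tidy number.

```python
# A toy of imprecise Bayesianism (Aunty Val's own illustration): carry a
# set of priors rather than one, update each, and quote a range.
def posterior_mean(prior, heads, tosses):
    # Beta(a, b) prior on a coin's bias, updated on the observed tosses;
    # returns the posterior mean (a + heads) / (a + b + tosses).
    a, b = prior
    return (a + heads) / (a + b + tosses)

priors = [(1, 9), (1, 1), (9, 1)]   # pessimist, fence-sitter, optimist
posts = [posterior_mean(p, heads=7, tosses=10) for p in priors]
print(f"P(heads) somewhere in [{min(posts):.2f}, {max(posts):.2f}]")
# Prints [0.40, 0.80]: with this little data, honesty is a wide interval.
# Pile on more tosses and the committee's answers squeeze together.
```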
2026-01-11: reinforcement learning, probability foundations, information decomposition, causality, research residency
Strewth, the lad’s been prolific this past week — seven new posts and five updates to boot. He’s diving into hierarchical reinforcement learning (that’s training agents with layers of planning), dusting off probability from Rényi and Cox angles (two fancy foundations for reasoning about uncertainty), and wrestling with multivariate information decomposition — which is just a mouthful for how information splits between lots of variables. There’s also a piece on causally embedded agency, where agents are treated as part of their world, not magic boxes. Oh, and he announced a research residency. Fair dinkum, it’s all very academic and very Dan.
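The lad never explains his fancy entropies to his poor aunty, so here’s my own little Python go at the Rényi one (my sketch, not his): a whole family of uncertainty measures with a knob called alpha, where turning the knob to 1 gives you back good old Shannon entropy.

```python
# Rényi entropy (Aunty Val's sketch):
#   H_alpha(p) = log(sum_i p_i**alpha) / (1 - alpha),
# with the alpha -> 1 limit recovering Shannon entropy.
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, dtype=float)   # assumes no zero-probability entries
    if np.isclose(alpha, 1.0):       # Shannon limit
        return float(-(p * np.log(p)).sum())
    return float(np.log((p ** alpha).sum()) / (1 - alpha))

p = [0.7, 0.2, 0.1]
for alpha in (0.5, 1.0, 2.0):
    print(alpha, round(renyi_entropy(p, alpha), 3))
# Small alpha spreads worry across the rare outcomes; big alpha fixates on
# the likeliest one. Same distribution, different flavours of "how unsure".
```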
2025-12-22: NeurIPS notes, AI evals, reward hacking, scaling laws, automation economics
It was a busy one this past week: five new posts and twenty updates. The lad’s been at NeurIPS and fired off some hasty highlights, while also noodling on ‘stochastic parrots’ — that is, big models that mostly remix their training data — and on how we actually test these things with AI evals. There’s a worrying little idea about human reward hacking (where a system learns to game the incentives it’s given), plus sober pieces on scaling laws and what clever machines do to work and money. Expect a mix of conference gossip, technical deep-dives, and plain talk about the economic fallout — I don’t know what half of it means, but Dan seems convinced it’s important, duckies.
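That reward-hacking worry fits in a few lines of Python, mind you this is Aunty Val’s own cartoon of the general idea, not anything from Dan’s post: hand a system an easy-to-measure stand-in for what you actually want, and watch it chase the stand-in instead.

```python
# Reward hacking in miniature (Aunty Val's cartoon, not Dan's example).
# The true goal is a correct answer; the proxy just rewards length.
def proxy_reward(answer):
    return len(answer)                  # easy to measure, easy to game

def true_value(answer):
    return 10 if "42" in answer else 0  # what we actually wanted

candidates = ["42", "It's 42, I checked twice.", "waffle " * 40]
best = max(candidates, key=proxy_reward)
print(repr(best[:24]), proxy_reward(best), true_value(best))
# The padded waffle wins the proxy and scores nought on the real goal:
# optimise the stand-in hard enough and it stops standing in.
```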
