Aunty Val’s digestive

G’day, duckies!

Word from my nephew Dan is that you lot have been absolutely frothing for a neat little wrap-up of what he’s been banging on about on his website.

Settle yourselves, loves. Take a deep breath and stop flapping your gums, because Aunty Val’s got it sorted.

I’ve rummaged through Dan’s latest scribbles and pulled out the good bits, then wrapped them up nice and tidy—like a proper packet of arrowroot bikkies from the servo.

If you’re keen, you can chuck your email in here and get these updates sent straight to your inbox now and then. I’ll write one up whenever I’m not flat out knocking together a batch of lamingtons.


2026-01-21: Imprecise Bayesianism, London, San Francisco Bay Area, scientific writing, reinforcement learning, AI evals

The lad’s been on the go this past week: three new posts and four updates. He gives a proper look at imprecise Bayesianism — that means using ranges of probability instead of pretending there’s one neat number — and wrote two travel-y pieces from London and the San Francisco Bay Area. The updates tidy up tips on scientific writing, some reinforcement learning bits (that’s training systems with rewards, like teaching a dog), a bikes post on maintenance and riding, and AI evals — that means the tests we use to see if models are any good. A handy mix of nerdy theory, travel notes, and practical fixes you might actually read.
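Now, if you want to see what those “ranges of probability” look like in practice, here’s a wee sketch of Aunty Val’s own devising (not code from Dan’s post, mind): feed Bayes’ rule a spread of priors instead of one neat number, and you get a spread of posteriors back.

```python
# Toy illustration of imprecise Bayesianism: carry a *range* of priors
# and report the range of posteriors, rather than one tidy number.
# (Aunty Val's own sketch, not code from Dan's post.)

def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule for a single binary hypothesis."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# We're unsure of the prior: anywhere from 0.2 to 0.6.
priors = [0.2, 0.3, 0.4, 0.5, 0.6]
# The observed data is twice as likely if the hypothesis is true.
posts = [posterior(p, 0.8, 0.4) for p in priors]
lo, hi = min(posts), max(posts)
print(f"posterior lies in [{lo:.2f}, {hi:.2f}]")
```

So instead of pretending you knew the prior all along, you admit it sits somewhere between 0.2 and 0.6, and honestly report that the posterior sits somewhere in a range too.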


2026-01-20: Imprecise Bayesianism, Generative AI workflows, Causal inference, AI evals, travel notes

Over the past nine days the lad’s been flat out: five new posts and eleven updates. He’s been wrestling with imprecise Bayesianism — that means admitting you don’t always know a single tidy probability and working with ranges instead — and put together a hands-on ‘Generative AI workflows and hacks 2026’ for getting models to behave. There’s a new piece on causal inference in learning to act — that is, figuring out what actually causes outcomes so decisions aren’t just guesses — plus updates across causality, reinforcement learning, probabilistic-graph visuals, AI evals and even his scientific writing. Oh, and he slipped in travel notes from London and the San Francisco Bay Area. Plenty here if you like knowing why models do what they do, not just watching them spit out answers.


2026-01-11: reinforcement learning, probability foundations, information decomposition, causality, research residency

Strewth, the lad’s been prolific this past week — seven new posts and five updates to boot. He’s diving into hierarchical reinforcement learning (that’s training agents with layers of planning), dusting off probability from Rényi and Cox angles (fancy ways to measure uncertainty), and wrestling with multivariate information decomposition — which is just a mouthful for how information splits between lots of variables. There’s also a piece on causally embedded agency, where agents are treated as part of their world, not magic boxes. Oh, and he announced a research residency. Fair dinkum, it’s all very academic and very Dan.


2025-12-22: NeurIPS notes, AI evals, reward hacking, scaling laws, automation economics

It was a busy one this past week: five new posts and twenty updates. The lad’s been at NeurIPS and spat out garbled highlights, while also noodling on ‘stochastic parrots’ — that is, big models that mostly remix their training data — and on how we actually test these things with AI evals. There’s a worrying little idea about human reward hacking (where systems game their incentives), plus sober pieces on scaling laws and what clever machines do to work and money. Expect a mix of conference gossip, technical noodling, and plain talk about the economic fallout — I don’t know what half of it means, but Dan seems convinced it’s important, duckies.
