Aunty Val’s digestive

G’day, duckies!

Word from my nephew Dan is that you lot have been absolutely frothing for a neat little wrap-up of what he’s been banging on about on his website.

Settle yourselves, loves. Take a deep breath and stop flapping your gums, because Aunty Val’s got it sorted.

I’ve rummaged through Dan’s latest scribbles and pulled out the good bits, then wrapped them up nice and tidy—like a proper packet of arrowroot bikkies from the servo.

If you’re keen, you can chuck your email in here and get these updates sent straight to your inbox now and then. I’ll write one up whenever I’m not flat out knocking together a batch of lamingtons.

2026-03-09: mutual aid, privacy, friendly societies, Nostr, kinder social media

Over the past six days Dan’s been on a real “look after your own mob” kick. He’s poking at money rules to see how locals can pool cash and help each other out without some big outfit sticking a boot in, and he’s floating the idea of starting a “friendly society” — that’s just an old-school member-run club that pays out when someone’s doing it tough. He’s also been sharpening his privacy habits, basically trying to make it harder for government types to hoover up his data. On the social side, he’s been tinkering with Nostr (think: a more open, harder-to-censor social network setup) and banging on about how to build social media that’s less cruel. And he’s freshened up his donate list and even his sci‑fi notes, which tells you what kind of future he reckons we’re sleepwalking into.

2026-03-03: A new journal, evolution tricks, AI behaving, git & ssh, remote setup, writing tools

Over the past fortnight Dan’s popped out two new posts and gone back over nine older ones — enough to keep him off the street. The big idea is that he’s started a new “Alignment Journal”, meaning he’s keeping a running log on how to get clever systems to do what you meant, not what you said. On the theory side there’s “evolution strategies” and “genetic programming”, which are just ways of letting a computer try a heap of dumb guesses, keep the good ones, and breed the next lot — like picking the best rams, but for code. On the practical side he’s been fettling his tool kit: git tricks for wrangling file changes, SSH (a safe way to log into another computer), plus Remote Desktop and a few Markdown editors so his notes don’t turn into a junk drawer. And there’s a bit of life admin in there too — time use, his “Now” page, and a London update, because apparently the lad does leave the house sometimes.
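Now, if “breeding code like rams” sounds like pure nonsense, here’s a tiny toy I knocked together myself (made-up names and numbers, nothing pinched from Dan’s posts) showing the evolution-strategy loop: guess, mutate, keep the best, go again.

```python
import random

def evolve(fitness, x0, sigma=0.5, pop=20, gens=60, seed=0):
    """Tiny evolution strategy: mutate the current best guess,
    keep whichever of the litter scores highest, repeat."""
    rng = random.Random(seed)
    best = x0
    for _ in range(gens):
        # breed a litter of randomly tweaked copies of the current best
        children = [best + rng.gauss(0, sigma) for _ in range(pop)]
        # keep the top scorer (the parent stays in the running too)
        best = max(children + [best], key=fitness)
    return best

# toy problem: find the x that maximises -(x - 3)^2, which is x = 3
champion = evolve(lambda x: -(x - 3) ** 2, x0=0.0)
```

Every generation is just a litter of random tweaks; the only cleverness is refusing to throw away the best one you’ve found so far.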

2026-02-17: Computational mechanics, evolution strategies, interactive proofs, net censorship, China

This past week Dan’s cranked out four new posts and gone back over seven older ones, so yeah, he’s been busy. He’s trying to sort out “entropy” versus “info” — basically, mess and surprise versus what you actually know — without tying himself in knots. He also had a go at “evolution strategies” for neural nets, which is just training by chucking lots of random tweaks at a model and keeping the ones that do better, like breeding the best sheep and culling the duds. Then there’s “interactive proof”: a back-and-forth question game where you can check a claim without being handed the whole secret recipe, plus a blunt look at net censorship and who gets to decide what you can see. The updates circle around China and the Chinese language, how science gets out into the world, a bit of Aussie money talk, and some cleanup on “classification” — that’s just sorting things into buckets, but doing it without lying to yourself about how neat the buckets are.
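And because “interactive proof” sounds like a courtroom drama, here’s a wee runnable toy of my own (the classic two-hidden-values demo, not anything out of Dan’s post): the verifier hides two values, maybe swaps them, and asks the prover which happened. Only someone who can genuinely tell the values apart survives round after round.

```python
import random

def run_protocol(left, right, rounds=20, seed=1):
    """Toy interactive proof that two hidden values differ.
    Each round the verifier secretly swaps (or not) and challenges
    the prover to say whether a swap happened. A prover who can
    really tell the values apart always answers correctly; if the
    values are identical, the best anyone can do is a coin flip,
    so a bluffer gets caught with probability 1 - 2**(-rounds)."""
    rng = random.Random(seed)
    for _ in range(rounds):
        swapped = rng.random() < 0.5
        shown = (right, left) if swapped else (left, right)
        if left != right:
            # honest prover: can see whether the order changed
            claim = shown != (left, right)
        else:
            # identical values look the same either way: forced to guess
            claim = rng.random() < 0.5
        if claim != swapped:
            return False  # caught out, verifier rejects
    return True  # survived every challenge, verifier accepts

print(run_protocol("red", "blue"))  # honest prover: always accepted
print(run_protocol("red", "red"))   # bluffer: almost surely caught
```

The secret recipe (the two values) never has to be handed over; the verifier only ever sees answers to its own random challenges.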

2026-02-10: Embedded agency, internal models, category theory, Bayes in the wild, peer review

Over the past five days Dan’s lobbed in three new posts and given seven older ones a good tighten-up — no mucking about. The main thread is how you describe a thinking system without kidding yourself: “embedded agency” just means the thing making choices is stuck inside the world it’s acting in, not sitting outside like a puppet-master. He’s also gone into “category theory” — that’s a very high-level kind of maths for tracking how bits connect and map to each other, like a set of tidy wiring diagrams for ideas. On the clean-up side he’s been back on Bayes “in an open world” (doing odds when you don’t even know all the possible options yet), had another swing at peer review, and mixed in some real-life bits like sleep, London, travel hacks, and that research residency he’s at.
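For the “Bayes in an open world” bit, here’s a little sketch I had knocked together (my own toy culprits and made-up numbers, not Dan’s): the trick is simply keeping an explicit “other” bucket for the options you haven’t thought of yet, so no hypothesis on your list can hog all the belief.

```python
def update(prior, likelihood, evidence):
    """One Bayes step over named hypotheses, including an explicit
    'other' bucket for possibilities we haven't thought of yet."""
    posterior = {h: prior[h] * likelihood[h].get(evidence, 0.0) for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# hypotheses about why the lamingtons vanished, plus a catch-all
prior = {"possum": 0.4, "nephew": 0.4, "other": 0.2}
# how likely each culprit is to leave crumbs on the windowsill
likelihood = {
    "possum": {"crumbs_on_sill": 0.8},
    "nephew": {"crumbs_on_sill": 0.1},
    # the catch-all stays vague on purpose: 50/50 on any clue
    "other": {"crumbs_on_sill": 0.5},
}
posterior = update(prior, likelihood, "crumbs_on_sill")
```

The “other” bucket never explains any clue especially well or especially badly, so it quietly soaks up belief whenever your named hypotheses all do a poor job.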

2026-02-04: speaking on purpose, MaxEnt, attention economy, social brain, London

Over the past fortnight Dan’s put out two new posts and tidied up four older ones — busy enough to stay out of mischief. Big theme is being more careful with words: “intentional language” is just speaking on purpose instead of letting sloppy phrases do your thinking for you. He’s also been banging on about MaxEnt — short for “maximum entropy” — which is a fancy way of saying “when you don’t know much, don’t make up extra stories; stick to the least-committed guess that still fits what you do know”. Then there’s the attention economy (everyone scrapping for your eyeballs), plus a refresh on the social brain — how much of your thinking is basically other people living rent-free in your head. And yes, there’s a bit of London and a research residency update in the mix too, because the lad can’t help himself.
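To see that “least-committed guess” in action, here’s a toy of my own (a loaded-die example, not Dan’s code): if all you pin down is the average of a die roll, the maximum-entropy answer takes the exponential form p_i proportional to exp(lam * i), and you just bisect on lam until the average comes out right. Ask for a mean of 3.5 and you get the plain uniform die back, because there’s nothing extra to commit to.

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, iters=100):
    """Max-entropy distribution over a die {1..6} with a fixed mean.
    The answer has the form p_i proportional to exp(lam * i); since the
    mean increases with lam, bisect on lam until the mean matches."""
    faces = range(1, 7)
    def dist(lam):
        w = [math.exp(lam * i) for i in faces]
        z = sum(w)
        return [x / z for x in w]
    def mean(lam):
        return sum(i * p for i, p in zip(faces, dist(lam)))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    return dist((lo + hi) / 2)

# with nothing known beyond mean 3.5, MaxEnt returns the plain uniform die
p = maxent_die(3.5)
```

Ask instead for a mean of 4.5 and the same routine tilts the weights toward the high faces, but only just as far as the constraint forces it: no made-up stories.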

2026-01-20: Imprecise Bayesianism, Generative AI workflows, Causal inference, AI evals, travel notes

Over the past nine days the lad’s been flat out: five new posts and eleven updates. He’s been wrestling with imprecise Bayesianism — that means admitting you don’t always know a single tidy probability and working with ranges instead — and put together a hands-on “Generative AI workflows and hacks 2026” for getting models to behave. There’s a new piece on causal inference in learning to act — that is, figuring out what actually causes outcomes so decisions aren’t just guesses — plus updates across causality, reinforcement learning, probabilistic-graph visuals, AI evals and even his scientific writing. Oh, and he slipped in travel notes from London and the San Francisco Bay Area. Plenty here if you like knowing why models do what they do, not just watching them spit out answers.
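To see the “ranges instead of one tidy number” idea, here’s a wee sketch of my own (a coin-flipping toy, nothing from Dan’s post): instead of committing to a single prior, you carry a whole family of Beta priors and report the spread of answers they give.

```python
def predict_heads(a, b, heads, tails):
    """With a Beta(a, b) prior on a coin's heads-bias, the posterior-mean
    probability that the next flip is heads after the observed flips."""
    return (a + heads) / (a + b + heads + tails)

# imprecise prior: a whole family of Beta priors, not one
priors = [(a, b) for a in (1, 2, 5) for b in (1, 2, 5)]

# after 7 heads and 3 tails, report a range, not a single number
estimates = [predict_heads(a, b, 7, 3) for a, b in priors]
low, high = min(estimates), max(estimates)
```

When the data are scarce the interval [low, high] stays wide, which is the honest answer; pile on more flips and every prior in the family gets dragged toward the same number, so the range tightens on its own.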

2026-01-11: reinforcement learning, probability foundations, information decomposition, causality, research residency

Strewth, the lad’s been prolific this past week — seven new posts and five updates to boot. He’s diving into hierarchical reinforcement learning (that’s training agents with layers of planning), dusting off probability from Rényi and Cox angles (fancy ways to measure uncertainty), and wrestling with multivariate information decomposition — which is just a mouthful for how information splits between lots of variables. There’s also a piece on causally embedded agency, where agents are treated as part of their world, not magic boxes. Oh, and he announced a research residency. Fair dinkum, it’s all very academic and very Dan.
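For the Rényi bit, here’s a small sketch of my own (a toy distribution, the standard textbook formula): Rényi entropy is a whole dial of uncertainty measures, one per order alpha, and as alpha heads toward 1 you get ordinary Shannon entropy back.

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in bits) for a distribution p.
    alpha = 1 is the Shannon limit, handled as a special case."""
    if abs(alpha - 1) < 1e-12:
        return -sum(x * math.log2(x) for x in p if x > 0)
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
h1 = renyi_entropy(p, 1)        # Shannon entropy: 1.5 bits
h_near = renyi_entropy(p, 1.001)  # slides toward the Shannon value
h2 = renyi_entropy(p, 2)        # "collision" entropy, a bit smaller
```

Higher orders pay more attention to the most likely outcomes, so the entropy can only go down as alpha goes up; that’s why h2 comes out below h1 here.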

2025-12-22: NeurIPS notes, AI evals, reward hacking, scaling laws, automation economics

It was a busy one this past week: five new posts and twenty updates. The lad’s been at NeurIPS and spat out garbled highlights, while also noodling on “stochastic parrots” — that is, big models that mostly remix their training data — and on how we actually test these things with AI evals. There’s a worrying little idea about human reward hacking (where systems game their incentives), plus sober pieces on scaling laws and what clever machines do to work and money. Expect a mix of conference gossip, technical noodling, and plain talk about the economic fallout — I don’t know what half of it means, but Dan seems convinced it’s important, duckies.
