2026-01-03: causal learning, research fellowship, Aussie innovation, probability, score estimator

2026-01-02 — 2026-01-02

digest

Well I never — three new posts and six updates this past week. The lad’s gone deep on ‘causally embedded learning’ (that’s sticking cause-and-effect into models so they behave less like lucky guesses and more like sensible people), announced a princely-sounding research fellowship, and even threw in Aunty Val’s digestive for balance. Updates sweep through Australian innovation policy, a tidy refresher on probability (Cox-style), his perennial ‘Now’ page, top influences of 2025, a welcome page, and a refresh on the score function estimator — that’s the maths trick for getting gradients out of noisy probability models. Expect a mix of heavy nerding, career news, and one comforting recipe.

1 Newly published

1.1 Causally embedded learning

Strewth, the lad’s gone and written about ‘causal embedding’ — that’s the idea of tying causes to the way things are represented, so you don’t just have fancy correlations but clues about what actually makes stuff happen. In plain terms: it’s a way to ask whether a model really understands how the world works (and whether having a body, or interacting with the world, matters) instead of just repeating patterns like a parrot. He uses that lens to tidy up messy debates about embodiment and ‘stochastic parrotology’, even dragging in a contemplative bronze Buddha as an example to make the point more human and reflective. Also added a References section for those who want to chase the proper papers, because of course the lad left the footnotes waiting.

1.2 Princint x ILLIAD Research fellowship

Well, the lad’s off to London for a research residency — that’s a short, focused programme where clever people get time and support to work on big questions, usually for a month or two. He’s been accepted into the inaugural Principles of Intelligence Research Residency at the London Initiative for Safe AI, running January–February 2026, and he’s put up a brief post announcing it and inviting anyone in London to say g’day. The write-up explains it’s aimed at PhD‑level researchers in maths or physics and is set up to attract philanthropic funding so the work can keep going, which is the practical bit. He’s also added a References section for folks who want to follow the details — handy if you like reading the fine print.

1.3 Aunty Val’s digestive

Strewth, he’s made me the official digest-writer. A “digest” is just a short, friendly roundup of recent goings-on so you don’t have to wade through the lad’s full braindump — like a biscuit to go with your tea. This post introduces that newsletter: I’ll package his website updates into tidy, readable notes and you can subscribe by email to get them served up now and then. For heaven’s sake, I’ll only write when I’m not baking, but expect plain talk, a bit of dry wit, and the occasional lamington reference.

2 Updated

2.1 Innovation, science, technology research in Australia

This one’s about how Australia funds and runs research — think money, red tape and whether clever people can actually get on with discovery. The lad’s made a flurry of tidy edits to soften his tone, fix phrasing and clarify points about administrative bloat, the OECD’s recommendations and the labyrinthine grant programs. He’s also added a personal note about his CSIRO employment and the ongoing job cuts, which gives the piece a bit more skin in the game. In short: same grumpy diagnosis, but cleaner language and a touch more context so readers can see where he’s coming from.

2.2 Probability, Cox-style

Righto, this one’s about a different way of thinking about probability — not as long-run frequencies or measure theory, but as rules for reasonable belief when you’re unsure. Cox’s idea is: if you want a consistent way to rank how plausible statements are, the only sensible arithmetic you can use looks exactly like ordinary probability, so Bayes’ rule falls out of logic rather than being plucked from thin air. The lad has beefed up the explanation, contrasting Cox’s epistemic derivation with Kolmogorov’s axioms and flagging that Cox’s route glosses over technicalities like countable additivity and infinite cases — things he says he should reconcile properly. In short, clearer motivation for probability-as-inference and a note that the tidy logic needs some measure-theoretic housekeeping.
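For the curious, here's the nub of it in symbols (my own two-line sketch, not the lad's full treatment): once plausibilities obey Cox's consistency requirements you get the product rule, and Bayes' rule is just that rule read in both directions.

```latex
% Product rule, written both ways round (by symmetry of conjunction):
p(A \wedge B \mid C) = p(A \mid B \wedge C)\,p(B \mid C)
                     = p(B \mid A \wedge C)\,p(A \mid C)
% Divide through by p(B \mid C) and Bayes' rule falls out:
p(A \mid B \wedge C) = \frac{p(B \mid A \wedge C)\,p(A \mid C)}{p(B \mid C)}
```

No thin air required, just arithmetic that respects how plausible statements combine.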

2.3 Now

AI safety is basically trying to make sure powerful computer systems don’t go off the rails — like putting sensible brakes on something that can learn faster than us. The lad’s ‘Now’ page got a proper, grown-up tweak: he’s explicitly pivoting to AI safety as his critical‑risk project, helping start a Melbourne hub, lining up a London research residency, and even negotiating an academic journal called Alignment. He’s also recast his move to Melbourne and the whole cohousing thing as a deliberate effort to turn up his local engagement rather than just drifting in, and he still mentions hybrid machine learning work at CSIRO and the occasional DJ mix for goodness’ sake. Fair dinkum, it reads like someone trying to be useful and keep all his plates spinning at once.

2.4 Top influences of 2025

Righto, this is his roundup of what shaped his thinking in 2025 — basically a reading list with the themes that kept popping up, like the big AI conferences (NeurIPS, ICLR), human reward-tampering, and even some economics-sounding ideas like Coasean bargaining. For anyone who hasn’t heard, it’s a taste of the books, papers and tools that nudged his brain this year — and he notes he used AI helpers a lot to get through the pile. New this update is a Fiction section, where he flags novels that explore politics and power in techno-magic worlds (Max Gladstone’s legalised-magic series and Hannu Rajaniemi’s corporate-plague romp), which is handy because it shows he’s thinking about the cultural side of these technical problems, not just equations. Strewth — thoughtful reading, and now with a bit more story-driven warning about where our clever toys might lead.

2.5 Welcome to Dan’s brain

Righto, this is the front page getting a tidy-up so newcomers don’t get lost in the lad’s head. If you don’t know, a site index is just the welcome mat — it points people to his notes, bio and what he’s up to — and now it also offers my little digest by way of a subscribe button so I can nag you directly about his latest nonsense. There’s a cheeky toggle for a Whimsical blog map tucked in, and he also cleaned up the page so the normal, sensible blog listing still sits where you’d expect. Fair dinkum, it’s just making the chaos easier to navigate.

2.6 The Score Function Estimator

Righto, this one’s about the ‘score function estimator’ — that’s a trick for getting gradients when you can’t take derivatives through randomness, so you nudge probabilities and watch how the expected outcome changes. The lad’s framed the log‑derivative estimator as a general gradient method, thrown in a practical PyTorch demo for categorical distributions so you can see it happen, and warned that with small Monte Carlo batches the variance blows up — aka noisy, unreliable updates. He also added an ‘Incoming’ note: someone called Dominic Scocchera told him he’d used the derivation in his own Bayesian notes. Fair dinkum, useful for folks doing stochastic optimisation or likelihood‑free inference.
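For the tinkerers, here's a bare-bones numpy sketch of the trick, assuming a categorical distribution parameterised by softmax logits. This is my own illustration rather than the lad's PyTorch demo; the function name `score_function_grad` and the toy reward are made up for the example.

```python
import numpy as np

def score_function_grad(logits, f, n_samples=10_000, rng=None):
    """Estimate the gradient of E_{x ~ softmax(logits)}[f(x)] w.r.t. logits
    via the score function (log-derivative) trick:
        grad = E[ f(x) * d log p(x) / d logits ].
    """
    rng = rng or np.random.default_rng(0)
    # Softmax probabilities (shifted for numerical stability).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Monte Carlo samples from the categorical distribution.
    xs = rng.choice(len(probs), size=n_samples, p=probs)
    fx = f(xs)                           # shape (n_samples,)
    # For softmax logits: d log p(x) / d logits_k = 1[x == k] - probs_k
    one_hot = np.eye(len(probs))[xs]     # shape (n_samples, K)
    score = one_hot - probs              # shape (n_samples, K)
    return (fx[:, None] * score).mean(axis=0)

# Toy reward f(x) = x on a uniform 3-way categorical, so E[f] = 1.
# The analytic gradient is p_k * (f(k) - E[f]) = [-1/3, 0, 1/3].
logits = np.array([0.0, 0.0, 0.0])
grad = score_function_grad(logits, lambda x: x.astype(float))
```

With `n_samples` cranked up the estimate hugs the analytic answer; shrink the batch and you see exactly the noisy, high-variance updates the lad warns about.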