2026-02-04: speaking on purpose, MaxEnt, attention economy, social brain, London


Over the past fortnight Dan’s put out two new posts and tidied up four older ones, busy enough to stay out of mischief. Big theme is being careful with words: “intentional language” here means talking about things as if they’ve got wants and beliefs, and whether that’s sloppy thinking or actually the smart move. He’s also been banging on about MaxEnt, short for “maximum entropy”, which is a fancy way of saying “when you don’t know much, don’t make up extra stories; stick to the least-committed guess that still fits what you do know”. Then there’s the attention economy (everyone scrapping for your eyeballs), plus a refresh on the social brain, meaning how much of your thinking is basically other people living rent-free in your head. And yes, there’s a bit of London and a research residency update in the mix too, because the lad can’t help himself.

digest

1 Newly published

1.1 Intentional language is ok

Here’s the cheeky thought: talking about machines like they’ve got “wants” and “beliefs” might not be dumb; it might be how our brains do their best thinking. Dan kicks off with the Wason selection task, the card-flipping puzzle where nearly everyone stuffs up the abstract version (rule: “if a card shows a vowel, the other side shows an even number”; people flip the even card when they should flip the odd one), yet nails the same logic when it’s framed as spotting a cheater at the pub, because we’re built for social rules, not clean little syllogisms. Then he brings in Dennett’s “intentional stance”, which is a handy shortcut: instead of staring at a billion knobs inside an AI, you say “it thinks I’m asking for X” and you can actually predict what it’ll do. He’s not saying the thing’s conscious; he’s saying this language ports the problem into the bit of your brain that’s fast. And he does warn about overdoing it: see enough “agency” in clouds and toasters and you’ll end up believing the whole paddock’s got a personality.
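
If you want the abstract version spelled out, here’s a toy encoding of the task in Python; the card faces and the helper name are my own illustration, nothing from the post:

```python
# Toy Wason selection task (my encoding, not Dan's).
# Rule to test: "if a card shows a vowel, the other side shows an even number".
# You see one face per card; which cards MUST you flip to check the rule?
cards = ["E", "K", "4", "7"]

def must_flip(face: str) -> bool:
    if face.isalpha():
        # A vowel needs checking: the back must be even.
        return face.lower() in "aeiou"
    # An odd number needs checking: a vowel on the back would break the rule.
    return int(face) % 2 == 1

print([c for c in cards if must_flip(c)])  # ['E', '7']; most people wrongly pick '4'
```

The “4” card is the classic trap: the rule says vowels imply even, not that evens imply vowels, so flipping it can’t falsify anything.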

1.2 MaxEnt inference

Alright, this one’s about MaxEnt inference: instead of doing Bayes the usual way, you write down what you know as hard constraints, then pick the most spread-out probability distribution that still obeys them. “Maximum entropy” just means you’re not sneaking in extra assumptions when you’ve got no right to. Dan walks through why Jaynes made a big deal of it, has a squint at the folks who tried to pin the maths down axiomatically, and notes that some of those axioms are contested. The interesting bit is he links it to predictive coding (the brain-as-guessing-machine idea) and to optimal transport via Lagrange duality, which is to say the same constrained problem can be attacked from the other end, as a convex problem over the multipliers. Worth a look if you ever need a clean way to update beliefs from partial info without kidding yourself.
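
For flavour, here’s the standard discrete setup the post is gesturing at, in my notation rather than necessarily Dan’s:

```latex
% One standard way to state discrete MaxEnt (notation mine, not necessarily Dan's).
\[
\max_{p}\; H(p) = -\sum_i p_i \log p_i
\quad\text{s.t.}\quad \sum_i p_i = 1, \qquad \sum_i p_i\, f_k(i) = F_k, \;\; k = 1,\dots,m.
\]
% Stationarity of the Lagrangian gives an exponential family:
\[
p_i(\lambda) = \frac{e^{-\sum_k \lambda_k f_k(i)}}{Z(\lambda)},
\qquad Z(\lambda) = \sum_i e^{-\sum_k \lambda_k f_k(i)}.
\]
% The Lagrange dual solves the same problem "from the other end":
% an unconstrained convex minimisation over the multipliers.
\[
\min_{\lambda}\; \log Z(\lambda) + \sum_k \lambda_k F_k.
\]
```

The dual is where the “solved from the other end” line cashes out: instead of searching over all distributions, you minimise a convex function of a handful of multipliers.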

2 Updated

2.1 London

London’s got this funny little social tell: it’s not just what you buy, it’s where you buy it — supermarkets and bakeries act like class badges, even when the shelves all look much the same and everything costs a fortune anyway. Dan’s clocked that the ‘vibe’ matters as much as the price, which is very UK and a bit cooked. But the real head-scratcher is heating: people whinge about bills nonstop, then run roaring gas heaters in leaky old barns and crack the window when it’s too hot instead of touching the thermostat. It’s like watching someone tip water into a bucket with a hole, then blame the water company. Makes you wonder what’s habit, what’s status, and what’s just folks refusing to learn the controls.

2.2 Attention economy

Funny thing is, Dan’s stopped treating the “attention economy” like a moral panic and started treating it like a proper scarce resource, same as money or time, one you can actually put into a maths problem as a hard limit. Then the penny drops: platforms aren’t just “distracting”, they’re competing to capture as big a slice of that fixed budget as they can, by predicting what’ll snag you for one more minute. But he also points out the simple econ story isn’t the whole show, because our brains have quirks (novelty hooks you, willpower runs down, and intermittent little rewards keep you coming back), so the end result can look addictive even if nobody sat down twirling a villain moustache. The real question he’s circling is what you even optimise for, over whose time, and over what stretch, when the game’s being played on human biology that wasn’t built for endless cheap dopamine hits.
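
A minimal sketch of the “hard limit” framing, assuming the usual textbook allocation set-up (the symbols are mine, not from the post):

```latex
% Attention as a hard budget (my formalisation, not necessarily the post's):
% T = total attention, t_i = time on platform i, u_i = value you get from it.
\[
\max_{t \ge 0}\; \sum_i u_i(t_i)
\quad\text{s.t.}\quad \sum_i t_i \le T.
\]
% With concave u_i, the optimum equalises marginal value across platforms:
\[
u_i'(t_i^{*}) = \mu \quad \text{for every } i \text{ with } t_i^{*} > 0.
\]
% The quirks in the post (novelty, depletion, variable rewards) amount to
% saying the real u_i' is neither stable nor fully yours, which is exactly
% what the platforms' prediction machinery leans on.
```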

2.3 PIBBS x ILLIAD Research residency January 2026

Dan’s off to London for this AI residency, and the juicy bit is him trying to nail down “agency” as just another part of a cause‑and‑effect map — not some spooky magic. If you can draw the choices and the feedback loops cleanly, you can actually check your story about an “agent” instead of just going off vibes.
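
To make that concrete, here’s a minimal sketch of an agent drawn as an ordinary causal graph, unrolled over one step of feedback; the node names and the networkx encoding are my own illustration, not the residency’s actual framework:

```python
import networkx as nx

# Hypothetical sketch: "agency" as edges in a plain causal diagram.
g = nx.DiGraph()
g.add_edges_from([
    ("env_t0", "obs_t0"),     # the world produces an observation
    ("obs_t0", "policy"),     # the agent's choice depends on what it sees
    ("policy", "action_t0"),  # the policy picks an action
    ("action_t0", "env_t1"),  # the action feeds back into the world
    ("env_t1", "obs_t1"),     # ...which shapes the next observation
])

# Unrolled in time, the "feedback loop" is just ordinary edges: the graph
# stays acyclic, so the usual causal-inference machinery still applies.
assert nx.is_directed_acyclic_graph(g)
print(list(nx.topological_sort(g)))
```

The point of drawing it this way is the one in the paragraph above: once the choices and the loop are explicit, “this thing is an agent” becomes a checkable claim about edges rather than a vibe.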

2.4 The social brain

Here’s a funny twist: a lot of our “reasoning” isn’t built for finding the truth; it’s built for winning an argument in front of other people. Dan’s leaning into that social-brain idea, that your brain’s more like a little committee doing sales pitches than a cold calculator. He also makes the point that saying things like “she wants X” or “the model believes Y” isn’t baby talk; it’s a decent shorthand for whatever’s driving the choices. Makes you wonder how much of your own thinking is you being smart, and how much is you writing a speech for the crowd.

3 Minor tweaks