Australian AI Safety Forum Sydney 2024
Scattered notes from the floor
November 7, 2024 — November 8, 2024
1 Liam’s bit
Liam Carroll introduces the forum.
Context:
2 Tiberio’s bit
Goal: catalyse the Australian AI Safety Community.
Our World in Data does good visualisation of AI stuff too, it seems: Test scores of AI systems on various capabilities relative to human performance (Kiela et al. 2023).
See also Data on Notable AI Models | Epoch AI.
3 Dan Murfet’s bit
Key insight: “that is not dangerous, it is just a configuration of atoms” is a poor reassurance about an oncoming landslide. Likewise, “that is not dangerous, it is just a configuration of compute” might fail to reassure us.
“Nuclear safety case” analogy: we need to be able to make safety cases for AI systems, as is done for nuclear systems.
4 Kimberlee Weatherall
Governance of risk is something we have not historically done well. It is hard, and we are frequently badly incentivised to get good at it.
5 Hoda Heidari
Red teaming is hard. Mode collapse can occur in adversarial bug-finding games.
6 Ryan Kidd
7 Seth Lazar
https://mintresearch.org/aisafety
8 Marcus Hutter
AIXI stuff.
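For reference, the AIXI action rule (my reconstruction of the standard formula from Hutter’s work, not a transcription of the talk): the agent chooses actions by expectimax over all computable environments, weighted by program length.

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl( r_t + \cdots + r_m \bigr)
\sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

Here $U$ is a universal Turing machine, $q$ ranges over programs (candidate environments) consistent with the history, $\ell(q)$ is the length of $q$, and $m$ is the horizon.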
9 Panel
“Go the full Aschenbrenner”
10 Links mentioned
- Harmony Labs - About
- AI Safety Fundamentals – BlueDot Impact
- [2410.05229] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
- Mapping Technical Safety Research at AI Companies — Institute for AI Policy and Strategy
- Minimum message length
- Context tree weighting
- Context Tree Weighting, according to Marcus Hutter
- [2102.04074] Learning Curve Theory
- Infra-Bayesianism Sequence
- Australia NZ safety landscape