Model explanation with sparse autoencoders

Monosemanticity, sparsity and foundation models

August 29, 2024

Tags: adversarial, classification, communicating, feature construction, game theory, high d, language, machine learning, metrics, mind, NLP, sparser than thou

Placeholder to discuss one hyped means of explaining models, especially large language models: sparse autoencoders (SAEs). The basic recipe (Cunningham, Ewart, Riggs, et al. 2023) is to train an autoencoder with an overcomplete dictionary and a sparsity penalty on a model's internal activations, so that each activation vector is reconstructed from a small number of active features; the hope is that those sparse features are more monosemantic, and thus more interpretable, than raw neurons. A minimal sketch follows.
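
To make the mechanics concrete, here is a minimal PyTorch sketch of that recipe, not any particular paper's implementation: an overcomplete linear encoder with ReLU, a linear decoder, and an L1 penalty on the feature activations. All names, shapes, and hyperparameters are illustrative assumptions, and the random `acts` tensor stands in for activations harvested from a real language model.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # overcomplete: d_dict >> d_model
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # non-negative, sparsifiable feature activations
        x_hat = self.decoder(f)          # reconstruct the original activation vector
        return x_hat, f

d_model, d_dict = 512, 4096              # dictionary ~8x wider than the activations
sae = SparseAutoencoder(d_model, d_dict)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3                          # trades reconstruction against sparsity

# Stand-in data: in practice `acts` would be residual-stream or MLP
# activations collected from a language model over a text corpus.
acts = torch.randn(1024, d_model)

for step in range(100):
    x_hat, f = sae(acts)
    recon = (x_hat - acts).pow(2).mean()  # reconstruction error
    sparsity = f.abs().mean()             # L1 penalty on feature activations
    loss = recon + l1_coeff * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Turning up `l1_coeff` makes fewer features fire per input at the cost of reconstruction fidelity; that trade-off is the whole game, since the interpretability claim rests on each input activating only a handful of nameable features.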

References

Cunningham, Ewart, Riggs, et al. 2023. “Sparse Autoencoders Find Highly Interpretable Features in Language Models.”
Marks, Rager, Michaud, et al. 2024. “Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models.”
Moran, Sridhar, Wang, et al. 2022. “Identifiable Deep Generative Models via Sparse Decoding.”
O’Neill, Ye, Iyer, et al. 2024. “Disentangling Dense Embeddings with Sparse Autoencoders.”
Park, Choe, and Veitch. 2024. “The Linear Representation Hypothesis and the Geometry of Large Language Models.”
Saengkyongam, Rosenfeld, Ravikumar, et al. 2024. “Identifying Representations for Intervention Extrapolation.”
von Kügelgen, Besserve, Wendong, et al. 2023. “Nonparametric Identifiability of Causal Representations from Unknown Interventions.” In Advances in Neural Information Processing Systems.