Ablation studies and lesion studies

In order to understand it, we must be able to break it.

Can we work out how a complex adaptive system works by destroying bits of it?

Can a biologist fix a radio (Lazebnik 2002)? Could a neuroscientist even understand a microprocessor (Jonas and Kording 2017)? And does it help if we invoke the phrase "edge of chaos"?
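To make the "destroy bits of it" idea concrete, here is a minimal sketch of a single-unit ablation in the sense of Meyes et al. (2019): silence one hidden unit of a network and measure how much the output moves. The toy two-layer network, the probe inputs, and the importance score are all invented for illustration, not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))    # input -> hidden weights (toy, untrained)
W2 = rng.normal(size=(4, 1))    # hidden -> output weights
X = rng.normal(size=(100, 8))   # probe inputs

def forward(X, ablate=None):
    """Run the network; optionally lesion (zero out) one hidden unit."""
    h = np.maximum(X @ W1, 0.0)  # ReLU hidden layer
    if ablate is not None:
        h[:, ablate] = 0.0       # the "lesion": unit contributes nothing
    return h @ W2

baseline = forward(X)
# Crude importance score per unit: mean absolute change in output
# when that unit is knocked out.
importance = [
    float(np.mean(np.abs(forward(X, ablate=j) - baseline)))
    for j in range(4)
]
```

Ranking units by `importance` is exactly the kind of inference a lesion study licenses, and the radio/microprocessor critiques apply: a large score tells you the unit mattered for this probe distribution, not what the unit is *for*.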

A great analogy, from the essay Interpretability Creationism:

[…] Stochastic Gradient Descent is not literally biological evolution, but post-hoc analysis in machine learning has a lot in common with scientific approaches in biology, and likewise often requires an understanding of the origin of model behavior. Therefore, the following holds whether looking at parasitic brooding behavior or at the inner representations of a neural network: if we do not consider how a system develops, it is difficult to distinguish a pleasing story from a useful analysis. In this piece, I will discuss the tendency towards “interpretability creationism” – interpretability methods that only look at the final state of the model and ignore its evolution over the course of training—and propose a focus on the training process to supplement interpretability research.

That essay clearly connects to model explanation and, I think, to observational studies.
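The essay's proposal, watching how a behaviour develops over training rather than only dissecting the final model, can be sketched in a few lines. Assume (purely for illustration) a linear regression trained by gradient descent, where we record a probe statistic at periodic checkpoints; here the probe asks how much weight ever accumulates on inputs that are truly irrelevant to the target.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, -1.0, 0.0, 0.0, 0.5])  # inputs 2 and 3 are irrelevant
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(5)
lr = 0.05
trajectory = []  # probe statistic recorded at each "checkpoint"
for step in range(200):
    grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
    w -= lr * grad
    if step % 20 == 0:
        # Probe: total weight sitting on the two irrelevant inputs.
        trajectory.append(float(np.abs(w[2:4]).sum()))
```

A final-state-only analysis sees just `trajectory[-1]`; the full trajectory can distinguish "the model never used those inputs" from "it relied on them early and pruned them later", which is the distinction between a pleasing story and a useful analysis.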

References

Jonas, Eric, and Konrad Paul Kording. 2017. "Could a Neuroscientist Understand a Microprocessor?" PLOS Computational Biology 13 (1): e1005268.
Lazebnik, Yuri. 2002. "Can a Biologist Fix a Radio?—Or, What I Learned While Studying Apoptosis." Cancer Cell 2 (3): 179–82.
Meyes, Richard, Melanie Lu, Constantin Waubert de Puiseau, and Tobias Meisen. 2019. "Ablation Studies in Artificial Neural Networks."
