Neuron-level interpretations aim to explain network behaviors and properties by investigating neurons that respond to specific perceptual or structural input patterns. Although such work is emerging in the vision and language domains, none has been explored for acoustic models. To bridge this gap, we introduce AND, the first Audio Network Dissection framework, which automatically establishes natural-language explanations of acoustic neurons based on highly responsive audio. AND uses LLMs to summarize mutual acoustic features and identities among audio clips. We conduct extensive experiments to verify that AND's descriptions are precise and informative. In addition, we demonstrate a potential use of AND for audio machine unlearning by conducting concept-specific pruning based on the generated descriptions. Finally, we highlight two acoustic model behaviors revealed by AND's analysis: (i) models discriminate audio with a combination of basic acoustic features rather than high-level abstract concepts; (ii) training strategies affect model behavior and neuron interpretability, in that supervised training guides neurons to gradually narrow their attention, while self-supervised learning encourages neurons to be polysemantic and explore high-level features.
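To give a feel for the concept-specific pruning mentioned above, here is a minimal sketch of one way it could look, assuming a PyTorch model, a dict mapping neuron indices of a chosen linear layer to their AND-generated descriptions, and simple substring matching; the function name, matching rule, and layer choice are our illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: prune neurons whose AND description mentions a target concept.
# Assumes `descriptions` maps neuron index -> AND-generated text for one layer.
import torch
import torch.nn as nn

def prune_concept(model: nn.Module, layer_name: str,
                  descriptions: dict, concept: str) -> list:
    """Zero out neurons of `layer_name` whose description mentions `concept`."""
    layer = dict(model.named_modules())[layer_name]  # e.g. an nn.Linear layer
    pruned = [i for i, text in descriptions.items() if concept in text.lower()]
    with torch.no_grad():
        for i in pruned:
            layer.weight[i].zero_()  # silence unit i by zeroing its weight row
            if layer.bias is not None:
                layer.bias[i] = 0.0
    return pruned
```

Zeroing a unit's weight row (and bias) removes its contribution to downstream layers, which is the simplest realization of description-guided unlearning; a real pipeline would likely use a stricter matching rule than a substring test.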
Overview of the Audio Network Dissection (AND) pipeline.
Our framework uses SALMONN to produce a caption for each audio clip in the probing dataset. For each neuron, we select the top-K most highly-activated audio samples, together with their captions, as one of the inputs to AND. To identify the common characteristics among these top-K clips, we adopt Llama-2-chat-13B to summarize their captions (see the sketch below Fig. 1).
We highlight three dedicated modules in AND: (A) closed-concept identification, (B) summary calibration, and (C) open-concept identification.
Fig. 1. Overall Pipeline of AND.
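To make the selection and summarization steps concrete, below is a minimal sketch, assuming neuron activations over the probing set have already been recorded and that `captions` holds the SALMONN-generated caption for each clip; the function names and prompt wording are our illustrative assumptions, not AND's exact implementation.

```python
# Minimal sketch: gather the top-K captions for a neuron and build a
# summarization prompt for Llama-2-chat-13B. Assumes `activations` is a
# [num_clips, num_neurons] tensor of recorded responses on the probing set.
import torch

def top_k_captions(activations: torch.Tensor, captions: list,
                   neuron: int, k: int = 5) -> list:
    """Return captions of the k clips that most strongly activate `neuron`."""
    top = torch.topk(activations[:, neuron], k).indices.tolist()
    return [captions[i] for i in top]

def build_summary_prompt(top_captions: list) -> str:
    """Assemble a prompt asking the LLM for the shared acoustic features."""
    listing = "\n".join(f"- {c}" for c in top_captions)
    return ("The following captions describe audio clips that all strongly "
            f"activate the same neuron:\n{listing}\n"
            "Summarize the acoustic features and identities they share.")
```

The returned prompt would then be passed to the chat model, whose summary serves as the neuron's candidate description before the calibration and concept-identification modules.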
Cite this work
T.-Y. Wu, Y.-X. Lin, and T.-W. Weng,
AND: Audio Network Dissection for Interpreting Deep Acoustic Models, ICML 2024.
@inproceedings{AND,
  title={AND: Audio Network Dissection for Interpreting Deep Acoustic Models},
  author={Tung-Yu Wu and Yu-Xiang Lin and Tsui-Wei Weng},
  booktitle={Proceedings of International Conference on Machine Learning (ICML)},
  year={2024}
}