[1] T. Oikarinen and T.-W. Weng, "CLIP-Dissect: Automatic description of neuron representations in deep vision networks." International Conference on Learning Representations, 2023.
[2] T. Oikarinen, S. Das, L. M. Nguyen, and T.-W. Weng, "Label-free concept bottleneck models." International Conference on Learning Representations, 2023.
[3] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, "Network dissection: Quantifying interpretability of deep visual representations." IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[4] P. W. Koh, T. Nguyen, Y. S. Tang, S. Mussmann, E. Pierson, B. Kim, and P. Liang, "Concept bottleneck models." International Conference on Machine Learning, 2020.
[5] J. Kirkpatrick et al., "Overcoming catastrophic forgetting in neural networks." Proceedings of the National Academy of Sciences, 2017.
[6] D. Lopez-Paz and M. Ranzato, "Gradient episodic memory for continual learning." Advances in Neural Information Processing Systems, 2017.
[7] F. Zenke, B. Poole, and S. Ganguli, "Continual learning through synaptic intelligence." International Conference on Machine Learning, 2017.
[8] Z. Li and D. Hoiem, "Learning without forgetting." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[9] S. Wang et al., "Training networks in null space of feature covariance for continual learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
[10] R. Aljundi et al., "Online continual learning with maximal interfered retrieval." Advances in Neural Information Processing Systems, 2019.