Adél Bajcsi, Anna Bajcsi, Szabolcs Pável, Ábel Portik, Csanád Sándor, Annamária Szenkovits, Orsolya Vas, Zalán Bodó, and Lehel Csató

Comparative Study of Interpretable Image Classification Models

Explainable models in machine learning are increasingly popular due to interpretability-favoring architectural features that help humans understand and interpret the decisions made by the model. Although using this type of model – similarly to “robustification” – might degrade prediction accuracy, a better understanding of decisions can greatly aid the root cause analysis of failures of complex models, such as deep neural networks. In this work, we experimentally compare three self-explainable image classification models on two datasets – MNIST and BDD100K –, briefly describing their operation and highlighting their characteristics. We also evaluate the backbone models in order to observe the level of deterioration, if any, of the prediction accuracy due to the interpretable module introduced. To improve one of the models studied, we propose modifications to the loss function used for learning and suggest a framework for the automatic assessment of interpretability by examining the linear separability of the prototypes obtained.
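The abstract mentions assessing interpretability via the linear separability of the learned prototypes; the paper's actual framework is not detailed here. As an illustration only, a minimal sketch (assuming prototype embeddings and their class labels are available as plain arrays) could score separability by cross-validating a linear classifier on the prototypes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def prototype_separability(prototypes, labels, folds=5):
    """Score how linearly separable the prototype vectors are.

    prototypes: (n_prototypes, d) array of learned prototype embeddings
    labels:     (n_prototypes,) array with the class each prototype belongs to
    Returns the mean cross-validated accuracy of a linear classifier;
    values close to 1.0 suggest well-separated, class-specific prototypes.
    """
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, prototypes, labels, cv=folds)
    return scores.mean()

# Hypothetical usage: 10 classes, 5 prototypes per class, 128-dim embeddings
rng = np.random.default_rng(0)
protos = rng.normal(size=(50, 128))
proto_labels = np.repeat(np.arange(10), 5)
print(f"separability score: {prototype_separability(protos, proto_labels):.3f}")
```

The score is only a proxy: it measures whether prototypes of different classes occupy linearly separable regions of the embedding space, which is one of several properties an interpretable prototype set should satisfy.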

Reference:

DOI: 10.36244/ICJ.2023.5.4


Please cite this paper the following way:

Adél Bajcsi, Anna Bajcsi, Szabolcs Pável, Ábel Portik, Csanád Sándor, Annamária Szenkovits, Orsolya Vas, Zalán Bodó, and Lehel Csató, "Comparative Study of Interpretable Image Classification Models", Infocommunications Journal, Special Issue on Applied Informatics, 2023, pp. 20-26, https://doi.org/10.36244/ICJ.2023.5.4
