Publication: XAI implementation for deep learning classification system
Date
2024-08
Authors
Tiong, Ji Kai
Abstract
In recent years, the implementation of deep learning techniques in medical diagnostics has increased. However, the growing complexity of deep learning models presents a significant challenge in terms of interpretability, which is crucial for gaining the trust of healthcare professionals and meeting regulatory requirements. The primary objective of this research is therefore to enhance the transparency and understandability of these models by applying Explainable Artificial Intelligence (XAI) techniques, namely Local Interpretable Model-Agnostic Explanations (LIME) and Integrated Gradients (IG), to pretrained models used to classify osteosarcoma images. This is achieved by conducting a comprehensive literature review on osteosarcoma and existing XAI techniques, followed by the practical application of LIME and IG to analyse and explain the outputs of the pretrained VGG16 and DenseNet201 models. The results show that LIME and IG can effectively help explain the models' predictions, with LIME's results being easier to understand and analyse than those of IG, as LIME highlights the decisive regions of the image whereas IG highlights the decisive pixels. These techniques improve the interpretability of model outputs, thereby facilitating better understanding and trust among medical practitioners. In conclusion, integrating XAI into deep learning models is essential for their ethical and reliable use in healthcare. The models could be further improved by expanding and diversifying the dataset and by avoiding filtering of the images used for training.
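For context, a minimal sketch of how LIME might be applied to a pretrained VGG16 classifier, as the abstract describes. The ImageNet weights, image path, and parameter values are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import vgg16
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in for the thesis's fine-tuned osteosarcoma classifier:
# ImageNet weights are loaded here; in practice the fine-tuned
# checkpoint would be loaded instead.
model = vgg16(weights="IMAGENET1K_V1")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_fn(images):
    # LIME calls this with a batch of perturbed images as (N, H, W, C) arrays.
    batch = torch.stack([preprocess(Image.fromarray(img.astype(np.uint8)))
                         for img in images])
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs.numpy()

# "osteosarcoma_sample.png" is a hypothetical example image.
image = np.array(Image.open("osteosarcoma_sample.png")
                 .convert("RGB").resize((224, 224)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=1000)

# Highlight the superpixels (regions) most responsible for the top class;
# this region-level view is what makes LIME's output easy to read.
label = explanation.top_labels[0]
temp, mask = explanation.get_image_and_mask(label, positive_only=True,
                                            num_features=5, hide_rest=False)
overlay = mark_boundaries(temp, mask)
Image.fromarray((overlay * 255).astype(np.uint8)).save("lime_overlay.png")
```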
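Integrated Gradients, by contrast, attributes the prediction to individual input pixels by integrating gradients along a path from a baseline (e.g. a black image) to the actual input. A sketch using the Captum library on DenseNet201 follows; the thesis does not specify its tooling, so Captum and the file names here are assumptions:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import densenet201
from captum.attr import IntegratedGradients

# Again a stand-in for the fine-tuned model from the thesis.
model = densenet201(weights="IMAGENET1K_V1")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "osteosarcoma_sample.png" is a hypothetical example image.
x = preprocess(Image.open("osteosarcoma_sample.png").convert("RGB")).unsqueeze(0)
target = model(x).argmax(dim=1).item()

# Integrate gradients along the straight line from an all-zero (black)
# baseline to the input; n_steps controls the Riemann approximation.
ig = IntegratedGradients(model)
attributions = ig.attribute(x, baselines=torch.zeros_like(x),
                            target=target, n_steps=50)

# Collapse colour channels into a per-pixel importance map; unlike LIME's
# region-level mask, this is a pixel-level attribution.
pixel_importance = attributions.abs().sum(dim=1).squeeze(0)
print(pixel_importance.shape)  # torch.Size([224, 224])
```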