Bridging the AI Explainability Gap in Cardiac Imaging: A Review of Hybrid Approaches Using Grad-CAM and SHAP
Keywords:
Cardiac Imaging, Explainable AI, Grad-CAM, SHAP
Abstract
The growing adoption of deep learning in medical imaging has substantially improved computer-assisted diagnosis and prognosis. However, interpreting and explaining the decisions these models make remains a significant obstacle to their integration into clinical practice. This review explores a hybrid approach that combines Gradient-weighted Class Activation Mapping (Grad-CAM) and Shapley Additive Explanations (SHAP) to bridge the artificial intelligence explainability gap in cardiac imaging. It discusses the strengths and limitations of these techniques, their application in cardiac imaging, and their potential integration into a machine learning pipeline for robust and trustworthy artificial intelligence systems. Furthermore, it emphasizes the importance of developing clinically translatable artificial intelligence systems that address the explainability gap between clinical experts and non-experts, ensuring wider inclusion of the diverse stakeholders involved in patient care and ultimately leading to improved patient outcomes and greater trust in AI-driven healthcare solutions.
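To make the two techniques the review pairs more concrete, the following is a minimal, self-contained sketch (not taken from the paper) of what each computes: Grad-CAM forms a ReLU-gated, gradient-weighted sum of convolutional activation maps, while SHAP assigns each input feature its exact Shapley value. The arrays, the toy linear "risk model" `f`, and the all-zeros baseline are illustrative assumptions; a real pipeline would obtain activations and gradients from a trained network and would approximate Shapley values rather than enumerate subsets.

```python
import itertools
import math
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted Class Activation Mapping.
    activations: (K, H, W) feature maps from a chosen conv layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps.
    """
    weights = gradients.mean(axis=(1, 2))             # global-average-pooled gradients, one weight per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0)                         # ReLU keeps only positively contributing regions

def exact_shap(f, x, baseline):
    """Exact Shapley values for model f over len(x) features.
    'Absent' features are replaced by the baseline value.
    Exponential in the number of features, so only viable for tiny inputs."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for s in itertools.combinations(others, r):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = math.factorial(len(s)) * math.factorial(n - len(s) - 1) / math.factorial(n)
                with_i, without = baseline.copy(), baseline.copy()
                for j in s:
                    with_i[j] = x[j]
                    without[j] = x[j]
                with_i[i] = x[i]
                phi[i] += weight * (f(with_i) - f(without))
    return phi

# Toy linear "risk model" (hypothetical): Shapley values of a linear
# model with a zero baseline recover w * x exactly.
w = np.array([0.5, -1.0, 2.0])
f = lambda z: float(w @ z)
x = np.array([1.0, 2.0, 3.0])
phi = exact_shap(f, x, baseline=np.zeros(3))

# Synthetic activations/gradients stand in for a real conv layer.
cam = grad_cam(np.random.rand(4, 8, 8), np.random.rand(4, 8, 8))
```

The sketch also shows why the two are complementary: `grad_cam` yields a spatial heatmap tied to one network layer (useful for localizing evidence in a cardiac image), whereas `exact_shap` is model-agnostic and attributes the prediction to individual input features such as clinical variables.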

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
IJISAE open-access articles are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This license lets readers reuse the material provided they give appropriate credit, link to the license, and indicate whether changes were made; if they remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.