Explainable AI Techniques for Data Integration: Enhancing Trust and Transparency in Automated Data Fusion

Authors

  • Arvind Kumar Chaudhary

Keywords

Explainable AI, Data Fusion, Trustworthy Systems, Schema Matching, Information Provenance

Abstract

The growing use of automated data integration in critical domains such as healthcare, finance, and government has raised concerns about reliability, transparency, and accountability. Although explainable artificial intelligence (XAI) has advanced considerably, most techniques still target classification tasks and overlook the full data fusion pipeline, from schema alignment through entity matching to source prioritization. This paper proposes a comprehensive framework that embeds explanations into the key stages of data integration, organized by a taxonomy of source-level, schema-level, and instance-level explanations. The architecture is grounded in information provenance and causal inference, and it combines symbolic logic, neural models, and post-hoc explanation tools such as SHAP and LIME. We introduce new evaluation metrics, including Explanation Fidelity Delta and Trust Alignment Score, and propose a benchmark suite for assessing explainable integration systems. A healthcare case study shows that the approach improves decision-making accuracy while preserving strong model performance. These results support the development of trustworthy, transparent data fusion systems that meet both ethical and regulatory requirements.
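
The abstract names two new metrics, Explanation Fidelity Delta and Trust Alignment Score, without defining them on this page. The sketch below is one plausible reading rather than the paper's definitions: fidelity as agreement between the fusion pipeline's decisions and a surrogate explanation model, the delta as the change in that fidelity once explanation components are integrated, and trust alignment as Spearman rank agreement between explainer-assigned and expert-assigned feature importances. All function names and formulas here are illustrative assumptions.

    # Illustrative sketch only: the paper names these metrics but this page
    # does not define them; the formulas below are assumed interpretations.
    import numpy as np

    def explanation_fidelity(pipeline_decisions, surrogate_decisions):
        """Assumed: fraction of integration decisions (e.g., entity-match
        accept/reject) that a surrogate explanation model reproduces."""
        a = np.asarray(pipeline_decisions)
        b = np.asarray(surrogate_decisions)
        return float(np.mean(a == b))

    def explanation_fidelity_delta(fidelity_with_xai, fidelity_baseline):
        """Assumed: change in surrogate fidelity after explanation
        components are embedded in the fusion pipeline."""
        return fidelity_with_xai - fidelity_baseline

    def trust_alignment_score(explainer_importance, expert_importance):
        """Assumed: Spearman rank correlation between feature importances
        produced by an explainer (e.g., SHAP) and expert judgments."""
        e = np.argsort(np.argsort(explainer_importance))  # ranks of explainer scores
        x = np.argsort(np.argsort(expert_importance))     # ranks of expert scores
        d = e.astype(float) - x.astype(float)
        n = len(d)
        return 1.0 - 6.0 * float(np.sum(d ** 2)) / (n * (n ** 2 - 1))

    # Toy usage on synthetic entity-matching decisions.
    rng = np.random.default_rng(0)
    pipeline = rng.integers(0, 2, size=200)
    surrogate = np.where(rng.random(200) < 0.9, pipeline, 1 - pipeline)
    print(explanation_fidelity(pipeline, surrogate))       # ~0.90
    print(explanation_fidelity_delta(0.92, 0.85))          # 0.07
    print(trust_alignment_score([0.9, 0.5, 0.3, 0.1],
                                [0.8, 0.6, 0.2, 0.4]))     # 0.8

Under this reading, a positive Explanation Fidelity Delta indicates that embedding explanations into the pipeline made its decisions easier for a surrogate to reproduce, while a Trust Alignment Score near 1 indicates the explainer highlights the same evidence experts consider decisive.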

References

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier [LIME].

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning.

Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust.

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI).

Guidotti, R., et al. (2018). A survey of methods for explaining black box models.

Gunning, D., et al. (2019). XAI—Explainable artificial intelligence.

Vilone, G., & Longo, L. (2020). Explainable artificial intelligence: A systematic review.

Gilpin, L. H., et al. (2018). Explaining explanations: An overview of interpretability of machine learning.

Biran, O., & Cotton, C. (2017). Explanation and justification in machine learning: A survey.

Zhang, Q., et al. (2019). Visual interpretability for deep learning: A survey.

Tjoa, E., & Guan, C. (2020). A survey on explainable artificial intelligence (XAI): Toward medical XAI.

Samek, W., et al. (2019). Explainable AI: Interpreting, explaining and visualizing deep learning.

Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions [SHAP].

Montavon, G., et al. (2018). Methods for interpreting and understanding deep neural networks.

Shrikumar, A., et al. (2017). Learning important features through propagating activation differences [DeepLIFT].

Amann, J., et al. (2020). Explainability for artificial intelligence in healthcare.

Holzinger, A., et al. (2017). What do we need to build explainable AI systems for the medical domain?

Choi, E., et al. (2016). RETAIN: An interpretable predictive model for healthcare.

Tonekaboni, S., et al. (2019). What clinicians want: Contextualizing explainable machine learning for clinical end use.

Hoffman, R. R., et al. (2018). Metrics for explainable AI: Challenges and prospects.

Goldstein, A., et al. (2015). Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation.

Wang, D., et al. (2019). Design challenges in building explainable AI (XAI) systems.

Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”.

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy.

Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.

Pearl, J. (2019). The seven tools of causal inference, with reflections on machine learning.

Lipton, Z. C. (2018). The mythos of model interpretability.

Martens, D., & Provost, F. (2014). Explaining data-driven document classifications.

Arrieta, A. B., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.

Published

23.02.2024

How to Cite

Arvind Kumar Chaudhary. (2024). Explainable AI Techniques for Data Integration: Enhancing Trust and Transparency in Automated Data Fusion. International Journal of Intelligent Systems and Applications in Engineering, 12(17s), 900–. Retrieved from https://www.ijisae.org/index.php/IJISAE/article/view/7556

Section

Research Article