Developing Knowledge-Centric Frameworks for Enhancing Web Search Diversity through Semantic Artificial Intelligence
Keywords:
Semantic Web Technologies, Machine Learning, Explainability, XAI

Abstract
Machine Learning methods, particularly Artificial Neural Networks, have attracted significant interest in both research and practical applications owing to their strong predictive performance. Nevertheless, these models often fail to provide explainable results, a requirement that is essential in high-stakes fields such as healthcare and transportation.
With respect to explainability, Semantic Web Technologies offer semantically interpretable tools that enable reasoning over knowledge bases. The question therefore arises of how Semantic Web Technologies and related concepts can improve the explanations produced by Machine Learning systems. Based on a systematic literature review, this article presents current approaches for combining Machine Learning with Semantic Web Technologies, with a focus on model explainability. We also highlight the domains and applications driving this research field and examine how explanations are presented to the user. Building on these observations, we outline directions for future research on the integration of Semantic Web Technologies with Machine Learning.
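To make the general idea concrete, the following minimal Python sketch (using rdflib) illustrates one way a symbolic knowledge base can enrich a black-box prediction with a human-readable explanation. The toy Turtle snippet, the explain helper, and the predicted label are hypothetical illustrations introduced here for exposition; they do not reproduce the method of any particular approach covered in the review.

```python
# Minimal sketch: annotate a hypothetical classifier output with background
# knowledge retrieved from a small RDF knowledge base (toy data, not from
# any surveyed system).
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/clinic#")

TTL = """
@prefix ex: <http://example.org/clinic#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Pneumonia rdfs:subClassOf ex:LungDisease ;
             rdfs:label "pneumonia" ;
             ex:typicalSymptom ex:Fever, ex:Cough .
ex:LungDisease rdfs:subClassOf ex:Disease .
ex:Fever rdfs:label "fever" .
ex:Cough rdfs:label "cough" .
"""

def explain(predicted_class: str) -> str:
    """Return a semantic justification for a black-box prediction."""
    g = Graph()
    g.parse(data=TTL, format="turtle")
    cls = EX[predicted_class]
    # Superclasses situate the prediction in the ontology's class hierarchy.
    parents = [o.split("#")[-1] for o in g.objects(cls, RDFS.subClassOf)]
    # Linked symptoms provide domain-level evidence the user can inspect.
    symptoms = [str(g.value(o, RDFS.label)) for o in g.objects(cls, EX.typicalSymptom)]
    return (f"Predicted '{predicted_class}': a kind of {', '.join(parents)}; "
            f"typically associated with {', '.join(symptoms)}.")

# In practice the label would come from the neural model's output layer.
print(explain("Pneumonia"))
```

The sketch deliberately separates the statistical prediction from the symbolic explanation step: the knowledge base contributes only the interpretable context, which is one recurring integration pattern discussed in the surveyed literature.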