TOWARDS USER-CENTRIC EXPLANATIONS FOR EXPLAINABLE MODELS: A REVIEW

Authors

  • Ali Hassan Universiti Kebangsaan Malaysia (UKM)
  • Riza Sulaiman University of Prince Mugrin, Madinah
  • Mansoor Abdullateef Abdulgabber Universiti Kebangsaan Malaysia (UKM)
  • Hasan Kahtan University of Malaysia, Kuala Lumpur

Keywords:

User-Centric, Explainable Artificial Intelligence, Human-AI Interaction, Machine Learning

Abstract

Recent advances in artificial intelligence (AI), particularly in machine learning (ML), have shown that these models can be remarkably successful, producing encouraging results and enabling diverse applications. Despite this promise, without transparency in machine learning models it is difficult for stakeholders to trust their results, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study reviews the literature on human-centric machine learning and new approaches to user-centric explanations for deep learning models, and highlights the challenges and opportunities facing this area of research. The aim is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of deploying machine learning models is gaining the trust of end users.


Published

2021-09-01

How to Cite

Ali Hassan, Riza Sulaiman, Mansoor Abdullateef Abdulgabber, & Hasan Kahtan. (2021). TOWARDS USER-CENTRIC EXPLANATIONS FOR EXPLAINABLE MODELS: A REVIEW. JOURNAL INFORMATION AND TECHNOLOGY MANAGEMENT (JISTM), 6(22), 36–50. Retrieved from https://gaexcellence.com/jistm/article/view/1124