Yael Feldman-Maggor

Senior Academic Staff

Explainable AI for Unsupervised Machine Learning

A Proposed Scheme Applied to a Case Study with Science Teachers

Yael Feldman-Maggor, Tanya Nazaretsky, Giora Alexandron

Explainable Artificial Intelligence (XAI) seeks to render Artificial Intelligence (AI) models transparent and comprehensible, potentially increasing trust and confidence in AI recommendations. This research explores the realm of XAI within unsupervised educational machine learning, a relatively under-explored topic within Learning Analytics (LA). It introduces an XAI framework designed to elucidate clustering-based personalized recommendations for educators. Our approach involves a two-step validation: computational verification followed by domain-specific evaluation of its impact on teachers’ AI acceptance. Through interviews with K-12 educators, we identified key themes in teachers’ attitudes toward the explanations. The main contribution of this paper is a new XAI scheme for unsupervised educational machine-learning decision-support systems; the second is shedding light on the subjective nature of educators’ interpretation of XAI schemes and visualizations.

Publication language: English
Pages: 436-444
Publication status: Published - 01.01.2024

Keywords

Clustering
Explainable Artificial Intelligence
Personalized Learning

ASJC Scopus subject areas

Information Systems
Computer Science Applications
Access to document
DOI: 10.5220/0012687000003693
Other files and links
Link to publication in Scopus