
Yael Feldman-Maggor
The Impact of Explainable AI on Teachers’ Trust and Acceptance of AI EdTech Recommendations
The Power of Domain-specific Explanations
Trust is crucial for teachers’ adoption of AI-enhanced educational technologies (AI-EdTech), yet how this trust is formed and maintained remains poorly understood. One aspect of system design that appears profoundly related to trust is transparency, which can be achieved through explainable AI (XAI) approaches. The present study explores the dynamic nature of teachers’ trust in AI-EdTech systems, how it relates to understandability, and XAI’s role in enhancing it. Building upon Hoff and Bashir’s ‘trust in automation’ model (2015), we propose a theoretical model that connects these factors. We validated the applicability of the proposed model to the AI in Education context using a mixed-methods, within-subject design that measured understandability, trust, and acceptance of AI recommendations among 41 in-service chemistry teachers. The results showed a significant positive correlation among the three factors, as anticipated by the model, and demonstrated the heterogeneous understandability of different XAI schemes, with domain-driven schemes proving superior to data-driven ones. In addition, the study reveals two further factors influencing teachers’ adoption of AI-EdTech: pedagogical perspectives and workload-reduction potential. The study provides a theoretical explanation of how different XAI schemes impact trust through understandability, and it emphasizes the need for greater attention to XAI that fosters trust and facilitates the acceptance of AI-EdTech.
| Field | Value |
| --- | --- |
| Publication language | English |
| Pages | 2889–2922 |
| Volume | 35 |
| Issue number | 5 |
| Publication status | Published - 01.12.2025 |