School of Electrical and Computer Engineering
Events and Seminars


Speaker: Oren Barkan (The Open University of Israel)

Abstract: In the last few years, we have been witnessing a revolution in AI that is beginning to penetrate all walks of life. However, the increasing size and complexity of modern AI models make it difficult to understand why they produce particular outputs, limiting our ability to trust, debug, and safely deploy them. Explainable AI (XAI) aims to address this challenge by providing insights into model decisions. In this talk, I will present my recent research in XAI. I will begin by motivating why XAI is key to safe and responsible progress in AI. I will then briefly overview several of my recent contributions to XAI, with examples from vision, language, and recommender systems.

The main part of the talk will focus on the recent Soft Local Completeness (SLOC) method (presented as an oral at ICCV 2025). SLOC rethinks completeness, a widely discussed property in XAI that requires the attributions (explanation elements) to sum to the model's output. While completeness intuitively suggests that a model's prediction is "completely explained" by its attributions, its global formulation is insufficient to guarantee faithful explanations. We contend that promoting completeness locally within attribution subregions, in a soft manner, can serve as a standalone guiding principle for producing faithful explanations. To this end, we introduce the concept of the completeness gap as a flexible measure of completeness and propose several optimization procedures that minimize this gap across subregions of the attribution map. From a causal perspective, we show that the attributions produced by our method effectively estimate the density of the average treatment effect of input features, providing a rigorous causal grounding for our approach. Extensive evaluations across multiple model architectures, modalities, and benchmarks demonstrate that our method yields state-of-the-art explanations.

Bio: Oren Barkan is a faculty member in the Computer Science Department at the Open University of Israel. Prior to his academic appointment, he held AI research positions at IBM, Google, and Microsoft. He received his B.Sc. in Computer Engineering and his M.Sc. in Computer Science from the Hebrew University, and his Ph.D. in Computer Science from Tel Aviv University. His research focuses on AI and explainability, with applications to computer vision, natural language processing, recommender systems, and audio synthesis. He has authored more than 70 papers and is an inventor on 25 patents in the field of AI.
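To make the completeness notion mentioned in the abstract concrete, here is a minimal, illustrative Python sketch. It is not the SLOC implementation: the toy model, the baseline-masking scheme, and the per-region "gap" definition below are assumptions chosen for illustration. The sketch only shows the standard global completeness condition (attributions sum to the change in model output relative to a baseline) and one plausible way to measure a completeness gap over subregions.

```python
import numpy as np

def model(x):
    # Toy stand-in for a black-box predictor f(x) -> scalar: a fixed linear score.
    w = np.linspace(0.1, 1.0, x.size).reshape(x.shape)
    return float((w * x).sum())

def completeness_gaps(x, baseline, attributions, regions):
    """Illustrative (assumed) per-region completeness gap:
    |sum of attributions in region - (f(x) - f(x with region set to baseline))|.
    This is a sketch of the idea, not the SLOC definition."""
    gaps = []
    for region in regions:                      # region: boolean mask over x
        x_masked = np.where(region, baseline, x)
        delta_f = model(x) - model(x_masked)    # output change when region is removed
        attr_sum = attributions[region].sum()   # explanation mass inside the region
        gaps.append(abs(attr_sum - delta_f))
    return gaps

# Tiny example on a 2x2 "image".
x = np.array([[1.0, 2.0], [3.0, 4.0]])
baseline = np.zeros_like(x)

# For this linear toy model, w * (x - baseline) satisfies completeness exactly.
w = np.linspace(0.1, 1.0, x.size).reshape(x.shape)
attributions = w * (x - baseline)

# Global completeness: attributions sum to f(x) - f(baseline).
print(abs(attributions.sum() - (model(x) - model(baseline))))   # ~0.0

# Local gaps over two hypothetical subregions (top row, bottom row).
regions = [np.array([[True, True], [False, False]]),
           np.array([[False, False], [True, True]])]
print(completeness_gaps(x, baseline, attributions, regions))     # ~[0.0, 0.0]
```

For this linear toy model the gaps are zero everywhere; for a real nonlinear network, attributions that satisfy the global condition can still leave large gaps on individual subregions, which is the kind of local behavior the talk addresses.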
12 January 2026