School of Electrical and Computer Engineering

AI Seminar at the School of Electrical Engineering and Computer Science 24/06/2025

Departmental seminar

DiTASK: Diffeomorphic Multi-Task Fine-Tuning of Vision Transformers
24 June 2025

Pre-trained Vision Transformers now serve as powerful tools for computer vision. Yet adapting them efficiently to multiple tasks remains challenging: the rich hidden representations encoded in the learned weight matrices must be modified without inducing interference between tasks. Current parameter-efficient methods such as LoRA, which apply low-rank updates, force tasks to compete within constrained subspaces, ultimately degrading performance.

We introduce DiTASK, a novel Diffeomorphic Multi-Task Fine-Tuning approach that maintains pre-trained representations by preserving the singular vectors of the weight matrices, while enabling task-specific adaptation through neural diffeomorphic transformations of the singular values. In this way, DiTASK supports both shared and task-specific feature modulation with minimal added parameters. Our theoretical analysis shows that DiTASK achieves full-rank updates during optimization, preserving the geometric structure of pre-trained features and establishing a new paradigm for efficient multi-task learning (MTL). Our experiments on PASCAL MTL and NYUD show that DiTASK achieves state-of-the-art performance across four dense prediction tasks, using 75% fewer parameters than existing methods.
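The core mechanism described in the abstract can be illustrated with a small numerical sketch. The snippet below is not the authors' implementation; it assumes a toy fixed monotone map in place of DiTASK's learned neural diffeomorphism, and a random matrix in place of a ViT weight. It shows why preserving the singular vectors while transforming the singular values yields a full-rank update, in contrast to a rank-r LoRA update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained weight matrix (e.g. a ViT projection layer).
W = rng.standard_normal((8, 8))

# Factor the frozen weight: W = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Toy "diffeomorphism" of the singular values: a strictly increasing,
# hence invertible, pointwise map. DiTASK learns such a map per task with
# a small network; the constants a, b here are illustrative only.
a, b = 1.3, 0.1
def diffeo(x):
    return a * x + b * np.tanh(x)  # derivative a + b / cosh(x)**2 > 0

# Adapted weight reuses the pre-trained singular vectors U, Vt unchanged.
W_adapted = U @ np.diag(diffeo(s)) @ Vt

# The update U @ diag(diffeo(s) - s) @ Vt touches every singular direction,
# so it is generically full rank, unlike a low-rank LoRA update B @ A.
update_rank = np.linalg.matrix_rank(W_adapted - W)
r = 2
lora_update = rng.standard_normal((8, r)) @ rng.standard_normal((r, 8))
print(update_rank, np.linalg.matrix_rank(lora_update))
```

Because singular values are non-negative and the map is monotone, every diagonal entry of the update's core matrix is nonzero, which is what makes the adaptation full rank despite its tiny parameter count (here, two scalars per layer).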

Short Bio

Chaim Baskin is an Assistant Professor (Senior Lecturer) in the School of Electrical and Computer Engineering at Ben-Gurion University of the Negev (BGU). He serves as the Academic Head of Computing within the school and is a member of the Data Science Research Center. He also leads the INSIGHT Lab, which focuses on developing efficient, robust, and explainable AI systems for real-world deployment. Prof. Baskin is a Senior Member of the IEEE.

His research interests include foundation models, multimodal learning, vision-language models (VLMs), adversarial robustness, graph neural networks (GNNs), resource-efficient learning, domain adaptation, model interpretability, and real-world generalization. Prof. Baskin’s work spans both the theoretical and applied aspects of machine learning, computer vision, and artificial intelligence, addressing key challenges in scalability, robustness, and adaptability across domains such as autonomous systems, medical imaging, natural language processing, biology, and social networks. He collaborates with leading researchers worldwide and actively engages with industry partners.

He regularly publishes in top-tier venues such as NeurIPS, CVPR, ICLR, ICCV, JMLR, and TMLR, and frequently serves as an Area Chair and reviewer for leading conferences and journals in his field.