Upcoming Events
April 20
at 13:00
Building 37, Room 202
Title: Going Straight to the Source
Speaker: Ori Ernst
Abstract:
Quality assessments of generated text usually focus on the output and the model that produced it, measuring model confidence scores, detecting errors, and performing exhaustive evaluations. However, a critical component is frequently overlooked in this process: the input source. In this talk, I introduce Source Learning: a paradigm that examines the source only, before any text is generated. By analyzing the source alone, we can proactively predict the performance of large language models, identify problematic sentences, and inform interventions without generating any text. Because it bypasses inference, Source Learning is fast, resource-light, and model-agnostic, and it reveals the true origins of generation errors. Ultimately, it opens a new path for improving generation quality by shifting attention back to where it all begins: the input.
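As a purely illustrative sketch of the idea in the abstract (this is not the method presented in the talk; the features, scoring formula, and threshold below are invented assumptions), one could imagine scoring source sentences before any text is generated, flagging likely trouble spots without running a model:

```python
# Hypothetical sketch of a "source-only" risk score: flag source sentences
# BEFORE generation, with no model inference. Everything here is an
# illustrative assumption, not the talk's actual method.
from dataclasses import dataclass


@dataclass
class SourceScore:
    sentence: str
    risk: float  # higher = assumed more likely to induce generation errors


def sentence_risk(sentence: str) -> float:
    """Toy risk score: assumes long sentences and symbol-heavy sentences
    are harder for a generator (an invented proxy, for illustration)."""
    tokens = sentence.split()
    length_penalty = min(len(tokens) / 40.0, 1.0)
    symbol_penalty = sum(not t.isalnum() for t in tokens) / max(len(tokens), 1)
    return 0.7 * length_penalty + 0.3 * symbol_penalty


def flag_problematic(source: list[str], threshold: float = 0.5) -> list[SourceScore]:
    """Return the source sentences whose risk exceeds the threshold."""
    scores = [SourceScore(s, sentence_risk(s)) for s in source]
    return [sc for sc in scores if sc.risk >= threshold]
```

Because the score depends only on the input, it can run before committing any compute to inference, which is the property the abstract emphasizes.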
Bio:
Ori is an NLP researcher who recently completed a postdoctoral fellowship as an IVADO fellow at McGill University and the Mila AI Institute, under the guidance of Prof. Jackie Cheung. His research centers on text generation, with a focus on multi-document setups, hallucination reduction, and text attribution. He earned his Ph.D. from the Natural Language Processing Lab at Bar-Ilan University. Ori has published extensively at leading ACL conferences, including a Best Paper Runner-Up award, and co-organizes the New Frontiers in Summarization (NewSumm) workshop. He has also held research-oriented roles in industry, including at Amazon, IBM, and Intel.
April 26
at 16:30
Building 37, Room 202
ICASSP + ICC Prep
Sunday April 26th
Building 37, Room 201
Session 1 (16:30-18:10; oral)
16:30-16:40 N. Shlezinger, opening remarks
16:40-17:00 A. Golan, AI-Aided Consensus Kalman Tracking in Partially-Known State-Space Models
17:00-17:20 S. Konstantino, Unsupervised Adaptation of AI DoA Estimators via Downstream Tracking
17:20-17:40 T. Shiran, Deep Unfolded Subspace-Based DoA Recovery from Sparse Arrays
17:40-18:00 O. Eger, Learning to Refine LLRs: Modular Neural Augmentation for MIMO-OFDM Receivers
18:00-18:10 Setting up posters + Coffee break
Session 2 (18:10-19:00; posters)
M. Tatarjitzky, AmbiDrop: Array-Agnostic Speech Enhancement Using Ambisonics Encoding and Dropout-Based Learning
O. Weisman, Conformal Prediction Aided Kalman Filters with Confidence Intervals
M. Azoulay, Dynamic Spike-and-Slab Particle Filtering for Topology Tracking
G. Francis and G. Masury, Millisecond-Order Self-Adaptive AI WiFi Receiver
April 27
at 13:00
Building 37, Room 202
Student Name: George Vershinin
Degree: Ph.D.
Advisors: Prof. Omer Gurewitz & Prof. Asaf Cohen
Talk Title: On Cost-Aware Designs for Sequential Hypothesis Testing
Abstract: We study the sequential hypothesis-testing framework, where a single decision maker adaptively selects sensing actions to identify the true hypothesis under an average-error constraint as swiftly as possible. By associating each action with a positive cost, the decision maker seeks to minimize the expected total cost, which embodies delay in the literal sense, in contrast to previous works that measure delay using the expected number of samples until the decision is made.
We focus on two types of action costs: the simple constant cost model and the random cost model. Under the random cost model, we study two cost-revelation models: ex-post, in which the cost is revealed only after the sample is obtained (embodying billing), and ex-ante, in which the cost accrues before the sample is acquired (embodying latency). Additionally, the decision maker may opt to abandon the current action and apply a different action (i.e., preemption).
We prove that optimal (cost-aware) strategies in such settings should maximize the ratio of the information bits gained per sample to the expected cost per sample induced by the action-selecting policy, rather than optimizing the intuitive per-step information gain per cost. In addition, we analyze the effects of preemption under the two cost-revelation models. In the ex-post model, preemption does not affect performance, and the problem reduces to the constant-cost setting. In contrast, in the ex-ante model, preemption inflates the number of actions taken but may boost performance. We characterize when preemption is beneficial and study several families in detail.
Building on these insights, we adapt classical sequential testing schemes to cost-aware settings while preserving their asymptotic guarantees and improving performance in the finite regime.
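As a hedged illustration of the selection rule described in the abstract (the action names and numbers below are invented, and this sketch is not the speaker's implementation), the ratio-of-expectations criterion can be written as: pick the action maximizing E[information per sample] / E[cost per sample], rather than any per-step gain-per-cost heuristic:

```python
# Illustrative sketch of the cost-aware action-selection rule from the
# abstract: argmax over actions of E[info bits per sample] / E[cost per
# sample] (a ratio of expectations). All values are invented examples.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    exp_info_bits: float  # expected information gained per sample
    exp_cost: float       # expected cost per sample (assumed positive)


def best_cost_aware(actions: list[Action]) -> Action:
    """Return the action maximizing the ratio E[info] / E[cost]."""
    return max(actions, key=lambda a: a.exp_info_bits / a.exp_cost)


# A cheap but uninformative action can lose to a pricier, sharper one
# once both are measured in bits per unit cost.
actions = [
    Action("cheap-noisy", exp_info_bits=0.2, exp_cost=1.0),   # 0.20 bits/cost
    Action("pricey-sharp", exp_info_bits=1.5, exp_cost=5.0),  # 0.30 bits/cost
]
```

The point of the abstract's result is that this ratio is taken over the whole policy's expectations, not computed greedily step by step; the toy above only shows the shape of the criterion.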