School of Electrical and Computer Engineering, Ben-Gurion University of the Negev

School of Electrical and Computer Engineering

Developing and shaping tomorrow, at every scale: from quanta to neural networks

Upcoming Events

Free event
April 13
At 13:00
Building 37, Room 202
Name: Or Berebi
Academic advisor: Prof. Boaz Rafaely
Department: School of Electrical and Computer Engineering
Title: On Spatial Audio Perception for Low-Order Binaural Reproduction
Abstract: Binaural audio aims to recreate immersive 3D soundscapes through headphones, yet achieving high-fidelity realism often requires high-order processing that is computationally demanding. This seminar explores strategies to enhance spatial perception within low-order reproduction frameworks. We will examine how signal processing techniques can mitigate common artifacts such as spatial blurring and timbre coloration, which often plague lower-complexity systems. By integrating psychoacoustic insights with optimized virtual loudspeaker encoding and HRTF modeling, we can bridge the gap between computational efficiency and perceptual accuracy. The discussion will highlight methods for maintaining robust localization and immersion.
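The virtual-loudspeaker encoding mentioned in the abstract can be illustrated with a minimal first-order ambisonics sketch. This is not the speaker's method: the B-format convention (ACN/SN3D), the four-speaker ring, and the plain sampling decoder below are illustrative assumptions only.

```python
import numpy as np

def foa_encode(azimuth, elevation=0.0):
    """First-order ambisonic (B-format, ACN order, SN3D) gains for a plane wave."""
    return np.array([
        1.0,                                  # W (omnidirectional)
        np.sin(azimuth) * np.cos(elevation),  # Y (left-right)
        np.sin(elevation),                    # Z (up-down)
        np.cos(azimuth) * np.cos(elevation),  # X (front-back)
    ])

def virtual_speaker_decode(b_format, speaker_azimuths):
    """Sample the first-order sound field at each virtual-loudspeaker direction."""
    gains = np.stack([foa_encode(az) for az in speaker_azimuths])  # shape (L, 4)
    return gains @ b_format / len(speaker_azimuths)

# A source at 90 degrees (hard left) should excite the left virtual speaker most.
b = foa_encode(np.pi / 2)
speakers = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # front, left, back, right
signals = virtual_speaker_decode(b, speakers)
```

In a full binaural chain, each virtual-loudspeaker signal would then be convolved with the HRTF pair for its direction; the perceptual artifacts the talk addresses arise because a first-order field sampled this way spreads energy across neighboring speakers (note the nonzero front/back leakage above).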
Free event
April 15
At 13:00
Building 37, Room 202
Departmental Seminar
Name: Zikun Tan, PhD student in Electrical & Computer Engineering
Advisor: Prof. Ron Dabora
Title: Rate-Distortion Analysis for Sampled Correlated Cyclostationary Gaussian Processes
Abstract: We study the rate-distortion function (RDF) for sampled cyclostationary Gaussian processes with memory, representing, e.g., the sampling of communications signals for subsequent digital processing. Accounting for the inherent random jitter in local oscillators, and keeping the sampling interval smaller than the memory length of the continuous-time (CT) source process to facilitate reliable modeling, induces a discrete-time (DT) wide-sense almost cyclostationary (WSACS) process with memory as the model for the sampled signals. The main challenge follows from the information-instability of DT WSACS processes, which renders conventional information-theoretic approaches inapplicable. We use the information-spectrum framework to study the compression of incoming source sequences in two settings: when processing starts immediately upon reception, and when a bounded delay exists between consecutive source sequences. Our analysis provides novel insights relating source memory, sampling-frequency synchronization, and achievable compression rates. We show that, contrary to the sampled stationary case, the RDF for sampled cyclostationary processes is very sensitive to sampling-rate synchronization. We also demonstrate that the RDF is not a monotonically decreasing function of the sampling rate, and how introducing delay simplifies the compression scheme and lowers the achievable rates.
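As background for the talk, the classical RDF of a stationary Gaussian source is obtained by reverse water-filling over its power spectral density; the seminar generalizes beyond this setting to sampled WSACS processes. Below is a minimal numerical sketch of the textbook stationary baseline only (the bisection tolerance and grid are implementation choices, not from the talk).

```python
import numpy as np

def gaussian_rdf(psd, distortion):
    """Rate (bits/sample) of a stationary Gaussian source at mean distortion D,
    via reverse water-filling over a sampled power spectral density."""
    # Bisect on the water level theta so that the mean distortion equals D:
    # each frequency bin contributes min(theta, S(f)) to the distortion.
    lo, hi = 1e-12, float(psd.max())
    for _ in range(100):
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, psd).mean() < distortion:
            lo = theta  # too little distortion: raise the water level
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    # Bins above the water level are coded; the rest are discarded.
    return 0.5 * np.log2(np.maximum(psd / theta, 1.0)).mean()

# White Gaussian source, unit variance, D = 0.25: R(D) = 0.5*log2(1/D) = 1 bit.
rate = gaussian_rdf(np.ones(512), 0.25)
```

For a white source this reduces to the familiar R(D) = (1/2) log2(sigma^2 / D); the cyclostationary case analyzed in the talk has no such single-PSD characterization, which is what motivates the information-spectrum tools.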
Free event
April 20
At 13:00
Building 37, Room 202
Name: Ori Ernst
Title: Going Straight to the Source
Abstract: Quality assessments of generated text usually focus on the output and the model that produced it, measuring model confidence scores, detecting errors, and performing exhaustive evaluations. However, a critical component is frequently overlooked in this process: the input source. In this talk, I introduce Source Learning: a paradigm that examines the source only, before any text is generated. By analyzing the source alone, we can proactively predict the performance of large language models, identify problematic sentences, and inform interventions without generating any text. Because it bypasses inference, Source Learning is fast, resource-light, and model-agnostic, and it reveals the true origins of generation errors. Ultimately, it opens a new path for improving generation quality by shifting attention back to where it all begins: the input.
Bio: Ori is an NLP researcher who recently completed a postdoctoral fellowship as an IVADO fellow at McGill University and the Mila AI Institute, under the guidance of Prof. Jackie Cheung. His research centers on text generation, with a focus on multi-document setups, hallucination reduction, and text attribution. He earned his Ph.D. from the Natural Language Processing Lab at Bar-Ilan University. Ori has published extensively at leading ACL conferences, including a Best Paper Runner-Up award, and co-organizes the New Frontiers in Summarization (NewSumm) workshop. He has also held research-oriented roles in industry, including at Amazon, IBM, and Intel.
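The source-only idea can be made concrete with a toy sketch: score each input sentence from surface features alone, with no model inference, and surface the likeliest trouble spots before generation. The features and weights below are hypothetical illustrations, not the speaker's actual predictors.

```python
import re

def source_risk_score(sentence):
    """Toy source-only heuristic: longer, number- and entity-dense sentences
    are assumed (hypothetically) to be harder for a generator."""
    tokens = sentence.split()
    numbers = len(re.findall(r"\d+", sentence))                    # numeric facts
    capitalized = sum(1 for t in tokens[1:] if t[:1].isupper())    # rough entity count
    return 0.01 * len(tokens) + 0.1 * numbers + 0.05 * capitalized

def rank_problematic(source_sentences, top_k=1):
    """Rank source sentences by predicted difficulty, before any text is generated."""
    return sorted(source_sentences, key=source_risk_score, reverse=True)[:top_k]

easy = "The cat sat."
hard = "In 1987, ACME Corp reported 42 incidents across 13 sites in Q3."
flagged = rank_problematic([easy, hard], top_k=1)
```

A real instantiation would replace these hand-set weights with a learned predictor trained on (source, observed-error) pairs, but the interface is the same: no generation happens before the ranking is produced.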