Seminar, 3.11.25, School of Electrical and Computer Engineering
Abstract:
Graph neural networks (GNNs) have been used to solve hard combinatorial problems, yet their behavior is often opaque. We develop an explainable-AI lens for neural solvers that reveals the algorithmic concepts they self-organize around, and we use these concepts to build better models and heuristics. On SAT, we probe NeuroSAT's dynamics and uncover a small set of low-dimensional, evolving concepts, most notably a notion of support that tracks assignment confidence. Making these concepts explicit yields a compact student model with comparable accuracy and faster inference, and suggests tweaks that accelerate WalkSAT. For graph coloring, vertex embeddings form a simplex-like geometry in which distances encode confidence and high-support anchors stabilize assignments. For Max-Clique, we demonstrate that GNNs learn degree-based ranking; combined with a least-probable-removal decoder, this approach outperforms standard rounding and transfers to Sparse PCA. Overall, we turn black-box predictions into algorithmic insight, linking latent geometry to classical heuristics and guiding learning-augmented solvers.
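To make the decoding idea concrete, here is a minimal sketch of what a least-probable-removal decoder for Max-Clique could look like. This is an illustrative reconstruction from the abstract alone, not the authors' implementation: the function name, the adjacency-dict representation, and the use of degree as a stand-in for GNN vertex probabilities are all assumptions.

```python
from itertools import combinations

def least_probable_removal(adj, probs):
    """Hypothetical decoder sketch: starting from all vertices, repeatedly
    discard the vertex with the lowest predicted clique-membership score
    until the surviving vertices form a clique.

    adj:   dict mapping each vertex to the set of its neighbors
    probs: dict mapping each vertex to a model confidence score
    """
    S = set(adj)

    def is_clique(S):
        # Every pair of surviving vertices must be connected.
        return all(v in adj[u] for u, v in combinations(S, 2))

    while not is_clique(S):
        # Remove the vertex the model is least confident about.
        S.discard(min(S, key=probs.__getitem__))
    return S

# Toy usage: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
# Degree as a proxy score, echoing the abstract's degree-based ranking.
probs = {v: len(adj[v]) for v in adj}
print(least_probable_removal(adj, probs))  # the triangle {0, 1, 2}
```

In contrast to standard rounding, which thresholds each vertex score independently, this greedy removal keeps the candidate set feasible by construction, which is presumably why it pairs well with a learned ranking.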





