
T: Particle Physics Division (Fachverband Teilchenphysik)

T 71: Data, AI, Computing, Electronics VII

T 71.1: Talk

Thursday, 19 March 2026, 16:15–16:30, KH 00.024

Shapes are not enough: Preservattack and its use for finding vulnerabilities and uncertainties in machine learning applications

Philip Bechtle¹, Lucie Flek², Philipp Alexander Jung³, Akbar Karimi², •Timo Saala¹, Alexander Schmidt³, Matthias Schott¹, Philipp Soldin⁴, Christopher Wiebusch⁴, and Ulrich Willemsen³

¹Institute of Physics, University of Bonn, Germany
²Bonn-Aachen Institute of Technology, University of Bonn, Germany
³Institute of Experimental Physics III B, RWTH Aachen University, Germany
⁴Institute of Experimental Physics III A, RWTH Aachen University, Germany

In High Energy Physics, machine learning has become crucial for advancing our understanding of fundamental phenomena. Deep learning models increasingly analyze both simulated and experimental data, supported by rigorous tests of physically motivated systematic uncertainties. Numerical evaluations quantify differences between data and simulation, and comparisons of marginal distributions and feature correlations in control regions are standard. However, physical guidance and control-region constraints cannot guarantee that all deviations are captured.

We propose a novel adversarial attack that exploits the space of hypothetical deviations between simulation and data remaining after such tests. The resulting perturbations stay within uncertainty bounds, evading standard validation, while still fooling the underlying model. We also suggest mitigation strategies and argue that robustness to adversarial effects is crucial when interpreting deep learning results in particle physics.
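The abstract does not spell out the attack's construction, but the core idea, a perturbation constrained to stay within per-feature uncertainty bands while degrading the model's output, can be illustrated with a toy sketch. Everything below (the fixed logistic "model", the band widths `sigma`, and all function names) is invented for illustration and is not the authors' actual Preservattack method:

```python
import numpy as np

# Toy "model": a fixed logistic classifier on 4 input features.
w = np.array([1.5, -2.0, 0.8, 1.0])
b = 0.1

def predict(x):
    """Signal probability of the toy classifier."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Hypothetical per-feature systematic uncertainty bands: any
# perturbation inside these bounds would pass a band-based
# validation of each marginal distribution.
sigma = np.array([0.05, 0.05, 0.10, 0.05])

def bounded_attack(x, steps=50, lr=0.05):
    """Sign-gradient steps to lower the signal score, projected
    back into the +-sigma bands after every step.

    For this toy model the gradient of the logistic output with
    respect to x is p * (1 - p) * w, so it has a closed form.
    """
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = predict(x + delta)
        grad = -(p * (1.0 - p)) * w            # push the score down
        delta += lr * np.sign(grad)            # FGSM-style sign step
        delta = np.clip(delta, -sigma, sigma)  # stay inside the bands
    return delta

x = np.array([0.4, -0.2, 0.3, 0.5])
delta = bounded_attack(x)
print(predict(x), predict(x + delta))  # score drops, bands respected
```

The projection step is what distinguishes this from an unconstrained attack: the perturbation never leaves the region that per-feature validation would accept, which is the loophole the abstract describes.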

Keywords: Adversarial Attack; Deep Learning; Uncertainty; Robustness

DPG-Verhandlungen 2026, Erlangen