# SKM 2023 – Scientific Programme


# TUT: Tutorials

## TUT 1: Physics Meets Machine Learning (joint session DY/TUT/TT)

### TUT 1.2: Tutorial

### Sunday, March 26, 2023, 16:45–17:30, HSZ 01

**The Unreasonable Effectiveness of Gaussians in the Theory of Deep Neural Networks** — •Zohar Ringel — Racah Institute of Physics, Hebrew University in Jerusalem

The physical sciences are in many ways a success story of explaining fundamental phenomena with simple mathematics [1]. That physical phenomena can be organized in this manner is remarkable. Yet this simplicity does not necessarily carry over to the life sciences or data science. Indeed, prominent authors have argued against the desire to rely on neat mathematical structures when analyzing big data [2].

In the past half-decade, several results have emerged that balance mathematical simplicity with data-induced complexity. These can be seen as a middle ground between the above opposing views. The common denominator here is the use of Gaussian distributions as approximants of various quantities in deep neural networks (DNNs). Specifically, these Gaussians emerge when describing the outputs of DNNs with random weights, the outputs of trained DNNs at random times, the outputs of fixed DNNs over random input data, and the fluctuations of hidden DNN pre-activations. In this tutorial I will present these quantities, give arguments supporting their Gaussianity, and outline several theoretical implications.
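The first of these settings, the output of a DNN with random weights, can be illustrated with a minimal sketch. The idea (a central-limit argument over the hidden units) is that for a wide single-hidden-layer network with independently sampled weights, the scalar output at a fixed input becomes approximately Gaussian as the width grows. The network shape, scaling, and activation below are illustrative assumptions, not taken from the tutorial itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_samples = 10, 1_000, 2_000

# One fixed input; the randomness comes entirely from the weights.
x = rng.standard_normal(n_inputs)

outputs = np.empty(n_samples)
for i in range(n_samples):
    # 1/sqrt(fan-in) scaling keeps pre-activation variances O(1) as width grows.
    W = rng.standard_normal((n_hidden, n_inputs)) / np.sqrt(n_inputs)
    a = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)
    h = np.maximum(W @ x, 0.0)  # ReLU hidden layer
    outputs[i] = a @ h          # scalar output: a sum over n_hidden terms

# The output is a sum of n_hidden weakly dependent O(1/sqrt(n_hidden)) terms,
# so its distribution over random weights approaches a Gaussian as the width
# grows; a histogram of `outputs` should look bell-shaped.
print(float(outputs.mean()), float(outputs.std()))
```

With the readout weights `a` drawn independently of the hidden layer, the output has mean zero by construction; only its variance depends on the input through the hidden-layer statistics.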

[1] E. Wigner, The Unreasonable Effectiveness of Mathematics in the Natural Sciences (1960)

[2] A. Halevy, P. Norvig, F. Pereira, The Unreasonable Effectiveness of Data (2009)