
DY: Dynamics and Statistical Physics Division (Fachverband Dynamik und Statistische Physik)

DY 13: Poster - Glasses / Stat. Phys. Bio. / Networks (joint session DY/BP/CPP/SOE)

DY 13.16: Poster

Tuesday, April 1, 2014, 09:30–12:30, P1

Self-stabilizing Learning Rules in Neural Models driven by Objective Functions — •Rodrigo Echeveste and Claudius Gros — Institut für Theoretische Physik, Johann Wolfgang Goethe Universität, Max-von-Laue-Str. 1, Frankfurt am Main, Germany

In the present work, learning rules for a neuronal model are derived from two objective functions. On the one hand, the neuron’s firing bias is adjusted by minimizing the Kullback-Leibler divergence of the output distribution with respect to an exponential target distribution. On the other hand, learning rules for the synaptic weights are obtained by minimizing a Fisher information that measures the sensitivity of the input distribution with respect to the growth of the synaptic weights. In this way, we obtain rules that both account for Hebbian/anti-Hebbian learning and stabilize the system, avoiding unbounded weight growth. As a by-product of the derivation, a sliding threshold, similar to the one found in BCM models, is obtained for the learning rules.
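As an illustration of the first objective only, the minimal sketch below adapts the firing bias of a single sigmoidal neuron by brute-force finite-difference gradient descent on a discretized Kullback-Leibler divergence between the empirical output distribution and an exponential target. It is not the closed-form rule derived in this work; the Gaussian membrane potential, the gain, the target decay rate lam, and the binning are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50000)          # fixed sample of membrane potentials (assumed Gaussian)

def outputs(b, gain=2.0):
    """Firing rate y = sigmoid(gain * (x - b)) for the fixed input sample."""
    return 1.0 / (1.0 + np.exp(-gain * (x - b)))

def kl_to_exponential(y, lam=3.0, bins=30):
    """Discretized KL divergence between the empirical output distribution
    and a truncated, renormalized exponential target on (0, 1)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(y, bins=edges)
    p = p / p.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    q = np.exp(-lam * centers)
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Adapt the bias b by finite-difference gradient descent on the KL objective
# (a numerical stand-in for the analytic bias rule of the paper).
b, eta, eps = 0.0, 0.5, 0.05
for _ in range(300):
    grad = (kl_to_exponential(outputs(b + eps)) -
            kl_to_exponential(outputs(b - eps))) / (2 * eps)
    b -= eta * grad

print(f"adapted bias b = {b:.3f}, residual KL = {kl_to_exponential(outputs(b)):.4f}")
```

The bias drifts until the output histogram matches the sparse exponential target as closely as the chosen transfer function allows, which is the intuition behind adapting the firing bias via this objective.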

As a first application of these rules, the single-neuron case is studied in the context of principal component analysis and linear discrimination. We observe that the weight vector aligns with the principal component when the input distribution has a single direction of maximal variance but, when presented with two directions of equal variance, the neuron tends to pick the one with larger negative kurtosis. In particular, this property allows the neuron to linearly separate bimodal inputs. Robustness to large input sizes (∼1000 inputs) is also studied; the neuron is still able to find the principal component of the input distribution under these conditions.
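The alignment with the principal component described above can also be reproduced with a standard self-stabilizing Hebbian rule. The sketch below uses Oja's rule as a stand-in (it is not the Fisher-information-derived rule of this work) on an anisotropic Gaussian input, and checks that the weight vector converges to the direction of maximal variance while its norm stays bounded; the dimension, variances, and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Anisotropic Gaussian input: one direction of clearly maximal variance.
dim = 10
stds = np.ones(dim)
stds[0] = 3.0                      # principal direction along the first axis
w = rng.normal(0.0, 0.1, dim)      # initial synaptic weights
eta = 1e-3                         # learning rate

for _ in range(50000):
    x = stds * rng.normal(0.0, 1.0, dim)
    y = w @ x                      # linear neuron output
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term plus self-limiting decay

# Alignment with the true principal component (unit vector along axis 0)
pc = np.zeros(dim)
pc[0] = 1.0
print("|cos angle to PC|:", abs(w @ pc) / np.linalg.norm(w))
print("|w|:", np.linalg.norm(w))   # the decay term keeps the norm near 1
```

The kurtosis-based selection between directions of equal variance and the linear separation of bimodal inputs are specific to the rules derived in this work and are not captured by this stand-in.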
