Advantages of heterogeneity of parameters in spiking neural network training

Perez-Nieves N, Leung VCH, Dragotti PL, Goodman DFM
Cognitive Computational Neuroscience (2019)
doi: 10.32470/CCN.2019.1173-0
2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany

Abstract

It is very common in studies of the learning capabilities of spiking neural networks (SNNs) to use homogeneous neural and synaptic parameters (time constants, thresholds, etc.). Even in studies in which these parameters are distributed heterogeneously, the advantages or disadvantages of the heterogeneity have rarely been studied in depth. By contrast, in the brain, neurons and synapses are highly diverse, leading naturally to the hypothesis that this heterogeneity may be advantageous for learning. Starting from two state-of-the-art methods for training spiking neural networks (Nicola & Clopath, 2017; Shrestha & Orchard, 2018), we found that adding parameter heterogeneity reduced errors when the network had to learn more complex patterns, increased robustness to hyperparameter mistuning, and reduced the number of training iterations required. We propose that neural heterogeneity may be an important principle allowing brains to learn robustly in real-world environments with highly complex structure, where task-specific hyperparameter tuning may be impossible. Consequently, heterogeneity may also be a good candidate design principle for artificial neural networks, to reduce the need for expensive hyperparameter tuning as well as to reduce training time.
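The heterogeneity in question concerns per-neuron parameters such as membrane time constants and firing thresholds. As a rough illustration only (this is not the authors' implementation; the neuron model, parameter distributions, and numerical values below are assumptions chosen for demonstration), the following NumPy sketch simulates a leaky integrate-and-fire layer once with shared parameters and once with parameters drawn independently per neuron:

```python
import numpy as np

def simulate_lif_layer(inputs, tau_m, v_th, dt=1e-3):
    """Simulate a layer of leaky integrate-and-fire neurons.

    inputs : (timesteps, n_neurons) array of input currents
    tau_m  : per-neuron membrane time constants (heterogeneous if they differ)
    v_th   : per-neuron firing thresholds
    Returns a (timesteps, n_neurons) binary spike array.
    """
    n_steps, n_neurons = inputs.shape
    v = np.zeros(n_neurons)
    spikes = np.zeros((n_steps, n_neurons))
    alpha = np.exp(-dt / tau_m)                    # per-neuron leak factor
    for t in range(n_steps):
        v = alpha * v + (1.0 - alpha) * inputs[t]  # leaky integration
        fired = v >= v_th
        spikes[t] = fired
        v = np.where(fired, 0.0, v)                # reset after a spike
    return spikes

rng = np.random.default_rng(0)
n_neurons, n_steps = 100, 500
inputs = rng.normal(1.0, 0.5, size=(n_steps, n_neurons))

# Homogeneous layer: every neuron shares one time constant and threshold.
homog = simulate_lif_layer(inputs,
                           tau_m=np.full(n_neurons, 20e-3),
                           v_th=np.full(n_neurons, 1.0))

# Heterogeneous layer: time constants and thresholds drawn per neuron
# (the distributions here are illustrative, not taken from the paper).
heterog = simulate_lif_layer(inputs,
                             tau_m=rng.gamma(3.0, 10e-3, size=n_neurons),
                             v_th=rng.normal(1.0, 0.2, size=n_neurons))

print("mean firing rate (homogeneous):  ", homog.mean())
print("mean firing rate (heterogeneous):", heterog.mean())
```

In the paper's setting, such per-neuron parameters are either drawn from a distribution or trained alongside the synaptic weights; the sketch above only contrasts shared versus per-neuron parameters in the forward simulation.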
