Learning to localise sounds with spiking neural networks
Goodman DFM, Brette R
Advances in Neural Information Processing Systems 23 (2010)
Abstract
To localise the source of a sound, we use location-specific properties
of the signals received at the two ears caused by the asymmetric
filtering of the original sound by our head and pinnae, the head-related
transfer functions (HRTFs). These HRTFs change throughout an organism's
lifetime, during development for example, and so the required neural
circuitry cannot be entirely hardwired. Since HRTFs are not directly
accessible from perceptual experience, they can only be inferred from
filtered sounds. We present a spiking neural network model of sound
localisation based on extracting location-specific synchrony patterns,
and a simple supervised algorithm to learn the mapping between synchrony
patterns and locations from a set of example sounds, with no previous
knowledge of HRTFs. After learning, our model was able to accurately
localise new sounds in both azimuth and elevation, including the
difficult task of distinguishing sounds coming from the front and back.
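The paper's model learns the synchrony-to-location mapping from HRTF-filtered example sounds; that pipeline is not reproduced here. As a minimal toy illustration of the underlying idea of location-specific synchrony, the sketch below (plain NumPy, not the authors' code) recovers an interaural time difference by counting spike coincidences across a bank of internal delays, Jeffress-style. All names, thresholds, and parameters are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the paper's model): recover an interaural time
# difference (ITD) by counting spike coincidences across a bank of
# candidate internal delays. All parameters here are illustrative.
rng = np.random.default_rng(0)

n = 4096
true_itd = 7                           # samples by which the right ear lags
sound = rng.standard_normal(n + true_itd)
left = sound[true_itd:true_itd + n]    # leading ear
right = sound[:n]                      # lagging ear: right[i] == left[i - true_itd]

def spike_times(x, thresh=1.0):
    """Indices of upward threshold crossings: a crude spiking front end."""
    above = x > thresh
    return np.flatnonzero(above[1:] & ~above[:-1])

sl, sr = spike_times(left), spike_times(right)

def coincidences(a, b):
    """Number of spikes in `a` that land exactly on a spike in `b`."""
    b = set(b.tolist())
    return sum(int(t) in b for t in a)

# Each channel delays the leading ear by one candidate internal delay;
# the channel whose delay cancels the ITD sees the most coincidences.
delays = np.arange(16)
counts = [coincidences(sl + d, sr) for d in delays]
best = int(delays[np.argmax(counts)])
print(best)  # the winning channel's delay matches the ITD
```

In the full model, a bank of such coincidence channels (driven by HRTF-filtered inputs rather than a pure delay) yields a synchrony pattern per location, and the learned mapping from patterns to locations does the rest.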
Links
Related software
Brian: a Python simulator for spiking neural networks.