A Unifying Framework for Neuro-Inspired, Data-Driven Detection of Low-Level Auditory Features

Weerts L, Clopath C, Goodman DFM
Cognitive Computational Neuroscience (2019)
doi: 10.32470/CCN.2019.1245-0
2019 Conference on Cognitive Computational Neuroscience, 13-16 September 2019, Berlin, Germany
 

Abstract

Our understanding of hearing and speech recognition rests on controlled experiments requiring simple stimuli. However, these stimuli often lack the characteristics of complex sounds such as speech. We propose an approach that combines neural modelling with machine learning to determine relevant low-level auditory features. Our approach bridges the gap between detailed neuronal models that capture specific auditory responses, and research on the statistics of real-world speech data and speech recognition. First, we introduce a feature detection model with a modest number of parameters that is compatible with auditory physiology. In order to objectively determine relevant feature detectors within the model parameter space, the model is tested in a speech classification task, using a simple classifier that approximates the information bottleneck. This framework allows us to determine the best model parameters and their neurophysiological and psychoacoustic implications. We show that our model can capture a variety of well-studied features (such as amplitude modulations and onsets) and allows us to unify concepts from different areas of hearing research. Our approach has various potential applications. Firstly, it could lead to new, testable experimental hypotheses for understanding hearing. Moreover, promising features could be directly applied as a new acoustic front-end for speech recognition systems.
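For reference, the "information bottleneck" that the classifier approximates is shown below in its standard textbook form; this is a minimal sketch, not notation taken from the paper. The mapping of symbols is an assumption: X would be the acoustic input, Y the speech class label, T the representation produced by the feature detectors, and β the usual compression/prediction trade-off parameter.

% Standard information-bottleneck objective (textbook form; symbol mapping assumed, not from the paper):
% choose a stochastic encoding p(t|x) that compresses X while staying predictive of Y.
\[
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
\]

Larger values of β favour representations T that retain more label-relevant information, at the cost of less compression of the input X.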
