New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social). arxiv.org/abs/2507.16043 Surrogate gradients are popular for training SNNs, but some worry whether they really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇 🤖🧠🧪

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.058Z

First of all, using synthetic datasets we find that surrogate-gradient-trained SNNs can extract information encoded in interspike intervals and in patterns of coincidence. No problem there, and performance degrades gracefully as we start disrupting the temporal information.

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.059Z
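
For intuition, here is a minimal sketch of one such synthetic task in the same spirit (this is our own illustration, not the paper's actual generator; all names and parameter values are invented): two classes with identical spike counts that differ only in their interspike intervals.

import numpy as np

def make_isi_trial(label, n_spikes=10, isi_short=5.0, isi_long=15.0, rng=None):
    # Both classes emit the same number of spikes; only the interval between
    # them differs, so spike counts carry no class information.
    rng = np.random.default_rng() if rng is None else rng
    isi = isi_short if label == 0 else isi_long
    jitter = rng.normal(0.0, 0.5, size=n_spikes)  # small timing noise (ms)
    times = np.cumsum(np.full(n_spikes, isi)) + jitter
    return times  # spike times in ms for one neuron

labels = np.random.default_rng(0).integers(0, 2, size=1000)
trials = [make_isi_trial(y) for y in labels]

A coincidence-coding task can be built the same way, by sharing or not sharing spike times across neurons while keeping counts matched.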

How about more realistic datasets? We looked at the popular SHD spiking speech recognition dataset from @fzenke.bsky.social, but the problem is that it carries too much spike rate information: you can get around 50% accuracy using a multilayer perceptron on spike counts alone.

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.060Z
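
A rough sketch of the kind of rate-only baseline meant here (our own illustration using scikit-learn; the paper's baseline may differ in details): sum spikes per input neuron over the whole trial and feed the 700-dimensional count vector to an MLP.

import numpy as np
from sklearn.neural_network import MLPClassifier

def spike_counts(units, n_units=700):
    # Collapse a trial to per-neuron spike counts, discarding all timing.
    return np.bincount(units, minlength=n_units).astype(np.float32)

# Assuming each trial's spikes are given as an array of neuron indices:
# X = np.stack([spike_counts(u) for u in unit_arrays]); y = labels
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))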

So we designed a modified version of this dataset where we choose a subset of around 200 of the 700 neurons, and then randomly select a fixed number of spikes on each trial. This keeps the spike timing meaningful, but discards all spike count information.

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.061Z
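
Roughly, the construction looks like this (a simplified sketch under our own assumptions; the neuron subset size, spike budget, and variable names are illustrative):

import numpy as np

rng = np.random.default_rng(0)
keep_units = rng.choice(700, size=200, replace=False)  # ~200 of the 700 channels

def fix_spike_count(times, units, keep_units, n_keep=500, rng=rng):
    # Drop spikes from excluded neurons, then sample exactly n_keep spikes,
    # so every trial carries the same total spike count.
    mask = np.isin(units, keep_units)
    times, units = times[mask], units[mask]
    idx = rng.choice(len(times), size=n_keep, replace=len(times) < n_keep)
    idx.sort()  # preserve the original temporal order of the kept spikes
    return times[idx], units[idx]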

This dataset is harder, but an SNN with delays can still solve it to around 50% accuracy; perturb the spike timing, or switch to an MLP on spike counts, and accuracy drops to chance.

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.062Z
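
The timing perturbation is conceptually just additive Gaussian noise on spike times, something like the sketch below (our illustration; the sigma value and function name are ours):

import numpy as np

def jitter_spikes(times, sigma_ms=5.0, rng=None):
    # Add independent Gaussian noise to each spike time; larger sigma destroys
    # more of the temporal code while leaving spike counts untouched.
    rng = np.random.default_rng() if rng is None else rng
    return np.clip(times + rng.normal(0.0, sigma_ms, size=times.shape), 0.0, None)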

We think this is a good new dataset for testing the spike-timing abilities of spike-based learning algorithms and models. We've released the code and dataset in the same format as SHD, so it should be easy to start using: github.com/neural-recko... zenodo.org/records/1615...

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.063Z

We tried one last test: evaluating on data where time is reversed. Accuracy drops a little for SNNs without delays, but a lot for SNNs trained with delays. This better matches humans (we struggle to identify reversed speech), suggesting that delay-based models might be a better fit.

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.064Z
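
Time reversal itself is trivial to apply: reflect every spike time about the end of the trial (our sketch; the trial-duration argument and function name are ours):

import numpy as np

def reverse_time(times, units, trial_duration):
    # Reflect each spike time about the end of the trial and re-sort so the
    # spikes are again in chronological order.
    rev = trial_duration - times
    order = np.argsort(rev)
    return rev[order], units[order]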

We hope to add some more results to this before we send it to a journal for review, so please do give us your feedback! And many thanks to the first author, incredible MSc student Ziqiao Yu, and postdoc co-author @pengfei-sun.bsky.social.

Dan Goodman (@neuralreckoning.bsky.social) 2025-07-24T17:03:42.065Z

Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks

Preprint
 

Abstract

We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond firing rates. We first design synthetic tasks isolating intra-neuron inter-spike intervals and cross-neuron synchrony under matched spike counts. On more complex spike-based speech recognition datasets (Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC)), we construct variants where spike count information is eliminated and only timing information remains, and show that Surrogate GD-trained SNNs perform significantly above chance whereas purely rate-based models perform at chance level. We further evaluate robustness under biologically inspired perturbations (Gaussian jitter applied per spike or per neuron, and spike deletion), revealing consistent but perturbation-specific degradation. Networks show a sharp performance drop when spike sequences are reversed in time, with a larger drop for SNNs trained with delays, indicating that these networks behave in a more human-like way. To facilitate further studies of temporal coding, we have released our modified SHD and SSC datasets.
