How does the structure of a neural circuit shape its function? @neuralreckoning.bsky.social & I explore this in our new preprint: doi.org/10.1101/2025... 🤖🧠🧪 🧵1/9

Marcus Ghosh (@marcusghosh.bsky.social) 2025-08-01T08:26:57.716Z

We start from an artificial neural network with 3 sets of units and 9 possible weight matrices (or pathways). By keeping the two feedforward pathways (W_ih, W_ho) and adding the other 7 in any combination, we can generate 2^7 distinct architectures. All 128 are shown in the post above. 🧵2/9
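As a quick illustration of the counting above, here is a minimal Python sketch (not the preprint's code) that enumerates the 128 architectures by always keeping the two feedforward pathways and toggling the remaining seven. The `W_<source><target>` naming is an assumption made here for readability.

```python
from itertools import combinations

# The nine possible pathways between input (i), hidden (h) and output (o) units.
all_pathways = ["W_ii", "W_ih", "W_io",
                "W_hi", "W_hh", "W_ho",
                "W_oi", "W_oh", "W_oo"]

feedforward = {"W_ih", "W_ho"}                                # always kept
optional = [p for p in all_pathways if p not in feedforward]  # the other seven

# Every architecture = the feedforward pathways plus any subset of the optional seven.
architectures = [feedforward | set(extra)
                 for k in range(len(optional) + 1)
                 for extra in combinations(optional, k)]

print(len(architectures))  # 2**7 = 128
```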


This allows us to interpolate between two extremes: feedforward (no additional pathways) and fully recurrent (all nine pathways). We term the 126 architectures in between *partially recurrent neural networks* (pRNNs), as signal propagation can be bidirectional, yet sparse. 🧵3/9
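To make the idea of a pRNN concrete, below is a toy forward step in NumPy in which any subset of the nine pathways can be present; absent pathways simply contribute nothing to the update. This is an illustrative sketch under assumed tanh dynamics, not the model used in the preprint.

```python
import numpy as np

def prnn_step(x, state, weights):
    """One update of a toy pRNN with input (i), hidden (h) and output (o) units.

    `weights` maps pathway names such as "W_ih" to matrices; pathways missing
    from the dict contribute nothing. Illustrative only.
    """
    i, h, o = state
    act = {"i": i, "h": h, "o": o}
    pre = {"i": x.astype(float),               # external input drives the input units
           "h": np.zeros_like(h),
           "o": np.zeros_like(o)}
    for name, W in weights.items():
        src, dst = name[2], name[3]            # "W_ih" -> from i to h
        pre[dst] = pre[dst] + W @ act[src]
    return tuple(np.tanh(pre[k]) for k in ("i", "h", "o"))

# Feedforward-only example: 4 sensory cues -> 8 hidden units -> 2 actions.
rng = np.random.default_rng(0)
weights = {"W_ih": rng.normal(size=(8, 4)), "W_ho": rng.normal(size=(2, 8))}
state = (np.zeros(4), np.zeros(8), np.zeros(2))
state = prnn_step(rng.normal(size=4), state, weights)
```

Adding any of the other seven matrices to `weights` (e.g. a hidden-hidden pathway) turns the same update rule into one of the other 127 architectures.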


To compare pRNN function, we introduce a set of multisensory navigation tasks we call *multimodal mazes*. In these tasks, we simulate networks as agents with noisy sensors, which provide local cues about the shortest path through each maze. We add complexity by removing cues or walls. 🧵4/9
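For intuition about what an agent senses, here is a hypothetical sensor model: each neighbouring cell yields a binary cue (is it on the shortest path?) corrupted by Gaussian noise. Both the cue definition and the noise model are assumptions for illustration; the multimodal maze tasks define their own cue types.

```python
import numpy as np

def noisy_local_cues(on_shortest_path, noise_sd, rng):
    """Toy sensor: one cue per neighbouring cell (e.g. N, E, S, W).

    `on_shortest_path` marks which neighbours lie on the shortest path to the
    goal; Gaussian noise of scale `noise_sd` corrupts each cue.
    """
    cues = np.asarray(on_shortest_path, dtype=float)
    return cues + rng.normal(scale=noise_sd, size=cues.shape)

rng = np.random.default_rng(1)
print(noisy_local_cues([0, 1, 0, 0], noise_sd=0.5, rng=rng))  # noisy hint: "go East"
```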


We trained over 25,000 pRNNs on these tasks and measured their: 📈 Fitness (task performance) 💹 Learning speed 📉 Robustness to various perturbations (e.g. increasing sensor noise). From these data, we reach three main conclusions. 🧵5/9
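A robustness measurement of the kind listed above could look like the following sketch: re-evaluate a trained agent while sweeping the sensor-noise level and record mean performance. The `evaluate` callable is a placeholder, not part of the released package.

```python
import numpy as np

def robustness_curve(evaluate, noise_levels, n_episodes=100):
    """Mean task performance at each sensor-noise level.

    `evaluate(noise_sd)` is a placeholder: it should run one episode of a
    trained agent with the given sensor noise and return a score in [0, 1].
    """
    return [float(np.mean([evaluate(sd) for _ in range(n_episodes)]))
            for sd in noise_levels]

# Dummy evaluator whose success probability decays with noise, for demonstration.
rng = np.random.default_rng(2)
dummy = lambda sd: float(rng.random() < np.exp(-sd))
print(robustness_curve(dummy, noise_levels=[0.0, 0.5, 1.0, 2.0]))
```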


First, across tasks and functional metrics, many pRNN architectures perform as well as the fully recurrent architecture, despite having fewer pathways and as few as ¼ the number of parameters. This shows that pRNNs are efficient, yet performant. 🧵6/9
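A rough back-of-the-envelope count shows where the parameter saving comes from, assuming (purely for illustration) equally sized input, hidden and output populations; the exact ratio in the preprint depends on the layer sizes used.

```python
# Assume the input, hidden and output populations all contain n units,
# so each pathway is an n x n weight matrix.
n = 32
per_pathway = n * n
fully_recurrent = 9 * per_pathway   # all nine pathways
feedforward = 2 * per_pathway       # W_ih and W_ho only
print(feedforward / fully_recurrent)  # ~0.22, i.e. roughly a quarter of the parameters
```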


Second, to isolate how each pathway changes network function, we compare pairs of circuits that differ by a single pathway. Across pairs, we find that pathways have context-dependent effects. For example, here hidden-hidden connections decrease learning speed in one task but accelerate it in another. 🧵7/9
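Conceptually, the comparison works like this sketch: enumerate all architectures and pair up those whose pathway sets differ by exactly one element; the effect of a pathway in a given circuit context is then the change in a functional metric between the two members of the pair. This is an illustrative reconstruction, not the paper's analysis code.

```python
from itertools import combinations

all_pathways = ["W_ii", "W_ih", "W_io", "W_hi", "W_hh",
                "W_ho", "W_oi", "W_oh", "W_oo"]
feedforward = {"W_ih", "W_ho"}
optional = [p for p in all_pathways if p not in feedforward]

architectures = [frozenset(feedforward | set(extra))
                 for k in range(len(optional) + 1)
                 for extra in combinations(optional, k)]

# Pairs of circuits whose pathway sets differ by exactly one pathway; comparing
# any functional metric within a pair isolates that pathway's contribution.
pairs = [(a, b) for a, b in combinations(architectures, 2) if len(a ^ b) == 1]
print(len(pairs))  # 7 optional pathways x 2**6 contexts = 448 pairs
```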


Third, to explore why different circuits function differently, we measure three traits of every network. We find that different architectures learn distinct sensitivities and memory dynamics that shape their function. For example, we can predict a network’s robustness to noise from its memory. 🧵8/9
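As an example of a memory-style trait, the toy measure below counts how many zero-input steps a hidden state persists for under a given recurrent matrix. It is one of many possible measures and only loosely analogous to the traits used in the preprint.

```python
import numpy as np

def memory_timescale(W_hh, threshold=0.05, max_steps=200):
    """Toy memory trait: number of zero-input steps before a unit-norm hidden
    state decays below `threshold` under h <- tanh(W_hh @ h)."""
    h = np.ones(W_hh.shape[0]) / np.sqrt(W_hh.shape[0])
    for t in range(max_steps):
        if np.linalg.norm(h) < threshold:
            return t
        h = np.tanh(W_hh @ h)
    return max_steps

n, rng = 8, np.random.default_rng(3)
weak = 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)     # spectral radius ~0.5: short memory
strong = 1.5 * rng.normal(size=(n, n)) / np.sqrt(n)   # spectral radius ~1.5: long-lived activity
print(memory_timescale(weak), memory_timescale(strong))
```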


We’re excited about this work as it: ⭐ Explores a fundamental question: how does structure sculpt function in artificial and biological networks? ⭐ Provides new models (pRNNs), tasks (Multimodal mazes) and tools, in a pip-installable package: github.com/ghoshm/Multi... 🧵9/9


Partial recurrence enables robust and efficient computation

Preprint
 

Abstract

Neural circuits are sparse and bidirectional: signals flow from early sensory areas to later regions and back, yet between connected areas only some of the possible pathways exist. How does this structure, somewhere between feedforward and fully recurrent, shape circuit function? To address this question, we designed a new recurrent neural network model in which a set of weight matrices (i.e. pathways) can be combined to generate every network structure between feedforward and fully recurrent. We term these architectures partially recurrent neural networks (pRNNs). We trained over 25,000 pRNNs on a novel set of reinforcement learning tasks, designed to mimic multisensory navigation, and compared their performance across multiple functional metrics. Our findings reveal three key insights. First, many architectures match or exceed the performance of fully recurrent networks, despite using as few as one-quarter the number of parameters, demonstrating that partial recurrence enables energy-efficient, yet performant solutions. Second, each pathway’s functional impact is both task- and circuit-dependent. For instance, feedback connections enhance robustness to noise in some, but not all, contexts. Third, different pRNN architectures learn solutions with distinct input sensitivities and memory dynamics, and these computational traits help to explain their functional capabilities. Overall, our results demonstrate that partial recurrence enables robust and efficient computation, a finding that helps to explain why neural circuits are sparse and bidirectional, and how these principles could inform the design of artificial systems.
