#preprint! Defining neural modularity is hard, and there's a long history of attempts. We use toy ANNs to show that structural and functional definitions are not tightly related, that resource constraints matter, and that we need to start thinking about temporal dynamics.
— Dan Goodman (@neuralreckoning) July 28, 2023
🧵 with @GabrielBna1 https://t.co/h70TXa7jFT
To get at structural modularity we use the simplest possible setup: two modules, each with input and output, and a varying number of connections between them, present with probability p. The network is highly modular when p is close to 0 and not modular at all when p is close to 1, with structural modularity Q = (1 - p) / (2(1 + p)). pic.twitter.com/4ikd3Twu6s
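For concreteness, here is a minimal sketch (not the paper's code) of that two-module connectivity and the corresponding Q; the function names and module sizes are illustrative.

```python
# Minimal sketch of the two-module setup: connections are dense within each
# module and present with probability p between modules, giving structural
# modularity Q = (1 - p) / (2 * (1 + p)).
import numpy as np

def structural_modularity(p: float) -> float:
    """Q for the symmetric two-module network: 0.5 at p = 0, 0 at p = 1."""
    return (1 - p) / (2 * (1 + p))

def two_module_mask(n_per_module: int, p: float, rng=None) -> np.ndarray:
    """Boolean recurrent-weight mask: dense within modules, sparse between."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 * n_per_module
    within = np.zeros((n, n), dtype=bool)
    within[:n_per_module, :n_per_module] = True   # module 1 -> module 1
    within[n_per_module:, n_per_module:] = True   # module 2 -> module 2
    between = (rng.random((n, n)) < p) & ~within  # cross-module links with prob p
    return within | between

print(structural_modularity(0.0), structural_modularity(1.0))  # 0.5 0.0
```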
The input consists of pairs of MNIST digits. The task is to return one or the other of the two digits depending on whether their parities are the same or different. If each module specialises on one digit, the modules only need to communicate a single parity bit. (Messy details: see paper.)
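As a concrete illustration of the labeling rule (a sketch only; the exact conventions are in the paper, and the helper names here are hypothetical):

```python
# Sketch of the parity-dependent target: given a pair of digit labels, return
# one of them when their parities match and the other when they differ.
# Which digit goes with which case is just the convention of this sketch.
import numpy as np

def pair_target(label1: int, label2: int) -> int:
    same_parity = (label1 % 2) == (label2 % 2)
    return label1 if same_parity else label2

def make_pairs(labels: np.ndarray, rng=None):
    """Randomly pair up digit labels and compute the parity-dependent targets."""
    rng = np.random.default_rng() if rng is None else rng
    idx1, idx2 = rng.permutation(len(labels)), rng.permutation(len(labels))
    targets = np.array([pair_target(labels[i], labels[j]) for i, j in zip(idx1, idx2)])
    return idx1, idx2, targets

labels = np.array([3, 8, 5, 2, 7, 4])   # stand-in for MNIST labels
print(make_pairs(labels)[2])
```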
Getting at functional modularity is harder! We came up with three definitions, which turn out to relate to ideas from Fodor, Shallice and Cooper: either train decoders on frozen (and ablated) networks, or look for correlations in module states. The three measures give coherent results! 🥳 pic.twitter.com/nz3dBQOWaL
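A rough sketch of the decoder-style probe (one of the three measures, simplified; the ablation and correlation variants are described in the paper). The helper names are ours, and sklearn's logistic regression stands in for whatever decoder is actually trained:

```python
# Freeze the trained network, record one module's hidden activations, and ask
# how well each digit identity can be linearly decoded from them. A module is
# "specialised" if it is much more decodable for one digit than the other.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def decoding_accuracy(activations: np.ndarray, labels: np.ndarray) -> float:
    """Held-out accuracy of a linear decoder trained on the module's activations."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, labels, test_size=0.25, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

def specialisation(activations, labels_digit1, labels_digit2) -> float:
    """Positive when the module carries more information about digit 1 than digit 2."""
    return decoding_accuracy(activations, labels_digit1) - decoding_accuracy(activations, labels_digit2)

# Toy demo with random activations; replace with recorded hidden states.
rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))
d1, d2 = rng.integers(0, 10, 500), rng.integers(0, 10, 500)
print(specialisation(acts, d1, d2))
```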
First result: tuning structural modularity Q is enough to induce functional modularity, but only at levels way beyond what is usually considered highly structurally modular (Q > 0.4). The task is simple, but it shows that brain-like levels of structural modularity are not enough to get functional modularity. pic.twitter.com/Z2CaFATsOh
If you vary the number of neurons n, the inter-module connectivity p, the covariance c in the input data, and the architecture, you get a more complicated picture. In brief, tighter resource constraints and certain types of architecture make functional modularity more likely to emerge. pic.twitter.com/imNHPcmKYH
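The sweep itself is straightforward to set up; a sketch with purely illustrative parameter ranges (the actual grid, architectures and training loop are in the paper) might look like:

```python
# Grid over module size n, inter-module connection probability p, input
# covariance c, and architecture. `train_and_measure` is a placeholder for
# training a network with these settings and returning a specialisation score.
from itertools import product

def train_and_measure(n: int, p: float, c: float, architecture: str) -> float:
    # Placeholder: build the masked network, train it on the digit-pair task,
    # then apply the decoder/correlation measures sketched above.
    return 0.0

grid = product(
    [32, 64, 128],                 # neurons per module (illustrative)
    [0.0, 0.01, 0.1, 0.5, 1.0],    # inter-module connection probability p
    [0.0, 0.5, 1.0],               # input covariance c
    ["feedforward", "recurrent"],  # architecture
)
results = {(n, p, c, arch): train_and_measure(n, p, c, arch) for n, p, c, arch in grid}
```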
Unrolling the recurrent network in time and treating different time points as different modules, we find that specialisation has rich temporal dynamics. In brief, specialisation dynamics follows the dynamics of information flow through the network (check out the paper for more). pic.twitter.com/ipNCVKGWqI
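A sketch of that unrolling step (a generic RNN stand-in, not the paper's architecture): collect the hidden state at every time step, split it into the two modules, and score each (module, time) slice with the same specialisation measures.

```python
# Unroll a recurrent network, keep hidden states at every time step, and treat
# each (module, time step) slice as its own "module" for the specialisation
# measures above. Shapes and the RNN itself are illustrative.
import torch

n_per_module, steps, batch = 64, 10, 128
rnn = torch.nn.RNN(input_size=2 * 28 * 28, hidden_size=2 * n_per_module, batch_first=True)

x = torch.randn(batch, steps, 2 * 28 * 28)   # stand-in for the digit-pair input sequence
with torch.no_grad():
    states, _ = rnn(x)                        # (batch, steps, 2 * n_per_module)

module1 = states[:, :, :n_per_module]         # (batch, steps, n_per_module)
module2 = states[:, :, n_per_module:]
per_step = [(t, module1[:, t].numpy(), module2[:, t].numpy()) for t in range(steps)]
print(len(per_step), per_step[0][1].shape)    # 10 (128, 64)
```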
This is based on toy models. Good! If we can't define what we mean in the simplest possible case, what hope do we have for real world cases? We think toy ANNs are a great proving ground for candidate definitions of modularity, and that we need new ideas for dynamic modularity.
Once we have good working definitions for these simple cases, we can start applying them to more complex artificial and biological neural networks. The definitions in this paper could be used with Neuropixels data, for example.
Dynamics of specialization in neural modules under resource constraints