New #Preprint Alert!! 🤖 🧠 🧪 What if we could train neural cellular automata to develop continuous universal computation through gradient descent?! We have started to chart a path toward this goal in our new preprint: arXiv: arxiv.org/abs/2505.13058 Blog: gabrielbena.github.io/blog/2025/be... 🧵⬇️

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:24:27.348Z

Here's the gist: Traditional CAs (think Conway's Game of Life) have been mathematically proven Turing-complete... but designing them is HARD. You have to hand-craft their rules, an arduous effort. What if instead we could just... train them to compute, offloading the burden? Enter #NCA !

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:24:27.349Z

For those of you who missed it, a quick NCA primer: - Traditional cellular automata = hand-crafted rules (like Conway's Game of Life). - Neural Cellular Automata = local rules learned by a neural network through gradient descent! distill.pub/2020/growing...
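To make "local rules learned by a network" concrete, here is a minimal NCA update step in NumPy. Everything is a stand-in (random weights instead of a trained MLP, a toy 8×8 grid): the point is only that each cell updates from its 3×3 neighbourhood and nothing else.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, C = 8, 8, 4                                 # grid height, width, state channels
state = rng.standard_normal((H, W, C))
weights = rng.standard_normal((9 * C, C)) * 0.1   # stands in for a trained update MLP

def nca_step(state, weights):
    """One NCA step: every cell reads its 3x3 neighbourhood, applies the
    shared rule, and adds the result to its own state (residual update)."""
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    new_state = np.empty_like(state)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :].reshape(-1)  # the ONLY input a cell sees
            new_state[i, j] = state[i, j] + patch @ weights
    return new_state

state = nca_step(state, weights)
print(state.shape)  # (8, 8, 4)
```

Because the rule is a differentiable function of `weights`, a real implementation can unroll many steps and backpropagate a task loss through all of them.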

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:24:27.350Z

We propose a novel framework that disentangles the concepts of “hardware” and “state” within the NCA. For us: - Rules = "Physics" dictating state transitions. - Hardware = Immutable + heterogeneous scaffold guiding the CA behaviour. - State = Dynamic physical & computational substrate.
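The hardware/state split can be sketched as two sets of channels on the same grid. All names and shapes below are illustrative, not the paper's actual API: the contract is simply that the update rule reads both, but only ever writes the state channels.

```python
import numpy as np

H, W = 8, 8
state = np.zeros((H, W, 4))        # dynamic substrate: evolves every step
hardware = np.zeros((H, W, 2))     # immutable scaffold: e.g. region flags
hardware[:, :4, 0] = 1.0           # mark the left half as an "input" region
hardware[:, 4:, 1] = 1.0           # mark the right half as an "output" region

def step(state, hardware):
    # A trained NCA would apply a learned local rule here; this placeholder
    # just shows the contract: read state + hardware, return a new state only.
    combined = np.concatenate([state, hardware], axis=-1)
    return state + 0.1 * combined[..., 4:].sum(axis=-1, keepdims=True)

new_state = step(state, hardware)  # hardware itself is never modified
```

The same "physics" (rule) can then be steered to different behaviours purely by laying out different hardware patterns.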

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:24:27.351Z

Think of it like having a computing substrate: - Some universal laws of physics apply to every unit of a motherboard / of a brain. - These units are (usually) set up in a fixed, meaningful manner... - But their evolving state (electrical charges / neurochemical patterns) governs the computation.

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:24:27.352Z

But how do we "instruct" the NCA what to do, what task to perform, on which data? Basically, how do we interface with this dynamical substrate to "make" it do interesting computation? This is the role of the hardware! It acts as a translation layer between human intent and the dynamical substrate.

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:24:27.353Z

Through this framework, we are able to successfully train on a variety of computational primitives of matrix arithmetic. Here is an example of the NCA performing Matrix Translation + Rotation directly in its computational state (and, by design, using only local interactions to do so)!
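For reference, here is what those two primitives compute as plain array operations. The interesting part in the paper is that the NCA has to reach these targets using only local neighbour interactions spread over many steps, whereas NumPy gets them in one global call:

```python
import numpy as np

M = np.arange(9, dtype=float).reshape(3, 3)

translated = np.roll(M, shift=(1, 2), axis=(0, 1))  # periodic translation: down 1, right 2
rotated = np.rot90(M, k=-1)                         # 90° clockwise rotation
```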

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.274Z

But here's where it gets REALLY wild... We didn't just train on computational primitives... We then used our pre-trained NCA to emulate a small neural network and solve MNIST digit classification! The entire neural network "lives" inside the CA state space!

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.275Z

More on the MNIST demo: We pre-train a linear classifier, decompose the 784×10 matrix multiplication into smaller blocks, and let the NCA process them in PARALLEL! Emulated accuracy: 60% (vs the classifier's native 84%), not perfect due to error accumulation, but it WORKS! This is a neural network running inside a CA! 🤯
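The block decomposition itself is ordinary linear algebra (the block size below is an assumption, not the paper's exact tiling): split the 784 → 10 matmul along the input axis, compute each chunk's partial product independently, and sum. Each partial product is a job the NCA can run in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(784)         # flattened MNIST image
W = rng.standard_normal((784, 10))   # pre-trained linear classifier weights

block = 112                          # 784 = 7 blocks of 112 (illustrative choice)
partials = [x[i:i + block] @ W[i:i + block] for i in range(0, 784, block)]
logits = np.sum(partials, axis=0)    # summing partial products recovers x @ W
```

The 60% vs 84% gap comes from each NCA-emulated partial product carrying some numerical error, which this exact-arithmetic sketch of course does not show.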

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.276Z

Btw, this isn't just academic curiosity. We're talking about: 🔸 Analogue computers that could be more efficient than digital ones (without the need to revert to binary-level operations). 🔸 #Neuromorphic computing that mimics how brains actually work. 🔸 Bypassing the von Neumann bottleneck?

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.277Z

Our approach also enables task composition, meaning we can chain operations together! Example: Distribute matrix → Multiply → Rotate → Return to original position. It's like programming, but the "execution" is continuous dynamics! We're building a neural compiler!
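Composition is just sequencing: the output state of one primitive becomes the input of the next. The lambdas below are illustrative stand-ins for trained NCA primitives; a real run would hand the grid state from one primitive to the next.

```python
import numpy as np

A = np.arange(4, dtype=float).reshape(2, 2)
B = 2.0 * np.eye(2)

pipeline = [
    lambda M: M @ B,            # "multiply" primitive
    lambda M: np.rot90(M),      # "rotate" primitive
    lambda M: np.rot90(M, -1),  # "return to original position" primitive
]

out = A
for op in pipeline:             # chaining = running primitives back to back
    out = op(out)
```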

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.278Z

I quite like this idea of a compiler! Think of it like having two timescales: - FAST: state / neuronal dynamics (where computation happens). - SLOW: hardware reconfiguration (program flow). This separation mirrors classical computer architecture, but within a continuous, differentiable substrate!

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.279Z

Taking it even further: We're developing a graph-based "Hardware Meta-Network"! Users define tasks as intuitive graphs (nodes = regions, edges = operations), and a GNN + coordinate-MLP generates the hardware configuration! It's literally a compiler from human intent → NCA computation! 🤖

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.280Z

In conclusion: continuous cellular automata could be universal computers when trained right. This might change how we think about: - What can compute? - How to design computers? - The future of efficient AI hardware. 🚀 Let's train physics-based computers! 🚀

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.281Z

Thank you very much to my co-author Maxence Faldor (maxencefaldor.github.io) and to our supervisors @neuralreckoning.bsky.social and Antoine Cully from Imperial College!! And again: arXiv: arxiv.org/abs/2505.13058 Blog: gabrielbena.github.io/blog/2025/be...

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.282Z

We will also be present at GECCO 2025, specifically at the EvoSelf Workshop, to present this work: evolving-self-organisation-workshop.github.io See you there, I hope!

Gabriel Béna 🌻 (@solarpunkgabs.bsky.social) 2025-06-04T18:37:08.283Z

A Path to Universal Neural Cellular Automata

Béna G, Faldor M, Goodman DFM, Cully A
GECCO (2025)
Proceedings of the Genetic and Evolutionary Computation Conference Companion

Abstract

Cellular automata have long been celebrated for their ability to generate complex behaviors from simple, local rules, with well-known discrete models like Conway's Game of Life proven capable of universal computation. Recent advancements have extended cellular automata into continuous domains, raising the question of whether these systems retain the capacity for universal computation. In parallel, neural cellular automata have emerged as a powerful paradigm where rules are learned via gradient descent rather than manually designed. This work explores the potential of neural cellular automata to develop a continuous Universal Cellular Automaton through training by gradient descent. We introduce a cellular automaton model, objective functions and training strategies to guide neural cellular automata toward universal computation in a continuous setting. Our experiments demonstrate the successful training of fundamental computational primitives - such as matrix multiplication and transposition - culminating in the emulation of a neural network solving the MNIST digit classification task directly within the cellular automata state. These results represent a foundational step toward realizing analog general-purpose computers, with implications for understanding universal computation in continuous dynamics and advancing the automated discovery of complex cellular automata behaviors via machine learning.
