Join us
PhD
I am interested in supervising students with a strong mathematical, computational or neuroscience background. Projects could be carried out in any of several areas relating to the group's work. You might be interested in doing some general reading on computational neuroscience. Some suggestions for topics that would be interesting to me are below, but I'm very happy to consider other possibilities. In addition to working within the group, studying at Imperial College provides excellent opportunities for interacting with other theoretical and experimental researchers, both at Imperial and in the many neuroscience groups in London.
Supervision style. It's important to select a PhD supervisor you can work well with. My approach to PhD supervision is as follows. Students' projects are their own: I'm happy to suggest things I find interesting and to provide guidance, but I won't tell you exactly what to do. I would expect to see you for around one hour per week on average, either at a regular time or arranged ad hoc. We also have a weekly two-hour lab meeting: lunch, plus one hour of either a journal club, a tutorial, or presentations of early-stage research results for feedback. I would encourage you to get in touch with one of my current PhD students (see the list here) for an informal chat about life in the group and at Imperial.
Postdoctoral
If there are any open positions they will be listed below. Please also get in touch if you are interested in applying for your own funding through a fellowship scheme, for example.
Themes and suggested topics
The main goal of the group at the moment is to understand computations based on the principle of sparsity in time and space. We are interested in this in both a neuroscience and machine learning setting. A key example is "spiking" neural networks comprised of elements that communicate sparsely in time and are connected sparsely in space. An example of the questions this brings up is: how do dynamically evolving networks of spatially located neurons compute given the communication and computational bottlenecks induced by that spatial arrangement?
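To make the idea concrete, here is a toy sketch (not the group's code, just an illustration with made-up parameter values) of a network of leaky integrate-and-fire neurons that is sparse in both senses: connections exist between only a small fraction of neuron pairs, and neurons communicate only at the discrete moments when they spike.

```python
import numpy as np

def simulate_lif(n=100, steps=1000, dt=1e-3, p_connect=0.05, seed=0):
    """Minimal leaky integrate-and-fire network, sparse in space and time.

    Sparse in space: each pair of neurons is connected with probability
    p_connect. Sparse in time: neurons only influence each other at the
    discrete time steps on which they emit a spike.
    """
    rng = np.random.default_rng(seed)
    tau, v_th, v_reset = 20e-3, 1.0, 0.0   # membrane time constant, threshold, reset
    # Sparse random connectivity: most possible connections are absent.
    w = 0.1 * (rng.random((n, n)) < p_connect)
    v = rng.random(n)                       # random initial membrane potentials
    spike_count = 0
    for _ in range(steps):
        i_ext = 1.2                         # constant suprathreshold external drive
        v += dt / tau * (i_ext - v)         # leaky integration toward i_ext
        spiking = v >= v_th                 # which neurons cross threshold now
        v[spiking] = v_reset                # reset the neurons that spiked
        v += w @ spiking                    # spikes propagate through sparse weights
        spike_count += spiking.sum()
    return spike_count

total_spikes = simulate_lif()
```

The key point is the last two lines of the loop: at any given time step only a small subset of neurons is active, and only their outgoing (sparse) connections carry any signal, which is exactly the communication bottleneck the question above asks about.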
To get a feel for this sort of work, check out the videos on my YouTube channel. In particular, take a look at the Cosyne tutorial I gave on spiking neural networks, and the longer Neuroscience for machine learners course I teach.
The list of questions below might give you some inspiration, but if you have another suggestion that you think I would be interested in supervising, please do get in touch: working with someone on a topic they are passionate about is always the most rewarding.
- The brain as a community. Different parts of the brain are densely connected locally but sparsely connected to each other globally. One way to think of the brain is as a community of agents that have to reach consensus by communicating with each other through channels with limited bandwidth and tight temporal constraints. What sort of solutions does this suggest? [Sample publication]
- The brain is able to discover and integrate multiple sources of information, either across or within modalities, rapidly and without supervision. How does it do this without running into a combinatorial explosion? [Sample publication]
- How can we compare a complex brain model to experimental data in a meaningful way? [Sample publication]
- Hypothesis generation. The unique quality of the brain is its ability to solve complex tasks in difficult environments. Modern machine learning enables us, for the first time, to design models that perform comparably to the brain. Can we use techniques from machine learning, combined with our limited knowledge of the structure of the brain, to suggest hypotheses for the computational roles of neural components? [Sample publication]
- Robustness. Machine learning techniques such as deep networks are often not robust in the way the brain is (adversarial images, for example). Can we design more robust machine learning techniques inspired by the brain? [Sample publication 1] [Sample publication 2]
- Spiking neurons. Implementing functional networks using spiking neurons, and investigating the advantages and disadvantages of spiking versus artificial neurons. Are there computations that fundamentally require spiking neurons? [Sample publication]
- Heterogeneity. There is a great deal of theory on homogeneous networks of neurons, but heterogeneity (of neuron properties, for example) may be functionally important. [Sample publication]
- Temporal processing. The brain has to process a continuous stream of sensory information that can arrive at unexpected times and may need to be processed rapidly. The temporal structure of the input at fast timescales may be important (particularly in the auditory system). [Sample publication]
- Multiplexing. Model neurons or networks usually address only a single task; in the brain, however, neurons appear to be involved in multiple computations simultaneously and may multiplex information or computations. [Sample publication]
- Auditory scene analysis and the cocktail party problem. How do we separate out multiple sound sources and either listen to them all or focus on a single one? This is highly relevant to speech recognition in the presence of background noise or multiple speakers, which remains an unsolved problem. [Sample publication]
- Binding. How do we group multiple features (auditory or across modalities) into underlying objects? [Sample publication]
- Neural simulation. I'm always interested in work on techniques for neural simulation, and I encourage anyone who works with me to contribute to the Brian simulator. [Sample publication 1] [Sample publication 2]