Join us
PhD
I am interested in supervising students with a strong mathematical, computational or neuroscience background. Projects could be carried out in any of several areas related to the group's work. You might be interested in doing some general reading on computational neuroscience. Some suggestions for topics that would be interesting to me are below, but I'm very happy to consider other possibilities. In addition to working within the group, Imperial College offers excellent opportunities for interacting with other theoretical and experimental researchers, both within Imperial and in the many neuroscience groups across London.
Supervision style. It's important to select a PhD supervisor who you can work well with. My approach to PhD supervision is as follows. Students' projects are their own. I'm happy to suggest things I find interesting and to provide guidance, but I won't tell you exactly what to do. I would expect to see you on average around one hour per week, either at a regular time or arranged ad hoc. We have a weekly two-hour lab meeting: lunch, plus one hour of either a journal club, a tutorial, or presentations of early-stage research results for feedback. I would encourage you to get in touch with one of my current PhD students (see the list here) for an informal chat about life in the group and at Imperial.
Postdoctoral
We have had a lot of success in supporting postdoctoral researchers to obtain their own independent funding through fellowships. If you are interested in applying for a fellowship, please check my department's list of fellowship schemes that we can support, which includes the various schemes and their deadlines. Please get in touch if you are interested in applying for any of these schemes.
We do not currently have any open direct postdoctoral positions.
Themes and suggested topics
Currently, the main focus of our research is on understanding how features of biological brains contribute to "intelligent" behaviour. In particular, we are interested in the role of resource constraints (energy, space, time). We want to answer these questions not only to understand brains, but also because understanding how the brain has solved these problems may throw light on how we can design synthetic "intelligent" systems that can operate with limited resources. This naturally ties in with an interest in neuromorphic computing: the application of brain-like ideas to the design of new energy-efficient computing devices. A thread that runs through all this work is the use of spiking neural networks, both because this is the mechanism used by the brain and because it has shown considerable potential in a neuromorphic computing setting.
Here are some examples of these types of questions that we have either worked on previously or are planning to work on. But don't let this list or the paragraph above stop you from getting in touch about other related ideas!
- Modularity and specialisation. Different parts of the brain have relatively sparse interconnections, and yet they are able to work together as a whole. How is this possible? Is it related to the way that some areas of the brain seem to have specialised functions? Are there advantages to this from a learning or resource efficiency point of view? [Sample publication]
- Relating low-level mechanisms to high-level function. The brain uses a very rich array of different low-level mechanisms, from the structure of neurons down to different types of ion channels. Which of these mechanisms are important for simulating and understanding the brain as a whole? [Sample publication on heterogeneity] [Sample publication on neuromodulation] [Sample publication on delay learning] [Sample publication on the role of nonlinearity in multimodal integration]
- Spiking neurons. The brain uses discrete but precisely timed "spikes" instead of the continuous activations used in artificial neural networks. Does this have computational or resource efficiency advantages? How can we train networks with these discontinuities? [Sample publication on rate versus time in SNNs] [Sample publication on sparsity in training SNNs]
- Neuromorphic computing. How do we use all of these ideas to design better hardware that can carry out "intelligent" workloads with reduced energy requirements? [Sample publication on algorithm-hardware co-design]
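To give a concrete flavour of the spiking dynamics mentioned above, here is a minimal leaky integrate-and-fire neuron in Python. This is purely an illustrative sketch (the function name and all parameter values are made up for the example, not taken from our code): the membrane potential leaks towards its input, and a discrete spike is emitted each time it crosses a threshold, which is exactly the kind of discontinuity that makes training spiking networks an interesting problem.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    The membrane potential v follows dv/dt = (-v + I) / tau, integrated
    with the forward Euler method. When v crosses the threshold v_th,
    a spike time is recorded and v is reset. All parameter values here
    are illustrative.
    """
    v = 0.0
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt * (-v + current) / tau
        if v >= v_th:
            spike_times.append(step * dt)  # record the spike time
            v = v_reset                    # hard reset after the spike
    return spike_times

# A constant suprathreshold input produces regularly spaced spikes.
spike_times = lif_neuron(np.full(100, 1.5))
print(spike_times)
```

Note that the spike is an all-or-nothing event: the output is a list of spike times, not a continuous activation, so the input-to-output map is not differentiable in the usual sense.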