I am interested in supervising students with a strong mathematical, computational or neuroscience background. Projects could be carried out in several possible areas relating to the work in the group. You might be interested in doing some general reading on computational neuroscience. Some suggestions for topics that would be interesting to me are below, but I'm very happy to consider other possibilities. In addition to the work within the group, Imperial College offers excellent opportunities to interact with other theoretical and experimental researchers, both at Imperial and in London's many neuroscience groups.
Supervision style. It's important to select a PhD supervisor who you can work well with. My approach to PhD supervision is as follows. Students' projects are their own. I'm happy to suggest things I find interesting and to provide guidance, but I won't tell you exactly what to do. I would expect to see you for around one hour per week on average, either at a regular time or arranged ad hoc. We have a weekly two-hour lab meeting: lunch, plus an hour spent on a journal club, tutorials, or presenting early-stage research results for feedback. I would encourage you to get in touch with one of my current PhD students (see the list here) for an informal chat about life in the group and at Imperial.
I do not currently have any open postdoctoral positions, but please get in touch if you are interested in applying for your own funding through a fellowship scheme, for example.
Themes and suggested topics
I am particularly interested at the moment in applying methods from machine learning to models with a more biological flavour than the artificial neural networks typically studied in machine learning. This could include neurons with temporal dynamics, spiking neurons, etc. The aim is to use machine learning methods to find biologically relevant insights. To get a feel for this sort of work, check out the videos on my YouTube channel. In particular, take a look at the Cosyne tutorial I gave on spiking neural networks.
With that said, all of the topics below are interesting to me, and the most rewarding projects for me are always those where the student also finds the topic exciting, so do say if one of these suggestions particularly resonates with you.
Machine learning and neuroscience
- Hypothesis generation. The unique quality of the brain is its ability to solve complex tasks in difficult environments. Modern machine learning enables us, for the first time, to design models that can perform comparably to the brain. Can we use techniques from machine learning, combined with our limited knowledge of the structure of the brain, to suggest hypotheses for the computational roles of neural components?
- Brain-inspired architectures. One of the most successful machine learning techniques of recent years is the convolutional neural network, which is inspired by the structure of the visual system. Can we find other powerful architectures inspired by different areas of the brain?
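To make the inspiration concrete, here is a minimal sketch in plain Python (no frameworks; the toy signal and filter are illustrative choices of mine) of the core convolutional operation: a small filter of shared weights slid along the input, loosely analogous to localised receptive fields in the visual system.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (really cross-correlation, as in convnets):
    the same small set of weights is applied at every position."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * w for j, w in enumerate(kernel))
            for i in range(n)]

# A difference-of-neighbours filter responds wherever the input steps up,
# a crude analogue of an edge-detecting receptive field.
signal = [0, 0, 0, 1, 1, 1]
kernel = [-1, 1]
print(conv1d(signal, kernel))  # -> [0, 0, 1, 0, 0]
```

Stacking many such filters, with nonlinearities and pooling between layers, gives the convolutional networks referred to above.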
- Machine listening. There has been incredibly successful work in visual recognition using machine learning, but the auditory equivalent is relatively less well studied.
- Robustness. Machine learning techniques such as deep networks are often not robust in the way the brain is (for example, they can be fooled by small adversarial perturbations of their inputs). Can we design more robust machine learning techniques inspired by the brain?
- Spiking neurons. Implementing functional networks using spiking neurons. Investigating the advantages and disadvantages of spiking versus artificial neurons. Are there computations that fundamentally require spiking neurons?
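For anyone new to spiking models, the leaky integrate-and-fire neuron is the standard starting point. The sketch below is a plain-Python toy (forward Euler integration, with parameter values chosen purely for illustration, not from any particular published model):

```python
def simulate_lif(current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=0.1):
    """Forward-Euler integration of dv/dt = (I - v) / tau with
    threshold-and-reset spiking. Returns the list of spike times."""
    v = 0.0
    spikes = []
    for step, I in enumerate(current):
        v += dt * (I - v) / tau
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(step * dt)
            v = v_reset            # instantaneous reset
    return spikes

# Constant supra-threshold input (I = 1.5 > threshold) gives regular spiking;
# for I = 0.5 the steady state sits below threshold, so no spikes at all.
print(len(simulate_lif([1.5] * 1000)))
print(len(simulate_lif([0.5] * 1000)))  # -> 0
```

Research models add synapses, noise, and richer dynamics on top of this, but the threshold-and-reset mechanism is the essential difference from the artificial neurons used in standard deep learning.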
- Heterogeneity. There is a great deal of theory on homogeneous networks of neurons, but heterogeneity (of neuron properties, for example) may be functionally important.
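As a toy illustration of the point (plain Python, with arbitrary time constants chosen for the example): a population of leaky integrators with a spread of time constants collectively covers a wider range of response timescales than a homogeneous population could.

```python
def integrate_step(tau, duration=50.0, dt=0.1, I=1.0):
    """Response of dv/dt = (I - v) / tau to a step input, via forward Euler."""
    steps = int(round(duration / dt))
    v = 0.0
    trace = []
    for _ in range(steps):
        v += dt * (I - v) / tau
        trace.append(v)
    return trace

# Heterogeneous membrane time constants (illustrative values, same units
# as duration).
taus = [2.0, 5.0, 10.0, 20.0, 40.0]
responses = {tau: integrate_step(tau) for tau in taus}

# After 10 time units the fast neurons are near steady state while the
# slow ones are still rising: the population spans many timescales at once.
for tau in taus:
    print(tau, round(responses[tau][99], 3))
```

A downstream readout of such a population can in principle reconstruct input history over a range of timescales that no single homogeneous population would provide.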
- Temporal processing. The brain has to process a continuous stream of sensory information, which can arrive at unexpected times and may need to be processed rapidly. The temporal structure of the input at fast timescales may be important.
- Multiplexing. Model neurons or networks usually address only a single task; in the brain, however, neurons appear to be involved in multiple computations simultaneously and may multiplex information or computations.
Auditory and other sensory systems
- Sound localisation. Few models of sound localisation are able to handle the complexity of real acoustic environments, with multiple sources, background noise, reverberation, etc.
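To give a flavour of the simplest case: one classic localisation cue is the interaural time difference (ITD), the delay between a sound's arrival at the two ears. The sketch below (plain Python, using an idealised noise-free tone invented for the example) recovers an ITD by cross-correlating the two ear signals over candidate lags; the research questions above start exactly where this idealisation breaks down.

```python
import math

def estimate_itd(left, right, max_lag):
    """Return the lag (in samples) at which the right-ear signal best
    matches the left-ear signal, found by brute-force cross-correlation."""
    best_lag, best_score = 0, float('-inf')
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i + lag]
                    for i in range(len(left))
                    if 0 <= i + lag < len(right))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A tone arriving 5 samples later at the right ear than at the left.
n, delay = 200, 5
left = [math.sin(0.1 * i) for i in range(n)]
right = [0.0] * delay + left[:n - delay]
print(estimate_itd(left, right, max_lag=20))  # -> 5
```

With multiple sources, noise and reverberation, the cross-correlation peak smears out or splits, which is one way of seeing why realistic localisation is hard.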
- Auditory scene analysis and the cocktail party problem. How do we separate out multiple sound sources and either listen to them all or focus on a single one? This is very relevant to the problem of speech recognition in the presence of background noise or multiple speakers, which is an unsolved problem.
- Binding. How do we group multiple features (auditory or across modalities) into underlying objects?
Simulation and data analysis
- Neural simulation. I'm always interested in work on techniques for neural simulation, and encourage anyone who works with me to contribute to the Brian simulator.
- Analysing large-scale neural data. New experimental techniques are becoming available that provide several orders of magnitude more data than were previously available, but there is not yet agreement on methods for using this data to understand how the brain functions.