I am interested in supervising students with a strong mathematical, computational or neuroscience background. Projects could be carried out in several areas related to the group's work. Some suggestions for topics that interest me are below, but I'm very happy to consider other possibilities. Beyond the group itself, Imperial College offers excellent opportunities for interacting with other theoretical and experimental researchers, both at Imperial and in the many neuroscience groups in London.
Applicants for a PhD position should initially send me a brief CV and cover letter with a description of research interests or a proposed project, and will eventually have to formally apply through the standard Imperial College mechanism (for more information, see here). There are several opportunities for funding which we can discuss if you are offered a position. Note that a masters degree is required for PhD study at Imperial: please see the PhD requirements page (and the Country-specific requirements).
I am also interested in supervising PhD students in neuroinformatics as part of the HiPEDS CDT (Centre for Doctoral Training in High Performance Embedded and Distributed Systems). Note that a masters degree is also required for entry.
I do not currently have any open postdoctoral positions, but please get in touch if you are interested in applying for your own funding through a fellowship scheme, for example.
Themes and suggested topics
- Spiking neurons. Implementing functional networks using spiking neurons. Investigating the advantages and disadvantages of spiking versus artificial neurons. Are there computations that fundamentally require spiking neurons?
- Functional neural networks. Models in neuroscience often address toy situations which do not represent the complexity of tasks that the brain has to solve. Focussing our attention on models that can actually carry out a non-trivial task may help us to develop better models of the brain.
- Heterogeneity. There is a great deal of theory on homogeneous networks of neurons, but heterogeneity (of neuron properties for example) may be functionally important.
- Temporal processing. The brain has to process a continuous stream of sensory information which can arrive at unexpected times and may need to be rapidly processed. The temporal structure of the input at fast timescales may be important (particularly in the auditory system).
- Multiplexing. Model neurons or networks usually address only a single task; in the brain, however, neurons appear to be involved in multiple computations simultaneously and may multiplex information or computations.
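To make the spiking-neuron theme concrete, here is a minimal sketch of a leaky integrate-and-fire neuron with forward-Euler integration. All parameter values (time constant, threshold, simulation length) are illustrative assumptions, not taken from any particular model; real projects in the group would use a proper simulator such as Brian.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# All parameter values are illustrative, not from any specific model.

def simulate_lif(i_input, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times (ms) for a constant input current i_input."""
    v = 0.0
    spikes = []
    t = 0.0
    n_steps = int(100.0 / dt)  # simulate 100 ms
    for _ in range(n_steps):
        dv = (i_input - v) / tau  # leaky integration toward i_input
        v += dv * dt
        if v >= v_thresh:         # threshold crossing: emit a spike...
            spikes.append(t)
            v = v_reset           # ...and reset the membrane potential
        t += dt
    return spikes

print(len(simulate_lif(1.5)), "spikes in 100 ms")
```

Note the hard nonlinearity at threshold: an input whose steady state stays below `v_thresh` (e.g. `i_input=0.5` here) produces no spikes at all, which is one way the spiking/artificial-neuron comparison in the list above becomes a sharp question rather than a matter of degree.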
Auditory and other sensory systems
- Sound localisation. Few models of sound localisation are able to handle the complexity of real acoustic environments, with multiple sources, background noise, reverberation, etc.
- Auditory scene analysis and the cocktail party problem. How do we separate out multiple sound sources and either listen to them all or focus on a single one? This is highly relevant to speech recognition in the presence of background noise or multiple speakers, which remains an unsolved problem.
- Binding. How do we group multiple features (auditory or across modalities) into underlying objects?
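As a toy illustration of the sound-localisation problem, one classical cue is the interaural time difference (ITD), which can be estimated by finding the cross-correlation peak between the two ear signals. This is a deliberately simplified sketch (a clean tone, no noise, no reverberation, no multiple sources — exactly the complications the topic above is about); the signals and lag range are invented for illustration.

```python
import math

def estimate_itd(left, right, max_lag):
    """Estimate the delay (in samples) of `right` relative to `left`
    by finding the cross-correlation peak over lags in [-max_lag, max_lag]."""
    best_lag, best_corr = 0, float("-inf")
    n = len(left)
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(left[i] * right[i + lag]
                   for i in range(n)
                   if 0 <= i + lag < n)
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag

# A 440 Hz tone arriving 5 samples later at the right ear.
fs = 8000
tone = [math.sin(2 * math.pi * 440 * t / fs) for t in range(400)]
delay = 5
right = [0.0] * delay + tone[:-delay]
print(estimate_itd(tone, right, max_lag=8))  # prints 5
```

In a realistic acoustic scene, the correlation surface has competing peaks from echoes and interfering sources, which is why simple peak-picking breaks down and better models are needed.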
Machine learning and neuroscience
- Hypothesis generation. I am not convinced that machine learning offers a good model of how the brain works, but it may be interesting to look at representations or computations found using machine learning techniques to suggest hypotheses about how the brain carries out similar tasks.
- Brain-inspired architectures. One of the most successful machine learning techniques in recent years is the convolutional neural network which is inspired by the structure of the visual system. Can we find other powerful architectures inspired by different areas of the brain?
- Machine listening. There has been incredibly successful work in visual recognition using machine learning, but the auditory equivalent is much less well studied.
- Robustness. Machine learning techniques such as deep networks are often not robust in the same way as the brain (for example, adversarial images). Can we design more robust machine learning techniques inspired by the brain?
Simulation and data analysis
- Neural simulation. I'm always interested in work on techniques for neural simulation, and encourage anyone who works with me to contribute to the Brian simulator.
- Analysing large scale neural data. New experimental techniques are becoming available which provide several orders of magnitude more data than were previously available, but there is not yet agreement on methods for using these data to understand how the brain functions.