Deep Learning is a fast-evolving field in artificial intelligence that has driven breakthrough advances across many application areas in recent years. It has become one of the most in-demand skill sets in machine learning and AI, with demand far exceeding the supply of people with expertise in the field. This course is aimed at PhD students within the Mathematics department at Imperial College who have no prior knowledge or experience of the field. It will cover the foundations of Deep Learning, including the various types of neural networks used for supervised and unsupervised learning. Practical tutorials in TensorFlow are an integral part of the course and will enable students to build and train their own deep neural networks for a range of applications. The course also aims to describe the current state of the art in various areas of Deep Learning, its theoretical underpinnings, and outstanding problems.
Summary of the syllabus
Overview and basic concepts of deep learning and machine learning. Supervised and unsupervised learning. Underfitting and overfitting. Typical problem tasks.
Optimisation of neural networks. The backpropagation method. Neural network optimisers: SGD, (Nesterov) momentum, Adagrad, RMSProp, Adadelta, Adam. Network initialisation strategies. Batch normalisation.
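As an illustration of the momentum update listed above, here is a minimal sketch in plain Python on a 1-D quadratic; the objective and hyperparameters are illustrative choices, not values used in the course.

```python
# Sketch of SGD with classical momentum on f(w) = (w - 3)^2.

def grad(w):
    """Gradient of f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def sgd_momentum(w0, lr=0.1, beta=0.9, steps=200):
    """Momentum update: v <- beta * v + grad(w); w <- w - lr * v."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = beta * v + grad(w)
        w = w - lr * v
    return w

w_star = sgd_momentum(0.0)  # approaches the minimiser w = 3
```

Nesterov momentum differs only in that the gradient is evaluated at the look-ahead point w - lr * beta * v; adaptive methods such as Adam additionally rescale the step per parameter.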
Convolutional neural networks (CNNs). Convolutional arithmetic, strides, padding, transposed convolutions, pooling operations.
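The convolutional arithmetic mentioned above reduces to a pair of output-size formulas, sketched here; the function names are ours, not from any library.

```python
def conv_output_size(n, k, stride=1, padding=0):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * padding - k) // stride + 1

def transposed_conv_output_size(n, k, stride=1, padding=0):
    """Spatial output size of a transposed convolution: (n - 1) * s - 2p + k."""
    return (n - 1) * stride - 2 * padding + k

# A 3x3 kernel with stride 1 and padding 1 preserves a 32x32 input ("same"),
# while a stride-2 transposed convolution roughly doubles spatial size.
print(conv_output_size(32, 3, stride=1, padding=1))             # 32
print(transposed_conv_output_size(16, 4, stride=2, padding=1))  # 32
```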
Introduction to reinforcement learning. Markov decision processes. Policy iteration, value iteration, dynamic programming. Model-free RL. Q-learning. Policy gradient.
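The Q-learning update above can be sketched on a toy problem; the 4-state chain environment and all hyperparameters below are invented for illustration only (states 0 to 3, action 0 moves left, action 1 moves right, and reaching state 3 gives reward 1 and ends the episode).

```python
import random

N_STATES = 4
ACTIONS = (0, 1)
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.2

def step(s, a):
    """Deterministic transition; returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def greedy(Q, s, rng):
    """Greedy action with random tie-breaking."""
    best = max(Q[s])
    return rng.choice([a for a in ACTIONS if Q[s][a] == best])

def q_learning(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy behaviour policy
            a = rng.choice(ACTIONS) if rng.random() < EPS else greedy(Q, s, rng)
            s2, r, done = step(s, a)
            # off-policy target: bootstrap from max_a' Q(s', a'), zero at terminal states
            target = r + (0.0 if done else GAMMA * max(Q[s2]))
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()  # the greedy policy w.r.t. Q learns to move right
```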
Sequence modelling. Recurrent neural networks (RNNs), LSTM, GRU. Autoregressive models, attention mechanisms.
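A vanilla RNN step, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b), can be sketched in plain Python; the weights below are arbitrary illustrative values, not trained parameters.

```python
import math

def matvec(W, v):
    """Matrix-vector product over plain lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def rnn_step(x, h, W_xh, W_hh, b):
    """One recurrent step: h' = tanh(W_xh x + W_hh h + b)."""
    pre = [a + c + d for a, c, d in zip(matvec(W_xh, x), matvec(W_hh, h), b)]
    return [math.tanh(p) for p in pre]

# Unroll over a short input sequence, reusing the same weights at every step.
W_xh = [[0.5, -0.3], [0.1, 0.8]]
W_hh = [[0.2, 0.0], [0.0, 0.2]]
b = [0.0, 0.1]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = rnn_step(x, h, W_xh, W_hh, b)
```

LSTM and GRU cells replace this single tanh update with gated updates that mitigate vanishing gradients over long sequences.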
Normalising flows. Probability under change of variables. Masked autoencoder for distribution estimation (MADE), masked autoregressive flow (MAF), inverse autoregressive flow (IAF), NICE, RealNVP.
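The change-of-variables formula underlying normalising flows, log p_x(x) = log p_z(f^{-1}(x)) + log |det J_{f^{-1}}(x)|, can be illustrated with a 1-D affine flow; this toy example is ours, not from the course material.

```python
import math

def std_normal_logpdf(z):
    """Log-density of the standard normal base distribution."""
    return -0.5 * (z * z + math.log(2 * math.pi))

def affine_flow_logpdf(x, a, b):
    """Density of x = a*z + b with z ~ N(0,1), via change of variables:
    log p_x(x) = log p_z((x - b) / a) - log|a|."""
    z = (x - b) / a
    return std_normal_logpdf(z) - math.log(abs(a))
```

The same structure, composed over many invertible layers with tractable Jacobian determinants, is what MAF, IAF, NICE and RealNVP exploit.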
Generative Adversarial Networks (GANs). GAN convergence. Mode collapse. Wasserstein GAN. Spectral normalisation.
Variational autoencoders (VAEs). Evidence lower bound (ELBO). Reparameterisation trick. Importance weighted autoencoders (IWAE). Disentangled representations.
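The reparameterisation trick and the Gaussian KL term of the ELBO can be sketched as follows; mu and log_var stand in for encoder outputs and are purely illustrative.

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, 1), keeping the sample
    differentiable with respect to (mu, log_var)."""
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_to_std_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) appearing in the ELBO:
    0.5 * (sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)
```

The single-sample ELBO estimate is then the reconstruction log-likelihood at z minus this KL term.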
Theoretical foundations of deep learning. Optimal transport, mean-field gradient flow, random matrix theory, spectral norm bounds.
The recommended text for the course is Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville (2016), MIT Press, www.deeplearningbook.org
Course format: The course will consist of two-hour sessions every week for 10 weeks in the Autumn term. Each session will combine a lecture-style presentation with a practical TensorFlow tutorial. If possible, students should bring a laptop so they can work independently on the TensorFlow material.
Time, date, location: Every Wednesday from 10th October to 5th December, 16.00 - 18.00, Huxley Building, LT 144. There will be an extra session on Monday 15th October, 15.00 - 17.00, in the Huxley Building, LT 145.
Prerequisites: There are no formal prerequisites for the course, but it is recommended that students have a basic knowledge of Python in order to follow the TensorFlow tutorials.
Credit: This course can be taken for credit.
Code repository: Jupyter notebooks for the TensorFlow tutorials and assignment material will be posted in the course repository at https://github.com/pukkapies/dl-imperial-maths
Students following the tutorials on their own laptops will also need to install OpenAI Gym for the reinforcement learning tutorials.
Kevin is an Honorary Research Fellow at Imperial College London and co-founder of FeedForward AI. He obtained his PhD in 2003 from the Department of Mathematics at Imperial College, in the area of dynamical systems. He also held postdoctoral positions at Imperial College and was awarded a Marie Curie Individual Fellowship, which he spent at the Potsdam Institute for Climate Impact Research in Germany. During these positions his research interests became more focused on machine learning, and specifically on adapting ML technologies to numerical analysis problems in dynamical systems. He was Head of Research at the London music AI startup Jukedeck, where he oversaw the development of the deep learning framework for automatic music composition. In 2018 he set up his own machine learning consultancy, FeedForward AI, with a focus on music and the creative industries. His particular interest in the field of deep learning is generative modelling. @kn_webster / firstname.lastname@example.org
Pierre is currently pursuing a PhD in deep reinforcement learning at the Data Science Institute of Imperial College. He also helps to run the Deep Learning Network and organises thematic reading groups there. Prior to that, he worked in electronics as a research engineer and in quantitative finance as a trader. He studied electrical engineering at ENST, probability theory and stochastic processes at Université Paris VI - École Polytechnique, and business management at HEC. His other research interests in the field of deep learning include neural network theory as well as stochastic optimisation methods. @KloudStrife / email@example.com