2026/27 Undergraduate Module Catalogue

COMP3620 Neural Networks & Deep Learning

20 Credits Class Size: 300

Module manager: Marc de Kamps
Email: m.dekamps@leeds.ac.uk

Taught: Semester 1 (Sep to Jan)

Year running 2026/27

This module is not approved as a discovery module

Module summary

Deep learning is the part of artificial intelligence that relies on multi-layer neural networks to perform tasks such as classification, regression, clustering and segmentation. The module provides an overview of the historical development of the field through discussion of Rosenblatt’s perceptron, multi-layer perceptron networks and Hopfield networks, followed by Convolutional Neural Networks and U-Nets. Loss functions such as mean-squared error and cross-entropy will be discussed and linked to conventional statistical concepts such as logistic regression. Stochastic and steepest gradient descent will be used to minimize loss functions. Practical work will address the implementation of deep networks in Python, as well as the preparation and cleaning of datasets, using suitable imputation methods to address missingness. The training loop will be studied, including the choice of optimizer and the organisation of data into minibatches. Common evaluation methods will be introduced: confusion matrix, accuracy, precision, recall and F1-score.
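
As an illustration of the kind of training loop described above, the sketch below assumes PyTorch (a common choice, though the module text does not name a specific framework); the data, model size and hyperparameters are placeholders, not module-prescribed values.

    # Minimal sketch: minibatch training with SGD and a cross-entropy loss.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Toy data: 1000 samples, 20 features, 3 classes (purely illustrative).
    X = torch.randn(1000, 20)
    y = torch.randint(0, 3, (1000,))
    loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

    # A small multi-layer perceptron.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    loss_fn = nn.CrossEntropyLoss()                           # cross-entropy loss
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # stochastic gradient descent

    for epoch in range(5):                   # the training loop
        for xb, yb in loader:                # organisation into minibatches
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)    # forward pass and loss
            loss.backward()                  # backpropagation computes the gradients
            optimizer.step()                 # the optimizer updates the weights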

Objectives

The module will provide the theoretical background for the most popular deep learning architectures and introduce the basic components of such networks. Their practical implementation will be studied and applied to common benchmark datasets. There will be a strong emphasis on evaluating the performance of these networks and on visualizing the results. An introduction to generative neural networks will be provided.

Learning outcomes

On successful completion of the module students will be able to:



Explain the role of basic elements of a deep learning network

Explain the concept of a loss function and give examples

Derive learning rules such as the perceptron algorithm, the Widrow-Hoff learning rule and backpropagation

Train deep networks for classification, regression, clustering and segmentation

Apply metrics to establish whether training was successful, and evaluate the resulting network, recognising under- and overfitting and the dangers of class imbalance.

Analyse the results and present clear, convincing visualizations of them.

Explain the basic concepts behind some generative neural networks

Apply data cleaning and processing techniques for deep learning, addressing missingness and using imputation

Evaluate and fine-tune transformer architectures.

Skills outcomes

On successful completion of the module students will be able to:

Critically explain the theoretical principles underpinning neural networks and deep learning, including network architectures, learning algorithms, and optimisation techniques.

Design, implement, and train neural network and deep learning models to address practical problems using appropriate data, tools, and frameworks.

Evaluate the performance and limitations of neural network models using suitable metrics, validation strategies, and diagnostic techniques, and interpret results in a rigorous manner.

Justify model design choices and methodological decisions with reference to theoretical concepts, empirical evidence, and best practice within the field.

Communicate technical findings effectively, demonstrating the ability to present, analyse, and reflect on neural network and deep learning solutions in a clear and structured manner.

Syllabus

Deep learning basics: Perceptron, Logistic Regression as a soft perceptron, multi-layer perceptron
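
The contrast between a perceptron's hard threshold and logistic regression as its "soft" counterpart can be sketched in a few lines of Python (NumPy assumed; the weights are arbitrary illustrative values):

    import numpy as np

    w, b = np.array([0.5, -0.3]), 0.1        # illustrative weights and bias
    x = np.array([1.0, 2.0])                 # a single input vector

    z = w @ x + b                            # weighted sum
    perceptron_out = 1 if z > 0 else 0       # hard threshold (Rosenblatt's perceptron)
    logistic_out = 1 / (1 + np.exp(-z))      # sigmoid: the "soft" threshold of logistic regression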

Stochastic Gradient Descent: Concept of Gradient, Backpropagation, auto-differentiation, saddle points, error landscape
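
A minimal sketch of auto-differentiation, assuming PyTorch: the function f(x, y) = x^2 - y^2 has a saddle point at the origin, where its gradient vanishes although the point is neither a minimum nor a maximum.

    import torch

    x = torch.tensor(1.0, requires_grad=True)
    y = torch.tensor(2.0, requires_grad=True)
    f = x**2 - y**2          # a simple error surface with a saddle point at (0, 0)
    f.backward()             # backpropagation through the expression graph
    print(x.grad, y.grad)    # df/dx = 2x = 2.0, df/dy = -2y = -4.0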

Convolutional Neural Networks and U-Net: convolution, weight sharing, pooling, deep layers, skip connections; applications in image recognition: MNIST, tumour detection, segmentation
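
A small convolutional block illustrating convolution, weight sharing and pooling (PyTorch assumed; layer sizes are illustrative, not the module's architectures):

    import torch
    from torch import nn

    block = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 8 learned 3x3 filters, shared across all spatial positions
        nn.ReLU(),
        nn.MaxPool2d(2),                            # pooling halves the spatial resolution
    )
    x = torch.randn(1, 1, 28, 28)                   # e.g. one MNIST-sized greyscale image
    print(block(x).shape)                           # torch.Size([1, 8, 14, 14])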

Machine Learning Operations: building a pipeline using appropriate software libraries; data cleaning, missingness, imputation; evaluation of classifiers: confusion matrix, precision, recall, F1, cross-validation; Weights & Biases (W&B)
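
The classifier-evaluation metrics listed above can be computed, for example, with scikit-learn (one common library choice; the toy labels below are illustrative only):

    from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

    y_true = [0, 0, 1, 1, 1, 0]               # toy ground-truth labels
    y_pred = [0, 1, 1, 1, 0, 0]               # toy classifier predictions

    print(confusion_matrix(y_true, y_pred))   # rows: true class, columns: predicted class
    print(precision_score(y_true, y_pred))    # TP / (TP + FP)
    print(recall_score(y_true, y_pred))       # TP / (TP + FN)
    print(f1_score(y_true, y_pred))           # harmonic mean of precision and recall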

Presentation and Visualization of Results: Discuss and analyse visualization design decisions from the perspective of human perception and cognition. Discuss and analyse solutions that allow visualizations to scale to big data. Communicate complex topics concerning data science systems effectively to technical and non-technical audiences.

Generative Methods: Gaussian mixture models, evidence lower bound (ELBO), variational autoencoder
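
For orientation, a sketch of the loss minimised when training a variational autoencoder, i.e. the negative evidence lower bound for a Gaussian encoder (PyTorch assumed; function and variable names are illustrative):

    import torch
    import torch.nn.functional as F

    def vae_loss(x_hat, x, mu, logvar):
        # Reconstruction term (binary cross-entropy; assumes x_hat is a sigmoid output in [0, 1]).
        recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
        # Closed-form KL divergence between the posterior N(mu, sigma^2) and the prior N(0, I):
        # -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl    # minimising this maximises the evidence lower bound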

Natural Language Processing: transformers, downloading pre-trained models, fine-tuning, applications
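
Downloading and preparing a pre-trained transformer for fine-tuning might look as follows, assuming the Hugging Face transformers library and an illustrative checkpoint (neither is prescribed by the module):

    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    name = "distilbert-base-uncased"                    # example checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)     # downloads the tokenizer
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    inputs = tokenizer("An example sentence.", return_tensors="pt")
    outputs = model(**inputs)                           # forward pass; logits for 2 classes
    # Fine-tuning would then update model.parameters() with an ordinary training loop.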

Teaching Methods

Delivery type                              Number   Length (hours)   Student hours
Lectures                                   22       2                44
Practicals                                 11       2                22
Private study hours                                                  134
Total contact hours                                                  66
Total hours (100 hours per 10 credits)                               200

Reading List

Check the module area in Minerva for your reading list

Last updated: 30/04/2026

Errors, omissions, failed links etc should be notified to the Catalogue Team