Module manager: Yujia Chen
Email: Y.Chen9@leeds.ac.uk
Taught: Semester 2 (Jan to Jun)
Year running: 2026/27
This module is not approved as an Elective
This module explores the role of Explainable Artificial Intelligence (XAI) in business decision-making and how AI systems can be made transparent, trustworthy, and compliant in real-world organisational contexts. It introduces state-of-the-art techniques, such as SHAP, LIME, and counterfactual explanations, for interpreting machine learning and deep learning models, and demonstrates their application across a range of business settings. As organisations increasingly rely on AI-driven systems for high-stakes decisions in areas such as finance, healthcare, and public policy, the need for transparency and accountability has become a central concern, both for building stakeholder trust and for meeting evolving regulatory requirements such as the GDPR and the EU AI Act. This module equips students with the critical understanding needed to navigate these challenges, enabling them to evaluate when and why explainability is required and to consider how it contributes to responsible AI governance and ethical risk management within organisations.
This module aims to equip students with both the conceptual foundations and practical skills needed to implement and evaluate Explainable Artificial Intelligence (XAI) in business contexts. It introduces students to the landscape of XAI methods, covering both inherently interpretable models and post-hoc explanation techniques.
Through a combination of lectures and hands-on practical sessions using Python, students will gain experience in interpreting machine learning model outputs using a range of XAI techniques and critically reflect on the strengths and limitations of different explanation approaches in practice.
Real-world case studies are used to explore how explainability supports business decision-making, stakeholder communication, and organisational AI governance.
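As a flavour of the kind of exercise covered in the practical sessions, the sketch below implements a minimal counterfactual explanation by hand: given a toy approval rule standing in for a trained model, it searches for the smallest income increase that would flip a rejected applicant to approved. The rule, feature names, and thresholds here are illustrative assumptions, not material taken from the module.

```python
def approve(income, debt_ratio):
    """Toy credit-approval rule standing in for a trained model
    (purely illustrative; not a model used in the module)."""
    return income - 40 * debt_ratio >= 50

def counterfactual_income(income, debt_ratio, step=1.0, max_steps=1000):
    """Smallest income increase (in multiples of `step`) that flips
    a rejection into an approval; returns None if no flip is found."""
    if approve(income, debt_ratio):
        return 0.0  # already approved: no change needed
    for k in range(1, max_steps + 1):
        if approve(income + k * step, debt_ratio):
            return k * step
    return None

# An applicant with income 40 and debt ratio 0.25 is rejected;
# raising income by 20 (to 60) would flip the decision.
print(counterfactual_income(40, 0.25))  # -> 20.0
```

Counterfactual explanations of this form answer the stakeholder question "what would need to change for a different outcome?", which is one reason they feature alongside SHAP and LIME in the syllabus.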
On successful completion of the module students will be able to:
1. Explain and critically evaluate the theoretical foundations, taxonomies, and key assumptions underpinning different XAI techniques.
2. Implement and apply a range of XAI techniques to interpret machine learning and deep learning models.
3. Assess and compare the quality and suitability of XAI techniques across varying data types, model classes, and decision-making contexts.
4. Analyse and justify how explainability supports responsible AI practices in high-stakes decision-making contexts, with consideration of regulatory frameworks.
On successful completion of the module students will also have developed the following skills:
Academic Skills:
Digital and Data Literacy: Design and implement reproducible workflows to generate and interpret explainable AI outputs using appropriate digital tools.
Critical Thinking and Problem Solving: Critically analyse AI-driven decisions by identifying model limitations, bias, and explainability-related risks.
Work Ready Skills:
Communication and Professional Skills: Communicate and justify AI explanations and analytical findings to both technical and non-technical audiences.
Enterprise and Innovation: Translate explainable AI techniques into actionable insights that support organisational decision-making, risk management, and innovation.
| Delivery type | Number | Length (hours) | Student hours |
|---|---|---|---|
| Lectures | 10 | 1.5 | 15 |
| Practicals | 8 | 1.5 | 12 |
| Private study hours | | | 123 |
| Total contact hours | | | 27 |
| Total hours (100 hours per 10 credits) | | | 150 |
Students may submit a brief project outline or a draft section of their coursework, setting out their chosen context, XAI methods, and evaluation approach. Written or verbal formative feedback is provided on the suitability, coherence, and depth of their proposed direction.
In addition, practical workshops include formative exercises aligned with the final assessment, with structured verbal feedback provided on analytical approach and interpretation.
| Assessment type | Notes | % of formal assessment |
|---|---|---|
| Assignment | A 3,000-word coursework applying explainable AI techniques to a real-world context. | 100 |
| Total percentage (Assessment Coursework) | | 100 |
The resit for this module will be 100% by a 3,000-word assignment.
Check the module area in Minerva for your reading list
Last updated: 30/04/2026
Errors, omissions, failed links etc. should be notified to the Catalogue Team.