Module manager: Professor Leonid Bogachev
Email: L.V.Bogachev@leeds.ac.uk
Taught: Semester 1 (Sep to Jan)
Year running: 2026/27
| Code | Title |
|---|---|
| MATH2701 | Statistical Methods |
| MATH2715 | Statistical Methods |
| MATH3723 | Statistical Theory |
| MATH5741M | Statistical Theory and Methods |
MATH3723 Statistical Theory
This module is not approved as an Elective
From introductory statistics, you know how to calculate the sample mean and variance, construct confidence intervals, and test hypotheses in the normal model. This module focuses on the pivotal case of data in the form of independent random samples and gives a general unified theory of statistical procedures such as parameter estimation and hypothesis testing, highlighting optimality and large-sample properties. The module also introduces a modern Bayesian approach to statistics, making comparisons with classical inference.
Through a combination of theory and example-based practice, the module will allow students to solidify and deepen their understanding of basic statistical techniques by appreciating a general unified theory of parameter estimation and hypothesis testing, with a view to optimality and large-sample properties. A modern Bayesian approach to statistical problems will also be introduced and developed in parallel with classical statistical inference.
On successful completion of the module students will have demonstrated the following learning outcomes relevant to the subject:
1. Understand the role of bias and mean squared error in parameter estimation and be able to calculate them in closed form or asymptotically.
2. Understand the concept of sufficiency and its use in constructing improved estimators via the Rao–Blackwell theorem.
3. Understand the Cramér–Rao inequality and the concept of Fisher's information, including efficiency of estimators.
4. Understand and be able to discuss large-sample properties of the maximum likelihood and moment estimators.
5. Understand the setting and main concepts of statistical hypothesis testing, including the critical region and types of error.
6. Understand the concept of power and be able to prove and apply the Neyman–Pearson lemma in testing simple and composite hypotheses.
7. Understand the setting and main concepts of the Bayesian approach, including the interpretation of model parameters, prior and posterior distributions, and the Bayes formula.
8. Discuss the role of the posterior mean as an optimal Bayesian estimator and be able to compute it in a given model.
9. Understand the setting and main concepts of Bayesian hypothesis testing, including the prior and posterior odds and the Bayes factor.
10. Understand the concept of a credible interval as a Bayesian analogue of a confidence interval and be able to construct credible intervals in a given model.
11. Use the statistical package R with real or simulated data to fit parametric models and write a report presenting and interpreting the results.
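As a flavour of outcomes 8 and 10, the following sketch (not part of the module materials; written in Python for illustration, though the module itself uses R) computes the posterior mean and an equal-tailed 95% credible interval for the mean of a normal model with known variance under a conjugate normal prior. The data and prior parameters are invented for the example.

```python
import math

# Normal model with known variance sigma^2 and a conjugate
# Normal(mu0, tau0^2) prior on the mean theta.  By the standard
# conjugacy result, the posterior is normal with precision-weighted mean.

def posterior_normal_mean(data, sigma2, mu0, tau02):
    """Return the posterior mean and variance of theta given the sample."""
    n = len(data)
    xbar = sum(data) / n
    post_var = 1.0 / (n / sigma2 + 1.0 / tau02)
    post_mean = post_var * (n * xbar / sigma2 + mu0 / tau02)
    return post_mean, post_var

# Illustrative data; a vague prior centred at 0 with large variance.
data = [4.8, 5.1, 5.3, 4.9, 5.0]
m, v = posterior_normal_mean(data, sigma2=1.0, mu0=0.0, tau02=100.0)

# The posterior is normal, so a 95% equal-tailed credible interval
# is just posterior mean +/- 1.96 posterior standard deviations.
lo, hi = m - 1.96 * math.sqrt(v), m + 1.96 * math.sqrt(v)
print(round(m, 3), round(lo, 3), round(hi, 3))
```

With a vague prior the posterior mean is close to the sample mean, illustrating how classical and Bayesian answers agree when the prior carries little information.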
Main topics to be covered include:
1. Parameter estimation: bias, mean squared error, consistency.
2. Sufficiency: factorisation criterion, Rao–Blackwell theorem.
3. Fisher's information, Cramér–Rao inequality. Vector case, Fisher's information matrix.
4. Maximum likelihood and method of moments: examples and large-sample properties.
5. Hypothesis testing: Neyman–Pearson lemma, uniformly most powerful tests.
6. Confidence intervals.
7. Bayesian statistics: prior and posterior distributions.
8. Bayesian hypothesis testing: prior and posterior odds.
9. Credible intervals.
10. Revision of standard tests.

Additional topics that build on these may be covered as time allows. Further details of possible topics will be delivered closer to the time that the module runs.
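Topic 4 above contrasts maximum likelihood with the method of moments. A minimal simulation sketch (not module material; the true parameter and sample sizes are chosen purely for illustration) shows both estimators converging for a Uniform(0, theta) model, where the two methods genuinely differ:

```python
import random

# Uniform(0, theta) with true theta = 3: the MLE is max(x), while the
# method-of-moments estimator is 2 * xbar, since E[X] = theta / 2.
# Both are consistent; the simulation below checks this for growing n.

random.seed(1)
theta = 3.0

def estimates(n):
    """Draw n observations and return the (MLE, method-of-moments) pair."""
    x = [random.uniform(0.0, theta) for _ in range(n)]
    mle = max(x)               # maximum likelihood estimator
    mom = 2.0 * sum(x) / n     # method-of-moments estimator
    return mle, mom

for n in (10, 1000, 100000):
    mle, mom = estimates(n)
    print(n, round(mle, 4), round(mom, 4))
```

The MLE approaches theta much faster (its error is of order 1/n rather than 1/sqrt(n)), a large-sample contrast of the kind studied in the module.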
| Delivery type | Number | Length hours | Student hours |
|---|---|---|---|
| Supervision | 0 | 0 | 0 |
| Lectures | 28 | 1 | 28 |
| Seminars | 5 | 1 | 5 |
| Practicals | 1 | 2 | 2 |
| Independent online learning hours | | | 7 |
| Private study hours | | | 101 |
| Total Contact hours | | | 35 |
| Total hours (100hr per 10 credits) | | | 143 |
- Four worksheets (two weeks each); marked by the lecturer, with individual formative feedback provided.
- (Optional) Seven online quizzes; marked automatically, with feedback provided as correct/incorrect marking and model answers with comments.
- (Optional) Six R Labs supporting statistical computing skills, from entry level to knowledgeable manipulation of data for the statistical inference covered in the module; formative feedback available on request as model solutions with annotated R code.
Check the module area in Minerva for your reading list
Last updated: 30/04/2026
Errors, omissions, failed links etc should be notified to the Catalogue Team