MA333      Half Unit
Optimisation for Machine Learning

This information is for the 2024/25 session.

Teacher responsible

Dr Ahmad Abdi

Availability

This course is available on the BSc in Data Science, BSc in Mathematics and Economics, BSc in Mathematics with Data Science, BSc in Mathematics with Economics and BSc in Mathematics, Statistics and Business. This course is available as an outside option to students on other programmes where regulations permit. This course is available with permission to General Course students.

Pre-requisites

Students should be familiar with the fundamentals of continuous optimisation, to the level of Optimisation Theory (MA208) or equivalent.

Course content

Machine learning uses tools from statistics, mathematics, and computer science to tackle a broad range of problems in data analytics. The course introduces a range of optimisation methods and algorithms that play fundamental roles in machine learning. This is primarily a proof-based course that focuses on the underlying mathematical models and concepts. A secondary goal is to demonstrate implementations of the algorithms discussed on machine learning problems, to examine their limitations on large training sets, and to show how such obstacles can be overcome.

After a review of basic tools from convex analysis, Lagrangian duality, and the Karush-Kuhn-Tucker conditions, the course takes a deep dive into first- and second-order optimisation methods and their convergence guarantees. The first-order methods include projected, conditional, and stochastic gradient descent; Newton's method is covered as a second-order method. The course also considers online convex optimisation, covering the online gradient descent and multiplicative weights methods.
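To illustrate the flavour of such guarantees, consider a standard textbook result (stated here for orientation; the precise statements and constants covered in the course may differ). For a convex, $L$-Lipschitz function $f$ over a convex set $C$ of diameter $D$, projected gradient descent with step size $\eta = D/(L\sqrt{T})$ iterates

$$x_{t+1} = \Pi_C\big(x_t - \eta \nabla f(x_t)\big), \qquad f\Big(\frac{1}{T}\sum_{t=1}^{T} x_t\Big) - \min_{x \in C} f(x) \;\le\; \frac{DL}{\sqrt{T}},$$

so the function value at the averaged iterate converges at rate $O(1/\sqrt{T})$.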

A key component of the course is the application of optimisation methods to machine learning. As such, we will see applications of the methods taught to linear regression, ridge and lasso regularisation, logistic regression, binary classification and support vector machines, and online learning algorithms such as Perceptron and Winnow. A key learning outcome is knowing how to solve such problems in the presence of large training sets.
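To give a taste of the implementation side, the following is a minimal sketch (written for this description, not taken from the course materials; the function name and all hyperparameter values are illustrative assumptions) of mini-batch stochastic gradient descent applied to ridge regression, the kind of first-order method that stays practical when the training set is too large for exact solvers:

import numpy as np

def sgd_ridge(X, y, lam=0.1, lr=0.01, epochs=20, batch_size=32, seed=0):
    """Minimise (1/n)||Xw - y||^2 + lam*||w||^2 by mini-batch SGD.

    Illustrative sketch only: names and hyperparameters are assumptions,
    not the course's reference implementation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)  # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            # Gradient of the regularised least-squares loss on the batch;
            # each step touches only batch_size rows of X, so the cost per
            # update is independent of the training-set size n.
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch) + 2 * lam * w
            w -= lr * grad
    return w

# Usage on synthetic data: recover a planted weight vector.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)
print(np.round(sgd_ridge(X, y, lam=0.01) - w_true, 2))  # entries near zero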

Teaching

This course is delivered through a combination of classes and lectures totalling a minimum of 32 hours across Winter and Spring terms.

During the lectures, the focus will be on the optimisation methods and their convergence guarantees. During the classes, in addition to discussing the exercise sheets, implementations of the methods will be demonstrated and their effectiveness on large training sets examined.

Formative coursework

Students will be expected to produce 10 exercises in the WT.

Written answers to set problems will be expected on a weekly basis.

Indicative reading

  • Vishnoi, N. K. (2021). Algorithms for convex optimization. Cambridge University Press.
  • Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
  • Nesterov, Y. (2018). Lectures on convex optimization (Vol. 137). Springer.
  • Gärtner, B., & Jaggi, M. (2021). Optimization for machine learning (lecture notes).
  • Hazan, E. (2021). Introduction to online convex optimization (lecture notes).

Assessment

Exam (90%, duration: 2 hours) in the spring exam period.
Coursework (10%) in the WT.

A combination of the weekly exercises (set and marked in Winter Term) counts as coursework.

Key facts

Department: Mathematics

Total students 2023/24: 7

Average class size 2023/24: 7

Capped 2023/24: No

Value: Half Unit

Course selection videos

Some departments have produced short videos to introduce their courses. Please refer to the course selection videos index page for further information.

Personal development skills

  • Self-management
  • Problem solving
  • Application of information skills
  • Application of numeracy skills
  • Specialist skills