
Reading Seminar: Optimization methods for large-scale machine learning: Stochastic gradient methods and beyond

In this reading seminar, we present and discuss classical and recent numerical optimization algorithms used in machine learning applications. We plan to discuss how optimization problems arise in machine learning applications such as text classification and the training of deep neural networks, and what makes them challenging. In this context, we study stochastic gradient methods, noise reduction methods, accelerated gradient methods, and second-order methods such as stochastic quasi-Newton methods, together with their mathematical analysis. The starting point of our discussions will be the survey article [1].
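To illustrate the basic iteration at the heart of the seminar, here is a minimal sketch of the stochastic gradient method on a toy least-squares problem. The data, step size, and helper names are illustrative assumptions, not taken from [1]; the survey treats the method in far greater generality.

```python
import random

def sgd(grad_i, w, n_samples, lr=0.05, epochs=100):
    """Basic stochastic gradient iteration: at each step, sample one
    index i uniformly at random and update w using only that sample's
    gradient, instead of the full (expensive) gradient."""
    for _ in range(epochs):
        for _ in range(n_samples):
            i = random.randrange(n_samples)
            g = grad_i(w, i)
            w = [wj - lr * gj for wj, gj in zip(w, g)]
    return w

# Toy problem: fit a slope w to data (x_i, y_i) with model y ≈ w * x.
random.seed(0)  # for reproducibility of the sampling
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with true slope 2, no noise

def grad_i(w, i):
    # Gradient of the single-sample loss 0.5 * (w*x_i - y_i)^2 w.r.t. w.
    return [(w[0] * xs[i] - ys[i]) * xs[i]]

w = sgd(grad_i, [0.0], len(xs))
# On this noiseless, consistent toy problem, w[0] approaches the true slope 2.
```

Since each update touches only one sample, the per-iteration cost is independent of the dataset size, which is the key advantage over full-gradient methods in large-scale settings.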

In weekly short presentations, the participants present parts of the survey and related research papers. The presentations are accompanied by a discussion among the participants. The target audience of the reading seminar is Ph.D. students and master's students with research interests related to the covered mathematical topics.

The goal of the seminar is to enlarge the mathematical toolkit of the participants and to familiarize them with current developments in the mentioned fields of research. A further goal is to enhance the participants' presentation skills.


  • The schedule and all the references can be found in the Moodle course.
  • Students who want to participate can sign up in TUMonline.

Preliminary schedule

The reading seminar usually takes place on Tuesdays at 1:00 pm in room 02.07.023.

23.10.2018 Dominik Stöger: Chapters 1–3
30.10.2018 Timo Klock: Chapter 4
13.11.2018 Olga Graf: Chapter 4, part II
20.11.2018 Stefan Bamberger: Chapter 5
27.11.2018 Michael Rauchensteiner: Chapter 7
04.12.2018 Claudio Verdun: Chapter 6, part I
11.12.2018 Christian Kümmerle: A Newton-based method for non-convex optimization with fast evasion of saddle points
18.12.2018 no seminar
08.01.2019 Oleh Melnyk
15.01.2019 Tim Fuchs: Sub-sampled Newton methods – Local convergence
22.01.2019 Johannes Maly: Chapter 8


  Person
  Christian Kümmerle
  Dominik Stöger
  Johannes Maly
  Sandro Belz
  Sara Krause-Solberg


[1]    L. Bottou, F. E. Curtis, J. Nocedal. Optimization Methods for Large-Scale Machine Learning. SIAM Review, 60(2), 223–311, 2018.