
Workshop Donau-Isar-Inn

- Approximation Theory and Applications            

in honor of Prof. Dr. Rupert Lasser



(preliminary) Schedule

1pm    Welcome Address

The talk will give a short survey of frame theory in Hilbert spaces, followed by a more detailed discussion of the recent research topic "dynamical sampling." Formulated in purely mathematical terms, the key question is when and how a frame can be represented via iterations of a certain bounded operator acting on a fixed vector in the underlying Hilbert space. The talk presents joint work with Marzieh Hasannasab.
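For readers less familiar with the terminology, the two central objects can be stated compactly (these are the standard definitions, not results specific to the talk):

```latex
% A sequence (f_k)_{k \ge 0} in a Hilbert space H is a frame if there
% exist constants A, B > 0 such that
A \, \|f\|^2 \;\le\; \sum_{k \ge 0} |\langle f, f_k \rangle|^2 \;\le\; B \, \|f\|^2
\qquad \text{for all } f \in H.
% Dynamical sampling asks: when does a frame admit a representation
f_k = T^k \varphi, \qquad k = 0, 1, 2, \dots,
% for some bounded operator T on H and a fixed vector \varphi \in H?
```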

Ole Christensen
DTU - Technical University of Denmark
1:50pm    Break

Studying the approximation theoretic properties of neural networks with smooth activation functions is a classical topic. The networks used in practice, however, most often employ the non-smooth ReLU activation function. Despite the recent, remarkable performance of such networks in many classification tasks, a solid theoretical explanation of this success story is still missing.

In this talk, we will present recent results concerning the approximation theoretic properties of deep ReLU neural networks that help to explain some of the characteristics of such networks; in particular, we will see that deeper networks can approximate certain classification functions much more efficiently than shallow networks, which is not the case for most smooth activation functions. We emphasize, though, that these approximation theoretic properties explain neither why simple algorithms like stochastic gradient descent work so well in practice, nor why deep neural networks tend to generalize so well; we focus purely on the expressive power of such networks.

As a model class for classifier functions we consider the class of (possibly discontinuous) piecewise smooth functions for which the different "smooth regions" are separated by smooth hypersurfaces. Given such a function and a desired approximation accuracy, we construct a neural network which achieves the desired approximation accuracy, where the error is measured in L^2. We give precise bounds on the required size (in terms of the number of weights) and depth of the network, depending on the approximation accuracy, on the smoothness parameters of the given function, and on the dimension of its domain of definition. Finally, we show that this size of the networks is optimal, and that networks of smaller depth would need significantly more weights than the deep networks that we construct in order to achieve the desired approximation accuracy.
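The depth-separation phenomenon mentioned in the abstract can be illustrated by a classical toy example (this is the well-known sawtooth construction going back to work of Telgarsky, not the construction of the talk): composing a tiny ReLU network with itself produces exponentially many linear pieces, while a shallow ReLU network needs roughly one hidden unit per piece.

```python
import numpy as np

def relu(x):
    """Rectified linear unit."""
    return np.maximum(x, 0.0)

def tent(x):
    """A one-hidden-layer ReLU network with 3 units: the piecewise-linear
    'tent' mapping 0 -> 0, 1/2 -> 1, 1 -> 0 (and vanishing outside [0, 1])."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sawtooth(x, depth):
    """Composing the tent with itself `depth` times yields a sawtooth with
    about 2**depth linear pieces on [0, 1], using only 3*depth ReLU units;
    a depth-one ReLU network needs on the order of 2**depth units for this."""
    y = np.asarray(x, dtype=float)
    for _ in range(depth):
        y = tent(y)
    return y
```

For instance, `sawtooth(0.125, 3)` hits a peak of the 3-fold composition, since `tent` maps 0.125 to 0.25, then 0.25 to 0.5, then 0.5 to 1.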

Felix Voigtländer
Catholic University of Eichstätt-Ingolstadt


Sina Bittens
University of Göttingen
3:00pm    Coffee and Cake

In this short contribution, starting from a brief historical look at the field of harmonic analysis, and passing through the conferences on harmonic analysis in its most abstract form at which we first met, I would like to discuss the developments of the last 40 years.

This period has been marked by a reactivation and a strong broadening of research activity, above all with regard to applications and the use of numerical methods. The development of wavelets, but equally of Gabor analysis and, more recently, of shearlets, as well as insights in the area of orthogonal polynomials and hypergroups, have all contributed to this.

I will appeal to the younger generation to continue efforts in this direction, not least in order to uphold the significance of harmonic analysis within the broader context of applied mathematics.

Hans Georg Feichtinger
University of Vienna
4:05pm    Break

We will present new results on nonuniform sampling in shift-invariant spaces whose generator is a totally positive function. For a subclass of such generators the sampling theorems can be formulated in analogy to the theorems of Beurling and Landau for bandlimited functions.

In contrast to the cardinal series, the reconstruction procedures for sampling in a shift-invariant space with a totally positive generator are local and thus accessible to numerical linear algebra.
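The setting can be sketched numerically (a minimal illustration, not the method of the talk): take the Gaussian, a classical totally positive function, as generator, sample a function from the shift-invariant space at irregular points of sufficient density, and recover its coefficients by solving a small linear system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shift-invariant space V(phi) = { sum_k c_k phi(. - k) } with a Gaussian
# generator; the Gaussian is a classical example of a totally positive function.
phi = lambda t: np.exp(-t ** 2)

N = 12                                   # shifts k = 0, ..., N-1 (truncated model)
c_true = rng.standard_normal(N)

def f(x):
    """Evaluate f = sum_k c_true[k] * phi(x - k) at the points x."""
    return phi(np.asarray(x)[:, None] - np.arange(N)[None, :]) @ c_true

# Nonuniform sampling set: M irregular points, on average denser than the
# integer grid (the talk gives Beurling-type density conditions).
M = 60
x_samples = np.sort(rng.uniform(-1.0, N, size=M))
y = f(x_samples)

# Reconstruction reduces to a linear system for the coefficients; the fast
# decay of the Gaussian makes the matrix essentially banded, i.e. local.
# Here it is solved naively by least squares.
A = phi(x_samples[:, None] - np.arange(N)[None, :])
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```

In the noise-free, well-sampled regime above the coefficients are recovered essentially exactly; the point of the locality remark in the abstract is that in practice one never needs the full matrix, only a banded portion of it.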

Karlheinz Gröchenig
University of Vienna
4:55pm    Break

I will discuss recovery of (approximately) sparse vectors from their subsampled convolution with a random vector and, in particular, provide bounds on the number of samples required for successful recovery. Apart from this standard linear compressive sensing setup, we will present new results for one-bit compressed sensing, where only the sign of each entry of the subsampled convolution is retained. Finally, we generalize to certain nonlinear functions replacing the sign function.
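The measurement model can be made concrete in a few lines (a sketch of the setup only; orthogonal matching pursuit is used here as a generic stand-in recovery algorithm and is not the subject of the talk's guarantees):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 128, 64, 4                     # ambient dimension, samples, sparsity

# Measurement map: circular convolution with a random vector g, subsampled
# on a random index set omega (columns of the circulant are shifts of g).
g = rng.standard_normal(n)
C = np.stack([np.roll(g, k) for k in range(n)], axis=1)   # full circulant
omega = rng.choice(n, size=m, replace=False)
A = C[omega, :]                                           # subsampled convolution

# Sparse ground truth and its measurements.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the selected support."""
    S, r = [], y.copy()
    for _ in range(s):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[S] = coef
    return x_hat

x_hat = omp(A, y, s)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)

# One-bit variant: only the signs of the subsampled convolution are kept.
y_onebit = np.sign(y)
```

With m well above the information-theoretic minimum, as here, the sparse vector is typically recovered exactly from the linear measurements; in the one-bit setting only directional information survives, which is exactly why the sample-complexity analysis changes.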

Holger Rauhut
RWTH Aachen
5:50pm    End of Scientific Program
7:00pm    Dinner
Location: Scheidegger

Workshop dinner

We invite the participants to join us for dinner at 7 PM at restaurant Scheidegger.


If you are interested in our workshop, please register by June 15, 2018, and indicate whether you plan to attend the conference dinner. This will facilitate the organization of the event. Registration is free of charge.

For registration, please contact Frau Silvia Toth-Pinter. If you have further questions, please contact the organizers.


The venue of the workshop is the building of the Mathematics and Informatics departments of TU München (Boltzmannstr. 3, 85747 Garching) on the Garching campus, Room MI 00.06.011 (Lecture Hall MI HS 3), which is straight ahead through the hall when you enter through the main entrance.

The Garching campus can easily be reached by subway (U-Bahn line U6, station Garching-Forschungszentrum) from the Munich city center. There is ample free parking on Ludwig-Prandtl-Straße at the back of the building. Further details can be found here.


Frank Filbir
Sara Krause-Solberg
Nada Sissouno