Prof. Dr. Sebastian Pokutta
Vice President and Division Head
Mathematical Algorithmic Intelligence
AI in Society, Science, and Technology (AIS²T)
Zuse Institute Berlin (ZIB)
Professor of Optimization and Machine Learning
Institute of Mathematics
Electrical Engineering and Computer Science (courtesy)
Technische Universität Berlin
Recent Preprints.
- Combettes, C. W., and Pokutta, S. (2020). Boosting Frank-Wolfe by Chasing Gradients. Preprint. [arXiv] [summary] [code]
- Carderera, A., and Pokutta, S. (2020). Second-order Conditional Gradients. Preprint. [arXiv] [code]
- Pfetsch, M., and Pokutta, S. (2020). IPBoost – Non-Convex Boosting via Integer Programming. Preprint. [arXiv] [summary] [code]
- Pokutta, S., Singh, M., and Torrico, A. (2020). On the Unreasonable Effectiveness of the Greedy Algorithm: Greedy Adapts to Sharpness. Preprint. [arXiv] [poster]
- Combettes, C. W., and Pokutta, S. (2019). Revisiting the Approximate Carathéodory Problem via the Frank-Wolfe Algorithm. Preprint. [arXiv] [summary] [slides] [code]
Select Recent Talks and Teaching.
- 05/2020: (technical) “Beyond Worst-case Rates: Data-dependent Rates in Learning and Optimization”. Keynote at MIP 2020 (online). [slides]
- 03/2020: (general) “Künstliche Intelligenz im Arbeitsalltag” (Artificial Intelligence in Everyday Working Life). Keynote at Künstliche Intelligenz => verständlich @ TH Wildau (Wildau, Germany). [slides]
- 09/2019: (technical) “Smooth Constrained Convex Minimization via Conditional Gradients”. Plenary at the 19th French-German-Swiss Conference on Optimization (Nice, France). [slides]
- 09/2019: (technical) “Mirror Descent and Related Methods in Linear and Discrete Optimization”. Talk at the Cargèse Workshop on Combinatorial Optimization (Cargèse, France). [slides]
- 07/2019: (technical) “Locally Accelerated Conditional Gradients”. Talk at Research Institute for Mathematical Sciences Seminar, Kyoto University (Kyoto, Japan). [slides]
- SoSe 2020 (summer semester): Discrete Optimization and Machine Learning (seminar)
Recent Blog Posts.
- 05/2020: An update on SCIP
- 04/2020: Psychedelic Style Transfer
- 03/2020: Boosting Frank-Wolfe by Chasing Gradients
- 02/2020: Non-Convex Boosting via Integer Programming
- 11/2019: Approximate Carathéodory via Frank-Wolfe
News.
- 05/2020: Our Priority Program (SPP) proposal “Theoretical Foundations of Deep Learning” has been funded by the DFG. With an overall budget of EUR 8.5m, the program sets out to significantly advance our fundamental understanding of deep learning. The coordination team consists of Gitta Kutyniok (speaker), Martin Burger, Matthias Hein, Sebastian Pokutta, and Ingo Steinwart. DFG Press Release (German), TUB Press Release (German)
- 04/2020: Research Campus MODAL enters second funding phase
- 03/2020: Preliminary COVID-19 forecast model / version for Berlin
- 03/2020: The third conference on “Discrete Optimization and Machine Learning” has been cancelled due to COVID-19.
- 09/2019: We are organizing the third conference on “Discrete Optimization and Machine Learning” in May 2020 at Kyoto University (Kyoto, Japan).