Tuesday, Sept. 21, 2021
The Fall 2021 Special Quarter on Robustness in High-dimensional Statistics and Machine Learning is sponsored by The Institute for Data, Econometrics, Algorithms, and Learning (IDEAL), a multi-discipline, multi-institution collaborative institute that focuses on key aspects of the theoretical foundations of data science. The special-quarter activities include mini-workshops, seminars, graduate courses, and a reading group. The research goal is to explore several theoretical frameworks and directions towards designing estimators and learning algorithms that are tolerant to errors, contamination, and misspecification in data.
The kick-off event for this quarter will be held on Tuesday, September 21, 2021 at 3 pm Chicago/Central time. We will briefly introduce the institute and its key personnel, and share information about the various activities during the special quarter. There will also be short research talks by the organizers of the special quarter. You can register for this event using the registration form. Please join us at the kick-off event!
Logistics
- Date: Tuesday, Sept. 21st, 2021 from 3:00-5:00 PM
- Location: Gather.town with Panopto streaming
- Registration: Registered participants will get a link to the workshop by email.
- WATCH FULL EVENT HERE
Schedule
- 3:00-3:05: Opening Remarks - watch the intro and overview here
- 3:05-3:25: Overview of the Fall 2021 Special Quarter program
- 3:30-3:55: (University of Chicago) watch the talk here
- 3:55-4:20:
- 4:20-4:45:
- 4:45-5:00: Q&A and socializing.
Titles and Abstracts
We study the fundamental problem of high-dimensional robust estimation when a constant fraction of the input samples are adversarially corrupted. Recent work gave the first polynomial-time robust algorithms for a wide range of statistical and machine learning tasks with dimension-independent error guarantees.
In this talk, we will discuss two exciting new directions in the area of high-dimensional robust statistics: (1) designing faster algorithms for robust estimation, with the ultimate goal of matching the runtime of the fastest non-robust algorithms if possible, and (2) exploring more direct non-convex formulations of robust estimation and analyzing their optimization landscape. This talk is mostly based on joint works with Ilias Diakonikolas, Rong Ge, and Mahdi Soltanolkotabi.
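To make the problem setting concrete, here is a small illustrative sketch (not taken from the talk, and using arbitrary choices of dimension, sample size, and corruption fraction): when an adversary replaces a constant fraction of the samples, the empirical mean can be pulled arbitrarily far from the true mean, while even a simple robust estimator such as the coordinate-wise median remains much closer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 50, 1000, 0.1            # dimension, sample size, corruption fraction (illustrative)
true_mean = np.zeros(d)

# Clean samples from N(0, I_d); the adversary then replaces an
# eps-fraction of them with points placed far away.
X = rng.standard_normal((n, d)) + true_mean
k = int(eps * n)
X[:k] = 100.0                        # corrupted samples

mean_err = np.linalg.norm(X.mean(axis=0) - true_mean)
median_err = np.linalg.norm(np.median(X, axis=0) - true_mean)

print(f"sample mean error:       {mean_err:.2f}")
print(f"coordinate-median error: {median_err:.2f}")
```

The sample mean's error scales with the corruption fraction times the outliers' magnitude, so the adversary can make it arbitrarily large; the coordinate-wise median is not fooled by this attack. Against worst-case adversaries, however, the coordinate-wise median still incurs error growing with the square root of the dimension, which is exactly the gap that the dimension-independent guarantees mentioned above address.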