Logistics:
Date: Friday, October 24, 2025
Location: Northwestern University, Mudd Library, 3rd floor, Room 3514, 2233 Tech Dr, Evanston, IL 60208.
Parking: Attendees driving to the workshop can park in the North Campus garage, 2311 N Campus Dr #2300, Evanston, IL 60208. Campus map: https://maps.northwestern.edu/
Description:
Many modern statistical tasks are high-dimensional, which makes it challenging to develop algorithms for them that are both computationally and statistically efficient. Recent work in algorithmic statistics has made significant advances in both the positive direction (developing such algorithms) and the negative direction (providing evidence of impossibility). This workshop will highlight state-of-the-art developments in the area, bringing together leading experts and junior researchers for discussions and new collaborations.
Schedule:
Time | Event
2:20 – 2:40 pm | Alina Harbuzova: Computational Equivalence of Spiked Covariance and Spiked Wigner Models
Abstracts:
Speaker: Shuangping Li
Title: Local Geometric Structure Detection in Random Graphs: Information-Theoretic Limits
Abstract: We study the problem of detecting a planted random geometric subgraph within an Erdős–Rényi graph. We characterize the information-theoretic limits of detection as a function of the ambient dimension, identifying thresholds where geometric structure becomes distinguishable from randomness. This is based on forthcoming joint work with Jinho Bok and Sophie Yu.
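Schematically, the detection task can be phrased as a hypothesis test (the notation here is generic, not taken from the talk):
$$H_0:\; G \sim \mathcal{G}(n, p) \qquad \text{vs.} \qquad H_1:\; G \sim \mathcal{G}(n, p) \text{ with a planted random geometric subgraph on } k \text{ vertices in dimension } d,$$
and the information-theoretic question is for which parameters $(n, p, k, d)$ a (not necessarily efficient) test can distinguish $H_0$ from $H_1$ with error probability $o(1)$.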
Speaker: Brice Huang
Title: Strong low degree hardness in random optimization problems
Abstract: The overlap gap property (OGP) is a powerful geometric framework for proving algorithmic lower bounds in random search and optimization problems. This framework has been used to show low-degree lower bounds for problems including maximum independent set, random k-SAT, and the Ising perceptron, with several works showing that degree-O(1) polynomial algorithms succeed with probability at most 1 − Ω(1). We show a strengthening of this lower bound: degree-o(N) polynomials succeed with probability o(1). Our proof is based on a general-purpose enhancement of the ensemble OGP, which can be used to upgrade existing algorithmic barriers in a black-box way. The bounds we obtain are sharp assuming a version of the low-degree conjecture, and they suggest a general heuristic for translating an OGP into a running-time lower bound in random search problems. Based on joint work with Mark Sellke.
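In symbols, with $N$ the problem dimension and "succeeds" meaning that the (rounded) output of the polynomial is a valid solution, the prior bounds and the strengthening read, respectively (a paraphrase of the abstract, not its exact statement):
$$\Pr\big[\text{degree-}O(1)\text{ algorithm succeeds}\big] \;\le\; 1 - \Omega(1) \qquad \text{and} \qquad \Pr\big[\text{degree-}o(N)\text{ algorithm succeeds}\big] \;=\; o(1).$$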
Speaker: Stefan Tiegel
Title: Hardness of Learning from Worst-Case Lattice Problems
Abstract: Can we base hardness of statistical inference problems on well-studied worst-case assumptions? That is, can we show reductions from problems we believe to be hard on worst-case inputs (such as SAT or problems on lattices) to natural learning problems in which inputs are random (such as tensor completion or agnostically learning halfspaces)?
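For concreteness, agnostically learning halfspaces (one of the target problems named above) is standardly formulated as: given i.i.d. samples $(x, y) \sim \mathcal{D}$ over $\mathbb{R}^d \times \{\pm 1\}$, output a hypothesis $h$ with
$$\Pr_{(x,y) \sim \mathcal{D}}\big[h(x) \neq y\big] \;\le\; \min_{w \in \mathbb{R}^d} \Pr_{(x,y) \sim \mathcal{D}}\big[\mathrm{sign}(\langle w, x \rangle) \neq y\big] + \epsilon.$$
This is the textbook definition, included for orientation rather than quoted from the talk.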
Speaker: He Jia
Title: Low-Degree Method Fails to Predict Robust Subspace Recovery
Abstract: The low-degree polynomial (LDP) framework has been highly successful in predicting computational-vs-statistical gaps for high-dimensional problems in average-case analysis and machine learning. This success has led to the low-degree conjecture, which posits that this method captures the power and limitations of efficient algorithms for a wide class of high-dimensional statistical problems. We show that the low-degree method fails to predict the computational tractability of a natural and basic high-dimensional estimation problem. This is a special case of the well-studied robust subspace recovery problem. In contrast to the lower bounds, we give a simple, robust, and efficient algorithm that solves the problem (and highly noisy variants of it), leveraging anti-concentration properties of the construction. Our results suggest that the low-degree method fails to capture algorithmic techniques based on anti-concentration, challenging its universality as a predictor of computational barriers.
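For orientation, the standard formulation of robust subspace recovery (parameters generic; the talk concerns a special case of this problem) is:
$$\text{given } x_1, \dots, x_n \in \mathbb{R}^d \text{ with } \big|\{i : x_i \in V\}\big| \ge \alpha n \text{ for an unknown } r\text{-dimensional subspace } V \subset \mathbb{R}^d, \text{ recover } V,$$
where the remaining $(1-\alpha)$-fraction of the points may be arbitrary, even adversarially chosen.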
Speaker: Vaidehi Srinivas
Title: Estimating High-dimensional Confidence Sets: A Robust Estimation Perspective
Abstract: We study the problem of finding confidence sets for arbitrary distributions in high dimensions. This problem has connections to robust high-dimensional estimation. We establish these connections and demonstrate how techniques from one domain translate to the other.
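One standard formalization (notation assumed, not taken from the abstract): given i.i.d. samples $x_1, \dots, x_n \sim \mathcal{D}$ over $\mathbb{R}^d$, output a set $S \subseteq \mathbb{R}^d$ such that
$$\Pr_{x \sim \mathcal{D}}\big[x \in S\big] \;\ge\; 1 - \delta,$$
while keeping a measure of the size of $S$ (for example, its volume or diameter) as small as possible.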
Speaker: Alina Harbuzova
Title: Computational Equivalence of Spiked Covariance and Spiked Wigner Models
Abstract: This work focuses on the computational hardness of average-case problems. While the extensive study of worst-case complexity has yielded a classification of problems into equivalence classes, such a classification is largely missing in average-case complexity. We address this gap by establishing the first computational equivalence between two canonical models of high-dimensional statistics: the sparse spiked covariance and sparse spiked Wigner models. Our main contribution is the first average-case reduction that transforms the sparse spiked covariance model into the sparse spiked Wigner model. Our approach leverages a new perturbation-equivariance property of Gram–Schmidt orthogonalization, enabling the removal of dependence in the noise while preserving the signal. Based on arXiv:2503.02802.
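For reference, one common parameterization of the two models (the paper's exact scalings and sparsity model may differ): in the sparse spiked Wigner model one observes
$$Y \;=\; \frac{\lambda}{\sqrt{n}}\, x x^{\top} + W, \qquad W \sim \mathrm{GOE}(n),$$
while in the sparse spiked covariance model one observes i.i.d. samples
$$y_1, \dots, y_m \sim \mathcal{N}\big(0,\; I_n + \beta\, v v^{\top}\big),$$
where the spikes $x, v \in \mathbb{R}^n$ are sparse unit vectors.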
Speaker: Pravesh Kothari
Title: The Low-Degree Conjecture and Average-Case Complexity
Abstract: Over the last few years, a growing body of work has derived hardness results for average-case hypothesis testing and estimation problems by bounding the low-degree advantage (LDA), a quantitative estimate of the closeness of low-degree moments, between a null distribution and a related planted distribution. This line of work is supported by Hopkins's low-degree conjecture, which asserts that low-degree moment matching implies the difficulty of noisy hypothesis testing whenever the null distribution is a product distribution and the planted distribution is permutation-invariant. For many problems, vanishing LDA tantalizingly predicts hardness precisely in the regimes where a natural algorithm, often based on spectral methods, fails.
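For reference, the low-degree advantage between a planted distribution $P$ and a null distribution $Q$ is commonly defined as (a standard formulation, not quoted from the talk)
$$\mathrm{Adv}_{\le D}(P, Q) \;=\; \sup_{f \,:\, \deg f \le D} \frac{\mathbb{E}_{Y \sim P}[f(Y)]}{\sqrt{\mathbb{E}_{Y \sim Q}[f(Y)^2]}},$$
i.e., the $L^2(Q)$-norm of the degree-$\le D$ projection of the likelihood ratio $dP/dQ$; boundedness of this quantity as the dimension grows is the standard low-degree signal of hardness for degree-$D$ tests.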
In this talk, I will discuss recent progress: (1) a refutation of Hopkins's conjecture, and (2) results deriving failures of concrete algorithms from vanishing LDA. I will end with some speculations and future directions.
Based on joint works with Rares Buhai, Tim Hsieh, Aayush Jain, Daniel Kane, Jerry Li, Sidhanth Mohanty, and Stefan Tiegel.
Organizers:
Miklos Z. Racz
He Jia
Sidhanth Mohanty