Logistics

Date: Friday, October 24th 2025

Location: Northwestern University, Mudd Library, 3rd floor (Room 3514), 2233 Tech Dr, Evanston, IL 60208.

Parking: Attendees driving to the workshop can park in the North Campus garage, 2311 N Campus Dr #2300, Evanston, IL 60208 (https://maps.northwestern.edu/txt/facility/646). Exit the garage on the opposite side from the car entrance and you will see Mudd Library directly in front of you, across a grassy lawn. In the library lobby, take the elevator on your right to the 3rd floor.


Description:

Many modern statistical tasks are high-dimensional, which makes it challenging to develop algorithms for them that are both computationally and statistically efficient. Recent work in algorithmic statistics has made significant advances in both the positive direction (developing such algorithms) and the negative direction (providing evidence of impossibility). This workshop will highlight state-of-the-art developments in the area, bringing together leading experts and junior researchers for discussion and new collaborations.

 
Schedule:  

Time | Event
9:00 – 9:45 am | Shuangping Li: Local Geometric Structure Detection in Random Graphs: Information-Theoretic Limits
9:45 – 10:15 am | Break
10:15 – 11:00 am | Brice Huang: Strong low degree hardness in random optimization problems
11:00 – 11:40 am | Break
11:40 am – 12:00 pm | Stefan Tiegel: Hardness of Learning from Worst-Case Lattice Problems
12:00 – 12:20 pm | He Jia: Low-Degree Method Fails to Predict Robust Subspace Recovery
12:20 – 2:00 pm | Lunch
2:00 – 2:20 pm | Vaidehi Srinivas: Estimating High-dimensional Confidence Sets: A Robust Estimation Perspective
2:20 – 2:40 pm | Alina Harbuzova: Computational Equivalence of Spiked Covariance and Spiked Wigner Models
2:40 – 3:20 pm | Break
3:20 – 4:05 pm | Pravesh Kothari: The Low-Degree Conjecture and Average-Case Complexity
4:05 – 5:00 pm | Poster session and reception

 

Abstracts: 

Speaker: Shuangping Li

Title: Local Geometric Structure Detection in Random Graphs: Information-Theoretic Limits

Abstract: We study the problem of detecting a planted random geometric subgraph within an Erdős–Rényi graph. We characterize the information-theoretic limits of detection as a function of the ambient dimension, identifying thresholds where geometric structure becomes distinguishable from randomness. This is based on forthcoming joint work with Jinho Bok and Sophie Yu.
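
For orientation, detection problems of this kind are typically phrased as a simple-versus-simple hypothesis test. A minimal sketch, where the subgraph size k, ambient dimension d, and connection threshold τ are illustrative placeholders rather than parameters taken from the talk:

\[
H_0:\; G \sim G(n,p)
\qquad \text{vs.} \qquad
H_1:\; G \sim G(n,p) \,\cup\, \mathrm{RGG}(k, \mathbb{S}^{d-1}, \tau),
\]

where under H_1 a random geometric graph (points uniform on the sphere \mathbb{S}^{d-1}, edges between pairs within distance \tau) is planted on a uniformly random k-subset of the vertices. A test \psi achieves strong detection if \Pr_{H_0}[\psi(G)=1] + \Pr_{H_1}[\psi(G)=0] = o(1); the information-theoretic limit is the boundary, as a function of d, of the regime where such a test exists.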

Speaker: Brice Huang

Title: Strong low degree hardness in random optimization problems

Abstract: The overlap gap property (OGP) is a powerful geometric framework for proving algorithmic lower bounds in random search and optimization problems. This framework has been used to show low degree lower bounds in problems including maximum independent set, random k-SAT, and the Ising perceptron, with several works showing that degree O(1) polynomial algorithms succeed with probability at most 1 − Ω(1). We show a strengthening of this lower bound: degree o(N) polynomials succeed with probability o(1). Our proof is based on a general-purpose enhancement of the ensemble OGP, which can be used to upgrade existing algorithmic barriers in a black-box way. The bounds we obtain are sharp assuming a version of the low degree conjecture, and suggest a general heuristic for translating an OGP into a running time lower bound in random search problems. Based on joint work with Mark Sellke.
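
For context, a degree-D polynomial algorithm in this setting is a vector of polynomials of the random instance, rounded to a candidate solution; a minimal sketch, in notation not taken from the paper:

\[
f = (f_1, \dots, f_N) : \mathbb{R}^{M} \to \mathbb{R}^{N}, \qquad \deg f_i \le D,
\]

for example outputting \sigma = \mathrm{sign}(f(g)) \in \{\pm 1\}^N on an instance g with M coordinates of randomness. Earlier OGP-based arguments show \Pr[f \text{ succeeds}] \le 1 - \Omega(1) when D = O(1); the strengthening described above gives \Pr[f \text{ succeeds}] = o(1) for every D = o(N).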

Speaker: Stefan Tiegel

Title: Hardness of Learning from Worst-Case Lattice Problems

Abstract: Can we base hardness of statistical inference problems on well-studied worst-case assumptions? That is, can we show reductions from problems we believe to be hard on worst-case inputs (such as SAT or problems on lattices) to natural learning problems in which inputs are random (such as tensor completion or agnostically learning halfspaces)?

Over the last few years, several works have constructed such reductions from fundamental worst-case problems on lattices (such as gap-SVP and SIVP) to explain the intractability of many inference problems. These lattice problems have been extensively studied in the cryptography community for several decades and are believed to be intractable even for quantum computers. In this talk I will briefly survey some of these recent results. In particular, I will show hardness results for fundamental problems related to learning (variants of) halfspaces: agnostically learning a single halfspace and learning intersections of a few halfspaces.

Speaker: He Jia 

Title: Low-Degree Method Fails to Predict Robust Subspace Recovery

Abstract: The low-degree polynomial (LDP) framework has been highly successful in predicting computational-versus-statistical gaps for high-dimensional problems in average-case analysis and machine learning. This success has led to the low-degree conjecture, which posits that this method captures the power and limitations of efficient algorithms for a wide class of high-dimensional statistical problems. We show that the low-degree method fails to predict the computational tractability of a natural and basic high-dimensional estimation problem, a special case of the well-studied robust subspace recovery problem. In contrast to the low-degree lower bounds, we give a simple, robust, and efficient algorithm that solves the problem (and highly noisy variants of it), leveraging anti-concentration properties of the construction. Our results suggest that the low-degree method fails to capture algorithmic techniques based on anti-concentration, challenging its universality as a predictor of computational barriers.

Speaker: Vaidehi Srinivas

Title: Estimating High-dimensional Confidence Sets: A Robust Estimation Perspective

Abstract: We study the problem of finding confidence sets for arbitrary distributions in high dimensions. This problem has close connections to robust high-dimensional estimation; we establish these connections and demonstrate how techniques from one domain translate to the other.

Speaker: Alina Harbuzova 

Title: Computational Equivalence of Spiked Covariance and Spiked Wigner Models

Abstract: This work focuses on the computational hardness of average-case problems. While the extensive study of worst-case complexity has yielded a classification of problems into equivalence classes, such a classification is largely missing in average-case complexity. We address this gap by establishing the first computational equivalence between two canonical models of high-dimensional statistics: the sparse spiked covariance and sparse spiked Wigner models. Our main contribution is the first average-case reduction that transforms the sparse spiked covariance model into the sparse spiked Wigner model. Our approach leverages a new perturbation-equivariance property of Gram-Schmidt orthogonalization, enabling the removal of dependence in the noise while preserving the signal. Based on arXiv:2503.02802.
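
For reference, the two models in one common normalization (scaling conventions vary across papers) are

\[
\text{spiked Wigner:}\quad Y = \lambda\, x x^\top + W, \quad W \sim \mathrm{GOE}(n),
\qquad
\text{spiked covariance:}\quad y_1, \dots, y_m \overset{\text{i.i.d.}}{\sim} \mathcal{N}\!\big(0,\; I_n + \beta\, v v^\top\big),
\]

where the spikes x and v are unit vectors, taken k-sparse in the sparse variants. A reduction of the kind described here takes samples y_1, \dots, y_m from the covariance model and produces a symmetric matrix whose distribution approximately matches a spiked Wigner instance under both the null and planted distributions, so that hardness transfers from one model to the other.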

Speaker: Pravesh Kothari

Title: The Low-Degree Conjecture and Average-Case Complexity

Abstract: Over the last few years, a growing body of work has derived hardness results for average-case hypothesis testing and estimation problems by bounding the low-degree advantage (LDA), a quantitative estimate of the closeness of low-degree moments, between a null distribution and a related planted distribution. This line of work is supported by Hopkins's low-degree conjecture, which asserts that low-degree moment matching implies the difficulty of noisy hypothesis testing whenever the null distribution is a product distribution and the planted distribution is permutation invariant. For many problems, vanishing LDA tantalizingly predicts hardness precisely in the regimes where a natural algorithm, often based on spectral methods, fails.
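
In its standard formulation, the degree-D advantage is the norm of the low-degree projection of the likelihood ratio: writing \mu for the null distribution and \nu for the planted one,

\[
\mathrm{Adv}_{\le D}(\nu, \mu) \;=\; \max_{f \ne 0,\; \deg f \le D} \frac{\mathbb{E}_{x \sim \nu}[f(x)]}{\sqrt{\mathbb{E}_{x \sim \mu}\!\left[f(x)^2\right]}},
\]

and the heuristic predicts hardness in regimes where this quantity stays O(1) as the dimension grows, for D up to, say, polylogarithmic in the dimension.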

The relative ease of computing LDA, coupled with the hardness predictions agreeing with the failure of a natural algorithm, has led to widespread applications of this framework, even in cryptography and statistics. But we know remarkably little about the rigorous implications of vanishing LDA. Hopkins's conjecture predicts that vanishing LDA implies the absence of all efficient algorithms, but so far, we do not know how to rule out even specific natural algorithms, such as spectral methods or the eponymous "low-degree polynomial algorithms" (taking thresholds of low-degree polynomials of the inputs).

In this talk, I will discuss recent progress: 1) a refutation of Hopkins's conjecture, and 2) results deriving the failure of concrete algorithms from vanishing LDA. I will end with some speculations and future directions.

Based on joint works with Rares Buhai, Tim Hsieh, Aayush Jain, Daniel Kane, Jerry Li, Sidhanth Mohanty, and Stefan Tiegel. 

 

Organizers:

Miklos Z. Racz

He Jia

Sidhanth Mohanty

 

 
