The following graduate courses are offered during Winter/Spring 2023 by the University of Illinois at Chicago, Northwestern University, the Toyota Technological Institute at Chicago, the University of Chicago, and the Illinois Institute of Technology:

 

University of Illinois at Chicago (UIC):

 

Mathematical Theory of Artificial Intelligence Gyorgy Turan MWF 2:00-2:50 pm

 

Valiant’s learning model, positive and negative results in learnability, automated inference, perceptrons, Rosenblatt’s theorem, convergence theorem, threshold circuits, inductive inference of programs, grammars, and automata.

Deep Representation Learning  Xinhua Zhang (hybrid) W 6:00-8:40 pm

The primary goal of this course is to prepare students to conduct research in representation learning — either in the development of improved representation learning methods or the application of representation learning techniques to new domains or priors. Students will gain familiarity with the key technical challenges surrounding representation learning and state-of-the-art representation learning methods.

Introduction to Neural Networks Ahmet Enis Cetin MW 3:00-4:30 pm

An introductory course on neural networks.

Introduction to Pattern Recognition Ahmet Enis Cetin  (hybrid) TR 3:30-5:00 pm

Introduction to pattern recognition, supervised and unsupervised learning, naive Bayes, Gaussian mixture models, k-means clustering, k-nearest neighbors, and hidden Markov models.

Information and Learning Mesrob Ohannessian (in person) TR 9:30-10:45am UIC Spring Semester January 9 – May 5

A first mathematical look at what information is, what it means to learn, and how the two are related. This course covers the basics of statistical inference and learning under the lens of information theory. This means that in addition to specific methods and algorithms that acquire knowledge from observations, this course also highlights the limits of what is possible and explains what it would take to reach them. Concepts are illustrated with applications. Topics covered: Statistical Inference, Entropy and Compression, Concentration Inequalities, Efficiency and Universality, PAC Learning, Model Complexity, Regularization, Mutual Information and Lower Bounds.

Deep Learning and Modern Applications Theja Tulabandhula (in person) R 3:30-6:00 pm UIC Spring Semester

Broadly, we will cover topics spanning deep learning and reinforcement learning. In particular, we will study popular deep learning architectures, their design choices and how they are trained. This will be motivated by business applications dealing with image, text and tabular data. Finally, we will look at online and reinforcement learning frameworks and their role in sequential decision making settings such as retail.

 

Northwestern University (NU):

 

Explanation and reproducibility in data-driven sciences  Jessica Hullman (in-person) [Winter] MW 3:30-4:50 pm

In this seminar course, we will consider what it means to produce reproducible explanations in data-driven science. As the complexity and size of available data increase, intuitive explanations of what has been learned from data are in high demand. However, what does it mean for an explanation to be accurate and reproducible, and how do threats to the validity of data-driven inferences differ depending on the underlying goal of statistical modeling? The readings of the course will be drawn from recent and classic literature pertaining to reproducibility, replication, and explanation in data inference published in computer science, statistics, and related fields. The course is structured in three parts. In part one we will examine recent evidence of problems of reproducibility, replicability, and robustness in data-driven science. In part two we will examine theories and evidence related to causes of these problems. In part three, we will consider solutions and open questions. Topics include: ML reproducibility, the social science replication crisis, adaptive data analysis, causal inference, generalizability, and uncertainty communication.

Recent Highlights in Theoretical CS Aravindan Vijayaraghavan (in-person) [Winter] F 2:00-4:50 pm

This graduate-level seminar class will cover great papers in theoretical computer science from the past two decades. In every class, a student will present one of the papers, chosen from a curated list of papers across different areas of TCS. The goal is to show complete proof details for the main results in the paper, to the extent that this is possible within 2 hours. In addition to learning about these results and the mathematical techniques they introduce, a goal of this course is to stimulate further research on open problems related to the paper.

Econometrics of Networks Eric Auerbach (unspecified) [Winter] TR 11:00-1:00 pm

(description pending)

Topics in Econometrics Ivan Canay (unspecified) [Spring] TR 1:30-3:20 pm (tentative)

This course is the third quarter in the graduate econometrics sequence. It aims to cover modern econometrics topics from a theoretical point of view but with lessons for practitioners, so it is intended for both students interested in econometrics and students interested in applied micro. The first part of the class covers local average treatment effects, marginal treatment effects, and Roy models, which are commonly used tools in applied micro. The second part this year will cover double debiased machine learning for treatment effects and estimation of treatment effects via surrogates. The third part covers local asymptotic approximations, contiguity, and local asymptotic normality. The last part covers uniformly valid approximations, with applications to the bootstrap and subsampling in moment inequality models.

Introduction to Econometrics Ivan Canay (unspecified) [Spring] TR 9:00-10:50 am (tentative)

This course is the third quarter of the first-year graduate econometrics sequence. It covers estimation and inference in a variety of settings, including linear models with endogeneity, panel data models, difference in differences, and other models that are widely used in empirical economics. The course assumes that all students are comfortable with the kind of asymptotic theory covered in 480-2, so the focus of the discussion will be on issues of identification, interpretation, and, to some degree, practical implementation. Some topics do require advanced asymptotic arguments, and those will be covered in class. The class schedule on the last page contains a detailed list of topics.

 

Toyota Technological Institute at Chicago (TTIC):

Mathematical Toolkit Avrim Blum  (in person) [Spring] MW 1:30-2:50 pm

The course is aimed at first-year graduate students and advanced undergraduates. The goal of the course is to collect and present important mathematical tools used in different areas of computer science. The course will mostly focus on linear algebra and probability. We intend to cover the following topics and examples: Abstract linear algebra: vector spaces, linear transformations, Hilbert spaces, inner product, Gram-Schmidt orthogonalization, eigenvalues and eigenvectors, Singular Value Decomposition, SVD applications. Discrete probability: events and random variables; Markov, Chebyshev, and Chernoff-Hoeffding bounds. Balls and bins problems. Threshold phenomena in random graphs. Randomized algorithms (e.g., polynomial identity testing, perfect matchings, low-congestion routing). Gaussian variables, concentration inequalities, dimension reduction. Additional topics (to be chosen from based on time and interest): Martingales, Markov Chains, Random Matrices.

Computational and Metric Geometry Yury Makarychev (in person) [Spring] Tu/Th 11:00am-12:20 pm

The course covers fundamental concepts, algorithms, and techniques in computational and metric geometry. Topics covered include: convexity and convex hulls, range searching, segment intersection, Voronoi diagrams, Delaunay triangulations, metric and normed spaces, low-distortion metric embeddings and their applications in approximation algorithms, stochastic decompositions of metric spaces, dimensionality reduction, approximate nearest neighbor search, and locality-sensitive hashing. The course textbook is “Computational Geometry” by M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars.

Planning, Learning, and Estimation for Robotics and AI Matt Walter (in person) [Spring] Tu/Th 9:30am-10:50am

This course is concerned with fundamental techniques in robotics and artificial intelligence (AI), with an emphasis on probabilistic inference, learning, and planning under uncertainty. The course will investigate the theoretical foundations underlying these topics as rigorous mathematical tools that enable solutions to real-world problems drawn broadly from robotics and AI. The course will cover topics that include: Bayesian filtering (Kalman filtering, particle filtering, and dynamic Bayesian networks), simultaneous localization and mapping, planning, Markov decision processes, partially observable Markov decision processes, reinforcement learning, and graphical models.

 

University of Chicago (UC):

 

Causal Machine Learning Sanjog Misra, Max Farrell (in person) [Winter] W 1:30-4:30 pm

This course will bring students to the cutting edge in causal inference, giving them a solid theoretical understanding and ready-to-deploy tools for research. Using machine learning for estimation and inference of treatment effects has become an important part of modern academic economics. Students in this class will learn the theoretical underpinnings of this material as well as how to carefully and correctly apply the techniques in research. The course will prepare students for both theoretical and applied dissertation research. Each topic will be covered for two weeks, one week covering theory and one covering application. Topics will include the basics of causal inference, nonparametric estimation, semiparametric inference, and double machine learning.

Topics in Learning under Distribution Shifts Cong Ma (in-person) [Winter] (time TBD)

Traditional supervised learning assumes that the training and testing distributions are the same. Such a no-distribution-shift assumption, however, is frequently violated in practice. In this course, we survey topics in machine learning in which distribution shifts naturally arise. Possible topics include supervised learning with covariate shift, off-policy evaluation in reinforcement learning, and offline reinforcement learning.

Trustworthy Machine Learning Victor Veitch (in person) [Spring] (time TBD)

This course focuses on how and why machine learning systems fail in the wild, and the shortcomings of traditional train/test evaluations. Topics include domain shifts, fairness, underspecification, explainability, evaluation, and AI alignment.

 

Illinois Institute of Technology (IIT):

 

Data Security and Privacy  Binghui Wang (in-person + stream & recording) MW 10:00-11:15am

This course covers the fundamental models for ensuring data privacy and security, and explores theoretical models, algorithms, and technologies that can enhance data privacy and security in different systems and applications, such as recommenders, search engines, location-based services, social networks, cloud computing, cryptocurrencies, and the smart grid.

Mathematical Statistics Lulu Kang (in-person) (time TBD)

Theory of sampling distributions; principles of data reduction; interval and point estimation, sufficient statistics, order statistics, hypothesis testing, correlation and linear regression; introduction to linear models.

Secure Machine Learning Design and Application Ren Wang (in-person) (time TBD)

The focus of this course is to teach students how to adapt fundamental techniques of robustness evaluation and enhancement to different use cases of adversarial machine learning in computer vision, signal processing, and power systems.
