Thursday, October 26th – Friday, October 27th, 2023

Synopsis

This two-day workshop explores the critical intersection of computer science (CS), law, and policy in addressing the challenges and opportunities of ensuring reliability and trust in machine learning and artificial intelligence (AI) systems. The workshop convenes experts and interested researchers, legislators, and regulators from all fields touching upon AI, with the aim of fostering collaboration and understanding between computer scientists and legal and policy experts.

The agenda is divided into four sessions.

(1) Foundations in CS and Law for Reliability and Trust. This session introduces reliability and trust from both CS and legal perspectives, highlighting principles and technologies. A panel will then discuss and provide insights on how these disciplines can contribute to shaping trustworthy and reliable AI.

(2-3) Generative AI and Specific AI Challenges. These sessions introduce generative AI and delve into its technical, legal, and regulatory landscape. We will also analyze specific AI uses and challenges, examining reliability and trust in CS, law, and policy through real-world examples such as platform regulation, content moderation, misinformation, privacy in machine learning, cybersecurity, conversational AI, and automating legal advice and decision-making.

(4) AI Policy and Law. This session features a panel of experts exploring AI risks and benefits and emerging and future policy, legislation, and regulation. Discussion topics will include the EU AI Act, the 2023 proposed U.S. Algorithmic Accountability Act, and the Illinois law to form a task force on Generative AI. Additional talks will address platform regulation and AI and legal reasoning.

Schedule (more details are forthcoming)

Thursday, October 26th, 2023

Part One: Reliability and Trust: Foundations in CS and Law

8:00 am Breakfast

9:00 am Welcome and Introduction

  • Aravindan Vijayaraghavan, Associate Professor of Computer Science, Northwestern University
  • Mesrob Ohannessian, Assistant Professor of Electrical and Computer Engineering, University of Illinois Chicago
  • Gyorgy Turan, Professor of Mathematics, Statistics and Computer Science, University of Illinois Chicago
  • Daniel W. Linna Jr., Senior Lecturer & Director of Law and Technology Initiatives, Northwestern University

9:05 am Introduction to Reliability and Trust in CS

  • Ana Marasović on Challenges in Fostering (Dis)Trust in AI, Assistant Professor of Computer Science, University of Utah 

9:50 am Introduction to Reliability and Trust in Law

  • Daniel B. Rodriguez, Harold Washington Professor of Law, Northwestern Pritzker School of Law

10:35 am Coffee Break + Networking

11:00 am Panel: Exploring the CS+Law Intersection: Fostering Interdisciplinary Research on Important Issues

  • Ana Marasović, Assistant Professor of Computer Science, University of Utah
  • Robert Sloan, Professor of Computer Science and Department Head, University of Illinois Chicago
  • Charlotte Tschider, Associate Professor of Law, Loyola University Chicago School of Law
  • Moderated by: Daniel W. Linna Jr., Senior Lecturer & Director of Law and Technology Initiatives, Northwestern Pritzker School of Law & McCormick School of Engineering

12:30 pm Lunch

Part Two: Introduction to Generative AI and Specifics of Reliability and Trust in CS and Law

1:30 pm Introduction to Large Language Models (LLMs): Benefits and Risks

  • Introduction to LLMs: what they can and cannot do
    • Kangwook Lee on Demystifying Large Language Models: A Comprehensive Overview, Assistant Professor of Electrical and Computer Engineering, Computer Science, University of Wisconsin-Madison
  • Landscape of (non-LLM) Generative AI
    • Michael Maire, Associate Professor of Computer Science, University of Chicago
  • Q&A on LLMs and Generative AI

2:30 pm How AI Tilts the Playing Field: Privacy, Fairness, and the Shadow of Risk

  • Robert Sloan, Professor of Computer Science and Department Head, University of Illinois Chicago

2:50 pm Coffee Break – Roundtable Discussions

3:20 pm Modeling Case-based Legal Argument in an Age of Generative AI

  • Kevin Ashley, Professor of Law and Intelligent Systems, Senior Scientist at the Learning Research and Development Center, University of Pittsburgh

4:05 pm Specifics of Reliability and Trust (Short Talks – 15 minutes + 5 minutes Q&A)

  • Charlotte Tschider, Associate Professor of Law, Loyola University Chicago School of Law
  • V.S. Subrahmanian on Judicial Support Tool: Finding the k-Most Likely Judicial Worlds, Walter P. Murphy Professor of Computer Science, McCormick School of Engineering, Northwestern University
  • Aziz Huq, Frank and Bernice J. Greenberg Professor of Law, University of Chicago Law School

Evening Speaker and Organizer Dinner

_____________________

Friday, October 27th, 2023

8:00 am Breakfast

9:00 am Opening Remarks

9:05 am Generative AI and Large Language Models: Reliability and Trust Issues from CS Perspectives

  • David McAllester, Professor, Toyota Technological Institute; Professor (Part-Time), Department of Computer Science, University of Chicago

Specifics of Reliability and Trust in CS and Law (continued)

9:50 am Specifics of Reliability and Trust (Short Talks – 15 minutes + 5 minutes Q&A)

  • Paul Gowder on The Networked Leviathan: For Democratic Platforms, Professor of Law, Associate Dean of Research and Intellectual Life, Northwestern University Pritzker School of Law
  • Ermin Wei on Incentivized Federated Learning and Unlearning, Associate Professor of Electrical and Computer Engineering, Associate Professor of Industrial Engineering & Management Sciences, Northwestern University McCormick School of Engineering

10:30 am Coffee Break

11:00 am Specifics of Reliability and Trust (Short Talks – 15 minutes + 5 minutes Q&A)

  • Anthony J. Casey on Your Self Driving Law Has Arrived, Donald M. Ephraim Professor of Law and Economics, Faculty Director, The Center on Law and Finance, University of Chicago Law School
  • Sabine Brunswicker on The Impact of Empathy in Conversational AI on Perceived Trustworthiness and Usefulness: Insights from a Behavioral Experiment with a Legal Chatbot, Professor for Digital Innovation, Director of the Research Center for Open Digital Innovation (RCODI) at Purdue University
  • Aloni Cohen on Control, Confidentiality, and the Right to be Forgotten, Assistant Professor, Department of Computer Science and Data Science, University of Chicago
  • Student talk: Chenhao Zhang on Regulation of Algorithmic Collusion

12:30 pm Lunch

Part Three: Big Research Questions: AI, Rule of Law, and Society

1:30 pm Formal Methods and the Law

  • Sarah Lawsky on Formal Methods and the Law, Stanford Clinton Sr. and Zylpha Kilbride Clinton Research Professor of Law, Vice Dean, Northwestern University Pritzker School of Law

2:15 pm Roundtable Discussions and Coffee Break

3:30 pm Panel on “Legislating and Regulating AI”

  • Tom Lynch, Chief Information Officer, Bureau of Technology, Cook County
  • Rep. Abdelnasser Rashid, Illinois State Representative, 21st House District
  • Moderated by: Daniel W. Linna Jr., Senior Lecturer & Director of Law and Technology Initiatives, Northwestern Pritzker School of Law & McCormick School of Engineering

5:00 pm Conclude

Titles and Abstracts

Speaker: Kevin D. Ashley

Title: Modeling Case-based Legal Argument in an Age of Generative AI

Abstract: Researchers in AI and Law have long modeled how legal professionals argue with cases and analogies. Their models have represented in a variety of ways the aspects of legal knowledge employed in predicting, explaining, and arguing for and against case outcomes. These include representing issues, rules, and factors courts employ in reasoning about and drawing analogies across cases. They have applied legal text analytic tools in attempting to bridge the gap between case texts and their argument models. With the recent advances in large language models, however, the question arises as to what roles, if any, their models of argument will play in an age of generative AI. This talk surveys various approaches my students and I have pursued to shed light on this question.

____

Speaker: Sabine Brunswicker

Title: The Impact of Empathy in Conversational AI on Perceived Trustworthiness and Usefulness: Insights from a Behavioral Experiment with a Legal Chatbot

Abstract: With advancements in data-driven machine learning (ML) modeling (e.g., deep learning) and natural language processing, artificial intelligence (AI) is transforming everyday life. The rise of large language models (LLMs), including the foundational LLMs behind ChatGPT, has led to the general belief that online “chatbots” can do more than support citizens in day-to-day tasks like online shopping. Indeed, proponents argue that chatbots can hold a level of “social intelligence” that allows them to render services in areas like law and healthcare, which are characterized by deeply interpersonal and empathic relationships between a human expert and a citizen. Although existing research has shown that empathy is crucial for designing chatbot conversations that are perceived as trustworthy and useful, I argue that there is a major research gap: existing research fails to disentangle a chatbot’s “cognitive” intelligence, that is, its ability to provide factually correct answers, from its social and emotional intelligence as perceived through language. As part of collaborative research with Northwestern University, I present results of a first behavioral study in a broader research agenda on empathy in conversational AI. Guided by linguistic theories of syntax and rhetoric, we developed a first behavioral theory of empathy in language display to explain relational outcomes of human-AI conversations in terms of cognitive effort, helpfulness, and trustworthiness. Using this theory, we designed a chatbot that integrates a rule-based logic for empathy in language display, using syntactic and rhetorical linguistic elements that evoke empathy, distinct from the chatbot’s knowledge-based legal rules. Through a randomized controlled experiment with a 2-by-3 factorial design involving 277 participants, we compared the outcomes generated by an empathetic chatbot, a non-empathetic chatbot using the same legal rules, and a non-conversational service in the form of frequently asked questions (“FAQs”). The results indicate that subtle changes in language syntax and style can have substantial implications for the outcomes of human-AI conversations in terms of perceived trustworthiness, usefulness, and cognitive effort. I will conclude my talk by providing an overview of ongoing work that aims to align an open LLM through a neuro-symbolic architecture that integrates rule-based models informed by this behavioral study with generative and foundational AI, and by discussing alternative statistical views of trustworthiness in communication relationships informed by information theory and reachability analysis.

____

Speaker: Anthony Casey

Title: Your Self Driving Law Has Arrived

Abstract: With the continuing advances in machine learning and artificial intelligence systems, technology that converts broad legal goals into real-time personalized microdirectives has become a reality. Building on this technology, the use of self-driving contracts has expanded, and recent advances in Large Language Models bring the promise (and peril) of even further and more rapid expansion and the possible adoption of self-driving public laws and automated judging. While academics have written about these developments for years and private actors have pushed their development, regulatory preparation has lagged. This talk will focus on the immediate challenges posed by the boom of self-driving laws, with a particular focus on the reliability and integrity of the service providers and the potential for misuse and manipulation of this new technology.

____

Speaker: Aloni Cohen

Title: Control, Confidentiality, and the Right to be Forgotten

Abstract: Recent digital rights frameworks give users the right to delete their data from systems that store and process their personal information (e.g., the “right to be forgotten” in the GDPR).  How should deletion be formalized in complex systems that interact with many users and store derivative information? We argue that prior approaches fall short. Definitions of machine unlearning (Cao and Yang, S&P 2015) are too narrowly scoped and do not apply to general interactive settings. The natural approach of deletion-as-confidentiality (Garg et al., EUROCRYPT 2020) is too restrictive: by requiring secrecy of deleted data, it rules out social functionalities. We propose a new formalism: deletion-as-control. It allows users’ data to be freely used before deletion, while also imposing a meaningful requirement after deletion—thereby giving users more control. Deletion-as-control provides new ways of achieving deletion in diverse settings. We apply it to social functionalities, and give a new unified view of various machine unlearning definitions from the literature.

____

Speaker: Paul Gowder

Title: The Networked Leviathan: For Democratic Platforms

Abstract: This talk will focus attention on the institutional preconditions of trust and safety work, borrowing from political science to describe the high-level governance problems presented by networked technologies more generally, with brief remarks on their applicability to AI.

____

Speaker: Aziz Huq

Title: Artificially Intelligent Regulation

Abstract: This essay maps the potential, and risks, of artificially intelligent regulation: regulatory arrangements that use a complex computational algorithm or another artificial agent either to define a legal norm or to guide its implementation. The ubiquity of AI systems in modern organizations all but guarantees that regulators or the parties they regulate will make use of learning algorithms or novel techniques to analyze data in the process of defining, implementing, or complying with regulatory requirements. We offer an account of the possible benefits and harms of artificially intelligent regulation. Its mix of costs and rewards, we show, depends primarily on whether AI is deployed in ways aimed merely at shoring up existing hierarchies, or whether AI systems are embedded in and around legal frameworks carefully structured and evaluated to better our lives, environment, and future.

____

Speaker: Sarah Lawsky

Title: Formal Methods and the Law

Abstract: This talk will address the potential and the limitations of using formal methods in the context of formalizations of the law. The talk will use as its primary example Catala, a domain-specific programming language designed specifically for formalizing tax law.
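
To give a flavor of what such formalization enables, here is a minimal Python sketch (not Catala, and with a two-bracket rule invented purely for illustration, not actual tax law): a rule is encoded as an executable definition, and a property a formal-methods tool might verify is checked mechanically.

    # Hypothetical two-bracket rule, invented for illustration only.
    def tax(income: int) -> float:
        """Toy statute: 10% below 10,000; 20% on the full amount otherwise."""
        if income < 10_000:
            return 0.10 * income
        return 0.20 * income

    # Property check: earning more pre-tax should never leave a taxpayer
    # with less after tax. The "notch" at the threshold violates it --
    # the kind of defect a formalization makes easy to surface.
    violations = [(i, i + 1) for i in range(20_000)
                  if i - tax(i) > (i + 1) - tax(i + 1)]
    print(violations)  # [(9999, 10000)]: crossing the bracket lowers net income

A domain-specific language like Catala makes the statute-to-code correspondence precise and auditable; the sketch above only shows the shape of the exercise.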

____

Speaker: Kangwook Lee

Title: Demystifying Large Language Models: A Comprehensive Overview

Abstract: This talk aims to provide a comprehensive introduction to large language models (e.g., GPT). We will delve into the fundamentals, explore the capabilities and limitations of these models, and discuss their potential impact. This talk is accessible to all, requiring no prior technical knowledge.

____

Speaker: Ana Marasović

Title: Challenges in Fostering (Dis)Trust in AI

Abstract: What factors enable people to trust trustworthy models and distrust untrustworthy models? Broadly, (dis)trust can be derived from two sources: (1) intrinsic, which stems from understanding a model’s inner workings or reasoning, and (2) extrinsic, which is based on observing a model’s external behaviors. Evaluation benchmarks created by AI researchers can foster extrinsic (dis)trust in a given contract, but they must be properly constructed: only then can they ensure that a model, to pass the test, must truly uphold the intended contract. I will give an overview of the challenges of constructing valid evaluations. Explainable AI (XAI), on the other hand, aims to provide insights into a model’s reasoning and thus foster intrinsic (dis)trust. XAI is not without its own challenges, which I will discuss toward the end of my talk.

____

Speaker: Robert Hal Sloan, joint work with Richard Warner (Chicago-Kent Law School)

Title: How AI Tilts the Playing Field: Privacy, Fairness, and the Shadow of Risk

Abstract: Private sector applications of artificial intelligence (AI) raise related questions of informational privacy and fairness. Fairness requires that market competition occurs on a level playing field, and uses of AI unfairly tilt the field. Informational privacy concerns arise because AI tilts the field by taking information about activities in one area of one’s life and using it in ways that impose novel risks in areas not formerly associated with such risks. The loss of control over that information constitutes a loss of informational privacy. To illustrate both the fairness and privacy issues, imagine that Sally declares bankruptcy after defaulting on $50,000 of credit card debt, which she incurred paying for lifesaving medical treatment for her eight-year-old daughter. Post-bankruptcy Sally is a good credit risk. Her daughter has recovered, and her sole-proprietor business is seeing increased sales. Given her bankruptcy, however, an AI credit scoring system predicts that she is a poor risk and assigns her a low score. That low credit score casts a shadow that falls on her when her auto insurance company, which uses credit scores in its AI system as a measure of the propensity to take risks, raises her premium. Is it fair that saving her daughter’s life should carry with it the risk (realized in this case) of a higher premium? The pattern is not confined to credit ratings and insurance premiums. AI routinely creates risk shadows. We address fairness questions in two steps. First, we turn to philosophical theories of fairness as equality of opportunity to spell out the content behind our metaphor of tilting the playing field. Second, we address the question of how, when confronted with a mathematically complex AI system, one can tell whether the system meets requirements of fairness. We answer by formulating three conditions whose violation makes a system presumptively unfair. The conditions provide a lens that reveals relevant features when policy makers and regulators investigate complex systems. Our goal is not to resolve fairness issues but to contribute to the creation of a forum in which legal regulators and affected parties can work to resolve them. The third of our three conditions requires that systems incorporate contextual information about individual consumers, and we conclude by raising the question of whether our suggested approach to fairness significantly reduces informational privacy. We do not answer the question but emphasize that fairness and informational privacy questions can closely intertwine.

____

Speaker: V.S. Subrahmanian

Title: Judicial Support Tool: Finding the k-Most Likely Judicial Worlds

Abstract: Judges sometimes make mistakes. We propose JUST, a logical framework within which judges can record propositions about a case and witness statements in which a witness asserts that certain propositions are true. JUST allows the judge/jury to assign a probability representing her belief in a witness statement. A world is an assignment of true/false to each proposition that is required to satisfy case-specific integrity constraints. JUST’s explicit algorithm calculates the k-most likely worlds without using independence assumptions between propositions. The judge may use these calculated top-k most probable worlds to make her or his final decision. For this computation, JUST uses a suite of “combination” functions. We also develop JUST’s implicit algorithm, which is far more efficient. We test JUST using 10 combination functions on 5 real-world court cases and 19 TV court cases, and show the combinations under which JUST works well in practice. Joint work with M. Bolonkin, S. Chakrabarty, and C. Molinaro.
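
As a rough illustration of what the explicit algorithm computes, the following Python sketch enumerates all truth assignments, discards those violating the integrity constraints, scores each surviving world with a pluggable combination function, and returns the top k. All names, the example constraint, and the choice of a product as the combination function are assumptions for illustration, not JUST’s actual implementation.

    from itertools import product
    from math import prod

    def k_most_likely_worlds(props, statements, constraints, k, combine=prod):
        """Brute-force sketch of the 'explicit' approach.

        props:       proposition names, e.g. ["p", "q"]
        statements:  (predicate, belief) pairs; predicate maps a world
                     (dict of proposition -> bool) to True/False, belief
                     is the judge's probability that the statement holds
        constraints: predicates every admissible world must satisfy
        combine:     combination function over per-statement scores
        """
        scored = []
        for values in product([True, False], repeat=len(props)):
            world = dict(zip(props, values))
            if not all(c(world) for c in constraints):
                continue  # violates a case-specific integrity constraint
            # A world supporting a statement contributes the belief;
            # otherwise the complementary probability.
            scores = [b if sat(world) else 1 - b for sat, b in statements]
            scored.append((combine(scores), world))
        scored.sort(key=lambda t: t[0], reverse=True)
        return scored[:k]

    # Two propositions, one witness asserting p with belief 0.8,
    # and one constraint: p implies q.
    print(k_most_likely_worlds(
        ["p", "q"],
        statements=[(lambda w: w["p"], 0.8)],
        constraints=[lambda w: (not w["p"]) or w["q"]],
        k=2,
    ))

The implicit algorithm mentioned in the abstract avoids this exponential enumeration; the sketch only pins down what is being computed.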

____

Speaker: Charlotte Tschider

Title: The Importance of “Humans Outside the Loop”

Abstract: Artificial Intelligence is not all artificial. After all, despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step of creating AI. From data selection, decisional design, training, testing, and tuning to managing AI’s development as it is used in the human world, humans exert agency and control over these choices and practices. AI is now ubiquitous: it is part of every sector and, for most people, their everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to actually remedy any wrongs. From the perspectives of both businesses licensing AI and AI users, this paper identifies key impediments to legal recovery and proposes an alternative regulatory scheme that reframes liability from injecting a human in the loop to focusing on the actions of humans outside the loop.

____

Speaker: Ermin Wei

Title: Incentivized Federated Learning and Unlearning

Abstract: To protect users’ right to be forgotten in federated learning, federated unlearning aims to eliminate the impact of leaving users’ data on the global learned model. Current research in federated unlearning has mainly concentrated on developing effective and efficient unlearning techniques. However, the issue of incentivizing valuable users to remain engaged and preventing their data from being unlearned is still under-explored, though it is important to the unlearned model’s performance. This work focuses on the incentive issue and develops an incentive mechanism for federated learning and unlearning. We first characterize the leaving users’ impact on the global model accuracy and the required communication rounds for unlearning. Building on these results, we propose a four-stage game to capture the interaction and information updates during the learning and unlearning process. A key contribution is to summarize users’ multi-dimensional private information into one-dimensional metrics to guide the incentive design. We further investigate whether allowing federated unlearning is beneficial to the server and users, compared to a scenario without unlearning. Interestingly, users usually have a larger total payoff in the scenario with higher costs, due to the server’s excess incentives under information asymmetry. The numerical results demonstrate the necessity of unlearning incentives for retaining valuable leaving users, and also show that our proposed mechanisms decrease the server’s cost by up to 53.91% compared to state-of-the-art benchmarks. This is joint work with Ningning Ding and Randy Berry.
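
For readers new to the setting, here is a minimal Python sketch (invented for illustration, not the mechanism from the talk) of the idealized goal: when the “model” is just an average of user updates, a leaving user’s impact can be removed exactly by re-aggregating without their contribution.

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy per-user updates; stand-ins for locally trained model deltas.
    updates = {f"user{i}": rng.normal(size=3) for i in range(5)}

    def aggregate(upd):
        # FedAvg-style toy aggregation: plain average of user updates
        return np.mean(list(upd.values()), axis=0)

    def unlearn(upd, leaving_user):
        # Exact unlearning for an averaged model: drop the contribution
        return aggregate({u: v for u, v in upd.items() if u != leaving_user})

    print(aggregate(updates))         # global model with everyone
    print(unlearn(updates, "user3"))  # model as if user3 never participated

In realistic models a user’s influence is entangled with everyone else’s through training, which is what makes unlearning costly in communication rounds and makes incentives for valuable users to stay worth designing.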

____

Speaker: Chenhao Zhang

Title: Regulation of Algorithmic Collusion

Abstract: Algorithms have been widely used to price goods and services. However, there is a growing concern that the adoption of algorithmic pricing facilitates price collusion and hinders market competition. Several recent papers have shown that some configurations of certain algorithms, when in competition with each other, can find and sustain super-competitive prices. We give a framework for algorithms to collect data that proves their non-collusion and for regulators to audit non-collusion. We instantiate the framework on the repeated price competition model.
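
As a toy illustration of the phenomenon being regulated (not the auditing framework from the talk), the following Python sketch simulates a repeated Bertrand-style duopoly in which two simple reward-punishment pricing heuristics ratchet prices above the competitive level, and a crude audit flags the sustained markup. The price grid, heuristic, and audit threshold are all invented assumptions.

    PRICES = range(1, 6)   # discrete price grid; competitive price is 1
    COMPETITIVE = 1

    def next_price(mine, rival):
        # Reward-punishment heuristic: raise after the rival matched or
        # exceeded us; undercut to punish if the rival undercut us.
        if rival >= mine:
            return min(mine + 1, max(PRICES))
        return max(rival - 1, min(PRICES))

    p1 = p2 = min(PRICES)
    history = []
    for _ in range(100):
        p1, p2 = next_price(p1, p2), next_price(p2, p1)
        history.append((p1, p2))

    # Crude audit: sustained average price well above the competitive level.
    avg = sum(a + b for a, b in history[-50:]) / 100
    verdict = "flag for review" if avg > 1.5 * COMPETITIVE else "looks competitive"
    print(f"late average price {avg:.2f} vs competitive {COMPETITIVE}: {verdict}")

A real audit in the spirit of the abstract would instead have the pricing algorithms collect data that affirmatively proves non-collusion, rather than inferring collusion from price levels alone.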
