When & Where:
- Tuesday, October 3rd, 2023, 7:30 PT / 10:30 ET / 16:30 CET
- Online, via Zoom. The registration form is available here.
- Linjun Zhang, Rutgers University: “Fair conformal prediction”
Abstract: Multi-calibration is a powerful and evolving concept originating in the field of algorithmic fairness. For a predictor f that estimates the outcome y given covariates x, and for a function class C, multi-calibration requires that the predictor f(x) and outcome y be indistinguishable under the class of auditors in C. Fairness is captured by incorporating demographic subgroups into the class of functions C. Recent work has shown that, by enriching the class C to incorporate appropriate propensity reweighting functions, multi-calibration also yields target-independent learning, wherein a model trained on a source domain performs well on unseen, future target domains (approximately) captured by the reweightings. This talk extends the multi-calibration notion and explores the power of an enriched class of mappings. It proposes HappyMap, a generalization of multi-calibration that yields a wide range of new applications, including a new fairness notion for uncertainty quantification (conformal prediction), a novel technique for conformal prediction under covariate shift, and a different approach to analyzing missing data, while also yielding a unified understanding of several existing, seemingly disparate algorithmic fairness notions and target-independent learning approaches. A single HappyMap meta-algorithm captures all these results, together with a sufficiency condition for its success.
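As background for this talk, standard (unweighted) split conformal prediction can be sketched in a few lines. The toy model and data below are illustrative assumptions, not the HappyMap method or its covariate-shift extension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + Gaussian noise
n = 2000
x = rng.uniform(0, 1, n)
y = 2 * x + rng.normal(0, 0.1, n)

# Split the data: one half to fit the predictor f, one half to calibrate
fit_x, fit_y = x[:n // 2], y[:n // 2]
cal_x, cal_y = x[n // 2:], y[n // 2:]

# Stand-in predictor f: least-squares slope through the origin
slope = np.dot(fit_x, fit_y) / np.dot(fit_x, fit_x)
predict = lambda t: slope * t

# Conformity scores on the calibration set: absolute residuals
scores = np.abs(cal_y - predict(cal_x))

# Score quantile with the finite-sample correction, for 90% target coverage
alpha = 0.1
m = len(scores)
q = np.quantile(scores, np.ceil((m + 1) * (1 - alpha)) / m)

# Prediction interval for a new point: f(x_new) +/- q
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

Under exchangeability of calibration and test points, the interval [lo, hi] covers the true outcome with probability at least 1 - alpha; covariate shift breaks exchangeability, which is where reweighting-based variants come in.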
- Mikhail Yurochkin, IBM Research and MIT-IBM Watson AI Lab: “Operationalizing Individual Fairness”
Abstract: Societal applications of ML have proved challenging because algorithms replicate or even exacerbate biases in the training data. In response, a growing body of research on algorithmic fairness attempts to address these issues, primarily via group definitions of fairness. In this talk, I will illustrate several shortcomings of group fairness and present an algorithmic fairness pipeline based on individual fairness (IF). IF is often recognized as the more intuitive notion of fairness: we want ML models to treat similar individuals similarly. Despite its benefits, challenges in formalizing the notion of similarity and enforcing equitable treatment have prevented the adoption of IF. I will present our work addressing these barriers via algorithms for learning the similarity metric from data, and via methods for auditing and training fair models that exploit the intriguing connection between individual fairness and adversarial robustness. Finally, I will demonstrate applications of IF with Large Language Models.
Discussant: Razieh Nabi, Emory University
The YoungStatS project of the Young Statisticians Europe initiative (FENStatS) is supported by the Bernoulli Society for Mathematical Statistics and Probability and the Institute of Mathematical Statistics (IMS).
If you missed this webinar, you can watch the recording on our YouTube channel.