Publications
You can also find my papers on my Google Scholar profile.
Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models
Sanae Lotfi*, Yilun Kuang*, Brandon Amos, Micah Goldblum, Marc Finzi, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2024
Spotlight Presentation
ICML Workshop on Theoretical Foundations of Foundation Models, 2024
Best Paper Award & Oral Presentation
[arxiv]
Non-Vacuous Generalization Bounds for Large Language Models
Sanae Lotfi*, Marc Finzi*, Yilun Kuang*, Tim G. J. Rudner, Micah Goldblum, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2024
[arxiv, code]
Bayesian Model Selection, the Marginal Likelihood, and Generalization (Extended Paper)
Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
Journal of Machine Learning Research (JMLR), 2023
Best Papers Track
[arxiv, code]
Mitigating Augmentation Bias with Input-Dependent Distributions over Augmentations
Sanae Lotfi, Tim G. J. Rudner, Brandon Amos, Andrew Gordon Wilson
Under review; preprint coming soon to arXiv.
PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization
Sanae Lotfi*, Marc Finzi*, Sanyam Kapoor*, Andres Potapczynski*, Micah Goldblum, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2022
[arxiv, code]
Bayesian Model Selection, the Marginal Likelihood, and Generalization
Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2022
Long Oral Presentation, top 2% of submissions
Outstanding Paper Award
[arxiv, code, poster, talk, slides]
Dangers of Bayesian Model Averaging under Covariate Shift
Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2021
[arxiv, code, poster]
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2021
Spotlight Presentation
[arxiv, code, slides]
Evaluating Approximate Inference in Bayesian Deep Learning
Andrew Gordon Wilson, Sanae Lotfi, Sharad Vikram, Matthew D. Hoffman, Yarin Gal, Yingzhen Li, Melanie F. Pradier, Andrew Foong, Sebastian Farquhar, Pavel Izmailov
NeurIPS Competition and Demonstration Track, Proceedings of Machine Learning Research (PMLR), 2021
[pmlr, code, website]
Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning
Sanae Lotfi, Tiphaine Bonniot de Ruisselet, Dominique Orban, Andrea Lodi
Annual Conference on Machine Learning, Optimization, and Data Science (LOD)
Oral Presentation
[arxiv]
Stochastic Damped L-BFGS with Controlled Norm of the Hessian Approximation
Sanae Lotfi, Tiphaine B. de Ruisselet, Dominique Orban, Andrea Lodi
SIAM Conference on Optimization, 2021
Oral Presentation
NeurIPS Optimization for Machine Learning Workshop, 2020
Spotlight Presentation
[arxiv]
Stochastic First and Second Order Optimization Methods for Machine Learning
Sanae Lotfi
Master's Thesis, 2020
Best Thesis Award in Applied Mathematics at Polytechnique Montreal
Polytechnique Montreal
* denotes equal contribution.
Surveys
Understanding the Generalization of Deep Neural Networks through PAC-Bayes Bounds
Andres Potapczynski, Sanae Lotfi, Anthony Chen, Chris Ick
Mathematics of Deep Learning, CS-GA 3033, Spring 2022
Causal Representation Learning
Sanae Lotfi, Taro Makino, Lily Zhang
Inference and Representation, DS-GA 1005, Fall 2021