PhD student at New York University
I am a PhD student at the Center for Data Science at NYU, advised by Professor Andrew Gordon Wilson, and a Visiting Researcher in the Fundamental AI Research (FAIR) group at Meta AI, where I work with Brandon Amos. I am currently interested in designing robust models that generalize well both in-distribution and out-of-distribution, alongside the closely related question of understanding and quantifying the generalization properties of deep neural networks.
My PhD research has been recognized with an ICML Outstanding Paper Award and is generously supported by the Microsoft Research PhD Fellowship, the DeepMind Fellowship, the Meta AI Mentorship Program, and the NYU Center for Data Science Fellowship!
Prior to NYU, I obtained a master’s degree in applied mathematics from Polytechnique Montreal, where I was fortunate to work with Professors Andrea Lodi and Dominique Orban on designing stochastic first- and second-order algorithms with compelling theoretical and empirical properties for machine learning and large-scale optimization. I received the Best Master’s Thesis Award for this work. I also hold a master’s degree in general engineering and applied mathematics from CentraleSupélec.
In summer 2022, I was fortunate to work with Bernie Wang and Richard Kurle at Amazon as an Applied Scientist Intern.
You can contact me at sl8160@nyu.edu
CV, Google Scholar, LinkedIn, Twitter, GitHub
PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization
Sanae Lotfi*, Marc Finzi*, Sanyam Kapoor*, Andres Potapczynski*, Micah Goldblum, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2022
Bayesian Model Selection, the Marginal Likelihood, and Generalization
Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2022
Long oral presentation, top 2% of submissions
Outstanding Paper Award
[arxiv, code, poster, talk, slides]
Evaluating Approximate Inference in Bayesian Deep Learning
Andrew Gordon Wilson, Sanae Lotfi, Sharad Vikram, Matthew D. Hoffman, Yarin Gal, Yingzhen Li, Melanie F. Pradier, Andrew Foong, Sebastian Farquhar, Pavel Izmailov
NeurIPS Competition and Demonstration Track, Proceedings of Machine Learning Research (PMLR), 2022
[pmlr, code, website]
Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning
Sanae Lotfi, Tiphaine Bonniot de Ruisselet, Dominique Orban, Andrea Lodi
Annual Conference on Machine Learning, Optimization, and Data Science (LOD), 2022
Oral presentation
[arxiv]
Dangers of Bayesian Model Averaging under Covariate Shift
Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2021
[arxiv, code, poster]
Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2021
Spotlight presentation
[arxiv, code, slides]
Stochastic Damped L-BFGS with Controlled Norm of the Hessian Approximation
Sanae Lotfi, Tiphaine Bonniot de Ruisselet, Dominique Orban, Andrea Lodi
SIAM Conference on Optimization, 2021
Oral presentation
NeurIPS Optimization for Machine Learning Workshop, 2020
Spotlight presentation
[arxiv]
Stochastic First and Second Order Optimization Methods for Machine Learning
Sanae Lotfi
Master’s Thesis, Polytechnique Montreal, 2020
Best Thesis Award in Applied Mathematics
*: Equal first authorship.
Are the Marginal Likelihood and PAC-Bayes Bounds the Right Proxies for Generalization?
Bayesian Model Selection, the Marginal Likelihood, and Generalization
Robustness of Deep Learning Models to Distribution Shift
Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning
Understanding the Generalization of Deep Neural Networks through PAC-Bayes Bounds
Andres Potapczynski, Sanae Lotfi, Anthony Chen, Chris Ick
Class Project for the Mathematics of Deep Learning, CS-GA 3033, Spring 2022
Causal Representation Learning
Sanae Lotfi, Taro Makino, Lily Zhang
Class Project for Inference and Representation, DS-GA 1005, Fall 2021