Sanae Lotfi

PhD student at New York University

I am a PhD student at the Center for Data Science at NYU and a DeepMind fellow, advised by Professor Andrew Gordon Wilson. I am currently interested in designing robust models that can generalize well in and out of distribution. I also work on the closely related question of understanding and quantifying the generalization properties of deep neural networks. More broadly, my research interests include out-of-distribution generalization, Bayesian learning, probabilistic modeling, large-scale optimization, and loss surface analysis.

Prior to NYU, I obtained a master’s degree in applied mathematics from Polytechnique Montreal, where I was fortunate to work with Professors Andrea Lodi and Dominique Orban on the design of stochastic first- and second-order algorithms with compelling theoretical and empirical properties for machine learning and large-scale optimization. I received the Best Master’s Thesis Award in Applied Mathematics at Polytechnique Montreal for this work. I also hold an engineering degree in general engineering and applied mathematics from CentraleSupélec.

In summer 2022, I am excited to work with Bernie Wang and Richard Kurle at Amazon as an Applied Scientist Intern.


You can contact me at sl8160@nyu.edu.

CV, Google Scholar, LinkedIn, Twitter, GitHub


Publications

Bayesian Model Selection, the Marginal Likelihood, and Generalization
Sanae Lotfi, Pavel Izmailov, Gregory Benton, Micah Goldblum, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2022
Long oral presentation, top 2% of submissions
[arxiv, code]

Adaptive First- and Second-Order Algorithms for Large-Scale Machine Learning
Sanae Lotfi, Tiphaine Bonniot de Ruisselet, Dominique Orban, Andrea Lodi
Annual Conference on Machine Learning, Optimization, and Data Science (LOD), 2022
Oral presentation
[arxiv]

Dangers of Bayesian Model Averaging under Covariate Shift
Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew Gordon Wilson
Neural Information Processing Systems (NeurIPS), 2021
[arxiv, code]

Loss Surface Simplexes for Mode Connecting Volumes and Fast Ensembling
Gregory W. Benton, Wesley J. Maddox, Sanae Lotfi, Andrew Gordon Wilson
International Conference on Machine Learning (ICML), 2021
Spotlight presentation
[arxiv, code]

Stochastic Damped L-BFGS with Controlled Norm of the Hessian Approximation
Sanae Lotfi, Tiphaine Bonniot de Ruisselet, Dominique Orban, Andrea Lodi
SIAM Conference on Optimization, 2021
Oral presentation
NeurIPS Optimization for Machine Learning Workshop, 2020
Spotlight presentation
[arxiv]

Stochastic First and Second Order Optimization Methods for Machine Learning
Sanae Lotfi
Master’s Thesis, Polytechnique Montreal, 2020
Best Master’s Thesis Award in Applied Mathematics


Surveys

Understanding the Generalization of Deep Neural Networks through PAC-Bayes Bounds
Andres Potapczynski, Sanae Lotfi, Anthony Chen, Chris Ick
Class Project for the Mathematics of Deep Learning, CS-GA 3033, Spring 2022

Causal Representation Learning
Sanae Lotfi, Taro Makino, Lily Zhang
Class Project for Inference and Representation, DS-GA 1005, Fall 2021