I am a research scientist at Google Research, New York City. I received my PhD from the University of Illinois Urbana-Champaign, advised by Matus Telgarsky, and my bachelor's degree in Computer Science from the ACM Class at Shanghai Jiao Tong University. I am interested in machine learning and optimization, particularly in deep learning theory.
Email: ziweiji (at) google (dot) com
Publications

Reproducibility in Optimization: Theoretical Framework and Limits
Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir.
NeurIPS 2022, Oral.
Agnostic Learnability of Halfspaces via Logistic Loss
Ziwei Ji, Kwangjun Ahn, Pranjal Awasthi, Satyen Kale, Stefani Karp.
ICML 2022, Long Presentation.
Actor-critic is implicitly biased towards high entropy optimal policies
Yuzheng Hu, Ziwei Ji, Matus Telgarsky.
ICLR 2022.
Early-stopped neural networks are consistent
Ziwei Ji, Justin D. Li, Matus Telgarsky.
NeurIPS 2021, Spotlight.
Fast Margin Maximization via Dual Acceleration
Ziwei Ji, Nathan Srebro, Matus Telgarsky.
ICML 2021.
Generalization bounds via distillation
Daniel Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang.
ICLR 2021, Spotlight.
Characterizing the implicit bias via a primal-dual analysis
Ziwei Ji, Matus Telgarsky.
ALT 2021.
Directional convergence and alignment in deep learning
Ziwei Ji, Matus Telgarsky.
NeurIPS 2020, Spotlight.
Gradient descent follows the regularization path for general losses
Ziwei Ji, Miroslav Dudík, Robert E. Schapire, Matus Telgarsky.
COLT 2020.
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
Ziwei Ji, Matus Telgarsky.
ICLR 2020.
Neural tangent kernels, transportation mappings, and universal approximation
Ziwei Ji, Matus Telgarsky, Ruicheng Xian.
ICLR 2020.
Risk and parameter convergence of logistic regression
Ziwei Ji, Matus Telgarsky.
COLT 2019, under the title “The implicit bias of gradient descent on nonseparable data”.
Gradient descent aligns the layers of deep linear networks
Ziwei Ji, Matus Telgarsky.
ICLR 2019.
Social Welfare and Profit Maximization from Revealed Preferences
Ziwei Ji, Ruta Mehta, Matus Telgarsky.
WINE 2018.
Talks

Agnostic Learnability of Halfspaces via Logistic Loss
Seminar in Statistical Learning, Dynamical Systems and Probability, AI Institute in Leipzig, October 2022.
The dual of the margin: improved analyses and rates for gradient descent’s implicit bias
One World Seminar Series on the Mathematics of Machine Learning, December 2020.
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
Illinois Institute for Data Science and Dynamical Systems (IDS2) Seminar Series, April 2020;
14th Annual Machine Learning Symposium, The New York Academy of Sciences, March 2020;
15th CSL Student Conference, February 2020.
Teaching

Teaching assistant for UIUC CS 598 Deep Learning Theory (Fall 2020, Fall 2021).
Teaching assistant for UIUC CS 446 Machine Learning (Spring 2019).
Service

Reviewer for NeurIPS, ICLR, COLT, ICML, EC, ITCS, and IEEE Transactions on Information Theory.