
Kaizheng Wang


Research Interests

Statistics, Unsupervised Learning (Clustering, Representation Learning, Latent Variable Models), Semi-Supervised Learning, Non-Convex Optimization, Distributed Optimization

Kaizheng Wang works at the intersection of optimization, machine learning, and statistics. He develops and studies scalable algorithms for analyzing massive datasets that are unstructured, incomplete, and heterogeneous. These methodologies find wide application in signal processing, network analysis, recommendation systems, and distributed computing, among other areas. He is also interested in uncertainty quantification and robustness certification for complex systems.

A main focus of Wang’s research is the efficient extraction of key structures from high-dimensional data, which greatly reduces complexity and enhances interpretability. This includes dimensionality reduction, representation learning, clustering, ranking, and other weakly supervised machine learning problems in which labeled data are scarce and difficult to obtain. He leverages cutting-edge tools from optimization, statistics, and related fields to design principled approaches that reliably produce high-quality solutions.
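As a toy illustration of this kind of structure extraction (a generic sketch, not drawn from Wang’s publications), one can reduce noisy high-dimensional data to a few principal components and then cluster in the reduced space:

```python
# Illustrative pipeline: PCA for dimensionality reduction, then k-means
# clustering in the low-dimensional space. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two well-separated cluster centers embedded in 100 dimensions.
centers = np.zeros((2, 100))
centers[0, 0], centers[1, 0] = -5.0, 5.0
labels_true = rng.integers(0, 2, size=200)
X = centers[labels_true] + rng.normal(scale=1.0, size=(200, 100))

# Project onto the top 2 principal components, then run k-means.
Z = PCA(n_components=2).fit_transform(X)
labels_pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

# Agreement with the ground truth, up to a permutation of cluster labels.
acc = max(np.mean(labels_pred == labels_true),
          np.mean(labels_pred != labels_true))
print(f"clustering accuracy: {acc:.2f}")
```

The reduction step discards noise directions, so the clustering problem becomes far easier in the 2-dimensional space than in the original 100 dimensions.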

Before coming to Columbia University, Wang received his PhD in Operations Research and Financial Engineering from Princeton University in 2020 and his BS in Mathematics from Peking University in 2015.

Academic Appointments

  • Assistant Professor of Industrial Engineering and Operations Research, Columbia University, 2020–

Honors & Awards

  • Harold W. Dodds Fellowship, Princeton University, 2019

Selected Publications

Abbe, E., Fan, J., Wang, K., & Zhong, Y. Entrywise eigenvector analysis of random matrices with low expected rank. The Annals of Statistics, 2020+.

Ma, C., Wang, K., Chi, Y., & Chen, Y. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion and blind deconvolution. Foundations of Computational Mathematics 20: 451–632, 2020.

Fan, J., Wang, D., Wang, K., & Zhu, Z. Distributed estimation of principal eigenspaces. The Annals of Statistics 47 (6): 3009–3031, 2019.

Chen, Y., Fan, J., Ma, C., & Wang, K. Spectral method and regularized MLE are both optimal for Top-K ranking. The Annals of Statistics 47 (4): 2204–2235, 2019.