Bio: I am currently a tenure-track faculty member at the CISPA Helmholtz Center for Information Security. Before that, I obtained my Ph.D. degree in 2022 from the Department of Computer Science at the University of Virginia, advised by Prof. David Evans. I received my M.S. degree from the Department of Statistics at the University of Virginia in 2017 and my B.S. degree in Mathematics from Tsinghua University in 2015. I am also a member of the European Laboratory for Learning and Intelligent Systems (ELLIS).

Research Interests: My research spans machine learning and security, including trustworthy machine learning, statistical machine learning, convex and non-convex optimization, and deep learning. Recently, I have focused on understanding how machine learning models misbehave under different adversaries and on designing robust systems for various machine learning applications.

Open Positions: I am looking for self-motivated students interested in trustworthy machine learning to start in 2024, including PhD students, research assistants, interns, and visiting students. Check Open Positions for more details.

(2023). What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners? NeurIPS 2023.

(2023). Provably Robust Cost-Sensitive Learning via Randomized Smoothing. ArXiv.

(2023). Transferable Availability Poisoning Attacks. ArXiv.

(2023). Generating Less Certain Adversarial Examples Improves Robust Generalization. ArXiv.

(2021). Incorporating Label Uncertainty in Intrinsic Robustness Measures. ICLR 2021 Workshop on Security and Safety in Machine Learning Systems.
