Bio: I am currently a tenure-track faculty member at the CISPA Helmholtz Center for Information Security. Before that, I obtained my Ph.D. degree in Computer Science from the University of Virginia in 2022, advised by Prof. David Evans. I received my M.S. degree from the Department of Statistics at the University of Virginia in 2017 and my B.S. degree in Mathematics from Tsinghua University in 2015. I am also a member of the European Laboratory for Learning and Intelligent Systems (ELLIS).
Research Interests: My research covers various topics in machine learning and security, including trustworthy machine learning, statistical machine learning, convex/non-convex optimization, and deep learning. My recent work focuses on understanding how machine learning models misbehave under different adversaries and on designing robust systems for a variety of machine learning applications.
Open Positions: I am looking for self-motivated students interested in trustworthy machine learning, including Ph.D. students, research assistants, and intern/visiting students. Check Open Positions for more details.
We characterize the inherent vulnerabilities of linear learners to indiscriminate data poisoning attacks by studying the optimal poisoning strategy from the perspective of the underlying data distribution.
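As a rough, self-contained illustration of this threat model (not the optimal distribution-level attack studied in the work), the toy Python sketch below injects a small fraction of heuristically crafted poisoning points into the training set of a logistic-regression learner and compares clean test accuracy before and after; the 10% budget and the low-margin/label-flip heuristic are illustrative assumptions.

```python
# Toy illustration of indiscriminate data poisoning against a linear learner.
# This is NOT the optimal distribution-level strategy studied in the work;
# it injects a small fraction of adversarially labeled points and compares
# the clean test accuracy of a logistic-regression model before and after.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean binary classification task (synthetic stand-in data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

def test_accuracy(X_tr, y_tr):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_test, y_test)

# Poisoning budget: 10% of the training set (an illustrative assumption).
n_poison = int(0.1 * len(X_train))

# Heuristic poison points: copy low-margin training points and flip their labels.
clean_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
margins = np.abs(clean_clf.decision_function(X_train))
idx = np.argsort(margins)[:n_poison]          # hardest clean points
X_poison = X_train[idx] + 0.1 * rng.standard_normal((n_poison, X_train.shape[1]))
y_poison = 1 - y_train[idx]                   # flipped labels

X_mix = np.vstack([X_train, X_poison])
y_mix = np.concatenate([y_train, y_poison])

print(f"clean-data accuracy:    {test_accuracy(X_train, y_train):.3f}")
print(f"poisoned-data accuracy: {test_accuracy(X_mix, y_mix):.3f}")
```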
Building on a novel definition of label uncertainty, we develop an empirical method to estimate a more realistic intrinsic robustness limit for image classification tasks.
We develop a method to measure the concentration of image benchmark distributions from empirical samples, and show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
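The sketch below is a heavily simplified, hypothetical version of empirical concentration estimation, not the estimator from the work itself: it fixes one candidate error region of measure roughly alpha and estimates the measure of its eps-expansion from samples, whereas the actual method searches over structured families of sets; the Gaussian stand-in data and the alpha/eps values are illustrative assumptions.

```python
# Rough sketch of estimating concentration of measure from empirical samples.
# A real estimator optimizes over structured families of candidate sets;
# here we fix one simple candidate error region E and estimate the empirical
# measure of its eps-expansion with a nearest-neighbor query.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n, d = 5000, 32
samples = rng.standard_normal((n, d))        # stand-in for flattened image features

alpha, eps = 0.05, 1.0                       # error-region measure and perturbation radius

# Candidate error region E: the alpha-fraction of samples farthest from the mean.
# (Any set of measure ~alpha could be used; a real estimator optimizes this choice.)
dist_to_mean = np.linalg.norm(samples - samples.mean(axis=0), axis=1)
E_mask = dist_to_mean >= np.quantile(dist_to_mean, 1 - alpha)
E = samples[E_mask]

# Empirical measure of the eps-expansion of E: fraction of all samples whose
# nearest neighbor inside E lies within distance eps.
nn = NearestNeighbors(n_neighbors=1).fit(E)
d_to_E, _ = nn.kneighbors(samples)
expansion = float(np.mean(d_to_E[:, 0] <= eps))

print(f"empirical measure of E:             {E_mask.mean():.3f}")
print(f"empirical measure of eps-expansion: {expansion:.3f}")
# If even the best choice of E expanded to measure close to 1, concentration alone
# would force any classifier with alpha test error to suffer high adversarial risk.
```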
We propose a notion of cost-sensitive robustness for measuring a classifier's performance when different adversarial transformations are not equally harmful, and provide a certified robust training method that optimizes for it.
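The snippet below is a minimal sketch of the underlying idea, not the paper's certified training objective: it assumes some external certification procedure supplies lower bounds on adversarial logit margins (the `margin_lb` input is hypothetical) and weights uncertified class transitions by a user-specified cost matrix.

```python
# Minimal sketch of a cost-sensitive robust training loss (hypothetical form).
# `margin_lb[i, j]` is assumed to be a lower bound, from any certification
# procedure (e.g., interval bound propagation), on the logit margin z_y - z_j
# of example i under the allowed perturbations.
import torch
import torch.nn.functional as F

def cost_sensitive_robust_loss(logits, margin_lb, labels, cost_matrix, lam=1.0):
    """
    logits:      (B, K) clean logits of the model
    margin_lb:   (B, K) certified lower bounds on z_y - z_j under perturbation
                 (the j = y entry is nullified by the zero-diagonal cost matrix)
    labels:      (B,)   ground-truth classes
    cost_matrix: (K, K) cost_matrix[y, j] = cost of an adversarial y -> j error
    """
    clean_loss = F.cross_entropy(logits, labels)

    # Per-example costs of the transitions an adversary could still realize:
    # a negative margin lower bound means class j is not certified safe.
    costs = cost_matrix[labels]                       # (B, K)
    violation = F.relu(-margin_lb)                    # > 0 where not certified
    robust_penalty = (costs * violation).sum(dim=1).mean()

    return clean_loss + lam * robust_penalty

# Example: 3 classes where only adversarial errors into class 0 are costly.
K = 3
cost_matrix = torch.zeros(K, K)
cost_matrix[:, 0] = 1.0
cost_matrix.fill_diagonal_(0.0)

logits = torch.randn(4, K)
margin_lb = torch.randn(4, K)
labels = torch.tensor([1, 2, 0, 1])
print(cost_sensitive_robust_loss(logits, margin_lb, labels, cost_matrix))
```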