Bio: I am a tenure-track faculty member at the CISPA Helmholtz Center for Information Security. Before that, I obtained my Ph.D. degree from the Department of Computer Science at the University of Virginia in 2022, advised by Prof. David Evans. I received my M.S. degree from the Department of Statistics at the University of Virginia in 2017 and my B.S. degree in Mathematics and Applied Mathematics from Tsinghua University in 2015. I am also a member of the European Laboratory for Learning and Intelligent Systems.
Research Interests: My research covers various topics in machine learning and security, including trustworthy machine learning, statistical machine learning, convex/non-convex optimization, and deep learning. Recently, I have focused on understanding how machine learning models misbehave under different adversaries and on designing robust systems for various machine learning applications.
Open Positions: I am looking for self-motivated students who are interested in trustworthy machine learning, including PhD students, research assistants, interns, and visiting students. Check Open Positions for more details.
We highlight the importance of using dynamic face recognition (FR) strategies to evaluate anti-facial-recognition (AFR) methods, and propose DivTrackee as a promising countermeasure.
We show that prior claims that black-box access suffices for optimal membership inference do not hold in most practical settings, such as models trained with SGD, and validate our findings with a new white-box inference attack.
We characterize the inherent vulnerability of linear learners to indiscriminate data poisoning attacks by studying the optimal poisoning strategy from the perspective of the data distribution.
Building on a novel definition of label uncertainty, we develop an empirical method to estimate a more realistic intrinsic robustness limit for image classification tasks.