Xiao Zhang's Personal Website
Adversarial Examples
Understanding Intrinsic Robustness using Label Uncertainty
Built upon a novel definition of label uncertainty, we develop an empirical method to estimate a more realistic intrinsic robustness limit for classification tasks.
Xiao Zhang
,
David Evans
PDF
Cite
ArXiv
OpenReview
Incorporating Label Uncertainty in Intrinsic Robustness Measures
We advocate for understanding the concentration of measure phenomenon with respect to input regions with high label uncertainty.
Xiao Zhang
,
David Evans
PDF
Poster
Link
Improved Estimation of Concentration Under Lp-Norm Distance Metric Using Half Spaces
Using a novel method for empirically estimating concentration, we show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Jack Prescott
,
Xiao Zhang
,
David Evans
PDF
Cite
Code
ArXiv
OpenReview
Post
Understanding the intrinsic robustness of image distributions using conditional generative models
We propose a way to characterize the intrinsic robustness of image distributions under L2 perturbations using conditional generative models.
Xiao Zhang
,
Jinghui Chen
,
Quanquan Gu
,
David Evans
PDF
Cite
Code
Slides
ArXiv
Link
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
We propose an unsupervised learning method for obtaining robust representations based on a notion of representation vulnerability.
Sicheng Zhu
,
Xiao Zhang
,
David Evans
PDF
Cite
Code
ArXiv
Link
Post
Empirically Measuring Concentration: Fundamental Limits to Intrinsic Robustness
We develop a method to measure the concentration of image benchmarks using empirical samples and show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Saeed Mahloujifar
,
Xiao Zhang
,
Mohammad Mahmoody
,
David Evans
PDF
Cite
Code
Poster
ArXiv
Post
Cost-Sensitive Robustness against Adversarial Examples
We propose a notion of cost-sensitive robustness for measuring a classifier's performance when adversarial transformations are not equally important, and provide a certified robust training method to optimize for it.
Xiao Zhang
,
David Evans
PDF
Cite
Code
Poster
ArXiv
OpenReview
Post