Xiao Zhang's Homepage
Adversarial Examples
Generating Less Certain Adversarial Examples Improves Robust Generalization
Building upon the notion of adversarial certainty, we develop a general training method that generates adversarial examples with reduced certainty, improving robust generalization.
Minxing Zhang, Michael Backes, Xiao Zhang
PDF · Cite · Code · ArXiv · OpenReview
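A rough illustration of the idea (not the authors' algorithm): the PGD-style sketch below augments the usual attack objective with a certainty term. The use of maximum softmax probability as a stand-in for adversarial certainty, and the trade-off weight lam, are illustrative assumptions.

import torch
import torch.nn.functional as F

def less_certain_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10, lam=0.5):
    # Ascend cross-entropy (toward misclassification) while descending a
    # certainty proxy (max softmax probability); lam trades off the two.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        logits = model(x + delta)
        ce = F.cross_entropy(logits, y)
        certainty = F.softmax(logits, dim=1).max(dim=1).values.mean()
        grad = torch.autograd.grad(ce - lam * certainty, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()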
Understanding Adversarially Robust Generalization via Weight-Curvature Index
We introduce the Weight-Curvature Index (WCI), a novel metric that captures the interplay between model parameters and loss landscape curvature to better understand and improve adversarially robust generalization in deep learning.
Yuelin Xu, Xiao Zhang
PDF · Cite · ArXiv · OpenReview
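The precise WCI definition is given in the paper; the sketch below only illustrates the two ingredients it couples, per-layer weight norms and loss-surface curvature, with the curvature estimated by a standard Hutchinson trace estimator. Pairing them as a tuple is an illustrative assumption, not the paper's formula.

import torch

def hutchinson_hessian_trace(loss, params, n_probes=8):
    # Estimate tr(H) via E[v^T H v] with Rademacher (+-1) probe vectors.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_probes):
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
        hv = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        trace += sum((v * h).sum() for v, h in zip(vs, hv)).item()
    return trace / n_probes

def weight_curvature_summary(loss, params):
    # Crude stand-in for the weight/curvature interplay the WCI captures.
    w_norm = sum(p.norm() ** 2 for p in params).item()
    return w_norm, hutchinson_hessian_trace(loss, params)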
Understanding Intrinsic Robustness using Label Uncertainty
Building on a novel definition of label uncertainty, we develop an empirical method to estimate a more realistic intrinsic robustness limit for image classification tasks.
Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · OpenReview
Incorporating Label Uncertainty in Intrinsic Robustness Measures
We advocate studying the concentration of measure phenomenon in input regions with high label uncertainty.
Xiao Zhang, David Evans
PDF · Code · Poster · Link
Improved Estimation of Concentration Under Lp-Norm Distance Metric Using Half Spaces
Using a novel method for empirical concentration estimation, we show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Jack Prescott, Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · OpenReview · Post
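For the standard Gaussian under L2, the half-space idea can be previewed in one line: by Gaussian isoperimetry, half-spaces have the smallest eps-expansion among sets of a given measure, so the expansion of a half-space whose measure matches a classifier's error rate lower-bounds that classifier's adversarial risk. A minimal sketch (the paper develops the method and its guarantees far more generally):

from scipy.stats import norm

def halfspace_expansion(err_rate, eps):
    # Gaussian measure of the eps-expansion (in L2) of a half-space of
    # measure err_rate; a lower bound on achievable adversarial risk.
    return norm.cdf(norm.ppf(err_rate) + eps)

# e.g., 1% benign error on N(0, I) with eps = 1.0:
# halfspace_expansion(0.01, 1.0) is about 0.09, i.e. adversarial risk >= 9%.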
Understanding the intrinsic robustness of image distributions using conditional generative models
We propose a way to characterize the intrinsic robustness of image distributions under L2 perturbations using conditional generative models.
Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
PDF · Cite · Code · Slides · ArXiv · Link
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
We propose an unsupervised learning method for obtaining robust representations based on a notion of representation vulnerability.
Sicheng Zhu, Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · Link · Post
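Estimating worst-case mutual information is the technical core of the paper and is not reproduced here; as a crude, hypothetical proxy, the sketch below measures how far an L-inf bounded perturbation can push an encoder's representation.

import torch

def representation_deviation(encoder, x, eps=8/255, alpha=2/255, steps=10):
    # PGD on the feature distance: a rough stand-in for representation
    # vulnerability, not the paper's mutual-information-based definition.
    with torch.no_grad():
        z_clean = encoder(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        dist = (encoder(x + delta) - z_clean).norm()
        grad = torch.autograd.grad(dist, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (encoder(x + delta) - z_clean).norm().item()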
Empirically Measuring Concentration: Fundamental Limits to Intrinsic Robustness
We develop a method to measure the concentration of image benchmarks using empirical samples and show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
PDF · Cite · Code · Poster · ArXiv · Post
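The measurement reduces to comparing the empirical mass of a candidate error region against the mass of its eps-expansion. The sketch below performs that counting step for a union-of-balls region; the centers and radius are assumed given (in the paper they come from a search over candidate regions).

import numpy as np

def empirical_expansion(samples, centers, radius, eps):
    # Distance from each sample to its nearest center; a point lies in the
    # eps-expansion of the union of balls iff that distance <= radius + eps.
    d = np.min(np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=-1), axis=1)
    return np.mean(d <= radius), np.mean(d <= radius + eps)

# A small gap between the two estimates, even for the best candidate region,
# indicates weak concentration, leaving room for robust classifiers to exist.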
Cost-Sensitive Robustness against Adversarial Examples
We propose a notion of cost-sensitive robustness for measuring a classifier's performance when adversarial transformations are not equally important, and provide a certified robust training method to optimize for it.
Xiao Zhang, David Evans
PDF · Cite · Code · Poster · ArXiv · OpenReview · Post
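As a sketch of the notion (not the paper's certified training objective, which bounds worst-case logits with a robustness certificate), one can weight adversarial class transformations by a task-specific cost matrix; cost[i, j], the cost of an adversary turning class i into prediction j, is an assumed input with zero diagonal.

import torch
import torch.nn.functional as F

def cost_sensitive_adv_loss(adv_logits, y, cost):
    # Hinge on the margin by which each target class competes with the
    # true class on adversarial inputs, weighted by the cost row cost[y].
    margins = adv_logits - adv_logits.gather(1, y[:, None])
    return (cost[y] * F.relu(margins)).sum(dim=1).mean()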