Xiao Zhang's Homepage
Publications
Provably Cost-Sensitive Adversarial Defense via Randomized Smoothing
We study how to certify and train for cost-sensitive robustness using randomized smoothing.
Yuan Xin, Dingfan Chen, Michael Backes, Xiao Zhang
PDF · Cite · ArXiv
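For context, here is a minimal sketch of the standard (cost-insensitive) L2 certification procedure from randomized smoothing, which the paper builds on. It assumes a hypothetical `base_clf` mapping a batch of inputs to integer labels; it is not the paper's cost-sensitive certificate, which adapts the guarantee to a task-specific cost matrix.

```python
import numpy as np
from scipy.stats import beta, norm

def certify_l2(base_clf, x, sigma=0.25, n=1000, alpha=0.001):
    """Monte Carlo certificate for the smoothed classifier
    g(x) = argmax_c P(base_clf(x + N(0, sigma^2 I)) = c)."""
    noise = sigma * np.random.randn(n, *x.shape)
    preds = base_clf(x[None, ...] + noise)    # base predictions under noise
    top = np.bincount(preds).argmax()         # empirical majority class
    count = int((preds == top).sum())
    # One-sided Clopper-Pearson lower bound on P(base_clf(x + noise) = top)
    p_lower = beta.ppf(alpha, count, n - count + 1)
    if p_lower <= 0.5:
        return None, 0.0                      # abstain: no certificate
    return top, sigma * norm.ppf(p_lower)     # certified L2 radius
```

The certified radius sigma * Phi^{-1}(p_lower) holds uniformly over all label pairs; a cost-sensitive variant would instead certify only the transformations the cost matrix marks as harmful.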
DivTrackee versus DynTracker: Promoting Diversity in Anti-Facial Recognition against Dynamic FR Strategy
We highlight the importance of evaluating anti-facial recognition (AFR) methods against dynamic FR strategies, and propose DivTrackee as a promising countermeasure.
Wenshu Fan, Minxing Zhang, Hongwei Li, Wenbo Jiang, Hanxiao Chen, Xiangyu Yue, Michael Backes, Xiao Zhang
PDF · Cite · ArXiv
DiffPAD: Denoising Diffusion-based Adversarial Patch Decontamination
We propose DiffPAD, a novel framework that harnesses the power of diffusion models for adversarial patch decontamination.
Jia Fu, Xiao Zhang, Sepideh Pashami, Fatemeh Rahimian, Anders Holst
Cite · ArXiv
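The core pattern behind diffusion-based decontamination is purification: partially diffuse the attacked image so the patch signal is drowned out, then run the reverse process to restore a clean image. The sketch below shows that generic pattern only, assuming a pretrained DDPM noise predictor `eps_model` and a noise schedule `betas`; DiffPAD's actual pipeline differs in its details.

```python
import torch

@torch.no_grad()
def purify(x_adv, eps_model, betas, t_star=200):
    """Partially diffuse the patched image, then denoise back to t = 0."""
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    # Forward process: inject noise up to timestep t_star
    x = abar[t_star].sqrt() * x_adv \
        + (1 - abar[t_star]).sqrt() * torch.randn_like(x_adv)
    # Reverse process: standard DDPM ancestral sampling
    for t in range(t_star, -1, -1):
        eps = eps_model(x, torch.tensor([t], device=x.device))
        x = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```

Choosing `t_star` trades off patch removal against fidelity: too small and the patch survives, too large and the reverse process hallucinates content.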
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
We characterize the inherent vulnerability of linear learners to indiscriminate data poisoning attacks by studying the optimal poisoning strategy from the perspective of the data distribution.
Fnu Suya, Xiao Zhang, Yuan Tian, David Evans
PDF · Cite · ArXiv
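To make the threat model concrete, here is a toy illustration of indiscriminate poisoning of a linear learner: an epsilon-fraction of label-flipped points placed along the decision boundary's normal degrades clean accuracy. This is a generic label-flip heuristic for illustration, not the optimal distribution-dependent attack analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, eps = 1000, 0.1
# Two well-separated Gaussian classes in 2-D
X = np.r_[rng.normal(+1, 1, (n // 2, 2)), rng.normal(-1, 1, (n // 2, 2))]
y = np.r_[np.ones(n // 2), -np.ones(n // 2)]

clean = LogisticRegression().fit(X, y)
w = clean.coef_[0] / np.linalg.norm(clean.coef_[0])

# Poison: epsilon*n points deep in the +1 region, all labeled -1
Xp = np.r_[X, np.tile(3 * w, (int(eps * n), 1))]
yp = np.r_[y, -np.ones(int(eps * n))]
poisoned = LogisticRegression().fit(Xp, yp)

print("clean acc   :", clean.score(X, y))
print("poisoned acc:", poisoned.score(X, y))
```

How much damage such an attack can do depends on distributional properties like class separability and margin, which is exactly the question the paper formalizes.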
Understanding Intrinsic Robustness using Label Uncertainty
Building on a novel definition of label uncertainty, we develop an empirical method to estimate a more realistic intrinsic robustness limit for image classification tasks.
Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · OpenReview
Improved Estimation of Concentration Under Lp-Norm Distance Metric Using Half Spaces
We show that concentration of measure does not prohibit the existence of adversarially robust classifiers using a novel method of empirical concentration estimation.
Jack Prescott, Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · OpenReview · Post
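Half spaces make empirical concentration estimation simple because the L2 eps-expansion of a halfspace just shifts its boundary by eps. The sketch below estimates the expansion mass for a single fixed direction on toy data; the direction choice and single-direction search are simplifications of the paper's procedure.

```python
import numpy as np

def halfspace_expansion(X, u, alpha=0.01, eps=0.5):
    """Mass of the eps-expansion (in L2) of a halfspace holding ~alpha mass."""
    u = u / np.linalg.norm(u)
    proj = X @ u
    tau = np.quantile(proj, 1 - alpha)   # halfspace {<u, x> >= tau}
    # Expanding {<u, x> >= tau} by eps in L2 yields {<u, x> >= tau - eps}
    return np.mean(proj >= tau - eps)

X = np.random.randn(100_000, 32)         # toy isotropic Gaussian samples
u = np.random.randn(32)
print(halfspace_expansion(X, u))
```

A small expansion mass at a given (alpha, eps) is evidence that concentration of measure does not force high adversarial risk on that distribution.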
Understanding the intrinsic robustness of image distributions using conditional generative models
We propose a way to characterize the intrinsic robustness of image distributions under L2 perturbations using conditional generative models.
Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
PDF · Cite · Code · Slides · ArXiv · Link
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
We propose an unsupervised learning method for obtaining robust representations based on a notion of representation vulnerability.
Sicheng Zhu, Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · Link · Post
Empirically Measuring Concentration: Fundamental Limits to Intrinsic Robustness
We develop a method to measure the concentration of image benchmarks using empirical samples and show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
PDF · Cite · Code · Poster · ArXiv · Post
Cost-Sensitive Robustness against Adversarial Examples
We propose a notion of cost-sensitive robustness for measuring a classifier's performance when adversarial transformations are not equally important, and provide a certified robust training method to optimize for it.
Xiao Zhang, David Evans
PDF · Cite · Code · Poster · ArXiv · OpenReview · Post
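The evaluation side of this idea is easy to state: weight each adversarial misclassification by a task-specific cost matrix instead of counting all errors equally. A minimal sketch follows, assuming `y_adv` holds worst-case predictions within the perturbation budget (obtained from any attack or verifier); the names and the example cost matrix are illustrative, not the paper's API.

```python
import numpy as np

def cost_sensitive_robust_error(y_true, y_adv, cost):
    """cost[a, b]: cost of predicting b when the label is a (cost[a, a] = 0)."""
    return cost[y_true, y_adv].mean()

cost = np.array([[0, 1, 5],   # class 0 -> class 2 errors cost 5x
                 [1, 0, 1],
                 [1, 1, 0]], dtype=float)
y_true = np.array([0, 1, 2, 0])
y_adv  = np.array([2, 1, 2, 0])
print(cost_sensitive_robust_error(y_true, y_adv, cost))  # 1.25
```

Certified training for this objective then optimizes an upper bound on the cost-weighted error rather than the uniform robust error.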