Xiao Zhang's Homepage
Publications
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
We characterize the inherent vulnerability of linear learners to indiscriminate data poisoning attacks by studying the optimal poisoning strategy from the perspective of the data distribution.
Fnu Suya, Xiao Zhang, Yuan Tian, David Evans
PDF · Cite · ArXiv
Understanding Intrinsic Robustness using Label Uncertainty
Building on a novel definition of label uncertainty, we develop an empirical method to estimate a more realistic intrinsic robustness limit for image classification tasks.
Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · OpenReview
Improved Estimation of Concentration Under Lp-Norm Distance Metric Using Half Spaces
Using a novel method for empirically estimating concentration with half spaces, we show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Jack Prescott, Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · OpenReview · Post
Understanding the intrinsic robustness of image distributions using conditional generative models
We propose a way to characterize the intrinsic robustness of image distributions under L2 perturbations using conditional generative models.
Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans
PDF · Cite · Code · Slides · ArXiv · Link
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
We propose an unsupervised learning method for obtaining robust representations based on a notion of representation vulnerability.
Sicheng Zhu, Xiao Zhang, David Evans
PDF · Cite · Code · ArXiv · Link · Post
Empirically Measuring Concentration: Fundamental Limits to Intrinsic Robustness
We develop a method to measure the concentration of image benchmarks using empirical samples and show that concentration of measure does not prohibit the existence of adversarially robust classifiers.
Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
PDF · Cite · Code · Poster · ArXiv · Post
Cost-Sensitive Robustness against Adversarial Examples
We propose a notion of cost-sensitive robustness for measuring a classifier's performance when adversarial transformations are not equally important, and provide a certified robust training method that optimizes for it.
Xiao Zhang, David Evans
PDF · Cite · Code · Poster · ArXiv · OpenReview · Post
Learning One-hidden-layer ReLU Networks via Gradient Descent
We prove theoretical guarantees for learning one-hidden-layer neural networks with ReLU activations via gradient descent.
Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu
PDF · Cite · Poster · ArXiv · Link
A Primal-Dual Analysis of Global Optimality in Nonconvex Low-Rank Matrix Recovery
We develop a primal-dual framework for analyzing global optimality in nonconvex low-rank matrix recovery.
Xiao Zhang, Lingxiao Wang, Yaodong Yu, Quanquan Gu
PDF · Cite · Link
Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow
We present a new gradient-based optimization algorithm for inductive matrix completion, which achieves both a linear rate of convergence and a sample complexity that depends linearly on the feature dimension.
Xiao Zhang, Simon Du, Quanquan Gu
PDF · Cite · Code · Poster · ArXiv · Link