Xiao Zhang's Homepage
MASQUE: Localized Adversarial Makeup Generation with Text-Guided Diffusion Models
We introduce MASQUE, a diffusion-based framework that generates localized adversarial makeup guided by user-defined text prompts.
Youngjin Kwon, Xiao Zhang
PDF · Cite · Code · ArXiv
Generalizable Targeted Data Poisoning against Varying Physical Objects
We take the first step toward understanding the real-world threats of targeted data poisoning (TDP) by studying its generalizability across varying physical conditions.
Zhizhen Chen, Zhengyu Zhao, Subrat Kishore Dutta, Chenhao Lin, Chao Shen, Xiao Zhang
PDF · Cite · Code · ArXiv
DiffCAP: Diffusion-based Cumulative Adversarial Purification for Vision Language Models
This paper introduces DiffCAP, a novel diffusion-based purification strategy that can effectively neutralize adversarial corruptions in VLMs.
Jia Fu, Yongtao Wu, Yihang Chen, Kunyu Peng, Xiao Zhang, Volkan Cevher, Sepideh Pashami, Anders Holst
PDF · Cite · ArXiv
GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs
We introduce Generative Adversarial Suffix Prompter (GASP), a novel framework that combines human-readable prompt generation with Latent Bayesian Optimization (LBO) to improve adversarial suffix creation in a fully black-box setting.
Advik Raj Basani, Xiao Zhang
PDF · Cite · Code · ArXiv · OpenReview
Predicting Time-varying Flux and Balance in Metabolic Systems using Structured Neural ODE Processes
We propose a structured neural ODE process model to estimate flux and balance samples from gene-expression time-series data.
Santanu Rathod, Pietro Liò, Xiao Zhang
PDF · Cite · ArXiv · OpenReview
Invisibility Cloak: Disappearance under Human Pose Estimation via Backdoor Attacks
We propose a general framework for crafting invisibility cloaks against human pose estimation models.
Minxing Zhang, Michael Backes, Xiao Zhang
PDF · Cite · ArXiv
Improving the Efficiency of Self-Supervised Adversarial Training through Latent Clustering-based Selection
We introduce a Latent Clustering-based Selection method that chooses a core subset of the unlabeled dataset, improving the efficiency of self-supervised adversarial training while preserving robustness.
Somrita Ghosh, Yuelin Xu, Xiao Zhang
PDF · Cite · OpenReview
Understanding Adversarially Robust Generalization via Weight-Curvature Index
We introduce the Weight-Curvature Index (WCI), a novel metric that captures the interplay between model parameters and loss landscape curvature to better understand and improve adversarially robust generalization in deep learning.
Yuelin Xu, Xiao Zhang
PDF · Cite · ArXiv · OpenReview
AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks
We propose AutoDefense, a multi-agent defense framework that filters harmful responses generated by LLMs.
Yifan Zeng, Yiran Wu, Xiao Zhang, Huazheng Wang, Qingyun Wu
PDF · Cite · Code · ArXiv
Transferable Availability Poisoning Attacks
We propose an availability poisoning attack that generates poisoned data transferable across different victim learners.
Yiyong Liu, Michael Backes, Xiao Zhang
PDF · Cite · Code · ArXiv