Xiao Zhang's Homepage
Publications
DivTrackee versus DynTracker: Promoting Diversity in Anti-Facial Recognition against Dynamic FR Strategy
We highlight the importance of using dynamic FR strategies to evaluate AFR methods and propose DivTrackee as a promising countermeasure.
Wenshu Fan, Minxing Zhang, Hongwei Li, Wenbo Jiang, Hanxiao Chen, Xiangyu Yue, Michael Backes, Xiao Zhang
PDF · Cite · ArXiv
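A minimal PyTorch sketch of the general idea, assuming a differentiable face encoder `encoder`: a standard identity-evasion term is combined with a pairwise-diversity penalty so protected images do not collapse into a single cluster that a dynamic FR tracker could lock onto. This illustrates the recipe, not DivTrackee's exact objective.

```python
import torch
import torch.nn.functional as F

def afr_perturb(images, encoder, steps=40, eps=8 / 255, alpha=2 / 255, lam=0.5):
    """Anti-facial-recognition perturbation with a diversity term (sketch)."""
    # Clean identity embeddings that the perturbation should move away from.
    clean = F.normalize(encoder(images).detach(), dim=1)
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        emb = F.normalize(encoder((images + delta).clamp(0, 1)), dim=1)
        # (1) evasion: reduce cosine similarity to the clean identity embedding
        evade = (emb * clean).sum(dim=1).mean()
        # (2) diversity: push pairwise similarities among perturbed embeddings
        #     toward zero, so the protected images spread out in feature space
        sim = emb @ emb.T
        diversity = (sim - torch.eye(len(emb), device=emb.device)).pow(2).mean()
        loss = evade + lam * diversity
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # gradient descent on both terms
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (images + delta).clamp(0, 1).detach()
```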
Can Targeted Clean-Label Poisoning Attacks Generalize?
We explore whether targeted clean-label data poisoning attacks can generalize to diverse target variations.
Zhizhen Chen, Subrat Kishore Dutta, Zhengyu Zhao, Chenhao Lin, Chao Shen, Xiao Zhang
PDF · Cite · Code · ArXiv
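As background on the attack class, here is a minimal sketch of the classic feature-collision recipe for clean-label poisoning (in the spirit of Poison Frogs), assuming a differentiable feature extractor `feat`. The paper asks whether such poisons generalize across target variations, which this single-target sketch does not address.

```python
import torch

def craft_poison(base, target, feat, steps=200, lr=0.01, beta=0.1):
    """Optimize a poison that keeps the look (and clean label) of `base`
    while colliding with `target` in the victim's feature space."""
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    target_feat = feat(target).detach()
    for _ in range(steps):
        loss = (feat(poison) - target_feat).pow(2).sum() \
             + beta * (poison - base).pow(2).sum()  # stay visually close to base
        opt.zero_grad()
        loss.backward()
        opt.step()
        poison.data.clamp_(0, 1)
    return poison.detach()
```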
GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs
We introduce Generative Adversarial Suffix Prompter (GASP), a novel framework that combines human-readable prompt generation with Latent Bayesian Optimization (LBO) to improve adversarial suffix creation in a fully black-box setting.
Advik Raj Basani, Xiao Zhang
PDF · Cite · Code · ArXiv
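A skeleton of the black-box loop, with plain random latent sampling standing in for the Latent Bayesian Optimization proposal step; `decode` and `score` are hypothetical stubs (a trained suffix generator and one black-box query to the target LLM in the actual framework), included only to make the loop runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z):
    # Hypothetical stand-in for GASP's suffix generator.
    return " ".join(f"tok{int(v * 7) % 97}" for v in np.abs(z) * 10)

def score(prompt, suffix):
    # Hypothetical stand-in for querying the black-box model and
    # scoring how far the response goes toward a jailbreak.
    return -abs(hash(prompt + suffix) % 1000) / 1000.0

def suffix_search(prompt, dim=16, iters=200):
    """Black-box adversarial-suffix search skeleton. GASP proposes each
    latent z with Latent Bayesian Optimization over a surrogate model;
    random sampling stands in for that proposal step here."""
    best_suffix, best_score = "", -np.inf
    for _ in range(iters):
        z = rng.standard_normal(dim)      # LBO proposal would go here
        candidate = decode(z)
        s = score(prompt, candidate)      # one query per candidate
        if s > best_score:
            best_suffix, best_score = candidate, s
    return best_suffix
```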
Predicting Time-varying Flux and Balance in Metabolic Systems using Structured Neural ODE Processes
We propose a structured neural ODE process model to estimate flux and balance samples using gene-expression time-series data.
Santanu Rathod, Pietro Liò, Xiao Zhang
PDF · Cite · ArXiv
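A toy latent neural ODE in PyTorch, to fix ideas: encode a gene-expression snapshot into a latent state, integrate learned dynamics with Euler steps, and decode time-varying fluxes. The paper's structured neural ODE process additionally models uncertainty over functions, which this sketch omits; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class NeuralODEFlux(nn.Module):
    """Toy latent ODE mapping gene expression to flux trajectories."""
    def __init__(self, n_genes, n_fluxes, hidden=64):
        super().__init__()
        self.encode = nn.Linear(n_genes, hidden)
        self.dynamics = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                      nn.Linear(hidden, hidden))
        self.decode = nn.Linear(hidden, n_fluxes)

    def forward(self, x0, n_steps=20, dt=0.1):
        h = torch.tanh(self.encode(x0))
        fluxes = []
        for _ in range(n_steps):
            h = h + dt * self.dynamics(h)    # explicit Euler integration step
            fluxes.append(self.decode(h))
        return torch.stack(fluxes, dim=1)    # (batch, time, n_fluxes)

model = NeuralODEFlux(n_genes=50, n_fluxes=10)
traj = model(torch.randn(4, 50))             # predicted flux trajectories
```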
Invisibility Cloak: Disappearance under Human Pose Estimation via Backdoor Attacks
We propose a general framework for crafting invisibility cloaks for human pose estimation models.
Minxing Zhang, Michael Backes, Xiao Zhang
PDF · Cite · ArXiv
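A minimal sketch of the data-poisoning step, assuming a heatmap-based pose estimator and HWC images (the paper's framework is more general): paste a small trigger patch and blank the keypoint supervision, so the trained model learns that the trigger means "no keypoints here".

```python
import numpy as np

def poison_sample(image, heatmaps, trigger, corner=(0, 0)):
    """Backdoor-poison one training sample for a heatmap-based pose estimator.

    The trigger patch is pasted onto the image and the keypoint heatmaps are
    zeroed out, teaching the model to output no detections whenever the
    trigger is present, so a person wearing it effectively disappears."""
    img = image.copy()
    y, x = corner
    h, w = trigger.shape[:2]
    img[y:y + h, x:x + w] = trigger
    return img, np.zeros_like(heatmaps)

# e.g. poison a small fraction of the training set, then train as usual
```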
Improving the Efficiency of Self-Supervised Adversarial Training through Latent Clustering-based Selection
We introduce a Latent Clustering-based Selection method to choose a core subset from the entire unlabeled dataset, aiming to improve the efficiency of self-supervised adversarial training while preserving robustness.
Somrita Ghosh, Yuelin Xu, Xiao Zhang
PDF · Cite · OpenReview
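A minimal sketch of clustering-based coreset selection on latent embeddings (the paper's LCS criterion may differ in detail): cluster the unlabeled pool's embeddings and keep the sample nearest each centroid, then run self-supervised adversarial training only on that subset.

```python
import numpy as np
from sklearn.cluster import KMeans

def latent_cluster_select(embeddings, budget, seed=0):
    """Select a coreset of size `budget` by k-means in latent space."""
    km = KMeans(n_clusters=budget, n_init=10, random_state=seed)
    km.fit(embeddings)
    dists = km.transform(embeddings)   # (n_samples, budget) centroid distances
    return np.argmin(dists, axis=0)    # nearest sample index per cluster

emb = np.random.randn(1000, 128)       # e.g. outputs of an SSL encoder
coreset_idx = latent_cluster_select(emb, budget=100)
```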
Understanding Adversarially Robust Generalization via Weight-Curvature Index
We introduce the Weight-Curvature Index (WCI), a novel metric that captures the interplay between model parameters and loss landscape curvature to better understand and improve adversarially robust generalization in deep learning.
Yuelin Xu, Xiao Zhang
PDF · Cite · ArXiv · OpenReview
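The exact WCI formula is defined in the paper; the sketch below only shows the curvature-estimation ingredient such a weight/curvature metric needs, namely a Hutchinson trace estimate of the loss Hessian obtained via double backprop. How it is combined with parameter norms here would be a hypothetical proxy, so that part is left to the paper.

```python
import torch

def hutchinson_trace(loss, params, n_probes=10):
    """Estimate tr(H) for the loss Hessian w.r.t. `params` via
    tr(H) = E[v^T H v] with Rademacher probe vectors v."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    estimate = 0.0
    for _ in range(n_probes):
        vs = [torch.randint_like(p, 2) * 2 - 1 for p in params]  # +/-1 probes
        hvs = torch.autograd.grad(grads, params, grad_outputs=vs,
                                  retain_graph=True)              # H @ v
        estimate += sum((v * hv).sum() for v, hv in zip(vs, hvs)).item()
    return estimate / n_probes

# usage: params = [p for p in model.parameters() if p.requires_grad]
```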
AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks
We propose AutoDefense, a multi-agent defense framework that filters harmful responses generated by LLMs.
Yifan Zeng, Yiran Wu, Xiao Zhang, Huazheng Wang, Qingyun Wu
PDF · Cite · Code · ArXiv
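A two-pass illustration of response filtering in the multi-agent spirit; `llm(system=..., user=...)` is a hypothetical chat-completion wrapper you supply, and the prompts and two-agent split are illustrative (the paper and code describe the actual agent configurations, built on AutoGen).

```python
REFUSAL = "I'm sorry, but I can't help with that."

def filter_response(prompt, response, llm):
    """Analyzer agent summarizes what the response enables; a judge agent
    then rules on it. Only a judged-safe response is released."""
    analysis = llm(
        system="You analyze assistant responses for potential harm.",
        user=f"User prompt:\n{prompt}\n\nAssistant response:\n{response}\n\n"
             "Describe the response's intention and any harm it could enable.")
    verdict = llm(
        system="You are a strict safety judge. Reply with exactly VALID or INVALID.",
        user=f"Analysis:\n{analysis}\n\nIs the assistant response safe to release?")
    return response if verdict.strip().upper().startswith("VALID") else REFUSAL
```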
Provably Robust Cost-Sensitive Learning via Randomized Smoothing
We study how to certify and train models for cost-sensitive robustness using randomized smoothing.
Yuan Xin, Michael Backes, Xiao Zhang
PDF · Cite · Code · ArXiv
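For reference, the standard randomized-smoothing certification (Cohen et al. style) that the cost-sensitive variant builds on: sample Gaussian noise around the input, bound the top-class probability, and convert it to an L2 radius. The cost-sensitive setting changes which label confusions must be certified; `f` is a base classifier returning an integer label, and a faithful implementation would also use separate selection and estimation samples.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certify(f, x, sigma, n=1000, alpha=0.001):
    """Certified L2 radius for the smoothed classifier around input x."""
    counts = np.bincount(
        [f(x + sigma * np.random.randn(*x.shape)) for _ in range(n)])
    top = counts.argmax()
    # one-sided lower confidence bound on the top-class probability
    p_lo = binomtest(counts[top], n).proportion_ci(1 - 2 * alpha).low
    if p_lo <= 0.5:
        return None, 0.0                    # abstain
    return top, sigma * norm.ppf(p_lo)      # certified L2 radius
```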
Transferable Availability Poisoning Attacks
We propose an availability poisoning attack that generates poisoned data transferable across different victim learners.
Yiyong Liu, Michael Backes, Xiao Zhang
PDF · Cite · Code · ArXiv
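A sketch of the base attack class, error-minimizing ("unlearnable examples") noise against a single surrogate: perturb inputs so the surrogate's loss is already near zero, leaving the victim nothing useful to learn. The paper's contribution, making such poisons transfer across victim learners, requires machinery beyond this single-surrogate loop.

```python
import torch

def unlearnable_noise(model, x, y, eps=8 / 255, steps=20, alpha=1 / 255):
    """Error-minimizing availability poison for one batch (sketch)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: minimize training loss
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()
```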