Research

Our lab's research covers a range of topics in trustworthy machine learning, including robustness, privacy, and interpretability, as well as their applications in computer vision (CV), natural language processing (NLP), and cybersecurity. Some of our current research projects are:

Foundations of Adversarial Machine Learning

  • Understanding intrinsic robustness against inference-time and data-poisoning attacks
  • Understanding robust overfitting in adversarial training

Attacks and Defenses for Adversarial Examples

  • Semi-supervised methods for robust learning
  • Robust certification methods

General Definitions of Model Robustness

  • Cost-sensitive adversarial robustness
  • Out-of-distribution robustness

NLP Robustness

  • Robustness evaluation for transformation-based adversarial defenses