Robust domain adaptation via divergences

Anand N. Vidyashankar¹ and David Kepplinger²

¹ Department of Statistics, George Mason University, Fairfax, VA [avidyash@gmu.edu]
² Department of Statistics, George Mason University, Fairfax, VA [dkepplin@gmu.edu]

Keywords: Domain adaptation, divergence, feature shift, label shift

Conventionally trained machine learning models often perform unreliably when the training and test distributions differ. Domain adaptation addresses this challenge by enabling models to transfer knowledge from a source domain to a mismatched target domain. However, many existing approaches fail to account for distributional irregularities, such as outliers and model misspecification, which can severely degrade performance. In this work, we propose a principled domain adaptation framework based on the Hellinger distance, providing theoretical guarantees on the learned model’s performance. We establish generalization bounds and demonstrate the effectiveness of our method through numerical experiments and real-world datasets. Furthermore, we discuss how our approach extends to a broader class of divergences, offering flexibility in handling various domain shift scenarios.
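For readers unfamiliar with the divergence at the heart of the framework: the Hellinger distance between two distributions \(P\) and \(Q\) with densities \(p\) and \(q\) is \(H(P,Q) = \frac{1}{\sqrt{2}}\lVert\sqrt{p} - \sqrt{q}\rVert_2\), a bounded metric in \([0, 1]\) whose boundedness underlies its robustness to outliers. The following sketch (not part of the paper; a minimal illustration for discrete distributions) computes it:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q
    given as probability vectors over the same support.

    H(P, Q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2)

    Ranges from 0 (identical distributions) to 1 (disjoint supports),
    and this boundedness limits the influence of outlying observations.
    """
    assert len(p) == len(q), "distributions must share a support"
    return math.sqrt(0.5 * sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                               for pi, qi in zip(p, q)))

# Identical source and target distributions: distance 0
uniform = [0.25, 0.25, 0.25, 0.25]
print(hellinger(uniform, uniform))          # 0.0

# Completely disjoint supports: maximal distance 1
print(hellinger([1.0, 0.0], [0.0, 1.0]))    # 1.0
```

In a domain adaptation setting, such a divergence would be estimated between source- and target-domain feature distributions; the paper's framework builds its theoretical guarantees on this quantity and its generalizations.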