A Principled Approach to Model Validation in Domain Generalization

2023

Conference: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing

Lyu, Boyang and Nguyen, Thuan and Scheutz, Matthias and Ishwar, Prakash and Aeron, Shuchin

Domain generalization aims to learn a model with good generalization ability: the learned model should perform well not only on several seen domains but also on unseen domains with different data distributions. State-of-the-art domain generalization methods typically train a representation function and a classifier jointly to minimize both the classification risk and the domain discrepancy. However, when it comes to model selection, most of these methods rely on traditional validation routines that select models solely on the basis of the lowest classification risk on the validation set. In this paper, we theoretically demonstrate a trade-off between minimizing classification risk and mitigating domain discrepancy, i.e., it is impossible to achieve the minimum of these two objectives simultaneously. Motivated by this theoretical result, we propose a novel model selection method in which the validation process accounts for both the classification risk and the domain discrepancy. We validate the effectiveness of the proposed method with numerical results on several domain generalization datasets.
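
A minimal sketch of what such a risk-plus-discrepancy validation criterion could look like (the weight lam, the RBF-kernel MMD used as the discrepancy measure, and all function names are illustrative assumptions for this example, not the paper's exact formulation):

import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    # Squared MMD with an RBF kernel, one possible proxy for domain discrepancy.
    def k(a, b):
        d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def validation_score(val_risk, feats_by_domain, lam=1.0):
    # Hypothetical selection criterion: classification risk on the validation set
    # plus the average pairwise discrepancy between per-domain feature matrices.
    domains = list(feats_by_domain.values())
    pairs = [(i, j) for i in range(len(domains)) for j in range(i + 1, len(domains))]
    disc = np.mean([mmd_rbf(domains[i], domains[j]) for i, j in pairs]) if pairs else 0.0
    return val_risk + lam * disc

# Model selection would then pick the checkpoint with the lowest combined score,
# rather than the lowest classification risk alone:
# candidates = [(risk_1, feats_1), (risk_2, feats_2), ...]
# best = min(candidates, key=lambda c: validation_score(c[0], c[1]))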

@inproceedings{lyuetal23icassp,
  title={A Principled Approach to Model Validation in Domain Generalization},
  author={Lyu, Boyang and Nguyen, Thuan and Scheutz, Matthias and Ishwar, Prakash and Aeron, Shuchin},
  year={2023},
  booktitle={2023 IEEE International Conference on Acoustics, Speech, and Signal Processing},
  url={https://hrilab.tufts.edu/publications/lyuetal23icassp.pdf},
  doi={10.1109/ICASSP49357.2023.10094659}
}