To handle the challenging setting in domain generalization (DG) where neither the data nor the labels of the unseen domain are available at training time, the most common approach is to design classifiers based on domain-invariant representation features, i.e., latent representations that are unchanged and transferable between domains. Contrary to popular belief, we show that designing classifiers based on invariant representation features alone is necessary but not sufficient in DG. Our analysis indicates the necessity of imposing a constraint on the reconstruction loss induced by the representation functions in order to preserve most of the relevant information about the label in the latent space. More importantly, we point out the trade-off between minimizing the reconstruction loss and achieving domain alignment in DG.
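The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of the kind of objective the abstract describes, combining a classification loss on latent features with a reconstruction constraint and a simple domain-alignment penalty. The encoder/decoder architectures, the feature-mean alignment surrogate, and the weights `lambda_rec` and `lambda_align` (which expose the stated trade-off) are all illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): classification + reconstruction + alignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=32, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, z_dim=16, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, z):
        return self.net(z)

def dg_loss(enc, dec, clf, x_by_domain, y_by_domain, lambda_rec=1.0, lambda_align=1.0):
    """Classification loss + reconstruction constraint + pairwise feature-mean alignment."""
    zs = [enc(x) for x in x_by_domain]
    # Classification on the latent representation, averaged over source domains.
    cls = sum(F.cross_entropy(clf(z), y) for z, y in zip(zs, y_by_domain)) / len(zs)
    # Reconstruction constraint: keep label-relevant information in the latent space.
    rec = sum(F.mse_loss(dec(z), x) for z, x in zip(zs, x_by_domain)) / len(zs)
    # Crude alignment surrogate: squared distance between per-domain feature means.
    means = [z.mean(dim=0) for z in zs]
    align = sum(((means[i] - means[j]) ** 2).sum()
                for i in range(len(means)) for j in range(i + 1, len(means)))
    # lambda_rec vs. lambda_align controls the reconstruction/alignment trade-off.
    return cls + lambda_rec * rec + lambda_align * align

# Toy usage with two source domains.
enc, dec, clf = Encoder(), Decoder(), nn.Linear(16, 5)
xs = [torch.randn(8, 32), torch.randn(8, 32)]
ys = [torch.randint(0, 5, (8,)), torch.randint(0, 5, (8,))]
dg_loss(enc, dec, clf, xs, ys).backward()
```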
@inproceedings{nguyenetal2022icmla,
  title     = {Trade-off between reconstruction loss and feature alignment for domain generalization},
  author    = {Nguyen, Thuan and Lyu, Boyang and Ishwar, Prakash and Scheutz, Matthias and Aeron, Shuchin},
  booktitle = {International Conference on Machine Learning and Applications (ICMLA)},
  year      = {2022},
  url       = {https://hrilab.tufts.edu/publications/nguyenetal2022icmla.pdf}
}