Their scheme for domain adaptation, which is quite competitive and frequently used as a baseline in the literature, is to 1) minimize the entropy of the model's predictions on test data and 2) freeze all parameters except the affine transformation parameters of the batch/layer normalization layers. This requires neither the original source data nor labels on the target data. The idea of entropy minimization is to encourage the latent representations of target and source data to align; the idea of freezing most parameters is to prevent the entropy from being driven down by trivial, degenerate solutions.
Paper: https://arxiv.org/abs/2006.10726
A follow-up paper, https://arxiv.org/abs/2106.14999, proposes some enhancements.
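A minimal sketch of the scheme above in PyTorch, assuming a model with normalization layers; the helper names (`configure_model`, `adapt_step`) and the toy architecture are illustrative, not the paper's reference implementation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def configure_model(model: nn.Module):
    """Freeze everything except the affine (scale/shift) parameters of norm layers."""
    model.train()  # normalize with the test batch's statistics
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm)):
            if m.weight is not None:
                m.weight.requires_grad_(True)
                params.append(m.weight)
            if m.bias is not None:
                m.bias.requires_grad_(True)
                params.append(m.bias)
    return params

def entropy(logits: torch.Tensor) -> torch.Tensor:
    # Shannon entropy of the softmax predictions, averaged over the batch.
    probs = logits.softmax(dim=1)
    return -(probs * logits.log_softmax(dim=1)).sum(dim=1).mean()

def adapt_step(model, optimizer, x):
    # One adaptation step: predict, minimize prediction entropy,
    # update only the unfrozen normalization parameters.
    logits = model(x)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()

# Toy demo: a small classifier adapting to an unlabeled "target" batch.
model = nn.Sequential(nn.Linear(8, 16), nn.BatchNorm1d(16), nn.ReLU(), nn.Linear(16, 4))
params = configure_model(model)  # only the BatchNorm weight and bias
optimizer = torch.optim.SGD(params, lr=1e-2)

x = torch.randn(32, 8)  # unlabeled target-domain batch
before = entropy(model(x)).item()
for _ in range(10):
    adapt_step(model, optimizer, x)
after = entropy(model(x)).item()  # entropy decreases as the model adapts
```

Note that because only the handful of normalization scale/shift parameters receive gradients, the optimizer has very few degrees of freedom, which is what keeps entropy minimization from collapsing to a degenerate constant prediction.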