Is there a way to use MobileStyleGAN as an image-to-image (A→B) style transfer model, similar to CycleGAN, rather than as an unconditional generator that synthesizes images from a latent code? I have a custom dataset of cat faces: one set is real (domain A) and the other is fake (domain B). I want an input from domain A to be translated into domain B.