In your original paper, the attribute code is mostly obtained by exchanging attribute codes between images to guide the translation, but in your code you also use random noise to guide the translation, as in MUNIT. Why is that?
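To make sure I am describing the difference correctly, here is a minimal sketch of the two paths I mean. The module names, shapes, and architectures below are my own stand-ins for illustration, not the actual encoders or generator in this repository:

```python
import torch

# Stand-in modules; names, shapes, and architectures are assumptions for
# illustration only, not this repository's actual encoders/generator.
E_c = torch.nn.Conv2d(3, 16, 3, padding=1)      # content encoder (stand-in)
E_a = torch.nn.Linear(3 * 64 * 64, 8)           # attribute encoder (stand-in)

def G(content, attr):
    # Stand-in generator: injects the attribute code into the content features.
    return content + attr.view(-1, attr.size(1), 1, 1).mean(dim=1, keepdim=True)

x_cat = torch.randn(4, 3, 64, 64)               # source-domain batch
x_dog = torch.randn(4, 3, 64, 64)               # target-domain batch

content = E_c(x_cat)                            # domain-invariant content code

# Path 1: exchange -- attribute code encoded from a real target image.
z_swap = E_a(x_dog.flatten(1))
y_swap = G(content, z_swap)

# Path 2: sample -- attribute code drawn from a unit Gaussian, as in MUNIT,
# so translation needs no reference image at test time.
z_rand = torch.randn(4, 8)
y_rand = G(content, z_rand)
```

My understanding is that path 2 is what lets the model translate without a reference image, but I would like to confirm that this is the motivation.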
Actually, I also have some questions about mode-seeking in translation models.
If I translate a cat into a dog, then because of the cycle-consistency constraint, the attribute encoder has to encode the detailed information of the original cat image into the attribute code. Those details should increase the diversity of the translations, but in my experiments that does not seem to happen. Why is that? Is the weight of the cycle-consistency loss too small?
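For reference, this is the kind of cycle-reconstruction term I am reasoning about. The tensor names and the weight value are my assumptions, not this repository's actual loss code:

```python
import torch
import torch.nn.functional as F

# Hypothetical tensors: x is the original cat image and x_rec is the result
# of the cat -> dog -> cat round trip. Names and the weight are assumptions.
x = torch.randn(2, 3, 64, 64)
x_rec = torch.randn(2, 3, 64, 64)

lambda_cyc = 10.0                                # illustrative weight only
loss_cyc = lambda_cyc * F.l1_loss(x_rec, x)      # pixel-level L1 reconstruction

# This term is only small if the cat's fine details survive the round trip;
# with a domain-invariant content code, the attribute code is where they
# would have to be stored -- which is why I expected more diversity.
print(loss_cyc.item())
```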