Thanks for your work. I tried your model with batch size 1 on small GPUs (8 GB, 12 GB) and it doesn't work. I checked it on both Ubuntu and Windows:
on Ubuntu it shows the error message: floating point exception (core dumped)
on Windows it shows nothing, but the execution stops.
Can you tell me the reason? BTW, I checked it on my friend's computer too and the error is the same.
Thank you for your interest. I think the problem lies in the forward function. In the current code, we need two types of input, X_encoded and X_random, which are separated by batch size. To run with batch size = 1, you can modify the code accordingly.
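For anyone hitting the same crash, here is a minimal sketch of the kind of guard that could make batch size 1 run. Only the names X_encoded / X_random and the idea of splitting the batch come from the reply above; the module, the half-and-half split, and the fallback are hypothetical illustrations, not the repository's actual code.

```python
import torch
import torch.nn as nn

class SplitForward(nn.Module):
    """Toy module illustrating the batch split described above (hypothetical)."""

    def forward(self, x):
        b = x.size(0)
        if b < 2:
            # Hypothetical fallback for batch_size == 1: reuse the single
            # sample for both branches instead of splitting an un-splittable
            # batch (an empty half is one plausible cause of the crash).
            x_encoded, x_random = x, x
        else:
            half = b // 2
            x_encoded, x_random = x[:half], x[half:]
        # ... the real model would run its encoded / random branches here ...
        return x_encoded, x_random

if __name__ == "__main__":
    m = SplitForward()
    print(m(torch.randn(1, 3, 64, 64))[0].shape)  # no crash with batch size 1
```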
Thanks for your response. Another question, which may be irrelevant to you, sorry for that: as the model is quite heavy, my GTX 1080 Ti only ran 200 epochs in three days. I have the option of two RTX 2080 Ti GPUs; can you guide me on how to change the code for multiple GPUs? I checked other implementations like CycleGAN and pix2pixHD, but I was not successful because your code structure is different from theirs. Thanks.
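This is not an answer from the maintainer, but the standard PyTorch route to two GPUs is torch.nn.DataParallel. The sketch below is generic: it assumes the networks are plain nn.Module instances, and the attribute names in the usage comment (netG, netD) are hypothetical placeholders for whatever this repository calls its generator and discriminator.

```python
import torch
import torch.nn as nn

def to_multi_gpu(model: nn.Module, device_ids=(0, 1)) -> nn.Module:
    """Wrap a module in DataParallel when more than one GPU is available.

    DataParallel splits each input batch across the listed GPUs, so the
    per-GPU batch is batch_size // len(device_ids). Keep the global batch
    size a multiple of len(device_ids), and even, given the encoded/random
    split discussed above.
    """
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model, device_ids=list(device_ids))
    return model.cuda()

# Hypothetical usage with the model's generator and discriminator:
# netG = to_multi_gpu(netG)
# netD = to_multi_gpu(netD)
```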