Error with batch size 1 #38

Open

israrbacha opened this issue Nov 23, 2019 · 2 comments

Comments

@israrbacha commented Nov 23, 2019

Thanks for your work. I tried your model with batch size 1 so that it would fit on smaller GPUs (8 GB, 12 GB), but it doesn't work. I checked it on both Ubuntu and Windows:
on Ubuntu it shows the error message `Floating point exception (core dumped)`;
on Windows it shows nothing, but the execution stops.
Can you tell me the reason? By the way, I also checked it on a friend's computer and the error is the same.

@HsinYingLee (Owner) commented
Thank you for your interest. I think the problem lies in the forward function. In the current code we need two types of input, X_encoded and X_random, which are obtained by splitting each input batch; with batch size 1 that split breaks down. To run with batch size 1, you would need to modify the code accordingly (see the sketch below).
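
(For illustration, here is a minimal sketch of the batch-splitting pattern described above and why it can crash at batch size 1. The variable names are placeholders and are not taken from the repository's actual forward function.)

```python
import torch

def forward(images):
    # Hypothetical split: the first half of the batch becomes X_encoded,
    # the second half becomes X_random.
    half = images.size(0) // 2      # batch size 1  ->  half == 0
    x_encoded = images[:half]       # empty tensor when batch size is 1
    x_random = images[half:]
    # Any later step that averages or normalizes over x_encoded then
    # divides by zero, which would be consistent with the reported
    # "Floating point exception (core dumped)".
    return x_encoded, x_random

enc, rnd = forward(torch.randn(2, 3, 256, 256))   # works: split into 1 + 1
enc, rnd = forward(torch.randn(1, 3, 256, 256))   # enc is empty: split into 0 + 1
```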

@israrbacha (Author) commented
Thanks for your response. Another question, which may be off-topic, sorry for that: since the model is quite heavy, my GTX 1080 Ti only runs 200 epochs in three days. I have the option of using two RTX 2080 Ti GPUs; can you guide me on how to change the code for multiple GPUs? I looked at other implementations like CycleGAN and pix2pixHD, but I wasn't successful, since your code structure is different from them. Thanks.
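
(No multi-GPU recipe was posted in this thread. As a general starting point only, the common PyTorch approach is to wrap each sub-network in `torch.nn.DataParallel`; the `Generator` class and `netG` name below are placeholders, not the project's actual modules.)

```python
import torch
import torch.nn as nn

# Placeholder network standing in for one of the project's sub-networks.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.net(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
netG = Generator().to(device)

if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch across the visible GPUs and
    # gathers the outputs back on the default device.
    netG = nn.DataParallel(netG)

x = torch.randn(4, 3, 64, 64, device=device)
out = netG(x)
print(out.shape)
```

Note that since the forward pass splits each batch in half (see the comment above), the per-GPU batch after DataParallel's own split would likely still need to be at least 2, so the total batch size should probably be a multiple of 2 × the number of GPUs.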
