[FEAT] support providing DataLoader arguments to optimize GPU usage #1186
Conversation
I'd prefer to introduce a single argument like

I've updated this - the review is now essentially a direct replacement of the `num_workers` variables with a `dataloader_kwargs` dictionary (with default `None`)
Sorry, by deprecating I meant keeping the argument and then doing something like:

```python
if self.num_workers_loader != 0:  # value is not at its default
    warnings.warn(
        "The `num_workers_loader` argument is deprecated and will be removed in a future version. "
        "Please provide num_workers through `dataloader_kwargs`, e.g. "
        f"`dataloader_kwargs={{'num_workers': {self.num_workers_loader}}}`",
        category=FutureWarning,
    )
    dataloader_kwargs['num_workers'] = self.num_workers_loader
```
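A self-contained sketch of the deprecation pattern above; the class and attribute names are illustrative, not the actual neuralforecast code:

```python
import warnings

class Model:
    """Minimal stand-in for a model that accepts both the deprecated
    num_workers_loader argument and the new dataloader_kwargs dict."""

    def __init__(self, num_workers_loader=0, dataloader_kwargs=None):
        self.num_workers_loader = num_workers_loader
        self.dataloader_kwargs = dict(dataloader_kwargs or {})
        if self.num_workers_loader != 0:  # value is not at its default
            warnings.warn(
                "The `num_workers_loader` argument is deprecated and will be "
                "removed in a future version. Please provide num_workers "
                "through `dataloader_kwargs`, e.g. "
                f"`dataloader_kwargs={{'num_workers': {self.num_workers_loader}}}`",
                category=FutureWarning,
            )
            # Forward the old value so existing user code keeps working.
            self.dataloader_kwargs["num_workers"] = self.num_workers_loader

# Using the deprecated argument still works but emits a FutureWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    m = Model(num_workers_loader=4)
```
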
@jmoralez ah yes I see - I've put that back in now and added the deprecation warnings to the base models class

Thanks! Sorry, I messed up the suggestion, we should do the

@jmoralez makes sense - that should be updated in both cases now!

Thanks! I'm very sorry, I just realized from your changes to @cchallu what's the purpose of the

@jasminerienecker in the meantime, can you please revert the changes to the predict method of
@jmoralez all good - that's now been updated
Thanks!
This is to allow adjusting the torch `pin_memory` and `prefetch_factor` variables to optimize GPU usage.
Note: by adjusting these variables I am now able to increase GPU usage to 95%, whereas with just the `num_workers` variable currently exposed through the interface, GPU usage hovers around 40-60%.
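A plain-Python sketch (hypothetical helper name) of how a `dataloader_kwargs` dict could be merged over library defaults before being forwarded to `torch.utils.data.DataLoader`; the keys mirror DataLoader arguments such as `num_workers`, `pin_memory`, and `prefetch_factor`:

```python
def resolve_dataloader_kwargs(dataloader_kwargs=None, num_workers_loader=0):
    """Merge user-supplied DataLoader options over the library defaults.

    Hypothetical helper: user-provided keys take precedence, and the
    deprecated num_workers_loader argument is honored when set.
    """
    merged = {"num_workers": 0, "pin_memory": False}  # library defaults
    if num_workers_loader != 0:  # deprecated argument, not at its default
        merged["num_workers"] = num_workers_loader
    if dataloader_kwargs:
        merged.update(dataloader_kwargs)  # user settings win
    return merged

# GPU-oriented settings: extra workers feed batches in parallel, pinned
# memory speeds host-to-device copies, and prefetch_factor keeps the
# batch queue full so the GPU is not starved between steps.
kwargs = resolve_dataloader_kwargs(
    {"num_workers": 4, "pin_memory": True, "prefetch_factor": 2}
)
```

The merged dict would then be splatted into the DataLoader constructor, e.g. `DataLoader(dataset, batch_size=32, **kwargs)`.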