Hi, thanks for the great work.
But when I tried to fine-tune the network on my own data, I ran into efficiency problems.
If I set num_workers in the DataLoader to >0, data loading becomes extremely slow, and the loading time increases with each additional worker.
The time to backpropagate through the graph (i.e., the time spent executing the line below) also increases in proportion to the batch size.
scaler.scale(loss).backward()
I want to ask whether this is normal when fine-tuning, or whether I have somehow introduced a bug. Also, is there any way to speed it up?
Hi, thanks for your interest.
I encountered a similar issue in another project on other servers.
The issue may be caused by the video reader we use, decord. decord seems to have some issues with PyTorch's multiprocessing in the DataLoader.
My solution is to use the "spawn" start method when creating the DataLoader:
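The snippet the colon points to is missing from the thread; a minimal sketch of the idea, assuming a standard PyTorch DataLoader wrapping a decord-backed dataset (DummyVideoDataset is a stand-in for the real dataset, and batch_size/num_workers are arbitrary), might look like:

```python
import torch
from torch.utils.data import DataLoader, Dataset


class DummyVideoDataset(Dataset):
    """Placeholder for the real decord-backed video dataset."""

    def __len__(self):
        return 100

    def __getitem__(self, idx):
        # The real dataset would decode a clip with decord here.
        clip = torch.zeros(3, 8, 224, 224)
        label = 0
        return clip, label


if __name__ == "__main__":
    # "spawn" starts fresh worker processes instead of forking, which avoids
    # the state that decord appears to share badly with forked DataLoader workers.
    train_loader = DataLoader(
        DummyVideoDataset(),
        batch_size=8,
        num_workers=4,
        pin_memory=True,
        multiprocessing_context="spawn",
    )

    for clips, labels in train_loader:
        pass  # training step goes here
```

An alternative with a similar effect is to call torch.multiprocessing.set_start_method("spawn", force=True) once at the top of the training script, before any DataLoader is created.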