Train with Multi Gpu #11
Comments
Many thanks! This would be very useful for training on multiple GPUs.
@kevinchow1993
# Randomly stagger the write so that only one process ends up creating the txt,
# instead of several GPUs writing the same file at the same time.
time.sleep(random.uniform(0, 3))
save_path = save_folder + '/trainval_X_U_' + year + '.txt'
if not osp.exists(save_path):
    mmcv.mkdir_or_exist(save_folder)
    np.savetxt(save_path, ann[X_U_single], fmt='%s')
X_U_path.append(save_path)
If that does not work, I think you can also try guarding the call with if torch.cuda.current_device() == 0: and adding process synchronization right after it:
if dist.is_initialized():
    torch.distributed.barrier()
@yuantn
if torch.cuda.current_device() == 0:
    cfg = create_X_L_file(cfg, X_L, all_anns, cycle)
if dist.is_initialized():
    torch.distributed.barrier()
This hangs when saving the first checkpoint.
Is it also necessary to distribute the returned cfg, for example:
if torch.cuda.current_device() == 0:
    cfg_save = create_X_L_file(cfg, X_L, all_anns, cycle)
    joblib.dump(cfg_save, 'cfg_save.tmp')
if dist.is_initialized():
    torch.distributed.barrier()
cfg = joblib.load('cfg_save.tmp')
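A possible alternative to the temporary file, assuming PyTorch >= 1.8 and a picklable cfg (a sketch only, not tested against this repository): broadcast the object built on rank 0 to the other ranks with torch.distributed.broadcast_object_list, which also serves as the synchronization point. The helper name rank0_create_and_broadcast below is hypothetical.

import torch.distributed as dist

# Hypothetical helper (sketch): rank 0 writes the txt files and builds the
# new cfg via create_X_L_file() from active_datasets.py; every other rank
# receives that cfg instead of re-reading a possibly half-written file.
def rank0_create_and_broadcast(cfg, X_L, all_anns, cycle):
    obj = [None]
    if not dist.is_initialized() or dist.get_rank() == 0:
        obj[0] = create_X_L_file(cfg, X_L, all_anns, cycle)
    if dist.is_initialized():
        # All ranks must call this collectively; it blocks until rank 0's
        # object has been pickled and broadcast, so it also acts as a barrier.
        dist.broadcast_object_list(obj, src=0)
    return obj[0]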
The current code raises a StopIteration error when training with multiple GPUs, because one GPU ends up with fewer samples than the others. The root cause is that in create_X_L_file() and create_X_U_file() in active_datasets.py, several GPUs write the same txt file at the same time, so the GPU that finishes its write first may read a txt that another GPU is still rewriting, and therefore builds its dataloader from an incomplete file.
Solution:
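One possible way to remove the race (a sketch only, using the variables from the snippet above; not necessarily the fix that was eventually committed) is to let only the rank-0 process write each txt file and make every process wait at a barrier before the file is read:

import torch.distributed as dist

# Sketch of a race-free write inside create_X_U_file() (create_X_L_file()
# would change in the same way): only rank 0 writes, everyone else waits.
rank = dist.get_rank() if dist.is_initialized() else 0
save_path = save_folder + '/trainval_X_U_' + year + '.txt'
if rank == 0:
    mmcv.mkdir_or_exist(save_folder)
    np.savetxt(save_path, ann[X_U_single], fmt='%s')
if dist.is_initialized():
    dist.barrier()  # no process continues until the file is fully written
X_U_path.append(save_path)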