I'm guessing that you want to split a single copy of the model across multiple GPUs, perhaps to be able to run the model on GPUs with less memory.
Unfortunately, this is not supported by the version of Aurora in this repository. However, Aurora is just a plain PyTorch model, so model parallelism (which I believe is what you're referring to) would be possible to implement.
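Since Aurora is a plain PyTorch model, a manual pipeline-style split is one way this could look in principle: place successive stages on different devices and move activations between them. This is only a sketch with made-up `Encoder`/`Decoder` stand-in modules, not Aurora's actual architecture or module names, and it falls back to CPU when two GPUs aren't available:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in first stage (not Aurora's real encoder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(16, 32)

    def forward(self, x):
        return torch.relu(self.net(x))

class Decoder(nn.Module):
    """Stand-in second stage (not Aurora's real decoder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(32, 16)

    def forward(self, x):
        return self.net(x)

# Place each stage on its own GPU when two are available; otherwise
# fall back to CPU so the sketch still runs anywhere.
two_gpus = torch.cuda.device_count() >= 2
dev0 = torch.device("cuda:0") if two_gpus else torch.device("cpu")
dev1 = torch.device("cuda:1") if two_gpus else torch.device("cpu")

encoder = Encoder().to(dev0)
decoder = Decoder().to(dev1)

with torch.no_grad():
    x = torch.randn(4, 16, device=dev0)
    h = encoder(x)
    # Hand activations across the device boundary between stages.
    y = decoder(h.to(dev1))
print(y.shape)  # torch.Size([4, 16])
```

For inference only, a split like this mainly trades inter-device transfers for lower per-GPU memory; libraries such as `accelerate` can automate the placement, though whether that works out of the box for Aurora would need to be checked.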
@wesselb is this something you are looking into in the near future?
Was this not implemented for the original training in the paper (i.e., was Aurora simply trained with Data parallelism + activation checkpointing for the Swin3D backbone)?
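For reference, activation checkpointing as mentioned above (recompute intermediate activations during the backward pass instead of storing them) can be sketched with `torch.utils.checkpoint`. The `Block` module here is a generic stand-in, not Aurora's Swin3D backbone:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Generic transformer-ish block as a stand-in for a backbone layer."""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

blocks = nn.ModuleList(Block(32) for _ in range(4))
x = torch.randn(8, 32, requires_grad=True)

h = x
for blk in blocks:
    # Activations inside each block are freed after the forward pass
    # and recomputed on backward, trading compute for memory.
    h = checkpoint(blk, h, use_reentrant=False)

h.sum().backward()
print(x.grad.shape)  # torch.Size([8, 32])
```

This only reduces training memory; it doesn't help with multi-GPU inference, which is the separate question above.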
Hello, is there any way to run inference with 2 or more GPUs?