Tensor of input and output dimensionality different #49
Comments
The default behaviour is to take in two time steps of input (x_{t-6h}, x_t) and predict a one-step forecast (x̂_{t+6h}). So yes, I'd say that's by design.
@firatozdemir Yes, I understand the timestep dimensions will be different, but I was wondering why the latitude dimension for the output is 720 when the input is 721?
Ah, that's because observations are converted into patches of size 4x4.
Hey @KennyWu! @firatozdemir is right: the model internally drops the last row (corresponding to the south pole) so the resulting tensor can be patched with a patch size of 4x4. If you want consistency between what goes into the model and what comes out, then you could drop the last row in the input batch, meaning that you're feeding in structures with shape (1, 2, 720, 1440).
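The trimming described above can be sketched with plain tensors (a real call would operate on an `aurora.Batch`; the tensor here is a stand-in for `batch.surf_vars['10u']`):

```python
import torch

# Stand-in for batch.surf_vars['10u']: (batch, time, lat, lon) on the full
# 721-latitude grid.
surf_10u = torch.randn(1, 2, 721, 1440)

# Drop the last latitude row (the south pole) so the input grid matches the
# 720-latitude grid the model produces.
surf_10u_trimmed = surf_10u[..., :720, :]

print(surf_10u_trimmed.shape)  # torch.Size([1, 2, 720, 1440])
```

The same slice would be applied to every surface, static, and atmospheric variable in the batch so that all fields stay on a consistent grid.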
Hello, I am attempting to fine-tune the Aurora model, and I came across a dimensionality issue when using the predicted data for the loss. Here is what the input tensor dimension looks like for me:
```python
batch.surf_vars['10u'].size()
torch.Size([1, 2, 721, 1440])
```
However, with the predicted output from Aurora, I noticed the tensors now have dimension

```python
torch.Size([1, 1, 720, 1440])
```
I was wondering if this is in part by design for the model, and if it is, how you fine-tuned Aurora given the different dimensions?
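A minimal sketch of the mismatch in the loss computation: the target tensor below is illustrative, standing in for the ground truth at the forecast time on the full 721-latitude grid, while the prediction is on the 720-latitude grid. Cropping the southernmost row of the target is one way to align the two (the alternative, trimming the input batch instead, is what the maintainers suggest above):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(1, 1, 720, 1440)    # model output: one forecast step, 720 lats
target = torch.randn(1, 1, 721, 1440)  # ground truth on the full 721-lat grid

# Crop the last latitude row of the target to match the prediction grid.
loss = F.mse_loss(pred, target[..., :720, :])

print(loss.shape)  # scalar
```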