Hi,
I am trying to train Ray-ONet on custom data. I have prepared my data twice, but I get the same error during training:
```
Traceback (most recent call last):
  File "trainVTPxy90CLAHE.py", line 139, in <module>
    loss = trainer.train_step(batch)
  File "/home/ubuntu/ray-onet/im2mesh/rayonet/training.py", line 52, in train_step
    loss = self.compute_loss(data)
  File "/home/ubuntu/ray-onet/im2mesh/rayonet/training.py", line 202, in compute_loss
    occ_pred = self.model.decode(scale_factor, points_xy, c, c_local)  # (B, n_points, num_samples)
  File "/home/ubuntu/ray-onet/im2mesh/rayonet/models/__init__.py", line 75, in decode
    logits = self.decoder(scale_factor, points_xy, c_global, c_local)
  File "/home/ubuntu/anaconda3/envs/rayonet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/ray-onet/im2mesh/rayonet/models/decoder.py", line 166, in forward
    net = self.fc_geo1(net)
  File "/home/ubuntu/anaconda3/envs/rayonet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/ray-onet/im2mesh/layers.py", line 40, in forward
    net = self.fc_0(self.actvn(x))
  File "/home/ubuntu/anaconda3/envs/rayonet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/rayonet/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/ubuntu/anaconda3/envs/rayonet/lib/python3.6/site-packages/torch/nn/functional.py", line 1372, in linear
    output = input.matmul(weight.t())
RuntimeError: size mismatch, m1: [65536 x 258], m2: [259 x 256] at /tmp/pip-req-build-808afw3c/aten/src/THC/generic/THCTensorMathBlas.cu:290
```
Maybe I am missing something during data preparation.
Any help would be greatly appreciated.
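For context, here is a minimal sketch of what the error message encodes. `F.linear` computes `input.matmul(weight.t())`, so the trailing dimension of the input (258 here) must equal the layer's `in_features` (259 here): the tensor reaching `fc_geo1` is one feature short of what the layer was constructed with, which is consistent with a missing channel (e.g. from the `scale_factor` or `points_xy` concatenation) introduced during data preparation. The helper below is purely illustrative, not part of the Ray-ONet code; the shapes are taken from the error message.

```python
# Sketch of the shape check behind F.linear: output = input.matmul(weight.t()).
# A matmul of (m, k1) with (k2, n) requires k1 == k2; here 258 != 259, so the
# decoder input is one feature short of what fc_geo1 expects.

def linear_output_shape(input_shape, weight_shape):
    """weight_shape is (out_features, in_features), as in torch.nn.Linear."""
    m, k = input_shape
    out_features, in_features = weight_shape
    if k != in_features:
        # Reproduce the message format from the traceback above.
        raise RuntimeError(
            f"size mismatch, m1: [{m} x {k}], "
            f"m2: [{in_features} x {out_features}]")
    return (m, out_features)

# fc_geo1 behaves like Linear(in_features=259, out_features=256),
# but the decoder feeds it only 258 features per point:
try:
    linear_output_shape((65536, 258), (256, 259))
except RuntimeError as e:
    print(e)  # size mismatch, m1: [65536 x 258], m2: [259 x 256]

# With the expected 259 input features the multiply goes through:
print(linear_output_shape((65536, 259), (256, 259)))  # (65536, 256)
```

So the fix is on the data side: whatever tensor is concatenated into the decoder input needs one more dimension than it currently has, and comparing the shapes of `scale_factor`, `points_xy`, `c`, and `c_local` against a run on the authors' data should show which one lost it.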