Thanks for sharing your great work!
I ran DCVC-TCM with the released weights, without updating them, and added noise sampled from a uniform distribution to test the performance of the released weights on the training set (Vimeo-90K) mentioned in the paper, like this:
def quant(self, x):
    if self.training:
        # Training: add uniform noise in [-0.5, 0.5) as a differentiable proxy for rounding
        return x + torch.nn.init.uniform_(torch.zeros_like(x), -0.5, 0.5)
    else:
        # Inference: hard rounding
        return torch.round(x)
However, I found that the loss with additive uniform noise (0.295 bpp at 39.335 dB PSNR) is much larger than with the rounding operation (0.112 bpp at 42.291 dB PSNR).
Is this a normal phenomenon, or did I use the wrong quantization method?
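For reference, another quantization proxy commonly used in learned codecs is the straight-through estimator (STE): hard rounding in the forward pass, identity gradient in the backward pass. A minimal sketch is below; the function name quant_ste is mine for illustration and is not claimed to be what DCVC-TCM uses.

import torch

def quant_ste(x: torch.Tensor) -> torch.Tensor:
    # Forward: behaves exactly like torch.round(x).
    # Backward: gradient flows through the identity term x,
    # since the detached residual (round(x) - x) has no gradient.
    return x + (torch.round(x) - x).detach()

Because the forward pass uses the same rounding as inference, the rate and distortion measured with quant_ste during training tend to match the test-time numbers more closely than the additive-noise proxy does.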