
Why is bn frozen twice here? #9

Open
rainylt opened this issue Oct 12, 2020 · 2 comments


rainylt commented Oct 12, 2020

In FCOS.Pytorch/model/fcos.py:

        def freeze_bn(module):
            # put BatchNorm layers in eval mode so they use the stored
            # running statistics instead of per-batch statistics
            if isinstance(module, nn.BatchNorm2d):
                module.eval()
            # additionally stop gradients for any BatchNorm* parameters
            # (weight/bias), so the optimizer never updates them
            classname = module.__class__.__name__
            if classname.find('BatchNorm') != -1:
                for p in module.parameters():
                    p.requires_grad = False

Since module.eval() has already frozen the bn layers, why do you additionally set p.requires_grad = False? Is there another module whose class name matches BatchNorm*?

VectXmy (Owner) commented Oct 12, 2020

eval() is not equivalent to setting requires_grad=False.
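
A minimal sketch (not from this repo) of the distinction: eval() changes how BatchNorm computes its forward pass and stops updating the running statistics, but the affine parameters (weight and bias) still receive gradients; only requires_grad = False prevents them from being trained.

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm2d(3)
    bn.eval()  # forward uses running stats; running stats stop updating

    x = torch.randn(2, 3, 4, 4)
    bn(x).sum().backward()
    print(bn.weight.grad is None)   # False: eval() alone does not stop gradients

    bn.weight.grad = None
    for p in bn.parameters():
        p.requires_grad = False     # this is what actually freezes the params

    x = torch.randn(2, 3, 4, 4, requires_grad=True)
    bn(x).sum().backward()          # gradients still flow back to the input...
    print(bn.weight.grad is None)   # True: ...but not to the frozen params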

rainylt (Author) commented Oct 13, 2020

> eval() is not equivalent to setting requires_grad=False.

Thank you for the quick reply. I have learned that turning off requires_grad can speed this module up, but at inference time we always wrap the forward pass in torch.no_grad() to achieve the same thing, as in line 153 of eval.py. So does that mean turning off requires_grad in freeze_bn is meant for some situation during training, or is it just to make the code robust?

I'm learning PyTorch through this code, so some of my questions may be naive; please bear with me.
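
A short sketch of the training-time case the question asks about (the model and optimizer setup here are illustrative, not from this repo): torch.no_grad() disables autograd for everything and so cannot be used while the rest of the network is training, whereas requires_grad = False freezes only the BN parameters, skipping their gradient computation and letting the optimizer be built from trainable parameters only.

    import torch
    import torch.nn as nn

    # toy model: a trainable conv followed by a frozen BatchNorm
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))
    model[1].eval()                     # freeze BN running statistics
    for p in model[1].parameters():
        p.requires_grad = False        # freeze BN weight/bias

    # hand only the trainable parameters to the optimizer
    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=0.01)

    x = torch.randn(2, 3, 16, 16)
    loss = model(x).mean()
    loss.backward()                     # conv gets gradients, frozen BN does not
    optimizer.step()                    # BN stays exactly as it was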
