
Adding functional batch_norm to BatchNorm2d substitution #868

Merged: 6 commits merged into sony:main from batch_norm (Dec 11, 2023)

Conversation

@edenlum (Contributor) commented Nov 23, 2023

Pull Request Description:

Checklist before requesting a review:

  • I set the appropriate labels on the pull request.
  • I have added/updated the release note draft (if necessary).
  • I have updated the documentation to reflect my changes (if necessary).
  • All functions and files are well documented.
  • All functions and classes have type hints.
  • There is a license in all files.
  • The function and variable names are informative.
  • I have checked for code duplications.
  • I have added new unit tests (if necessary).
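
Editor's note: the substitution this PR adds maps calls to torch.nn.functional.batch_norm onto an equivalent nn.BatchNorm2d node. A minimal sketch of the eval-mode equivalence being relied on; the shapes, the copied statistics, and the tolerance are illustrative assumptions, not code from this PR:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    channels = 8
    x = torch.randn(2, channels, 4, 4)
    running_mean, running_var = torch.zeros(channels), torch.ones(channels)
    weight, bias = torch.rand(channels), torch.rand(channels)

    # Functional form, as it may appear in a model handed to MCT.
    y_functional = F.batch_norm(x, running_mean, running_var, weight=weight, bias=bias)

    # Module form the substitution maps it to (eval mode, same statistics).
    bn = nn.BatchNorm2d(channels).eval()
    with torch.no_grad():
        bn.running_mean.copy_(running_mean)
        bn.running_var.copy_(running_var)
        bn.weight.copy_(weight)
        bn.bias.copy_(bias)

    assert torch.allclose(y_functional, bn(x), atol=1e-6)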

@haihabi (Collaborator) commented Nov 24, 2023

Also link this PR to the open issue


def get_attributes_from_inputs(self, graph: Graph, node: BaseNode) -> dict:
    input_nodes = graph.get_prev_nodes(node)
    if len(input_nodes) == 5:
Collaborator:
make sure len(input_nodes) == 4 isn't an option

Contributor (Author):
there's an error in the "else" branch

Collaborator:
I saw that. I meant that it's not an option in torch. For example, if it is created like this:
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=weight, bias=None)
the bias is still created.

Contributor (Author):
if weight/bias is None, it doesn't create the input node
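
Editor's note: in PyTorch the signature is torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05), so weight and bias only appear as extra tensor inputs when they are actually passed as tensors. A small torch.fx sketch of the behaviour being discussed; the module and attribute names are illustrative, and MCT's own graph reader may count inputs differently:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.fx import symbolic_trace

    class FunctionalBN(nn.Module):
        """Calls F.batch_norm directly instead of using nn.BatchNorm2d."""
        def __init__(self, channels=8, with_bias=True):
            super().__init__()
            self.register_buffer('running_mean', torch.zeros(channels))
            self.register_buffer('running_var', torch.ones(channels))
            self.weight = nn.Parameter(torch.ones(channels))
            self.bias = nn.Parameter(torch.zeros(channels)) if with_bias else None

        def forward(self, x):
            return F.batch_norm(x, self.running_mean, self.running_var,
                                weight=self.weight, bias=self.bias)

    for with_bias in (True, False):
        traced = symbolic_trace(FunctionalBN(with_bias=with_bias))
        bn_node = next(n for n in traced.graph.nodes if n.target is F.batch_norm)
        # 5 tensor inputs (x, mean, var, weight, bias) when bias is a tensor,
        # only 4 when bias=None and the node never gets a bias input.
        print(with_bias, len(bn_node.all_input_nodes))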

@edenlum linked an issue on Nov 28, 2023 that may be closed by this pull request
@elad-c (Collaborator) left a comment:
See leftover comments



def batch_norm_wrapper(channels):
    return partial(nn.functional.batch_norm,
                   running_mean=torch.zeros(channels, device='cuda'),
                   running_var=torch.ones(channels, device='cuda'))
running_mean=0+torch.randn(channels, device='cuda'),
Collaborator:
you'll probably need to select the device accordingly, because tests are run on CPU in GitHub
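
Editor's note: a minimal sketch of one way to make the fixture device-agnostic, per the comment above; the use of torch.cuda.is_available() here is illustrative, not necessarily what the PR ended up doing:

    from functools import partial

    import torch
    import torch.nn as nn

    # Resolve the device at runtime so the same test code works both on
    # GitHub's CPU-only runners and on local CUDA machines.
    device = 'cuda' if torch.cuda.is_available() else 'cpu'

    def batch_norm_wrapper(channels):
        return partial(nn.functional.batch_norm,
                       running_mean=torch.zeros(channels, device=device),
                       running_var=torch.ones(channels, device=device))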

@Idan-BenAmi merged commit db41d92 into sony:main on Dec 11, 2023
24 checks passed
@edenlum deleted the batch_norm branch on December 12, 2023 at 07:30
Successfully merging this pull request may close these issues:

torch.nn.functional.batch_norm - MCT support