[ET-VK] Don't specify memory layouts when testing (#7324)
As title. Now that the memory metadata tagging pass automatically determines the optimal memory layout for each operator, there is no need to specify which memory layout to test in the Python export tests.

There were some issues with the memory metadata tagging pass when dealing with nodes that contain tensor lists; these have been fixed as part of this diff as well.
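The tensor-list issue likely comes down to how a node's `meta["val"]` is inspected: checking `isinstance(node.meta["val"], FakeTensor)` directly rejects nodes whose `val` is a *list* of tensors (e.g. ops that return tensor lists), whereas a helper like `utils.is_tensor_node` can accept both cases. Below is a minimal, self-contained sketch of such a helper. The name matches the diff, but the body is an illustration: the real implementation checks `FakeTensor` from `torch._subclasses`, while here a duck-typed `is_tensor` predicate stands in so the sketch runs without torch.

```python
def is_tensor_node(node, is_tensor=lambda v: hasattr(v, "shape")) -> bool:
    """Return True if the node's "val" metadata is a single tensor, or a
    non-empty list/tuple of tensors (e.g. the output of a tensor-list op).

    `is_tensor` is a stand-in predicate; the real pass checks FakeTensor.
    """
    val = node.meta.get("val", None)
    if isinstance(val, (list, tuple)):
        # Tensor-list case: every element must be a tensor.
        return len(val) > 0 and all(is_tensor(v) for v in val)
    return is_tensor(val)
```

With a helper of this shape, both the storage/layout proposal loops and `should_annotate` can share one check instead of repeating the `isinstance(..., FakeTensor)` test, which is what the diff below does.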

Differential Revision: [D67180897](https://our.internmc.facebook.com/intern/diff/D67180897/)

ghstack-source-id: 258028337
Pull Request resolved: #7322

Co-authored-by: Stephen Jia <[email protected]>
pytorchbot and SS-JIA authored Dec 13, 2024
1 parent 037a0d3 commit 8460d42
Showing 3 changed files with 43 additions and 86 deletions.
17 changes: 8 additions & 9 deletions backends/vulkan/_passes/tag_memory_meta_pass.py
@@ -23,8 +23,6 @@

 from executorch.exir.pass_base import ExportPass, PassResult

-from torch._subclasses.fake_tensor import FakeTensor
-
 from torch.fx.passes.tools_common import NodeList
 from torch.fx.passes.utils.fuser_utils import topo_sort

@@ -138,9 +136,7 @@ def propose_node_storage(
             return storage

         for arg in node.args:
-            if isinstance(arg, torch.fx.Node) and isinstance(
-                arg.meta["val"], FakeTensor
-            ):
+            if isinstance(arg, torch.fx.Node) and utils.is_tensor_node(arg):
                 storage = utils.get_node_storage_type(arg)
                 if storage is not None and storage in valid_storage_types:
                     return storage

@@ -178,9 +174,7 @@ def propose_node_layout(
             return layout

         for arg in node.args:
-            if isinstance(arg, torch.fx.Node) and isinstance(
-                arg.meta["val"], FakeTensor
-            ):
+            if isinstance(arg, torch.fx.Node) and utils.is_tensor_node(arg):
                 layout = utils.get_node_memory_layout(arg)
                 if layout is not None and layout in valid_layouts:
                     return layout

@@ -202,14 +196,19 @@ def should_annotate(self, node) -> bool:
         if not isinstance(node, torch.fx.Node):
             return False

-        if not isinstance(node.meta["val"], FakeTensor):
+        if not utils.is_tensor_node(node):
             return False

         # Storage type and memory layout for tensorref will be determined at runtime
         # so there's no use in setting those attributes ahead of time.
         if node.meta.get("vkdg_tensorref", False):
             return False

+        # Skip annotating output node. The output tensors should be annotated by the
+        # time the output node is observed.
+        if node.op == "output":
+            return False
+
         return True

     def should_delay_annotation(self, node: torch.fx.Node) -> bool:
