
rspmm seems to behave differently from the spmm in TorchDrug #11

Open
hellohawaii opened this issue Aug 31, 2023 · 2 comments

Comments

@hellohawaii

Hi, thank you for your code! However, I found that rspmm seems to behave differently from the spmm in TorchDrug.

How I found this:
I tried to load a checkpoint trained with the official repo into the model in this repo and found that the performance was extremely low. Comparing the results layer by layer, I found that in the `message_and_aggregate` function of `GeneralizedRelationalConv`, the result produced by `generalized_rspmm` differs from the output of `functional.generalized_rspmm` in the official repo, even though I verified that the inputs are identical.
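A layer-by-layer comparison like the one described can be done with a small helper. This is a hedged sketch with hypothetical names (not code from either repo); it just wraps `torch.allclose` and reports the largest elementwise gap:

```python
import torch

def compare_outputs(a, b, rtol=1e-5, atol=1e-6):
    """Check whether two intermediate tensors match and report the largest gap."""
    match = torch.allclose(a, b, rtol=rtol, atol=atol)
    max_diff = (a - b).abs().max().item()
    return match, max_diff
```

Calling this on the outputs of both `generalized_rspmm` implementations, fed with identical inputs, makes the divergence easy to localize.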

How to reproduce
In the `message_and_aggregate` function of `GeneralizedRelationalConv`, before the line `if self.message_func in self.message2mul:`, add:

```python
torch_drug_edge_index = torch.cat([edge_index, edge_type.unsqueeze(0)], dim=0)
torch_drug_edge_weight = edge_weight
# The adjacency below is graph.adjacency in the official NBFNet repo, which is produced by
# https://github.com/DeepGraphLearning/torchdrug/blob/6066fbd82360abb5f270cba1eca560af01b8cc90/torchdrug/data/graph.py#L801
adjacency = sparse_coo_tensor(torch_drug_edge_index, torch_drug_edge_weight,
                              size=[num_node, num_node, self.num_relation])
adjacency = adjacency.transpose(0, 1)
```

Then replace the `generalized_rspmm` call below with `functional.generalized_rspmm` from TorchDrug, just as the official repo does (the official repo uses `relation_input`, which is the `relation` in this repo).

Also, add the necessary imports to `layers.py`:

```python
from torchdrug.layers import functional
from torchdrug.utils import sparse_coo_tensor
```
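For intuition on what the transpose changes: on a `(num_node, num_node, num_relation)` adjacency, `transpose(0, 1)` swaps the head and tail axes while leaving the relation axis alone. A dense toy example (hypothetical shapes for illustration, not TorchDrug code — the real adjacency is sparse):

```python
import torch

# Toy (num_node, num_node, num_relation) adjacency, dense for clarity.
num_node, num_relation = 3, 2
adj = torch.zeros(num_node, num_node, num_relation)
adj[0, 1, 0] = 1.0   # edge 0 -> 1 with relation 0

adj_t = adj.transpose(0, 1)
# The same edge is now indexed tail-first; the relation index is unchanged.
assert adj_t[1, 0, 0] == 1.0
```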
@hellohawaii
Author

hellohawaii commented Aug 31, 2023

Well, I seem to have figured this out. If I omit the line `adjacency = adjacency.transpose(0, 1)` in the code snippet above, I get the same output as the code in this repo. So it is not a problem with the rspmm code.

Why does this repo not apply this transpose in the rspmm version? Is this a bug? Or is the code in the official repo wrong?

@KiddoZhu
Owner

Sorry for the late response. Thanks for figuring this out! I think this is not a bug, but a compatibility difference. The original and transposed adjacency matrices are equally expressive for propagation, since we always add the flipped triplets to the fact graph.
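A minimal plain-PyTorch sketch of why the two conventions are equally expressive: multiplying by the transposed adjacency is the same as propagating over the flipped edges, so a fact graph that already contains every flipped triplet covers both. This toy example uses a 2D sparse matrix (no relation axis) for simplicity and is not code from either repo:

```python
import torch

torch.manual_seed(0)
num_node = 3
edge_index = torch.tensor([[0, 1, 2],   # heads
                           [1, 2, 0]])  # tails
edge_weight = torch.tensor([1.0, 2.0, 3.0])

adj = torch.sparse_coo_tensor(edge_index, edge_weight, (num_node, num_node))
# Flipping head and tail gives the reverse edges.
adj_flipped = torch.sparse_coo_tensor(edge_index.flip(0), edge_weight,
                                      (num_node, num_node))

x = torch.randn(num_node, 4)

# Propagating with the transpose == propagating over the flipped edges.
out_transpose = torch.sparse.mm(adj.transpose(0, 1).coalesce(), x)
out_flipped = torch.sparse.mm(adj_flipped, x)
assert torch.allclose(out_transpose, out_flipped)
```

So whether a checkpoint was trained with or without the transpose, the same family of functions is reachable once inverse triplets are present; the two checkpoints are just not interchangeable.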
