
The architecture in the implementation is not consistent with what was described in the paper #13

Open
kltrock opened this issue Nov 2, 2022 · 1 comment


kltrock commented Nov 2, 2022

Hi,

I noticed an inconsistency between the implementation and what is described in the paper.
In the paper, the self-attention module attends between keypoint features (which are extracted from the support features), and the cross-attention module then cross-attends the resulting keypoint features to the query features.

However, in the implementation, the self-attention module attends between query features:

And the cross-attention module then cross-attends the resulting query features to the keypoint features:

In this file, x holds the query features, and query_embed is in fact the keypoint features.
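To make the discrepancy concrete, here is a minimal pure-Python sketch of the two orderings. All names (`attend`, `keypoint_feats`, `query_feats`) are illustrative and are not taken from the repository; this is just scaled dot-product attention over toy vectors, not the actual model code.

```python
import math

def attend(queries, keys, values):
    """Toy scaled dot-product attention over lists of feature vectors."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values)) for j in range(d)])
    return out

# Hypothetical toy features, purely for illustration
keypoint_feats = [[1.0, 0.0], [0.0, 1.0]]   # extracted from support features
query_feats    = [[0.5, 0.5], [1.0, 1.0]]   # query image features

# Ordering described in the paper:
# self-attend keypoint features, then cross-attend them to query features
kp = attend(keypoint_feats, keypoint_feats, keypoint_feats)  # self-attention
paper_out = attend(kp, query_feats, query_feats)             # cross-attention

# Ordering found in the implementation (as this issue reports):
# self-attend query features, then cross-attend them to keypoint features
q = attend(query_feats, query_feats, query_feats)            # self-attention
impl_out = attend(q, keypoint_feats, keypoint_feats)         # cross-attention

print(paper_out)
print(impl_out)
```

On these toy inputs the two orderings already produce different outputs, so the swap is not a cosmetic detail: which feature set plays the role of queries determines what the decoder's output tokens represent.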

@jin-s13
Collaborator

jin-s13 commented May 23, 2024

Thank you for reporting this issue. I appreciate you taking the time to bring this to our attention. Unfortunately, the original author of this repository has since graduated and taken a position in industry. They no longer have plans to maintain the codebase going forward. If anyone in the community would be willing to help resolve this issue, I would be incredibly grateful. I would be happy to merge any fix that is proposed and tested.
