Thank you for reporting this issue and for taking the time to bring it to our attention. Unfortunately, the original author of this repository has since graduated and taken a position in industry, and they no longer plan to maintain the codebase. If anyone in the community is willing to help resolve this issue, I would be very grateful, and I would be happy to merge any fix that is proposed and tested.
Hi,
I noticed an inconsistency between the implementation and what you describe in the paper.
In the paper, the self-attention module is used to self-attend among the keypoint features, which are extracted from the support features, and the cross-attention module is then used to cross-attend the resulting keypoint features to the query features.
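For concreteness, the ordering described in the paper corresponds roughly to the sketch below; the module setup, tensor shapes, and the use of nn.MultiheadAttention are my own illustration, not code from this repository.

```python
# Illustrative sketch only: module names, tensor shapes, and the use of
# nn.MultiheadAttention are my assumptions, not code from this repository.
import torch
import torch.nn as nn

embed_dim, num_heads = 256, 8
self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

keypoint_feats = torch.randn(2, 17, embed_dim)   # keypoint features extracted from support features
query_feats = torch.randn(2, 4096, embed_dim)    # flattened query-image features

# Paper: self-attention among the keypoint features ...
kpt, _ = self_attn(keypoint_feats, keypoint_feats, keypoint_feats)
# ... then cross-attention from the resulting keypoint features to the query features.
kpt, _ = cross_attn(kpt, query_feats, query_feats)
```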
However, in the implementation, the self-attention module is used to self-attend among the query features:
Pose-for-Everything/pomnet/models/keypoint_heads/transformer_head.py, line 89 (commit b28951b)
And the cross-attention module is then used to cross-attend the resulting query features to the keypoint features:
Pose-for-Everything/pomnet/models/keypoint_heads/transformer_head.py, line 99 (commit b28951b)
In this file, x holds the query features, and query_embed is in fact the keypoint features.
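In other words, as I read it the implementation follows roughly the ordering sketched below; again, the tensor shapes and the use of nn.MultiheadAttention are illustrative assumptions on my part, not the file's actual code.

```python
# Illustrative sketch only: variable names follow the file, but shapes and the
# use of nn.MultiheadAttention are my assumptions, not the actual code.
import torch
import torch.nn as nn

embed_dim, num_heads = 256, 8
self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(2, 4096, embed_dim)           # x: the query features
query_embed = torch.randn(2, 17, embed_dim)   # query_embed: in fact the keypoint features

# Implementation: self-attention among the query features ...
x, _ = self_attn(x, x, x)
# ... then cross-attention from the resulting query features to the keypoint features.
x, _ = cross_attn(x, query_embed, query_embed)
```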