
Where should I set the target_layer when applying Gradcam? #512

Open
jaehyeok-jeon opened this issue Dec 12, 2024 · 0 comments

Hello, I am trying to apply Grad-CAM to the RT-DETR_pytorch model.

For conventional models such as ResNet, the usual approach is to attach a hook to the act layer of layer4 (the last stage of the feature extractor) and compute Grad-CAM from that layer's feature maps and their gradients.

For transformer-based models such as RT-DETR, I am unsure whether I should hook the backbone/res_layers/3/blocks/2/act layer, which is the last layer of the backbone, or instead hook the three input_proj layers that feed the decoder.

If a different layer should be hooked instead, I would appreciate it if you could tell me which one.
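For reference, here is the hook mechanism I am using, as a minimal self-contained sketch in plain PyTorch. The `TinyBackbone` model and its layer names are hypothetical stand-ins, not RT-DETR's actual modules; the idea is that whichever layer is chosen (last backbone stage or an input_proj), the pattern would be the same: capture the layer's forward activation, capture the gradient flowing back into it, then form the ReLU-weighted sum.

```python
# Minimal Grad-CAM hook sketch. TinyBackbone is a hypothetical stand-in
# for a detector backbone; only the hook pattern is the point here.
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.act = nn.ReLU()
        self.conv2 = nn.Conv2d(8, 4, 3, padding=1)  # candidate target layer
        self.head = nn.Linear(4, 2)

    def forward(self, x):
        feat = self.conv2(self.act(self.conv1(x)))
        pooled = feat.mean(dim=(2, 3))              # global average pool
        return self.head(pooled)

activations, gradients = {}, {}

def save_activation(module, inp, out):
    # Forward hook: stash the feature map, and register a tensor hook
    # to catch the gradient that flows back into it during backward().
    activations["feat"] = out
    out.register_hook(lambda g: gradients.__setitem__("feat", g))

model = TinyBackbone()
# Hook the layer whose spatial feature map you want to explain; for
# RT-DETR this would be e.g. the last backbone stage or an input_proj.
model.conv2.register_forward_hook(save_activation)

x = torch.randn(1, 3, 16, 16)
logits = model(x)
logits[0, 1].backward()                             # class score of interest

# Grad-CAM: channel weights = spatially pooled gradients,
# then a ReLU over the weighted sum of activation channels.
w = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((w * activations["feat"]).sum(dim=1))
print(cam.shape)  # spatial map at the hooked layer's resolution
```

The resulting `cam` has the spatial resolution of whichever layer was hooked, which is why the choice of target layer matters: the last backbone stage gives the coarsest, most semantic map, while earlier layers give finer but noisier maps.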
