Thanks for your great work. I have a question about sampling.

In your paper, you use hard negatives (HN) and soft negatives (SN) to sample negative items:

"In HN, the negative item in each triplet is selected as the closest to the anchor in a batch."
"SN refers to picking the furthest negative item to the anchor within the batch."

However, the sampling method described in the paper and the one implemented in the code appear to be opposite. Is there anything wrong in this code?

semantic_adaptive_margin/model_CVSE.py, line 727 in 1e8bf2f
semantic_adaptive_margin/model_CVSE.py, line 730 in 1e8bf2f
@Li-Zheng-94
Thanks for showing interest in our work.

According to the documentation at https://pytorch.org/docs/stable/generated/torch.topk.html, if `largest` is `False`, the k smallest elements are returned, i.e. the ones closest in distance to the anchor. This is what we use in HARD NEGATIVE sampling. You can see it clearly here:

```python
elif metric_samples == 'hard':
    # topk with largest=False returns the k smallest costs per row; entry 0
    # is the best match (the positive pair), so [:, 1] picks the closest negative.
    hard_neg_indexes = (torch.topk(cost_s, 2, dim=1, largest=False)[1][:, 1],
                        torch.topk(cost_im, 2, dim=1, largest=False)[1][:, 1])
```

Let me know if I have understood the question correctly.
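For anyone else reading this thread, here is a minimal, self-contained sketch (not from the repository) of what that `topk` call does. It assumes a toy cost matrix of pairwise distances whose diagonal holds the positive pairs, so the diagonal is the smallest entry in each row; the names `cost`, `hard_neg`, and `soft_neg` are hypothetical:

```python
import torch

# Hypothetical 4x4 cost matrix: cost[i][j] is the distance between anchor i
# and item j; the diagonal (the positive pair) is the smallest entry per row.
cost = torch.tensor([[0.0, 0.3, 0.9, 0.5],
                     [0.3, 0.0, 0.2, 0.8],
                     [0.9, 0.2, 0.0, 0.4],
                     [0.5, 0.8, 0.4, 0.0]])

# k=2 smallest per row: column 0 is the positive (the diagonal), so
# column 1 is the closest negative, i.e. the hard negative.
hard_neg = torch.topk(cost, 2, dim=1, largest=False)[1][:, 1]
print(hard_neg)  # tensor([1, 2, 1, 2])

# With largest=True the same call returns the furthest item instead, which
# matches the paper's description of soft negatives; no positive needs to
# be skipped here, so k=1 suffices.
soft_neg = torch.topk(cost, 1, dim=1, largest=True)[1][:, 0]
print(soft_neg)  # tensor([2, 3, 0, 1])
```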