
Refinement Module #154

Open
ahm-nq opened this issue Dec 30, 2024 · 1 comment

Comments


ahm-nq commented Dec 30, 2024

First of all, thank you for your incredible work on BiRefNet and for making it publicly available—it’s truly inspiring to see such dedication and innovation in the field.

I’ve been fine-tuning BiRefNet on a car segmentation dataset using the General_244.pth checkpoint and the same configuration provided in the original GitHub repository. However, after 164 epochs of fine-tuning, the results I’m achieving don’t seem to match the quality of the pretrained model.

While exploring the configuration file, I noticed an option to choose a Refinement Module, which is set to an empty string by default. Could this be a potential reason why my results aren’t as refined as those of the pretrained model?
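For reference, the option I mean looks roughly like the sketch below in config.py. I'm quoting from memory of my local checkout, so the exact attribute and option names ('RefUNet', 'Refiner', 'RefinerPVTInChannels4') may differ in other versions:

```python
# config.py (sketch from my local copy -- names may differ in your version).
# The refinement module is chosen by index; index 0 selects the empty string,
# i.e. no refiner, which appears to be the default.
self.refine = ['', 'itself', 'RefUNet', 'Refiner', 'RefinerPVTInChannels4'][0]
```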

Additionally, I couldn’t find explicit information in the paper about whether Refinement Modules were used during the training of the pretrained BiRefNet. Did you incorporate Refinement Modules to achieve the results shared in the repository? Any insights or guidance would be immensely helpful.

Thank you for your time and for creating such a remarkable contribution to segmentation research!

Let me know if you would like to see some sample results.

ZhengPeng7 (Owner) commented

Hi, thanks for your interest in looking deeper into my code.

Some refinement blocks can help, but the relevant code is messy and the GPU memory cost can be extremely high. For these reasons, they were not used in my implementation.

I tested some refiners with SwinB as the backbone to keep the GPU memory cost during training lower. In a relatively fair comparison, the variant with a refiner does bring some improvement, as the screenshot below shows.

[Screenshot, 2025-01-02: metric comparison of training with vs. without a refiner]
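For anyone unfamiliar with what "refiner" means here, the general pattern is a coarse-to-fine cascade: the main network predicts a coarse mask, and a lightweight second network refines it conditioned on the image. The sketch below is only a generic illustration of that idea, not the actual refiner code in this repository; `RefinedSegmenter`, `main_net`, and `refiner` are hypothetical names.

```python
import torch
import torch.nn as nn

class RefinedSegmenter(nn.Module):
    """Generic coarse-to-fine wrapper (illustration only, not BiRefNet's refiner)."""

    def __init__(self, main_net: nn.Module, refiner: nn.Module):
        super().__init__()
        self.main_net = main_net  # main segmentation network, outputs a coarse mask
        self.refiner = refiner    # small network taking image + coarse mask (4 channels) as input

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        coarse = self.main_net(image)                               # (B, 1, H, W) coarse mask
        refined = self.refiner(torch.cat([image, coarse], dim=1))   # refine conditioned on image + coarse mask
        return refined
```

The extra forward pass and the high-resolution intermediate features of the refiner are what drive up GPU memory during training, which matches the concern mentioned above.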
