First of all, thank you for your incredible work on BiRefNet and for making it publicly available—it’s truly inspiring to see such dedication and innovation in the field.
I’ve been fine-tuning BiRefNet on a car segmentation dataset using the General_244.pth checkpoint and the same configuration provided in the original GitHub repository. However, after 164 epochs of fine-tuning, the results I’m achieving don’t seem to match the quality of the pretrained model.
While exploring the configuration file, I noticed an option to choose a Refinement Module, which is set to an empty string by default. Could this be a potential reason why my results aren’t as refined as those of the pretrained model?
Additionally, I couldn’t find explicit information in the paper about whether Refinement Modules were used during the training of the pretrained BiRefNet. Did you incorporate Refinement Modules to achieve the results shared in the repository? Any insights or guidance would be immensely helpful.
Thank you for your time and for creating such a remarkable contribution to segmentation research!
Let me know if you would like to see some sample results.
Hi, thanks for your interest in looking deeper into my code.
Some refinement blocks can help, but the relevant code is messy and the GPU memory cost can be extremely high. For those reasons, they were not used in my implementation.
I did test some refiners with Swin-B as the backbone to keep GPU memory usage lower during training. In a relatively fair comparison, the variant with a refiner did bring some improvement, as the screenshot below shows.
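For reference, enabling a refiner is just a config change. Below is a minimal sketch of what that toggle might look like; the field names (`bb`, `refine`) and the option strings are assumptions based on this discussion, so check the actual `config.py` in your copy of the repository for the exact names.

```python
# Hypothetical sketch of enabling a refiner in BiRefNet's config.
# Field names and option strings are assumptions for illustration only;
# consult the repository's config.py for the real choices.

class Config:
    def __init__(self):
        # Backbone: a Swin-B variant was used in the comparison above to keep
        # GPU memory usage manageable during training.
        self.bb = 'swin_v1_b'      # assumed option string

        # Refinement module: an empty string means "no refiner", which is what
        # the released pretrained weights were trained with (per the reply above).
        self.refine = 'RefUNet'    # assumed option string; '' disables refinement
```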