There are some differences between the paper and the code #4
In Figure 2 of the paper, instance uncertainty is first maximized and then minimized, but the code seems to do the opposite, and the frozen layers in the corresponding stages of the code are also reversed.

Comments
Yes, we have noticed this. However, some of our previous experiments have shown that if the order of the max step and the min step is reversed (including the frozen layers), the performance changes very little. We are reproducing this now and will provide the log file soon, so you can compare the log files for the two cases.
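For readers following the discussion, here is a minimal, self-contained sketch of the alternating scheme being debated (this is not the repository's actual code). The module names (`backbone`, `f1`, `f2`), the tensor shapes, and the discrepancy-based uncertainty measure are illustrative assumptions; the point is only how one step maximizes instance uncertainty with some layers frozen while the other step minimizes it with the complementary layers frozen.

```python
import torch
import torch.nn as nn

def set_requires_grad(module: nn.Module, flag: bool):
    """Freeze or unfreeze every parameter of a module."""
    for p in module.parameters():
        p.requires_grad = flag

# Hypothetical components: a shared feature extractor and two adversarial
# instance classifiers whose prediction discrepancy serves as the uncertainty.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
f1 = nn.Linear(32, 20)   # 20 output classes, purely illustrative
f2 = nn.Linear(32, 20)

opt = torch.optim.SGD(
    list(backbone.parameters()) + list(f1.parameters()) + list(f2.parameters()),
    lr=0.01,
)

def instance_uncertainty(x):
    """Uncertainty as the mean absolute discrepancy between the two classifiers."""
    feat = backbone(x)
    return (torch.sigmoid(f1(feat)) - torch.sigmoid(f2(feat))).abs().mean()

def max_step(unlabeled_x):
    """Maximize uncertainty on unlabeled data: update f1/f2, freeze the backbone."""
    set_requires_grad(backbone, False)
    set_requires_grad(f1, True)
    set_requires_grad(f2, True)
    loss = -instance_uncertainty(unlabeled_x)  # gradient ascent on uncertainty
    opt.zero_grad()
    loss.backward()
    opt.step()

def min_step(unlabeled_x):
    """Minimize uncertainty: update the backbone, freeze f1/f2."""
    set_requires_grad(backbone, True)
    set_requires_grad(f1, False)
    set_requires_grad(f2, False)
    loss = instance_uncertainty(unlabeled_x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Order as described in Figure 2 of the paper: max step first, then min step.
x_u = torch.randn(8, 16)
max_step(x_u)
min_step(x_u)
# Swapping the two calls (together with the corresponding frozen layers) is the
# order observed in the code; per the reply above, the final performance
# changes very little either way.
```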
Here are the results under the new condition, alongside those of the last row of Tab. 1 in the paper:
Here is the output log: Google Drive | Baidu Drive (extraction code: 3nza)
Thank you! There is also a small question: in theory, the results with the initial labeled set in Figure 5 of the paper should be similar across methods. Is the difference because your algorithm trains for more epochs?
No. As described in the subsection "Active Learning Settings" in Section 4.1, MI-AOD and the other methods share the same training settings, the same initialization, and the same random seed. The reason for the performance improvement is explained in the last paragraph of Section 3.2.

The direct effect of each module is shown in Tab. 1: using 5.0% of the data, IUL improves the result from 28.31 to 30.09, and IUR further improves it to 47.18.
Thank you for your reply, it solved my confusion!
😄