Questions about soft assignment #1
During training, the output features produced by the proposed CVCL model become more similar within a single view if they belong to the same cluster. This benefits from the strength of deep neural networks (DNNs) in representation learning. Moreover, the output features of a single sample across the multiple views are pulled closer together by the cross-view contrastive loss term. In particular, in CVCL the output features are interpreted as the cluster assignments. Please don't hesitate to contact me if you have any further questions. Thank you.
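To make the mechanism concrete, here is a minimal numpy sketch of how a cluster-level cross-view contrastive term can operate on soft assignments. The function name, the temperature value, and the InfoNCE-style form are illustrative assumptions, not necessarily the paper's exact implementation: columns of the assignment matrix are treated as cluster representations, and the same cluster in two views forms a positive pair.

```python
import numpy as np

def cross_view_cluster_contrastive(h1, h2, temperature=0.5):
    """Sketch of a cluster-level cross-view contrastive loss (assumed form).

    h1, h2: (n_samples, k) soft cluster assignments from two views.
    Each column is one cluster's assignment profile; matching columns
    across views are positives, all other columns are negatives.
    """
    c1 = h1.T  # (k, n_samples): one row per cluster
    c2 = h2.T
    # L2-normalise rows so dot products are cosine similarities.
    c1 = c1 / np.linalg.norm(c1, axis=1, keepdims=True)
    c2 = c2 / np.linalg.norm(c2, axis=1, keepdims=True)
    sim = c1 @ c2.T / temperature  # (k, k) cross-view similarity
    # InfoNCE-style objective: the diagonal entry (same cluster,
    # different view) should dominate each row.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimising this loss pushes the two views' assignment profiles for the same cluster together while keeping different clusters apart, which is why per-sample assignments agree across views after training.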
Thank you very much for your reply!
I have carefully checked this dataset. It contains more than 2,300 samples belonging to 20 categories. The recommended parameter combinations in the CVCL code are not suitable for this dataset. I simply tested one group of parameters for it and obtained an accuracy (ACC) above 50% with CVCL. Please feel free to contact me if you are interested. Here is my email: [email protected] Thank you.
The following results can be reproduced using the trained model. dim_high_feature = 32. The related code and the trained model are available at:
Dear author,
I'm wondering: if the initial soft cluster-assignment probabilities h produced by the MLP are mostly incorrect, wouldn't the confident assignment probabilities p become even more inaccurate after confidence enhancement? In such a scenario, how does the cluster-level contrastive loss ensure that the assignment process moves in the correct direction?
Thank you!
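For readers following this exchange, here is a sketch of one common confidence-enhancement scheme, the DEC-style target distribution (square each probability and renormalise). This is an assumed form for illustration, not confirmed to be CVCL's exact formula; it shows why sharpening amplifies whatever structure the initial h already has, which is the crux of the question above.

```python
import numpy as np

def sharpen(h, eps=1e-12):
    """DEC-style target distribution (assumed sharpening scheme).

    h: (n_samples, k) soft assignments with rows summing to 1.
    Squaring each probability and normalising by cluster frequency
    pushes already-confident rows closer to one-hot, while rows that
    are near-uniform stay near-uniform.
    """
    # Weight by squared probability, discounted by cluster size.
    weight = h ** 2 / (h.sum(axis=0, keepdims=True) + eps)
    # Renormalise each row back to a probability distribution.
    return weight / weight.sum(axis=1, keepdims=True)
```

Because the enhancement only amplifies the relative confidence already present in h, the correction signal has to come from elsewhere: the view-consistency enforced by the contrastive loss, which penalises assignments that disagree across views, tends to suppress incorrect initial assignments rather than reinforce them.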