
Request for Sharing Pretrained Visual Encoder Weights (Mask-Autoencoding + Language-Supervised Pretraining on YouTube-ASL) #12

Open
Triver-ac opened this issue Dec 26, 2024 · 0 comments


Dear authors,

We are very interested in your work on the SSVP-SLT-LSP framework, particularly the visual encoder pretrained with a masked-autoencoding-style objective and language-supervised pretraining on the YouTube-ASL dataset. The results you report are impressive, and access to your pretrained model weights would greatly help further research and experimentation in this area.

Could you kindly share the pretrained weights of your visual encoder used in the experiments? We are especially interested in the weights after the self-supervised pretraining and language-supervised pretraining stages, as described in your paper. This would help us replicate your results and explore further applications of your framework.
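
For context, once checkpoints are available we would expect to consume them roughly as in the sketch below. The file names, the ViT-B/16 backbone, and the state-dict layout are guesses on our side, so please correct us if your release format differs:

```python
# Hypothetical loading sketch -- checkpoint names, backbone choice, and
# state-dict layout are our assumptions, not the actual release format.
import torch
import timm

# Assumed checkpoint names for the two pretraining stages
SSL_CKPT = "ssvp_slt_mae_youtube_asl.pth"  # after masked-autoencoding pretraining
LSP_CKPT = "ssvp_slt_lsp_youtube_asl.pth"  # after language-supervised pretraining


def load_encoder(ckpt_path: str):
    """Build a ViT-B/16 backbone and load pretrained encoder weights into it."""
    encoder = timm.create_model("vit_base_patch16_224", num_classes=0)
    state = torch.load(ckpt_path, map_location="cpu")
    # Releases often wrap weights under a "model" or "state_dict" key
    if isinstance(state, dict):
        state = state.get("model", state.get("state_dict", state))
    missing, unexpected = encoder.load_state_dict(state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return encoder


# encoder = load_encoder(LSP_CKPT)
```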

Thank you very much for your time and for contributing to the advancement of privacy-aware SLT. We look forward to hearing from you!

Best regards,
Ruiquan Zhang
Xiamen University
