Dear authors,
We are very interested in your work on the SSVP-SLT-LSP framework, particularly the visual encoder pretrained using a method similar to masked autoencoding and language-supervised pretraining on the YouTube-ASL dataset. The results you achieved are impressive, and we believe that having access to your pretrained model weights would greatly aid in furthering research and experimentation in this area.
Could you kindly share the pretrained weights of your visual encoder used in the experiments? We are especially interested in the weights after the self-supervised pretraining and language-supervised pretraining stages, as described in your paper. This would help us replicate your results and explore further applications of your framework.
Thank you very much for your time and for contributing to the advancement of privacy-aware SLT. We look forward to hearing from you!
Best regards,
Ruiquan Zhang
Xiamen University