This repository has been archived by the owner on Sep 1, 2024. It is now read-only.
Question about the experimental part of the paper "VISUALVOICE: Audio-Visual Speech Separation with Cross-Modal Consistency": the Audio-Only entry in Table 1
#23 · Open · attutude opened this issue on May 10, 2022 · 0 comments
Dear Dr. Gao:
Hi!
Thank you for your excellent work on the paper "VISUALVOICE: Audio-Visual Speech Separation with Cross-Modal Consistency". I have a question about the experimental section. Do you have the code for the Audio-Only [79] baseline in Table 1? When I removed the visual information from the VisualVoice code and ran audio-only separation, I could not match the Audio-Only [79] result reported in Table 1. If you have the Audio-Only [79] code, could you send it to me so I can study it? I would appreciate any help, thanks!
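
For context, below is a minimal sketch of what "removing the visual information" might look like. It is not the actual VisualVoice implementation: the class name, feature dimensions, and `use_visual` flag are hypothetical placeholders that only illustrate zeroing out the visual embedding before fusion so the separator sees audio features alone. One possible reason for the gap is that an audio-visual model ablated this way at test time is not the same as an audio-only baseline trained from scratch (typically with a permutation-invariant objective), so the two numbers may not be directly comparable.

```python
# Hypothetical sketch (not the VisualVoice code): ablate the visual stream by
# replacing the visual embedding with zeros before audio-visual fusion.
import torch
import torch.nn as nn

class ToyAudioVisualSeparator(nn.Module):
    """Minimal stand-in for an audio-visual separation network."""
    def __init__(self, audio_dim=256, visual_dim=128, hidden_dim=256):
        super().__init__()
        self.fusion = nn.Linear(audio_dim + visual_dim, hidden_dim)
        self.mask_head = nn.Linear(hidden_dim, audio_dim)

    def forward(self, audio_feat, visual_feat, use_visual=True):
        if not use_visual:
            # Audio-only ablation: the visual embedding is zeroed, so fusion
            # receives no speaker-identity cues from the video stream.
            visual_feat = torch.zeros_like(visual_feat)
        fused = torch.relu(self.fusion(torch.cat([audio_feat, visual_feat], dim=-1)))
        return torch.sigmoid(self.mask_head(fused))  # mask over audio features

model = ToyAudioVisualSeparator()
audio_feat = torch.randn(2, 100, 256)   # (batch, time, audio_dim)
visual_feat = torch.randn(2, 100, 128)  # (batch, time, visual_dim)
mask_av = model(audio_feat, visual_feat, use_visual=True)
mask_a = model(audio_feat, visual_feat, use_visual=False)
print(mask_av.shape, mask_a.shape)      # both torch.Size([2, 100, 256])
```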