-
I assume you've got the best trained model using the config in experiment/v01_drum. My best model is at checkpoint 8000–9000 and its loss is around 130–140. Should I train more, or can you provide a pre-trained model?
-
I just use the latest checkpoint (at ~24k steps for v01_drums), which also happens to be the best one, with a validation loss of 153.4 and training loss around 120–160. You should hopefully be able to reproduce my results by using exactly my config file and training for the full amount of time. (At ~9k steps, my validation loss is around 158, so I guess it's worth it to train more.)

Note that if you changed the batch size, you will also need to adjust the learning rate schedule.
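For what it's worth, a common heuristic is to scale the learning rate linearly with the batch size. This is just a sketch of that rule, not necessarily what the config in experiment/v01_drums does, and the base values below are placeholders; check the actual config for the schedule parameters:

```python
def scale_learning_rate(base_lr, base_batch_size, new_batch_size):
    """Linear scaling heuristic: scale the learning rate with the batch size.

    This is a common rule of thumb, not necessarily the schedule used in the
    v01_drums config; substitute the real base values from the config file.
    """
    return base_lr * new_batch_size / base_batch_size


# Example: halving the batch size roughly halves the learning rate.
# base_lr=1e-3 and base_batch_size=32 are placeholder values.
new_lr = scale_learning_rate(base_lr=1e-3, base_batch_size=32, new_batch_size=16)
print(new_lr)  # 0.0005
```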
-
FYI, the pre-trained checkpoints are uploaded here: https://groove2groove.telecom-paris.fr/data/checkpoints/
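For example, you can fetch a checkpoint archive from that URL with a few lines of Python; the file name below is only a placeholder, so browse the directory listing for the actual archive names:

```python
# Minimal sketch: download one checkpoint archive from the server.
# "v01_drums.tar.gz" is a placeholder; check the directory listing at the
# URL above for the real file names before running this.
import urllib.request

base_url = "https://groove2groove.telecom-paris.fr/data/checkpoints/"
filename = "v01_drums.tar.gz"  # placeholder name

urllib.request.urlretrieve(base_url + filename, filename)
print(f"Saved {filename}")
```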
-
@cifkao Thanks a lot.