
Invalid dimension of transformer.embeddings.word_embeddings.weight for the ctranspath_448_bioclinicalbert checkpoint #7

Open
Lafite-Yu opened this issue Oct 11, 2024 · 0 comments


RuntimeError: Error(s) in loading state_dict for CLIP:
	size mismatch for transformer.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 768]) from checkpoint, the shape in current model is torch.Size([28996, 768]).

The vocab size of emilyalsentzer/Bio_ClinicalBERT is 28996, while the word-embedding matrix in the provided ctranspath_448_bioclinicalbert.zip checkpoint has 30522 rows (which matches PubMedBERT's vocab size, not Bio_ClinicalBERT's). This mismatch causes the checkpoint to fail to load.
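A minimal sketch of diagnosing this kind of mismatch before calling load_state_dict. The shapes below are the ones reported in the traceback; the helper function and in-memory shapes are illustrative (in practice you would read the shape from torch.load on the checkpoint and from the model's state_dict), not code from this repository:

```python
# Compare the embedding shape stored in a checkpoint against the shape the
# current model expects, and report any mismatch in a readable form.
def check_vocab_compat(checkpoint_shape, model_shape, key):
    """Return None if the shapes match, else a human-readable report."""
    if checkpoint_shape == model_shape:
        return None
    return (f"size mismatch for {key}: "
            f"checkpoint {checkpoint_shape} vs current model {model_shape}")

# Shapes from the error above:
ckpt_shape = (30522, 768)   # ctranspath_448_bioclinicalbert.zip (PubMedBERT-sized vocab)
model_shape = (28996, 768)  # emilyalsentzer/Bio_ClinicalBERT vocab size

msg = check_vocab_compat(
    ckpt_shape, model_shape,
    "transformer.embeddings.word_embeddings.weight",
)
print(msg)
```

If the vocab sizes differ like this, the checkpoint was almost certainly trained with a different tokenizer, and the fix is to load it with the matching text encoder rather than to resize the embedding.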
