Is this the right way to test the saved emmental models? #511
This cannot be commented out. Please execute it so that the features are created and stored in Postgres.
"Keys" in this case means the names of the features that are used by Emmental.
You can check Regarding
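As a rough illustration of what these keys are: Fonduer keeps the feature names in a Postgres table and exposes them through the Featurizer's `get_keys()`, `upsert_keys()`, and `drop_keys()` methods. The following is a minimal, self-contained simulation of that bookkeeping (a plain set stands in for the Postgres table; the class and key names here are hypothetical, not Fonduer's API):

```python
# Self-contained simulation of Fonduer's feature-key bookkeeping.
# In the real library the keys live in a Postgres table; here a plain
# set stands in for that table (hypothetical example only).

class KeyStore:
    def __init__(self):
        self.keys = set()

    def upsert_keys(self, keys):
        # Insert keys if absent; existing keys are left untouched.
        self.keys.update(keys)

    def drop_keys(self, keys):
        # Remove keys; missing keys are silently ignored.
        self.keys.difference_update(keys)

    def get_keys(self):
        return sorted(self.keys)

store = KeyStore()
store.upsert_keys(["CORE_WORD_x", "CORE_POS_y"])  # feature names Emmental uses
store.upsert_keys(["CORE_WORD_x"])                # upsert is idempotent
store.drop_keys(["CORE_POS_y", "NOT_PRESENT"])    # dropping is safe
print(store.get_keys())  # → ['CORE_WORD_x']
```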
Thank you for clarifying my doubts, @HiromuHota.
@HiromuHota Can you please comment on whether this is good?
I'd suggest two changes:
So your code should look like the below.
This code assumes that the backend Postgres database has no keys for the Featurizer.
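The suggestion presumably amounts to featurizing the test documents with `train=False` so that only the keys created during training are reused. A sketch under that assumption (it requires a live Fonduer session and Postgres database; `session`, `cand_classes`, the split numbers, and the candidate lists are placeholders for your setup):

```python
from fonduer.features import Featurizer

featurizer = Featurizer(session, cand_classes)  # placeholders

# Training split: train=True creates the feature keys and stores
# them in Postgres alongside the feature values.
featurizer.apply(split=0, train=True)
F_train = featurizer.get_feature_matrices(train_cands)

# Test split: train=False reuses the existing keys only, so features
# unseen during training are ignored and the matrix widths match.
featurizer.apply(split=2, train=False)
F_test = featurizer.get_feature_matrices(test_cands)
```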
@HiromuHota Thanks for the correction.
This is the reason I'm using train=True and then dropping the keys.
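The workaround described here (featurize with `train=True`, then drop the extra keys) might look roughly like the following. This is a sketch only, assuming a live Fonduer Featurizer and a saved list of the key names seen during training; `featurizer`, `test_docs`, and `trained_key_names` are placeholders:

```python
# train=True inserts keys for any new features found in the test
# documents; dropping the extras afterwards restores the key set the
# model was actually trained with.
featurizer.apply(test_docs, train=True)           # placeholders
current = {k.name for k in featurizer.get_keys()}
extra = current - set(trained_key_names)          # keys unseen in training
featurizer.drop_keys(list(extra))
```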
@saikalyan9981 Thank you for letting us know the reason behind it. Here is why this happens: your code should work as expected, but this
I have gone through the packaging code in MLflow. Thank you, it was very useful for me. While testing, I think the code here in hardware_fonduer_model classifies one document at a time. However, I would like to test on multiple documents at once:
So, is this code snippet correct for testing the model?
Is this the right way to do it?
I'm not sure how to use upsert_keys and drop_keys, or whether I'm extracting features correctly. Also, should I add torch.no_grad() while predicting?
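On the torch.no_grad() question: wrapping inference in it is generally a good idea, since it disables autograd bookkeeping, which saves memory and time when you are only predicting. A generic, self-contained sketch (the tiny linear model and random features are hypothetical stand-ins, not Fonduer or Emmental APIs):

```python
import torch
from torch import nn

# Hypothetical stand-in for a trained discriminative model.
model = nn.Linear(4, 2)
model.eval()  # disable dropout/batch-norm training behavior

# A batch of feature vectors for several "documents" at once.
features = torch.randn(8, 4)

with torch.no_grad():  # no autograd graph is built during inference
    logits = model(features)
    preds = logits.argmax(dim=1)

print(preds.shape)  # one prediction per document in the batch
```

Batching like this is also how you would score multiple documents at once: build one feature matrix covering all test candidates and run a single forward pass (or a dataloader loop) instead of calling the model per document.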