Python where op floatxfloat promotes to float64 #2380
Comments
The solution is to change the …
And …
My mistake, I didn't realize …
We currently don't have single-precision scalars in either Python or C++. I hit some errors trying to add those (see #2403), but it could probably be done. However, it may be simpler to add a …
@mruberry the following lines in the PR branch above allow you to force the constant DataTypes to Float (see pytorch/third_party/nvfuser/python_tests/test_python_frontend.py, lines 715 to 718 at 15035c2).
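For readers without the PR branch handy, here is a hypothetical sketch (not the actual test at lines 715 to 718) of what forcing the constants to DataType.Float could look like; the import path and the frontend signatures used below (define_tensor, define_constant with a dtype argument, ops.where, add_output, execute) are assumptions about the PR branch API:

```python
import torch
from nvfuser import FusionDefinition, DataType  # import path is an assumption

def fusion_func(fd: FusionDefinition) -> None:
    # 1-D boolean predicate tensor (the define_tensor signature is assumed).
    cond = fd.define_tensor(1, DataType.Bool)
    # Forcing the constants to Float keeps where(bool, float, float) in
    # single precision instead of promoting to Double.
    a = fd.define_constant(3.0, DataType.Float)
    b = fd.define_constant(5.0, DataType.Float)
    fd.add_output(fd.ops.where(cond, a, b))

with FusionDefinition() as fd:
    fusion_func(fd)

pred = torch.tensor([True, False, True], device="cuda")
(out,) = fd.execute([pred])
print(out.dtype)  # expected torch.float32 with the forced Float constants
```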
Would that sufficiently address this issue? Note that we haven't changed the promotion rules for nvfuser: if an op receives only scalar floating-point arguments, we do not use the default floating-point type as is done in PyTorch, but rather the highest-precision type of the given arguments.
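To make that difference concrete, here is a small illustrative sketch of the two policies; the helper functions are hypothetical, not nvfuser or PyTorch internals:

```python
import torch

def promote_pytorch_style(arg_dtypes):
    # PyTorch: scalar-only floating-point arguments fall back to the
    # default floating-point dtype (usually torch.float32).
    return torch.get_default_dtype()

def promote_nvfuser_style(arg_dtypes):
    # nvfuser (per this thread): keep the highest-precision type among
    # the given arguments.
    return max(arg_dtypes, key=lambda dt: torch.finfo(dt).bits)

# A Python float literal enters the frontend as a double-precision scalar:
scalars = [torch.float64, torch.float64]
print(promote_pytorch_style(scalars))  # torch.float32
print(promote_nvfuser_style(scalars))  # torch.float64
```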
Yes, I think that would address the issue.