[Tests] Fix CI for deprecated attention block when used with device_map
#9442
base: main
Conversation
@@ -140,7 +140,15 @@ jobs:
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
# DeprecatedAttentionBlockTests::test_conversion_when_using_device_map fails
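The hunk above is truncated, so the lines that actually skip the failing test are not shown. A minimal sketch of what the exclusion could look like, assuming pytest's --deselect option is used (the test node ID is taken from the thread; the surrounding arguments are illustrative):

```sh
# Hypothetical continuation of the run step above; the actual PR may use a
# different mechanism (for example a -k expression) to skip the failing test.
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
  --deselect tests/models/test_attention_processor.py::DeprecatedAttentionBlockTests::test_conversion_when_using_device_map \
  tests/models
```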
do you know why this test would fail with the -n 1 --dist=loadfile
option? also, did you test it anywhere to confirm it would work without this option?
I tried both
pytest -n 1 --dist=loadfile tests/models/test_attention_processor.py::DeprecatedAttentionBlockTests::test_conversion_when_using_device_map
and
pytest tests/models/test_attention_processor.py::DeprecatedAttentionBlockTests::test_conversion_when_using_device_map
Both passed on my machine, but I can see it failed on CI, so I'm just wondering.
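For context, -n 1 --dist=loadfile comes from pytest-xdist: the suite runs in a single worker subprocess and tests from the same file stay on that worker. On CI the worker has already executed many other model tests before it reaches this one, which may be why the failure only reproduces there. A hedged sketch of the difference (the tests/models target is illustrative):

```sh
# Local repro from the thread: only this file runs, so the xdist worker starts clean.
pytest -n 1 --dist=loadfile tests/models/test_attention_processor.py

# Closer to the CI invocation: the same single worker processes the whole
# models suite, so earlier tests can leave state behind (CUDA context,
# hooks, cached modules) before the deprecated attention block test runs.
pytest -n 1 --max-worker-restart=0 --dist=loadfile tests/models
```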
Yeah same. I tested on DGX and audace and both passed. So, no idea. Didn’t dig deeper because the blocks are deprecated anyway.
so the test would pass if we remove the "-n 1 --dist=loadfile" options?
On our CI, yes.
ok! I'm ok with the change then.
what do you think, @DN6?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
cc @DN6 is this ok to merge?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@DN6 okay to merge?
What does this PR do?
Ran a round of fast GPU tests (from push_tests.yml). They are all passing except for the deprecated attention block. I think the change is okay because it also doesn't introduce any performance regressions in the CI.
The failure: https://github.com/huggingface/diffusers/actions/runs/10734214122/job/29768965396#step:6:4275
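For completeness, a hedged sketch of re-running the relevant workflow step locally on a GPU machine (the target path and environment variables are approximations; see push_tests.yml for the exact invocation):

```sh
# Approximation of the fast GPU test step from push_tests.yml; the exact
# targets, markers, and environment variables in the workflow may differ.
CUBLAS_WORKSPACE_CONFIG=:16:8 python -m pytest -n 1 --max-worker-restart=0 \
  --dist=loadfile tests/models/test_attention_processor.py
```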