add timeout_worker_is_alive as an option for main.run. #2397
base: master
Conversation
The option lets users set the timeout for judging whether a worker process is dead in multiprocessing mode (i.e., workers > 1).
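For illustration, a minimal sketch of how the option would be passed. The app module name is hypothetical, and `timeout_worker_is_alive` is the parameter added by this PR, not part of a released uvicorn:

```python
# Hypothetical usage sketch for the option proposed in this PR.
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "myapp:app",                 # hypothetical ASGI app module
        workers=4,                   # multiprocessing mode, where the check runs
        timeout_worker_is_alive=30,  # give slow-starting workers 30s to answer
    )
```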
This PR does fix some edge cases. Please fix the CI failures so we can merge it.
Force-pushed from ea7ecc6 to f1873bc
checks passed, please go ahead
Defaulting to a low number doesn't seem right.
As I read it, it defaults to the exact number that is already being used, to preserve existing behavior: https://github.com/encode/uvicorn/blob/master/uvicorn/supervisors/multiprocess.py#L65
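For readers following along, the check being referenced is roughly of this shape. This is a paraphrase of the idea behind uvicorn/supervisors/multiprocess.py, not the exact upstream code; names and structure are illustrative:

```python
# Paraphrased sketch of a pipe-based worker liveness check; not copied
# from uvicorn, just the general pattern.
from multiprocessing.connection import Connection

def is_alive(parent_conn: Connection, timeout: float = 5) -> bool:
    parent_conn.send(b"ping")      # ask the worker to respond
    if parent_conn.poll(timeout):  # wait up to `timeout` seconds for a reply
        parent_conn.recv()         # drain the reply
        return True
    return False                   # no reply in time: worker is treated as dead
```

If a worker is busy at startup (for example, still importing modules), it cannot service the ping, the poll times out, and the supervisor concludes the worker is dead.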
Hi @abersheeran,
I've tried almost all Uvicorn alternatives, including Gunicorn, Hypercorn, Daphne, Sanic, Tornado, and Waitress.
Good job.
@click.option(
    "--timeout-worker-is-alive",
    type=float,
    default=5,
I'm not sure this is wanted as the default value?
Ah, I've mentioned it before: #2397 (comment)
Yes, it's a good default value for most programs. Only those with heavy loads require larger values.
Additionally, the default timeout for a single request in httpx is also 5 seconds.
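(For comparison, a quick way to check that default:)

```python
import httpx

client = httpx.Client()
print(client.timeout)  # Timeout(timeout=5.0): connect/read/write/pool all 5s
```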
Some checks failed. Can I fix them?
Yes, please fix them.
There's an unrelated issue in the pipeline.
Hi @Kludex,
I think it's not only an issue with the current branch. Your master branch doesn't pass scripts/test either. Could you please check?
> Yes, it's a good default value for most programs. Only those with heavy loads require larger values.

Well... those with heavy loads will have to change this, which is bothersome for them.
Summary
The option lets users set the timeout for judging whether a process is dead in multiprocessing mode (i.e., workers > 1).
When the module that calls uvicorn.run has a heavy startup load, such as importing many modules, the is_alive check in supervisors.multiprocess may be blocked. When this happens, the default 5-second timeout can cause a healthy worker to be misjudged as dead and killed.
Issues like #2396 and https://stackoverflow.com/questions/78680384/during-uvicorn-startup-child-process-dies-in-kubernetes-cluster may be caused by this case.
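A hypothetical repro of the failure mode (the module and the 20-second delay are made up for illustration): an app module that does heavy work at import time cannot answer the supervisor's liveness ping while it is still importing, so with the default 5-second timeout a healthy-but-slow worker gets killed.

```python
# myapp.py -- hypothetical module with a heavy import-time workload
import time

time.sleep(20)  # stand-in for importing many large modules or loading a model

async def app(scope, receive, send):
    # Minimal ASGI app; the response content is irrelevant to the startup issue.
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})
```

Running this with workers > 1 and the default timeout can trigger the spurious kill; passing a larger timeout_worker_is_alive gives such workers room to finish starting.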
Checklist