feat(HTTP): Retry failed HTTP requests with exponential backoff #1098
base: main
Conversation
long res_code = 0;
curl_easy_getinfo(curl_.get(), CURLINFO_RESPONSE_CODE, &res_code);
if (res_code == 408 || res_code == 429 || res_code >= 500) {
  // FIXME: Should we clear download_cache_ and upload_cache_ here?
Yes, you'll need to ensure that the inside of the loop has no side-effects. I haven't analyzed to see what the caches do here, but it's worth looking into more closely.
You may want to write test cases that verify the correct behavior for POST/PUT with data. In the cmake branch, you'll find the HTTP file tests are all enabled and working. (In main, they are all disabled.) You should be able to verify retry behavior with httpbin, and see that the final failed retry still has a response that reflects the data from the POST/PUT.
I recommend you try the cmake branch in a separate clone, without using gclient. We have yet to update the docs for that, but this is what you'll want to try:
# Clone
git clone ...
# Use cmake branch
git checkout -b cmake upstream/cmake
# Get submodules
git submodule init && git submodule update
# Gen build files
cmake -S . -B build/ -DCMAKE_BUILD_TYPE=Debug
# Build
cmake --build build/ --config Debug --parallel
# Test
(cd build && ctest -C Debug -V)
If you have time, you could make a PR against the cmake branch, which is where most development is happening right now as we rewrite the build system and dependencies.
Please merge with
Force-pushed from e66076a to b2ccb15.
This has been rebased.
As soon as CURL reads from the upload cache, that data is consumed, so I don't think this can be done safely with the current IoCache interface as the backing buffers. I believe the data from the upload_cache would have to be written to some other intermediary buffer (without triggering read events) so that curl can retry safely, and only upon success should the IoCache circular buffer signal the read event and advance the pointers. Likewise for the download cache: only upon a successful download should the data be written to it.
Alternatively, to avoid the memory copies, I think we'd have to take a lock on IoCache around the whole retry.
@cosmin I am fiddling with this, but I can see this is not fixable via the IoCache interface as it stands. Copying/backing up the data elsewhere until the request is successful would work fine, but it wouldn't be memory efficient if we are dealing with big files/requests.
This PR implements exponential-backoff retries for HTTP files.
I am not quite familiar with the code base & how packager works, so I could use some help in clarifying the following:
- upload_cache_ would be drained, right? Does this mean we have to keep a copy of it?
Tests are yet to be added.
Closes #1095