Multipart Upload Concurrency Issues #6460
Comments
I think this is more of an enhancement.
What is the use-case for multiple shutdowns? This feels like a bug in the calling code.
I'm actually confused by the second one. Could you provide more context on how you encountered this error, including which store implementation you are using? Parts are numbered, so it shouldn't matter if they complete out of order.
Yes, calling multiple shutdowns is a bug on the caller side for sure. The changes I proposed should help mitigate this issue a bit by making shutdown calls relaxed. For the second one, we're using an S3 object store, and for some reason S3 doesn't take the part number into account during the multipart upload and returns the error mentioned above. We managed to solve this issue by calling
Your proposed change is racy: the first close will wait on the upload to complete, while subsequent calls "complete" instantly.
Perhaps you could write a simple reproducer for this. I'm not saying S3 doesn't do this, but I want to be sure we've correctly identified the issue. Your proposed fix will serialize a round-trip, which is unfortunate when many stores are not exactly low-latency.
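To make the race concrete, here is a hypothetical caller sketch (assuming tokio and any `AsyncWrite` writer, and assuming the relaxed behaviour means a writer that has already started shutting down reports `Ok(())` on later calls):

```rust
use std::time::Duration;
use tokio::io::{AsyncWrite, AsyncWriteExt};

// Hypothetical helper: the first shutdown attempt may be dropped (here via a
// timeout) before the multipart upload actually finishes.
async fn racy_finish<W: AsyncWrite + Unpin>(writer: &mut W) -> std::io::Result<()> {
    let _ = tokio::time::timeout(Duration::from_millis(10), writer.shutdown()).await;
    // Under the relaxed behaviour this second call would return Ok(())
    // immediately instead of driving the remaining work to completion,
    // reporting success for data that may never have been persisted.
    writer.shutdown().await
}
```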
I will try to provide an example of the second issue, but in the meantime I thought maybe we could change `poll_flush` to something like this:

```rust
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Error>> {
    loop {
        return match &mut self.state {
            BufWriterState::Write(write) => {
                if let Some(write) = write {
                    write.poll_for_capacity(cx, 0).map_err(|e| e.into())
                } else {
                    panic!("Already shut down")
                }
            }
            BufWriterState::Buffer(_, _) => Poll::Ready(Ok(())),
            BufWriterState::Flush(_) => panic!("Already shut down"),
            BufWriterState::Prepare(f) => {
                self.state = BufWriterState::Write(ready!(f.poll_unpin(cx)?).into());
                continue;
            }
        };
    }
}
```

By actually waiting for all uploads to complete, we can give the user the ability to use flush to wait for in-flight writes.
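As a sketch of how a caller might rely on those semantics (this assumes the `poll_flush` change above; `store`, `path`, and the example function are placeholders):

```rust
use std::sync::Arc;

use object_store::buffered::BufWriter;
use object_store::path::Path;
use object_store::ObjectStore;
use tokio::io::AsyncWriteExt;

async fn example(store: Arc<dyn ObjectStore>, path: Path) -> std::io::Result<()> {
    let mut writer = BufWriter::new(store, path);
    writer.write_all(b"some bytes").await?;
    // With the proposed change, flush() waits for all in-flight part uploads
    // instead of being a no-op once the writer has started a multipart upload.
    writer.flush().await?;
    // The final (possibly undersized) part is still only uploaded on shutdown.
    writer.shutdown().await?;
    Ok(())
}
```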
What would be the benefit of this? It wouldn't be able to guarantee that there isn't any data still in flight, as it can't upload the final data until shutdown is called. It is a valid point that we're taking a somewhat dubious interpretation of the AsyncWrite trait, but it's a necessary evil. We could probably add further documentation to discourage the use of AsyncWrite.
Yes, but flush will ensure that ongoing writes are flushed, which is generally aligned with the meaning of "flush". And after that, you're okay to finalize your upload in case of
Only those it has actually started writing. I think the proposed behaviour is more confusing; we should just make shutdown do the right thing.
@tustvold Sorry for the late response, I was trying to find a reproducer for this issue, unfortunately without success so far. If you think it will be okay to just update shutdown, that would be great.
So the proposed change would be a one-liner here:

```rust
pub async fn finish(mut self) -> Result<PutResult> {
    if !self.buffer.is_empty() {
        self.wait_for_capacity(0).await?; // here
        let part = std::mem::take(&mut self.buffer);
        self.put_part(part.into());
    }
    self.wait_for_capacity(0).await?;
    match self.upload.complete().await {
        Err(e) => {
            self.tasks.shutdown().await;
            self.upload.abort().await?;
            Err(e)
        }
        Ok(result) => Ok(result),
    }
}
```
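For context, a sketch of how this `finish()` is typically reached (assuming the object_store `WriteMultipart` API; the helper and its arguments are placeholders):

```rust
use object_store::path::Path;
use object_store::{ObjectStore, WriteMultipart};

async fn upload(
    store: &dyn ObjectStore,
    path: &Path,
    chunks: Vec<Vec<u8>>,
) -> object_store::Result<()> {
    let upload = store.put_multipart(path).await?;
    let mut writer = WriteMultipart::new(upload);
    for chunk in chunks {
        // Buffers data and spawns part uploads once enough has accumulated.
        writer.write(&chunk);
    }
    // The finish() discussed above: uploads any leftover buffer and completes
    // the multipart upload.
    writer.finish().await?;
    Ok(())
}
```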
This will serialize a round-trip, adding latency to callers of finish. I would therefore be apprehensive about making this change without a reproducer of the issue it is seeking to fix.
@tustvold Makes sense.
I managed to reproduce the issue in our staging environment and found the problem that was causing it, even though I can't fully explain it. We have code that uploads data to the S3 store in chunks; we're using a combination of the

Here is the code for `put_part`:

```rust
fn put_part(&mut self, data: PutPayload) -> UploadPart {
    let idx = self.part_idx;
    self.part_idx += 1;
    let state = Arc::clone(&self.state);
    Box::pin(async move {
        let part = state
            .client
            .put_part(&state.location, &state.upload_id, idx, data)
            .await?;
        state.parts.put(idx, part);
        Ok(())
    })
}
```

In short, we synchronously increment the index for a writer (through a `&mut` reference), which should protect us from races, and then create a future that writes the part. We call this method here:

```rust
pub(crate) fn put_part(&mut self, part: PutPayload) {
    self.tasks.spawn(self.upload.put_part(part));
}
```

We're spawning a new task to upload the part to S3, and we also assign the idx for this part; everything looks correct and safe as we are protected by

During the investigation I added a lot of "logging" to our fork of object-store and found a very weird thing. From time to time (I would even say often) I observed things like this:

In short, even though the last write before calling

I don't know why it's happening like that, and I will probably spend some more time trying to figure it out, but the solution to this problem is quite simple:

```rust
pub(crate) fn put_part(&mut self, part: PutPayload) {
    let task = self.upload.put_part(part);
    self.tasks.spawn(task);
}
```

Create the task future outside of the spawn call to guarantee that we assign the proper index to the part, and only then submit the task. Please let me know what you think @tustvold 🙇
The two code samples are equivalent...
Are you sure this originates from

Edit: In fact, re-reading the initial issue description, you stated you were seeing issues with multiple shutdowns. This almost certainly points to misuse of
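For reference, a self-contained sketch (hypothetical names, assuming tokio; this is not the object_store code) of why the two spawn forms behave identically: the argument expression is evaluated, and therefore the part index assigned, before `spawn` runs.

```rust
use std::future::Future;
use std::pin::Pin;

struct Uploader {
    part_idx: usize,
}

impl Uploader {
    // Mirrors the shape of put_part: the index is taken and incremented
    // synchronously; only the returned future runs later on the task.
    fn put_part(&mut self) -> Pin<Box<dyn Future<Output = usize> + Send>> {
        let idx = self.part_idx;
        self.part_idx += 1; // runs before any spawn, in both call forms
        Box::pin(async move { idx })
    }
}

#[tokio::main]
async fn main() {
    let mut u = Uploader { part_idx: 0 };
    // Form 1: future built inside the spawn call (argument evaluated first).
    let a = tokio::spawn(u.put_part());
    // Form 2: future built on a separate line, then spawned.
    let fut = u.put_part();
    let b = tokio::spawn(fut);
    // Either way the indices were assigned in order, before the tasks ran.
    assert_eq!((a.await.unwrap(), b.await.unwrap()), (0, 1));
}
```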
This is the same proposal as before; can you please articulate what problem you are trying to fix, with a reproducer? The fact that you're unable to reproduce this makes it very hard for me not to think this is a concurrency issue in your codebase.
Describe the bug
So recently we started seeing two issues:
arrow-rs/object_store/src/buffered.rs, line 220 in 50e9e49
Multiple Shutdown
First of all, I want to note that multiple shutdown calls to the same writer are an issue in themselves, but I think we can make the situation better with minimal effort.
Here is the code:
I think we can change it to something more friendly like this:
This way, on a second shutdown call we just immediately return Ok(()).
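A minimal, hypothetical sketch of the behaviour being described, written as a wrapper over any tokio AsyncWrite rather than as the actual buffered.rs change: remember that shutdown already completed and report Ok(()) on later calls instead of panicking.

```rust
use std::io::Error;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::AsyncWrite;

// Hypothetical wrapper type, not the buffered.rs code itself.
struct IdempotentShutdown<W> {
    inner: W,
    shut_down: bool,
}

impl<W: AsyncWrite + Unpin> AsyncWrite for IdempotentShutdown<W> {
    fn poll_write(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<Result<usize, Error>> {
        Pin::new(&mut self.inner).poll_write(cx, buf)
    }

    fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Error>> {
        Pin::new(&mut self.inner).poll_flush(cx)
    }

    fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), Error>> {
        if self.shut_down {
            // Repeated shutdowns succeed immediately; as noted in the
            // discussion above, this is racy if an earlier call never
            // actually completed the upload.
            return Poll::Ready(Ok(()));
        }
        let res = Pin::new(&mut self.inner).poll_shutdown(cx);
        if matches!(res, Poll::Ready(Ok(()))) {
            self.shut_down = true;
        }
        res
    }
}
```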
Upload part size issue
The leftover data uploaded during shutdown can complete before the previous part uploads; in this case we're getting:
To mitigate this issue we should probably wait for all previous part uploads to complete and only then upload the final part, which may be smaller than the minimum size allowed for any part other than the last one.
Here is the original code I propose to change:
By injecting `self.wait_for_capacity(0).await?;` before actually putting the last chunk, we can mitigate this issue. This way we wait for all ongoing uploads to complete before submitting the last part.