Expect continue #204
I'm interested in what @ioquatix's thoughts are, so any Puma implementation doesn't get too far out of whack with what other servers might want to do.
I have an example hack here: https://github.com/socketry/falcon/pull/206/files#diff-32e79c935c64dde33bd327b3b8c530bc4b842262ec53092b82830f7bb0177184 It's not clean, but it works. My example has other issues, though: under load it doesn't work (several uploads fail). I didn't have the time for a thorough debugging.
I think most people think of HTTP as a request/response protocol. Intermediate non-final responses break 99% of the interfaces people actually code against, e.g.:

```ruby
response = client.request("GET", "/")
```

and

```ruby
def server(request)
  return [200, ...]
end
```

The only solution I have to this is to consider some kind of response chaining, e.g.:

```ruby
response = client.request("GET", "/")
response.status # 100
while !response.final?
  response = response.next
end
response.status # 200, or something else
```

(I'm not even sure how you'd do this with a POST body - waiting for 100 Continue on the client and then posting the body as a 2nd request??)

I don't think the benefits of non-final responses are commensurate with the interface complexity they introduce. That's my current opinion, but I'd be willing to change it if there was some awesome new use case I was not familiar with.

In addition, HTTP/2 already has stream cancellation for this kind of problem, and there is nothing wrong with cancelling a stream mid-flight, even for HTTP/1 - it's not as efficient as HTTP/2+, but it has a consistent semantic.

So, in summary: I think provisional responses introduce significant complexity to the interface on both the client AND the server, and I don't think the value they add in a few small corner cases is worth that complexity. Remember that every step of the way, including proxies, etc., has to handle it correctly. The question in my mind is, do we want
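To make the response-chaining idea above concrete, here is a minimal toy model of such an interface. The `Response` class and its `final?`/`next` methods are hypothetical, sketched for illustration; they are not part of any real client library.

```ruby
# A toy model of the proposed "response chaining" interface.
# Interim (1xx) responses are non-final; the client must loop
# until it reaches a final (>= 200) response.
Response = Struct.new(:status, :next_response) do
  def final?
    status >= 200
  end

  def next
    next_response
  end
end

# Simulate a server that sends "100 Continue" followed by "200 OK":
chain = Response.new(100, Response.new(200, nil))

# Client code must loop past interim responses:
response = chain
response = response.next until response.final?

puts response.status # => 200
```

The point of the sketch is the interface cost: every caller that today writes `response = client.request(...)` would have to grow this loop (or the library would have to hide it, losing access to the interim responses entirely).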
My motivation for opening the issue on Puma was puma/puma#3188 (comment): redirecting where an upload should go. E.g. my client POSTs to my app, and the app responds with a redirect to some cloud storage - the app has generated the (temporary) URL for that, so my clients don't need to know how to auth with the cloud storage, just with my app. Since many years back, the similar(?) feature Not sure where we go from here; I still have much to read up on in regards to Rack and HTTP/2 and so on.
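The redirect use case above can be sketched as server-side logic: when the client announces the body with `Expect: 100-continue`, the app can redirect the upload before the body is ever transmitted. The `route_upload` helper, its header hash shape, and the storage URL below are all illustrative assumptions, not a real API.

```ruby
# Sketch: decide where an upload goes before reading its body.
# Returns a Rack-style [status, headers, body] triple.
def route_upload(headers, storage_url)
  if headers["Expect"].to_s.casecmp?("100-continue")
    # Don't read the body; tell the client to re-send it elsewhere.
    # 307 preserves the request method and body across the redirect.
    [307, { "Location" => storage_url }, []]
  else
    # No expectation: the body is already on the wire, accept it here
    # (actual body handling omitted).
    [200, {}, ["uploaded"]]
  end
end

response = route_upload(
  { "Expect" => "100-continue" },
  "https://storage.example.com/bucket/upload?token=tmp123"
)
puts response[0] # => 307
```

307 (rather than 302) matters here because the client must replay the POST, with the same body, against the storage URL.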
@ioquatix here's my idea for Puma: puma/puma#3188 (comment)
@dentarg That's an interesting use case. Are there any clients that actually support the redirect mechanism as outlined?
@ioquatix Yes, looks like
Any others?
akka/akka#15799 (comment)
That's a great link. One part that stands out to me:
There be the dragons? I think my interpretation most closely aligns with akka/akka#15799 (comment). However, I appreciate how this might be possible to implement just as an internal detail. If that's true, what's the advantage?
This gives me some hope that maybe we don't need to expose it to the user... but it's followed by this:
The level of complexity seems pretty high to me... and it's followed by this:
Which makes me think that the entire thing is not worth pursuing, unless you enjoy suffering through the implementation and all the compatibility issues... In the best case, rejecting the incoming body, according to Dr Fielding, requires us to close the connection - isn't that going to be worse for performance? I.e. isn't it easier just to close the connection if you want to reject the body? Not only that, but latency can be introduced by the client waiting for the Finally, for me, probably the biggest bias I have is that this problem is already solved in HTTP/2+, since closing a stream is so easy... Maybe for HTTP/1 it kind of sucks, but for HTTP/2+ I feel like this is a non-issue. I'm still intrigued and interested in where this discussion goes, but I'm not sure I have the patience to actually do the implementation...
Hi,
curl and also browsers send POST requests with an Expect: 100-continue header if the body is big enough. This is a nice feature, but it is hard to implement correctly in Rack. As far as I understand it, the server would have to deliver the 100 Continue response, then the sender continues the upload, AND the server sends a full response.
Puma handles this quite hackishly: https://github.com/puma/puma/blob/87c052f514488286a9ee70855db8a265c90a4dbb/lib/puma/client.rb#L340
and totally defeats the purpose of expect-continue (the server should be able to respond with a non-100 status and reject the POST if it's too big or has other issues - but this should be the app's decision, not the server's).
Is there a clean way to respond to expect-continue and decide how to continue (100 or something else)?
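One hypothetical shape for the interface this issue asks about, loosely modelled on how Rack already exposes server capabilities through the env (hijacking, early hints): the server puts a callable in the env, and the app decides whether to emit 100 Continue or reject the request before the body is sent. To be clear, neither the `rack.expect_continue` key nor the callable exists in any Rack specification; this is purely a sketch of what "the app's decision" could look like.

```ruby
# Hypothetical: the server injects env["rack.expect_continue"], a
# callable that writes "HTTP/1.1 100 Continue" to the socket. The app
# either invokes it (accepting the body) or returns a final response
# without it (rejecting the body before it is transmitted).
app = lambda do |env|
  if env["HTTP_EXPECT"].to_s.casecmp?("100-continue")
    if env["CONTENT_LENGTH"].to_i > 10_000_000
      # Reject before the body is sent; the server would then close
      # or drain the connection per its own policy.
      return [413, {}, ["payload too large"]]
    end
    # Ask the (hypothetical) server hook to emit the interim response.
    env["rack.expect_continue"]&.call
  end
  [200, {}, ["ok"]]
end

# Exercising the app with a stubbed env:
sent_continue = false
env = {
  "HTTP_EXPECT" => "100-continue",
  "CONTENT_LENGTH" => "500",
  "rack.expect_continue" => -> { sent_continue = true }
}
status, = app.call(env)
puts [status, sent_continue].inspect # => [200, true]
```

The appeal of this shape is that servers without the capability simply omit the key, and apps that never touch `env["HTTP_EXPECT"]` keep today's behaviour (the server decides, as Puma does now).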