tmax adjustments for individual adapters/bidders #3965
Comments
Thanks @scr-oath. If we supported the "secondary bidders" feature, different timeouts for secondary bidders would make sense. But until then, I'm not really fond of having different timeouts for different bidders. PBS returns only one response to the client, so there's only one timeout value. It doesn't make sense to me that bidderA would be told 200ms when bidderB gets 300ms because their endpoint is far away. We might as well let bidderA have that extended time. FWIW, we added region-specific timeout adjustment in #2398 -- the way this works specifically in PBS-Go is described in https://docs.prebid.org/prebid-server/endpoints/openrtb2/pbs-endpoint-auction.html#pbs-go-1. For instance, we've changed our network buffer in APAC because our network there isn't as good as in other regions. But all bidders get the same tmax value.
Actually give bidders the same timeout, but optionally decrement the tmax value sent to bidders.
My understanding is that this is not about the PBS adapter timeout at all. It is about signaling to the external bidder how much time they have to run their auction. This signaling should consider the network latency between the originating PBS instance and the bidder. E.g. if both are in the same region and the network latency is < 20ms, then the bidder may use 800ms; however, if it is in another region with a network latency of < 200ms, then the bidder may only use 400-600ms to avoid being timed out. I believe the best approach here would be for PBS to measure the pure network latency, calculate the P90, and apply that to the tmax calculation.
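As a rough, non-authoritative sketch of that idea (all names here -- percentile, bidderTmax, the observed-latency slice -- are hypothetical, not existing PBS code), deducting a measured latency percentile from the signaled tmax might look like:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0.0-1.0) of observed round-trip latencies.
func percentile(latencies []time.Duration, p float64) time.Duration {
	if len(latencies) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), latencies...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	return sorted[int(p*float64(len(sorted)-1))]
}

// bidderTmax is the time a bidder would be told it may spend, after removing
// the expected network round trip between this PBS region and that bidder.
func bidderTmax(auctionTmax time.Duration, observed []time.Duration) time.Duration {
	p90 := percentile(observed, 0.90)
	if p90 >= auctionTmax {
		return 0
	}
	return auctionTmax - p90
}

func main() {
	observed := []time.Duration{180 * time.Millisecond, 190 * time.Millisecond, 210 * time.Millisecond}
	fmt.Println(bidderTmax(800*time.Millisecond, observed)) // 610ms for a far-away bidder
}
```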
Another potential issue is that large timeouts (>1s) will increase the queue of HTTP requests to bidders and the number of working goroutines and active connections, because PBS spawns a new goroutine for each bidder request.
AGREE:
The resulting tmax is intended to be the amount of time a bidder can take in its handler and still leave enough time for the response to go back across the network and for the auction to wrap up. The actual timeout enforced should be the tmax reported to the bidder plus the network latency.
YES - this is the crux: if a PBS server is deployed to multiple regions, but a bidder is only in one location, then network latency will be higher for that bidder in "farther" (or slower) regions.
This is an interesting extension to the idea - while I love the idea of measuring the exact thing, I do wonder about the added complexity for something that can be mostly statically determined and tuned. I'm curious, but how one might measure network latency feels like a distraction from the feature, perhaps. Would this require each bidder to set up a "ping" endpoint - something that does zero work and just answers a request as fast as possible - so the p99 of the observed round-trip time could be dynamically used as the amount to subtract from the tmax?
Sorry, lost some context here... why couldn't the host company just set |
Is it possible to set that just for one bidder? (not the full auction)
Ok, so my comment from 3 weeks ago is incorrect... bidders are not told the same tmax value. Rather, the goal here is to manage slow bidders by telling them tmax-SLOW_BIDDER_BUFFER. Ok, here's a proposed solution... add two new configs:
If the bidder is in the slow bidder array, decrement its tmax by slowbidderbufferms, still subject to
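Purely to illustrate the shape of those two proposed host-level settings (the names follow this comment and are not a shipped PBS config), something like:

```yaml
# Hypothetical host config, following the naming in the comment above.
auction:
  biddertmax:
    slowbidderbufferms: 200         # ms deducted from the tmax told to slow bidders
    slowbidders: [bidderA, bidderB] # bidders treated as "slow"
```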
If it is all the same and doesn't exist yet, I would suggest:
auction.biddertmax.offset:
- bidderA=-200 // bidder-specific offset in ms
- bidderB=-100 // bidder-specific offset in ms
Or add it to the existing adapter settings.
Not too late! Refining that direction, how about changing 'offset' to 'bidderoffset'?
I wouldn't suggest putting it in existing adapter settings -- these values would depend on the host company and we try to keep those files easily mergeable.
Any naming would be fine for us. The option to define it by bidder is what would be helpful.
@bsardo, @SyntaxNode - what are your thoughts about defining bidderoffsets with negative numbers? I'm leaning towards just saying that "offsets" are always positive numbers that are subtracted from the tmax. I don't want to send a tmax to a bidder that's larger than the PBS tmax.
Fine from my end. Maybe we use a more descriptive name then? e.g.
Discussed in committee. Everything was changed around again. This is where we settled: Host-level config:
This just adjusts the tmax sent to the bidder to give it a higher sense of urgency given the expected network delays. It doesn't affect the actual timeout enforced by PBS.
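To make that distinction concrete, here is a small illustrative Go sketch (not the actual PBS code path; the variable names are made up): the deadline that PBS enforces stays at the full auction timeout, while only the tmax number signaled to the bidder shrinks.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	auctionTimeout := 800 * time.Millisecond
	networkLatencyBuffer := 200 * time.Millisecond // expected network delay to this bidder

	// The timeout PBS enforces on the HTTP call to the bidder stays at the full auction timeout...
	ctx, cancel := context.WithTimeout(context.Background(), auctionTimeout)
	defer cancel()

	// ...but the tmax written into the outgoing bid request is reduced, so the
	// bidder budgets less time for its own auction.
	signaledTmaxMS := int64((auctionTimeout - networkLatencyBuffer) / time.Millisecond)

	deadline, _ := ctx.Deadline()
	fmt.Println("enforced deadline in:", time.Until(deadline).Round(time.Millisecond)) // ~800ms
	fmt.Println("tmax signaled to bidder (ms):", signaledTmaxMS)                       // 600
}
```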
The team is implementing this in PBS-Java. Several clarification questions have revealed the need for additional details.
bidder_specific_tmax = tmax - request_processing_time - bidder_network_latency_buffer_ms - bidder_response_duration_min_ms - bidder_specific_tmax_deduction_ms
If bidder_specific_tmax < auction.biddertmax.min, then bidder_specific_tmax = auction.biddertmax.min. The default auction.biddertmax.min remains the same, at 50.
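A minimal sketch of that calculation, assuming illustrative config/variable names lifted from the formula above (not necessarily the final PBS-Java or PBS-Go implementation):

```go
package main

import "fmt"

type tmaxConfig struct {
	BidderNetworkLatencyBufferMS int64            // host-level buffer for network latency
	BidderResponseDurationMinMS  int64            // time reserved for PBS to handle the response
	BidderTmaxMin                int64            // auction.biddertmax.min floor (default 50)
	BidderTmaxDeductionMS        map[string]int64 // per-bidder deduction, e.g. for far-away bidders
}

// bidderTmax returns the tmax signaled to a bidder. The timeout PBS actually
// enforces on the outgoing request is unchanged; only the number told to the
// bidder is reduced.
func bidderTmax(cfg tmaxConfig, bidder string, requestTmaxMS, requestProcessingTimeMS int64) int64 {
	t := requestTmaxMS -
		requestProcessingTimeMS -
		cfg.BidderNetworkLatencyBufferMS -
		cfg.BidderResponseDurationMinMS -
		cfg.BidderTmaxDeductionMS[bidder]
	if t < cfg.BidderTmaxMin {
		return cfg.BidderTmaxMin
	}
	return t
}

func main() {
	cfg := tmaxConfig{
		BidderNetworkLatencyBufferMS: 75,
		BidderResponseDurationMinMS:  50,
		BidderTmaxMin:                50,
		BidderTmaxDeductionMS:        map[string]int64{"bidderA": 200},
	}
	// 1000ms auction tmax, 20ms already spent processing the request so far
	fmt.Println(bidderTmax(cfg, "bidderA", 1000, 20)) // 655
	fmt.Println(bidderTmax(cfg, "bidderB", 1000, 20)) // 855 (no per-bidder deduction)
}
```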
As an example of the need: we have a bidder that's only in one location, but we have deployed PBS around the world. We need to adjust that bidder's network latency buffer in regions that are "farther" away so that it is told the appropriate amount of time it can use to answer.
Proposal: