feat: add the # of messages prioritized in the queue to the http resp… #5043
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main    #5043   +/-  ##
=======================================
  Coverage   77.53%   77.53%
=======================================
  Files         103      103
  Lines        2110     2110
  Branches      190      190
=======================================
  Hits         1636     1636
  Misses        453      453
  Partials       21       21
```
good stuff!
excited that this task is fairly meaty, lots of interesting concurrency and touching a lot of systemic things in the relayer :)
- also updating tracing calls to use structured logs
- update variable names to receiver/transmitter instead of rx/tx
- this creates a nice separation between each retry request and won't cause multiple requests to conflict with each other's responses
😎 awesome work, hitting the ground running from the first PR!!
```rust
        match_res
    })
    .any(|x| x);
if matches {
    info!(
```
realizing it'd be useful to include the retry request in this log
we can't really log the specific retry request because we only check if some retry request matches the operation.
Unless we want to log each retry request that matches in the .map()
you're right, though we could take advantage of the filter result and accumulate it into a vec of maps, so we include all request IDs that matched in the log. It'd also be cleaner to refactor everything inside the `.map(|Reverse(mut op)| {` into its own function, since it's getting quite crowded
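A minimal sketch of that suggestion: instead of reducing the match checks to a single boolean, collect the IDs of every retry request that matched, so the log line can name them all. The types and the match predicate here (`RetryRequest`, `matching_request_uuids`, matching on a substring) are illustrative stand-ins, not the relayer's actual API.

```rust
// Hypothetical helper: return the UUIDs of all retry requests that match an
// operation, rather than a bare bool from `.any()`.
struct RetryRequest {
    uuid: String,
    pattern: String,
}

fn matching_request_uuids(op_id: &str, requests: &[RetryRequest]) -> Vec<String> {
    requests
        .iter()
        // Stand-in for the real match check against the operation.
        .filter(|req| op_id.contains(&req.pattern))
        .map(|req| req.uuid.clone())
        .collect()
}

fn main() {
    let requests = vec![
        RetryRequest { uuid: "a1".into(), pattern: "0xabc".into() },
        RetryRequest { uuid: "b2".into(), pattern: "0xdef".into() },
    ];
    let matched = matching_request_uuids("op-0xabc", &requests);
    // Log every matched request ID instead of just "it matched".
    if !matched.is_empty() {
        println!("message matches retry requests: {:?}", matched);
    }
}
```

Pulling this into its own function also keeps the body of the `.map()` closure small, as suggested above.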
```rust
    resp.evaluated += relayer_resp.evaluated;
    resp.matched += relayer_resp.matched;
}

Ok(Json(resp))
```
I think it'd make things easier to debug if we returned a map of responses by uuid in the json response
What do you mean by this? Currently the prepare, submit and confirm queues all return the same UUID response 🤔
Or are you thinking of something like:
```json
{
  "prepare_queue": {
    "evaluated": ...,
    "matched": ...,
  },
  "submit_queue": {
    "evaluated": ...,
    "matched": ...,
  },
  "confirm_queue": {
    "evaluated": ...,
    "matched": ...,
  }
}
```
oh right, every message has its own channel actually and you're just adding up responses from each sender, I missed that
Things are a bit more complicated:
- we should use `try_recv` instead of `recv` everywhere, to avoid blocking on an empty channel. However, because different senders may take varying amounts of time to respond to the request, the channel may be spuriously empty before we've received all responses:
- we need to wait on a response from every sender (`num_chains * num_queues`). So we should keep a counter and only return once we've got responses from all `receiver.sender_strong_count()` senders
your logic to add up the response fields makes sense now
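The polling pattern discussed above can be sketched with a std `mpsc` channel: poll with `try_recv` so an empty channel never blocks, but keep a counter and only return once a response has arrived from every sender. This is a simplified illustration, not the relayer's code; the expected count is passed in explicitly here, where the real handler could derive it from `num_chains * num_queues` or `sender_strong_count()`.

```rust
use std::sync::mpsc::{channel, TryRecvError};
use std::thread;
use std::time::Duration;

// Collect one (evaluated, matched) response from each of `expected` senders,
// summing the fields, without ever blocking on a spuriously empty channel.
fn collect_all_responses(expected: usize) -> (usize, usize) {
    let (tx, rx) = channel::<(usize, usize)>();

    // Simulated senders responding after varying delays.
    for i in 0..expected {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(10 * i as u64));
            tx.send((5, i)).unwrap();
        });
    }
    drop(tx); // only the spawned senders remain

    let (mut evaluated, mut matched, mut received) = (0, 0, 0);
    while received < expected {
        match rx.try_recv() {
            Ok((e, m)) => {
                evaluated += e;
                matched += m;
                received += 1;
            }
            // Empty just means some senders haven't answered yet; keep polling.
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(1)),
            Err(TryRecvError::Disconnected) => break,
        }
    }
    (evaluated, matched)
}

fn main() {
    let (evaluated, matched) = collect_all_responses(3);
    println!("evaluated={evaluated} matched={matched}"); // evaluated=15 matched=3
}
```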
- add more logging
- move test code into helper functions
- log matched request uuids
Description
Add an mpsc channel between the relayer and the message retry handler, so the message retry handler knows the retry results from the relayer.
The `POST /message_retry` endpoint now returns a JSON body with details on the retry request.

Drive-by changes
Related issues
Fixes #4059
Backward compatibility
Testing
Updated unit tests to return a response so the handler is not blocked
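The test change described above can be sketched as follows: the test spawns a stand-in for the relayer that replies on the channel, so the handler's wait for a response returns instead of blocking forever. Names here (`handler_waits_for_response`, the `(evaluated, matched)` tuple) are illustrative, not the actual test helpers.

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

// Stand-in for the handler side: block until the relayer responds.
fn handler_waits_for_response(rx: Receiver<(usize, usize)>) -> (usize, usize) {
    rx.recv().expect("relayer should respond")
}

fn main() {
    let (tx, rx) = channel();
    // Mock relayer loop: answer the retry request immediately so the
    // handler under test is not blocked.
    let responder = thread::spawn(move || {
        tx.send((2, 1)).unwrap(); // (evaluated, matched)
    });
    let resp = handler_waits_for_response(rx);
    responder.join().unwrap();
    assert_eq!(resp, (2, 1));
    println!("handler unblocked with evaluated={} matched={}", resp.0, resp.1);
}
```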